
Cloud Native: kubectl Commands in Detail




Contents

1. View version information: kubectl version

2. View resource object short names (abbreviations): kubectl api-resources

3. View cluster information: kubectl cluster-info

4. View help information: kubectl --help

5. View node logs: journalctl -u kubelet -f

6. Get information about one or more resources: kubectl get

6.1. View pods running in all namespaces: kubectl get pods -A

6.2. View detailed pod information in all namespaces: kubectl get pods -A -o wide

6.3. View all resource objects: kubectl get all -A

6.4. View the labels on nodes: kubectl get nodes --show-labels

6.5. View the labels on pods: kubectl get pods --show-labels -A

6.6. View cluster component status: kubectl get cs

6.7. View namespaces: kubectl get namespaces

7. Create a namespace: kubectl create ns ceshi

8. Delete a namespace: kubectl delete ns ceshi

9. Create a Deployment (stateless controller) in the kube-public namespace to start pods, expose port 80, with 3 replicas

11. Expose the service running in the pods for user access

12. Delete the nginx deployment and its Service

13. View endpoint information

14. Modify/update (image, parameters, ...): kubectl set

15. Adjust the number of replicas: kubectl scale

16. View detailed information: kubectl describe

16.1. Show detailed information for all nodes: kubectl describe nodes

16.2. View detailed information for a specific node

16.3. Show detailed information for all pods: kubectl describe pods

16.4. Show detailed information for a single deployment: kubectl describe deploy/nginx

16.5. Show detailed information for all services: kubectl describe svc

16.6. Show detailed information for a specific service: kubectl describe svc nginx-service

16.7. Show detailed information for all namespaces: kubectl describe ns

16.8. Show detailed information for a specific namespace: kubectl describe ns kube-public


1. View version information: kubectl version

[root@master ~]# kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
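If only the version numbers are needed, kubectl also accepts a --short flag for a condensed view (not part of the original transcript; output omitted):

[root@master ~]# kubectl version --short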

2. View resource object short names (abbreviations): kubectl api-resources

[root@master ~]# kubectl api-resources

NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v1                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
ingresses                         ing          extensions/v1beta1                     true         Ingress
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta1   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta1   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
podsecuritypolicies               psp          policy/v1beta1                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
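The list can be narrowed with standard kubectl flags (these invocations are illustrative additions, not from the original transcript), for example printing only namespaced resources, or only the resources in the apps API group:

[root@master ~]# kubectl api-resources --namespaced=true -o name
[root@master ~]# kubectl api-resources --api-group=apps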

3. View cluster information: kubectl cluster-info

[root@master ~]# kubectl cluster-info

Kubernetes control plane is running at https://192.168.159.10:6443
CoreDNS is running at https://192.168.159.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
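As the hint in the output suggests, 'kubectl cluster-info dump' collects detailed cluster state for debugging; it can optionally be written to a directory (the path below is only an example):

[root@master ~]# kubectl cluster-info dump --output-directory=/tmp/cluster-state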

4. View help information: kubectl --help

[root@master ~]# kubectl --help

kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization
  debug         Create debugging sessions for troubleshooting workloads and nodes

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  kustomize     Build a kustomization target from a directory or URL.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

Official reference: Kubernetes kubectl command table, Kubernetes (K8S) Chinese documentation / Kubernetes Chinese community: http://docs.kubernetes.org.cn/683.html

5. View node logs: journalctl -u kubelet -f

-- Logs begin at 三 2022-11-02 02:24:50 CST. --
11月 01 19:51:04 master kubelet[13641]: I1101 19:51:04.415651   13641 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="041b38b5bb2ff0161170bea161fd70e9175cc27fdc98877944d899ebe7b90d2f"
11月 01 19:51:06 master kubelet[13641]: I1101 19:51:06.428325   13641 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/771ef2517500c43b40e7df4c76198cac-kubeconfig\") pod \"771ef2517500c43b40e7df4c76198cac\" (UID: \"771ef2517500c43b40e7df4c76198cac\") "
11月 01 19:51:06 master kubelet[13641]: I1101 19:51:06.428370   13641 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771ef2517500c43b40e7df4c76198cac-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "771ef2517500c43b40e7df4c76198cac" (UID: "771ef2517500c43b40e7df4c76198cac"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue ""
11月 01 19:51:06 master kubelet[13641]: I1101 19:51:06.529163   13641 reconciler.go:319] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/771ef2517500c43b40e7df4c76198cac-kubeconfig\") on node \"master\" DevicePath \"\""
11月 01 19:51:07 master kubelet[13641]: I1101 19:51:07.282148   13641 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/771ef2517500c43b40e7df4c76198cac/volumes"
11月 01 19:51:10 master kubelet[13641]: I1101 19:51:10.913108   13641 topology_manager.go:187] "Topology Admit Handler"
11月 01 19:51:11 master kubelet[13641]: I1101 19:51:11.079185   13641 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e72c0f5a18f84d50f027106c98ab6b1-kubeconfig\") pod \"kube-scheduler-master\" (UID: \"5e72c0f5a18f84d50f027106c98ab6b1\") "
11月 01 19:51:15 master kubelet[13641]: E1101 19:51:15.849398   13641 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/771ef2517500c43b40e7df4c76198cac/etc-hosts with error exit status 1" pod="kube-system/kube-scheduler-master"
11月 01 19:51:25 master kubelet[13641]: E1101 19:51:25.874999   13641 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/771ef2517500c43b40e7df4c76198cac/etc-hosts with error exit status 1" pod="kube-system/kube-scheduler-master"
11月 01 19:51:31 master kubelet[13641]: I1101 19:51:31.

How to view the logs of the core components in k8s:

① For clusters deployed with kubeadm: kubectl logs -f <component-pod-name> -n <namespace> (see the example below)

   or journalctl -u kubelet -f

② For binary deployments: journalctl -u kubelet -f
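A small illustration of the kubeadm case, following the logs of the scheduler static pod on the master (the pod name matches the listings later in this article; output omitted):

[root@master ~]# kubectl logs -f kube-scheduler-master -n kube-system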

6. Get information about one or more resources: kubectl get

Syntax:

kubectl get <resource> [-o wide|json|yaml] [-n namespace]

Notes:

resource: the resource type or the name of a specific resource object

-o: specify the output format

-n: specify the namespace (an example combining these flags follows below)
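A sketch combining both flags, dumping the full YAML of one of the system pods in this cluster (output omitted):

[root@master ~]# kubectl get pod kube-scheduler-master -n kube-system -o yaml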

6.1. View pods running in all namespaces: kubectl get pods -A

[root@master ~]# kubectl get pods -A

NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-7clld            1/1     Running   0          5h35m
kube-flannel   kube-flannel-ds-psgvb            1/1     Running   0          5h35m
kube-flannel   kube-flannel-ds-xxncr            1/1     Running   0          5h35m
kube-system    coredns-6f6b8cc4f6-lbvl5         1/1     Running   0          5h45m
kube-system    coredns-6f6b8cc4f6-m6brz         1/1     Running   0          5h45m
kube-system    etcd-master                      1/1     Running   0          5h45m
kube-system    kube-apiserver-master            1/1     Running   0          5h45m
kube-system    kube-controller-manager-master   1/1     Running   0          5h11m
kube-system    kube-proxy-jwpnz                 1/1     Running   0          5h40m
kube-system    kube-proxy-xqcqm                 1/1     Running   0          5h41m
kube-system    kube-proxy-z6rhl                 1/1     Running   0          5h45m
kube-system    kube-scheduler-master            1/1     Running   0          5h11m

6.2. View detailed pod information in all namespaces: kubectl get pods -A -o wide

[root@master ~]# kubectl get pods -A -o wide

NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-7clld            1/1     Running   0          5h39m   192.168.159.13   node02   <none>           <none>
kube-flannel   kube-flannel-ds-psgvb            1/1     Running   0          5h39m   192.168.159.11   node01   <none>           <none>
kube-flannel   kube-flannel-ds-xxncr            1/1     Running   0          5h39m   192.168.159.10   master   <none>           <none>
kube-system    coredns-6f6b8cc4f6-lbvl5         1/1     Running   0          5h49m   10.150.2.2       node02   <none>           <none>
kube-system    coredns-6f6b8cc4f6-m6brz         1/1     Running   0          5h49m   10.150.1.2       node01   <none>           <none>
kube-system    etcd-master                      1/1     Running   0          5h49m   192.168.159.10   master   <none>           <none>
kube-system    kube-apiserver-master            1/1     Running   0          5h49m   192.168.159.10   master   <none>           <none>
kube-system    kube-controller-manager-master   1/1     Running   0          5h16m   192.168.159.10   master   <none>           <none>
kube-system    kube-proxy-jwpnz                 1/1     Running   0          5h45m   192.168.159.13   node02   <none>           <none>
kube-system    kube-proxy-xqcqm                 1/1     Running   0          5h45m   192.168.159.11   node01   <none>           <none>
kube-system    kube-proxy-z6rhl                 1/1     Running   0          5h49m   192.168.159.10   master   <none>           <none>
kube-system    kube-scheduler-master            1/1     Running   0          5h15m   192.168.159.10   master   <none>           <none>

6.3. View all resource objects: kubectl get all -A

[root@master ~]# kubectl get all -A

NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   pod/kube-flannel-ds-7whbw            1/1     Running   0          5h14m
kube-flannel   pod/kube-flannel-ds-nj4vl            1/1     Running   0          5h14m
kube-flannel   pod/kube-flannel-ds-w55x5            1/1     Running   0          5h14m
kube-system    pod/coredns-6f6b8cc4f6-lg6hc         1/1     Running   0          5h20m
kube-system    pod/coredns-6f6b8cc4f6-tdwhx         1/1     Running   0          5h20m
kube-system    pod/etcd-master                      1/1     Running   0          5h20m
kube-system    pod/kube-apiserver-master            1/1     Running   0          5h20m
kube-system    pod/kube-controller-manager-master   1/1     Running   0          5h13m
kube-system    pod/kube-proxy-gv58r                 1/1     Running   0          5h20m
kube-system    pod/kube-proxy-xd4lz                 1/1     Running   0          5h18m
kube-system    pod/kube-proxy-zzs2s                 1/1     Running   0          5h18m
kube-system    pod/kube-scheduler-master            1/1     Running   0          5h12m

NAMESPACE     NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes      ClusterIP   10.125.0.1     <none>        443/TCP                  5h20m
default       service/nginx-service   NodePort    10.125.18.84   <none>        80:32476/TCP             5h3m
kube-system   service/kube-dns        ClusterIP   10.125.0.10    <none>        53/UDP,53/TCP,9153/TCP   5h20m

NAMESPACE      NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel   daemonset.apps/kube-flannel-ds   3         3         3       3            3           <none>                   5h14m
kube-system    daemonset.apps/kube-proxy        3         3         3       3            3           kubernetes.io/os=linux   5h20m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           5h20m

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-6f6b8cc4f6   2         2         2       5h20m

6.4. View the labels on nodes: kubectl get nodes --show-labels

[root@master ~]# kubectl get nodes --show-labels

NAME     STATUS   ROLES                  AGE     VERSION   LABELS
master   Ready    control-plane,master   5h58m   v1.21.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
node01   Ready    node                   5h53m   v1.21.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
node02   Ready    node                   5h53m   v1.21.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,node-role.kubernetes.io/node=node

6.5. View the labels on pods: kubectl get pods --show-labels -A

[root@master ~]# kubectl get pods --show-labels -A

NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE     LABELS
kube-flannel   kube-flannel-ds-7clld            1/1     Running   0          5h51m   app=flannel,controller-revision-hash=5b775b5b5c,pod-template-generation=1,tier=node
kube-flannel   kube-flannel-ds-psgvb            1/1     Running   0          5h51m   app=flannel,controller-revision-hash=5b775b5b5c,pod-template-generation=1,tier=node
kube-flannel   kube-flannel-ds-xxncr            1/1     Running   0          5h51m   app=flannel,controller-revision-hash=5b775b5b5c,pod-template-generation=1,tier=node
kube-system    coredns-6f6b8cc4f6-lbvl5         1/1     Running   0          6h1m    k8s-app=kube-dns,pod-template-hash=6f6b8cc4f6
kube-system    coredns-6f6b8cc4f6-m6brz         1/1     Running   0          6h1m    k8s-app=kube-dns,pod-template-hash=6f6b8cc4f6
kube-system    etcd-master                      1/1     Running   0          6h1m    component=etcd,tier=control-plane
kube-system    kube-apiserver-master            1/1     Running   0          6h1m    component=kube-apiserver,tier=control-plane
kube-system    kube-controller-manager-master   1/1     Running   0          5h27m   component=kube-controller-manager,tier=control-plane
kube-system    kube-proxy-jwpnz                 1/1     Running   0          5h56m   controller-revision-hash=6b87fcb57c,k8s-app=kube-proxy,pod-template-generation=1
kube-system    kube-proxy-xqcqm                 1/1     Running   0          5h56m   controller-revision-hash=6b87fcb57c,k8s-app=kube-proxy,pod-template-generation=1
kube-system    kube-proxy-z6rhl                 1/1     Running   0          6h1m    controller-revision-hash=6b87fcb57c,k8s-app=kube-proxy,pod-template-generation=1
kube-system    kube-scheduler-master            1/1     Running   0          5h26m   component=kube-scheduler,tier=control-plane
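These labels can be used as selectors with -l; a small sketch (the selector is taken from the kube-proxy labels shown above, output omitted):

[root@master ~]# kubectl get pods -A -l k8s-app=kube-proxy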

6.6. View cluster component status: kubectl get cs

[root@master ~]# kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

6.7. View namespaces: kubectl get namespaces

Or use the short name: [root@master ~]# kubectl get ns

[root@master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   6h8m
kube-flannel      Active   5h58m
kube-node-lease   Active   6h8m
kube-public       Active   6h8m
kube-system       Active   6h8m

7. Create a namespace: kubectl create ns ceshi

[root@master ~]# kubectl create ns ceshi
namespace/ceshi created

 

8. Delete a namespace: kubectl delete ns ceshi

[root@master ~]# kubectl delete ns ceshi
namespace "ceshi" deleted

9. Create a Deployment (stateless controller) in the kube-public namespace to start pods, expose port 80, with 3 replicas

Create an nginx deployment in the kube-public namespace:

[root@master ~]# kubectl create deployment nginx --image=nginx:1.15 --port=80 --replicas=3 -n kube-public
deployment.apps/nginx created
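Before actually creating anything, the same command can be turned into a reviewable manifest with --dry-run=client; this variant is a sketch, not part of the original run (output omitted):

[root@master ~]# kubectl create deployment nginx --image=nginx:1.15 --port=80 --replicas=3 -n kube-public --dry-run=client -o yaml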

11. Expose the service running in the pods for user access

[root@master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=NodePort -n kube-public

Access: as a NodePort Service, it is reachable on any node's IP at the assigned NodePort (see the example below).
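A quick check from the command line; the node IP comes from this cluster's listings, and the NodePort differs per run (the outputs in this article show 32476 and 30307), so substitute whatever port was actually assigned:

[root@master ~]# curl http://192.168.159.11:30307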

 

 

12. Delete the nginx deployment and its Service

[root@master ~]# kubectl delete deployment nginx -n kube-public

[root@master ~]# kubectl delete svc -n kube-public nginx-service 

 

13. View endpoint information

[root@master ~]# kubectl get endpoints
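To inspect a single Service's endpoints rather than all of them, append the Service name (and the namespace, if it was created in kube-public as above); a sketch with output omitted:

[root@master ~]# kubectl get endpoints nginx-service -n kube-public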

14. Modify/update (image, parameters, ...): kubectl set

Example: check the image version of the running nginx deployment.

Requirement: change the image version of this running nginx deployment.

[root@master ~]# kubectl set image deployment/nginx nginx=nginx:1.11

 

During the update, it first pulls the new image and creates new containers; once the new ones are up, the old-version containers are removed.
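The progress of such an update can be watched, reviewed, and rolled back with the rollout subcommands (not shown in the original walkthrough; output omitted):

[root@master ~]# kubectl rollout status deployment/nginx
[root@master ~]# kubectl rollout history deployment/nginx
[root@master ~]# kubectl rollout undo deployment/nginx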

15. Adjust the number of replicas: kubectl scale

[root@master ~]# kubectl scale deployment nginx --replicas=5 -n default
deployment.apps/nginx scaled       ## -n specifies the namespace; this deployment was created in the default namespace
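For automatic scaling instead of a fixed count, kubectl autoscale creates a HorizontalPodAutoscaler; the thresholds below are illustrative, not from the original run (output omitted):

[root@master ~]# kubectl autoscale deployment nginx --min=3 --max=10 --cpu-percent=80 -n default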

16. View detailed information: kubectl describe

16.1. Show detailed information for all nodes: kubectl describe nodes

[root@master ~]# kubectl describe nodes

16.2. View detailed information for a specific node

 [root@master ~]# kubectl describe nodes node01

Name:               node01
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/node=node
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"c2:b4:d2:1b:fa:c2"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.159.13
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 05 Nov 2022 09:26:33 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node01
  AcquireTime:     <unset>
  RenewTime:       Sat, 05 Nov 2022 17:10:49 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 05 Nov 2022 09:31:06 +0800   Sat, 05 Nov 2022 09:31:06 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sat, 05 Nov 2022 17:07:16 +0800   Sat, 05 Nov 2022 09:26:33 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 05 Nov 2022 17:07:16 +0800   Sat, 05 Nov 2022 09:26:33 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 05 Nov 2022 17:07:16 +0800   Sat, 05 Nov 2022 09:26:33 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 05 Nov 2022 17:07:16 +0800   Sat, 05 Nov 2022 09:31:14 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.159.13
  Hostname:    node01
Capacity:
  cpu:                2
  ephemeral-storage:  15349Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861508Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  14485133698
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3759108Ki
  pods:               110
System Info:
  Machine ID:                 737b63dadf104cdaa76db981253e1baa
  System UUID:                F0114D56-06E7-3FC5-4619-BAD443CE9F72
  Boot ID:                    ca48a336-8e26-4d35-bd9b-2f403926c2b5
  Kernel Version:             3.10.0-957.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.21
  Kubelet Version:            v1.21.3
  Kube-Proxy Version:         v1.21.3
PodCIDR:                      10.150.2.0/24
PodCIDRs:                     10.150.2.0/24
Non-terminated Pods:          (6 in total)
  Namespace      Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------      ----                        ------------  ----------  ---------------  -------------  ---
  default        nginx-6fc77dcb7c-dq86p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default        nginx-6fc77dcb7c-gv9qq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default        nginx-6fc77dcb7c-hkcbb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         49m
  kube-flannel   kube-flannel-ds-nj4vl       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      7h40m
  kube-system    coredns-6f6b8cc4f6-lg6hc    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7h46m
  kube-system    kube-proxy-xd4lz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7h44m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (10%)  100m (5%)
  memory             120Mi (3%)  220Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>

16.3. Show detailed information for all pods: kubectl describe pods

[root@master ~]# kubectl describe pods

16.4. Show detailed information for a single deployment: kubectl describe deploy/nginx

[root@master ~]# kubectl describe deploy/nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Sat, 05 Nov 2022 16:06:47 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.21
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-6fc77dcb7c (5/5 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  56m                deployment-controller  Scaled up replica set nginx-6fc77dcb7c to 1
  Normal  ScalingReplicaSet  56m                deployment-controller  Scaled down replica set nginx-897f8f586 to 2
  Normal  ScalingReplicaSet  52m (x5 over 56m)  deployment-controller  (combined from similar events): Scaled up replica set nginx-6fc77dcb7c to 5

16.5. Show detailed information for all services: kubectl describe svc

[root@master ~]# kubectl describe svc

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.125.0.1
IPs:               10.125.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.159.12:6443
Session Affinity:  None
Events:            <none>

Name:                     nginx-service
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.125.204.159
IPs:                      10.125.204.159
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30307/TCP
Endpoints:                10.150.1.10:80,10.150.1.11:80,10.150.2.11:80 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

16.6. Show detailed information for a specific service: kubectl describe svc nginx-service

[root@master ~]# kubectl describe svc nginx-service

Name:                     nginx-service
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.125.204.159
IPs:                      10.125.204.159
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30307/TCP
Endpoints:                10.150.1.10:80,10.150.1.11:80,10.150.2.11:80 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

16.7. Show detailed information for all namespaces: kubectl describe ns

[root@master ~]# kubectl describe ns

Name:         ceshi
Labels:       kubernetes.io/metadata.name=ceshi
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Name:         default
Labels:       kubernetes.io/metadata.name=default
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Name:         kube-flannel
Labels:       kubernetes.io/metadata.name=kube-flannel
              pod-security.kubernetes.io/enforce=privileged
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Name:         kube-node-lease
Labels:       kubernetes.io/metadata.name=kube-node-lease
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Name:         kube-public
Labels:       kubernetes.io/metadata.name=kube-public
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Name:         kube-system
Labels:       kubernetes.io/metadata.name=kube-system
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

16.8. Show detailed information for a specific namespace: kubectl describe ns kube-public

[root@master ~]# kubectl describe ns public
Error from server (NotFound): namespaces "public" not found

[root@master ~]# kubectl describe ns kube-public
Name:         kube-public
Labels:       kubernetes.io/metadata.name=kube-public
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Troubleshooting example: a cluster with 1 master and 3 worker nodes; a pod is created but fails to come up.

kubectl describe po <pod_name> -n <namespace>

Events:

The scheduler finished its work and exited normally, yet the pod still failed to be created; one possible cause is a resource constraint (see the event query sketch below).
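When a pod will not start, listing the namespace's events in chronological order is often the quickest way to see the scheduler's or kubelet's complaint; a sketch (substitute the actual namespace, output omitted):

[root@master ~]# kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp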

 

