
k8s Resource Objects

博客园 2023-05-11 06:42:11

What is a resource object?



A resource object is an instance of a resource created in Kubernetes: you take one of the resource APIs exposed by the apiserver (think of them as resource templates), pass parameters to it through a YAML manifest or on the command line, and the result of that instantiation is the resource object. For example, to create a pod we talk to the apiserver and hand it the pod's parameters; the apiserver instantiates the pod definition from those parameters and stores it in etcd, the scheduler then picks a node, and that node's kubelet actually creates the pod. In short, a resource object is the result of instantiating one of the Kubernetes API's resource types.

The k8s logical runtime environment

Tip: Kubernetes logically combines the underlying resources of many nodes (memory, CPU, storage, network, and so on) into one large resource pool that it schedules and orchestrates centrally. Users only create resources in the cluster; Kubernetes places them, so users neither need to know which node a resource ends up on nor track each node's capacity.

The k8s design philosophy: layered architecture

The k8s design philosophy: API design principles

1. All APIs should be declarative.

2. API objects should complement each other and be composable ("high cohesion, loose coupling").

3. High-level APIs should be designed around operational intent.

4. Low-level APIs should be designed according to the control needs of the high-level APIs.

5. Avoid thin wrappers: there should be no hidden internal mechanism that cannot be observed through the external API.

6. The complexity of API operations should grow in proportion to the number of objects.

7. An API object's state must not depend on the state of any network connection.

8. Avoid making operational mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard.

Introduction to the Kubernetes API

Tip: Kubernetes APIs are either built in or custom. Built-in APIs are the interfaces a freshly deployed cluster ships with; custom APIs, also called custom resources (CRD, Custom Resource Definition), are APIs added after deployment by installing extra components.
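For example, you can ask the apiserver which APIs it currently serves, built-in as well as CRD-extended ones; a minimal sketch (the output obviously depends on what is installed in your cluster):

kubectl api-versions     # every API group/version the apiserver serves
kubectl api-resources    # every resource type, its API group, kind, and whether it is namespaced
kubectl get crd          # only the custom resources registered through CRDs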

How the apiserver organizes resources

Tip: the apiserver organizes the different resources logically by category, API group, and API version.

Overview of the built-in k8s resource objects

Commands for operating on k8s resource objects
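The original post illustrated these with a screenshot; as a rough reference (resource and file names below are placeholders), the day-to-day commands look like this:

kubectl get pods -n default -o wide      # list resources
kubectl describe pod pod-demo            # show details and recent events
kubectl explain pod.spec                 # show the schema of a manifest field
kubectl apply -f pod-demo.yaml           # create or update from a manifest
kubectl delete -f pod-demo.yaml          # delete what a manifest created
kubectl logs pod-demo                    # read container logs
kubectl exec -it pod-demo -- /bin/sh     # open a shell inside a container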

Required fields in a resource manifest

1. apiVersion - the version of the Kubernetes API used to create the object.

2. kind - the type of object to create.

3. metadata - data that uniquely identifies the object, including a name and an optional namespace (when omitted, the default namespace is used).

4. spec - the detailed specification of the resource (labels, container names, images, port mappings, and so on), i.e. the state the user wants the resource to reach.

5. status - filled in automatically by Kubernetes once the object (for example a Pod) has been created; it reflects the resource's actual state, is maintained by Kubernetes, and is not defined by the user.
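Putting those fields together, a minimal manifest skeleton looks roughly like this (the name and image are placeholders; status is omitted because the cluster maintains it):

apiVersion: v1              # API version used to create the object
kind: Pod                   # type of object
metadata:                   # identifying data
  name: example-pod
  namespace: default
spec:                       # desired state
  containers:
  - name: app
    image: nginx:1.20.0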

The Pod resource object

Tip: the pod is the smallest unit Kubernetes manages; a pod can run one or more containers, and all containers in a pod are scheduled together, so the pod is the minimum unit of scheduling. A pod's life is short: it does not heal itself and is a disposable entity. Pods are therefore usually created and managed through a controller; pods created by a controller recover automatically, meaning that whenever the pods' state or count stops matching the user's desired state, the controller restarts or recreates pods until the actual state matches the desired state again.

Example: manifest for a standalone (unmanaged) pod

apiVersion: v1
kind: Pod
metadata:
  name: "pod-demo"
  namespace: default
  labels:
    app: "pod-demo"
spec:
  containers:
  - name: pod-demo
    image: "harbor.ik8s.cc/baseimages/nginx:v1"
    ports:
    - containerPort: 80
      name: http
    volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
  volumes:
    - name: localtime
      hostPath:
        path: /usr/share/zoneinfo/Asia/Shanghai

Apply the manifest

root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS        AGE
net-test1   1/1     Running   2 (4m35s ago)   7d7h
test        1/1     Running   4 (4m34s ago)   13d
test1       1/1     Running   4 (4m35s ago)   13d
test2       1/1     Running   4 (4m35s ago)   13d
root@k8s-deploy:/yaml# kubectl apply -f pod-demo.yaml
pod/pod-demo created
root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS              RESTARTS        AGE
net-test1   1/1     Running             2 (4m47s ago)   7d7h
pod-demo    0/1     ContainerCreating   0               4s
test        1/1     Running             4 (4m46s ago)   13d
test1       1/1     Running             4 (4m47s ago)   13d
test2       1/1     Running             4 (4m47s ago)   13d
root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS        AGE
net-test1   1/1     Running   2 (4m57s ago)   7d7h
pod-demo    1/1     Running   0               14s
test        1/1     Running   4 (4m56s ago)   13d
test1       1/1     Running   4 (4m57s ago)   13d
test2       1/1     Running   4 (4m57s ago)   13d
root@k8s-deploy:/yaml#

Tip: this pod merely runs in the cluster; no controller watches it, so if it is deleted or fails it will not come back automatically.

The Job controller; for a detailed discussion see https://www.cnblogs.com/qiuhom-1874/p/14157306.html.

Example: Job controller manifest

apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
  namespace: default
  labels:
    app: job-demo
spec:
  template:
    metadata:
      name: job-demo
      labels:
        app: job-demo
    spec:
      containers:
      - name: job-demo-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never

Tip: a Job resource must define restartPolicy (only Never or OnFailure are allowed for a Job).

Apply the manifest

root@k8s-deploy:/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS      AGE
net-test1   1/1     Running   3 (48m ago)   7d10h
pod-demo    1/1     Running   1 (48m ago)   3h32m
test        1/1     Running   5 (48m ago)   14d
test1       1/1     Running   5 (48m ago)   14d
test2       1/1     Running   5 (48m ago)   14d
root@k8s-deploy:/yaml# kubectl apply -f job-demo.yaml
job.batch/job-demo created
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME             READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb   0/1     Completed   0             26s     10.200.211.130   192.168.0.34   <none>           <none>
net-test1        1/1     Running     3 (49m ago)   7d10h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo         1/1     Running     1 (49m ago)   3h32m   10.200.155.138   192.168.0.36   <none>           <none>
test             1/1     Running     5 (49m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1            1/1     Running     5 (49m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2            1/1     Running     5 (49m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml#

Verify: does the /tmp/jobdata directory on 192.168.0.34 contain the data written by the job?

root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
data.log
root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
data init job at 2023-05-06_23-31-32
root@k8s-deploy:/yaml#

Tip: the /tmp/jobdata/ directory on the node that ran the job contains the data the job wrote, so the job we defined completed successfully.

Defining a job with multiple completions

apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo
  namespace: default
  labels:
    app: job-multi-demo
spec:
  completions: 5
  template:
    metadata:
      name: job-multi-demo
      labels:
        app: job-multi-demo
    spec:
      containers:
      - name: job-multi-demo-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never

Tip: under spec, completions specifies how many pods have to run to completion for this job.

Apply the manifest

root@k8s-deploy:/yaml# kubectl get pods
NAME             READY   STATUS      RESTARTS      AGE
job-demo-z8gmb   0/1     Completed   0             24m
net-test1        1/1     Running     3 (73m ago)   7d11h
pod-demo         1/1     Running     1 (73m ago)   3h56m
test             1/1     Running     5 (73m ago)   14d
test1            1/1     Running     5 (73m ago)   14d
test2            1/1     Running     5 (73m ago)   14d
root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo.yaml
job.batch/job-multi-demo created
root@k8s-deploy:/yaml# kubectl get job
NAME             COMPLETIONS   DURATION   AGE
job-demo         1/1           5s         24m
job-multi-demo   1/5           10s        10s
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME                   READY   STATUS              RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb         0/1     Completed           0             24m     10.200.211.130   192.168.0.34   <none>           <none>
job-multi-demo-5vp9w   0/1     Completed           0             12s     10.200.211.144   192.168.0.34   <none>           <none>
job-multi-demo-frstg   0/1     Completed           0             22s     10.200.211.186   192.168.0.34   <none>           <none>
job-multi-demo-gd44s   0/1     Completed           0             17s     10.200.211.184   192.168.0.34   <none>           <none>
job-multi-demo-kfm79   0/1     ContainerCreating   0             2s      <none>           192.168.0.34   <none>           <none>
job-multi-demo-nsmpg   0/1     Completed           0             7s      10.200.211.135   192.168.0.34   <none>           <none>
net-test1              1/1     Running             3 (73m ago)   7d11h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo               1/1     Running             1 (73m ago)   3h56m   10.200.155.138   192.168.0.36   <none>           <none>
test                   1/1     Running             5 (73m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1                  1/1     Running             5 (73m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2                  1/1     Running             5 (73m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb         0/1     Completed   0             24m     10.200.211.130   192.168.0.34   <none>           <none>
job-multi-demo-5vp9w   0/1     Completed   0             33s     10.200.211.144   192.168.0.34   <none>           <none>
job-multi-demo-frstg   0/1     Completed   0             43s     10.200.211.186   192.168.0.34   <none>           <none>
job-multi-demo-gd44s   0/1     Completed   0             38s     10.200.211.184   192.168.0.34   <none>           <none>
job-multi-demo-kfm79   0/1     Completed   0             23s     10.200.211.140   192.168.0.34   <none>           <none>
job-multi-demo-nsmpg   0/1     Completed   0             28s     10.200.211.135   192.168.0.34   <none>           <none>
net-test1              1/1     Running     3 (73m ago)   7d11h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo               1/1     Running     1 (73m ago)   3h57m   10.200.155.138   192.168.0.36   <none>           <none>
test                   1/1     Running     5 (73m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1                  1/1     Running     5 (73m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2                  1/1     Running     5 (73m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml#

Verify: is the job data present under /tmp/jobdata/ on 192.168.0.34?

root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
data.log
root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
data init job at 2023-05-06_23-31-32
data init job at 2023-05-06_23-55-44
data init job at 2023-05-06_23-55-49
data init job at 2023-05-06_23-55-54
data init job at 2023-05-06_23-55-59
data init job at 2023-05-06_23-56-04
root@k8s-deploy:/yaml#

Defining the degree of parallelism

apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo2
  namespace: default
  labels:
    app: job-multi-demo2
spec:
  completions: 6
  parallelism: 2
  template:
    metadata:
      name: job-multi-demo2
      labels:
        app: job-multi-demo2
    spec:
      containers:
      - name: job-multi-demo2-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never

Tip: under spec, parallelism sets how many pods run at the same time. The manifest above runs 2 pods at a time until 6 pods have completed in total.

Apply the manifest

root@k8s-deploy:/yaml# kubectl get jobs
NAME             COMPLETIONS   DURATION   AGE
job-demo         1/1           5s         34m
job-multi-demo   5/5           25s        9m56s
root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo2.yaml
job.batch/job-multi-demo2 created
root@k8s-deploy:/yaml# kubectl get jobs
NAME              COMPLETIONS   DURATION   AGE
job-demo          1/1           5s         34m
job-multi-demo    5/5           25s        10m
job-multi-demo2   0/6           2s         3s
root@k8s-deploy:/yaml# kubectl get pods
NAME                    READY   STATUS      RESTARTS      AGE
job-demo-z8gmb          0/1     Completed   0             34m
job-multi-demo-5vp9w    0/1     Completed   0             10m
job-multi-demo-frstg    0/1     Completed   0             10m
job-multi-demo-gd44s    0/1     Completed   0             10m
job-multi-demo-kfm79    0/1     Completed   0             9m59s
job-multi-demo-nsmpg    0/1     Completed   0             10m
job-multi-demo2-7ppxc   0/1     Completed   0             10s
job-multi-demo2-mxbtq   0/1     Completed   0             5s
job-multi-demo2-rhgh7   0/1     Completed   0             4s
job-multi-demo2-th6ff   0/1     Completed   0             11s
net-test1               1/1     Running     3 (83m ago)   7d11h
pod-demo                1/1     Running     1 (83m ago)   4h6m
test                    1/1     Running     5 (83m ago)   14d
test1                   1/1     Running     5 (83m ago)   14d
test2                   1/1     Running     5 (83m ago)   14d
root@k8s-deploy:/yaml# kubectl get pods
NAME                    READY   STATUS      RESTARTS      AGE
job-demo-z8gmb          0/1     Completed   0             34m
job-multi-demo-5vp9w    0/1     Completed   0             10m
job-multi-demo-frstg    0/1     Completed   0             10m
job-multi-demo-gd44s    0/1     Completed   0             10m
job-multi-demo-kfm79    0/1     Completed   0             10m
job-multi-demo-nsmpg    0/1     Completed   0             10m
job-multi-demo2-7ppxc   0/1     Completed   0             16s
job-multi-demo2-8bh22   0/1     Completed   0             6s
job-multi-demo2-dbjqw   0/1     Completed   0             6s
job-multi-demo2-mxbtq   0/1     Completed   0             11s
job-multi-demo2-rhgh7   0/1     Completed   0             10s
job-multi-demo2-th6ff   0/1     Completed   0             17s
net-test1               1/1     Running     3 (83m ago)   7d11h
pod-demo                1/1     Running     1 (83m ago)   4h6m
test                    1/1     Running     5 (83m ago)   14d
test1                   1/1     Running     5 (83m ago)   14d
test2                   1/1     Running     5 (83m ago)   14d
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME                    READY   STATUS      RESTARTS      AGE     IP               NODE           NOMINATED NODE   READINESS GATES
job-demo-z8gmb          0/1     Completed   0             35m     10.200.211.130   192.168.0.34   <none>           <none>
job-multi-demo-5vp9w    0/1     Completed   0             10m     10.200.211.144   192.168.0.34   <none>           <none>
job-multi-demo-frstg    0/1     Completed   0             11m     10.200.211.186   192.168.0.34   <none>           <none>
job-multi-demo-gd44s    0/1     Completed   0             11m     10.200.211.184   192.168.0.34   <none>           <none>
job-multi-demo-kfm79    0/1     Completed   0             10m     10.200.211.140   192.168.0.34   <none>           <none>
job-multi-demo-nsmpg    0/1     Completed   0             10m     10.200.211.135   192.168.0.34   <none>           <none>
job-multi-demo2-7ppxc   0/1     Completed   0             57s     10.200.211.145   192.168.0.34   <none>           <none>
job-multi-demo2-8bh22   0/1     Completed   0             47s     10.200.211.148   192.168.0.34   <none>           <none>
job-multi-demo2-dbjqw   0/1     Completed   0             47s     10.200.211.141   192.168.0.34   <none>           <none>
job-multi-demo2-mxbtq   0/1     Completed   0             52s     10.200.211.152   192.168.0.34   <none>           <none>
job-multi-demo2-rhgh7   0/1     Completed   0             51s     10.200.211.143   192.168.0.34   <none>           <none>
job-multi-demo2-th6ff   0/1     Completed   0             58s     10.200.211.136   192.168.0.34   <none>           <none>
net-test1               1/1     Running     3 (84m ago)   7d11h   10.200.211.191   192.168.0.34   <none>           <none>
pod-demo                1/1     Running     1 (84m ago)   4h7m    10.200.155.138   192.168.0.36   <none>           <none>
test                    1/1     Running     5 (84m ago)   14d     10.200.209.6     192.168.0.35   <none>           <none>
test1                   1/1     Running     5 (84m ago)   14d     10.200.209.8     192.168.0.35   <none>           <none>
test2                   1/1     Running     5 (84m ago)   14d     10.200.211.177   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml#

Verify the job data

Tip: the timestamps appended later come in pairs, which shows that two pods were executing the job's task at the same time.

The CronJob controller; for a detailed discussion see https://www.cnblogs.com/qiuhom-1874/p/14157306.html.

Example: defining a cronjob

apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-cronjob
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      parallelism: 2
      template:
        spec:
          containers:
          - name: job-cronjob-container
            image: harbor.ik8s.cc/baseimages/centos7:2023
            command: ["/bin/sh"]
            args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/cronjob-data.log"]
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
            - name: localtime
              mountPath: /etc/localtime
          volumes:
          - name: cache-volume
            hostPath:
              path: /tmp/jobdata
          - name: localtime
            hostPath:
              path: /usr/share/zoneinfo/Asia/Shanghai
          restartPolicy: OnFailure

Apply the manifest

root@k8s-deploy:/yaml# kubectl apply -f cronjob-demo.yaml
cronjob.batch/job-cronjob created
root@k8s-deploy:/yaml# kubectl get cronjob
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
job-cronjob   */1 * * * *   False     0        <none>          6s
root@k8s-deploy:/yaml# kubectl get pods
NAME                         READY   STATUS      RESTARTS       AGE
job-cronjob-28056516-njddz   0/1     Completed   0              12s
job-cronjob-28056516-wgbns   0/1     Completed   0              12s
job-demo-z8gmb               0/1     Completed   0              64m
job-multi-demo-5vp9w         0/1     Completed   0              40m
job-multi-demo-frstg         0/1     Completed   0              40m
job-multi-demo-gd44s         0/1     Completed   0              40m
job-multi-demo-kfm79         0/1     Completed   0              40m
job-multi-demo-nsmpg         0/1     Completed   0              40m
job-multi-demo2-7ppxc        0/1     Completed   0              30m
job-multi-demo2-8bh22        0/1     Completed   0              30m
job-multi-demo2-dbjqw        0/1     Completed   0              30m
job-multi-demo2-mxbtq        0/1     Completed   0              30m
job-multi-demo2-rhgh7        0/1     Completed   0              30m
job-multi-demo2-th6ff        0/1     Completed   0              30m
net-test1                    1/1     Running     3 (113m ago)   7d11h
pod-demo                     1/1     Running     1 (113m ago)   4h36m
test                         1/1     Running     5 (113m ago)   14d
test1                        1/1     Running     5 (113m ago)   14d
test2                        1/1     Running     5 (113m ago)   14d
root@k8s-deploy:/yaml# kubectl get cronjob
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
job-cronjob   */1 * * * *   False     0        12s             108s
root@k8s-deploy:/yaml# kubectl get pods
NAME                         READY   STATUS      RESTARTS       AGE
job-cronjob-28056516-njddz   0/1     Completed   0              77s
job-cronjob-28056516-wgbns   0/1     Completed   0              77s
job-cronjob-28056517-d6n9h   0/1     Completed   0              17s
job-cronjob-28056517-krsvb   0/1     Completed   0              17s
job-demo-z8gmb               0/1     Completed   0              65m
job-multi-demo-5vp9w         0/1     Completed   0              41m
job-multi-demo-frstg         0/1     Completed   0              41m
job-multi-demo-gd44s         0/1     Completed   0              41m
job-multi-demo-kfm79         0/1     Completed   0              41m
job-multi-demo-nsmpg         0/1     Completed   0              41m
job-multi-demo2-7ppxc        0/1     Completed   0              31m
job-multi-demo2-8bh22        0/1     Completed   0              31m
job-multi-demo2-dbjqw        0/1     Completed   0              31m
job-multi-demo2-mxbtq        0/1     Completed   0              31m
job-multi-demo2-rhgh7        0/1     Completed   0              31m
job-multi-demo2-th6ff        0/1     Completed   0              31m
net-test1                    1/1     Running     3 (114m ago)   7d11h
pod-demo                     1/1     Running     1 (114m ago)   4h38m
test                         1/1     Running     5 (114m ago)   14d
test1                        1/1     Running     5 (114m ago)   14d
test2                        1/1     Running     5 (114m ago)   14d
root@k8s-deploy:/yaml#

Tip: a cronjob keeps the most recent 3 job records by default.
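That retention is controlled by two CronJob spec fields, shown here with their default values:

spec:
  successfulJobsHistoryLimit: 3   # completed jobs to keep (default 3)
  failedJobsHistoryLimit: 1       # failed jobs to keep (default 1)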

Verify: check the data written by the periodic job

Tip: the timestamps show that two pods run the task once every minute.

RC/RS replica controllers

RC (Replication Controller): a replica controller that keeps the number of pod replicas equal to the number the user expects. It is the first-generation pod replica controller and supports only equality-based selectors (=, !=).

RC controller example

apiVersion: v1
kind: ReplicationController
metadata:
  name: ng-rc
spec:
  replicas: 2
  selector:
    app: ng-rc-80
  template:
    metadata:
      labels:
        app: ng-rc-80
    spec:
      containers:
      - name: pod-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

Apply the manifest

root@k8s-deploy:/yaml# kubectl get pods
NAME    READY   STATUS    RESTARTS      AGE
test    1/1     Running   6 (11m ago)   16d
test1   1/1     Running   6 (11m ago)   16d
test2   1/1     Running   6 (11m ago)   16d
root@k8s-deploy:/yaml# kubectl apply -f rc-demo.yaml
replicationcontroller/ng-rc created
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
ng-rc-l7xmp   1/1     Running   0             10s   10.200.211.136   192.168.0.34   <none>           <none>
ng-rc-wl5d6   1/1     Running   0             9s    10.200.155.185   192.168.0.36   <none>           <none>
test          1/1     Running   6 (11m ago)   16d   10.200.209.24    192.168.0.35   <none>           <none>
test1         1/1     Running   6 (11m ago)   16d   10.200.209.31    192.168.0.35   <none>           <none>
test2         1/1     Running   6 (11m ago)   16d   10.200.211.186   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml# kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
ng-rc   2         2         2       25s
root@k8s-deploy:/yaml#

Verify: change a pod's label and see whether a replacement pod is created.

root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running   0             2m32s   app=ng-rc-80
ng-rc-wl5d6   1/1     Running   0             2m31s   app=ng-rc-80
test          1/1     Running   6 (13m ago)   16d     run=test
test1         1/1     Running   6 (13m ago)   16d     run=test1
test2         1/1     Running   6 (13m ago)   16d     run=test2
root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=nginx-demo --overwrite
pod/ng-rc-l7xmp labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running             0             4m42s   app=nginx-demo
ng-rc-rxvd4   0/1     ContainerCreating   0             3s      app=ng-rc-80
ng-rc-wl5d6   1/1     Running             0             4m41s   app=ng-rc-80
test          1/1     Running             6 (15m ago)   16d     run=test
test1         1/1     Running             6 (15m ago)   16d     run=test1
test2         1/1     Running             6 (15m ago)   16d     run=test2
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running   0             4m52s   app=nginx-demo
ng-rc-rxvd4   1/1     Running   0             13s     app=ng-rc-80
ng-rc-wl5d6   1/1     Running   0             4m51s   app=ng-rc-80
test          1/1     Running   6 (16m ago)   16d     run=test
test1         1/1     Running   6 (16m ago)   16d     run=test1
test2         1/1     Running   6 (16m ago)   16d     run=test2
root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=ng-rc-80 --overwrite
pod/ng-rc-l7xmp labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS      AGE     LABELS
ng-rc-l7xmp   1/1     Running   0             5m27s   app=ng-rc-80
ng-rc-wl5d6   1/1     Running   0             5m26s   app=ng-rc-80
test          1/1     Running   6 (16m ago)   16d     run=test
test1         1/1     Running   6 (16m ago)   16d     run=test1
test2         1/1     Running   6 (16m ago)   16d     run=test2
root@k8s-deploy:/yaml#

Tip: the RC controller uses its label selector to decide which pods belong to it. If a pod's label changes, the RC creates or deletes pods so that the number of matching pods always equals the number the user defined.

RS (ReplicaSet): a replica controller similar to RC. It too uses a label selector to find the pods it owns and creates or deletes pods whenever the matching count drops below or rises above the desired count. The only difference is the selector: besides exact matching (=, !=), RS also supports set-based matching (in, notin). It is the second-generation pod replica controller.
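The example below uses the equality form (matchLabels); purely as an illustration of the set-based form, a ReplicaSet selector written with matchExpressions would look roughly like this (the second label value is made up):

  selector:
    matchExpressions:
    - key: app
      operator: In          # also NotIn, Exists, DoesNotExist
      values: ["rs-demo", "rs-demo-canary"]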

RS controller example

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-demo
  labels:
    app: rs-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rs-demo
  template:
    metadata:
      labels:
        app: rs-demo
    spec:
      containers:
      - name: rs-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - name: web
          containerPort: 80
          protocol: TCP
        env:
        - name: NGX_VERSION
          value: 1.16.1
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

Apply the manifest

Verify: change a pod's label and see what happens.

root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS      AGE   LABELS
ng-rc-l7xmp     1/1     Running   0             18m   app=ng-rc-80
ng-rc-wl5d6     1/1     Running   0             18m   app=ng-rc-80
rs-demo-nzmqs   1/1     Running   0             71s   app=rs-demo
rs-demo-v2vb6   1/1     Running   0             71s   app=rs-demo
rs-demo-x27fv   1/1     Running   0             71s   app=rs-demo
test            1/1     Running   6 (29m ago)   16d   run=test
test1           1/1     Running   6 (29m ago)   16d   run=test1
test2           1/1     Running   6 (29m ago)   16d   run=test2
root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=nginx --overwrite
pod/rs-demo-nzmqs labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS      AGE    LABELS
ng-rc-l7xmp     1/1     Running   0             19m    app=ng-rc-80
ng-rc-wl5d6     1/1     Running   0             19m    app=ng-rc-80
rs-demo-bdfdd   1/1     Running   0             4s     app=rs-demo
rs-demo-nzmqs   1/1     Running   0             103s   app=nginx
rs-demo-v2vb6   1/1     Running   0             103s   app=rs-demo
rs-demo-x27fv   1/1     Running   0             103s   app=rs-demo
test            1/1     Running   6 (30m ago)   16d    run=test
test1           1/1     Running   6 (30m ago)   16d    run=test1
test2           1/1     Running   6 (30m ago)   16d    run=test2
root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=rs-demo --overwrite
pod/rs-demo-nzmqs labeled
root@k8s-deploy:/yaml# kubectl get pods --show-labels
NAME            READY   STATUS    RESTARTS      AGE    LABELS
ng-rc-l7xmp     1/1     Running   0             19m    app=ng-rc-80
ng-rc-wl5d6     1/1     Running   0             19m    app=ng-rc-80
rs-demo-nzmqs   1/1     Running   0             119s   app=rs-demo
rs-demo-v2vb6   1/1     Running   0             119s   app=rs-demo
rs-demo-x27fv   1/1     Running   0             119s   app=rs-demo
test            1/1     Running   6 (30m ago)   16d    run=test
test1           1/1     Running   6 (30m ago)   16d    run=test1
test2           1/1     Running   6 (30m ago)   16d    run=test2
root@k8s-deploy:/yaml#

Tip: after we change one pod's label to something else, the RS controller sees fewer pods matching its selector than the user asked for and creates a new pod labelled app=rs-demo; when we change the label back to rs-demo, the selector now matches more pods than desired, so the RS deletes a pod to bring the number of app=rs-demo pods back in line with the desired count.

The Deployment controller; for a detailed discussion see https://www.cnblogs.com/qiuhom-1874/p/14149042.html.

Deployment is the third-generation pod replica controller. It is more advanced than RS: on top of everything RS does, it adds higher-level features, most importantly rolling updates and rollbacks.

Deployment controller example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
  labels:
    app: deploy-demo
spec:
  selector:
    matchLabels:
      app: deploy-demo
  replicas: 2
  template:
    metadata:
      labels:
        app: deploy-demo
    spec:
      containers:
      - name: deploy-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

Apply the manifest

Tip: the Deployment controller manages its pods indirectly, by creating a ReplicaSet that in turn keeps the pod count at the desired value.
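A quick way to see that relationship (assuming the manifest above was saved as deploy-demo.yaml):

kubectl apply -f deploy-demo.yaml
kubectl get deploy,rs,pods -l app=deploy-demo
# the ReplicaSet is named after the deployment plus a pod-template hash
# (e.g. deploy-demo-6849bdf444), and its pods carry that hash in their names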

Updating the pod version by changing the image tag in the manifest

Apply the manifest

Updating the pod version with a command
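A sketch of the command-line approach (the container in deploy-demo is also named deploy-demo; the v2 tag is hypothetical):

kubectl set image deployment/deploy-demo deploy-demo=harbor.ik8s.cc/baseimages/nginx:v2
kubectl rollout status deployment/deploy-demo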

Check the ReplicaSets left behind by each revision

Check the rollout history

Tip: the history records no change-cause by default; to record one, add the --record option when applying the manifest, as shown below.
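A sketch (--record is deprecated in newer kubectl releases, but the behaviour is as described here):

kubectl apply -f deploy-demo.yaml --record
kubectl rollout history deployment/deploy-demo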

Inspect the details of a specific revision

Tip: to see the details of one revision, add --revision= followed by that revision's number.
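For example (the revision number 2 is just a placeholder):

kubectl rollout history deployment/deploy-demo --revision=2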

Roll back to the previous revision

Tip: kubectl rollout undo rolls the deployment back to the previous revision.
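For example:

kubectl rollout undo deployment/deploy-demo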

Roll back to a specific revision

Tip: use the --to-revision option to give the revision number you want to roll back to.
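For example (1 being whatever number the rollout history shows for the target revision):

kubectl rollout undo deployment/deploy-demo --to-revision=1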

The Service resource; for a detailed discussion see https://www.cnblogs.com/qiuhom-1874/p/14161950.html.

Traffic flow of a NodePort service

A NodePort service mainly solves the problem of clients outside the cluster reaching pods: the external client connects to the exposed port on any node, and that node's iptables or IPVS rules forward the traffic to the right pod. To make external access convenient, a load balancer is usually deployed in front of the cluster, so external clients hit a port on the load balancer, which forwards their traffic into the cluster and on to the pods.

Example: ClusterIP service

apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
  namespace: default
spec:
  selector:
    app: deploy-demo
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80

Apply the manifest

Tip: once the ClusterIP service is created it gets a cluster IP, and its endpoints are the backend pods matched by the label selector; traffic sent to the service's cluster IP is forwarded to one of those endpoint pods. A ClusterIP service can only be reached by clients inside the cluster, because the cluster IP lives on the cluster's internal network.

Verify: does port 80 on 10.100.100.23 reach the backend nginx pod?

root@k8s-node01:~# curl 10.100.100.23
Welcome to nginx!
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
root@k8s-node01:~#

Example: NodePort service

apiVersion: v1
kind: Service
metadata:
  name: ngx-nodeport-svc
  namespace: default
spec:
  selector:
    app: deploy-demo
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30012

Tip: to get a NodePort service, take the ClusterIP manifest, change type to NodePort, and add nodePort under ports to choose the node port.

Apply the manifest

root@k8s-deploy:/yaml# kubectl apply -f nodeport-svc-demo.yaml
service/ngx-nodeport-svc created
root@k8s-deploy:/yaml# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.100.0.1       <none>        443/TCP        16d
ngx-nodeport-svc   NodePort    10.100.209.225   <none>        80:30012/TCP   11s
root@k8s-deploy:/yaml# kubectl describe svc ngx-nodeport-svc
Name:                     ngx-nodeport-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=deploy-demo
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.209.225
IPs:                      10.100.209.225
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30012/TCP
Endpoints:                10.200.155.178:80,10.200.211.138:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@k8s-deploy:/yaml#

Verify: can the nginx pod be reached through port 30012 on any node of the cluster?

root@k8s-deploy:/yaml# curl 192.168.0.34:30012
Welcome to nginx!
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
root@k8s-deploy:/yaml#

Tip: a client outside the cluster reaches the nginx pod through port 30012 on a node; clients inside the cluster can of course still use the generated cluster IP.

root@k8s-node01:~# curl 10.100.209.225:30012
curl: (7) Failed to connect to 10.100.209.225 port 30012 after 0 ms: Connection refused
root@k8s-node01:~# curl 127.0.0.1:30012
curl: (7) Failed to connect to 127.0.0.1 port 30012 after 0 ms: Connection refused
root@k8s-node01:~# curl 192.168.0.34:30012
Welcome to nginx!
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
root@k8s-node01:~#

Tip: inside the cluster, clients reach the service on the cluster IP's port 80 or on a node's external IP at port 30012; the node port is not open on the cluster IP or on localhost.

The Volume resource; for a detailed discussion see https://www.cnblogs.com/qiuhom-1874/p/14180752.html.

Mounting an NFS share in a pod

Prepare the export directory on the NFS server

root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
root@harbor:~# mkdir -p /pod-vol
root@harbor:~# ls /pod-vol -d
/pod-vol
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/volumes".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/pod-vol".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#

Mount the NFS directory in a pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-nfs-80
  namespace: default
  labels:
    app: ngx-nfs-80
spec:
  selector:
    matchLabels:
      app: ngx-nfs-80
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-nfs-80
    spec:
      containers:
      - name: ngx-nfs-80
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: ngx-nfs-80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nfs-vol
          mountPath: /usr/share/nginx/html/
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: nfs-vol
          nfs:
            server: 192.168.0.42
            path: /pod-vol
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-nfs-svc
  namespace: default
spec:
  selector:
    app: ngx-nfs-80
  type: NodePort
  ports:
  - name: ngx-nfs-svc
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30013

Apply the manifest

root@k8s-deploy:/yaml# kubectl apply -f nfs-vol.yaml
deployment.apps/ngx-nfs-80 created
service/ngx-nfs-svc created
root@k8s-deploy:/yaml# kubectl get pods
NAME                           READY   STATUS    RESTARTS      AGE
deploy-demo-6849bdf444-pvsc9   1/1     Running   1 (57m ago)   46h
deploy-demo-6849bdf444-sg8fz   1/1     Running   1 (57m ago)   46h
ng-rc-l7xmp                    1/1     Running   1 (57m ago)   47h
ng-rc-wl5d6                    1/1     Running   1 (57m ago)   47h
ngx-nfs-80-66c9697cf4-8pm9k    1/1     Running   0             7s
rs-demo-nzmqs                  1/1     Running   1 (57m ago)   47h
rs-demo-v2vb6                  1/1     Running   1 (57m ago)   47h
rs-demo-x27fv                  1/1     Running   1 (57m ago)   47h
test                           1/1     Running   7 (57m ago)   17d
test1                          1/1     Running   7 (57m ago)   17d
test2                          1/1     Running   7 (57m ago)   17d
root@k8s-deploy:/yaml# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.100.0.1       <none>        443/TCP        18d
ngx-nfs-svc        NodePort    10.100.16.14     <none>        80:30013/TCP   15s
ngx-nodeport-svc   NodePort    10.100.209.225   <none>        80:30012/TCP   45h
root@k8s-deploy:/yaml#

Provide an index.html in the /pod-vol directory on the NFS server

root@harbor:~# echo "this page from nfs server.." >> /pod-vol/index.html
root@harbor:~# cat /pod-vol/index.html
this page from nfs server..
root@harbor:~#

Access the pod: is the index.html from the NFS server returned?

root@k8s-deploy:/yaml# curl 192.168.0.35:30013
this page from nfs server..
root@k8s-deploy:/yaml#

Tip: the page the pod returns is the one we just created on the NFS server, so the pod has mounted the NFS export correctly.

PV and PVC resources; for a detailed discussion see https://www.cnblogs.com/qiuhom-1874/p/14188621.html.

Using a static PV/PVC backed by NFS

Prepare the directory on the NFS server

root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver/myappdata *(rw,no_root_squash)
root@harbor:~# mkdir -p /data/k8sdata/myserver/myappdata
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/volumes".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/pod-vol".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/myserver/myappdata".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/myserver/myappdata
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#

Create the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-static-pv
  namespace: default
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 192.168.0.42

Create a PVC bound to the PV

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-static-pvc
  namespace: default
spec:
  volumeName: myapp-static-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Create a pod that uses the PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-nfs-pvc-80
  namespace: default
  labels:
    app: ngx-pvc-80
spec:
  selector:
    matchLabels:
      app: ngx-pvc-80
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-pvc-80
    spec:
      containers:
      - name: ngx-pvc-80
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: ngx-pvc-80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: data-pvc
          mountPath: /usr/share/nginx/html/
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: data-pvc
          persistentVolumeClaim:
            claimName: myapp-static-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-pvc-svc
  namespace: default
spec:
  selector:
    app: ngx-pvc-80
  type: NodePort
  ports:
  - name: ngx-nfs-svc
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30014

Apply the manifests above

root@k8s-deploy:/yaml# kubectl apply -f nfs-static-pvc-demo.yaml
persistentvolume/myapp-static-pv created
persistentvolumeclaim/myapp-static-pvc created
deployment.apps/ngx-nfs-pvc-80 created
service/ngx-pvc-svc created
root@k8s-deploy:/yaml# kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
myapp-static-pv   2Gi        RWO            Retain           Bound    default/myapp-static-pvc                           4s
root@k8s-deploy:/yaml# kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myapp-static-pvc   Pending   myapp-static-pv   0                                        7s
root@k8s-deploy:/yaml# kubectl get pods
NAME                            READY   STATUS    RESTARTS       AGE
deploy-demo-6849bdf444-pvsc9    1/1     Running   1 (151m ago)   47h
deploy-demo-6849bdf444-sg8fz    1/1     Running   1 (151m ago)   47h
ng-rc-l7xmp                     1/1     Running   1 (151m ago)   2d1h
ng-rc-wl5d6                     1/1     Running   1 (151m ago)   2d1h
ngx-nfs-pvc-80-f776bb6d-nwwwq   0/1     Pending   0              10s
rs-demo-nzmqs                   1/1     Running   1 (151m ago)   2d
rs-demo-v2vb6                   1/1     Running   1 (151m ago)   2d
rs-demo-x27fv                   1/1     Running   1 (151m ago)   2d
test                            1/1     Running   7 (151m ago)   18d
test1                           1/1     Running   7 (151m ago)   18d
test2                           1/1     Running   7 (151m ago)   18d
root@k8s-deploy:/yaml#

Create an index.html under /data/k8sdata/myserver/myappdata on the NFS server and check whether the page can be served.

root@harbor:~# echo "this page from nfs-server /data/k8sdata/myserver/myappdata/index.html" >> /data/k8sdata/myserver/myappdata/index.html
root@harbor:~# cat /data/k8sdata/myserver/myappdata/index.html
this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
root@harbor:~#

Access the pod

root@harbor:~# curl 192.168.0.36:30014
this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
root@harbor:~#

Using dynamic PVCs backed by NFS

Create the namespace, service account, clusterrole, clusterrolebinding, role and rolebinding

apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's PROVISIONER_NAME env var
reclaimPolicy: Retain # deletion policy for the PV; the default, Delete, removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 # some options misbehave with containerd
  #- noresvport # tell the NFS client to use a new TCP source port when re-establishing the connection
  - noatime # do not update the file's inode access time on reads; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true" # keep (archive) the pod's data when it is deleted; "false" means the data is not kept

Create the provisioner

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.0.42
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.42
            path: /data/volumes

Create a PVC through the StorageClass

apiVersion: v1
kind: Namespace
metadata:
  name: myserver
---
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the storageclass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size

Create an app that uses the PVC

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30015
  selector:
    app: myserver-myapp-frontend

Apply the manifests above

root@k8s-deploy:/yaml/myapp# kubectl apply -f .
namespace/nfs created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
namespace/myserver created
persistentvolumeclaim/myserver-myapp-dynamic-pvc created
deployment.apps/myserver-myapp-deployment-name created
service/myserver-myapp-service-name created
root@k8s-deploy:

Verify: were the sc, pv and pvc created, and is the pod running?

root@k8s-deploy:/yaml/myapp# kubectl get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Retain          Immediate           false                  105s
root@k8s-deploy:/yaml/myapp# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS          REASON   AGE
pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c   500Mi      RWX            Retain           Bound    myserver/myserver-myapp-dynamic-pvc   managed-nfs-storage            107s
root@k8s-deploy:/yaml/myapp# kubectl get pvc -n myserver
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myserver-myapp-dynamic-pvc   Bound    pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c   500Mi      RWX            managed-nfs-storage   117s
root@k8s-deploy:/yaml/myapp# kubectl get pods -n myserver
NAME                                              READY   STATUS    RESTARTS   AGE
myserver-myapp-deployment-name-65ff65446f-xpd5p   1/1     Running   0          2m8s
root@k8s-deploy:/yaml/myapp#

Tip: the PV was created automatically by the StorageClass, and the PVC was bound to that PV automatically.

Verify: create an index.html under /data/volumes on the NFS server, access the pod's service, and check that the file is served.

root@harbor:/data/volumes# ls
myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c
root@harbor:/data/volumes# cd myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c/
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# ls
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# echo "this page from nfs-server /data/volumes" >> index.html
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# cat index.html
this page from nfs-server /data/volumes
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c#

Tip: under /data/volumes on the NFS server the provisioner automatically creates a directory named after the namespace of the pod using the PVC, the PVC name and the PV name.

Access the pod

root@harbor:~# curl 192.168.0.36:30015/statics/index.html
this page from nfs-server /data/volumes
root@harbor:~#

Tip: the file we just created is served, so the pod has mounted the corresponding NFS directory correctly.

PV/PVC summary

A PV is an abstraction over the underlying network storage: it presents the storage as a cluster resource, so a large pool of storage can be carved into pieces for different workloads.

A PVC is a claim on PV resources: a pod writes its data through the PVC into the PV, and the PV persists it to the real storage underneath.

PersistentVolume parameters

capacity - the size of the PV; kubectl explain PersistentVolume.spec.capacity

accessModes - access modes; kubectl explain PersistentVolume.spec.accessModes

ReadWriteOnce - the PV can be mounted read-write by a single node (RWO)

ReadOnlyMany - the PV can be mounted read-only by many nodes (ROX)

ReadWriteMany - the PV can be mounted read-write by many nodes (RWX)

persistentVolumeReclaimPolicy - what happens to the volume once it is released:

Retain - the PV is kept as-is and must eventually be removed manually by an administrator

Recycle - the volume is scrubbed, i.e. all data on it (including directories and hidden files) is deleted; only NFS and hostPath support this

Delete - the volume is deleted automatically

volumeMode - volume mode; kubectl explain PersistentVolume.spec.volumeMode; whether the volume is consumed as a filesystem or as a raw block device, the default being filesystem

mountOptions - a list of extra mount options for finer-grained control
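As a rough illustration (not taken from the original post), the parameters above sit in a PV spec like this; the NFS path and server reuse the values from the earlier examples, the name is made up:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-params-demo                    # illustrative name
spec:
  capacity:
    storage: 2Gi                          # capacity
  accessModes:
    - ReadWriteOnce                       # RWO / ROX / RWX
  persistentVolumeReclaimPolicy: Retain   # Retain / Recycle / Delete
  volumeMode: Filesystem                  # Filesystem (default) or Block
  mountOptions:
    - noatime
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 192.168.0.42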

Official documentation: Persistent Volumes | Kubernetes (https://kubernetes.io/docs/concepts/storage/persistent-volumes/).

PersistentVolumeClaim parameters

accessModes - the PVC's access modes; kubectl explain PersistentVolumeClaim.spec.accessModes

ReadWriteOnce - the claim can be mounted read-write by a single node (RWO)

ReadOnlyMany - the claim can be mounted read-only by many nodes (ROX)

ReadWriteMany - the claim can be mounted read-write by many nodes (RWX)

resources - the amount of storage the PVC asks for

selector - label selector used to pick the PV to bind

matchLabels - match by label

matchExpressions - set-based match expressions (In, NotIn, Exists, DoesNotExist)

volumeName - the name of the PV to bind to

volumeMode - volume mode; whether the claim is for a filesystem or a raw block device, the default being filesystem
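A sketch of a claim using these fields (the names are illustrative; in practice you give either a volumeName, as in the static example above, or a selector/storage class and let the control plane pick the PV):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-params-demo
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      release: stable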

Volume - storage volume types

static: static volumes; the PV is created by hand in advance, a PVC is then created and bound to it, and the claim is mounted into the pod. Suitable when the PVs and PVCs are relatively fixed.

dynamic: dynamic volumes; a StorageClass is created first, and the PVCs used by pods are then provisioned dynamically through it. Suitable for stateful clusters such as a MySQL primary with replicas, a ZooKeeper cluster, and so on.

StorageClass official documentation: Storage Classes | Kubernetes (https://kubernetes.io/docs/concepts/storage/storage-classes/).
