
6. Playing with the Raspberry Pi: DevOps with Kubernetes


k8s

Kubernetes is an open-source container orchestration engine that automates the deployment, scaling, and rolling upgrades of containerized applications.
The core components on the master node are the API Server (exposes the Kubernetes API as a RESTful service), the Controller Manager (maintains cluster state: failure detection, auto scaling, rolling updates), the Scheduler (watches for newly created Pods and assigns them to nodes), and etcd (stores resource object state and network configuration).
On each worker node, the kubelet ensures the containers in its Pods are running, kube-proxy acts as the network proxy, and the container runtime runs the containers themselves.
Add-ons include CoreDNS for cluster DNS, Flannel for cross-node Pod-to-Pod networking, Nginx Ingress as the external entry point for services, and Dashboard as the Kubernetes web console.
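
On a running cluster these components can be inspected directly; a quick sanity check might look like this (plain kubectl commands, output varies by distribution):

# control-plane and add-on pods normally live in the kube-system namespace
kubectl get pods -n kube-system
# node roles, container runtime, and kubelet versions
kubectl get nodes -o wide
# the API Server's own health report
kubectl get --raw='/readyz?verbose'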

k3s

k3s is optimized for IoT, edge computing, and ARM devices: it cuts Kubernetes' memory footprint (at least 512 MB on a server, 75 MB on an agent) and shrinks the binary itself.
Server nodes run k3s server and worker nodes run k3s agent. For a highly available k3s cluster, run multiple server nodes backed by an external datastore.
k3s bundles containerd to run containers and Klipper as the load balancer, uses Traefik as the Ingress controller, ships the Flannel and CoreDNS add-ons, and provides local-path-provisioner to map local storage into Pods.
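
As a sketch of that high-availability setup, the install script accepts a --datastore-endpoint flag pointing at an external database; the MySQL host and credentials below are placeholders, not values from this article:

# every server node points at the same external datastore
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(db-host:3306)/k3s"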

Server node installation

curl -sfL https://get.k3s.io | sh -
# [INFO]  Finding release for channel stable
# [INFO]  Using v1.18.9+k3s1 as release
# [INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/sha256sum-arm.txt
# [INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/k3s-armhf
# [INFO]  Verifying binary download
# [INFO]  Installing k3s to /usr/local/bin/k3s
# [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
# [INFO]  Creating /usr/local/bin/crictl symlink to k3s
# [INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
# [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
# [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
# [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
# [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
# [INFO]  systemd: Enabling k3s unit
# Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
# [INFO]  systemd: Starting k3s
sudo kubectl get nodes
# Fix the "certificate signed by unknown authority" error when pulling images from a registry whose CA is not in the system trust store
sudo curl -X GET https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem --output /etc/ssl/certs/lets-encrypt-x3-cross-signed.pem
sudo curl -X GET https://letsencrypt.org/certs/isrgrootx1.pem --output /etc/ssl/certs/isrgrootx1.pem
# Uninstall k3s
# /usr/local/bin/k3s-uninstall.sh

Worker node installation

# k3s-master must resolve to the server node; an entry in /etc/hosts is enough
K3S_URL="https://k3s-master:6443"
sudo cat /var/lib/rancher/k3s/server/node-token
# the token comes from the output of the command above
K3S_TOKEN="::node::"
curl -sfL https://get.k3s.io | K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -
# Or download the binary and install offline. Notes: ipvs needs kernel module support and scales to tens of thousands of nodes; flannel's default backend is vxlan, the wireguard backend encrypts traffic, and setting it to none lets another CNI take over. The same flags can also be set by editing /etc/systemd/system/k3s.service.
# ipvs also needs ipvsadm (sudo apt-get install ipvsadm); in /etc/default/ipvsadm set DAEMON to master or backup and IFACE to the real NIC name
# Linux kernel 5.6+ ships wireguard; older kernels need it installed separately
# cp k3s-armhf /usr/local/bin/k3s && chmod 755 /usr/local/bin/k3s && export INSTALL_K3S_SKIP_DOWNLOAD=true
# curl -sfL https://get.k3s.io | sh -s - agent --server $K3S_URL --token $K3S_TOKEN --kube-proxy-arg "proxy-mode=ipvs"
# (--flannel-backend=wireguard is a server flag and belongs on the k3s server command line, not the agent's)
# alias docker='k3s crictl'
# kubectl get pod
# Fix the "certificate signed by unknown authority" error when pulling images
sudo curl -X GET https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem --output /etc/ssl/certs/lets-encrypt-x3-cross-signed.pem
sudo curl -X GET https://letsencrypt.org/certs/isrgrootx1.pem --output /etc/ssl/certs/isrgrootx1.pem
# Uninstall the k3s agent
# /usr/local/bin/k3s-agent-uninstall.sh
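
Back on the server node, confirm the agent has registered; the role label is optional and the node name below is a placeholder:

kubectl get nodes -o wide
# kubectl label node <agent-node-name> node-role.kubernetes.io/worker=worker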

Installing kubernetes-dashboard on the server node

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
# Some versions of this manifest reference k8s.gcr.io, which is unreachable from some regions; download the manifest and replace the registry with registry.aliyuncs.com/google_containers
# To use your own certificate, recreate the kubernetes-dashboard-certs Secret and pass "--tls-cert-file=tls.crt" and "--tls-key-file=tls.key" to the Deployment; take care not to remove the auto-generate-certificates argument
kubectl get pod --all-namespaces --show-labels
# If a pod's status is abnormal, inspect the reason:
# kubectl -n kubernetes-dashboard describe pod kubernetes-dashboard
# Create a login user and bind it to a role: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Retrieve the token
kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token
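# Note: on Kubernetes 1.24+ a long-lived token Secret is no longer created automatically
# for a ServiceAccount, so the describe command above may come up empty; a short-lived
# token can be requested instead (this command is not from the original article):
# kubectl -n kubernetes-dashboard create token admin-user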
# Start the local proxy
kubectl proxy
# Visit http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Paste the token shown above and click Sign in
# Delete the pod / delete the namespace
# kubectl -n kubernetes-dashboard delete pod -l k8s-app=kubernetes-dashboard
# kubectl delete ns kubernetes-dashboard

Deploying with the dashboard

# Create a Secret
# https://kubernetes.io/zh/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: registry-secret
type: kubernetes.io/basic-auth # built-in types include Opaque, kubernetes.io/service-account-token, kubernetes.io/tls, kubernetes.io/ssh-auth, etc.
stringData: # stringData takes plain text; with the data field, values must be base64-encoded: echo -n 'myusername' | base64
  username: myusername
  password: mypassword
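# Note: a basic-auth Secret holds generic credentials, but the imagePullSecrets reference in the
# Deployment below expects a Secret of type kubernetes.io/dockerconfigjson; one way to create it
# (reusing the placeholder registry and credentials above; not part of the original article):
# kubectl create secret docker-registry registry-secret --docker-server=hub.iamwhatiam.ml:5000 --docker-username=myusername --docker-password=mypassword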
# Private image registry for the k3s server
# https://docs.rancher.cn/docs/k3s/installation/private-registry/_index
cat > /etc/rancher/k3s/registries.yaml << "EOF"
mirrors:
  docker.io:
    endpoint:
      - "https://docker.mirrors.ustc.edu.cn"
      - "https://registry-1.docker.io"
  hub.iamwhatiam.ml:
    endpoint:
      - "https://hub.iamwhatiam.ml:5000"
configs:
  "hub.iamwhatiam.ml":
# with authentication
    auth:
      username: myusername
      password: mypassword
# with TLS
    tls:
# mutual TLS
#      cert_file: /home/pi/.acme.sh/*.iamwhatiam.ml/pi.cer
#      key_file: /home/pi/.acme.sh/*.iamwhatiam.ml/pi.key
# self-signed CA certificate
      ca_file: /home/pi/.acme.sh/*.iamwhatiam.ml/ca.cer
EOF
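# registries.yaml is only read at startup; restart k3s for the change to take effect
sudo systemctl restart k3s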
# crictl rmi --prune
# Private image registry for a k3s agent
# vi /var/lib/rancher/k3s/agent/etc/containerd/config.toml
#
#[plugins.cri.registry.mirrors]
#
#[plugins.cri.registry.mirrors."docker.io"]
#  endpoint = ["https://hub.iamwhatiam.ml:5000", "https://registry-1.docker.io"]
#
#[plugins.cri.registry.mirrors."hub.iamwhatiam.ml"]
#  endpoint = ["https://hub.iamwhatiam.ml:5000"]

#[plugins.cri.registry.configs."hub.iamwhatiam.ml".auth]
#  username = "myusername"
#  password = "mypassword"
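# Note: k3s regenerates config.toml on every start, so direct edits are lost; put persistent
# customizations in config.toml.tmpl instead, then restart the agent:
# cp /var/lib/rancher/k3s/agent/etc/containerd/config.toml{,.tmpl}
# sudo systemctl restart k3s-agent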
# A PVC claims volume resources
# https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany # one of ReadWriteOnce, ReadOnlyMany, ReadWriteMany; this one allows read-write access from many nodes
  storageClassName: "" # bind to a specific PV / storageClass
  resources:
    requests:
      storage: 10240Mi
# A PV provides the volume resource
# https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 102400Mi
  accessModes:
    - ReadWriteMany
#  storageClassName: nfs-storage-class
  nfs: # data can exist before the pod does, survives the pod's deletion, and can be shared between pods
    server: nfs-server.default.svc.cluster.local
    path: "/mnt/nfs/data"
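# After both manifests are applied, the claim binds to the volume; verify with:
# kubectl get pv nfs-pv
# kubectl get pvc nfs-pvc # STATUS should show Bound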
# Create a ConfigMap
# kubectl create configmap app-config-name --from-file=configmap/application.properties --from-file=configmap/server.properties
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-name
  namespace: default
data: # in the multi-file form, each key is a file name
  application.properties: |
    spring.application.name=demo
    server.port=8080
# Create the Deployment
#cat > app.yml << "EOF"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app # deployment name
  namespace: default
  labels:
    app: app-name # deployment label
spec:
  replicas: 2 # number of replicas
  selector:
    matchLabels:
      app.kubernetes.io/name: app-name
  template:
    metadata:
      labels: # common labels: name, instance, version, component, part-of, managed-by
        app.kubernetes.io/name: app-name # must match spec.selector.matchLabels
    spec:
# When /etc/rancher/k3s/registries.yaml is not configured and the private registry uses TLS, supply a pull secret
#      imagePullSecrets:
#        - name: registry-secret
      nodeSelector: # nodeName can pin a pod to a node but is awkward with auto-generated node names; affinity rules are the more flexible way to pick nodes
        kubernetes.io/arch: amd64 # well-known node labels: arch, os, hostname; add your own with: kubectl label nodes <node-name> <label-key>=<label-value>
      containers:
        - name: container-name
          image: hub.iamwhatiam.ml:5000/repoName/imageName:imageVersion # pulling by domain name requires TLS on the registry; for a CA outside the system trust store, import the root certificate into /etc/ssl/certs/
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: dev
            - name: INSTANCE_NAME
              valueFrom: 
                fieldRef:
                  fieldPath: spec.nodeName
# environment variables can also reference a ConfigMap or Secret in the same namespace
#            - name: APPLICATION_CONFIG
#              valueFrom:
#                configMapKeyRef:
#                  name: app-config-name
#                  key: keyName # if the ConfigMap holds whole files, this is the file name
#            - name: TOKEN
#              valueFrom:
#                secretKeyRef:
#                  name: app-secret-name
#                  key: tokenKey
# command and args override the ENTRYPOINT and CMD baked into the image
          command: # remote debug: $CATALINA_HOME/bin/catalina jpda start
            - java
          args:
            - '-Xdebug'
            - '-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000'
#            - '-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n'
            - '-jar'
            - 'spring.application.name.jar'
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8000
              protocol: TCP
          volumeMounts:
            - name: cache-vol
              mountPath: /var/cache
            - name: nfs-vol # NFS volumes require the NFS client on every node: apt-get install nfs-common
              mountPath: /srv/nfs
            - name: config-vol
              mountPath: /etc/config/app
            - name: secret-vol
              mountPath: /etc/secret/app
              readOnly: true
          resources: # QoS is BestEffort when neither requests nor limits are set, Burstable when requests < limits, and Guaranteed when requests == limits
            limits:
              memory: "1024Mi"
              cpu: "1"
            requests:
              memory: "512Mi"
              cpu: 1000m
          startupProbe: # startup probe; liveness and readiness checks wait until it succeeds
            tcpSocket:
              port: 8080
            failureThreshold: 3 # number of failed probes Kubernetes tolerates before giving up
            periodSeconds: 10 # interval between probes, in seconds
          livenessProbe: # the container is killed and restarted when liveness fails
            httpGet:
              port: 8080
              path: /actuator/health/liveness
            initialDelaySeconds: 0 # delay before the first probe
            periodSeconds: 5
          readinessProbe: # the pod only receives traffic once ready
            httpGet:
              port: 8080
              path: /actuator/health/readiness
            initialDelaySeconds: 30
            periodSeconds: 5
      volumes:
        - name: cache-vol
          emptyDir:
            medium: Memory # emptyDir is ephemeral storage; Memory backs it with RAM
        - name: nfs-vol
          persistentVolumeClaim:
            claimName: nfs-pvc
        - name: config-vol
          configMap:
            name: app-config-name
        - name: secret-vol
          secret:
            secretName: app-secret-name
            defaultMode: 256 # 0400 in octal; the JSON API only accepts the decimal form
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
# shell into a pod: kubectl exec -it app-pod-id -- /bin/sh
#EOF
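# Apply from the command line instead of the dashboard, then watch the rollout:
# kubectl apply -f app.yml
# kubectl rollout status deployment/app
# kubectl get pods -l app.kubernetes.io/name=app-name -o wide
# roll back if the new revision misbehaves: kubectl rollout undo deployment/app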

Klipper LoadBalancer

# k3s's built-in LoadBalancer; no cloud provider implementation is needed
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app.kubernetes.io/name: app-name
  ports:
    - name: http
      port: 8080
      # nodePort: 30080 # --service-node-port-range defaults to 30000-32767
      # targetPort: 8080
# klipper-lb creates a DaemonSet prefixed with svclb; every eligible node runs one of its pods to proxy requests to the service
  type: LoadBalancer # defaults to ClusterIP; both LoadBalancer and NodePort expose the service outside the cluster
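# Once applied, Klipper publishes the node IPs as the service's external IPs; verify and test
# (the /actuator/health path assumes the Spring Boot app from the Deployment above):
# kubectl get svc app-service
# curl http://<EXTERNAL-IP>:8080/actuator/health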

Traefik Ingress

After installation, k3s automatically runs the helm-install-traefik job, which launches a klipper-helm pod to deploy an Ingress controller named traefik. A traefik Service of type LoadBalancer is created at the same time, which in turn spawns a DaemonSet based on the klipper-lb image.
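
Those objects can be inspected to confirm the bundled Traefik came up:

kubectl -n kube-system get job helm-install-traefik
kubectl -n kube-system get pods,svc | grep -i traefik
kubectl -n kube-system get daemonset | grep svclb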

Edit the ConfigMap named traefik to enable Traefik's web UI:

apiVersion: v1
kind: ConfigMap                                 
metadata:                                       
  annotations:                                  
    meta.helm.sh/release-name: traefik          
    meta.helm.sh/release-namespace: kube-system     
  labels:                                       
    app: traefik 
    app.kubernetes.io/managed-by: Helm          
    chart: traefik-1.81.0                       
    heritage: Helm                                                        
  name: traefik                                
  namespace: kube-system 
data:                                 
  traefik.toml: |
    # traefik.toml
    logLevel = "info"
    insecureSkipVerify = true # accept certificates from unknown CAs; avoids "Internal Server Error" when the dashboard is proxied over HTTPS by pod IP and the certificate carries no IP SANs
    checkNewVersion = false
    rootCAs = [] # https://github.com/traefik/traefik/blob/v1.7/integration/fixtures/https/rootcas/https.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]                        
      [entryPoints.http]                 
      address = ":80"                    
      compress = true  
        [entryPoints.http.redirect] # force HTTPS
        entryPoint = "https"  
        [entryPoints.http.auth] # authentication
          [entryPoints.http.auth.digest]
            users = ["test:traefik:a2688e031edb4be6a3797f3882655c05"]        
      [entryPoints.https]
      address = ":443"   
      compress = true    
        [entryPoints.https.tls]
          [[entryPoints.https.tls.certificates]]
          CertFile = "/ssl/tls.crt"             
          KeyFile = "/ssl/tls.key"              
      [entryPoints.prometheus]                  
      address = ":9100"  
    # frontends define routing rules (host and/or path) and point at a backend by name
    # backends map each backend name to service URLs plus load-balancing and health-check settings
    [ping]                        
    entryPoint = "http" 
    [kubernetes]                                
      [kubernetes.ingressEndpoint]              
      publishedService = "kube-system/traefik"  
    [traefikLog]                                
      format = "json"                           
    [metrics]                                   
      [metrics.prometheus]                      
        entryPoint = "prometheus"  
    [api] # https://doc.traefik.io/traefik/v1.7/configuration/api/
      dashboard = true # enable the web console
      entryPoint = "http" # http and https are defined above; the API's default entry point is named traefik
      # insecure = true # basic auth, digest auth, forward auth, white list
      [api.statistics]
        recentErrors = 10        
# kubectl get service traefik -n kube-system --output=custom-columns=CLUSTER-IP:.spec.clusterIP,EXTERNAL-IP:.status.loadBalancer.ingress[0].ip,PORT:.spec.ports[0].port>traefik-web-ui.txt
# sed -n '2p' traefik-web-ui.txt | awk '{print "http://"$1":"$3"/dashboard/"}'
# Open the URL printed above and sign in with user test, password test, to reach the Traefik dashboard
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik-lb
spec:
  controller: traefik.io/ingress-controller
# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=iamwhatiam.ml"
# kubectl -n kubernetes-dashboard create secret tls iamwhatiam.ml --key=tls.key --cert=tls.crt
apiVersion: v1
kind: Secret
metadata:
  name: iamwhatiam.ml
  namespace: kubernetes-dashboard # must match the Ingress's namespace, or traefik cannot use the certificate
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded key>
type: kubernetes.io/tls
# Expose kubernetes-dashboard so that "kubectl proxy" is no longer needed
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: "k8s-ingress"
  namespace: kubernetes-dashboard # must match the Service's namespace, or traefik cannot reach the service
#  annotations:
#    ingress.kubernetes.io/protocol: https # call the backend over HTTPS; traefik also switches to HTTPS automatically when the service port is 443 or the port name starts with https (beware: traefik claims port 443 by default, which can break the dashboard service, and naming the dashboard's port https breaks access via kubectl proxy)
#    ingress.kubernetes.io/force-hsts: "true"
spec:
  ingressClassName: "traefik-lb" # selects a specific ingress controller when several are installed
  tls:
    - secretName: iamwhatiam.ml
  defaultBackend: # traffic that matches no rule falls through to the default backend
    resource:
      apiGroup: static.iamwhatiam.ml
      kind: StorageBucket
      name: assets
  rules:
    - host: k8s.iamwhatiam.ml # optional; wildcard hosts such as *.iamwhatiam.ml are allowed
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific # Exact, Prefix
            backend:
              service:
                name:  kubernetes-dashboard
                port:
                  number: 443
# Now give https://k8s.iamwhatiam.ml a try!
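# If public DNS does not point at the cluster yet, curl's --resolve option pins the hostname
# to a node IP for a quick test (-k skips verification of the self-signed certificate):
# curl -k --resolve k8s.iamwhatiam.ml:443:<node-ip> https://k8s.iamwhatiam.ml/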
