Rancher HA Deployment and Configuration Files

wys521 2024-10-24 16:21:54

Rancher HA Deployment

Architecture

192.168.10.60 rancher1

192.168.10.61 rancher2

192.168.10.62 rancher3

192.168.10.63 rancher-nginx


Part One: Base environment configuration

1. CentOS 7.6

Configure a static IP:

vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

NAME=ens33

DEVICE=ens33

ONBOOT=yes

IPADDR0=192.168.10.60

PREFIX0=24

GATEWAY0=192.168.10.2

DNS1=192.168.10.2


Set the hostname (rancher1/rancher2/rancher3, matching each node)

export HostName="rancher3"

hostname ${HostName}

echo "${HostName}" >/etc/hostname

su -

Configure /etc/hosts:

cat >> /etc/hosts <<EOF

192.168.10.60 rancher1

192.168.10.61 rancher2

192.168.10.62 rancher3

192.168.10.63 rancher-nginx

EOF


Disable SELinux on CentOS

sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Disable the firewall

systemctl stop firewalld.service && systemctl disable firewalld.service


Configure host time, time zone, and system locale

Check the time zone:

date -R   # or: timedatectl

Set the time zone:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Set the system locale (note that sudo does not apply to the >> redirection, so use tee):

echo 'LANG="en_US.UTF-8"' | sudo tee -a /etc/profile && source /etc/profile

Configure NTP time synchronization
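The article does not show the synchronization commands themselves. A minimal sketch, assuming the ntpdate tool installed in the base-packages step below; the pool address, the 30-minute interval, and the cron file name are all assumptions, not part of the original setup:

```shell
# Build the cron entry for periodic time sync. NTP_SERVER is an assumption --
# point it at an internal NTP server if you have one.
NTP_SERVER="cn.pool.ntp.org"
CRON_LINE="*/30 * * * * root /usr/sbin/ntpdate -u ${NTP_SERVER} >/dev/null 2>&1"
echo "${CRON_LINE}"
# On the real hosts you would persist the entry and sync once immediately:
#   echo "${CRON_LINE}" > /etc/cron.d/ntpdate-sync
#   ntpdate -u "${NTP_SERVER}"
```

On CentOS 7 chronyd is the more common long-term choice; this mirrors the one-shot ntpdate approach the article already uses.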


Kernel performance tuning

cat >> /etc/sysctl.conf <<EOF

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.ipv4.neigh.default.gc_thresh1=4096

net.ipv4.neigh.default.gc_thresh2=6144

net.ipv4.neigh.default.gc_thresh3=8192

vm.swappiness=0

vm.max_map_count=655360

EOF

sysctl -p


Disable swap

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

System limits tuning

echo -e "root soft nofile 65535\nroot hard nofile 65535\n* soft nofile 65535\n* hard nofile 65535\n" >> /etc/security/limits.conf

sed -i 's#4096#65535#g' /etc/security/limits.d/20-nproc.conf

Install some base packages and sync the clock once

yum -y install wget ntpdate lrzsz curl yum-utils device-mapper-persistent-data lvm2 bash-completion && ntpdate -u cn.pool.ntp.org


Load the required kernel modules

for i in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp; do echo $i; modprobe $i; done

Passwordless SSH between the rancher nodes (as the rancher user)

su - rancher

ssh-keygen

ssh-copy-id rancher@192.168.10.60

ssh-copy-id rancher@192.168.10.61

ssh-copy-id rancher@192.168.10.62


2. Install Docker CE

Supported Docker versions:

  • 18.06.x
  • 18.09.x

Check the installed Docker version

docker --version

rpm -qa|grep docker

Remove any conflicting version, for example:

yum remove docker-ce-cli-19.03.1-3.el7.x86_64

Docker install script

rancher_install_docker.sh
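The script itself is not reproduced in the article. A sketch of what such a script typically does on CentOS 7, assuming the Aliyun docker-ce mirror and an 18.09.x version pin (both are assumptions, not the author's actual rancher_install_docker.sh); with DRY_RUN=1, the default here, it only prints the commands:

```shell
# Hypothetical install sketch -- not the author's rancher_install_docker.sh.
# DRY_RUN=1 (default) prints each command instead of executing it.
DOCKER_VERSION="${DOCKER_VERSION:-18.09.9}"
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run yum-config-manager --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
run yum -y install "docker-ce-${DOCKER_VERSION}" \
    "docker-ce-cli-${DOCKER_VERSION}" containerd.io
run systemctl enable docker
run systemctl start docker
# RKE connects over SSH as the unprivileged rancher user, which must be able
# to run docker:
run usermod -aG docker rancher
```

Set DRY_RUN=0 only after reviewing the printed commands against your environment.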


Docker configuration

/etc/docker/daemon.json

Concurrent download and upload limits

"max-concurrent-downloads": 3,

"max-concurrent-uploads": 5

Registry mirror addresses

"registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/","https://IP:PORT/"]

Insecure (private) registries

"insecure-registries": ["192.168.1.100","IP:PORT"]


Storage driver

"storage-driver": "overlay2",

"storage-opts": ["overlay2.override_kernel_check=true"]

Logging driver

"log-driver": "json-file",

"log-opts": {

"max-size": "100m",

"max-file": "3"

}


A complete /etc/docker/daemon.json example:

{
    "max-concurrent-downloads": 3,
    "max-concurrent-uploads": 5,
    "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/"],
    "insecure-registries": ["192.168.10.33"],
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}

systemctl daemon-reload

systemctl start docker
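A malformed daemon.json will stop dockerd from starting, so a syntax check before the reload is cheap insurance. check_daemon_json below is a hypothetical helper, not a Docker tool; it assumes a python (or python3, via PYTHON=) interpreter on the host:

```shell
# Hypothetical helper: fail if a daemon.json file is not valid JSON.
# `python -m json.tool` exits non-zero on malformed input.
check_daemon_json() {
  file="${1:-/etc/docker/daemon.json}"
  "${PYTHON:-python}" -m json.tool "$file" >/dev/null || return 1
  echo "daemon.json OK: $file"
}
```

Run check_daemon_json (with PYTHON=python3 where python 2 is absent) before systemctl daemon-reload.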

Reboot the node


4. Install Nginx (run on the nginx node)

yum -y install nginx

Configure nginx.conf:

user nginx;
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    # Gzip Settings
    gzip on;
    gzip_disable "msie6";
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_vary on;
    gzip_static on;
    gzip_proxied any;
    gzip_min_length 0;
    gzip_comp_level 8;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/xml application/xml application/atom+xml application/rss+xml application/xhtml+xml image/svg+xml application/font-woff text/javascript application/javascript application/x-javascript text/x-json application/json application/x-web-app-manifest+json text/css text/plain text/x-component font/opentype application/x-font-ttf application/vnd.ms-fontobject font/woff2 image/x-icon image/png image/jpeg;

    # Redirect plain HTTP to HTTPS
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
}

# Layer-4 load balancing across the three Rancher nodes
stream {
    upstream rancher_servers {
        least_conn;
        server 192.168.10.60:443 max_fails=3 fail_timeout=5s;
        server 192.168.10.61:443 max_fails=3 fail_timeout=5s;
        server 192.168.10.62:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 443;
        proxy_pass rancher_servers;
    }
}

Reload the nginx configuration

nginx -s reload


Or run nginx as a container instead:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /etc/nginx.conf:/etc/nginx/nginx.conf \
  nginx:1.14


5. Load the downloaded images

Run on all nodes except the nginx node:

rancher_load_images.sh
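rancher_load_images.sh is not shown either. A hedged sketch of its usual shape, assuming the offline image archives sit in a local directory as .tar / .tar.gz files; the directory name and the DOCKER override for dry runs are both assumptions:

```shell
# Hypothetical sketch of rancher_load_images.sh: feed every image archive in
# a directory to `docker load`. Set DOCKER=echo for a dry run.
load_images() {
  dir="${1:-./images}"
  for f in "${dir}"/*.tar "${dir}"/*.tar.gz; do
    [ -e "$f" ] || continue          # skip unmatched glob patterns
    echo "loading $f"
    "${DOCKER:-docker}" load -i "$f" || return 1
  done
}
load_images "${1:-./images}"
```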

6. Install Kubernetes with RKE (on one of the rancher nodes, e.g. the 10.60 machine)

Switch to the rancher user:

su - rancher

Create the rancher-cluster.yml file:

nodes:
  - address: 192.168.10.60
    user: rancher
    role: [controlplane,worker,etcd]
  - address: 192.168.10.61
    user: rancher
    role: [controlplane,worker,etcd]
  - address: 192.168.10.62
    user: rancher
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h


7. Create the Kubernetes cluster

Install the rke binary:

cp v0.2.6-rke_linux-amd64 /usr/local/bin/

mv /usr/local/bin/v0.2.6-rke_linux-amd64 /usr/local/bin/rke

Run the installation as the rancher user:

rke up --config ./rancher-cluster.yml

When the output shows

INFO[0168] Finished building Kubernetes cluster successfully

the deployment is complete.


RKE will have written a kubeconfig file named kube_config_rancher-cluster.yml.

Copy this file to $HOME/.kube/config:

[rancher@rancher1 ~]$ mkdir /home/rancher/.kube

[rancher@rancher1 ~]$ cp kube_config_rancher-cluster.yml /home/rancher/.kube/config


8. Install and configure kubectl

Download the binary:

https://www.cnrancher.com/docs/rancher/v2.x/cn/install-prepare/download/

cp linux-amd64-v1.14.5-kubectl /usr/local/bin/kubectl

chmod +x /usr/local/bin/kubectl

Verify the cluster status

kubectl get nodes


Check that all pods are running

kubectl get pods --all-namespaces


9. Install and configure Helm (node 1, 10.60)

Configure Helm client access. Tiller is Helm's server-side component and manages the installed charts.

Create a ServiceAccount in the kube-system namespace:

kubectl -n kube-system create serviceaccount tiller

Create a ClusterRoleBinding to grant the tiller account cluster-wide access:

kubectl create clusterrolebinding tiller \

--clusterrole cluster-admin --serviceaccount=kube-system:tiller

Install the Helm client:

tar zxvf helm-v2.14.3-linux-amd64.tar.gz

cp linux-amd64/helm /usr/local/bin/

chmod +x /usr/local/bin/helm


Install the Helm server (Tiller)

View the cluster info:

kubectl config view

Run the following to install Tiller into the cluster.

Capture the Helm client version:

helm_version=$(helm version | grep Client | awk -F'"' '{print $2}')

Initialize Tiller using the Aliyun image registry:

helm init \

--service-account tiller --skip-refresh \

--tiller-image registry.cn-shanghai.aliyuncs.com/rancher/tiller:$helm_version


Or, if the images were already loaded locally:

helm init \

--service-account tiller --skip-refresh \

--tiller-image rancher/tiller:$helm_version

Check whether Tiller is deployed:

kubectl get pods --namespace kube-system

Verify:

helm version

This should print both the client and server version information.


10. Install Rancher with Helm (node 1, 10.60)

Add the chart repository:

helm repo add rancher-stable \

https://releases.rancher.com/server-charts/stable

Generate the SSL certificates:

./rancher_gen_ssl.sh --ssl-domain=meng.com --ssl-trusted-domain=meng.com \

--ssl-trusted-ip=192.168.10.60,192.168.10.61,192.168.10.62,192.168.10.63 --ssl-size=2048 --ssl-date=3650

Use kubectl to create two secrets, tls-ca and tls-rancher-ingress, in the cattle-system namespace.

The certificate, private key, and CA files must be named tls.crt, tls.key, and cacerts.pem respectively.
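Since Rancher looks for those exact file names, it is worth checking the ssl directory before creating the secrets. check_ssl_dir is a hypothetical helper, not part of Rancher or the rancher_gen_ssl.sh script:

```shell
# Hypothetical check: verify the ssl directory holds the exact file names
# Rancher expects (tls.crt, tls.key, cacerts.pem). Non-zero exit if any is missing.
check_ssl_dir() {
  dir="$1"
  missing=0
  for f in tls.crt tls.key cacerts.pem; do
    if [ ! -f "${dir}/${f}" ]; then
      echo "missing: ${dir}/${f}" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

For example, `check_ssl_dir ./ssl` before running the kubectl create secret commands below.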


# Create the namespace

Switch to the rancher user, then:

kubectl create namespace cattle-system

# Secret for the server certificate and private key

cd ssl

kubectl -n cattle-system create \

secret tls tls-rancher-ingress \

--cert=./tls.crt \

--key=./tls.key

(Switch to the root account to copy the ssl certificates:)

cp -rf ssl /home/rancher/

chown -R rancher:rancher /home/rancher/ssl


# Secret for the CA certificate

kubectl -n cattle-system create secret \

generic tls-ca \

--from-file=cacerts.pem

Install Rancher Server (replace the hostname with your own domain):

helm install rancher-stable/rancher \

--name rancher \

--namespace cattle-system \

--set hostname=meng.com \

--set ingress.tls.source=secret \

--set privateCA=true

Output:

NAME: rancher

LAST DEPLOYED: Wed Aug 14 18:50:44 2019

NAMESPACE: cattle-system

STATUS: DEPLOYED

RESOURCES:

==> v1/ClusterRoleBinding

NAME AGE

rancher 1s

==> v1/Deployment

NAME READY UP-TO-DATE AVAILABLE AGE

rancher 0/3 3 0 1s

==> v1/Pod(related)

NAME READY STATUS RESTARTS AGE

rancher-6746c85fc4-k42qm 0/1 ContainerCreating 0 1s

rancher-6746c85fc4-pvbzf 0/1 ContainerCreating 0 1s

rancher-6746c85fc4-scswd 0/1 ContainerCreating 0 1s

==> v1/Service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

rancher ClusterIP 10.43.102.119 80/TCP 1s

==> v1/ServiceAccount

NAME SECRETS AGE

rancher 1 1s

==> v1beta1/Ingress

NAME HOSTS ADDRESS PORTS AGE

rancher meng.com 80, 443 1s

NOTES:

Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.

Check out our docs at https://rancher.com/docs/rancher/v2.x/en/

Browse to https://meng.com

Happy Containering!


11. Log in to the Rancher UI

Add a hosts entry on your Windows workstation:

#rancher

192.168.10.63 meng.com

Browse to:

https://meng.com

Set the admin password (ledou2019 in this example).


12. Add host aliases for the agent pods

If you have no internal DNS server and instead pointed the Rancher Server domain at an IP via /etc/hosts aliases, then no matter how a Kubernetes cluster is created (custom, imported, host driver, etc.), once it is running the cattle-cluster-agent pod and cattle-node-agent cannot resolve the Rancher Server URL through DNS, and communication with Rancher fails.

The cattle-cluster-agent pod and cattle-node-agent are only deployed after the local cluster has initialized, so first open the Rancher Web UI via the Rancher Server URL and complete the initialization.

Run the following to configure host aliases for the Rancher Server deployment:

su - rancher

kubectl -n cattle-system \
    patch deployments rancher --patch '{
    "spec": {
        "template": {
            "spec": {
                "hostAliases": [
                    {
                        "hostnames": ["meng.com"],
                        "ip": "192.168.10.63"
                    }
                ]
            }
        }
    }
}'


In the Rancher Web UI, go to the local cluster's System project and check in the cattle-system namespace, under Workloads, whether the cattle-cluster-agent pod and cattle-node-agent have been created. If they have, continue with the steps below; if not, wait.

Or from the command line:

kubectl -n cattle-system get pods

Patch the cattle-cluster-agent deployment:

kubectl -n cattle-system \
    patch deployments cattle-cluster-agent --patch '{
    "spec": {
        "template": {
            "spec": {
                "hostAliases": [
                    {
                        "hostnames": ["meng.com"],
                        "ip": "192.168.10.63"
                    }
                ]
            }
        }
    }
}'


Patch the cattle-node-agent daemonset:

kubectl -n cattle-system \
    patch daemonsets cattle-node-agent --patch '{
    "spec": {
        "template": {
            "spec": {
                "hostAliases": [
                    {
                        "hostnames": ["meng.com"],
                        "ip": "192.168.10.63"
                    }
                ]
            }
        }
    }
}'


Replace the domain and IP above with your own.

Custom Kubernetes clusters from the Rancher UI

In the Rancher UI, go to Global → Settings → system-default-registry and change the default registry to registry.cn-shanghai.aliyuncs.com, so that new Kubernetes clusters pull their images from this registry by default.



