Building a Ceph Cluster, and Recovering Data After a Cluster Failure

Home > Tech > Author: YD166, 2023-11-10 19:23:42

Role                    IP            Notes
----------------------  ------------  ----------------------------------------------
mon-node (admin-node)   10.5.77.61
mon-node                10.5.77.62
mon-node                10.5.77.63
osd-node                10.5.77.64    plan a dedicated disk or partition in advance
osd-node                10.5.77.65    plan a dedicated disk or partition in advance
osd-node                10.5.77.66    plan a dedicated disk or partition in advance

1. Disable the firewall, SELinux, and the swap partition

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
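The final sed expression comments out every /etc/fstab line containing "swap" by prefixing it with `#` (the `&` in the replacement stands for the whole matched line). A minimal sketch of that behavior on a throwaway file; the temp file and sample entries are illustrative, not your real /etc/fstab:

```shell
tmp=$(mktemp)   # stand-in for /etc/fstab
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > "$tmp"
sed -i 's/.*swap.*/#&/' "$tmp"
# The swap line is now commented out; the root filesystem line is untouched.
grep '^#' "$tmp"
```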

2. Configure kernel parameters so bridged IPv4 traffic is passed to the iptables chains

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited

sysctl --system
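The limits.conf additions above are shown as a vim edit, but they can also be appended non-interactively, which is handier when preparing all six nodes. A sketch using a temp file as a stand-in for /etc/security/limits.conf:

```shell
limits=$(mktemp)   # stand-in for /etc/security/limits.conf
cat >> "$limits" <<'EOF'
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
EOF
# Five limit lines were appended without opening an editor.
wc -l < "$limits"
```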

3. Synchronize node clocks

rpm -ivh http://mirrors.wlnmp.com/CentOS/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com

# Add a crontab entry so the time stays in sync:
*/5 * * * * ntpdate time2.aliyun.com

4. Configure the yum repositories

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

# Install dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/Kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
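The final sed deletes any Alibaba internal-mirror lines from the repo file (the `/pattern/d` form removes whole matching lines). A sketch of that deletion against a throwaway copy; the sample baseurl lines are illustrative:

```shell
repo=$(mktemp)   # stand-in for /etc/yum.repos.d/CentOS-Base.repo
printf 'baseurl=https://mirrors.cloud.aliyuncs.com/centos\nbaseurl=https://mirrors.aliyun.com/centos\n' > "$repo"
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' "$repo"
# The internal-mirror line is gone; the public aliyun mirror line survives.
cat "$repo"
```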

5. Install dependency packages (all nodes)

yum install -y yum-utils && \
  sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
  sudo yum install --nogpgcheck -y epel-release && \
  sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
  sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

6. Configure the Ceph yum repository (all nodes)

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF

The heredoc delimiter is quoted ('EOF') so that $basearch is written literally for yum to expand.

7. Configure /etc/hosts and install ceph-deploy

cat >> /etc/hosts <<EOF
10.5.77.61 ceph-moni-0
10.5.77.62 ceph-moni-1
10.5.77.63 ceph-moni-2
10.5.77.64 ceph-osd-0
10.5.77.65 ceph-osd-1
10.5.77.66 ceph-osd-2
EOF

yum install ceph-deploy

8. Create the ceph user and configure passwordless sudo (all nodes)

useradd -d /home/ceph -m ceph && echo 123456 | passwd --stdin ceph && \
  echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

su - ceph
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-1
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-2
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-0
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-1
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-2

vi ~/.ssh/config
Host ceph-moni-1
    Hostname ceph-moni-1
    User ceph
Host ceph-moni-2
    Hostname ceph-moni-2
    User ceph
Host ceph-osd-0
    Hostname ceph-osd-0
    User ceph
Host ceph-osd-1
    Hostname ceph-osd-1
    User ceph
Host ceph-osd-2
    Hostname ceph-osd-2
    User ceph

chmod 600 ~/.ssh/config
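SSH refuses to use a config file with loose permissions, which is why the chmod 600 at the end matters. A sketch that writes one Host stanza to a temp stand-in for ~/.ssh/config and verifies the mode (GNU stat assumed):

```shell
cfg=$(mktemp)   # stand-in for ~/.ssh/config
cat > "$cfg" <<'EOF'
Host ceph-osd-0
    Hostname ceph-osd-0
    User ceph
EOF
chmod 600 "$cfg"
# Mode must be owner read/write only, or ssh rejects the file.
stat -c '%a' "$cfg"
```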

9. Set up the admin node and install the services

su - ceph
mkdir ceph-cluster
cd ceph-cluster
ceph-deploy new ceph-moni-0 ceph-moni-1 ceph-moni-2

vi ceph.conf
[global]
fsid = 192743b0-7540-474d-9776-1facda354671
mon_initial_members = ceph-moni-0, ceph-moni-1, ceph-moni-2
mon_host = 10.5.77.61,10.5.77.62,10.5.77.63
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
cluster_network = 10.5.77.0/24
public_network = 10.5.77.0/24
osd max object name len = 256
osd max object namespace len = 64
mon_pg_warn_max_per_osd = 1000

[mon]
mon_allow_pool_delete = true

# Install Ceph on every node
ceph-deploy install ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

10. On the admin node, initialize the monitors and gather all keys

ceph-deploy mon create-initial

11. Create data storage directories for the OSDs (all OSD nodes)

Officially it is recommended to give each OSD a dedicated disk or partition as its storage. These virtual machines do not have one available, so we create directories on each VM's local disk to serve as the OSD storage instead.

# On ceph-osd-0:
mkdir /var/local/osd0
chmod 777 -R /var/local/osd0

# On ceph-osd-1:
mkdir /var/local/osd1
chmod 777 -R /var/local/osd1

# On ceph-osd-2:
mkdir /var/local/osd2
chmod 777 -R /var/local/osd2
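The three per-node commands follow one pattern, so a loop form may be easier to script. In this sketch a mktemp directory stands in for /var/local; on the real OSD nodes you would run the matching mkdir/chmod pair locally on /var/local/osd0..2:

```shell
base=$(mktemp -d)   # stand-in for /var/local on an OSD node
for i in 0 1 2; do
  mkdir -p "$base/osd$i"
  chmod -R 777 "$base/osd$i"
done
# All three OSD data directories now exist under $base.
ls "$base"
```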

12. Prepare each OSD (run on the admin node)

ceph-deploy osd prepare ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

13. Activate each OSD (run on the admin node)

ceph-deploy osd activate ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

14. From the admin node, copy the configuration file and the admin key to the admin node and all Ceph nodes, so every node can operate with ceph.client.admin.keyring (all nodes):

ceph-deploy admin ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

15. On all nodes, make the keyring readable:

chmod +r /etc/ceph/ceph.client.admin.keyring

16. Check the cluster status:

ceph health
ceph -s

Notes:

1. If cluster initialization fails, or you modify the configuration file afterwards, push the new config to every node with --overwrite-conf:

ceph-deploy --overwrite-conf config push ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

2. If ceph-deploy mon create-initial fails when re-gathering keys, delete the old key files in the current working directory and run sudo pkill ceph on every node, then retry.
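Concretely, "delete the old key files" means clearing the *.keyring files a previous create-initial run left in the working directory. A sketch against a temp directory standing in for ~/ceph-cluster (the keyring filenames below are typical examples):

```shell
workdir=$(mktemp -d)   # stand-in for ~/ceph-cluster
touch "$workdir/ceph.mon.keyring" "$workdir/ceph.client.admin.keyring"
# Remove stale keyrings before re-running ceph-deploy mon create-initial;
# also run "sudo pkill ceph" on every node.
rm -f "$workdir"/*.keyring
ls "$workdir" | wc -l
```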
