
banzaicloud-stable/kafka-operator + local-path: migrating to new hosts

Host planning

Hostname  IP              Node label
--------  --------------  ------------
Logging1  172.16.13.77    logging=true
Logging2  172.16.36.25    logging=true
Logging3  172.16.115.194  logging=true
Kafka1    172.16.230.153  kafka=true
Kafka2    172.16.53.28    kafka=true
Kafka3    172.16.32.59    kafka=true

Migrating the kafka data

Back up the PVC and PV YAML

List the PVCs:

kubectl get pvc

# Output:
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kafka-0-storage-0-4ngpp   Bound    pvc-38dc6d2b-b612-445a-9a53-77903380a91f   10Gi       RWO            local-path     53d
kafka-1-storage-0-5hqjr   Bound    pvc-083d1a6c-dc2c-4ec1-bb18-d4008ee6f0b8   10Gi       RWO            local-path     53d
kafka-2-storage-0-9t4zd   Bound    pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f   10Gi       RWO            local-path     53d

Back up each PVC and its PV to YAML files:

kubectl get pvc kafka-0-storage-0-4ngpp -o yaml > kafka0.pvc.yaml
kubectl get pvc kafka-1-storage-0-5hqjr -o yaml > kafka1.pvc.yaml
kubectl get pvc kafka-2-storage-0-9t4zd -o yaml > kafka2.pvc.yaml
kubectl get pv pvc-38dc6d2b-b612-445a-9a53-77903380a91f -o yaml > kafka0.pv.yaml
kubectl get pv pvc-083d1a6c-dc2c-4ec1-bb18-d4008ee6f0b8 -o yaml > kafka1.pv.yaml
kubectl get pv pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f -o yaml > kafka2.pv.yaml
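The six commands above can be scripted so that the PV name never has to be copied by hand. A minimal sketch, assuming kubectl access to the cluster; `backup_pvc_pv` is a hypothetical helper, not part of the article's tooling:

```shell
# Dump a PVC and its bound PV to <pvc-name>.pvc.yaml / <pv-name>.pv.yaml
# in the current directory. Hypothetical helper; assumes kubectl access.
backup_pvc_pv() {
  local pvc="$1"
  local pv
  # The bound PV name is recorded in the PVC's spec.volumeName
  pv=$(kubectl get pvc "$pvc" -o jsonpath='{.spec.volumeName}')
  kubectl get pvc "$pvc" -o yaml > "${pvc}.pvc.yaml"
  kubectl get pv "$pv" -o yaml > "${pv}.pv.yaml"
  echo "backed up $pvc -> $pv"
}

# Usage:
# for p in kafka-0-storage-0-4ngpp kafka-1-storage-0-5hqjr kafka-2-storage-0-9t4zd; do
#   backup_pvc_pv "$p"
# done
```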

From the PV YAML we can see each PV's host and directory:

Host      kafka pod  Directory
--------  ---------  ---------
Logging1  kafka-2    /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd
Logging2  kafka-1    /data/local-path-provisioner/pvc-083d1a6c-dc2c-4ec1-bb18-d4008ee6f0b8_kafka_kafka-1-storage-0-5hqjr
Logging3  kafka-0    /data/local-path-provisioner/pvc-38dc6d2b-b612-445a-9a53-77903380a91f_kafka_kafka-0-storage-0-4ngpp

The newly planned hosts and directories for the kafka PVs:

Host    kafka pod  Directory
------  ---------  ---------
Kafka1  kafka-0    /data/local-path-provisioner/pvc-38dc6d2b-b612-445a-9a53-77903380a91f_kafka_kafka-0-storage-0-4ngpp
Kafka2  kafka-1    /data/local-path-provisioner/pvc-083d1a6c-dc2c-4ec1-bb18-d4008ee6f0b8_kafka_kafka-1-storage-0-5hqjr
Kafka3  kafka-2    /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd
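The directories in both tables follow local-path-provisioner's layout: `<pv-name>_<namespace>_<pvc-name>` under the provisioner's data path (configured here as /data/local-path-provisioner). That means a target path can be derived rather than read out of the PV YAML. A minimal sketch; `local_path_dir` is a hypothetical helper:

```shell
# Derive a local-path-provisioner volume directory from the PV name,
# namespace, and PVC name. Hypothetical helper; the base path below
# matches this cluster's provisioner config, not the upstream default.
local_path_dir() {
  local base="/data/local-path-provisioner"
  local pv="$1" ns="$2" pvc="$3"
  echo "${base}/${pv}_${ns}_${pvc}"
}

local_path_dir pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f kafka kafka-2-storage-0-9t4zd
# -> /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd
```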

Migrate pod kafka-2's PV data

Edit the Helm values.yaml and comment out the broker id 2 entry:

  brokers:
    - id: 0
      brokerConfigGroup: "default"
      brokerConfig:
        nodePortExternalIP:
          external: "172.16.152.20" # if "hostnameOverride" is not set for the "external" external listener, then the broker is advertised on this IP
    - id: 1
      brokerConfigGroup: "default"
      brokerConfig:
        nodePortExternalIP:
          external: "172.16.142.252" # if "hostnameOverride" is not set for the "external" external listener, then the broker is advertised on this IP
    #- id: 2
    #  brokerConfigGroup: "default"
    #  brokerConfig:
    #    nodePortExternalIP:
    #      external: "172.16.233.132" # if "hostnameOverride" is not set for the "external" external listener, then the broker is advertised on this IP

Update the KafkaCluster:

kubectl apply -f kafkacluster-with-nodeport-external.yaml

Delete the corresponding broker pod

Because broker 2's configuration was removed, deleting the kafka-2 pod will not trigger the operator to create a replacement:

kubectl delete pod kafka-2-g6wjd

On the Kafka3 host, copy pod kafka-2's PV data over from the old host (Logging1, 172.16.13.77) into a .bak directory:

mkdir -p /data/local-path-provisioner
scp -r 172.16.13.77:/data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd.bak
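Note that `scp -r` does not preserve ownership or permissions, which is part of why the directory permissions are reset later. `rsync -a` would carry them across, assuming rsync is installed on both hosts. A sketch; `copy_pv_dir` is a hypothetical helper:

```shell
# Pull <dir> from <src_host> into <dir>.bak on the local host.
# -a preserves permissions, ownership, and timestamps; the trailing
# slashes copy directory contents rather than nesting the directory.
# Hypothetical helper; assumes ssh access and rsync on both ends.
copy_pv_dir() {
  local src_host="$1" dir="$2"
  rsync -a "${src_host}:${dir}/" "${dir}.bak/"
}

# Usage:
# copy_pv_dir 172.16.13.77 /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd
```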

Delete the kafka-2 PVC

Because the PVC was created by the local-path StorageClass, deleting the PVC deletes its PV automatically:

kubectl delete pvc kafka-2-storage-0-9t4zd

Recreate the kafka-2 PV

cp kafka2.pv.yaml kafka2.new.pv.yaml

Edit kafka2.new.pv.yaml: remove the uid, resourceVersion, creationTimestamp, claimRef, and status fields, and replace the hostname logging1 with kafka3:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kafka3.solarfs.k8s
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem

kubectl apply -f kafka2.new.pv.yaml
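Before recreating the PVC, it can be worth confirming the recreated PV exists and reports Available (it has no claimRef yet, so nothing is bound to it). A small sketch assuming kubectl access; `pv_phase` is a hypothetical helper:

```shell
# Print a PV's phase: Available before a claim binds, Bound afterwards.
# Hypothetical helper; assumes kubectl access to the cluster.
pv_phase() {
  kubectl get pv "$1" -o jsonpath='{.status.phase}'
}

# Usage:
# pv_phase pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f   # expect: Available
```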

Recreate the kafka-2 PVC

cp kafka2.pvc.yaml kafka2.new.pvc.yaml

Edit kafka2.new.pvc.yaml: remove the uid (keep only the uid under ownerReferences), resourceVersion, creationTimestamp, and status fields, and replace the hostname logging1 with kafka3:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    banzaicloud.com/last-applied: UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAb3JpZ2luYWxMkU+P1DAMxb+Lz8luu3/K0usiIYRg0R7ggEbITdwhahoXxwGJUb87SjuD5pY47/3ybJ9gJkWPitCfAFNiRQ2ccr3OXJJ+Qf0JPdxOOE5oIx8zrAaOlEhQ6TPOBD3sj3c2KwseyTYWDEQcKG4gXJaLCAwMwhPJBw893IHZyz+c/FesBhLOlBd0dGXjP4nklUYSSo4y9N8rOHwlyYHTRXgzYPqLwUUu/ibw7e92IMW2fhvZTS8V8o4i6eZRKWTAcVLhGEkulSmkGu9jJT7HkpUE9lRXgUqomrHz/ukN3tuH5nG0D+29s/jk0Xbj2A1t1zaPw1tYD6uBvJDbpuEc5fyJ/dYEvBL6bxKUXpIjOBgQylxka/EEQr8KZd3O5+lCD23zPsBamXvpOWLO511EdhjtUte2CVBLda/rvwAAAP//UEsHCKBYdcU/AQAA7AEAAFBLAQIUABQACAAIAAAAAACgWHXFPwEAAOwBAAAIAAAAAAAAAAAAAAAAAAAAAABvcmlnaW5hbFBLBQYAAAAAAQABADYAAAB1AQAAAAA=
    mountPath: /kafka-logs
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
    volume.kubernetes.io/selected-node: kafka3.solarfs.k8s
  finalizers:
  - kubernetes.io/pvc-protection
  generateName: kafka-2-storage-0-
  labels:
    app: kafka
    brokerId: "2"
    kafka_cr: kafka
  name: kafka-2-storage-0-9t4zd
  namespace: kafka
  ownerReferences:
  - apiVersion: kafka.banzaicloud.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: KafkaCluster
    name: kafka
    uid: f6dd87a3-405f-413c-a8da-6ff6b16105b9
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f

kubectl apply -f kafka2.new.pvc.yaml

Check that the PVC is bound to the corresponding PV:

kubectl get pvc kafka-2-storage-0-9t4zd

# Output:
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kafka-2-storage-0-9t4zd   Bound    pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f   10Gi       RWO            local-path     20m

On the Kafka3 host, move the backup directory to its official path and reset its permissions so the kafka broker can write to it:

mv /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd.bak /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd
chmod 777 /data/local-path-provisioner/pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f_kafka_kafka-2-storage-0-9t4zd

Edit the Helm values.yaml again and uncomment the broker id 2 entry:

  brokers:
    - id: 0
      brokerConfigGroup: "default"
      brokerConfig:
        nodePortExternalIP:
          external: "172.16.152.20" # if "hostnameOverride" is not set for the "external" external listener, then the broker is advertised on this IP
    - id: 1
      brokerConfigGroup: "default"
      brokerConfig:
        nodePortExternalIP:
          external: "172.16.142.252" # if "hostnameOverride" is not set for the "external" external listener, then the broker is advertised on this IP
    - id: 2
      brokerConfigGroup: "default"
      brokerConfig:
        nodePortExternalIP:
          external: "172.16.233.132" # if "hostnameOverride" is not set for the "external" external listener, then the broker is advertised on this IP

Update the KafkaCluster:

kubectl apply -f kafkacluster-with-nodeport-external.yaml

Migrate pod kafka-1 and kafka-0 PV data

Follow the same procedure as for pod kafka-2's PV data.
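The per-broker procedure can be summarized as a dry-run generator that prints the commands for one broker. Everything it prints mirrors the kafka-2 walkthrough above; `print_migration_steps` is a hypothetical helper, the pod name argument is a placeholder, and the values.yaml and PV/PVC YAML edits remain manual steps:

```shell
# Print the migration commands for one broker (dry run only; nothing is
# executed). The "kafka" namespace segment in the directory name matches
# this cluster's layout. Hypothetical helper.
print_migration_steps() {
  local id="$1" pod="$2" pvc="$3" pv="$4" old_host="$5"
  local dir="/data/local-path-provisioner/${pv}_kafka_${pvc}"
  echo "# 1. comment out broker id ${id} in values.yaml, then:"
  echo "kubectl apply -f kafkacluster-with-nodeport-external.yaml"
  echo "kubectl delete pod ${pod}"
  echo "scp -r ${old_host}:${dir} ${dir}.bak"
  echo "kubectl delete pvc ${pvc}"
  echo "# 2. recreate the PV and PVC from the backed-up yaml, then:"
  echo "mv ${dir}.bak ${dir}"
  echo "chmod 777 ${dir}"
  echo "# 3. uncomment broker id ${id} in values.yaml and re-apply"
}

# Usage (arguments taken from the kafka-2 walkthrough):
# print_migration_steps 2 kafka-2-g6wjd kafka-2-storage-0-9t4zd \
#   pvc-21082bbf-22f3-4ce3-8e73-9d4bd4ad2b8f 172.16.13.77
```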
