Dynamic PV storage on OpenShift 3.11 with nfs-client-provisioner + UCloud UFS

Create an NFS-protocol UFS file system on UCloud

Create a UFS mount point; the mount point address is 172.16.16.137:/.
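
Before deploying anything, it may be worth verifying the mount point from one of the hosts. A minimal check, assuming the host has the NFS client utilities installed and /mnt/ufs-test is a temporary directory (the mount options match the ones used in the PV below):

mkdir -p /mnt/ufs-test
mount -t nfs -o nolock,nfsvers=4.0 172.16.16.137:/ /mnt/ufs-test
df -h /mnt/ufs-test   # confirm the UFS file system is visible
umount /mnt/ufs-test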

Prepare the YAML manifests for deployment

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
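
On OpenShift the provisioner's service account may additionally need permission to mount NFS volumes. The upstream nfs-client-provisioner instructions suggest granting it the hostmount-anyuid SCC; adjust the namespace if the provisioner is not deployed in default:

oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner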

nfs-client-provisioner.yaml

Here NFS_SERVER is 172.16.16.137. The UFS NFS endpoint does not support the default mount options used by nfs-client-provisioner, so the PVC and PV that back the provisioner's own mount must be created manually with the required mount options and deployed together with nfs-client-provisioner.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      nodeSelector:
        node-role.kubernetes.io/infra: 'true'
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 172.16.16.137
            - name: NFS_PATH
              value: /
      volumes:
        - name: nfs-client-root
          persistentVolumeClaim:   # changed from an nfs volume to a persistentVolumeClaim here
             claimName: nfs-client-root  
---
# Create the PVC for the provisioner's own mount
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-client-root
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi  # does not have to match the actual UFS capacity; this is not one of the dynamic PVs created later by the StorageClass
---
# Create the static PV backing that PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-client-root
spec:
  capacity:
    storage: 200Gi  # does not have to match the actual UFS capacity; this is not one of the dynamic PVs created later by the StorageClass
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /
    server: 172.16.16.137  # just use the UFS server address here
  mountOptions:
    - nolock
    - nfsvers=4.0
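
After nfs-client-provisioner.yaml is applied (see the Deploy section below), the hand-made PV and the provisioner's PVC bind to each other by matching access mode and requested size. Both should show Bound before dynamic provisioning is exercised:

oc get pv nfs-client-root
oc get pvc nfs-client-root -n default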

storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-nfs-storage
mountOptions:
  - nolock,tcp,noresvport
  - nfsvers=4.0
provisioner: fuseim.pri/ifs  # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

Deploy

oc create -f rbac.yaml
oc create -f nfs-client-provisioner.yaml
oc create -f storageclass.yaml
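
To confirm dynamic provisioning works end to end, a minimal smoke test, assuming the hypothetical claim name test-nfs-pvc:

oc get pods -l app=nfs-client-provisioner   # provisioner pod should be Running
oc get storageclass custom-nfs-storage

cat <<EOF | oc create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: custom-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

oc get pvc test-nfs-pvc   # should become Bound once the provisioner creates a PV
oc delete pvc test-nfs-pvc  # clean up the test claim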

On every host that will mount NFS, allow pods to mount NFS under SELinux

setsebool virt_use_nfs on
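
setsebool without -P only changes the boolean until the next reboot; to make the change persistent, run:

setsebool -P virt_use_nfs on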

References:


https://liujinye.gitbook.io/openshift-docs/storage/openshift3.11-shi-yong-nfsclientprovisioner+ali-yun-nas-ti-gong-dong-tai-nfs
https://docs.ucloud.cn/uk8s/volume/dynamic_ufs