
Table of Contents

    • 1. Introduction
    • 2. Prerequisites
    • 3. Deploy NFS
    • 4. Deploy the NFS subdir external provisioner
      • 4.1 Configure a containerd proxy on the cluster
      • 4.2 Deploy via kubeconfig from a proxy-enabled bastion host
    • Deploy MinIO
      • Add the repository
      • Adjust the configurable values
    • Access
      • NodePort
      • Ingress

1. Introduction

The NFS subdir external provisioner uses an existing, already-configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.

Variable configuration:

Variable                               Value
nfs_provisioner_namespace              nfsstorage
nfs_provisioner_role                   nfs-provisioner-runner
nfs_provisioner_serviceaccount         nfs-provisioner
nfs_provisioner_name                   hpe.com/nfs
nfs_provisioner_storage_class_name     nfs
nfs_provisioner_server_ip              hpe2-nfs.am2.cloudra.local
nfs_provisioner_server_share           /k8s

Note: this repository was migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. As part of the migration, the container image registry and name were changed to registry.k8s.io/sig-storage and nfs-subdir-external-provisioner respectively. To keep backward compatibility with earlier deployment files, the NFS Client Provisioner keeps the name nfs-client-provisioner in the deployment YAML.

2. Prerequisites

  • CentOS Linux release 7.9.2009 (Core)
  • A Kubernetes cluster
$ kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready    control-plane   275d   v1.25.0
node1     Ready    <none>          275d   v1.25.0
node2     Ready    <none>          275d   v1.25.0

3. Deploy NFS

  • Configure the NFS share service on Linux
[root@master1 helm]# exportfs -s
/app/nfs/k8snfs  192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
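
For reference, a minimal sketch of how such an export can be set up on CentOS 7 (the package and service names are assumptions; the article only shows the resulting exportfs -s output):

$ yum install -y nfs-utils
$ mkdir -p /app/nfs/k8snfs
$ echo '/app/nfs/k8snfs 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)' >> /etc/exports
$ systemctl enable --now rpcbind nfs-server
$ exportfs -rav    # re-export, then verify with exportfs -s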

4. Deploy the NFS subdir external provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace

The installation fails: Error: INSTALLATION FAILED: failed to download "nfs-subdir-external-provisioner/nfs-subdir-external-provisioner"

I had forgotten to configure a proxy, so neither the Helm chart nor the image registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 could be pulled.

There are two ways around this, and both require a host that already has a working proxy configured.

4.1 Configure a containerd proxy on the cluster

$ vim /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.10.105:7890"
Environment="HTTPS_PROXY=http://192.168.10.105:7890"
Environment="NO_PROXY=localhost"#重启
$ systemctl restart containerd.service
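
To confirm the drop-in took effect before retrying the install, you can check containerd's environment and try pulling the image directly (assuming crictl is available on the node):

$ systemctl show --property=Environment containerd
$ crictl pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2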

That takes care of the image pulls. Next, the Helm chart download.

Running the install again with --debug shows which chart archive Helm is trying to download:

$ helm --debug install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace
Error: INSTALLATION FAILED: Get "https://objects.githubusercontent.com/github-production-release-asset-2e65be/250135810/33156d2f-3fef-4b00-bf34-1817d30653bc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230716T153116Z&X-Amz-Expires=300&X-Amz-Signature=7219da0622fe22795d526f742064ee0da00a5821c37a5e1fe1bb0eb6b046e3c0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=250135810&response-content-disposition=attachment%3B%20filename%3Dnfs-subdir-external-provisioner-4.0.18.tgz&response-content-type=application%2Foctet-stream": read tcp 192.168.10.28:46032->192.168.10.105:7890: read: connection reset by peer
helm.go:84: [debug] Get "https://objects.githubusercontent.com/github-production-release-asset-2e65be/250135810/33156d2f-3fef-4b00-bf34-1817d30653bc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230716T153116Z&X-Amz-Expires=300&X-Amz-Signature=7219da0622fe22795d526f742064ee0da00a5821c37a5e1fe1bb0eb6b046e3c0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=250135810&response-content-disposition=attachment%3B%20filename%3Dnfs-subdir-external-provisioner-4.0.18.tgz&response-content-type=application%2Foctet-stream": read tcp 192.168.10.28:46032->192.168.10.105:7890: read: connection reset by peer

Download nfs-subdir-external-provisioner-4.0.18.tgz manually (for example, in a browser that can reach GitHub).


Then run the install again, pointing at the local chart package:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner-4.0.18.tgz --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace
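
Alternatively, since Helm honors the standard proxy environment variables, exporting them just for this shell also lets the chart download go through the same proxy (a sketch using the proxy address from above):

$ export HTTP_PROXY=http://192.168.10.105:7890
$ export HTTPS_PROXY=http://192.168.10.105:7890
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs \
    -n nfs-provisioner --create-namespace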

4.2 Deploy via kubeconfig from a proxy-enabled bastion host

On the bastion host, pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 and copy it to the nodes:

$ podman pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Trying to pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2...
Getting image source signatures
Copying blob 528677575c0b done  
Copying blob 60775238382e done  
Copying config 932b0bface done  
Writing manifest to image destination
Storing signatures
932b0bface75b80e713245d7c2ce8c44b7e127c075bd2d27281a16677c8efef3
$ podman save -o nfs-subdir-external-provisioner-v4.0.2.tar  registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Getting image source signatures
Copying blob 1a5ede0c966b done  
Copying blob ad321585b8f5 done  
Copying config 932b0bface done  
Writing manifest to image destination
Storing signatures
$ scp nfs-subdir-external-provisioner-v4.0.2.tar root@192.168.10.62:/root
$ scp nfs-subdir-external-provisioner-v4.0.2.tar root@192.168.10.63:/root
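
The tarball still has to be imported into containerd's k8s.io namespace on each node, roughly as follows (a sketch; it assumes ctr and crictl are available on the nodes):

# on node1 (192.168.10.62) and node2 (192.168.10.63)
$ ctr -n k8s.io images import /root/nfs-subdir-external-provisioner-v4.0.2.tar
$ crictl images | grep nfs-subdir-external-provisioner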

Configure the kubeconfig:

$ mkdir kubeconfig
$ vim kubeconfig/61cluster.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJeU1UQXhOREE1TURFeE9Gb1lEekl4TWpJd09USXdNRGt3TVRFNFdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnR1WTAvblE1OTZXVm5pZFFOdmJpWFNRczJjSVh5WCthZVBMZ0ptUXVpb0pjeGlyQ2dxdStLT0hQTWcwamgra1MKT0RqWS80K3hvZlpjakhydFRDYlg0U1dpUUFqK0diSTJVdmd1ei91U29JVHhhZzNId2JCVnk0REZrUjdpSVUxOQpVVWd0Yy9VYlB6L2I0aGJnT3prYkcyVGo0eDF1b3U4aTErTUVyZnRZRmtyTjJ1bzNTU1RaMVhZejB5d08xbzZvCkxiYktudDB3TUthUmFqKzRKS3lPRkd2dHVMODhjTXRYSXN3KzZ5QndqNWVlYUFnZXVRbUZYcHZ3M1BNRWt3djIKWFN6RTVMRy9SUUhaWTNTeGpWdUNPVXU5SllvNFVWK2RwRUdncUdmRXJDOHNvWHAvcG9PSkhERVhKNFlwdnFDOApJSnErRldaUXE1VEhKYy8rMUFoenhRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQXFRd0R3WURWUjBUCkFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVMb0ZPcDZ1cFBHVldUQ1N3WWlpRkpqZkowOWd3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFFV2orRmcxTFNkSnRSM1FFdFBnKzdHbEJHNnJsZktCS3U2Q041TnJGeEN5Y3UwMwpNNG1JUEg3VXREYUMyRHNtQVNUSWwrYXMzMkUrZzBHWXZDK0VWK0F4dG40RktYaHhVSkJ2Smw3RFFsY2VWQTEyCjk0bDExYUk1VE5IOGN5WDVsQ3draXRRMks4ekxTdUgySFlKeG15cTVVK092UVBaS3J4ekN3NFBCdk5Rem1lSFMKR0VuKzdVUjFFamZQaGZ5UTZIdGh5VmZ2MWNtL283L2tCWkJ4OGJmQWt4T0drUnR4eHo4V1JVVTNOUkwwbUt4YwpIc2xPMm43a09BZnB4U3Jya2w3UFRXd0doSEN1VGtxRUdaOEsycW9wK285ajQyS3U5eldqUUlaMjJLcytLMXk2CjFmd3h0Zit2c2hFaFZURGZSU2ZoTDYyUEh3RnAxQklZTFZoVUhJcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.10.61:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGVENDQWYyZ0F3SUJBZ0lJWVNHaHV4c1poUWt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWdGdzB5TWpFd01UUXdPVEF4TVRoYUdBOHlNVEl5TURreU1EQTVNREV5TmxvdwpOREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhHVEFYQmdOVkJBTVRFR3QxWW1WeWJtVjBaWE10CllXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ25KeHVSd0FERGw5RkMKMGRtSWVmV05hcE9DL2R1OXUwWWIwTXA5Nzh2eW5IcFJXMEI4QWlTTitkOHZCelNwMi9GdmVZeGlPSUpwbDlTVgpwcTdtSXM0T1A3cXN5Znc0TTBXKzM5c2dEditGYlJ1OUVMUlV6cXg1T1RwZVlDZVRnaFplQXRSU0dOamhKS2N0Cmd1SzA5OHJoNkpSWnZhUk1TYkYzK21GZ0RrbHNpL0Z4c2s1Uzl1Rk9Zb3lxTWdTUjdGTjFlOHVRSmxwU09Zem8KQlBWc3NsQ2FUTUNoQ2RrVnFteThiRVVtdzFvRzhhTGwrYXRuaW1QdEFXaWNzMGZjMGV0Zm9MRUpDcno4Wlo4UApBSnRackVHaDcxM0d0czdGblpXNnJ6RFppc3Z0Zml1WGFyanFQd2Z3a0ZBekJhYlRiYUF1NlJIdWloSWZSZWJxCjB2djR0c2tCQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlF1Z1U2bnE2azhaVlpNSkxCaUtJVW1OOG5UMkRBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUF0akk4c2c3KzlORUNRaStwdDZ5bWVtWjZqOG5SQjFnbm5aU2dGN21GYk03NXdQSUQ0NDJYCkhENnIwOEF6bDZGei9sZEtxbkN0cDJ2QnJWQmxVaWl6Ry9naWVWQTVKa3NIVEtveFFpV1llWEwwYmxsVDA2RDcKV240V1BTKzUvcGZMWktmd25jL20xR0owVWtQQUJHQVdSVTFJSi9kK0dJUlFtNTJTck9VYktLUTIzbHhGa2xqMwpYaDYveEg0eVRUeGsxRjVEVUhwcnFSTVdDTXZRYkRkM0pUaEpvdWNpZWRtcCs1YWV0ZStQaGZLSUtCT1JoMC9OCnIyTWpCZjNNaENyMUMwK0dydGMyeC80eC9PejRwbGRGNmQ1a2c3NmZvOCtiTW1ISmVyaVV6MXZKSkU0bFYxUDEKK21wN1E5Y1BGUVBJdkpNakRBVXdBUkRGcHNNTEhYQ0FYZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcHljYmtjQUF3NWZSUXRIWmlIbjFqV3FUZ3YzYnZidEdHOURLZmUvTDhweDZVVnRBCmZBSWtqZm5mTHdjMHFkdnhiM21NWWppQ2FaZlVsYWF1NWlMT0RqKzZyTW44T0RORnZ0L2JJQTcvaFcwYnZSQzAKVk02c2VUazZYbUFuazRJV1hnTFVVaGpZNFNTbkxZTGl0UGZLNGVpVVdiMmtURW14ZC9waFlBNUpiSXZ4Y2JKTwpVdmJoVG1LTXFqSUVrZXhUZFh2TGtDWmFVam1NNkFUMWJMSlFta3pBb1FuWkZhcHN2R3hGSnNOYUJ2R2k1Zm1yClo0cGo3UUZvbkxOSDNOSHJYNkN4Q1FxOC9HV2ZEd0NiV2F4Qm9lOWR4cmJPeFoyVnVxOHcyWXJMN1g0cmwycTQKNmo4SDhKQlFNd1dtMDIyZ0x1a1I3b29TSDBYbTZ0TDcrTGJKQVFJREFRQUJBb0lCQUNoY29DS2NtMUtmaVM4NgpYdTIralZXZGc0c2c0M3U0Q2VEVGxPRytFcUE5dXFlRWdsaXZaOFpFck9pOU03RkVZOU5JSldiZVFGZGhDenNyCnFaWDJsNDBIUkh0T3RyR1haK01FU1BRL3l1R2NEQk9tUWZVc2hxY3E4M1l3ZjczMXJwTDYyZXdOQmVtdm9SS3oKUlN6dm5MVGFKV0JhRTU4OE9EZEJaVnY5ZHl0WFoxSkVqWHZTVUowaWY4bWZvMUlxNUdBa1FLZWZuMlVLcTRROApYYzJTTkd5QTZxUThGNGd0ZWJ1WGI2QVFLdko4K05KRlI1b2ppNG9hWVlkcE5yR0MzUnJ5VHVSc29ZNFIxRko5ClA5WjcwZGtCcnExcDlNOVA2aDFxWVlaT1FISDdNRklaaFBra3dHNllVLzdzRVBZS2h1R29LVVNJR0FsUXU1czIKOGFtM0toVUNnWUVBM1AwaDRRd0xoeXFXVDBnRC9CcFRPR21iWjd2enA2Z3B3NTNhQXppWDE1UVRHcjdwaWF5RApFSlI1c01vUkF5ckthdUVVZjR1MzkyeGc0NlY4eVJNN3ZlVzZzZ2ZDSnUva3N6Nkxqa3FWRXBXUktycWVQRzhKClIwZXQ2TXRIaExxRHBDSytIdTJIYXhtbWdzMzB6Wk9EQ0Vma2dOSGY3cmM0ZnlxY2pETEpOZHNDZ1lFQXdhSisKRmhQSmpTdTVBYlJ1d2dVUDJnd0REM2ZiQ09peW5VSHpweGhUMWhRcUNPNm14dE1VaUE4bFJraTgxb1NLVEN2eAoxd1VpcnMwYzVNVFRiUS9kekpSVEtTSlRGZFhWNUdxUXppclc3SE5meGcvS1RkTVUyNDRvZG9WY2E4M0Q5WjJ6CmxybVNQQkEvaS9SOVVSNTRnODdFbHBuVi9Cc21wSDcrbUlkQzZWTUNnWUI3dGZsUlVyemhYaVhuSEJtZTk5MisKcHVBb29qODBqQjlWTXZqbzlMV01LWWpJWURlOHFxWjBrYW5PSGxDSHhWeXJtSFV4TWJZNi9LRUF6NU9idlBpawp4Z1pOdzZvY3dnNzFpUDMzR2lsNXplRUdXcEphb280L0tSRmlVT29vazRFK1VYUzlPNXVqaVNoOThXNHA1M3BqCkdGd0RBWHFxMkViNGFaSlpxZFNhSVFLQmdRQ3A2TTdneW40cVhQcGJUNXQ4cm5wcForN3JqTTFyZE56K2R0ZTUKZ1BSWHZwdmYrS0hwaDJEVnZ3eURMdUpkRGpKWWdwc1VoVklZdHEwcTVMZHRWT1hZVlRMZnZsblBxREttMndlegprUTNFcjd5VGpGbUZqcm9YcWhkQllPWm5Sa2cwWnl3bURzSU5lR2g2ZzQvUE5ZQ2trRFFhdm1SeGN0V210RFR0ClhJdFBOd0tCZ0RkTnlRRU5pNmptd0tEaDBMeUNJTXBlWVA4TEYyKzVGSHZPWExBSFBuSzFEb2I1djMrMjFZTVoKTmtibGNJNzNBd2RiRnJpRjhqbVBxYXZmdUowNlA4UUJZVGVEbGhiSjZBZW1nWG1kVlRaL2IwTnV1ZktiNFdvVgo0eHA3TUJYa0NYNTUxWVB6djloc2M2RTZkYm5KRHJCajV4M3RsbWdyV2ZmL00weUtTOEF4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Test it:

$ kubectl --kubeconfig kubeconfig/61cluster.yaml get node
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready    control-plane   275d   v1.25.0
node1     Ready    <none>          275d   v1.25.0
node2     Ready    <none>          275d   v1.25.0

Deploy:

$ helm install --kubeconfig kubeconfig/61cluster.yaml  nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.61 --set nfs.path=/app/nfs/k8snfs -n nfs-provisioner --create-namespace
NAME: nfs-subdir-external-provisioner
LAST DEPLOYED: Sun Jul 16 22:51:28 2023
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ kubectl --kubeconfig kubeconfig/61cluster.yaml get all -n nfs-provisioner -owide
NAME                                                    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod/nfs-subdir-external-provisioner-688456c5d9-f5xkt   1/1     Running   0          39m   100.108.11.220   node2   <none>           <none>

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                        IMAGES                                                                SELECTOR
deployment.apps/nfs-subdir-external-provisioner   1/1     1            1           39m   nfs-subdir-external-provisioner   registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   app=nfs-subdir-external-provisioner,release=nfs-subdir-external-provisioner

NAME                                                         DESIRED   CURRENT   READY   AGE   CONTAINERS                        IMAGES                                                                SELECTOR
replicaset.apps/nfs-subdir-external-provisioner-688456c5d9   1         1         1       39m   nfs-subdir-external-provisioner   registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   app=nfs-subdir-external-provisioner,pod-template-hash=688456c5d9,release=nfs-subdir-external-provisioner

$ kubectl --kubeconfig kubeconfig/61cluster.yaml get sc
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   37m
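
Before moving on, it is worth verifying dynamic provisioning with a throwaway PVC (a sketch; the claim name is arbitrary and it lands in the default namespace):

$ cat <<EOF | kubectl --kubeconfig kubeconfig/61cluster.yaml apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
$ kubectl --kubeconfig kubeconfig/61cluster.yaml get pvc test-claim

If the claim binds, a subdirectory named like default-test-claim-pvc-<uuid> appears under /app/nfs/k8snfs on the NFS server, matching the ${namespace}-${pvcName}-${pvName} pattern from the introduction.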

For Helm charts whose images cannot be pulled like this, we can also build our own customized chart, which is convenient for day-to-day testing.

Deploy MinIO

Add the repository

kubectl create ns minio
helm repo add minio https://helm.min.io/
helm repo update
helm search repo minio/minio

Adjust the configurable values

helm show values minio/minio > values.yaml

Changes:

accessKey: 'minio'
secretKey: 'minio123'
persistence:
  enabled: true
  storageClass: 'nfs-client'
  VolumeName: ''
  accessMode: ReadWriteOnce
  size: 5Gi
service:
  type: ClusterIP
  clusterIP: ~
  port: 9000
  # nodePort: 32000
resources:
  requests:
    memory: 128M

If you want to see the final rendered manifests, use the helm template command.

helm template -f values.yaml --namespace minio minio/minio | tee -a  minio.yaml

Output:

---
# Source: minio/templates/post-install-prometheus-metrics-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-minio-update-prometheus-secret
  labels:
    app: minio-update-prometheus-secret
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
---
# Source: minio/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:name: "release-name-minio"namespace: "minio"labels:app: miniochart: minio-8.0.10release: "release-name"
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
type: Opaque
data:
  accesskey: "bWluaW8="
  secretkey: "bWluaW8xMjM="
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
data:
  initialize: |-
    #!/bin/sh
    set -e ; # Have script exit in the event of a failed command.
    MC_CONFIG_DIR="/etc/minio/mc/"
    MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"

    # connectToMinio
    # Use a check-sleep-check loop to wait for Minio service to be available
    connectToMinio() {
      SCHEME=$1
      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
      set -e ; # fail if we can't read the keys.
      ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
      set +e ; # The connections to minio are allowed to fail.
      echo "Connecting to Minio server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
      MC_COMMAND="${MC} config host add myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
      $MC_COMMAND ;
      STATUS=$? ;
      until [ $STATUS = 0 ]
      do
        ATTEMPTS=`expr $ATTEMPTS + 1` ;
        echo \"Failed attempts: $ATTEMPTS\" ;
        if [ $ATTEMPTS -gt $LIMIT ]; then
          exit 1 ;
        fi ;
        sleep 2 ; # 1 second intervals between attempts
        $MC_COMMAND ;
        STATUS=$? ;
      done ;
      set -e ; # reset `e` as active
      return 0
    }

    # checkBucketExists ($bucket)
    # Check if the bucket exists, by using the exit code of `mc ls`
    checkBucketExists() {
      BUCKET=$1
      CMD=$(${MC} ls myminio/$BUCKET > /dev/null 2>&1)
      return $?
    }

    # createBucket ($bucket, $policy, $purge)
    # Ensure bucket exists, purging if asked to
    createBucket() {
      BUCKET=$1
      POLICY=$2
      PURGE=$3
      VERSIONING=$4

      # Purge the bucket, if set & exists
      # Since PURGE is user input, check explicitly for `true`
      if [ $PURGE = true ]; then
        if checkBucketExists $BUCKET ; then
          echo "Purging bucket '$BUCKET'."
          set +e ; # don't exit if this fails
          ${MC} rm -r --force myminio/$BUCKET
          set -e ; # reset `e` as active
        else
          echo "Bucket '$BUCKET' does not exist, skipping purge."
        fi
      fi

      # Create the bucket if it does not exist
      if ! checkBucketExists $BUCKET ; then
        echo "Creating bucket '$BUCKET'"
        ${MC} mb myminio/$BUCKET
      else
        echo "Bucket '$BUCKET' already exists."
      fi

      # set versioning for bucket
      if [ ! -z $VERSIONING ] ; then
        if [ $VERSIONING = true ] ; then
          echo "Enabling versioning for '$BUCKET'"
          ${MC} version enable myminio/$BUCKET
        elif [ $VERSIONING = false ] ; then
          echo "Suspending versioning for '$BUCKET'"
          ${MC} version suspend myminio/$BUCKET
        fi
      else
        echo "Bucket '$BUCKET' versioning unchanged."
      fi

      # At this point, the bucket should exist, skip checking for existence
      # Set policy on the bucket
      echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
      ${MC} policy set $POLICY myminio/$BUCKET
    }

    # Try connecting to Minio instance
    scheme=http
    connectToMinio $scheme
---
# Source: minio/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1Gi"
  storageClassName: "nfs-client"
---
# Source: minio/templates/post-install-prometheus-metrics-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name-minio-update-prometheus-secret
  labels:
    app: minio-update-prometheus-secret
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - create
      - update
      - patch
    resourceNames:
      - release-name-minio-prometheus
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - create
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
    verbs:
      - get
    resourceNames:
      - release-name-minio
---
# Source: minio/templates/post-install-prometheus-metrics-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name-minio-update-prometheus-secret
  labels:
    app: minio-update-prometheus-secret
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name-minio-update-prometheus-secret
subjects:
  - kind: ServiceAccount
    name: release-name-minio-update-prometheus-secret
    namespace: "minio"
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
spec:
  type: NodePort
  ports:
    - name: http
      port: 9000
      protocol: TCP
      nodePort: 32000
  selector:
    app: minio
    release: release-name
---
# Source: minio/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-minio
  labels:
    app: minio
    chart: minio-8.0.10
    release: release-name
    heritage: Helm
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0
  selector:
    matchLabels:
      app: minio
      release: release-name
  template:
    metadata:
      name: release-name-minio
      labels:
        app: minio
        release: release-name
      annotations:
        checksum/secrets: f48e042461f5cd95fe36906895a8518c7f1592bd568c0caa8ffeeb803c36d4a4
        checksum/config: 9ec705e3000d8e1f256b822bee35dc238f149dbb09229548a99c6409154a12b8
    spec:
      serviceAccountName: "release-name-minio"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: minio
          image: "minio/minio:RELEASE.2021-02-14T04-01-33Z"
          imagePullPolicy: IfNotPresent
          command:
            - "/bin/sh"
            - "-ce"
            - "/usr/bin/docker-entrypoint.sh minio -S /etc/minio/certs/ server /export"
          volumeMounts:
            - name: export
              mountPath: /export
          ports:
            - name: http
              containerPort: 9000
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: release-name-minio
                  key: accesskey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: release-name-minio
                  key: secretkey
          resources:
            requests:
              memory: 1Gi
      volumes:
        - name: export
          persistentVolumeClaim:
            claimName: release-name-minio
        - name: minio-user
          secret:
            secretName: release-name-minio

Install MinIO

helm install -f values.yaml minio  minio/minio -n minio

Output:

NAME: minio
LAST DEPLOYED: Wed Jul 19 10:56:23 2023
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio.minio.svc.cluster.local

To access Minio from localhost, run the below commands:

  1. export POD_NAME=$(kubectl get pods --namespace minio -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
  2. kubectl port-forward $POD_NAME 9000 --namespace minio

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access Minio server on http://localhost:9000. Follow the below steps to connect to Minio server with mc client:

  1. Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide
  2. Get the ACCESS_KEY=$(kubectl get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode) and the SECRET_KEY=$(kubectl get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode)
  3. mc alias set minio-local http://localhost:9000 "$ACCESS_KEY" "$SECRET_KEY" --api s3v4
  4. mc ls minio-local

Alternately, you can use your browser or the Minio SDK to access the server - https://docs.minio.io/categories/17

Check the MinIO status:

$ kubectl get pod -n minio
NAME                     READY   STATUS    RESTARTS   AGE
minio-66f8b9444b-lml5f   1/1     Running   0          62s
[root@master1 helm]# kubectl get all -n minio
NAME                         READY   STATUS    RESTARTS   AGE
pod/minio-66f8b9444b-lml5f   1/1     Running   0          73s

NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/minio   NodePort   10.96.0.232   <none>        9000:32000/TCP   73s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio   1/1     1            1           73s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-66f8b9444b   1         1         1       73s

$ kubectl get pv,pvc,sc -n minio
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/pvc-667a9c76-7d14-484c-aeeb-6e07cffd2c10   1Gi        RWO            Delete           Bound    minio/minio   nfs-client              2m20s

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/minio     Bound    pvc-667a9c76-7d14-484c-aeeb-6e07cffd2c10   1Gi        RWO            nfs-client     2m20s

NAME                                     PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   2d12h
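
On the NFS server, the MinIO volume should now be visible as a subdirectory following the ${namespace}-${pvcName}-${pvName} pattern from the introduction (expected result, not captured from the original environment):

$ ls /app/nfs/k8snfs/
# expect a directory like: minio-minio-pvc-667a9c76-7d14-484c-aeeb-6e07cffd2c10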

Access

NodePort

Web UI access: http://192.168.10.61:32000
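
You can also log in from the command line with the mc client, using the access and secret keys set in values.yaml (a sketch; it assumes mc is installed on your workstation):

$ mc alias set minio-nodeport http://192.168.10.61:32000 minio minio123 --api s3v4
$ mc mb minio-nodeport/test-bucket
$ mc ls minio-nodeport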

Ingress

Modify the service section of values.yaml:

service:
  type: ClusterIP
  clusterIP: ~
  port: 9000

Upgrade:

$ helm upgrade -f values.yaml minio  minio/minio -n minio
Release "minio" has been upgraded. Happy Helming!
NAME: minio
LAST DEPLOYED: Wed Jul 19 11:49:22 2023
NAMESPACE: minio
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio.minio.svc.cluster.local

$ kubectl get all -n minio
NAME                         READY   STATUS    RESTARTS   AGE
pod/minio-66f8b9444b-lml5f   1/1     Running   0          53m

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/minio   ClusterIP   10.96.0.232   <none>        9000/TCP   53m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio   1/1     1            1           53m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-66f8b9444b   1         1         1       53m

The Service type has now changed from NodePort to ClusterIP.

Next, we need to configure a certificate and a domain name, which requires cert-manager to be deployed in the cluster.
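
The letsencrypt-prod ClusterIssuer referenced below was created beforehand; a minimal sketch of such an issuer is shown here (the e-mail address is a placeholder, and the HTTP-01 solver assumes the nginx ingress class):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx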

Check the MinIO TLS certificate secret:

$ kubectl get secret -n minio
NAME                          TYPE                 DATA   AGE
minio                         Opaque               2      58m
minio-letsencrypt-tls-fn4vt   Opaque               1      2m47s

Check the name of the ClusterIssuer created earlier:

$ kubectl get ClusterIssuer
NAME               READY   AGE
letsencrypt-prod   True    33m

$ vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  namespace: minio
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod # issue the HTTPS certificate automatically
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - 'minio.demo.com'
      secretName: minio-letsencrypt-tls
  rules:
    - host: minio.demo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 9000

Create it:

kubectl apply -f ingress.yaml

Domain name resolution (a quick check follows the list below):

  • Linux: add 192.168.10.61 minio.demo.com to /etc/hosts
  • Windows: add 192.168.10.61 minio.demo.com to C:\Windows\System32\drivers\etc\hosts
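
With the hosts entries in place, a quick check against MinIO's health endpoint (a sketch; -k skips certificate verification in case the certificate has not been issued yet):

$ curl -vk https://minio.demo.com/minio/health/live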

References:

  • Deploying the NFS provisioner for Kubernetes
  • https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
  • Deploying MinIO for object storage
