1. Overview

Apache ZooKeeper is a centralized service for maintaining configuration information, naming, and providing distributed synchronization and group services. The ZooKeeper project develops and maintains an open-source server that enables highly reliable distributed coordination. You can also think of it as a distributed database with a special, tree-shaped structure. Official documentation:
2. ZooKeeper on K8s Deployment
1) Add the Helm repo

Chart package address:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/zookeeper
tar -xf zookeeper-10.2.1.tgz
2) Edit the configuration

- Edit zookeeper/values.yaml:

image:
  registry: myharbor.com
  repository: bigdata/zookeeper
  tag: 3.8.0-debian-11-r36
...
replicaCount: 3
...
service:
  type: NodePort
  nodePorts:
    # The default NodePort range is 30000-32767
    client: "32181"
    tls: "32182"
...
persistence:
  storageClass: "zookeeper-local-storage"
  size: "10Gi"
  # The directories must be created on the hosts in advance
  local:
    - name: zookeeper-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
    - name: zookeeper-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
    - name: zookeeper-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
...
# Enable Prometheus to access ZooKeeper metrics endpoint
metrics:
  enabled: true
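The client and tls NodePorts chosen above have to fall inside the cluster's NodePort range noted in the comment. A minimal Python sketch of that sanity check, assuming the Kubernetes default service-node-port-range of 30000-32767:

```python
# Check that a chosen NodePort falls inside the Kubernetes default
# service-node-port-range of 30000-32767.
def is_valid_node_port(port: int, low: int = 30000, high: int = 32767) -> bool:
    return low <= port <= high

# Port values taken from the values.yaml snippet above.
for name, port in {"client": 32181, "tls": 32182}.items():
    print(name, port, is_valid_node_port(port))
```

If a port outside the range slips into values.yaml, the Service simply fails to create, so catching it early saves a failed `helm install`.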
- Add zookeeper/templates/pv.yaml:

{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
    - ReadWriteOnce
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
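To see what this template expands to, here is a small Python sketch that performs the same loop as the Helm `range` over `persistence.local` and emits one PersistentVolume per entry. It only mimics the template for illustration; it is not part of the chart:

```python
# Mimic the Helm {{- range .Values.persistence.local }} loop in plain Python:
# emit one PersistentVolume manifest per entry in persistence.local.
values = {
    "persistence": {
        "storageClass": "zookeeper-local-storage",
        "size": "10Gi",
        "local": [
            {"name": "zookeeper-0", "host": "local-168-182-110",
             "path": "/opt/bigdata/servers/zookeeper/data/data1"},
            {"name": "zookeeper-1", "host": "local-168-182-111",
             "path": "/opt/bigdata/servers/zookeeper/data/data1"},
            {"name": "zookeeper-2", "host": "local-168-182-112",
             "path": "/opt/bigdata/servers/zookeeper/data/data1"},
        ],
    }
}

def render_pv(entry: dict, persistence: dict) -> str:
    return f"""---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {entry['name']}
spec:
  storageClassName: {persistence['storageClass']}
  capacity:
    storage: {persistence['size']}
  accessModes:
    - ReadWriteOnce
  local:
    path: {entry['path']}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {entry['host']}
"""

manifests = "".join(render_pv(e, values["persistence"])
                    for e in values["persistence"]["local"])
print(manifests)
```

The nodeAffinity block is what pins each local PV to the host where its directory was pre-created, which is why every entry carries both a host and a path.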
- Add zookeeper/templates/storage-class.yaml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
3) Install

# Prepare the image first
docker pull docker.io/bitnami/zookeeper:3.8.0-debian-11-r36
docker tag docker.io/bitnami/zookeeper:3.8.0-debian-11-r36 myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
docker push myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
# Install
helm install zookeeper ./zookeeper -n zookeeper --create-namespace

Output:

NAME: zookeeper
LAST DEPLOYED: Sun Sep 18 18:03 2022
NAMESPACE: zookeeper
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 10.2.1
APP VERSION: 3.8.0

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.zookeeper.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace zookeeper -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    export NODE_IP=$(kubectl get nodes --namespace zookeeper -o jsonpath="{.items[0].status.addresses[0].address}")
    export NODE_PORT=$(kubectl get --namespace zookeeper -o jsonpath="{.spec.ports[0].nodePort}" services zookeeper)
    zkCli.sh $NODE_IP:$NODE_PORT
Check pod status:

kubectl get pods,svc -n zookeeper -o wide
4) Verify

# Check the role of each zookeeper pod
kubectl exec -it zookeeper-0 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-1 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-2 -n zookeeper -- zkServer.sh status
# Log in to a zookeeper pod
kubectl exec -it zookeeper-0 -n zookeeper -- bash
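With three replicas, zkServer.sh status should report one leader and two followers. The reason replicaCount is 3 (and, in general, odd) is ZooKeeper's majority quorum: the ensemble stays available only while a strict majority of servers is up. A quick sketch of that arithmetic:

```python
# ZooKeeper serves requests only while a strict majority of the ensemble
# is alive, so n nodes tolerate floor((n - 1) / 2) failures.
def quorum_size(ensemble: int) -> int:
    return ensemble // 2 + 1

def tolerated_failures(ensemble: int) -> int:
    return (ensemble - 1) // 2

for n in (1, 3, 5):
    print(f"{n} nodes: quorum {quorum_size(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

This is also why 4 nodes buy nothing over 3: both tolerate exactly one failure, but 4 nodes cost an extra replica.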
5) Prometheus Monitoring

Prometheus: https://prometheus.k8s.local/targets?search=zookeeper

You can check the scraped metrics with:

kubectl get --raw http://10.244.0.52:9141/metrics
kubectl get --raw http://10.244.1.101:9141/metrics
kubectl get --raw http://10.244.2.137:9141/metrics

Grafana: https://grafana.k8s.local/
Account: admin; get the password with:

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
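The jsonpath output is base64-encoded, which is why the command above pipes through `base64 --decode`. The same decoding step in Python, using a made-up secret value for illustration (not a real Grafana password):

```python
import base64

# Kubernetes stores Secret data base64-encoded; "c2VjcmV0LXBhc3M=" is a
# made-up example value, not a real Grafana password.
encoded = "c2VjcmV0LXBhc3M="
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # -> secret-pass
```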
Import the Grafana dashboard for cluster resource monitoring: 10465
Official dashboard download address:
6) Uninstall

helm uninstall zookeeper -n zookeeper
kubectl delete pod -n zookeeper `kubectl get pod -n zookeeper | awk 'NR>1{print $1}'` --force
kubectl patch ns zookeeper -p '{"metadata":{"finalizers":null}}'
kubectl delete ns zookeeper --force
3. Kafka on K8s Deployment
1) Add the Helm repo

Chart package address:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/kafka
tar -xf kafka-18.4.2.tgz
2) Edit the configuration

- Edit kafka/values.yaml:

image:
  registry: myharbor.com
  repository: bigdata/kafka
  tag: 3.2.1-debian-11-r16
...
replicaCount: 3
...
service:
  type: NodePort
  nodePorts:
    client: "30092"
    external: "30094"
...
externalAccess:
  enabled: true
  service:
    type: NodePort
    nodePorts:
      - 30001
      - 30002
      - 30003
    useHostIPs: true
...
persistence:
  storageClass: "kafka-local-storage"
  size: "10Gi"
  # The directories must be created on the hosts in advance
  local:
    - name: kafka-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/kafka/data/data1"
    - name: kafka-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/kafka/data/data1"
    - name: kafka-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/kafka/data/data1"
...
metrics:
  kafka:
    enabled: true
    image:
      registry: myharbor.com
      repository: bigdata/kafka-exporter
      tag: 1.6.0-debian-11-r8
  jmx:
    enabled: true
    image:
      registry: myharbor.com
      repository: bigdata/jmx-exporter
      tag: 0.17.1-debian-11-r1
    annotations:
      prometheus.io/path: "/metrics"
...
zookeeper:
  enabled: false
...
externalZookeeper:
  servers:
    - zookeeper-0.zookeeper-headless.zookeeper
    - zookeeper-1.zookeeper-headless.zookeeper
    - zookeeper-2.zookeeper-headless.zookeeper
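With the chart's embedded zookeeper disabled, Kafka points at the ensemble deployed earlier through these headless-service names. A small sketch of how such a list becomes the broker's zookeeper.connect string, assuming the default ZooKeeper client port 2181 (the chart appends the port itself; this only illustrates the shape of the result):

```python
# Join the externalZookeeper.servers list into a Kafka zookeeper.connect
# string, appending the default ZooKeeper client port 2181.
servers = [
    "zookeeper-0.zookeeper-headless.zookeeper",
    "zookeeper-1.zookeeper-headless.zookeeper",
    "zookeeper-2.zookeeper-headless.zookeeper",
]
zookeeper_connect = ",".join(f"{host}:2181" for host in servers)
print(zookeeper_connect)
```

Listing all three members (rather than just the client Service) lets a broker keep its ZooKeeper session alive even when one ensemble member is down.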
- Add kafka/templates/pv.yaml:

{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
    - ReadWriteOnce
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
- Add kafka/templates/storage-class.yaml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
3) Install

# Prepare the images first
docker pull docker.io/bitnami/kafka:3.2.1-debian-11-r16
docker tag docker.io/bitnami/kafka:3.2.1-debian-11-r16 myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
docker push myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
# kafka-exporter
docker pull docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8
docker tag docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8 myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
docker push myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
# JMX exporter
docker pull docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1
docker tag docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1 myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
docker push myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
# Install
helm install kafka ./kafka -n kafka --create-namespace

NAME: kafka
LAST DEPLOYED: Sun Sep 18 20:02 2022
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 18.4.2
APP VERSION: 3.2.1

---------------------------------------------------------------------------------------------
 WARNING

    By specifying "serviceType=LoadBalancer" and not configuring the authentication
    you have most likely exposed the Kafka service externally without any
    authentication mechanism.

    For security reasons, we strongly suggest that you switch to "ClusterIP" or
    "NodePort". As alternative, you can also configure the Kafka authentication.

---------------------------------------------------------------------------------------------

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092
    kafka-1.kafka-headless.kafka.svc.cluster.local:9092
    kafka-2.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.2.1-debian-11-r16 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

To connect to your Kafka server from outside the cluster, follow the instructions below:

    Kafka brokers domain: You can get the external node IP from the Kafka configuration file with the following commands (Check the EXTERNAL listener)

        1. Obtain the pod name:

            kubectl get pods --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka"

        2. Obtain pod configuration:

            kubectl exec -it KAFKA_POD -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners

    Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:

        echo "$(kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
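The advertised.listeners line from step 2 packs several listener://host:port entries into one value; the EXTERNAL one is what outside clients need. A Python sketch of pulling it out, using an invented sample line (the hosts and ports below are illustrative, not real cluster output):

```python
# Extract the EXTERNAL listener's host:port from an advertised.listeners
# line. The sample value below is invented for illustration.
line = ("advertised.listeners=INTERNAL://kafka-0.kafka-headless.kafka.svc.cluster.local:9093,"
        "CLIENT://kafka-0.kafka-headless.kafka.svc.cluster.local:9092,"
        "EXTERNAL://192.168.182.110:30001")

def external_endpoint(config_line: str) -> str:
    value = config_line.split("=", 1)[1]
    for listener in value.split(","):
        name, _, endpoint = listener.partition("://")
        if name == "EXTERNAL":
            return endpoint
    raise ValueError("no EXTERNAL listener found")

print(external_endpoint(line))  # -> 192.168.182.110:30001
```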
Check pod status:

kubectl get pods,svc -n kafka -o wide
4) Verify

# Log in to a kafka pod
kubectl exec -it kafka-0 -n kafka -- bash
1. Create a topic (one replica, one partition)

--create: create a topic
--topic: name of the new topic
--bootstrap-server: Kafka connection address
--config: topic-level parameter overrides; for the parameter list see the Topic-level configuration documentation
--partitions: number of partitions for the topic, default 1
--replication-factor: replication factor for each partition, default 1

kafka-topics.sh --create --topic test001 --bootstrap-server kafka.kafka:9092 --partitions 1 --replication-factor 1
# Describe the topic
kafka-topics.sh --describe --bootstrap-server kafka.kafka:9092 --topic test001
2. List topics

kafka-topics.sh --list --bootstrap-server kafka.kafka:9092
3. Producer/consumer test

[Producer]
kafka-console-producer.sh --broker-list kafka.kafka:9092 --topic test001
{"id":"1","name":"n1","age":"20"}
{"id":"2","name":"n2","age":"21"}
{"id":"3","name":"n3","age":"22"}

[Consumer]
# Consume from the beginning
kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --from-beginning
# Consume from a given offset in one partition; only one partition is specified here, so repeat the command (or loop) for the other partitions as needed
kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --partition 0 --offset 100 --group test001
4. Check consumer lag

kafka-consumer-groups.sh --bootstrap-server kafka.kafka:9092 --describe --group test001
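The --describe output prints CURRENT-OFFSET, LOG-END-OFFSET and LAG per partition, and the LAG column is simply their difference. A sketch of that arithmetic with invented sample offsets (not real output):

```python
# Consumer lag per partition = log-end-offset - committed (current) offset.
# The offsets below are invented sample numbers for illustration.
partitions = [
    {"partition": 0, "current_offset": 120, "log_end_offset": 150},
    {"partition": 1, "current_offset": 90,  "log_end_offset": 90},
]

def lag(p: dict) -> int:
    return p["log_end_offset"] - p["current_offset"]

total_lag = sum(lag(p) for p in partitions)
print([lag(p) for p in partitions], "total:", total_lag)  # -> [30, 0] total: 30
```

A steadily growing total lag means consumers in the group are falling behind producers and the topic is accumulating a backlog.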
5. Delete a topic

kafka-topics.sh --delete --topic test001 --bootstrap-server kafka.kafka:9092
5) Prometheus Monitoring

Prometheus:

You can check the scraped metrics with:

kubectl get --raw http://10.244.2.165:9308/metrics

Grafana: https://grafana.k8s.local/
Account: admin; get the password with:

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo

Import the Grafana dashboard for cluster resource monitoring: 11962
Official dashboard download address:
6) Uninstall

helm uninstall kafka -n kafka
kubectl delete pod -n kafka `kubectl get pod -n kafka | awk 'NR>1{print $1}'` --force
kubectl patch ns kafka -p '{"metadata":{"finalizers":null}}'
kubectl delete ns kafka --force
That wraps up the ZooKeeper + Kafka on K8s environment deployment. If you have any questions, feel free to leave me a message!
Original title: 14 張圖詳解 Zookeeper + Kafka on K8S 環境部署
Source: WeChat official account magedu-Linux (馬哥Linux運維). Please credit the original when reposting.