Host Environment Overview

Kubernetes hosts:
kube-master: 192.168.1.20
node1: 192.168.1.21
node2: 192.168.1.22
NFS server: 192.168.1.22
NFS clients: 192.168.1.20 and 192.168.1.21 both mount the directory shared by 192.168.1.22 (detailed steps below)

Deploying the NFS Shared Storage

Install the NFS packages on all three hosts

yum install -y nfs-utils

Start the NFS services on all three hosts and enable them at boot

systemctl enable nfs-server rpcbind --now

192.168.1.22 acts as the NFS server

Create the shared directory

mkdir /data && chmod 777 /data

Add the export configuration

vi /etc/exports   # add the following line
/data *(fsid=0,rw,sync,no_root_squash)

Restart the NFS service on the server (192.168.1.22)

systemctl restart nfs-server
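
Optionally confirm the export is active with standard exportfs usage:

exportfs -v   # lists active exports with their options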

Verify that the server's shared directory is discoverable

showmount -e 192.168.1.22
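
With the export above, the output should look roughly like this (a sample for the single /data export):

Export list for 192.168.1.22:
/data *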

Mount the NFS share on the clients (192.168.1.20 and 192.168.1.21)

Create the mount point on each client

mkdir /data   

Configure the clients to mount automatically at boot

vi /etc/fstab  # add the following line
# note: timeo is in tenths of a second, so timeo=1 is very aggressive; larger values (e.g. timeo=600) are more typical
192.168.1.22:/data /data nfs soft,timeo=1 0 0

Run the mount on the clients

mount -a

# verify the mount succeeded

df -h   # Figure 1-1 shows a successful mount

Figure 1-1

Preparing the ELK Cluster Images

Pull the images on the master node (192.168.1.20)

docker pull elasticsearch:7.14.0
docker pull logstash:7.14.0
docker pull kibana:7.14.0

Export the ELK images on the master node (192.168.1.20) and copy them to the cluster node (192.168.1.22)

Export the images on the master node

docker save -o elasticsearch.tar elasticsearch:7.14.0
docker save -o logstash.tar logstash:7.14.0
docker save -o kibana.tar kibana:7.14.0
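
Assuming SSH access between the nodes, the archives can be copied over with scp (the target path /root/ is an arbitrary choice):

scp elasticsearch.tar logstash.tar kibana.tar root@192.168.1.22:/root/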

Import the images on the node

docker load -i elasticsearch.tar
docker load -i logstash.tar
docker load -i kibana.tar
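
Confirm the images are present on the node:

docker images | grep -E 'elasticsearch|logstash|kibana'   # all three should be listed with tag 7.14.0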

Preparing the YAML Manifests

Create the namespace on the master node (192.168.1.20)

kubectl create namespace kube-elasticsearch

Create the PV and PVC for the elasticsearch cluster

elasticsearch-pvc.yaml is as follows

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-pv
  namespace: kube-elasticsearch       # note: PVs are cluster-scoped, so this field is ignored
  labels: 
    pv: elk-nfs
spec:
  capacity:                             # storage capacity of the PV
    storage: 1Gi
  accessModes:                  # access modes of the PV
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
  nfs:
    path: /data/master
    server: 192.168.1.22
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elk-pvc
  namespace: kube-elasticsearch
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
  selector:
    matchLabels:
      pv: elk-nfs
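
Note that the PV points at /data/master on the NFS server, while only /data was created earlier; assuming the export layout from the NFS section, create the subdirectory on 192.168.1.22 before applying the manifest:

mkdir -p /data/master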

Apply elasticsearch-pvc.yaml to create the PV and PVC

kubectl apply -f elasticsearch-pvc.yaml

Check that the PV and PVC were created; a STATUS of Bound is normal. Figures 1-2 and 1-3 show the created PV and PVC.

kubectl get pv -n kube-elasticsearch
kubectl get pvc -n kube-elasticsearch

Figure 1-2
Figure 1-3

Create the elasticsearch certificate on any one node (192.168.1.20, 192.168.1.21, or 192.168.1.22)

Run the following on the chosen node

docker run --name elastic-certs -i -w /ssl elasticsearch:7.14.0 /bin/sh -c "elasticsearch-certutil ca --out /ssl/es-ca.p12 --pass '' && elasticsearch-certutil cert --name security-master --dns security-master --ca /ssl/es-ca.p12 --pass '' --ca-pass '' --out /ssl/elastic-certificates.p12"

Note: the certificates are not written to a host directory of your choosing; they are created under /ssl inside the container. Locate the certificate file on the host with find, then copy it to /data on the host.

Locate the certificate file

find / -name elastic-certificates.p12   # Figure 1-4 shows the located certificate file

Figure 1-4

Copy the certificate file to /data

cp /var/lib/docker/overlay2/8d9cb62fbf8d23145c3786853ad3da6545e2fc5a07bfed4e2dda86d2deda7790/diff/ssl/elastic-certificates.p12 /data
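
Alternatively, since the elastic-certs container from the step above still exists, docker cp avoids digging through the overlay2 layers:

docker cp elastic-certs:/ssl/elastic-certificates.p12 /data/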

Create the elasticsearch cluster secret

Run on the master node (192.168.1.20)

kubectl -n kube-elasticsearch create secret generic elastic-certificates --from-file=./elastic-certificates.p12

Check the created secret

kubectl get secrets -n kube-elasticsearch

Figure 1-5

To delete the secret, run the following (skip this if the secret was created correctly)

kubectl -n kube-elasticsearch delete secret elastic-certificates

Create the elasticsearch StatefulSet and Service

elasticsearch.yaml is as follows

---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-master
  namespace: kube-elasticsearch
  labels:
    app: elasticsearch-master
spec:
  type: ClusterIP
  selector:
    app: elasticsearch-master
  ports:
    - port: 9200
      name: db
      targetPort: http
    - port: 9300
      name: inter
      targetPort: 9300

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: kube-elasticsearch
  name: elasticsearch-master
  labels:
    app: elasticsearch-master
    role: master
spec:
  serviceName: elasticsearch-master
  replicas: 1            # 1 replica here due to limited lab resources; 3 or more are recommended in production
  selector:
    matchLabels:
      app: elasticsearch-master
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch-master
        role: master
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.14.0
        command: ["bash", "-c", "chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        env:
        - name: discovery.seed_hosts
          value: "elasticsearch-master-0.elasticsearch-master"       # with multiple nodes, separate entries with commas, e.g.:
          #value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0"       # declares the initial master-eligible node of the ES cluster
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: node.master
          value: "true"
        - name: node.ingest
          value: "true"
        - name: node.data
          value: "true"
        - name: cluster.name
          value: "elasticsearch"
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: xpack.security.enabled
          value: "true"
        - name: xpack.security.transport.ssl.enabled
          value: "true"
        - name: xpack.monitoring.collection.enabled
          value: "true"
        - name: xpack.security.transport.ssl.verification_mode
          value: "certificate"
        - name: xpack.security.transport.ssl.keystore.path
          value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
        - name: xpack.security.transport.ssl.truststore.path
          value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elk-data
        - name: elastic-certificates
          readOnly: true
          mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
          subPath: elastic-certificates.p12
      volumes:
      - name: elk-data
        persistentVolumeClaim:
          claimName: elk-pvc
      - name: elastic-certificates
        secret:
          secretName: elastic-certificates

Deploy the elasticsearch resources

kubectl apply -f elasticsearch.yaml
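
Wait for the pod to come up before continuing; it should reach Running with READY 1/1:

kubectl get pods -n kube-elasticsearch -w   # Ctrl-C once elasticsearch-master-0 is Running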

Set the elasticsearch cluster passwords

kubectl -n kube-elasticsearch exec -it $(kubectl -n kube-elasticsearch get pods | grep elasticsearch-master | sed -n 1p | awk '{print $1}') -- bin/elasticsearch-setup-passwords interactive

Running the command above produces the prompts shown in Figure 1-6; for convenience, every password here is set to 111qqq

Figure 1-6

Create the secret kibana uses to connect to elasticsearch

The command is as follows

kubectl create secret generic elasticsearch-credentials --from-literal=username=elastic --from-literal=password=111qqq -n kube-elasticsearch

Note that in the command above, username and password are the keys

Check the created secret, shown in Figure 1-7; the circled parts in the figure are the keys, followed by their values

kubectl get secret elasticsearch-credentials -o jsonpath='{.data}' -n kube-elasticsearch

Figure 1-7
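
The values in the output are base64-encoded; to decode one (here the password key of the secret created above):

kubectl get secret elasticsearch-credentials -n kube-elasticsearch -o jsonpath='{.data.password}' | base64 -d   # prints 111qqq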

Create kibana's ConfigMap, Deployment, and Service

kibana.yaml is as follows

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-elasticsearch
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.port: 5601
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: kube-elasticsearch
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.14.0
        resources:
          limits:
            cpu: 1
            memory: 1G
          requests:
            cpu: 0.5
            memory: 500Mi
        ports:
        - containerPort: 5601
          protocol: TCP
        env:
        - name: SERVER_PUBLICBASEURL
          value: "http://0.0.0.0:5601"
        - name: I18N_LOCALE           # env form of i18n.locale; the kibana docker image maps underscores to dots
          value: zh-CN
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-master:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-credentials
              key: password   # this is the key in the secret created above, not its base64-encoded value
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-elasticsearch
spec:
  selector:
    app: kibana
  ports:
    - name: http
      port: 5601
      targetPort: 5601
      nodePort: 30033
  type: NodePort

Deploy kibana

kubectl apply -f kibana.yaml
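
Check that the pod is running and that the Service exposes the NodePort:

kubectl get pods -n kube-elasticsearch | grep kibana
kubectl get svc kibana -n kube-elasticsearch   # PORT(S) should show 5601:30033/TCP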

Once started, open kibana in a browser on the NodePort, mapped here to 30033; the URL is below and the landing page is shown in Figure 1-8

http://192.168.1.20:30033   # 192.168.1.20 is the master node here; any cluster node also works: http://<node-ip>:30033

Figure 1-8

Create the logstash PV and PVC

Create the directory the PV points to; here it is created on node2 (192.168.1.22)

mkdir /logstash

logstash-pvc.yaml is as follows

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logstash-pv
  namespace: kube-elasticsearch
  labels: 
    pv: logstash-pv
spec:
  capacity:                             # storage capacity of the PV
    storage: 1Gi
  accessModes:                  # access modes of the PV
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: logstash-storage
  local:
    path: /logstash            # this directory holds the logs to be read
  nodeAffinity:
    required:
      nodeSelectorTerms:       # pin the PV to a specific cluster node
      - matchExpressions:
        - key: kubernetes.io/hostname   # matches on the node hostname
          operator: In
          values:
          - node2                       # select cluster node node2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logstash-pvc
  namespace: kube-elasticsearch
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: logstash-storage
  selector:
    matchLabels:
      pv: logstash-pv

Apply logstash-pvc.yaml

kubectl apply -f logstash-pvc.yaml

Check the created logstash PV and PVC, shown in Figure 1-9

kubectl get pv
kubectl get pvc -n kube-elasticsearch

Figure 1-9

Create logstash's ConfigMap, Deployment, and Service

logstash.yaml is as follows

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-elasticsearch
  labels:
    k8s-app: logstash-configmap
data:
  logstash.yml: |    # lives in /usr/share/logstash/config/
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch-master:9200" ]  # elasticsearch URL with x-pack security enabled
    xpack.monitoring.elasticsearch.username: elastic     # user for x-pack access to elasticsearch
    xpack.monitoring.elasticsearch.password: 111qqq      # that user's password
  logstash.conf: |   # lives in /usr/share/logstash/pipeline/
      input {       # log input
        file {
          type => "test-log"  # the type can be referenced in output to distinguish log environments
          path => "/logstash/java-test-log/test.log"    # file to read from the log directory
          codec => multiline{     
             pattern => "^%{TIMESTAMP_ISO8601}"
             negate => true 
             what => previous 
          }
          ignore_older => 86400 
          start_position => "end"
          sincedb_path => "/dev/null"       
          stat_interval => "3"  
        }
      }
      filter {        # parse the logs
        grok {
          match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}" }
        }
        date {
          match => ["time", "yyyy-MM-dd HH:mm:ss.SSS"]
            target => "@timestamp"
        }
      }
      output {    # output: write the logs to an elasticsearch index
        if [type] == "test-log"  { 
          elasticsearch {    
            hosts => ["http://elasticsearch-master:9200"]   
            index => "test-java-%{+YYYY.MM.dd}"
            user => "elastic"     
            password => "111qqq"
          }
        }
      }
---
apiVersion: apps/v1 
kind: Deployment
metadata:
  name: logstash
  namespace: kube-elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      labels:
        k8s-app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:7.14.0 
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-yml
            mountPath: /usr/share/logstash/config/logstash.yml   # mount the customized logstash.yml
            subPath: logstash.yml
          - name: config-volume
            mountPath: /usr/share/logstash/pipeline/logstash.conf  # mount the logstash.conf pipeline configuration
            subPath: logstash.conf
          - name: read-data
            mountPath: /logstash
      volumes:
      - name: config-yml
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
      - name: read-data
        persistentVolumeClaim:
          claimName: logstash-pvc
        
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: kube-elasticsearch
spec:
  ports:
  - port: 5044
    targetPort: 5044
    protocol: TCP
  selector:
    k8s-app: logstash
  type: ClusterIP

In the YAML above, logstash.conf sets the log read path with path => "/logstash/java-test-log/test.log"; create the corresponding directory on node2 (192.168.1.22)

mkdir /logstash/java-test-log

The test.log in that directory should contain Java-formatted log lines; any sample Java logs written to test.log will do, as shown below
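
For example, a line like the following (hypothetical message text) starts with a TIMESTAMP_ISO8601 timestamp and therefore matches both the multiline pattern and the grok/date filters above:

echo '2023-08-01 12:00:00.000 INFO  com.example.Demo - application started' >> /logstash/java-test-log/test.log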

Deploy logstash

kubectl apply -f logstash.yaml
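
Check the logstash pod log to confirm the pipeline started cleanly (the deployment name comes from logstash.yaml above):

kubectl -n kube-elasticsearch logs -f deployment/logstash   # look for a 'Pipelines running' message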

After deploying logstash, check the elasticsearch indices

elasticsearch can be queried from inside the cluster via its ClusterIP; look up the ClusterIP as follows, shown in Figure 1-10

kubectl get svc -n kube-elasticsearch

Figure 1-10

Query the indices with the request below; Figure 1-11 shows the index created from the parsed logs

curl --user elastic:111qqq http://10.103.101.187:9200/_cat/indices?v   # replace 10.103.101.187 with the ClusterIP from Figure 1-10

Figure 1-11

The ELK cluster on kubernetes is now complete. For how to add elasticsearch index data in the kibana web UI, see https://www.zhanghaobk.com/archives/elasticsearch-logstash-kibana-ji-qun-bu-shu-ji-qi-shi-yong