Learn-at-a-glance series: k8s exercise 34, hands-on EFK log collection
Download the corresponding YAML files from the official repo:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
The manifests reference these images; pull them from the Aliyun mirror below and retag:
es-statefulset.yaml: - image: quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
es-statefulset.yaml: - image: alpine:3.6
fluentd-es-ds.yaml: image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0
kibana-deployment.yaml: image: docker.elastic.co/kibana/kibana-oss:7.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kibana-oss7.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:fluentdv2.6.0
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:elasticsearchv7.2.0
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kibana-oss7.2.0 \
docker.elastic.co/kibana/kibana-oss:7.2.0
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:fluentdv2.6.0 \
quay.io/fluentd_elasticsearch/fluentd:v2.6.0
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:elasticsearchv7.2.0 \
quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
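The three pull/retag pairs above can be scripted. A minimal sketch that only prints the docker commands so they can be reviewed first (drop the final echo and run the commands to execute them); the mirror repository is the one used above:

```shell
# Print the pull + retag command for each mirror-tag / upstream-image pair.
MIRROR=registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes
CMDS=""
while read -r src dst; do
  CMDS="${CMDS}docker pull ${MIRROR}:${src}
docker tag ${MIRROR}:${src} ${dst}
"
done <<'EOF'
kibana-oss7.2.0 docker.elastic.co/kibana/kibana-oss:7.2.0
fluentdv2.6.0 quay.io/fluentd_elasticsearch/fluentd:v2.6.0
elasticsearchv7.2.0 quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
EOF
echo "$CMDS"
```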
-rw-r--r-- 1 root root 382 Apr 3 23:28 es-service.yaml
-rw-r--r-- 1 root root 2900 Apr 4 04:15 es-statefulset.yaml
-rw-r--r-- 1 root root 16124 Apr 3 23:28 fluentd-es-configmap.yaml
-rw-r--r-- 1 root root 2717 Apr 4 06:19 fluentd-es-ds.yaml
-rw-r--r-- 1 root root 1166 Apr 4 05:46 kibana-deployment.yaml
-rw-r--r-- 1 root root 272 Apr 4 05:27 kibana-ingress.yaml # this one comes later
-rw-r--r-- 1 root root 354 Apr 3 23:28 kibana-service.yaml
Important: pull exactly the images referenced in the YAML files, otherwise you will hit all kinds of errors.
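One way to honor this is to extract the image references straight from the manifests before pulling anything. Against the real files it is just `grep -h 'image:' *.yaml`; the self-contained sketch below runs the same grep over an inline snippet copied from es-statefulset.yaml above:

```shell
# List image references; with the real manifests: grep -h 'image:' *.yaml
IMAGES=$(grep -o 'image: .*' <<'EOF'
      - image: quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
      - image: alpine:3.6
EOF
)
echo "$IMAGES"
```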
Run this first:
kubectl create -f fluentd-es-configmap.yaml
configmap/fluentd-es-config-v0.2.0 created
Then run:
[root@k8s-master elk]# kubectl create -f fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.5.0 created
[root@k8s-master elk]# kubectl get pod -n kube-system |grep flu
fluentd-es-v2.5.0-hjzw8 1/1 Running 0 19s
fluentd-es-v2.5.0-zmlm2 1/1 Running 0 19s
[root@k8s-master elk]#
Then start Elasticsearch:
[root@k8s-master elk]# kubectl create -f es-statefulset.yaml
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
[root@k8s-master elk]# kubectl create -f es-service.yaml
service/elasticsearch-logging created
[root@k8s-master elk]#
[root@k8s-master elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0 1/1 Running 0 11s
elasticsearch-logging-1 1/1 Running 0 8s
[root@k8s-master elk]#
Then start Kibana:
kubectl create -f kibana-deployment.yaml
kubectl get pod -n kube-system
kubectl create -f kibana-service.yaml
Verify:
[root@k8s-master elk]# kubectl get pod,svc -n kube-system |grep kiba
pod/kibana-logging-65f5b98cf6-2p8cj 1/1 Running 0 46s
service/kibana-logging ClusterIP 10.100.152.68 <none> 5601/TCP 21s
[root@k8s-master elk]#
Check the cluster info:
[root@k8s-master elk]# kubectl cluster-info
Elasticsearch is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
Because only the container ports are open, the services cannot be reached from machines outside the cluster. There are several ways to get access.
Method 1: run kubectl proxy on the master
# This runs in the foreground and is gone once you exit. --address is the master's IP, though in practice any node will do.
kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$'
To run it in the background, use nohup:
nohup kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$' &
Check on the master that the port is listening:
netstat -ntlp |grep 80
tcp 0 0 192.168.10.68:2380 0.0.0.0:* LISTEN 8897/etcd
tcp 0 0 192.168.10.68:8085 0.0.0.0:* LISTEN 16718/kubectl
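With the proxy up, every ClusterIP service is reachable through a URL of a fixed shape. A small sketch that composes it (the host and port are this cluster's values from above):

```shell
# Compose the apiserver-proxy URL for a service in a namespace.
HOST=192.168.10.68
PORT=8085
NS=kube-system
SVC=elasticsearch-logging
URL="http://${HOST}:${PORT}/api/v1/namespaces/${NS}/services/${SVC}/proxy/"
echo "$URL"
```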
Once in Kibana, to get charts:
1. Click Management in the left-hand menu.
2. Create an index: Create index pattern.
3. Enter * to see the concrete index names.
4. For example logstash-2019.03.25; change it to logstash-* and click through to finish.
4.1 Be sure to click the star to set logstash-* as the default index.
5. Open Discover and you will see the logs.
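The UI steps above can also be scripted against Kibana's saved-objects API (present in Kibana 7.x; the endpoint path, payload shape, and kbn-xsrf header are taken from the Kibana docs, not from this article). The sketch below only builds and prints the curl call so it can be reviewed; run the printed command against your own proxy URL:

```shell
# Build (and only print) a curl call that creates the logstash-* index pattern
# through Kibana's saved-objects API.
KIBANA='http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/kibana-logging/proxy'
CMD="curl -X POST '${KIBANA}/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{\"attributes\":{\"title\":\"logstash-*\",\"timeFieldName\":\"@timestamp\"}}'"
echo "$CMD"
```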
Verify the result; the output below is normal. Note it is plain http, not https:
curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
{
"name" : "bc30CKf",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "C3oV5BnMTByxYltuuYjTjg",
"version" : {
"number" : "6.7.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8453f77",
"build_date" : "2019-03-21T15:32:29.844721Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Method 2: Ingress
[root@k8s-master elk]# kubectl get ingress -n kube-system -o wide
NAME HOSTS ADDRESS PORTS AGE
kibana-logging elk.ccie.wang 80 6m42s
It works, but returns a 404; the cause still needs investigation.
Create the Ingress. The config file kibana-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-logging-ingress
  namespace: kube-system
spec:
  rules:
  - host: elk.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601
kubectl create -f kibana-ingress.yaml
Method 3: NodePort
Modify kibana-service.yaml so Kibana is reachable directly at http://node:nodeport
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  # add nodeport
  type: NodePort
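With type: NodePort, Kubernetes allocates a high port (30000-32767 by default) on every node. On a live cluster the assigned port can be read back with kubectl's jsonpath output; the sketch below just composes the resulting URL with an assumed port value:

```shell
# On a real cluster, read the allocated port with:
#   NODEPORT=$(kubectl get svc kibana-logging -n kube-system \
#     -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=192.168.10.68   # any node's IP
NODEPORT=30601          # assumed value for illustration
URL="http://${NODE_IP}:${NODEPORT}"
echo "$URL"
```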
Verify the objects created from the file:
[root@k8s-master elk]# kubectl get -f fluentd-es-ds.yaml
NAME SECRETS AGE
serviceaccount/fluentd-es 1 85s
NAME AGE
clusterrole.rbac.authorization.k8s.io/fluentd-es 85s
NAME AGE
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es 85s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/fluentd-es-v2.5.0 2 2 2 2 2 <none> 85s
[root@k8s-master elk]#
---------- Troubleshooting
[root@k8s-master elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0 0/1 ErrImagePull 0 71s
[root@k8s-master elk]#
Image pull error. Point the image at a registry you can reach:
    containers:
    # change the line below
    #- image: gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1
    - image: reg.ccie.wang/library/elk/elasticsearch:6.7.0
---------------- Extended notes
1. fluentd
How to use the image:
docker run -d -p 24224:24224 -p 24224:24224/udp -v /data:/fluentd/log fluent/fluentd:v1.3-debian-1
The default configuration:
listens on port 24224
stores logs tagged docker.** to /fluentd/log/docker.*.log (and symlinks docker.log)
stores all other logs to /fluentd/log/data.*.log (and symlinks data.log)
You can of course supply your own configuration:
docker run -ti --rm -v /path/to/dir:/fluentd/etc fluentd -c /fluentd/etc/<config file> -v
The first -v mounts /path/to/dir to /fluentd/etc inside the container.
-c tells fluentd where to find the configuration file; the name before it is the image.
The second -v passes verbose logging to fluentd.
To run as a different user, e.g. foo:
docker run -p 24224:24224 -u foo -v …
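The default behavior described above corresponds roughly to a fluentd.conf like the following. This is a sketch reconstructed from that description, not the image's literal bundled file:

```
# Accept logs over the forward protocol on 24224
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Logs tagged docker.** go to /fluentd/log/docker.*.log
<match docker.**>
  @type file
  path /fluentd/log/docker.*.log
  symlink_path /fluentd/log/docker.log
  append true
</match>

# Everything else goes to /fluentd/log/data.*.log
<match **>
  @type file
  path /fluentd/log/data.*.log
  symlink_path /fluentd/log/data.log
  append true
</match>
```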