What is the Elastic Stack
The Elastic Stack was formerly known as ELK.
ELK stands for its three original components: Elasticsearch, Logstash, and Kibana.
Because Logstash is a heavyweight product (the install package exceeds 300 MB) and many users only needed log collection, lighter collectors such as Flume and Fluentd were often used in its place.
Elastic noticed this and developed the Beats family of products, typified by Filebeat, Metricbeat, and Heartbeat.
Later, security components such as X-Pack were added, along with components for cloud environments.
The name then became "ELK Stack", and the company later rebranded it as the Elastic Stack.
Elastic Stack architecture
Elastic Stack versions
https://www.elastic.co/ — the Elastic website
The latest version is 8.x, which enables HTTPS by default. We will install version 7.17 first and enable HTTPS manually; installing version 8 is practiced later.
Choose an installation method; here we deploy Elasticsearch on Ubuntu.
Deploying a single-node ES environment from the binary tarball
Deployment
1. Download the ELK packages
root@elk:~# cat install_elk.sh
#!/bin/bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.17.28-linux-x86_64.tar.gz.sha512
tar -xzf elasticsearch-7.17.28-linux-x86_64.tar.gz -C /usr/local
cd /usr/local/elasticsearch-7.17.28/
2. Edit the configuration file
root@elk:~# vim /usr/local/elasticsearch-7.17.28/config/elasticsearch.yml
root@elk:~# egrep -v "^#|^$" /usr/local/elasticsearch-7.17.28/config/elasticsearch.yml
cluster.name: xu-elasticstack
path.data: /var/lib/es7
path.logs: /var/log/es7
network.host: 0.0.0.0
discovery.type: single-node
Parameter notes:
port
The default port is 9200.
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
cluster.name
The name of the cluster.
path.data
The data storage path for ES.
path.logs
The log storage path for ES.
network.host
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
The address the ES service listens on.
discovery.type
# For an ES cluster, configure discovery.seed_hosts and cluster.initial_master_nodes instead
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
The deployment type of the ES cluster; "single-node" here means a standalone, single-node environment.
3. Starting Elasticsearch directly at this point fails
3.1 Reproduce the error using the official start command:
Elasticsearch can be started from the command line as follows:
./bin/elasticsearch
root@elk:~# /usr/local/elasticsearch-7.17.28/bin/elasticsearch
# Below is the Java stack trace
Mar 17, 2025 7:44:51 AM sun.util.locale.provider.LocaleProviderAdapter <clinit>
WARNING: COMPAT locale provider will be removed in a future release
[2025-03-17T07:44:53,125][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elk] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:173) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.17.28.jar:7.17.28]
at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.17.28.jar:7.17.28]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.17.28.jar:7.17.28]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:107) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.28.jar:7.17.28]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) ~[elasticsearch-7.17.28.jar:7.17.28]
... 6 more
uncaught exception in thread [main]
java.lang.RuntimeException: can not run elasticsearch as root # ES refuses to run as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:107)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
at org.elasticsearch.cli.Command.main(Command.java:77)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
For complete error details, refer to the log at /var/log/es7/xu-elasticstack.log
2025-03-17 07:44:53,713764 UTC [1860] INFO Main.cc@111 Parent process died - ML controller exiting
3.2 Create a user to run ES
root@elk:~# useradd -m elastic
root@elk:~# id elastic
uid=1001(elastic) gid=1001(elastic) groups=1001(elastic)
# Start as the elastic user; another error appears
root@elk:~# su - elastic -c "/usr/local/elasticsearch-7.17.28/bin/elasticsearch"
could not find java in bundled JDK at /usr/local/elasticsearch-7.17.28/jdk/bin/java
# Java is present on the system, but the elastic user cannot find it; switch to elastic to investigate
root@elk:~# ll /usr/local/elasticsearch-7.17.28/jdk/bin/java
-rwxr-xr-x 1 root root 12328 Feb 20 09:09 /usr/local/elasticsearch-7.17.28/jdk/bin/java*
root@elk:~# su - elastic
$ pwd
/home/elastic
$ ls /usr/local/elasticsearch-7.17.28/jdk/bin/java
# The cause is a permission error: elastic has no access to the bundled JDK
ls: cannot access '/usr/local/elasticsearch-7.17.28/jdk/bin/java': Permission denied
# Checking each directory level upward shows /usr/local/elasticsearch-7.17.28/jdk/bin is not accessible, which causes the error
root@elk:~# chown elastic:elastic -R /usr/local/elasticsearch-7.17.28/
root@elk:~# ll -d /usr/local/elasticsearch-7.17.28/jdk/bin/
drwxr-x--- 2 elastic elastic 4096 Feb 20 09:09 /usr/local/elasticsearch-7.17.28/jdk/bin//
# Start again and another error appears:
# the configured path.data and path.logs directories do not exist and must be created
java.lang.IllegalStateException: Unable to access 'path.data' (/var/lib/es7)
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Unable to access 'path.data' (/var/lib/es7)
root@elk:~# install -d /var/{log,lib}/es7 -o elastic -g elastic
root@elk:~# ll -d /var/{log,lib}/es7
drwxr-xr-x 2 elastic elastic 4096 Mar 17 08:01 /var/lib/es7/
drwxr-xr-x 2 elastic elastic 4096 Mar 17 07:44 /var/log/es7/
# Start the service once more; it now starts successfully. Check the ports
root@elk:~# su - elastic -c "/usr/local/elasticsearch-7.17.28/bin/elasticsearch"
root@elk:~# netstat -tunlp | egrep "9[2|3]00"
tcp6 0 0 :::9200 :::* LISTEN 2544/java
tcp6 0 0 :::9300 :::* LISTEN 2544/java
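ES can take several seconds after launch before port 9200 opens, so checking it immediately may fail even on a healthy start. A small retry helper avoids flaky checks (a sketch; the function name and timings are our own):

```shell
# wait_for <retries> <command...>: re-run a command until it succeeds,
# pausing 1s between attempts; give up after <retries> attempts.
wait_for() {
  retries="$1"; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1
    sleep 1
  done
}

# Example against ES (assuming it listens on 127.0.0.1:9200):
# wait_for 30 curl -s -o /dev/null 127.0.0.1:9200
```

The same helper works for any readiness check, e.g. waiting for a log file to appear.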
Access port 9200 in a browser.
ES also provides an API that lists the nodes in the cluster:
[root@zabbix ~]# curl 192.168.121.21:9200/_cat/nodes
172.16.1.21 40 97 0 0.11 0.29 0.20 cdfhilmrstw * elk
# Since this is a single-node deployment, only one node is listed
# So far we started ES in the foreground, which has two problems:
1. it occupies the terminal
2. stopping ES is awkward, so in practice we usually run it in the background
The officially documented way to run it in the background:
the -d flag of elasticsearch
To run Elasticsearch as a daemon, specify -d on the command line, and record the process ID in a file using the -p option:
./bin/elasticsearch -d -p pid
root@elk:~# su - elastic -c '/usr/local/elasticsearch-7.17.28/bin/elasticsearch -d'
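When started with -d, the -p option records the daemon's PID in a file, and the process is later stopped by killing that PID (the official docs suggest `pkill -F pid`). A sketch of the pattern, using a stand-in `sleep` process so it runs without a live ES (file paths are illustrative):

```shell
# With ES: start with '-d -p /tmp/es.pid', stop with: pkill -F /tmp/es.pid
# The same pattern with a stand-in process:
sleep 60 &
echo $! > /tmp/demo.pid        # record the PID, like 'elasticsearch -p' does
kill "$(cat /tmp/demo.pid)"    # stop the recorded process
wait 2>/dev/null || true       # reap the child
kill -0 "$(cat /tmp/demo.pid)" 2>/dev/null || echo "process stopped"
```

This avoids the grep/awk pipeline used later in the teardown section, which can match unrelated Java processes.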
# Common startup errors
Q1: max virtual memory areas too low
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /var/log/es7/AAA.log
root@elk:~# sysctl -q vm.max_map_count
vm.max_map_count = 65530
root@elk:~# echo "vm.max_map_count = 262144" >> /etc/sysctl.d/es.conf
root@elk:~# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
root@elk:~# sysctl -q vm.max_map_count
vm.max_map_count = 262144
Q2: a mistake in the ES configuration file
java.net.UnknownHostException: single-node
Q3: a "lock" error means another ES instance is already running; kill the existing process and start again
java.lang.IllegalStateException: failed to obtain node locks, tried [[/var/lib/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
Q4: the ES cluster is misconfigured and has no master role.
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
Tearing down the environment
1. Stop Elasticsearch
root@elk:~# kill `ps -ef | grep java | grep -v grep |awk '{print $2}'`
root@elk:~# ps -ef | grep java
root 4437 1435 0 09:21 pts/2 00:00:00 grep --color=auto java
2. Remove the data directory, log directory, installation, and user
root@elk:~# rm -rf /usr/local/elasticsearch-7.17.28/ /var/{lib,log}/es7/
root@elk:~# userdel -r elastic
Installing a single-node ES from the deb package
1. Download the deb package
root@elk:~# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
2. Install ES
root@elk:~# dpkg -i elasticsearch-7.17.28-amd64.deb
# When installed from the deb package, ES can be managed with systemctl
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore
3. Edit the ES configuration file
root@elk:~# vim /etc/elasticsearch/elasticsearch.yml
root@elk:~# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: xu-es
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.type: single-node
4. Start ES
systemctl enable elasticsearch --now
# Inspect the service unit; the settings below are exactly what we did by hand in the tarball install
User=elasticsearch
Group=elasticsearch
ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet
cat /usr/share/elasticsearch/bin/systemd-entrypoint
#!/bin/sh
# This wrapper script allows SystemD to feed a file containing a passphrase into
# the main Elasticsearch startup script
if [ -n "$ES_KEYSTORE_PASSPHRASE_FILE" ] ; then
exec /usr/share/elasticsearch/bin/elasticsearch "$@" < "$ES_KEYSTORE_PASSPHRASE_FILE"
else
exec /usr/share/elasticsearch/bin/elasticsearch "$@"
fi
Common ES terms
1. Index
The unit through which users read and write data.
2. Shard
An index has at least one shard. If an index has only one shard, all of its data is stored in full on a single node: a shard cannot be split and belongs to one node.
In other words, the shard is the smallest scheduling unit in an ES cluster.
An index's data can also be spread across multiple shards, and those shards can be placed on different nodes, which gives distributed storage.
3. Replica
Replicas are defined per shard; a shard can have zero or more replicas.
With zero replicas there are only primary shards, and if the node holding a primary shard goes down, its data becomes inaccessible.
With one or more replicas, both primary shards and replica shards exist:
the primary shard handles reads and writes (read write, rw)
replica shards load-balance reads only (read only, ro)
4. Document
The data users store, consisting of metadata and source data.
Metadata:
Data that describes the source data.
Source data:
The data the user actually stores.
5. Allocation
The process of distributing an index's shards (primary and replica) across the cluster.
Checking cluster status
# ES provides the /_cat/health API
root@elk:~# curl 127.1:9200/_cat/health
1742210504 11:21:44 xu-es green 1 1 3 3 0 0 0 0 - 100.0%
root@elk:~# curl 127.1:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1742210512 11:21:52 xu-es green 1 1 3 3 0 0 0 0 - 100.0%
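As the ?v header shows, the fourth column of /_cat/health is the cluster status (green/yellow/red), which is handy to extract in scripts. A sketch using the sample line captured above:

```shell
# Extract the status column from a /_cat/health line.
# In a live check you would pipe in: curl -s 127.1:9200/_cat/health
line='1742210504 11:21:44 xu-es green 1 1 3 3 0 0 0 0 - 100.0%'
status=$(echo "$line" | awk '{print $4}')
echo "$status"    # green
```

A monitoring script can alert whenever the value is not "green".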
Deploying an ES cluster
1. Install the ES service on each node
root@elk1:~# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
root@elk1:~# dpkg -i elasticsearch-7.17.28-amd64.deb
root@elk2:~# dpkg -i elasticsearch-7.17.28-amd64.deb
root@elk3:~# dpkg -i elasticsearch-7.17.28-amd64.deb
2. Configure ES identically on all three machines
# discovery.type is no longer needed
[root@elk1 ~]# grep -E "^(cluster|path|network|discovery|http)" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]
3. Start the services
systemctl enable elasticsearch --now
4. Test; the node marked with * is the master
root@elk:~# curl 127.1:9200/_cat/nodes
172.16.1.23 6 97 25 0.63 0.57 0.25 cdfhilmrstw - elk3
172.16.1.22 5 96 23 0.91 0.76 0.33 cdfhilmrstw - elk2
172.16.1.21 19 90 39 1.22 0.87 0.35 cdfhilmrstw * elk
root@elk:~# curl 127.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.16.1.23 9 83 2 0.12 0.21 0.18 cdfhilmrstw - elk3
172.16.1.22 8 96 3 0.16 0.28 0.24 cdfhilmrstw - elk2
172.16.1.21 22 97 3 0.09 0.30 0.25 cdfhilmrstw * elk
# Cluster deployment failure: no cluster UUID, and the cluster has no master
[root@elk3 ~]# curl http://192.168.121.92:9200/_cat/nodes?v
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
[root@elk3 ~]# curl 192.168.121.91:9200
{
"name" : "elk91",
"cluster_name" : "es-cluster",
"cluster_uuid" : "_na_",
...
}
[root@elk3 ~]#
[root@elk3 ~]# curl 10.0.0.92:9200
{
"name" : "elk92",
"cluster_name" : "es-cluster",
"cluster_uuid" : "_na_",
...
}
[root@elk3 ~]#
[root@elk3 ~]# curl 10.0.0.93:9200
{
"name" : "elk93",
"cluster_name" : "es-cluster",
"cluster_uuid" : "_na_",
...
}
[root@elk3 ~]#
[root@elk3 ~]# curl http://192.168.121.91:9200/_cat/nodes
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
# Fix
1. Stop the ES service on every cluster node
[root@elk91 ~]# systemctl stop elasticsearch.service
[root@elk92 ~]# systemctl stop elasticsearch.service
[root@elk93 ~]# systemctl stop elasticsearch.service
2. Remove the data, logs, and temporary files
[root@elk91 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
[root@elk92 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
[root@elk93 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
3. Add the configuration item
[root@elk1 ~]# grep -E "^(cluster|path|network|discovery|http)" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]
cluster.initial_master_nodes: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]   # newly added
4. Restart the services
5. Test
ES cluster master election process
1. On startup, a node first checks whether the cluster already has a master; if it does, no election is started. Initially every node considers itself the master and sends its state (ClusterStateVersion, node ID, etc.) to the other nodes in the cluster.
2. Using a Gossip-like protocol, each node obtains the list of nodes eligible to take part in the master election.
3. ClusterStateVersion is compared first: the node with the highest version has priority and is elected master.
4. If that does not decide it, node IDs are compared: the node with the smallest ID becomes master.
5. Once more than half of the cluster's nodes have taken part, the election can complete: with N nodes, (N/2)+1 of them are enough to confirm the master.
6. The elected master then announces the result to the cluster; only then is the election complete.
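The "(N/2)+1" majority in step 5 is plain integer arithmetic; a quick sketch (the function name is ours):

```shell
# Minimum number of participating nodes needed to confirm a master
# for a cluster of N nodes: (N/2)+1 with integer division.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3    # 2: a 3-node cluster elects with 2 votes
quorum 5    # 3
```

This is also why production clusters use an odd number of master-eligible nodes: 4 nodes need the same 3 votes as 5 but tolerate one fewer failure.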
DSL
Adding a single document to ES
Test it with Postman.
# Under the hood this is just curl
curl --location 'http://192.168.121.21:9200/test_linux/doc' \
--header 'Content-Type: application/json' \
--data '{
"name": "孙悟空",
"hobby": [
"蟠桃",
"紫霞仙子"
]
}'
curl --location '192.168.121.21:9200/_bulk' \
--header 'Content-Type: application/json' \
--data '{ "create" : { "_index" : "test_linux_ss", "_id" : "1001" } }
{ "name" : "猪八戒","hobby": ["猴哥","高老庄"] }
{"create": {"_index":"test_linux_ss","_id":"1002"}}
{"name":"白龙马","hobby":["驮唐僧","吃草"]}
'
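The _bulk body is NDJSON: each action line must be followed by its source line, and the body must end with a newline or ES rejects the request. A sketch that builds the same payload into a file and sanity-checks its shape before sending (the file name is ours):

```shell
# Build the bulk payload offline
cat > bulk.ndjson <<'EOF'
{ "create" : { "_index" : "test_linux_ss", "_id" : "1001" } }
{ "name" : "猪八戒", "hobby" : ["猴哥", "高老庄"] }
{ "create" : { "_index" : "test_linux_ss", "_id" : "1002" } }
{ "name" : "白龙马", "hobby" : ["驮唐僧", "吃草"] }
EOF

# One source line per action line: total line count must be twice
# the number of "create" actions.
lines=$(wc -l < bulk.ndjson)
actions=$(grep -c '"create"' bulk.ndjson)
echo "lines=$lines actions=$actions"

# To send it (assuming ES on 192.168.121.21:9200):
# curl -H 'Content-Type: application/json' --data-binary @bulk.ndjson 192.168.121.21:9200/_bulk
```

`--data-binary` matters here: plain `--data` strips newlines, which breaks NDJSON.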
Querying data
curl --location '192.168.121.22:9200/test_linux_ss/_doc/1001' \
--data ''
curl --location --request GET '192.168.121.22:9200/test_linux_ss/_search' \
--header 'Content-Type: application/json' \
--data '{
"query":{
"match":{
"name":"猪八戒"
}
}
}'
Deleting data
curl --location --request DELETE '192.168.121.22:9200/test_linux_ss/_doc/1001'
Kibana
Deploying Kibana
Kibana is a visualization tool built for ES; from here on, operations against ES can be performed through Kibana.
Testing access from the browser
Basic KQL usage
Filtering data
Filebeat
Deploying Filebeat
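As a sketch of the deployment (the deb name follows the same artifacts.elastic.co pattern as the ES package, i.e. filebeat-7.17.28-amd64.deb installed with dpkg -i; paths below are illustrative), a minimal /etc/filebeat/filebeat.yml that prints events to stdout makes a good first smoke test:

```yaml
# /etc/filebeat/filebeat.yml — minimal smoke-test config (illustrative)
filebeat.inputs:
- type: log
  paths:
    - /tmp/test.log

# Print events to stdout first; switch to output.elasticsearch later.
output.console:
  pretty: true
```

Run it in the foreground with `filebeat -e -c /etc/filebeat/filebeat.yml` and append lines to /tmp/test.log to watch events appear.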
Filebeat characteristics
From this we get Filebeat's first property:
Filebeat collects data line by line by default.
This is Filebeat's second property:
By default Filebeat records the offset of each collected file under "/var/lib/filebeat", so the next run resumes collecting from that position.
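The resume behaviour can be sketched in plain shell: persist how many bytes were read, then continue from that offset on the next run. This only mimics the idea; Filebeat's actual registry is a JSON journal under /var/lib/filebeat, and the file names below are ours:

```shell
log=/tmp/offset_demo.log
reg=/tmp/offset_demo.registry

printf 'line1\nline2\n' > "$log"
offset=$(wc -c < "$log")      # pretend we collected everything so far
echo "$offset" > "$reg"       # persist the offset, like the registry does

printf 'line3\n' >> "$log"    # new data arrives

# "Next run": skip the bytes already collected, read only the new part
new=$(tail -c +"$(( $(cat "$reg") + 1 ))" "$log")
echo "$new"    # line3
```

Deleting the registry is therefore how you force Filebeat to re-collect a file from the beginning.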
Writing to ES from Filebeat
Kibana now shows the collected data.
Viewing the collected data
Setting the refresh interval
Custom index
The index template has been created.
At this point there are still 3 shards and 0 replicas.
Collecting nginx logs with Filebeat in practice
Analyzing nginx logs with Filebeat
filebeat modules
Configuring Filebeat to monitor nginx
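A sketch of the input-based approach (the log path is the Ubuntu default; index settings are omitted). Alternatively, the nginx module mentioned above can be enabled with `filebeat modules enable nginx`:

```yaml
# /etc/filebeat/filebeat.yml — collect nginx access logs (illustrative)
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log

output.elasticsearch:
  hosts: ["192.168.121.21:9200"]
```

Restart Filebeat after the change and confirm in Kibana that new events arrive.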
Analyzing PV in Kibana
Analyzing IPs in Kibana
Analyzing bandwidth in Kibana
Building a dashboard in Kibana
Analyzing devices in Kibana
Analyzing operating-system share in Kibana
Analyzing the global user distribution in Kibana
Collecting Tomcat logs with Filebeat
Deploying Tomcat
Configuring Filebeat to monitor Tomcat
Filebeat processors
https://www.elastic.co/guide/en/beats/filebeat/7.17/filtering-and-enhancing-data.html
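Per the page linked above, processors are declared in filebeat.yml and applied to every event before output. A sketch using two documented processors, `drop_event` and `add_fields` (the regexp pattern and field values are illustrative):

```yaml
processors:
  # Discard events whose message starts with DEBUG
  - drop_event:
      when:
        regexp:
          message: '^DEBUG'
  # Attach a custom field under the "project" namespace
  - add_fields:
      target: project
      fields:
        env: test
```

Processors run in the order listed, so filtering first avoids enriching events that will be dropped anyway.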
Collecting ES cluster logs with Filebeat
Collecting MySQL logs with Filebeat
Collecting Redis logs with Filebeat
Filebeat multiline merging
Managing multiline Redis log messages
Managing multiline Tomcat error log messages
1. Download the deb package
root@elk:~# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
2. Install ES
root@elk:~# dpkg -i elasticsearch-7.17.28-amd64.deb
# ES installed from the deb package can be managed with systemctl
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore
3. Edit the ES configuration file
root@elk:~# vim /etc/elasticsearch/elasticsearch.yml
root@elk:~# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: xu-es
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.type: single-node
4. Start ES
systemctl enable elasticsearch --now
# Inspect the ES service file: the deb package sets up everything we had to do by hand for the tarball install
User=elasticsearch
Group=elasticsearch
ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet
cat /usr/share/elasticsearch/bin/systemd-entrypoint
#!/bin/sh
# This wrapper script allows SystemD to feed a file containing a passphrase into
# the main Elasticsearch startup script
if [ -n "$ES_KEYSTORE_PASSPHRASE_FILE" ] ; then
exec /usr/share/elasticsearch/bin/elasticsearch "$@" < "$ES_KEYSTORE_PASSPHRASE_FILE"
else
exec /usr/share/elasticsearch/bin/elasticsearch "$@"
fi
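The branch in `systemd-entrypoint` is easy to verify in isolation. A sketch mirroring its logic, with an `echo` substituted for the real `exec` so the decision can be observed without starting Elasticsearch:

```shell
#!/bin/sh
# Mirror of the systemd-entrypoint branch: if ES_KEYSTORE_PASSPHRASE_FILE is
# set, Elasticsearch would be started with the passphrase file fed to stdin;
# otherwise it starts normally. Here we only report which branch would run.
entrypoint_mode() {
  if [ -n "$ES_KEYSTORE_PASSPHRASE_FILE" ]; then
    echo "start-with-passphrase"
  else
    echo "start-plain"
  fi
}

ES_KEYSTORE_PASSPHRASE_FILE=/tmp/passphrase entrypoint_mode
ES_KEYSTORE_PASSPHRASE_FILE= entrypoint_mode
```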
Running multiple Filebeat instances
EFK analysis of a web cluster
Deploying the web cluster
Collecting web cluster logs
Changing field types
Deploying an EFK cluster with Ansible
Logstash
Installing and configuring Logstash
Logstash text-log collection strategy
start_position
filter plugins
Logstash architecture
Running multiple Logstash instances
1. Index
The unit through which users read and write data.
2. Shard
An index has at least one shard. If an index has only one shard, all of its data is stored in full on a single node; a shard cannot be split further and belongs to one node.
In other words, the shard is the smallest scheduling unit in an ES cluster.
An index's data can also be spread across multiple shards, and those shards can be placed on different nodes, which is how data is stored in a distributed fashion.
3. Replica
Replicas are defined per shard: a shard can have zero or more replicas.
With zero replicas there are only primary shards, so if the node holding a primary shard goes down, its data becomes unavailable.
With one or more replicas there are both primary shards and replica shards:
the primary shard handles reads and writes (read-write, rw)
replica shards load-balance read traffic (read-only, ro)
4. Document
The data users actually store, consisting of metadata and source data.
Metadata:
data that describes the source data.
Source data:
the data the user actually stores.
5. Allocation
The process of assigning an index's shards (both primary and replica) across the cluster.
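The primary/replica relationship above implies a simple count: an index with P primary shards and R replicas per shard holds P × (1 + R) shard copies in total. A quick sketch:

```shell
#!/bin/sh
# Total shard copies = primaries * (1 + replicas-per-shard).
total_shards() {
  echo $(( $1 * (1 + $2) ))
}

total_shards 3 1   # 3 primaries with 1 replica each -> 6 copies
total_shards 5 0   # replicas=0: primaries only -> 5 copies
```

With replicas = 0, losing any node that holds a primary makes part of the index unavailable, which is exactly the failure mode described above.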
The relationship between Logstash and pipelines
Collecting Nginx logs with Logstash
grok plugins
useragent plugins
geoip plugins
date plugins
mutate plugins
Logstash collects logs and outputs to ES
Fixing slow geoip plugin lookups when writing to ES
Fixing the incorrect geoip.location data type
At this point the latitude/longitude fields are of type float, which cannot be plotted on a map
Create an index template in Kibana
ELFK architecture
json plugin example
# ES provides the /_cat/health API
root@elk:~# curl 127.1:9200/_cat/health
1742210504 11:21:44 xu-es green 1 1 3 3 0 0 0 0 - 100.0%
root@elk:~# curl 127.1:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1742210512 11:21:52 xu-es green 1 1 3 3 0 0 0 0 - 100.0%
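In the headerless `_cat/health` output above, the fourth column is the cluster status (green/yellow/red), which makes it easy to script a health check. A small sketch using the sample line from the transcript:

```shell
#!/bin/sh
# Pull the status column out of a headerless `_cat/health` line.
health_status() {
  awk '{print $4}'
}

echo '1742210504 11:21:44 xu-es green 1 1 3 3 0 0 0 0 - 100.0%' | health_status
# Against a live node: curl -s 127.1:9200/_cat/health | health_status
```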
Writing to ES
ELFK architecture walkthrough: an e-commerce metrics project case study
ELK architecture
Logstash if statements
pipeline
ES cluster security
Encryption based on basic auth
ES cluster encryption
Resetting the ES password
Connecting Filebeat to a secured ES
Connecting Logstash to a secured ES
api-key
Enabling the ES API-key feature
1. Install the ES cluster packages
root@elk1:~# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
root@elk1:~# dpkg -i elasticsearch-7.17.28-amd64.deb
root@elk2:~# dpkg -i elasticsearch-7.17.28-amd64.deb
root@elk3:~# dpkg -i elasticsearch-7.17.28-amd64.deb
2. Configure ES; use identical configuration on all three machines
# discovery.type is no longer needed here
[root@elk1 ~]# grep -E "^(cluster|path|network|discovery|http)" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]
3. Start the service
systemctl enable elasticsearch --now
4. Test; the node marked with * is the master
root@elk:~# curl 127.1:9200/_cat/nodes
172.16.1.23 6 97 25 0.63 0.57 0.25 cdfhilmrstw - elk3
172.16.1.22 5 96 23 0.91 0.76 0.33 cdfhilmrstw - elk2
172.16.1.21 19 90 39 1.22 0.87 0.35 cdfhilmrstw * elk
root@elk:~# curl 127.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.16.1.23 9 83 2 0.12 0.21 0.18 cdfhilmrstw - elk3
172.16.1.22 8 96 3 0.16 0.28 0.24 cdfhilmrstw - elk2
172.16.1.21 22 97 3 0.09 0.30 0.25 cdfhilmrstw * elk
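In the headerless `_cat/nodes` output above, the ninth column is the master marker (`*` for the elected master, `-` otherwise) and the tenth is the node name, so the current master can be picked out with awk. A small sketch:

```shell
#!/bin/sh
# Print the name of the node whose master column (field 9) is '*'.
find_master() {
  awk '$9 == "*" { print $10 }'
}

printf '%s\n' \
  '172.16.1.23 6 97 25 0.63 0.57 0.25 cdfhilmrstw - elk3' \
  '172.16.1.22 5 96 23 0.91 0.76 0.33 cdfhilmrstw - elk2' \
  '172.16.1.21 19 90 39 1.22 0.87 0.35 cdfhilmrstw * elk' | find_master
# Against a live node: curl -s 127.1:9200/_cat/nodes | find_master
```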
# Cluster deployment failure: no cluster UUID, and the cluster has not elected a master
[root@elk3 ~]# curl http://192.168.121.92:9200/_cat/nodes?v
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
[root@elk3 ~]# curl 192.168.121.91:9200
{
"name" : "elk91",
"cluster_name" : "es-cluster",
"cluster_uuid" : "_na_",
...
}
[root@elk3 ~]#
[root@elk3 ~]# curl 10.0.0.92:9200
{
"name" : "elk92",
"cluster_name" : "es-cluster",
"cluster_uuid" : "_na_",
...
}
[root@elk3 ~]#
[root@elk3 ~]#
[root@elk3 ~]# curl 10.0.0.93:9200
{
"name" : "elk93",
"cluster_name" : "es-cluster",
"cluster_uuid" : "_na_",
...
}
[root@elk3 ~]#
[root@elk3 ~]# curl http://192.168.121.91:9200/_cat/nodes
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
# Fix
1. Stop the ES service on every cluster node
[root@elk91 ~]# systemctl stop elasticsearch.service
[root@elk92 ~]# systemctl stop elasticsearch.service
[root@elk93 ~]# systemctl stop elasticsearch.service
2. Delete the data, logs, and temporary files
[root@elk91 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
[root@elk92 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
[root@elk93 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
3. Add the missing configuration item
[root@elk1 ~]# grep -E "^(cluster|path|network|discovery|http)" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]
cluster.initial_master_nodes: ["192.168.121.91", "192.168.121.92", "192.168.121.93"] ######
4. Restart the service
5. Test again
ES cluster master election process
1. On startup, a node checks whether the cluster already has a master; if it does, no election is started.
2. At initial startup, every node considers itself the master and broadcasts its state (ClusterStateVersion, ID, etc.) to the other nodes in the cluster.
3. Using a Gossip-like protocol, each node obtains the list of master-eligible nodes.
4. "ClusterStateVersion" is compared first: the node with the highest value has priority and is elected master.
5. On a tie, node IDs are compared: the smallest ID wins.
6. Once more than half of the nodes have voted, the election is complete; with N nodes, "(N/2)+1" votes are enough to confirm the master.
7. After the election, the new master is announced to the cluster's node list; only then is the election truly complete.
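The majority rule above can be written down directly: with N master-eligible nodes, (N/2)+1 votes (integer division) confirm a master. This is also why production clusters use an odd node count, since 4 nodes tolerate no more failures than 3:

```shell
#!/bin/sh
# Minimum number of votes needed to elect a master among N nodes.
quorum() {
  echo $(( $1 / 2 + 1 ))
}

quorum 3   # a 3-node cluster needs 2 votes
quorum 4   # 4 nodes still need 3 votes: only 1 failure tolerated
quorum 5   # 5 nodes need 3 votes: 2 failures tolerated
```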
Creating an API key
Creating an api-key via the ES API and implementing permission management
References:
https://www.elastic.co/guide/en/beats/filebeat/7.17/beats-api-keys.html
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-privileges.html#privileges-list-cluster
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-privileges.html#privileges-list-indices
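Per the docs linked above, an API key is created by POSTing to `/_security/api_key` with `role_descriptors` that limit what the key may do. A sketch of such a request; the key name, role name, index pattern, and privileges below are example values, not from the original notes:

```shell
#!/bin/sh
# Build the JSON body for POST /_security/api_key. The example role grants only
# the 'monitor' cluster privilege plus create/create_index on filebeat-* indices.
api_key_payload() {
  cat <<'EOF'
{
  "name": "filebeat-writer",
  "role_descriptors": {
    "filebeat_writer": {
      "cluster": ["monitor"],
      "index": [
        { "names": ["filebeat-*"], "privileges": ["create_index", "create"] }
      ]
    }
  }
}
EOF
}

api_key_payload
# Against a secured cluster (prompts for the elastic password):
# curl -u elastic -H 'Content-Type: application/json' \
#   -XPOST 127.1:9200/_security/api_key -d "$(api_key_payload)"
```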
HTTPS
Configuring HTTPS for the ES cluster
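A minimal sketch of enabling TLS on a 7.17 cluster using certificates generated by elasticsearch-certutil. File names below are the tool's defaults; the generated .p12 files must be copied into /etc/elasticsearch on every node before restarting:

```bash
# Generate a CA and node certificates (run once, then distribute to all nodes)
/usr/share/elasticsearch/bin/elasticsearch-certutil ca
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Generate the certificate for the HTTP layer (interactive; produces http.p12)
/usr/share/elasticsearch/bin/elasticsearch-certutil http
```

Then in /etc/elasticsearch/elasticsearch.yml on every node:

```yaml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: http.p12
```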
Connecting Filebeat to ES over HTTPS
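Once the cluster speaks HTTPS, Filebeat's output needs the https scheme plus the CA certificate. A minimal filebeat.yml fragment; the host, credentials, and CA path are example values:

```yaml
output.elasticsearch:
  hosts: ["https://192.168.121.91:9200"]
  username: "elastic"
  password: "changeme"                                          # example credential
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]   # CA that signed the ES certs
```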
Connecting Logstash to ES over HTTPS
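The elasticsearch output plugin needs the same adjustments as Filebeat: https hosts, credentials, and the CA. A minimal output block; host, credentials, and CA path are example values:

```
output {
  elasticsearch {
    hosts    => ["https://192.168.121.91:9200"]
    user     => "elastic"
    password => "changeme"                      # example credential
    cacert   => "/etc/logstash/certs/ca.crt"    # CA that signed the ES certs
  }
}
```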
RBAC via Kibana
Reference:
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-privileges.html
Create a role
Create a user
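The same role and user can also be created through the security API rather than the Kibana UI. The role name, user, password, and index pattern below are illustrative:

```bash
# Create a read-only role over filebeat-* indices
curl -u elastic -X POST "http://192.168.121.91:9200/_security/role/logs_reader" \
  -H 'Content-Type: application/json' -d '
{
  "indices": [
    { "names": ["filebeat-*"], "privileges": ["read", "view_index_metadata"] }
  ]
}'
# Create a user holding that role plus Kibana access
curl -u elastic -X POST "http://192.168.121.91:9200/_security/user/zhangsan" \
  -H 'Content-Type: application/json' -d '
{
  "password": "changeme123",
  "roles": ["logs_reader", "kibana_admin"]
}'
```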
Deploying ES8
Single-node ES8 deployment
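A minimal sketch for a single-node 8.x install with the deb package. The version number is a placeholder; note that 8.x auto-enables security and TLS on first boot and prints the elastic password during installation:

```bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.x.y-amd64.deb  # 8.x.y is a placeholder
dpkg -i elasticsearch-8.x.y-amd64.deb
# For single-node mode, add to /etc/elasticsearch/elasticsearch.yml:
#   discovery.type: single-node
# and remove the auto-generated cluster.initial_master_nodes line.
systemctl enable elasticsearch --now
# If the install-time elastic password was lost, reset it:
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# Self-signed cert, so -k; prompts for the elastic password:
curl -k -u elastic https://127.0.0.1:9200
```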
Deploying Kibana 8
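A minimal sketch for the deb package, using 8.x's enrollment-token flow against the secured cluster. The version number is a placeholder:

```bash
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.x.y-amd64.deb  # 8.x.y is a placeholder
dpkg -i kibana-8.x.y-amd64.deb
# In /etc/kibana/kibana.yml, listen on all interfaces:
#   server.host: "0.0.0.0"
# On an ES node, issue an enrollment token for Kibana:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
# On the Kibana node, enroll with that token:
/usr/share/kibana/bin/kibana-setup --enrollment-token <token-from-above>
systemctl enable kibana --now
```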
ES8 cluster deployment
Common errors
Differences between ES8 and ES7
# Under the hood this is just curl against the REST API
curl --location 'http://192.168.121.21:9200/test_linux/doc' \
--header 'Content-Type: application/json' \
--data '{
    "name": "孙悟空",
    "hobby": [
        "蟠桃",
        "紫霞仙子"
    ]
}'
curl --location '192.168.121.21:9200/_bulk' \
--header 'Content-Type: application/json' \
--data '{ "create" : { "_index" : "test_linux_ss", "_id" : "1001" } }
{ "name" : "猪八戒","hobby": ["猴哥","高老庄"] }
{"create": {"_index":"test_linux_ss","_id":"1002"}}
{"name":"白龙马","hobby":["驮唐僧","吃草"]}
'
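The _bulk body is newline-delimited JSON: every action line ("create", "index", etc.) is followed by its source document on the next line, and the body must end with a trailing newline, which is why the closing quote sits on its own line. The indexed documents can be read back directly (same example host as above):

```bash
# Fetch one bulk-indexed document by ID
curl '192.168.121.21:9200/test_linux_ss/_doc/1001?pretty'
# Or search the whole index
curl '192.168.121.21:9200/test_linux_ss/_search?pretty'
```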
ES7 JVM tuning
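A minimal sketch of the usual heap override for the deb/rpm package layout: put the settings in a file under jvm.options.d rather than editing jvm.options itself, keep Xms equal to Xmx, and size the heap to at most ~50% of RAM and below ~31 GB so compressed object pointers stay enabled. The 4g value is an example:

```bash
# Create an override file; keep Xms == Xmx, <= 50% of RAM, < ~31 GB
cat > /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
-Xms4g
-Xmx4g
EOF
systemctl restart elasticsearch.service
```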