filebeat + redis + ELK Cluster Environment

ELK Setup Manual

2018-12-30


Server environment: 3 machines

10.88.120.100   node1       runs redis, nginx, elasticsearch (master), kibana

10.88.120.110   node2       runs logstash, elasticsearch (data node)

10.88.120.120   node3       runs logstash, elasticsearch (data node)

Software environment:

Software        Version
Linux Server    CentOS 7.5
Elasticsearch   6.0.0
Logstash        6.0.0
Kibana          6.0.0
Redis           5.0.3
JDK             1.8

 

Firewall: open port 6379 to the designated filebeat clients (for log shipping), and limit port 80 access to the ELK server to the company intranet.

[[email protected] elk]#   firewall-cmd --permanent --new-ipset=elk_whitelist --type=hash:ip

[[email protected] elk]#   firewall-cmd --permanent --new-ipset=web_whitelist --type=hash:ip

[[email protected] elk]#   firewall-cmd --permanent --ipset=web_whitelist --add-entry=113.61.35.0/24    ## company intranet

[[email protected] elk]#   firewall-cmd --permanent --ipset=elk_whitelist --add-entry=<filebeat client IP>

[[email protected] elk]#  firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source ipset="elk_whitelist" port port="6379" protocol="tcp" accept'

[[email protected] elk]#  firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source ipset="web_whitelist" port port="80" protocol="tcp" accept'

[[email protected] elk]#   systemctl restart firewalld

Note:

To whitelist additional filebeat clients on port 6379 later, just add their IPs to elk_whitelist, as sketched below.
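A minimal sketch, reusing the commands above (the client IP is a placeholder):

firewall-cmd --permanent --ipset=elk_whitelist --add-entry=10.20.30.40    ## hypothetical new filebeat client
firewall-cmd --reload                                                     ## apply the permanent change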

[[email protected] elk]#   vim /etc/profile

####### JDK environment variables ########################

export JAVA_HOME=/usr/local/jdk

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

######### ELK environment variables ##########

export PATH=$PATH:/usr/local/elk/elasticsearch/bin/

export PATH=$PATH:/usr/local/elk/logstash/bin/

export PATH=$PATH:/usr/local/elk/kibana/bin/

 

[[email protected] ~]# source /etc/profile

  • System tuning --- required when running a cluster

[[email protected] elk]#   vim /etc/sysctl.conf

fs.file-max = 262144

vm.max_map_count = 262144

[[email protected] elk]#   sysctl -p

[[email protected] elk]#   vim /etc/security/limits.conf

* soft nproc 262144

* hard nproc 262144

* soft nofile 262144

* hard nofile 262144

[[email protected] ~]#   ulimit -n 262144

 

Create the elk user and the corresponding directories

[[email protected] ~]#    useradd elk

[[email protected] ~]#    passwd elk

[[email protected] ~]#    mkdir /usr/local/elk

[[email protected] ~]#    mkdir /elk/es/data/ -p

[[email protected] ~]#    mkdir /elk/es/logs/ -p

[[email protected] ~]#    ls |sed "s:^:`pwd`/:"

/elk/es/data

/elk/es/logs

[[email protected] ~]#    chown -R elk.elk /elk/

[[email protected] ~]#    chown -R elk.elk /usr/local/elk/

  • JDK setup --- a JDK 8 environment is required

[[email protected] ~]#    tar -xf jdk-8u181-linux-x64.tar.gz

[[email protected] ~]#    mv jdk1.8.0_181/ /usr/local/jdk

  • Download the ELK packages into /usr/local/elk and extract them

[[email protected] ~]#   su elk

[[email protected] elk]$   cd /usr/local/elk

[[email protected] elk]$   wget  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.zip

[[email protected] elk]$   wget  https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-linux-x86_64.tar.gz

[[email protected] elk]$   wget  https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.zip

[[email protected] elk]$   unzip elasticsearch-6.0.0.zip  && unzip logstash-6.0.0.zip

[[email protected] elk]$   tar -xf kibana-6.0.0-linux-x86_64.tar.gz

[[email protected] elk]$   mv elasticsearch-6.0.0   elasticsearch

[[email protected] elk]$   mv logstash-6.0.0       logstash

[[email protected] elk]$   mv kibana-6.0.0-linux-x86_64   kibana

 

 

cat  elasticsearch/config/elasticsearch.yml  |egrep -v "^$|^#"

cluster.name: es

node.name: node1                   ## set per node, matching its hostname

path.data: /elk/es/data

path.logs: /elk/es/logs

network.host: 0.0.0.0

http.port: 9200

transport.tcp.port: 9300

node.master: true

node.data: true

discovery.zen.ping.unicast.hosts: ["node1:9300","node2:9300","node3:9300"]

discovery.zen.minimum_master_nodes: 2

action.destructive_requires_name: true

xpack.security.enabled: true
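node2 and node3 reuse this file; a minimal sketch of the per-node difference (keeping node.master: true on all three nodes is what makes discovery.zen.minimum_master_nodes: 2 a correct quorum, i.e. 3/2 + 1 = 2):

node.name: node2                   ## the only line that must differ on node2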

  • Setting reference

Setting                              Description
cluster.name                         Cluster name
node.name                            Node name
path.data                            Data directory
path.logs                            Log directory
network.host                         Node host/IP
http.port                            HTTP access port
transport.tcp.port                   TCP transport port
node.master                          Whether the node may serve as master
node.data                            Whether the node stores data
discovery.zen.ping.unicast.hosts     Initial list of master nodes in the cluster, probed when a node (master or data) starts
discovery.zen.minimum_master_nodes   Minimum number of master-eligible nodes required to form a cluster

 

  • Install the geoip and x-pack plugins

[[email protected] elk]$    source /etc/profile

[[email protected] elk]$    elasticsearch-plugin install ingest-geoip                  ## geoip module

[[email protected] elk]$    kibana-plugin install x-pack                          

[[email protected] elk]$    elasticsearch-plugin  install x-pack

[[email protected] elk]$    logstash-plugin install x-pack

[[email protected] elk]$    /usr/local/elk/elasticsearch/bin/x-pack/setup-passwords interactive

Set the passwords for the elastic, kibana, and logstash built-in users in turn

[[email protected] elk]$    vim /usr/local/elk/kibana/config/kibana.yml

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.url: "http://10.88.120.100:9200"

elasticsearch.username: "elastic"

elasticsearch.password: "qwe7410"

Notes:

  • /elk is the URL path served by the nginx reverse proxy, e.g. http://103.68.110.223/elk
  • The username must be elastic; do not use any other
  • The password is the elasticsearch x-pack password set earlier

  • Start elasticsearch and kibana

[[email protected] elk]$   nohup elasticsearch &>/dev/null &

[[email protected] elk]$   nohup kibana  &>/dev/null &

Check that elasticsearch (port 9200) and kibana (port 5601) are listening:

[[email protected] elk]$   netstat -antpu |egrep "5601|9200"

tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      27715/node         

tcp6       0      0 :::9200                 :::*                   LISTEN      26482/java      
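Optionally confirm elasticsearch actually responds (it prompts for the elastic password set earlier):

curl -u elastic 'http://127.0.0.1:9200/_cluster/health?pretty'    ## "status" should be green or yellow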

Notes:

If kibana's address was changed as above, it can take several minutes after a restart to come up; this is normal.

logstash is configured and started later, once the clients are in place.

  • Install the kibana Chinese language pack

[[email protected] elk]$    wget https://github.com/anbai-inc/Kibana_Hanization/archive/master.zip

[[email protected] elk]$    unzip master.zip && mv Kibana_Hanization-master/ KIBANA-CHINA

[[email protected] elk]$    cd  KIBANA-CHINA

[[email protected] elk]$    python main.py "/usr/local/elk/kibana"

Wait patiently; when it finishes, restart kibana.

  • Install and start redis: listen on port 6379, set a password, adjust bind --- omitted
  1. Inspect keys

127.0.0.1:6379> KEYS *

1) "john-test"

Main changes to redis.conf:

bind 0.0.0.0

requirepass qwe7410
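Once filebeat (configured below) is shipping, a quick way to confirm events are reaching redis is the length of its list key (john-test, as seen in KEYS above):

127.0.0.1:6379> LLEN john-test
(integer) 42          ## illustrative count; any value > 0 means events are arriving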

  • Configure the nginx reverse proxy
  1. Installation ---- omitted
  2. Configuration

[[email protected] elk]$   vim /etc/nginx/conf.d/nginx.conf

    server {

        listen       80;

        server_name  <ELK server IP or domain>;

        location ^~ /elk {

            rewrite /elk/(.*)$ /$1 break;

            proxy_pass http://127.0.0.1:5601/ ;

            proxy_http_version 1.1;

            proxy_set_header Upgrade $http_upgrade;

            proxy_set_header Connection 'upgrade';

            proxy_set_header Host $host;

            proxy_cache_bypass $http_upgrade;

        }

     }

Notes:

  • /elk is the kibana URL path configured earlier; the two must match
  1. Start nginx --- sketched below
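Assuming nginx was installed from the distribution packages:

nginx -t                 ## validate the configuration first
systemctl start nginx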
  • Set up the client-side log collector

Background reading: http://blog.****.net/weixin_39077573/article/details/73467712

  1. Install filebeat

[[email protected] ~]#   wget  https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-linux-x86_64.tar.gz

[[email protected] ~]#   tar -xf filebeat-6.0.0-linux-x86_64.tar.gz

[[email protected] ~]#   mv filebeat-6.0.0-linux-x86_64 /usr/local/filebeat

  1. Edit the configuration file

[[email protected] ~]#   cd /usr/local/filebeat

[[email protected] filebeat]#   cp filebeat.yml filebeat.yml.bak

[[email protected] filebeat]#   vim filebeat.yml

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
    - /var/log/nginx/error.log
  tail_files: true
  fields:
    input_type: log
    tag: nginx-log
- type: log
  enabled: true
  paths:
    - "/home/website/logs/manager.log"
  tail_files: true
  fields:
    input_type: log
    tag: domainsystem-log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.redis:
  hosts: ["10.88.120.100:6379"]
  data_type: "list"
  password: "qwe7410"
  key: "john-test"
  db: 0

A second example from a production host:

filebeat.prospectors:
#### nginx #######
- type: log
  enabled: true
  paths:
    - /opt/bet/logs/nginx/web_access*.log
    - /opt/ylgj2/logs/nginx/web_access*.log
  fields:
    log_type: hbet_cyl_nginx_log
  ignore_older: 24h              ### only collect data from the last 24 hours

#================================ Outputs =====================================

output.redis:
  hosts: ["103.68.110.223:17693"]
  data_type: "list"
  password: "9tN6GFGK60Jk8BNkBJM611GwA66uDFeG①"
  key: "hbet_cyl_nginx②"
  db: 0                                             # redis database number

 

 

Notes:

  • ① is the redis password on the ELK server
  • ② is the redis list key; it must match the key in the logstash input
  1. Start filebeat

[[email protected] filebeat]# nohup ./filebeat -e -c filebeat.yml &>/dev/null &

Check that it started successfully:

[[email protected] filebeat]# ps aux |grep filebeat

root       2808  0.0  0.3 296664 13252 pts/0    Sl   22:27   0:00 ./filebeat -e -c filebeat.yml

 

  • elasticsearch settings

The settings API can be used to change index configuration, e.g. the number of replicas and shards.

http://blog.****.net/tanga842428/article/details/60953579
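A sketch of changing the replica count through this API (index name taken from the example below; number_of_shards is fixed at index creation, only number_of_replicas can be changed on a live index):

curl -XPUT 'http://127.0.0.1:9200/nginx-2018.03.13/_settings' -u elastic -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 0}}'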

  1. View index, replica, and shard information via curl or a browser

[[email protected] ~]$ curl -XGET http://127.0.0.1:9200/nginx-2018.03.13/_settings?pretty  -u elastic

 { "nginx-2018.03.13" : {

    "settings" : {

      "index" : {

        "creation_date" : "1520899209420",

        "number_of_shards" : "5",

        "number_of_replicas" : "1",

        "uuid" : "tGs5EcupT3W-UX-w38GYFg",

        "version" : {

          "created" : "6000099"

        },

        "provided_name" : "nginx-2018.03.13"

      }

    }

  }

}

 

Note: number_of_shards ---- shard count   number_of_replicas -- replica count   provided_name -- index name

  1. Viewing mappings

https://www.cnblogs.com/zlslch/p/6474424.html

[[email protected]]$curl -XGET http://192.168.175.223:9200/java_log-2018.03.23/_mappings?pretty -u elastic

 

  • Configure logstash to process the filebeat clients' logs
  • Create logstash's config directory and data directories

[[email protected] ~]$ mkdir /opt/apps/elk/logstash/conf.d/

[[email protected] ~]$ mkdir /opt/apps/elk/logstash/data/hbet_cyl_nginx/

Notes:

The conf.d directory holds the logstash configuration files.

Each new directory under data serves as a separate store per site, so multiple logstash instances can run side by side; see the sketch below.
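A sketch of starting one instance against its own data directory (two instances cannot share the same path.data):

logstash --path.data /opt/apps/elk/logstash/data/hbet_cyl_nginx -f /opt/apps/elk/logstash/conf.d/hbet_cyl_nginx.conf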

  • Write the logstash configuration: parse the filebeat clients' logs and push them to elasticsearch
  1. Filter for a single log file

[[email protected] ~]$ vim /opt/apps/elk/logstash/conf.d/hbet_cyl_nginx.conf

input {

  redis {

    data_type => "list"

    password => "9tN6GFGK60Jk8BNkBJM611GwA66uDFeG"

    key => "hbet_cyl_nginx①"

    host => "127.0.0.1"

    port => 17693

    threads => 5

  }

}

filter {

 grok {

   match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]

   overwrite => [ "message" ]

 }

 mutate {

   convert => ["response", "integer"]

   convert => ["bytes", "integer"]

   convert => ["responsetime", "float"]

 }

 geoip {

   source => "clientip"

   target => "geoip"

   add_tag => [ "nginx-geoip" ]

 }

 date {

   match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]

   remove_field => [ "timestamp" ]

 }

 useragent {

   source => "agent"

 }

}

output {

    elasticsearch {

        hosts => ["127.0.0.1:9200"]

        index => "hbet_cyl_test_nginx②"

        user => "elastic"

        password => "Passw0rd③"

    }

    stdout { codec => "rubydebug" }

}

 

Notes:

  • ① the redis key shared with the filebeat client; must match the earlier configuration
  • ② the index kibana will later use to browse these logs; use one uniform naming scheme so each site's logs can be told apart
  • ③ the elastic x-pack password set earlier; must match
  1. Filters for multiple log files

参考:https://discuss.elastic.co/t/filter-multiple-different-file-beat-logs-in-logstash/76847/4 

input {

    file {

        path => "/opt/src/log_source/8hcp/gameplat_work.2018-03-23-13.log"

        start_position => "beginning"

        type => "8hcp-gameplat_work-log"

        ignore_older => 0

    }

    file {

        path => "/opt/src/log_source/8hcp/tomcat_18001/catalina.out"

        start_position => "beginning"

        type => "8hcp-tomcat8001-log"

        ignore_older => 0

    }

    file {

        path => "/opt/src/log_source/8hcp/nginx/web_access.log"

        start_position => "beginning"

        type => "8hcp-nginx-log"

        ignore_older => 0

    }

}

filter {

 if ([type] =~ "gameplat" or [type] =~ "tomcat") {

     mutate {

         "remove_field" => ["beat", "host", "offset", "@version"]

     }

     grok {

         match => { "message" => "%{COMBINEDAPACHELOG}" }

     tag_on_failure => []

       }

     date {

       match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]

     }

   }

 else if ([type] =~ "nginx") {

     grok {

       match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]

       overwrite => [ "message" ]

     }

     mutate {

       convert => ["response", "integer"]

       convert => ["bytes", "integer"]

       convert => ["responsetime", "float"]

       "remove_field" => ["beat", "host", "offset", "@version"]

     }

     geoip {

       source => "clientip"

       target => "geoip"

       database => "/opt/apps/elk/logstash/geoData/GeoLite2-City_20180306/GeoLite2-City.mmdb"

       add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]

       add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]

     }

     date {

       match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]

       remove_field => [ "timestamp" ]

     }

     useragent {

       source => "agent"

     }

}

}

output {

 if ([type] =~ "gameplat") {

    elasticsearch {

        hosts => ["192.168.175.241:9200"]

        index => "gameplat-%{+YYYY.MM.dd}"

        user => "elastic"

        password => "Passw0rd!**yibo"

    }

 }

 else if ([type] =~ "tomcat") {

    elasticsearch {

        hosts => ["192.168.175.241:9200"]

        index => "tomcat-%{+YYYY.MM.dd}"

        user => "elastic"

        password => "Passw0rd!**yibo"

    }

 }

 else if ([type] =~ "nginx") {

    elasticsearch {

        hosts => ["192.168.175.241:9200"]

        index => "logstash-nginx-%{+YYYY.MM.dd}"

        user => "elastic"

        password => "Passw0rd!**yibo"

    }

 }

 stdout {codec => rubydebug}

}

Notes:

To add a timestamp to the index name: index => "%{type}-%{+YYYY.MM.dd}"

 

 

The full configuration for this guide's environment (redis input, tag-based routing; filebeat puts the tag under [fields][tag], so the conditionals test that field):

input {
  redis {
    data_type => "list"
    password => "qwe7410"
    key => "john-test"
    host => "10.88.120.100"
    port => 6379
    threads => 5
    db => 0
  }
}

filter {
  if [fields][tag] == "nginx-log" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
      overwrite => [ "message" ]
    }
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "nginx-geoip" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    useragent {
      source => "agent"
    }
  }
  if [fields][tag] == "domainsystem-log" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
      overwrite => [ "message" ]
    }
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "nginx-geoip" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    useragent {
      source => "agent"
    }
  }
}

output {
  if [fields][tag] == "nginx-log" {
    elasticsearch {
      hosts => ["10.88.120.100:9200", "10.88.120.110:9200", "10.88.120.120:9200"]
      index => "nginx"
      document_type => "%{type}"
      user => elastic
      password => qwe7410
    }
  }
  if [fields][tag] == "domainsystem-log" {
    elasticsearch {
      hosts => ["10.88.120.100:9200", "10.88.120.110:9200", "10.88.120.120:9200"]
      index => "domainsystem"
      document_type => "%{type}"
      user => elastic
      password => qwe7410
    }
  }
  stdout { codec => rubydebug }
}

  1. filter plugin usage

Reference: https://www.jianshu.com/p/d469d9271f19

The built-in grok patterns live under: <logstash home>/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns

mutate --- remove useless fields:

mutate {
  remove_field => [ "message", "@version" ]
}

mutate {
  "remove_field" => ["beat", "host", "offset", "@version"]
}

 

mutate --- add a field:

mutate {
  add_field => {
    "web_log" => "%{[fields][web_log]}"
  }
}

 

  • Start logstash on the ELK server

[[email protected] logstash]$   nohup logstash -f config/input-output.conf &>/dev/null &

Wait a moment, then check that it started:

[[email protected] logstash]$   ps aux |grep logstash

  • Log in to kibana and view the logs
  1. Browse to http://10.88.120.100:5601

Login user: elastic; password: the one set earlier

  1. Create an index pattern

Management ---> Kibana: Index Patterns ---> Create index pattern (screenshot omitted)


 

  1. Click Discover to view the log contents (screenshot omitted)


  • ELK optimization

Tuning guides:

Deleting data in a time range: https://juejin.im/post/58e5de06ac502e006c254145

ELK platform performance tuning: http://www.th7.cn/db/nosql/201708/250381.shtml

http://blog.****.net/jiao_fuyou/article/details/49783861

Logstash tuning: http://blog.****.net/ypc123ypc/article/details/78033142   https://yq.aliyun.com/articles/413002

elasticsearch tuning: https://www.jianshu.com/p/29ffce0850af

Installing pv (pipe viewer): http://blog.****.net/u011478909/article/details/52584935

Tuning with concrete steps: https://zhuanlan.zhihu.com/p/30888923

 

  • elasticsearch optimization
  1. Index management via curl
  1. List all indices

[[email protected] ~]$ curl -XGET http://127.0.0.1:9200/_cat/indices -u elastic

  1. Delete a specific index

[[email protected] ~]$ curl -XDELETE "http://127.0.0.1:9200/hbet_tomcat_9002"  -u elastic

Notes:

Get the date 7 days ago: [[email protected] ~]$ date +%Y.%m.%d --date="-7 day"

Curator --- deleting indices: https://zhuanlan.zhihu.com/p/30888923

Index deletion script:

#!/bin/bash
# Delete yesterday's indices; expect answers curl's password prompt for the elastic user.
valite_date=$(date +%Y.%m.%d --date="-1 day")
elastic_ip=192.168.175.241
elastic_port=9200
elastic_user=elastic
elastic_pass=Passw0rd\!\*\*yibo

function delete_index(){
expect << EOF
 set timeout 2
 spawn curl -XDELETE http://$elastic_ip:$elastic_port/$1 -u $elastic_user
 expect {
     "elastic\'\:" { send "$elastic_pass\r" }
     }
expect eof
EOF
}

##### Delete the specified indices
delete_index gameplat-$valite_date
delete_index tomcat-$valite_date
delete_index logstash-nginx-$valite_date

  1. View the thread pool

curl -XGET 'http://localhost:9200/_nodes/stats?pretty'  -u elastic

  1. View cluster information

curl 'http://127.0.0.1:9200/_cluster/health?pretty'  -u elastic

  1. Periodically delete index data

Reference: https://juejin.im/post/58e5de06ac502e006c254145
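A sketch of scheduling the deletion script above with cron (the script path is a placeholder):

0 3 * * * /bin/bash /opt/scripts/delete_index.sh    ## daily at 03:00; adjust the path to wherever the script lives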

 

  1. elasticsearch cluster setup

References:

https://www.felayman.com/articles/2017/12/12/1513085668561.html

http://cwiki.apachecn.org/pages/viewpage.action?pageId=4882617

https://www.zybuluo.com/tinadu/note/516453

Node 1 --- master

[[email protected] ~]$ cd /opt/apps/elk/elasticsearch/

[[email protected] ~]$ cp config/elasticsearch.yml config/elasticsearch.yml.bak

[[email protected] elasticsearch]$ grep -v \# config/elasticsearch.yml

cluster.name: my-application

node.name: node-1

path.data: /opt/apps/elk/elasticsearch/data/

path.logs: /opt/apps/elk/elasticsearch/logs/

network.host: 103.68.110.227

http.port: 9200

transport.tcp.port: 9600

node.master: true                    

discovery.zen.ping.unicast.hosts: ["103.68.110.227:9600", "103.68.110.242:9601"]

discovery.zen.minimum_master_nodes: 1

Generate the x-pack certificates

Cluster nodes: 103.68.110.223, 103.68.110.225, 103.68.110.227, 103.68.110.242

Reference: https://segmentfault.com/a/1190000012789290

[[email protected] es_crt]$ cd /opt/apps/elk/elasticsearch/config/

[[email protected] elk]$ /opt/apps/elk/elasticsearch/bin/x-pack/certgen

Enter in turn: my_cluster.zip ---> my_cluster ---> my_cluster ---> 103.68.110.242 ---> enter ---> enter

[[email protected] config]$ unzip my_cluster.zip

[[email protected] ~]$ vim /opt/apps/elk/elasticsearch/config/elasticsearch.yml   ## append at the end of the file

################ x-pack related settings

###### to disable x-pack:

##xpack.security.enabled: false

###### configure the following on ALL nodes; the key/certificate paths must be correct

xpack.ssl.key: my_cluster/my_cluster.key

xpack.ssl.certificate: my_cluster/my_cluster.crt

xpack.ssl.certificate_authorities: ca/ca.crt

xpack.security.transport.ssl.enabled: true

[[email protected]_01 config]$ ../bin/x-pack/setup-passwords interactive  ### reset the x-pack passwords

Node 2

[[email protected] ~]$ cd /opt/apps/elk/elasticsearch/

[[email protected] ~]$ cp config/elasticsearch.yml config/elasticsearch.yml.bak

[[email protected] elasticsearch]$ grep -v \# config/elasticsearch.yml

cluster.name: my-application

node.name: node-2

path.data: /opt/apps/elk/elasticsearch/data/

path.logs: /opt/apps/elk/elasticsearch/logs/

network.host: 103.68.110.242

http.port: 9201

transport.tcp.port: 9601

discovery.zen.ping.unicast.hosts: ["103.68.110.227:9600", "103.68.110.242:9601"]

discovery.zen.minimum_master_nodes: 1

Copy the my_cluster and ca directories from node 1 into /opt/apps/elk/elasticsearch/config/ on this node, e.g.:
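A sketch using scp, run on node 2 (the source host is node 1's address from the config above):

scp -r 103.68.110.227:/opt/apps/elk/elasticsearch/config/my_cluster /opt/apps/elk/elasticsearch/config/
scp -r 103.68.110.227:/opt/apps/elk/elasticsearch/config/ca /opt/apps/elk/elasticsearch/config/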

  1. Other optimizations
  1. Logging optimization

[[email protected]]$ vim log4j2.properties

logger.index_search_slowlog_rolling.level = info      ## default is trace

 

  1. Running multiple elasticsearch instances

https://my.oschina.net/u/3470972/blog/1586637

http://knktc.com/2016/06/10/elasticsearch-multiple-instances/

 

elasticsearch -Epath.conf=/opt/apps/elk/elasticsearch/config/my_cluster/  -Ecluster.name=my_cluster -Enode.name=node_2

 

  1. Exposing elasticsearch to the external network

[[email protected] ~]$ vim /opt/apps/elk/elasticsearch/config/elasticsearch.yml

transport.host: localhost

network.host: 0.0.0.0

 

[[email protected]]$ vim /opt/apps/elk/elasticsearch/config/jvm.options

-Xms5g

-Xmx5g

Note: give the heap 1/4 to 1/2 of physical memory

 

[[email protected]]$ vim /opt/apps/elk/elasticsearch/bin/elasticsearch

ES_JAVA_OPTS="-Xms6g -Xmx6g"

 

 

 

Complete tuning guide:

https://www.cnblogs.com/ningskyer/articles/5788667.html

  1. Disable x-pack features   --- when necessary

[[email protected] ~]$ vim /opt/apps/elk/elasticsearch-01/config/elasticsearch.yml

xpack.security.enabled: false

 

  • Logstash optimization
  1. Run multiple pipelines via pipelines.yml

References:

http://blog.****.net/ypc123ypc/article/details/78033142 

http://blog.****.net/ypc123ypc/article/details/69945031

http://blog.51niux.com/?id=205

[[email protected]]$ vim /opt/apps/elk/logstash/config/pipelines.yml

- pipeline.id: tomcat_log

  queue.type: persisted

  path.config: "/opt/apps/elk/logstash/conf.d/elk_tomcat.conf"

  pipeline.workers: 6

  pipeline.batch.size: 1000

  pipeline.batch.delay: 5

- pipeline.id: nginx_log

  path.config: "/opt/apps/elk/logstash/conf.d/elk_nginx.conf"

  pipeline.workers: 4

  pipeline.batch.size: 800

  pipeline.batch.delay: 5

- pipeline.id: gameplat_log

  queue.type: persisted

  path.config: "/opt/apps/elk/logstash/conf.d/elk_gameplat.conf"

  pipeline.batch.size: 1000

  pipeline.batch.delay: 5
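With pipelines.yml in place, start logstash without -f (passing -f or -e would make it ignore pipelines.yml; this assumes logstash runs with its default --path.settings so config/pipelines.yml is picked up):

nohup logstash &>/dev/null &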

 

  1. Memory and thread tuning

[[email protected]]$ vim /opt/apps/elk/logstash/config/jvm.options

-Xms6g

-Xmx6g

Logstash parameter table

 #   Parameter          Category     Description
 1   LS_HEAP_SIZE       LS           Logstash heap size, default 1g
 2   -w                 LS startup   Number of worker threads, defaults to the CPU count
 3   -b                 LS startup   Batch size, i.e. how many events logstash takes per filter pass, default 125
 4   redis.threads      LS input     Redis input threads, default 1
 5   redis.batch_count  LS input     Number of events popped from redis per call, default 1
 6   es.workers         LS output    ES submit threads, default 1
 7   es.flush_size      LS output    ES bulk flush size
 8   -l                 LS startup   Write a log file
 

Notes:

Start logstash with pv to gauge throughput:

logstash -f logstash_dots_zzm1.conf  -l ./logstash_zzm1.log -b 8000 | pv -abt >/dev/null

 

Further index optimization: route everything with a single conditional, e.g. if [fields][source] =~ "gameplat"

  • Common x-pack parameter tuning

Reference: https://www.felayman.com/articles/2017/12/12/1513085668561.html

 

  • ELK security alerting

References:

https://xizhibei.github.io/2017/11/19/alerting-with-elastalert/

https://github.com/xuyaoqiang/elastalert-dingtalk-plugin   (DingTalk alerting plugin)

http://ksowo.com/2018/02/01/ELK%E6%8E%A5%E6%94%B6paloalto%E6%97%A5%E5%BF%97%E5%B9%B6%E7%94%A8%E9%92%89%E9%92%89%E5%91%8A%E8%AD%A6/

DingTalk: create a group ---> group settings ---> group bots ---> add a bot ---> edit

Webhook of the created DingTalk group: https://oapi.dingtalk.com/robot/send?access_token=db5c6b508ee0ffb30dfa9dc88589582f9fe5f0904def7ec8bcb4fb1c597cb436

sudo pip install setuptools --upgrade
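A minimal elastalert rule sketch wired to the plugin above; the alerter class path and the dingtalk_webhook field name follow that plugin's README and should be verified against the repo before use:

# rules/nginx_5xx.yml --- hypothetical rule: fire when 5xx responses spike
name: nginx-5xx-spike
type: frequency                     # built-in elastalert rule type
index: logstash-nginx-*             # index pattern created earlier
num_events: 50                      # fire when 50 matching events occur...
timeframe:
  minutes: 5                        # ...within 5 minutes
filter:
- range:
    response:
      gte: 500
alert:
- "elastalert_modules.dingtalk_alert.DingTalkAlerter"   # class name assumed from the plugin repo
dingtalk_webhook: "https://oapi.dingtalk.com/robot/send?access_token=db5c6b508ee0ffb30dfa9dc88589582f9fe5f0904def7ec8bcb4fb1c597cb436"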

 

  • GeoIP database

Download the GeoIP data for logstash:

[[email protected]]$ cd /opt/apps/elk/logstash/ && mkdir geoData/ 

[[email protected] logstash]$ cd geoData && wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz 
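Unpack the archive before pointing logstash at the .mmdb (the dated directory name matches the 20180306 release referenced in the config below):

tar -xf GeoLite2-City.tar.gz    ## yields GeoLite2-City_<date>/GeoLite2-City.mmdb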

[[email protected] logstash]$ vim  /opt/apps/elk/logstash/conf.d/elk_nginx.conf

.....

     geoip {

       source => "clientip"

       target => "geoip"

       database => "/opt/apps/elk/logstash/geoData/GeoLite2-City_20180306/GeoLite2-City.mmdb"

       add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]

       add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]

     }

....

        index => "logstash-nginx-%{+YYYY.MM.dd}"

...

Note: the index name must start with logstash, so that elasticsearch's default logstash template (which maps geoip.location as geo_point) applies to it

  • Common errors
  1. Remove _grokparsefailure from tags

    grok {

      match => { "message" => "%{COMBINEDAPACHELOG}" }

      tag_on_failure => []

      }