CentOS 7 -- Monitoring nginx logs with EFK (Elasticsearch + Filebeat + Kafka)
Master: 192.168.227.170
Slave:  192.168.227.171
Slave:  192.168.227.173
Install ZooKeeper and Kafka on all three nodes.
Synchronize the time on all three nodes.
Set the hostnames.
Configure the /etc/hosts file on all three nodes.
Verify that the nodes can ping each other.
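A minimal sketch of the prep, assuming chrony for time sync and the hostnames kafka01-03 (both are this guide's assumptions; pick your own):
yum -y install chrony
systemctl enable chronyd && systemctl start chronyd
hostnamectl set-hostname kafka01      # kafka02 / kafka03 on the other two nodes
cat >> /etc/hosts << EOF
192.168.227.170 kafka01
192.168.227.171 kafka02
192.168.227.173 kafka03
EOF
ping -c 3 kafka02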
1. Install the Java environment
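OpenJDK 1.8 from the base repo is enough for ZooKeeper and Kafka:
yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version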
2. Install ZooKeeper
Append the following three lines at the bottom of conf/zoo.cfg:
server.1=192.168.227.170:2888:3888
server.2=192.168.227.171:2888:3888
server.3=192.168.227.173:2888:3888
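Each node also needs a myid file matching its server.N number; a sketch assuming dataDir=/tmp/zookeeper in zoo.cfg:
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid     # use 2 on 192.168.227.171 and 3 on 192.168.227.173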
Start ZooKeeper
Check the status: of the three nodes, one should report Mode: leader and the other two Mode: follower.
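Assuming ZooKeeper is unpacked under /usr/local/zookeeper:
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status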
3. Install Kafka
Edit config/server.properties on the first, second, and third node (a per-node sketch follows).
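A sketch of config/server.properties for the first node; on the second and third nodes change broker.id to 1 and 2 and set the listener IP to match:
broker.id=0
listeners=PLAINTEXT://192.168.227.170:9092
zookeeper.connect=192.168.227.170:2181,192.168.227.171:2181,192.168.227.173:2181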
Start Kafka
Check the logs
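Using the bundled scripts (install path per the commands below):
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
tail -f /usr/local/kafka/logs/server.log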
Simulate a producer and a consumer to verify the cluster
Create a topic
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.227.170:2181 --replication-factor 2 --partitions 3 --topic wg001
Producer
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.227.170:9092 --topic wg001
Consumer
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.227.170:9092 --topic wg001 --from-beginning
Verify: lines typed into the producer terminal should appear in the consumer terminal.
4. Install Filebeat
Configure the Filebeat yum repository (e.g. /etc/yum.repos.d/filebeat.repo):
[filebeat-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
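With the repo in place:
yum -y install filebeat
systemctl enable filebeat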
Monitor a single log file
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/messages
#================================ Outputs =====================================
#------------------------------ Kafka output ----------------------------------
output.kafka:
  enabled: true
  # Array of hosts to connect to.
  hosts: ["192.168.227.170:9092","192.168.227.171:9092","192.168.227.173:9092"]
  topic: messages
Monitor two log files (a config sketch follows)
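A minimal sketch, assuming the second log is /var/log/secure (swap in any path) and using a fields.log_topics value per input to route each log to its own Kafka topic; the Logstash filter below removes these routing fields again:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    log_topics: messages
- type: log
  enabled: true
  paths:
    - /var/log/secure
  fields:
    log_topics: secure
output.kafka:
  enabled: true
  hosts: ["192.168.227.170:9092","192.168.227.171:9092","192.168.227.173:9092"]
  topic: '%{[fields.log_topics]}'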
Start Filebeat
Consume the topic to confirm delivery
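For example (service name per the rpm install; the consumer can read the messages topic from any broker):
systemctl start filebeat
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.227.170:9092 --topic messages --from-beginning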
5. Install Elasticsearch
Check the Elasticsearch log
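A sketch using the same 6.x Elastic repo; network.host and the node choice (192.168.227.173, matching the Logstash output below) are this guide's assumptions, and the log file name follows cluster.name (elasticsearch by default):
yum -y install elasticsearch
# in /etc/elasticsearch/elasticsearch.yml:
#   network.host: 0.0.0.0
systemctl start elasticsearch
tail -f /var/log/elasticsearch/elasticsearch.log
curl http://192.168.227.173:9200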
6. Install Logstash
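From the same Elastic 6.x repo:
yum -y install logstash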
7. Install Kibana
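A sketch; on 6.x the Elasticsearch address is set via elasticsearch.url (renamed elasticsearch.hosts from 6.6 on), and the ES host is the assumption from above:
yum -y install kibana
# in /etc/kibana/kibana.yml:
#   server.host: "0.0.0.0"
#   elasticsearch.url: "http://192.168.227.173:9200"
systemctl start kibana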
Visit http://ip:5601
On top of the setup above, monitor the nginx logs
Install nginx first
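On CentOS 7, nginx ships in EPEL:
yum -y install epel-release
yum -y install nginx
systemctl start nginx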
Stress-test nginx so that it generates log entries
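ab comes with httpd-tools; the target is whichever node serves nginx (assumed to be .170 here):
yum -y install httpd-tools
ab -n 1000 -c 1000 http://192.168.227.170/index.html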
Create the nginx topic
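Same form as the wg001 test topic:
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.227.170:2181 --replication-factor 2 --partitions 3 --topic nginx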
Create the nginx producer (a Filebeat input that ships the access log to the nginx topic; sketch below)
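A sketch of the extra Filebeat input, reusing the fields.log_topics routing from the two-log example; restart Filebeat afterwards:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    log_topics: nginx
systemctl restart filebeat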
Check the nginx consumer to confirm that data is arriving
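For example, against any broker:
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.227.170:9092 --topic nginx --from-beginning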
Edit nginx.conf for Logstash (the full file is shown below)
Restart Logstash and check its log for errors
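With the rpm install:
systemctl restart logstash
tail -f /var/log/logstash/logstash-plain.log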
Visit http://ip:5601
If the index pattern cannot be created, stress-test nginx with ab -n 1000 -c 1000 http://ip/index.html so that nginx generates fresh log entries.
Nginx grok pattern
NGINXACCESS %{IPORHOST:client_ip} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
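For %{NGINXACCESS} to resolve, the pattern must be on grok's search path; one way (the directory is this guide's assumption, matched by the patterns_dir setting in the config below) is a custom patterns file:
mkdir -p /etc/logstash/patterns
# save the NGINXACCESS line above as /etc/logstash/patterns/nginx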
Test in debug mode
Start Logstash with the nginx.conf below; uncomment the stdout block to dump parsed events to the console:
input {
  kafka {
    bootstrap_servers => "192.168.227.170:9092,192.168.227.171:9092,192.168.227.173:9092"
    group_id => "logstash"
    topics => ["nginx"]
    consumer_threads => 5
  }
}
filter {
  # unwrap the Filebeat JSON envelope first
  json {
    source => "message"
  }
  # then parse the raw access-log line with the custom pattern
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{NGINXACCESS}" }
  }
  # drop Filebeat metadata and fields we do not need in the index
  mutate {
    remove_field => ["version","auth","log","prospector","input","offset","http_version","fields","log_topics","beat","ident","source","host"]
  }
}
output {
  elasticsearch {
    hosts => "192.168.227.173:9200"
    index => "nginx-%{+YYYY.MM.dd}"
  }
  #stdout {
  #  codec => rubydebug
  #}
}
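To run the debug test, uncomment the stdout block and start Logstash in the foreground (rpm-install paths assumed):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf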