Analyzing Nginx Logs with ELK and Visualizing Them in Grafana

 

1. Overview

Use ELK to collect the nginx access logs and build a set of visualization dashboards in Grafana.

 

2. Environment Preparation

Environment

OS: CentOS 7.6

Docker version: 19.03.12

IP address: 192.168.31.196

 

ELK setup

For setting up ELK, refer to the following three articles:

Installing Elasticsearch and the head plugin with Docker

Installing Logstash with Docker

Installing Kibana with Docker

 

Nginx installation

The production nginx is installed directly with yum:

yum install -y nginx

 

3. Nginx Log Format

The default nginx log format needs to be replaced with the JSON format below. Be careful with the '"request_body":"$request_body",' line: if video uploads pass through nginx, this field keeps writing hex-encoded bodies, the log file balloons, the log collector's memory usage grows, and the server can be slowed down or even crash. (A mitigation sketch follows the configuration below.)

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    log_format aka_logs
                      '{"@timestamp":"$time_iso8601",'
                      '"host":"$hostname",'
                      '"server_ip":"$server_addr",'
                      '"client_ip":"$remote_addr",'
                      '"xff":"$http_x_forwarded_for",'
                      '"domain":"$host",'
                      '"url":"$uri",'
                      '"referer":"$http_referer",'
                      '"args":"$args",'
                      '"upstreamtime":"$upstream_response_time",'
                      '"responsetime":"$request_time",'
                      '"request_method":"$request_method",'
                      '"status":"$status",'
                      '"size":"$body_bytes_sent",'
                      '"request_body":"$request_body",'
                      '"request_length":"$request_length",'
                      '"protocol":"$server_protocol",'
                      '"upstreamhost":"$upstream_addr",'
                      '"file_dir":"$request_filename",'
                      '"http_user_agent":"$http_user_agent"'
    '}';

    #access_log  /var/log/nginx/access.log  main;
    access_log  /var/log/nginx/access.log  aka_logs;
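
If you need to keep request_body in the format but want to avoid the runaway logging described above, one mitigation is an nginx map that blanks the body for binary uploads. A minimal sketch, assuming the $log_request_body variable name and the matched content types (adjust them to your traffic):

map $http_content_type $log_request_body {
    # log the request body only for "normal" content types;
    # blank it for uploads that would otherwise flood the log
    default            $request_body;
    ~*multipart/       "";
    ~*video/           "";
    ~*octet-stream     "";
}

The map goes in the http block; then reference "$log_request_body" instead of "$request_body" in the aka_logs format.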

Note: in this setup all nginx access logs go to a single file, /var/log/nginx/access.log.

If your environment gives each virtual host its own access log, just apply the aka_logs format in the corresponding virtual host configuration file, as in the sketch below.
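
A hedged example of such a virtual host file (the server name and log path are placeholders):

server {
    listen       80;
    server_name  example.com;

    # write this vhost's access log in the JSON format defined above
    access_log  /var/log/nginx/example.com.access.log  aka_logs;
}

If you do this, also point the logstash file input below at the new path (a glob such as /var/log/nginx/*.access.log works too).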

 

4. Logstash Configuration

Note: this article uses logstash itself to read the nginx logs and ship them to elasticsearch.

If you use filebeat to collect the nginx logs instead, see the configuration notes in the reference articles, or the filebeat sections later in this post.

nginx.conf

Create a new file nginx.conf at:

/data/elk7/logstash/config/conf.d/nginx.conf

The content is as follows:

input {
  file {
        ## change this to the nginx log path in your environment
        path => "/var/log/nginx/access.log"
        ignore_older => 0
        codec => json
    }
}

filter {
  geoip {
    #multiLang => "zh-CN"
    target => "geoip"
    source => "client_ip"
    database => "/usr/share/logstash/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    # drop redundant geoip fields
    remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]"]
  }
  mutate {
    convert => [ "size", "integer" ]
    convert => [ "status", "integer" ]
    convert => [ "responsetime", "float" ]
    convert => [ "upstreamtime", "float" ]
    convert => [ "[geoip][coordinates]", "float" ]
    # drop fields we don't need; anything removed here can no longer be used in conditionals or in the ES output
    remove_field => [ "ecs","agent","host","cloud","@version","input","logs_type" ]
  }
  # parse http_user_agent to identify the client OS, browser and version
  useragent {
    source => "http_user_agent"
    target => "ua"
    # drop useragent fields we don't need
    remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
  }
}
output {
  elasticsearch {
    hosts => ["172.16.0.7:9200"]
    #user => "elastic"
    #password => "password"
    index => "logstash-nginx-%{+YYYY.MM.dd}"
  }
}

Note two places:

/usr/share/logstash/GeoLite2-City.mmdb: this is the GeoIP database file.

elasticsearch {...}: fill in your elasticsearch connection details here. My elasticsearch does not have authentication enabled, so no username or password is needed; adjust to your environment.
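
To confirm that elasticsearch is reachable at that address before starting logstash, a quick check from the docker host (add -u elastic:password if authentication is enabled):

curl http://172.16.0.7:9200
curl http://172.16.0.7:9200/_cluster/health?pretty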

 

GeoLite2-City.mmdb

GeoLite2-City.mmdb provides IP lookup and geolocation. The official download requires payment, so I found a free download link.

 

Download link 1: https://pan.baidu.com/s/1sjtdvPV

Backup download link:

Link: https://pan.baidu.com/s/1eJkDqq2nvV3vETyUfOBypw
Extraction code: 2jq5

 

After downloading, upload the file to the /data/elk7/logstash directory; when logstash is started below, that directory (and the database with it) is mounted into the container.
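
Optionally, sanity-check the downloaded database with mmdblookup from the libmaxminddb package (an extra tool, not part of ELK; install it separately if you want this check):

mmdblookup --file /data/elk7/logstash/GeoLite2-City.mmdb --ip 8.8.8.8 country names en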

 

Start logstash with the new configuration:

docker rm -f logstash

docker run -d \
  --name=logstash \
  --restart=always \
  -p 5044:5044 \
  -v /data/elk7/logstash:/usr/share/logstash \
  -v /var/log/messages:/var/log/messages \
  -v /var/log/nginx:/var/log/nginx \
  logstash:7.5.1

Wait about 30 seconds, then check the logstash logs for errors:

docker logs -f logstash

 

Open the head plugin and check whether the index has been created:

http://192.168.31.196:9100/

If a logstash-nginx index shows up, it worked.
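
You can also check directly against the elasticsearch API instead of the head plugin (adjust the address to your elasticsearch host):

curl 'http://192.168.31.196:9200/_cat/indices?v' | grep nginx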

 

5. Grafana Configuration

For installing grafana, see:

https://chuna2.787528.xyz/xiao987334176/p/9930517.html

Add a data source

Add an elasticsearch data source and enter the elasticsearch URL.

If elasticsearch requires authentication, enable Basic auth in the Auth settings below it and enter the username and password.

 

Enter the index pattern and the time field name, and select version 7.0+.

 

If the test passes, the data source is added successfully.
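
If you prefer to configure the data source as code instead of clicking through the UI, grafana can also provision it from a YAML file. A minimal sketch, assuming the standard provisioning path inside the container and the data source name ES-nginx (restart grafana after adding it):

# /etc/grafana/provisioning/datasources/elasticsearch.yaml
apiVersion: 1
datasources:
  - name: ES-nginx
    type: elasticsearch
    access: proxy
    url: http://192.168.31.196:9200
    database: "logstash-nginx-*"   # index pattern
    jsonData:
      timeField: "@timestamp"
      esVersion: 70                # corresponds to "7.0+" in the UI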

 

Install plugins

Enter the grafana container and install two plugins that the dashboard's panels depend on:

grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install grafana-worldmap-panel
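
If you would rather not open a shell in the container, the same can be done from the host (assuming the container is named grafana, as in the install article referenced above):

docker exec -it grafana grafana-cli plugins install grafana-piechart-panel
docker exec -it grafana grafana-cli plugins install grafana-worldmap-panel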

Restart grafana:

docker restart grafana

 

Import the dashboard template

Template download address: https://grafana.com/grafana/dashboards/11190

Download the latest version, import the JSON file, and select the two data sources.

 

 

Check the result

Refresh the page; the dashboard looks like this:

 

####### Collecting with Filebeat #######

Install filebeat (as a docker compose service):

services:
  filebeat:
    container_name: filebeat  # container name (same as --name in a docker run)
    image: elastic/filebeat:7.5.1     # image version, matching the rest of the stack
    network_mode: host
    volumes:
      - /data/elk7/filebeat:/usr/share/filebeat
      - /data/nginx-docker/nginx/logs:/var/log/nginx
    restart: always

Modify the filebeat yaml configuration file:

filebeat.inputs:
# collect the nginx access log
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
# enable these if the log lines are JSON
# json.keys_under_root: true
#  json.overwrite_keys: true
#  json.add_error_key: true
  fields:
     log-type: nginx-logs
     filebeat-ip: 172.16.0.3   # record the address of the collecting host
     
output.logstash:
  enabled: true
  hosts: ["172.16.0.7:5044"]
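
Before relying on it, you can have filebeat validate its configuration and test the connection to logstash (the container name and config path follow the compose snippet above):

docker exec filebeat filebeat test config -c /usr/share/filebeat/filebeat.yml
docker exec filebeat filebeat test output -c /usr/share/filebeat/filebeat.yml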

Modify the logstash configuration: logstash.yml, and the pipeline config conf.d/nginx.conf, which now reads from a beats input instead of the local file:

cat config/logstash.yml 
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://172.16.0.7:9200" ]
path.config: /usr/share/logstash/config/conf.d/*.conf
path.logs: /usr/share/logstash/logs

 

input {
  beats {
        port => 5044
        codec => json
    }
}

filter {
if [fields][log-type] == "nginx-logs"{
  geoip {
    #multiLang => "zh-CN"
    target => "geoip"
    source => "client_ip"
    database => "/usr/share/logstash/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    # drop redundant geoip fields
    remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]"]
  }
  mutate {
    convert => [ "size", "integer" ]
    convert => [ "status", "integer" ]
    convert => [ "responsetime", "float" ]
    convert => [ "upstreamtime", "float" ]
    #convert => [ "[geoip][coordinates]", "float" ]
    convert => [ "[geoip][coordinates][0]", "float" ] #经度
    convert => [ "[geoip][coordinates][1]", "float" ] #维度
    # 过滤 filebeat 没用的字段,这里过滤的字段要考虑好输出到es的,否则过滤了就没法做判断
    remove_field => [ "ecs","agent","host","cloud","@version","input","logs_type" ]
  }
  # parse http_user_agent to identify the client OS, browser and version
  useragent {
    source => "http_user_agent"
    target => "ua"
    # drop useragent fields we don't need
    remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
  }
}
}
output {
  elasticsearch {
    hosts => ["172.16.0.7:9200"]
    #user => "elastic"
    #password => "password"
    index => "logstash-nginx-%{+YYYY.MM.dd}"
  }
}

 

Writing docker-compose.yaml

cat docker-compose.yaml 
services:
  elasticsearch:
    container_name: elasticsearch  # container name (same as --name in a docker run)
    image: elasticsearch:7.5.1     # image version, matching the original command
    ports:
      - "9200:9200"  # HTTP 端口映射
      - "9300:9300"  # TCP 通信端口映射
    environment:
      - cluster.name=elasticsearch       # cluster name
      - discovery.type=single-node       # single-node mode
      - ES_JAVA_OPTS=-Xms512m -Xmx1024m  # JVM heap settings
    volumes:
      - /data/elk7/elasticsearch/config:/usr/share/elasticsearch/config
      - /data/elk7/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elk7/elasticsearch/logs:/usr/share/elasticsearch/logs
    restart: unless-stopped  # optional: restart automatically if the container exits abnormally
  
  elasticsearch-head:
    container_name: elasticsearch-head
    image:  docker.io/mobz/elasticsearch-head:5-alpine
    ports:
      - "8100:9100"
    restart: unless-stopped    

  kibana:
    container_name: kibana
    image:  kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - /data/elk7/kibana/config:/usr/share/kibana/config 
      - /data/elk7/kibana/data:/usr/share/kibana/data
    restart: unless-stopped    

  logstash:
    container_name: logstash  # container name (same as --name in a docker run)
    image: logstash:7.5.1     # image version, matching the original command
    ports:
      - "5044:5044"
    volumes:
      - /data/elk7/logstash:/usr/share/logstash
      - /data/nginx-docker/nginx/logs:/var/log/nginx
    restart: always
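
With the file saved as docker-compose.yaml, bring the stack up and check it (this assumes docker-compose is installed on the host):

docker-compose up -d
docker-compose ps
docker-compose logs -f logstash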

 

Reference for this article: https://grafana.com/grafana/dashboards/11190

 

####### Multi-Type Log Collection #######

 

Configure logstash

cat elk7/logstash/config/conf.d/nginx.conf
input {
  beats {
        ## beats input: filebeat ships the logs to this port
    port => 5044
#    codec => json
    client_inactivity_timeout => 120
    }
}

filter {
if [log_type] == "nginx-logs"{

  json {
    source => "message"
  }
  geoip {
    #multiLang => "zh-CN"
    target => "geoip"
    source => "client_ip"
    database => "/usr/share/logstash/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    # drop redundant geoip fields
    remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]"]
  }
  mutate {
    convert => [ "size", "integer" ]
    convert => [ "status", "integer" ]
    convert => [ "responsetime", "float" ]
    convert => [ "upstreamtime", "float" ]
    #convert => [ "[geoip][coordinates]", "float" ]
    convert => [ "[geoip][coordinates][0]", "float" ] #经度
    convert => [ "[geoip][coordinates][1]", "float" ] #维度
    # 过滤 filebeat 没用的字段,这里过滤的字段要考虑好输出到es的,否则过滤了就没法做判断
    remove_field => [ "ecs","agent","host","cloud","@version","input","logs_type" ]
  }
  # parse http_user_agent to identify the client OS, browser and version
  useragent {
    source => "http_user_agent"
    target => "ua"
    # drop useragent fields we don't need
    remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
  }
}

if [log_type] != "nginx-logs" {
  if [message] == "" or [message] == "_null_" {
    drop { }
  }
}


}
output {
  if [log_type] == "nginx-logs" {
    elasticsearch {
      hosts => ["172.16.0.7:9200"]
      #user => "elastic"
      #password => "password"
      index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
  }else if [log_type] == "nginx-error" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "nginx-error-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "gateway-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "gateway-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "system-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "system-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "third-fdd-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "third-fdd-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "order-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "order-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "third-tencent-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "third-tencent-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "infra-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "infra-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "third-rongcloud-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "third-rongcloud-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "basic-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "basic-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "third-crm-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "third-crm-log-%{+YYYY.MM.dd}"
   }
  }else if [log_type] == "im-server-log" {
   elasticsearch {
   hosts => ["172.16.0.7:9200"]
   index => "im-server-log-%{+YYYY.MM.dd}"
   }
  }
}
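
The long else-if chain above works, but every non-nginx branch only differs in the index name, so it can optionally be collapsed by interpolating log_type into the index. A sketch of an equivalent output (behavior differs slightly: any new log_type automatically gets its own index instead of being dropped):

output {
  if [log_type] == "nginx-logs" {
    elasticsearch {
      hosts => ["172.16.0.7:9200"]
      index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
  } else if [log_type] {
    elasticsearch {
      hosts => ["172.16.0.7:9200"]
      # use the log_type value itself as the index prefix, e.g. gateway-log-2022.11.23
      index => "%{[log_type]}-%{+YYYY.MM.dd}"
    }
  }
}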

#### Filebeat Collection Configuration ####

cat filebeat/filebeat.yml
filebeat.inputs:
# collect the nginx access log
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
     tag: nginx-logs
     log_type: nginx-logs
     filebeat-ip: 172.16.0.100
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
     log_type: nginx-error
     filebeat-ip: 172.16.0.100
  fields_under_root: true


- type: log
  enabled: true
  paths:
    - /var/log/gateway-server/gateway-server.log
  fields:
     log_type: gateway-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/basic-server/basic-server.log
  fields:
     log_type: basic-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/order-server/order-server.log
  fields:
     log_type: order-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/third-crm-server/third-crm-server.log
  fields:
     log_type: third-crm-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/system-server/system-server.log
  fields:
     log_type: system-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/third-rongcloud-server/third-rongcloud-server.log
  fields:
     log_type: third-rongcloud-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/third-tencent-server/third-tencent-server.log
  fields:
     log_type: third-tencent-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

- type: log
  enabled: true
  paths:
    - /var/log/third-fdd-server/third-fdd-server.log
  fields:
     log_type: third-fdd-log
     filebeat-ip: 172.16.0.100
  fields_under_root: true

  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d+'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 10s

output.logstash:
  enabled: true
  hosts: ["172.16.0.7:5044"]
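
The multiline settings used for the application logs above treat any line that does not start with a timestamp such as 2022-11-23 16:00:00.123 as a continuation of the previous event, so Java stack traces are shipped as a single document. For example, these three (hypothetical) lines are sent to logstash as one event:

2022-11-23 16:00:00.123 ERROR [gateway-server] request failed
java.lang.NullPointerException: user is null
        at com.example.gateway.Handler.handle(Handler.java:42)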

 
