Collecting standard nginx logs with the filebeat nginx module

Modify the log format in the nginx configuration file

#db01
vi /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
  • Restart nginx and clear the log
systemctl restart nginx
> /var/log/nginx/access.log
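
To check that the new "main" format is actually being written, generate a little traffic and look at the newest access-log line. A minimal check, assuming nginx is listening on port 80 on this host; /test_page is just an arbitrary URL:

#Generate a couple of test requests (any URL will do)
curl -s -o /dev/null http://127.0.0.1/
curl -s -o /dev/null http://127.0.0.1/test_page

#The newest line should match the "main" log_format defined above
tail -1 /var/log/nginx/access.log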

Modify the filebeat configuration file to collect the logs

  • After modifying the configuration file, restart filebeat, then generate new log entries
  • Refer to line 66 of the default configuration file
#db01
vi /etc/filebeat/filebeat.yml

filebeat.config.modules:
  # Glob pattern for configuration loading 
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes 
  reload.period: 10s

setup.kibana:
  host: "10.0.0.51:5601"

output.elasticsearch:
  hosts: ["10.0.0.51:9200"]
  indices:
  - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
    when.contains:
      fileset.name: "access"

  - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
    when.contains:
      fileset.name: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
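
Before restarting filebeat, the configuration and the elasticsearch output can be sanity-checked. A quick check using the standard filebeat 6.x subcommands:

#Validate the syntax of /etc/filebeat/filebeat.yml
filebeat test config

#Verify filebeat can reach the elasticsearch hosts configured above
filebeat test output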

Enable the filebeat nginx module and adjust its configuration

#db01
filebeat modules enable nginx

vi /etc/filebeat/modules.d/nginx.yml

- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/error.log"]
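
To confirm the module is active, print the module list; nginx should now appear in the Enabled section:

#List enabled and disabled filebeat modules
filebeat modules list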

Install the elasticsearch plugins

Upload the plugin files to the host and install them locally, since downloading from the overseas source is very slow. Starting with 6.7 these two plugins are bundled into elasticsearch by default and no longer need to be installed separately.

#Install from the overseas source
cd /usr/share/elasticsearch
bin/elasticsearch-plugin install ingest-user-agent
bin/elasticsearch-plugin install ingest-geoip

#Install from local files
/usr/share/elasticsearch/bin/elasticsearch-plugin install file:///root/ingest-user-agent-6.6.0.zip
/usr/share/elasticsearch/bin/elasticsearch-plugin install file:///root/ingest-geoip-6.6.0.zip
  • Restart elasticsearch
systemctl restart elasticsearch
  • Restart filebeat
systemctl restart filebeat
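
After the restarts, it is worth confirming that both plugins are actually loaded; a quick check, assuming elasticsearch listens on 10.0.0.51:9200 as configured earlier:

#List plugins installed on the local node
/usr/share/elasticsearch/bin/elasticsearch-plugin list

#Or ask the cluster which plugins each node has loaded
curl -s http://10.0.0.51:9200/_cat/plugins?v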

If filebeat is restarted without these two elasticsearch plugins installed, the following error appears in the filebeat log:

2020-06-03T14:00:04.477+0800    ERROR   fileset/factory.go:142  Error loading pipeline: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
    sudo bin/elasticsearch-plugin install ingest-user-agent
    sudo bin/elasticsearch-plugin install ingest-geoip

Check the result

QQ截图20200603141116.jpg

Repeat the earlier Kibana index-pattern creation steps to recreate the index patterns

  • The steps for the access index are the same as before

QQ截图20200603141410.jpg

  • The steps for the error index differ from before

QQ截图20200603141431.jpg

Kibana dashboards

Import the Kibana dashboards

By default, importing dashboards from the filebeat templates pulls in dashboards for every supported service, far more than we actually need, and the default dashboard templates only match indices whose names start with filebeat-*. So there are two problems to solve:

  1. Import only the dashboard templates we need
  2. Allow the index name used by the imported dashboards to be customized

Solution

  1. Make a copy of filebeat's Kibana dashboard files and delete the template files we don't need
  2. Change the default index name in the dashboard files to the index name we actually use
#Using nginx as an example
#Copy the files to the /root directory
cp -a /usr/share/filebeat/kibana /root

#Delete everything except the nginx files
rm -rf /root/kibana/5
cd /root/kibana/6/dashboard/
find . -type f ! -name "*nginx*"|xargs rm -rf
rm -rf ml-nginx*

#The default template files reference index names starting with filebeat; to use them, change the names to match the indices currently used in elasticsearch
#Replace the index name
sed -i 's#filebeat\-\*#nginx\-\*#g' /root/kibana/6/dashboard/Filebeat-nginx-overview.json
sed -i 's#filebeat\-\*#nginx\-\*#g' /root/kibana/6/dashboard/Filebeat-nginx-logs.json
sed -i 's#filebeat\-\*#nginx\-\*#g' /root/kibana/6/index-pattern/filebeat.json

#Import the selected templates
filebeat setup --dashboards -E setup.dashboards.directory=/root/kibana/
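
To confirm the import worked, the nginx dashboards can be listed through Kibana's saved objects API; a hedged check, assuming Kibana 6.x on 10.0.0.51:5601 exposes that API:

#Search saved dashboards whose title matches nginx
curl -s 'http://10.0.0.51:5601/api/saved_objects/_find?type=dashboard&search=nginx'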

Check Kibana

QQ截图20200603153757.jpg

QQ截图20200603153748.jpg

Build Kibana visualizations

  • After modifying the configuration file, restart filebeat, then generate new log entries
vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "10.0.0.51:5601"

output.elasticsearch:
  hosts: ["10.0.0.51:9200"]
#  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
#Without the following settings filebeat reports an error
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
  • After modifying the configuration file, restart nginx, then generate new log entries
vi /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{ "time_local": "$time_local", '
                           '"remote_addr": "$remote_addr", '
                           '"referer": "$http_referer", '
                           '"request": "$request", '
                           '"status": $status, '
                           '"bytes": $body_bytes_sent, '
                           '"agent": "$http_user_agent", '
                           '"x_forwarded": "$http_x_forwarded_for", '
                           '"up_addr": "$upstream_addr",'
                           '"up_host": "$upstream_http_host",'
                           '"upstream_time": "$upstream_response_time",'
                           '"request_time": "$request_time"'
    ' }';

    access_log  /var/log/nginx/access.log  json;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
systemctl restart nginx.service
systemctl restart filebeat.service

#Clear the logs
> /var/log/nginx/access.log
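
After both restarts, generate some traffic and confirm the custom indices show up in elasticsearch. A minimal check, assuming nginx listens on port 80 locally and elasticsearch on 10.0.0.51:9200; /not_found_page is just an arbitrary URL used to provoke an error-log entry:

#Produce one access-log entry and one error-log entry
curl -s -o /dev/null http://127.0.0.1/
curl -s -o /dev/null http://127.0.0.1/not_found_page

#nginx-access-* and nginx-error-* indices should be listed
curl -s 'http://10.0.0.51:9200/_cat/indices/nginx-*?v'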

After generating data, follow the screenshots below

  • Create a visualization

QQ截图20200603163347.jpg

  • Select the first option

QQ截图20200603163454.jpg

  • Select the regular (access) log index

QQ截图20200603163526.jpg

  • Select X-Axis, Terms, remote_addr.keyword

QQ截图20200603163641.jpg

  • The chart is displayed

QQ截图20200603163737.jpg

  • Adjust the styling

QQ截图20200603164039.jpg

  • Click Save

QQ截图20200603164135.jpg

  • After saving several visualizations, create a dashboard

QQ截图20200603164337.jpg

  • Add visualizations

QQ截图20200603164351.jpg

  • Select the visualizations

QQ截图20200603164455.jpg

  • Save the dashboard

QQ截图20200603164515.jpg

Collecting logs through a cache layer

When the log volume is very high, a cache layer may be needed as temporary storage so that logs are not lost when ES cannot keep up. Filebeat can send logs to redis or kafka to act as a message-queue cache. Once a cache layer is used, however, the filebeat modules can no longer be used to configure log collection, so it is best to keep the logs in JSON format.

Install and start redis

To install and start redis, refer to the earlier redis installation section

Adjust the nginx configuration file

  • Modify the configuration file so the log is in JSON format, restart nginx, then generate new log entries
vi /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{ "time_local": "$time_local", '
                           '"remote_addr": "$remote_addr", '
                           '"referer": "$http_referer", '
                           '"request": "$request", '
                           '"status": $status, '
                           '"bytes": $body_bytes_sent, '
                           '"agent": "$http_user_agent", '
                           '"x_forwarded": "$http_x_forwarded_for", '
                           '"up_addr": "$upstream_addr",'
                           '"up_host": "$upstream_http_host",'
                           '"upstream_time": "$upstream_response_time",'
                           '"request_time": "$request_time"'
    ' }';

    access_log  /var/log/nginx/access.log  json;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
  • Restart nginx
systemctl restart nginx
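
To make sure the access log is now valid JSON before pointing filebeat at it, parse the newest line; a quick check, assuming python is available on the host:

#Generate one request and validate the newest log line as JSON
curl -s -o /dev/null http://127.0.0.1/
tail -1 /var/log/nginx/access.log | python -m json.tool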

Adjust the filebeat configuration file

vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "10.0.0.51:5601"

output.redis:
  hosts: ["10.0.0.51"]
  keys:
    - key: "nginx_access"   
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"

Restart filebeat

systemctl restart filebeat
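
After the restart and a little traffic, the events should accumulate in the two redis lists; a quick check, assuming redis runs on 10.0.0.51:6379 without a password:

#The list lengths grow as filebeat pushes events
redis-cli -h 10.0.0.51 LLEN nginx_access
redis-cli -h 10.0.0.51 LLEN nginx_error

#Inspect one queued event
redis-cli -h 10.0.0.51 LRANGE nginx_access 0 0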

Install and configure logstash

#db01
### Download and install the package
mkdir -p /data/es_soft/
cd /data/es_soft/
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.6.0.rpm
rpm -ivh logstash-6.6.0.rpm

Write the configuration file

vi /etc/logstash/conf.d/redis.conf

input {
  redis {
    host => "10.0.0.51"
    port => "6379"
    db => "0"
    key => "nginx_access"
    data_type => "list"
  }
  redis {
    host => "10.0.0.51"
    port => "6379"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}
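
Before starting logstash, the pipeline file can be syntax-checked with the standard config-test flag:

#Parse the configuration and exit without starting the pipeline
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf --config.test_and_exit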

Start logstash

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
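
Once events are flowing from redis through logstash, the two indices defined in the output section should appear; a quick check, assuming elasticsearch on localhost:9200 as in the config above:

#Both nginx_access-* and nginx_error-* should be listed
curl -s 'http://localhost:9200/_cat/indices/nginx_*?v'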

Check the result

QQ截图20200603201345.png

Simplify the configuration files

QQ截图20200603201319.png

  • filebeat ships the nginx logs to redis and adds tags to each event at this stage
  • logstash forwards the logs to elasticsearch, relying on the tags to separate access logs from error logs
  • When redis is used as the cache, each log type can be written to its own key, which keeps things clear but makes the logstash configuration more complex; alternatively, all logs can be written to a single key and filtered and processed by the logstash backend

Modify the filebeat configuration file

vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "10.0.0.51:5601"

output.redis:
  hosts: ["10.0.0.51"]
  key: "filebeat"
  • Restart filebeat
systemctl restart filebeat
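
After the restart, all events should accumulate in the single filebeat list; a quick check, assuming redis on 10.0.0.51:6379 without a password:

#Access and error events now share one key
redis-cli -h 10.0.0.51 LLEN filebeat
redis-cli -h 10.0.0.51 LRANGE filebeat 0 0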

Modify logstash

vi /etc/logstash/conf.d/redis.conf

input {
  redis {
    host => "10.0.0.51"
    port => "6379"
    db => "0"
    key => "filebeat"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

Start logstash

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
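
To confirm logstash is draining the single queue and still routing by tags, watch the list length and the per-type indices; same host assumptions as above:

#The filebeat list shrinks toward 0 while logstash keeps up
redis-cli -h 10.0.0.51 LLEN filebeat

#Both per-type indices are still created from the one key
curl -s 'http://localhost:9200/_cat/indices/nginx_*?v'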

Check the result

QQ截图20200603202202.png