Revision as of 06:22, 25 April 2021
=install=
==elk download==
<pre>
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.2-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.2.tar.gz
</pre>
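The three downloads follow the same artifacts.elastic.co URL scheme, so fetching can be scripted. A sketch — the `elk_url` helper is my own, and the versions are the ones listed above (note elasticsearch is on 6.4.3 while kibana and logstash are on 6.4.2):

```shell
#!/usr/bin/env bash
# Build an artifacts.elastic.co download URL for a stack component.
# kibana tarballs carry a platform suffix; the others do not.
elk_url() {
  local component="$1" version="$2"
  case "$component" in
    kibana) echo "https://artifacts.elastic.co/downloads/kibana/kibana-${version}-linux-x86_64.tar.gz" ;;
    *)      echo "https://artifacts.elastic.co/downloads/${component}/${component}-${version}.tar.gz" ;;
  esac
}

# Fetch everything in one loop (uncomment the wget line to actually download):
for spec in elasticsearch:6.4.3 kibana:6.4.2 logstash:6.4.2; do
  echo "would fetch: $(elk_url "${spec%%:*}" "${spec##*:}")"
  # wget "$(elk_url "${spec%%:*}" "${spec##*:}")"
done
```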
==Binary packages==
===jdk ins===
====RPM====
<pre>
# set the Java environment (for an RPM-installed JDK)
JAVA_HOME=/usr/java/jdk1.8.0_121
JRE_HOME=/usr/java/jdk1.8.0_121/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
</pre>
====tar.gz====
====Pulled in by tomcat====
<pre>
yum install tomcat -y   # the lazy route: this pulls in openjdk automatically

[root@localhost ~]# java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
</pre>
===elasticsearch ins===
<pre>
tar xvf elasticsearch-6.4.3.tar.gz
mv elasticsearch-6.4.3/ /usr/local/elasticsearch/

vim elasticsearch.yml   # edit the config file and append the following lines

network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"

Note: elasticsearch refuses to start as root. Create a dedicated user and switch to it before starting elasticsearch, as follows.

Create the elsearch group and user:

groupadd elsearch
useradd elsearch -g elsearch -p elasticsearch

Change the owner and group of the elasticsearch directory and everything in it to elsearch:elsearch:
chown -R elsearch:elsearch /usr/local/elasticsearch/

Switch to the elsearch user, then start it:

su elsearch
cd elasticsearch/bin
bash elasticsearch &

systemctl stop firewalld
systemctl disable firewalld

Configuration
Elasticsearch usually needs no extra configuration, but performance can be tuned through elasticsearch.yml. Parameters can also be tuned down to match a modest machine, e.g. the JVM heap size: Elasticsearch defaults to 2G of heap, which can overload a low-spec server, so reduce it where appropriate.
</pre>
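Since elasticsearch is started in the background, a small poll loop lets scripts wait until port 9200 actually answers before continuing. A sketch — the function name and the 30-second default timeout are my own choices:

```shell
#!/usr/bin/env bash
# Poll elasticsearch until it answers on its HTTP port or the timeout expires.
wait_for_es() {
  local url="${1:-http://localhost:9200}" tries="${2:-30}"
  local i
  for ((i = 0; i < tries; i++)); do
    if curl -s "$url" >/dev/null 2>&1; then
      echo "elasticsearch is up at $url"
      return 0
    fi
    sleep 1
  done
  echo "elasticsearch did not come up within ${tries}s" >&2
  return 1
}
```

Usage: `wait_for_es http://localhost:9200 30 && echo ready`.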
===nginx ins===
<pre>
vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1

yum install nginx -y   # or use: yum install epel-release

vi /etc/nginx/nginx.conf   # switch nginx's default access-log format to JSON
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domian":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
#access_log /opt/access.log json;
access_log /var/log/nginx/access.log json;
</pre>
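Before pointing logstash at the access log it is worth checking that lines in the shape this log_format emits really parse as JSON. A sketch — the sample line below is hand-built with made-up values, and python3 is assumed to be available:

```shell
#!/usr/bin/env bash
# A sample access-log line in the shape the log_format above emits
# (field values are made up for illustration).
line='{"@timestamp":"2019-05-31T16:26:26+08:00","@version":"1","client":"192.168.88.4","url":"/index.html","status":"304","domian":"192.168.88.52","host":"192.168.88.52","size":"0","responsetime":"0.000","referer":"-","ua":"Mozilla/5.0"}'

# A valid line parses cleanly; a malformed one makes python exit non-zero.
if echo "$line" | python3 -c 'import json,sys; json.load(sys.stdin)'; then
  echo "log line is valid JSON"
fi
```

One caveat worth knowing: older nginx versions do not escape double quotes in logged variables, so a user agent containing `"` can still break the JSON.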
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-centos-7
https://www.cyberciti.biz/faq/how-to-install-and-use-nginx-on-centos-7-rhel-7/
===Kibana===
====install====
<pre>
# kibana mainly searches the data in elasticsearch and visualises it; recent versions run on nodejs
* Configure and start kibana
[root@localhost kibana]# pwd
/usr/local/kibana
vim config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
kibana.index: ".kibana"
cd bin/
sh kibana &   # start kibana
Once it is up, point a browser at the server's public IP on port 5601 to check it started:
http://192.168.88.52:5601/app/kibana#/home?_g=()

Configuration
Kibana can be configured through command-line arguments or the kibana.yml config file. Kibana binds to localhost by default, which makes it unreachable remotely, so change the server.host setting in the config file.
</pre>
====Configure nginx as a reverse proxy for Kibana====
<pre>
server {
    listen 80;

    server_name elk.com;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:5601;
    }
}
</pre>
===Logstash===
<pre>
tar xvf logstash-6.4.2.tar.gz
mv logstash-6.4.2/ /usr/local/logstash/
cd /usr/local/logstash/bin/

# pipeline config for nginx
cat /usr/local/logstash/config/nginx.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx"
    codec => "json"
    start_position => "beginning"
  }
}

filter {
  geoip {
    fields => ["city_name", "country_name", "latitude", "longitude", "region_name", "region_code"]
    source => "client"
  }
}

output {
  if [type] == "nginx" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "nelson-nginx-%{+YYYY.MM.dd}"
    }
    stdout {}
  }
}

# hosts is the Elasticsearch address -- do not get it wrong. In production,
# logstash and elasticsearch/kibana usually live on different machines.
# hosts => ["127.0.0.1:9200"]

./bin/logstash -f ./config/nginx.conf

Hit nginx and output like the following appears on the console:

{
  "@timestamp" => 2019-05-31T08:26:26.000Z,
      "domian" => "192.168.88.52",
        "size" => "0",
          "ua" => "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36",
       "geoip" => {},
        "tags" => [
      [0] "_geoip_lookup_failure"
  ],
      "status" => "304",
     "referer" => "-",
        "path" => "/var/log/nginx/access.log",
         "url" => "/index.html",
        "type" => "nginx",
      "client" => "192.168.88.4",
        "host" => "192.168.88.52",
    "@version" => "1",
"responsetime" => "0.000"
}
</pre>
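The `%{+YYYY.MM.dd}` in the index name makes logstash roll over to a new index each day. The equivalent name can be computed in shell — logstash formats @timestamp in UTC, hence `date -u`:

```shell
#!/usr/bin/env bash
# Reproduce the daily index name that index => "nelson-nginx-%{+YYYY.MM.dd}"
# resolves to for today's events.
index_name() {
  echo "nelson-nginx-$(date -u +%Y.%m.%d)"
}
index_name
```

This is handy for housekeeping, e.g. deleting an old day's index with something like `curl -XDELETE "localhost:9200/$(index_name)"` (a sketch; in practice you would compute a past date, not today's).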
===Startup script===
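This section is empty in the original; one natural startup script is a systemd unit wrapping the manual start above. A sketch, assuming the /usr/local/elasticsearch path and the elsearch user created earlier — the unit name and the Restart/LimitNOFILE choices are my own:

```ini
# /etc/systemd/system/elasticsearch.service  (hypothetical unit name)
[Unit]
Description=Elasticsearch
After=network.target

[Service]
Type=simple
User=elsearch
Group=elsearch
ExecStart=/usr/local/elasticsearch/bin/elasticsearch
Restart=on-failure
# elasticsearch needs a high open-file limit (see the trouble section)
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now elasticsearch`, which also replaces the `su elsearch` + `bash elasticsearch &` dance.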
==add redis==
==docker==
=usage=
==tomcat logs==
<pre>
Step 1 of 2: Define index pattern
Index pattern
nelson-nginx-*   # matches the output above: index => "nelson-nginx-..."

Step 2 of 2: Configure settings
@timestamp

# the old way:
Step 1 of 2: Define index pattern
Index pattern
logstash-*

It should report: Success! Your index pattern matches 1 index

Step 2 of 2: Configure settings
</pre>
==Syncing MySQL data into elasticsearch with logstash==
[https://www.cnblogs.com/zhang-shijie/p/5384624.html ELK part 3: Kibana usage and Tomcat/Nginx log format handling]
=Security=
==nginx proxy==
<pre>
1. Install nginx
2. Install httpd-tools (Apache's password utility)
3. Generate the password file
4. Configure nginx
5. Edit the kibana config file
6. Restart kibana and nginx
Check the login page
</pre>
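Steps 2 and 3 above can also be done without httpd-tools: openssl can produce the apr1 hash that nginx's auth_basic understands. A sketch — the user name, password, and file path are placeholders, and openssl is assumed to be installed:

```shell
#!/usr/bin/env bash
# Generate an htpasswd-style entry for nginx basic auth using openssl,
# roughly equivalent to `htpasswd -nb kibanauser secret` from httpd-tools.
# User name and password here are placeholders.
make_htpasswd_entry() {
  local user="$1" password="$2"
  printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$password")"
}

make_htpasswd_entry kibanauser secret   # append the output to e.g. /etc/nginx/htpasswd
```

On the nginx side, add `auth_basic "Restricted";` and `auth_basic_user_file /etc/nginx/htpasswd;` inside the server block that proxies to kibana.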
==x-pack==
<pre>
Elastic's official x-pack component provides security hardening, reporting, and real-time cluster monitoring.

You can also install only the Shield part of x-pack.

If all you need is login protection for a kibana instance exposed to the public internet, the nginx proxy approach is enough.

Options:
1. nginx proxy
2. Shield
3. the full x-pack component
</pre>
[https://www.jianshu.com/p/5a42b3560b27 ElasticSearch & Search-guard 5 permission configuration]
=Cluster=
=trouble=
<pre>
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

An error seen when starting elasticsearch: the mmap limit available to the elasticsearch user is too small; it needs at least 262144.
Append this line to /etc/sysctl.conf, then run sysctl -p to apply it:
vm.max_map_count=262144

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

The per-process open-file limit is too small. Check the current values with:
ulimit -Hn
ulimit -Sn
Then add these lines to /etc/security/limits.conf; they take effect after the user logs out and back in:
* soft nofile 65536
* hard nofile 65536

https://blog.csdn.net/qq942477618/article/details/53414983

https://www.jianshu.com/p/89f8099a6d09 Pitfalls in deploying Elasticsearch 5.2.0
https://www.cnblogs.com/yidiandhappy/p/7714489.html
https://www.cnblogs.com/zhi-leaf/p/8484337.html
</pre>
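Both startup errors above reduce to "a kernel or user limit is below a required minimum", which can be pre-checked in a script before starting elasticsearch. A sketch — the function takes the current values as arguments so the logic is easy to test; the thresholds are the ones from the error messages:

```shell
#!/usr/bin/env bash
# Compare the limits elasticsearch complains about against its minimums.
check_limits() {
  local max_map_count="$1" nofile="$2" ok=0
  if [ "$max_map_count" -lt 262144 ]; then
    echo "vm.max_map_count=$max_map_count is too low, need >= 262144"; ok=1
  fi
  if [ "$nofile" -lt 65536 ]; then
    echo "max file descriptors $nofile is too low, need >= 65536"; ok=1
  fi
  return $ok
}

# In real use the values come from the system, e.g.:
#   check_limits "$(sysctl -n vm.max_map_count)" "$(ulimit -Hn)"
check_limits 65530 4096 || echo "fix limits before starting elasticsearch"
```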
=see also=
[https://blog.csdn.net/yp090416/article/details/81589174 ELK+logback+kafka+nginx: building a distributed log analysis platform (good)]

https://www.elastic.co/guide/cn/index.html

[https://blog.csdn.net/buqutianya/article/details/72026768 ELK logging system in detail (3): installing elasticsearch]

[https://www.cnblogs.com/zhang-shijie/p/5377127.html ELK part 2: advanced ElasticSearch and Logstash usage]

[https://www.cnblogs.com/zhang-shijie/p/5384624.html ELK part 3: Kibana usage and Tomcat/Nginx log format handling]

[https://www.cnblogs.com/xuwujing/p/11567053.html ElasticSearch in practice, part 2: DSL query tutorial (illustrated)]

[https://www.cnblogs.com/xuwujing/p/11645630.html ElasticSearch in practice, part 3: the ElasticSearch Java API]

[https://www.cnblogs.com/xuwujing/p/12093933.html ElasticSearch in practice, part 4: ElasticSearch theory]

[https://www.cnblogs.com/xuwujing/p/12385903.html ElasticSearch in practice, part 5: basic aggregations — metric aggregations]

[https://www.cnblogs.com/xuwujing/p/13412108.html ElasticSearch in practice, part 6: Logstash quick start]

[https://www.cnblogs.com/xuwujing/p/13520666.html ElasticSearch in practice, part 7: Logstash hands-on (illustrated)]

[https://www.cnblogs.com/xuwujing/p/13532125.html ElasticSearch in practice, part 8: Filebeat quick start and usage (illustrated)]

[https://blog.csdn.net/enweitech/article/details/81744250 ELK+kafka+Winlogbeat/FileBeat: a unified log collection and analysis system]

Log analysis, chapter 1: introducing ELK
http://www.cnblogs.com/xiaoming279/p/6100613.html

Log analysis, chapter 2: unifying the access-log format
http://www.cnblogs.com/xiaoming279/p/6101628.html

Log analysis, chapter 3: pre-install preparation and system initialisation
http://www.cnblogs.com/xiaoming279/p/6101951.html

Not yet read from here on:
Log analysis, chapter 4: installing filebeat
http://www.cnblogs.com/xiaoming279/p/6112715.html