ELK basics (latest revision: 3 June 2021)
Note: not finished yet
Thu 29 Apr 2021 -- still to do:

1. security

2. volumes

3. cluster
install
https://www.elastic.co/guide/en/elasticsearch/reference/7.x/index.html
deb
Install ELK/Elastic Stack on Debian 10.

Check the versions of the four packages (Elasticsearch, Logstash, Kibana, Filebeat) before installing.

[https://itnixpro.com/install-elk-elastic-stack-on-debian/ Install ELK/Elastic Stack on Debian 10]
https://itnixpro.com/how-to-install-logstash-on-debian/
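A minimal sketch of the apt route on Debian, following the Elastic 7.x apt repository layout from the official docs (verify the repo line against the guides above before using it):

<pre>
# Import the Elastic signing key and add the 7.x apt repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | \
    sudo tee /etc/apt/sources.list.d/elastic-7.x.list

sudo apt-get update

# Install the four components from the same 7.x repository
sudo apt-get install -y elasticsearch kibana logstash filebeat

# Check that the installed versions match
dpkg -l elasticsearch kibana logstash filebeat | grep ^ii

# Enable and start Elasticsearch first, then Kibana
sudo systemctl enable --now elasticsearch kibana
</pre>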
elk download
<pre>
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.2-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.2.tar.gz
</pre>
Binary packages.
jdk ins
RPM
<pre>
# set java environment (when the JDK was installed from the RPM)
JAVA_HOME=/usr/java/jdk1.8.0_121
JRE_HOME=/usr/java/jdk1.8.0_121/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
</pre>
tar.gz
bundled with tomcat
<pre>
yum install tomcat -y    # the lazy route: this pulls in OpenJDK automatically
[root@localhost ~]# java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
</pre>
elasticsearch ins
Note: this is the Elasticsearch 7 specific config. Without the settings below, the node cannot be reached after network.host is changed away from localhost (ES 7 then enforces its bootstrap checks and needs the discovery settings shown here).

<pre>
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: myxps
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#network.host: 0.0.0.0
network.host: 192.168.88.108
#network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#network.host: 0.0.0.0
#http.cors.enabled: true
#http.cors.allow-origin: "*"
</pre>

When installed via apt, remember to review this same config file as well.

<pre>
tar xvf elasticsearch-6.4.3.tar.gz
mv elasticsearch-6.4.3/ /usr/local/elasticsearch/

vim elasticsearch.yml
# edit the config file and append the following lines at the bottom
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"

# Note: elasticsearch cannot be started directly as root. Create a user,
# then switch to it before starting elasticsearch:

# create the elsearch group and user
groupadd elsearch
useradd elsearch -g elsearch -p elasticsearch

# change the owner and group of the elasticsearch directory and its contents to elsearch:elsearch
chown -R elsearch:elsearch /usr/local/elasticsearch/

# switch to the elsearch user and start it
su elsearch
cd /usr/local/elasticsearch/bin
bash elasticsearch &

systemctl stop firewalld
systemctl disable firewalld
</pre>

Configuration management: Elasticsearch usually needs no extra configuration, but the parameters in elasticsearch.yml can be tuned to improve performance, or lowered to fit the machine, e.g. the JVM heap size. Elasticsearch takes 2 GB of heap by default, which can overload a low-spec server, so reduce the heap size accordingly.
[https://blog.csdn.net/bobozai86/article/details/108037378 Elasticsearch 7.7: changing network.host to an IP address makes startup fail]
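Once the node starts with the config above, it is worth confirming it is actually reachable on the configured address (IP and port taken from the config; adjust to yours):

<pre>
# Basic node info -- should return a small JSON document with the version
curl http://192.168.88.108:9200/

# Cluster health -- "green" or "yellow" is fine for a single node
curl http://192.168.88.108:9200/_cluster/health?pretty

# List indices once data starts flowing in
curl http://192.168.88.108:9200/_cat/indices?v
</pre>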
nginx ins
<pre>
vi /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1

yum install nginx -y    # or alternatively: yum install epel-release

# change nginx's default log output format
vi /etc/nginx/nginx.conf

log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domian":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';

#access_log /opt/access.log json;
access_log /var/log/nginx/access.log json;
</pre>
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-centos-7
https://www.cyberciti.biz/faq/how-to-install-and-use-nginx-on-centos-7-rhel-7/
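After switching to the JSON log format, a quick sanity check that nginx emits one JSON object per request (paths as configured above; python3 is only used for pretty-printing):

<pre>
# Check the config syntax and reload
nginx -t && systemctl reload nginx

# Generate a request and look at the resulting log line
curl -s http://localhost/ > /dev/null
tail -n 1 /var/log/nginx/access.log

# Optionally verify it is valid JSON (needs python3 on the host)
tail -n 1 /var/log/nginx/access.log | python3 -m json.tool
</pre>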
Kibana
install
<pre>
# Kibana mainly searches the data stored in Elasticsearch and visualises it; recent versions run on Node.js.

# Configure and start Kibana
[root@localhost kibana]# pwd
/usr/local/kibana

vim config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
kibana.index: ".kibana"

# start Kibana
cd bin/
sh kibana &

# Once it is up, open <server public IP>:5601 in a browser to check that it started:
# http://192.168.88.52:5601/app/kibana#/home?_g=()
</pre>

Configuration: Kibana can be configured through command-line parameters or the kibana.yml file. By default Kibana binds to localhost and cannot be reached remotely, so change the server.host property in the config file.
Configure nginx as a reverse proxy for Kibana
<pre>
server {
    listen 80;
    server_name elk.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:5601;
    }
}
</pre>
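To confirm Kibana is healthy behind the proxy, its built-in /api/status endpoint can be queried directly and through nginx (elk.com is just the example server_name above):

<pre>
# Directly against Kibana
curl -s http://localhost:5601/api/status | head -c 300

# Through the nginx proxy (resolve the test name to this host)
curl -s -H 'Host: elk.com' http://127.0.0.1/api/status | head -c 300
</pre>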
Logstash
<pre>
# For this test, nginx and Logstash sit on the same machine
mv logstash-6.4.2/ /usr/local/logstash/
cd /usr/local/logstash/

# use this pipeline for nginx
cat /usr/local/logstash/config/nginx.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx"
    codec => "json"
    start_position => "beginning"
  }
}

filter {
  geoip {
    fields => ["city_name", "country_name", "latitude", "longitude", "region_name", "region_code"]
    source => "client"
  }
}

output {
  if [type] == "nginx" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "nelson-nginx-%{+YYYY.MM.dd}"
    }
    stdout {}
  }
}

# hosts is the Elasticsearch address -- be careful not to get it wrong. In production,
# Logstash and Elasticsearch/Kibana usually do not live on the same machine.
# hosts => ["127.0.0.1:9200"]

./bin/logstash -f ./config/nginx.conf

# Hit nginx and output like the following appears on the console:
{
   "@timestamp" => 2019-05-31T08:26:26.000Z,
       "domian" => "192.168.88.52",
         "size" => "0",
           "ua" => "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36",
        "geoip" => {},
         "tags" => [
        [0] "_geoip_lookup_failure"
    ],
       "status" => "304",
      "referer" => "-",
         "path" => "/var/log/nginx/access.log",
          "url" => "/index.html",
         "type" => "nginx",
       "client" => "192.168.88.4",
         "host" => "192.168.88.52",
     "@version" => "1",
 "responsetime" => "0.000"
}
</pre>

Then check your Kibana address (it changed this time): http://192.168.88.167:5601 -- open the side menu -> Discover.
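Two quick checks on this pipeline: validating the config before starting it, and confirming documents reach the nelson-nginx-* index (names and addresses as configured above):

<pre>
# Validate the pipeline configuration without starting it
./bin/logstash -f ./config/nginx.conf --config.test_and_exit

# After some traffic, confirm that documents actually reached Elasticsearch
curl 'http://127.0.0.1:9200/_cat/indices/nelson-nginx-*?v'
curl 'http://127.0.0.1:9200/nelson-nginx-*/_search?pretty&size=1'
</pre>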
startup script
add redis
docker
Attention: the official tutorial:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Or refer to this project -- its version '3.2' compose file is quite interesting:
https://github.com/deviantony/docker-elk
v3 yml version: '3.2'
[https://blog.csdn.net/weixin_43759757/article/details/109067456 Installing Docker and docker-compose on CentOS 7 and deploying ELK, end to end]
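A sketch of trying the deviantony/docker-elk stack linked above; the default elastic/changeme login is taken from that project's README, so check that it is still current before relying on it:

<pre>
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk

# Elasticsearch needs this kernel setting on the Docker host
sudo sysctl -w vm.max_map_count=262144

docker-compose up -d
docker-compose ps

# Kibana ends up on port 5601; the stack ships with a default "elastic" user
# (see the project README for the current default password)
curl -u elastic:changeme http://localhost:9200/_cluster/health?pretty
</pre>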
v2
<pre>
# TODO: add the security-related settings when there is time.

# elasticsearch does not come back up properly after the machine reboots:
#   elasticsearch7.12.0   /bin/tini --   Exit 143

# Remember to chmod 777 the Elasticsearch data and logs directories, otherwise you run
# into permission errors. A few of the volume mounts break things when enabled; the
# official example looks the same, so revisit later -- maybe they are simply written
# incorrectly here.

cat logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

cat elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

mkdir -p /data/elasticsearch/plugins
mkdir -p /data/elasticsearch/data
mkdir -p /data/logstash
mkdir /data/elasticsearch/config
cp logstash.conf /data/logstash/

mkdir -p /data/elasticsearch/logs

# (same thing I hit on the lx machine)
chmod -R 0777 /data/elasticsearch/data/
chmod -R 0777 /data/elasticsearch/logs/

cat docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: elasticsearch7.12.0
    environment:
      - "cluster.name=elasticsearch"         # set the cluster name to elasticsearch
      - "discovery.type=single-node"         # start in single-node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"     # JVM heap size
      - TZ=Asia/Shanghai
    volumes:
      - /data/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - /data/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      #- /data/elasticsearch/data:/usr/share/elasticsearch/data
      #- /data/elasticsearch/logs:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana7.12.0
    links:
      - elasticsearch:es                     # elasticsearch is reachable under the hostname "es"
    depends_on:
      - elasticsearch                        # start kibana after elasticsearch
    environment:
      - "elasticsearch.hosts=http://es:9200" # address used to reach elasticsearch
      - TZ=Asia/Shanghai
    ports:
      - 5601:5601
    restart: always
  logstash:
    image: docker.elastic.co/logstash/logstash:7.12.0
    container_name: logstash7.12.0
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf   # mount the logstash pipeline config
    depends_on:
      - elasticsearch                        # start logstash after elasticsearch
    links:
      - elasticsearch:es                     # elasticsearch is reachable under the hostname "es"
    ports:
      - 9600:9600
      - 5044:5044
    restart: always
</pre>
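A few operational notes on the compose file above, as a sketch: the official Elasticsearch image runs as uid 1000, so changing ownership is a tighter fix than chmod 777, and a restart policy covers the "Exit 143 after reboot" note:

<pre>
# Kernel setting required by Elasticsearch in Docker (persist it in /etc/sysctl.conf)
sudo sysctl -w vm.max_map_count=262144

# The official image runs as uid 1000 (gid 0), so ownership is a tighter fix than chmod 777
sudo chown -R 1000:0 /data/elasticsearch/data /data/elasticsearch/logs

docker-compose up -d
docker-compose ps
docker-compose logs -f elasticsearch    # watch for permission or bootstrap-check errors

# "Exit 143" after a host reboot simply means the container was stopped and never restarted;
# adding "restart: always" to the elasticsearch service (as kibana/logstash already have) fixes it.
</pre>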
Advanced
[https://blog.csdn.net/wo18237095579/article/details/103880397 One-click ELK deployment with Docker Compose, plus security and cluster setup -- quite good]
[https://www.jianshu.com/p/2d78ce6bc504 Installing ELK with docker-compose, with htpasswd password protection]
Note: if you need X-Pack features, use the x-pack branch of docker-elk: https://github.com/deviantony/docker-elk/tree/x-pack
[https://liuxingqi.com/docker-elk/ A summary of problems when installing ELK via docker-compose]
https://elk-docker.readthedocs.io/
https://github.com/rickding/HelloDocker/tree/master/elk
https://www.yisu.com/zixun/5973.html
[https://zhuanlan.zhihu.com/p/97718826 Deploying ELK 6 with docker-compose -- older, but the configuration looks good]

[https://blog.csdn.net/qq_31093329/article/details/107686438 ELK on a Docker cluster reading local logs, part 6: deploying ELK via docker-compose]

[https://www.cnblogs.com/myzony/p/12206073.html Installing ELK with Docker (sebp/elk image)]

[https://www.cnblogs.com/soar1688/p/6849183.html Docker ELK installation, deployment and usage tutorial]

[https://wyunfei.github.io/2018/07/10/docker-compose-%E5%AE%89%E8%A3%85elk.html Installing ELK 6.x with docker-compose]

[https://blog.csdn.net/thinkingshu/article/details/87872223 Building ELK with docker-compose]
usage
filebeat, the newer log shipper (client)

Note: since filebeat 6.0 the enabled flag defaults to off, so it must be changed to true.
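A minimal filebeat.yml sketch illustrating that flag, shipping the JSON nginx access log to the Logstash beats input on port 5044 used elsewhere on this page (hosts and paths are examples -- adjust to your setup):

<pre>
filebeat.inputs:
  - type: log
    enabled: true                  # defaults to false since 6.0 -- the easy-to-miss switch
    paths:
      - /var/log/nginx/access.log
    json.keys_under_root: true     # the access log is already JSON (see the nginx section)

output.logstash:
  hosts: ["192.168.88.108:5044"]   # the beats input defined in logstash.conf

# or ship straight to Elasticsearch instead:
#output.elasticsearch:
#  hosts: ["http://192.168.88.108:9200"]
</pre>

Test it with "filebeat test config" and "filebeat test output", then run "filebeat -e" to watch its log output.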
[https://itnixpro.com/install-filebeat-on-debian-10/ Install Filebeat on Debian 10]

https://www.elastic.co/guide/en/beats/filebeat/7.12/filebeat-installation-configuration.html

[https://www.cnblogs.com/xuwujing/p/13532125.html ElasticSearch in practice, part 8: Filebeat quick start and usage, with illustrations (contains links to many other ES tutorials)]

[[filebeat redis log]] (internal page)

[[filebeat nginx log]] (internal page)

[http://www.javaobj.com/2020/05/nginx-logs/ Several ways to collect nginx logs via filebeat, logstash and rsyslog]

https://www.cnblogs.com/xiejava/p/12452434.html

[[filebeat mysql log]] (internal page)

[[filebeat apache log]] (internal page)

[https://www.cnblogs.com/cjsblog/p/9495024.html Filebeat modules and configuration]

[https://blog.csdn.net/Junzizhiai/article/details/114283915 Basic filebeat configuration]

[https://www.cnblogs.com/miclesvic/articles/10511859.html Filebeat installation, configuration and testing]

[https://www.jianshu.com/p/2f050b8ab859 Simple configuration and usage of Filebeat and Logstash]

[https://www.cnblogs.com/zlslch/p/6622079.html filebeat.yml explained (in Chinese)]

[https://www.cnblogs.com/zlslch/p/6619108.html Elasticsearch pitfalls met while learning and at work (continuously updated)]
spring boot logs

[https://www.jianshu.com/p/9d9d4ec99f61 SpringBoot development: log collection with SpringBoot + ELK (Docker)]

[https://www.jianshu.com/p/e55e5419e54c Building ELK with docker-compose for spring-boot]

[https://www.jianshu.com/p/6f1a0487acf8 Integrating a SpringBoot application with ELK for log collection]

[https://www.jianshu.com/p/038cb7c320a8 Integrating a SpringBoot application with ELK for log collection (without x-pack)]
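The common pattern in the articles above is a logback appender (e.g. logstash-logback-encoder) sending JSON over TCP to Logstash. A sketch of the receiving pipeline only; the port 4560 and index name are illustrative, not from this page:

<pre>
# logstash pipeline for Spring Boot logs sent as JSON lines over TCP
input {
  tcp {
    port  => 4560
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "springboot-%{+YYYY.MM.dd}"
  }
}
</pre>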
tomcat logs
<pre>
Step 1 of 2: Define index pattern
  Index pattern: nelson-nginx-*    # because of the earlier output: index => "nelson-nginx-..."
Step 2 of 2: Configure settings
  Time filter field: @timestamp

# the old way:
Step 1 of 2: Define index pattern
  Index pattern: logstash-*
  # you should see "Success! Your index pattern matches 1 index"
Step 2 of 2: Configure settings
</pre>
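Before defining the index pattern, confirm which indices actually exist:

<pre>
curl 'http://127.0.0.1:9200/_cat/indices?v'
# look for nelson-nginx-YYYY.MM.dd (or logstash-*) in the index column,
# then use that name as the index pattern in Kibana
</pre>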
Configuring Logstash to sync MySQL data into Elasticsearch

[https://www.cnblogs.com/zhang-shijie/p/5384624.html ELK part 3: using Kibana, and handling Tomcat / nginx log formats]

[https://www.cnblogs.com/lsdb/p/9806190.html ELK installation and usage tutorial]
Security
https://www.elastic.co/guide/en/elasticsearch/reference/7.12/security-minimal-setup.html
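A sketch of the minimal security setup that guide describes: enable X-Pack security, generate passwords for the built-in users, then give Kibana the kibana_system credentials (double-check the details against the guide for your exact version):

<pre>
# /etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: true

# restart Elasticsearch, then set passwords for the built-in users
# (elastic, kibana_system, logstash_system, ...)
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

# /etc/kibana/kibana.yml -- Kibana now has to authenticate
elasticsearch.username: "kibana_system"
elasticsearch.password: "the-password-you-just-set"

# requests without credentials are rejected; with them:
curl -u elastic:your-password http://localhost:9200/_cluster/health?pretty
</pre>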
nginx proxy
<pre>
1. Install nginx
2. Install the Apache password tool httpd-tools
3. Generate the password file
4. Configure nginx
5. Edit the Kibana config file
6. Restart Kibana and nginx, then check the login page
</pre>
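A sketch of steps 2-4 and 6; the user name and password-file path are illustrative:

<pre>
# 2. password tool (on Debian/Ubuntu the package is apache2-utils)
yum install httpd-tools -y

# 3. create the password file with a user called "kibanauser"
htpasswd -c /etc/nginx/.htpasswd kibanauser

# 4. protect the Kibana proxy with basic auth
server {
    listen 80;
    server_name elk.com;
    location / {
        auth_basic           "Kibana";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_set_header Host $host;
        proxy_pass http://localhost:5601;
    }
}

# 6. reload and test
nginx -t && systemctl reload nginx
</pre>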
x-pack
<pre>
The official x-pack component provides security, reporting, and real-time cluster monitoring.
If you only need a login in front of a Kibana that is exposed to the public internet
(rather than installing Shield from x-pack), the nginx proxy approach is enough.

Options:
1. nginx proxy
2. Shield
3. the x-pack component
</pre>
ElasticSearch & Search Guard 5 permission configuration

[https://blog.csdn.net/qq_41980563/article/details/88725584 Setting a password for ELK / Elasticsearch]
learn
[https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html Elasticsearch: The Definitive Guide (Chinese edition)]

[https://www.cnblogs.com/zlslch/category/950999.html?page=3 ELK (Elasticsearch/Logstash/Kibana) concepts learning series]
Cluster
trouble
<pre>
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

# Error seen when starting elasticsearch. It means the mmap count available to the
# elasticsearch user is too small; at least 262144 is required.
# Append this line to /etc/sysctl.conf:
vm.max_map_count=262144

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

# The per-process open-file limit is too small. Check the current values with:
ulimit -Hn
ulimit -Sn

# Edit /etc/security/limits.conf and add the lines below; they take effect after
# the user logs out and back in:
* soft nofile 65536
* hard nofile 65536
</pre>
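To apply both limits immediately and persistently (the limits.conf change still needs a fresh login for interactive shells):

<pre>
# mmap count: apply now and persist
sysctl -w vm.max_map_count=262144
grep -q vm.max_map_count /etc/sysctl.conf || echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p

# verify
sysctl vm.max_map_count
ulimit -Hn
</pre>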
https://blog.csdn.net/qq942477618/article/details/53414983
https://www.jianshu.com/p/89f8099a6d09 (pitfalls during an Elasticsearch 5.2.0 deployment)
https://www.cnblogs.com/yidiandhappy/p/7714489.html
https://www.cnblogs.com/zhi-leaf/p/8484337.html
[https://blog.csdn.net/lixiaohai_918/article/details/89569611 Fixing the problem where setting network.host: 0.0.0.0 stops the elasticsearch service from starting]
see also
[https://zhuanlan.zhihu.com/p/33101736 Beiliao's ELK in practice -- good on use cases and benefits]

[https://zhuanlan.zhihu.com/p/22400290 The unofficial ELK guide]

[https://blog.csdn.net/yp090416/article/details/81589174 good: ELK + logback + kafka + nginx distributed log analysis platform]

[https://blog.csdn.net/tanqian351/article/details/83827583 ELK setup tutorial (full walkthrough)]

[https://blog.csdn.net/li123128/article/details/81052374 Beginner-friendly, very detailed ELK log management platform setup tutorial]
https://www.elastic.co/guide/cn/index.html
ELK logging system explained, part 3: installing elasticsearch

ELK part 2: advanced use of ElasticSearch and Logstash

ELK part 3: using Kibana, and handling Tomcat / nginx log formats

ElasticSearch in practice, part 2: tutorial on ElasticSearch DSL queries, with illustrations

ElasticSearch in practice, part 3: tutorial on the ElasticSearch Java API

ElasticSearch in practice, part 4: ElasticSearch theory and concepts

ElasticSearch in practice, part 5: basics of aggregation queries -- metric aggregations

ElasticSearch in practice, part 6: Logstash quick start

ElasticSearch in practice, part 7: Logstash in practice, with illustrations

ElasticSearch in practice, part 8: Filebeat quick start and usage, with illustrations

ELK + kafka + Winlogbeat/FileBeat as a unified log collection and analysis system

Log analysis, chapter 1: introduction to ELK  http://www.cnblogs.com/xiaoming279/p/6100613.html

Log analysis, chapter 2: a unified access-log format  http://www.cnblogs.com/xiaoming279/p/6101628.html

Log analysis, chapter 3: pre-install preparation and system initialisation  http://www.cnblogs.com/xiaoming279/p/6101951.html

(have not read beyond this point yet)

Log analysis, chapter 4: installing filebeat
http://www.cnblogs.com/xiaoming279/p/6112715.html