Logstash
Logstash | Elastic
Logstash Reference | Elastic
Input plugins | Logstash Reference | Elastic
elastic/logstash: logstash - transport and process your logs, events, or other data
How Logstash Works | Logstash Reference | Elastic
The Logstash event processing pipeline has three stages: inputs → filters → outputs. Inputs generate events, filters modify them, and outputs ship them elsewhere. Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.
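A minimal sketch of the three stages in a pipeline config, with a codec on the output (field names here are illustrative, not from any referenced article):

```conf
input {
  stdin { }                        # inputs generate events
}
filter {
  mutate { add_field => { "stage" => "filtered" } }   # filters modify them
}
output {
  stdout { codec => rubydebug }    # outputs ship them; the codec encodes at the edge
}
```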
Introduction to Logstash: Getting Started | Cybera
A Practical Introduction to Logstash | Elastic
The evolving story about the Logstash File Input | Elastic
A History of Logstash Output Workers | Elastic
Logstash Persistent Queue | Elastic ditched the need for Redis
Just Enough Redis for Logstash | Elastic
Little Logstash Lessons: Handling Duplicates | Elastic custom IDs
The future of Log4j input in Logstash | Elastic write to local file and use Filebeat instead
Archiving your event stream with Logstash | Elastic archive log to S3
Little Logstash Lessons - Part I: Using grok and mutate to type your data | Elastic
Little Logstash Lessons: Using Logstash to help create an Elasticsearch mapping template | Elastic
Introducing Logstash Dissect | Elastic
Do you grok Grok? | Elastic
Questions about Self Monitoring Systems blog post - Beats - Discuss the Elastic Stack (configuring Logstash as a monitoring system)
Install and run
Logstash is easy to install and run from the command line, which is very useful when trying out filters.
export LOGSTASH_PACKAGE=logstash-5.5.1.tar.gz
curl -O https://artifacts.elastic.co/downloads/logstash/${LOGSTASH_PACKAGE}
tar xzf ${LOGSTASH_PACKAGE}
See elk-docker/Dockerfile at master · spujadas/elk-docker
bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
bin/logstash -f config.conf
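Before running a config file, it can be validated first; a sketch using flags that exist in Logstash 5.x:

```shell
# check the config for syntax errors, then exit
bin/logstash -f config.conf --config.test_and_exit

# run it, reloading the pipeline automatically when the file changes
bin/logstash -f config.conf --config.reload.automatic
```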
Parse nginx logs from stdin or Beats:
input {
  stdin { }
  beats { port => "5043" }
}
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{HTTPD_COMBINEDLOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
}
output {
  stdout { codec => rubydebug }
}
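Grok extracts every field as a string; assuming the pattern captured a numeric field such as bytes, a mutate/convert step can be added to type it before indexing (a sketch):

```conf
filter {
  mutate {
    convert => { "bytes" => "integer" }
  }
}
```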
Heartbeat and ES with credentials:
input {
  heartbeat {
    interval => 5
    message => 'Hello from Logstash 💓'
  }
}
output {
  elasticsearch {
    hosts => [ 'elasticsearch' ]
    user => 'elastic'
    password => 'changeme'
  }
}
Plugins
Working with plugins | Logstash Reference | Elastic
Logstash Plugins
Input plugins | Logstash Reference | Elastic
Output plugins | Logstash Reference | Elastic
Codec plugins | Logstash Reference | Elastic
Filter Plugins
Filter plugins | Logstash Reference | Elastic
Date filter plugin | Logstash Reference | Elastic
Grok
Grok filter plugin | Logstash Reference | Elastic
Grok Debugger
Grok Constructor
The syntax for a grok pattern is %{SYNTAX:SEMANTIC}.
SYNTAX names a pattern, which is a regular expression supported by Oniguruma. You can define your own patterns by putting them in patterns_dir.
Logstash provides a bunch of patterns by default: see logstash-patterns-core/patterns.
Examples:
%{NGINXACCESS}
%{SYSLOGBASE} %{DATA:message}
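A custom pattern defined in a file under patterns_dir can then be referenced like a built-in; a sketch (the directory path and pattern name are illustrative):

```conf
# ./patterns/extra contains the line:
#   POSTFIX_QUEUEID [0-9A-F]{10,11}
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{POSTFIX_QUEUEID:queue_id}" }
  }
}
```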
Search with tags:_grokparsefailure to look for events that failed to parse.
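Events tagged _grokparsefailure can also be routed to a separate output with a conditional; a sketch (the file path is illustrative):

```conf
output {
  if "_grokparsefailure" in [tags] {
    file { path => "/var/log/logstash/failed_events.log" }
  }
}
```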
Docker
docker run -it --rm logstash logstash -e 'input { stdin { } }
filter {
  grok {
    match => {
      "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    }
  }
}
output { stdout { codec => rubydebug } }'
55.3.244.1 GET /index.html 15824 0.043
{
"duration" => "0.043",
"request" => "/index.html",
"@timestamp" => 2017-08-22T04:23:42.127Z,
"method" => "GET",
"bytes" => "15824",
"@version" => "1",
"host" => "kylee-arch",
"client" => "55.3.244.1",
"message" => "55.3.244.1 GET /index.html 15824 0.043"
}