Logstash config for ELK stack explained
In my previous blog post about installing the ELK stack without sudo access[1], one of the commenters asked me to explain the Logstash config in more detail. For ease of reference, the Logstash config is reproduced below:
input {
  redis {
    host => "127.0.0.1"
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "elasticsearch"
  }
}
I am going to break down the config piecemeal. Lines 1-8 are the input to Logstash. In this config, Logstash expects input from a Redis server. The server in this case is expected to be running on the same machine as Logstash, but it could very well be on its own server. The host config tells Logstash the IP address of the Redis server to connect to. Notice that the port of the Redis server is not provided; that is because Logstash connects to Redis on the default port, 6379. If your Redis server is running on a different port, you need to provide the port config like so: port => 1234.
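For example, with Redis listening on port 1234 (an assumed value, just for illustration), the whole input block would look like:
input {
  redis {
    host => "127.0.0.1"
    # only needed when Redis is not on the default port 6379
    port => 1234
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}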
The type config is used primarily for filter activation. In case you want to activate a Logstash filter when the type is redis, you can do so. For example:
filter {
  if [type] == "redis" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}
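The metrics filter in this example counts incoming events and tags the metric events it generates with metric. As a minimal sketch (not part of the original config), those tagged events could then be routed to stdout so you can watch the event rate:
output {
  # only the periodic metric events generated by the filter carry the "metric" tag
  if "metric" in [tags] {
    stdout {
      # rate_1m is the one-minute event rate computed by the metrics filter
      codec => line { format => "events rate (1m): %{[events][rate_1m]}" }
    }
  }
}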
The data_type can be either list or channel (or, for pattern subscriptions, pattern_channel). From the Logstash redis input documentation[2]:
If redis_type is list, then we will BLPOP the key. If redis_type is channel, then we will SUBSCRIBE to the key. If redis_type is pattern_channel, then we will PSUBSCRIBE to the key.
You can find the Redis documentation on BLPOP/SUBSCRIBE/PSUBSCRIBE here [2:1].
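As an aside (and not part of the original setup), if you wanted pub/sub delivery instead of a queue, both sides could use channel, in which case key names the channel that Logstash will SUBSCRIBE to:
input {
  redis {
    host => "127.0.0.1"
    type => "redis"
    # SUBSCRIBE to the "logstash" channel instead of BLPOP-ing a list
    data_type => "channel"
    key => "logstash"
  }
}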
The key config tells Logstash which Redis key to use for the BLPOP operation. The crucial part is that the key and data_type on the ELK stack (consumer) side and on the client sending the logs (publisher) side should be the same. Therefore, on your producer/client side you should have a Logstash config like:
output {
  redis {
    host => "127.0.0.1"
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}
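A producer config also needs an input section telling Logstash where the logs come from. A minimal sketch, assuming a hypothetical application log at /var/log/myapp.log (any file path of your choosing), combined with the redis output above:
input {
  file {
    # hypothetical log file, replace with your application's log path
    path => "/var/log/myapp.log"
  }
}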
Finally, the output configuration from the top section. Each log event/line is sent to two outputs: stdout and elasticsearch.
output {
  stdout { }
  elasticsearch {
    cluster => "elasticsearch"
  }
}
stdout { } is pretty straightforward. It simply means each log event will be sent to stdout. I keep this mostly for debugging purposes. You can remove it from your production Logstash config.
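If you do keep it for debugging, a slightly nicer variant (an optional tweak, not in the original config) is to pretty-print the full event with the rubydebug codec:
output {
  stdout {
    # prints each event as a structured, human-readable hash
    codec => rubydebug
  }
}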
elasticsearch is where you want your log events to go. Elasticsearch consumes your log events, stores them, and provides a way to query them, whether directly or via Kibana.
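In this config the elasticsearch output only names the cluster, which works when Elasticsearch runs on the same machine as Logstash. If your Elasticsearch node lives elsewhere, you can point the output at it; a sketch, assuming the host option of the elasticsearch output in the Logstash version used here:
output {
  elasticsearch {
    cluster => "elasticsearch"
    # IP or hostname of the Elasticsearch node (assumed value for illustration)
    host => "10.0.0.5"
  }
}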
Well, there it is. Questions/feedback welcome!
Resources:
1. Installing ELK stack without sudo access: http://www.javawithravi.com/install-elk-stack-without-sudo-access/
2. PSUBSCRIBE: http://redis.io/commands/psubscribe