Oh, I see. Yes, I meant to check if the data is in the same index, but if it is, then the problem with duplicated fields is clearly in the parsers. They are usually located in /etc/logstash/conf.d/<parser_name>/, but you can also access them from the UI, via the Network Probe UI module.
If the data is not sensitive, could you share a screenshot or an example of the issue? Do I understand correctly that the field names themselves are not duplicated, but the same data appears in multiple different fields?
If the data is duplicated within a single document, then it might be the case that parser plugins (like grok, for example, which is a plugin used in the filter section of the parsers) are extracting the same data multiple times. This can happen if the first grok operates on the default message field to extract some information, and later a second grok does something similar. That would result in the same data showing up in multiple fields.
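To make that concrete, here is a minimal sketch of a filter section where two grok filters both run on the raw message and capture the same value into different fields. The pattern and field names (client_ip, src_ip) are hypothetical, just to illustrate the shape of the problem:

```
filter {
  # First grok: extracts an IP from the raw message
  grok {
    match => { "message" => "%{IP:client_ip} %{GREEDYDATA:rest}" }
  }
  # Second grok: also runs on "message" and captures the same IP
  # into a different field -- the value now appears twice in the
  # final document, once as client_ip and once as src_ip
  grok {
    match => { "message" => "%{IP:src_ip} %{GREEDYDATA}" }
  }
}
```

If your parsers look something like this, that second match is usually the culprit.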
If that is the case, then it's a little annoying, but not a complicated issue. It comes down to understanding how a document travels through a pipeline* (or multiple pipelines, as one can send data to another) and which plugins are used to parse the data and form the document that is then stored in the database.
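For the "one pipeline can send data to another" part, Logstash supports pipeline-to-pipeline communication. A sketch of what that can look like (the pipeline IDs and paths here are assumptions, not your actual setup):

```
# pipelines.yml -- two pipelines defined side by side
- pipeline.id: intake
  path.config: "/etc/logstash/conf.d/intake/*.conf"
- pipeline.id: enrich
  path.config: "/etc/logstash/conf.d/enrich/*.conf"
```

```
# In the intake pipeline's config: forward events to "enrich"
output {
  pipeline { send_to => ["enrich"] }
}
```

```
# In the enrich pipeline's config: receive those events
input {
  pipeline { address => "enrich" }
}
```

This matters for debugging duplicated fields, because a field can be added in one pipeline and then added again by a filter in a downstream pipeline, so it's worth tracing the whole path an event takes.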
* A pipeline is a parsing thread, responsible for parsing a specific type of data. Pipelines are built from multiple plugins, like mutate, grok, csv, kv and more.
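As a side note, once you find which field is the redundant copy, the mutate plugin mentioned above can drop it. A minimal sketch, assuming the duplicated field is called src_ip (a hypothetical name):

```
filter {
  # Keep one canonical field and remove the redundant duplicate
  mutate {
    remove_field => ["src_ip"]
  }
}
```

That said, the cleaner long-term fix is usually to adjust the grok patterns so the data is only extracted once in the first place.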