Easy logging refinement with FlowG
Logs are an essential part of any monitoring infrastructure. They are the very first thing a system operator or developer will reach for when debugging past or ongoing issues. As the infrastructure of your team or organization grows, more and more applications produce logs, in various locations and shapes. In our use case, we have multiple clusters, each running multiple virtual machines, each running many containers, some of which run multiple processes. Some logs are structured, formatted as JSON or logfmt; others are unstructured text. Some applications integrate with OpenTelemetry; most do not.

Solutions have been developed to aggregate and ingest those logs, such as Splunk, Datadog, the Elastic stack, or OpenObserve, but all of them require careful processing of the logs before sending them to said services: categorization (to identify where a log comes from), refinement (parsing the log to give it more structure or metadata, or to remove unwanted information), filtering (to reduce storage costs), and anonymization (to remove sensitive information that must not be persisted). This can be done with tools such as Logstash, but the setup is complex, time consuming, and not very flexible.

This is where FlowG comes in. It provides a low-code environment with flow-based pipelines and scripts written in the Vector Remap Language to easily categorize, refine, filter, and anonymize your logs, then store them in distinct destinations or forward them to your favorite third-party service. In this talk, we will present FlowG's feature set and showcase a typical infrastructure where FlowG shines.
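As a taste of what such a pipeline script looks like, here is a minimal VRL sketch covering the four processing steps named above. The field names (.message, .level, .email, .credit_card) and the origin tag are illustrative assumptions, not FlowG's actual record schema, and how an aborted event is handled depends on how the pipeline is configured:

# Refine: parse the raw payload into structured fields
# (assumes the record carries a JSON string in .message)
. = parse_json!(string!(.message))

# Categorize: tag the record with its origin (hypothetical label)
.origin = "payments-cluster"

# Filter: drop noisy debug entries to reduce storage costs
if .level == "debug" {
    abort
}

# Anonymize: remove sensitive fields that must not be persisted
del(.email)
del(.credit_card)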
Speaker
David Delassus (Link Society)

David has been a self-taught developer and system operator since he was a kid. He developed an interest in monitoring over the course of his professional career. He is currently working for the European Commission as a SecOps engineer, which is where the need for FlowG emerged.