The vast amount of machine-generated data created by digital business operations can be overwhelming to those striving to extract value from it. With increasingly large and complex applications, growing consumer demand for a smoother digital experience, and the rise of cloud and microservices architectures, the mighty rivers of observability data now seem to be constantly rushing in.

One company looking to channel the deluge of observability data into a manageable stream is Mezmo, formerly known as LogDNA. The company recently unveiled its new Observability Pipeline, a solution designed to centralize the flow of observability data from various sources while adding context and routing it to the appropriate destinations.

“Teams across the organization need to easily centralize data from multiple sources, transform it to drive actionability, protect their budget, and empower everyone to manage that data with proper oversight. This gives them actionable insights and triggers they can use to improve agility, efficiency, and security,” the company said in a blog post.

Last year, the company released a report which found that 74% of companies struggle to achieve true observability despite significant investments in tools, with 38% admitting to spending $300,000 or more per year. Mezmo says it realized it was uniquely positioned to address observability issues through its technical foundation, a Kubernetes-based log management SaaS that IBM incorporated into its global cloud computing framework. The company says its new pipeline integrates functionality from its log management platform, including search, alerting, and visualization features.

Mezmo says the Observability Pipeline ingests and routes data from any source (cloud platforms, Syslog, Fluentd, Logstash, etc.) and delivers it to destinations like Splunk, S3, and Mezmo's own log analysis platform. Pipeline control features simplify managing multiple sources and destinations while protecting against uncontrollable data flows, according to Mezmo. The company says its solution allows teams to reduce the number of tools needed to manage and ingest logs while increasing visibility and access, enabling better cost management and forecasting.
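To illustrate the source-to-destination routing idea described above, here is a minimal Python sketch of a fan-out pipeline. The `Pipeline` class and its methods are invented for illustration and are not Mezmo's actual API; any real pipeline would add buffering, batching, and delivery guarantees.

```python
from collections import defaultdict

class Pipeline:
    """Toy illustration of source-to-destination routing in an
    observability pipeline. Names are hypothetical, not Mezmo's API."""

    def __init__(self):
        # Map each source name to the list of destination sinks it feeds.
        self.routes = defaultdict(list)

    def add_route(self, source, destination):
        self.routes[source].append(destination)

    def ingest(self, source, event):
        # Fan each event out to every destination configured for its source,
        # tagging it with the source name for downstream context.
        for destination in self.routes[source]:
            destination.append({"source": source, **event})

# Example: syslog events are delivered to both an archive sink
# (think S3) and an analysis sink (think Splunk or Mezmo).
archive, analysis = [], []
pipeline = Pipeline()
pipeline.add_route("syslog", archive)
pipeline.add_route("syslog", analysis)
pipeline.ingest("syslog", {"msg": "connection refused"})
```

One event ingested from a single source thus reaches every configured destination, which is the core of the "one stream in, many tools out" consolidation the company describes.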


Observability data isn’t always in a usable format for those who need it, or it can lack context or be difficult to locate, slowing important decisions. Mezmo says its pipeline includes multiple processors that enrich, transform, filter, and encrypt data for various use cases, surfacing the data most relevant to the job. The solution ostensibly reduces log volume by removing duplicate events and filtering specific log types out of the stream, which can reduce costs. Teams can also optimize data streams and transform data into consumable formats for each destination. Mezmo says data from aggregated sources can be searched from a user-friendly interface, and automatic and custom analyzers, enrichment, correlation, and alerting capabilities can make that data more actionable.
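The volume-reduction step described above, dropping duplicates and filtering out low-value log types, can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not Mezmo's implementation; the function name and event schema are assumptions.

```python
def dedupe_and_filter(events, drop_levels=frozenset({"DEBUG"})):
    """Reduce log volume before forwarding: drop events at low-value
    levels entirely, and pass each distinct (level, message) pair
    through only once. A hypothetical sketch, not Mezmo's code."""
    seen = set()
    forwarded = []
    for event in events:
        if event["level"] in drop_levels:
            continue  # filter a whole log type out of the stream
        key = (event["level"], event["msg"])
        if key in seen:
            continue  # remove duplicate events
        seen.add(key)
        forwarded.append(event)
    return forwarded

raw = [
    {"level": "INFO", "msg": "service started"},
    {"level": "INFO", "msg": "service started"},  # duplicate
    {"level": "DEBUG", "msg": "heartbeat"},       # filtered level
    {"level": "ERROR", "msg": "disk full"},
]
kept = dedupe_and_filter(raw)
```

Here four raw events collapse to two forwarded events, which is the kind of reduction that lowers ingestion and storage costs at downstream tools billed by volume.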

“Data provides a competitive advantage, but organizations struggle to extract real value from it. First-generation observability data pipelines focus primarily on data movement and control, reducing the amount of data collected, but failing to deliver value. Data preprocessing is a great first step,” said Mezmo CEO Tucker Callaway. “We’ve built on this foundation and our success in making log data actionable to create an intelligent observability data pipeline that enriches and correlates large volumes of data in motion to provide additional context and drive action.”

Mezmo announced its rebranding from LogDNA last May, stating that its new corporate identity reflects its growing capabilities and vision for observability.

Related articles:

Observability and AIOps tools increase with Big MELT data

LogDNA Research Shows 74% of Companies Fail to Achieve True Observability

Companies are drowning in observability data, says Dynatrace