Spark connector for Newts

Abstract

One area of OpenNMS that does not work well when using Cassandra is reporting. By reports, I mean Statsd, Reportd, and all the Jasper-based reports. This is due to how data is retrieved from Cassandra through the measurements API.

There are better ways to retrieve and process a massive amount of data from Cassandra, and here is where Spark comes in.

Imagine you’re trying to find the top 10 most-used interfaces on a Cassandra cluster with millions of resources and hundreds of gigabytes of data per node. A query like that will simply fail with the current reporting solutions.
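To make the idea concrete, here is a minimal plain-Python sketch of the aggregation such a report needs. In Spark this would typically be a `reduceByKey` followed by `takeOrdered`, distributed across the cluster; the sample data and function names below are illustrative, not part of Newts or OpenNMS.

```python
from collections import defaultdict
from heapq import nlargest

# Hypothetical samples: (interface_name, octets) pairs, as a Spark job
# would see them after reading the raw Newts samples from Cassandra.
samples = [
    ("eth0", 1200), ("eth1", 300), ("eth0", 800),
    ("bond0", 5000), ("eth1", 700), ("bond0", 2500),
]

def top_n_interfaces(samples, n=2):
    """Aggregate total octets per interface and keep the top N.
    In Spark: reduceByKey(add) followed by takeOrdered(n)."""
    totals = defaultdict(int)
    for iface, octets in samples:
        totals[iface] += octets
    return nlargest(n, totals.items(), key=lambda kv: kv[1])

print(top_n_interfaces(samples))  # -> [('bond0', 7500), ('eth0', 2000)]
```

The point is that this reduce-then-rank shape parallelizes naturally in Spark, whereas the measurements API would have to pull every series through OpenNMS first.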

Likewise, imagine you’re trying to calculate baselines, or run some complex trending operation. That could also take a long time to complete, and would consume OpenNMS resources while it runs.

Thinking out loud, you could even use MLlib (Spark’s machine learning library) to implement “Smart Thresholds” or “Smart Trending”. These could replace the statistical implementation based on R, and the current online threshold processing, with a configuration-free solution that tells you when abnormal behavior is detected, or when something is about to fail due to hard limits like disk space, connection throughput, etc.
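As a rough illustration of what “configuration-free” could mean, the sketch below flags points that deviate strongly from the series itself (a z-score test), instead of comparing against a user-configured threshold. This is deliberately simpler than anything MLlib offers; a real implementation would train an MLlib model over the distributed samples. All names and data here are assumptions for illustration.

```python
import statistics

def abnormal_points(values, z_limit=3.0):
    """Flag samples more than z_limit standard deviations from the
    series mean -- a zero-configuration alternative to static
    thresholds. A real job could train an MLlib model instead."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # perfectly flat series: nothing is abnormal
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_limit]

# Hypothetical traffic series with one obvious anomaly at index 7.
traffic = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9, 11, 10, 12, 9, 10, 11]
print(abnormal_points(traffic))  # -> [7]
```

Nothing here needed a per-resource threshold definition; the series defines its own notion of “abnormal”, which is the property we would want from a smart-threshold feature.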

Cassandra is well known for being extremely fast at writing data, but it can be slow when reading data back. This is why any third-party integration that requires reading lots of data from Cassandra should be implemented using Spark (Sprint and IronPort have use cases for this; in fact, Spark could be used to create a Teradata exporter).

The easiest component to replace with Spark is Statsd, for obvious reasons, so this proposal will focus on how to do that. The approach can then be extended to JasperReports and any other custom reports or calculations in which you might be interested.

Another interesting idea would be to implement a spike-hunter that replaces spikes with NaN, or simply removes the offending sample (depending on what the user wants).
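A minimal sketch of that spike-hunter, using the median absolute deviation (MAD) as the spike test so a single huge outlier cannot distort its own detection. The MAD heuristic, the `factor` parameter, and all names are illustrative choices, not an existing OpenNMS or Newts API; in Spark this would run as a per-series transformation.

```python
import math
import statistics

def despike(values, factor=5.0, remove=False):
    """Replace (or drop, if remove=True) samples that sit more than
    `factor` median-absolute-deviations away from the series median."""
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values)
    if mad == 0:
        return list(values)  # too flat to judge; leave untouched

    def is_spike(v):
        return abs(v - median) / mad > factor

    if remove:
        return [v for v in values if not is_spike(v)]
    return [math.nan if is_spike(v) else v for v in values]

series = [10, 12, 11, 500, 10, 13, 11]   # 500 is an obvious spike
print(despike(series))                   # spike replaced with NaN
print(despike(series, remove=True))      # spike dropped entirely
```

The `remove` flag mirrors the two behaviors mentioned above: substituting NaN keeps the timestamps aligned for plotting, while removing the sample is cleaner for downstream aggregation.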

Folks

  • Alejandro Galue