With OpenNMS, it is possible to monitor services and devices in distributed environments. We differentiate between two use cases: “distributed monitoring” and “perspective monitoring”.
Application monitored from a different network perspective
The first scenario applies when you want to test whether a provided service can be reached from a remote location. Application Perspective Monitoring (APM) gives you an additional view from a different network perspective. As an example, in the drawing below you provide a service (blue circle) in a central location and monitor it from your central OpenNMS instance.
You provision a Node with the Service in your central OpenNMS instance. For each remote location, you create a Location and deploy a Minion there.
This lets you answer questions like: what are the quality and availability of a service as seen from a different network perspective?
- The service in the central location needs to be accessible from the remote locations
- The node gets additional response time data from the remote location
- You get dedicated service events when the service is unavailable in the remote location
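A Minion is tied to its Location through its controller configuration, typically `etc/org.opennms.minion.controller.cfg` in the Minion home directory. A minimal sketch, where the Location name, id, and hostname are placeholders:

```properties
# Name of the remote Location this Minion serves (placeholder)
location = Remote-Office
# Unique identifier for this Minion instance (placeholder)
id = minion-remote-office-01
# REST endpoint of the central OpenNMS instance (placeholder host)
http-url = http://opennms-core.example.org:8980/opennms
# ActiveMQ broker of the central OpenNMS instance (default port 61616)
broker-url = failover:tcp://opennms-core.example.org:61616
```

After changing these settings, restart the Minion so it registers with the central instance under the configured Location.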
The second scenario applies when you want to monitor services in network locations your OpenNMS server can’t reach easily, or when you need a more resilient monitoring infrastructure. The Minion is installed in a Location and acts as a proxy for your OpenNMS instance. It can act as a Syslog, SNMP trap, and flow receiver, and it executes poller tasks. Communication between OpenNMS and the Minion uses ActiveMQ by default. Deploying multiple Minions in the same Location gives you more resilience.
A Minion doesn’t have its own scheduler; polling tasks are still scheduled on the central OpenNMS instance. It is not a way to offload polling or data collection work. The Minion gives you firewall-friendly access to all the network management protocols in your remote location.
A Minion can also help you deal with overlapping IP address space: using a separate Location for each address space ensures that services are monitored against the right network.
- Ensure your OpenNMS server and the Minion can communicate, by default via ActiveMQ on port 61616/TCP, or alternatively via Apache Kafka
- Deploy a Minion in each remote network and define a Location
- Each Node and its Services are provisioned in the central OpenNMS server and assigned to a Location.
- Polling tasks for a Node assigned to a Location are forwarded to a Minion in that Location and are not run from the OpenNMS server itself.
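The provisioning steps above can be sketched as a requisition entry; the foreign source, node label, IP address, and Location name below are placeholders:

```xml
<model-import xmlns="http://xmlns.opennms.org/xsd/config/model-import"
              foreign-source="branch-office">
  <!-- The location attribute ties the Node to the remote Location,
       so its poller tasks are forwarded to a Minion there -->
  <node foreign-id="www-branch" node-label="www.branch.example.org"
        location="Remote-Office">
    <interface ip-addr="192.168.10.10" snmp-primary="P">
      <monitored-service service-name="ICMP"/>
      <monitored-service service-name="HTTP"/>
    </interface>
  </node>
</model-import>
```

A Node without a `location` attribute is monitored from the central OpenNMS server’s default Location instead.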
Terminology and History
The Remote Poller and the Minion are two different components in OpenNMS. The Remote Poller was developed first and has now been fully replaced by the Minion.
The Location you see in the Web UI and assign to Nodes during provisioning relates to the Minion. If you come from an older OpenNMS installation, note that the XML file defining Monitoring Locations belongs to the old Remote Poller and is going away.
In the article Good migration plan from remote-poller to perspective monitoring (preventing multiple copies of nodes)? - #3 by indigo you can find more detailed hints on how to use the new APM functionality.