Good migration plan from remote-poller to perspective monitoring (preventing multiple copies of nodes)?

Has anyone migrated a working remote-poller setup (the Remote Poller is dropped in v27) to perspective monitoring?

Any good pointers are appreciated!

Or, to clarify the issue, here is a quote from an older discussion:

Can I do the same with Minion instead of using the Remote Poller?

You can do a similar thing with Minion as with the Remote Poller. You would need to install a Minion in each remote location. For every location, you then have to provision a node with its services and assign it to the location you want to monitor it from.

There are caveats:

A Node can only be in **one** location. This means you have to provision copies of the same node in each location you want to monitor it from.
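
In requisition terms, that caveat looks something like the sketch below: the same physical host provisioned twice, once per location. The foreign source, foreign IDs, location names, and addresses are all made-up placeholders, and the exact schema should be checked against your Horizon version:

```xml
<!-- Sketch: one physical host, two node copies, one per monitoring location -->
<model-import xmlns="http://xmlns.opennms.org/xsd/config/model-import"
              foreign-source="remote-sites">
  <node foreign-id="web01-berlin" node-label="web01 (from Berlin)" location="Berlin">
    <interface ip-addr="203.0.113.10" snmp-primary="P">
      <monitored-service service-name="ICMP"/>
      <monitored-service service-name="HTTP"/>
    </interface>
  </node>
  <node foreign-id="web01-munich" node-label="web01 (from Munich)" location="Munich">
    <interface ip-addr="203.0.113.10" snmp-primary="P">
      <monitored-service service-name="ICMP"/>
      <monitored-service service-name="HTTP"/>
    </interface>
  </node>
</model-import>
```

Each copy produces its own events and outages, which is exactly the bookkeeping the question below is trying to avoid.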

Is there a way to overcome this limitation and monitor a node from several sites without having to deal with multiple copies of the same node?

With Horizon 27+ we have introduced a true replacement for the Remote Poller. You need at least one Minion in each remote location; the feature is called Application Perspective Monitoring.

  1. Create an Application as a container for the services you want to monitor from multiple remote locations. [ Gear Box ] → [ Manage Applications ] → [ Add New Application ]
  2. Add just the services you want to monitor from additional locations
  3. Assign your existing locations to monitor the services

As soon as you assign the locations, the Minions are scheduled to monitor the services on the next polling cycle. No restart is required.

You get dedicated events if a service is down from the perspective of a remote location. The status is shown on the Application Status page: [ Status ] → [ Application ].

Additionally, the service detail page shows whether the service is monitored from other perspectives and what its status is.

Hope this helps.


Thanks, Ronny, for the explanation. I have now been able to migrate all but one of the recent remote pollers.

The one that does not work sits behind a web proxy… a challenge for the ActiveMQ protocol (for the time being).

We are currently working on migrating and improving our docs a bit. Regarding your problem, I can point you to two things to investigate: a) you can try the gRPC connector instead of ActiveMQ; b) if you need larger scale, @agalue has worked on a gRPC bridge in front of Apache Kafka.
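
For option (a), the gRPC strategy is configured on both sides. A minimal sketch along the lines of the Horizon 27 docs — the feature names should be verified against your version, and `opennms.example.org` is a placeholder:

```
# On the Minion: etc/featuresBoot.d/grpc.boot
# (disable the JMS-based features, enable the gRPC client)
!opennms-core-ipc-rpc-jms
!opennms-core-ipc-sink-camel
opennms-core-ipc-grpc-client

# On the Minion: etc/org.opennms.core.ipc.grpc.client.cfg
host=opennms.example.org
port=8990

# On OpenNMS: etc/featuresBoot.d/grpc.boot
opennms-core-ipc-grpc-server
```

With this in place the Minion talks to OpenNMS over a single gRPC connection (port 8990 by default) instead of ActiveMQ, which can be friendlier to proxied networks.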

I had been trying another approach: enabling ActiveMQ transport over HTTP.

It is easy to set up on the OpenNMS side in `opennms-activemq.xml` by adding a transport connector listening on the same port the Minion connects to:

```xml
<transportConnector name="openwirehttp" uri="http://0.0.0.0:61617?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
```

but the Minion side, which is configured via the `org.opennms.minion.controller` definition

```
broker-url = failover:http://FQDN-IP:61617
```

seems to lack the appropriate dependency:

```
Caused by: java.lang.ClassNotFoundException: com.thoughtworks.xstream.converters.Converter not found by org.apache.activemq.activemq-osgi [170]
	at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation( ~[?:?]
	at org.apache.felix.framework.BundleWiringImpl.access$200( ~[?:?]
	at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass( ~[?:?]
	at java.lang.ClassLoader.loadClass( ~[?:?]
```
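
For reference, the `broker-url` above is set in the Minion's controller configuration file; a sketch with placeholder values:

```
# etc/org.opennms.minion.controller.cfg (all values are placeholders)
location=Remote-Site
id=minion-remote-site-01
broker-url=failover:http://FQDN-IP:61617
```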

This is very interesting; I wasn’t aware that it is possible to use HTTP(S) transport for ActiveMQ. Maybe we have a missing dependency on our side. I’ve captured this in NMS-13068. I think this would be a very valuable enhancement.
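
Since the missing class appears to come from the XStream library (the ActiveMQ HTTP transport uses XStream for wire marshaling), one possible and entirely untested workaround would be to install the bundle manually in the Minion's Karaf console; the version below is only an example:

```
# In the Minion's Karaf console (untested sketch):
# wrap XStream as an OSGi bundle so activemq-osgi can resolve the converter classes
bundle:install -s wrap:mvn:com.thoughtworks.xstream/xstream/1.4.18
```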


Hey gs0800! Hope you’ve had a good few months 🙂

Wondering how the remote poller migration to APM has been treating you? Were you able to bring that last one online?