Simple OpenNMS/Minion Environment Using the Embedded ActiveMQ in Azure

*Sharing this article by Alejandro Galue, Senior Manager, Services and Support at The OpenNMS Group.*

For learning purposes, this lab starts an OpenNMS instance in the cloud (on Azure) and two Minions on your machine (via Multipass), using the embedded ActiveMQ for communication between them.

The lab doesn’t cover security (in terms of encryption), which is crucial if you ever want to expose AMQ to the Internet.

Keep in mind that nothing prevents you from skipping the cloud provider and doing everything with Multipass (or VirtualBox, Hyper-V, or VMware). The reason to use a cloud provider is to prove that OpenNMS can monitor unreachable devices via Minion. Similarly, you could use any other cloud provider instead of Azure; however, I won't explain how to port the solution here.

Requirements

Make sure to log into Azure using az login prior to creating the VM.
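For example (selecting a subscription is only necessary if your account has access to more than one; the subscription name below is a placeholder):

az login
# Optional: select the subscription to use (replace with your subscription name or ID)
az account set --subscription "My Subscription"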

If you have a restricted account in Azure, make sure your Azure AD account has the Network Contributor and Virtual Machine Contributor roles on the resource group in which you would like to create the VM. Of course, Owner or Contributor at the resource group level also works.
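If those roles still need to be granted, an administrator with permission to assign roles could do something like the following (a sketch using a hypothetical user and the variables defined in the next section):

az role assignment create --assignee "user@example.com" \
 --role "Virtual Machine Contributor" \
 --resource-group $RG_NAME
az role assignment create --assignee "user@example.com" \
 --role "Network Contributor" \
 --resource-group $RG_NAME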

Tune the VM sizes accordingly.
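To review the sizes available in your region before choosing one (using the location defined in the next section):

az vm list-sizes -l eastus -o table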

Create common Environment Variables

export RG_NAME="OpenNMS"
export RG_LOCATION="eastus"
export ONMS_VM_NAME="onms01"
export VM_USERNAME="agalue"
export VM_PASSWORD="0p3nNM5Rules;"
export ONMS_HEAP_SIZE="4096" # Expressed in MB; must fit the chosen VM size
export MINION_LOCATION="Durham"
export MINION_HEAP_SIZE="1G" # Must fit VM RAM
export MINION_ID1="minion01"
export MINION_ID2="minion02"

Feel free to change the content if needed.

Do not confuse the Azure Location (or Region) with the Minion Location; the two are unrelated.

Create the Azure Resource Group

az group create -n $RG_NAME -l $RG_LOCATION

This is a necessary step, as every resource in Azure must belong to a resource group and a location.

Create an Azure VM for OpenNMS

Create a cloud-init script to deploy OpenNMS in Ubuntu:

cat <<EOF > /tmp/opennms.yaml
#cloud-config
package_upgrade: true
apt:
  preserve_sources_list: true
  sources:
    opennms:
      source: deb https://debian.opennms.org stable main main
packages:
  - opennms
  - opennms-webapp-hawtio
bootcmd:
  - curl -s https://debian.opennms.org/OPENNMS-GPG-KEY | apt-key add -
runcmd:
  - systemctl --now enable postgresql
  - sudo -u postgres createuser opennms
  - sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'postgres';"
  - sudo -u postgres psql -c "ALTER USER opennms WITH PASSWORD 'opennms';"
  - sed -r -i 's/password=""/password="postgres"/' /etc/opennms/opennms-datasources.xml
  - sed -r -i '/0.0.0.0:61616/s/([<][!]--|--[>])//g' /etc/opennms/opennms-activemq.xml
  - sed -r -i '/enabled="false"/{\$!{N;s/ enabled="false"[>]\n(.*OpenNMS:Name=Syslogd.*)/>\n\1/}}' /etc/opennms/service-configuration.xml
  - /usr/share/opennms/bin/runjava -s
  - /usr/share/opennms/bin/install -dis
  - echo 'JAVA_HEAP_SIZE=$ONMS_HEAP_SIZE' > /etc/opennms/opennms.conf
  - systemctl --now enable opennms
EOF

The above installs the latest OpenJDK 11, the latest PostgreSQL, and the latest OpenNMS Horizon. I added the most basic configuration for PostgreSQL to work with authentication. The embedded ActiveMQ is enabled, as well as Syslogd.

Create an Ubuntu VM with 2 cores and 8 GB of RAM for OpenNMS (that's what you'd get with Standard_D2s_v3):

az vm create --resource-group $RG_NAME --name $ONMS_VM_NAME \
 --size Standard_D2s_v3 \
 --image UbuntuLTS \
 --admin-username "$VM_USERNAME" \
 --admin-password "$VM_PASSWORD" \
 --public-ip-address-allocation static \
 --custom-data /tmp/opennms.yaml

By default, the above creates a VNet with a default subnet within it, and associates a public IP address with the NIC created for the VM. I've chosen password-based access for simplicity, but feel free to use SSH keys if needed.

Once finished, the above command should show the public IP assigned to the VM, which you will need to configure the Minions. Here is how to obtain it:

az vm show -d -g $RG_NAME -n $ONMS_VM_NAME --query publicIps -o tsv

Keep in mind that the cloud-init process starts once the VM is running, meaning you should wait about 5 minutes after az vm create finishes to see OpenNMS up and running.

In case there is a problem, SSH into the VM using the public IP and the provided credentials, and check /var/log/cloud-init-output.log to verify the progress and status of the cloud-init execution.
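You can also tail that log over SSH without an interactive session, for example:

ONMS_IP=$(az vm show -d -g $RG_NAME -n $ONMS_VM_NAME --query publicIps -o tsv)
ssh $VM_USERNAME@$ONMS_IP tail -n 50 /var/log/cloud-init-output.log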

Allow access to OpenNMS

Open ports to access ActiveMQ and the OpenNMS WebUI:

az vm open-port -g $RG_NAME -n $ONMS_VM_NAME --port 61616 --priority 100
az vm open-port -g $RG_NAME -n $ONMS_VM_NAME --port 8980 --priority 200

The above is one way to do it. Alternatively, you could modify the NSG created for the VM.
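Once the ports are open, one way to wait until OpenNMS is fully up is to poll the WebUI from your machine (a minimal sketch, assuming the default web context):

ONMS_IP=$(az vm show -d -g $RG_NAME -n $ONMS_VM_NAME --query publicIps -o tsv)
until curl -sf "http://$ONMS_IP:8980/opennms/login.jsp" >/dev/null; do
  echo "Waiting for OpenNMS..."
  sleep 10
done
echo "OpenNMS is up"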

Create Minion VMs using multipass

After verifying that OpenNMS is up and running, you can proceed to create the Minions.

The first step is to create the cloud-init configuration for the first Minion on your machine:

ONMS_IP=$(az vm show -d -g $RG_NAME -n $ONMS_VM_NAME --query publicIps -o tsv)
cat <<EOF > /tmp/$MINION_ID1.yaml
#cloud-config
package_upgrade: true
write_files:
  - owner: root:root
    path: /tmp/org.opennms.minion.controller.cfg
    content: |
      location=$MINION_LOCATION
      id=$MINION_ID1
      http-url=http://$ONMS_IP:8980/opennms
      broker-url=failover:tcp://$ONMS_IP:61616
apt:
  preserve_sources_list: true
  sources:
    opennms:
      source: deb https://debian.opennms.org stable main main
packages:
  - opennms-minion
bootcmd:
  - curl -s https://debian.opennms.org/OPENNMS-GPG-KEY | apt-key add -
runcmd:
  - mv -f /tmp/org.opennms.minion.controller.cfg /etc/minion/
  - sed -i -r 's/# export JAVA_MIN_MEM=.*/export JAVA_MIN_MEM="$MINION_HEAP_SIZE"/' /etc/default/minion
  - sed -i -r 's/# export JAVA_MAX_MEM=.*/export JAVA_MAX_MEM="$MINION_HEAP_SIZE"/' /etc/default/minion
  - /usr/share/minion/bin/scvcli set opennms.http admin admin
  - /usr/share/minion/bin/scvcli set opennms.broker admin admin
  - systemctl --now enable minion
EOF

Then, start the new Minion via multipass with one core and 2GB of RAM:

multipass launch -c 1 -m 2G -n $MINION_ID1 --cloud-init /tmp/$MINION_ID1.yaml

Optionally, create a cloud-init configuration for a second Minion on your machine based on the work we did for the first one (same location):

sed "s/$MINION_ID1/$MINION_ID2/" /tmp/$MINION_ID1.yaml > /tmp/$MINION_ID2.yaml

Then, start the second Minion via multipass:

multipass launch -c 1 -m 2G -n $MINION_ID2 --cloud-init /tmp/$MINION_ID2.yaml

In case there is a problem, access the VM (e.g., multipass shell $MINION_ID1) and check /var/log/cloud-init-output.log to verify the progress and status of the cloud-init execution.
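You can also run the checks non-interactively, for instance:

multipass exec $MINION_ID1 -- tail -n 20 /var/log/cloud-init-output.log
multipass exec $MINION_ID1 -- systemctl is-active minion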

Test

The location name is Durham (a.k.a. $MINION_LOCATION), and you should see the Minions registered at that location in OpenNMS.
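One way to confirm this is through the OpenNMS ReST API (a sketch assuming the default admin credentials; it lists the Minions known to OpenNMS):

ONMS_IP=$(az vm show -d -g $RG_NAME -n $ONMS_VM_NAME --query publicIps -o tsv)
curl -u admin:admin -H "Accept: application/json" "http://$ONMS_IP:8980/opennms/rest/minions"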

SSH into the OpenNMS server and create a requisition with a node in the same network as the Minion VMs, and make sure to associate it with the appropriate location. For instance:

/usr/share/opennms/bin/provision.pl requisition add Test
/usr/share/opennms/bin/provision.pl node add Test srv01 srv01
/usr/share/opennms/bin/provision.pl node set Test srv01 location Durham
/usr/share/opennms/bin/provision.pl interface add Test srv01 192.168.0.40
/usr/share/opennms/bin/provision.pl interface set Test srv01 192.168.0.40 snmp-primary P
/usr/share/opennms/bin/provision.pl requisition import Test

Make sure to replace 192.168.0.40 with the IP of a working server on your network (reachable from the Minion VMs), and do not forget to use the same location as defined in $MINION_LOCATION.

Please keep in mind that the Minions are VMs on your machine. 192.168.0.40 is the IP of my machine, which is why the Minions can reach it (and vice versa). To access an external machine on your network, make sure to define static routes on that machine so it can reach the Minions through your machine (assuming you're running Linux or macOS).
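For instance, on a Linux machine in your network, and assuming Multipass assigned the 192.168.75.0/24 subnet to the Minions (as in the udpgen examples below) while your machine's LAN IP is 192.168.0.40, a hypothetical static route would look like this (your machine also needs IP forwarding enabled):

sudo ip route add 192.168.75.0/24 via 192.168.0.40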

OpenNMS, which runs in Azure and has no direct access to 192.168.0.40, should be able to collect data and monitor that node through either of the Minions. In fact, you can stop one of them, and OpenNMS will continue monitoring the node.

To test asynchronous messages, you can send SNMP traps or Syslog messages to one of the Minions. Usually, you would put a load balancer in front of the Minions and use its IP when sending messages from the monitored devices. Alternatively, you can use udpgen for this purpose.

The machine that will be running udpgen must be part of the OpenNMS inventory. Find the IP of the Minion using multipass list, then execute the following from the machine added as a node above (the examples assume the IP of the Minion is 192.168.75.16):

To send SNMP Traps:

udpgen -h 192.168.75.16 -x snmp -r 1 -p 1162

To send Syslog Messages:

udpgen -h 192.168.75.16 -x syslog -r 1 -p 1514

The C++ version of udpgen only works on Linux. If you're on macOS or Windows, you can use the Go version of it.

The Hawtio UI in OpenNMS can help visualize the Camel and ActiveMQ internals, to understand what's circulating between OpenNMS and the Minions.

For OpenNMS, Hawtio is available at http://$ONMS_IP:8980/hawtio (use the ActiveMQ tab) if the opennms-webapp-hawtio package was installed (which is the case with the cloud-init template used here).

For the Minions, Hawtio is available at http://$MINION_IP1:8181/hawtio and http://$MINION_IP2:8181/hawtio, respectively (use the Camel tab).

Add a Load Balancer in front of the Minions (Optional)

In production, when you have multiple Minions per location, it is good practice to put a load balancer in front of them so that the devices can use a single destination for SNMP Traps, Syslog messages, and Flows.

The following creates a basic LB using nginx through multipass for SNMP Traps (with a listener on port 162) and Syslog Messages (with a listener on port 514):

MINION_IP1=$(multipass info $MINION_ID1 | grep IPv4 | awk '{print $2}')
MINION_IP2=$(multipass info $MINION_ID2 | grep IPv4 | awk '{print $2}')
cat <<EOF > /tmp/nginx.yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
write_files:
  - owner: root:root
    path: /etc/nginx/nginx.conf
    content: |
      user www-data;
      worker_processes auto;
      pid /run/nginx.pid;
      include /etc/nginx/modules-enabled/*.conf;
      events {
        worker_connections 768;
      }
      stream {
        upstream syslog_udp  {
          server $MINION_IP1:1514;
          server $MINION_IP2:1514;
        }
        upstream trap_udp  {
          server $MINION_IP1:1162;
          server $MINION_IP2:1162;
        }
        server {
          listen 514 udp;
          proxy_pass syslog_udp;
          proxy_responses 0;
        }
        server {
          listen 162 udp;
          proxy_pass trap_udp;
          proxy_responses 0;
        }
      }
runcmd:
  - systemctl restart nginx
EOF
multipass launch -n nginx --cloud-init /tmp/nginx.yaml
echo "Load Balancer $(multipass info nginx | grep IPv4)"

Flows are outside the scope of this test, as they require more configuration on the Minions and OpenNMS, in addition to an Elasticsearch cluster up and running with the required plugin in place.

Clean Up

When you’re done, make sure to delete the cloud resources:

az group delete -g $RG_NAME

Then clean the local resources:

multipass delete $MINION_ID1 $MINION_ID2
multipass purge

Remember to remove the nginx instance if you decided to use it.
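Following the same pattern as above:

multipass delete nginx
multipass purge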
