How to use pmacct as a NetFlow 9 probe on Ubuntu Linux and macOS Big Sur

Collecting NetFlow with OpenNMS helps you get an idea of what type of traffic you have on your network interfaces. Configuring a NetFlow 9 exporter on a Linux system is a little bit tricky: most flow probes rely on libpcap to listen to the network traffic, and libpcap has no notion of the interface name, MAC address, or interface index, so you have to provide this information in your configuration. With NetFlow 9 you can also get the traffic direction, ingress and egress. I’ve spent some time figuring out how to configure pmacct to send flows tagged with the right information to OpenNMS.

You should have the Telemetryd NetFlow 9 flow listener and parser enabled. Make sure the server is provisioned in OpenNMS, monitored with SNMP, and that its physical interfaces are in monitoring. In my example the server runs Ubuntu Server and has just one network interface; adapt the configuration accordingly to your environment.

Install pmacct with apt install -y pmacct and create the following configuration files:

File: /etc/pmacct/pmacctd.conf

daemonize: true
interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe[eth0]
nfprobe_receiver: {opennms-ip}:{udp-port}
nfprobe_version: 9
nfprobe_direction[eth0]: tag
nfprobe_ifindex[eth0]: tag2
pre_tag_map: /etc/pmacct/
timestamps_secs: true

Important here are the interface name and the opennms-ip and udp-port where your OpenNMS NetFlow 9 collector is listening. In this example the interface name is eth0, so change it according to your environment. It is also important to set timestamps_secs: true; otherwise it won’t work and we don’t get the timing information needed for the duration of flows.
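To make the placeholders concrete: assuming, purely hypothetically, that your OpenNMS instance runs at 192.0.2.10 and its Telemetryd NetFlow 9 listener is bound to UDP port 9999, the receiver line would read:

```
nfprobe_receiver: 192.0.2.10:9999
```

The port has to match the one configured for the NetFlow 9 listener in your Telemetryd configuration.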

The next step is about the tags tag and tag2, which are used to identify the ingress/egress direction and to set the correct interface index. Create the following configuration file:

File: /etc/pmacct/

# Use a filter to determine direction
# Set 1 for ingress and 2 for egress
# Local MAC
set_tag=1 filter='ether dst de:ad:be:ef:00:00' jeq=eval_ifindexes
set_tag=2 filter='ether src de:ad:be:ef:00:00' jeq=eval_ifindexes

# Use a filter to set the ifindexes
set_tag2=2 filter='ether src de:ad:be:ef:00:00' label=eval_ifindexes
set_tag2=2 filter='ether dst de:ad:be:ef:00:00'

To figure out the direction and to set the correct interface index, I use the MAC address in my example. You can also use host with the IP address instead of the MAC address. The value of tag is set to 1 to mark traffic as ingress and to 2 to mark it as egress.
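If you don't have the MAC address at hand, a quick way to read it on Linux is from sysfs. A minimal sketch, assuming the interface name eth0 from the example (adjust IFACE to your environment):

```shell
# Print the MAC address of a network interface from sysfs (Linux only).
# IFACE is an assumption for this sketch -- set it to your interface, e.g. eth0.
IFACE="${IFACE:-eth0}"
cat "/sys/class/net/${IFACE}/address"
```

The printed value is what goes into the ether src / ether dst filters above.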

With tag2 we set the correct interface index. You can figure it out by using the ip command:

ip a s eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether de:ad:be:ef:00:00 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
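Instead of reading the leading number off the ip output, you can also get the index directly from sysfs. A small Linux-only sketch, again assuming eth0 as the interface name:

```shell
# Print the kernel interface index (the value used for set_tag2) from sysfs.
# IFACE is an assumption for this sketch -- set it to your interface, e.g. eth0.
IFACE="${IFACE:-eth0}"
cat "/sys/class/net/${IFACE}/ifindex"
```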

The leading 2: is the interface index. Once you have configured the daemon, you can start and enable it with systemctl enable --now pmacctd. After a while you should see flows coming into your OpenNMS, and you should be able to see them in Elasticsearch with Kibana and in the Flow Deep Dive dashboard that comes with our OpenNMS Helm plugin for Grafana.

gl & hf


The daemon did not start in my case.

The configuration option syslog: daemon activates logging to syslog. With that in place I was able to see the error:

Apr 15 17:31:52 server kernel: [2534516.982252] device ens18 entered promiscuous mode
Apr 15 17:31:52 server pmacctd[31050]: OK ( default/core ): link type is: 1
Apr 15 17:31:53 server pmacctd[31050]: ERROR ( ens18/nfprobe ): plugin_buffer_size is too short.
Apr 15 17:31:53 server kernel: [2534517.986540] device ens18 left promiscuous mode
Apr 15 17:31:53 server systemd[1]: pmacctd.service: Main process exited, code=exited, status=1/FA

So I’ve added plugin_buffer_size: 1000 and the daemon now works for me.
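For reference, a minimal sketch of the two additions to /etc/pmacct/pmacctd.conf, next to the other global options (the value 1000 is just what worked for me, not a tuned recommendation):

```
syslog: daemon
plugin_buffer_size: 1000
```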


I was able to get pmacctd running on macOS Big Sur, not only for NetFlow 9 but also for sFlow.

For NetFlow 9, this is what I use:

daemonize: false
interface: en0
aggregate: src_host, dst_host, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, tcpflags
plugins: nfprobe[en0]
nfprobe_version: 9
nfprobe_direction[en0]: tag
nfprobe_ifindex[en0]: tag2
pre_tag_map: /opt/pmacct/etc/
timestamps_secs: true

For sFlow:

daemonize: false
interface: en0
aggregate: src_host, dst_host, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto
plugins: sfprobe[en0]
sampling_rate: 20
sfprobe_direction[en0]: tag
sfprobe_ifindex[en0]: tag2
pre_tag_map: /opt/pmacct/etc/
timestamps_secs: true

The content of the pre_tag_map file is:

set_tag=1 filter='ether dst f8:ff:c2:0f:b6:7b' jeq=eval_ifindexes
set_tag=2 filter='ether src f8:ff:c2:0f:b6:7b' jeq=eval_ifindexes

# Use a filter to set the ifindexes
set_tag2=6 filter='ether src f8:ff:c2:0f:b6:7b' label=eval_ifindexes
set_tag2=6 filter='ether dst f8:ff:c2:0f:b6:7b'


snmpget -v 2c -c public localhost ifName.6 ifPhysAddress.6
IF-MIB::ifName.6 = STRING: en0
IF-MIB::ifPhysAddress.6 = STRING: 0:ff:c2:f:b6:7b