Producer Integration

Kafka is the "front door" of PNDA, allowing the ingest of high-velocity data streams, distributing data to all interested consumers and decoupling data sources from data processing applications and platform clients.

It is normally not necessary to create a new producer to start acquiring network data, as a growing number of data plugins have already been integrated with PNDA. Since it's not always clear which plugin to use for which data type, we've summarized some common combinations in the table at the bottom of this page.

If you have other data sources that you want to integrate with PNDA, it's easy enough to write a PNDA producer.

PNDA adopts a schema-on-read approach to data processing, so all data directed towards the platform is stored in as close to its raw form as possible. The only requirement is that each datum is encapsulated in a simple Avro record that adds the logical and network source of the data and a timestamp to the data payload.
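As an illustration, the encapsulation can be sketched as a record that carries the source identifiers, a timestamp and the opaque payload. The field names and schema below are illustrative assumptions; consult the Data Preparation Guide for the exact schema used by your deployment.

```python
import json
import time

# Illustrative Avro envelope schema (field names are an assumption here;
# the Data Preparation Guide defines the authoritative schema).
PNDA_AVRO_SCHEMA = json.dumps({
    "namespace": "pnda.entity",
    "type": "record",
    "name": "event",
    "fields": [
        {"name": "timestamp", "type": "long"},    # milliseconds since epoch
        {"name": "src",       "type": "string"},  # logical source, e.g. "netflow"
        {"name": "host_ip",   "type": "string"},  # network source of the datum
        {"name": "rawdata",   "type": "bytes"},   # payload, stored as-is
    ],
})

def wrap_datum(payload, src, host_ip, timestamp_ms=None):
    """Build the envelope record for one datum.

    An Avro library (e.g. avro or fastavro) would then serialise this
    record against PNDA_AVRO_SCHEMA before it is sent to Kafka.
    """
    if timestamp_ms is None:
        timestamp_ms = int(time.time() * 1000)
    return {
        "timestamp": timestamp_ms,
        "src": src,
        "host_ip": host_ip,
        "rawdata": payload,
    }
```

Note that the payload itself stays opaque bytes: the envelope only adds provenance and time, in keeping with the schema-on-read approach.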

Kafka data is stored in topics, each topic being divided into partitions and each partition being replicated to avoid data loss. Ingest is achieved by delivering data through a "producer", which sends data to one or more well-defined topics over a direct connection to the broker cluster. Load balancing is carried out by the broker cluster itself via negotiation with topic partition leaders.
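A minimal producer sketch follows, assuming the kafka-python client (any Kafka client library follows the same pattern): the producer connects directly to the broker cluster and names only the topic, since partition assignment and load balancing are negotiated with the partition leaders.

```python
def send_datum(producer, topic, encoded_datum):
    """Send one Avro-encoded datum to a well-defined topic.

    The producer only names the topic; the broker cluster decides
    which partition the datum lands on.
    """
    return producer.send(topic, value=encoded_datum)

# In a real deployment (broker address and topic are illustrative):
#
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="kafka-broker:9092")
#   send_datum(producer, "mytelco.service6.netflow.raw", avro_bytes)
#   producer.flush()
```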

PNDA is typically deployed with a set of well defined topics in accordance with the deployment context, each topic being carefully configured with a set of replicated partitions in line with the expected ingest and consumption rates. By convention topics are named according to a hierarchical scheme such that consumers are able to "whitelist" data of interest and subscribe to multiple topics at once (e.g. mytelco.service6.netflow.* or mytelco.*).
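The hierarchical naming convention can be sketched with shell-style globbing as a stand-in for Kafka's whitelist matching (the topic names are illustrative):

```python
import fnmatch

def match_topics(pattern, topics):
    """Return the topics selected by a hierarchical whitelist pattern,
    e.g. "mytelco.service6.netflow.*" or "mytelco.*"."""
    return [t for t in topics if fnmatch.fnmatch(t, pattern)]
```

A consumer subscribing with "mytelco.\*" would receive every topic under the mytelco hierarchy, while "mytelco.service6.netflow.\*" narrows the subscription to one data type of one service.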

PNDA includes tools for managing topics, partitions and brokers and for monitoring the data flow across them.

Integrators can make use of the high- and low-level Kafka APIs. Please refer to our Data Preparation Guide to understand how to encapsulate data in the required Avro schema. We will also provide a reference in a variety of common implementation languages to illustrate how to correctly use the Avro schema in conjunction with the Kafka API.

Data types mapped to existing PNDA producers

| Data Type | Data Aggregator | Data Aggregator Reference | PNDA Producer Reference |
| --- | --- | --- | --- |
| BGP (inc. BGP LS) | OpenBMP | | |
| BGP | pmacct (BGP listener) | | |
| Bulk Ingest | PNDA Bulk Ingest Tool | | |
| Cisco XR streaming telemetry | Pipeline | | |
| CollectD (supports multiple plugins) | Logstash | | |
| IoT sensor via HTTP | Node-RED | | |
| Logstash (supports multiple plugins) | Logstash | | |
| NETCONF Notifications | ODL | | |
| Netflow / IPFIX | Logstash | | |
| Netflow / IPFIX / sFlow | pmacct | | |
| OpenStack | Work in progress | | |
| sFlow | Logstash | | |
| SNMP Metrics and Traps | ODL | | |
| SNMP Traps | Logstash | | |
| Syslog | Logstash | | |
| Syslog (RFC3164 or RFC5424 - needed for newer IOS / IOS XR / NX-OS etc.) | Logstash | | |
