Apache Kafka Producer in OpenSIPS 3.2

Integrating the main SIP engine with other components is an important requirement when developing and operating complex VoIP platforms. OpenSIPS has always aimed to offer support for interacting with a variety of implementations of services such as databases, accounting/billing systems, monitoring tools and so on.

Continuing in this direction, OpenSIPS 3.2 introduces a new integration capability, with Apache Kafka, a distributed, highly scalable, fault-tolerant event streaming platform. Without getting into details about its architecture and specific features, Kafka is essentially a publish-subscribe system, and from OpenSIPS’s perspective, quite similar to a message broker like RabbitMQ.

Kafka module

The event_kafka module is an implementation of an Apache Kafka producer client and provides a new transport backend for the OpenSIPS Event Interface. The configuration for a simple Kafka event subscription would look like this:

loadmodule "event_kafka.so"

startup_route {
    subscribe_event("E_MY_EVENT","kafka:127.0.0.1:9092/topic1?g.linger.ms=100&t.message.timeout.ms=1000");
}

In the event socket from the above snippet, besides setting the topic to publish to and the initial broker servers to connect to, we can also provide some client configuration properties. These settings are passed transparently to the librdkafka library used by the event_kafka module and correspond to the general Kafka Producer Configs [4] available across the various Kafka APIs/client libraries.
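As a rough illustration of the g./t. prefixes, which appear to select global producer properties and topic-level properties respectively, a socket could also tune standard librdkafka settings such as compression or acknowledgements (the specific properties below are an assumption for the sketch, not taken from the module documentation):

loadmodule "event_kafka.so"

startup_route {
    # sketch only: compression.codec is a global librdkafka producer
    # property, request.required.acks a topic-level one
    subscribe_event("E_MY_EVENT","kafka:127.0.0.1:9092/topic1?g.compression.codec=snappy&t.request.required.acks=-1");
}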

Script connector

Besides publishing messages as JSON-RPC notifications corresponding to OpenSIPS events, the module can also be used to publish generic messages directly from the OpenSIPS script. To configure this, the broker specification from the broker_id module parameter uses a syntax similar to the Kafka event socket, for example:

loadmodule "event_kafka.so"
modparam("event_kafka", "broker_id", "[k2]127.0.0.1:9092/topic2?g.retries=3&g.linger.ms=100")

route[kafka_report] {
    xlog("[$avp(kafka_id)] status=$avp(kafka_status) key=$avp(kafka_key) msg=$avp(kafka_msg)\n");
    ...
}

route {
    ...
    kafka_publish("k2", $var(msg), $ci, "kafka_report");
    ...
}
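As a rough sketch of how the pieces fit together (the payload below is purely illustrative, not a format required by the module), the message passed to kafka_publish() could be assembled just before the call, with the Call-ID doubling as the Kafka message key:

route {
    ...
    # illustrative payload only; a JSON body could be built with the
    # json module instead
    $var(msg) = "method=" + $rm + " callid=" + $ci;
    # publish to broker "k2"; the delivery status is reported back
    # asynchronously in route[kafka_report]
    kafka_publish("k2", $var(msg), $ci, "kafka_report");
    ...
}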

Asynchronous reporting for events

For quite some time, OpenSIPS has offered the possibility of doing failover for event delivery between multiple transport backends, through the event_virtual module. This, however, came with a limitation that could have a considerable impact on overall OpenSIPS performance.

Properly reporting event delivery failures required the OpenSIPS worker process that triggered an event to block and wait for a response from a dedicated process, which was responsible for the actual communication with the external service. As such, raising an event for a backend that did blocking or slow I/O would slow down the processing of the current SIP request. This bottleneck could be avoided by setting the sync_mode module parameter (for the event modules that could suffer from this: event_rabbitmq, event_xmlrpc and event_stream), but the ability to report back the status of the event delivery would be lost.
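For example, assuming event_rabbitmq's sync_mode parameter and taking 0 to mean non-blocking delivery, the old trade-off would have looked roughly like this:

loadmodule "event_rabbitmq.so"

# assumed semantics: 0 = do not block the worker waiting for the
# delivery confirmation, at the cost of losing the per-event status
modparam("event_rabbitmq", "sync_mode", 0)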

In order to fix this limitation, OpenSIPS 3.2 introduces a proper mechanism for asynchronous reporting of event delivery. This way, there is no longer a compromise between performance and the ability to fully use the event_virtual module's capabilities. Implementing the new event_kafka module on top of the librdkafka library, which is asynchronous by nature, provided a good opportunity to revisit the overall reporting mechanism in the Event Interface and the event_virtual module.

Below is a config example of publishing events to Kafka, with failover to plain text files in case the Kafka brokers are unreachable:

loadmodule "event_flatstore.so"
loadmodule "event_kafka.so"
loadmodule "event_virtual.so"

startup_route {
	subscribe_event("E_MY_EVENT", "virtual:FAILOVER kafka:127.0.0.1:9092/topic3 flatstore:/var/log/myevents");
}
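For completeness, a minimal sketch of raising such an event from the routing script, using the core raise_event() function (the attribute name and value are placeholders):

route {
    ...
    # placeholder attribute/value pair attached to the event
    $avp(attr) = "callid";
    $avp(val) = $ci;
    # the Event Interface pushes the event to Kafka, falling back to
    # the flatstore file if the brokers are unreachable
    raise_event("E_MY_EVENT", $avp(attr), $avp(val));
    ...
}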

In conclusion, the OpenSIPS 3.2 release brings a new endpoint that OpenSIPS can publish data to, Apache Kafka, and also improves the failover capabilities of the Event Interface.
