
OpenSIPS 4.0 is not only about adding new features, but also about rethinking parts of the engine that have remained unchanged for a very long time.
One of the most important examples is the TCP/TLS layer. The TCP framework, in the form we have known it so far, has stayed largely the same since it was introduced more than 20 years ago. It was built around the architectural realities of that time, and it served OpenSIPS well. But over the years, its limitations became more and more visible.
With OpenSIPS 4.0, we took the opportunity to redesign this area entirely: moving away from the old multi-process TCP model toward a new single-process, multi-threaded approach.
Old Model
The traditional TCP framework was based on a TCP Main process distributing connections across a set of TCP workers. Because OpenSIPS itself was designed around a multi-process model, the TCP layer had to support connection sharing and even connection “migration” between processes.
That approach came with a cost. First, it made the whole I/O path more complicated than it should have been. Connections had to be passed back and forth between the TCP Main process and the TCP workers, adding extra overhead and making the flow harder to maintain.
Second, the load balancing itself was somewhat constrained. Since connections and I/O ownership were tied to this process-based model, distributing work efficiently across workers was never as flexible as we would have liked.
And finally, this design became a major burden for TLS.
From day one, TLS support in OpenSIPS has been difficult precisely because of this multi-process SIP architecture. It required special shared-memory handling and explicit locking around OpenSSL structures. Over time, OpenSSL moved further and further away from supporting this kind of usage, forcing us to add more and more tweaks just to keep things working.
Those tweaks did not only affect the TLS transport layer. They also had negative side effects on other modules using OpenSSL in simpler, single-process scenarios, such as db_mysql, db_postgres, stir_shaken, and others. On top of that, the old model opened the door to the kind of bugs nobody likes to debug: dangling memory, double frees, memory leaks, and hard-to-explain crashes.
So the redesign was driven by more than just a wish to modernize the code. We also wanted to simplify the I/O model in OpenSIPS and remove a long-standing architectural pain point. And OpenSIPS 4.0 was the right opportunity to do that!
New Model
Instead of distributing TCP ownership across multiple processes, all TCP/TLS work is now handled inside a single dedicated process. That process uses a configurable number of threads to perform the actual reading and writing on TCP connections. In other words, the connections are no longer passed around between processes: they are owned and managed in one place.
This makes the TCP layer lighter, cleaner, and better aligned with modern systems, where a multi-threaded design is far more practical than it was 20 years ago.
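The essence of this centralized model can be illustrated with a small, self-contained sketch: one loop watching every connection for readiness, instead of connections being scattered across processes. The snippet uses portable poll() and a hypothetical wait_readable() helper, with pipes standing in for TCP connections; it is an illustration of the idea, not OpenSIPS code.

```c
/* Sketch: the core of a centralized I/O loop -- a single place that
 * watches all connections for readiness. Pipes stand in for TCP
 * connections here; wait_readable() is a hypothetical helper. */
#include <poll.h>
#include <unistd.h>

/* Wait up to timeout_ms for activity on a set of "connections" and
 * return the index of the first readable one, or -1 on timeout. */
static int wait_readable(struct pollfd *pfds, int nfds, int timeout_ms)
{
    int rc = poll(pfds, nfds, timeout_ms);
    if (rc <= 0)
        return -1;                      /* timeout or error */
    for (int i = 0; i < nfds; i++)
        if (pfds[i].revents & POLLIN)
            return i;                   /* this connection has data */
    return -1;
}
```

In the real framework this readiness information is consumed by the TCP process's own I/O threads, so no other process ever needs to see the file descriptors.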
Reading
On the read side, the TCP Main process performs the actual I/O through its dedicated threads.
When data is received, OpenSIPS performs a lightweight parsing step whose only purpose is to determine the message boundaries and compute the full message length. Once a complete SIP message is available, it is dispatched to one of the OpenSIPS worker processes, which continues the SIP-level handling.
So the transport I/O stays centralized, while the SIP logic continues to be processed by workers.
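As a rough illustration of that boundary-detection step, the sketch below scans a stream buffer for the end of the SIP headers and uses the Content-Length header to compute the full message length. The sip_msg_len() helper is hypothetical and deliberately naive; the actual OpenSIPS parser is considerably more robust.

```c
/* Sketch: determine the full length of a SIP message sitting in a
 * stream buffer. Hypothetical helper, not the OpenSIPS parser. */
#include <string.h>
#include <strings.h>
#include <stdlib.h>

/* Returns the total message length (headers + body) if a complete
 * message is present in buf[0..len), or -1 if more data is needed. */
static long sip_msg_len(const char *buf, size_t len)
{
    /* find the end of the headers: the CRLFCRLF sequence */
    const char *end = NULL;
    for (size_t i = 0; i + 3 < len; i++) {
        if (memcmp(buf + i, "\r\n\r\n", 4) == 0) {
            end = buf + i + 4;
            break;
        }
    }
    if (!end)
        return -1;                      /* headers not complete yet */

    /* naive, case-insensitive scan for the Content-Length header */
    long clen = 0;
    for (const char *p = buf; p + 15 < end; p++) {
        if (strncasecmp(p, "Content-Length:", 15) == 0) {
            clen = strtol(p + 15, NULL, 10);
            break;
        }
    }

    long total = (long)(end - buf) + clen;
    return (long)len >= total ? total : -1;   /* body fully received? */
}
```

Because TCP is a byte stream, a single read may deliver a partial message or several messages back to back, which is exactly why this framing step must happen before anything is dispatched to a worker.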
Writing and connecting
The write path follows the same philosophy. When a worker needs to send data, it no longer writes directly to the socket. Instead, it builds an async chunk and submits a write job to the TCP process, whose I/O threads then perform the actual send.
Connection establishment is handled there as well, meaning that both writing and connecting are now fully owned by the TCP process and its threads. This removes TCP passing altogether and gives the framework a much cleaner ownership model.
For TLS/OpenSSL, the benefits are even more important. The SSL context can now stay local to the TCP process, without being shared across multiple processes. This removes one of the biggest constraints of the old design and allows us to get rid of many of the tweaks and compromises that were previously required.
Performance
Performance-wise, the main win is latency, especially for connection setup. In synthetic end-to-end TLS tests covering 30,000 SIP exchanges at 500 requests per second, the reworked architecture maintained similar throughput while consistently delivering lower latency than the old model, with the clearest improvement visible during connection establishment.
These were, of course, synthetic tests, since real-world SIP traffic is much harder to replicate accurately in a controlled setup. Still, the results are a good indication that centralizing socket I/O inside a dedicated TCP process with its own thread pool reduces coordination overhead and provides a cleaner, more predictable execution path under load.
Future Work
There are still some obvious next steps for this new framework. One is to implement proper write outcome reporting, so the SIP side can be notified whether an asynchronous send eventually succeeded or failed. Another is to continue simplifying the processing model by dropping the dedicated workers mode and moving toward a unified pool of workers.
This TCP/TLS rework is not just a cleanup of the current stack, but also a step toward broader internal changes planned for OpenSIPS 4.0 and beyond.
Conclusion
This TCP/TLS rework is a significant internal change for OpenSIPS 4.0. It simplifies the transport layer, removes some of the limitations of the old TLS design, and provides a better base for further changes in the OpenSIPS core.
To learn more about OpenSIPS 4.0 and the work going into the next major release, join us at OpenSIPS Summit 2026, 28 April-01 May in Bucharest!
