The access architecture of an SDK covers the types of events it generates (synchronous, asynchronous), the way it handles API calls (blocking, non-blocking), and the way it processes events (on event, on poll). Different paradigms fit different needs. I usually divide these architectures into four main types:
- Asynchronous events, non-blocking API calls
- Synchronous events, blocking API calls
- Event queue, API calls queue
- Polling for events
Each of these has its advantages and drawbacks, and most importantly, efficient and less efficient implementations.
1. Asynchronous events, non-blocking API calls
This architecture can be called “event driven”. It usually includes a main loop (or select loop) that processes all events from the network, expired timers and state changes, as well as an extensive set of APIs to initiate application-driven events. The application layer is expected to catch all events it may be interested in and respond to them with API calls, which in turn may generate more events to be handled. This is a standard for me, as most protocol SDKs I have worked with (e.g., those RADVISION makes) use this architecture.
- It is flexible, and allows the application to be multithreaded or single threaded, as well as create multiple internal threads that will all raise events to the application.
- This architecture is very convenient when interacting with other devices or with a user interface. It is also convenient when processes need to be stalled or interfered with.
- This type of architecture may seem confusing at first, with many events to catch and many APIs to call.
- It tends to break up a process into many segments, each segment to be carried out at (or after) the proper event.
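To make the event-driven shape concrete, here is a minimal sketch in C. All names (`sdk_register_handler`, `sdk_call_accept`, the event types) are hypothetical, not from any real SDK: the point is only the shape of the interaction, where the SDK raises events through a registered callback and the application responds with non-blocking API calls that may trigger further events.

```c
/* Hypothetical event-driven SDK surface: a callback registration,
   a non-blocking API, and a dispatch point inside the SDK's main loop. */

typedef enum { EV_CALL_INCOMING, EV_CALL_CONNECTED, EV_CALL_CLOSED } event_type;

typedef void (*event_handler)(event_type ev, void *app_ctx);

static event_handler g_handler;   /* registered by the application */
static void *g_app_ctx;

void sdk_register_handler(event_handler h, void *ctx) {
    g_handler = h;
    g_app_ctx = ctx;
}

/* Non-blocking API: starts the accept internally and returns at once;
   completion is reported later via EV_CALL_CONNECTED. */
void sdk_call_accept(void) { /* ... */ }

/* Inside the SDK's main (select) loop, network input becomes events: */
void sdk_dispatch(event_type ev) {
    if (g_handler)
        g_handler(ev, g_app_ctx);
}

/* Application side: catch each event and respond with an API call.
   Note how one logical process (handling a call) is broken into
   segments, each carried out at the proper event. */
void app_on_event(event_type ev, void *ctx) {
    int *connected = ctx;
    switch (ev) {
    case EV_CALL_INCOMING:  sdk_call_accept(); break;
    case EV_CALL_CONNECTED: *connected = 1;    break;
    case EV_CALL_CLOSED:    *connected = 0;    break;
    }
}
```

The segmentation drawback is visible even here: accepting a call and learning it connected happen in two separate callback invocations, not one straight-line procedure.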
2. Synchronous events, blocking API calls
In contrast to the previous architecture, we’ll call this one “process driven”. Here, an API call returns only after the requested action is completed (or an error occurs), so there is no need to raise an event that it was done. This means the application can call the APIs one after the other in a long chain. In extreme cases, there is no need for a main loop at all, as even timed actions will block until finished. The events that are raised are synchronous: they are raised during the work process and they require immediate handling.
- There is little need to catch events, as APIs simply report success or failure on exit.
- The process’ procedure is clear and continuous.
- If working in a multithreaded environment, it allows a thread to follow a process from start to end.
- Almost always, this forces multithreaded work, as single-threaded access is very inefficient.
- If some form of interaction is required with a user interface or an asynchronous device, threads have to be stalled and a message passing mechanism has to be implemented.
- Often, SDK or library implementers do not use a purely synchronous or asynchronous architecture, but a mix of the two. This can create a powerful SDK, but it can also be very confusing to users unless a clear API naming convention is used.
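A minimal C sketch of the process-driven style, with hypothetical function names (`sdk_connect`, `sdk_send`, `sdk_disconnect`, stubbed here so the example is self-contained). Each call would block the calling thread until the action completes, so an entire session reads as one continuous chain of calls, which a single thread can follow from start to end.

```c
/* Hypothetical blocking SDK surface: every call returns only once
   the requested action has completed (or failed). */

typedef enum { SDK_OK, SDK_ERR } sdk_status;

/* In a real SDK each of these would block until done; here they
   are stubs that succeed immediately. */
sdk_status sdk_connect(const char *peer)   { (void)peer; return SDK_OK; }
sdk_status sdk_send(const char *data)      { (void)data; return SDK_OK; }
sdk_status sdk_disconnect(void)            { return SDK_OK; }

/* The whole process is one clear, continuous procedure: no events
   to catch, each API simply reports success or failure on exit. */
sdk_status run_session(const char *peer, const char *data) {
    if (sdk_connect(peer) != SDK_OK)
        return SDK_ERR;
    if (sdk_send(data) != SDK_OK) {
        sdk_disconnect();
        return SDK_ERR;
    }
    return sdk_disconnect();
}
```

The cost shows up outside this snippet: while `run_session` blocks, its thread can do nothing else, which is why this model almost always pushes the application toward multiple threads.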
3. Event queue, API calls queue
We’ll call this architecture “message driven”. Anyone who has used Windows’ window object and its event queue is familiar with this architecture. Events are placed in a single queue, and the application draws them one by one and handles them. Sometimes there is even a message queue into the SDK: instead of calling APIs, the application places requests in the queue. More often, a standard (blocking or non-blocking) API is provided for the application’s use.
- This is the most flexible architecture when it comes to multithreading: the stack can have any number of internal threads and the application can have an unrelated number of threads, and the queue mechanism will keep them out of each other’s way.
- Depending on the APIs implemented, this architecture shares some of the advantages of the previous two.
- This method forces long functions with large switch statements to handle multiple types of messages.
- Parameters passed will usually need to be of a generic nature (void pointers for the C/C++ developers), which limits the amount of static type checking possible.
- Thread usage, while convenient, is not very efficient. Many times, two threads will handle events related to the same object and will lock each other out. If these events need to be processed sequentially, there is a risk that the later one will be handled before the earlier one.
There is a way to overcome the last issue. When the relation of a message to its handling object is clear (for instance, when messages are identified by a session id), the messages can be sorted not into one queue but into as many queues as there are threads, in such a way that messages relating to the same object always land in the same queue. This is, in fact, a type of hash (well, not exactly, but it uses the same principles as hashing functions do). In this arrangement, we achieve two things:
- Messages that should be processed sequentially are inserted into the queue in the order they arrive and will be processed correctly
- Each thread withdraws from its own queue, and will only use the objects relating to the messages in that queue, preventing threads from locking each other out
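A C sketch of this arrangement, with hypothetical names throughout. It shows the two characteristic pieces: a generic message (note the `void *` payload, the genericity drawback mentioned above) and the hash-like mapping from session id to queue, so all messages for one session are drained, in order, by the same worker thread. The handler is the typical switch over message types.

```c
/* Hypothetical message-driven dispatch with per-thread queues. */

typedef enum { MSG_OPEN, MSG_DATA, MSG_CLOSE } msg_type;

typedef struct {
    msg_type  type;
    unsigned  session_id;
    void     *payload;   /* generic parameters: little static type checking */
} message;

/* The "hash": map a session to one of n_queues fixed queues. Every
   message for a given session goes to the same queue, hence the same
   thread, so ordering is preserved and threads never contend for it. */
unsigned queue_for_session(unsigned session_id, unsigned n_queues) {
    return session_id % n_queues;
}

/* Each worker thread drains its own queue with the characteristic
   long switch over message types. */
void handle_message(const message *m, int *open_sessions) {
    switch (m->type) {
    case MSG_OPEN:  ++*open_sessions;                         break;
    case MSG_DATA:  /* act on m->payload for m->session_id */ break;
    case MSG_CLOSE: --*open_sessions;                         break;
    }
}
```

With four queues, sessions 7 and 11 both map to queue 3, while session 8 maps to queue 0: related messages stay together, unrelated ones spread across threads.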
4. Polling for events
We’ll call this one “time driven”. Here, no events are raised to the application. Objects collect events internally, and when polled, they process them and report to the application. Some hardware drivers use this approach. API calls may work normally, but more often they too wait for the object to be “triggered”, and only then are the requests carried out. Our 3G-324M stack works like this: it processes events, including timed ones, only when it is triggered externally by the application.
- Threads can be used on a per-object basis, which is very efficient, as all processing and access is done from the same thread.
- As an application, you get complete control over what is running in the system and when: no hidden threads or uncalled-for events.
- It could be confusing to developers, especially if they are used to one of the other models.
- It is inefficient when working single-threaded, and it forces the application to poll each object in turn.
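The polled model can be sketched in a few lines of C (again, hypothetical names, not the 3G-324M stack’s actual API). The object only accumulates events; nothing is processed until the application triggers it, so the application fully controls when work happens and on which thread.

```c
/* Hypothetical time-driven (polled) SDK object. */

#define MAX_PENDING 16

typedef struct {
    int pending[MAX_PENDING];   /* events collected internally */
    int count;
    int processed;              /* total events handled so far */
} polled_object;

/* Called from the SDK internals (e.g., on network input): only store
   the event; no processing happens here. */
void object_post_event(polled_object *o, int ev) {
    if (o->count < MAX_PENDING)
        o->pending[o->count++] = ev;
}

/* Called by the application; only now are pending events processed.
   Returns how many events were handled in this poll. */
int object_poll(polled_object *o) {
    int handled = o->count;
    for (int i = 0; i < o->count; ++i)
        o->processed++;         /* real code would act on pending[i] */
    o->count = 0;
    return handled;
}
```

Since one thread can own one object and do all posting and polling for it, processing and access stay on the same thread, which is where the per-object efficiency of this model comes from.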