Diffstat (limited to 'doc')
-rw-r--r--  doc/img/tevent_context_stucture.png   bin 0 -> 21888 bytes
-rw-r--r--  doc/img/tevent_subrequest.png         bin 0 -> 22453 bytes
-rw-r--r--  doc/mainpage.dox                       47
-rw-r--r--  doc/tevent_context.dox                 75
-rw-r--r--  doc/tevent_data.dox                   137
-rw-r--r--  doc/tevent_events.dox                 341
-rw-r--r--  doc/tevent_queue.dox                  275
-rw-r--r--  doc/tevent_request.dox                189
-rw-r--r--  doc/tevent_thread.dox                 322
-rw-r--r--  doc/tevent_tutorial.dox                22
-rw-r--r--  doc/tutorials.dox                      43
11 files changed, 1451 insertions, 0 deletions
diff --git a/doc/img/tevent_context_stucture.png b/doc/img/tevent_context_stucture.png
new file mode 100644
index 0000000..fba8161
--- /dev/null
+++ b/doc/img/tevent_context_stucture.png
Binary files differ
diff --git a/doc/img/tevent_subrequest.png b/doc/img/tevent_subrequest.png
new file mode 100644
index 0000000..ea79223
--- /dev/null
+++ b/doc/img/tevent_subrequest.png
Binary files differ
diff --git a/doc/mainpage.dox b/doc/mainpage.dox
new file mode 100644
index 0000000..5b76013
--- /dev/null
+++ b/doc/mainpage.dox
@@ -0,0 +1,47 @@
+/**
+ * @mainpage
+ *
+ * Tevent is an event system based on the talloc memory management library. It
+ * is the core event system used in Samba.
+ *
+ * At a low level, tevent supports many event types, including timers,
+ * signals, and the classic file descriptor events.
+ *
+ * Tevent also provides helpers for dealing with asynchronous code, in the
+ * form of the tevent_req (tevent request) functions.
+ *
+ * @section main_tevent_tutorial Tutorial
+ *
+ * You should start by reading @subpage tevent_tutorial, then read the
+ * documentation of the functions you are interested in as you go.
+ *
+ * @section main_tevent_download Download
+ *
+ * You can download the latest releases of tevent from the
+ * <a href="http://samba.org/ftp/tevent" target="_blank">tevent directory</a>
 * on the Samba public source archive.
+ *
+ * @section main_tevent_bugs Discussion and bug reports
+ *
+ * tevent does not currently have its own mailing list or bug tracking system.
+ * For now, please use the
+ * <a href="https://lists.samba.org/mailman/listinfo/samba-technical" target="_blank">samba-technical</a>
+ * mailing list, and the
+ * <a href="http://bugzilla.samba.org/" target="_blank">Samba bugzilla</a>
+ * bug tracking system.
+ *
+ * @section main_tevent_devel Development
+ * You can download the latest code either via git or rsync.
+ *
+ * To fetch via git see the following guide:
+ *
+ * <a href="http://wiki.samba.org/index.php/Using_Git_for_Samba_Development" target="_blank">Using Git for Samba Development</a>
+ *
+ * Once you have cloned the tree, switch to the master branch and cd into the
+ * lib/tevent directory.
+ *
+ * To fetch via rsync use this command:
+ *
+ * rsync -Pavz samba.org::ftp/unpacked/standalone_projects/lib/tevent .
+ *
+ */
diff --git a/doc/tevent_context.dox b/doc/tevent_context.dox
new file mode 100644
index 0000000..39eb85e
--- /dev/null
+++ b/doc/tevent_context.dox
@@ -0,0 +1,75 @@
+/**
+@page tevent_context Chapter 1: Tevent context
+
+@section context Tevent context
+
+The tevent context is the essential logical unit of the tevent library. To
+work with events, at least one such context has to be created (allocated and
+initialized), and the events which are meant to be caught and handled have to
+be registered with this specific context. Events are subordinated to a tevent
+context structure because several contexts can be created and each of them is
+processed at a different time. For example, one context may contain only file
+descriptor events, another one may take care of signal and time events, and a
+third one may keep track of the rest.
+
+Tevent loops are the part of the library where events are noticed and their
+handlers are triggered. A loop accepts just one argument - a tevent context
+structure. Therefore, even if a theoretically infinite loop (tevent_loop_wait)
+is entered, only those events which were registered with the passed tevent
+context can be caught and their handlers invoked within this call. Events
+registered with some other context will not be noticed.
+
+@subsection Example
+
+The first lines, which handle <code>mem_ctx</code>, belong to the talloc
+library, but because tevent uses talloc for its own mechanisms it is necessary
+to understand a bit of talloc as well. For more information about working with
+talloc, please visit the <a
+href="http://talloc.samba.org/">talloc website</a> where a tutorial and
+documentation are located.
+
+The tevent context structure <code>*ev_ctx</code> represents the unit which
+will further contain information about registered events. It is created by
+calling tevent_context_init().
+
+@code
+TALLOC_CTX *mem_ctx = talloc_new(NULL);
+if (mem_ctx == NULL) {
+ // error handling
+}
+
+struct tevent_context *ev_ctx = tevent_context_init(mem_ctx);
+if (ev_ctx == NULL) {
+ // error handling
+}
+@endcode
+
+The tevent context structure holds a lot of information. It includes lists of
+all registered events, divided according to their type and kept in the order
+in which they arrived.
+
+@image html tevent_context_stucture.png
+
+In addition to the lists shown in the diagram, the tevent context also
+contains other data (e.g. information about the available system mechanism for
+triggering callbacks).
+
+@section tevent_loops Tevent loops
+
+Tevent loops are the dispatcher for events: they catch events and trigger the
+handlers. In a long-running process, the program spends most of its time at
+this point, waiting for an event, invoking its handler and then waiting again.
+There are 2 types of loop available in the tevent library:
+
+<ul>
+<li>int tevent_loop_wait()</li>
+<li>int tevent_loop_once()</li>
+</ul>
+
+Both functions accept just one parameter (the tevent context). The only
+difference is that the first loop can theoretically run forever, while the
+second one waits for just a single event, handles it, and then returns so the
+program can continue.
+
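+A brief sketch of how the two loops are typically used. It assumes that
+<code>ev_ctx</code> was created as shown above and that some events have
+already been registered with it:
+
+@code
+// Handle exactly one pending event, then return to the caller.
+if (tevent_loop_once(ev_ctx) != 0) {
+    // error handling
+}
+
+// Keep dispatching events until no registered event remains that could
+// ever be triggered again.
+if (tevent_loop_wait(ev_ctx) != 0) {
+    // error handling
+}
+@endcode
+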
+*/
diff --git a/doc/tevent_data.dox b/doc/tevent_data.dox
new file mode 100644
index 0000000..dbe7a04
--- /dev/null
+++ b/doc/tevent_data.dox
@@ -0,0 +1,137 @@
+/**
+@page tevent_data Chapter 3: Accessing data
+@section data Accessing data with tevent
+
+A tevent request is (usually) created together with a structure for storing
+the data necessary for an asynchronous computation. For this private data, the
+tevent library uses void (generic) pointers, so any data type can easily be
+pointed at. However, this approach requires clear and guaranteed knowledge of
+the data type that will be handled, in advance. Private data can be of 2
+types: bound to the request itself, or passed as an individual argument to a
+callback. It is necessary to differentiate between these types, because a
+slightly different method is used to access each of them. There are two ways
+to access data that is given as an argument directly to a callback; the
+difference lies in the pointer that is returned. In one case it is a pointer
+to the data type specified in the function’s argument, in the other a void
+pointer is returned.
+
+@code
+#type *tevent_req_callback_data(struct tevent_req *req, #type)
+void *tevent_req_callback_data_void(struct tevent_req *req)
+@endcode
+
+
+To obtain data that are strictly bound to a request, this function is the only
+direct procedure.
+
+@code
+void *tevent_req_data (struct tevent_req *req, #type)
+@endcode
+
+The following example shows both calls; it illustrates the difference between
+private data bound to the tevent request and data handed over as a callback
+argument.
+
+@code
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <tevent.h>
+
+struct foo_state {
+ int x;
+};
+
+struct testA {
+ int y;
+};
+
+
+static void foo_done(struct tevent_req *req) {
+ // a->x contains 10 since it came from foo_send
+ struct foo_state *a = tevent_req_data(req, struct foo_state);
+
+ // b->y contains 9 since it came from run
+ struct testA *b = tevent_req_callback_data(req, struct testA);
+
+ // c->y contains 9 since it came from run we just used a different way
+ // of getting it.
+ struct testA *c = (struct testA *)tevent_req_callback_data_void(req);
+
+ printf("a->x: %d\n", a->x);
+ printf("b->y: %d\n", b->y);
+ printf("c->y: %d\n", c->y);
+}
+
+
+struct tevent_req *foo_send(TALLOC_CTX *mem_ctx, struct tevent_context *event_ctx) {
+    struct tevent_req *req;
+    struct foo_state *state;
+
+    printf("_send\n");
+
+    req = tevent_req_create(event_ctx, &state, struct foo_state);
+    if (req == NULL) {
+        return NULL;
+    }
+    state->x = 10;
+
+    return req;
+}
+
+static void run(struct tevent_context *ev, struct tevent_timer *te,
+                struct timeval current_time, void *private_data) {
+    struct tevent_req *req;
+    struct testA *tmp = talloc(ev, struct testA);
+    if (tmp == NULL) {
+        return;
+    }
+
+    // Note that we did not use the private data passed in
+
+    tmp->y = 9;
+    req = foo_send(ev, ev);
+    if (req == NULL) {
+        return;
+    }
+
+    tevent_req_set_callback(req, foo_done, tmp);
+    tevent_req_done(req);
+}
+
+int main (int argc, char **argv) {
+
+ struct tevent_context *event_ctx;
+ struct testA *data;
+ TALLOC_CTX *mem_ctx;
+ struct tevent_timer *time_event;
+
+ mem_ctx = talloc_new(NULL); //parent
+ if (mem_ctx == NULL)
+ return EXIT_FAILURE;
+
+ event_ctx = tevent_context_init(mem_ctx);
+ if (event_ctx == NULL)
+ return EXIT_FAILURE;
+
+ data = talloc(mem_ctx, struct testA);
+ data->y = 11;
+
+ time_event = tevent_add_timer(event_ctx,
+ mem_ctx,
+ tevent_timeval_current(),
+ run,
+ data);
+ if (time_event == NULL) {
+ fprintf(stderr, " FAILED\n");
+ return EXIT_FAILURE;
+ }
+
+ tevent_loop_once(event_ctx);
+
+ talloc_free(mem_ctx);
+
+ printf("Quit\n");
+ return EXIT_SUCCESS;
+}
+@endcode
+
+Output of this example is:
+
+@code
+a->x: 10
+b->y: 9
+c->y: 9
+@endcode
+
+*/
diff --git a/doc/tevent_events.dox b/doc/tevent_events.dox
new file mode 100644
index 0000000..d56af25
--- /dev/null
+++ b/doc/tevent_events.dox
@@ -0,0 +1,341 @@
+/**
+@page tevent_events Chapter 2: Tevent events
+@section pools Tevent events
+
+After reading the previous chapter we can start doing something useful. The
+way of creating events is similar for all types - signals, file descriptors,
+time or immediate events. To begin with, it is good to know about the typedefs
+defined in the tevent library which specify the arguments of each callback.
+These callbacks are:
+- tevent_timer_handler_t()
+
+- tevent_immediate_handler_t()
+
+- tevent_signal_handler_t()
+
+- tevent_fd_handler_t()
+
+As their names suggest, tevent_timer_handler_t is used when creating a
+callback for a time event, and correspondingly for the other types.
+
+The best way to introduce registering an event and setting up a callback is by
+example, so examples describing all the types of events follow.
+
+@subsection Time Time event
+
+This example shows how to set up an event which will be repeated for a minute
+with an interval of 2 seconds (i.e. it will be triggered about 30 times).
+After this limit is exceeded, the event loop finishes and all the memory
+resources are freed. This is just an example describing a repeated activity;
+nothing useful is done within the callback.
+
+@code
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <tevent.h>
+#include <sys/time.h>
+
+struct state {
+ struct timeval endtime;
+ int counter;
+ TALLOC_CTX *ctx;
+};
+
+static void callback(struct tevent_context *ev, struct tevent_timer *tim,
+ struct timeval current_time, void *private_data)
+{
+ struct state *data = talloc_get_type_abort(private_data, struct state);
+ struct tevent_timer *time_event;
+ struct timeval schedule;
+
+ printf("Data value: %d\n", data->counter);
+ data->counter += 1; // increase counter
+
+ // if time has not reached its limit, set another event
+ if (tevent_timeval_compare(&current_time, &(data->endtime)) < 0) {
+ // do something
+ // set repeat with delay 2 seconds
+ schedule = tevent_timeval_current_ofs(2, 0);
+ time_event = tevent_add_timer(ev, data->ctx, schedule, callback, data);
+ if (time_event == NULL) { // error ...
+ fprintf(stderr, "MEMORY PROBLEM\n");
+ return;
+ }
+ } else {
+ // time limit exceeded
+ }
+}
+
+int main(void) {
+ struct tevent_context *event_ctx;
+ TALLOC_CTX *mem_ctx;
+ struct tevent_timer *time_event;
+ struct timeval schedule;
+
+ mem_ctx = talloc_new(NULL); // parent
+ event_ctx = tevent_context_init(mem_ctx);
+
+ struct state *data = talloc(mem_ctx, struct state);
+
+ schedule = tevent_timeval_current_ofs(2, 0); // +2 second time value
+ data->endtime = tevent_timeval_add(&schedule, 60, 0); // one minute time limit
+ data->ctx = mem_ctx;
+ data->counter = 0;
+
+ // add time event
+ time_event = tevent_add_timer(event_ctx, mem_ctx, schedule, callback, data);
+ if (time_event == NULL) {
+ fprintf(stderr, "FAILED\n");
+ return EXIT_FAILURE;
+ }
+
+ tevent_loop_wait(event_ctx);
+ talloc_free(mem_ctx);
+ return EXIT_SUCCESS;
+}
+@endcode
+
+The variable <code>counter</code> is only used for counting the number of
+triggered callbacks. All the functions tevent offers for working with time are
+listed
+<a href="http://tevent.samba.org/group__tevent__helpers.html">here</a> together
+with their descriptions. A more detailed look at these functions is
+unnecessary because their purpose and usage is quite simple and clear.
+
+@subsection Immediate Immediate event
+
+These events are, as their name indicates, activated and performed
+immediately. This means that this kind of event has priority over the others
+(except signal events). So if there is a bulk of events registered and a
+tevent loop is then launched, all the immediate events will be triggered
+before the other events - except other immediate events (and signal events),
+because they are also processed sequentially, in the order they were
+scheduled. Signals have the highest priority and are therefore processed
+preferentially. Thus the term immediate may not correspond exactly to the
+dictionary definition of "something without delay" but rather means "as soon
+as possible" after all preceding immediate events.
+
+Creating an immediate event differs slightly in that it is done in 2 steps:
+one is the creation of the event (memory allocation), the other is registering
+it within some tevent context.
+
+@code
+struct tevent_immediate *run(TALLOC_CTX *mem_ctx,
+                             struct tevent_context *event_ctx,
+                             void *data)
+{
+    struct tevent_immediate *im;
+
+    im = tevent_create_immediate(mem_ctx);
+    if (im == NULL) {
+        return NULL;
+    }
+    tevent_schedule_immediate(im, event_ctx, foo, data);
+
+    return im;
+}
+@endcode
+
+A complete example which may be compiled and run, demonstrating the creation
+of an immediate event:
+
+@code
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <tevent.h>
+
+struct info_struct {
+ int counter;
+};
+
+static void foo(struct tevent_context *ev, struct tevent_immediate *im,
+ void *private_data)
+{
+ struct info_struct *data = talloc_get_type_abort(private_data, struct info_struct);
+ printf("Data value: %d\n", data->counter);
+}
+
+int main (void) {
+ struct tevent_context *event_ctx;
+ TALLOC_CTX *mem_ctx;
+ struct tevent_immediate *im;
+
+ printf("INIT\n");
+
+ mem_ctx = talloc_new(NULL);
+ event_ctx = tevent_context_init(mem_ctx);
+
+ struct info_struct *data = talloc(mem_ctx, struct info_struct);
+
+ // setting up private data
+ data->counter = 1;
+
+ // first immediate event
+ im = tevent_create_immediate(mem_ctx);
+ if (im == NULL) {
+ fprintf(stderr, "FAILED\n");
+ return EXIT_FAILURE;
+ }
+ tevent_schedule_immediate(im, event_ctx, foo, data);
+
+ tevent_loop_wait(event_ctx);
+ talloc_free(mem_ctx);
+
+ return 0;
+}
+@endcode
+
+@subsection Signal Signal event
+
+This is an alternative to the standard C library functions signal() or
+sigaction(). The main difference between these approaches is the way handlers
+are set up for the different phases of the program's run time.
+
+While the standard C library methods for dealing with signals offer sufficient
+tools for most cases, they are inadequate for handling signals within the
+tevent loop. It may be necessary to finish certain tevent requests within the
+tevent loop without interruption. If a signal arrived while the tevent loop is
+in progress, a standard signal handler would not return processing to the very
+same place in the application, and the tevent loop would be quit for good. In
+such cases, tevent signal handlers offer the possibility of dealing with these
+signals by masking them from the rest of the application and not quitting the
+loop, so the other events can still be processed.
+
+Tevent also offers a control function which makes it possible to verify
+whether signals can be handled via tevent. It is defined within the tevent
+library and returns a boolean value revealing the result of the verification.
+
+@code
+bool tevent_signal_support (struct tevent_context *ev)
+@endcode
+
+Checking for signal support is not necessary, but if support is not
+guaranteed, this is a good and easy check which prevents unexpected behaviour
+or failure of the program. Such a test does not have to be run every single
+time you wish to create a signal handler; running it once at the beginning,
+during the initialization of the program, is enough. After that, simply adapt
+to each situation that arises.
+
+@code
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <tevent.h>
+#include <signal.h>
+
+static void handler(struct tevent_context *ev,
+ struct tevent_signal *se,
+ int signum,
+ int count,
+ void *siginfo,
+ void *private_data)
+{
+
+ // Do something useful
+
+ printf("handling signal...\n");
+ exit(EXIT_SUCCESS);
+}
+
+int main (void)
+{
+ struct tevent_context *event_ctx;
+ TALLOC_CTX *mem_ctx;
+ struct tevent_signal *sig;
+
+ mem_ctx = talloc_new(NULL); //parent
+ if (mem_ctx == NULL) {
+ fprintf(stderr, "FAILED\n");
+ return EXIT_FAILURE;
+ }
+
+ event_ctx = tevent_context_init(mem_ctx);
+ if (event_ctx == NULL) {
+ fprintf(stderr, "FAILED\n");
+ return EXIT_FAILURE;
+ }
+
+ if (tevent_signal_support(event_ctx)) {
+ // create signal event
+ sig = tevent_add_signal(event_ctx, mem_ctx, SIGINT, 0, handler, NULL);
+ if (sig == NULL) {
+ fprintf(stderr, "FAILED\n");
+ return EXIT_FAILURE;
+ }
+ tevent_loop_wait(event_ctx);
+ }
+
+ talloc_free(mem_ctx);
+ return EXIT_SUCCESS;
+}
+@endcode
+
+
+@subsection File File descriptor event
+
+Support for events on file descriptors is mainly useful for socket
+communication, but it works flawlessly with the standard streams (stdin,
+stdout, stderr) as well. Working with file descriptors asynchronously makes it
+possible to switch between several I/O operations while they are being
+processed; with a greater number of I/O operations, such overlapping improves
+throughput.
+
+There are several other functions in the tevent API related to handling file
+descriptors (too many functions are defined within tevent to describe them all
+fully in this tutorial; the declarations of the rest can easily be found on
+the library’s website or directly in the source code):
+
+<ul>
+<li>tevent_fd_set_close_fn() - registers another function to be called at the
+    moment when the tevent_fd structure is freed.</li>
+<li>tevent_fd_set_auto_close() - calling this function can simplify the
+    maintenance of file descriptors, because it instructs tevent to close the
+    appropriate file descriptor when the tevent_fd structure is about to be
+    freed.</li>
+<li>tevent_fd_get_flags() - returns the flags which are set on the file
+    descriptor connected with this tevent_fd structure.</li>
+<li>tevent_fd_set_flags() - sets the specified flags on the event’s file
+    descriptor.</li>
+</ul>
+
+@code
+
+static void close_fd(struct tevent_context *ev, struct tevent_fd *fd_event,
+ int fd, void *private_data)
+{
+ // processing when fd_event is freed
+}
+
+static void handler(struct tevent_context *ev,
+                    struct tevent_fd *fde,
+                    uint16_t flags,
+                    void *private_data)
+{
+    // handling the event, e.g. reading from the file descriptor
+    tevent_fd_set_close_fn(fde, close_fd);
+}
+
+int run(TALLOC_CTX *mem_ctx, struct tevent_context *event_ctx,
+ int fd, uint16_t flags, char *buffer)
+{
+ struct tevent_fd* fd_event = NULL;
+
+ if (flags & TEVENT_FD_READ) {
+ fd_event = tevent_add_fd(event_ctx,
+ mem_ctx,
+ fd,
+ flags,
+ handler,
+ buffer);
+ }
+ if (fd_event == NULL) {
+ // error handling
+ }
+ return tevent_loop_once(event_ctx);
+}
+@endcode
+
+*/
diff --git a/doc/tevent_queue.dox b/doc/tevent_queue.dox
new file mode 100644
index 0000000..9c247e5
--- /dev/null
+++ b/doc/tevent_queue.dox
@@ -0,0 +1,275 @@
+/**
+@page tevent_queue Chapter 5: Tevent queue
+@section queue Tevent queue
+
+There is a possibility that the dispatcher and its handlers may not be able to
+handle all the incoming events as quickly as they arrive. One way to deal with
+this situation is to buffer the received events by introducing an event queue
+into the events stream, between the events generator and the dispatcher. Events
+are added to the queue as they arrive, and the dispatcher pops them off the
+beginning of the queue as fast as possible. In the tevent library it is
+similar, but a queue is not automatically set up for any event. The queue has
+to be created on purpose, and the requests which should follow the FIFO order
+have to be added to it explicitly. Creating such a queue is crucial in
+situations when sequential processing is absolutely essential for the
+successful
+completion of a task, e.g. for a large quantity of data that are about to be
+written from a buffer into a socket. The tevent library has its own queue
+structure that is ready to use after it has been initialized and started up
+once.
+
+@subsection cr_queue Creation of Queues
+
+The first and most important step is the creation of the tevent queue
+(represented by struct tevent_queue), which will then be in running mode.
+
+@code
+struct tevent_queue* tevent_queue_create (TALLOC_CTX *mem_ctx, const char *name)
+@endcode
+
+When this function returns, the memory has been allocated, a destructor has
+been set, the queue has been labeled as running, and the structure is ready to
+be filled with entries.
+
+<strong>Stopping and starting queues on the run</strong>
+
+If you need to stop a queue from processing its entries, and then turn it on
+again, a couple of functions serve this purpose:
+
+- bool tevent_queue_stop()
+- bool tevent_queue_start()
+
+These functions actually only set a variable which indicates whether the queue
+has been stopped or started. The returned value indicates the result.
+
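+A brief sketch (assuming <code>queue</code> was created with
+tevent_queue_create() as shown above):
+
+@code
+tevent_queue_stop(queue);  // entries keep accumulating, but none is triggered
+
+// ... add further entries, do other work ...
+
+tevent_queue_start(queue); // processing resumes with the next entry in line
+@endcode
+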
+@subsection add_queue Adding Requests to a Queue
+
+Tevent in fact offers 3 possible ways of inserting a request into a queue.
+There are no vast differences between them, but still there might be situations
+where one of them is more suitable and desired than another.
+
+@code
+bool tevent_queue_add(struct tevent_queue *queue,
+ struct tevent_context *ev,
+ struct tevent_req *req,
+ tevent_queue_trigger_fn_t trigger,
+ void *private_data)
+@endcode
+
+This call is the simplest of all three. It offers only a boolean answer
+telling whether the operation of adding the request to the queue succeeded or
+not. No separate removal of the item from the queue is possible; the only way
+is to deallocate the whole tevent request, which triggers its destructor and
+thereby also drops the request from the queue.
+
+<strong>Extended Options</strong>
+
+Both of the following functions have a feature in common - they return a
+tevent queue entry structure representing the item in the queue. The only
+possible use of this structure is to deallocate it via its pointer (which also
+removes the entry from the queue); with these functions it is therefore
+possible to remove a tevent request from a queue without deallocating the
+request itself. The previous function could only deallocate the whole tevent
+request from memory, and thereby cause its removal from the queue as well.
+There is no other use of this structure in the API at this stage of the tevent
+library, although the returned pointer can make debugging easier while
+developing with tevent.
+
+@code
+struct tevent_queue_entry *tevent_queue_add_entry(struct tevent_queue *queue,
+ struct tevent_context *ev,
+ struct tevent_req *req,
+ tevent_queue_trigger_fn_t trigger,
+ void *private_data)
+@endcode
+
+This function allows an optimized addition of entries to a queue: a check is
+first carried out to see whether the queue is empty. If it is, inserting the
+entry into the queue is skipped and the trigger is called directly.
+
+@code
+struct tevent_queue_entry *tevent_queue_add_optimize_empty(struct tevent_queue *queue,
+ struct tevent_context *ev,
+ struct tevent_req *req,
+ tevent_queue_trigger_fn_t trigger,
+ void *private_data)
+@endcode
+
+When calling any of the functions for inserting an item into a queue, it is
+possible to leave out the fourth argument (trigger) and pass a NULL pointer
+instead of a function. This creates so-called blocking entries. Since they do
+not have any trigger operation to activate, they simply sit in their position
+until they are finished or removed from the queue by another function. Their
+purpose is to block the other items in the queue from being triggered. A short
+sketch follows.
+
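+A brief, hypothetical sketch of a blocking entry (<code>blocker_req</code> is
+an ordinary tevent request created beforehand; the names are illustrative):
+
+@code
+// Passing NULL as the trigger turns this into a blocking entry.
+struct tevent_queue_entry *blocker =
+        tevent_queue_add_entry(queue, ev, blocker_req, NULL, NULL);
+if (blocker == NULL) {
+    // error handling
+}
+
+// ... entries added after this point wait behind the blocker ...
+
+// Removing the blocking entry (or freeing blocker_req itself) lets the
+// queue continue with the next item.
+talloc_free(blocker);
+@endcode
+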
+@subsection example_q Example of tevent queue
+
+@code
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <tevent.h>
+
+struct foo_state {
+ int local_var;
+ int x;
+};
+
+struct juststruct {
+ TALLOC_CTX * ctx;
+ struct tevent_context *ev;
+ int y;
+};
+
+int created = 0;
+
+static void timer_handler(struct tevent_context *ev, struct tevent_timer *te,
+ struct timeval current_time, void *private_data)
+{
+ // time event which after all sets request as done. Following item from
+ // the queue may be invoked.
+ struct tevent_req *req = private_data;
+ struct foo_state *stateX = tevent_req_data(req, struct foo_state);
+
+ // processing some stuff
+
+ printf("time_handler\n");
+
+ printf("Request #%d set as done.\n", stateX->x);
+
+ tevent_req_done(req);
+ talloc_free(req);
+}
+
+static void trigger(struct tevent_req *req, void *private_data)
+{
+ struct juststruct *priv = tevent_req_callback_data (req, struct juststruct);
+ struct foo_state *in = tevent_req_data(req, struct foo_state);
+ struct timeval schedule;
+ struct tevent_timer *tim;
+ schedule = tevent_timeval_current_ofs(1, 0);
+ printf("Processing request #%d\n", in->x);
+
+ if (in->x % 3 == 0) { // just example; third request does not contain
+ // any further operation and will be finished right
+ // away.
+ tim = NULL;
+ } else {
+ tim = tevent_add_timer(priv->ev, req, schedule, timer_handler, req);
+ }
+
+ if (tim == NULL) {
+ printf("Request #%d set as done.\n", in->x);
+ tevent_req_done(req);
+ talloc_free(req);
+ }
+}
+
+struct tevent_req *foo_send(TALLOC_CTX *mem_ctx, struct tevent_context *ev,
+ const char *name, int num)
+{
+ struct tevent_req *req;
+ struct foo_state *state;
+
+ printf("foo_send\n");
+ req = tevent_req_create(mem_ctx, &state, struct foo_state);
+ if (req == NULL) { // check for appropriate allocation
+ return NULL;
+ }
+
+ // exemplary filling of variables
+ state->local_var = 1;
+ state->x = num;
+
+ return req;
+}
+
+static void foo_done(struct tevent_req *req) {
+
+ enum tevent_req_state state;
+ uint64_t err;
+
+ if (tevent_req_is_error(req, &state, &err)) {
+ printf("ERROR WAS SET %d\n", state);
+ return;
+ } else {
+ // processing some stuff
+ printf("Callback is done...\n");
+ }
+}
+
+int main (int argc, char **argv)
+{
+ TALLOC_CTX *mem_ctx;
+ struct tevent_req *req[6];
+ struct tevent_context *ev;
+ struct tevent_queue *fronta = NULL;
+ struct juststruct *data;
+ int i = 0;
+
+ const char * const names[] = {
+ "first", "second", "third", "fourth", "fifth"
+ };
+
+ printf("INIT\n");
+
+ mem_ctx = talloc_new(NULL); //parent
+ ev = tevent_context_init(mem_ctx);
+ if (ev == NULL) {
+ fprintf(stderr, "MEMORY ERROR\n");
+ return EXIT_FAILURE;
+ }
+
+ // setting up queue
+ fronta = tevent_queue_create(mem_ctx, "test_queue");
+ tevent_queue_stop(fronta);
+ tevent_queue_start(fronta);
+ if (tevent_queue_running(fronta)) {
+ printf ("Queue is running (length: %d)\n", tevent_queue_length(fronta));
+ } else {
+ printf ("Queue is not running\n");
+ }
+
+ data = talloc(ev, struct juststruct);
+ data->ctx = mem_ctx;
+ data->ev = ev;
+
+
+ // create 4 requests
+ for (i = 1; i < 5; i++) {
+ req[i] = foo_send(mem_ctx, ev, names[i], i);
+ if (req[i] == NULL) {
+ fprintf(stderr, "Request error!\n");
+ break;
+ }
+ tevent_req_set_callback(req[i], foo_done, data);
+ created++;
+ }
+
+ // add item to a queue
+ tevent_queue_add(fronta, ev, req[1], trigger, data);
+ tevent_queue_add(fronta, ev, req[2], trigger, data);
+ tevent_queue_add(fronta, ev, req[3], trigger, data);
+ tevent_queue_add(fronta, ev, req[4], trigger, data);
+
+ printf("Queue length: %d\n", tevent_queue_length(fronta));
+ while(tevent_queue_length(fronta) > 0) {
+ tevent_loop_once(ev);
+ printf("Queue: %d items left\n", tevent_queue_length(fronta));
+ }
+
+ talloc_free(mem_ctx);
+ printf("FINISH\n");
+
+ return EXIT_SUCCESS;
+}
+@endcode
+
+*/
diff --git a/doc/tevent_request.dox b/doc/tevent_request.dox
new file mode 100644
index 0000000..e1e45b1
--- /dev/null
+++ b/doc/tevent_request.dox
@@ -0,0 +1,189 @@
+/**
+@page tevent_request Chapter 4: Tevent request
+@section request Tevent request
+
+A specific feature of the library is the tevent request API that provides for
+asynchronous computation and allows much more interconnected working and
+cooperation among functions and events. When working with tevent request it
+is possible to nest one event under another and handle them bit by bit. This
+enables the creation of sequences of steps, and provides an opportunity to
+prepare for all problems which may unexpectedly happen within the different
+phases. One way or another, subrequests split bigger tasks into smaller ones
+which allow a clearer view of each task as a whole.
+
+@subsection name Naming conventions
+
+There is a naming convention which is not obligatory but it is followed in this
+tutorial:
+
+- Functions triggered before the event happens. These establish a request.
+- \b foo_send(...) - this function is called first. It creates the tevent
+  request (a tevent_req structure). It does not block anything; it simply
+  creates the request, sets a callback (foo_done) and lets the program
+  continue.
+- Functions triggered as a result of the event.
+- \b foo_done(...) - this function contains the code that does the actual
+  handling; based upon its results, the request is marked either as done or,
+  if an error occurs, as failed.
+- \b foo_recv(...) - this function contains the code which should, if
+  demanded, access the result data and make them visible to the caller. The
+  foo_state is deallocated from memory when the request’s processing is over,
+  so any computed data would otherwise be lost at this point.
+
+As already mentioned, the naming convention covers not only the functions but
+also the data themselves:
+
+- \b foo_state - this is a structure. It contains all the data necessary for
+ the asynchronous task.
+
+@subsection cr_req Creating a New Asynchronous Request
+
+The first step in working asynchronously is allocating memory. As in previous
+cases, a talloc context is required, to which the asynchronous request will be
+tied. The next step is the creation of the request itself.
+
+@code
+struct tevent_req* tevent_req_create (TALLOC_CTX *mem_ctx, void **pstate, #type)
+@endcode
+
+The pstate is the pointer to the private data. The necessary amount of memory
+(based on data type) is allocated during this call. Within this same memory
+area all the data from the asynchronous request that need to be preserved for
+some time should be kept.
+
+<b>Dealing with a lack of memory</b>
+
+The verification of the returned pointer against NULL is necessary in order to
+identify a potential lack of memory. There is a special function which helps
+with this check: tevent_req_nomem().
+
+It handles verification both of the talloc memory allocation and of the
+associated tevent request, and is therefore a very useful function for avoiding
+unexpected situations. It can easily be used when checking the availability of
+further memory resources that are required for a tevent request. Imagine an
+example where additional memory needs arise although no memory resources are
+currently available.
+
+@code
+bar = talloc(mem_ctx, struct foo);
+if (tevent_req_nomem(bar, req)) {
+ // handling a problem
+}
+@endcode
+
+This code ensures that if the variable bar contains NULL as a result of an
+unsuccessful allocation, this is noticed, and that the tevent request req is
+marked as having run out of memory, which means it cannot be finished as
+originally intended.
+
+
+@subsection fini_req Finishing a Request
+
+Marking each request as finished is an essential principle of the tevent
+library. Without marking the request as completed - either successfully or with
+an error - the tevent loop could not let the appropriate callback be triggered.
+It is important to understand that this would be a significant problem,
+because the request is usually not a single function which prints some text on
+a screen, but rather a link in a chain of other requests. Stopping one request
+would stop the others, memory resources would not be freed, file descriptors
+might remain open, communication via a socket could be interrupted, and so on.
+Therefore it is important to think about finishing requests, either
+successfully or not, and to prepare functions for all possible scenarios, so
+that the callbacks do not process data that are actually invalid or, even
+worse, non-existent, which could lead to a segmentation fault.
+
+<ul>
+<li>\b Manually - This is the most common way of finishing a request. Calling
+this function sets the request state to TEVENT_REQ_DONE. This is the only
+purpose of this function and it should be used when everything went well.
+Typically it is used within the done functions (a short sketch follows this
+list).
+
+@code
+void tevent_req_done (struct tevent_req *req)
+@endcode
+Alternatively, the request can end up being unsuccessful.
+@code
+bool tevent_req_error (struct tevent_req *req, uint64_t error)
+@endcode
+
+The second argument takes an error code (declared by the programmer, for
+example in an enumerated type). The function tevent_req_error() sets the state
+of the request to TEVENT_REQ_USER_ERROR and also stores the error code within
+the structure so it can be used later, for example for debugging. The function
+returns true if marking the request as an error succeeded, i.e. if the error
+value passed to it was not zero.</li>
+
+<li>
+<b>Setting up a timeout for a request</b> - If processing a request takes too
+much time, it can be timed out. This is considered an error of the request and
+leads to calling its callback. In the background, the timeout is set through a
+time event (described in
+@subpage tevent_events ) which eventually triggers an operation marking the
+request as TEVENT_REQ_TIMED_OUT (it cannot be considered as successfully
+finished). If a timeout was already set, this operation overwrites it with the
+new time value (so the timeout may be lengthened), and if everything is set up
+properly, it returns true.
+
+@code
+bool tevent_req_set_endtime(struct tevent_req *req,
+ struct tevent_context *ev,
+ struct timeval endtime);
+@endcode
+</li>
+
+
+<li><b>Premature Triggering</b> - Imagine a situation in which some part of a
+nested subrequest ended up with a failure and it is still required to trigger
+a callback. Such a situation might result from a lack of memory, making it
+impossible to allocate enough memory for the event to start processing another
+subrequest, or from a clear intention to skip the other procedures and trigger
+the callback regardless of further progress. In these cases, the function
+tevent_req_post() is very handy and offers this option.
+
+@code
+struct tevent_req* tevent_req_post (struct tevent_req *req,
+ struct tevent_context *ev);
+@endcode
+
+A request finished in this way does not behave as a time event nor as a file
+descriptor event but as an immediately scheduled event, and therefore it will
+be treated according to the description laid down in @subpage tevent_events .
+</li>
+</ul>
+
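+A minimal sketch of finishing a request and retrieving its result, following
+the naming convention from the beginning of this chapter (foo_state, its
+result field, foo_finish() and the error code 1 are illustrative):
+
+@code
+static void foo_finish(struct tevent_req *req, int result)
+{
+    struct foo_state *state = tevent_req_data(req, struct foo_state);
+
+    if (result < 0) {
+        tevent_req_error(req, 1);  // TEVENT_REQ_USER_ERROR, code stored in req
+        return;
+    }
+    state->result = result;
+    tevent_req_done(req);          // TEVENT_REQ_DONE, the callback will run
+}
+
+int foo_recv(struct tevent_req *req, int *result)
+{
+    struct foo_state *state = tevent_req_data(req, struct foo_state);
+    enum tevent_req_state req_state;
+    uint64_t err;
+
+    if (tevent_req_is_error(req, &req_state, &err)) {
+        return -1;                 // the request failed or timed out
+    }
+    *result = state->result;       // hand the result data to the caller
+    return 0;
+}
+@endcode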
+
+@section nested Subrequests - Nested Requests
+
+To create more complex and interconnected asynchronous operations, it is
+possible to nest one request under another and thus create a so-called
+subrequest. Subrequests are not represented by any special structure; they are
+also created by tevent_req_create(). The diagram below shows the nesting and
+the lifetime of each request, as well as the order in which the functions are
+triggered during the application run.
+
+<i>Wrapper</i> represents the trigger of the whole cascade of (sub)requests.
+It may be e.g. a time or file descriptor event, or another request that was
+created at a specific time by the function tevent_wakeup_send(), which is a
+slightly exceptional way of creating a request:
+
+@code
+struct tevent_req *tevent_wakeup_send(TALLOC_CTX *mem_ctx,
+ struct tevent_context *ev,
+ struct timeval wakeup_time);
+@endcode
+
+Calling this function creates a tevent request (returned as its return value)
+which is completed - woken up - at the specified time. When using this
+function, it is necessary to call tevent_wakeup_recv() in the subrequest’s
+callback to check for any problems.
+
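+A brief sketch of this pattern (a hedged example: the first snippet is assumed
+to live inside a foo_send()-style function after tevent_req_create(), and
+wakeup_done is an illustrative name):
+
+@code
+// inside foo_send(), after tevent_req_create():
+subreq = tevent_wakeup_send(state, ev, tevent_timeval_current_ofs(2, 0));
+if (tevent_req_nomem(subreq, req)) {
+    return tevent_req_post(req, ev);
+}
+tevent_req_set_callback(subreq, wakeup_done, req);
+
+// the subrequest's callback:
+static void wakeup_done(struct tevent_req *subreq)
+{
+    struct tevent_req *req =
+        tevent_req_callback_data(subreq, struct tevent_req);
+
+    if (!tevent_wakeup_recv(subreq)) {
+        tevent_req_error(req, 1);  // the wakeup failed
+        return;
+    }
+    talloc_free(subreq);
+
+    // ... continue with the next step of the request ...
+}
+@endcode
+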
+@image html tevent_subrequest.png
+
+A comprehensive example of nested subrequests can be found in the file
+echo_server.c. It implements a complete, self-contained echo server with no
+dependencies other than libtevent and libtalloc.
+
+*/
diff --git a/doc/tevent_thread.dox b/doc/tevent_thread.dox
new file mode 100644
index 0000000..875dae8
--- /dev/null
+++ b/doc/tevent_thread.dox
@@ -0,0 +1,322 @@
+/**
+@page tevent_thread Chapter 6: Tevent with threads
+
+@section threads Tevent with threads
+
+In order to use tevent with threads, you must first understand
+how to use the talloc library in threaded programs. For more
+information about working with talloc, please visit <a
+href="https://talloc.samba.org/">talloc website</a> where tutorial and
+documentation are located.
+
+If a tevent context structure is talloced from a NULL, thread-safe talloc
+context, then it can be safe to use in a threaded program. The function
+<code>talloc_disable_null_tracking()</code> <b>must</b> be called from the initial
+program thread before any talloc calls are made to ensure talloc is thread-safe.
+
+Each thread must create its own tevent context structure by calling
+<code>tevent_context_init(NULL)</code>, and no talloc memory contexts
+can be shared between threads.
+
+Separate threads using tevent in this way can communicate
+by writing data into file descriptors that are being monitored
+by a tevent context on another thread. For example (simplified
+with no error handling):
+
+@code
+Main thread:
+
+main()
+{
+ talloc_disable_null_tracking();
+
+ struct tevent_context *master_ev = tevent_context_init(NULL);
+ void *mem_ctx = talloc_new(master_ev);
+
+ // Create file descriptor to monitor.
+ int pipefds[2];
+
+ pipe(pipefds);
+
+ struct tevent_fd *fde = tevent_add_fd(master_ev,
+ mem_ctx,
+ pipefds[0], // read side of pipe
+ TEVENT_FD_READ,
+ pipe_read_handler, // callback function
+ private_data_pointer);
+
+ // Create sub thread, pass pipefds[1] write side of pipe to it.
+ // The above code not shown here..
+
+ // Process events.
+ tevent_loop_wait(master_ev);
+
+ // Cleanup if loop exits.
+ talloc_free(master_ev);
+}
+
+@endcode
+
+When the subthread writes to pipefds[1], the function
+<code>pipe_read_handler()</code> will be called in the main thread.
+
+@subsection More sophisticated use
+
+A popular way to use an event library within threaded programs
+is to allow a sub-thread to asynchronously schedule a tevent_immediate
+function call from the event loop of another thread. This can be built
+out of the basic functions and isolation mechanisms of tevent,
+but tevent also comes with some utility functions that make
+this easier, so long as you understand the limitations that
+using threads with talloc and tevent impose.
+
+To allow a tevent context to receive an asynchronous tevent_immediate
+function callback from another thread, create a struct tevent_thread_proxy *
+by calling @code
+
+struct tevent_thread_proxy *tevent_thread_proxy_create(
+ struct tevent_context *dest_ev_ctx);
+
+@endcode
+
+This function allocates the internal data structures to
+allow asynchronous callbacks as a talloc child of the
+struct tevent_context *, and returns a struct tevent_thread_proxy *
+that can be passed to another thread.
+
+When you have finished receiving asynchronous callbacks, simply
+talloc_free the struct tevent_thread_proxy *, or talloc_free
+the struct tevent_context *, which will deallocate the resources
+used.
+
+To schedule an asynchronous tevent_immediate function call from one
+thread on the tevent loop of another thread, use
+@code
+
+void tevent_thread_proxy_schedule(struct tevent_thread_proxy *tp,
+ struct tevent_immediate **pp_im,
+ tevent_immediate_handler_t handler,
+ void **pp_private_data);
+
+@endcode
+
+This function causes the function <code>handler()</code>
+to be invoked as a tevent_immediate callback from the event loop
+of the thread that created the struct tevent_thread_proxy *
+(so the owning <code>struct tevent_context *</code> should be
+long-lived and not in the process of being torn down).
+
+The <code>struct tevent_thread_proxy</code> object being
+used here is a child of the event context of the target
+thread. So external synchronization mechanisms must be
+used to ensure that the target object is still in use
+at the time of the <code>tevent_thread_proxy_schedule()</code>
+call. In the example below, the request/response nature
+of the communication ensures this.
+
+The <code>struct tevent_immediate **pp_im</code> passed into this function
+should be a struct tevent_immediate * allocated on a talloc context
+local to this thread, and will be reparented via talloc_move
+to be owned by <code>struct tevent_thread_proxy *tp</code>.
+<code>*pp_im</code> will be set to NULL on successful scheduling
+of the tevent_immediate call.
+
+<code>handler()</code> will be called as a normal tevent_immediate
+callback from the <code>struct tevent_context *</code> of the destination
+event loop that created the <code>struct tevent_thread_proxy *</code>.
+
+Returning from this function does not mean that the <code>handler</code>
+has been invoked, merely that it has been scheduled to be called in the
+destination event loop.
+
+Because the calling thread does not wait for the
+callback to be scheduled and run on the destination
+thread, this is a fire-and-forget call. If you wish
+confirmation of the <code>handler()</code> being
+successfully invoked, you must ensure it replies to the
+caller in some way.
+
+Because of the asynchronous nature of this call, the parameter
+passed to the destination thread has some
+restrictions. If you don't need parameters, merely pass
+<code>NULL</code> as the value of
+<code>void **pp_private_data</code>.
+
+If you wish to pass a pointer to data between the threads,
+it <b>MUST</b> be a pointer to a talloced pointer, which is
+not part of a talloc-pool, and it must not have a destructor
+attached. The ownership of the memory pointed to will
+be passed from the calling thread to the tevent library,
+and if the receiving thread does not talloc-reparent
+it to its own contexts, it will be freed once the
+<code>handler</code> is called.
+
+On success, <code>*pp_private_data</code> will be <code>NULL</code>
+to signify the talloc memory ownership has been moved.
+
+In practice for message passing between threads in
+event loops these restrictions are not very onerous.
+
+The easiest way to do a request-reply pair between
+tevent loops on different threads is to pass the
+parameter block of memory back and forth using
+a reply <code>tevent_thread_proxy_schedule()</code>
+call.
+
+Here is an example (without error checking for
+simplicity):
+
+@code
+------------------------------------------------
+// Master thread.
+
+main()
+{
+ // Make talloc thread-safe.
+
+ talloc_disable_null_tracking();
+
+ // Create the master event context.
+
+ struct tevent_context *master_ev = tevent_context_init(NULL);
+
+ // Create the master thread proxy to allow it to receive
+ // async callbacks from other threads.
+
+ struct tevent_thread_proxy *master_tp =
+ tevent_thread_proxy_create(master_ev);
+
+ // Create sub-threads, passing master_tp in
+ // some way to them.
+ // This code not shown..
+
+ // Process events.
+ // Function master_callback() below
+ // will be invoked on this thread on
+ // master_ev event context.
+
+ tevent_loop_wait(master_ev);
+
+ // Cleanup if loop exits.
+
+ talloc_free(master_ev);
+}
+
+// Data passed between threads.
+struct reply_state {
+ struct tevent_thread_proxy *reply_tp;
+ pthread_t thread_id;
+ bool *p_finished;
+};
+
+// Callback Called in child thread context.
+
+static void thread_callback(struct tevent_context *ev,
+ struct tevent_immediate *im,
+ void *private_ptr)
+{
+ // Move the ownership of what private_ptr
+ // points to from the tevent library back to this thread.
+
+ struct reply_state *rsp =
+ talloc_get_type_abort(private_ptr, struct reply_state);
+
+ talloc_steal(ev, rsp);
+
+ *rsp->p_finished = true;
+
+ // im will be talloc_freed on return from this call.
+ // but rsp will not.
+}
+
+// Callback Called in master thread context.
+
+static void master_callback(struct tevent_context *ev,
+ struct tevent_immediate *im,
+ void *private_ptr)
+{
+ // Move the ownership of what private_ptr
+ // points to from the tevent library to this thread.
+
+ struct reply_state *rsp =
+ talloc_get_type_abort(private_ptr, struct reply_state);
+
+ talloc_steal(ev, rsp);
+
+ printf("Callback from thread %s\n", thread_id_to_string(rsp->thread_id));
+
+ /* Now reply to the thread ! */
+ tevent_thread_proxy_schedule(rsp->reply_tp,
+ &im,
+ thread_callback,
+ &rsp);
+
+ // Note - rsp and im are now NULL as the tevent library
+ // owns the memory.
+}
+
+// Child thread.
+
+static void *thread_fn(void *private_ptr)
+{
+ struct tevent_thread_proxy *master_tp =
+ talloc_get_type_abort(private_ptr, struct tevent_thread_proxy);
+ bool finished = false;
+
+ // Create our own event context.
+
+ struct tevent_context *ev = tevent_context_init(NULL);
+
+ // Create the local thread proxy to allow us to receive
+ // async callbacks from other threads.
+
+ struct tevent_thread_proxy *local_tp =
+ tevent_thread_proxy_create(ev);
+
+ // Setup the data to send.
+
+ struct reply_state *rsp = talloc(ev, struct reply_state);
+
+ rsp->reply_tp = local_tp;
+ rsp->thread_id = pthread_self();
+ rsp->p_finished = &finished;
+
+ // Create the immediate event to use.
+
+ struct tevent_immediate *im = tevent_create_immediate(ev);
+
+ // Call the master thread.
+
+ tevent_thread_proxy_schedule(master_tp,
+ &im,
+ master_callback,
+ &rsp);
+
+ // Note - rsp and im are now NULL as the tevent library
+ // owns the memory.
+
+ // Wait for the reply.
+
+ while (!finished) {
+ tevent_loop_once(ev);
+ }
+
+ // Cleanup.
+
+ talloc_free(ev);
+ return NULL;
+}
+
+@endcode
+
+Note this doesn't have to be a master-subthread communication.
+Any thread that has access to the <code>struct tevent_thread_proxy *</code>
+pointer of another thread that has called <code>tevent_thread_proxy_create()
+</code> can send an async tevent_immediate request.
+
+But remember the caveat that external synchronization must be used
+to ensure the target <code>struct tevent_thread_proxy *</code> object
+exists at the time of the <code>tevent_thread_proxy_schedule()</code>
+call or unreproducible crashes will result.
+*/
diff --git a/doc/tevent_tutorial.dox b/doc/tevent_tutorial.dox
new file mode 100644
index 0000000..207a244
--- /dev/null
+++ b/doc/tevent_tutorial.dox
@@ -0,0 +1,22 @@
+/**
+@page tevent_tutorial The Tutorial
+
+@section tevent_tutorial_introduction Introduction
+
+This tutorial describes how to work with the tevent library.
+
+@section tevent_tutorial_toc Table of contents
+
+@subpage tevent_context
+
+@subpage tevent_events
+
+@subpage tevent_data
+
+@subpage tevent_request
+
+@subpage tevent_queue
+
+@subpage tevent_thread
+
+*/
diff --git a/doc/tutorials.dox b/doc/tutorials.dox
new file mode 100644
index 0000000..e8beed7
--- /dev/null
+++ b/doc/tutorials.dox
@@ -0,0 +1,43 @@
+/**
+ * @page tevent_queue_tutorial The tevent_queue tutorial
+ *
+ * @section Introduction
+ *
+ * A tevent_queue is used to queue up async requests that must be
+ * serialized. For example writing buffers into a socket must be
+ * serialized. Writing a large lump of data into a socket can require
+ * multiple write(2) or send(2) system calls. If more than one async
+ * request is outstanding to write large buffers into a socket, every
+ * request must individually be completed before the next one begins,
+ * even if multiple syscalls are required.
+ *
+ * To do this, every socket gets assigned a tevent_queue struct.
+ *
+ * Creating a serialized async request follows the usual convention of
+ * returning a tevent_req structure with an embedded state structure. To
+ * serialize the work the request is about to do, instead of directly
+ * starting or doing that work, tevent_queue_add must be called. When it
+ * is time for the serialized async request to do its work, the trigger
+ * callback function that was given to tevent_queue_add is called. In the example
+ * of writing to a socket, the trigger is called when the write request
+ * can begin accessing the socket.
+ *
+ * How does this engine work behind the scenes? When the queue is empty,
+ * tevent_queue_add schedules an immediate call to the trigger
+ * callback. The trigger callback starts its work, likely by starting
+ * other async subrequests. While these async subrequests are working,
+ * more requests can accumulate in the queue by tevent_queue_add. While
+ * there is no function to explicitly trigger the next waiter in line, it
+ * still works: when the active request in the queue is done, it will be
+ * destroyed by talloc_free. Freeing a serialized async request that had
+ * been added to a queue will trigger the next request in the queue via a
+ * talloc destructor attached to a child of the serialized request. This
+ * way the queue is kept busy whenever an async request finishes.
+ *
+ * @section Example
+ *
+ * @code
+ * Metze: Please add a code example here.
+ * @endcode
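+ *
+ * Until a full example is filled in above, here is a minimal sketch of the
+ * _send side of such a serialized write request (the names write_queue_send,
+ * write_state and write_trigger are illustrative, not part of the tevent
+ * API):
+ *
+ * @code
+ * struct write_state {
+ *     struct tevent_context *ev;
+ *     int fd;
+ *     const uint8_t *buf;
+ *     size_t count;
+ * };
+ *
+ * static void write_trigger(struct tevent_req *req, void *private_data)
+ * {
+ *     struct write_state *state = tevent_req_data(req, struct write_state);
+ *
+ *     // It is now this request's turn: start the actual write here, e.g.
+ *     // by registering a TEVENT_FD_WRITE event on state->fd.
+ * }
+ *
+ * struct tevent_req *write_queue_send(TALLOC_CTX *mem_ctx,
+ *                                     struct tevent_context *ev,
+ *                                     struct tevent_queue *queue,
+ *                                     int fd,
+ *                                     const uint8_t *buf, size_t count)
+ * {
+ *     struct tevent_req *req;
+ *     struct write_state *state;
+ *
+ *     req = tevent_req_create(mem_ctx, &state, struct write_state);
+ *     if (req == NULL) {
+ *         return NULL;
+ *     }
+ *     state->ev = ev;
+ *     state->fd = fd;
+ *     state->buf = buf;
+ *     state->count = count;
+ *
+ *     // Do not start writing yet; queue the request. write_trigger()
+ *     // will be called once all previously queued requests are done.
+ *     if (!tevent_queue_add(queue, ev, req, write_trigger, NULL)) {
+ *         talloc_free(req);
+ *         return NULL;
+ *     }
+ *     return req;
+ * }
+ * @endcode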
+ */