2024-02-20 - Ring buffer v2
===========================

Goals:
  - improve the multi-thread performance of rings so that traces can be written
    from all threads in parallel without the huge bottleneck of the lock that
    is currently necessary to protect the buffer. This is important for mmapped
    areas that are left as a file when the process crashes.

  - keep traces synchronous within a given thread, i.e. when the TRACE() call
    returns, the trace is either written into the ring or lost due to slow
    readers.

  - try hard to limit the cache line bounces between threads due to the use of
    a shared work area.

  - make waiting threads not disturb working ones

  - continue to work on all supported platforms, with a particular focus on
    performance for modern platforms (memory ordering, DWCAS etc can be used if
    they provide any benefit), with a fallback for inferior platforms.

  - do not reorder traces within a given thread.

  - do not break existing features

  - do not significantly increase memory usage


Analysis of the current situation
=================================

Currently, there is a read lock around the call to __sink_write() in order to
make sure that an attempt to write the number of lost messages is delivered
with the highest priority and is consistent with the lost counter. This doesn't
seem to pose any problem at this point, though if it did, it could be
revisited.

__sink_write() calls ring_write(), which first measures the total input length
across the multiple segments, then locks the ring:
  - while trying to free space
  - while copying the message, due to the buffer's API

Because of this, writes are heavily serialized and threads wait in queue. Tests
involving splitting the lock and releasing it around the message copy have
shown a +60% performance increase, which is still not acceptable.


First proposed approach
=======================

The first approach would have consisted in writing messages in small parts:
  1) write 0xFF in the tag to mean "size not filled yet"
  2) write the message's length and write a zero tag after the message's
     location
  3) replace the first tag with 0xFE to indicate the size is known, but the
     message is not filled yet.
  4) memcpy() of the message to the area
  5) replace the first tag with 0 to mark the entry as valid.
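
For illustration, a minimal C sketch of this tag protocol using GCC atomic
builtins could look like this (all names are hypothetical, and the actual ring
format encodes lengths as varints, simplified here to a raw size_t):

    #include <string.h>

    /* entry layout: [1-byte tag][length][payload], then next entry's tag */
    static int try_write_entry(unsigned char *tag, const void *msg, size_t len)
    {
        unsigned char expected = 0;

        /* step 1: claim the tag; 0xFF means "size not filled yet" */
        if (!__atomic_compare_exchange_n(tag, &expected, 0xFF, 0,
                                         __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
            return 0;
        memcpy(tag + 1, &len, sizeof(len));      /* step 2: write the size */
        tag[1 + sizeof(len) + len] = 0;          /*    and a zero tag after */
        __atomic_store_n(tag, 0xFE, __ATOMIC_RELEASE); /* step 3: size known */
        memcpy(tag + 1 + sizeof(len), msg, len);       /* step 4: copy msg */
        __atomic_store_n(tag, 0, __ATOMIC_RELEASE);    /* step 5: valid */
        return 1;
    }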

It's worth noting that doing that without any lock will allow a second thread
looping on the first tag to jump to the second tag after step 3. But the cost
is high: in a 64-thread scenario where each of them wants to send one message,
the work would look like this:
  - 64 threads try to CAS the tag. One gets it, 63 fail. They loop on the byte
    in question in read-only mode, waiting for it to change. This loop
    constantly forces the cache line to switch from MODIFIED to SHARED in the
    writer thread, making it painful for the writer to store the message's
    length just after it.

  - once the first writer thread finally manages to write the length (step 2),
    it writes 0xFE on the tag to release the waiting threads, and starts with
    step 4. At this point, 63 threads try a CAS on the same entry, and this
    hammering further complicates the memcpy() of step 4 for the first 63 bytes
    of the message (well, 32 on avg since the tag is not necessarily aligned).
    One thread wins, 62 fail. All read the size field and jump to the next tag,
    waiting there in read loops. The second thread then starts to write its
    size and faces the same difficulty as described above, with 62 competitors
    hammering the area while it writes its size and the beginning of its
    message.

  - when the first writer thread writes the end of its message, it gets close
    to the final tag where the 62 waiting threads are still reading, causing
    a slowdown again due to the loss of exclusivity on the cache line. The
    same happens to the second thread, and so on.

Thus, on average, a writing thread is hindered by N-1 threads at the beginning
of its message area (in the first 32 bytes on avg) and by N-2 threads at the
end of its area (in the last 32 bytes on avg). Given that messages are roughly
218 bytes on avg for HTTP/1, this means that roughly 1/3 of the message is
written under severe cache contention.

In addition to this, the buffer's tail needs to be updated once all threads
are done, which requires extra synchronization: the last writing threads (the
most likely to complete fast since they face fewer perturbations) need to wait
for all previous ones. This also means N atomic writes to the tail.


New proposal
============

In order to address the contention scenarios above, let's try to factor the
work as much as possible. The principle is that threads that want to write will
either do it themselves or declare their intent and wait for a writing thread
to do it for them. This aims at maximizing the use of read-only data between
threads, leaving the work area read-write for very few threads, and keeping it
exclusive across multiple messages at once, avoiding the bounces.

First, the buffer will have 2 indexes:
  - head: where the valid data start
  - tail: where new data need to be appended

When a thread starts to work, it will keep a copy of $tail and push it forward
by as many bytes as needed to write all its pending messages. In order to
guarantee that neither the previous nor the new $tail points to an outdated or
overwritten location, and that there is always a tag there, $tail carries a
lock bit in its highest bit ensuring that only one thread at a time updates it.
The goal here is twofold: perform as few atomic ops as possible on the
contended path so as to amortize the costs, and limit the atomic ops on the
wait path to the strict minimum so that waiting threads do not hinder the
workers:

  Fast path:
    1 load($tail) to check the topmost bit
    1 CAS($tail,$tail|BIT63) to set the bit (atomic_fetch_or / atomic_bts also work)
    1 store(1 byte tag=0xFF) at the beginning to mark the area busy
    1 store($tail) to update the new value
    1 copy of the whole message
    1 store(1 byte tag=0) at the beginning to release the message

  Contended path:
    N load($tail) while waiting for the bit to be zero
    M CAS($tail,$tail|BIT63) to try to set the bit on tail, competing with others
    1 store(1 byte tag=0xFF) at the beginning to mark the area busy
    1 store($tail) to update the new value
    1 copy of the whole message
    1 store(1 byte tag=0) at the beginning to release the message
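
As a side note, a possible layout for such a ring, with the lock bit stored in
the tail's top bit, could be sketched like this (hypothetical names):

    #include <stdint.h>

    #define RING_TAIL_LOCK (1ULL << 63)   /* BIT63: tail being updated */

    /* hypothetical layout; head/tail are offsets into the storage area */
    struct ring_v2 {
        uint64_t size;           /* allocated size of the storage area */
        uint64_t head;           /* where the valid data start */
        uint64_t tail;           /* where new data are appended; bit 63
                                  * is the lock, and a tag always exists
                                  * at area[tail & ~RING_TAIL_LOCK].
                                  */
        unsigned char area[];    /* messages storage */
    };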

Queue
-----

In order to limit the contention, writers will not start to write immediately
but will wait in a queue, announcing their message pointers/lengths and total
lengths. The queue is made of a (ptr, len) pair pointing to one such
descriptor, located on the waiter thread's stack, which itself points to the
next pair. Messages are thus ordered in a LIFO fashion, but that isn't
important since intra-thread ordering is preserved (and in the worst case it
will also be possible to write them from end to beginning).
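
Such a descriptor could look like this minimal sketch (hypothetical names; the
queue head is simplified to a single pointer here), with one instance living
on each waiting thread's stack:

    /* hypothetical per-waiter descriptor, on the waiter thread's stack */
    struct ring_wait_cell {
        const void *msg;               /* start of this thread's message */
        size_t len;                    /* length of this message */
        size_t tot_len;                /* len + sum of all next cells' lengths */
        struct ring_wait_cell *next;   /* next waiter (LIFO order) */
    };

    /* the queue: points to the most recently queued cell, so reading the
     * first cell's tot_len gives the total size of the queue at once.
     */
    static struct ring_wait_cell *queue;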

The approach is the following: a writer loads $tail and sees it's busy; there
is no point continuing, so it adds itself to the queue, announcing (ptr, len +
next->len) so that by just reading the first entry, one knows the total size
of the queue. It will then wait there as long as $tail has its topmost bit set
and the queue still points to its own node (meaning it's the queue's leader),
so that only one thread in the queue watches $tail, limiting the number of
cache line bounces. If the queue no longer points to the current thread, it
means another thread has taken it over, so there's no point continuing: this
thread just becomes passive.

If the lock bit is dropped from $tail, the watching thread needs to re-check
that it's still the queue's leader before trying to grab the lock, so that
only the leading thread will attempt it. Indeed, a few of the last leading
threads might still be looping, unaware that they're no longer leaders. A
CAS(&queue, self, self) will do it. Upon failure, the thread just becomes a
passive thread. Upon success, the thread is a confirmed leader and must then
try to grab the tail lock. Only this thread and a few potential newcomers
will compete on this one. If the leading thread wins, it takes the whole
queue with it and the newcomers will queue again. If the leading thread
loses, it needs to loop back to the point above, watching $tail and the
queue; in this case a newcomer might have grabbed the lock, and it will
notice the non-empty queue and take it with it. Thus in both cases the winner
thread does a CAS(queue, queue, NULL) to reset the queue while keeping the
previous pointer.

At this point the winner thread considers its own message size plus the
retrieved queue's size as the total required size, advances $tail by as much,
and iterates over all messages to copy them in turn. The passive threads are
released by doing XCHG(&ptr->next, ptr) for each message, making each cell
point to itself, a value that is impossible otherwise. As such, a passive
thread just has to loop on its own value, stored in its own stack, reading
from its L1 cache without any risk of disturbing others, hence no need for
exponential backoff (EBO).
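
A sketch of both sides, reusing the hypothetical ring_wait_cell above, where
"q" is the head retrieved when resetting the queue and copy_message() is a
hypothetical helper:

    /* winner side: copy each queued message, then release its owner by
     * making the cell point to itself, a value it can never take otherwise.
     * The XCHG also retrieves the next pointer atomically, before the
     * released thread's stack cell may vanish.
     */
    struct ring_wait_cell *cell, *next;

    for (cell = q; cell; cell = next) {
        copy_message(ring, cell->msg, cell->len);
        next = __atomic_exchange_n(&cell->next, cell, __ATOMIC_RELEASE);
    }

    /* passive side: "self" is this thread's own stack cell; it spins on a
     * location that stays in its own L1 cache, hence no need for EBO.
     */
    while (__atomic_load_n(&self.next, __ATOMIC_ACQUIRE) != &self)
        ;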

During the time it takes to update $tail, more messages will have accumulated
in the queue from various other threads, and once $tail is written, one thread
can pick them up again.

The benefit here is that the longer it takes one thread to free some space,
the more messages add up in the queue and the larger the next batch, so that
there are always very few contenders on the ring area and on the tail index.
At worst, the queue pointer gets hammered, but that's not on the fast path,
and time wasted there only means more waiters accumulate in the queue.

Also, if we keep the first tag unchanged after it's set to 0xFF, it allows us
to avoid atomic ops inside the whole message. Indeed, there's no reader in the
area as long as the tag is 0xFF, so we can just write all the contents at
once, including the varints and subsequent message tags, without ever using
atomic ops, hence without forcing ordered writes. So maybe in the end there is
some value in writing the messages backwards from end to beginning, and just
writing the first tag atomically but not the rest.
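
That sequence could be sketched as follows (hypothetical names):

    /* set once when the area is reserved under the tail lock */
    __atomic_store_n(first_tag, 0xFF, __ATOMIC_RELAXED);

    /* no reader may enter past a 0xFF tag, so everything else (varints,
     * payloads, subsequent tags) may use plain, unordered stores
     */
    write_all_messages(first_tag + 1, msgs);

    /* single ordered store publishing the whole batch at once */
    __atomic_store_n(first_tag, 0, __ATOMIC_RELEASE);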

The scenario would look like this:

  (without queue)

    - before starting to work:
        do {
            while ((ret = load(&tail)) & BIT63)
                ;
        } while (!cas(&tail, &ret, ret | BIT63));

    - at this point, alone on it and guaranteed not to change
    - after new size is calculated, write it and drop the lock:

        store(&tail, new_tail & ~BIT63);

    - that's sufficient to unlock other waiters.

  (with queue)

      in_queue = 0;
      do {
         ret = load(&tail);
         if (ret & BIT63) {
             if (!in_queue) {
                 queue_this_node();
                 in_queue = 1;
             }
             while ((ret = load(&tail)) & BIT63)
                 ;
         }
      } while (!cas(&tail, &ret, ret | BIT63));

      dequeue(in_queue) etc.
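
As a complement, a compilable sketch of this tail locking sequence with
explicit memory ordering, using GCC builtins (hypothetical names, with
RING_TAIL_LOCK standing for BIT63):

    #include <stdint.h>

    #define RING_TAIL_LOCK (1ULL << 63)

    /* returns the position where the caller's reserved area starts */
    static uint64_t ring_lock_tail(uint64_t *tail)
    {
        uint64_t ret;

        do {
            /* wait for the lock bit to be dropped, reloading each time */
            while ((ret = __atomic_load_n(tail, __ATOMIC_RELAXED)) &
                   RING_TAIL_LOCK)
                ;
        } while (!__atomic_compare_exchange_n(tail, &ret,
                                              ret | RING_TAIL_LOCK, 0,
                                              __ATOMIC_ACQUIRE,
                                              __ATOMIC_RELAXED));
        return ret;
    }

    /* storing the new value with the top bit cleared also drops the lock */
    static void ring_unlock_tail(uint64_t *tail, uint64_t new_tail)
    {
        __atomic_store_n(tail, new_tail & ~RING_TAIL_LOCK, __ATOMIC_RELEASE);
    }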

  Fast path:
    1 load($tail) to check the topmost bit
    1 CAS($tail,$tail|BIT63) to set the bit (atomic_fetch_or / atomic_bts also work)
    1 load of the queue to see that it's empty
    1 store(1 byte tag=0xFF) at the beginning to mark the area busy
    1 store($tail) to update the new value
    1 copy of the whole message
    1 store(1 byte tag=0) at the beginning to release the message

  Contended path:
    1 load($tail) to see the tail is changing
    M CAS(queue,queue,self) to try to add the thread to the queue (avg max nbthr/2)
    N load($tail) while waiting for the lock bit to become zero
    1 CAS(queue,self,self) to check that it's still the leader
    M CAS($tail,$tail|BIT63) to try to set the bit on tail, competing with others
    1 CAS(queue,queue,NULL) to reset the queue
    1 store(1 byte tag=0xFF) at the beginning to mark the area busy
    1 store($tail) to update the new value
    1 copy of the whole message
    P copies of individual messages
    P stores of individual pointers to release writers
    1 store(1 byte tag=0) at the beginning to release the message

Optimal approach (later if needed?): multiple queues. Each thread has one queue
assigned, either from a thread group, or using a modulo from the thread ID.
Same as above then.
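
A sketch of the queue assignment (hypothetical names and sizing):

    /* hypothetical: one queue per group of threads to split the contention */
    #define NB_RING_QUEUES 4

    static struct ring_wait_cell *queues[NB_RING_QUEUES];

    static inline struct ring_wait_cell **my_ring_queue(int thr_id)
    {
        return &queues[thr_id % NB_RING_QUEUES];
    }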


Steps
-----

It looks like the queue is what allows the process to scale, by amortizing a
single lock over every N messages, but it's not a prerequisite to start:
without a queue, threads can just wait on $tail.


Options
-------

It is possible to avoid the extra check on CAS(queue,self,self) by forcing
writers into the queue all the time. It would slow down the fast path but may
improve the slow path, and both paths would then become the same:

  Contended path:
    1 XCHG(queue,self) to try to add the thread to the queue
    N load($tail) while waiting for the lock bit to become zero
    M CAS($tail,$tail|BIT63) to try to set the bit on tail, competing with others
    1 CAS(queue,self,NULL) to reset the queue
    1 store(1 byte tag=0xFF) at the beginning to mark the area busy
    1 store($tail) to update the new value
    1 copy of the whole message
    P copies of individual messages
    P stores of individual pointers to release writers
    1 store(1 byte tag=0) at the beginning to release the message

There seems to remain a race when resetting the queue, where a newcomer thread
could queue itself while not being the leader. It seems this can be addressed
by deciding that it doesn't matter who gets the bit: what matters is which
thread manages to reset the queue. This can then be done using another XCHG:

    1 XCHG(queue,self) to try to add the thread to the queue
    N load($tail) while waiting for the lock bit to become zero
    M CAS($tail,$tail|BIT63) to try to set the bit on tail, competing with others
    1 XCHG(queue,NULL) to reset the queue
    1 store(1 byte tag=0xFF) at the beginning to mark the area busy
    1 store($tail) to update the new value
    1 copy of the whole message
    P copies of individual messages
    P stores of individual pointers to release writers
    1 store(1 byte tag=0) at the beginning to release the message

However, this time it can fragment the queue into multiple sub-queues that
will need to be reassembled. So finally the CAS is better: the leader thread
should recognize itself.

It seems tricky to reliably store the next pointer in each element, and a
DWCAS wouldn't help here either. Maybe uninitialized elements should just have
a special value (e.g. 0x1) for their next pointer, meaning "not initialized
yet", which the enqueueing thread will then replace with the previous queue
pointer. A reader meeting this value would have to wait on it, knowing the
pointer is not filled yet but is coming.
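
A sketch of this idea on both the enqueue and read sides (hypothetical names,
reusing the ring_wait_cell descriptor from above):

    #define CELL_NOT_READY ((struct ring_wait_cell *)0x1)

    /* enqueue: the cell is published before its successor is known, so its
     * next pointer is first marked "not initialized yet".
     */
    cell->next = CELL_NOT_READY;
    prev = __atomic_exchange_n(&queue, cell, __ATOMIC_ACQ_REL);
    __atomic_store_n(&cell->next, prev, __ATOMIC_RELEASE);

    /* reader walking the queue: wait until the pointer is filled */
    while ((next = __atomic_load_n(&cell->next,
                                   __ATOMIC_ACQUIRE)) == CELL_NOT_READY)
        ;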