============
Crash Events
============

**Crash Events** refers to a special subsystem of Gecko that aims to capture
events of interest related to process crashing and hanging.

When an event worthy of recording occurs, a file containing that event's
information is written to a well-defined location on the filesystem. The Gecko
process periodically scans for produced files and consolidates information
into a more unified and efficient backend store.

Crash Event Files
=================

When a crash-related event occurs, a file describing that event is written
to a well-defined directory. That directory is likely in the directory of
the currently-active profile. However, if a profile is not yet active in
the Gecko process, that directory likely resides in the user's *app data*
directory (*UAppData* from the directory service).

The filename of the event file is not relevant. However, producers need
to choose a filename intelligently to avoid name collisions and race
conditions. Since file locking is potentially dangerous at crash time,
the convention of generating a UUID and using it as a filename has been
adopted.
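
A minimal sketch of this convention in Python (``events_dir`` is a stand-in
for the well-defined event directory described above, not the actual path
used by Gecko):

.. code-block:: python

    import uuid
    from pathlib import Path

    def new_event_file_path(events_dir):
        # A freshly generated UUID gives a collision-free filename without
        # requiring any file locking at crash time.
        return Path(events_dir) / str(uuid.uuid4())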

File Format
-----------

All crash event files share the same high-level file format. The format
consists of the following fields delimited by a UNIX newline (*\n*)
character:

* String event name (valid UTF-8, but likely ASCII)
* String representation of integer seconds since UNIX epoch
* Payload

The payload is event specific and may contain UNIX newline characters.
The recommended method for parsing is to split on the UNIX newline at most
twice (yielding the event name, the timestamp, and the payload) and then
dispatch to an event-specific parser based on the event name.
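
A parsing sketch in Python, assuming the file contents have already been
read into ``data`` (the dispatch table and parser functions are illustrative,
not the actual CrashManager implementation):

.. code-block:: python

    # Hypothetical registry mapping event names to payload parsers.
    EVENT_PARSERS = {
        "crash.main.3": lambda ts, payload: ("crash.main.3", ts, payload),
    }

    def parse_event_file(data):
        # Split at most twice: the payload may itself contain newlines.
        name, timestamp, payload = data.split("\n", 2)
        parser = EVENT_PARSERS.get(name)
        if parser is None:
            # Unknown event type: leave the file alone so a newer version
            # of the application can process it later.
            return None
        return parser(int(timestamp), payload)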

If an unknown event type is encountered, the event can safely be ignored
until later. This helps ensure that application downgrades (potentially
due to elevated crash rate) don't result in data loss.

The format and semantics of each event type are meant to be constant once
that event type is committed to the main Firefox repository. If new metadata
needs to be captured or the meaning of data captured in an event changes,
that change should be expressed through the invention of a new event type.
For this reason, it is highly recommended that event names contain a version,
e.g. instead of a *Gecko process crashed* event, we prefer a *Gecko process
crashed v1* event.

Event Types
-----------

Each subsection below documents one type of crash event that may be
produced. The subsection name corresponds to the first line of the crash
event file.

Currently only main process crashes produce event files. Because crashes and
hangs in child processes can be easily recorded by the main process, we do not
foresee the need for writing event files for child processes, design
considerations below notwithstanding.

crash.main.3
^^^^^^^^^^^^

This event is produced when the main process crashes.

The payload of this event is delimited by UNIX newlines (*\n*) and contains the
following fields:

* The crash ID string, very likely a UUID
* One line holding the crash metadata serialized as a JSON string
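
A hedged sketch of parsing this payload in Python (assuming the generic
header fields have already been split off as described under *File Format*):

.. code-block:: python

    import json

    def parse_crash_main_3(payload):
        # The payload is the crash ID line followed by one line of JSON
        # holding the crash metadata.
        crash_id, metadata_json = payload.split("\n", 1)
        return crash_id, json.loads(metadata_json)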

crash.main.2
^^^^^^^^^^^^

This event is produced when the main process crashes.

The payload of this event is delimited by UNIX newlines (*\n*) and contains the
following fields:

* The crash ID string, very likely a UUID
* 0 or more lines of metadata, each containing one key=value pair of text

This event is obsolete.

crash.main.1
^^^^^^^^^^^^

This event is produced when the main process crashes.

The payload of this event is the string crash ID, very likely a UUID.
There should be ``UUID.dmp`` and ``UUID.extra`` files on disk, saved by
Breakpad.

This event is obsolete.

crash.submission.1
^^^^^^^^^^^^^^^^^^

This event is produced when a crash is submitted.

The payload of this event is delimited by UNIX newlines (*\n*) and contains the
following fields:

* The crash ID string
* "true" if the submission succeeded or "false" otherwise
* The remote crash ID string if the submission succeeded
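
A sketch of parsing this payload in Python (whether the remote ID line is
present but empty when submission fails is an assumption here, so the code
treats it defensively):

.. code-block:: python

    def parse_crash_submission_1(payload):
        lines = payload.split("\n")
        crash_id = lines[0]
        succeeded = lines[1] == "true"
        # The remote crash ID is only meaningful on a successful submission.
        remote_id = lines[2] if succeeded and len(lines) > 2 else None
        return crash_id, succeeded, remote_id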

Aggregated Event Log
====================

Crash events are aggregated together into a unified event *log*. Currently,
this *log* is really a JSON file. However, this is an implementation detail
and it could change at any time. The interface to crash data provided by
the JavaScript API is the only supported interface.

Design Considerations
=====================

There are many considerations influencing the design of this subsystem.
We attempt to document them in this section.

Decoupling of Event Files from Final Data Structure
---------------------------------------------------

While it is certainly possible for the Gecko process to write directly to
the final data structure on disk, there is an intentional decoupling between
the production of events and their transition into final storage. In the
same vein, the choice to have producers write events to multiple files is
deliberate.

Some recorded events are written immediately after a process crash. This is
a very uncertain time for the host system. There is a high likelihood the
system is in an exceptional state, such as memory exhaustion. Therefore, any
action taken after crashing needs to be very deliberate about what it does.
Excessive memory allocation and certain system calls may cause the system
to crash again or the machine's condition to worsen. This means that the act
of recording a crash event must be very lightweight. Writing a new file from
nothing is very lightweight. This is one reason we write separate files.

Another reason we write separate files is that if the main Gecko process
itself crashes (as opposed to, say, a plugin process), the crash reporter (not
Gecko) is running and must handle the writing of the event info. If this
writing were involved (say loading, parsing, updating, and reserializing back
to disk), the logic would need to be implemented in both Gecko and the crash
reporter, or implemented in a form that both could share. Neither of these is
very practical from a software lifecycle management perspective. It's much
easier to have separate processes write a simple file and to let a single
implementation do all the complex work.

Idempotent Event Processing
---------------------------

Processing of event files has been designed such that the result is
idempotent regardless of what order those files are processed in. This is
not only a good design decision, but it is arguably necessary. While event
files are processed in order by file mtime, filesystem times may not have
the resolution required for proper sorting. Therefore, processing order is
merely an optimistic assumption.
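
One way to picture this property (a simplified sketch, not the actual
CrashManager logic): keying the aggregated store by crash ID means that
replaying the same event file, or processing files out of order, converges
on the same final state.

.. code-block:: python

    def apply_crash_event(store, crash_id, timestamp, metadata):
        # Entries are keyed by crash ID, so applying the same event twice
        # does not create duplicates, and the order in which files are
        # processed does not change which entries exist.
        entry = store.setdefault(
            crash_id, {"timestamp": timestamp, "metadata": {}}
        )
        # Keep the earliest timestamp seen and merge metadata; reapplying
        # the same event changes nothing.
        entry["timestamp"] = min(entry["timestamp"], timestamp)
        entry["metadata"].update(metadata)
        return store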

Aggregated Storage Format
=========================

Crash events are aggregated into a unified data structure on disk. That data
structure is currently LZ4-compressed JSON and is represented by a single file.

The choice of a single JSON file was initially driven by time and complexity
concerns. Before changing the format or adding significant amounts of new
data, some considerations must be taken into account.

First, in well-behaving installs, crash data should be minimal. Crashes and
hangs will be rare and thus the size of the crash data should remain small
over time.

The choice of a single JSON file has larger implications as the amount of
crash data grows. As new data is accumulated, we need to read and write
an entire file to make small updates. LZ4 compression helps reduce I/O.
But there is a potential for unbounded file growth. We establish a
limit on the maximum age of records. Anything older than that limit is
pruned. We also establish a daily limit on the number of crashes we will
store. All crashes beyond the first N in a day have no payload and are
recorded only through a count. This count ensures we can distinguish
between ``N`` and ``100 * N`` crashes, which are very different values!
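
A rough sketch of these limits in Python (the limit values and record layout
are illustrative assumptions, not the real configuration):

.. code-block:: python

    from collections import defaultdict

    MAX_AGE_DAYS = 180        # assumed value for illustration
    MAX_CRASHES_PER_DAY = 10  # assumed value for illustration

    def prune_and_cap(records, today):
        # ``records`` is assumed to be (day, crash_id, payload) tuples with
        # ``day`` and ``today`` as ``datetime.date`` values, oldest first.
        kept = []
        counts = defaultdict(int)
        for day, crash_id, payload in records:
            if (today - day).days > MAX_AGE_DAYS:
                continue  # pruned: older than the maximum record age
            counts[day] += 1
            if counts[day] <= MAX_CRASHES_PER_DAY:
                kept.append((day, crash_id, payload))
        # The per-day counts preserve the distinction between N crashes and
        # 100 * N crashes even though excess payloads are dropped.
        return kept, dict(counts)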