author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-07 19:33:14 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-07 19:33:14 +0000
commit     36d22d82aa202bb199967e9512281e9a53db42c9
tree       105e8c98ddea1c1e4784a60a5a6410fa416be2de /third_party/rust/tracing-core/src
parent     Initial commit.
Adding upstream version 115.7.0esr.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'third_party/rust/tracing-core/src')
-rw-r--r--  third_party/rust/tracing-core/src/callsite.rs      621
-rw-r--r--  third_party/rust/tracing-core/src/dispatcher.rs   1008
-rw-r--r--  third_party/rust/tracing-core/src/event.rs         128
-rw-r--r--  third_party/rust/tracing-core/src/field.rs        1263
-rw-r--r--  third_party/rust/tracing-core/src/lazy.rs           76
-rw-r--r--  third_party/rust/tracing-core/src/lib.rs           295
-rw-r--r--  third_party/rust/tracing-core/src/metadata.rs     1114
-rw-r--r--  third_party/rust/tracing-core/src/parent.rs         11
-rw-r--r--  third_party/rust/tracing-core/src/span.rs          341
-rw-r--r--  third_party/rust/tracing-core/src/spin/LICENSE      21
-rw-r--r--  third_party/rust/tracing-core/src/spin/mod.rs        7
-rw-r--r--  third_party/rust/tracing-core/src/spin/mutex.rs    118
-rw-r--r--  third_party/rust/tracing-core/src/spin/once.rs     158
-rw-r--r--  third_party/rust/tracing-core/src/stdlib.rs         78
-rw-r--r--  third_party/rust/tracing-core/src/subscriber.rs    870
15 files changed, 6109 insertions, 0 deletions
diff --git a/third_party/rust/tracing-core/src/callsite.rs b/third_party/rust/tracing-core/src/callsite.rs
new file mode 100644
index 0000000000..f887132364
--- /dev/null
+++ b/third_party/rust/tracing-core/src/callsite.rs
@@ -0,0 +1,621 @@
+//! Callsites represent the source locations from which spans or events
+//! originate.
+//!
+//! # What Are Callsites?
+//!
+//! Every span or event in `tracing` is associated with a [`Callsite`]. A
+//! callsite is a small `static` value that is responsible for the following:
+//!
+//! * Storing the span or event's [`Metadata`],
+//! * Uniquely [identifying](Identifier) the span or event definition,
+//! * Caching the subscriber's [`Interest`][^1] in that span or event, to avoid
+//! re-evaluating filters,
+//! * Storing a [`Registration`] that allows the callsite to be part of a global
+//! list of all callsites in the program.
+//!
+//! # Registering Callsites
+//!
+//! When a span or event is recorded for the first time, its callsite
+//! [`register`]s itself with the global callsite registry. Registering a
+//! callsite calls the [`Subscriber::register_callsite`][`register_callsite`]
+//! method with that callsite's [`Metadata`] on every currently active
+//! subscriber. This serves two primary purposes: informing subscribers of the
+//! callsite's existence, and performing static filtering.
+//!
+//! ## Callsite Existence
+//!
+//! If a [`Subscriber`] implementation wishes to allocate storage for each
+//! unique span/event location in the program, or pre-compute some value
+//! that will be used to record that span or event in the future, it can
+//! do so in its [`register_callsite`] method.
+//!
+//! ## Performing Static Filtering
+//!
+//! The [`register_callsite`] method returns an [`Interest`] value,
+//! which indicates that the subscriber either [always] wishes to record
+//! that span or event, [sometimes] wishes to record it based on a
+//! dynamic filter evaluation, or [never] wishes to record it.
+//!
+//! When registering a new callsite, the [`Interest`]s returned by every
+//! currently active subscriber are combined, and the result is stored at
+//! each callsite. This way, when the span or event occurs in the
+//! future, the cached [`Interest`] value can be checked efficiently
+//! to determine if the span or event should be recorded, without
+//! needing to perform expensive filtering (i.e. calling the
+//! [`Subscriber::enabled`] method every time a span or event occurs).
+//!
+//! ### Rebuilding Cached Interest
+//!
+//! When a new [`Dispatch`] is created (i.e. a new subscriber becomes
+//! active), any previously cached [`Interest`] values are re-evaluated
+//! for all callsites in the program. This way, if the new subscriber
+//! will enable a callsite that was not previously enabled, the
+//! [`Interest`] in that callsite is updated. Similarly, when a
+//! subscriber is dropped, the interest cache is also re-evaluated, so
+//! that any callsites enabled only by that subscriber are disabled.
+//!
+//! In addition, the [`rebuild_interest_cache`] function in this module can be
+//! used to manually invalidate all cached interest and re-register those
+//! callsites. This function is useful in situations where a subscriber's
+//! interest can change, but it does so relatively infrequently. The subscriber
+//! may wish for its interest to be cached most of the time, and return
+//! [`Interest::always`][always] or [`Interest::never`][never] in its
+//! [`register_callsite`] method, so that its [`Subscriber::enabled`] method
+//! doesn't need to be evaluated every time a span or event is recorded.
+//! However, when the configuration changes, the subscriber can call
+//! [`rebuild_interest_cache`] to re-evaluate the entire interest cache with its
+//! new configuration. This is a relatively costly operation, but if the
+//! configuration changes infrequently, it may be more efficient than calling
+//! [`Subscriber::enabled`] frequently.
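+//!
+//! For example, a subscriber whose filtering can be reloaded at runtime might
+//! apply its new configuration and then invalidate the cache (a minimal
+//! sketch; the reconfiguration mechanism itself is not part of
+//! `tracing-core`):
+//!
+//! ```rust
+//! // ...after the subscriber's filtering configuration has been updated...
+//!
+//! // Ask every registered callsite to re-run `register_callsite`, so that
+//! // the cached `Interest` values reflect the new configuration.
+//! tracing_core::callsite::rebuild_interest_cache();
+//! ```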
+//!
+//! # Implementing Callsites
+//!
+//! In most cases, instrumenting code using `tracing` should *not* require
+//! implementing the [`Callsite`] trait directly. When using the [`tracing`
+//! crate's macros][macros] or the [`#[instrument]` attribute][instrument], a
+//! `Callsite` is automatically generated.
+//!
+//! However, code which provides alternative forms of `tracing` instrumentation
+//! may need to interact with the callsite system directly. If
+//! instrumentation-side code needs to produce a `Callsite` to emit spans or
+//! events, the [`DefaultCallsite`] struct provided in this module is a
+//! ready-made `Callsite` implementation that is suitable for most uses. When
+//! possible, the use of `DefaultCallsite` should be preferred over implementing
+//! [`Callsite`] for user types, as `DefaultCallsite` may benefit from
+//! additional performance optimizations.
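+//!
+//! For example, a minimal sketch of declaring a `DefaultCallsite` by hand (the
+//! `tracing` macros normally generate this; the names used here are
+//! placeholders):
+//!
+//! ```rust
+//! use tracing_core::{
+//!     callsite::DefaultCallsite,
+//!     metadata::{Kind, Level, Metadata},
+//! };
+//!
+//! static CALLSITE: DefaultCallsite = DefaultCallsite::new(&META);
+//! static META: Metadata<'static> = tracing_core::metadata! {
+//!     name: "my_event",
+//!     target: module_path!(),
+//!     level: Level::INFO,
+//!     fields: &["message"],
+//!     callsite: &CALLSITE,
+//!     kind: Kind::EVENT
+//! };
+//!
+//! // Registering the callsite caches the active subscribers' `Interest` in it.
+//! let _interest = CALLSITE.register();
+//! ```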
+//!
+//! [^1]: Returned by the [`Subscriber::register_callsite`][`register_callsite`]
+//! method.
+//!
+//! [`Metadata`]: crate::metadata::Metadata
+//! [`Interest`]: crate::subscriber::Interest
+//! [`Subscriber`]: crate::subscriber::Subscriber
+//! [`register_callsite`]: crate::subscriber::Subscriber::register_callsite
+//! [`Subscriber::enabled`]: crate::subscriber::Subscriber::enabled
+//! [always]: crate::subscriber::Interest::always
+//! [sometimes]: crate::subscriber::Interest::sometimes
+//! [never]: crate::subscriber::Interest::never
+//! [`Dispatch`]: crate::dispatch::Dispatch
+//! [macros]: https://docs.rs/tracing/latest/tracing/#macros
+//! [instrument]: https://docs.rs/tracing/latest/tracing/attr.instrument.html
+use crate::stdlib::{
+ any::TypeId,
+ fmt,
+ hash::{Hash, Hasher},
+ ptr,
+ sync::{
+ atomic::{AtomicBool, AtomicPtr, AtomicU8, Ordering},
+ Mutex,
+ },
+ vec::Vec,
+};
+use crate::{
+ dispatcher::Dispatch,
+ lazy::Lazy,
+ metadata::{LevelFilter, Metadata},
+ subscriber::Interest,
+};
+
+use self::dispatchers::Dispatchers;
+
+/// Trait implemented by callsites.
+///
+/// These functions are only intended to be called by the callsite registry, which
+/// correctly handles determining the common interest between all subscribers.
+///
+/// See the [module-level documentation](crate::callsite) for details on
+/// callsites.
+pub trait Callsite: Sync {
+ /// Sets the [`Interest`] for this callsite.
+ ///
+ /// See the [documentation on callsite interest caching][cache-docs] for
+ /// details.
+ ///
+ /// [`Interest`]: super::subscriber::Interest
+ /// [cache-docs]: crate::callsite#performing-static-filtering
+ fn set_interest(&self, interest: Interest);
+
+ /// Returns the [metadata] associated with the callsite.
+ ///
+ /// <div class="example-wrap" style="display:inline-block">
+ /// <pre class="ignore" style="white-space:normal;font:inherit;">
+ ///
+ /// **Note:** Implementations of this method should not produce [`Metadata`]
+ /// that share the same callsite [`Identifier`] but otherwise differ in any
+ /// way (e.g., have different `name`s).
+ ///
+ /// </pre></div>
+ ///
+ /// [metadata]: super::metadata::Metadata
+ fn metadata(&self) -> &Metadata<'_>;
+
+ /// This method is an *internal implementation detail* of `tracing-core`. It
+ /// is *not* intended to be called or overridden from downstream code.
+ ///
+ /// The `Private` type can only be constructed from within `tracing-core`.
+ /// Because this method takes a `Private` as an argument, it cannot be
+ /// called from (safe) code external to `tracing-core`. Because it must
+ /// *return* a `Private`, the only valid implementation possible outside of
+ /// `tracing-core` would have to always unconditionally panic.
+ ///
+ /// THIS IS BY DESIGN. There is currently no valid reason for code outside
+ /// of `tracing-core` to override this method.
+ // TODO(eliza): this could be used to implement a public downcasting API
+ // for `&dyn Callsite`s in the future.
+ #[doc(hidden)]
+ #[inline]
+ fn private_type_id(&self, _: private::Private<()>) -> private::Private<TypeId>
+ where
+ Self: 'static,
+ {
+ private::Private(TypeId::of::<Self>())
+ }
+}
+
+/// Uniquely identifies a [`Callsite`]
+///
+/// Two `Identifier`s are equal if they both refer to the same callsite.
+///
+/// [`Callsite`]: super::callsite::Callsite
+#[derive(Clone)]
+pub struct Identifier(
+ /// **Warning**: The fields on this type are currently `pub` because it must
+ /// be able to be constructed statically by macros. However, when `const
+ /// fn`s are available on stable Rust, this will no longer be necessary.
+ /// Thus, these fields are *not* considered stable public API, and they may
+    /// change without warning. Do not rely on any fields on `Identifier`. When
+ /// constructing new `Identifier`s, use the `identify_callsite!` macro
+ /// instead.
+ #[doc(hidden)]
+ pub &'static dyn Callsite,
+);
+
+/// A default [`Callsite`] implementation.
+#[derive(Debug)]
+pub struct DefaultCallsite {
+ interest: AtomicU8,
+ registration: AtomicU8,
+ meta: &'static Metadata<'static>,
+ next: AtomicPtr<Self>,
+}
+
+/// Clear and reregister interest on every [`Callsite`]
+///
+/// This function is intended for runtime reconfiguration of filters on traces
+/// when the filter recalculation is much less frequent than trace events are.
+/// The alternative is to have the [`Subscriber`] that supports runtime
+/// reconfiguration of filters always return [`Interest::sometimes()`] so that
+/// [`enabled`] is evaluated for every event.
+///
+/// This function will also re-compute the global maximum level as determined by
+/// the [`max_level_hint`] method. If a [`Subscriber`]
+/// implementation changes the value returned by its `max_level_hint`
+/// implementation at runtime, then it **must** call this function after that
+/// value changes, in order for the change to be reflected.
+///
+/// See the [documentation on callsite interest caching][cache-docs] for
+/// additional information on this function's usage.
+///
+/// [`max_level_hint`]: super::subscriber::Subscriber::max_level_hint
+/// [`Callsite`]: super::callsite::Callsite
+/// [`enabled`]: super::subscriber::Subscriber#tymethod.enabled
+/// [`Interest::sometimes()`]: super::subscriber::Interest::sometimes
+/// [`Subscriber`]: super::subscriber::Subscriber
+/// [cache-docs]: crate::callsite#rebuilding-cached-interest
+pub fn rebuild_interest_cache() {
+ CALLSITES.rebuild_interest(DISPATCHERS.rebuilder());
+}
+
+/// Register a new [`Callsite`] with the global registry.
+///
+/// This should be called once per callsite after the callsite has been
+/// constructed.
+///
+/// See the [documentation on callsite registration][reg-docs] for details
+/// on the global callsite registry.
+///
+/// [`Callsite`]: crate::callsite::Callsite
+/// [reg-docs]: crate::callsite#registering-callsites
+pub fn register(callsite: &'static dyn Callsite) {
+ rebuild_callsite_interest(callsite, &DISPATCHERS.rebuilder());
+
+ // Is this a `DefaultCallsite`? If so, use the fancy linked list!
+ if callsite.private_type_id(private::Private(())).0 == TypeId::of::<DefaultCallsite>() {
+ let callsite = unsafe {
+ // Safety: the pointer cast is safe because the type id of the
+ // provided callsite matches that of the target type for the cast
+ // (`DefaultCallsite`). Because user implementations of `Callsite`
+ // cannot override `private_type_id`, we can trust that the callsite
+ // is not lying about its type ID.
+ &*(callsite as *const dyn Callsite as *const DefaultCallsite)
+ };
+ CALLSITES.push_default(callsite);
+ return;
+ }
+
+ CALLSITES.push_dyn(callsite);
+}
+
+static CALLSITES: Callsites = Callsites {
+ list_head: AtomicPtr::new(ptr::null_mut()),
+ has_locked_callsites: AtomicBool::new(false),
+};
+
+static DISPATCHERS: Dispatchers = Dispatchers::new();
+
+static LOCKED_CALLSITES: Lazy<Mutex<Vec<&'static dyn Callsite>>> = Lazy::new(Default::default);
+
+struct Callsites {
+ list_head: AtomicPtr<DefaultCallsite>,
+ has_locked_callsites: AtomicBool,
+}
+
+// === impl DefaultCallsite ===
+
+impl DefaultCallsite {
+ const UNREGISTERED: u8 = 0;
+ const REGISTERING: u8 = 1;
+ const REGISTERED: u8 = 2;
+
+ const INTEREST_NEVER: u8 = 0;
+ const INTEREST_SOMETIMES: u8 = 1;
+ const INTEREST_ALWAYS: u8 = 2;
+
+ /// Returns a new `DefaultCallsite` with the specified `Metadata`.
+ pub const fn new(meta: &'static Metadata<'static>) -> Self {
+ Self {
+ interest: AtomicU8::new(0xFF),
+ meta,
+ next: AtomicPtr::new(ptr::null_mut()),
+ registration: AtomicU8::new(Self::UNREGISTERED),
+ }
+ }
+
+ /// Registers this callsite with the global callsite registry.
+ ///
+ /// If the callsite is already registered, this does nothing. When using
+ /// [`DefaultCallsite`], this method should be preferred over
+ /// [`tracing_core::callsite::register`], as it ensures that the callsite is
+ /// only registered a single time.
+ ///
+ /// Other callsite implementations will generally ensure that
+ /// callsites are not re-registered through another mechanism.
+ ///
+ /// See the [documentation on callsite registration][reg-docs] for details
+ /// on the global callsite registry.
+ ///
+ /// [`Callsite`]: crate::callsite::Callsite
+ /// [reg-docs]: crate::callsite#registering-callsites
+ #[inline(never)]
+ // This only happens once (or if the cached interest value was corrupted).
+ #[cold]
+ pub fn register(&'static self) -> Interest {
+ // Attempt to advance the registration state to `REGISTERING`...
+ match self.registration.compare_exchange(
+ Self::UNREGISTERED,
+ Self::REGISTERING,
+ Ordering::AcqRel,
+ Ordering::Acquire,
+ ) {
+ Ok(_) => {
+ // Okay, we advanced the state, try to register the callsite.
+ rebuild_callsite_interest(self, &DISPATCHERS.rebuilder());
+ CALLSITES.push_default(self);
+ self.registration.store(Self::REGISTERED, Ordering::Release);
+ }
+ // Great, the callsite is already registered! Just load its
+ // previous cached interest.
+ Err(Self::REGISTERED) => {}
+ // Someone else is registering...
+ Err(_state) => {
+ debug_assert_eq!(
+ _state,
+ Self::REGISTERING,
+ "weird callsite registration state"
+ );
+ // Just hit `enabled` this time.
+ return Interest::sometimes();
+ }
+ }
+
+ match self.interest.load(Ordering::Relaxed) {
+ Self::INTEREST_NEVER => Interest::never(),
+ Self::INTEREST_ALWAYS => Interest::always(),
+ _ => Interest::sometimes(),
+ }
+ }
+
+ /// Returns the callsite's cached `Interest`, or registers it for the
+ /// first time if it has not yet been registered.
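+    ///
+    /// A sketch of how instrumentation might consult the cached interest
+    /// before doing more expensive filtering (`MY_CALLSITE` is a hypothetical
+    /// `static DefaultCallsite`, so the example is not compiled here):
+    ///
+    /// ```ignore
+    /// let interest = MY_CALLSITE.interest();
+    /// if interest.is_never() {
+    ///     // every subscriber reported that it will never enable this callsite
+    /// } else if interest.is_always()
+    ///     || tracing_core::dispatcher::get_default(|d| d.enabled(MY_CALLSITE.metadata()))
+    /// {
+    ///     // record the span or event
+    /// }
+    /// ```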
+ #[inline]
+ pub fn interest(&'static self) -> Interest {
+ match self.interest.load(Ordering::Relaxed) {
+ Self::INTEREST_NEVER => Interest::never(),
+ Self::INTEREST_SOMETIMES => Interest::sometimes(),
+ Self::INTEREST_ALWAYS => Interest::always(),
+ _ => self.register(),
+ }
+ }
+}
+
+impl Callsite for DefaultCallsite {
+ fn set_interest(&self, interest: Interest) {
+ let interest = match () {
+ _ if interest.is_never() => Self::INTEREST_NEVER,
+ _ if interest.is_always() => Self::INTEREST_ALWAYS,
+ _ => Self::INTEREST_SOMETIMES,
+ };
+ self.interest.store(interest, Ordering::SeqCst);
+ }
+
+ #[inline(always)]
+ fn metadata(&self) -> &Metadata<'static> {
+ self.meta
+ }
+}
+
+// ===== impl Identifier =====
+
+impl PartialEq for Identifier {
+ fn eq(&self, other: &Identifier) -> bool {
+ core::ptr::eq(
+ self.0 as *const _ as *const (),
+ other.0 as *const _ as *const (),
+ )
+ }
+}
+
+impl Eq for Identifier {}
+
+impl fmt::Debug for Identifier {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ write!(f, "Identifier({:p})", self.0)
+ }
+}
+
+impl Hash for Identifier {
+ fn hash<H>(&self, state: &mut H)
+ where
+ H: Hasher,
+ {
+ (self.0 as *const dyn Callsite).hash(state)
+ }
+}
+
+// === impl Callsites ===
+
+impl Callsites {
+ /// Rebuild `Interest`s for all callsites in the registry.
+ ///
+ /// This also re-computes the max level hint.
+ fn rebuild_interest(&self, dispatchers: dispatchers::Rebuilder<'_>) {
+ let mut max_level = LevelFilter::OFF;
+ dispatchers.for_each(|dispatch| {
+ // If the subscriber did not provide a max level hint, assume
+ // that it may enable every level.
+ let level_hint = dispatch.max_level_hint().unwrap_or(LevelFilter::TRACE);
+ if level_hint > max_level {
+ max_level = level_hint;
+ }
+ });
+
+ self.for_each(|callsite| {
+ rebuild_callsite_interest(callsite, &dispatchers);
+ });
+ LevelFilter::set_max(max_level);
+ }
+
+ /// Push a `dyn Callsite` trait object to the callsite registry.
+ ///
+ /// This will attempt to lock the callsites vector.
+ fn push_dyn(&self, callsite: &'static dyn Callsite) {
+ let mut lock = LOCKED_CALLSITES.lock().unwrap();
+ self.has_locked_callsites.store(true, Ordering::Release);
+ lock.push(callsite);
+ }
+
+ /// Push a `DefaultCallsite` to the callsite registry.
+ ///
+ /// If we know the callsite being pushed is a `DefaultCallsite`, we can push
+ /// it to the linked list without having to acquire a lock.
+ fn push_default(&self, callsite: &'static DefaultCallsite) {
+ let mut head = self.list_head.load(Ordering::Acquire);
+
+ loop {
+ callsite.next.store(head, Ordering::Release);
+
+ assert_ne!(
+ callsite as *const _, head,
+ "Attempted to register a `DefaultCallsite` that already exists! \
+ This will cause an infinite loop when attempting to read from the \
+ callsite cache. This is likely a bug! You should only need to call \
+ `DefaultCallsite::register` once per `DefaultCallsite`."
+ );
+
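+            // Try to swing the list head from the head we observed to this
+            // callsite. If another thread pushed a callsite concurrently, the
+            // exchange fails and returns the updated head, so retry with it.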
+ match self.list_head.compare_exchange(
+ head,
+ callsite as *const _ as *mut _,
+ Ordering::AcqRel,
+ Ordering::Acquire,
+ ) {
+ Ok(_) => {
+ break;
+ }
+ Err(current) => head = current,
+ }
+ }
+ }
+
+ /// Invokes the provided closure `f` with each callsite in the registry.
+ fn for_each(&self, mut f: impl FnMut(&'static dyn Callsite)) {
+ let mut head = self.list_head.load(Ordering::Acquire);
+
+ while let Some(cs) = unsafe { head.as_ref() } {
+ f(cs);
+
+ head = cs.next.load(Ordering::Acquire);
+ }
+
+ if self.has_locked_callsites.load(Ordering::Acquire) {
+ let locked = LOCKED_CALLSITES.lock().unwrap();
+ for &cs in locked.iter() {
+ f(cs);
+ }
+ }
+ }
+}
+
+pub(crate) fn register_dispatch(dispatch: &Dispatch) {
+ let dispatchers = DISPATCHERS.register_dispatch(dispatch);
+ dispatch.subscriber().on_register_dispatch(dispatch);
+ CALLSITES.rebuild_interest(dispatchers);
+}
+
+fn rebuild_callsite_interest(
+ callsite: &'static dyn Callsite,
+ dispatchers: &dispatchers::Rebuilder<'_>,
+) {
+ let meta = callsite.metadata();
+
+ let mut interest = None;
+ dispatchers.for_each(|dispatch| {
+ let this_interest = dispatch.register_callsite(meta);
+ interest = match interest.take() {
+ None => Some(this_interest),
+ Some(that_interest) => Some(that_interest.and(this_interest)),
+ }
+ });
+
+ let interest = interest.unwrap_or_else(Interest::never);
+ callsite.set_interest(interest)
+}
+
+mod private {
+ /// Don't call this function, it's private.
+ #[allow(missing_debug_implementations)]
+ pub struct Private<T>(pub(crate) T);
+}
+
+#[cfg(feature = "std")]
+mod dispatchers {
+ use crate::{dispatcher, lazy::Lazy};
+ use std::sync::{
+ atomic::{AtomicBool, Ordering},
+ RwLock, RwLockReadGuard, RwLockWriteGuard,
+ };
+
+ pub(super) struct Dispatchers {
+ has_just_one: AtomicBool,
+ }
+
+ static LOCKED_DISPATCHERS: Lazy<RwLock<Vec<dispatcher::Registrar>>> =
+ Lazy::new(Default::default);
+
+ pub(super) enum Rebuilder<'a> {
+ JustOne,
+ Read(RwLockReadGuard<'a, Vec<dispatcher::Registrar>>),
+ Write(RwLockWriteGuard<'a, Vec<dispatcher::Registrar>>),
+ }
+
+ impl Dispatchers {
+ pub(super) const fn new() -> Self {
+ Self {
+ has_just_one: AtomicBool::new(true),
+ }
+ }
+
+ pub(super) fn rebuilder(&self) -> Rebuilder<'_> {
+ if self.has_just_one.load(Ordering::SeqCst) {
+ return Rebuilder::JustOne;
+ }
+ Rebuilder::Read(LOCKED_DISPATCHERS.read().unwrap())
+ }
+
+ pub(super) fn register_dispatch(&self, dispatch: &dispatcher::Dispatch) -> Rebuilder<'_> {
+ let mut dispatchers = LOCKED_DISPATCHERS.write().unwrap();
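+            // Prune registrars whose subscribers have already been dropped
+            // before adding the new dispatcher's registrar.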
+ dispatchers.retain(|d| d.upgrade().is_some());
+ dispatchers.push(dispatch.registrar());
+ self.has_just_one
+ .store(dispatchers.len() <= 1, Ordering::SeqCst);
+ Rebuilder::Write(dispatchers)
+ }
+ }
+
+ impl Rebuilder<'_> {
+ pub(super) fn for_each(&self, mut f: impl FnMut(&dispatcher::Dispatch)) {
+ let iter = match self {
+ Rebuilder::JustOne => {
+ dispatcher::get_default(f);
+ return;
+ }
+ Rebuilder::Read(vec) => vec.iter(),
+ Rebuilder::Write(vec) => vec.iter(),
+ };
+ iter.filter_map(dispatcher::Registrar::upgrade)
+ .for_each(|dispatch| f(&dispatch))
+ }
+ }
+}
+
+#[cfg(not(feature = "std"))]
+mod dispatchers {
+ use crate::dispatcher;
+
+ pub(super) struct Dispatchers(());
+ pub(super) struct Rebuilder<'a>(Option<&'a dispatcher::Dispatch>);
+
+ impl Dispatchers {
+ pub(super) const fn new() -> Self {
+ Self(())
+ }
+
+ pub(super) fn rebuilder(&self) -> Rebuilder<'_> {
+ Rebuilder(None)
+ }
+
+ pub(super) fn register_dispatch<'dispatch>(
+ &self,
+ dispatch: &'dispatch dispatcher::Dispatch,
+ ) -> Rebuilder<'dispatch> {
+ // nop; on no_std, there can only ever be one dispatcher
+ Rebuilder(Some(dispatch))
+ }
+ }
+
+ impl Rebuilder<'_> {
+ #[inline]
+ pub(super) fn for_each(&self, mut f: impl FnMut(&dispatcher::Dispatch)) {
+ if let Some(dispatch) = self.0 {
+ // we are rebuilding the interest cache because a new dispatcher
+ // is about to be set. on `no_std`, this should only happen
+ // once, because the new dispatcher will be the global default.
+ f(dispatch)
+ } else {
+ // otherwise, we are rebuilding the cache because the subscriber
+ // configuration changed, so use the global default.
+ // on no_std, there can only ever be one dispatcher
+ dispatcher::get_default(f)
+ }
+ }
+ }
+}
diff --git a/third_party/rust/tracing-core/src/dispatcher.rs b/third_party/rust/tracing-core/src/dispatcher.rs
new file mode 100644
index 0000000000..36b3cfd85f
--- /dev/null
+++ b/third_party/rust/tracing-core/src/dispatcher.rs
@@ -0,0 +1,1008 @@
+//! Dispatches trace events to [`Subscriber`]s.
+//!
+//! The _dispatcher_ is the component of the tracing system which is responsible
+//! for forwarding trace data from the instrumentation points that generate it
+//! to the subscriber that collects it.
+//!
+//! # Using the Trace Dispatcher
+//!
+//! Every thread in a program using `tracing` has a _default subscriber_. When
+//! events occur, or spans are created, they are dispatched to the thread's
+//! current subscriber.
+//!
+//! ## Setting the Default Subscriber
+//!
+//! By default, the current subscriber is an empty implementation that does
+//! nothing. To use a subscriber implementation, it must be set as the default.
+//! There are two methods for doing so: [`with_default`] and
+//! [`set_global_default`]. `with_default` sets the default subscriber for the
+//! duration of a scope, while `set_global_default` sets a default subscriber
+//! for the entire process.
+//!
+//! To use either of these functions, we must first wrap our subscriber in a
+//! [`Dispatch`], a cloneable, type-erased reference to a subscriber. For
+//! example:
+//! ```rust
+//! # pub struct FooSubscriber;
+//! # use tracing_core::{
+//! # dispatcher, Event, Metadata,
+//! # span::{Attributes, Id, Record}
+//! # };
+//! # impl tracing_core::Subscriber for FooSubscriber {
+//! # fn new_span(&self, _: &Attributes) -> Id { Id::from_u64(0) }
+//! # fn record(&self, _: &Id, _: &Record) {}
+//! # fn event(&self, _: &Event) {}
+//! # fn record_follows_from(&self, _: &Id, _: &Id) {}
+//! # fn enabled(&self, _: &Metadata) -> bool { false }
+//! # fn enter(&self, _: &Id) {}
+//! # fn exit(&self, _: &Id) {}
+//! # }
+//! # impl FooSubscriber { fn new() -> Self { FooSubscriber } }
+//! use dispatcher::Dispatch;
+//!
+//! let my_subscriber = FooSubscriber::new();
+//! let my_dispatch = Dispatch::new(my_subscriber);
+//! ```
+//! Then, we can use [`with_default`] to set our `Dispatch` as the default for
+//! the duration of a block:
+//! ```rust
+//! # pub struct FooSubscriber;
+//! # use tracing_core::{
+//! # dispatcher, Event, Metadata,
+//! # span::{Attributes, Id, Record}
+//! # };
+//! # impl tracing_core::Subscriber for FooSubscriber {
+//! # fn new_span(&self, _: &Attributes) -> Id { Id::from_u64(0) }
+//! # fn record(&self, _: &Id, _: &Record) {}
+//! # fn event(&self, _: &Event) {}
+//! # fn record_follows_from(&self, _: &Id, _: &Id) {}
+//! # fn enabled(&self, _: &Metadata) -> bool { false }
+//! # fn enter(&self, _: &Id) {}
+//! # fn exit(&self, _: &Id) {}
+//! # }
+//! # impl FooSubscriber { fn new() -> Self { FooSubscriber } }
+//! # let my_subscriber = FooSubscriber::new();
+//! # let my_dispatch = dispatcher::Dispatch::new(my_subscriber);
+//! // no default subscriber
+//!
+//! # #[cfg(feature = "std")]
+//! dispatcher::with_default(&my_dispatch, || {
+//! // my_subscriber is the default
+//! });
+//!
+//! // no default subscriber again
+//! ```
+//! It's important to note that `with_default` will not propagate the current
+//! thread's default subscriber to any threads spawned within the `with_default`
+//! block. To propagate the default subscriber to new threads, either use
+//! `with_default` from the new thread, or use `set_global_default`.
+//!
+//! As an alternative to `with_default`, we can use [`set_global_default`] to
+//! set a `Dispatch` as the default for all threads, for the lifetime of the
+//! program. For example:
+//! ```rust
+//! # pub struct FooSubscriber;
+//! # use tracing_core::{
+//! # dispatcher, Event, Metadata,
+//! # span::{Attributes, Id, Record}
+//! # };
+//! # impl tracing_core::Subscriber for FooSubscriber {
+//! # fn new_span(&self, _: &Attributes) -> Id { Id::from_u64(0) }
+//! # fn record(&self, _: &Id, _: &Record) {}
+//! # fn event(&self, _: &Event) {}
+//! # fn record_follows_from(&self, _: &Id, _: &Id) {}
+//! # fn enabled(&self, _: &Metadata) -> bool { false }
+//! # fn enter(&self, _: &Id) {}
+//! # fn exit(&self, _: &Id) {}
+//! # }
+//! # impl FooSubscriber { fn new() -> Self { FooSubscriber } }
+//! # let my_subscriber = FooSubscriber::new();
+//! # let my_dispatch = dispatcher::Dispatch::new(my_subscriber);
+//! // no default subscriber
+//!
+//! dispatcher::set_global_default(my_dispatch)
+//! // `set_global_default` will return an error if the global default
+//! // subscriber has already been set.
+//! .expect("global default was already set!");
+//!
+//! // `my_subscriber` is now the default
+//! ```
+//!
+//! <pre class="ignore" style="white-space:normal;font:inherit;">
+//! <strong>Note</strong>: the thread-local scoped dispatcher
+//! (<a href="#fn.with_default"><code>with_default</code></a>) requires the
+//! Rust standard library. <code>no_std</code> users should use
+//! <a href="#fn.set_global_default"><code>set_global_default</code></a>
+//! instead.
+//! </pre>
+//!
+//! ## Accessing the Default Subscriber
+//!
+//! A thread's current default subscriber can be accessed using the
+//! [`get_default`] function, which executes a closure with a reference to the
+//! currently default `Dispatch`. This is used primarily by `tracing`
+//! instrumentation.
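+//!
+//! For example, instrumentation-side code might query the current dispatcher
+//! like this (a minimal sketch):
+//!
+//! ```rust
+//! use tracing_core::dispatcher;
+//!
+//! // Run a closure with a reference to this thread's current `Dispatch`,
+//! // e.g. to ask it for its view of the current span.
+//! let _current = dispatcher::get_default(|dispatch| dispatch.current_span());
+//! ```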
+//!
+use crate::{
+ callsite, span,
+ subscriber::{self, NoSubscriber, Subscriber},
+ Event, LevelFilter, Metadata,
+};
+
+use crate::stdlib::{
+ any::Any,
+ fmt,
+ sync::{
+ atomic::{AtomicBool, AtomicUsize, Ordering},
+ Arc, Weak,
+ },
+};
+
+#[cfg(feature = "std")]
+use crate::stdlib::{
+ cell::{Cell, RefCell, RefMut},
+ error,
+};
+
+#[cfg(feature = "alloc")]
+use alloc::sync::{Arc, Weak};
+
+#[cfg(feature = "alloc")]
+use core::ops::Deref;
+
+/// `Dispatch` trace data to a [`Subscriber`].
+#[derive(Clone)]
+pub struct Dispatch {
+ subscriber: Arc<dyn Subscriber + Send + Sync>,
+}
+
+/// `WeakDispatch` is a version of [`Dispatch`] that holds a non-owning reference
+/// to a [`Subscriber`].
+///
+/// The `Subscriber` may be accessed by calling [`WeakDispatch::upgrade`],
+/// which returns an `Option<Dispatch>`. If all [`Dispatch`] clones that point
+/// at the `Subscriber` have been dropped, [`WeakDispatch::upgrade`] will return
+/// `None`. Otherwise, it will return `Some(Dispatch)`.
+///
+/// A `WeakDispatch` may be created from a [`Dispatch`] by calling the
+/// [`Dispatch::downgrade`] method. The primary use for creating a
+/// [`WeakDispatch`] is to allow a `Subscriber` to hold a cyclical reference to
+/// itself without creating a memory leak. See [here] for details.
+///
+/// This type is analogous to the [`std::sync::Weak`] type, but for a
+/// [`Dispatch`] rather than an [`Arc`].
+///
+/// [`Arc`]: std::sync::Arc
+/// [here]: Subscriber#avoiding-memory-leaks
+#[derive(Clone)]
+pub struct WeakDispatch {
+ subscriber: Weak<dyn Subscriber + Send + Sync>,
+}
+
+#[cfg(feature = "alloc")]
+#[derive(Clone)]
+enum Kind<T> {
+ Global(&'static (dyn Collect + Send + Sync)),
+ Scoped(T),
+}
+
+#[cfg(feature = "std")]
+thread_local! {
+ static CURRENT_STATE: State = State {
+ default: RefCell::new(None),
+ can_enter: Cell::new(true),
+ };
+}
+
+static EXISTS: AtomicBool = AtomicBool::new(false);
+static GLOBAL_INIT: AtomicUsize = AtomicUsize::new(UNINITIALIZED);
+
+const UNINITIALIZED: usize = 0;
+const INITIALIZING: usize = 1;
+const INITIALIZED: usize = 2;
+
+static mut GLOBAL_DISPATCH: Option<Dispatch> = None;
+
+/// The dispatch state of a thread.
+#[cfg(feature = "std")]
+struct State {
+ /// This thread's current default dispatcher.
+ default: RefCell<Option<Dispatch>>,
+ /// Whether or not we can currently begin dispatching a trace event.
+ ///
+ /// This is set to `false` when functions such as `enter`, `exit`, `event`,
+ /// and `new_span` are called on this thread's default dispatcher, to
+ /// prevent further trace events triggered inside those functions from
+ /// creating an infinite recursion. When we finish handling a dispatch, this
+ /// is set back to `true`.
+ can_enter: Cell<bool>,
+}
+
+/// While this guard is active, additional calls to subscriber functions on
+/// the default dispatcher will not be able to access the dispatch context.
+/// Dropping the guard will allow the dispatch context to be re-entered.
+#[cfg(feature = "std")]
+struct Entered<'a>(&'a State);
+
+/// A guard that resets the current default dispatcher to the prior
+/// default dispatcher when dropped.
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+#[derive(Debug)]
+pub struct DefaultGuard(Option<Dispatch>);
+
+/// Sets this dispatch as the default for the duration of a closure.
+///
+/// The default dispatcher is used when creating a new [span] or
+/// [`Event`].
+///
+/// <pre class="ignore" style="white-space:normal;font:inherit;">
+/// <strong>Note</strong>: This function requires the Rust standard library.
+/// <code>no_std</code> users should use <a href="../fn.set_global_default.html">
+/// <code>set_global_default</code></a> instead.
+/// </pre>
+///
+/// [span]: super::span
+/// [`Subscriber`]: super::subscriber::Subscriber
+/// [`Event`]: super::event::Event
+/// [`set_global_default`]: super::set_global_default
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+pub fn with_default<T>(dispatcher: &Dispatch, f: impl FnOnce() -> T) -> T {
+ // When this guard is dropped, the default dispatcher will be reset to the
+ // prior default. Using this (rather than simply resetting after calling
+ // `f`) ensures that we always reset to the prior dispatcher even if `f`
+ // panics.
+ let _guard = set_default(dispatcher);
+ f()
+}
+
+/// Sets the dispatch as the default dispatcher for the lifetime of the
+/// returned `DefaultGuard`.
+///
+/// <pre class="ignore" style="white-space:normal;font:inherit;">
+/// <strong>Note</strong>: This function requires the Rust standard library.
+/// <code>no_std</code> users should use <a href="../fn.set_global_default.html">
+/// <code>set_global_default</code></a> instead.
+/// </pre>
+///
+/// [`set_global_default`]: super::set_global_default
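+///
+/// For example (a minimal sketch, using `NoSubscriber` as a stand-in for a
+/// real subscriber):
+///
+/// ```rust
+/// use tracing_core::dispatcher::{self, Dispatch};
+/// use tracing_core::subscriber::NoSubscriber;
+///
+/// let dispatch = Dispatch::new(NoSubscriber::default());
+/// let guard = dispatcher::set_default(&dispatch);
+/// // `dispatch` is this thread's default subscriber until `guard` is dropped.
+/// drop(guard);
+/// ```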
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+#[must_use = "Dropping the guard unregisters the dispatcher."]
+pub fn set_default(dispatcher: &Dispatch) -> DefaultGuard {
+ // When this guard is dropped, the default dispatcher will be reset to the
+ // prior default. Using this ensures that we always reset to the prior
+ // dispatcher even if the thread calling this function panics.
+ State::set_default(dispatcher.clone())
+}
+
+/// Sets this dispatch as the global default for the duration of the entire program.
+/// Will be used as a fallback if no thread-local dispatch has been set in a thread
+/// (using `with_default`).
+///
+/// Can only be set once; subsequent attempts to set the global default will fail.
+/// Returns `Err` if the global default has already been set.
+///
+/// <div class="example-wrap" style="display:inline-block"><pre class="compile_fail" style="white-space:normal;font:inherit;">
+/// <strong>Warning</strong>: In general, libraries should <em>not</em> call
+/// <code>set_global_default()</code>! Doing so will cause conflicts when
+/// executables that depend on the library try to set the default later.
+/// </pre></div>
+///
+/// [span]: super::span
+/// [`Subscriber`]: super::subscriber::Subscriber
+/// [`Event`]: super::event::Event
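+///
+/// For example (a minimal sketch, using `NoSubscriber` as a stand-in for a
+/// real subscriber):
+///
+/// ```rust
+/// use tracing_core::dispatcher::{self, Dispatch};
+/// use tracing_core::subscriber::NoSubscriber;
+///
+/// let dispatch = Dispatch::new(NoSubscriber::default());
+/// dispatcher::set_global_default(dispatch)
+///     .expect("the global default should only be set once");
+/// ```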
+pub fn set_global_default(dispatcher: Dispatch) -> Result<(), SetGlobalDefaultError> {
+ // if `compare_exchange` returns Result::Ok(_), then `new` has been set and
+ // `current`—now the prior value—has been returned in the `Ok()` branch.
+ if GLOBAL_INIT
+ .compare_exchange(
+ UNINITIALIZED,
+ INITIALIZING,
+ Ordering::SeqCst,
+ Ordering::SeqCst,
+ )
+ .is_ok()
+ {
+ unsafe {
+ GLOBAL_DISPATCH = Some(dispatcher);
+ }
+ GLOBAL_INIT.store(INITIALIZED, Ordering::SeqCst);
+ EXISTS.store(true, Ordering::Release);
+ Ok(())
+ } else {
+ Err(SetGlobalDefaultError { _no_construct: () })
+ }
+}
+
+/// Returns true if a `tracing` dispatcher has ever been set.
+///
+/// This may be used to completely elide trace points if tracing is not in use
+/// at all or has yet to be initialized.
+#[doc(hidden)]
+#[inline(always)]
+pub fn has_been_set() -> bool {
+ EXISTS.load(Ordering::Relaxed)
+}
+
+/// Returned if setting the global dispatcher fails.
+pub struct SetGlobalDefaultError {
+ _no_construct: (),
+}
+
+impl fmt::Debug for SetGlobalDefaultError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.debug_tuple("SetGlobalDefaultError")
+ .field(&Self::MESSAGE)
+ .finish()
+ }
+}
+
+impl fmt::Display for SetGlobalDefaultError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.pad(Self::MESSAGE)
+ }
+}
+
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+impl error::Error for SetGlobalDefaultError {}
+
+impl SetGlobalDefaultError {
+ const MESSAGE: &'static str = "a global default trace dispatcher has already been set";
+}
+
+/// Executes a closure with a reference to this thread's current [dispatcher].
+///
+/// Note that calls to `get_default` should not be nested; if this function is
+/// called while inside of another `get_default`, that closure will be provided
+/// with `Dispatch::none` rather than the previously set dispatcher.
+///
+/// [dispatcher]: super::dispatcher::Dispatch
+#[cfg(feature = "std")]
+pub fn get_default<T, F>(mut f: F) -> T
+where
+ F: FnMut(&Dispatch) -> T,
+{
+ CURRENT_STATE
+ .try_with(|state| {
+ if let Some(entered) = state.enter() {
+ return f(&*entered.current());
+ }
+
+ f(&Dispatch::none())
+ })
+ .unwrap_or_else(|_| f(&Dispatch::none()))
+}
+
+/// Executes a closure with a reference to this thread's current [dispatcher].
+///
+/// Note that calls to `get_default` should not be nested; if this function is
+/// called while inside of another `get_default`, that closure will be provided
+/// with `Dispatch::none` rather than the previously set dispatcher.
+///
+/// [dispatcher]: super::dispatcher::Dispatch
+#[cfg(feature = "std")]
+#[doc(hidden)]
+#[inline(never)]
+pub fn get_current<T>(f: impl FnOnce(&Dispatch) -> T) -> Option<T> {
+ CURRENT_STATE
+ .try_with(|state| {
+ let entered = state.enter()?;
+ Some(f(&*entered.current()))
+ })
+ .ok()?
+}
+
+/// Executes a closure with a reference to the current [dispatcher].
+///
+/// [dispatcher]: super::dispatcher::Dispatch
+#[cfg(not(feature = "std"))]
+#[doc(hidden)]
+pub fn get_current<T>(f: impl FnOnce(&Dispatch) -> T) -> Option<T> {
+ let dispatch = get_global()?;
+ Some(f(&dispatch))
+}
+
+/// Executes a closure with a reference to the current [dispatcher].
+///
+/// [dispatcher]: super::dispatcher::Dispatch
+#[cfg(not(feature = "std"))]
+pub fn get_default<T, F>(mut f: F) -> T
+where
+ F: FnMut(&Dispatch) -> T,
+{
+ if let Some(d) = get_global() {
+ f(d)
+ } else {
+ f(&Dispatch::none())
+ }
+}
+
+fn get_global() -> Option<&'static Dispatch> {
+ if GLOBAL_INIT.load(Ordering::SeqCst) != INITIALIZED {
+ return None;
+ }
+ unsafe {
+ // This is safe given the invariant that setting the global dispatcher
+ // also sets `GLOBAL_INIT` to `INITIALIZED`.
+ Some(GLOBAL_DISPATCH.as_ref().expect(
+ "invariant violated: GLOBAL_DISPATCH must be initialized before GLOBAL_INIT is set",
+ ))
+ }
+}
+
+#[cfg(feature = "std")]
+pub(crate) struct Registrar(Weak<dyn Subscriber + Send + Sync>);
+
+impl Dispatch {
+ /// Returns a new `Dispatch` that discards events and spans.
+ #[inline]
+ pub fn none() -> Self {
+ Dispatch {
+ subscriber: Arc::new(NoSubscriber::default()),
+ }
+ }
+
+ /// Returns a `Dispatch` that forwards to the given [`Subscriber`].
+ ///
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ pub fn new<S>(subscriber: S) -> Self
+ where
+ S: Subscriber + Send + Sync + 'static,
+ {
+ let me = Dispatch {
+ subscriber: Arc::new(subscriber),
+ };
+ callsite::register_dispatch(&me);
+ me
+ }
+
+ #[cfg(feature = "std")]
+ pub(crate) fn registrar(&self) -> Registrar {
+ Registrar(Arc::downgrade(&self.subscriber))
+ }
+
+ /// Creates a [`WeakDispatch`] from this `Dispatch`.
+ ///
+ /// A [`WeakDispatch`] is similar to a [`Dispatch`], but it does not prevent
+ /// the underlying [`Subscriber`] from being dropped. Instead, it only permits
+ /// access while other references to the `Subscriber` exist. This is equivalent
+ /// to the standard library's [`Arc::downgrade`] method, but for `Dispatch`
+ /// rather than `Arc`.
+ ///
+ /// The primary use for creating a [`WeakDispatch`] is to allow a `Subscriber`
+ /// to hold a cyclical reference to itself without creating a memory leak.
+ /// See [here] for details.
+ ///
+ /// [`Arc::downgrade`]: std::sync::Arc::downgrade
+ /// [here]: Subscriber#avoiding-memory-leaks
+ pub fn downgrade(&self) -> WeakDispatch {
+ WeakDispatch {
+ subscriber: Arc::downgrade(&self.subscriber),
+ }
+ }
+
+ #[inline(always)]
+ #[cfg(not(feature = "alloc"))]
+ pub(crate) fn subscriber(&self) -> &(dyn Subscriber + Send + Sync) {
+ &self.subscriber
+ }
+
+    /// Registers a new callsite with this subscriber, returning whether or not
+    /// the subscriber is interested in being notified about the callsite.
+ ///
+ /// This calls the [`register_callsite`] function on the [`Subscriber`]
+ /// that this `Dispatch` forwards to.
+ ///
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`register_callsite`]: super::subscriber::Subscriber::register_callsite
+ #[inline]
+ pub fn register_callsite(&self, metadata: &'static Metadata<'static>) -> subscriber::Interest {
+ self.subscriber.register_callsite(metadata)
+ }
+
+ /// Returns the highest [verbosity level][level] that this [`Subscriber`] will
+ /// enable, or `None`, if the subscriber does not implement level-based
+ /// filtering or chooses not to implement this method.
+ ///
+ /// This calls the [`max_level_hint`] function on the [`Subscriber`]
+ /// that this `Dispatch` forwards to.
+ ///
+ /// [level]: super::Level
+ /// [`Subscriber`]: super::subscriber::Subscriber
+    /// [`max_level_hint`]: super::subscriber::Subscriber::max_level_hint
+ // TODO(eliza): consider making this a public API?
+ #[inline]
+ pub(crate) fn max_level_hint(&self) -> Option<LevelFilter> {
+ self.subscriber.max_level_hint()
+ }
+
+ /// Record the construction of a new span, returning a new [ID] for the
+ /// span being constructed.
+ ///
+ /// This calls the [`new_span`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [ID]: super::span::Id
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`new_span`]: super::subscriber::Subscriber::new_span
+ #[inline]
+ pub fn new_span(&self, span: &span::Attributes<'_>) -> span::Id {
+ self.subscriber.new_span(span)
+ }
+
+ /// Record a set of values on a span.
+ ///
+ /// This calls the [`record`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`record`]: super::subscriber::Subscriber::record
+ #[inline]
+ pub fn record(&self, span: &span::Id, values: &span::Record<'_>) {
+ self.subscriber.record(span, values)
+ }
+
+ /// Adds an indication that `span` follows from the span with the id
+ /// `follows`.
+ ///
+ /// This calls the [`record_follows_from`] function on the [`Subscriber`]
+ /// that this `Dispatch` forwards to.
+ ///
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`record_follows_from`]: super::subscriber::Subscriber::record_follows_from
+ #[inline]
+ pub fn record_follows_from(&self, span: &span::Id, follows: &span::Id) {
+ self.subscriber.record_follows_from(span, follows)
+ }
+
+ /// Returns true if a span with the specified [metadata] would be
+ /// recorded.
+ ///
+ /// This calls the [`enabled`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [metadata]: super::metadata::Metadata
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`enabled`]: super::subscriber::Subscriber::enabled
+ #[inline]
+ pub fn enabled(&self, metadata: &Metadata<'_>) -> bool {
+ self.subscriber.enabled(metadata)
+ }
+
+ /// Records that an [`Event`] has occurred.
+ ///
+ /// This calls the [`event`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [`Event`]: super::event::Event
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`event`]: super::subscriber::Subscriber::event
+ #[inline]
+ pub fn event(&self, event: &Event<'_>) {
+ if self.subscriber.event_enabled(event) {
+ self.subscriber.event(event);
+ }
+ }
+
+    /// Records that a span has been entered.
+ ///
+ /// This calls the [`enter`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`enter`]: super::subscriber::Subscriber::enter
+ pub fn enter(&self, span: &span::Id) {
+ self.subscriber.enter(span);
+ }
+
+ /// Records that a span has been exited.
+ ///
+ /// This calls the [`exit`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`exit`]: super::subscriber::Subscriber::exit
+ pub fn exit(&self, span: &span::Id) {
+ self.subscriber.exit(span);
+ }
+
+ /// Notifies the subscriber that a [span ID] has been cloned.
+ ///
+ /// This function must only be called with span IDs that were returned by
+ /// this `Dispatch`'s [`new_span`] function. The `tracing` crate upholds
+ /// this guarantee and any other libraries implementing instrumentation APIs
+ /// must as well.
+ ///
+ /// This calls the [`clone_span`] function on the `Subscriber` that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [span ID]: super::span::Id
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`clone_span`]: super::subscriber::Subscriber::clone_span
+ /// [`new_span`]: super::subscriber::Subscriber::new_span
+ #[inline]
+ pub fn clone_span(&self, id: &span::Id) -> span::Id {
+ self.subscriber.clone_span(id)
+ }
+
+ /// Notifies the subscriber that a [span ID] has been dropped.
+ ///
+ /// This function must only be called with span IDs that were returned by
+ /// this `Dispatch`'s [`new_span`] function. The `tracing` crate upholds
+ /// this guarantee and any other libraries implementing instrumentation APIs
+ /// must as well.
+ ///
+ /// This calls the [`drop_span`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// <pre class="compile_fail" style="white-space:normal;font:inherit;">
+ /// <strong>Deprecated</strong>: The <a href="#method.try_close"><code>
+ /// try_close</code></a> method is functionally identical, but returns
+ /// <code>true</code> if the span is now closed. It should be used
+ /// instead of this method.
+ /// </pre>
+ ///
+ /// [span ID]: super::span::Id
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`drop_span`]: super::subscriber::Subscriber::drop_span
+ /// [`new_span`]: super::subscriber::Subscriber::new_span
+    /// [`try_close`]: Dispatch::try_close()
+ #[inline]
+ #[deprecated(since = "0.1.2", note = "use `Dispatch::try_close` instead")]
+ pub fn drop_span(&self, id: span::Id) {
+ #[allow(deprecated)]
+ self.subscriber.drop_span(id);
+ }
+
+ /// Notifies the subscriber that a [span ID] has been dropped, and returns
+ /// `true` if there are now 0 IDs referring to that span.
+ ///
+ /// This function must only be called with span IDs that were returned by
+ /// this `Dispatch`'s [`new_span`] function. The `tracing` crate upholds
+ /// this guarantee and any other libraries implementing instrumentation APIs
+ /// must as well.
+ ///
+ /// This calls the [`try_close`] function on the [`Subscriber`] that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [span ID]: super::span::Id
+ /// [`Subscriber`]: super::subscriber::Subscriber
+ /// [`try_close`]: super::subscriber::Subscriber::try_close
+ /// [`new_span`]: super::subscriber::Subscriber::new_span
+ pub fn try_close(&self, id: span::Id) -> bool {
+ self.subscriber.try_close(id)
+ }
+
+ /// Returns a type representing this subscriber's view of the current span.
+ ///
+ /// This calls the [`current`] function on the `Subscriber` that this
+ /// `Dispatch` forwards to.
+ ///
+ /// [`current`]: super::subscriber::Subscriber::current_span
+ #[inline]
+ pub fn current_span(&self) -> span::Current {
+ self.subscriber.current_span()
+ }
+
+ /// Returns `true` if this `Dispatch` forwards to a `Subscriber` of type
+ /// `T`.
+ #[inline]
+ pub fn is<T: Any>(&self) -> bool {
+ <dyn Subscriber>::is::<T>(&self.subscriber)
+ }
+
+ /// Returns some reference to the `Subscriber` this `Dispatch` forwards to
+ /// if it is of type `T`, or `None` if it isn't.
+ #[inline]
+ pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
+ <dyn Subscriber>::downcast_ref(&self.subscriber)
+ }
+}
+
+impl Default for Dispatch {
+ /// Returns the current default dispatcher
+ fn default() -> Self {
+ get_default(|default| default.clone())
+ }
+}
+
+impl fmt::Debug for Dispatch {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.debug_tuple("Dispatch")
+ .field(&format_args!("{:p}", self.subscriber))
+ .finish()
+ }
+}
+
+impl<S> From<S> for Dispatch
+where
+ S: Subscriber + Send + Sync + 'static,
+{
+ #[inline]
+ fn from(subscriber: S) -> Self {
+ Dispatch::new(subscriber)
+ }
+}
+
+// === impl WeakDispatch ===
+
+impl WeakDispatch {
+ /// Attempts to upgrade this `WeakDispatch` to a [`Dispatch`].
+ ///
+ /// Returns `None` if the referenced `Dispatch` has already been dropped.
+ ///
+ /// ## Examples
+ ///
+ /// ```
+ /// # use tracing_core::subscriber::NoSubscriber;
+ /// # use tracing_core::dispatcher::Dispatch;
+ /// let strong = Dispatch::new(NoSubscriber::default());
+ /// let weak = strong.downgrade();
+ ///
+ /// // The strong here keeps it alive, so we can still access the object.
+ /// assert!(weak.upgrade().is_some());
+ ///
+ /// drop(strong); // But not any more.
+ /// assert!(weak.upgrade().is_none());
+ /// ```
+ pub fn upgrade(&self) -> Option<Dispatch> {
+ self.subscriber
+ .upgrade()
+ .map(|subscriber| Dispatch { subscriber })
+ }
+}
+
+impl fmt::Debug for WeakDispatch {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ let mut tuple = f.debug_tuple("WeakDispatch");
+ match self.subscriber.upgrade() {
+ Some(subscriber) => tuple.field(&format_args!("Some({:p})", subscriber)),
+ None => tuple.field(&format_args!("None")),
+ };
+ tuple.finish()
+ }
+}
+
+#[cfg(feature = "std")]
+impl Registrar {
+ pub(crate) fn upgrade(&self) -> Option<Dispatch> {
+ self.0.upgrade().map(|subscriber| Dispatch { subscriber })
+ }
+}
+
+// ===== impl State =====
+
+#[cfg(feature = "std")]
+impl State {
+ /// Replaces the current default dispatcher on this thread with the provided
+    /// dispatcher.
+ ///
+    /// Dropping the returned `DefaultGuard` will reset the default dispatcher to
+ /// the previous value.
+ #[inline]
+ fn set_default(new_dispatch: Dispatch) -> DefaultGuard {
+ let prior = CURRENT_STATE
+ .try_with(|state| {
+ state.can_enter.set(true);
+ state.default.replace(Some(new_dispatch))
+ })
+ .ok()
+ .flatten();
+ EXISTS.store(true, Ordering::Release);
+ DefaultGuard(prior)
+ }
+
+ #[inline]
+ fn enter(&self) -> Option<Entered<'_>> {
+ if self.can_enter.replace(false) {
+ Some(Entered(self))
+ } else {
+ None
+ }
+ }
+}
+
+// ===== impl Entered =====
+
+#[cfg(feature = "std")]
+impl<'a> Entered<'a> {
+ #[inline]
+ fn current(&self) -> RefMut<'a, Dispatch> {
+ let default = self.0.default.borrow_mut();
+ RefMut::map(default, |default| {
+ default.get_or_insert_with(|| get_global().cloned().unwrap_or_else(Dispatch::none))
+ })
+ }
+}
+
+#[cfg(feature = "std")]
+impl<'a> Drop for Entered<'a> {
+ #[inline]
+ fn drop(&mut self) {
+ self.0.can_enter.set(true);
+ }
+}
+
+// ===== impl DefaultGuard =====
+
+#[cfg(feature = "std")]
+impl Drop for DefaultGuard {
+ #[inline]
+ fn drop(&mut self) {
+ // Replace the dispatcher and then drop the old one outside
+ // of the thread-local context. Dropping the dispatch may
+ // lead to the drop of a subscriber which, in the process,
+ // could then also attempt to access the same thread local
+ // state -- causing a clash.
+ let prev = CURRENT_STATE.try_with(|state| state.default.replace(self.0.take()));
+ drop(prev)
+ }
+}
+
+#[cfg(test)]
+mod test {
+ use super::*;
+ #[cfg(feature = "std")]
+ use crate::stdlib::sync::atomic::{AtomicUsize, Ordering};
+ use crate::{
+ callsite::Callsite,
+ metadata::{Kind, Level, Metadata},
+ subscriber::Interest,
+ };
+
+ #[test]
+ fn dispatch_is() {
+ let dispatcher = Dispatch::new(NoSubscriber::default());
+ assert!(dispatcher.is::<NoSubscriber>());
+ }
+
+ #[test]
+ fn dispatch_downcasts() {
+ let dispatcher = Dispatch::new(NoSubscriber::default());
+ assert!(dispatcher.downcast_ref::<NoSubscriber>().is_some());
+ }
+
+ struct TestCallsite;
+ static TEST_CALLSITE: TestCallsite = TestCallsite;
+ static TEST_META: Metadata<'static> = metadata! {
+ name: "test",
+ target: module_path!(),
+ level: Level::DEBUG,
+ fields: &[],
+ callsite: &TEST_CALLSITE,
+ kind: Kind::EVENT
+ };
+
+ impl Callsite for TestCallsite {
+ fn set_interest(&self, _: Interest) {}
+ fn metadata(&self) -> &Metadata<'_> {
+ &TEST_META
+ }
+ }
+
+ #[test]
+ #[cfg(feature = "std")]
+ fn events_dont_infinite_loop() {
+ // This test ensures that an event triggered within a subscriber
+ // won't cause an infinite loop of events.
+ struct TestSubscriber;
+ impl Subscriber for TestSubscriber {
+ fn enabled(&self, _: &Metadata<'_>) -> bool {
+ true
+ }
+
+ fn new_span(&self, _: &span::Attributes<'_>) -> span::Id {
+ span::Id::from_u64(0xAAAA)
+ }
+
+ fn record(&self, _: &span::Id, _: &span::Record<'_>) {}
+
+ fn record_follows_from(&self, _: &span::Id, _: &span::Id) {}
+
+ fn event(&self, _: &Event<'_>) {
+ static EVENTS: AtomicUsize = AtomicUsize::new(0);
+ assert_eq!(
+ EVENTS.fetch_add(1, Ordering::Relaxed),
+ 0,
+ "event method called twice!"
+ );
+ Event::dispatch(&TEST_META, &TEST_META.fields().value_set(&[]))
+ }
+
+ fn enter(&self, _: &span::Id) {}
+
+ fn exit(&self, _: &span::Id) {}
+ }
+
+ with_default(&Dispatch::new(TestSubscriber), || {
+ Event::dispatch(&TEST_META, &TEST_META.fields().value_set(&[]))
+ })
+ }
+
+ #[test]
+ #[cfg(feature = "std")]
+ fn spans_dont_infinite_loop() {
+ // This test ensures that a span created within a subscriber
+ // won't cause an infinite loop of new spans.
+
+ fn mk_span() {
+ get_default(|current| {
+ current.new_span(&span::Attributes::new(
+ &TEST_META,
+ &TEST_META.fields().value_set(&[]),
+ ))
+ });
+ }
+
+ struct TestSubscriber;
+ impl Subscriber for TestSubscriber {
+ fn enabled(&self, _: &Metadata<'_>) -> bool {
+ true
+ }
+
+ fn new_span(&self, _: &span::Attributes<'_>) -> span::Id {
+ static NEW_SPANS: AtomicUsize = AtomicUsize::new(0);
+ assert_eq!(
+ NEW_SPANS.fetch_add(1, Ordering::Relaxed),
+ 0,
+ "new_span method called twice!"
+ );
+ mk_span();
+ span::Id::from_u64(0xAAAA)
+ }
+
+ fn record(&self, _: &span::Id, _: &span::Record<'_>) {}
+
+ fn record_follows_from(&self, _: &span::Id, _: &span::Id) {}
+
+ fn event(&self, _: &Event<'_>) {}
+
+ fn enter(&self, _: &span::Id) {}
+
+ fn exit(&self, _: &span::Id) {}
+ }
+
+ with_default(&Dispatch::new(TestSubscriber), mk_span)
+ }
+
+ #[test]
+ fn default_no_subscriber() {
+ let default_dispatcher = Dispatch::default();
+ assert!(default_dispatcher.is::<NoSubscriber>());
+ }
+
+ #[cfg(feature = "std")]
+ #[test]
+ fn default_dispatch() {
+ struct TestSubscriber;
+ impl Subscriber for TestSubscriber {
+ fn enabled(&self, _: &Metadata<'_>) -> bool {
+ true
+ }
+
+ fn new_span(&self, _: &span::Attributes<'_>) -> span::Id {
+ span::Id::from_u64(0xAAAA)
+ }
+
+ fn record(&self, _: &span::Id, _: &span::Record<'_>) {}
+
+ fn record_follows_from(&self, _: &span::Id, _: &span::Id) {}
+
+ fn event(&self, _: &Event<'_>) {}
+
+ fn enter(&self, _: &span::Id) {}
+
+ fn exit(&self, _: &span::Id) {}
+ }
+ let guard = set_default(&Dispatch::new(TestSubscriber));
+ let default_dispatcher = Dispatch::default();
+ assert!(default_dispatcher.is::<TestSubscriber>());
+
+ drop(guard);
+ let default_dispatcher = Dispatch::default();
+ assert!(default_dispatcher.is::<NoSubscriber>());
+ }
+}
diff --git a/third_party/rust/tracing-core/src/event.rs b/third_party/rust/tracing-core/src/event.rs
new file mode 100644
index 0000000000..6e25437629
--- /dev/null
+++ b/third_party/rust/tracing-core/src/event.rs
@@ -0,0 +1,128 @@
+//! Events represent single points in time during the execution of a program.
+use crate::parent::Parent;
+use crate::span::Id;
+use crate::{field, Metadata};
+
+/// `Event`s represent single points in time where something occurred during the
+/// execution of a program.
+///
+/// An `Event` can be compared to a log record in unstructured logging, but with
+/// two key differences:
+/// - `Event`s exist _within the context of a [span]_. Unlike log lines, they
+/// may be located within the trace tree, allowing visibility into the
+/// _temporal_ context in which the event occurred, as well as the source
+/// code location.
+/// - Like spans, `Event`s have structured key-value data known as _[fields]_,
+///   which may include a textual message. In general, a majority of the data
+/// associated with an event should be in the event's fields rather than in
+/// the textual message, as the fields are more structured.
+///
+/// [span]: super::span
+/// [fields]: super::field
+#[derive(Debug)]
+pub struct Event<'a> {
+ fields: &'a field::ValueSet<'a>,
+ metadata: &'static Metadata<'static>,
+ parent: Parent,
+}
+
+impl<'a> Event<'a> {
+ /// Constructs a new `Event` with the specified metadata and set of values,
+ /// and observes it with the current subscriber.
+ pub fn dispatch(metadata: &'static Metadata<'static>, fields: &'a field::ValueSet<'_>) {
+ let event = Event::new(metadata, fields);
+ crate::dispatcher::get_default(|current| {
+ current.event(&event);
+ });
+ }
+
+ /// Returns a new `Event` in the current span, with the specified metadata
+ /// and set of values.
+ #[inline]
+ pub fn new(metadata: &'static Metadata<'static>, fields: &'a field::ValueSet<'a>) -> Self {
+ Event {
+ fields,
+ metadata,
+ parent: Parent::Current,
+ }
+ }
+
+ /// Returns a new `Event` as a child of the specified span, with the
+ /// provided metadata and set of values.
+ #[inline]
+ pub fn new_child_of(
+ parent: impl Into<Option<Id>>,
+ metadata: &'static Metadata<'static>,
+ fields: &'a field::ValueSet<'a>,
+ ) -> Self {
+ let parent = match parent.into() {
+ Some(p) => Parent::Explicit(p),
+ None => Parent::Root,
+ };
+ Event {
+ fields,
+ metadata,
+ parent,
+ }
+ }
+
+ /// Constructs a new `Event` with the specified metadata and set of values,
+ /// and observes it with the current subscriber and an explicit parent.
+ pub fn child_of(
+ parent: impl Into<Option<Id>>,
+ metadata: &'static Metadata<'static>,
+ fields: &'a field::ValueSet<'_>,
+ ) {
+ let event = Self::new_child_of(parent, metadata, fields);
+ crate::dispatcher::get_default(|current| {
+ current.event(&event);
+ });
+ }
+
+ /// Visits all the fields on this `Event` with the specified [visitor].
+ ///
+ /// [visitor]: super::field::Visit
+ #[inline]
+ pub fn record(&self, visitor: &mut dyn field::Visit) {
+ self.fields.record(visitor);
+ }
+
+ /// Returns an iterator over the set of values on this `Event`.
+ pub fn fields(&self) -> field::Iter {
+ self.fields.field_set().iter()
+ }
+
+ /// Returns [metadata] describing this `Event`.
+ ///
+ /// [metadata]: super::Metadata
+ pub fn metadata(&self) -> &'static Metadata<'static> {
+ self.metadata
+ }
+
+ /// Returns true if the new event should be a root.
+ pub fn is_root(&self) -> bool {
+ matches!(self.parent, Parent::Root)
+ }
+
+ /// Returns true if the new event's parent should be determined based on the
+ /// current context.
+ ///
+ /// If this is true and the current thread is currently inside a span, then
+ /// that span should be the new event's parent. Otherwise, if the current
+ /// thread is _not_ inside a span, then the new event will be the root of its
+ /// own trace tree.
+ pub fn is_contextual(&self) -> bool {
+ matches!(self.parent, Parent::Current)
+ }
+
+ /// Returns the new event's explicitly-specified parent, if there is one.
+ ///
+ /// Otherwise (if the new event is a root or is a child of the current span),
+ /// returns `None`.
+ pub fn parent(&self) -> Option<&Id> {
+ match self.parent {
+ Parent::Explicit(ref p) => Some(p),
+ _ => None,
+ }
+ }
+}
diff --git a/third_party/rust/tracing-core/src/field.rs b/third_party/rust/tracing-core/src/field.rs
new file mode 100644
index 0000000000..e103c75a9d
--- /dev/null
+++ b/third_party/rust/tracing-core/src/field.rs
@@ -0,0 +1,1263 @@
+//! `Span` and `Event` key-value data.
+//!
+//! Spans and events may be annotated with key-value data, known as _fields_.
+//! These fields consist of a mapping from a key (corresponding to
+//! a `&str` but represented internally as an array index) to a [`Value`].
+//!
+//! # `Value`s and `Subscriber`s
+//!
+//! `Subscriber`s consume `Value`s as fields attached to [span]s or [`Event`]s.
+//! The set of field keys on a given span or event is defined on its [`Metadata`].
+//! When a span is created, it provides [`Attributes`] to the `Subscriber`'s
+//! [`new_span`] method, containing any fields whose values were provided when
+//! the span was created. It may later call the `Subscriber`'s [`record`] method
+//! with additional [`Record`]s if values are added for more of its fields.
+//! Similarly, the [`Event`] type passed to the subscriber's [`event`] method
+//! will contain any fields attached to each event.
+//!
+//! `tracing` represents values as either one of a set of Rust primitives
+//! (`i64`, `u64`, `f64`, `i128`, `u128`, `bool`, and `&str`) or using a
+//! `fmt::Display` or `fmt::Debug` implementation. `Subscriber`s are provided
+//! these primitive value types as `dyn Value` trait objects.
+//!
+//! These trait objects can be formatted using `fmt::Debug`, but may also be
+//! recorded as typed data by calling the [`Value::record`] method on these
+//! trait objects with a _visitor_ implementing the [`Visit`] trait. This trait
+//! represents the behavior used to record values of various types. For example,
+//! an implementation of `Visit` might record integers by incrementing counters
+//! for their field names rather than printing them.
+//!
+//!
+//! # Using `valuable`
+//!
+//! `tracing`'s [`Value`] trait is intentionally minimalist: it supports only a small
+//! number of Rust primitives as typed values, and only permits recording
+//! user-defined types with their [`fmt::Debug`] or [`fmt::Display`]
+//! implementations. However, there are some cases where it may be useful to record
+//! nested values (such as arrays, `Vec`s, or `HashMap`s containing values), or
+//! user-defined `struct` and `enum` types without having to format them as
+//! unstructured text.
+//!
+//! To address `Value`'s limitations, `tracing` offers experimental support for
+//! the [`valuable`] crate, which provides object-safe inspection of structured
+//! values. User-defined types can implement the [`valuable::Valuable`] trait,
+//! and be recorded as a `tracing` field by calling their [`as_value`] method.
+//! If the [`Subscriber`] also supports the `valuable` crate, it can
+//! then visit those types' fields as structured values using `valuable`.
+//!
+//! <pre class="ignore" style="white-space:normal;font:inherit;">
+//! <strong>Note</strong>: <code>valuable</code> support is an
+//! <a href = "../index.html#unstable-features">unstable feature</a>. See
+//! the documentation on unstable features for details on how to enable it.
+//! </pre>
+//!
+//! For example:
+//! ```ignore
+//! // Derive `Valuable` for our types:
+//! use valuable::Valuable;
+//!
+//! #[derive(Clone, Debug, Valuable)]
+//! struct User {
+//! name: String,
+//! age: u32,
+//! address: Address,
+//! }
+//!
+//! #[derive(Clone, Debug, Valuable)]
+//! struct Address {
+//! country: String,
+//! city: String,
+//! street: String,
+//! }
+//!
+//! let user = User {
+//! name: "Arwen Undomiel".to_string(),
+//! age: 3000,
+//! address: Address {
+//! country: "Middle Earth".to_string(),
+//! city: "Rivendell".to_string(),
+//! street: "leafy lane".to_string(),
+//! },
+//! };
+//!
+//! // Recording `user` as a `valuable::Value` will allow the `tracing` subscriber
+//! // to traverse its fields as a nested, typed structure:
+//! tracing::info!(current_user = user.as_value());
+//! ```
+//!
+//! Alternatively, the [`valuable()`] function may be used to convert a type
+//! implementing [`Valuable`] into a `tracing` field value.
+//!
+//! When the `valuable` feature is enabled, the [`Visit`] trait will include an
+//! optional [`record_value`] method. `Visit` implementations that wish to
+//! record `valuable` values can implement this method with custom behavior.
+//! If a visitor does not implement `record_value`, the [`valuable::Value`] will
+//! be forwarded to the visitor's [`record_debug`] method.
+//!
+//! [`valuable`]: https://crates.io/crates/valuable
+//! [`as_value`]: valuable::Valuable::as_value
+//! [`Subscriber`]: crate::Subscriber
+//! [`record_value`]: Visit::record_value
+//! [`record_debug`]: Visit::record_debug
+//!
+//! [span]: super::span
+//! [`Event`]: super::event::Event
+//! [`Metadata`]: super::metadata::Metadata
+//! [`Attributes`]: super::span::Attributes
+//! [`Record`]: super::span::Record
+//! [`new_span`]: super::subscriber::Subscriber::new_span
+//! [`record`]: super::subscriber::Subscriber::record
+//! [`event`]: super::subscriber::Subscriber::event
+//! [`Value::record`]: Value::record
+use crate::callsite;
+use crate::stdlib::{
+ borrow::Borrow,
+ fmt,
+ hash::{Hash, Hasher},
+ num,
+ ops::Range,
+ string::String,
+};
+
+use self::private::ValidLen;
+
+/// An opaque key allowing _O_(1) access to a field in a `Span`'s key-value
+/// data.
+///
+/// As keys are defined by the _metadata_ of a span, rather than by an
+/// individual instance of a span, a key may be used to access the same field
+/// across all instances of a given span with the same metadata. Thus, when a
+/// subscriber observes a new span, it need only access a field by name _once_,
+/// and use the key for that name for all other accesses.
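+///
+/// # Examples
+///
+/// A small sketch of looking up a `Field` by name and reusing it; the
+/// callsite and field names here are illustrative:
+///
+/// ```
+/// # extern crate tracing_core as tracing;
+/// use tracing::callsite::Callsite;
+/// use tracing::field::FieldSet;
+/// use tracing::identify_callsite;
+/// use tracing::subscriber::Interest;
+/// use tracing::Metadata;
+///
+/// struct MyCallsite;
+/// impl Callsite for MyCallsite {
+///     fn set_interest(&self, _: Interest) {}
+///     fn metadata(&self) -> &Metadata<'_> { unimplemented!() }
+/// }
+/// static MY_CALLSITE: MyCallsite = MyCallsite;
+///
+/// // Field names are defined once, by the callsite's `FieldSet`...
+/// let fields = FieldSet::new(&["foo", "bar"], identify_callsite!(&MY_CALLSITE));
+/// // ...and the key for a name can be looked up once and then reused.
+/// let bar = fields.field("bar").expect("`bar` is in this `FieldSet`");
+/// assert_eq!(bar.name(), "bar");
+/// ```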
+#[derive(Debug)]
+pub struct Field {
+ i: usize,
+ fields: FieldSet,
+}
+
+/// An empty field.
+///
+/// This can be used to indicate that the value of a field is not currently
+/// present but will be recorded later.
+///
+/// When a field's value is `Empty`, it will not be recorded.
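+///
+/// For example, using the `tracing` crate's macros (which are not part of
+/// this crate), a field may be declared as `Empty` and recorded later; this
+/// sketch is not compiled here:
+///
+/// ```ignore
+/// use tracing::{field, trace_span};
+///
+/// // Create a span with a `greeting` field and an initially empty `parting` field.
+/// let span = trace_span!("my_span", greeting = "hello world", parting = field::Empty);
+///
+/// // ... some time later ...
+///
+/// // Record a value for the `parting` field.
+/// span.record("parting", &"goodbye world!");
+/// ```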
+#[derive(Debug, Eq, PartialEq)]
+pub struct Empty;
+
+/// Describes the fields present on a span.
+///
+/// ## Equality
+///
+/// In well-behaved applications, two `FieldSet`s [initialized] with equal
+/// [callsite identifiers] will have identical fields. Consequently, in release
+/// builds, [`FieldSet::eq`] *only* checks that its arguments have equal
+/// callsites. However, the equality of field names is checked in debug builds.
+///
+/// [initialized]: Self::new
+/// [callsite identifiers]: callsite::Identifier
+pub struct FieldSet {
+ /// The names of each field on the described span.
+ names: &'static [&'static str],
+ /// The callsite where the described span originates.
+ callsite: callsite::Identifier,
+}
+
+/// A set of fields and values for a span.
+pub struct ValueSet<'a> {
+ values: &'a [(&'a Field, Option<&'a (dyn Value + 'a)>)],
+ fields: &'a FieldSet,
+}
+
+/// An iterator over a set of fields.
+#[derive(Debug)]
+pub struct Iter {
+ idxs: Range<usize>,
+ fields: FieldSet,
+}
+
+/// Visits typed values.
+///
+/// An instance of `Visit` ("a visitor") represents the logic necessary to
+/// record field values of various types. When an implementor of [`Value`] is
+/// [recorded], it calls the appropriate method on the provided visitor to
+/// indicate the type that value should be recorded as.
+///
+/// When a [`Subscriber`] implementation [records an `Event`] or a
+/// [set of `Value`s added to a `Span`], it can pass an `&mut Visit` to the
+/// `record` method on the provided [`ValueSet`] or [`Event`]. This visitor
+/// will then be used to record all the field-value pairs present on that
+/// `Event` or `ValueSet`.
+///
+/// # Examples
+///
+/// A simple visitor that writes to a string might be implemented like so:
+/// ```
+/// # extern crate tracing_core as tracing;
+/// use std::fmt::{self, Write};
+/// use tracing::field::{Value, Visit, Field};
+/// pub struct StringVisitor<'a> {
+/// string: &'a mut String,
+/// }
+///
+/// impl<'a> Visit for StringVisitor<'a> {
+/// fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
+/// write!(self.string, "{} = {:?}; ", field.name(), value).unwrap();
+/// }
+/// }
+/// ```
+/// This visitor will format each recorded value using `fmt::Debug`, and
+/// append the field name and formatted value to the provided string,
+/// regardless of the type of the recorded value. When all the values have
+/// been recorded, the `StringVisitor` may be dropped, allowing the string
+/// to be printed or stored in some other data structure.
+///
+/// The `Visit` trait provides default implementations for `record_f64`,
+/// `record_i64`, `record_u64`, `record_i128`, `record_u128`, `record_bool`,
+/// `record_str`, and `record_error`, which simply forward the recorded value
+/// to `record_debug`. Thus, `record_debug` is the only method which a `Visit`
+/// implementation *must* implement. However, visitors may override the default
+/// implementations of these functions in order to implement type-specific
+/// behavior.
+///
+/// Additionally, when a visitor receives a value of a type it does not care
+/// about, it is free to ignore those values completely. For example, a
+/// visitor which only records numeric data might look like this:
+///
+/// ```
+/// # extern crate tracing_core as tracing;
+/// # use std::fmt::{self, Write};
+/// # use tracing::field::{Value, Visit, Field};
+/// pub struct SumVisitor {
+/// sum: i64,
+/// }
+///
+/// impl Visit for SumVisitor {
+/// fn record_i64(&mut self, _field: &Field, value: i64) {
+/// self.sum += value;
+/// }
+///
+/// fn record_u64(&mut self, _field: &Field, value: u64) {
+/// self.sum += value as i64;
+/// }
+///
+/// fn record_debug(&mut self, _field: &Field, _value: &dyn fmt::Debug) {
+/// // Do nothing
+/// }
+/// }
+/// ```
+///
+/// This visitor (which is probably not particularly useful) keeps a running
+/// sum of all the numeric values it records, and ignores all other values. A
+/// more practical example of recording typed values is presented in
+/// `examples/counters.rs`, which demonstrates a very simple metrics system
+/// implemented using `tracing`.
+///
+/// <div class="example-wrap" style="display:inline-block">
+/// <pre class="ignore" style="white-space:normal;font:inherit;">
+/// <strong>Note</strong>: The <code>record_error</code> trait method is only
+/// available when the Rust standard library is present, as it requires the
+/// <code>std::error::Error</code> trait.
+/// </pre></div>
+///
+/// [recorded]: Value::record
+/// [`Subscriber`]: super::subscriber::Subscriber
+/// [records an `Event`]: super::subscriber::Subscriber::event
+/// [set of `Value`s added to a `Span`]: super::subscriber::Subscriber::record
+/// [`Event`]: super::event::Event
+pub trait Visit {
+ /// Visits an arbitrary type implementing the [`valuable`] crate's `Valuable` trait.
+ ///
+ /// [`valuable`]: https://docs.rs/valuable
+ #[cfg(all(tracing_unstable, feature = "valuable"))]
+ #[cfg_attr(docsrs, doc(cfg(all(tracing_unstable, feature = "valuable"))))]
+ fn record_value(&mut self, field: &Field, value: valuable::Value<'_>) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit a double-precision floating point value.
+ fn record_f64(&mut self, field: &Field, value: f64) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit a signed 64-bit integer value.
+ fn record_i64(&mut self, field: &Field, value: i64) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit an unsigned 64-bit integer value.
+ fn record_u64(&mut self, field: &Field, value: u64) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit a signed 128-bit integer value.
+ fn record_i128(&mut self, field: &Field, value: i128) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit an unsigned 128-bit integer value.
+ fn record_u128(&mut self, field: &Field, value: u128) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit a boolean value.
+ fn record_bool(&mut self, field: &Field, value: bool) {
+ self.record_debug(field, &value)
+ }
+
+ /// Visit a string value.
+ fn record_str(&mut self, field: &Field, value: &str) {
+ self.record_debug(field, &value)
+ }
+
+ /// Records a type implementing `Error`.
+ ///
+ /// <div class="example-wrap" style="display:inline-block">
+ /// <pre class="ignore" style="white-space:normal;font:inherit;">
+ /// <strong>Note</strong>: This is only enabled when the Rust standard library is
+ /// present.
+ /// </pre></div>
+ #[cfg(feature = "std")]
+ #[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+ fn record_error(&mut self, field: &Field, value: &(dyn std::error::Error + 'static)) {
+ self.record_debug(field, &DisplayValue(value))
+ }
+
+ /// Visit a value implementing `fmt::Debug`.
+ fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug);
+}
+
+/// A field value of an erased type.
+///
+/// Implementors of `Value` may call the appropriate typed recording methods on
+/// the [visitor] passed to their `record` method in order to indicate how
+/// their data should be recorded.
+///
+/// [visitor]: Visit
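+///
+/// For example, closures over `(&Field, &dyn fmt::Debug)` implement [`Visit`],
+/// so a primitive can be recorded through one directly. Assuming a `field` key
+/// obtained from some `FieldSet` (the setup is omitted, so this sketch is not
+/// compiled here):
+///
+/// ```ignore
+/// use core::fmt;
+/// use tracing_core::field::{Field, Value};
+///
+/// fn record_answer(field: &Field) {
+///     let mut visitor = |f: &Field, value: &dyn fmt::Debug| {
+///         println!("{} = {:?}", f.name(), value);
+///     };
+///     // `i64`'s `Value` impl calls `record_i64`, whose default implementation
+///     // forwards the value to the closure's `record_debug`.
+///     42i64.record(field, &mut visitor);
+/// }
+/// ```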
+pub trait Value: crate::sealed::Sealed {
+ /// Visits this value with the given `Visitor`.
+ fn record(&self, key: &Field, visitor: &mut dyn Visit);
+}
+
+/// A `Value` which serializes using `fmt::Display`.
+///
+/// Uses `record_debug` in the `Value` implementation so that the `Display`
+/// formatting is deferred until the visitor actually formats the value,
+/// avoiding an unnecessary intermediate evaluation.
+#[derive(Clone)]
+pub struct DisplayValue<T: fmt::Display>(T);
+
+/// A `Value` which serializes as a string using `fmt::Debug`.
+#[derive(Clone)]
+pub struct DebugValue<T: fmt::Debug>(T);
+
+/// Wraps a type implementing `fmt::Display` as a `Value` that can be
+/// recorded using its `Display` implementation.
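+///
+/// For example, with the `tracing` crate's macros (which are not part of this
+/// crate), a value can be recorded using its `Display` implementation rather
+/// than its `Debug` implementation; this sketch is not compiled here:
+///
+/// ```ignore
+/// use std::net::Ipv4Addr;
+/// use tracing::field;
+///
+/// let addr = Ipv4Addr::new(127, 0, 0, 1);
+/// // Record `addr` via `fmt::Display` (the macros' `%addr` sigil is shorthand
+/// // for the same thing).
+/// tracing::info!(peer.addr = field::display(addr), "connected");
+/// ```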
+pub fn display<T>(t: T) -> DisplayValue<T>
+where
+ T: fmt::Display,
+{
+ DisplayValue(t)
+}
+
+/// Wraps a type implementing `fmt::Debug` as a `Value` that can be
+/// recorded using its `Debug` implementation.
+pub fn debug<T>(t: T) -> DebugValue<T>
+where
+ T: fmt::Debug,
+{
+ DebugValue(t)
+}
+
+/// Wraps a type implementing [`Valuable`] as a `Value` that
+/// can be recorded using its `Valuable` implementation.
+///
+/// [`Valuable`]: https://docs.rs/valuable/latest/valuable/trait.Valuable.html
+#[cfg(all(tracing_unstable, feature = "valuable"))]
+#[cfg_attr(docsrs, doc(cfg(all(tracing_unstable, feature = "valuable"))))]
+pub fn valuable<T>(t: &T) -> valuable::Value<'_>
+where
+ T: valuable::Valuable,
+{
+ t.as_value()
+}
+
+// ===== impl Visit =====
+
+impl<'a, 'b> Visit for fmt::DebugStruct<'a, 'b> {
+ fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
+ self.field(field.name(), value);
+ }
+}
+
+impl<'a, 'b> Visit for fmt::DebugMap<'a, 'b> {
+ fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
+ self.entry(&format_args!("{}", field), value);
+ }
+}
+
+impl<F> Visit for F
+where
+ F: FnMut(&Field, &dyn fmt::Debug),
+{
+ fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
+ (self)(field, value)
+ }
+}
+
+// ===== impl Value =====
+
+macro_rules! impl_values {
+ ( $( $record:ident( $( $whatever:tt)+ ) ),+ ) => {
+ $(
+ impl_value!{ $record( $( $whatever )+ ) }
+ )+
+ }
+}
+
+macro_rules! ty_to_nonzero {
+ (u8) => {
+ NonZeroU8
+ };
+ (u16) => {
+ NonZeroU16
+ };
+ (u32) => {
+ NonZeroU32
+ };
+ (u64) => {
+ NonZeroU64
+ };
+ (u128) => {
+ NonZeroU128
+ };
+ (usize) => {
+ NonZeroUsize
+ };
+ (i8) => {
+ NonZeroI8
+ };
+ (i16) => {
+ NonZeroI16
+ };
+ (i32) => {
+ NonZeroI32
+ };
+ (i64) => {
+ NonZeroI64
+ };
+ (i128) => {
+ NonZeroI128
+ };
+ (isize) => {
+ NonZeroIsize
+ };
+}
+
+macro_rules! impl_one_value {
+ (f32, $op:expr, $record:ident) => {
+ impl_one_value!(normal, f32, $op, $record);
+ };
+ (f64, $op:expr, $record:ident) => {
+ impl_one_value!(normal, f64, $op, $record);
+ };
+ (bool, $op:expr, $record:ident) => {
+ impl_one_value!(normal, bool, $op, $record);
+ };
+ ($value_ty:tt, $op:expr, $record:ident) => {
+ impl_one_value!(normal, $value_ty, $op, $record);
+ impl_one_value!(nonzero, $value_ty, $op, $record);
+ };
+ (normal, $value_ty:tt, $op:expr, $record:ident) => {
+ impl $crate::sealed::Sealed for $value_ty {}
+ impl $crate::field::Value for $value_ty {
+ fn record(&self, key: &$crate::field::Field, visitor: &mut dyn $crate::field::Visit) {
+ visitor.$record(key, $op(*self))
+ }
+ }
+ };
+ (nonzero, $value_ty:tt, $op:expr, $record:ident) => {
+ // This `use num::*;` is reported as unused because it is emitted for every
+ // invocation of this macro, so all but the first `use` are indeed redundant.
+ // The import is still needed, however, because a path cannot contain a macro
+ // invocation such as `ty_to_nonzero!($value_ty)`.
+ #[allow(clippy::useless_attribute, unused)]
+ use num::*;
+ impl $crate::sealed::Sealed for ty_to_nonzero!($value_ty) {}
+ impl $crate::field::Value for ty_to_nonzero!($value_ty) {
+ fn record(&self, key: &$crate::field::Field, visitor: &mut dyn $crate::field::Visit) {
+ visitor.$record(key, $op(self.get()))
+ }
+ }
+ };
+}
+
+macro_rules! impl_value {
+ ( $record:ident( $( $value_ty:tt ),+ ) ) => {
+ $(
+ impl_one_value!($value_ty, |this: $value_ty| this, $record);
+ )+
+ };
+ ( $record:ident( $( $value_ty:tt ),+ as $as_ty:ty) ) => {
+ $(
+ impl_one_value!($value_ty, |this: $value_ty| this as $as_ty, $record);
+ )+
+ };
+}
+
+// ===== impl Value =====
+
+impl_values! {
+ record_u64(u64),
+ record_u64(usize, u32, u16, u8 as u64),
+ record_i64(i64),
+ record_i64(isize, i32, i16, i8 as i64),
+ record_u128(u128),
+ record_i128(i128),
+ record_bool(bool),
+ record_f64(f64, f32 as f64)
+}
+
+impl<T: crate::sealed::Sealed> crate::sealed::Sealed for Wrapping<T> {}
+impl<T: crate::field::Value> crate::field::Value for Wrapping<T> {
+ fn record(&self, key: &crate::field::Field, visitor: &mut dyn crate::field::Visit) {
+ self.0.record(key, visitor)
+ }
+}
+
+impl crate::sealed::Sealed for str {}
+
+impl Value for str {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_str(key, self)
+ }
+}
+
+#[cfg(feature = "std")]
+impl crate::sealed::Sealed for dyn std::error::Error + 'static {}
+
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+impl Value for dyn std::error::Error + 'static {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_error(key, self)
+ }
+}
+
+#[cfg(feature = "std")]
+impl crate::sealed::Sealed for dyn std::error::Error + Send + 'static {}
+
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+impl Value for dyn std::error::Error + Send + 'static {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ (self as &dyn std::error::Error).record(key, visitor)
+ }
+}
+
+#[cfg(feature = "std")]
+impl crate::sealed::Sealed for dyn std::error::Error + Sync + 'static {}
+
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+impl Value for dyn std::error::Error + Sync + 'static {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ (self as &dyn std::error::Error).record(key, visitor)
+ }
+}
+
+#[cfg(feature = "std")]
+impl crate::sealed::Sealed for dyn std::error::Error + Send + Sync + 'static {}
+
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+impl Value for dyn std::error::Error + Send + Sync + 'static {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ (self as &dyn std::error::Error).record(key, visitor)
+ }
+}
+
+impl<'a, T: ?Sized> crate::sealed::Sealed for &'a T where T: Value + crate::sealed::Sealed + 'a {}
+
+impl<'a, T: ?Sized> Value for &'a T
+where
+ T: Value + 'a,
+{
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ (*self).record(key, visitor)
+ }
+}
+
+impl<'a, T: ?Sized> crate::sealed::Sealed for &'a mut T where T: Value + crate::sealed::Sealed + 'a {}
+
+impl<'a, T: ?Sized> Value for &'a mut T
+where
+ T: Value + 'a,
+{
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ // Don't use `(*self).record(key, visitor)`, otherwise would
+ // cause stack overflow due to `unconditional_recursion`.
+ T::record(self, key, visitor)
+ }
+}
+
+impl<'a> crate::sealed::Sealed for fmt::Arguments<'a> {}
+
+impl<'a> Value for fmt::Arguments<'a> {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_debug(key, self)
+ }
+}
+
+impl<T: ?Sized> crate::sealed::Sealed for crate::stdlib::boxed::Box<T> where T: Value {}
+
+impl<T: ?Sized> Value for crate::stdlib::boxed::Box<T>
+where
+ T: Value,
+{
+ #[inline]
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ self.as_ref().record(key, visitor)
+ }
+}
+
+impl crate::sealed::Sealed for String {}
+impl Value for String {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_str(key, self.as_str())
+ }
+}
+
+impl fmt::Debug for dyn Value {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ // We are only going to be recording the field value, so we don't
+ // actually care about the field name here.
+ struct NullCallsite;
+ static NULL_CALLSITE: NullCallsite = NullCallsite;
+ impl crate::callsite::Callsite for NullCallsite {
+ fn set_interest(&self, _: crate::subscriber::Interest) {
+ unreachable!("you somehow managed to register the null callsite?")
+ }
+
+ fn metadata(&self) -> &crate::Metadata<'_> {
+ unreachable!("you somehow managed to access the null callsite?")
+ }
+ }
+
+ static FIELD: Field = Field {
+ i: 0,
+ fields: FieldSet::new(&[], crate::identify_callsite!(&NULL_CALLSITE)),
+ };
+
+ let mut res = Ok(());
+ self.record(&FIELD, &mut |_: &Field, val: &dyn fmt::Debug| {
+ res = write!(f, "{:?}", val);
+ });
+ res
+ }
+}
+
+impl fmt::Display for dyn Value {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Debug::fmt(self, f)
+ }
+}
+
+// ===== impl DisplayValue =====
+
+impl<T: fmt::Display> crate::sealed::Sealed for DisplayValue<T> {}
+
+impl<T> Value for DisplayValue<T>
+where
+ T: fmt::Display,
+{
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_debug(key, self)
+ }
+}
+
+impl<T: fmt::Display> fmt::Debug for DisplayValue<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Display::fmt(self, f)
+ }
+}
+
+impl<T: fmt::Display> fmt::Display for DisplayValue<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ self.0.fmt(f)
+ }
+}
+
+// ===== impl DebugValue =====
+
+impl<T: fmt::Debug> crate::sealed::Sealed for DebugValue<T> {}
+
+impl<T> Value for DebugValue<T>
+where
+ T: fmt::Debug,
+{
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_debug(key, &self.0)
+ }
+}
+
+impl<T: fmt::Debug> fmt::Debug for DebugValue<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ self.0.fmt(f)
+ }
+}
+
+// ===== impl ValuableValue =====
+
+#[cfg(all(tracing_unstable, feature = "valuable"))]
+impl crate::sealed::Sealed for valuable::Value<'_> {}
+
+#[cfg(all(tracing_unstable, feature = "valuable"))]
+#[cfg_attr(docsrs, doc(cfg(all(tracing_unstable, feature = "valuable"))))]
+impl Value for valuable::Value<'_> {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_value(key, *self)
+ }
+}
+
+#[cfg(all(tracing_unstable, feature = "valuable"))]
+impl crate::sealed::Sealed for &'_ dyn valuable::Valuable {}
+
+#[cfg(all(tracing_unstable, feature = "valuable"))]
+#[cfg_attr(docsrs, doc(cfg(all(tracing_unstable, feature = "valuable"))))]
+impl Value for &'_ dyn valuable::Valuable {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ visitor.record_value(key, self.as_value())
+ }
+}
+
+impl crate::sealed::Sealed for Empty {}
+impl Value for Empty {
+ #[inline]
+ fn record(&self, _: &Field, _: &mut dyn Visit) {}
+}
+
+impl<T: Value> crate::sealed::Sealed for Option<T> {}
+
+impl<T: Value> Value for Option<T> {
+ fn record(&self, key: &Field, visitor: &mut dyn Visit) {
+ if let Some(v) = &self {
+ v.record(key, visitor)
+ }
+ }
+}
+
+// ===== impl Field =====
+
+impl Field {
+ /// Returns an [`Identifier`] that uniquely identifies the [`Callsite`]
+ /// which defines this field.
+ ///
+ /// [`Identifier`]: super::callsite::Identifier
+ /// [`Callsite`]: super::callsite::Callsite
+ #[inline]
+ pub fn callsite(&self) -> callsite::Identifier {
+ self.fields.callsite()
+ }
+
+ /// Returns a string representing the name of the field.
+ pub fn name(&self) -> &'static str {
+ self.fields.names[self.i]
+ }
+}
+
+impl fmt::Display for Field {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.pad(self.name())
+ }
+}
+
+impl AsRef<str> for Field {
+ fn as_ref(&self) -> &str {
+ self.name()
+ }
+}
+
+impl PartialEq for Field {
+ fn eq(&self, other: &Self) -> bool {
+ self.callsite() == other.callsite() && self.i == other.i
+ }
+}
+
+impl Eq for Field {}
+
+impl Hash for Field {
+ fn hash<H>(&self, state: &mut H)
+ where
+ H: Hasher,
+ {
+ self.callsite().hash(state);
+ self.i.hash(state);
+ }
+}
+
+impl Clone for Field {
+ fn clone(&self) -> Self {
+ Field {
+ i: self.i,
+ fields: FieldSet {
+ names: self.fields.names,
+ callsite: self.fields.callsite(),
+ },
+ }
+ }
+}
+
+// ===== impl FieldSet =====
+
+impl FieldSet {
+ /// Constructs a new `FieldSet` with the given array of field names and callsite.
+ pub const fn new(names: &'static [&'static str], callsite: callsite::Identifier) -> Self {
+ Self { names, callsite }
+ }
+
+ /// Returns an [`Identifier`] that uniquely identifies the [`Callsite`]
+ /// which defines this set of fields.
+ ///
+ /// [`Identifier`]: super::callsite::Identifier
+ /// [`Callsite`]: super::callsite::Callsite
+ pub(crate) fn callsite(&self) -> callsite::Identifier {
+ callsite::Identifier(self.callsite.0)
+ }
+
+ /// Returns the [`Field`] named `name`, or `None` if no such field exists.
+ ///
+ /// [`Field`]: super::Field
+ pub fn field<Q: ?Sized>(&self, name: &Q) -> Option<Field>
+ where
+ Q: Borrow<str>,
+ {
+ let name = &name.borrow();
+ self.names.iter().position(|f| f == name).map(|i| Field {
+ i,
+ fields: FieldSet {
+ names: self.names,
+ callsite: self.callsite(),
+ },
+ })
+ }
+
+ /// Returns `true` if `self` contains the given `field`.
+ ///
+ /// <div class="example-wrap" style="display:inline-block">
+ /// <pre class="ignore" style="white-space:normal;font:inherit;">
+ /// <strong>Note</strong>: If <code>field</code> shares a name with a field
+ /// in this <code>FieldSet</code>, but was created by a <code>FieldSet</code>
+ /// with a different callsite, this <code>FieldSet</code> does <em>not</em>
+ /// contain it. This is so that if two separate span callsites define a field
+ /// named "foo", the <code>Field</code>s corresponding to "foo" at each of
+ /// those callsites are not equivalent.
+ /// </pre></div>
+ pub fn contains(&self, field: &Field) -> bool {
+ field.callsite() == self.callsite() && field.i <= self.len()
+ }
+
+ /// Returns an iterator over the `Field`s in this `FieldSet`.
+ pub fn iter(&self) -> Iter {
+ let idxs = 0..self.len();
+ Iter {
+ idxs,
+ fields: FieldSet {
+ names: self.names,
+ callsite: self.callsite(),
+ },
+ }
+ }
+
+ /// Returns a new `ValueSet` with entries for this `FieldSet`'s values.
+ ///
+ /// Note that a `ValueSet` may not be constructed with arrays of over 32
+ /// elements.
+ #[doc(hidden)]
+ pub fn value_set<'v, V>(&'v self, values: &'v V) -> ValueSet<'v>
+ where
+ V: ValidLen<'v>,
+ {
+ ValueSet {
+ fields: self,
+ values: values.borrow(),
+ }
+ }
+
+ /// Returns the number of fields in this `FieldSet`.
+ #[inline]
+ pub fn len(&self) -> usize {
+ self.names.len()
+ }
+
+ /// Returns whether or not this `FieldSet` has fields.
+ #[inline]
+ pub fn is_empty(&self) -> bool {
+ self.names.is_empty()
+ }
+}
+
+impl<'a> IntoIterator for &'a FieldSet {
+ type IntoIter = Iter;
+ type Item = Field;
+ #[inline]
+ fn into_iter(self) -> Self::IntoIter {
+ self.iter()
+ }
+}
+
+impl fmt::Debug for FieldSet {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.debug_struct("FieldSet")
+ .field("names", &self.names)
+ .field("callsite", &self.callsite)
+ .finish()
+ }
+}
+
+impl fmt::Display for FieldSet {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.debug_set()
+ .entries(self.names.iter().map(display))
+ .finish()
+ }
+}
+
+impl Eq for FieldSet {}
+
+impl PartialEq for FieldSet {
+ fn eq(&self, other: &Self) -> bool {
+ // If the two `FieldSet`s are the same instance, they are trivially equal.
+ if core::ptr::eq(self, other) {
+ true
+ } else if cfg!(not(debug_assertions)) {
+ // In a well-behaving application, two `FieldSet`s can be assumed to
+ // be totally equal so long as they share the same callsite.
+ self.callsite == other.callsite
+ } else {
+ // However, when debug-assertions are enabled, do NOT assume that
+ // the application is well-behaving; check the field names of each
+ // `FieldSet` for equality.
+
+ // `FieldSet` is destructured here to ensure a compile-error if the
+ // fields of `FieldSet` change.
+ let Self {
+ names: lhs_names,
+ callsite: lhs_callsite,
+ } = self;
+
+ let Self {
+ names: rhs_names,
+ callsite: rhs_callsite,
+ } = &other;
+
+ // Check callsite equality first, as it is probably cheaper to do
+ // than str equality.
+ lhs_callsite == rhs_callsite && lhs_names == rhs_names
+ }
+ }
+}
+
+// ===== impl Iter =====
+
+impl Iterator for Iter {
+ type Item = Field;
+ fn next(&mut self) -> Option<Field> {
+ let i = self.idxs.next()?;
+ Some(Field {
+ i,
+ fields: FieldSet {
+ names: self.fields.names,
+ callsite: self.fields.callsite(),
+ },
+ })
+ }
+}
+
+// ===== impl ValueSet =====
+
+impl<'a> ValueSet<'a> {
+ /// Returns an [`Identifier`] that uniquely identifies the [`Callsite`]
+ /// defining the fields this `ValueSet` refers to.
+ ///
+ /// [`Identifier`]: super::callsite::Identifier
+ /// [`Callsite`]: super::callsite::Callsite
+ #[inline]
+ pub fn callsite(&self) -> callsite::Identifier {
+ self.fields.callsite()
+ }
+
+ /// Visits all the fields in this `ValueSet` with the provided [visitor].
+ ///
+ /// [visitor]: Visit
+ pub fn record(&self, visitor: &mut dyn Visit) {
+ let my_callsite = self.callsite();
+ for (field, value) in self.values {
+ if field.callsite() != my_callsite {
+ continue;
+ }
+ if let Some(value) = value {
+ value.record(field, visitor);
+ }
+ }
+ }
+
+ /// Returns the number of fields in this `ValueSet` that would be visited
+ /// by a given [visitor] passed to the [`ValueSet::record()`] method.
+ ///
+ /// [visitor]: Visit
+ /// [`ValueSet::record()`]: ValueSet::record()
+ pub fn len(&self) -> usize {
+ let my_callsite = self.callsite();
+ self.values
+ .iter()
+ .filter(|(field, _)| field.callsite() == my_callsite)
+ .count()
+ }
+
+ /// Returns `true` if this `ValueSet` contains a value for the given `Field`.
+ pub(crate) fn contains(&self, field: &Field) -> bool {
+ field.callsite() == self.callsite()
+ && self
+ .values
+ .iter()
+ .any(|(key, val)| *key == field && val.is_some())
+ }
+
+ /// Returns true if this `ValueSet` contains _no_ values.
+ pub fn is_empty(&self) -> bool {
+ let my_callsite = self.callsite();
+ self.values
+ .iter()
+ .all(|(key, val)| val.is_none() || key.callsite() != my_callsite)
+ }
+
+ pub(crate) fn field_set(&self) -> &FieldSet {
+ self.fields
+ }
+}
+
+impl<'a> fmt::Debug for ValueSet<'a> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ self.values
+ .iter()
+ .fold(&mut f.debug_struct("ValueSet"), |dbg, (key, v)| {
+ if let Some(val) = v {
+ val.record(key, dbg);
+ }
+ dbg
+ })
+ .field("callsite", &self.callsite())
+ .finish()
+ }
+}
+
+impl<'a> fmt::Display for ValueSet<'a> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ self.values
+ .iter()
+ .fold(&mut f.debug_map(), |dbg, (key, v)| {
+ if let Some(val) = v {
+ val.record(key, dbg);
+ }
+ dbg
+ })
+ .finish()
+ }
+}
+
+// ===== impl ValidLen =====
+
+mod private {
+ use super::*;
+
+ /// Marker trait implemented by arrays which are of valid length to
+ /// construct a `ValueSet`.
+ ///
+ /// `ValueSet`s may only be constructed from arrays containing 32 or fewer
+ /// elements, to ensure the array is small enough to always be allocated on the
+ /// stack. This trait is only implemented by arrays of an appropriate length,
+ /// ensuring at compile time that only correctly sized arrays are used.
+ pub trait ValidLen<'a>: Borrow<[(&'a Field, Option<&'a (dyn Value + 'a)>)]> {}
+}
+
+macro_rules! impl_valid_len {
+ ( $( $len:tt ),+ ) => {
+ $(
+ impl<'a> private::ValidLen<'a> for
+ [(&'a Field, Option<&'a (dyn Value + 'a)>); $len] {}
+ )+
+ }
+}
+
+impl_valid_len! {
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
+ 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32
+}
+
+#[cfg(test)]
+mod test {
+ use super::*;
+ use crate::metadata::{Kind, Level, Metadata};
+ use crate::stdlib::{borrow::ToOwned, string::String};
+
+ struct TestCallsite1;
+ static TEST_CALLSITE_1: TestCallsite1 = TestCallsite1;
+ static TEST_META_1: Metadata<'static> = metadata! {
+ name: "field_test1",
+ target: module_path!(),
+ level: Level::INFO,
+ fields: &["foo", "bar", "baz"],
+ callsite: &TEST_CALLSITE_1,
+ kind: Kind::SPAN,
+ };
+
+ impl crate::callsite::Callsite for TestCallsite1 {
+ fn set_interest(&self, _: crate::subscriber::Interest) {
+ unimplemented!()
+ }
+
+ fn metadata(&self) -> &Metadata<'_> {
+ &TEST_META_1
+ }
+ }
+
+ struct TestCallsite2;
+ static TEST_CALLSITE_2: TestCallsite2 = TestCallsite2;
+ static TEST_META_2: Metadata<'static> = metadata! {
+ name: "field_test2",
+ target: module_path!(),
+ level: Level::INFO,
+ fields: &["foo", "bar", "baz"],
+ callsite: &TEST_CALLSITE_2,
+ kind: Kind::SPAN,
+ };
+
+ impl crate::callsite::Callsite for TestCallsite2 {
+ fn set_interest(&self, _: crate::subscriber::Interest) {
+ unimplemented!()
+ }
+
+ fn metadata(&self) -> &Metadata<'_> {
+ &TEST_META_2
+ }
+ }
+
+ #[test]
+ fn value_set_with_no_values_is_empty() {
+ let fields = TEST_META_1.fields();
+ let values = &[
+ (&fields.field("foo").unwrap(), None),
+ (&fields.field("bar").unwrap(), None),
+ (&fields.field("baz").unwrap(), None),
+ ];
+ let valueset = fields.value_set(values);
+ assert!(valueset.is_empty());
+ }
+
+ #[test]
+ fn empty_value_set_is_empty() {
+ let fields = TEST_META_1.fields();
+ let valueset = fields.value_set(&[]);
+ assert!(valueset.is_empty());
+ }
+
+ #[test]
+ fn value_sets_with_fields_from_other_callsites_are_empty() {
+ let fields = TEST_META_1.fields();
+ let values = &[
+ (&fields.field("foo").unwrap(), Some(&1 as &dyn Value)),
+ (&fields.field("bar").unwrap(), Some(&2 as &dyn Value)),
+ (&fields.field("baz").unwrap(), Some(&3 as &dyn Value)),
+ ];
+ let valueset = TEST_META_2.fields().value_set(values);
+ assert!(valueset.is_empty())
+ }
+
+ #[test]
+ fn sparse_value_sets_are_not_empty() {
+ let fields = TEST_META_1.fields();
+ let values = &[
+ (&fields.field("foo").unwrap(), None),
+ (&fields.field("bar").unwrap(), Some(&57 as &dyn Value)),
+ (&fields.field("baz").unwrap(), None),
+ ];
+ let valueset = fields.value_set(values);
+ assert!(!valueset.is_empty());
+ }
+
+ #[test]
+ fn fields_from_other_callsites_are_skipped() {
+ let fields = TEST_META_1.fields();
+ let values = &[
+ (&fields.field("foo").unwrap(), None),
+ (
+ &TEST_META_2.fields().field("bar").unwrap(),
+ Some(&57 as &dyn Value),
+ ),
+ (&fields.field("baz").unwrap(), None),
+ ];
+
+ struct MyVisitor;
+ impl Visit for MyVisitor {
+ fn record_debug(&mut self, field: &Field, _: &dyn (crate::stdlib::fmt::Debug)) {
+ assert_eq!(field.callsite(), TEST_META_1.callsite())
+ }
+ }
+ let valueset = fields.value_set(values);
+ valueset.record(&mut MyVisitor);
+ }
+
+ #[test]
+ fn empty_fields_are_skipped() {
+ let fields = TEST_META_1.fields();
+ let values = &[
+ (&fields.field("foo").unwrap(), Some(&Empty as &dyn Value)),
+ (&fields.field("bar").unwrap(), Some(&57 as &dyn Value)),
+ (&fields.field("baz").unwrap(), Some(&Empty as &dyn Value)),
+ ];
+
+ struct MyVisitor;
+ impl Visit for MyVisitor {
+ fn record_debug(&mut self, field: &Field, _: &dyn (crate::stdlib::fmt::Debug)) {
+ assert_eq!(field.name(), "bar")
+ }
+ }
+ let valueset = fields.value_set(values);
+ valueset.record(&mut MyVisitor);
+ }
+
+ #[test]
+ fn record_debug_fn() {
+ let fields = TEST_META_1.fields();
+ let values = &[
+ (&fields.field("foo").unwrap(), Some(&1 as &dyn Value)),
+ (&fields.field("bar").unwrap(), Some(&2 as &dyn Value)),
+ (&fields.field("baz").unwrap(), Some(&3 as &dyn Value)),
+ ];
+ let valueset = fields.value_set(values);
+ let mut result = String::new();
+ valueset.record(&mut |_: &Field, value: &dyn fmt::Debug| {
+ use crate::stdlib::fmt::Write;
+ write!(&mut result, "{:?}", value).unwrap();
+ });
+ assert_eq!(result, "123".to_owned());
+ }
+
+ #[test]
+ #[cfg(feature = "std")]
+ fn record_error() {
+ let fields = TEST_META_1.fields();
+ let err: Box<dyn std::error::Error + Send + Sync + 'static> =
+ std::io::Error::new(std::io::ErrorKind::Other, "lol").into();
+ let values = &[
+ (&fields.field("foo").unwrap(), Some(&err as &dyn Value)),
+ (&fields.field("bar").unwrap(), Some(&Empty as &dyn Value)),
+ (&fields.field("baz").unwrap(), Some(&Empty as &dyn Value)),
+ ];
+ let valueset = fields.value_set(values);
+ let mut result = String::new();
+ valueset.record(&mut |_: &Field, value: &dyn fmt::Debug| {
+ use core::fmt::Write;
+ write!(&mut result, "{:?}", value).unwrap();
+ });
+ assert_eq!(result, format!("{}", err));
+ }
+}
diff --git a/third_party/rust/tracing-core/src/lazy.rs b/third_party/rust/tracing-core/src/lazy.rs
new file mode 100644
index 0000000000..4f004e6364
--- /dev/null
+++ b/third_party/rust/tracing-core/src/lazy.rs
@@ -0,0 +1,76 @@
+#[cfg(feature = "std")]
+pub(crate) use once_cell::sync::Lazy;
+
+#[cfg(not(feature = "std"))]
+pub(crate) use self::spin::Lazy;
+
+#[cfg(not(feature = "std"))]
+mod spin {
+ //! This is the `once_cell::sync::Lazy` type, but modified to use our
+ //! `spin::Once` type rather than `OnceCell`. This is used to replace
+ //! `once_cell::sync::Lazy` on `no-std` builds.
+ use crate::spin::Once;
+ use core::{cell::Cell, fmt, ops::Deref};
+
+ /// Re-implementation of `once_cell::sync::Lazy` on top of `spin::Once`
+ /// rather than `OnceCell`.
+ ///
+ /// This is used when the standard library is disabled.
+ pub(crate) struct Lazy<T, F = fn() -> T> {
+ cell: Once<T>,
+ init: Cell<Option<F>>,
+ }
+
+ impl<T: fmt::Debug, F> fmt::Debug for Lazy<T, F> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.debug_struct("Lazy")
+ .field("cell", &self.cell)
+ .field("init", &"..")
+ .finish()
+ }
+ }
+
+ // We never create a `&F` from a `&Lazy<T, F>` so it is fine to not impl
+ // `Sync` for `F`. We do create a `&mut Option<F>` in `force`, but this is
+ // properly synchronized, so it only happens once and therefore does not
+ // affect the soundness of this impl.
+ unsafe impl<T, F: Send> Sync for Lazy<T, F> where Once<T>: Sync {}
+ // auto-derived `Send` impl is OK.
+
+ impl<T, F> Lazy<T, F> {
+ /// Creates a new lazy value with the given initializing function.
+ pub(crate) const fn new(init: F) -> Lazy<T, F> {
+ Lazy {
+ cell: Once::new(),
+ init: Cell::new(Some(init)),
+ }
+ }
+ }
+
+ impl<T, F: FnOnce() -> T> Lazy<T, F> {
+ /// Forces the evaluation of this lazy value and returns a reference to
+ /// the result.
+ ///
+ /// This is equivalent to the `Deref` impl, but is explicit.
+ pub(crate) fn force(this: &Lazy<T, F>) -> &T {
+ this.cell.call_once(|| match this.init.take() {
+ Some(f) => f(),
+ None => panic!("Lazy instance has previously been poisoned"),
+ })
+ }
+ }
+
+ impl<T, F: FnOnce() -> T> Deref for Lazy<T, F> {
+ type Target = T;
+ fn deref(&self) -> &T {
+ Lazy::force(self)
+ }
+ }
+
+ impl<T: Default> Default for Lazy<T> {
+ /// Creates a new lazy value using `Default` as the initializing function.
+ fn default() -> Lazy<T> {
+ Lazy::new(T::default)
+ }
+ }
+}
diff --git a/third_party/rust/tracing-core/src/lib.rs b/third_party/rust/tracing-core/src/lib.rs
new file mode 100644
index 0000000000..c1f87b22f0
--- /dev/null
+++ b/third_party/rust/tracing-core/src/lib.rs
@@ -0,0 +1,295 @@
+//! Core primitives for `tracing`.
+//!
+//! [`tracing`] is a framework for instrumenting Rust programs to collect
+//! structured, event-based diagnostic information. This crate defines the core
+//! primitives of `tracing`.
+//!
+//! This crate provides:
+//!
+//! * [`span::Id`] identifies a span within the execution of a program.
+//!
+//! * [`Event`] represents a single event within a trace.
+//!
+//! * [`Subscriber`], the trait implemented to collect trace data.
+//!
+//! * [`Metadata`] and [`Callsite`] provide information describing spans and
+//! `Event`s.
+//!
+//! * [`Field`], [`FieldSet`], [`Value`], and [`ValueSet`] represent the
+//! structured data attached to a span.
+//!
+//! * [`Dispatch`] allows spans and events to be dispatched to `Subscriber`s.
+//!
+//! In addition, it defines the global callsite registry and per-thread current
+//! dispatcher which other components of the tracing system rely on.
+//!
+//! *Compiler support: [requires `rustc` 1.49+][msrv]*
+//!
+//! [msrv]: #supported-rust-versions
+//!
+//! ## Usage
+//!
+//! Application authors will typically not use this crate directly. Instead,
+//! they will use the [`tracing`] crate, which provides a much more
+//! fully-featured API. However, this crate's API will change very infrequently,
+//! so it may be used when dependencies must be very stable.
+//!
+//! `Subscriber` implementations may depend on `tracing-core` rather than
+//! `tracing`, as the additional APIs provided by `tracing` are primarily useful
+//! for instrumenting libraries and applications, and are generally not
+//! necessary for `Subscriber` implementations.
+//!
+//! The [`tokio-rs/tracing`] repository contains less stable crates designed to
+//! be used with the `tracing` ecosystem. It includes a collection of
+//! `Subscriber` implementations, as well as utility and adapter crates.
+//!
+//! ## Crate Feature Flags
+//!
+//! The following crate [feature flags] are available:
+//!
+//! * `std`: Depend on the Rust standard library (enabled by default).
+//!
+//! `no_std` users may disable this feature with `default-features = false`:
+//!
+//! ```toml
+//! [dependencies]
+//! tracing-core = { version = "0.1.22", default-features = false }
+//! ```
+//!
+//! **Note**: `tracing-core`'s `no_std` support requires `liballoc`.
+//!
+//! ### Unstable Features
+//!
+//! These feature flags enable **unstable** features. The public API may break in 0.1.x
+//! releases. To enable these features, the `--cfg tracing_unstable` flag must be
+//! passed to `rustc` when compiling.
+//!
+//! The following unstable feature flags are currently available:
+//!
+//! * `valuable`: Enables support for recording [field values] using the
+//! [`valuable`] crate.
+//!
+//! #### Enabling Unstable Features
+//!
+//! The easiest way to set the `tracing_unstable` cfg is to use the `RUSTFLAGS`
+//! env variable when running `cargo` commands:
+//!
+//! ```shell
+//! RUSTFLAGS="--cfg tracing_unstable" cargo build
+//! ```
+//! Alternatively, the following can be added to the `.cargo/config` file in a
+//! project to automatically enable the cfg flag for that project:
+//!
+//! ```toml
+//! [build]
+//! rustflags = ["--cfg", "tracing_unstable"]
+//! ```
+//!
+//! [feature flags]: https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section
+//! [field values]: crate::field
+//! [`valuable`]: https://crates.io/crates/valuable
+//!
+//! ## Supported Rust Versions
+//!
+//! Tracing is built against the latest stable release. The minimum supported
+//! version is 1.49. The current Tracing version is not guaranteed to build on
+//! Rust versions earlier than the minimum supported version.
+//!
+//! Tracing follows the same compiler support policies as the rest of the Tokio
+//! project. The current stable Rust compiler and the three most recent minor
+//! versions before it will always be supported. For example, if the current
+//! stable compiler version is 1.45, the minimum supported version will not be
+//! increased past 1.42, three minor versions prior. Increasing the minimum
+//! supported compiler version is not considered a semver breaking change as
+//! long as doing so complies with this policy.
+//!
+//!
+//! [`span::Id`]: span::Id
+//! [`Event`]: event::Event
+//! [`Subscriber`]: subscriber::Subscriber
+//! [`Metadata`]: metadata::Metadata
+//! [`Callsite`]: callsite::Callsite
+//! [`Field`]: field::Field
+//! [`FieldSet`]: field::FieldSet
+//! [`Value`]: field::Value
+//! [`ValueSet`]: field::ValueSet
+//! [`Dispatch`]: dispatcher::Dispatch
+//! [`tokio-rs/tracing`]: https://github.com/tokio-rs/tracing
+//! [`tracing`]: https://crates.io/crates/tracing
+#![doc(html_root_url = "https://docs.rs/tracing-core/0.1.22")]
+#![doc(
+ html_logo_url = "https://raw.githubusercontent.com/tokio-rs/tracing/master/assets/logo-type.png",
+ issue_tracker_base_url = "https://github.com/tokio-rs/tracing/issues/"
+)]
+#![cfg_attr(not(feature = "std"), no_std)]
+#![cfg_attr(docsrs, feature(doc_cfg), deny(rustdoc::broken_intra_doc_links))]
+#![warn(
+ missing_debug_implementations,
+ missing_docs,
+ rust_2018_idioms,
+ unreachable_pub,
+ bad_style,
+ const_err,
+ dead_code,
+ improper_ctypes,
+ non_shorthand_field_patterns,
+ no_mangle_generic_items,
+ overflowing_literals,
+ path_statements,
+ patterns_in_fns_without_body,
+ private_in_public,
+ unconditional_recursion,
+ unused,
+ unused_allocation,
+ unused_comparisons,
+ unused_parens,
+ while_true
+)]
+#[cfg(not(feature = "std"))]
+extern crate alloc;
+
+/// Statically constructs an [`Identifier`] for the provided [`Callsite`].
+///
+/// This may be used in contexts such as static initializers.
+///
+/// For example:
+/// ```rust
+/// use tracing_core::{callsite, identify_callsite};
+/// # use tracing_core::{Metadata, subscriber::Interest};
+/// # fn main() {
+/// pub struct MyCallsite {
+/// // ...
+/// }
+/// impl callsite::Callsite for MyCallsite {
+/// # fn set_interest(&self, _: Interest) { unimplemented!() }
+/// # fn metadata(&self) -> &Metadata { unimplemented!() }
+/// // ...
+/// }
+///
+/// static CALLSITE: MyCallsite = MyCallsite {
+/// // ...
+/// };
+///
+/// static CALLSITE_ID: callsite::Identifier = identify_callsite!(&CALLSITE);
+/// # }
+/// ```
+///
+/// [`Identifier`]: callsite::Identifier
+/// [`Callsite`]: callsite::Callsite
+#[macro_export]
+macro_rules! identify_callsite {
+ ($callsite:expr) => {
+ $crate::callsite::Identifier($callsite)
+ };
+}
+
+/// Statically constructs new span [metadata].
+///
+/// For example:
+/// ```rust
+/// # use tracing_core::{callsite::Callsite, subscriber::Interest};
+/// use tracing_core::metadata;
+/// use tracing_core::metadata::{Kind, Level, Metadata};
+/// # fn main() {
+/// # pub struct MyCallsite { }
+/// # impl Callsite for MyCallsite {
+/// # fn set_interest(&self, _: Interest) { unimplemented!() }
+/// # fn metadata(&self) -> &Metadata { unimplemented!() }
+/// # }
+/// #
+/// static FOO_CALLSITE: MyCallsite = MyCallsite {
+/// // ...
+/// };
+///
+/// static FOO_METADATA: Metadata = metadata!{
+/// name: "foo",
+/// target: module_path!(),
+/// level: Level::DEBUG,
+/// fields: &["bar", "baz"],
+/// callsite: &FOO_CALLSITE,
+/// kind: Kind::SPAN,
+/// };
+/// # }
+/// ```
+///
+/// [metadata]: metadata::Metadata
+/// [`Metadata::new`]: metadata::Metadata::new
+#[macro_export]
+macro_rules! metadata {
+ (
+ name: $name:expr,
+ target: $target:expr,
+ level: $level:expr,
+ fields: $fields:expr,
+ callsite: $callsite:expr,
+ kind: $kind:expr
+ ) => {
+ $crate::metadata! {
+ name: $name,
+ target: $target,
+ level: $level,
+ fields: $fields,
+ callsite: $callsite,
+ kind: $kind,
+ }
+ };
+ (
+ name: $name:expr,
+ target: $target:expr,
+ level: $level:expr,
+ fields: $fields:expr,
+ callsite: $callsite:expr,
+ kind: $kind:expr,
+ ) => {
+ $crate::metadata::Metadata::new(
+ $name,
+ $target,
+ $level,
+ Some(file!()),
+ Some(line!()),
+ Some(module_path!()),
+ $crate::field::FieldSet::new($fields, $crate::identify_callsite!($callsite)),
+ $kind,
+ )
+ };
+}
+
+pub(crate) mod lazy;
+
+// Trimmed-down vendored version of spin 0.5.2 (0387621)
+// Dependency of no_std lazy_static, not required in a std build
+#[cfg(not(feature = "std"))]
+pub(crate) mod spin;
+
+#[cfg(not(feature = "std"))]
+#[doc(hidden)]
+pub type Once = self::spin::Once<()>;
+
+#[cfg(feature = "std")]
+pub use stdlib::sync::Once;
+
+pub mod callsite;
+pub mod dispatcher;
+pub mod event;
+pub mod field;
+pub mod metadata;
+mod parent;
+pub mod span;
+pub(crate) mod stdlib;
+pub mod subscriber;
+
+#[doc(inline)]
+pub use self::{
+ callsite::Callsite,
+ dispatcher::Dispatch,
+ event::Event,
+ field::Field,
+ metadata::{Level, LevelFilter, Metadata},
+ subscriber::Subscriber,
+};
+
+pub use self::{metadata::Kind, subscriber::Interest};
+
+mod sealed {
+ pub trait Sealed {}
+}
diff --git a/third_party/rust/tracing-core/src/metadata.rs b/third_party/rust/tracing-core/src/metadata.rs
new file mode 100644
index 0000000000..a154419a74
--- /dev/null
+++ b/third_party/rust/tracing-core/src/metadata.rs
@@ -0,0 +1,1114 @@
+//! Metadata describing trace data.
+use super::{callsite, field};
+use crate::stdlib::{
+ cmp, fmt,
+ str::FromStr,
+ sync::atomic::{AtomicUsize, Ordering},
+};
+
+/// Metadata describing a [span] or [event].
+///
+/// All spans and events have the following metadata:
+/// - A [name], represented as a static string.
+/// - A [target], a string that categorizes part of the system where the span
+/// or event occurred. The `tracing` macros default to using the module
+/// path where the span or event originated as the target, but it may be
+/// overridden.
+/// - A [verbosity level]. This determines how verbose a given span or event
+/// is, and allows enabling or disabling more verbose diagnostics
+/// situationally. See the documentation for the [`Level`] type for details.
+/// - The names of the [fields] defined by the span or event.
+/// - Whether the metadata corresponds to a span or event.
+///
+/// In addition, the following optional metadata describing the source code
+/// location where the span or event originated _may_ be provided:
+/// - The [file name]
+/// - The [line number]
+/// - The [module path]
+///
+/// Metadata is used by [`Subscriber`]s when filtering spans and events, and it
+/// may also be used as part of their data payload.
+///
+/// When created by the `event!` or `span!` macro, the metadata describing a
+/// particular event or span is constructed statically and exists as a single
+/// static instance. Thus, the overhead of creating the metadata is
+/// _significantly_ lower than that of creating the actual span. Therefore,
+/// filtering is based on metadata, rather than on the constructed span.
+///
+/// ## Equality
+///
+/// In well-behaved applications, two `Metadata` with equal
+/// [callsite identifiers] will be equal in all other ways (i.e., have the same
+/// `name`, `target`, etc.). Consequently, in release builds, [`Metadata::eq`]
+/// *only* checks that its arguments have equal callsites. However, the equality
+/// of `Metadata`'s other fields is checked in debug builds.
+///
+/// [span]: super::span
+/// [event]: super::event
+/// [name]: Self::name
+/// [target]: Self::target
+/// [fields]: Self::fields
+/// [verbosity level]: Self::level
+/// [file name]: Self::file
+/// [line number]: Self::line
+/// [module path]: Self::module_path
+/// [`Subscriber`]: super::subscriber::Subscriber
+/// [callsite identifiers]: Self::callsite
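+///
+/// ## Examples
+///
+/// A sketch of a filtering decision made from metadata alone; the target name
+/// checked here is illustrative:
+///
+/// ```
+/// use tracing_core::{Level, Metadata};
+///
+/// // Enable only spans and events from the `my_app` target that are at
+/// // `Level::INFO` or less verbose.
+/// fn is_enabled(meta: &Metadata<'_>) -> bool {
+///     meta.target().starts_with("my_app") && meta.level() <= &Level::INFO
+/// }
+/// ```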
+pub struct Metadata<'a> {
+ /// The name of the span described by this metadata.
+ name: &'static str,
+
+ /// The part of the system that the span that this metadata describes
+ /// occurred in.
+ target: &'a str,
+
+ /// The level of verbosity of the described span.
+ level: Level,
+
+ /// The name of the Rust module where the span occurred, or `None` if this
+ /// could not be determined.
+ module_path: Option<&'a str>,
+
+ /// The name of the source code file where the span occurred, or `None` if
+ /// this could not be determined.
+ file: Option<&'a str>,
+
+ /// The line number in the source code file where the span occurred, or
+ /// `None` if this could not be determined.
+ line: Option<u32>,
+
+ /// The names of the key-value fields attached to the described span or
+ /// event.
+ fields: field::FieldSet,
+
+ /// The kind of the callsite.
+ kind: Kind,
+}
+
+/// Indicates whether the callsite is a span or event.
+#[derive(Clone, Eq, PartialEq)]
+pub struct Kind(u8);
+
+/// Describes the level of verbosity of a span or event.
+///
+/// # Comparing Levels
+///
+/// `Level` implements the [`PartialOrd`] and [`Ord`] traits, allowing two
+/// `Level`s to be compared to determine which is considered more or less
+/// verbose. Levels which are more verbose are considered "greater than" levels
+/// which are less verbose, with [`Level::ERROR`] considered the lowest, and
+/// [`Level::TRACE`] considered the highest.
+///
+/// For example:
+/// ```
+/// use tracing_core::Level;
+///
+/// assert!(Level::TRACE > Level::DEBUG);
+/// assert!(Level::ERROR < Level::WARN);
+/// assert!(Level::INFO <= Level::DEBUG);
+/// assert_eq!(Level::TRACE, Level::TRACE);
+/// ```
+///
+/// # Filtering
+///
+/// `Level`s are typically used to implement filtering that determines which
+/// spans and events are enabled. Depending on the use case, more or less
+/// verbose diagnostics may be desired. For example, when running in
+/// development, [`DEBUG`]-level traces may be enabled by default. When running in
+/// production, only [`INFO`]-level and lower traces might be enabled. Libraries
+/// may include very verbose diagnostics at the [`DEBUG`] and/or [`TRACE`] levels.
+/// Applications using those libraries typically choose to ignore those traces. However, when
+/// debugging an issue involving said libraries, it may be useful to temporarily
+/// enable the more verbose traces.
+///
+/// The [`LevelFilter`] type is provided to enable filtering traces by
+/// verbosity. `Level`s can be compared against [`LevelFilter`]s, and
+/// [`LevelFilter`] has a variant for each `Level`, which compares analogously
+/// to that level. In addition, [`LevelFilter`] adds a [`LevelFilter::OFF`]
+/// variant, which is considered "less verbose" than every other `Level`. This is
+/// intended to allow filters to completely disable tracing in a particular context.
+///
+/// For example:
+/// ```
+/// use tracing_core::{Level, LevelFilter};
+///
+/// assert!(LevelFilter::OFF < Level::TRACE);
+/// assert!(LevelFilter::TRACE > Level::DEBUG);
+/// assert!(LevelFilter::ERROR < Level::WARN);
+/// assert!(LevelFilter::INFO <= Level::DEBUG);
+/// assert!(LevelFilter::INFO >= Level::INFO);
+/// ```
+///
+/// ## Examples
+///
+/// Below is a simple example of how a [`Subscriber`] could implement filtering through
+/// a [`LevelFilter`]. When a span or event is recorded, the [`Subscriber::enabled`] method
+/// compares the span or event's `Level` against the configured [`LevelFilter`].
+/// The optional [`Subscriber::max_level_hint`] method can also be implemented to allow spans
+/// and events above a maximum verbosity level to be skipped more efficiently,
+/// often improving performance significantly.
+///
+/// ```
+/// use tracing_core::{span, Event, Level, LevelFilter, Subscriber, Metadata};
+/// # use tracing_core::span::{Id, Record, Current};
+///
+/// #[derive(Debug)]
+/// pub struct MySubscriber {
+/// /// The most verbose level that this subscriber will enable.
+/// max_level: LevelFilter,
+///
+/// // ...
+/// }
+///
+/// impl MySubscriber {
+/// /// Returns a new `MySubscriber` which will record spans and events up to
+/// /// `max_level`.
+/// pub fn with_max_level(max_level: LevelFilter) -> Self {
+/// Self {
+/// max_level,
+/// // ...
+/// }
+/// }
+/// }
+/// impl Subscriber for MySubscriber {
+/// fn enabled(&self, meta: &Metadata<'_>) -> bool {
+/// // A span or event is enabled if it is at or below the configured
+/// // maximum level.
+/// meta.level() <= &self.max_level
+/// }
+///
+/// // This optional method returns the most verbose level that this
+/// // subscriber will enable. Although implementing this method is not
+/// // *required*, it permits additional optimizations when it is provided,
+/// // allowing spans and events above the max level to be skipped
+/// // more efficiently.
+/// fn max_level_hint(&self) -> Option<LevelFilter> {
+/// Some(self.max_level)
+/// }
+///
+/// // Implement the rest of the subscriber...
+/// fn new_span(&self, span: &span::Attributes<'_>) -> span::Id {
+/// // ...
+/// # drop(span); Id::from_u64(1)
+/// }
+///
+/// fn event(&self, event: &Event<'_>) {
+/// // ...
+/// # drop(event);
+/// }
+///
+/// // ...
+/// # fn enter(&self, _: &Id) {}
+/// # fn exit(&self, _: &Id) {}
+/// # fn record(&self, _: &Id, _: &Record<'_>) {}
+/// # fn record_follows_from(&self, _: &Id, _: &Id) {}
+/// }
+/// ```
+///
+/// It is worth noting that the `tracing-subscriber` crate provides [additional
+/// APIs][envfilter] for performing more sophisticated filtering, such as
+/// enabling different levels based on which module or crate a span or event is
+/// recorded in.
+///
+/// [`DEBUG`]: Level::DEBUG
+/// [`INFO`]: Level::INFO
+/// [`TRACE`]: Level::TRACE
+/// [`Subscriber::enabled`]: crate::subscriber::Subscriber::enabled
+/// [`Subscriber::max_level_hint`]: crate::subscriber::Subscriber::max_level_hint
+/// [`Subscriber`]: crate::subscriber::Subscriber
+/// [envfilter]: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html
+#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
+pub struct Level(LevelInner);
+
+/// A filter comparable to a verbosity [`Level`].
+///
+/// If a [`Level`] is considered less than or equal to a `LevelFilter`, it
+/// should be considered enabled; if greater than the `LevelFilter`, that
+/// level is disabled. See [`LevelFilter::current`] for more
+/// details.
+///
+/// Note that this is essentially identical to the `Level` type, but with the
+/// addition of an [`OFF`] level that completely disables all trace
+/// instrumentation.
+///
+/// See the documentation for the [`Level`] type to see how `Level`s
+/// and `LevelFilter`s interact.
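+///
+/// As a quick sketch of that relationship (using only the comparison impls
+/// this module provides):
+///
+/// ```
+/// use tracing_core::{Level, LevelFilter};
+///
+/// // An `INFO` filter enables `INFO` and the less-verbose levels...
+/// assert!(Level::INFO <= LevelFilter::INFO);
+/// assert!(Level::ERROR <= LevelFilter::INFO);
+/// // ...but not the more-verbose `DEBUG` level.
+/// assert!(Level::DEBUG > LevelFilter::INFO);
+/// ```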
+///
+/// [`OFF`]: LevelFilter::OFF
+#[repr(transparent)]
+#[derive(Copy, Clone, Eq, PartialEq, Hash)]
+pub struct LevelFilter(Option<Level>);
+
+/// Indicates that a string could not be parsed to a valid level.
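+///
+/// This is the error type of [`LevelFilter`]'s `FromStr` implementation. A
+/// brief sketch of when it is (and isn't) returned:
+///
+/// ```
+/// use tracing_core::LevelFilter;
+///
+/// assert!("maybe".parse::<LevelFilter>().is_err());
+/// assert_eq!("off".parse::<LevelFilter>().ok(), Some(LevelFilter::OFF));
+/// ```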
+#[derive(Clone, Debug)]
+pub struct ParseLevelFilterError(());
+
+static MAX_LEVEL: AtomicUsize = AtomicUsize::new(LevelFilter::OFF_USIZE);
+
+// ===== impl Metadata =====
+
+impl<'a> Metadata<'a> {
+ /// Construct new metadata for a span or event, with a name, target, level, field
+ /// names, and optional source code location.
+ pub const fn new(
+ name: &'static str,
+ target: &'a str,
+ level: Level,
+ file: Option<&'a str>,
+ line: Option<u32>,
+ module_path: Option<&'a str>,
+ fields: field::FieldSet,
+ kind: Kind,
+ ) -> Self {
+ Metadata {
+ name,
+ target,
+ level,
+ module_path,
+ file,
+ line,
+ fields,
+ kind,
+ }
+ }
+
+ /// Returns the names of the fields on the described span or event.
+ pub fn fields(&self) -> &field::FieldSet {
+ &self.fields
+ }
+
+ /// Returns the level of verbosity of the described span or event.
+ pub fn level(&self) -> &Level {
+ &self.level
+ }
+
+ /// Returns the name of the span.
+ pub fn name(&self) -> &'static str {
+ self.name
+ }
+
+ /// Returns a string describing the part of the system where the span or
+ /// event that this metadata describes occurred.
+ ///
+ /// Typically, this is the module path, but alternate targets may be set
+ /// when spans or events are constructed.
+ pub fn target(&self) -> &'a str {
+ self.target
+ }
+
+ /// Returns the path to the Rust module where the span occurred, or
+ /// `None` if the module path is unknown.
+ pub fn module_path(&self) -> Option<&'a str> {
+ self.module_path
+ }
+
+ /// Returns the name of the source code file where the span
+ /// occurred, or `None` if the file is unknown
+ pub fn file(&self) -> Option<&'a str> {
+ self.file
+ }
+
+ /// Returns the line number in the source code file where the span
+ /// occurred, or `None` if the line number is unknown.
+ pub fn line(&self) -> Option<u32> {
+ self.line
+ }
+
+ /// Returns an opaque `Identifier` that uniquely identifies the callsite
+ /// this `Metadata` originated from.
+ #[inline]
+ pub fn callsite(&self) -> callsite::Identifier {
+ self.fields.callsite()
+ }
+
+ /// Returns true if the callsite kind is `Event`.
+ pub fn is_event(&self) -> bool {
+ self.kind.is_event()
+ }
+
+ /// Return true if the callsite kind is `Span`.
+ pub fn is_span(&self) -> bool {
+ self.kind.is_span()
+ }
+}
+
+impl<'a> fmt::Debug for Metadata<'a> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ let mut meta = f.debug_struct("Metadata");
+ meta.field("name", &self.name)
+ .field("target", &self.target)
+ .field("level", &self.level);
+
+ if let Some(path) = self.module_path() {
+ meta.field("module_path", &path);
+ }
+
+ match (self.file(), self.line()) {
+ (Some(file), Some(line)) => {
+ meta.field("location", &format_args!("{}:{}", file, line));
+ }
+ (Some(file), None) => {
+ meta.field("file", &format_args!("{}", file));
+ }
+
+ // Note: a line num with no file is a kind of weird case that _probably_ never occurs...
+ (None, Some(line)) => {
+ meta.field("line", &line);
+ }
+ (None, None) => {}
+ };
+
+ meta.field("fields", &format_args!("{}", self.fields))
+ .field("callsite", &self.callsite())
+ .field("kind", &self.kind)
+ .finish()
+ }
+}
+
+impl Kind {
+ const EVENT_BIT: u8 = 1 << 0;
+ const SPAN_BIT: u8 = 1 << 1;
+ const HINT_BIT: u8 = 1 << 2;
+
+ /// `Event` callsite
+ pub const EVENT: Kind = Kind(Self::EVENT_BIT);
+
+ /// `Span` callsite
+ pub const SPAN: Kind = Kind(Self::SPAN_BIT);
+
+ /// `enabled!` callsite. [`Subscriber`][`crate::subscriber::Subscriber`]s can assume
+    /// this `Kind` means they will never receive a
+ /// full event with this [`Metadata`].
+ pub const HINT: Kind = Kind(Self::HINT_BIT);
+
+ /// Return true if the callsite kind is `Span`
+ pub fn is_span(&self) -> bool {
+ self.0 & Self::SPAN_BIT == Self::SPAN_BIT
+ }
+
+ /// Return true if the callsite kind is `Event`
+ pub fn is_event(&self) -> bool {
+ self.0 & Self::EVENT_BIT == Self::EVENT_BIT
+ }
+
+ /// Return true if the callsite kind is `Hint`
+ pub fn is_hint(&self) -> bool {
+ self.0 & Self::HINT_BIT == Self::HINT_BIT
+ }
+
+ /// Sets that this `Kind` is a [hint](Self::HINT).
+ ///
+ /// This can be called on [`SPAN`](Self::SPAN) and [`EVENT`](Self::EVENT)
+ /// kinds to construct a hint callsite that also counts as a span or event.
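+    ///
+    /// A brief sketch (using only the constants and accessors defined on
+    /// `Kind`):
+    ///
+    /// ```
+    /// use tracing_core::metadata::Kind;
+    ///
+    /// // A callsite that describes an event, but is only used as an
+    /// // `enabled!` hint.
+    /// let kind = Kind::EVENT.hint();
+    /// assert!(kind.is_event());
+    /// assert!(kind.is_hint());
+    /// ```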
+ pub const fn hint(self) -> Self {
+ Self(self.0 | Self::HINT_BIT)
+ }
+}
+
+impl fmt::Debug for Kind {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.write_str("Kind(")?;
+ let mut has_bits = false;
+ let mut write_bit = |name: &str| {
+ if has_bits {
+ f.write_str(" | ")?;
+ }
+ f.write_str(name)?;
+ has_bits = true;
+ Ok(())
+ };
+
+ if self.is_event() {
+ write_bit("EVENT")?;
+ }
+
+ if self.is_span() {
+ write_bit("SPAN")?;
+ }
+
+ if self.is_hint() {
+ write_bit("HINT")?;
+ }
+
+ // if none of the expected bits were set, something is messed up, so
+ // just print the bits for debugging purposes
+ if !has_bits {
+ write!(f, "{:#b}", self.0)?;
+ }
+
+ f.write_str(")")
+ }
+}
+
+impl<'a> Eq for Metadata<'a> {}
+
+impl<'a> PartialEq for Metadata<'a> {
+ #[inline]
+ fn eq(&self, other: &Self) -> bool {
+        if core::ptr::eq(self, other) {
+ true
+ } else if cfg!(not(debug_assertions)) {
+ // In a well-behaving application, two `Metadata` can be assumed to
+ // be totally equal so long as they share the same callsite.
+ self.callsite() == other.callsite()
+ } else {
+ // However, when debug-assertions are enabled, do not assume that
+ // the application is well-behaving; check every field of `Metadata`
+ // for equality.
+
+ // `Metadata` is destructured here to ensure a compile-error if the
+ // fields of `Metadata` change.
+ let Metadata {
+ name: lhs_name,
+ target: lhs_target,
+ level: lhs_level,
+ module_path: lhs_module_path,
+ file: lhs_file,
+ line: lhs_line,
+ fields: lhs_fields,
+ kind: lhs_kind,
+ } = self;
+
+ let Metadata {
+ name: rhs_name,
+ target: rhs_target,
+ level: rhs_level,
+ module_path: rhs_module_path,
+ file: rhs_file,
+ line: rhs_line,
+ fields: rhs_fields,
+ kind: rhs_kind,
+ } = &other;
+
+ // The initial comparison of callsites is purely an optimization;
+ // it can be removed without affecting the overall semantics of the
+ // expression.
+ self.callsite() == other.callsite()
+ && lhs_name == rhs_name
+ && lhs_target == rhs_target
+ && lhs_level == rhs_level
+ && lhs_module_path == rhs_module_path
+ && lhs_file == rhs_file
+ && lhs_line == rhs_line
+ && lhs_fields == rhs_fields
+ && lhs_kind == rhs_kind
+ }
+ }
+}
+
+// ===== impl Level =====
+
+impl Level {
+ /// The "error" level.
+ ///
+ /// Designates very serious errors.
+ pub const ERROR: Level = Level(LevelInner::Error);
+ /// The "warn" level.
+ ///
+ /// Designates hazardous situations.
+ pub const WARN: Level = Level(LevelInner::Warn);
+ /// The "info" level.
+ ///
+ /// Designates useful information.
+ pub const INFO: Level = Level(LevelInner::Info);
+ /// The "debug" level.
+ ///
+ /// Designates lower priority information.
+ pub const DEBUG: Level = Level(LevelInner::Debug);
+ /// The "trace" level.
+ ///
+ /// Designates very low priority, often extremely verbose, information.
+ pub const TRACE: Level = Level(LevelInner::Trace);
+
+ /// Returns the string representation of the `Level`.
+ ///
+ /// This returns the same string as the `fmt::Display` implementation.
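+    ///
+    /// For example:
+    ///
+    /// ```
+    /// use tracing_core::Level;
+    ///
+    /// assert_eq!(Level::INFO.as_str(), "INFO");
+    /// assert_eq!(format!("{}", Level::INFO), "INFO");
+    /// ```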
+ pub fn as_str(&self) -> &'static str {
+ match *self {
+ Level::TRACE => "TRACE",
+ Level::DEBUG => "DEBUG",
+ Level::INFO => "INFO",
+ Level::WARN => "WARN",
+ Level::ERROR => "ERROR",
+ }
+ }
+}
+
+impl fmt::Display for Level {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match *self {
+ Level::TRACE => f.pad("TRACE"),
+ Level::DEBUG => f.pad("DEBUG"),
+ Level::INFO => f.pad("INFO"),
+ Level::WARN => f.pad("WARN"),
+ Level::ERROR => f.pad("ERROR"),
+ }
+ }
+}
+
+#[cfg(feature = "std")]
+#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
+impl crate::stdlib::error::Error for ParseLevelError {}
+
+impl FromStr for Level {
+ type Err = ParseLevelError;
+ fn from_str(s: &str) -> Result<Self, ParseLevelError> {
+ s.parse::<usize>()
+ .map_err(|_| ParseLevelError { _p: () })
+ .and_then(|num| match num {
+ 1 => Ok(Level::ERROR),
+ 2 => Ok(Level::WARN),
+ 3 => Ok(Level::INFO),
+ 4 => Ok(Level::DEBUG),
+ 5 => Ok(Level::TRACE),
+ _ => Err(ParseLevelError { _p: () }),
+ })
+ .or_else(|_| match s {
+ s if s.eq_ignore_ascii_case("error") => Ok(Level::ERROR),
+ s if s.eq_ignore_ascii_case("warn") => Ok(Level::WARN),
+ s if s.eq_ignore_ascii_case("info") => Ok(Level::INFO),
+ s if s.eq_ignore_ascii_case("debug") => Ok(Level::DEBUG),
+ s if s.eq_ignore_ascii_case("trace") => Ok(Level::TRACE),
+ _ => Err(ParseLevelError { _p: () }),
+ })
+ }
+}
+
+#[repr(usize)]
+#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
+enum LevelInner {
+ /// The "trace" level.
+ ///
+ /// Designates very low priority, often extremely verbose, information.
+ Trace = 0,
+ /// The "debug" level.
+ ///
+ /// Designates lower priority information.
+ Debug = 1,
+ /// The "info" level.
+ ///
+ /// Designates useful information.
+ Info = 2,
+ /// The "warn" level.
+ ///
+ /// Designates hazardous situations.
+ Warn = 3,
+ /// The "error" level.
+ ///
+ /// Designates very serious errors.
+ Error = 4,
+}
+
+// === impl LevelFilter ===
+
+impl From<Level> for LevelFilter {
+ #[inline]
+ fn from(level: Level) -> Self {
+ Self::from_level(level)
+ }
+}
+
+impl From<Option<Level>> for LevelFilter {
+ #[inline]
+ fn from(level: Option<Level>) -> Self {
+ Self(level)
+ }
+}
+
+impl From<LevelFilter> for Option<Level> {
+ #[inline]
+ fn from(filter: LevelFilter) -> Self {
+ filter.into_level()
+ }
+}
+
+impl LevelFilter {
+ /// The "off" level.
+ ///
+ /// Designates that trace instrumentation should be completely disabled.
+ pub const OFF: LevelFilter = LevelFilter(None);
+ /// The "error" level.
+ ///
+ /// Designates very serious errors.
+ pub const ERROR: LevelFilter = LevelFilter::from_level(Level::ERROR);
+ /// The "warn" level.
+ ///
+ /// Designates hazardous situations.
+ pub const WARN: LevelFilter = LevelFilter::from_level(Level::WARN);
+ /// The "info" level.
+ ///
+ /// Designates useful information.
+ pub const INFO: LevelFilter = LevelFilter::from_level(Level::INFO);
+ /// The "debug" level.
+ ///
+ /// Designates lower priority information.
+ pub const DEBUG: LevelFilter = LevelFilter::from_level(Level::DEBUG);
+ /// The "trace" level.
+ ///
+ /// Designates very low priority, often extremely verbose, information.
+ pub const TRACE: LevelFilter = LevelFilter(Some(Level::TRACE));
+
+ /// Returns a `LevelFilter` that enables spans and events with verbosity up
+ /// to and including `level`.
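+    ///
+    /// For example:
+    ///
+    /// ```
+    /// use tracing_core::{Level, LevelFilter};
+    ///
+    /// assert_eq!(LevelFilter::from_level(Level::WARN), LevelFilter::WARN);
+    /// ```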
+ pub const fn from_level(level: Level) -> Self {
+ Self(Some(level))
+ }
+
+ /// Returns the most verbose [`Level`] that this filter accepts, or `None`
+ /// if it is [`OFF`].
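+    ///
+    /// For example:
+    ///
+    /// ```
+    /// use tracing_core::{Level, LevelFilter};
+    ///
+    /// assert_eq!(LevelFilter::DEBUG.into_level(), Some(Level::DEBUG));
+    /// assert_eq!(LevelFilter::OFF.into_level(), None);
+    /// ```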
+ ///
+ /// [`OFF`]: LevelFilter::OFF
+ pub const fn into_level(self) -> Option<Level> {
+ self.0
+ }
+
+ // These consts are necessary because `as` casts are not allowed as
+ // match patterns.
+ const ERROR_USIZE: usize = LevelInner::Error as usize;
+ const WARN_USIZE: usize = LevelInner::Warn as usize;
+ const INFO_USIZE: usize = LevelInner::Info as usize;
+ const DEBUG_USIZE: usize = LevelInner::Debug as usize;
+ const TRACE_USIZE: usize = LevelInner::Trace as usize;
+ // Using the value of the last variant + 1 ensures that we match the value
+ // for `Option::None` as selected by the niche optimization for
+ // `LevelFilter`. If this is the case, converting a `usize` value into a
+ // `LevelFilter` (in `LevelFilter::current`) will be an identity conversion,
+ // rather than generating a lookup table.
+ const OFF_USIZE: usize = LevelInner::Error as usize + 1;
+
+ /// Returns a `LevelFilter` that matches the most verbose [`Level`] that any
+ /// currently active [`Subscriber`] will enable.
+ ///
+ /// User code should treat this as a *hint*. If a given span or event has a
+ /// level *higher* than the returned `LevelFilter`, it will not be enabled.
+ /// However, if the level is less than or equal to this value, the span or
+ /// event is *not* guaranteed to be enabled; the subscriber will still
+ /// filter each callsite individually.
+ ///
+ /// Therefore, comparing a given span or event's level to the returned
+ /// `LevelFilter` **can** be used for determining if something is
+ /// *disabled*, but **should not** be used for determining if something is
+ /// *enabled*.
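+    ///
+    /// A short sketch of the intended usage:
+    ///
+    /// ```
+    /// use tracing_core::{Level, LevelFilter};
+    ///
+    /// if Level::DEBUG > LevelFilter::current() {
+    ///     // No currently active subscriber will enable `DEBUG`, so any
+    ///     // (potentially expensive) work to build a `DEBUG` span or event
+    ///     // can be skipped entirely.
+    /// }
+    /// ```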
+ ///
+ /// [`Level`]: super::Level
+ /// [`Subscriber`]: super::Subscriber
+ #[inline(always)]
+ pub fn current() -> Self {
+ match MAX_LEVEL.load(Ordering::Relaxed) {
+ Self::ERROR_USIZE => Self::ERROR,
+ Self::WARN_USIZE => Self::WARN,
+ Self::INFO_USIZE => Self::INFO,
+ Self::DEBUG_USIZE => Self::DEBUG,
+ Self::TRACE_USIZE => Self::TRACE,
+ Self::OFF_USIZE => Self::OFF,
+ #[cfg(debug_assertions)]
+ unknown => unreachable!(
+ "/!\\ `LevelFilter` representation seems to have changed! /!\\ \n\
+ This is a bug (and it's pretty bad). Please contact the `tracing` \
+ maintainers. Thank you and I'm sorry.\n \
+ The offending repr was: {:?}",
+ unknown,
+ ),
+ #[cfg(not(debug_assertions))]
+ _ => unsafe {
+ // Using `unreachable_unchecked` here (rather than
+ // `unreachable!()`) is necessary to ensure that rustc generates
+ // an identity conversion from integer -> discriminant, rather
+ // than generating a lookup table. We want to ensure this
+ // function is a single `mov` instruction (on x86) if at all
+ // possible, because it is called *every* time a span/event
+ // callsite is hit; and it is (potentially) the only code in the
+ // hottest path for skipping a majority of callsites when level
+ // filtering is in use.
+ //
+ // safety: This branch is only truly unreachable if we guarantee
+ // that no values other than the possible enum discriminants
+ // will *ever* be present. The `AtomicUsize` is initialized to
+ // the `OFF` value. It is only set by the `set_max` function,
+ // which takes a `LevelFilter` as a parameter. This restricts
+ // the inputs to `set_max` to the set of valid discriminants.
+            // Therefore, **as long as `MAX_LEVEL` is only ever set by
+ // `set_max`**, this is safe.
+ crate::stdlib::hint::unreachable_unchecked()
+ },
+ }
+ }
+
+ pub(crate) fn set_max(LevelFilter(level): LevelFilter) {
+ let val = match level {
+ Some(Level(level)) => level as usize,
+ None => Self::OFF_USIZE,
+ };
+
+ // using an AcqRel swap ensures an ordered relationship of writes to the
+ // max level.
+ MAX_LEVEL.swap(val, Ordering::AcqRel);
+ }
+}
+
+impl fmt::Display for LevelFilter {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match *self {
+ LevelFilter::OFF => f.pad("off"),
+ LevelFilter::ERROR => f.pad("error"),
+ LevelFilter::WARN => f.pad("warn"),
+ LevelFilter::INFO => f.pad("info"),
+ LevelFilter::DEBUG => f.pad("debug"),
+ LevelFilter::TRACE => f.pad("trace"),
+ }
+ }
+}
+
+impl fmt::Debug for LevelFilter {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match *self {
+ LevelFilter::OFF => f.pad("LevelFilter::OFF"),
+ LevelFilter::ERROR => f.pad("LevelFilter::ERROR"),
+ LevelFilter::WARN => f.pad("LevelFilter::WARN"),
+ LevelFilter::INFO => f.pad("LevelFilter::INFO"),
+ LevelFilter::DEBUG => f.pad("LevelFilter::DEBUG"),
+ LevelFilter::TRACE => f.pad("LevelFilter::TRACE"),
+ }
+ }
+}
+
+impl FromStr for LevelFilter {
+ type Err = ParseLevelFilterError;
+ fn from_str(from: &str) -> Result<Self, Self::Err> {
+ from.parse::<usize>()
+ .ok()
+ .and_then(|num| match num {
+ 0 => Some(LevelFilter::OFF),
+ 1 => Some(LevelFilter::ERROR),
+ 2 => Some(LevelFilter::WARN),
+ 3 => Some(LevelFilter::INFO),
+ 4 => Some(LevelFilter::DEBUG),
+ 5 => Some(LevelFilter::TRACE),
+ _ => None,
+ })
+ .or_else(|| match from {
+ "" => Some(LevelFilter::ERROR),
+ s if s.eq_ignore_ascii_case("error") => Some(LevelFilter::ERROR),
+ s if s.eq_ignore_ascii_case("warn") => Some(LevelFilter::WARN),
+ s if s.eq_ignore_ascii_case("info") => Some(LevelFilter::INFO),
+ s if s.eq_ignore_ascii_case("debug") => Some(LevelFilter::DEBUG),
+ s if s.eq_ignore_ascii_case("trace") => Some(LevelFilter::TRACE),
+ s if s.eq_ignore_ascii_case("off") => Some(LevelFilter::OFF),
+ _ => None,
+ })
+ .ok_or(ParseLevelFilterError(()))
+ }
+}
+
+/// Returned if parsing a `Level` fails.
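+///
+/// This is the error type of [`Level`]'s `FromStr` implementation; for
+/// example:
+///
+/// ```
+/// use tracing_core::Level;
+///
+/// assert!("not a level".parse::<Level>().is_err());
+/// assert!("0".parse::<Level>().is_err());
+/// assert!("warn".parse::<Level>().is_ok());
+/// ```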
+#[derive(Debug)]
+pub struct ParseLevelError {
+ _p: (),
+}
+
+impl fmt::Display for ParseLevelError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.pad(
+ "error parsing level: expected one of \"error\", \"warn\", \
+ \"info\", \"debug\", \"trace\", or a number 1-5",
+ )
+ }
+}
+
+impl fmt::Display for ParseLevelFilterError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.pad(
+ "error parsing level filter: expected one of \"off\", \"error\", \
+ \"warn\", \"info\", \"debug\", \"trace\", or a number 0-5",
+ )
+ }
+}
+
+#[cfg(feature = "std")]
+impl std::error::Error for ParseLevelFilterError {}
+
+// ==== Level and LevelFilter comparisons ====
+
+// /!\ BIG, IMPORTANT WARNING /!\
+// Do NOT mess with these implementations! They are hand-written for a reason!
+//
+// Since comparing `Level`s and `LevelFilter`s happens in a *very* hot path
+// (potentially, every time a span or event macro is hit, regardless of whether
+// or not it is enabled), we *need* to ensure that these comparisons are as fast as
+// possible. Therefore, we have some requirements:
+//
+// 1. We want to do our best to ensure that rustc will generate integer-integer
+// comparisons wherever possible.
+//
+// The derived `Ord`/`PartialOrd` impls for `LevelFilter` will not do this,
+// because `LevelFilter`s are represented by `Option<Level>`, rather than as
+// a separate `#[repr(usize)]` enum. This was (unfortunately) necessary for
+// backwards-compatibility reasons, as the `tracing` crate's original
+// version of `LevelFilter` defined `const fn` conversions between `Level`s
+// and `LevelFilter`, so we're stuck with the `Option<Level>` repr.
+// Therefore, we need hand-written `PartialOrd` impls that cast both sides of
+// the comparison to `usize`s, to force the compiler to generate integer
+// compares.
+//
+// 2. The hottest `Level`/`LevelFilter` comparison, the one that happens every
+// time a callsite is hit, occurs *within the `tracing` crate's macros*.
+// This means that the comparison is happening *inside* a crate that
+// *depends* on `tracing-core`, not in `tracing-core` itself. The compiler
+// will only inline function calls across crate boundaries if the called
+// function is annotated with an `#[inline]` attribute, and we *definitely*
+// want the comparison functions to be inlined: as previously mentioned, they
+// should compile down to a single integer comparison on release builds, and
+// it seems really sad to push an entire stack frame to call a function
+// consisting of one `cmp` instruction!
+//
+// Therefore, we need to ensure that all the comparison methods have
+// `#[inline]` or `#[inline(always)]` attributes. It's not sufficient to just
+// add the attribute to `partial_cmp` in a manual implementation of the
+// trait, since it's the comparison operators (`lt`, `le`, `gt`, and `ge`)
+// that will actually be *used*, and the default implementation of *those*
+// methods, which calls `partial_cmp`, does not have an inline annotation.
+//
+// 3. We need the comparisons to be inverted. The discriminants for the
+// `LevelInner` enum are assigned in "backwards" order, with `TRACE` having
+// the *lowest* value. However, we want `TRACE` to compare greater-than all
+// other levels.
+//
+// Why are the numeric values inverted? In order to ensure that `LevelFilter`
+// (which, as previously mentioned, *has* to be internally represented by an
+// `Option<Level>`) compiles down to a single integer value. This is
+// necessary for storing the global max in an `AtomicUsize`, and for ensuring
+// that we use fast integer-integer comparisons, as mentioned previously. In
+// order to ensure this, we exploit the niche optimization. The niche
+// optimization for `Option<{enum with a numeric repr}>` will choose
+// `(HIGHEST_DISCRIMINANT_VALUE + 1)` as the representation for `None`.
+// Therefore, the integer representation of `LevelFilter::OFF` (which is
+//    `None`) will be the number 5. `OFF`'s integer representation must
+//    therefore be higher than every `Level`'s in order for it to filter as
+//    expected (semantically, `OFF` is *less* verbose than every level, and
+//    these comparisons are inverted, as described above). Since we want to use
+//    a single `cmp` instruction, we can't special-case the integer value of
+//    `OFF` to compare higher, as that will generate more code. Instead, we
+//    need it to be at one end of the range, with `TRACE` at the opposite end,
+//    so we assign the value 0 to `TRACE` and the highest discriminant, 4, to
+//    `ERROR` (just below `OFF`'s 5).
+//
+// This *does* mean that when parsing `LevelFilter`s or `Level`s from
+// `String`s, the integer values are inverted, but that doesn't happen in a
+// hot path.
+//
+// Note that we manually invert the comparisons by swapping the left-hand and
+// right-hand side. Using `Ordering::reverse` generates significantly worse
+// code (per Matt Godbolt's Compiler Explorer).
+//
+// Anyway, that's a brief history of why this code is the way it is. Don't
+// change it unless you know what you're doing.
+
+impl PartialEq<LevelFilter> for Level {
+ #[inline(always)]
+ fn eq(&self, other: &LevelFilter) -> bool {
+ self.0 as usize == filter_as_usize(&other.0)
+ }
+}
+
+impl PartialOrd for Level {
+ #[inline(always)]
+ fn partial_cmp(&self, other: &Level) -> Option<cmp::Ordering> {
+ Some(self.cmp(other))
+ }
+
+ #[inline(always)]
+ fn lt(&self, other: &Level) -> bool {
+ (other.0 as usize) < (self.0 as usize)
+ }
+
+ #[inline(always)]
+ fn le(&self, other: &Level) -> bool {
+ (other.0 as usize) <= (self.0 as usize)
+ }
+
+ #[inline(always)]
+ fn gt(&self, other: &Level) -> bool {
+ (other.0 as usize) > (self.0 as usize)
+ }
+
+ #[inline(always)]
+ fn ge(&self, other: &Level) -> bool {
+ (other.0 as usize) >= (self.0 as usize)
+ }
+}
+
+impl Ord for Level {
+ #[inline(always)]
+ fn cmp(&self, other: &Self) -> cmp::Ordering {
+ (other.0 as usize).cmp(&(self.0 as usize))
+ }
+}
+
+impl PartialOrd<LevelFilter> for Level {
+ #[inline(always)]
+ fn partial_cmp(&self, other: &LevelFilter) -> Option<cmp::Ordering> {
+ Some(filter_as_usize(&other.0).cmp(&(self.0 as usize)))
+ }
+
+ #[inline(always)]
+ fn lt(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) < (self.0 as usize)
+ }
+
+ #[inline(always)]
+ fn le(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) <= (self.0 as usize)
+ }
+
+ #[inline(always)]
+ fn gt(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) > (self.0 as usize)
+ }
+
+ #[inline(always)]
+ fn ge(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) >= (self.0 as usize)
+ }
+}
+
+#[inline(always)]
+fn filter_as_usize(x: &Option<Level>) -> usize {
+ match x {
+ Some(Level(f)) => *f as usize,
+ None => LevelFilter::OFF_USIZE,
+ }
+}
+
+impl PartialEq<Level> for LevelFilter {
+ #[inline(always)]
+ fn eq(&self, other: &Level) -> bool {
+ filter_as_usize(&self.0) == other.0 as usize
+ }
+}
+
+impl PartialOrd for LevelFilter {
+ #[inline(always)]
+ fn partial_cmp(&self, other: &LevelFilter) -> Option<cmp::Ordering> {
+ Some(self.cmp(other))
+ }
+
+ #[inline(always)]
+ fn lt(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) < filter_as_usize(&self.0)
+ }
+
+ #[inline(always)]
+ fn le(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) <= filter_as_usize(&self.0)
+ }
+
+ #[inline(always)]
+ fn gt(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) > filter_as_usize(&self.0)
+ }
+
+ #[inline(always)]
+ fn ge(&self, other: &LevelFilter) -> bool {
+ filter_as_usize(&other.0) >= filter_as_usize(&self.0)
+ }
+}
+
+impl Ord for LevelFilter {
+ #[inline(always)]
+ fn cmp(&self, other: &Self) -> cmp::Ordering {
+ filter_as_usize(&other.0).cmp(&filter_as_usize(&self.0))
+ }
+}
+
+impl PartialOrd<Level> for LevelFilter {
+ #[inline(always)]
+ fn partial_cmp(&self, other: &Level) -> Option<cmp::Ordering> {
+ Some((other.0 as usize).cmp(&filter_as_usize(&self.0)))
+ }
+
+ #[inline(always)]
+ fn lt(&self, other: &Level) -> bool {
+ (other.0 as usize) < filter_as_usize(&self.0)
+ }
+
+ #[inline(always)]
+ fn le(&self, other: &Level) -> bool {
+ (other.0 as usize) <= filter_as_usize(&self.0)
+ }
+
+ #[inline(always)]
+ fn gt(&self, other: &Level) -> bool {
+ (other.0 as usize) > filter_as_usize(&self.0)
+ }
+
+ #[inline(always)]
+ fn ge(&self, other: &Level) -> bool {
+ (other.0 as usize) >= filter_as_usize(&self.0)
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use crate::stdlib::mem;
+
+ #[test]
+ fn level_from_str() {
+ assert_eq!("error".parse::<Level>().unwrap(), Level::ERROR);
+ assert_eq!("4".parse::<Level>().unwrap(), Level::DEBUG);
+ assert!("0".parse::<Level>().is_err())
+ }
+
+ #[test]
+ fn filter_level_conversion() {
+ let mapping = [
+ (LevelFilter::OFF, None),
+ (LevelFilter::ERROR, Some(Level::ERROR)),
+ (LevelFilter::WARN, Some(Level::WARN)),
+ (LevelFilter::INFO, Some(Level::INFO)),
+ (LevelFilter::DEBUG, Some(Level::DEBUG)),
+ (LevelFilter::TRACE, Some(Level::TRACE)),
+ ];
+ for (filter, level) in mapping.iter() {
+ assert_eq!(filter.into_level(), *level);
+ match level {
+ Some(level) => {
+ let actual: LevelFilter = (*level).into();
+ assert_eq!(actual, *filter);
+ }
+ None => {
+ let actual: LevelFilter = None.into();
+ assert_eq!(actual, *filter);
+ }
+ }
+ }
+ }
+
+ #[test]
+ fn level_filter_is_usize_sized() {
+ assert_eq!(
+ mem::size_of::<LevelFilter>(),
+ mem::size_of::<usize>(),
+ "`LevelFilter` is no longer `usize`-sized! global MAX_LEVEL may now be invalid!"
+ )
+ }
+
+ #[test]
+ fn level_filter_reprs() {
+ let mapping = [
+ (LevelFilter::OFF, LevelInner::Error as usize + 1),
+ (LevelFilter::ERROR, LevelInner::Error as usize),
+ (LevelFilter::WARN, LevelInner::Warn as usize),
+ (LevelFilter::INFO, LevelInner::Info as usize),
+ (LevelFilter::DEBUG, LevelInner::Debug as usize),
+ (LevelFilter::TRACE, LevelInner::Trace as usize),
+ ];
+ for &(filter, expected) in &mapping {
+ let repr = unsafe {
+ // safety: The entire purpose of this test is to assert that the
+ // actual repr matches what we expect it to be --- we're testing
+ // that *other* unsafe code is sound using the transmuted value.
+ // We're not going to do anything with it that might be unsound.
+ mem::transmute::<LevelFilter, usize>(filter)
+ };
+ assert_eq!(expected, repr, "repr changed for {:?}", filter)
+ }
+ }
+}
diff --git a/third_party/rust/tracing-core/src/parent.rs b/third_party/rust/tracing-core/src/parent.rs
new file mode 100644
index 0000000000..cb34b376cc
--- /dev/null
+++ b/third_party/rust/tracing-core/src/parent.rs
@@ -0,0 +1,11 @@
+use crate::span::Id;
+
+#[derive(Debug)]
+pub(crate) enum Parent {
+ /// The new span will be a root span.
+ Root,
+ /// The new span will be rooted in the current span.
+ Current,
+ /// The new span has an explicitly-specified parent.
+ Explicit(Id),
+}
diff --git a/third_party/rust/tracing-core/src/span.rs b/third_party/rust/tracing-core/src/span.rs
new file mode 100644
index 0000000000..44738b2903
--- /dev/null
+++ b/third_party/rust/tracing-core/src/span.rs
@@ -0,0 +1,341 @@
+//! Spans represent periods of time in the execution of a program.
+use crate::field::FieldSet;
+use crate::parent::Parent;
+use crate::stdlib::num::NonZeroU64;
+use crate::{field, Metadata};
+
+/// Identifies a span within the context of a subscriber.
+///
+/// They are generated by [`Subscriber`]s for each span as it is created, by
+/// the [`new_span`] trait method. See the documentation for that method for
+/// more information on span ID generation.
+///
+/// [`Subscriber`]: super::subscriber::Subscriber
+/// [`new_span`]: super::subscriber::Subscriber::new_span
+#[derive(Clone, Debug, PartialEq, Eq, Hash)]
+pub struct Id(NonZeroU64);
+
+/// Attributes provided to a `Subscriber` describing a new span when it is
+/// created.
+#[derive(Debug)]
+pub struct Attributes<'a> {
+ metadata: &'static Metadata<'static>,
+ values: &'a field::ValueSet<'a>,
+ parent: Parent,
+}
+
+/// A set of fields recorded by a span.
+#[derive(Debug)]
+pub struct Record<'a> {
+ values: &'a field::ValueSet<'a>,
+}
+
+/// Indicates what [the `Subscriber` considers] the "current" span.
+///
+/// As subscribers may not track a notion of a current span, this has three
+/// possible states:
+/// - "unknown", indicating that the subscriber does not track a current span,
+/// - "none", indicating that the current context is known to not be in a span,
+/// - "some", with the current span's [`Id`] and [`Metadata`].
+///
+/// [the `Subscriber` considers]: super::subscriber::Subscriber::current_span
+/// [`Metadata`]: super::metadata::Metadata
+#[derive(Debug)]
+pub struct Current {
+ inner: CurrentInner,
+}
+
+#[derive(Debug)]
+enum CurrentInner {
+ Current {
+ id: Id,
+ metadata: &'static Metadata<'static>,
+ },
+ None,
+ Unknown,
+}
+
+// ===== impl Span =====
+
+impl Id {
+ /// Constructs a new span ID from the given `u64`.
+ ///
+ /// <pre class="ignore" style="white-space:normal;font:inherit;">
+ /// <strong>Note</strong>: Span IDs must be greater than zero.
+ /// </pre>
+ ///
+ /// # Panics
+ /// - If the provided `u64` is 0.
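+    ///
+    /// A minimal round-trip example:
+    ///
+    /// ```
+    /// use tracing_core::span::Id;
+    ///
+    /// let id = Id::from_u64(42);
+    /// assert_eq!(id.into_u64(), 42);
+    /// ```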
+ pub fn from_u64(u: u64) -> Self {
+ Id(NonZeroU64::new(u).expect("span IDs must be > 0"))
+ }
+
+ /// Constructs a new span ID from the given `NonZeroU64`.
+ ///
+ /// Unlike [`Id::from_u64`](Id::from_u64()), this will never panic.
+ #[inline]
+ pub const fn from_non_zero_u64(id: NonZeroU64) -> Self {
+ Id(id)
+ }
+
+ // Allow `into` by-ref since we don't want to impl Copy for Id
+ #[allow(clippy::wrong_self_convention)]
+ /// Returns the span's ID as a `u64`.
+ pub fn into_u64(&self) -> u64 {
+ self.0.get()
+ }
+
+ // Allow `into` by-ref since we don't want to impl Copy for Id
+ #[allow(clippy::wrong_self_convention)]
+ /// Returns the span's ID as a `NonZeroU64`.
+ #[inline]
+ pub const fn into_non_zero_u64(&self) -> NonZeroU64 {
+ self.0
+ }
+}
+
+impl<'a> From<&'a Id> for Option<Id> {
+ fn from(id: &'a Id) -> Self {
+ Some(id.clone())
+ }
+}
+
+// ===== impl Attributes =====
+
+impl<'a> Attributes<'a> {
+ /// Returns `Attributes` describing a new child span of the current span,
+ /// with the provided metadata and values.
+ pub fn new(metadata: &'static Metadata<'static>, values: &'a field::ValueSet<'a>) -> Self {
+ Attributes {
+ metadata,
+ values,
+ parent: Parent::Current,
+ }
+ }
+
+ /// Returns `Attributes` describing a new span at the root of its own trace
+ /// tree, with the provided metadata and values.
+ pub fn new_root(metadata: &'static Metadata<'static>, values: &'a field::ValueSet<'a>) -> Self {
+ Attributes {
+ metadata,
+ values,
+ parent: Parent::Root,
+ }
+ }
+
+ /// Returns `Attributes` describing a new child span of the specified
+ /// parent span, with the provided metadata and values.
+ pub fn child_of(
+ parent: Id,
+ metadata: &'static Metadata<'static>,
+ values: &'a field::ValueSet<'a>,
+ ) -> Self {
+ Attributes {
+ metadata,
+ values,
+ parent: Parent::Explicit(parent),
+ }
+ }
+
+ /// Returns a reference to the new span's metadata.
+ pub fn metadata(&self) -> &'static Metadata<'static> {
+ self.metadata
+ }
+
+ /// Returns a reference to a `ValueSet` containing any values the new span
+ /// was created with.
+ pub fn values(&self) -> &field::ValueSet<'a> {
+ self.values
+ }
+
+ /// Returns true if the new span should be a root.
+ pub fn is_root(&self) -> bool {
+ matches!(self.parent, Parent::Root)
+ }
+
+ /// Returns true if the new span's parent should be determined based on the
+ /// current context.
+ ///
+ /// If this is true and the current thread is currently inside a span, then
+ /// that span should be the new span's parent. Otherwise, if the current
+ /// thread is _not_ inside a span, then the new span will be the root of its
+ /// own trace tree.
+ pub fn is_contextual(&self) -> bool {
+ matches!(self.parent, Parent::Current)
+ }
+
+ /// Returns the new span's explicitly-specified parent, if there is one.
+ ///
+ /// Otherwise (if the new span is a root or is a child of the current span),
+ /// returns `None`.
+ pub fn parent(&self) -> Option<&Id> {
+ match self.parent {
+ Parent::Explicit(ref p) => Some(p),
+ _ => None,
+ }
+ }
+
+ /// Records all the fields in this set of `Attributes` with the provided
+ /// [Visitor].
+ ///
+ /// [visitor]: super::field::Visit
+ pub fn record(&self, visitor: &mut dyn field::Visit) {
+ self.values.record(visitor)
+ }
+
+ /// Returns `true` if this set of `Attributes` contains a value for the
+ /// given `Field`.
+ pub fn contains(&self, field: &field::Field) -> bool {
+ self.values.contains(field)
+ }
+
+ /// Returns true if this set of `Attributes` contains _no_ values.
+ pub fn is_empty(&self) -> bool {
+ self.values.is_empty()
+ }
+
+ /// Returns the set of all [fields] defined by this span's [`Metadata`].
+ ///
+ /// Note that the [`FieldSet`] returned by this method includes *all* the
+ /// fields declared by this span, not just those with values that are recorded
+ /// as part of this set of `Attributes`. Other fields with values not present in
+ /// this `Attributes`' value set may [record] values later.
+ ///
+ /// [fields]: crate::field
+ /// [record]: Attributes::record()
+ /// [`Metadata`]: crate::metadata::Metadata
+ /// [`FieldSet`]: crate::field::FieldSet
+ pub fn fields(&self) -> &FieldSet {
+ self.values.field_set()
+ }
+}
+
+// ===== impl Record =====
+
+impl<'a> Record<'a> {
+ /// Constructs a new `Record` from a `ValueSet`.
+ pub fn new(values: &'a field::ValueSet<'a>) -> Self {
+ Self { values }
+ }
+
+ /// Records all the fields in this `Record` with the provided [Visitor].
+ ///
+ /// [visitor]: super::field::Visit
+ pub fn record(&self, visitor: &mut dyn field::Visit) {
+ self.values.record(visitor)
+ }
+
+ /// Returns the number of fields that would be visited from this `Record`
+ /// when [`Record::record()`] is called
+ ///
+ /// [`Record::record()`]: Record::record()
+ pub fn len(&self) -> usize {
+ self.values.len()
+ }
+
+ /// Returns `true` if this `Record` contains a value for the given `Field`.
+ pub fn contains(&self, field: &field::Field) -> bool {
+ self.values.contains(field)
+ }
+
+ /// Returns true if this `Record` contains _no_ values.
+ pub fn is_empty(&self) -> bool {
+ self.values.is_empty()
+ }
+}
+
+// ===== impl Current =====
+
+impl Current {
+ /// Constructs a new `Current` that indicates the current context is a span
+    /// with the given `id` and `metadata`.
+ pub fn new(id: Id, metadata: &'static Metadata<'static>) -> Self {
+ Self {
+ inner: CurrentInner::Current { id, metadata },
+ }
+ }
+
+ /// Constructs a new `Current` that indicates the current context is *not*
+ /// in a span.
+ pub fn none() -> Self {
+ Self {
+ inner: CurrentInner::None,
+ }
+ }
+
+ /// Constructs a new `Current` that indicates the `Subscriber` does not
+ /// track a current span.
+ pub(crate) fn unknown() -> Self {
+ Self {
+ inner: CurrentInner::Unknown,
+ }
+ }
+
+ /// Returns `true` if the `Subscriber` that constructed this `Current` tracks a
+ /// current span.
+ ///
+ /// If this returns `true` and [`id`], [`metadata`], or [`into_inner`]
+ /// return `None`, that indicates that we are currently known to *not* be
+ /// inside a span. If this returns `false`, those methods will also return
+ /// `None`, but in this case, that is because the subscriber does not keep
+ /// track of the currently-entered span.
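+    ///
+    /// For instance, a `Current` constructed with [`Current::none`] is
+    /// "known", even though it does not contain a span:
+    ///
+    /// ```
+    /// use tracing_core::span::Current;
+    ///
+    /// let current = Current::none();
+    /// assert!(current.is_known());
+    /// assert!(current.id().is_none());
+    /// ```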
+ ///
+ /// [`id`]: Current::id()
+ /// [`metadata`]: Current::metadata()
+ /// [`into_inner`]: Current::into_inner()
+ pub fn is_known(&self) -> bool {
+ !matches!(self.inner, CurrentInner::Unknown)
+ }
+
+ /// Consumes `self` and returns the span `Id` and `Metadata` of the current
+ /// span, if one exists and is known.
+ pub fn into_inner(self) -> Option<(Id, &'static Metadata<'static>)> {
+ match self.inner {
+ CurrentInner::Current { id, metadata } => Some((id, metadata)),
+ _ => None,
+ }
+ }
+
+ /// Borrows the `Id` of the current span, if one exists and is known.
+ pub fn id(&self) -> Option<&Id> {
+ match self.inner {
+ CurrentInner::Current { ref id, .. } => Some(id),
+ _ => None,
+ }
+ }
+
+ /// Borrows the `Metadata` of the current span, if one exists and is known.
+ pub fn metadata(&self) -> Option<&'static Metadata<'static>> {
+ match self.inner {
+ CurrentInner::Current { metadata, .. } => Some(metadata),
+ _ => None,
+ }
+ }
+}
+
+impl<'a> From<&'a Current> for Option<&'a Id> {
+ fn from(cur: &'a Current) -> Self {
+ cur.id()
+ }
+}
+
+impl<'a> From<&'a Current> for Option<Id> {
+ fn from(cur: &'a Current) -> Self {
+ cur.id().cloned()
+ }
+}
+
+impl From<Current> for Option<Id> {
+ fn from(cur: Current) -> Self {
+ match cur.inner {
+ CurrentInner::Current { id, .. } => Some(id),
+ _ => None,
+ }
+ }
+}
+
+impl<'a> From<&'a Current> for Option<&'static Metadata<'static>> {
+ fn from(cur: &'a Current) -> Self {
+ cur.metadata()
+ }
+}
diff --git a/third_party/rust/tracing-core/src/spin/LICENSE b/third_party/rust/tracing-core/src/spin/LICENSE
new file mode 100644
index 0000000000..84d5f4d7af
--- /dev/null
+++ b/third_party/rust/tracing-core/src/spin/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014 Mathijs van de Nes
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/third_party/rust/tracing-core/src/spin/mod.rs b/third_party/rust/tracing-core/src/spin/mod.rs
new file mode 100644
index 0000000000..148b192b34
--- /dev/null
+++ b/third_party/rust/tracing-core/src/spin/mod.rs
@@ -0,0 +1,7 @@
+//! Synchronization primitives based on spinning
+
+pub(crate) use mutex::*;
+pub(crate) use once::Once;
+
+mod mutex;
+mod once;
diff --git a/third_party/rust/tracing-core/src/spin/mutex.rs b/third_party/rust/tracing-core/src/spin/mutex.rs
new file mode 100644
index 0000000000..c261a61910
--- /dev/null
+++ b/third_party/rust/tracing-core/src/spin/mutex.rs
@@ -0,0 +1,118 @@
+use core::cell::UnsafeCell;
+use core::default::Default;
+use core::fmt;
+use core::hint;
+use core::marker::Sync;
+use core::ops::{Deref, DerefMut, Drop};
+use core::option::Option::{self, None, Some};
+use core::sync::atomic::{AtomicBool, Ordering};
+
+/// This type provides MUTual EXclusion based on spinning.
+pub(crate) struct Mutex<T: ?Sized> {
+ lock: AtomicBool,
+ data: UnsafeCell<T>,
+}
+
+/// A guard through which the protected data can be accessed
+///
+/// When the guard falls out of scope it will release the lock.
+#[derive(Debug)]
+pub(crate) struct MutexGuard<'a, T: ?Sized> {
+ lock: &'a AtomicBool,
+ data: &'a mut T,
+}
+
+// Same unsafe impls as `std::sync::Mutex`
+unsafe impl<T: ?Sized + Send> Sync for Mutex<T> {}
+unsafe impl<T: ?Sized + Send> Send for Mutex<T> {}
+
+impl<T> Mutex<T> {
+ /// Creates a new spinlock wrapping the supplied data.
+ pub(crate) const fn new(user_data: T) -> Mutex<T> {
+ Mutex {
+ lock: AtomicBool::new(false),
+ data: UnsafeCell::new(user_data),
+ }
+ }
+}
+
+impl<T: ?Sized> Mutex<T> {
+ fn obtain_lock(&self) {
+ while self
+ .lock
+ .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
+ .is_err()
+ {
+ // Wait until the lock looks unlocked before retrying
+ while self.lock.load(Ordering::Relaxed) {
+ hint::spin_loop();
+ }
+ }
+ }
+
+ /// Locks the spinlock and returns a guard.
+ ///
+ /// The returned value may be dereferenced for data access
+ /// and the lock will be dropped when the guard falls out of scope.
+ pub(crate) fn lock(&self) -> MutexGuard<'_, T> {
+ self.obtain_lock();
+ MutexGuard {
+ lock: &self.lock,
+ data: unsafe { &mut *self.data.get() },
+ }
+ }
+
+ /// Tries to lock the mutex. If it is already locked, it will return None. Otherwise it returns
+ /// a guard within Some.
+ pub(crate) fn try_lock(&self) -> Option<MutexGuard<'_, T>> {
+ if self
+ .lock
+ .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
+ .is_ok()
+ {
+ Some(MutexGuard {
+ lock: &self.lock,
+ data: unsafe { &mut *self.data.get() },
+ })
+ } else {
+ None
+ }
+ }
+}
+
+impl<T: ?Sized + fmt::Debug> fmt::Debug for Mutex<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match self.try_lock() {
+ Some(guard) => write!(f, "Mutex {{ data: ")
+ .and_then(|()| (&*guard).fmt(f))
+ .and_then(|()| write!(f, "}}")),
+ None => write!(f, "Mutex {{ <locked> }}"),
+ }
+ }
+}
+
+impl<T: ?Sized + Default> Default for Mutex<T> {
+ fn default() -> Mutex<T> {
+ Mutex::new(Default::default())
+ }
+}
+
+impl<'a, T: ?Sized> Deref for MutexGuard<'a, T> {
+ type Target = T;
+ fn deref<'b>(&'b self) -> &'b T {
+ &*self.data
+ }
+}
+
+impl<'a, T: ?Sized> DerefMut for MutexGuard<'a, T> {
+ fn deref_mut<'b>(&'b mut self) -> &'b mut T {
+ &mut *self.data
+ }
+}
+
+impl<'a, T: ?Sized> Drop for MutexGuard<'a, T> {
+ /// The dropping of the MutexGuard will release the lock it was created from.
+ fn drop(&mut self) {
+ self.lock.store(false, Ordering::Release);
+ }
+}
diff --git a/third_party/rust/tracing-core/src/spin/once.rs b/third_party/rust/tracing-core/src/spin/once.rs
new file mode 100644
index 0000000000..27c99e56ee
--- /dev/null
+++ b/third_party/rust/tracing-core/src/spin/once.rs
@@ -0,0 +1,158 @@
+use core::cell::UnsafeCell;
+use core::fmt;
+use core::hint::spin_loop;
+use core::sync::atomic::{AtomicUsize, Ordering};
+
+/// A synchronization primitive which can be used to run a one-time global
+/// initialization. Unlike its std equivalent, this is generalized so that the
+/// closure returns a value and it is stored. Once therefore acts something like
+/// a future, too.
+pub struct Once<T> {
+ state: AtomicUsize,
+ data: UnsafeCell<Option<T>>, // TODO remove option and use mem::uninitialized
+}
+
+impl<T: fmt::Debug> fmt::Debug for Once<T> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match self.r#try() {
+ Some(s) => write!(f, "Once {{ data: ")
+ .and_then(|()| s.fmt(f))
+ .and_then(|()| write!(f, "}}")),
+ None => write!(f, "Once {{ <uninitialized> }}"),
+ }
+ }
+}
+
+// Same unsafe impls as `std::sync::RwLock`, because this also allows for
+// concurrent reads.
+unsafe impl<T: Send + Sync> Sync for Once<T> {}
+unsafe impl<T: Send> Send for Once<T> {}
+
+// Four states that a Once can be in, encoded into the lower bits of `state` in
+// the Once structure.
+const INCOMPLETE: usize = 0x0;
+const RUNNING: usize = 0x1;
+const COMPLETE: usize = 0x2;
+const PANICKED: usize = 0x3;
+
+use core::hint::unreachable_unchecked as unreachable;
+
+impl<T> Once<T> {
+ /// Initialization constant of `Once`.
+ pub const INIT: Self = Once {
+ state: AtomicUsize::new(INCOMPLETE),
+ data: UnsafeCell::new(None),
+ };
+
+ /// Creates a new `Once` value.
+ pub const fn new() -> Once<T> {
+ Self::INIT
+ }
+
+ fn force_get<'a>(&'a self) -> &'a T {
+ match unsafe { &*self.data.get() }.as_ref() {
+ None => unsafe { unreachable() },
+ Some(p) => p,
+ }
+ }
+
+ /// Performs an initialization routine once and only once. The given closure
+ /// will be executed if this is the first time `call_once` has been called,
+ /// and otherwise the routine will *not* be invoked.
+ ///
+ /// This method will block the calling thread if another initialization
+ /// routine is currently running.
+ ///
+ /// When this function returns, it is guaranteed that some initialization
+ /// has run and completed (it may not be the closure specified). The
+ /// returned pointer will point to the result from the closure that was
+ /// run.
+ pub fn call_once<'a, F>(&'a self, builder: F) -> &'a T
+ where
+ F: FnOnce() -> T,
+ {
+ let mut status = self.state.load(Ordering::SeqCst);
+
+ if status == INCOMPLETE {
+ status = match self.state.compare_exchange(
+ INCOMPLETE,
+ RUNNING,
+ Ordering::SeqCst,
+ Ordering::SeqCst,
+ ) {
+ Ok(status) => {
+ debug_assert_eq!(
+ status, INCOMPLETE,
+ "if compare_exchange succeeded, previous status must be incomplete",
+ );
+ // We init
+ // We use a guard (Finish) to catch panics caused by builder
+ let mut finish = Finish {
+ state: &self.state,
+ panicked: true,
+ };
+ unsafe { *self.data.get() = Some(builder()) };
+ finish.panicked = false;
+
+ self.state.store(COMPLETE, Ordering::SeqCst);
+
+ // This next line is strictly an optimization
+ return self.force_get();
+ }
+ Err(status) => status,
+ }
+ }
+
+ loop {
+ match status {
+ INCOMPLETE => unreachable!(),
+ RUNNING => {
+ // We spin
+ spin_loop();
+ status = self.state.load(Ordering::SeqCst)
+ }
+ PANICKED => panic!("Once has panicked"),
+ COMPLETE => return self.force_get(),
+ _ => unsafe { unreachable() },
+ }
+ }
+ }
+
+ /// Returns a pointer iff the `Once` was previously initialized
+ pub fn r#try<'a>(&'a self) -> Option<&'a T> {
+ match self.state.load(Ordering::SeqCst) {
+ COMPLETE => Some(self.force_get()),
+ _ => None,
+ }
+ }
+
+ /// Like try, but will spin if the `Once` is in the process of being
+ /// initialized
+ pub fn wait<'a>(&'a self) -> Option<&'a T> {
+ loop {
+ match self.state.load(Ordering::SeqCst) {
+ INCOMPLETE => return None,
+
+ RUNNING => {
+ spin_loop() // We spin
+ }
+ COMPLETE => return Some(self.force_get()),
+ PANICKED => panic!("Once has panicked"),
+ _ => unsafe { unreachable() },
+ }
+ }
+ }
+}
+
+struct Finish<'a> {
+ state: &'a AtomicUsize,
+ panicked: bool,
+}
+
+impl<'a> Drop for Finish<'a> {
+ fn drop(&mut self) {
+ if self.panicked {
+ self.state.store(PANICKED, Ordering::SeqCst);
+ }
+ }
+}
diff --git a/third_party/rust/tracing-core/src/stdlib.rs b/third_party/rust/tracing-core/src/stdlib.rs
new file mode 100644
index 0000000000..741549519c
--- /dev/null
+++ b/third_party/rust/tracing-core/src/stdlib.rs
@@ -0,0 +1,78 @@
+//! Re-exports either the Rust `std` library or `core` and `alloc` when `std` is
+//! disabled.
+//!
+//! `crate::stdlib::...` should be used rather than `std::` when adding code that
+//! will be available with the standard library disabled.
+//!
+//! Note that this module is called `stdlib` rather than `std`, as Rust 1.34.0
+//! does not permit redefining the name `std` (although this works on the
+//! latest stable Rust).
+#[cfg(feature = "std")]
+pub(crate) use std::*;
+
+#[cfg(not(feature = "std"))]
+pub(crate) use self::no_std::*;
+
+#[cfg(not(feature = "std"))]
+mod no_std {
+ // We pre-emptively export everything from libcore/liballoc, (even modules
+ // we aren't using currently) to make adding new code easier. Therefore,
+ // some of these imports will be unused.
+ #![allow(unused_imports)]
+
+ pub(crate) use core::{
+ any, array, ascii, cell, char, clone, cmp, convert, default, f32, f64, ffi, future, hash,
+ hint, i128, i16, i8, isize, iter, marker, mem, num, ops, option, pin, ptr, result, task,
+ time, u128, u16, u32, u8, usize,
+ };
+
+ pub(crate) use alloc::{boxed, collections, rc, string, vec};
+
+ pub(crate) mod borrow {
+ pub(crate) use alloc::borrow::*;
+ pub(crate) use core::borrow::*;
+ }
+
+ pub(crate) mod fmt {
+ pub(crate) use alloc::fmt::*;
+ pub(crate) use core::fmt::*;
+ }
+
+ pub(crate) mod slice {
+ pub(crate) use alloc::slice::*;
+ pub(crate) use core::slice::*;
+ }
+
+ pub(crate) mod str {
+ pub(crate) use alloc::str::*;
+ pub(crate) use core::str::*;
+ }
+
+ pub(crate) mod sync {
+ pub(crate) use crate::spin::MutexGuard;
+ pub(crate) use alloc::sync::*;
+ pub(crate) use core::sync::*;
+
+ /// This wraps `spin::Mutex` to return a `Result`, so that it can be
+ /// used with code written against `std::sync::Mutex`.
+ ///
+ /// Since `spin::Mutex` doesn't support poisoning, the `Result` returned
+ /// by `lock` will always be `Ok`.
+ #[derive(Debug, Default)]
+ pub(crate) struct Mutex<T> {
+ inner: crate::spin::Mutex<T>,
+ }
+
+ impl<T> Mutex<T> {
+ // pub(crate) fn new(data: T) -> Self {
+ // Self {
+ // inner: crate::spin::Mutex::new(data),
+ // }
+ // }
+
+ pub(crate) fn lock(&self) -> Result<MutexGuard<'_, T>, ()> {
+ Ok(self.inner.lock())
+ }
+ }
+ }
+}
diff --git a/third_party/rust/tracing-core/src/subscriber.rs b/third_party/rust/tracing-core/src/subscriber.rs
new file mode 100644
index 0000000000..e8f4441196
--- /dev/null
+++ b/third_party/rust/tracing-core/src/subscriber.rs
@@ -0,0 +1,870 @@
+//! Subscribers collect and record trace data.
+use crate::{span, Dispatch, Event, LevelFilter, Metadata};
+
+use crate::stdlib::{
+ any::{Any, TypeId},
+ boxed::Box,
+ sync::Arc,
+};
+
+/// Trait representing the functions required to collect trace data.
+///
+/// Crates that provide implementations of methods for collecting or recording
+/// trace data should implement the `Subscriber` interface. This trait is
+/// intended to represent fundamental primitives for collecting trace events and
+/// spans — other libraries may offer utility functions and types to make
+/// subscriber implementations more modular or improve the ergonomics of writing
+/// subscribers.
+///
+/// A subscriber is responsible for the following:
+/// - Registering new spans as they are created, and providing them with span
+/// IDs. Implicitly, this means the subscriber may determine the strategy for
+/// determining span equality.
+/// - Recording the attachment of field values and follows-from annotations to
+/// spans.
+/// - Filtering spans and events, and determining when those filters must be
+/// invalidated.
+/// - Observing spans as they are entered, exited, and closed, and events as
+/// they occur.
+///
+/// When a span is entered or exited, the subscriber is provided only with the
+/// [ID] with which it tagged that span when it was created. This means
+/// that it is up to the subscriber to determine whether and how span _data_ —
+/// the fields and metadata describing the span — should be stored. The
+/// [`new_span`] function is called when a new span is created, and at that
+/// point, the subscriber _may_ choose to store the associated data if it will
+/// be referenced again. However, if the data has already been recorded and will
+/// not be needed by the implementations of `enter` and `exit`, the subscriber
+/// may freely discard that data without allocating space to store it.
+///
+/// ## Overriding default impls
+///
+/// Some trait methods on `Subscriber` have default implementations, either in
+/// order to reduce the surface area of implementing `Subscriber`, or for
+/// backward-compatibility reasons. However, many subscribers will likely want
+/// to override these default implementations.
+///
+/// The following methods are likely of interest:
+///
+/// - [`register_callsite`] is called once for each callsite from which a span
+///   or event may originate, and returns an [`Interest`] value describing whether or
+/// not the subscriber wishes to see events or spans from that callsite. By
+/// default, it calls [`enabled`], and returns `Interest::always()` if
+/// `enabled` returns true, or `Interest::never()` if enabled returns false.
+/// However, if the subscriber's interest can change dynamically at runtime,
+/// it may want to override this function to return `Interest::sometimes()`.
+/// Additionally, subscribers which wish to perform a behaviour once for each
+/// callsite, such as allocating storage for data related to that callsite,
+/// can perform it in `register_callsite`.
+///
+/// See also the [documentation on the callsite registry][cs-reg] for details
+/// on [`register_callsite`].
+///
+/// - [`event_enabled`] is called once before every call to the [`event`]
+/// method. This can be used to implement filtering on events once their field
+/// values are known, but before any processing is done in the `event` method.
+/// - [`clone_span`] is called every time a span ID is cloned, and [`try_close`]
+/// is called when a span ID is dropped. By default, these functions do
+/// nothing. However, they can be used to implement reference counting for
+/// spans, allowing subscribers to free storage for span data and to determine
+/// when a span has _closed_ permanently (rather than being exited).
+/// Subscribers which store per-span data or which need to track span closures
+/// should override these functions together.
+///
+/// [ID]: super::span::Id
+/// [`new_span`]: Subscriber::new_span
+/// [`register_callsite`]: Subscriber::register_callsite
+/// [`enabled`]: Subscriber::enabled
+/// [`clone_span`]: Subscriber::clone_span
+/// [`try_close`]: Subscriber::try_close
+/// [cs-reg]: crate::callsite#registering-callsites
+/// [`event`]: Subscriber::event
+/// [`event_enabled`]: Subscriber::event_enabled
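+///
+/// ## Example
+///
+/// A minimal sketch of a subscriber that counts events and hands out span IDs
+/// from an atomic counter (the `CountingSubscriber` type is illustrative, not
+/// part of this crate):
+///
+/// ```rust,ignore
+/// use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
+/// use tracing_core::{span, Event, Metadata, Subscriber};
+///
+/// struct CountingSubscriber {
+///     next_id: AtomicU64,
+///     events: AtomicUsize,
+/// }
+///
+/// impl Subscriber for CountingSubscriber {
+///     fn enabled(&self, _metadata: &Metadata<'_>) -> bool {
+///         // Record everything; a real subscriber would filter here.
+///         true
+///     }
+///
+///     fn new_span(&self, _span: &span::Attributes<'_>) -> span::Id {
+///         // Span IDs must be non-zero.
+///         span::Id::from_u64(self.next_id.fetch_add(1, Ordering::Relaxed) + 1)
+///     }
+///
+///     fn record(&self, _span: &span::Id, _values: &span::Record<'_>) {}
+///     fn record_follows_from(&self, _span: &span::Id, _follows: &span::Id) {}
+///
+///     fn event(&self, _event: &Event<'_>) {
+///         self.events.fetch_add(1, Ordering::Relaxed);
+///     }
+///
+///     fn enter(&self, _span: &span::Id) {}
+///     fn exit(&self, _span: &span::Id) {}
+/// }
+/// ```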
+pub trait Subscriber: 'static {
+ /// Invoked when this subscriber becomes a [`Dispatch`].
+ ///
+ /// ## Avoiding Memory Leaks
+ ///
+ /// `Subscriber`s should not store their own [`Dispatch`]. Because the
+ /// `Dispatch` owns the `Subscriber`, storing the `Dispatch` within the
+ /// `Subscriber` will create a reference count cycle, preventing the `Dispatch`
+ /// from ever being dropped.
+ ///
+ /// Instead, when it is necessary to store a cyclical reference to the
+ /// `Dispatch` within a `Subscriber`, use [`Dispatch::downgrade`] to convert a
+ /// `Dispatch` into a [`WeakDispatch`]. This type is analogous to
+ /// [`std::sync::Weak`], and does not create a reference count cycle. A
+ /// [`WeakDispatch`] can be stored within a `Subscriber` without causing a
+ /// memory leak, and can be [upgraded] into a `Dispatch` temporarily when
+ /// the `Dispatch` must be accessed by the `Subscriber`.
+ ///
+ /// [`WeakDispatch`]: crate::dispatcher::WeakDispatch
+ /// [upgraded]: crate::dispatcher::WeakDispatch::upgrade
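+ ///
+ /// A sketch of this pattern; `MySubscriber` and its `handle` field are
+ /// illustrative, not part of this crate:
+ ///
+ /// ```rust,ignore
+ /// use std::sync::Mutex;
+ /// use tracing_core::{dispatcher::WeakDispatch, Dispatch, Subscriber};
+ ///
+ /// struct MySubscriber {
+ ///     handle: Mutex<Option<WeakDispatch>>,
+ ///     // ...
+ /// }
+ ///
+ /// impl Subscriber for MySubscriber {
+ ///     fn on_register_dispatch(&self, dispatch: &Dispatch) {
+ ///         // Store only a weak handle so the `Dispatch` can still be dropped.
+ ///         *self.handle.lock().unwrap() = Some(dispatch.downgrade());
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```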
+ fn on_register_dispatch(&self, subscriber: &Dispatch) {
+ let _ = subscriber;
+ }
+
+ /// Registers a new [callsite] with this subscriber, returning whether or not
+ /// the subscriber is interested in being notified about the callsite.
+ ///
+ /// By default, this function assumes that the subscriber's [filter]
+ /// represents an unchanging view of its interest in the callsite. However,
+ /// if this is not the case, subscribers may override this function to
+ /// indicate different interests, or to implement behaviour that should run
+ /// once for every callsite.
+ ///
+ /// This function is guaranteed to be called at least once per callsite on
+ /// every active subscriber. The subscriber may store the keys to fields it
+ /// cares about in order to reduce the cost of accessing fields by name,
+ /// preallocate storage for that callsite, or perform any other actions it
+ /// wishes to perform once for each callsite.
+ ///
+ /// The subscriber should then return an [`Interest`], indicating
+ /// whether it is interested in being notified about that callsite in the
+ /// future. This may be `Always` indicating that the subscriber always
+ /// wishes to be notified about the callsite, and its filter need not be
+ /// re-evaluated; `Sometimes`, indicating that the subscriber may sometimes
+ /// care about the callsite but not always (such as when sampling), or
+ /// `Never`, indicating that the subscriber never wishes to be notified about
+ /// that callsite. If all active subscribers return `Never`, a callsite will
+ /// never be enabled unless a new subscriber expresses interest in it.
+ ///
+ /// `Subscriber`s which require their filters to be run every time an event
+ /// occurs or a span is entered/exited should return `Interest::sometimes`.
+ /// If a subscriber returns `Interest::sometimes`, then its [`enabled`] method
+ /// will be called every time an event or span is created from that callsite.
+ ///
+ /// For example, suppose a sampling subscriber is implemented by
+ /// incrementing a counter every time `enabled` is called and only returning
+ /// `true` when the counter is divisible by a specified sampling rate. If
+ /// that subscriber returns `Interest::always` from `register_callsite`, then
+ /// the filter will not be re-evaluated once it has been applied to a given
+ /// set of metadata. Thus, the counter will not be incremented, and the span
+ /// or event that corresponds to the metadata will never be `enabled`.
+ ///
+ /// `Subscriber`s that need to change their filters occasionally should call
+ /// [`rebuild_interest_cache`] to re-evaluate `register_callsite` for all
+ /// callsites.
+ ///
+ /// Similarly, if a `Subscriber` has a filtering strategy that can be
+ /// changed dynamically at runtime, it would need to re-evaluate that filter
+ /// if the cached results have changed.
+ ///
+ /// A subscriber which manages fanout to multiple other subscribers
+ /// should proxy this decision to all of its child subscribers,
+ /// returning `Interest::never` only if _all_ such children return
+ /// `Interest::never`. If the set of subscribers to which spans are
+ /// broadcast may change dynamically, the subscriber should also never
+ /// return `Interest::Never`, as a new subscriber may be added that _is_
+ /// interested.
+ ///
+ /// See the [documentation on the callsite registry][cs-reg] for more
+ /// details on how and when the `register_callsite` method is called.
+ ///
+ /// # Notes
+ /// This function may be called again when a new subscriber is created or
+ /// when the registry is invalidated.
+ ///
+ /// If a subscriber returns `Interest::never` for a particular callsite, it
+ /// _may_ still see spans and events originating from that callsite, if
+ /// another subscriber expressed interest in it.
+ ///
+ /// [callsite]: crate::callsite
+ /// [filter]: Self::enabled
+ /// [metadata]: super::metadata::Metadata
+ /// [`enabled`]: Subscriber::enabled()
+ /// [`rebuild_interest_cache`]: super::callsite::rebuild_interest_cache
+ /// [cs-reg]: crate::callsite#registering-callsites
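+ ///
+ /// For example, a subscriber whose filter can change at runtime might defer
+ /// the decision to [`enabled`]. This is only a sketch: `DynamicFilter` and
+ /// its `could_ever_enable` method are illustrative, not part of this crate.
+ ///
+ /// ```rust,ignore
+ /// use tracing_core::{subscriber::Interest, Metadata, Subscriber};
+ ///
+ /// impl Subscriber for DynamicFilter {
+ ///     fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
+ ///         // `could_ever_enable` is a hypothetical coarse check.
+ ///         if self.could_ever_enable(metadata) {
+ ///             // The filter may change, so `enabled` must be asked each time.
+ ///             Interest::sometimes()
+ ///         } else {
+ ///             Interest::never()
+ ///         }
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```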
+ fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
+ if self.enabled(metadata) {
+ Interest::always()
+ } else {
+ Interest::never()
+ }
+ }
+
+ /// Returns true if a span or event with the specified [metadata] would be
+ /// recorded.
+ ///
+ /// By default, it is assumed that this filter needs only be evaluated once
+ /// for each callsite, so it is called by [`register_callsite`] when each
+ /// callsite is registered. The result is used to determine if the subscriber
+ /// is always [interested] or never interested in that callsite. This is intended
+ /// primarily as an optimization, so that expensive filters (such as those
+ /// involving string search, et cetera) need not be re-evaluated.
+ ///
+ /// However, if the subscriber's interest in a particular span or event may
+ /// change, or depends on contexts only determined dynamically at runtime,
+ /// then the `register_callsite` method should be overridden to return
+ /// [`Interest::sometimes`]. In that case, this function will be called every
+ /// time that span or event occurs.
+ ///
+ /// [metadata]: super::metadata::Metadata
+ /// [interested]: Interest
+ /// [`Interest::sometimes`]: Interest::sometimes
+ /// [`register_callsite`]: Subscriber::register_callsite()
+ fn enabled(&self, metadata: &Metadata<'_>) -> bool;
+
+ /// Returns the highest [verbosity level][level] that this `Subscriber` will
+ /// enable, or `None`, if the subscriber does not implement level-based
+ /// filtering or chooses not to implement this method.
+ ///
+ /// If this method returns a [`Level`][level], it will be used as a hint to
+ /// determine the most verbose level that will be enabled. This will allow
+ /// spans and events which are more verbose than that level to be skipped
+ /// more efficiently. Subscribers which perform filtering are strongly
+ /// encouraged to provide an implementation of this method.
+ ///
+ /// If the maximum level the subscriber will enable can change over the
+ /// course of its lifetime, it is free to return a different value from
+ /// multiple invocations of this method. However, note that changes in the
+ /// maximum level will **only** be reflected after the callsite [`Interest`]
+ /// cache is rebuilt, by calling the [`callsite::rebuild_interest_cache`][rebuild]
+ /// function. Therefore, if the subscriber will change the value returned by
+ /// this method, it is responsible for ensuring that
+ /// [`rebuild_interest_cache`][rebuild] is called after the value of the max
+ /// level changes.
+ ///
+ /// [level]: super::Level
+ /// [rebuild]: super::callsite::rebuild_interest_cache
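+ ///
+ /// For example, a subscriber that only ever enables warnings and errors
+ /// might hint as follows (a sketch; `WarnOnlySubscriber` is illustrative):
+ ///
+ /// ```rust,ignore
+ /// use tracing_core::{LevelFilter, Subscriber};
+ ///
+ /// impl Subscriber for WarnOnlySubscriber {
+ ///     fn max_level_hint(&self) -> Option<LevelFilter> {
+ ///         // Nothing more verbose than WARN will ever be enabled.
+ ///         Some(LevelFilter::WARN)
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```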
+ fn max_level_hint(&self) -> Option<LevelFilter> {
+ None
+ }
+
+ /// Visit the construction of a new span, returning a new [span ID] for the
+ /// span being constructed.
+ ///
+ /// The provided [`Attributes`] contains any field values that were provided
+ /// when the span was created. The subscriber may pass a [visitor] to the
+ /// `Attributes`' [`record` method] to record these values.
+ ///
+ /// IDs are used to uniquely identify spans and events within the context of a
+ /// subscriber, so span equality will be based on the returned ID. Thus, if
+ /// the subscriber wishes for all spans with the same metadata to be
+ /// considered equal, it should return the same ID every time it is given a
+ /// particular set of metadata. Similarly, if it wishes for two separate
+ /// instances of a span with the same metadata to *not* be equal, it should
+ /// return a distinct ID every time this function is called, regardless of
+ /// the metadata.
+ ///
+ /// Note that the subscriber is free to assign span IDs based on whatever
+ /// scheme it sees fit. Any guarantees about uniqueness, ordering, or ID
+ /// reuse are left up to the subscriber implementation to determine.
+ ///
+ /// [span ID]: super::span::Id
+ /// [`Attributes`]: super::span::Attributes
+ /// [visitor]: super::field::Visit
+ /// [`record` method]: super::span::Attributes::record
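+ ///
+ /// A sketch of recording the initial field values and assigning a fresh ID;
+ /// `MySubscriber`, its `FieldRecorder` visitor, and its `next_id` counter are
+ /// illustrative, not part of this crate:
+ ///
+ /// ```rust,ignore
+ /// use std::sync::atomic::Ordering;
+ /// use tracing_core::{span, Subscriber};
+ ///
+ /// impl Subscriber for MySubscriber {
+ ///     fn new_span(&self, attrs: &span::Attributes<'_>) -> span::Id {
+ ///         // Capture any field values that were provided at creation time.
+ ///         let mut visitor = FieldRecorder::default();
+ ///         attrs.record(&mut visitor);
+ ///
+ ///         // IDs must be non-zero; `next_id` is an `AtomicU64` starting at 1.
+ ///         span::Id::from_u64(self.next_id.fetch_add(1, Ordering::Relaxed))
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```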
+ fn new_span(&self, span: &span::Attributes<'_>) -> span::Id;
+
+ // === Notification methods ===============================================
+
+ /// Record a set of values on a span.
+ ///
+ /// This method will be invoked when a value is recorded on a span.
+ /// Recording multiple values for the same field is possible,
+ /// but the actual behaviour is defined by the subscriber implementation.
+ ///
+ /// Keep in mind that a span might not provide a value
+ /// for each field it declares.
+ ///
+ /// The subscriber is expected to provide a [visitor] to the `Record`'s
+ /// [`record` method] in order to record the added values.
+ ///
+ /// # Example
+ /// "foo = 3" will be recorded when [`record`] is called on the
+ /// `Attributes` passed to `new_span`.
+ /// Since values are not provided for the `bar` and `baz` fields,
+ /// the span's `Metadata` will indicate that it _has_ those fields,
+ /// but values for them won't be recorded at this time.
+ ///
+ /// ```rust,ignore
+ /// # use tracing::span;
+ ///
+ /// let mut span = span!("my_span", foo = 3, bar, baz);
+ ///
+ /// // `Subscriber::record` will be called with a `Record`
+ /// // containing "bar = false"
+ /// span.record("bar", &false);
+ ///
+ /// // `Subscriber::record` will be called with a `Record`
+ /// // containing "baz = "a string""
+ /// span.record("baz", &"a string");
+ /// ```
+ ///
+ /// [visitor]: super::field::Visit
+ /// [`record`]: super::span::Attributes::record
+ /// [`record` method]: super::span::Record::record
+ fn record(&self, span: &span::Id, values: &span::Record<'_>);
+
+ /// Adds an indication that `span` follows from the span with the id
+ /// `follows`.
+ ///
+ /// This relationship differs somewhat from the parent-child relationship: a
+ /// span may have any number of prior spans, rather than a single one; and
+ /// spans are not considered to be executing _inside_ of the spans they
+ /// follow from. This means that a span may close even if subsequent spans
+ /// that follow from it are still open, and time spent inside of a
+ /// subsequent span should not be included in the time its precedents were
+ /// executing. This is used to model causal relationships such as when a
+ /// single future spawns several related background tasks, et cetera.
+ ///
+ /// If the subscriber has spans corresponding to the given IDs, it should
+ /// record this relationship in whatever way it deems necessary. Otherwise,
+ /// if one or both of the given span IDs do not correspond to spans that the
+ /// subscriber knows about, or if a cyclical relationship would be created
+ /// (i.e., some span _a_ which precedes some other span _b_ may not also
+ /// follow from _b_), it may silently do nothing.
+ fn record_follows_from(&self, span: &span::Id, follows: &span::Id);
+
+ /// Determine if an [`Event`] should be recorded.
+ ///
+ /// By default, this returns `true` and `Subscriber`s can filter events in
+ /// [`event`][Self::event] without any penalty. However, when `event` is
+ /// more complicated, this can be used to determine if `event` should be
+ /// called at all, separating out the decision from the processing.
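+ ///
+ /// A sketch of gating `event` on a field value, using an ad-hoc field
+ /// visitor; `MySubscriber` and the `error` field name are illustrative:
+ ///
+ /// ```rust,ignore
+ /// use tracing_core::{field, Event, Field, Subscriber};
+ ///
+ /// impl Subscriber for MySubscriber {
+ ///     fn event_enabled(&self, event: &Event<'_>) -> bool {
+ ///         struct HasError(bool);
+ ///         impl field::Visit for HasError {
+ ///             fn record_bool(&mut self, field: &Field, value: bool) {
+ ///                 if field.name() == "error" && value {
+ ///                     self.0 = true;
+ ///                 }
+ ///             }
+ ///             fn record_debug(&mut self, _: &Field, _: &dyn std::fmt::Debug) {}
+ ///         }
+ ///
+ ///         let mut visitor = HasError(false);
+ ///         event.record(&mut visitor);
+ ///         // Only hand events that set `error = true` to `event`.
+ ///         visitor.0
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```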
+ fn event_enabled(&self, event: &Event<'_>) -> bool {
+ let _ = event;
+ true
+ }
+
+ /// Records that an [`Event`] has occurred.
+ ///
+ /// This method will be invoked when an Event is constructed by
+ /// the `Event`'s [`dispatch` method]. For example, this happens internally
+ /// when an event macro from `tracing` is called.
+ ///
+ /// The key difference between this method and `record` is that `record` is
+ /// called when a value is recorded for a field defined by a span,
+ /// while `event` is called when a new event occurs.
+ ///
+ /// The provided `Event` struct contains any field values attached to the
+ /// event. The subscriber may pass a [visitor] to the `Event`'s
+ /// [`record` method] to record these values.
+ ///
+ /// [`Event`]: super::event::Event
+ /// [visitor]: super::field::Visit
+ /// [`record` method]: super::event::Event::record
+ /// [`dispatch` method]: super::event::Event::dispatch
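+ ///
+ /// A sketch of formatting an event's fields with a [visitor]; `MySubscriber`
+ /// is illustrative, not part of this crate:
+ ///
+ /// ```rust,ignore
+ /// use std::fmt::Debug;
+ /// use tracing_core::{field::Visit, Event, Field, Subscriber};
+ ///
+ /// struct PrintVisitor;
+ ///
+ /// impl Visit for PrintVisitor {
+ ///     fn record_debug(&mut self, field: &Field, value: &dyn Debug) {
+ ///         println!("  {} = {:?}", field.name(), value);
+ ///     }
+ /// }
+ ///
+ /// impl Subscriber for MySubscriber {
+ ///     fn event(&self, event: &Event<'_>) {
+ ///         println!("event: {}", event.metadata().name());
+ ///         event.record(&mut PrintVisitor);
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```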
+ fn event(&self, event: &Event<'_>);
+
+ /// Records that a span has been entered.
+ ///
+ /// When entering a span, this method is called to notify the subscriber
+ /// that the span has been entered. The subscriber is provided with the
+ /// [span ID] of the entered span, and should update any internal state
+ /// tracking the current span accordingly.
+ ///
+ /// [span ID]: super::span::Id
+ fn enter(&self, span: &span::Id);
+
+ /// Records that a span has been exited.
+ ///
+ /// When exiting a span, this method is called to notify the subscriber
+ /// that the span has been exited. The subscriber is provided with the
+ /// [span ID] of the exited span, and should update any internal state
+ /// tracking the current span accordingly.
+ ///
+ /// Exiting a span does not imply that the span will not be re-entered.
+ ///
+ /// [span ID]: super::span::Id
+ fn exit(&self, span: &span::Id);
+
+ /// Notifies the subscriber that a [span ID] has been cloned.
+ ///
+ /// This function is guaranteed to only be called with span IDs that were
+ /// returned by this subscriber's `new_span` function.
+ ///
+ /// Note that the default implementation of this function is just the
+ /// identity function, passing through the identifier. However, it can be
+ /// used in conjunction with [`try_close`] to track the number of handles
+ /// capable of `enter`ing a span. When all the handles have been dropped
+ /// (i.e., `try_close` has been called one more time than `clone_span` for a
+ /// given ID), the subscriber may assume that the span will not be entered
+ /// again. It is then free to deallocate storage for data associated with
+ /// that span, write data from that span to IO, and so on.
+ ///
+ /// For more unsafe situations, however, if `id` is itself a pointer of some
+ /// kind this can be used as a hook to "clone" the pointer, depending on
+ /// what that means for the specified pointer.
+ ///
+ /// [span ID]: super::span::Id
+ /// [`try_close`]: Subscriber::try_close
+ fn clone_span(&self, id: &span::Id) -> span::Id {
+ id.clone()
+ }
+
+ /// **This method is deprecated.**
+ ///
+ /// Using `drop_span` may prevent subscribers composed using the
+ /// `tracing-subscriber` crate's `Layer` trait from observing close events.
+ /// Use [`try_close`] instead.
+ ///
+ /// The default implementation of this function does nothing.
+ ///
+ /// [`try_close`]: Subscriber::try_close
+ #[deprecated(since = "0.1.2", note = "use `Subscriber::try_close` instead")]
+ fn drop_span(&self, _id: span::Id) {}
+
+ /// Notifies the subscriber that a [span ID] has been dropped, and returns
+ /// `true` if there are now 0 IDs that refer to that span.
+ ///
+ /// Higher-level libraries providing functionality for composing multiple
+ /// subscriber implementations may use this return value to notify any
+ /// "layered" subscribers that this subscriber considers the span closed.
+ ///
+ /// The default implementation of this method calls the subscriber's
+ /// [`drop_span`] method and returns `false`. This means that, unless the
+ /// subscriber overrides the default implementation, close notifications
+ /// will never be sent to any layered subscribers. In general, if the
+ /// subscriber tracks reference counts, this method should be implemented,
+ /// rather than `drop_span`.
+ ///
+ /// This function is guaranteed to only be called with span IDs that were
+ /// returned by this subscriber's `new_span` function.
+ ///
+ /// It's guaranteed that if this function has been called once more than the
+ /// number of times `clone_span` was called with the same `id`, then no more
+ /// handles that can enter the span with that `id` exist. This means that it
+ /// can be used in conjunction with [`clone_span`] to track the number of
+ /// handles capable of `enter`ing a span. When all the handles have been
+ /// dropped (i.e., `try_close` has been called one more time than
+ /// `clone_span` for a given ID), the subscriber may assume that the span
+ /// will not be entered again, and should return `true`. It is then free to
+ /// deallocate storage for data associated with that span, write data from
+ /// that span to IO, and so on.
+ ///
+ /// **Note**: since this function is called when spans are dropped,
+ /// implementations should ensure that they are unwind-safe. Panicking from
+ /// inside of a `try_close` function may cause a double panic, if the span
+ /// was dropped due to a thread unwinding.
+ ///
+ /// [span ID]: super::span::Id
+ /// [`clone_span`]: Subscriber::clone_span
+ /// [`drop_span`]: Subscriber::drop_span
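+ ///
+ /// A sketch of per-span reference counting; `MySubscriber` and its `counts`
+ /// map are illustrative, and a real subscriber would initialize an entry in
+ /// `new_span` and handle unknown IDs more carefully:
+ ///
+ /// ```rust,ignore
+ /// use tracing_core::{span, Subscriber};
+ ///
+ /// impl Subscriber for MySubscriber {
+ ///     fn clone_span(&self, id: &span::Id) -> span::Id {
+ ///         // `self.counts` is a hypothetical `Mutex<HashMap<span::Id, usize>>`.
+ ///         // One more handle now refers to this span.
+ ///         *self.counts.lock().unwrap().entry(id.clone()).or_insert(1) += 1;
+ ///         id.clone()
+ ///     }
+ ///
+ ///     fn try_close(&self, id: span::Id) -> bool {
+ ///         let mut counts = self.counts.lock().unwrap();
+ ///         let count = counts.entry(id.clone()).or_insert(1);
+ ///         *count -= 1;
+ ///         if *count == 0 {
+ ///             // No handles remain; the span is closed and its data can be freed.
+ ///             counts.remove(&id);
+ ///             true
+ ///         } else {
+ ///             false
+ ///         }
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```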
+ fn try_close(&self, id: span::Id) -> bool {
+ #[allow(deprecated)]
+ self.drop_span(id);
+ false
+ }
+
+ /// Returns a type representing this subscriber's view of the current span.
+ ///
+ /// If subscribers track a current span, they should override this function
+ /// to return [`Current::new`] if the thread from which this method is
+ /// called is inside a span, or [`Current::none`] if the thread is not
+ /// inside a span.
+ ///
+ /// By default, this returns a value indicating that the subscriber
+ /// does **not** track what span is current. If the subscriber does not
+ /// implement a current span, it should not override this method.
+ ///
+ /// [`Current::new`]: super::span::Current::new
+ /// [`Current::none`]: super::span::Current::none
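+ ///
+ /// A sketch for a subscriber that tracks a per-thread span stack;
+ /// `MySubscriber` and its `current_entered` helper are illustrative:
+ ///
+ /// ```rust,ignore
+ /// use tracing_core::{span, Subscriber};
+ ///
+ /// impl Subscriber for MySubscriber {
+ ///     fn current_span(&self) -> span::Current {
+ ///         // `current_entered` is a hypothetical lookup of the innermost
+ ///         // entered span's ID and `&'static Metadata` on this thread.
+ ///         match self.current_entered() {
+ ///             Some((id, metadata)) => span::Current::new(id, metadata),
+ ///             None => span::Current::none(),
+ ///         }
+ ///     }
+ ///
+ ///     // ... other `Subscriber` methods ...
+ /// }
+ /// ```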
+ fn current_span(&self) -> span::Current {
+ span::Current::unknown()
+ }
+
+ // === Downcasting methods ================================================
+
+ /// If `self` is the same type as the provided `TypeId`, returns an untyped
+ /// `*const` pointer to that type. Otherwise, returns `None`.
+ ///
+ /// If you wish to downcast a `Subscriber`, it is strongly advised to use
+ /// the safe API provided by [`downcast_ref`] instead.
+ ///
+ /// This API is required for `downcast_raw` to be a trait method; a method
+ /// signature like [`downcast_ref`] (with a generic type parameter) is not
+ /// object-safe, and thus cannot be a trait method for `Subscriber`. This
+ /// means that if we only exposed `downcast_ref`, `Subscriber`
+ /// implementations could not override the downcasting behavior.
+ ///
+ /// This method may be overridden by "fan out" or "chained" subscriber
+ /// implementations which consist of multiple composed types. Such
+ /// subscribers might allow `downcast_raw` by returning references to those
+ /// components, if they contain components with the given `TypeId`.
+ ///
+ /// # Safety
+ ///
+ /// The [`downcast_ref`] method expects that the pointer returned by
+ /// `downcast_raw` is non-null and points to a valid instance of the type
+ /// with the provided `TypeId`. Failure to ensure this will result in
+ /// undefined behaviour, so implementing `downcast_raw` is unsafe.
+ ///
+ /// [`downcast_ref`]: #method.downcast_ref
+ unsafe fn downcast_raw(&self, id: TypeId) -> Option<*const ()> {
+ if id == TypeId::of::<Self>() {
+ Some(self as *const Self as *const ())
+ } else {
+ None
+ }
+ }
+}
+
+impl dyn Subscriber {
+ /// Returns `true` if this `Subscriber` is the same type as `T`.
+ pub fn is<T: Any>(&self) -> bool {
+ self.downcast_ref::<T>().is_some()
+ }
+
+ /// Returns some reference to this `Subscriber` value if it is of type `T`,
+ /// or `None` if it isn't.
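+ ///
+ /// For example (a sketch; `MySubscriber` is illustrative):
+ ///
+ /// ```rust,ignore
+ /// use tracing_core::Subscriber;
+ ///
+ /// let subscriber: Box<dyn Subscriber> = Box::new(MySubscriber::new());
+ /// assert!(subscriber.is::<MySubscriber>());
+ ///
+ /// // Borrow the concrete type back out of the trait object.
+ /// let my: &MySubscriber = subscriber.downcast_ref().expect("type was just checked");
+ /// ```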
+ pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
+ unsafe {
+ let raw = self.downcast_raw(TypeId::of::<T>())?;
+ if raw.is_null() {
+ None
+ } else {
+ Some(&*(raw as *const _))
+ }
+ }
+ }
+}
+
+impl dyn Subscriber + Send {
+ /// Returns `true` if this [`Subscriber`] is the same type as `T`.
+ pub fn is<T: Any>(&self) -> bool {
+ self.downcast_ref::<T>().is_some()
+ }
+
+ /// Returns some reference to this [`Subscriber`] value if it is of type `T`,
+ /// or `None` if it isn't.
+ pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
+ unsafe {
+ let raw = self.downcast_raw(TypeId::of::<T>())?;
+ if raw.is_null() {
+ None
+ } else {
+ Some(&*(raw as *const _))
+ }
+ }
+ }
+}
+
+impl dyn Subscriber + Sync {
+ /// Returns `true` if this [`Subscriber`] is the same type as `T`.
+ pub fn is<T: Any>(&self) -> bool {
+ self.downcast_ref::<T>().is_some()
+ }
+
+ /// Returns some reference to this [`Subscriber`] value if it is of type `T`,
+ /// or `None` if it isn't.
+ pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
+ unsafe {
+ let raw = self.downcast_raw(TypeId::of::<T>())?;
+ if raw.is_null() {
+ None
+ } else {
+ Some(&*(raw as *const _))
+ }
+ }
+ }
+}
+
+impl dyn Subscriber + Send + Sync {
+ /// Returns `true` if this [`Subscriber`] is the same type as `T`.
+ pub fn is<T: Any>(&self) -> bool {
+ self.downcast_ref::<T>().is_some()
+ }
+
+ /// Returns some reference to this [`Subscriber`] value if it is of type `T`,
+ /// or `None` if it isn't.
+ pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
+ unsafe {
+ let raw = self.downcast_raw(TypeId::of::<T>())?;
+ if raw.is_null() {
+ None
+ } else {
+ Some(&*(raw as *const _))
+ }
+ }
+ }
+}
+
+/// Indicates a [`Subscriber`]'s interest in a particular callsite.
+///
+/// `Subscriber`s return an `Interest` from their [`register_callsite`] methods
+/// in order to indicate whether spans and events originating from that
+/// callsite should be enabled or disabled.
+///
+/// [`Subscriber`]: super::Subscriber
+/// [`register_callsite`]: super::Subscriber::register_callsite
+#[derive(Clone, Debug)]
+pub struct Interest(InterestKind);
+
+#[derive(Copy, Clone, Debug, Eq, PartialEq, Ord, PartialOrd)]
+enum InterestKind {
+ Never = 0,
+ Sometimes = 1,
+ Always = 2,
+}
+
+impl Interest {
+ /// Returns an `Interest` indicating that the subscriber is never interested
+ /// in being notified about a callsite.
+ ///
+ /// If all active subscribers are `never()` interested in a callsite, it will
+ /// be completely disabled unless a new subscriber becomes active.
+ #[inline]
+ pub fn never() -> Self {
+ Interest(InterestKind::Never)
+ }
+
+ /// Returns an `Interest` indicating the subscriber is sometimes interested
+ /// in being notified about a callsite.
+ ///
+ /// If all active subscribers are `sometimes` or `never` interested in a
+ /// callsite, the currently active subscriber will be asked to filter that
+ /// callsite every time it creates a span. This will be the case until a new
+ /// subscriber expresses that it is `always` interested in the callsite.
+ #[inline]
+ pub fn sometimes() -> Self {
+ Interest(InterestKind::Sometimes)
+ }
+
+ /// Returns an `Interest` indicating the subscriber is always interested in
+ /// being notified about a callsite.
+ ///
+ /// If any subscriber expresses that it is `always()` interested in a given
+ /// callsite, then the callsite will always be enabled.
+ #[inline]
+ pub fn always() -> Self {
+ Interest(InterestKind::Always)
+ }
+
+ /// Returns `true` if the subscriber is never interested in being notified
+ /// about this callsite.
+ #[inline]
+ pub fn is_never(&self) -> bool {
+ matches!(self.0, InterestKind::Never)
+ }
+
+ /// Returns `true` if the subscriber is sometimes interested in being notified
+ /// about this callsite.
+ #[inline]
+ pub fn is_sometimes(&self) -> bool {
+ matches!(self.0, InterestKind::Sometimes)
+ }
+
+ /// Returns `true` if the subscriber is always interested in being notified
+ /// about this callsite.
+ #[inline]
+ pub fn is_always(&self) -> bool {
+ matches!(self.0, InterestKind::Always)
+ }
+
+ /// Returns the common interest between these two Interests.
+ ///
+ /// If both interests are the same, this propagates that interest.
+ /// Otherwise, if they differ, the result must always be
+ /// `Interest::sometimes` --- if the two subscribers differ in opinion, we
+ /// will have to ask the current subscriber what it thinks, no matter what.
+ pub(crate) fn and(self, rhs: Interest) -> Self {
+ if self.0 == rhs.0 {
+ self
+ } else {
+ Interest::sometimes()
+ }
+ }
+}
+
+/// A no-op [`Subscriber`].
+///
+/// [`NoSubscriber`] implements the [`Subscriber`] trait by never being enabled,
+/// never being interested in any callsite, and dropping all spans and events.
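+///
+/// A sketch of temporarily silencing collection (assuming the `std` feature is
+/// enabled, which provides `dispatcher::with_default`):
+///
+/// ```rust,ignore
+/// use tracing_core::dispatcher::{self, Dispatch};
+/// use tracing_core::subscriber::NoSubscriber;
+///
+/// // Anything recorded inside the closure is dropped by `NoSubscriber`.
+/// dispatcher::with_default(&Dispatch::new(NoSubscriber::default()), || {
+///     // ...
+/// });
+/// ```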
+#[derive(Copy, Clone, Debug, Default)]
+pub struct NoSubscriber(());
+
+impl Subscriber for NoSubscriber {
+ #[inline]
+ fn register_callsite(&self, _: &'static Metadata<'static>) -> Interest {
+ Interest::never()
+ }
+
+ fn new_span(&self, _: &span::Attributes<'_>) -> span::Id {
+ span::Id::from_u64(0xDEAD)
+ }
+
+ fn event(&self, _event: &Event<'_>) {}
+
+ fn record(&self, _span: &span::Id, _values: &span::Record<'_>) {}
+
+ fn record_follows_from(&self, _span: &span::Id, _follows: &span::Id) {}
+
+ #[inline]
+ fn enabled(&self, _metadata: &Metadata<'_>) -> bool {
+ false
+ }
+
+ fn enter(&self, _span: &span::Id) {}
+ fn exit(&self, _span: &span::Id) {}
+}
+
+impl<S> Subscriber for Box<S>
+where
+ S: Subscriber + ?Sized,
+{
+ #[inline]
+ fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
+ self.as_ref().register_callsite(metadata)
+ }
+
+ #[inline]
+ fn enabled(&self, metadata: &Metadata<'_>) -> bool {
+ self.as_ref().enabled(metadata)
+ }
+
+ #[inline]
+ fn max_level_hint(&self) -> Option<LevelFilter> {
+ self.as_ref().max_level_hint()
+ }
+
+ #[inline]
+ fn new_span(&self, span: &span::Attributes<'_>) -> span::Id {
+ self.as_ref().new_span(span)
+ }
+
+ #[inline]
+ fn record(&self, span: &span::Id, values: &span::Record<'_>) {
+ self.as_ref().record(span, values)
+ }
+
+ #[inline]
+ fn record_follows_from(&self, span: &span::Id, follows: &span::Id) {
+ self.as_ref().record_follows_from(span, follows)
+ }
+
+ #[inline]
+ fn event_enabled(&self, event: &Event<'_>) -> bool {
+ self.as_ref().event_enabled(event)
+ }
+
+ #[inline]
+ fn event(&self, event: &Event<'_>) {
+ self.as_ref().event(event)
+ }
+
+ #[inline]
+ fn enter(&self, span: &span::Id) {
+ self.as_ref().enter(span)
+ }
+
+ #[inline]
+ fn exit(&self, span: &span::Id) {
+ self.as_ref().exit(span)
+ }
+
+ #[inline]
+ fn clone_span(&self, id: &span::Id) -> span::Id {
+ self.as_ref().clone_span(id)
+ }
+
+ #[inline]
+ fn try_close(&self, id: span::Id) -> bool {
+ self.as_ref().try_close(id)
+ }
+
+ #[inline]
+ #[allow(deprecated)]
+ fn drop_span(&self, id: span::Id) {
+ self.as_ref().try_close(id);
+ }
+
+ #[inline]
+ fn current_span(&self) -> span::Current {
+ self.as_ref().current_span()
+ }
+
+ #[inline]
+ unsafe fn downcast_raw(&self, id: TypeId) -> Option<*const ()> {
+ if id == TypeId::of::<Self>() {
+ return Some(self as *const Self as *const _);
+ }
+
+ self.as_ref().downcast_raw(id)
+ }
+}
+
+impl<S> Subscriber for Arc<S>
+where
+ S: Subscriber + ?Sized,
+{
+ #[inline]
+ fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
+ self.as_ref().register_callsite(metadata)
+ }
+
+ #[inline]
+ fn enabled(&self, metadata: &Metadata<'_>) -> bool {
+ self.as_ref().enabled(metadata)
+ }
+
+ #[inline]
+ fn max_level_hint(&self) -> Option<LevelFilter> {
+ self.as_ref().max_level_hint()
+ }
+
+ #[inline]
+ fn new_span(&self, span: &span::Attributes<'_>) -> span::Id {
+ self.as_ref().new_span(span)
+ }
+
+ #[inline]
+ fn record(&self, span: &span::Id, values: &span::Record<'_>) {
+ self.as_ref().record(span, values)
+ }
+
+ #[inline]
+ fn record_follows_from(&self, span: &span::Id, follows: &span::Id) {
+ self.as_ref().record_follows_from(span, follows)
+ }
+
+ #[inline]
+ fn event_enabled(&self, event: &Event<'_>) -> bool {
+ self.as_ref().event_enabled(event)
+ }
+
+ #[inline]
+ fn event(&self, event: &Event<'_>) {
+ self.as_ref().event(event)
+ }
+
+ #[inline]
+ fn enter(&self, span: &span::Id) {
+ self.as_ref().enter(span)
+ }
+
+ #[inline]
+ fn exit(&self, span: &span::Id) {
+ self.as_ref().exit(span)
+ }
+
+ #[inline]
+ fn clone_span(&self, id: &span::Id) -> span::Id {
+ self.as_ref().clone_span(id)
+ }
+
+ #[inline]
+ fn try_close(&self, id: span::Id) -> bool {
+ self.as_ref().try_close(id)
+ }
+
+ #[inline]
+ #[allow(deprecated)]
+ fn drop_span(&self, id: span::Id) {
+ self.as_ref().try_close(id);
+ }
+
+ #[inline]
+ fn current_span(&self) -> span::Current {
+ self.as_ref().current_span()
+ }
+
+ #[inline]
+ unsafe fn downcast_raw(&self, id: TypeId) -> Option<*const ()> {
+ if id == TypeId::of::<Self>() {
+ return Some(self as *const Self as *const _);
+ }
+
+ self.as_ref().downcast_raw(id)
+ }
+}