| author | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-07 19:33:14 +0000 |
|---|---|---|
| committer | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-07 19:33:14 +0000 |
| commit | 36d22d82aa202bb199967e9512281e9a53db42c9 (patch) | |
| tree | 105e8c98ddea1c1e4784a60a5a6410fa416be2de /third_party/libwebrtc/docs/native-code | |
| parent | Initial commit. (diff) | |
Adding upstream version 115.7.0esr.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'third_party/libwebrtc/docs/native-code')
18 files changed, 1489 insertions, 0 deletions
diff --git a/third_party/libwebrtc/docs/native-code/android/index.md b/third_party/libwebrtc/docs/native-code/android/index.md new file mode 100644 index 0000000000..e378cf9a99 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/android/index.md @@ -0,0 +1,158 @@ +# WebRTC Android development + +## Getting the Code + +Android development is only supported on Linux. + +1. Install [prerequisite software][webrtc-prerequisite-sw] + +2. Create a working directory, enter it, and run: + +``` +$ fetch --nohooks webrtc_android +$ gclient sync +``` + +This will fetch a regular WebRTC checkout with the Android-specific parts +added. Notice that the Android specific parts like the Android SDK and NDK are +quite large (~8 GB), so the total checkout size will be about 16 GB. +The same checkout can be used for both Linux and Android development since you +can generate your [Ninja][ninja] project files in different directories for each +build config. + +See [Development][webrtc-development] for instructions on how to update +the code, building etc. + + +## Compiling + +1. Generate projects using GN. + +Make sure your current working directory is src/ of your workspace. +Then run: + +``` +$ gn gen out/Debug --args='target_os="android" target_cpu="arm"' +``` + +You can specify a directory of your own choice instead of `out/Debug`, +to enable managing multiple configurations in parallel. + +* To build for ARM64: use `target_cpu="arm64"` +* To build for 32-bit x86: use `target_cpu="x86"` +* To build for 64-bit x64: use `target_cpu="x64"` + +2. Compile using: + +``` +$ autoninja -C out/Debug +``` + +(To list all available targets, run `autoninja -C out/Debug -t targets all`.) + + +## Using the Bundled Android SDK/NDK + +In order to use the Android SDK and NDK that is bundled in +`third_party/android_tools`, run this to get it included in your `PATH` (from +`src/`): + +``` +$ . build/android/envsetup.sh +``` + +Then you'll have `adb` and all the other Android tools in your `PATH`. + + +## Running the AppRTCMobile App + +AppRTCMobile is an Android application using WebRTC Native APIs via JNI (JNI +wrapper is documented [here][webrtc-jni-doc]). + +For instructions on how to build and run, see +[examples/androidapp/README][apprtc-doc]. + + +## Using Android Studio + +*Note: This is known to be broken at the moment. See bug: +https://bugs.webrtc.org/9282* + +1. Build the project normally (out/Debug should be the directory you used when +generating the build files using GN): + +``` +$ autoninja -C out/Debug AppRTCMobile +``` + +2. Generate the project files: + +``` +$ build/android/gradle/generate_gradle.py --output-directory $PWD/out/Debug \ + --target "//examples:AppRTCMobile" --use-gradle-process-resources \ + --split-projects --canary +``` + +3. *Import* the project in Android Studio. (Do not just open it.) The project +is located in `out/Debug/gradle`. If asked which SDK to use, choose to use +Android Studio's SDK. When asked whether to use the Gradle wrapper, press +"OK". + +4. Ensure target `webrtc > examples > AppRTCMobile` is selected and press Run. +AppRTCMobile should now start on the device. + +If you do any changes to the C++ code, you have to compile the project using +autoninja after the changes (see step 1). + +*Note: Only "arm" is supported as the target_cpu when using Android Studio. This +still allows you to run the application on 64-bit ARM devices. 
x86-based devices +are not supported right now.* + + +## Running Tests on an Android Device + +To build APKs with the WebRTC native tests, follow these instructions. + +1. Ensure you have an Android device set in Developer mode connected via USB. + +2. Compile unit tests and/or instrumentation tests: + +``` +$ autoninja -C out/Debug android_instrumentation_test_apk +$ autoninja -C out/Debug rtc_unittests +``` + +3. You can find the generated test binaries in `out/Debug/bin`. To run instrumentation tests: + +``` +$ out/Debug/bin/run_android_instrumentation_test_apk -v +``` + +To run unit tests: + +``` +$ out/Debug/bin/run_rtc_unittests -v +``` + +Show verbose output with `-v` and filter tests with `--gtest-filter=SomeTest.*`. For example: + +``` +$ out/Debug/bin/run_android_instrumentation_test_apk -v \ + --gtest_filter=VideoFrameBufferTest.* +``` + +For a full list of command line arguments, use `--help`. + +5. **NOTICE:** The first time you run a test, you must accept a dialog on +the device! + +If want to run Release builds instead; pass `is_debug=false` to GN (and +preferably generate the projects files into a directory like `out/Release`). +Then use the scripts generated in `out/Release/bin` instead. + +[webrtc-prerequisite-sw]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/prerequisite-sw/index.md +[webrtc-jni-doc]: https://webrtc.googlesource.com/src/+/main/sdk/android/README +[apprtc-doc]: https://webrtc.googlesource.com/src/+/main/examples/androidapp/README +[ninja]: https://ninja-build.org/ +[prebuilt-libraries]: https://bintray.com/google/webrtc/google-webrtc +[webrtc-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/index.md diff --git a/third_party/libwebrtc/docs/native-code/development/contributing.md b/third_party/libwebrtc/docs/native-code/development/contributing.md new file mode 100644 index 0000000000..16814a0876 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/development/contributing.md @@ -0,0 +1,99 @@ +# Contributing to the WebRTC project + +## License Agreement + +WebRTC welcomes patches for features and bug fixes! + +For contributors external to Google, follow the instructions given in the +[Google Individual Contributor License Agreement][Google individual CLA]. +In all cases, contributors must sign a contributor license agreement before +a contribution can be accepted. Please complete the agreement for an +[individual][individual] or a [corporation][corporation] as appropriate. + +[Google Individual CLA]: https://cla.developers.google.com/about/google-individual. +[individual]: https://developers.google.com/open-source/cla/individual +[corporation]: https://developers.google.com/open-source/cla/corporate + + +## Instructions + +### Contributing your First Patch +You must do some preparation in order to upload your first CL: + +* [Check out and build the code][Check out and build the code] +* Fill in the Contributor agreement (see above) +* If you’ve never submitted code before, you must add your + (or your organization’s in the case the contributor agreement is signed by + your organization) name and contact info to the + [AUTHORS][AUTHORS] file +* Go to [https://webrtc.googlesource.com/new-password](new-password) + and login with your email account. This should be the same account as + returned by `git config user.email` +* Then, run: `git cl creds-check`. If you get any errors, ask for help on + [discuss-webrtc][discuss-webrtc] + +You will not have to repeat the above. 
After all that, you’re ready to upload: + +[Check out and build the code]: https://webrtc.googlesource.com/src/+/refs/heads/main/docs/native-code/development/index.md +[AUTHORS]: https://webrtc.googlesource.com/src/+/refs/heads/main/AUTHORS +[new-password]: https://webrtc.googlesource.com/new-password +[discuss-webrtc]: https://groups.google.com/forum/#!forum/discuss-webrtc + +### Uploading your First Patch +Now that you have your account set up, you can do the actual upload: + +* Do this: + * Assuming you're on the main branch: + * `git checkout -b my-work-branch` + * Make changes, build locally, run tests locally + * `git commit -am "Changed x, and it is working"` + * `git cl upload` + + This will open a text editor showing all local commit messages, allowing you + to modify it before it becomes the CL description. + + Fill out the bug entry properly. Please specify the issue tracker prefix and + the issue number, separated by a colon, e.g. `webrtc:123` or `chromium:12345`. + If you do not have an issue tracker prefix and an issue number just add `None`. + + Save and close the file to proceed with the upload to the WebRTC + [code review server](https://webrtc-review.googlesource.com/q/status:open). + + The command will print a link like + [https://webrtc-review.googlesource.com/c/src/+/53121][example CL link]. + if everything goes well. + +* Click this CL Link +* If you’re not signed in, click the Sign In button in the top right and sign + in with your email +* Click Start Review and add a reviewer. You can find reviewers in OWNERS files + around the repository (take the one closest to your changes) +* Address any reviewer feedback: + * Make changes, build locally, run tests locally + * `git commit -am "Fixed X and Y"` + * `git cl upload` +* Once the reviewer LGTMs (approves) the patch, ask them to put it into the + commit queue + +NOTICE: On Windows, you’ll need to run the above in a Git bash shell in order +for gclient to find the `.gitcookies` file. + +[example CL link]: https://webrtc-review.googlesource.com/c/src/+/53121 + +### Trybots + +If you're working a lot in WebRTC, you can apply for *try rights*. This means you +can run the *trybots*, which run all the tests on all platforms. To do this, +file a bug using this [template][template-access] and the WebRTC EngProd team +will review your request. + +To run a tryjob, upload a CL as described above and click either CQ dry run or +Choose Trybots in the Gerrit UI. You need to have try rights for this. Otherwise, +ask your reviewer to kick off the bots for you. + +If you encounter any issues with the bots (flakiness, failing unrelated to your change etc), +please file a bug using this [template][template-issue]. + +[template-access]: https://bugs.chromium.org/p/webrtc/issues/entry?template=Get+tryjob+access +[template-issue]: https://bugs.chromium.org/p/webrtc/issues/entry?template=trybot+issue + diff --git a/third_party/libwebrtc/docs/native-code/development/index.md b/third_party/libwebrtc/docs/native-code/development/index.md new file mode 100644 index 0000000000..d969c7621f --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/development/index.md @@ -0,0 +1,290 @@ +# WebRTC development + +The currently supported platforms are Windows, Mac OS X, Linux, Android and +iOS. See the [Android][webrtc-android-development] and [iOS][webrtc-ios-development] +pages for build instructions and example applications specific to these mobile platforms. 
+ + +## Before You Start + +First, be sure to install the [prerequisite software][webrtc-prerequisite-sw]. + +[webrtc-prerequisite-sw]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/prerequisite-sw/index.md + + +## Getting the Code + +For desktop development: + +1. Create a working directory, enter it, and run `fetch webrtc`: + +``` +$ mkdir webrtc-checkout +$ cd webrtc-checkout +$ fetch --nohooks webrtc +$ gclient sync +``` + +NOTICE: During your first sync, you'll have to accept the license agreement of the Google Play Services SDK. + +The checkout size is large due the use of the Chromium build toolchain and many dependencies. Estimated size: + +* Linux: 6.4 GB. +* Linux (with Android): 16 GB (of which ~8 GB is Android SDK+NDK images). +* Mac (with iOS support): 5.6GB + +2. Optionally you can specify how new branches should be tracked: + +``` +$ git config branch.autosetupmerge always +$ git config branch.autosetuprebase always +``` + +3. Alternatively, you can create new local branches like this (recommended): + +``` +$ cd src +$ git checkout main +$ git new-branch your-branch-name +``` + +See the [Android][webrtc-android-development] and [iOS][webrtc-ios-development] pages for separate instructions. + +**NOTICE:** if you get `Remote: Daily bandwidth rate limit exceeded for <ip>`, +make sure you're logged in. The quota is much larger for logged in users. + +## Updating the Code + +Update your current branch with: + +``` +$ git checkout main +$ git pull origin main +$ gclient sync +$ git checkout my-branch +$ git merge main +``` + +## Building + +[Ninja][ninja] is the default build system for all platforms. + +See the [Android][webrtc-android-development] and [iOS][webrtc-ios-development] pages for build +instructions specific to those platforms. + +## Generating Ninja project files + +[Ninja][ninja] project files are generated using [GN][gn]. They're put in a +directory of your choice, like `out/Debug` or `out/Release`, but you can +use any directory for keeping multiple configurations handy. + +To generate project files using the defaults (Debug build), run (standing in +the src/ directory of your checkout): + +``` +$ gn gen out/Default +``` + +To generate ninja project files for a Release build instead: + +``` +$ gn gen out/Default --args='is_debug=false' +``` + +To clean all build artifacts in a directory but leave the current GN +configuration untouched (stored in the args.gn file), do: + +``` +$ gn clean out/Default +``` + +To build the fuzzers residing in the [test/fuzzers][fuzzers] directory, use +``` +$ gn gen out/fuzzers --args='use_libfuzzer=true optimize_for_fuzzing=true' +``` +Depending on the fuzzer additional arguments like `is_asan`, `is_msan` or `is_ubsan_security` might be required. + +See the [GN][gn-doc] documentation for all available options. There are also more +platform specific tips on the [Android][webrtc-android-development] and +[iOS][webrtc-ios-development] instructions. + +## Compiling + +When you have Ninja project files generated (see previous section), compile +(standing in `src/`) using: + +For [Ninja][ninja] project files generated in `out/Default`: + +``` +$ autoninja -C out/Default +``` + +To build everything in the generated folder (`out/Default`): + +``` +$ autoninja all -C out/Default +``` + +`autoninja` is a wrapper that automatically provides optimal values for the arguments passed to `ninja`. + +See [Ninja build rules][ninja-build-rules] to read more about difference between `ninja` and `ninja all`. 
+ + +## Using Another Build System + +Other build systems are **not supported** (and may fail), such as Visual +Studio on Windows or Xcode on OSX. GN supports a hybrid approach of using +[Ninja][ninja] for building, but Visual Studio/Xcode for editing and driving +compilation. + +To generate IDE project files, pass the `--ide` flag to the [GN][gn] command. +See the [GN reference][gn-doc] for more details on the supported IDEs. + + +## Working with Release Branches + +To see available release branches, run: + +``` +$ git branch -r +``` + +To create a local branch tracking a remote release branch (in this example, +the branch corresponding to Chrome M80): + +``` +$ git checkout -b my_branch refs/remotes/branch-heads/3987 +$ gclient sync +``` + +**NOTICE**: depot_tools are not tracked with your checkout, so it's possible gclient +sync will break on sufficiently old branches. In that case, you can try using +an older depot_tools: + +``` +which gclient +$ # cd to depot_tools dir +$ # edit update_depot_tools; add an exit command at the top of the file +$ git log # find a hash close to the date when the branch happened +$ git checkout <hash> +$ cd ~/dev/webrtc/src +$ gclient sync +$ # When done, go back to depot_tools, git reset --hard, run gclient again and +$ # verify the current branch becomes REMOTE:origin/main +``` + +The above is untested and unsupported, but it might help. + +Commit log for the branch: [https://webrtc.googlesource.com/src/+log/branch-heads/3987][m80-log] +To browse it: [https://webrtc.googlesource.com/src/+/branch-heads/3987][m80] + +For more details, read Chromium's [Working with Branches][chromium-work-branches] and +[Working with Release Branches][chromium-work-release-branches] pages. +To find the branch corresponding to a Chrome release check the +[Chromium Dashboard][https://chromiumdash.appspot.com/branches]. + + +## Contributing Patches + +Please see [Contributing Fixes][contributing] for information on how to run +`git cl upload`, getting your patch reviewed, and getting it submitted. You can also +find info on how to run trybots and applying for try rights. + +[contributing]: https://webrtc.googlesource.com/src/+/refs/heads/main/docs/native-code/development/contributing.md + + +## Chromium Committers + +Many WebRTC committers are also Chromium committers. To make sure to use the +right account for pushing commits to WebRTC, use the `user.email` Git config +setting. The recommended way is to have the chromium committer account set globally +as described at the [depot tools setup page][depot-tools] and then set `user.email` +locally for the WebRTC repos using: + +``` +$ cd /path/to/webrtc/src +$ git config user.email <YOUR_WEBRTC_COMMITTER_EMAIL> +``` + +## Example Applications + +WebRTC contains several example applications, which can be found under +`src/webrtc/examples`. Higher level applications are listed first. + + +### Peerconnection + +Peerconnection consist of two applications using the WebRTC Native APIs: + +* A server application, with target name `peerconnection_server` +* A client application, with target name `peerconnection_client` (not currently supported on Mac/Android) + +The client application has simple voice and video capabilities. The server +enables client applications to initiate a call between clients by managing +signaling messages generated by the clients. + + +#### Setting up P2P calls between peerconnection_clients + +Start `peerconnection_server`. 
You should see the following message indicating +that it is running: + +``` +Server listening on port 8888 +``` + +Start any number of `peerconnection_clients` and connect them to the server. +The client UI consists of a few parts: + +**Connecting to a server:** When the application is started you must specify +which machine (by IP address) the server application is running on. Once that +is done you can press **Connect** or the return button. + +**Select a peer:** Once successfully connected to a server, you can connect to +a peer by double-clicking or select+press return on a peer's name. + +**Video chat:** When a peer has been successfully connected to, a video chat +will be displayed in full window. + +**Ending chat session:** Press **Esc**. You will now be back to selecting a +peer. + +**Ending connection:** Press **Esc** and you will now be able to select which +server to connect to. + + +#### Testing peerconnection_server + +Start an instance of `peerconnection_server` application. + +Open `src/webrtc/examples/peerconnection/server/server_test.html` in your +browser. Click **Connect**. Observe that the `peerconnection_server` announces +your connection. Open one more tab using the same page. Connect it too (with a +different name). It is now possible to exchange messages between the connected +peers. + +### STUN Server + +Target name `stunserver`. Implements the STUN protocol for Session Traversal +Utilities for NAT as documented in [RFC 5389][rfc-5389]. + + +### TURN Server + +Target name `turnserver`. Used for unit tests. + + +[ninja]: https://ninja-build.org/ +[ninja-build-rules]: https://gn.googlesource.com/gn/+/master/docs/reference.md#the-all-and-default-rules +[gn]: https://gn.googlesource.com/gn/+/master/README.md +[gn-doc]: https://gn.googlesource.com/gn/+/master/docs/reference.md#IDE-options +[webrtc-android-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/android/index.md +[webrtc-ios-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/ios/index.md +[chromium-work-branches]: https://www.chromium.org/developers/how-tos/get-the-code/working-with-branches +[chromium-work-release-branches]: https://www.chromium.org/developers/how-tos/get-the-code/working-with-release-branches +[depot-tools]: http://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up +[rfc-5389]: https://tools.ietf.org/html/rfc5389 +[rfc-5766]: https://tools.ietf.org/html/rfc5766 +[m80-log]: https://webrtc.googlesource.com/src/+log/branch-heads/3987 +[m80]: https://webrtc.googlesource.com/src/+/branch-heads/3987 +[fuzzers]: https://webrtc.googlesource.com/src/+/main/test/fuzzers/ diff --git a/third_party/libwebrtc/docs/native-code/development/prerequisite-sw/index.md b/third_party/libwebrtc/docs/native-code/development/prerequisite-sw/index.md new file mode 100644 index 0000000000..810c87943a --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/development/prerequisite-sw/index.md @@ -0,0 +1,65 @@ +# WebRTC development - Prerequisite software + +## Depot Tools + +1. [Install the Chromium depot tools][depot-tools]. + +2. On Windows, depot tools will download a special version of Git during your +first `gclient sync`. On Mac and Linux, you'll need to install [Git][git] by +yourself. 
+ +## Linux (Ubuntu/Debian) + +A script is provided for Ubuntu, which is unfortunately only available after +your first gclient sync: + +``` +$ ./build/install-build-deps.sh +``` + +Most of the libraries installed with this script are not needed since we now +build using Debian sysroot images in build/linux, but there are still some tools +needed for the build that are installed with +[install-build-deps.sh][install-build-deps]. + +You may also want to have a look at the [Chromium Linux Build +instructions][chromium-linux-build-instructions] if you experience any other problems building. + +## Windows + +Follow the [Chromium's build instructions for Windows][chromium-win-build-instructions]. + +WebRTC requires Visual Studio 2017 to be used. If you only have version 2015 +available, you might be able to keep using it for some time by setting +`GYP_MSVS_VERSION=2015` in your environment. Keep in mind that this is not a +suppported configuration however. + +## macOS + +Xcode 12 or higher is required. Latest Xcode is recommended to be able to build +all code. You may use `xcode-select --install` to install it. + +Absence of Xcode will cause errors like: +``` +xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun +``` + +## Android + +You'll need a Linux development machine. WebRTC is using the same Android +toolchain as Chrome (downloaded into `third_party/android_tools`) so you won't +need to install the NDK/SDK separately. + +1. Install Java OpenJDK as described in the +[Chromium Android prerequisites][chromium-android-build-build-instructions] +2. All set! If you don't run Ubuntu, you may want to have a look at +[Chromium's Linux prerequisites][chromium-linux-prerequisites] for distro-specific details. + + +[depot-tools]: https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up +[git]: http://git-scm.com +[install-build-deps]: https://cs.chromium.org/chromium/src/build/install-build-deps.sh +[chromium-linux-build-instructions]: https://chromium.googlesource.com/chromium/src/+/main/docs/linux/build_instructions.md +[chromium-win-build-instructions]: https://chromium.googlesource.com/chromium/src/+/main/docs/windows_build_instructions.md +[chromium-linux-prerequisites]: https://chromium.googlesource.com/chromium/src/+/main/docs/linux/build_instructions.md#notes +[chromium-android-build-build-instructions]: https://chromium.googlesource.com/chromium/src/+/main/docs/android_build_instructions.md diff --git a/third_party/libwebrtc/docs/native-code/index.md b/third_party/libwebrtc/docs/native-code/index.md new file mode 100644 index 0000000000..928af6e52e --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/index.md @@ -0,0 +1,42 @@ +# WebRTC native code + +The WebRTC Native Code package is meant for browser developers who want to +integrate WebRTC. Application developers are encouraged to use the [WebRTC +API][webrtc-api] instead. + +[webrtc-api]: http://dev.w3.org/2011/webrtc/editor/webrtc.html + +The WebRTC native code can be found at +[https://webrtc.googlesource.com/src][webrtc-repo]. + +[webrtc-repo]: https://webrtc.googlesource.com/src/ + +The change log is available at +[https://webrtc.googlesource.com/src/+log][webrtc-change-log] + +[webrtc-change-log]: https://webrtc.googlesource.com/src/+log + +Please read the [License & Rights][webrtc-license] and [FAQ][webrtc-faq] +before downloading the source code. 
+ +[webrtc-license]: https://webrtc.org/support/license +[webrtc-faq]: https://webrtc.googlesource.com/src/+/main/docs/faq.md + +The WebRTC [issue tracker][webrtc-issue-tracker] can be used for submitting +bugs found in native code. + +[webrtc-issue-tracker]: https://bugs.webrtc.org + +## Subpages + +* [Prerequisite software][webrtc-prerequitite-sw] +* [Development][webrtc-development] +* [Android][webtc-android-development] +* [iOS][webrtc-ios-development] +* [Experimental RTP header extensions][rtp-hdrext] + +[webrtc-prerequitite-sw]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/prerequisite-sw/index.md +[webrtc-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/index.md +[webtc-android-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/android/index.md +[webrtc-ios-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/ios/index.md +[rtp-hdrext]: https://webrtc.googlesource.com/src/+/main/docs/native-code/rtp-hdrext/index.md diff --git a/third_party/libwebrtc/docs/native-code/ios/index.md b/third_party/libwebrtc/docs/native-code/ios/index.md new file mode 100644 index 0000000000..307379f17f --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/ios/index.md @@ -0,0 +1,188 @@ +# WebRTC iOS development + +## Development Environment + +In case you need to build the framework manually or you want to try out the +demo application AppRTCMobile, follow the instructions illustrated bellow. + +A macOS machine is required for iOS development. While it's possible to +develop purely from the command line with text editors, it's easiest to use +Xcode. Both methods will be illustrated here. + +_NOTICE:_ You will need to install [Chromium depot_tools][webrtc-prerequisite-sw]. + +## Getting the Code + +Create a working directory, enter it, and run: + +``` +$ fetch --nohooks webrtc_ios +$ gclient sync +``` + +This will fetch a regular WebRTC checkout with the iOS-specific parts +added. Notice the size is quite large: about 6GB. The same checkout can be used +for both Mac and iOS development, since GN allows you to generate your +[Ninja][ninja] project files in different directories for each build config. + +You may want to disable Spotlight indexing for the checkout to speed up +file operations. + +Note that the git repository root is in `src`. + +From here you can check out a new local branch with: + +``` +$ git new-branch <branch name> +``` + +See [Development][webrtc-development] for generic instructions on how +to update the code in your checkout. + + +## Generating project files + +[GN][gn] is used to generate [Ninja][ninja] project files. In order to configure +[GN][gn] to generate build files for iOS certain variables need to be set. +Those variables can be edited for the various build configurations as needed. + +The variables you should care about are the following: + +* `target_os`: + - To build for iOS this should be set as `target_os="ios"` in your `gn args`. + The default is whatever OS you are running the script on, so this can be + omitted when generating build files for macOS. +* `target_cpu`: + - For builds targeting iOS devices, this should be set to either `"arm"` or + `"arm64"`, depending on the architecture of the device. For builds to run in + the simulator, this should be set to `"x64"`. +* `is_debug`: + - Debug builds are the default. When building for release, specify `false`. 
+ +The component build is the default for Debug builds, which are also enabled by +default unless `is_debug=false` is specified. + +The [GN][gn] command for generating build files is `gn gen <output folder>`. + +After you've generated your build files once, subsequent invocations of `gn gen` +with the same output folder will use the same arguments as first supplied. +To edit these at any time use `gn args <output folder>`. This will open up +a file in `$EDITOR` where you can edit the arguments. When you've made +changes and save the file, `gn` will regenerate your project files for you +with the new arguments. + +### Examples + +``` +$ # debug build for 64-bit iOS +$ gn gen out/ios_64 --args='target_os="ios" target_cpu="arm64"' + +$ # debug build for simulator +$ gn gen out/ios_sim --args='target_os="ios" target_cpu="x64"' +``` + +## Compiling with ninja + +To compile, just run ninja on the appropriate target. For example: + +``` +$ ninja -C out/ios_64 AppRTCMobile +``` + +Replace `AppRTCMobile` in the command above with the target you +are interested in. + +To see a list of available targets, run `gn ls out/<output folder>`. + +## Using Xcode + +Xcode is the default and preferred IDE to develop for the iOS platform. + +*Generating an Xcode project* + +To have GN generate Xcode project files, pass the argument `--ide=xcode` +when running `gn gen`. This will result in a file named `all.xcodeproj` +placed in your specified output directory. + +Example: + +``` +$ gn gen out/ios --args='target_os="ios" target_cpu="arm64"' --ide=xcode +$ open -a Xcode.app out/ios/all.xcodeproj +``` + +*Compile and run with Xcode* + +Compiling with Xcode is not supported! What we do instead is compile using a +script that runs ninja from Xcode. This is done with a custom _run script_ +action in the build phases of the generated project. This script will simply +call ninja as you would when building from the command line. + +This gives us access to the usual deployment/debugging workflow iOS developers +are used to in Xcode, without sacrificing the build speed of Ninja. + +## Running the tests + +There are several test targets in WebRTC. To run the tests, you must deploy the +`.app` bundle to a device (see next section) and run them from there. +To run a specific test or collection of tests, normally with gtest one would pass +the `--gtest_filter` argument to the test binary when running. To do this when +running the tests from Xcode, from the targets menu, select the test bundle +and press _edit scheme..._ at the bottom of the target dropdown menu. From there +click _Run_ in the sidebar and add `--gtest_filter` to the _Arguments passed on +Launch_ list. + +If deploying to a device via the command line using [`ios-deploy`][ios-deploy], +use the `-a` flag to pass arguments to the executable on launch. + +## Deploying to Device + +It's easiest to deploy to a device using Xcode. Other command line tools exist +as well, e.g. [`ios-deploy`][ios-deploy]. + +**NOTICE:** To deploy to an iOS device you must have a valid signing identity +set up. You can verify this by running: + +``` +$ xcrun security find-identity -v -p codesigning +``` + +If you don't have a valid signing identity, you can still build for ARM, +but you won't be able to deploy your code to an iOS device. To do this, +add the flag `ios_enable_code_signing=false` to the `gn gen` args when you +generate the build files. + +## Using WebRTC in your app + +To build WebRTC for use in a native iOS app, it's easiest to build +`WebRTC.framework`. 
This can be done with ninja as follows, replacing `ios` +with the actual location of your generated build files. + +``` +ninja -C out/ios framework_objc +``` + +This should result in a `.framework` bundle being generated in `out/ios`. +This bundle can now be directly included in another app. + +If you need a FAT `.framework`, that is, a binary that contains code for +multiple architectures, and will work both on device and in the simulator, +a script is available [here][framework-script] + +The resulting framework can be found in out_ios_libs/. + +Please note that you can not ship the FAT framework binary with your app +if you intend to distribute it through the app store. +To solve this either remove "x86-64" from the list of architectures in +the [build script][framework-script] or split the binary and recreate it without x86-64. +For instructions on how to do this see [here][strip-arch]. + + +[cocoapods]: https://cocoapods.org/pods/GoogleWebRTC +[webrtc-prerequisite-sw]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/prerequisite-sw/index.md +[webrtc-development]: https://webrtc.googlesource.com/src/+/main/docs/native-code/development/index.md +[framework-script]: https://webrtc.googlesource.com/src/+/main/tools_webrtc/ios/build_ios_libs.py +[ninja]: https://ninja-build.org/ +[gn]: https://gn.googlesource.com/gn/+/main/README.md +[ios-deploy]: https://github.com/phonegap/ios-deploy +[strip-arch]: http://ikennd.ac/blog/2015/02/stripping-unwanted-architectures-from-dynamic-libraries-in-xcode/ diff --git a/third_party/libwebrtc/docs/native-code/logging.md b/third_party/libwebrtc/docs/native-code/logging.md new file mode 100644 index 0000000000..1daadbe2b5 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/logging.md @@ -0,0 +1,42 @@ +Native logs are often valuable in order to debug issues that can't be easily +reproduced. Following are instructions for gathering logs on various platforms. + +To enable native logs for a native application, you can either: + + * Use a debug build of WebRTC (a build where `NDEBUG` is not defined), + which will enable `INFO` logging by default. + + * Call `rtc::LogMessage::LogToDebug(rtc::LS_INFO)` within your application. + Or use `LS_VERBOSE` to enable `VERBOSE` logging. + +For the location of the log output on different platforms, see below. + +#### Android + +Logged to Android system log. Can be obtained using: + +~~~~ bash +adb logcat -s "libjingle" +~~~~ + +To enable the logging in a non-debug build from Java code, use +`Logging.enableLogToDebugOutput(Logging.Severity.LS_INFO)`. + +#### iOS + +Only logged to `stderr` by default. To log to a file, use `RTCFileLogger`. + +#### Mac + +For debug builds of WebRTC (builds where `NDEBUG` is not defined), logs to +`stderr`. To do this for release builds as well, set a boolean preference named +'logToStderr' to `true` for your application. Or, use `RTCFileLogger` to log to +a file. + +#### Windows + +Logs to the debugger and `stderr`. + +#### Linux/Other Platforms + +Logs to `stderr`. 
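
#### Example: enabling logging from native code

A minimal sketch of the programmatic route mentioned above, assuming a recent checkout; the `rtc_base/logging.h` header path and the `RTC_LOG` macro may differ between WebRTC revisions:

~~~~ cpp
#include "rtc_base/logging.h"

void EnableWebRtcNativeLogging() {
  // Route LS_INFO (and more severe) messages to the platform debug output.
  // Pass rtc::LS_VERBOSE instead to enable VERBOSE logging.
  rtc::LogMessage::LogToDebug(rtc::LS_INFO);

  // Messages are then emitted through the usual logging macro.
  RTC_LOG(LS_INFO) << "WebRTC native logging enabled";
}
~~~~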
diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/abs-capture-time/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/abs-capture-time/README.md new file mode 100644 index 0000000000..171993c2e7 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/abs-capture-time/README.md @@ -0,0 +1,121 @@ +# Absolute Capture Time + +The Absolute Capture Time extension is used to stamp RTP packets with a NTP +timestamp showing when the first audio or video frame in a packet was originally +captured. The intent of this extension is to provide a way to accomplish +audio-to-video synchronization when RTCP-terminating intermediate systems (e.g. +mixers) are involved. + +**Name:** +"Absolute Capture Time"; "RTP Header Extension for Absolute Capture Time" + +**Formal name:** +<http://www.webrtc.org/experiments/rtp-hdrext/abs-capture-time> + +**Status:** +This extension is defined here to allow for experimentation. Once experience has +shown that it is useful, we intend to make a proposal based on it for +standardization in the IETF. + +Contact <chxg@google.com> for more info. + +## RTP header extension format + +### Data layout overview +Data layout of the shortened version of `abs-capture-time` with a 1-byte header +\+ 8 bytes of data: + + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | len=7 | absolute capture timestamp (bit 0-23) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | absolute capture timestamp (bit 24-55) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ... (56-63) | + +-+-+-+-+-+-+-+-+ + +Data layout of the extended version of `abs-capture-time` with a 1-byte header + +16 bytes of data: + + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | len=15| absolute capture timestamp (bit 0-23) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | absolute capture timestamp (bit 24-55) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ... (56-63) | estimated capture clock offset (bit 0-23) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | estimated capture clock offset (bit 24-55) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ... (56-63) | + +-+-+-+-+-+-+-+-+ + +### Data layout details +#### Absolute capture timestamp + +Absolute capture timestamp is the NTP timestamp of when the first frame in a +packet was originally captured. This timestamp MUST be based on the same clock +as the clock used to generate NTP timestamps for RTCP sender reports on the +capture system. + +It's not always possible to do an NTP clock readout at the exact moment of when +a media frame is captured. A capture system MAY postpone the readout until a +more convenient time. A capture system SHOULD have known delays (e.g. from +hardware buffers) subtracted from the readout to make the final timestamp as +close to the actual capture time as possible. + +This field is encoded as a 64-bit unsigned fixed-point number with the high 32 +bits for the timestamp in seconds and low 32 bits for the fractional part. This +is also known as the UQ32.32 format and is what the RTP specification defines as +the canonical format to represent NTP timestamps. 
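
As a rough illustration (the helper and parameter names below are made up, not WebRTC API), a sender that already has a capture time on the correct NTP clock could pack it into this UQ32.32 format as follows:

```cpp
#include <cstdint>

// Packs an NTP capture time into the UQ32.32 wire format: the high 32 bits
// hold whole seconds, the low 32 bits hold the fraction of a second.
uint64_t PackAbsCaptureTimestamp(uint32_t ntp_seconds, uint32_t microseconds) {
  // Scale the sub-second remainder into a Q32 fraction: us / 1e6 * 2^32.
  uint64_t fraction_q32 =
      (static_cast<uint64_t>(microseconds) << 32) / 1000000;
  return (static_cast<uint64_t>(ntp_seconds) << 32) | fraction_q32;
}
```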
+ +#### Estimated capture clock offset + +Estimated capture clock offset is the sender's estimate of the offset between +its own NTP clock and the capture system's NTP clock. The sender is here defined +as the system that owns the NTP clock used to generate the NTP timestamps for +the RTCP sender reports on this stream. The sender system is typically either +the capture system or a mixer. + +This field is encoded as a 64-bit two’s complement **signed** fixed-point number +with the high 32 bits for the seconds and low 32 bits for the fractional part. +It’s intended to make it easy for a receiver, that knows how to estimate the +sender system’s NTP clock, to also estimate the capture system’s NTP clock: + + Capture NTP Clock = Sender NTP Clock + Capture Clock Offset + +### Further details + +#### Capture system + +A receiver MUST treat the first CSRC in the CSRC list of a received packet as if +it belongs to the capture system. If the CSRC list is empty, then the receiver +MUST treat the SSRC as if it belongs to the capture system. Mixers SHOULD put +the most prominent CSRC as the first CSRC in a packet’s CSRC list. + +#### Intermediate systems + +An intermediate system (e.g. mixer) MAY adjust these timestamps as needed. It +MAY also choose to rewrite the timestamps completely, using its own NTP clock as +reference clock, if it wants to present itself as a capture system for A/V-sync +purposes. + +#### Timestamp interpolation + +A sender SHOULD save bandwidth by not sending `abs-capture-time` with every +RTP packet. It SHOULD still send them at regular intervals (e.g. every second) +to help mitigate the impact of clock drift and packet loss. Mixers SHOULD always +send `abs-capture-time` with the first RTP packet after changing capture system. + +A receiver SHOULD memorize the capture system (i.e. CSRC/SSRC), capture +timestamp, and RTP timestamp of the most recently received `abs-capture-time` +packet on each received stream. It can then use that information, in combination +with RTP timestamps of packets without `abs-capture-time`, to extrapolate +missing capture timestamps. + +Timestamp interpolation works fine as long as there’s reasonably low NTP/RTP +clock drift. This is not always true. Senders that detect "jumps" between its +NTP and RTP clock mappings SHOULD send `abs-capture-time` with the first RTP +packet after such a thing happening. diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/abs-send-time/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/abs-send-time/README.md new file mode 100644 index 0000000000..86c3c733dc --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/abs-send-time/README.md @@ -0,0 +1,31 @@ +# Absolute Send Time + +The Absolute Send Time extension is used to stamp RTP packets with a timestamp +showing the departure time from the system that put this packet on the wire +(or as close to this as we can manage). Contact <solenberg@google.com> for +more info. + +Name: "Absolute Sender Time" ; "RTP Header Extension for Absolute Sender Time" + +Formal name: <http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time> + +SDP "a= name": "abs-send-time" ; this is also used in client/cloud signaling. + +Not unlike [RTP with TFRC](http://tools.ietf.org/html/draft-ietf-avt-tfrc-profile-10#section-5) + +Wire format: 1-byte extension, 3 bytes of data. total 4 bytes extra per packet +(plus shared 4 bytes for all extensions present: 2 byte magic word 0xBEDE, 2 +byte # of extensions). 
Will in practice replace the "toffset" extension so we +should see no long term increase in traffic as a result. + +Encoding: Timestamp is in seconds, 24 bit 6.18 fixed point, yielding 64s +wraparound and 3.8us resolution (one increment for each 477 bytes going out on +a 1Gbps interface). + +Relation to NTP timestamps: abs_send_time_24 = (ntp_timestamp_64 >> 14) & +0x00ffffff ; NTP timestamp is 32 bits for whole seconds, 32 bits fraction of +second. + +Notes: Packets are time stamped when going out, preferably close to metal. +Intermediate RTP relays (entities possibly altering the stream) should remove +the extension or set its own timestamp. diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/color-space/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/color-space/README.md new file mode 100644 index 0000000000..3f9485681f --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/color-space/README.md @@ -0,0 +1,88 @@ +# Color Space + +The color space extension is used to communicate color space information and +optionally also metadata that is needed in order to properly render a high +dynamic range (HDR) video stream. Contact <kron@google.com> for more info. + +**Name:** "Color space" ; "RTP Header Extension for color space" + +**Formal name:** <http://www.webrtc.org/experiments/rtp-hdrext/color-space> + +**Status:** This extension is defined here to allow for experimentation. Once experience +has shown that it is useful, we intend to make a proposal based on it for standardization +in the IETF. + +## RTP header extension format + +### Data layout overview +Data layout without HDR metadata (one-byte RTP header extension) + 1-byte header + 4 bytes of data: + + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | L = 3 | primaries | transfer | matrix | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |range+chr.sit. | + +-+-+-+-+-+-+-+-+ + +Data layout of color space with HDR metadata (two-byte RTP header extension) + 2-byte header + 28 bytes of data: + + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | length=28 | primaries | transfer | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | matrix |range+chr.sit. | luminance_max | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | luminance_min | mastering_metadata.| + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |primary_r.x and .y | mastering_metadata.| + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |primary_g.x and .y | mastering_metadata.| + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |primary_b.x and .y | mastering_metadata.| + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |white.x and .y | max_content_light_level | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | max_frame_average_light_level | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +### Data layout details +The data is written in the following order, +Color space information (4 bytes): + * Color primaries value according to ITU-T H.273 Table 2. + * Transfer characteristic value according to ITU-T H.273 Table 3. + * Matrix coefficients value according to ITU-T H.273 Table 4. + * Range and chroma siting as specified at + https://www.webmproject.org/docs/container/#colour. 
Range (range), horizontal (horz) + and vertical (vert) siting are merged to one byte by the operation: (range << 4) + + (horz << 2) + vert. + +The extension may optionally include HDR metadata written in the following order, +Mastering metadata (20 bytes): + * Luminance max, specified in nits, where 1 nit = 1 cd/m<sup>2</sup>. + (16-bit unsigned integer) + * Luminance min, scaled by a factor of 10000 and specified in the unit 1/10000 + nits. (16-bit unsigned integer) + * CIE 1931 xy chromaticity coordinates of the primary red, scaled by a factor of 50000. + (2x 16-bit unsigned integers) + * CIE 1931 xy chromaticity coordinates of the primary green, scaled by a factor of 50000. + (2x 16-bit unsigned integers) + * CIE 1931 xy chromaticity coordinates of the primary blue, scaled by a factor of 50000. + (2x 16-bit unsigned integers) + * CIE 1931 xy chromaticity coordinates of the white point, scaled by a factor of 50000. + (2x 16-bit unsigned integers) + +Followed by max light levels (4 bytes): + * Max content light level, specified in nits. (16-bit unsigned integer) + * Max frame average light level, specified in nits. (16-bit unsigned integer) + +Note, the byte order for all integers is big endian. + +See the standard SMPTE ST 2086 for more information about these entities. + +Notes: Extension should be present only in the last packet of video frames. If attached +to other packets it should be ignored. + diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/inband-cn/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/inband-cn/README.md new file mode 100644 index 0000000000..70ecdac0fb --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/inband-cn/README.md @@ -0,0 +1,57 @@ +# Inband Comfort Noise + +**Name:** "Inband Comfort Noise" ; "RTP Header Extension to signal inband comfort noise" + +**Formal name:** <http://www.webrtc.org/experiments/rtp-hdrext/inband-cn> + +**Status:** This extension is defined here to allow for experimentation. Once experience has shown that it is useful, we intend to make a proposal based on it for standardization in the IETF. + +## Introduction + +Comfort noise \(CN\) is widely used in real time communication, as it significantly reduces the frequency of RTP packets, and thus saves the network bandwidth, when participants in the communication are constantly actively speaking. + +One way of deploying CN is through \[RFC 3389\]. It defines CN as a special payload, which needs to be encoded and decoded independently from the codec\(s\) applied to active speech signals. This deployment is referred to as outband CN in this context. + +Some codecs, for example RFC 6716: Definition of the Opus Audio Codec, implement their own CN schemes. Basically, the encoder can notify that a CN packet is issued and/or no packet needs to be transmitted. + +Since CN packets have their particularities, cloud and client may need to identify them and treat them differently. Special treatments on CN packets include but are not limited to + +* Upon receiving multiple streams of CN packets, choose only one to relay or mix. +* Adapt jitter buffer wisely according to the discontinuous transmission nature of CN packets. + +While RTP packets that contain outband CN can be easily identified as they bear a different payload type, inband CN cannot. Some codecs may be able to extract the information by decoding the packet, but that depends on codec implementation, not even mentioning that decoding packets is not always feasible. 
This document proposes using an RTP header extension to signal the inband CN. + +## RTP header extension format + +The inband CN extension can be encoded using either the one-byte or two-byte header defined in \[RFC 5285\]. Figures 1 and 2 show encodings with each of these header formats. + + 0 1 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | len=0 |N| noise level | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +Figure 1. Encoding Using the One-Byte Header Format + + 0 1 2 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | len=1 |N| noise level | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +Figure 2. Encoding Using the Two-Byte Header Format + +Noise level is an optional data. The bit "N" being 1 indicates that there is a noise level. The noise level is defined the same way as the audio level in \[RFC 6464\] and therefore can be used to avoid the Audio Level Header Extension on the same RTP packet. This also means that this level is defined the same as the noise level in \[RFC 3389\] and therfore can be compared against outband CN. + +## Further details + +The existence of this header extension in an RTP packet indicates that it has inband CN, and therefore it will be used sparsely, and results in very small transmission cost. + +The end receiver can utilize this RTP header extension to get notified about an upcoming discontinuous transmission. This can be useful for its jitter buffer management. This RTP header extension signals comfort noise, it can also be used by audio mixer to mix streams wisely. As an example, it can avoid mixing multiple comfort noises together. + +Cloud may have the benefits of this RTP header extension as an end receiver, if it does transcoding. It may also utilize this RTP header extension to prioritize RTP packets if it does packet filtering. In both cases, this RTP header extension should not be encrypted. + +## References +* \[RFC 3389\] Zopf, R., "Real-time Transport Protocol \(RTP\) Payload for Comfort Noise \(CN\)", RFC 3389, September 2002. +* \[RFC 6465\] Ivov, E., Ed., Marocco, E., Ed., and J. Lennox, "A Real-time Transport Protocol \(RTP\) Header Extension for Mixer-to-Client Audio Level Indication", RFC 6465, December 2011. +* \[RFC 5285\] Singer, D. and H. Desineni, "A General Mechanism for RTP Header Extensions", RFC 5285, July 2008. 
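
## Example

To make the layout in Figures 1 and 2 concrete, a receiver could unpack the single data byte roughly as in the sketch below (the struct and function are hypothetical helpers, not part of the WebRTC API):

```cpp
#include <cstdint>
#include <optional>

struct InbandCn {
  // Present only when the N bit is set in the extension data.
  std::optional<uint8_t> noise_level;
};

InbandCn ParseInbandCnData(uint8_t data) {
  InbandCn result;
  if (data & 0x80) {                   // N bit: a noise level is included.
    result.noise_level = data & 0x7f;  // 7-bit level, as in the RFC 6464 audio level.
  }
  return result;
}
```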
diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/index.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/index.md new file mode 100644 index 0000000000..081a727c59 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/index.md @@ -0,0 +1,13 @@ +# Experimental RTP header extensions + +The following subpages define experiemental RTP header extensions: + + * [abs-send-time](abs-send-time/README.md) + * [abs-capture-time](abs-capture-time/README.md) + * [color-space](color-space/README.md) + * [playout-delay](playout-delay/README.md) + * [transport-wide-cc-02](transport-wide-cc-02/README.md) + * [video-content-type](video-content-type/README.md) + * [video-timing](video-timing/README.md) + * [inband-cn](inband-cn/README.md) + * [video-layers-allocation00](video-layes-allocation00/README.md) diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/playout-delay/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/playout-delay/README.md new file mode 100644 index 0000000000..e669b04f83 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/playout-delay/README.md @@ -0,0 +1,54 @@ +# Playout Delay + +**Name:** "Playout Delay" ; "RTP Header Extension to control Playout Delay" + +**Formal name:** <http://www.webrtc.org/experiments/rtp-hdrext/playout-delay> + +**SDP "a= name":** "playout-delay" ; this is also used in client/cloud signaling. + +**Status:** This extension is defined here to allow for experimentation. Once experience +has shown that it is useful, we intend to make a proposal based on it for standardization +in the IETF. + +## Introduction + +On WebRTC, the RTP receiver continuously measures inter-packet delay and evaluates packet jitter. Besides this, an estimated delay for decode and render at the receiver is computed. The jitter buffer, the local time extrapolation and the predicted render time (based on predicted decode and render time) impact the delay on a frame before it is rendered at the receiver. + +This document proposes an RTP extension to enable the RTP sender to try and limit the amount of playout delay at the receiver in a certain range. A minimum and maximum delay from the sender provides guidance on the range over which the receiver can smooth out rendering. + +Thus, this extension aims to provide the sender’s intent to the receiver on how quickly a frame needs to be rendered. + +The following use cases are addressed by this extension: + +* Interactive streaming (gaming, remote access): Interactive streaming is highly sensitive to end-to-end latency and any delay in render impacts the end-user experience. These use cases prioritize reducing delay over any smoothing done at the receiver. In these cases, the RTP sender would like to disable all smoothing at receiver (min delay = max delay = 0) +* Movie playback: In some scenarios, the user prefers smooth playback and adaptive delay impacts end-user experience (audio can speed up and slow down). In these cases the sender would like to have a fixed delay at all times (min delay = max delay = K) +* Interactive communication: This is the scenarios where the receiver is best suited to adjust the delay adaptively to minimize latency and at the same time add some smoothing based on jitter prevalent due to network conditions (min delay = K1, max delay = K2) + + +## MIN and MAX playout delay + +The playout delay on a frame represents the amount of delay added to a frame the time it is captured at the sender to the time it is expected to be rendered at the receiver. 
Thus playout delay is essentially: + +Playout delay = ExpectedRenderTime(frame) - ExpectedCaptureTime(frame) + +MIN and MAX playout delay in turn represent the minimum and maximum delay that can be seen on a frame. This restriction range is best effort. The receiver is expected to try and meet the range as best as it can. + +A value of 0 for example is meaningless from the perspective of actually meeting the suggested delay, but it indicates to the receiver that the frame should be rendered as soon as possible. It is up-to the receiver to decide how to handle a frame when it arrives too late (i.e., whether to simply drop or hand over for rendering as soon as possible). + +## RTP header extension format + + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ID | len=2 | MIN delay | MAX delay | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + + +12 bits for Minimum and Maximum delay. This represents a range of 0 - 40950 milliseconds for minimum and maximum (with a granularity of 10 ms). A granularity of 10 ms is sufficient since we expect the following typical use cases: + +* 0 ms: Certain gaming scenarios (likely without audio) where we will want to play the frame as soon as possible. Also, for remote desktop without audio where rendering a frame asap makes sense +* 100/150/200 ms: These could be the max target latency for interactive streaming use cases depending on the actual application (gaming, remoting with audio, interactive scenarios) +* 400 ms: Application that want to ensure a network glitch has very little chance of causing a freeze can start with a minimum delay target that is high enough to deal with network issues. Video streaming is one example. + +The header is attached to the RTP packet by the RTP sender when it needs to change the min and max smoothing delay at the receiver. Once the sender is informed that at least one RTP packet which has the min and max details is delivered, it MAY stop providing details on all further RTP packets until another change warrants communicating the details to the receiver again. This is done as follows: + +RTCP feedback to RTP sender includes the highest sequence number that was seen on the RTP receiver. The RTP sender can track the sequence number on the packet that first had the playout delay extension and then stop sending the extension once the received sequence number is greater than the sequence number on the first packet containing the current values playout delay in this extension. diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/transport-wide-cc-02/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/transport-wide-cc-02/README.md new file mode 100644 index 0000000000..8dc82612c7 --- /dev/null +++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/transport-wide-cc-02/README.md @@ -0,0 +1,62 @@ +# Transport-Wide Congestion Control + +This RTP header extension is an extended version of the extension defined in +<https://tools.ietf.org/html/draft-holmer-rmcat-transport-wide-cc-extensions-01> + +**Name:** "Transport-wide congenstion control 02" + +**Formal name:** +<http://www.webrtc.org/experiments/rtp-hdrext/transport-wide-cc-02> + +**Status:** This extension is defined here to allow for experimentation. Once +experience has shown that it is useful, we intend to make a proposal based on +it for standardization in the IETF. 
diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/transport-wide-cc-02/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/transport-wide-cc-02/README.md
new file mode 100644
index 0000000000..8dc82612c7
--- /dev/null
+++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/transport-wide-cc-02/README.md
@@ -0,0 +1,62 @@

# Transport-Wide Congestion Control

This RTP header extension is an extended version of the extension defined in
<https://tools.ietf.org/html/draft-holmer-rmcat-transport-wide-cc-extensions-01>.

**Name:** "Transport-wide congestion control 02"

**Formal name:**
<http://www.webrtc.org/experiments/rtp-hdrext/transport-wide-cc-02>

**Status:** This extension is defined here to allow for experimentation. Once
experience has shown that it is useful, we intend to make a proposal based on
it for standardization in the IETF.

The original extension defines a transport-wide sequence number that is used in
feedback packets for congestion control. The original implementation sends these
feedback packets at a periodic interval. The extended version presented here has
two changes compared to the original version:

* Feedback is sent only on request by the sender; therefore, the extension has
  two optional bytes that signal that a feedback packet is requested.
* The sender determines whether timing information should be included in the
  feedback packet. The original version always includes timing information.

Contact <kron@google.com> or <sprang@google.com> for more info.

## RTP header extension format

### Data layout overview

Data layout of transport-wide sequence number (1-byte header + 2 bytes of data):

     0                   1                   2
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |  ID   | L=1   |transport-wide sequence number |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Data layout of transport-wide sequence number and optional feedback request
(1-byte header + 4 bytes of data):

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |  ID   | L=3   |transport-wide sequence number |T|  seq count  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |seq count cont.|
    +-+-+-+-+-+-+-+-+

### Data layout details

The data is written in the following order:

* transport-wide sequence number (16-bit unsigned integer)
* feedback request (optional) (16-bit unsigned integer)<br>
  If the extension contains two extra bytes for the feedback request, this means
  that a feedback packet should be generated and sent immediately. The feedback
  request consists of a one-bit field giving the flag value T and a 15-bit
  field giving the sequence count as an unsigned number.
  - If the bit T is set, the feedback packet must contain timing information.
  - seq count specifies how many packets of history should be included in
    the feedback packet. If seq count is zero, no feedback should be
    generated, which is equivalent to sending the two-byte extension above.
    This is added as an option to allow for a fixed packet header size.
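As an illustration of the two wire forms above, the following sketch (not the actual
WebRTC implementation; all names are made up for the example) shows how a receiver could
distinguish the 2-byte and 4-byte payloads and extract the T bit and the 15-bit seq count.

```
#include <cstddef>
#include <cstdint>
#include <optional>

// Illustrative types; not the WebRTC API.
struct FeedbackRequest {
  bool include_timestamps;  // T bit.
  int sequence_count;       // 15-bit history length; 0 means no feedback.
};

struct TransportCc02 {
  uint16_t transport_sequence_number = 0;
  std::optional<FeedbackRequest> feedback_request;
};

// Parses the 2- or 4-byte data payload that follows the one-byte header
// (L=1 or L=3). All values are in network (big-endian) byte order.
std::optional<TransportCc02> ParseTransportCc02(const uint8_t* data, size_t size) {
  if (size != 2 && size != 4) return std::nullopt;
  TransportCc02 result;
  result.transport_sequence_number = static_cast<uint16_t>((data[0] << 8) | data[1]);
  if (size == 4) {
    uint16_t request = static_cast<uint16_t>((data[2] << 8) | data[3]);
    FeedbackRequest fr;
    fr.include_timestamps = (request & 0x8000) != 0;  // Top bit is T.
    fr.sequence_count = request & 0x7FFF;             // Lower 15 bits.
    result.feedback_request = fr;
  }
  return result;
}
```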
diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-content-type/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-content-type/README.md
new file mode 100644
index 0000000000..e7eb10d4e8
--- /dev/null
+++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-content-type/README.md
@@ -0,0 +1,24 @@

# Video Content Type

The Video Content Type extension is used to communicate a video content type
from the sender to the receiver of an RTP video stream. Contact <ilnik@google.com>
for more info.

Name: "Video Content Type" ; "RTP Header Extension for Video Content Type"

Formal name: <http://www.webrtc.org/experiments/rtp-hdrext/video-content-type>

SDP "a= name": "video-content-type" ; this is also used in client/cloud signaling.

Wire format: 1-byte extension, 1 byte of data. Total 2 bytes extra per packet
(plus a shared 4 bytes for all extensions present: 2-byte magic word 0xBEDE,
2-byte # of extensions).

Values:

 * 0x00: *Unspecified*. Default value. Treated the same as the absence of the extension.
 * 0x01: *Screenshare*. Video stream is of a screenshare type.

Notes: The extension should be present only in the last packet of key-frames. If
attached to other packets, it should be ignored. If the extension is absent, the
*Unspecified* value is assumed.

diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-frame-tracking-id/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-frame-tracking-id/README.md
new file mode 100644
index 0000000000..d1c609744e
--- /dev/null
+++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-frame-tracking-id/README.md
@@ -0,0 +1,27 @@

# Video Frame Tracking Id

The Video Frame Tracking Id extension is meant for media quality testing
purposes and shouldn't be used in production. It tracks the webrtc::VideoFrame
id field from the sender to the receiver in order to gather reference-based
media quality metrics such as PSNR or SSIM.
Contact <jleconte@google.com> for more info.

**Name:** "Video Frame Tracking Id"

**Formal name:**
<http://www.webrtc.org/experiments/rtp-hdrext/video-frame-tracking-id>

**Status:** This extension is defined to allow for media quality testing. It is
enabled by using a field trial and should only be used in a testing environment.

### Data layout overview

1-byte header + 2 bytes of data:

     0                   1                   2
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |  ID   | L=1   |    video-frame-tracking-id    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Notes: The extension should be present only in the first packet of each frame.
If attached to other packets, it can be ignored.
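For context, a minimal sketch of how the one-byte-header element above could be read
(RFC 8285 one-byte header: extension ID in the upper 4 bits, data length minus one in the
lower 4 bits, followed here by a 16-bit tracking id in network byte order). This is an
assumption-laden illustration, not the WebRTC parser; the function name and the
`negotiated_id` parameter are invented for the example.

```
#include <cstddef>
#include <cstdint>
#include <optional>

// Reads one one-byte-header extension element and returns the 16-bit
// tracking id if the element matches the locally negotiated extension ID.
std::optional<uint16_t> ReadTrackingId(const uint8_t* element,
                                       size_t remaining,
                                       uint8_t negotiated_id) {
  if (remaining < 1) return std::nullopt;
  uint8_t id = element[0] >> 4;            // Upper 4 bits: extension ID.
  uint8_t len = (element[0] & 0x0F) + 1;   // Lower 4 bits: data length - 1.
  if (id != negotiated_id || len != 2 ||
      remaining < static_cast<size_t>(1 + len)) {
    return std::nullopt;
  }
  // video-frame-tracking-id is a 16-bit value in big-endian order.
  return static_cast<uint16_t>((element[1] << 8) | element[2]);
}
```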
diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-layers-allocation00/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-layers-allocation00/README.md
new file mode 100644
index 0000000000..c4454d8ee1
--- /dev/null
+++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-layers-allocation00/README.md
@@ -0,0 +1,86 @@

# Video Layers Allocation

The goal of this extension is for a video sender to provide information about
the target bitrate, resolution and frame rate of each scalability layer in order
to aid a selective forwarding middlebox in deciding which layer to relay.

**Name:** "Video layers allocation version 0"

**Formal name:**
<http://www.webrtc.org/experiments/rtp-hdrext/video-layers-allocation00>

**Status:** This extension is defined here to allow for experimentation.

In a conference scenario, a video from a single sender may be received by
several recipients with different downlink bandwidth constraints and UI
requirements. To allow this, a sender can send video with several scalability
layers and a middlebox can choose a layer to relay for each receiver.

This extension supports temporal layers, multiple spatial layers sent on a single
RTP stream (SVC), and independent spatial layers sent on multiple RTP streams
(simulcast).

## RTP header extension format

### Data layout

```
//                           +-+-+-+-+-+-+-+-+
//                           |RID| NS| sl_bm |
//                           +-+-+-+-+-+-+-+-+
// Spatial layer bitmask     |sl0_bm |sl1_bm |
// up to 2 bytes             |---------------|
// when sl_bm == 0           |sl2_bm |sl3_bm |
//                           +-+-+-+-+-+-+-+-+
// Number of temporal layers |#tl|#tl|#tl|#tl|
// per spatial layer         |   |   |   |   |
//                           +-+-+-+-+-+-+-+-+
// Target bitrate in kbps    |               |
// per temporal layer        :      ...      :
// leb128 encoded            |               |
//                           +-+-+-+-+-+-+-+-+
// Resolution and framerate  |               |
// 5 bytes per spatial layer + width-1 for   +
// (optional)                | rid=0, sid=0  |
//                           +---------------+
//                           |               |
//                           + height-1 for  +
//                           | rid=0, sid=0  |
//                           +---------------+
//                           | max framerate |
//                           +-+-+-+-+-+-+-+-+
//                           :      ...      :
//                           +-+-+-+-+-+-+-+-+
```

RID: RTP stream index this allocation is sent on, numbered from 0. 2 bits.

NS: Number of RTP streams minus one. 2 bits, thus allowing up to 4 RTP streams.

sl_bm: BitMask of the active Spatial Layers when it is the same for all RTP streams,
or 0 otherwise. 4 bits, thus allowing up to 4 spatial layers per RTP stream.

slX_bm: BitMask of the active Spatial Layers for the RTP stream with index=X.
When NS < 2 this takes one byte, otherwise it uses two bytes. Zero-padded to byte
alignment.

\#tl: 2-bit value of the number of temporal layers minus one, thus allowing up to 4
temporal layers. Values are stored in ascending order of spatial id. Zero-padded to
byte alignment.

Target bitrate in kbps. Values are stored using leb128 encoding [1]. One value per
temporal layer. Values are stored in (RTP stream id, spatial id, temporal id)
ascending order. All bitrates are the total bitrate required to receive the
corresponding layer, i.e. in simulcast mode they include only the corresponding
spatial layer, in full-SVC all lower spatial layers are included. All lower
temporal layers are also included.

Resolution and framerate. Optional. Presence is inferred from the RTP header
extension size. Encoded as (width - 1), 16 bits, (height - 1), 16 bits, and max frame
rate, 8 bits, per spatial layer per RTP stream. Values are stored in (RTP stream
id, spatial id) ascending order.

An empty layer allocation (i.e. nothing sent on the ssrc) is encoded as a special
case with a single 0 byte.

[1] https://aomediacodec.github.io/av1-spec/#leb128
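Since the target bitrates are leb128 encoded, a small sketch of unsigned LEB128 (as
referenced by [1]) may be useful: 7 bits of the value per byte, least-significant group
first, with the high bit of each byte indicating that more bytes follow. This is a
generic illustration, not the WebRTC implementation.

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Encodes an unsigned value as LEB128.
std::vector<uint8_t> Leb128Encode(uint64_t value) {
  std::vector<uint8_t> out;
  do {
    uint8_t byte = value & 0x7F;
    value >>= 7;
    if (value != 0) byte |= 0x80;  // Continuation bit: more bytes follow.
    out.push_back(byte);
  } while (value != 0);
  return out;
}

// Decodes one LEB128 value starting at *pos and advances *pos past it.
uint64_t Leb128Decode(const std::vector<uint8_t>& data, size_t* pos) {
  uint64_t value = 0;
  int shift = 0;
  while (*pos < data.size()) {
    uint8_t byte = data[(*pos)++];
    value |= static_cast<uint64_t>(byte & 0x7F) << shift;
    if ((byte & 0x80) == 0) break;
    shift += 7;
  }
  return value;
}
```

For example, a 1500 kbps layer would be encoded as the two bytes 0xDC 0x0B.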
diff --git a/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-timing/README.md b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-timing/README.md
new file mode 100644
index 0000000000..6f862f6157
--- /dev/null
+++ b/third_party/libwebrtc/docs/native-code/rtp-hdrext/video-timing/README.md
@@ -0,0 +1,42 @@

# Video Timing

The Video Timing extension is used to communicate timing information on a
per-frame basis to the receiver of an RTP video stream. Contact <ilnik@google.com>
for more info. It may be generalized to audio frames as well in the future.

Name: "Video Timing" ; "RTP Header Extension for Video timing"

Formal name: <http://www.webrtc.org/experiments/rtp-hdrext/video-timing>

SDP "a= name": "video-timing" ; this is also used in client/cloud signaling.

Wire format: 1-byte extension, 13 bytes of data. Total 14 bytes extra per packet
(plus 1-3 padding bytes in some cases, plus a shared 4 bytes for all extensions
present: 2-byte magic word 0xBEDE, 2-byte # of extensions).

The first byte is a flags field. Defined flags:

 * 0x01 - extension is set due to timer.
 * 0x02 - extension is set because the frame is larger than usual.

Both flags may be set at the same time. The remaining 6 bits are reserved and
should be ignored.

Next, 6 timestamps are stored as 16-bit values in big-endian order, each representing
the delta from the capture time of a packet in ms.
The timestamps are, in order:

 * Encode start.
 * Encode finish.
 * Packetization complete.
 * Last packet left the pacer.
 * Reserved for network.
 * Reserved for network (2).

The pacer timestamp should be updated inside the RTP packet by the pacer component
when the last packet (containing the extension) is sent to the network. The last two,
reserved timestamps are not set by the sender but are reserved in the packet for any
in-network RTP stream processor to modify.

Notes: The extension should be present only in the last packet of video frames. If
attached to other packets, it should be ignored.
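A minimal sketch of writing the 13 data bytes described above (1 flags byte followed by
six 16-bit big-endian millisecond deltas, in the listed order). The function name and
signature are invented for this example and are not part of WebRTC.

```
#include <array>
#include <cstdint>

// Writes the 13 data bytes of the video-timing extension.
// deltas_ms[0..5] = encode start, encode finish, packetization complete,
// pacer exit, network, network 2 (ms since capture).
std::array<uint8_t, 13> WriteVideoTiming(uint8_t flags, const uint16_t deltas_ms[6]) {
  std::array<uint8_t, 13> data{};
  data[0] = flags;  // 0x01 = triggered by timer, 0x02 = oversized frame.
  for (int i = 0; i < 6; ++i) {
    int offset = 1 + 2 * i;
    data[offset] = static_cast<uint8_t>(deltas_ms[i] >> 8);      // Big-endian high byte.
    data[offset + 1] = static_cast<uint8_t>(deltas_ms[i] & 0xFF);  // Low byte.
  }
  return data;
}
```

Keeping every timestamp at a fixed offset is what allows the pacer, or a later
in-network processor, to overwrite its own slot in place without repacketizing.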