Diffstat (limited to 'doc/developer')
-rw-r--r--  doc/developer/.gitignore | 2
-rw-r--r--  doc/developer/.readthedocs.yaml | 12
-rw-r--r--  doc/developer/MLD-and-PIMv6-Design.png | bin 0 -> 470128 bytes
-rw-r--r--  doc/developer/Makefile | 16
-rw-r--r--  doc/developer/PIMv6-Design.pptx | bin 0 -> 54902 bytes
-rw-r--r--  doc/developer/_static/overrides.css | 255
-rw-r--r--  doc/developer/bgp-typecodes.rst | 25
-rw-r--r--  doc/developer/bgpd.rst | 11
-rw-r--r--  doc/developer/building-docker.rst | 204
-rw-r--r--  doc/developer/building-frr-for-alpine.rst | 109
-rw-r--r--  doc/developer/building-frr-for-archlinux.rst | 131
-rw-r--r--  doc/developer/building-frr-for-centos6.rst | 276
-rw-r--r--  doc/developer/building-frr-for-centos7.rst | 163
-rw-r--r--  doc/developer/building-frr-for-centos8.rst | 157
-rw-r--r--  doc/developer/building-frr-for-debian12.rst | 119
-rw-r--r--  doc/developer/building-frr-for-debian8.rst | 150
-rw-r--r--  doc/developer/building-frr-for-debian9.rst | 127
-rw-r--r--  doc/developer/building-frr-for-fedora.rst | 136
-rw-r--r--  doc/developer/building-frr-for-freebsd10.rst | 130
-rw-r--r--  doc/developer/building-frr-for-freebsd11.rst | 135
-rw-r--r--  doc/developer/building-frr-for-freebsd13.rst | 122
-rw-r--r--  doc/developer/building-frr-for-freebsd9.rst | 140
-rw-r--r--  doc/developer/building-frr-for-netbsd6.rst | 139
-rw-r--r--  doc/developer/building-frr-for-netbsd7.rst | 129
-rw-r--r--  doc/developer/building-frr-for-openbsd6.rst | 182
-rw-r--r--  doc/developer/building-frr-for-opensuse.rst | 146
-rw-r--r--  doc/developer/building-frr-for-openwrt.rst | 79
-rw-r--r--  doc/developer/building-frr-for-ubuntu1404.rst | 142
-rw-r--r--  doc/developer/building-frr-for-ubuntu1604.rst | 142
-rw-r--r--  doc/developer/building-frr-for-ubuntu1804.rst | 148
-rw-r--r--  doc/developer/building-frr-for-ubuntu2004.rst | 164
-rw-r--r--  doc/developer/building-frr-for-ubuntu2204.rst | 183
-rw-r--r--  doc/developer/building-libunwind-note.rst | 6
-rw-r--r--  doc/developer/building-libyang.rst | 47
-rw-r--r--  doc/developer/building.rst | 35
-rw-r--r--  doc/developer/checkpatch.rst | 1251
-rw-r--r--  doc/developer/cli.rst | 1007
-rw-r--r--  doc/developer/conf.py | 406
-rw-r--r--  doc/developer/cross-compiling.rst | 326
-rw-r--r--  doc/developer/cspf.rst | 197
-rw-r--r--  doc/developer/draft-zebra-00.ms | 209
-rw-r--r--  doc/developer/fpm.rst | 119
-rw-r--r--  doc/developer/frr-release-procedure.rst | 267
-rw-r--r--  doc/developer/fuzzing.rst | 164
-rw-r--r--  doc/developer/grpc.rst | 524
-rw-r--r--  doc/developer/hooks.rst | 171
-rw-r--r--  doc/developer/images/PCEPlib_design.jpg | bin 0 -> 42003 bytes
-rw-r--r--  doc/developer/images/PCEPlib_internal_deps.jpg | bin 0 -> 31742 bytes
-rw-r--r--  doc/developer/images/PCEPlib_socket_comm.jpg | bin 0 -> 36823 bytes
-rw-r--r--  doc/developer/images/PCEPlib_threading_model.jpg | bin 0 -> 69181 bytes
-rw-r--r--  doc/developer/images/PCEPlib_threading_model_frr_infra.jpg | bin 0 -> 83409 bytes
-rw-r--r--  doc/developer/images/PCEPlib_timers.jpg | bin 0 -> 37363 bytes
-rw-r--r--  doc/developer/include-compile.rst | 30
-rw-r--r--  doc/developer/index.rst | 26
-rw-r--r--  doc/developer/ldpd-basic-test-setup.md | 681
-rw-r--r--  doc/developer/library.rst | 21
-rw-r--r--  doc/developer/link-state.rst | 499
-rw-r--r--  doc/developer/lists.rst | 777
-rw-r--r--  doc/developer/locking.rst | 79
-rw-r--r--  doc/developer/logging.rst | 873
-rw-r--r--  doc/developer/memtypes.rst | 140
-rw-r--r--  doc/developer/mgmtd-dev.rst | 222
-rw-r--r--  doc/developer/modules.rst | 142
-rw-r--r--  doc/developer/next-hop-tracking.rst | 350
-rw-r--r--  doc/developer/northbound/advanced-topics.rst | 294
-rw-r--r--  doc/developer/northbound/architecture.rst | 275
-rw-r--r--  doc/developer/northbound/demos.rst | 27
-rw-r--r--  doc/developer/northbound/images/arch-after.png | bin 0 -> 18651 bytes
-rw-r--r--  doc/developer/northbound/images/arch-before.png | bin 0 -> 4360 bytes
-rw-r--r--  doc/developer/northbound/images/ly-ctx.png | bin 0 -> 7242 bytes
-rw-r--r--  doc/developer/northbound/images/lyd-node.png | bin 0 -> 21699 bytes
-rw-r--r--  doc/developer/northbound/images/lys-node.png | bin 0 -> 18018 bytes
-rw-r--r--  doc/developer/northbound/images/nb-layer.png | bin 0 -> 25388 bytes
-rw-r--r--  doc/developer/northbound/images/transactions.png | bin 0 -> 21532 bytes
-rw-r--r--  doc/developer/northbound/links.rst | 233
-rw-r--r--  doc/developer/northbound/northbound.rst | 21
-rw-r--r--  doc/developer/northbound/operational-data-rpcs-and-notifications.rst | 565
-rw-r--r--  doc/developer/northbound/plugins-sysrepo.rst | 137
-rw-r--r--  doc/developer/northbound/ppr-basic-test-topology.rst | 1632
-rw-r--r--  doc/developer/northbound/ppr-mpls-basic-test-topology.rst | 1991
-rw-r--r--  doc/developer/northbound/retrofitting-configuration-commands.rst | 1897
-rw-r--r--  doc/developer/northbound/transactional-cli.rst | 244
-rw-r--r--  doc/developer/northbound/yang-module-translator.rst | 629
-rw-r--r--  doc/developer/northbound/yang-tools.rst | 112
-rw-r--r--  doc/developer/ospf-api.rst | 383
-rw-r--r--  doc/developer/ospf-sr.rst | 347
-rw-r--r--  doc/developer/ospf.rst | 13
-rw-r--r--  doc/developer/packaging-debian.rst | 167
-rw-r--r--  doc/developer/packaging-redhat.rst | 98
-rw-r--r--  doc/developer/packaging.rst | 10
-rw-r--r--  doc/developer/path-internals-daemon.rst | 115
-rw-r--r--  doc/developer/path-internals-pcep.rst | 193
-rw-r--r--  doc/developer/path-internals.rst | 11
-rw-r--r--  doc/developer/path.rst | 11
-rw-r--r--  doc/developer/pceplib.rst | 781
-rw-r--r--  doc/developer/process-architecture.rst | 328
-rw-r--r--  doc/developer/rcu.rst | 269
-rw-r--r--  doc/developer/release-announcement-template.md | 40
-rw-r--r--  doc/developer/scripting.rst | 628
-rw-r--r--  doc/developer/static-linking.rst | 98
-rw-r--r--  doc/developer/subdir.am | 116
-rw-r--r--  doc/developer/testing.rst | 11
-rw-r--r--  doc/developer/topotests-jsontopo.rst | 454
-rw-r--r--  doc/developer/topotests-markers.rst | 114
-rw-r--r--  doc/developer/topotests-snippets.rst | 272
-rw-r--r--  doc/developer/topotests.rst | 1429
-rw-r--r--  doc/developer/tracing.rst | 411
-rw-r--r--  doc/developer/vtysh.rst | 212
-rw-r--r--  doc/developer/workflow.rst | 1740
-rw-r--r--  doc/developer/xrefs.rst | 215
-rw-r--r--  doc/developer/zebra.rst | 232
111 files changed, 29198 insertions, 0 deletions
diff --git a/doc/developer/.gitignore b/doc/developer/.gitignore
new file mode 100644
index 0000000..81c60dc
--- /dev/null
+++ b/doc/developer/.gitignore
@@ -0,0 +1,2 @@
+/_templates
+/_build
diff --git a/doc/developer/.readthedocs.yaml b/doc/developer/.readthedocs.yaml
new file mode 100644
index 0000000..113672f
--- /dev/null
+++ b/doc/developer/.readthedocs.yaml
@@ -0,0 +1,12 @@
+# Required
+version: 2
+
+# Set the version of Python and other tools you might need
+build:
+ os: ubuntu-22.04
+ tools:
+ python: "3.11"
+
+# Build documentation in the docs/ directory with Sphinx
+sphinx:
+ configuration: doc/developer/conf.py
diff --git a/doc/developer/MLD-and-PIMv6-Design.png b/doc/developer/MLD-and-PIMv6-Design.png
new file mode 100644
index 0000000..b5066de
--- /dev/null
+++ b/doc/developer/MLD-and-PIMv6-Design.png
Binary files differ
diff --git a/doc/developer/Makefile b/doc/developer/Makefile
new file mode 100644
index 0000000..38afb43
--- /dev/null
+++ b/doc/developer/Makefile
@@ -0,0 +1,16 @@
+all: ALWAYS
+ @$(MAKE) -s -C ../.. developer-html
+help: ALWAYS
+ @$(MAKE) -s -C ../.. doc/help
+pdf: ALWAYS
+ @$(MAKE) -s -C ../.. doc/developer/_build/latexpdf
+info: ALWAYS
+ @$(MAKE) -s -C ../.. doc/developer/_build/texinfo/frr.info
+%: ALWAYS
+ @$(MAKE) -s -C ../.. doc/developer/_build/$@
+
+Makefile:
+ #nothing
+ALWAYS:
+.PHONY: ALWAYS makefiles
+.SUFFIXES:
diff --git a/doc/developer/PIMv6-Design.pptx b/doc/developer/PIMv6-Design.pptx
new file mode 100644
index 0000000..fc17059
--- /dev/null
+++ b/doc/developer/PIMv6-Design.pptx
Binary files differ
diff --git a/doc/developer/_static/overrides.css b/doc/developer/_static/overrides.css
new file mode 100644
index 0000000..302b8d6
--- /dev/null
+++ b/doc/developer/_static/overrides.css
@@ -0,0 +1,255 @@
+/* remove max-width restriction */
+div.body {
+ max-width: none;
+}
+
+/* Palette URL: http://paletton.com/#uid=70p0p0kt6uvcDRAlhBavokxLJ6w */
+
+:root {
+--primary-0: #F36F16; /* Main Primary color */
+--primary-1: #FFC39A;
+--primary-2: #FF9A55;
+--primary-3: #A34403;
+--primary-4: #341500;
+--primary-9: #FFF3EB;
+
+--secondary-1-0: #F39C16; /* Main Secondary color (1) */
+--secondary-1-1: #FFD79A;
+--secondary-1-2: #FFBC55;
+--secondary-1-3: #A36403;
+--secondary-1-4: #341F00;
+--secondary-1-9: #FFF7EB;
+
+--secondary-2-0: #1A599F; /* Main Secondary color (2) */
+--secondary-2-1: #92B9E5;
+--secondary-2-2: #477CB8;
+--secondary-2-3: #0A386B;
+--secondary-2-4: #011122;
+--secondary-2-9: #E3EBF4;
+
+--complement-0: #0E9A83; /* Main Complement color */
+--complement-1: #8AE4D4;
+--complement-2: #3CB4A0;
+--complement-3: #026857;
+--complement-4: #00211B;
+--complement-9: #E0F4F0;
+}
+
+/* new */
+
+body {
+ font-family: "Fira Sans", Helvetica, Arial, sans-serif;
+ font-weight:400;
+}
+h1, h2, h3, h4, h5, h6 {
+ font-family: "Fira Sans", Helvetica, Arial, sans-serif;
+ font-weight:500;
+}
+code, pre, tt {
+ font-family: "Fira Mono";
+}
+h1 {
+ background-color:var(--secondary-1-1);
+ border-bottom:1px solid var(--secondary-1-0);
+ font-weight:300;
+}
+h2 {
+ margin-top:36pt;
+}
+
+a,
+a:hover,
+a:visited,
+.code-block-caption a.headerlink:hover,
+.rst-content dl:not(.docutils) dt .headerlink {
+ color: var(--complement-0);
+}
+.code-block-caption a.headerlink {
+ visibility:hidden;
+}
+
+/* admonitions */
+
+.admonition.warning {
+ border:1px dashed var(--primary-2);
+}
+.admonition.warning .admonition-title {
+ color: var(--primary-3);
+ background-color: var(--primary-1);
+}
+.admonition.note,
+.admonition.hint {
+ border:1px dashed var(--complement-2);
+}
+.admonition.note .admonition-title,
+.admonition.hint .admonition-title {
+ color: var(--complement-3);
+ background-color: var(--complement-1);
+}
+.admonition.seealso,
+div.seealso {
+ background-color:var(--complement-9);
+}
+.admonition.seealso .admonition-title {
+ color: var(--complement-3);
+ background-color:var(--complement-1);
+ border-bottom:1px solid var(--complement-2);
+}
+.admonition.admonition-todo .admonition-title {
+ background-image: repeating-linear-gradient(
+ 135deg,
+ #ffa,
+ #ffa 14.14213452px,
+ #bbb 14.14213452px,
+ #bbb 28.28427124px
+ );
+ color:#000;
+}
+.admonition.admonition-todo {
+ background-image: repeating-linear-gradient(
+ 135deg,
+ #ffd,
+ #ffd 14.14213452px,
+ #eed 14.14213452px,
+ #eed 28.28427124px
+ );
+}
+
+.rst-content dl .admonition p.last {
+ margin-bottom:0 !important;
+}
+
+/* file block */
+
+.code-block-caption {
+/* border-radius: 4px; */
+ font-style:italic;
+ font-weight:300;
+ border-bottom: 1px solid var(--secondary-2-1);
+ background-color: var(--secondary-2-9);
+ padding:2px 8px;
+}
+
+/* navbar */
+
+.wy-nav-side {
+ background-color: var(--secondary-1-4);
+ border-right:2px solid var(--primary-3);
+}
+.wy-menu-vertical a,
+.wy-menu-vertical a:visited,
+.wy-menu-vertical a:hover,
+.wy-side-nav-search>a,
+.wy-side-nav-search .wy-dropdown>a {
+ color: var(--primary-0);
+}
+
+nav div.wy-side-nav-search {
+ background-color: #eee;
+}
+nav div.wy-side-scroll {
+ background-color: var(--secondary-1-4);
+}
+nav .wy-menu-vertical a:hover {
+ background-color:var(--primary-0);
+ color:var(--primary-4);
+}
+nav .wy-menu-vertical li.current ul a:hover {
+ background-color:var(--secondary-1-2);
+ color:var(--primary-4);
+}
+nav .wy-menu-vertical li.current ul a {
+ background-color:var(--secondary-1-1);
+ color:var(--primary-3);
+}
+nav .wy-menu-vertical li.on a:hover,
+nav .wy-menu-vertical li.current>a:hover {
+ background-color:#fcfcfc;
+}
+.wy-side-nav-search input[type=text] {
+ border-color:var(--primary-2);
+}
+.wy-menu-vertical li.toctree-l1.current>a {
+ border-top:1px solid var(--secondary-1-3);
+ border-bottom:1px solid var(--secondary-1-3);
+}
+.wy-menu-vertical li.toctree-l2.current>a {
+ background-color:var(--secondary-1-2);
+}
+.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a {
+ background-color:var(--secondary-1-9);
+}
+
+.wy-nav-content {
+ padding: 25pt 40pt;
+}
+div[role=navigation] > hr {
+ display:none;
+}
+div[role=navigation] {
+ margin-bottom:15pt;
+}
+h1 {
+ margin-left:-40pt;
+ margin-right:-40pt;
+ padding:5pt 40pt 5pt 40pt;
+}
+
+.rst-content pre.literal-block, .rst-content div[class^='highlight'] {
+ border-color:var(--secondary-1-1);
+}
+
+span.pre {
+ color: var(--complement-3);
+}
+pre {
+ background-color: var(--secondary-1-9);
+ border-color: var(--secondary-1-1);
+}
+.highlight .p { color: var(--secondary-2-3); }
+.highlight .k { color: var(--secondary-2-0); }
+.highlight .kt { color: var(--complement-0); }
+.highlight .cm { color: var(--primary-3); }
+.highlight .ow { color: var(--primary-3); }
+.highlight .na { color: var(--primary-2); }
+.highlight .nv { color: var(--complement-0); }
+
+.rst-content code.frrfmtout {
+ background-color: var(--secondary-1-9);
+ border-color: var(--secondary-1-1);
+ font-size:100%;
+}
+.rst-content code.frrfmtout::before {
+ content: "⇒ \"";
+}
+.rst-content code.frrfmtout::after {
+ content: "\"";
+}
+.rst-content code.frrfmtout span {
+ color: var(--secondary-1-4);
+ font-size:100%;
+}
+
+strong {
+ font-weight:500;
+}
+.rst-content dl:not(.docutils) dt {
+ font-family:Fira Mono;
+ font-weight:600;
+ background-color:var(--secondary-2-9);
+ color:var(--secondary-2-3);
+ border-top:2px solid var(--secondary-2-2);
+}
+dt code.descname {
+ color: var(--secondary-2-4);
+}
+
+@media (min-width: 1200px) {
+ .container { width: auto; }
+}
+@media (min-width: 992px) {
+ .container { width: auto; }
+}
+@media (min-width: 768px) {
+ .container { width: auto; }
+}
diff --git a/doc/developer/bgp-typecodes.rst b/doc/developer/bgp-typecodes.rst
new file mode 100644
index 0000000..c7921a7
--- /dev/null
+++ b/doc/developer/bgp-typecodes.rst
@@ -0,0 +1,25 @@
+BGP-4[+] UPDATE Attribute Preprocessor Constants
+================================================
+
+This is a list of preprocessor constants that map to BGP attributes defined by
+various BGP RFCs. In the code these are defined as BGP_ATTR_<ATTR>.
+
++-------+------------------+------------------------------------------+
+| Value | Attribute | References |
++=======+==================+==========================================+
+| 1 | ORIGIN | [RFC 4271] |
+| 2 | AS_PATH | [RFC 4271] |
+| 3 | NEXT_HOP | [RFC 4271] |
+| 4 | MULTI_EXIT_DISC | [RFC 4271] |
+| 5 | LOCAL_PREF | [RFC 4271] |
+| 6 | ATOMIC_AGGREGATE | [RFC 4271] |
+| 7 | AGGREGATOR | [RFC 4271] |
+| 8 | COMMUNITIES | [RFC 1997] |
+| 9 | ORIGINATOR_ID | [RFC 4456] |
+| 10 | CLUSTER_LIST | [RFC 4456] |
+| 14 | MP_REACH_NLRI | [RFC 4760] |
+| 15 | MP_UNREACH_NLRI | [RFC 4760] |
+| 16 | EXT_COMMUNITIES | [RFC 4360] |
+| 17 | AS4_PATH | [RFC 4893] |
+| 18 | AS4_AGGREGATOR | [RFC 4893] |
++-------+------------------+------------------------------------------+
diff --git a/doc/developer/bgpd.rst b/doc/developer/bgpd.rst
new file mode 100644
index 0000000..a35fa61
--- /dev/null
+++ b/doc/developer/bgpd.rst
@@ -0,0 +1,11 @@
+.. _bgpd:
+
+****
+BGPD
+****
+
+.. toctree::
+ :maxdepth: 2
+
+ next-hop-tracking
+ bgp-typecodes
diff --git a/doc/developer/building-docker.rst b/doc/developer/building-docker.rst
new file mode 100644
index 0000000..9d42784
--- /dev/null
+++ b/doc/developer/building-docker.rst
@@ -0,0 +1,204 @@
+Docker
+======
+
+This page covers how to build FRR Docker images.
+
+Images
+""""""
+FRR has Docker build infrastructure to produce Docker images containing
+source-built FRR on the following base platforms:
+
+* Alpine
+* CentOS 7
+* CentOS 8
+
+The following platform images are used to support Travis CI and can also
+be used to reproduce topotest failures when the docker host is Ubuntu
+(tested on 18.04 and 20.04):
+
+* Ubuntu 20.04
+* Ubuntu 22.04
+
+The following platform images may also be built, but these simply install a
+binary package from an existing repository and do not perform source builds:
+
+* Debian 10
+
+Some of these are available on `DockerHub
+<https://hub.docker.com/repository/docker/frrouting/frr/tags?page=1>`_.
+
+At the time of writing, there is no guarantee as to which of these images
+are available on DockerHub.
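+
+To check whether a particular image is currently published, one can simply
+attempt a pull (the tag below is illustrative; substitute the tag you are
+interested in)::
+
+ docker pull frrouting/frr:latest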
+
+Scripts
+"""""""
+
+Some platforms include a build script that may be run from the host. It
+sets the appropriate packaging environment variables and cleans up
+intermediate build images.
+
+These scripts serve another purpose: they allow building platform packages
+without needing the platform itself. For example, the CentOS 8 docker image
+can also be leveraged to build CentOS 8 RPMs that can then be used
+separately from Docker.
+
+If you are only interested in the Docker images and don't need the cleanup
+functionality of the scripts, you can skip them and perform a normal Docker
+build. A plain Docker build is also required for multi-arch images, as the
+scripts do not support using BuildKit for multi-arch builds.
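+
+As a sketch of that package-extraction workflow (the image tag and the RPM
+output path inside the image are assumptions here; check the script output
+for the actual values)::
+
+ # assumed tag and RPM location -- adjust to what the script reports
+ ./docker/centos-8/build.sh
+ id=`docker create frr-centos8:latest`
+ docker cp ${id}:/rpmbuild/RPMS _some_directory_
+ docker rm $id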
+
+Building Alpine Image
+---------------------
+
+Script::
+
+ ./docker/alpine/build.sh
+
+No script::
+
+ docker build -f docker/alpine/Dockerfile .
+
+No script, multi-arch (ex. amd64, arm64, armv7)::
+
+ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -f docker/alpine/Dockerfile -t frr:latest .
+
+
+Building Debian Image
+---------------------
+
+::
+
+ cd docker/debian
+ docker build .
+
+Multi-arch (ex. amd64, arm64, armv7)::
+
+ cd docker/debian
+ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t frr-debian:latest .
+
+Building CentOS 7 Image
+-----------------------
+
+Script::
+
+ ./docker/centos-7/build.sh
+
+No script::
+
+ docker build -f docker/centos-7/Dockerfile .
+
+No script, multi-arch (ex. amd64, arm64)::
+
+ docker buildx build --platform linux/amd64,linux/arm64 -f docker/centos-7/Dockerfile -t frr-centos7:latest .
+
+
+Building CentOS 8 Image
+-----------------------
+
+Script::
+
+ ./docker/centos-8/build.sh
+
+No script::
+
+ docker build -f docker/centos-8/Dockerfile .
+
+No script, multi-arch (ex. amd64, arm64)::
+
+ docker buildx build --platform linux/amd64,linux/arm64 -f docker/centos-8/Dockerfile -t frr-centos8:latest .
+
+
+
+Building UBI 8 Image
+--------------------
+
+Script::
+
+ ./docker/ubi-8/build.sh
+
+Script with parameters; for example (all of this information ends up in
+the docker image labels)::
+
+ ./docker/ubi-8/build.sh frr:ubi-8-my-test "$(git rev-parse --short=10 HEAD)" my_release my_name my_vendor
+
+No script::
+
+ docker build -f docker/ubi-8/Dockerfile .
+
+No script, multi-arch (ex. amd64, arm64)::
+
+ docker buildx build --platform linux/amd64,linux/arm64 -f docker/ubi-8/Dockerfile -t frr-ubi-8:latest .
+
+
+
+Building Ubuntu 20.04 Image
+---------------------------
+
+Build image (from project root directory)::
+
+ docker build -t frr-ubuntu20:latest --build-arg=UBUNTU_VERSION=20.04 -f docker/ubuntu-ci/Dockerfile .
+
+Running Full Topotest::
+
+ docker run --init -it --privileged --name frr -v /lib/modules:/lib/modules \
+ frr-ubuntu20:latest bash -c 'cd ~/frr/tests/topotests ; sudo pytest -nauto --dist=loadfile'
+
+Extract results from the above run into the ``run-results`` directory and
+analyze them::
+
+ tests/topotest/analyze.py -C frr -Ar run-results
+
+Start the container::
+
+ docker run -d --init --privileged --name frr-ubuntu20 --mount type=bind,source=/lib/modules,target=/lib/modules frr-ubuntu20:latest
+
+Running a topotest (when the docker host is Ubuntu)::
+
+ docker exec frr-ubuntu20 bash -c 'cd ~/frr/tests/topotests/ospf_topo1 ; sudo pytest test_ospf_topo1.py'
+
+Starting an interactive bash session::
+
+ docker exec -it frr-ubuntu20 bash
+
+Stopping and removing a container::
+
+ docker stop frr-ubuntu20 ; docker rm frr-ubuntu20
+
+Removing the built image::
+
+ docker rmi frr-ubuntu20:latest
+
+
+Building Ubuntu 22.04 Image
+---------------------------
+
+Build image (from project root directory)::
+
+ docker build -t frr-ubuntu22:latest -f docker/ubuntu-ci/Dockerfile .
+
+Running Full Topotest::
+
+ docker run --init -it --privileged --name frr -v /lib/modules:/lib/modules \
+ frr-ubuntu22:latest bash -c 'cd ~/frr/tests/topotests ; sudo pytest -nauto --dist=loadfile'
+
+Extract results from the above run into the ``run-results`` directory and
+analyze them::
+
+ tests/topotest/analyze.py -C frr -Ar run-results
+
+Start the container::
+
+ docker run -d --init --privileged --name frr-ubuntu22 --mount type=bind,source=/lib/modules,target=/lib/modules frr-ubuntu22:latest
+
+Running a topotest (when the docker host is Ubuntu)::
+
+ docker exec frr-ubuntu22 bash -c 'cd ~/frr/tests/topotests/ospf_topo1 ; sudo pytest test_ospf_topo1.py'
+
+Starting an interactive bash session::
+
+ docker exec -it frr-ubuntu22 bash
+
+Stopping and removing a container::
+
+ docker stop frr-ubuntu22 ; docker rm frr-ubuntu22
+
+Removing the built image::
+
+ docker rmi frr-ubuntu22:latest
diff --git a/doc/developer/building-frr-for-alpine.rst b/doc/developer/building-frr-for-alpine.rst
new file mode 100644
index 0000000..68e58c9
--- /dev/null
+++ b/doc/developer/building-frr-for-alpine.rst
@@ -0,0 +1,109 @@
+Alpine Linux 3.7+
+=========================================================
+
+For building Alpine Linux dev packages, we use docker.
+
+Install docker 17.05 or later
+-----------------------------
+
+Depending on your host, there are different ways of installing docker. Refer
+to the documentation here for instructions on how to install a free version of
+docker: https://www.docker.com/community-edition
+
+Pre-built packages and docker images
+------------------------------------
+
+The master branch of https://github.com/frrouting/frr.git has continuous
+delivery of docker images to Docker Hub at:
+https://hub.docker.com/r/ajones17/frr/. These images have the frr packages
+in /pkgs/apk and have the frr package pre-installed. To copy Alpine
+packages out of these images:
+
+::
+
+ id=`docker create ajones17/frr:latest`
+ docker cp ${id}:/pkgs _some_directory_
+ docker rm $id
+
+To run the frr daemons (see below for how to configure them):
+
+::
+
+ docker run -it --rm --name frr ajones17/frr:latest
+ docker exec -it frr /bin/sh
+
+Work with sources
+-----------------
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+
+Build apk packages
+------------------
+
+::
+
+ ./docker/alpine/build.sh
+
+This will put the apk packages in:
+
+::
+
+ ./docker/pkgs/apk/x86_64/
+
+Usage
+-----
+
+To create a base image with the frr packages installed:
+
+::
+
+ docker build --rm -f docker/alpine/Dockerfile -t frr:latest .
+
+Or, if you don't have a git checkout of the sources, you can build a base
+image directly off the github account:
+
+::
+
+ docker build --rm -f docker/alpine/Dockerfile -t frr:latest \
+ https://github.com/frrouting/frr.git
+
+And to run the image:
+
+::
+
+ docker run -it --rm --name frr frr:latest
+
+In the default configuration, none of the frr daemons will be running.
+To configure the daemons, exec into the container and edit the configuration
+files or mount a volume with configuration files into the container on
+startup. To configure by hand:
+
+::
+
+ docker exec -it frr /bin/sh
+ vi /etc/frr/daemons
+ /etc/init.d/frr start
+
+Or, to configure the daemons using /etc/frr from a host volume, put the
+config files in, say, ./docker/etc and bind mount that into the
+container:
+
+::
+
+ docker run -it --rm -v `pwd`/docker/etc:/etc/frr frr:latest
+
+We can also build the base image directly from docker-compose, with a
+docker-compose.yml file like this one:
+
+::
+
+ version: '2.2'
+
+ services:
+ frr:
+ build:
+ context: https://github.com/frrouting/frr.git
+ dockerfile: docker/alpine/Dockerfile
diff --git a/doc/developer/building-frr-for-archlinux.rst b/doc/developer/building-frr-for-archlinux.rst
new file mode 100644
index 0000000..406d22d
--- /dev/null
+++ b/doc/developer/building-frr-for-archlinux.rst
@@ -0,0 +1,131 @@
+Arch Linux
+================
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ sudo pacman -Syu
+ sudo pacman -S \
+ git autoconf automake libtool make cmake pcre readline texinfo \
+ pkg-config pam json-c bison flex python-pytest \
+ c-ares python python2-ipaddress python-sphinx \
+ net-snmp perl libcap libelf libunwind
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Protobuf
+^^^^^^^^
+
+.. code-block:: console
+
+ sudo pacman -S protobuf-c
+
+ZeroMQ
+^^^^^^
+
+.. code-block:: console
+
+ sudo pacman -S zeromq
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -r -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo useradd --system -g frr --home-dir /var/run/frr/ \
+ -c "FRR suite" --shell /sbin/nologin frr
+ sudo usermod -a -G frrvty frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Edit :file:`/etc/sysctl.conf` (create the file if it doesn't exist) and
+append the following values (ignore the other settings):
+
+::
+
+ # Enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Enable packet forwarding for IPv6
+ net.ipv6.conf.all.forwarding=1
+
+Reboot or use ``sysctl -p`` to apply the same config to the running system.
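+
+For example, to apply the settings immediately without rebooting:
+
+.. code-block:: console
+
+ sudo sysctl -p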
+
+Add MPLS kernel modules
+"""""""""""""""""""""""
+
+To enable MPLS, add the following lines to
+:file:`/etc/modules-load.d/modules.conf`:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls_router
+ mpls_iptunnel
+
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+Enable MPLS Forwarding
+""""""""""""""""""""""
+
+Edit :file:`/etc/sysctl.conf` and add the following lines. Make sure to add
+a line equal to :file:`net.mpls.conf.eth0.input` for each interface used
+with MPLS.
+
+::
+
+ # Enable MPLS Label processing on all interfaces
+ net.mpls.conf.eth0.input=1
+ net.mpls.conf.eth1.input=1
+ net.mpls.conf.eth2.input=1
+ net.mpls.platform_labels=100000
+
+Install service files
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 644 tools/frr.service /etc/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: shell
+
+ systemctl start frr
diff --git a/doc/developer/building-frr-for-centos6.rst b/doc/developer/building-frr-for-centos6.rst
new file mode 100644
index 0000000..233d089
--- /dev/null
+++ b/doc/developer/building-frr-for-centos6.rst
@@ -0,0 +1,276 @@
+.. _building-centos6:
+
+CentOS 6
+========================================
+
+This document describes installation from source. If you want to build an RPM,
+see :ref:`packaging-redhat`.
+
+These instructions have been tested with ``CentOS 6.8`` on the ``x86_64``
+platform.
+
+Warning:
+--------
+``CentOS 6`` is very old and not fully supported by the FRR community
+anymore. Building FRR takes multiple manual steps to update the build
+system with newer packages than what's available from the archives.
+However, the built packages can still be installed afterwards on
+a standard ``CentOS 6`` without any special packages.
+
+Support for CentOS 6 is now on a best-effort basis by the community.
+
+CentOS 6 restrictions:
+----------------------
+
+- PIMd is not supported on ``CentOS 6``. Upgrade to ``CentOS 7`` if
+ PIMd is needed
+- MPLS is not supported on ``CentOS 6``. MPLS requires Linux Kernel 4.5
+ or higher (LDP can be built, but may have limited use without MPLS)
+- Zebra is unable to detect what bridge/vrf an interface is associated
+ with (IFLA\_INFO\_SLAVE\_KIND does not exist in the kernel headers,
+ you can use a newer kernel + headers to get this functionality)
+- frr\_reload.py will not work, as this requires Python 2.7, and CentOS
+ 6 only has 2.6. You can install Python 2.7 via IUS, but it won't work
+ properly unless you compile and install the ipaddr package for it.
+- Building the package requires Sphinx >= 1.1. Only a non-standard
+ package provides a newer sphinx and requires manual installation
+ (see below)
+
+
+Install required packages
+-------------------------
+
+Add packages:
+
+.. code-block:: shell
+
+ sudo yum install git autoconf automake libtool make \
+ readline-devel texinfo net-snmp-devel groff pkgconfig \
+ json-c-devel pam-devel flex epel-release c-ares-devel libcap-devel \
+ elfutils-libelf-devel protobuf-c-devel
+
+Install a newer version of bison from CentOS 7 (the CentOS 6 package is
+too old):
+
+.. code-block:: shell
+
+ sudo yum install rpm-build
+ curl -O http://vault.centos.org/7.0.1406/os/Source/SPackages/bison-2.7-4.el7.src.rpm
+ rpmbuild --rebuild ./bison-2.7-4.el7.src.rpm
+ sudo yum install ./rpmbuild/RPMS/x86_64/bison-2.7-4.el6.x86_64.rpm
+ rm -rf rpmbuild
+
+Install newer versions of autoconf and automake (the packaged versions are
+too old):
+
+.. code-block:: shell
+
+ curl -O http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
+ tar xvf autoconf-2.69.tar.gz
+ cd autoconf-2.69
+ ./configure --prefix=/usr
+ make
+ sudo make install
+ cd ..
+
+ curl -O http://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz
+ tar xvf automake-1.15.tar.gz
+ cd automake-1.15
+ ./configure --prefix=/usr
+ make
+ sudo make install
+ cd ..
+
+Install ``Python 2.7`` in parallel to the default 2.6. Make sure you have
+installed EPEL (``epel-release``, as above). Then install the current
+``python27``, ``python27-devel`` and ``pytest``:
+
+.. code-block:: shell
+
+ sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+ sudo rpm -ivh https://centos6.iuscommunity.org/ius-release.rpm
+ sudo yum install python27 python27-pip python27-devel
+ sudo pip2.7 install pytest
+
+Please note that ``CentOS 6`` needs to keep python pointing to version 2.6
+for ``yum`` to keep working, so don't create a symlink from python2.7 to
+python.
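+
+A quick check that both interpreters coexist as intended:
+
+.. code-block:: shell
+
+ python --version # must still report 2.6.x so yum keeps working
+ python2.7 --version # the parallel-installed 2.7.x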
+
+Install newer ``Sphinx-Build`` based on ``Python 2.7``.
+
+Create a new repo ``/etc/yum.repos.d/puias6.repo`` with the following contents:
+
+::
+
+ ### Name: RPM Repository for RHEL 6 - PUIAS (used for Sphinx-Build)
+ ### URL: http://springdale.math.ias.edu/data/puias/computational
+ [puias-computational]
+ name = RPM Repository for RHEL 6 - Sphinx-Build
+ baseurl = http://springdale.math.ias.edu/data/puias/computational/$releasever/$basearch
+ #mirrorlist =
+ enabled = 1
+ protect = 0
+ gpgkey =
+ gpgcheck = 0
+
+Update the rpm database and install the newer sphinx:
+
+.. code-block:: shell
+
+ sudo yum update
+ sudo yum install python27-sphinx
+
+Install libyang and its dependencies:
+
+.. code-block:: shell
+
+ sudo yum install pcre-devel doxygen cmake
+ git clone https://github.com/CESNET/libyang.git
+ cd libyang
+ git checkout 090926a89d59a3c4000719505d563aaf6ac60f2
+ mkdir build ; cd build
+ cmake -DENABLE_LYD_PRIV=ON -DCMAKE_INSTALL_PREFIX:PATH=/usr -D CMAKE_BUILD_TYPE:String="Release" ..
+ make build-rpm
+ sudo yum install ./rpms/RPMS/x86_64/libyang-0.16.111-0.x86_64.rpm ./rpms/RPMS/x86_64/libyang-devel-0.16.111-0.x86_64.rpm
+ cd ../..
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo useradd -u 92 -g 92 -M -r -G frrvty -s /sbin/nologin \
+ -c "FRR FRRouting suite" -d /var/run/frr frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement. These are
+just an example.)
+
+.. code-block:: shell
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --bindir=/usr/bin \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --libdir=/usr/lib/frr \
+ --libexecdir=/usr/lib/frr \
+ --localstatedir=/var/run/frr \
+ --with-moduledir=/usr/lib/frr/modules \
+ --disable-pimd \
+ --enable-snmp=agentx \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --disable-ldpd \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ make
+ make check
+ sudo make install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo mkdir /var/log/frr
+ sudo mkdir /etc/frr
+
+For integrated config file:
+
+.. code-block:: shell
+
+ sudo touch /etc/frr/frr.conf
+
+For individual config files:
+
+.. note:: Integrated config is preferred to individual config.
+
+.. code-block:: shell
+
+ sudo touch /etc/frr/babeld.conf
+ sudo touch /etc/frr/bfdd.conf
+ sudo touch /etc/frr/bgpd.conf
+ sudo touch /etc/frr/eigrpd.conf
+ sudo touch /etc/frr/isisd.conf
+ sudo touch /etc/frr/ldpd.conf
+ sudo touch /etc/frr/nhrpd.conf
+ sudo touch /etc/frr/ospf6d.conf
+ sudo touch /etc/frr/ospfd.conf
+ sudo touch /etc/frr/pbrd.conf
+ sudo touch /etc/frr/pimd.conf
+ sudo touch /etc/frr/ripd.conf
+ sudo touch /etc/frr/ripngd.conf
+ sudo touch /etc/frr/staticd.conf
+ sudo touch /etc/frr/zebra.conf
+ sudo chown -R frr:frr /etc/frr/
+ sudo touch /etc/frr/vtysh.conf
+ sudo chown frr:frrvty /etc/frr/vtysh.conf
+ sudo chmod 640 /etc/frr/*.conf
+
+Install daemon config file
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo install -p -m 644 tools/etc/frr/daemons /etc/frr/
+ sudo chown frr:frr /etc/frr/daemons
+
+Edit /etc/frr/daemons as needed to select the required daemons
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Look for the section with ``watchfrr_enable=...``, ``zebra=...``, etc., and
+enable the daemons as required by changing the value to ``yes``.
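+
+For example, a setup running only ``zebra``, ``bgpd`` and ``ospfd`` would
+contain lines like these (an excerpt; the shipped file has more entries):
+
+::
+
+ zebra=yes
+ bgpd=yes
+ ospfd=yes
+ ripd=no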
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Edit :file:`/etc/sysctl.conf` and set the following values (ignore the other
+settings)::
+
+ # Controls IP packet forwarding
+ net.ipv4.ip_forward = 1
+ net.ipv6.conf.all.forwarding=1
+
+ # Controls source route verification
+ net.ipv4.conf.default.rp_filter = 0
+
+Load the modified sysctls on the system:
+
+.. code-block:: shell
+
+ sudo sysctl -p
+
+Add init.d startup file
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo install -p -m 755 tools/frr /etc/init.d/frr
+ sudo chkconfig --add frr
+
+Enable FRR daemon at startup
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo chkconfig frr on
+
+Start FRR manually (or reboot)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo /etc/init.d/frr start
diff --git a/doc/developer/building-frr-for-centos7.rst b/doc/developer/building-frr-for-centos7.rst
new file mode 100644
index 0000000..e6da830
--- /dev/null
+++ b/doc/developer/building-frr-for-centos7.rst
@@ -0,0 +1,163 @@
+CentOS 7
+========================================
+
+This document describes installation from source. If you want to build an RPM,
+see :ref:`packaging-redhat`.
+
+CentOS 7 restrictions:
+----------------------
+
+- MPLS is not supported on ``CentOS 7`` with default kernel. MPLS
+ requires Linux Kernel 4.5 or higher (LDP can be built, but may have
+ limited use without MPLS)
+
+Install required packages
+-------------------------
+
+Add packages:
+
+::
+
+ sudo yum install git autoconf automake libtool make \
+ readline-devel texinfo net-snmp-devel groff pkgconfig \
+ json-c-devel pam-devel bison flex pytest c-ares-devel \
+ python-devel python-sphinx libcap-devel \
+ elfutils-libelf-devel libunwind-devel protobuf-c-devel
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo useradd -u 92 -g 92 -M -r -G frrvty -s /sbin/nologin \
+ -c "FRR FRRouting suite" -d /var/run/frr frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement. These are
+just an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --bindir=/usr/bin \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --libdir=/usr/lib/frr \
+ --libexecdir=/usr/lib/frr \
+ --localstatedir=/var/run/frr \
+ --with-moduledir=/usr/lib/frr/modules \
+ --enable-snmp=agentx \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --disable-ldpd \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion \
+ SPHINXBUILD=/usr/bin/sphinx-build
+ make
+ make check
+ sudo make install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo mkdir /var/log/frr
+ sudo mkdir /etc/frr
+ sudo touch /etc/frr/zebra.conf
+ sudo touch /etc/frr/bgpd.conf
+ sudo touch /etc/frr/ospfd.conf
+ sudo touch /etc/frr/ospf6d.conf
+ sudo touch /etc/frr/isisd.conf
+ sudo touch /etc/frr/ripd.conf
+ sudo touch /etc/frr/ripngd.conf
+ sudo touch /etc/frr/pimd.conf
+ sudo touch /etc/frr/nhrpd.conf
+ sudo touch /etc/frr/eigrpd.conf
+ sudo touch /etc/frr/babeld.conf
+ sudo chown -R frr:frr /etc/frr/
+ sudo touch /etc/frr/vtysh.conf
+ sudo chown frr:frrvty /etc/frr/vtysh.conf
+ sudo chmod 640 /etc/frr/*.conf
+
+Install daemon config file
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -p -m 644 tools/etc/frr/daemons /etc/frr/
+ sudo chown frr:frr /etc/frr/daemons
+
+Edit /etc/frr/daemons as needed to select the required daemons
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Look for the section with ``watchfrr_enable=...``, ``zebra=...``, etc., and
+enable the daemons as required by changing the value to ``yes``.
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Create a new file ``/etc/sysctl.d/90-routing-sysctl.conf`` with the
+following content:
+
+::
+
+ # Sysctl for routing
+ #
+ # Routing: We need to forward packets
+ net.ipv4.conf.all.forwarding=1
+ net.ipv6.conf.all.forwarding=1
+
+Load the modified sysctls on the system:
+
+::
+
+ sudo sysctl -p /etc/sysctl.d/90-routing-sysctl.conf
+
+Install frr Service
+^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -p -m 644 tools/frr.service /usr/lib/systemd/system/frr.service
+
+Register the systemd files
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo systemctl preset frr.service
+
+Enable required frr at startup
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo systemctl enable frr
+
+Reboot or start FRR manually
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo systemctl start frr
diff --git a/doc/developer/building-frr-for-centos8.rst b/doc/developer/building-frr-for-centos8.rst
new file mode 100644
index 0000000..6d18e7b
--- /dev/null
+++ b/doc/developer/building-frr-for-centos8.rst
@@ -0,0 +1,157 @@
+CentOS 8
+========
+
+This document describes installation from source. If you want to build an RPM,
+see :ref:`packaging-redhat`.
+
+Install required packages
+-------------------------
+
+Add packages:
+
+::
+
+ sudo dnf install --enablerepo=PowerTools git autoconf pcre-devel \
+ automake libtool make readline-devel texinfo net-snmp-devel pkgconfig \
+ groff pkgconfig json-c-devel pam-devel bison flex python2-pytest \
+ c-ares-devel python2-devel libcap-devel \
+ elfutils-libelf-devel libunwind-devel \
+ protobuf-c-devel
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo useradd -u 92 -g 92 -M -r -G frrvty -s /sbin/nologin \
+ -c "FRR FRRouting suite" -d /var/run/frr frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement. These are
+just an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --bindir=/usr/bin \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --libdir=/usr/lib/frr \
+ --libexecdir=/usr/lib/frr \
+ --localstatedir=/var/run/frr \
+ --with-moduledir=/usr/lib/frr/modules \
+ --enable-snmp=agentx \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --disable-ldpd \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion \
+ SPHINXBUILD=/usr/bin/sphinx-build
+ make
+ make check
+ sudo make install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo mkdir /var/log/frr
+ sudo mkdir /etc/frr
+ sudo touch /etc/frr/zebra.conf
+ sudo touch /etc/frr/bgpd.conf
+ sudo touch /etc/frr/ospfd.conf
+ sudo touch /etc/frr/ospf6d.conf
+ sudo touch /etc/frr/isisd.conf
+ sudo touch /etc/frr/ripd.conf
+ sudo touch /etc/frr/ripngd.conf
+ sudo touch /etc/frr/pimd.conf
+ sudo touch /etc/frr/nhrpd.conf
+ sudo touch /etc/frr/eigrpd.conf
+ sudo touch /etc/frr/babeld.conf
+ sudo chown -R frr:frr /etc/frr/
+ sudo touch /etc/frr/vtysh.conf
+ sudo chown frr:frrvty /etc/frr/vtysh.conf
+ sudo chmod 640 /etc/frr/*.conf
+
+Install daemon config file
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -p -m 644 tools/etc/frr/daemons /etc/frr/
+ sudo chown frr:frr /etc/frr/daemons
+
+Edit /etc/frr/daemons as needed to select the required daemons
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Look for the section with ``watchfrr_enable=...``, ``zebra=...``, etc., and
+enable the daemons as required by changing the value to ``yes``.
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Create a new file ``/etc/sysctl.d/90-routing-sysctl.conf`` with the
+following content:
+
+::
+
+ # Sysctl for routing
+ #
+ # Routing: We need to forward packets
+ net.ipv4.conf.all.forwarding=1
+ net.ipv6.conf.all.forwarding=1
+
+Load the modified sysctls on the system:
+
+::
+
+ sudo sysctl -p /etc/sysctl.d/90-routing-sysctl.conf
+
+Install frr Service
+^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -p -m 644 tools/frr.service /usr/lib/systemd/system/frr.service
+
+Register the systemd files
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo systemctl preset frr.service
+
+Enable required frr at startup
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo systemctl enable frr
+
+Reboot or start FRR manually
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo systemctl start frr
diff --git a/doc/developer/building-frr-for-debian12.rst b/doc/developer/building-frr-for-debian12.rst
new file mode 100644
index 0000000..ca882ee
--- /dev/null
+++ b/doc/developer/building-frr-for-debian12.rst
@@ -0,0 +1,119 @@
+Debian 12
+=========
+
+Install required packages
+-------------------------
+
+Add packages:
+
+::
+
+ sudo apt-get install git autoconf automake libtool make \
+ libprotobuf-c-dev protobuf-c-compiler build-essential \
+ python3-dev python3-pytest python3-sphinx libjson-c-dev \
+ libelf-dev libreadline-dev cmake libcap-dev bison flex \
+ pkg-config texinfo gdb libgrpc-dev python3-grpc-tools
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo addgroup --system --gid 92 frr
+ sudo addgroup --system --gid 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/opt/frr/ \
+ --gecos "FRR suite" --shell /bin/false frr
+ sudo usermod -a -G frrvty frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement. These are
+just an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --localstatedir=/var/opt/frr \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ make
+ make check
+ sudo make install
+
+For more compile options, see ``./configure --help``.
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Edit ``/etc/frr/daemons`` and enable the FRR daemons for the protocols you
+need.
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Edit ``/etc/sysctl.conf`` and uncomment the following values (ignore the
+other settings)
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+**Reboot** or use ``sysctl -p`` to apply the same config to the running
+system.
+
+Troubleshooting
+---------------
+
+Shared library error
+^^^^^^^^^^^^^^^^^^^^
+
+If you try to start any of the frrouting daemons, you may see the error
+below because the frrouting shared library directory cannot be found:
+
+::
+
+ ./zebra: error while loading shared libraries: libfrr.so.0: cannot open
+ shared object file: No such file or directory
+
+The fix is to add the following line to /etc/ld.so.conf, so that the
+library directory is still referenced after the system reboots. To load
+the library directory path immediately, run the ldconfig command after
+adding the line to the file, e.g.:
+
+::
+
+ echo include /usr/local/lib >> /etc/ld.so.conf
+ ldconfig
diff --git a/doc/developer/building-frr-for-debian8.rst b/doc/developer/building-frr-for-debian8.rst
new file mode 100644
index 0000000..7071cb6
--- /dev/null
+++ b/doc/developer/building-frr-for-debian8.rst
@@ -0,0 +1,150 @@
+Debian 8
+========================================
+
+Debian 8 restrictions:
+----------------------
+
+- MPLS is not supported on ``Debian 8`` with default kernel. MPLS
+ requires Linux Kernel 4.5 or higher (LDP can be built, but may have
+ limited use without MPLS)
+
+Install required packages
+-------------------------
+
+Add packages:
+
+::
+
+ sudo apt-get install git autoconf automake libtool make \
+ libreadline-dev texinfo libjson-c-dev pkg-config bison flex python3-pip \
+ libc-ares-dev python3-dev python3-sphinx build-essential \
+ libsnmp-dev libcap-dev libelf-dev libprotobuf-c-dev protobuf-c-compiler
+
+Install a newer pytest (>3.0) from pip
+
+::
+
+ sudo pip3 install pytest
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo addgroup --system --gid 92 frr
+ sudo addgroup --system --gid 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRR suite" --shell /bin/false frr
+ sudo usermod -a -G frrvty frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement. These are
+just an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --localstatedir=/var/run/frr \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ make
+ make check
+ sudo make install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -m 755 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/zebra.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/bgpd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospfd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospf6d.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/isisd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripngd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/pimd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ldpd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/nhrpd.conf
+ sudo install -m 640 -o frr -g frrvty /dev/null /etc/frr/vtysh.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Edit ``/etc/sysctl.conf`` and uncomment the following values (ignore the
+other settings)
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+**Reboot** or use ``sysctl -p`` to apply the same config to the running
+system.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+**Local state directory**
+
+The local state directory must exist and have the correct permissions
+applied for the frrouting daemons to start. In the ./configure example
+above, the local state directory is set to /var/run/frr
+(--localstatedir=/var/run/frr). Debian considers /var/run/frr to be
+temporary, and it is removed after a reboot.
+
+When using a different local state directory you need to create the new
+directory and change the ownership to the frr user, for example:
+
+::
+
+ mkdir /var/opt/frr
+ chown frr /var/opt/frr
+
+**Shared library error**
+
+If you try to start any of the frrouting daemons, you may see the error
+below because the frrouting shared library directory cannot be found:
+
+::
+
+ ./zebra: error while loading shared libraries: libfrr.so.0: cannot open shared object file: No such file or directory
+
+The fix is to add the following line to /etc/ld.so.conf, so that the
+library directory is still referenced after the system reboots. To load
+the library directory path immediately, run the ldconfig command after
+adding the line to the file, e.g.:
+
+::
+
+ echo include /usr/local/lib >> /etc/ld.so.conf
+ ldconfig
diff --git a/doc/developer/building-frr-for-debian9.rst b/doc/developer/building-frr-for-debian9.rst
new file mode 100644
index 0000000..1b2f1b9
--- /dev/null
+++ b/doc/developer/building-frr-for-debian9.rst
@@ -0,0 +1,127 @@
+Debian 9
+========================================
+
+Install required packages
+-------------------------
+
+Add packages:
+
+::
+
+ sudo apt-get install git autoconf automake libtool make \
+ libreadline-dev texinfo libjson-c-dev pkg-config bison flex \
+ libc-ares-dev python3-dev python3-pytest python3-sphinx build-essential \
+ libsnmp-dev libcap-dev libelf-dev libunwind-dev \
+ libprotobuf-c-dev protobuf-c-compiler
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo addgroup --system --gid 92 frr
+ sudo addgroup --system --gid 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/opt/frr/ \
+ --gecos "FRR suite" --shell /bin/false frr
+ sudo usermod -a -G frrvty frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement. These are
+just an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --localstatedir=/var/opt/frr \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ make
+ make check
+ sudo make install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo install -m 755 -o frr -g frr -d /var/log/frr
+ sudo install -m 755 -o frr -g frr -d /var/opt/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/zebra.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/bgpd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospfd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospf6d.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/isisd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripngd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/pimd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ldpd.conf
+ sudo install -m 640 -o frr -g frr /dev/null /etc/frr/nhrpd.conf
+ sudo install -m 640 -o frr -g frrvty /dev/null /etc/frr/vtysh.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Edit ``/etc/sysctl.conf`` and uncomment the following values (ignore the
+other settings)
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+**Reboot** or use ``sysctl -p`` to apply the same config to the running
+system.
+
+Troubleshooting
+---------------
+
+Shared library error
+^^^^^^^^^^^^^^^^^^^^
+
+If you try to start any of the frrouting daemons, you may see the error
+below because the frrouting shared library directory cannot be found:
+
+::
+
+ ./zebra: error while loading shared libraries: libfrr.so.0: cannot open
+ shared object file: No such file or directory
+
+The fix is to add the following line to /etc/ld.so.conf, so that the
+library directory is still referenced after the system reboots. To load
+the library directory path immediately, run the ldconfig command after
+adding the line to the file, e.g.:
+
+::
+
+ echo include /usr/local/lib >> /etc/ld.so.conf
+ ldconfig
diff --git a/doc/developer/building-frr-for-fedora.rst b/doc/developer/building-frr-for-fedora.rst
new file mode 100644
index 0000000..35a24b2
--- /dev/null
+++ b/doc/developer/building-frr-for-fedora.rst
@@ -0,0 +1,136 @@
+Fedora 24+
+==========
+
+This document describes installation from source. If you want to build an RPM,
+see :ref:`packaging-redhat`.
+
+These instructions have been tested on Fedora 24+.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ sudo dnf install git autoconf automake libtool make \
+ readline-devel texinfo net-snmp-devel groff pkgconfig json-c-devel \
+ pam-devel python3-pytest bison flex c-ares-devel python3-devel \
+ python3-sphinx perl-core patch libcap-devel \
+ elfutils-libelf-devel libunwind-devel protobuf-c-devel
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo useradd -u 92 -g 92 -M -r -G frrvty -s /sbin/nologin \
+ -c "FRR FRRouting suite" -d /var/run/frr frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Create a new file ``/etc/sysctl.d/90-routing-sysctl.conf`` with the following
+content:
+
+::
+
+ #
+ # Enable packet forwarding
+ #
+ net.ipv4.conf.all.forwarding=1
+ net.ipv6.conf.all.forwarding=1
+ #
+ # Enable MPLS Label processing on all interfaces
+ #
+ #net.mpls.conf.eth0.input=1
+ #net.mpls.conf.eth1.input=1
+ #net.mpls.conf.eth2.input=1
+ #net.mpls.platform_labels=100000
+
+.. note::
+
+   MPLS must be individually enabled on each interface that requires it. See
+ the example in the config block above.
+
+Load the modified sysctls on the system:
+
+.. code-block:: console
+
+ sudo sysctl -p /etc/sysctl.d/90-routing-sysctl.conf
+
+Create a new file ``/etc/modules-load.d/mpls.conf`` with the following content:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls-router
+ mpls-iptunnel
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+
+.. note::
+ Fedora ships with the ``firewalld`` service enabled. You may run into some
+   issues with the iptables rules it installs by default. If you wish to
+   stop the service and clear `ALL` rules, run these commands:
+
+ .. code-block:: console
+
+ sudo systemctl disable firewalld.service
+ sudo systemctl stop firewalld.service
+ sudo iptables -F
+
+Install frr Service
+^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -p -m 644 tools/frr.service /usr/lib/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``watchfrr_enable=...`` and ``zebra=...`` etc. Enable the daemons
+as required by changing the value to ``yes``.
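+
+For example, to run ``bgpd`` and ``ospfd`` in addition to ``zebra``, the
+relevant lines would look like this (a sketch; your ``daemons`` file may
+contain more entries and options):
+
+.. code-block:: shell
+
+   watchfrr_enable=yes
+   zebra=yes
+   bgpd=yes
+   ospfd=yes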
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: console
+
+ sudo systemctl start frr
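+
+You can verify that the service started successfully with:
+
+.. code-block:: console
+
+   systemctl status frr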
diff --git a/doc/developer/building-frr-for-freebsd10.rst b/doc/developer/building-frr-for-freebsd10.rst
new file mode 100644
index 0000000..707f1e7
--- /dev/null
+++ b/doc/developer/building-frr-for-freebsd10.rst
@@ -0,0 +1,130 @@
+FreeBSD 10
+==========
+
+FreeBSD 10 restrictions:
+------------------------
+
+- MPLS is not supported on ``FreeBSD``. MPLS requires a Linux Kernel
+ (4.5 or higher). LDP can be built, but may have limited use without
+ MPLS
+
+Install required packages
+-------------------------
+
+Add packages (if this is the first package installed, allow installation of
+the package management tool when asked):
+
+::
+
+ pkg install git autoconf automake libtool gmake json-c pkgconf \
+ bison flex py36-pytest c-ares python3.6 py36-sphinx libunwind \
+ protobuf-c
+
+.. include:: building-libunwind-note.rst
+
+Make sure there is no ``/usr/bin/flex`` preinstalled, so that the newly
+installed version in ``/usr/local/bin`` is used (FreeBSD frequently ships an
+older flex as part of the base OS, which takes precedence in the path):
+
+::
+
+    rm -f /usr/bin/flex
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr group and user
+^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ pw groupadd frr -g 101
+ pw groupadd frrvty -g 102
+ pw adduser frr -g 101 -u 101 -G 102 -c "FRR suite" \
+ -d /usr/local/etc/frr -s /usr/sbin/nologin
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ export MAKE=gmake
+ export LDFLAGS="-L/usr/local/lib"
+ export CPPFLAGS="-I/usr/local/include"
+ ./configure \
+ --sysconfdir=/usr/local/etc/frr \
+ --enable-pkgsrcrcdir=/usr/pkg/share/examples/rc.d \
+ --localstatedir=/var/run/frr \
+ --prefix=/usr/local \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ gmake
+ gmake check
+ sudo gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo mkdir /usr/local/etc/frr
+
+For integrated config file:
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/frr.conf
+
+For individual config files:
+
+.. note:: Integrated config is preferred to individual config.
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/babeld.conf
+ sudo touch /usr/local/etc/frr/bfdd.conf
+ sudo touch /usr/local/etc/frr/bgpd.conf
+ sudo touch /usr/local/etc/frr/eigrpd.conf
+ sudo touch /usr/local/etc/frr/isisd.conf
+ sudo touch /usr/local/etc/frr/ldpd.conf
+ sudo touch /usr/local/etc/frr/nhrpd.conf
+ sudo touch /usr/local/etc/frr/ospf6d.conf
+ sudo touch /usr/local/etc/frr/ospfd.conf
+ sudo touch /usr/local/etc/frr/pbrd.conf
+ sudo touch /usr/local/etc/frr/pimd.conf
+ sudo touch /usr/local/etc/frr/ripd.conf
+ sudo touch /usr/local/etc/frr/ripngd.conf
+ sudo touch /usr/local/etc/frr/staticd.conf
+ sudo touch /usr/local/etc/frr/zebra.conf
+ sudo chown -R frr:frr /usr/local/etc/frr/
+ sudo touch /usr/local/etc/frr/vtysh.conf
+ sudo chown frr:frrvty /usr/local/etc/frr/vtysh.conf
+ sudo chmod 640 /usr/local/etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/sysctl.conf``:
+
+::
+
+ # Routing: We need to forward packets
+ net.inet.ip.forwarding=1
+ net.inet6.ip6.forwarding=1
+
+**Reboot** or use ``sysctl`` to apply the same config to the running system.
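+
+For example, to apply the settings to the running system without a reboot
+(standard FreeBSD ``sysctl`` syntax):
+
+.. code-block:: shell
+
+   sysctl net.inet.ip.forwarding=1
+   sysctl net.inet6.ip6.forwarding=1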
diff --git a/doc/developer/building-frr-for-freebsd11.rst b/doc/developer/building-frr-for-freebsd11.rst
new file mode 100644
index 0000000..af0b72b
--- /dev/null
+++ b/doc/developer/building-frr-for-freebsd11.rst
@@ -0,0 +1,135 @@
+FreeBSD 11
+==========
+
+FreeBSD 11 restrictions:
+------------------------
+
+- MPLS is not supported on ``FreeBSD``. MPLS requires a Linux Kernel
+ (4.5 or higher). LDP can be built, but may have limited use without
+ MPLS
+
+Install required packages
+-------------------------
+
+Add packages (if this is the first package installed, allow installation of
+the package management tool when asked):
+
+.. code-block:: shell
+
+ pkg install git autoconf automake libtool gmake json-c pkgconf \
+ bison flex py36-pytest c-ares python3.6 py36-sphinx texinfo libunwind \
+ protobuf-c
+
+.. include:: building-libunwind-note.rst
+
+Make sure there is no ``/usr/bin/flex`` preinstalled, so that the newly
+installed version in ``/usr/local/bin`` is used (FreeBSD frequently ships an
+older flex as part of the base OS, which takes precedence in the path):
+
+.. code-block:: shell
+
+   rm -f /usr/bin/flex
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr group and user
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ pw groupadd frr -g 101
+ pw groupadd frrvty -g 102
+ pw adduser frr -g 101 -u 101 -G 102 -c "FRR suite" \
+ -d /usr/local/etc/frr -s /usr/sbin/nologin
+
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+.. code-block:: shell
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ setenv MAKE gmake
+ setenv LDFLAGS -L/usr/local/lib
+ setenv CPPFLAGS -I/usr/local/include
+ ln -s /usr/local/bin/sphinx-build-3.6 /usr/local/bin/sphinx-build
+ ./configure \
+ --sysconfdir=/usr/local/etc/frr \
+ --enable-pkgsrcrcdir=/usr/pkg/share/examples/rc.d \
+ --localstatedir=/var/run/frr \
+ --prefix=/usr/local \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ gmake
+ gmake check
+ sudo gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo mkdir /usr/local/etc/frr
+
+For integrated config file:
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/frr.conf
+
+For individual config files:
+
+.. note:: Integrated config is preferred to individual config.
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/babeld.conf
+ sudo touch /usr/local/etc/frr/bfdd.conf
+ sudo touch /usr/local/etc/frr/bgpd.conf
+ sudo touch /usr/local/etc/frr/eigrpd.conf
+ sudo touch /usr/local/etc/frr/isisd.conf
+ sudo touch /usr/local/etc/frr/ldpd.conf
+ sudo touch /usr/local/etc/frr/nhrpd.conf
+ sudo touch /usr/local/etc/frr/ospf6d.conf
+ sudo touch /usr/local/etc/frr/ospfd.conf
+ sudo touch /usr/local/etc/frr/pbrd.conf
+ sudo touch /usr/local/etc/frr/pimd.conf
+ sudo touch /usr/local/etc/frr/ripd.conf
+ sudo touch /usr/local/etc/frr/ripngd.conf
+ sudo touch /usr/local/etc/frr/staticd.conf
+ sudo touch /usr/local/etc/frr/zebra.conf
+ sudo chown -R frr:frr /usr/local/etc/frr/
+ sudo touch /usr/local/etc/frr/vtysh.conf
+ sudo chown frr:frrvty /usr/local/etc/frr/vtysh.conf
+ sudo chmod 640 /usr/local/etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/sysctl.conf``:
+
+::
+
+ # Routing: We need to forward packets
+ net.inet.ip.forwarding=1
+ net.inet6.ip6.forwarding=1
+
+**Reboot** or use ``sysctl`` to apply the same config to the running system.
diff --git a/doc/developer/building-frr-for-freebsd13.rst b/doc/developer/building-frr-for-freebsd13.rst
new file mode 100644
index 0000000..0bc8277
--- /dev/null
+++ b/doc/developer/building-frr-for-freebsd13.rst
@@ -0,0 +1,122 @@
+FreeBSD 13
+==========
+
+FreeBSD 13 restrictions:
+------------------------
+
+- MPLS is not supported on ``FreeBSD``. MPLS requires a Linux Kernel
+ (4.5 or higher). LDP can be built, but may have limited use without
+ MPLS
+- PIM for IPv6 is not currently supported on ``FreeBSD``.
+
+Install required packages
+-------------------------
+
+Add packages (if this is the first package installed, allow installation of
+the package management tool when asked):
+
+.. code-block:: shell
+
+ pkg install git autoconf automake libtool gmake json-c pkgconf \
+ bison py39-pytest c-ares py39-sphinx texinfo libunwind libyang2
+
+.. include:: building-libunwind-note.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr group and user
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ pw groupadd frr -g 101
+ pw groupadd frrvty -g 102
+ pw adduser frr -g 101 -u 101 -G 102 -c "FRR suite" \
+ -d /usr/local/etc/frr -s /usr/sbin/nologin
+
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+.. code-block:: shell
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ export MAKE=gmake LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include
+ ./configure \
+ --sysconfdir=/usr/local/etc/frr \
+ --enable-pkgsrcrcdir=/usr/pkg/share/examples/rc.d \
+ --localstatedir=/var/run/frr \
+ --prefix=/usr/local \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ gmake
+ gmake check
+ sudo gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo mkdir /usr/local/etc/frr
+
+For integrated config file:
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/frr.conf
+
+For individual config files:
+
+.. note:: Integrated config is preferred to individual config.
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/babeld.conf
+ sudo touch /usr/local/etc/frr/bfdd.conf
+ sudo touch /usr/local/etc/frr/bgpd.conf
+ sudo touch /usr/local/etc/frr/eigrpd.conf
+ sudo touch /usr/local/etc/frr/isisd.conf
+ sudo touch /usr/local/etc/frr/ldpd.conf
+ sudo touch /usr/local/etc/frr/nhrpd.conf
+ sudo touch /usr/local/etc/frr/ospf6d.conf
+ sudo touch /usr/local/etc/frr/ospfd.conf
+ sudo touch /usr/local/etc/frr/pbrd.conf
+ sudo touch /usr/local/etc/frr/pimd.conf
+ sudo touch /usr/local/etc/frr/ripd.conf
+ sudo touch /usr/local/etc/frr/ripngd.conf
+ sudo touch /usr/local/etc/frr/staticd.conf
+ sudo touch /usr/local/etc/frr/zebra.conf
+ sudo chown -R frr:frr /usr/local/etc/frr/
+ sudo touch /usr/local/etc/frr/vtysh.conf
+ sudo chown frr:frrvty /usr/local/etc/frr/vtysh.conf
+ sudo chmod 640 /usr/local/etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/sysctl.conf``:
+
+::
+
+ # Routing: We need to forward packets
+ net.inet.ip.forwarding=1
+ net.inet6.ip6.forwarding=1
+
+**Reboot** or use ``sysctl`` to apply the same config to the running system.
diff --git a/doc/developer/building-frr-for-freebsd9.rst b/doc/developer/building-frr-for-freebsd9.rst
new file mode 100644
index 0000000..3033287
--- /dev/null
+++ b/doc/developer/building-frr-for-freebsd9.rst
@@ -0,0 +1,140 @@
+FreeBSD 9
+=========
+
+FreeBSD 9 restrictions:
+-----------------------
+
+- MPLS is not supported on ``FreeBSD``. MPLS requires a Linux Kernel
+ (4.5 or higher). LDP can be built, but may have limited use without
+ MPLS
+
+Install required packages
+-------------------------
+
+Add packages (if this is the first package installed, allow installation of
+the package management tool when asked):
+
+::
+
+ pkg install -y git autoconf automake libtool gmake \
+ pkgconf texinfo json-c bison flex py36-pytest c-ares \
+ python3 py36-sphinx libexecinfo protobuf-c
+
+Make sure there is no ``/usr/bin/flex`` preinstalled, so that the newly
+installed version in ``/usr/local/bin`` is used (FreeBSD frequently ships an
+older flex as part of the base OS, which takes precedence in the path):
+
+::
+
+ rm -f /usr/bin/flex
+
+For building with clang (instead of gcc), upgrade clang from the 3.4 default
+to 3.6. *This is needed to build FreeBSD packages as well - for packages,
+clang is the default.* (Clang 3.4 as shipped with FreeBSD 9 crashes during
+compilation.)
+
+::
+
+ pkg install clang36
+ pkg delete clang34
+ mv /usr/bin/clang /usr/bin/clang34
+ ln -s /usr/local/bin/clang36 /usr/bin/clang
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr group and user
+^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ pw groupadd frr -g 101
+ pw groupadd frrvty -g 102
+ pw adduser frr -g 101 -u 101 -G 102 -c "FRR suite" \
+ -d /usr/local/etc/frr -s /usr/sbin/nologin
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ export MAKE=gmake
+ export LDFLAGS="-L/usr/local/lib"
+ export CPPFLAGS="-I/usr/local/include"
+ ./configure \
+ --sysconfdir=/usr/local/etc/frr \
+ --enable-pkgsrcrcdir=/usr/pkg/share/examples/rc.d \
+ --localstatedir=/var/run/frr \
+ --prefix=/usr/local \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ gmake
+ gmake check
+ sudo gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo mkdir /usr/local/etc/frr
+
+For integrated config file:
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/frr.conf
+
+For individual config files:
+
+.. note:: Integrated config is preferred to individual config.
+
+.. code-block:: shell
+
+ sudo touch /usr/local/etc/frr/babeld.conf
+ sudo touch /usr/local/etc/frr/bfdd.conf
+ sudo touch /usr/local/etc/frr/bgpd.conf
+ sudo touch /usr/local/etc/frr/eigrpd.conf
+ sudo touch /usr/local/etc/frr/isisd.conf
+ sudo touch /usr/local/etc/frr/ldpd.conf
+ sudo touch /usr/local/etc/frr/nhrpd.conf
+ sudo touch /usr/local/etc/frr/ospf6d.conf
+ sudo touch /usr/local/etc/frr/ospfd.conf
+ sudo touch /usr/local/etc/frr/pbrd.conf
+ sudo touch /usr/local/etc/frr/pimd.conf
+ sudo touch /usr/local/etc/frr/ripd.conf
+ sudo touch /usr/local/etc/frr/ripngd.conf
+ sudo touch /usr/local/etc/frr/staticd.conf
+ sudo touch /usr/local/etc/frr/zebra.conf
+ sudo chown -R frr:frr /usr/local/etc/frr/
+ sudo touch /usr/local/etc/frr/vtysh.conf
+ sudo chown frr:frrvty /usr/local/etc/frr/vtysh.conf
+ sudo chmod 640 /usr/local/etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/sysctl.conf``:
+
+::
+
+ # Routing: We need to forward packets
+ net.inet.ip.forwarding=1
+ net.inet6.ip6.forwarding=1
+
+**Reboot** or use ``sysctl`` to apply the same config to the running system.
diff --git a/doc/developer/building-frr-for-netbsd6.rst b/doc/developer/building-frr-for-netbsd6.rst
new file mode 100644
index 0000000..8958862
--- /dev/null
+++ b/doc/developer/building-frr-for-netbsd6.rst
@@ -0,0 +1,139 @@
+NetBSD 6
+========
+
+NetBSD 6 restrictions:
+----------------------
+
+- MPLS is not supported on ``NetBSD``. MPLS requires a Linux Kernel
+ (4.5 or higher). LDP can be built, but may have limited use without
+ MPLS
+
+Install required packages
+-------------------------
+
+Configure Package location:
+
+::
+
+ PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/`uname -m`/`uname -r`/All"
+ export PKG_PATH
+
+Add packages:
+
+::
+
+ sudo pkg_add git autoconf automake libtool gmake openssl \
+ pkg-config json-c py36-test python36 py36-sphinx \
+ protobuf-c
+
+Install SSL Root Certificates (for git https access):
+
+::
+
+ sudo pkg_add mozilla-rootcerts
+ sudo touch /etc/openssl/openssl.cnf
+ sudo mozilla-rootcerts install
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -g 93 frrvty
+ sudo useradd -g 92 -u 92 -G frrvty -c "FRR suite" \
+ -d /nonexistent -s /sbin/nologin frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+    export MAKE=gmake
+ export LDFLAGS="-L/usr/pkg/lib -R/usr/pkg/lib"
+ export CPPFLAGS="-I/usr/pkg/include"
+ ./configure \
+ --sysconfdir=/usr/pkg/etc/frr \
+ --enable-pkgsrcrcdir=/usr/pkg/share/examples/rc.d \
+ --localstatedir=/var/run/frr \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ gmake
+ gmake check
+ sudo gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo mkdir /var/log/frr
+ sudo mkdir /usr/pkg/etc/frr
+ sudo touch /usr/pkg/etc/frr/zebra.conf
+ sudo touch /usr/pkg/etc/frr/bgpd.conf
+ sudo touch /usr/pkg/etc/frr/ospfd.conf
+ sudo touch /usr/pkg/etc/frr/ospf6d.conf
+ sudo touch /usr/pkg/etc/frr/isisd.conf
+ sudo touch /usr/pkg/etc/frr/ripd.conf
+ sudo touch /usr/pkg/etc/frr/ripngd.conf
+ sudo touch /usr/pkg/etc/frr/pimd.conf
+ sudo chown -R frr:frr /usr/pkg/etc/frr
+    sudo touch /usr/pkg/etc/frr/vtysh.conf
+    sudo chown frr:frrvty /usr/pkg/etc/frr/vtysh.conf
+ sudo chmod 640 /usr/pkg/etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/sysctl.conf``:
+
+::
+
+ # Routing: We need to forward packets
+ net.inet.ip.forwarding=1
+ net.inet6.ip6.forwarding=1
+
+**Reboot** or use ``sysctl`` to apply the same config to the running
+system.
+
+Install rc.d init files
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ cp pkgsrc/*.sh /etc/rc.d/
+ chmod 555 /etc/rc.d/*.sh
+
+Enable FRR processes
+^^^^^^^^^^^^^^^^^^^^
+
+(Enable the required processes only)
+
+::
+
+ echo "zebra=YES" >> /etc/rc.conf
+ echo "bgpd=YES" >> /etc/rc.conf
+ echo "ospfd=YES" >> /etc/rc.conf
+ echo "ospf6d=YES" >> /etc/rc.conf
+ echo "isisd=YES" >> /etc/rc.conf
+ echo "ripngd=YES" >> /etc/rc.conf
+ echo "ripd=YES" >> /etc/rc.conf
+ echo "pimd=YES" >> /etc/rc.conf
diff --git a/doc/developer/building-frr-for-netbsd7.rst b/doc/developer/building-frr-for-netbsd7.rst
new file mode 100644
index 0000000..e751ba3
--- /dev/null
+++ b/doc/developer/building-frr-for-netbsd7.rst
@@ -0,0 +1,129 @@
+NetBSD 7
+========
+
+NetBSD 7 restrictions:
+----------------------
+
+- MPLS is not supported on ``NetBSD``. MPLS requires a Linux Kernel
+ (4.5 or higher). LDP can be built, but may have limited use without
+ MPLS
+
+Install required packages
+-------------------------
+
+::
+
+ sudo pkgin install git autoconf automake libtool gmake openssl \
+ pkg-config json-c python36 py36-test py36-sphinx \
+ protobuf-c
+
+Install SSL Root Certificates (for git https access):
+
+::
+
+ sudo pkgin install mozilla-rootcerts
+ sudo touch /etc/openssl/openssl.cnf
+ sudo mozilla-rootcerts install
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+Add frr groups and user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -g 93 frrvty
+ sudo useradd -g 92 -u 92 -G frrvty -c "FRR suite" \
+ -d /nonexistent -s /sbin/nologin frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+    export MAKE=gmake
+ export LDFLAGS="-L/usr/pkg/lib -R/usr/pkg/lib"
+ export CPPFLAGS="-I/usr/pkg/include"
+ ./configure \
+ --sysconfdir=/usr/pkg/etc/frr \
+ --enable-pkgsrcrcdir=/usr/pkg/share/examples/rc.d \
+ --localstatedir=/var/run/frr \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ gmake
+ gmake check
+ sudo gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ sudo mkdir /usr/pkg/etc/frr
+ sudo touch /usr/pkg/etc/frr/zebra.conf
+ sudo touch /usr/pkg/etc/frr/bgpd.conf
+ sudo touch /usr/pkg/etc/frr/ospfd.conf
+ sudo touch /usr/pkg/etc/frr/ospf6d.conf
+ sudo touch /usr/pkg/etc/frr/isisd.conf
+ sudo touch /usr/pkg/etc/frr/ripd.conf
+ sudo touch /usr/pkg/etc/frr/ripngd.conf
+ sudo touch /usr/pkg/etc/frr/pimd.conf
+ sudo chown -R frr:frr /usr/pkg/etc/frr
+    sudo touch /usr/pkg/etc/frr/vtysh.conf
+    sudo chown frr:frrvty /usr/pkg/etc/frr/vtysh.conf
+ sudo chmod 640 /usr/pkg/etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/sysctl.conf``:
+
+::
+
+ # Routing: We need to forward packets
+ net.inet.ip.forwarding=1
+ net.inet6.ip6.forwarding=1
+
+**Reboot** or use ``sysctl`` to apply the same config to the running
+system.
+
+Install rc.d init files
+^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ cp pkgsrc/*.sh /etc/rc.d/
+ chmod 555 /etc/rc.d/*.sh
+
+Enable FRR processes
+^^^^^^^^^^^^^^^^^^^^
+
+(Enable the required processes only)
+
+::
+
+ echo "zebra=YES" >> /etc/rc.conf
+ echo "bgpd=YES" >> /etc/rc.conf
+ echo "ospfd=YES" >> /etc/rc.conf
+ echo "ospf6d=YES" >> /etc/rc.conf
+ echo "isisd=YES" >> /etc/rc.conf
+ echo "ripngd=YES" >> /etc/rc.conf
+ echo "ripd=YES" >> /etc/rc.conf
+ echo "pimd=YES" >> /etc/rc.conf
diff --git a/doc/developer/building-frr-for-openbsd6.rst b/doc/developer/building-frr-for-openbsd6.rst
new file mode 100644
index 0000000..00bc2e5
--- /dev/null
+++ b/doc/developer/building-frr-for-openbsd6.rst
@@ -0,0 +1,182 @@
+OpenBSD 6
+=========
+
+Install required packages
+-------------------------
+
+Configure ``PKG_PATH``:
+
+::
+
+ export PKG_PATH=http://ftp5.usa.openbsd.org/pub/OpenBSD/$(uname -r)/packages/$(machine -a)/
+
+Add packages:
+
+::
+
+ pkg_add clang libcares python3
+ pkg_add git autoconf-2.69p2 automake-1.15.1 libtool bison
+ pkg_add gmake json-c py-test py-sphinx libexecinfo protobuf-c
+
+Select Python 2.7 as the default (required for pytest):
+
+::
+
+ ln -s /usr/local/bin/python2.7 /usr/local/bin/python
+
+.. include:: building-libyang.rst
+
+Get FRR, compile it and install it (from Git)
+---------------------------------------------
+
+**This assumes you want to build and install FRR from source rather than
+using any packages.**
+
+Add frr group and user
+^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ groupadd -g 525 _frr
+ groupadd -g 526 _frrvty
+ useradd -g 525 -u 525 -c "FRR suite" -G _frrvty \
+ -d /nonexistent -s /sbin/nologin _frr
+
+Download Source, configure and compile it
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You may prefer different options on the configure statement; these are just
+an example.)
+
+.. warning::
+
+   On OpenBSD the proper links for the libyang library may not have been
+   created. If so, create them manually:
+
+::
+
+ ln -s /usr/lib/libyang.so.1.10.17 /usr/lib/libyang.so
+
+.. warning::
+
+   OpenBSD since version 6.2 has ``clang`` as the default compiler, so
+   ``clang`` must be used to build FRR (the included gcc version is very old).
+
+::
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ export AUTOCONF_VERSION="2.69"
+ export AUTOMAKE_VERSION="1.15"
+ ./bootstrap.sh
+ export LDFLAGS="-L/usr/local/lib"
+ export CPPFLAGS="-I/usr/local/include"
+ ./configure \
+ --sysconfdir=/etc/frr \
+ --localstatedir=/var/frr \
+ --enable-multipath=64 \
+ --enable-user=_frr \
+ --enable-group=_frr \
+ --enable-vty-group=_frrvty \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-fpm \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion \
+ CC=clang
+ gmake
+ gmake check
+ doas gmake install
+
+Create empty FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ doas mkdir /var/frr
+ doas chown _frr:_frr /var/frr
+ doas chmod 755 /var/frr
+ doas mkdir /etc/frr
+ doas touch /etc/frr/zebra.conf
+ doas touch /etc/frr/bgpd.conf
+ doas touch /etc/frr/ospfd.conf
+ doas touch /etc/frr/ospf6d.conf
+ doas touch /etc/frr/isisd.conf
+ doas touch /etc/frr/ripd.conf
+ doas touch /etc/frr/ripngd.conf
+ doas touch /etc/frr/pimd.conf
+ doas touch /etc/frr/ldpd.conf
+ doas touch /etc/frr/nhrpd.conf
+ doas chown -R _frr:_frr /etc/frr
+ doas touch /etc/frr/vtysh.conf
+ doas chown -R _frr:_frrvty /etc/frr/vtysh.conf
+ doas chmod 750 /etc/frr
+ doas chmod 640 /etc/frr/*.conf
+
+Enable IP & IPv6 forwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add the following lines to the end of ``/etc/rc.conf``:
+
+::
+
+ net.inet6.ip6.forwarding=1 # 1=Permit forwarding of IPv6 packets
+ net.inet6.ip6.mforwarding=1 # 1=Permit forwarding of IPv6 multicast packets
+ net.inet6.ip6.multipath=1 # 1=Enable IPv6 multipath routing
+
+**Reboot** to apply the config to the system.
+
+Enable MPLS Forwarding
+^^^^^^^^^^^^^^^^^^^^^^
+
+To enable MPLS forwarding on a given interface, use the following
+command:
+
+::
+
+ doas ifconfig em0 mpls
+
+Alternatively, to make MPLS forwarding persistent across reboots, add
+the "mpls" keyword in the ``hostname.*`` files of the desired interfaces.
+Example:
+
+::
+
+ cat /etc/hostname.em0
+ inet 10.0.1.1 255.255.255.0 mpls
+
+Install rc.d init files
+^^^^^^^^^^^^^^^^^^^^^^^
+
+(create them in ``/etc/rc.d`` - no examples are included at this time with
+the FRR source)
+
+Example (for zebra - store as ``/etc/rc.d/frr_zebra.sh``):
+
+::
+
+ #!/bin/sh
+ #
+ # $OpenBSD: frr_zebra.rc,v 1.1 2013/04/18 20:29:08 sthen Exp $
+
+ daemon="/usr/local/sbin/zebra -d"
+
+ . /etc/rc.d/rc.subr
+
+ rc_cmd $1
+
+Enable FRR processes
+^^^^^^^^^^^^^^^^^^^^
+
+(Enable the required processes only)
+
+::
+
+ echo "frr_zebra=YES" >> /etc/rc.conf
+ echo "frr_bgpd=YES" >> /etc/rc.conf
+ echo "frr_ospfd=YES" >> /etc/rc.conf
+ echo "frr_ospf6d=YES" >> /etc/rc.conf
+ echo "frr_isisd=YES" >> /etc/rc.conf
+ echo "frr_ripngd=YES" >> /etc/rc.conf
+ echo "frr_ripd=YES" >> /etc/rc.conf
+ echo "frr_pimd=YES" >> /etc/rc.conf
+ echo "frr_ldpd=YES" >> /etc/rc.conf
diff --git a/doc/developer/building-frr-for-opensuse.rst b/doc/developer/building-frr-for-opensuse.rst
new file mode 100644
index 0000000..3ff445b
--- /dev/null
+++ b/doc/developer/building-frr-for-opensuse.rst
@@ -0,0 +1,146 @@
+openSUSE
+========
+
+This document describes installation from source.
+
+These instructions have been tested on openSUSE Tumbleweed running on a
+Raspberry Pi 400.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ zypper in git autoconf automake libtool make \
+    readline-devel texinfo net-snmp-devel groff pkgconfig libjson-c-devel \
+    pam-devel python3-pytest bison flex c-ares-devel python3-devel \
+ python3-Sphinx perl patch libcap-devel libyang-devel \
+ libelf-devel libunwind-devel protobuf-c
+
+.. include:: building-libunwind-note.rst
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo useradd -u 92 -g 92 -M -r -G frrvty -s /sbin/nologin \
+ -c "FRR FRRouting suite" -d /var/run/frr frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+.. note::
+
+   On some platforms, such as the Raspberry Pi, some directories (e.g.
+   /var/run) are on file systems mounted as tmpfs for performance reasons,
+   so their contents disappear after every reboot. FRR uses /var/run/frr to
+   store the pid files of every daemon.
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Create a new file ``/etc/sysctl.d/90-routing-sysctl.conf`` with the following
+content:
+
+::
+
+ #
+ # Enable packet forwarding
+ #
+ net.ipv4.conf.all.forwarding=1
+ net.ipv6.conf.all.forwarding=1
+ #
+ # Enable MPLS Label processing on all interfaces
+ #
+ #net.mpls.conf.eth0.input=1
+ #net.mpls.conf.eth1.input=1
+ #net.mpls.conf.eth2.input=1
+ #net.mpls.platform_labels=100000
+
+.. note::
+
+   MPLS must be individually enabled on each interface that requires it. See
+ the example in the config block above.
+
+Load the modified sysctls on the system:
+
+.. code-block:: console
+
+ sudo sysctl -p /etc/sysctl.d/90-routing-sysctl.conf
+
+Create a new file ``/etc/modules-load.d/mpls.conf`` with the following content:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls-router
+ mpls-iptunnel
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+
+.. note::
+   The ``firewalld`` service may be enabled on your system. You may run into
+   some issues with the iptables rules it installs by default. If you wish to
+   stop the service and clear `ALL` rules, run these commands:
+
+ .. code-block:: console
+
+ sudo systemctl disable firewalld.service
+ sudo systemctl stop firewalld.service
+ sudo iptables -F
+
+Install frr Service
+^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -p -m 644 tools/frr.service /usr/lib/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``bgpd=no`` etc. Enable the daemons
+as required by changing the value to ``yes``.
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: console
+
+ sudo systemctl start frr
+
+Check the startup messages of FRR with:
+
+.. code-block:: console
+
+ journalctl -u frr --follow
diff --git a/doc/developer/building-frr-for-openwrt.rst b/doc/developer/building-frr-for-openwrt.rst
new file mode 100644
index 0000000..47cf2cb
--- /dev/null
+++ b/doc/developer/building-frr-for-openwrt.rst
@@ -0,0 +1,79 @@
+OpenWrt
+=======
+
+General info about the OpenWrt build system: `link <https://openwrt.org/docs/guide-developer/build-system/start>`_.
+
+Prepare build environment
+-------------------------
+
+For Debian based distributions, run:
+
+::
+
+ sudo apt-get install git build-essential libssl-dev libncurses5-dev \
+ unzip zlib1g-dev subversion mercurial
+
+For other environments, instructions can be found in the
+`official documentation
+<https://openwrt.org/docs/guide-developer/build-system/install-buildsystem#examples_of_package_installations>`_.
+
+
+Get OpenWrt Sources (from Git)
+------------------------------
+
+.. note::
+   The OpenWrt build will fail if you run it as root, so take care to run it as a non-privileged user.
+
+Clone the OpenWrt sources and retrieve the package feeds:
+
+::
+
+ git clone https://github.com/openwrt/openwrt.git
+ cd openwrt
+ ./scripts/feeds update -a
+ ./scripts/feeds install -a
+
+Configure OpenWrt for your target and select the needed FRR packages in
+Network -> Routing and Redirection -> frr, then exit and save:
+
+::
+
+ make menuconfig
+
+Then, to compile a complete OpenWrt image, run:
+
+::
+
+    make
+
+Alternatively, to build only the FRR packages, run:
+
+::
+
+    make package/frr/compile
+
+On the first build, ``make package/frr/compile`` may not work; in that case,
+it may be necessary to run a ``make`` for the entire build environment first.
+Add ``V=s`` to get more debugging output.
+
+More information about the OpenWrt build system can be found `here
+<https://openwrt.org/docs/guide-developer/build-system/use-buildsystem>`__.
+
+Work with sources
+-----------------
+
+To update to a newer version or change other options, edit
+``feeds/packages/frr/Makefile``, as sketched below.
+
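+For example, bumping the FRR version typically means editing the standard
+OpenWrt package variables in that Makefile (a sketch; the exact variable
+names and values in your feed may differ):
+
+::
+
+    PKG_NAME:=frr
+    PKG_VERSION:=8.4.1
+    PKG_RELEASE:=1
+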
+More information about working with patches in OpenWrt buildsystem can be found `here
+<https://openwrt.org/docs/guide-developer/build-system/use-patches-with-buildsystem>`__.
+
+Usage
+-----
+
+Edit ``/usr/sbin/frr.init`` and add/remove the daemon names in the
+``DAEMONS=`` section, or simply don't install unneeded packages. For example:
+zebra bgpd ldpd isisd nhrpd ospfd ospf6d pimd ripd ripngd.
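+
+A sketch of what the edited line might look like (list only the daemons you
+actually need):
+
+::
+
+    DAEMONS="zebra bgpd ospfd"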
+
+Enable the service
+^^^^^^^^^^^^^^^^^^
+
+- ``service frr enable``
+
+Start the service
+^^^^^^^^^^^^^^^^^
+
+- ``service frr start``
diff --git a/doc/developer/building-frr-for-ubuntu1404.rst b/doc/developer/building-frr-for-ubuntu1404.rst
new file mode 100644
index 0000000..cc6c3c0
--- /dev/null
+++ b/doc/developer/building-frr-for-ubuntu1404.rst
@@ -0,0 +1,142 @@
+Ubuntu 14.04 LTS
+================
+
+This document describes installation from source. If you want to build a
+``deb``, see :ref:`packaging-debian`.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ apt-get update
+ apt-get install \
+ git autoconf automake libtool make libreadline-dev texinfo \
+ pkg-config libpam0g-dev libjson-c-dev bison flex python3-pytest \
+ libc-ares-dev python3-dev python3-sphinx install-info build-essential \
+ libsnmp-dev perl libcap-dev libelf-dev
+
+.. include:: building-libyang.rst
+
+Protobuf
+^^^^^^^^
+
+.. code-block:: console
+
+ sudo apt-get install protobuf-c-compiler libprotobuf-c-dev
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -r -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRR suite" --shell /sbin/nologin frr
+ sudo usermod -a -G frrvty frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Edit :file:`/etc/sysctl.conf` and uncomment the following values (ignore the
+other settings):
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+Reboot or use ``sysctl -p`` to apply the same config to the running system.
+
+Add MPLS kernel modules
+"""""""""""""""""""""""
+
+.. warning::
+
+ MPLS is not supported on Ubuntu 14.04 with the default kernel. MPLS requires
+ kernel 4.5 or higher. LDPD can be built, but may have limited use without
+ MPLS. For an updated Ubuntu Kernel, see
+ http://kernel.ubuntu.com/~kernel-ppa/mainline/
+
+If you are running a kernel that supports MPLS (see the warning above), the
+modules can be loaded at boot by adding the following lines to
+:file:`/etc/modules-load.d/modules.conf`:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls_router
+ mpls_iptunnel
+
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
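+
+You can check that the modules loaded successfully (module names are listed
+with underscores):
+
+.. code-block:: console
+
+   lsmod | grep mpls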
+
+Enable MPLS Forwarding
+""""""""""""""""""""""
+
+Edit :file:`/etc/sysctl.conf` and add the following lines. Make sure to add a
+line like ``net.mpls.conf.eth0.input`` for each interface used with MPLS.
+
+::
+
+ # Enable MPLS Label processing on all interfaces
+ net.mpls.conf.eth0.input=1
+ net.mpls.conf.eth1.input=1
+ net.mpls.conf.eth2.input=1
+ net.mpls.platform_labels=100000
+
+Install the init.d service
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 755 tools/frr /etc/init.d/frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``watchfrr_enable=...`` and ``zebra=...`` etc. Enable the daemons
+as required by changing the value to ``yes``.
+
+Start the init.d service
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ /etc/init.d/frr start
+
+Use ``/etc/init.d/frr status`` to check its status.
diff --git a/doc/developer/building-frr-for-ubuntu1604.rst b/doc/developer/building-frr-for-ubuntu1604.rst
new file mode 100644
index 0000000..e5c2389
--- /dev/null
+++ b/doc/developer/building-frr-for-ubuntu1604.rst
@@ -0,0 +1,142 @@
+Ubuntu 16.04 LTS
+================
+
+This document describes installation from source. If you want to build a
+``deb``, see :ref:`packaging-debian`.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ apt-get update
+ apt-get install \
+ git autoconf automake libtool make libreadline-dev texinfo \
+ pkg-config libpam0g-dev libjson-c-dev bison flex python3-pytest \
+ libc-ares-dev python3-dev python-ipaddress python3-sphinx \
+ install-info build-essential libsnmp-dev perl libcap-dev \
+ libelf-dev libprotobuf-c-dev protobuf-c-compiler
+
+.. include:: building-libyang.rst
+
+Protobuf
+^^^^^^^^
+
+.. code-block:: console
+
+ sudo apt-get install protobuf-c-compiler libprotobuf-c-dev
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -r -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRR suite" --shell /sbin/nologin frr
+ sudo usermod -a -G frrvty frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Edit :file:`/etc/sysctl.conf` and uncomment the following values (ignore the
+other settings):
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+Reboot or use ``sysctl -p`` to apply the same config to the running system.
+
+Add MPLS kernel modules
+"""""""""""""""""""""""
+
+.. warning::
+
+ MPLS is not supported on Ubuntu 16.04 with the default kernel. MPLS requires
+ kernel 4.5 or higher. LDPD can be built, but may have limited use without
+ MPLS. For an updated Ubuntu Kernel, see
+ http://kernel.ubuntu.com/~kernel-ppa/mainline/
+
+If you are running a kernel that supports MPLS (see the warning above), the
+modules can be loaded at boot by adding the following lines to
+:file:`/etc/modules-load.d/modules.conf`:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls_router
+ mpls_iptunnel
+
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+Enable MPLS Forwarding
+""""""""""""""""""""""
+
+Edit :file:`/etc/sysctl.conf` and add the following lines. Make sure to add a
+line like ``net.mpls.conf.eth0.input`` for each interface used with MPLS.
+
+::
+
+ # Enable MPLS Label processing on all interfaces
+ net.mpls.conf.eth0.input=1
+ net.mpls.conf.eth1.input=1
+ net.mpls.conf.eth2.input=1
+ net.mpls.platform_labels=100000
+
+Install service files
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 644 tools/frr.service /etc/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``watchfrr_enable=...`` and ``zebra=...`` etc. Enable the daemons
+as required by changing the value to ``yes``.
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: console
+
+ systemctl start frr
diff --git a/doc/developer/building-frr-for-ubuntu1804.rst b/doc/developer/building-frr-for-ubuntu1804.rst
new file mode 100644
index 0000000..fcfd94e
--- /dev/null
+++ b/doc/developer/building-frr-for-ubuntu1804.rst
@@ -0,0 +1,148 @@
+Ubuntu 18.04 LTS
+================
+
+This document describes installation from source. If you want to build a
+``deb``, see :ref:`packaging-debian`.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ sudo apt update
+ sudo apt-get install \
+ git autoconf automake libtool make libreadline-dev texinfo \
+ pkg-config libpam0g-dev libjson-c-dev bison flex \
+ libc-ares-dev python3-dev python3-sphinx \
+ install-info build-essential libsnmp-dev perl libcap-dev \
+ libelf-dev libunwind-dev
+
+.. include:: building-libunwind-note.rst
+
+.. include:: building-libyang.rst
+
+Protobuf
+^^^^^^^^
+
+.. code-block:: console
+
+ sudo apt-get install protobuf-c-compiler libprotobuf-c-dev
+
+ZeroMQ
+^^^^^^
+
+.. code-block:: console
+
+ sudo apt-get install libzmq5 libzmq3-dev
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -r -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRR suite" --shell /sbin/nologin frr
+ sudo usermod -a -G frrvty frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Edit :file:`/etc/sysctl.conf` and uncomment the following values (ignore the
+other settings):
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+Reboot or use ``sysctl -p`` to apply the same config to the running system.
+
+Add MPLS kernel modules
+"""""""""""""""""""""""
+
+Ubuntu 18.04 ships with kernel 4.15. MPLS modules are present by default. To
+enable, add the following lines to :file:`/etc/modules-load.d/modules.conf`:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls_router
+ mpls_iptunnel
+
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+If the above command returns an error, you may need to install the appropriate
+or latest linux-modules-extra-<kernel-version>-generic package. For example
+``apt-get install linux-modules-extra-`uname -r`-generic``
+
+Enable MPLS Forwarding
+""""""""""""""""""""""
+
+Edit :file:`/etc/sysctl.conf` and add the following lines. Make sure to add a
+line like ``net.mpls.conf.eth0.input`` for each interface used with MPLS.
+
+::
+
+ # Enable MPLS Label processing on all interfaces
+ net.mpls.conf.eth0.input=1
+ net.mpls.conf.eth1.input=1
+ net.mpls.conf.eth2.input=1
+ net.mpls.platform_labels=100000
+
+Install service files
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 644 tools/frr.service /etc/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``watchfrr_enable=...`` and ``zebra=...`` etc. Enable the daemons
+as required by changing the value to ``yes``.
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: shell
+
+ systemctl start frr
diff --git a/doc/developer/building-frr-for-ubuntu2004.rst b/doc/developer/building-frr-for-ubuntu2004.rst
new file mode 100644
index 0000000..fdfc25d
--- /dev/null
+++ b/doc/developer/building-frr-for-ubuntu2004.rst
@@ -0,0 +1,164 @@
+Ubuntu 20.04 LTS
+================
+
+This document describes installation from source. If you want to build a
+``deb``, see :ref:`packaging-debian`.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ sudo apt update
+ sudo apt-get install \
+ git autoconf automake libtool make libreadline-dev texinfo \
+ pkg-config libpam0g-dev libjson-c-dev bison flex \
+ libc-ares-dev python3-dev python3-sphinx \
+ install-info build-essential libsnmp-dev perl \
+ libcap-dev python2 libelf-dev libunwind-dev
+
+.. include:: building-libunwind-note.rst
+
+Note that Ubuntu 20.04 no longer installs Python 2.x by default, so it must
+be installed explicitly. Also ensure that your system has a symlink named
+``/usr/bin/python`` pointing at ``/usr/bin/python3``.
+
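+One way to create the symlink (the same commands shown in the Ubuntu 22.04
+instructions; skip the ``ln`` if the link already exists):
+
+.. code-block:: shell
+
+   sudo ln -s /usr/bin/python3 /usr/bin/python
+   python --version
+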
+In addition, ``pip`` for python2 must be installed if you wish to run
+the FRR topotests. That version of ``pip`` is not available from the
+Ubuntu apt repositories; in order to install it:
+
+.. code-block:: shell
+
+ curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py
+ sudo python2 ./get-pip.py
+
+ # And verify the installation
+ pip2 --version
+
+.. include:: building-libyang.rst
+
+Protobuf
+^^^^^^^^
+
+.. code-block:: console
+
+ sudo apt-get install protobuf-c-compiler libprotobuf-c-dev
+
+ZeroMQ
+^^^^^^
+
+.. code-block:: console
+
+ sudo apt-get install libzmq5 libzmq3-dev
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -r -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRR suite" --shell /sbin/nologin frr
+ sudo usermod -a -G frrvty frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Edit :file:`/etc/sysctl.conf` and uncomment the following values (ignore the
+other settings):
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+Reboot or use ``sysctl -p`` to apply the same config to the running system.
+
+Add MPLS kernel modules
+"""""""""""""""""""""""
+
+Ubuntu 20.04 ships with kernel 5.4; MPLS modules are present by default. To
+enable, add the following lines to :file:`/etc/modules-load.d/modules.conf`:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls_router
+ mpls_iptunnel
+
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+If the above command returns an error, you may need to install the appropriate
+or latest linux-modules-extra-<kernel-version>-generic package. For example
+``apt-get install linux-modules-extra-`uname -r`-generic``
+
+Enable MPLS Forwarding
+""""""""""""""""""""""
+
+Edit :file:`/etc/sysctl.conf` and add the following lines. Make sure to add a
+line like ``net.mpls.conf.eth0.input`` for each interface used with MPLS.
+
+::
+
+ # Enable MPLS Label processing on all interfaces
+ net.mpls.conf.eth0.input=1
+ net.mpls.conf.eth1.input=1
+ net.mpls.conf.eth2.input=1
+ net.mpls.platform_labels=100000
+
+Install service files
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 644 tools/frr.service /etc/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``watchfrr_enable=...`` and ``zebra=...`` etc. Enable the daemons
+as required by changing the value to ``yes``.
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: shell
+
+ systemctl start frr
diff --git a/doc/developer/building-frr-for-ubuntu2204.rst b/doc/developer/building-frr-for-ubuntu2204.rst
new file mode 100644
index 0000000..97bdf88
--- /dev/null
+++ b/doc/developer/building-frr-for-ubuntu2204.rst
@@ -0,0 +1,183 @@
+Ubuntu 22.04 LTS
+================
+
+This document describes installation from source. If you want to build a
+``deb``, see :ref:`packaging-debian`.
+
+Installing Dependencies
+-----------------------
+
+.. code-block:: console
+
+ sudo apt update
+ sudo apt-get install \
+ git autoconf automake libtool make libreadline-dev texinfo \
+ pkg-config libpam0g-dev libjson-c-dev bison flex \
+ libc-ares-dev python3-dev python3-sphinx \
+ install-info build-essential libsnmp-dev perl \
+ libcap-dev python2 libelf-dev libunwind-dev \
+ libyang2 libyang2-dev
+
+.. include:: building-libunwind-note.rst
+
+Note that Ubuntu >= 20 no longer installs Python 2.x by default, so it must
+be installed explicitly. Ensure that your system has a symlink named
+``/usr/bin/python`` pointing at ``/usr/bin/python3``:
+
+.. code-block:: shell
+
+ sudo ln -s /usr/bin/python3 /usr/bin/python
+ python --version
+
+In addition, ``pip`` for python2 must be installed if you wish to run
+the FRR topotests. That version of ``pip`` is not available from the
+Ubuntu apt repositories; in order to install it:
+
+.. code-block:: shell
+
+ curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py
+ sudo python2 ./get-pip.py
+
+ # And verify the installation
+ pip2 --version
+
+
+Protobuf
+^^^^^^^^
+This is optional.
+
+.. code-block:: console
+
+ sudo apt-get install protobuf-c-compiler libprotobuf-c-dev
+
+
+Config Rollbacks
+^^^^^^^^^^^^^^^^
+
+If config rollbacks are enabled using ``--enable-config-rollbacks``,
+the sqlite3 development package should also be installed.
+
+.. code-block:: console
+
+ sudo apt install libsqlite3-dev
+
+
+ZeroMQ
+^^^^^^
+This is optional.
+
+.. code-block:: console
+
+ sudo apt-get install libzmq5 libzmq3-dev
+
+Building & Installing FRR
+-------------------------
+
+Add FRR user and groups
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo groupadd -r -g 92 frr
+ sudo groupadd -r -g 85 frrvty
+ sudo adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRR suite" --shell /sbin/nologin frr
+ sudo usermod -a -G frrvty frr
+
+Compile
+^^^^^^^
+
+.. include:: include-compile.rst
+
+Install FRR configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 775 -o frr -g frr -d /var/log/frr
+ sudo install -m 775 -o frr -g frrvty -d /etc/frr
+ sudo install -m 640 -o frr -g frrvty tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/frr.conf /etc/frr/frr.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons.conf /etc/frr/daemons.conf
+ sudo install -m 640 -o frr -g frr tools/etc/frr/daemons /etc/frr/daemons
+
+Tweak sysctls
+^^^^^^^^^^^^^
+
+Some sysctls need to be changed in order to enable IPv4/IPv6 forwarding and
+MPLS (if supported by your platform). If your platform does not support MPLS,
+skip the MPLS related configuration in this section.
+
+Edit :file:`/etc/sysctl.conf` and uncomment the following values (ignore the
+other settings):
+
+::
+
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+
+ # Uncomment the next line to enable packet forwarding for IPv6
+ # Enabling this option disables Stateless Address Autoconfiguration
+ # based on Router Advertisements for this host
+ net.ipv6.conf.all.forwarding=1
+
+Reboot or use ``sysctl -p`` to apply the same config to the running system.
+
+Add MPLS kernel modules
+"""""""""""""""""""""""
+
+Ubuntu 22.04 ships with kernel 5.15; MPLS modules are present by default. To
+enable, add the following lines to :file:`/etc/modules-load.d/modules.conf`:
+
+::
+
+ # Load MPLS Kernel Modules
+ mpls_router
+ mpls_iptunnel
+
+
+And load the kernel modules on the running system:
+
+.. code-block:: console
+
+ sudo modprobe mpls-router mpls-iptunnel
+
+If the above command returns an error, you may need to install the
+appropriate or latest ``linux-modules-extra-<kernel-version>-generic``
+package, for example ``apt-get install linux-modules-extra-`uname -r`-generic``.
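+
+To confirm that the modules are loaded on the running system:
+
+.. code-block:: console
+
+ lsmod | grep mpls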
+
+Enable MPLS Forwarding
+""""""""""""""""""""""
+
+Edit :file:`/etc/sysctl.conf` and add the following lines. Make sure to add
+a line equal to ``net.mpls.conf.eth0.input`` for each interface used with
+MPLS.
+
+::
+
+ # Enable MPLS Label processing on all interfaces
+ net.mpls.conf.eth0.input=1
+ net.mpls.conf.eth1.input=1
+ net.mpls.conf.eth2.input=1
+ net.mpls.platform_labels=100000
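+
+As with the forwarding settings above, reboot or apply the values to the
+running system and verify them:
+
+.. code-block:: console
+
+ sudo sysctl -p
+ sysctl net.mpls.platform_labels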
+
+Install service files
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ sudo install -m 644 tools/frr.service /etc/systemd/system/frr.service
+ sudo systemctl enable frr
+
+Enable daemons
+^^^^^^^^^^^^^^
+
+Open :file:`/etc/frr/daemons` with your text editor of choice. Look for the
+section with ``watchfrr_enable=...`` and ``zebra=...`` etc. Enable the daemons
+as required by changing the value to ``yes``.
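+
+For example, to run ``zebra`` alongside the BGP and OSPF daemons, the
+relevant lines would look like the following sketch (enable whichever
+daemons your deployment actually needs):
+
+::
+
+ zebra=yes
+ bgpd=yes
+ ospfd=yes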
+
+Start FRR
+^^^^^^^^^
+
+.. code-block:: shell
+
+ sudo systemctl start frr
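+
+Afterwards, you can check that the service came up:
+
+.. code-block:: shell
+
+ systemctl status frr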
diff --git a/doc/developer/building-libunwind-note.rst b/doc/developer/building-libunwind-note.rst
new file mode 100644
index 0000000..0beb1f8
--- /dev/null
+++ b/doc/developer/building-libunwind-note.rst
@@ -0,0 +1,6 @@
+.. note::
+
+ The ``libunwind`` library is optional but highly recommended, as it improves
+ backtraces printed for crashes and debugging. However, if it is not
+ available for some reason, it can simply be left out without any loss of
+ functionality.
diff --git a/doc/developer/building-libyang.rst b/doc/developer/building-libyang.rst
new file mode 100644
index 0000000..c36cd34
--- /dev/null
+++ b/doc/developer/building-libyang.rst
@@ -0,0 +1,47 @@
+FRR depends on the relatively new ``libyang`` library to provide YANG/NETCONF
+support. Unfortunately, most distributions do not yet offer a ``libyang``
+package from their repositories. Therefore we offer two options to install this
+library.
+
+**Option 1: Binary Install**
+
+The FRR project builds some binary ``libyang`` packages.
+
+RPM packages are at our `RPM repository <https://rpm.frrouting.org>`_.
+
+DEB packages are available as CI artifacts `here
+<https://ci1.netdef.org/browse/LIBYANG-LIBYANGV2/latestSuccessful/artifact>`_.
+
+.. warning::
+
+ ``libyang`` version 2.0.0 or newer is required to build FRR.
+
+.. note::
+
+ The ``libyang`` development packages need to be installed in addition to the
+ libyang core package in order to build FRR successfully. Make sure to
+ download and install those from the link above alongside the binary
+ packages.
+
+ Depending on your platform, you may also need to install the PCRE
+ development package. Typically this is ``libpcre2-dev`` or ``pcre2-devel``.
+
+**Option 2: Source Install**
+
+.. note::
+
+ Ensure that the `libyang build requirements
+ <https://github.com/CESNET/libyang/#build-requirements>`_
+ are met before continuing. Usually this entails installing ``cmake`` and
+ ``libpcre2-dev`` or ``pcre2-devel``.
+
+.. code-block:: console
+
+ git clone https://github.com/CESNET/libyang.git
+ cd libyang
+ git checkout v2.0.0
+ mkdir build; cd build
+ cmake -D CMAKE_INSTALL_PREFIX:PATH=/usr \
+ -D CMAKE_BUILD_TYPE:String="Release" ..
+ make
+ sudo make install
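+
+After installing into ``/usr`` you may also need to refresh the dynamic
+linker cache so that FRR's configure step finds the new library:
+
+.. code-block:: console
+
+ sudo ldconfig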
diff --git a/doc/developer/building.rst b/doc/developer/building.rst
new file mode 100644
index 0000000..8ca0c13
--- /dev/null
+++ b/doc/developer/building.rst
@@ -0,0 +1,35 @@
+.. _building:
+
+************
+Building FRR
+************
+
+.. toctree::
+ :maxdepth: 2
+
+ static-linking
+ building-frr-for-alpine
+ building-frr-for-archlinux
+ building-frr-for-centos6
+ building-frr-for-centos7
+ building-frr-for-centos8
+ building-frr-for-debian8
+ building-frr-for-debian9
+ building-frr-for-debian12
+ building-frr-for-fedora
+ building-frr-for-freebsd9
+ building-frr-for-freebsd10
+ building-frr-for-freebsd11
+ building-frr-for-freebsd13
+ building-frr-for-netbsd6
+ building-frr-for-netbsd7
+ building-frr-for-openbsd6
+ building-frr-for-opensuse
+ building-frr-for-openwrt
+ building-frr-for-ubuntu1404
+ building-frr-for-ubuntu1604
+ building-frr-for-ubuntu1804
+ building-frr-for-ubuntu2004
+ building-frr-for-ubuntu2204
+ building-docker
+ cross-compiling
diff --git a/doc/developer/checkpatch.rst b/doc/developer/checkpatch.rst
new file mode 100644
index 0000000..d8fe007
--- /dev/null
+++ b/doc/developer/checkpatch.rst
@@ -0,0 +1,1251 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+.. _checkpatch:
+
+==========
+Checkpatch
+==========
+
+Checkpatch (scripts/checkpatch.pl) is a Perl script which checks for trivial
+style violations in patches and optionally corrects them. Checkpatch can
+also be run on file contexts and without the kernel tree.
+
+Checkpatch is not always right. Your judgement takes precedence over
+checkpatch messages. If your code looks better with the violations, then
+it's probably best left alone.
+
+
+Options
+=======
+
+This section describes the options checkpatch can be run with.
+
+Usage::
+
+ ./scripts/checkpatch.pl [OPTION]... [FILE]...
+
+Available options:
+
+ - -q, --quiet
+
+ Enable quiet mode.
+
+ - -v, --verbose
+
+ Enable verbose mode. Additional verbose test descriptions are output
+ so as to provide information on why that particular message is shown.
+
+ - --no-tree
+
+ Run checkpatch without the kernel tree.
+
+ - --no-signoff
+
+ Disable the 'Signed-off-by' line check. The sign-off is a simple line at
+ the end of the explanation for the patch, which certifies that you wrote it
+ or otherwise have the right to pass it on as an open-source patch.
+
+ Example::
+
+ Signed-off-by: Random J Developer <random@developer.example.org>
+
+ Setting this flag effectively suppresses the message for a missing
+ signed-off-by line in a patch context.
+
+ - --patch
+
+ Treat FILE as a patch. This is the default option and need not be
+ explicitly specified.
+
+ - --emacs
+
+ Set output to emacs compile window format. This allows emacs users to jump
+ from the error in the compile window directly to the offending line in the
+ patch.
+
+ - --terse
+
+ Output only one line per report.
+
+ - --showfile
+
+ Show the diffed file position instead of the input file position.
+
+ - -g, --git
+
+ Treat FILE as a single commit or a git revision range.
+
+ Single commit with:
+
+ - <rev>
+ - <rev>^
+ - <rev>~n
+
+ Multiple commits with:
+
+ - <rev1>..<rev2>
+ - <rev1>...<rev2>
+ - <rev>-<count>
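+
+ Example (check the three most recent commits)::
+
+ ./scripts/checkpatch.pl -g HEAD~3..HEAD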
+
+ - -f, --file
+
+ Treat FILE as a regular source file. This option must be used when running
+ checkpatch on source files in the kernel.
+
+ - --subjective, --strict
+
+ Enable stricter tests in checkpatch. Tests emitted as CHECK do not
+ activate by default; use this flag to activate them.
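+
+ Example::
+
+ ./scripts/checkpatch.pl --strict mypatch.patch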
+
+ - --list-types
+
+ Every message emitted by checkpatch has an associated TYPE. Add this flag
+ to display all the types in checkpatch.
+
+ Note that when this flag is active, checkpatch does not read the input FILE,
+ and no message is emitted. Only a list of types in checkpatch is output.
+
+ - --types TYPE(,TYPE2...)
+
+ Only display messages with the given types.
+
+ Example::
+
+ ./scripts/checkpatch.pl mypatch.patch --types EMAIL_SUBJECT,BRACES
+
+ - --ignore TYPE(,TYPE2...)
+
+ Checkpatch will not emit messages for the specified types.
+
+ Example::
+
+ ./scripts/checkpatch.pl mypatch.patch --ignore EMAIL_SUBJECT,BRACES
+
+ - --show-types
+
+ By default checkpatch doesn't display the type associated with the messages.
+ Set this flag to show the message type in the output.
+
+ - --max-line-length=n
+
+ Set the max line length (default 100). If a line exceeds the specified
+ length, a LONG_LINE message is emitted.
+
+ The message level differs between patch and file contexts: a WARNING is
+ emitted for patches, while a milder CHECK is emitted for files. For file
+ contexts, the --strict flag must therefore also be enabled.
+
+ - --min-conf-desc-length=n
+
+ Set the minimum description length for Kconfig entries; if a description
+ is shorter, a warning is emitted.
+
+ - --tab-size=n
+
+ Set the number of spaces for tab (default 8).
+
+ - --root=PATH
+
+ PATH to the kernel tree root.
+
+ This option must be specified when invoking checkpatch from outside
+ the kernel root.
+
+ - --no-summary
+
+ Suppress the per file summary.
+
+ - --mailback
+
+ Only produce a report in case of Warnings or Errors. Milder Checks are
+ excluded from this.
+
+ - --summary-file
+
+ Include the filename in summary.
+
+ - --debug KEY=[0|1]
+
+ Turn on/off debugging of KEY, where KEY is one of 'values', 'possible',
+ 'type', and 'attr' (default is all off).
+
+ - --fix
+
+ This is an EXPERIMENTAL feature. If correctable errors exist, a file
+ <inputfile>.EXPERIMENTAL-checkpatch-fixes is created which has the
+ automatically fixable errors corrected.
+
+ - --fix-inplace
+
+ EXPERIMENTAL - Similar to --fix but input file is overwritten with fixes.
+
+ DO NOT USE this flag unless you are absolutely sure and you have a backup
+ in place.
+
+ - --ignore-perl-version
+
+ Override checking of perl version. Runtime errors may be encountered
+ after enabling this flag if the perl version does not meet the minimum
+ specified.
+
+ - --codespell
+
+ Use the codespell dictionary for checking spelling errors.
+
+ - --codespellfile
+
+ Use the specified codespell file.
+ Default is '/usr/share/codespell/dictionary.txt'.
+
+ - --typedefsfile
+
+ Read additional types from this file.
+
+ - --color[=WHEN]
+
+ Use colors 'always', 'never', or only when output is a terminal ('auto').
+ Default is 'auto'.
+
+ - --kconfig-prefix=WORD
+
+ Use WORD as a prefix for Kconfig symbols (default is `CONFIG_`).
+
+ - -h, --help, --version
+
+ Display the help text.
+
+Message Levels
+==============
+
+Messages in checkpatch are divided into three levels, which denote the
+severity of the issue. They are:
+
+ - ERROR
+
+ This is the most strict level. Messages of type ERROR must be taken
+ seriously as they denote things that are very likely to be wrong.
+
+ - WARNING
+
+ This is the next stricter level. Messages of type WARNING require a
+ more careful review, but they are milder than an ERROR.
+
+ - CHECK
+
+ This is the mildest level. These are things which may require some thought.
+
+Type Descriptions
+=================
+
+This section contains a description of all the message types in checkpatch.
+
+.. Types in this section are also parsed by checkpatch.
+.. The types are grouped into subsections based on use.
+
+
+Allocation style
+----------------
+
+ **ALLOC_ARRAY_ARGS**
+ The first argument for kcalloc or kmalloc_array should be the
+ number of elements. sizeof() as the first argument is generally
+ wrong.
+
+ See: https://www.kernel.org/doc/html/latest/core-api/memory-allocation.html
+
+ **ALLOC_SIZEOF_STRUCT**
+ The allocation style is bad. In general for family of
+ allocation functions using sizeof() to get memory size,
+ constructs like::
+
+ p = alloc(sizeof(struct foo), ...)
+
+ should be::
+
+ p = alloc(sizeof(*p), ...)
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#allocating-memory
+
+ **ALLOC_WITH_MULTIPLY**
+ Prefer kmalloc_array/kcalloc over kmalloc/kzalloc with a
+ sizeof multiply.
+
+ See: https://www.kernel.org/doc/html/latest/core-api/memory-allocation.html
+
+
+API usage
+---------
+
+ **ARCH_DEFINES**
+ Architecture specific defines should be avoided wherever
+ possible.
+
+ **ARCH_INCLUDE_LINUX**
+ Whenever asm/file.h is included and linux/file.h exists, a
+ conversion can be made when linux/file.h includes asm/file.h.
+ However this is not always the case (See signal.h).
+ This message type is emitted only for includes from arch/.
+
+ **AVOID_BUG**
+ BUG() or BUG_ON() should be avoided totally.
+ Use WARN() and WARN_ON() instead, and handle the "impossible"
+ error condition as gracefully as possible.
+
+ See: https://www.kernel.org/doc/html/latest/process/deprecated.html#bug-and-bug-on
+
+ **CONSIDER_KSTRTO**
+ The simple_strtol(), simple_strtoll(), simple_strtoul(), and
+ simple_strtoull() functions explicitly ignore overflows, which
+ may lead to unexpected results in callers. The respective kstrtol(),
+ kstrtoll(), kstrtoul(), and kstrtoull() functions tend to be the
+ correct replacements.
+
+ See: https://www.kernel.org/doc/html/latest/process/deprecated.html#simple-strtol-simple-strtoll-simple-strtoul-simple-strtoull
+
+ **CONSTANT_CONVERSION**
+ Use of __constant_<foo> form is discouraged for the following functions::
+
+ __constant_cpu_to_be[x]
+ __constant_cpu_to_le[x]
+ __constant_be[x]_to_cpu
+ __constant_le[x]_to_cpu
+ __constant_htons
+ __constant_ntohs
+
+ Using any of these outside of include/uapi/ is not preferred as using the
+ function without __constant_ is identical when the argument is a
+ constant.
+
+ In big endian systems, the macros like __constant_cpu_to_be32(x) and
+ cpu_to_be32(x) expand to the same expression::
+
+ #define __constant_cpu_to_be32(x) ((__force __be32)(__u32)(x))
+ #define __cpu_to_be32(x) ((__force __be32)(__u32)(x))
+
+ In little endian systems, the macros __constant_cpu_to_be32(x) and
+ cpu_to_be32(x) expand to __constant_swab32 and __swab32. __swab32
+ has a __builtin_constant_p check::
+
+ #define __swab32(x) \
+ (__builtin_constant_p((__u32)(x)) ? \
+ ___constant_swab32(x) : \
+ __fswab32(x))
+
+ So ultimately they have a special case for constants.
+ Similar is the case with all of the macros in the list. Thus
+ using the __constant_... forms is unnecessarily verbose and
+ not preferred outside of include/uapi.
+
+ See: https://lore.kernel.org/lkml/1400106425.12666.6.camel@joe-AO725/
+
+ **DEPRECATED_API**
+ Usage of a deprecated RCU API is detected. It is recommended to replace
+ old flavourful RCU APIs by their new vanilla-RCU counterparts.
+
+ The full list of available RCU APIs can be viewed from the kernel docs.
+
+ See: https://www.kernel.org/doc/html/latest/RCU/whatisRCU.html#full-list-of-rcu-apis
+
+ **DEPRECATED_VARIABLE**
+ EXTRA_{A,C,CPP,LD}FLAGS are deprecated and should be replaced by the new
+ flags added via commit f77bf01425b1 ("kbuild: introduce ccflags-y,
+ asflags-y and ldflags-y").
+
+ The following conversion scheme may be used::
+
+ EXTRA_AFLAGS -> asflags-y
+ EXTRA_CFLAGS -> ccflags-y
+ EXTRA_CPPFLAGS -> cppflags-y
+ EXTRA_LDFLAGS -> ldflags-y
+
+ See:
+
+ 1. https://lore.kernel.org/lkml/20070930191054.GA15876@uranus.ravnborg.org/
+ 2. https://lore.kernel.org/lkml/1313384834-24433-12-git-send-email-lacombar@gmail.com/
+ 3. https://www.kernel.org/doc/html/latest/kbuild/makefiles.html#compilation-flags
+
+ **DEVICE_ATTR_FUNCTIONS**
+ The function names used in DEVICE_ATTR are unusual.
+ Typically, the store and show functions are used with <attr>_store and
+ <attr>_show, where <attr> is a named attribute variable of the device.
+
+ Consider the following examples::
+
+ static DEVICE_ATTR(type, 0444, type_show, NULL);
+ static DEVICE_ATTR(power, 0644, power_show, power_store);
+
+ The function names should preferably follow the above pattern.
+
+ See: https://www.kernel.org/doc/html/latest/driver-api/driver-model/device.html#attributes
+
+ **DEVICE_ATTR_RO**
+ The DEVICE_ATTR_RO(name) helper macro can be used instead of
+ DEVICE_ATTR(name, 0444, name_show, NULL);
+
+ Note that the macro automatically appends _show to the named
+ attribute variable of the device for the show method.
+
+ See: https://www.kernel.org/doc/html/latest/driver-api/driver-model/device.html#attributes
+
+ **DEVICE_ATTR_RW**
+ The DEVICE_ATTR_RW(name) helper macro can be used instead of
+ DEVICE_ATTR(name, 0644, name_show, name_store);
+
+ Note that the macro automatically appends _show and _store to the
+ named attribute variable of the device for the show and store methods.
+
+ See: https://www.kernel.org/doc/html/latest/driver-api/driver-model/device.html#attributes
+
+ **DEVICE_ATTR_WO**
+ The DEVICE_ATTR_WO(name) helper macro can be used instead of
+ DEVICE_ATTR(name, 0200, NULL, name_store);
+
+ Note that the macro automatically appends _store to the
+ named attribute variable of the device for the store method.
+
+ See: https://www.kernel.org/doc/html/latest/driver-api/driver-model/device.html#attributes
+
+ **DUPLICATED_SYSCTL_CONST**
+ Commit d91bff3011cf ("proc/sysctl: add shared variables for range
+ check") added some shared const variables to be used instead of a local
+ copy in each source file.
+
+ Consider replacing the sysctl range checking value with the shared
+ one in include/linux/sysctl.h. The following conversion scheme may
+ be used::
+
+ &zero -> SYSCTL_ZERO
+ &one -> SYSCTL_ONE
+ &int_max -> SYSCTL_INT_MAX
+
+ See:
+
+ 1. https://lore.kernel.org/lkml/20190430180111.10688-1-mcroce@redhat.com/
+ 2. https://lore.kernel.org/lkml/20190531131422.14970-1-mcroce@redhat.com/
+
+ **ENOSYS**
+ ENOSYS means that a nonexistent system call was called.
+ Earlier, it was wrongly used for things like invalid operations on
+ otherwise valid syscalls. This should be avoided in new code.
+
+ See: https://lore.kernel.org/lkml/5eb299021dec23c1a48fa7d9f2c8b794e967766d.1408730669.git.luto@amacapital.net/
+
+ **ENOTSUPP**
+ ENOTSUPP is not a standard error code and should be avoided in new patches.
+ EOPNOTSUPP should be used instead.
+
+ See: https://lore.kernel.org/netdev/20200510182252.GA411829@lunn.ch/
+
+ **EXPORT_SYMBOL**
+ EXPORT_SYMBOL should immediately follow the symbol to be exported.
+
+ **IN_ATOMIC**
+ in_atomic() is not for driver use so any such use is reported as an ERROR.
+ Also in_atomic() is often used to determine if sleeping is permitted,
+ but it is not reliable in this use model. Therefore its use is
+ strongly discouraged.
+
+ However, in_atomic() is ok for core kernel use.
+
+ See: https://lore.kernel.org/lkml/20080320201723.b87b3732.akpm@linux-foundation.org/
+
+ **LOCKDEP**
+ The lockdep_no_validate class was added as a temporary measure to
+ prevent warnings on conversion of device->sem to device->mutex.
+ It should not be used for any other purpose.
+
+ See: https://lore.kernel.org/lkml/1268959062.9440.467.camel@laptop/
+
+ **MALFORMED_INCLUDE**
+ The #include statement has a malformed path. This usually happens
+ because the author has accidentally included a double slash "//" in
+ the pathname.
+
+ **USE_LOCKDEP**
+ lockdep_assert_held() annotations should be preferred over
+ assertions based on spin_is_locked()
+
+ See: https://www.kernel.org/doc/html/latest/locking/lockdep-design.html#annotations
+
+ **UAPI_INCLUDE**
+ No #include statements in include/uapi should use a uapi/ path.
+
+ **USLEEP_RANGE**
+ usleep_range() should be preferred over udelay(). The proper way of
+ using usleep_range() is mentioned in the kernel docs.
+
+ See: https://www.kernel.org/doc/html/latest/timers/timers-howto.html#delays-information-on-the-various-kernel-delay-sleep-mechanisms
+
+
+Comments
+--------
+
+ **BLOCK_COMMENT_STYLE**
+ The comment style is incorrect. The preferred style for multi-
+ line comments is::
+
+ /*
+ * This is the preferred style
+ * for multi line comments.
+ */
+
+ The networking comment style is a bit different, with the first line
+ not empty like the former::
+
+ /* This is the preferred comment style
+ * for files in net/ and drivers/net/
+ */
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#commenting
+
+ **C99_COMMENTS**
+ C99 style single line comments (//) should not be used.
+ Prefer the block comment style instead.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#commenting
+
+ **DATA_RACE**
+ Applications of data_race() should have a comment so as to document the
+ reasoning behind why it was deemed safe.
+
+ See: https://lore.kernel.org/lkml/20200401101714.44781-1-elver@google.com/
+
+ **FSF_MAILING_ADDRESS**
+ Kernel maintainers reject new instances of the GPL boilerplate paragraph
+ directing people to write to the FSF for a copy of the GPL, since the
+ FSF has moved in the past and may do so again.
+ So do not write paragraphs about writing to the Free Software Foundation's
+ mailing address.
+
+ See: https://lore.kernel.org/lkml/20131006222342.GT19510@leaf/
+
+
+Commit message
+--------------
+
+ **BAD_SIGN_OFF**
+ The signed-off-by line does not fall in line with the standards
+ specified by the community.
+
+ See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#developer-s-certificate-of-origin-1-1
+
+ **BAD_STABLE_ADDRESS_STYLE**
+ The email format for stable is incorrect.
+ Some valid options for stable address are::
+
+ 1. stable@vger.kernel.org
+ 2. stable@kernel.org
+
+ For adding version info, the following comment style should be used::
+
+ stable@vger.kernel.org # version info
+
+ **COMMIT_COMMENT_SYMBOL**
+ Commit log lines starting with a '#' are ignored by git as
+ comments. To solve this problem, adding a single space
+ in front of the log line is enough.
+
+ **COMMIT_MESSAGE**
+ The patch is missing a commit description. A brief
+ description of the changes made by the patch should be added.
+
+ See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#describe-your-changes
+
+ **EMAIL_SUBJECT**
+ Naming the tool that found the issue is not very useful in the
+ subject line. A good subject line summarizes the change that
+ the patch brings.
+
+ See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#describe-your-changes
+
+ **FROM_SIGN_OFF_MISMATCH**
+ The author's email does not match that in the Signed-off-by:
+ line(s). This can sometimes be caused by an improperly configured
+ email client.
+
+ This message is emitted due to any of the following reasons::
+
+ - The email names do not match.
+ - The email addresses do not match.
+ - The email subaddresses do not match.
+ - The email comments do not match.
+
+ **MISSING_SIGN_OFF**
+ The patch is missing a Signed-off-by line. A signed-off-by
+ line should be added according to Developer's certificate of
+ Origin.
+
+ See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin
+
+ **NO_AUTHOR_SIGN_OFF**
+ The author of the patch has not signed off the patch. It is
+ required that a simple sign off line should be present at the
+ end of explanation of the patch to denote that the author has
+ written it or otherwise has the rights to pass it on as an open
+ source patch.
+
+ See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin
+
+ **DIFF_IN_COMMIT_MSG**
+ Avoid having diff content in commit message.
+ This causes problems when one tries to apply a file containing both
+ the changelog and the diff because patch(1) tries to apply the diff
+ which it found in the changelog.
+
+ See: https://lore.kernel.org/lkml/20150611134006.9df79a893e3636019ad2759e@linux-foundation.org/
+
+ **GERRIT_CHANGE_ID**
+ To be picked up by gerrit, the footer of the commit message might
+ have a Change-Id like::
+
+ Change-Id: Ic8aaa0728a43936cd4c6e1ed590e01ba8f0fbf5b
+ Signed-off-by: A. U. Thor <author@example.com>
+
+ The Change-Id line must be removed before submitting.
+
+ **GIT_COMMIT_ID**
+ The proper way to reference a commit id is:
+ commit <12+ chars of sha1> ("<title line>")
+
+ An example may be::
+
+ Commit e21d2170f36602ae2708 ("video: remove unnecessary
+ platform_set_drvdata()") removed the unnecessary
+ platform_set_drvdata(), but left the variable "dev" unused,
+ delete it.
+
+ See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#describe-your-changes
+
+
+Comparison style
+----------------
+
+ **ASSIGN_IN_IF**
+ Do not use assignments in if condition.
+ Example::
+
+ if ((foo = bar(...)) < BAZ) {
+
+ should be written as::
+
+ foo = bar(...);
+ if (foo < BAZ) {
+
+ **BOOL_COMPARISON**
+ Comparisons of A to true and false are better written
+ as A and !A.
+
+ See: https://lore.kernel.org/lkml/1365563834.27174.12.camel@joe-AO722/
+
+ **COMPARISON_TO_NULL**
+ Comparisons to NULL in the form (foo == NULL) or (foo != NULL)
+ are better written as (!foo) and (foo).
+
+ **CONSTANT_COMPARISON**
+ Comparisons with a constant or upper case identifier on the left
+ side of the test should be avoided.
+
+
+Indentation and Line Breaks
+---------------------------
+
+ **CODE_INDENT**
+ Code indent should use tabs instead of spaces.
+ Outside of comments, documentation and Kconfig,
+ spaces are never used for indentation.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#indentation
+
+ **DEEP_INDENTATION**
+ Indentation with 6 or more tabs usually indicates overly indented
+ code.
+
+ It is suggested to refactor excessive indentation of
+ if/else/for/do/while/switch statements.
+
+ See: https://lore.kernel.org/lkml/1328311239.21255.24.camel@joe2Laptop/
+
+ **SWITCH_CASE_INDENT_LEVEL**
+ switch should be at the same indent as case.
+ Example::
+
+ switch (suffix) {
+ case 'G':
+ case 'g':
+ mem <<= 30;
+ break;
+ case 'M':
+ case 'm':
+ mem <<= 20;
+ break;
+ case 'K':
+ case 'k':
+ mem <<= 10;
+ fallthrough;
+ default:
+ break;
+ }
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#indentation
+
+ **LONG_LINE**
+ The line has exceeded the specified maximum length.
+ To use a different maximum line length, the --max-line-length=n option
+ may be added while invoking checkpatch.
+
+ Earlier, the default line length was 80 columns. Commit bdc48fa11e46
+ ("checkpatch/coding-style: deprecate 80-column warning") increased the
+ limit to 100 columns. This is not a hard limit either and it's
+ preferable to stay within 80 columns whenever possible.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#breaking-long-lines-and-strings
+
+ **LONG_LINE_STRING**
+ A string starts before but extends beyond the maximum line length.
+ To use a different maximum line length, the --max-line-length=n option
+ may be added while invoking checkpatch.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#breaking-long-lines-and-strings
+
+ **LONG_LINE_COMMENT**
+ A comment starts before but extends beyond the maximum line length.
+ To use a different maximum line length, the --max-line-length=n option
+ may be added while invoking checkpatch.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#breaking-long-lines-and-strings
+
+ **SPLIT_STRING**
+ Quoted strings that appear as messages in userspace and can be
+ grepped, should not be split across multiple lines.
+
+ See: https://lore.kernel.org/lkml/20120203052727.GA15035@leaf/
+
+ **MULTILINE_DEREFERENCE**
+ A single dereferencing identifier spanning multiple lines like::
+
+ struct_identifier->member[index].
+ member = <foo>;
+
+ is generally hard to follow. It can easily lead to typos and so makes
+ the code vulnerable to bugs.
+
+ If fixing the multiple line dereferencing leads to an 80 column
+ violation, then either rewrite the code in a simpler way or, if the
+ starting part of the dereferencing identifier is the same and used at
+ multiple places, store it in a temporary variable and use that
+ temporary variable at all those places. For example, if there are
+ two dereferencing identifiers::
+
+ member1->member2->member3.foo1;
+ member1->member2->member3.foo2;
+
+ then store the member1->member2->member3 part in a temporary variable.
+ It not only helps to avoid the 80 column violation but also reduces
+ the program size by removing the unnecessary dereferences.
+
+ But if none of the above methods work then ignore the 80 column
+ violation because it is much easier to read a dereferencing identifier
+ on a single line.
+
+ **TRAILING_STATEMENTS**
+ Trailing statements (for example after any conditional) should be
+ on the next line.
+ Statements, such as::
+
+ if (x == y) break;
+
+ should be::
+
+ if (x == y)
+ break;
+
+
+Macros, Attributes and Symbols
+------------------------------
+
+ **ARRAY_SIZE**
+ The ARRAY_SIZE(foo) macro should be preferred over
+ sizeof(foo)/sizeof(foo[0]) for finding number of elements in an
+ array.
+
+ The macro is defined in include/linux/kernel.h::
+
+ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+ **AVOID_EXTERNS**
+ Function prototypes don't need to be declared extern in .h
+ files. It's assumed by the compiler and is unnecessary.
+
+ **AVOID_L_PREFIX**
+ Local symbol names that are prefixed with `.L` should be avoided,
+ as this has special meaning for the assembler; a symbol entry will
+ not be emitted into the symbol table. This can prevent `objtool`
+ from generating correct unwind info.
+
+ Symbols with STB_LOCAL binding may still be used, and `.L` prefixed
+ local symbol names are still generally usable within a function,
+ but `.L` prefixed local symbol names should not be used to denote
+ the beginning or end of code regions via
+ `SYM_CODE_START_LOCAL`/`SYM_CODE_END`
+
+ **BIT_MACRO**
+ Defines like: 1 << <digit> could be BIT(digit).
+ The BIT() macro is defined via include/linux/bits.h::
+
+ #define BIT(nr) (1UL << (nr))
+
+ **CONST_READ_MOSTLY**
+ When a variable is tagged with the __read_mostly annotation, it is a
+ signal to the compiler that accesses to the variable will be mostly
+ reads and rarely (but not never) writes.
+
+ const __read_mostly does not make any sense as const data is already
+ read-only. The __read_mostly annotation thus should be removed.
+
+ **DATE_TIME**
+ It is generally desirable that building the same source code with
+ the same set of tools is reproducible, i.e. the output is always
+ exactly the same.
+
+ The kernel does *not* use the ``__DATE__`` and ``__TIME__`` macros,
+ and enables warnings if they are used as they can lead to
+ non-deterministic builds.
+
+ See: https://www.kernel.org/doc/html/latest/kbuild/reproducible-builds.html#timestamps
+
+ **DEFINE_ARCH_HAS**
+ The ARCH_HAS_xyz and ARCH_HAVE_xyz patterns are wrong.
+
+ For big conceptual features use Kconfig symbols instead. And for
+ smaller things where we have compatibility fallback functions but
+ want architectures able to override them with optimized ones, we
+ should either use weak functions (appropriate for some cases), or
+ the symbol that protects them should be the same symbol we use.
+
+ See: https://lore.kernel.org/lkml/CA+55aFycQ9XJvEOsiM3txHL5bjUc8CeKWJNR_H+MiicaddB42Q@mail.gmail.com/
+
+ **DO_WHILE_MACRO_WITH_TRAILING_SEMICOLON**
+ do {} while(0) macros should not have a trailing semicolon.
+
+ **INIT_ATTRIBUTE**
+ Const init definitions should use __initconst instead of
+ __initdata.
+
+ Similarly init definitions without const require a separate
+ use of const.
+
+ **INLINE_LOCATION**
+ The inline keyword should sit between storage class and type.
+
+ For example, the following segment::
+
+ inline static int example_function(void)
+ {
+ ...
+ }
+
+ should be::
+
+ static inline int example_function(void)
+ {
+ ...
+ }
+
+ **MISPLACED_INIT**
+ It is possible to use section markers on variables in a way
+ which gcc doesn't understand (or at least not the way the
+ developer intended)::
+
+ static struct __initdata samsung_pll_clock exynos4_plls[nr_plls] = {
+
+ does not put exynos4_plls in the .initdata section. The __initdata
+ marker can be virtually anywhere on the line, except right after
+ "struct". The preferred location is before the "=" sign if there is
+ one, or before the trailing ";" otherwise.
+
+ See: https://lore.kernel.org/lkml/1377655732.3619.19.camel@joe-AO722/
+
+ **MULTISTATEMENT_MACRO_USE_DO_WHILE**
+ Macros with multiple statements should be enclosed in a
+ do - while block. Same should also be the case for macros
+ starting with `if` to avoid logic defects::
+
+ #define macrofun(a, b, c) \
+ do { \
+ if (a == 5) \
+ do_this(b, c); \
+ } while (0)
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#macros-enums-and-rtl
+
+ **PREFER_FALLTHROUGH**
+ Use the `fallthrough;` pseudo keyword instead of
+ `/* fallthrough */` like comments.
+
+ **TRAILING_SEMICOLON**
+ Macro definition should not end with a semicolon. The macro
+ invocation style should be consistent with function calls.
+ This can prevent any unexpected code paths::
+
+ #define MAC do_something;
+
+ If this macro is used within an if else statement, like::
+
+ if (some_condition)
+ MAC;
+
+ else
+ do_something;
+
+ Then there would be a compilation error, because when the macro is
+ expanded there are two trailing semicolons, so the else branch gets
+ orphaned.
+
+ See: https://lore.kernel.org/lkml/1399671106.2912.21.camel@joe-AO725/
+
+ **SINGLE_STATEMENT_DO_WHILE_MACRO**
+ For the multi-statement macros, it is necessary to use the do-while
+ loop to avoid unpredictable code paths. The do-while loop helps to
+ group the multiple statements into a single one so that a
+ function-like macro can be used as a function only.
+
+ But for single statement macros, it is unnecessary to use the
+ do-while loop. Although such code is syntactically correct, the
+ do-while loop is redundant, so remove it for single statement macros.
+
+ **WEAK_DECLARATION**
+ Using weak declarations like __attribute__((weak)) or __weak
+ can have unintended link defects. Avoid using them.
+
+
+Functions and Variables
+-----------------------
+
+ **CAMELCASE**
+ Avoid CamelCase Identifiers.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#naming
+
+ **CONST_CONST**
+ Using `const <type> const *` is generally meant to be
+ written `const <type> * const`.
+
+ **CONST_STRUCT**
+ Using const is generally a good idea. Checkpatch reads
+ a list of frequently used structs that are always or
+ almost always constant.
+
+ The existing structs list can be viewed from
+ `scripts/const_structs.checkpatch`.
+
+ See: https://lore.kernel.org/lkml/alpine.DEB.2.10.1608281509480.3321@hadrien/
+
+ **EMBEDDED_FUNCTION_NAME**
+ Embedded function names are less appropriate to use as
+ refactoring can cause function renaming. Prefer the use of
+ "%s", __func__ to embedded function names.
+
+ Note that this does not work with -f (--file) checkpatch option
+ as it depends on patch context providing the function name.
+
+ **FUNCTION_ARGUMENTS**
+ This warning is emitted due to any of the following reasons:
+
+ 1. Arguments for the function declaration do not follow
+ the identifier name. Example::
+
+ void foo
+ (int bar, int baz)
+
+ This should be corrected to::
+
+ void foo(int bar, int baz)
+
+ 2. Some arguments for the function definition do not
+ have an identifier name. Example::
+
+ void foo(int)
+
+ All arguments should have identifier names.
+
+ **FUNCTION_WITHOUT_ARGS**
+ Function declarations without arguments like::
+
+ int foo()
+
+ should be::
+
+ int foo(void)
+
+ **GLOBAL_INITIALISERS**
+ Global variables should not be initialized explicitly to
+ 0 (or NULL, false, etc.). Your compiler (or rather your
+ loader, which is responsible for zeroing out the relevant
+ sections) automatically does it for you.
+
+ **INITIALISED_STATIC**
+ Static variables should not be initialized explicitly to zero.
+ Your compiler (or rather your loader) automatically does
+ it for you.
+
+ **MULTIPLE_ASSIGNMENTS**
+ Multiple assignments on a single line make the code unnecessarily
+ complicated. Assign a value to only a single variable per line; this
+ makes the code more readable and helps avoid typos.
+
+ **RETURN_PARENTHESES**
+ return is not a function and as such doesn't need parentheses::
+
+ return (bar);
+
+ can simply be::
+
+ return bar;
+
+
+Permissions
+-----------
+
+ **DEVICE_ATTR_PERMS**
+ The permissions used in DEVICE_ATTR are unusual.
+ Typically only three permissions are used - 0644 (RW), 0444 (RO)
+ and 0200 (WO).
+
+ See: https://www.kernel.org/doc/html/latest/filesystems/sysfs.html#attributes
+
+ **EXECUTE_PERMISSIONS**
+ There is no reason for source files to be executable. The executable
+ bit can be removed safely.
+
+ **EXPORTED_WORLD_WRITABLE**
+ Exporting world writable sysfs/debugfs files is usually a bad thing.
+ When done arbitrarily they can introduce serious security bugs.
+ In the past, some of the debugfs vulnerabilities would seemingly allow
+ any local user to write arbitrary values into device registers - a
+ situation from which little good can be expected to emerge.
+
+ See: https://lore.kernel.org/linux-arm-kernel/cover.1296818921.git.segoon@openwall.com/
+
+ **NON_OCTAL_PERMISSIONS**
+ Permission bits should use 4 digit octal permissions (like 0700 or 0444).
+ Avoid using any other base like decimal.
+
+ **SYMBOLIC_PERMS**
+ Permission bits in the octal form are more readable and easier to
+ understand than their symbolic counterparts because many command-line
+ tools use this notation. Experienced kernel developers have been using
+ these traditional Unix permission bits for decades and so they find it
+ easier to understand the octal notation than the symbolic macros.
+ For example, it is harder to read S_IWUSR|S_IRUGO than 0644, which
+ obscures the developer's intent rather than clarifying it.
+
+ See: https://lore.kernel.org/lkml/CA+55aFw5v23T-zvDZp-MmD_EYxF8WbafwwB59934FV7g21uMGQ@mail.gmail.com/
+
+
+Spacing and Brackets
+--------------------
+
+ **ASSIGNMENT_CONTINUATIONS**
+ Assignment operators should not be written at the start of a
+ line but should follow the operand at the previous line.
+
+ **BRACES**
+ The placement of braces is stylistically incorrect.
+ The preferred way is to put the opening brace last on the line,
+ and put the closing brace first::
+
+ if (x is true) {
+ we do y
+ }
+
+ This applies for all non-functional blocks.
+ However, there is one special case, namely functions: they have the
+ opening brace at the beginning of the next line, thus::
+
+ int function(int x)
+ {
+ body of function
+ }
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#placing-braces-and-spaces
+
+ **BRACKET_SPACE**
+ Whitespace before opening bracket '[' is prohibited.
+ There are some exceptions:
+
+ 1. With a type on the left::
+
+ int [] a;
+
+ 2. At the beginning of a line for slice initialisers::
+
+ [0...10] = 5,
+
+ 3. Inside a curly brace::
+
+ = { [0...10] = 5 }
+
+ **CONCATENATED_STRING**
+ Concatenated elements should have a space in between.
+ Example::
+
+ printk(KERN_INFO"bar");
+
+ should be::
+
+ printk(KERN_INFO "bar");
+
+ **ELSE_AFTER_BRACE**
+ `else {` should follow the closing block `}` on the same line.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#placing-braces-and-spaces
+
+ **LINE_SPACING**
+ Multiple blank lines waste vertical space, given the limited number
+ of lines an editor window can display.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#spaces
+
+ **OPEN_BRACE**
+ For function definitions, the opening brace should be on the next
+ line. For any non-functional block it should be on the same line
+ as the last construct.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#placing-braces-and-spaces
+
+ **POINTER_LOCATION**
+ When using pointer data or a function that returns a pointer type,
+ the preferred use of * is adjacent to the data name or function name
+ and not adjacent to the type name.
+ Examples::
+
+ char *linux_banner;
+ unsigned long long memparse(char *ptr, char **retptr);
+ char *match_strdup(substring_t *s);
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#spaces
+
+ **SPACING**
+ Whitespace style used in the kernel sources is described in kernel docs.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#spaces
+
+ **TRAILING_WHITESPACE**
+ Trailing whitespace should always be removed.
+ Some editors highlight the trailing whitespace and cause visual
+ distractions when editing files.
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#spaces
+
+ **UNNECESSARY_PARENTHESES**
+ Parentheses are not required in the following cases:
+
+ 1. Function pointer uses::
+
+ (foo->bar)();
+
+ could be::
+
+ foo->bar();
+
+ 2. Comparisons in if::
+
+ if ((foo->bar) && (foo->baz))
+ if ((foo == bar))
+
+ could be::
+
+ if (foo->bar && foo->baz)
+ if (foo == bar)
+
+ 3. addressof/dereference single Lvalues::
+
+ &(foo->bar)
+ *(foo->bar)
+
+ could be::
+
+ &foo->bar
+ *foo->bar
+
+ **WHILE_AFTER_BRACE**
+ while should follow the closing bracket on the same line::
+
+ do {
+ ...
+ } while(something);
+
+ See: https://www.kernel.org/doc/html/latest/process/coding-style.html#placing-braces-and-spaces
+
+
+Others
+------
+
+ **CONFIG_DESCRIPTION**
+ Kconfig symbols should have a help text which fully describes
+ them.
+
+ **CORRUPTED_PATCH**
+ The patch seems to be corrupted or lines are wrapped.
+ Please regenerate the patch file before sending it to the maintainer.
+
+ **CVS_KEYWORD**
+ Since linux moved to git, the CVS markers are no longer used.
+ So, CVS style keywords ($Id$, $Revision$, $Log$) should not be
+ added.
+
+ **DEFAULT_NO_BREAK**
+ switch default case is sometimes written as "default:;". This can
+ cause new cases added below default to be defective.
+
+ A "break;" should be added after empty default statement to avoid
+ unwanted fallthrough.
+
+ **DOS_LINE_ENDINGS**
+ For DOS-formatted patches, there are extra ^M symbols at the end of
+ the line. These should be removed.
+
+ **DT_SCHEMA_BINDING_PATCH**
+ DT bindings moved to a json-schema based format instead of
+ freeform text.
+
+ See: https://www.kernel.org/doc/html/latest/devicetree/bindings/writing-schema.html
+
+ **DT_SPLIT_BINDING_PATCH**
+ Devicetree bindings should be their own patch. This is because
+ bindings are logically independent from a driver implementation,
+ they have a different maintainer (even though they often
+ are applied via the same tree), and it makes for a cleaner history in the
+ DT only tree created with git-filter-branch.
+
+ See: https://www.kernel.org/doc/html/latest/devicetree/bindings/submitting-patches.html#i-for-patch-submitters
+
+ **EMBEDDED_FILENAME**
+ Embedding the complete filename path inside the file isn't particularly
+ useful as often the path is moved around and becomes incorrect.
+
+ **FILE_PATH_CHANGES**
+ Whenever files are added, moved, or deleted, the MAINTAINERS file
+ patterns can be out of sync or outdated.
+
+ So MAINTAINERS might need updating in these cases.
+
+ **MEMSET**
+ The memset use appears to be incorrect. This may be caused by
+ badly ordered parameters. Please recheck the usage.
+
+ **NOT_UNIFIED_DIFF**
+ The patch file does not appear to be in unified-diff format. Please
+ regenerate the patch file before sending it to the maintainer.
+
+ **PRINTF_0XDECIMAL**
+ Prefixing 0x with decimal output is defective and should be corrected.
+
+ **SPDX_LICENSE_TAG**
+ The source file is missing or has an improper SPDX identifier tag.
+ The Linux kernel requires the precise SPDX identifier in all source files,
+ and it is thoroughly documented in the kernel docs.
+
+ See: https://www.kernel.org/doc/html/latest/process/license-rules.html
+
+ **TYPO_SPELLING**
+ Some words may have been misspelled. Consider reviewing them.
diff --git a/doc/developer/cli.rst b/doc/developer/cli.rst
new file mode 100644
index 0000000..59073b3
--- /dev/null
+++ b/doc/developer/cli.rst
@@ -0,0 +1,1007 @@
+.. _command-line-interface:
+
+Command Line Interface
+======================
+
+FRR features a flexible modal command line interface. Often when adding new
+features or modifying existing code it is necessary to create or modify CLI
+commands. FRR has a powerful internal CLI system that does most of the heavy
+lifting for you.
+
+Modes
+-----
+FRR's CLI is organized by modes. Each mode is associated with some set of
+functionality, e.g. EVPN, or some underlying object such as an interface. Each
+mode contains a set of commands that control the associated functionality or
+object. Users move between the modes by entering a command, which is usually
+different for each source and destination mode.
+
+A summary of the modes is given in the following figure.
+
+.. graphviz:: ../figures/nodes.dot
+
+.. seealso:: :ref:`cli-data-structures`
+
+Walkup
+^^^^^^
+FRR exhibits, for historical reasons, a peculiar behavior called 'walkup'.
+Suppose a user is in ``OSPF_NODE``, which contains only OSPF-specific commands,
+and enters the following command: ::
+
+ ip route 192.168.100.0/24 10.0.2.2
+
+This command is not defined in ``OSPF_NODE``, so the matcher will fail to match
+the command in that node. The matcher will then check "parent" nodes of
+``OSPF_NODE``. In this case the direct parent of ``OSPF_NODE`` is
+``CONFIG_NODE``, so the current node switches to ``CONFIG_NODE`` and the command
+is tried in that node. Since static route commands are defined in
+``CONFIG_NODE`` the command succeeds. The procedure of attempting to execute
+unmatched commands by sequentially "walking up" to parent nodes only happens in
+children (direct and indirect) below ``CONFIG_NODE`` and stops at
+``CONFIG_NODE``.
+
+Unfortunately, the internal representation of the various modes is not actually
+a graph. Instead, there is an array. The parent-child relationships are not
+explicitly defined in any data structure but instead are hard-coded into the
+specific commands that switch nodes. For walkup, there is a function that takes
+a node and returns the parent of the node. This interface causes all manner of
+insidious problems, even for experienced developers, and needs to be fixed at
+some point in the future.
+
+Deprecation of old style of commands
+------------------------------------
+
+There are currently two styles of defining commands within an FRR source
+file: ``DEFUN`` and ``DEFPY``. ``DEFPY`` should be used for all new commands
+that a developer is writing, because it allows for much better handling of
+command line arguments as well as ensuring that input is correct. ``DEFUN``
+is documented here for historical reasons and so that existing code can be
+understood by new developers.
+
+Defining Commands
+-----------------
+All definitions for the CLI system are exposed in ``lib/command.h``. In this
+header there are a set of macros used to define commands. These macros are
+collectively referred to as "DEFUNs", because of their syntax:
+
+::
+
+ DEFUN(command_name,
+ command_name_cmd,
+ "example command FOO...",
+ "Examples\n"
+ "CLI command\n"
+ "Argument\n")
+ {
+ // ...command handler...
+ }
+
+DEFUNs generally take four arguments which are expanded into the appropriate
+constructs for hooking into the CLI. In order these are:
+
+- **Function name** - the name of the handler function for the command
+- **Command name** - the identifier of the ``struct cmd_element`` for the
+ command. By convention this should be the function name with ``_cmd``
+ appended.
+- **Command definition** - an expression in FRR's CLI grammar that defines the
+ form of the command and its arguments, if any
+- **Doc string** - a newline-delimited string that documents each element in
+ the command definition
+
+In the above example, ``command_name`` is the function name,
+``command_name_cmd`` is the command name, ``"example..."`` is the definition and
+the last argument is the doc string. The block following the macro is the body
+of the handler function, details on which are presented later in this section.
+
+In order to make the command show up to the user it must be installed into the
+CLI graph. To do this, call:
+
+``install_element(NODE, &command_name_cmd);``
+
+This will install the command into the specified CLI node. Usually these calls
+are grouped together in a CLI initialization function for a set of commands, and
+the DEFUNs themselves are grouped into the same source file to avoid cluttering
+the codebase. The names of these files follow the form ``*_vty.[ch]`` by
+convention. Please do not scatter individual CLI commands in the middle of
+source files; instead expose the necessary functions in a header and place the
+command definition in a ``*_vty.[ch]`` file.
+
+.. note::
+
+ Please see :ref:`cli-workflow` for requirements when creating CLI commands
+ (e.g., JSON structure and formatting).
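+
+As a minimal sketch of this convention (all names here are invented for
+illustration), a hypothetical ``example_vty.c`` might define a command and
+register it from its initialization function:
+
+::
+
+ DEFUN (show_example,
+        show_example_cmd,
+        "show example",
+        SHOW_STR
+        "Example information\n")
+ {
+         vty_out(vty, "example output\n");
+         return CMD_SUCCESS;
+ }
+
+ void example_vty_init(void)
+ {
+         install_element(VIEW_NODE, &show_example_cmd);
+ }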
+
+Definition Grammar
+^^^^^^^^^^^^^^^^^^
+FRR uses its own grammar for defining CLI commands. The grammar draws from
+syntax commonly seen in \*nix manpages and should be fairly intuitive. The
+parser is implemented in Bison and the lexer in Flex. These may be found in
+``lib/command_parse.y`` and ``lib/command_lex.l``, respectively.
+
+ **ProTip**: if you define a new command and find that the parser is
+ throwing syntax or other errors, the parser is the last place you want
+ to look. Bison is very stable and if it detects a syntax error, 99% of
+ the time it will be a syntax error in your definition.
+
+The formal grammar in BNF is given below. This is the grammar implemented in the
+Bison parser. At runtime, the Bison parser reads all of the CLI strings and
+builds a combined directed graph that is used to match and interpret user input.
+
+Human-friendly explanations of how to use this grammar are given a bit later in
+this section alongside information on the :ref:`cli-data-structures` constructed
+by the parser.
+
+.. productionlist::
+ command: `cmd_token_seq`
+ : `cmd_token_seq` `placeholder_token` "..."
+ cmd_token_seq: *empty*
+ : `cmd_token_seq` `cmd_token`
+ cmd_token: `simple_token`
+ : `selector`
+ simple_token: `literal_token`
+ : `placeholder_token`
+ literal_token: WORD `varname_token`
+ varname_token: "$" WORD
+ placeholder_token: `placeholder_token_real` `varname_token`
+ placeholder_token_real: IPV4
+ : IPV4_PREFIX
+ : IPV6
+ : IPV6_PREFIX
+ : VARIABLE
+ : RANGE
+ : MAC
+ : MAC_PREFIX
+ : ASNUM
+ selector: "<" `selector_seq_seq` ">" `varname_token`
+ : "{" `selector_seq_seq` "}" `varname_token`
+ : "[" `selector_seq_seq` "]" `varname_token`
+ : "![" `selector_seq_seq` "]" `varname_token`
+ selector_seq_seq: `selector_seq_seq` "|" `selector_token_seq`
+ : `selector_token_seq`
+ selector_token_seq: `selector_token_seq` `selector_token`
+ : `selector_token`
+ selector_token: `selector`
+ : `simple_token`
+
+Tokens
+^^^^^^
+The various capitalized tokens in the BNF above are in fact themselves
+placeholders, but not defined as such in the formal grammar; the grammar
+provides the structure, and the tokens are actually more like a type system for
+the strings you write in your CLI definitions. A CLI definition string is broken
+apart and each piece is assigned a type by the lexer based on a set of regular
+expressions. The parser uses the type information to verify the string and
+determine the structure of the CLI graph; additional metadata (such as the raw
+text of each token) is encoded into the graph as it is constructed by the
+parser, but this is merely a dumb copy job.
+
+Here is a brief summary of the various token types along with examples.
+
++-----------------+--------------------------+-------------------------------------------------------+
+| Token type | Syntax | Description |
++=================+==========================+=======================================================+
+| ``WORD`` | ``show ip bgp`` | Matches itself. In the example every token is a WORD. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``IPV4`` | ``A.B.C.D`` | Matches an IPv4 address. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``IPV6`` | ``X:X::X:X`` | Matches an IPv6 address. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``IPV4_PREFIX`` | ``A.B.C.D/M`` | Matches an IPv4 prefix in CIDR notation. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``IPV6_PREFIX`` | ``X:X::X:X/M`` | Matches an IPv6 prefix in CIDR notation. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``MAC`` | ``X:X:X:X:X:X`` | Matches a 48-bit mac address. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``MAC_PREFIX`` | ``X:X:X:X:X:X/M`` | Matches a 48-bit mac address with a mask. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``VARIABLE`` | ``FOOBAR`` | Matches anything. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``RANGE`` | ``(X-Y)`` | Matches numbers in the range X..Y inclusive. |
++-----------------+--------------------------+-------------------------------------------------------+
+| ``ASNUM`` | ``<A.B|(1-4294967295)>`` | Matches an AS in plain or dot format. |
++-----------------+--------------------------+-------------------------------------------------------+
+
+When presented with user input, the parser will search over all defined
+commands in the current context to find a match. It is aware of the various
+types of user input and has a ranking system to help disambiguate commands. For
+instance, suppose the following commands are defined in the user's current
+context:
+
+::
+
+ example command FOO
+ example command (22-49)
+ example command A.B.C.D/X
+
+The following table demonstrates the matcher's choice for a selection of
+possible user input.
+
++---------------------------------+---------------------------+--------------------------------------------------------------------------------------------------------------+
+| Input | Matched command | Reason |
++=================================+===========================+==============================================================================================================+
+| ``example command eLi7eH4xx0r`` | example command FOO | ``eLi7eH4xx0r`` is not an integer or IPv4 prefix, |
+| | | but FOO is a variable and matches all input. |
++---------------------------------+---------------------------+--------------------------------------------------------------------------------------------------------------+
+| ``example command 42`` | example command (22-49) | ``42`` is not an IPv4 prefix. It does match both |
+| | | ``(22-49)`` and ``FOO``, but RANGE tokens are more specific and have a higher priority than VARIABLE tokens. |
++---------------------------------+---------------------------+--------------------------------------------------------------------------------------------------------------+
+| ``example command 10.3.3.0/24`` | example command A.B.C.D/X | The user entered an IPv4 prefix, which is best matched by the last command. |
++---------------------------------+---------------------------+--------------------------------------------------------------------------------------------------------------+
+
+Rules
+^^^^^
+There are also constructs which allow optional tokens, mutual exclusion,
+one-or-more selection and repetition.
+
+- ``<angle|brackets>`` -- Contain sequences of tokens separated by pipes and
+ provide mutual exclusion. User input matches at most one option.
+- ``[square brackets]`` -- Contains sequences of tokens that can be omitted.
+ ``[<a|b>]`` can be shortened to ``[a|b]``.
+- ``![exclamation square brackets]`` -- same as ``[square brackets]``, but
+ only allow skipping the contents if the command input starts with ``no``.
+ (For cases where the positive command needs a parameter, but the parameter
+ is optional for the negative case.)
+- ``{curly|braces}`` -- similar to angle brackets, but instead of mutual
+ exclusion, curly braces indicate that one or more of the pipe-separated
+ sequences may be provided in any order.
+- ``VARIADICS...`` -- Any token which accepts input (anything except WORD)
+ which occurs as the last token of a line may be followed by an ellipsis,
+ which indicates that input matching the token may be repeated an unlimited
+ number of times.
+- ``$name`` -- Specify a variable name for the preceding token. See
+ "Variable Names" below.
+
+Some general notes:
+
+- Options are allowed at the beginning of the command. The developer is
+ entreated to use these extremely sparingly. They are most useful for
+ implementing the 'no' form of configuration commands. Please think carefully
+ before using them for anything else. There is usually a better solution, even
+ if it is just separating out the command definition into separate ones.
+- The developer should judiciously apply separation of concerns when defining
+ commands. CLI definitions for two unrelated or vaguely related commands or
+ configuration items should be defined in separate commands. Clarity is
+ preferred over LOC (within reason).
+- The maximum number of space-separated tokens that can be entered is
+ presently limited to 256. Please keep this limit in mind when
+ implementing new CLI.
+
+Variable Names
+^^^^^^^^^^^^^^
+The parser tries to fill the "varname" field on each token. This can happen
+either manually or automatically. Manual specifications work by appending
+``$name`` after the input specifier:
+
+::
+
+ foo bar$cmd WORD$name A.B.C.D$ip
+
+Note that you can also assign variable names to fixed input tokens; this can be
+useful if multiple commands share code. You can also use ``$name`` after a
+multiple-choice option:
+
+::
+
+ foo bar <A.B.C.D|X:X::X:X>$addr [optionA|optionB]$mode
+
+The variable name is in this case assigned to the last token in each of the
+branches.
+
+Automatic assignment of variable names works by applying the following rules:
+
+- manual names always have priority
+- a ``[no]`` at the beginning receives ``no`` as varname on the ``no`` token
+- ``VARIABLE`` tokens whose text is not ``WORD`` or ``NAME`` receive a cleaned
+ lowercase version of the token text as varname, e.g. ``ROUTE-MAP`` becomes
+ ``route_map``.
+- other variable tokens (i.e. everything except "fixed") receive the text of
+ the preceding fixed token as varname, if one can be found. E.g.
+ ``ip route A.B.C.D/M INTERFACE`` assigns "route" to the ``A.B.C.D/M`` token.
+
+These rules should make it possible to avoid manual varname assignment in 90% of
+the cases.
+
+Doc Strings
+^^^^^^^^^^^
+Each token in a command definition should be documented with a brief doc string
+that informs a user of the meaning and/or purpose of the subsequent command
+tree. These strings are provided as the last parameter to DEFUN macros,
+concatenated together and separated by an escaped newline (``\n``). These are
+best explained by example.
+
+::
+
+ DEFUN (config_terminal,
+ config_terminal_cmd,
+ "configure terminal",
+ "Configuration from vty interface\n"
+ "Configuration terminal\n")
+
+The last parameter is split into two lines for readability. Two newline
+delimited doc strings are present, one for each token in the command. The second
+string documents the functionality of the ``terminal`` command in the
+``configure`` subtree.
+
+Note that the first string, for ``configure``, does not contain documentation
+for ``terminal``. This is because the CLI is best envisioned as a tree, with tokens
+defining branches. An imaginary ``start`` token is the root of every command in
+a CLI node. Each subsequent written token descends into a subtree, so the
+documentation for that token ideally summarizes all the functionality contained
+in the subtree.
+
+A consequence of this structure is that the developer must be careful to use the
+same doc strings when defining multiple commands that are part of the same tree.
+Commands which share prefixes must share the same doc strings for those
+prefixes. On startup the parser will generate warnings if it notices
+inconsistent doc strings. In that case behavior is undefined; the same token may show up
+twice in completions, with different doc strings, or it may show up once with a
+random doc string. Parser warnings should be heeded and fixed to avoid confusing
+users.
+
+The number of doc strings provided must be equal to the number of tokens present
+in the command definition, read left to right, ignoring any special constructs.
+
+In the examples below, each arrowed token needs a doc string.
+
+::
+
+ "show ip bgp"
+ ^ ^ ^
+
+ "command <foo|bar> [example]"
+ ^ ^ ^ ^
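+
+For instance, a hypothetical ``DEFUN`` for the second example above would
+carry one doc string per arrowed token:
+
+.. code-block:: c
+
+   DEFUN (example,
+          example_cmd,
+          "command <foo|bar> [example]",
+          "Example command\n"
+          "The foo option\n"
+          "The bar option\n"
+          "An optional example flag\n")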
+
+DEFPY
+^^^^^
+``DEFPY(...)`` is an enhanced version of ``DEFUN()`` which is preprocessed by
+:file:`python/clidef.py`. The python script parses the command definition
+string, extracts variable names and types, and generates a C wrapper function
+that parses the variables and passes them on. This means that in the CLI
+function body, you will receive additional parameters with appropriate types.
+
+This is best explained by an example. Invoking ``DEFPY`` like this:
+
+.. code-block:: c
+
+ DEFPY(func, func_cmd, "[no] foo bar A.B.C.D (0-99)$num", "...help...")
+
+defines the handler function like this:
+
+.. code-block:: c
+
+ func(self, vty, argc, argv, /* standard CLI arguments */
+ const char *no, /* unparsed "no" */
+ struct in_addr bar, /* parsed IP address */
+ const char *bar_str, /* unparsed IP address */
+ long num, /* parsed num */
+ const char *num_str) /* unparsed num */
+
+Note that as documented in the previous section, ``bar`` is automatically
+applied as variable name for ``A.B.C.D``. The Python script then detects this as
+an IP address argument and generates code to parse it into a ``struct in_addr``,
+passing it in ``bar``. The raw value is passed in ``bar_str``. The range/number
+argument works in the same way with the explicitly given variable name.
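+
+Inside the handler body the generated parameters can be used directly. A
+minimal sketch of a possible body (the logic is invented for illustration):
+
+.. code-block:: c
+
+   {
+           if (no) /* the user entered the "no" form */
+                   vty_out(vty, "unsetting %s\n", bar_str);
+           else
+                   vty_out(vty, "bar = %pI4, num = %ld\n", &bar, num);
+           return CMD_SUCCESS;
+   }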
+
+Type rules
+""""""""""
+
++----------------------------+--------------------------------+--------------------------+
+| Token(s) | Type | Value if omitted by user |
++============================+================================+==========================+
+| ``A.B.C.D`` | ``struct in_addr`` | ``0.0.0.0`` |
++----------------------------+--------------------------------+--------------------------+
+| ``X:X::X:X`` | ``struct in6_addr`` | ``::`` |
++----------------------------+--------------------------------+--------------------------+
+| ``A.B.C.D + X:X::X:X`` | ``const union sockunion *`` | ``NULL`` |
++----------------------------+--------------------------------+--------------------------+
+| ``A.B.C.D/M`` | ``const struct prefix_ipv4 *`` | ``all-zeroes struct`` |
++----------------------------+--------------------------------+--------------------------+
+| ``X:X::X:X/M`` | ``const struct prefix_ipv6 *`` | ``all-zeroes struct`` |
++----------------------------+--------------------------------+--------------------------+
+| ``A.B.C.D/M + X:X::X:X/M`` | ``const struct prefix *`` | ``all-zeroes struct`` |
++----------------------------+--------------------------------+--------------------------+
+| ``(0-9)`` | ``long`` | ``0`` |
++----------------------------+--------------------------------+--------------------------+
+| ``VARIABLE`` | ``const char *`` | ``NULL`` |
++----------------------------+--------------------------------+--------------------------+
+| ``word`` | ``const char *`` | ``NULL`` |
++----------------------------+--------------------------------+--------------------------+
+| *all other* | ``const char *`` | ``NULL`` |
++----------------------------+--------------------------------+--------------------------+
+
+Note the following details:
+
+- Not all parameters are pointers; some are passed as values.
+- When the type is not ``const char *``, there will be an extra ``_str``
+ argument with type ``const char *``.
+- You can give a variable name not only to ``VARIABLE`` tokens but also to
+ ``word`` tokens (e.g. constant words). This is useful if some parts of a
+ command are optional. The type will be ``const char *``.
+- ``[no]`` will be passed as ``const char *no``.
+- Most pointers will be ``NULL`` when the argument is optional and the
+ user did not supply it. As noted in the table above, some prefix
+ struct type arguments are passed as pointers to all-zeroes structs,
+ not as ``NULL`` pointers.
+- If a parameter is not a pointer, but is optional and the user didn't use it,
+ the default value will be passed. Check the ``_str`` argument if you need to
+ determine whether the parameter was omitted.
+- If the definition contains multiple parameters with the same variable name,
+ they will be collapsed into a single function parameter. The python code will
+ detect if the types are compatible (i.e. IPv4 + IPv6 variants) and choose a
+ corresponding C type.
+- The standard DEFUN parameters (``self, vty, argc, argv``) are still present
+ and can be used. A DEFUN can simply be **edited into a DEFPY without further
+ changes and it will still work**; this allows easy forward migration.
+- A file may contain both ``DEFUN`` and ``DEFPY`` statements.
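+
+As a brief hypothetical illustration of these rules, a definition containing
+an optional ``A.B.C.D/M$pfx`` token hands the handler a
+``const struct prefix_ipv4 *pfx`` plus a ``const char *pfx_str``, and the
+``_str`` argument tells omission apart from an all-zeroes prefix:
+
+.. code-block:: c
+
+   DEFPY(show_pfx, show_pfx_cmd, "show pfx [A.B.C.D/M$pfx]", "...help...")
+   {
+           if (!pfx_str) {
+                   /* optional prefix was omitted; pfx points at zeroes */
+                   vty_out(vty, "no prefix given\n");
+                   return CMD_SUCCESS;
+           }
+           vty_out(vty, "prefix %s has length %d\n", pfx_str, pfx->prefixlen);
+           return CMD_SUCCESS;
+   }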
+
+Getting a parameter dump
+""""""""""""""""""""""""
+The clidef.py script can be called to get a list of DEFUNs/DEFPYs with the
+parameter name/type list:
+
+::
+
+ lib/clippy python/clidef.py --all-defun --show lib/plist.c > /dev/null
+
+The generated code is printed to stdout, the info dump to stderr. The
+``--all-defun`` argument will make it process DEFUN blocks as well as DEFPYs,
+which is useful prior to converting some DEFUNs. Note that the dump does not
+list the ``_str`` arguments, to keep the output shorter.
+
+Note that the ``clidef.py`` script cannot be run with python directly, it needs
+to be run with *clippy* since the latter makes the CLI parser available.
+
+Include & Makefile requirements
+"""""""""""""""""""""""""""""""
+A source file that uses DEFPY needs to include the ``*_clippy.c`` file **before
+all DEFPY statements**:
+
+.. code-block:: c
+
+ /* GPL header */
+ #include ...
+ ...
+ #include "daemon/filename_clippy.c"
+
+ DEFPY(...)
+ DEFPY(...)
+
+ install_element(...)
+
+This dependency needs to be marked in ``Makefile.am`` or ``subdir.am`` (there
+is no ordering requirement):
+
+.. code-block:: make
+
+ # ...
+
+ # if linked into a LTLIBRARY (.la/.so):
+ filename.lo: filename_clippy.c
+
+ # if linked into an executable or static library (.a):
+ filename.o: filename_clippy.c
+
+Handlers
+^^^^^^^^
+The block that follows a CLI definition is executed when a user enters input
+that matches the definition. Its function signature looks like this:
+
+.. code-block:: c
+
+ int (*func) (const struct cmd_element *, struct vty *, int, struct cmd_token *[]);
+
+The first argument is the command definition struct. The last argument is an
+ordered array of tokens that correspond to the path taken through the graph, and
+the argument just prior to that is the length of the array.
+
+The arrangement of the token array has changed from Quagga's CLI implementation.
+In the old system, missing arguments were padded with ``NULL`` so that the same
+parts of a command would show up at the same indices regardless of what was
+entered. The new system does not perform such padding and therefore it is
+generally *incorrect* to assume consistent indices in this array. As a simple
+example:
+
+Command definition:
+
+::
+
+ command [foo] <bar|baz>
+
+User enters:
+
+::
+
+ command foo bar
+
+Array:
+
+::
+
+ [0] -> command
+ [1] -> foo
+ [2] -> bar
+
+User enters:
+
+::
+
+ command baz
+
+Array:
+
+::
+
+ [0] -> command
+ [1] -> baz
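+
+Because these indices shift with the input, handlers should scan the array
+for the tokens they care about instead of indexing it at fixed positions. A
+minimal sketch using the ``argv_find()`` helper from :file:`lib/command.c`
+(the surrounding handler is assumed):
+
+.. code-block:: c
+
+   int idx = 0;
+
+   /* did the user enter the optional "foo" token anywhere in the path? */
+   if (argv_find(argv, argc, "foo", &idx))
+           vty_out(vty, "foo was given at index %d\n", idx);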
+
+
+.. _cli-data-structures:
+
+Data Structures
+---------------
+On startup, the CLI parser sequentially parses each command string definition
+and constructs a directed graph with each token forming a node. This graph is
+the basis of the entire CLI system. It is used to match user input in order to
+generate command completions and match commands to functions.
+
+There is one graph per CLI node (not the same as a graph node in the CLI graph).
+The CLI node struct keeps a reference to its graph (see :file:`lib/command.h`).
+
+While most of the graph maintains the form of a tree, special constructs
+outlined in the Rules section introduce some quirks. ``<>``, ``[]`` and ``{}``
+form self-contained 'subgraphs'. Each subgraph is a tree except that all of the
+'leaves' actually share a child node. This helps with minimizing graph size and
+debugging.
+
+As a working example, here is the graph of the following command: ::
+
+ show [ip] bgp neighbors [<A.B.C.D|X:X::X:X|WORD>] [json]
+
+.. figure:: ../figures/cligraph.png
+ :align: center
+
+ Graph of example CLI command
+
+
+``FORK`` and ``JOIN`` nodes are plumbing nodes that don't correspond to user
+input. They're necessary in order to deduplicate these constructs where
+applicable.
+
+Options follow the same form, except that there is an edge from the ``FORK``
+node to the ``JOIN`` node. Since all of the subgraphs in the example command are
+optional, all of them have this edge.
+
+Keywords follow the same form, except that there is an edge from ``JOIN`` to
+``FORK``. Because of this the CLI graph cannot be called acyclic. There is
+special logic in the input matching code that keeps a stack of paths already
+taken through the node in order to disallow following the same path more than
+once.
+
+Variadics are a bit special; they have an edge back to themselves, which allows
+repeating the same input indefinitely.
+
+The leaves of the graph are nodes that have no out edges. These nodes are
+special; their data section does not contain a token, as most nodes do, or
+``NULL``, as in ``FORK``/``JOIN`` nodes, but instead has a pointer to a
+``cmd_element``. All paths through the graph that terminate on a leaf are
+guaranteed to be defined by that command. When a user enters a complete command,
+the command matcher tokenizes the input and executes a DFS on the CLI graph. If
+it is simultaneously able to exhaust all input (one input token per graph node),
+and then find exactly one leaf connected to the last node it reaches, then the
+input has matched the corresponding command and the command is executed. If it
+finds more than one such leaf, the command is ambiguous (more on this in
+deduplication). If it cannot exhaust all input, the command is unknown. If it
+exhausts all input but does not reach a leaf, the command is incomplete.
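+
+At the shell these outcomes surface as the familiar error messages, along the
+lines of (the prompt and commands are illustrative):
+
+.. code-block:: shell
+
+   frr# show
+   % Command incomplete.
+   frr# frobnicate
+   % Unknown command.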
+
+The parser uses an incremental strategy to build the CLI graph for a node. Each
+command is parsed into its own graph, and then this graph is merged into the
+overall graph. During this merge step, the parser makes a best-effort attempt to
+remove duplicate nodes. If it finds a node in the overall graph that is equal to
+a node in the corresponding position in the command graph, it will intelligently
+merge the properties from the node in the command graph into the
+already-existing node. Subgraphs are also checked for isomorphism and merged
+where possible. The definition of whether two nodes are 'equal' is based on the
+equality of some set of token properties; read the parser source for the most
+up-to-date definition of equality.
+
+When the parser is unable to deduplicate some complicated constructs, this can
+result in two identical paths through separate parts of the graph. If this
+occurs and the user enters input that matches these paths, they will receive an
+'ambiguous command' error and will be unable to execute the command. Most of the
+time the parser can detect and warn about duplicate commands, but it will not
+always be able to do this. Hence care should be taken before defining a new
+command to ensure it is not defined elsewhere.
+
+struct cmd\_token
+^^^^^^^^^^^^^^^^^
+
+.. code-block:: c
+
+ /* Command token struct. */
+ struct cmd_token
+ {
+ enum cmd_token_type type; // token type
+ uint8_t attr; // token attributes
+ bool allowrepeat; // matcher can match token repetitively?
+
+ char *text; // token text
+ char *desc; // token description
+ long long min, max; // for ranges
+ char *arg; // user input that matches this token
+ char *varname; // variable name
+ };
+
+This struct is used in the CLI graph to match input against. It is also used to
+pass user input to command handler functions, as it is frequently useful for
+handlers to have access to that information. When a command is matched, the
+sequence of ``cmd_tokens`` that form the matching path are duplicated and placed
+in order into ``*argv[]``. Before this happens the ``->arg`` field is set to
+point at the snippet of user input that matched it.
+
+For most nontrivial commands the handler function will need to determine which
+of the possible matching inputs was entered. Previously this was done by
+looking at the first few characters of input. This is now considered an
+anti-pattern and should be avoided. Instead, use the ``->type`` or ``->text``
+fields for this logic. The ``->type`` field can be used when the possible
+inputs differ in type. When the possible types are the same, use the ``->text``
+field. This field has the full text of the corresponding token in the
+definition string and using it makes for much more readable code. An example is
+helpful.
+
+Command definition:
+
+::
+
+ command <(1-10)|foo|BAR>
+
+In this example, the user may enter any one of:
+
+* an integer between 1 and 10
+* "foo"
+* anything at all
+
+If the user enters "command f", then:
+
+::
+
+ argv[1]->type == WORD_TKN
+ argv[1]->arg == "f"
+ argv[1]->text == "foo"
+
+Range tokens have some special treatment; a token with ``->type == RANGE_TKN``
+will have the ``->min`` and ``->max`` fields set to the bounding values of the
+range.
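+
+A hypothetical handler fragment dispatching on these fields for the example
+above (``strmatch()`` is the string-equality helper from :file:`lib/command.h`):
+
+.. code-block:: c
+
+   if (argv[1]->type == RANGE_TKN)
+           vty_out(vty, "number between %lld and %lld: %s\n",
+                   argv[1]->min, argv[1]->max, argv[1]->arg);
+   else if (strmatch(argv[1]->text, "foo"))
+           vty_out(vty, "the literal 'foo'\n");
+   else
+           vty_out(vty, "arbitrary input: %s\n", argv[1]->arg);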
+
+struct cmd\_node
+^^^^^^^^^^^^^^^^
+
+.. code-block:: c
+
+ struct cmd_node {
+ /* Node index. */
+ enum node_type node;
+
+ /* Prompt character at vty interface. */
+ const char *prompt;
+
+ /* Is this node's configuration goes to vtysh ? */
+ int vtysh;
+
+ /* Node's configuration write function */
+ int (*func)(struct vty *);
+
+ /* Node's command graph */
+ struct graph *cmdgraph;
+
+ /* Vector of this node's command list. */
+ vector cmd_vector;
+
+ /* Hashed index of command node list, for de-dupping primarily */
+ struct hash *cmd_hash;
+ };
+
+This struct corresponds to a CLI mode. The last three fields are most relevant
+here.
+
+cmdgraph
+ This is a pointer to the command graph that was described in the first part
+   of this section. It is the data structure used for matching user input to
+ commands.
+
+cmd_vector
+ This is a list of all the ``struct cmd_element`` defined in the mode.
+
+cmd_hash
+ This is a hash table of all the ``struct cmd_element`` defined in the mode.
+ When ``install_element`` is called, it checks that the element it is given is
+ not already present in the hash table as a safeguard against duplicate calls
+ resulting in a command being defined twice, which renders the command
+ ambiguous.
+
+All ``struct cmd_node`` are themselves held in a static vector defined in
+:file:`lib/command.c` that defines the global CLI space.
+
+Command Abbreviation & Matching Priority
+----------------------------------------
+It is possible for users to elide parts of tokens when the CLI matcher does not
+need them to make an unambiguous match. This is best explained by example.
+
+Command definitions:
+
+::
+
+ command dog cow
+ command dog crow
+
+User input:
+
+::
+
+ c d c -> ambiguous command
+ c d co -> match "command dog cow"
+
+
+The parser will look ahead and attempt to disambiguate the input based on tokens
+later on in the input string.
+
+Command definitions:
+
+::
+
+ show ip bgp A.B.C.D
+ show ipv6 bgp X:X::X:X
+
+User enters:
+
+::
+
+ s i b 4.3.2.1 -> match "show ip bgp A.B.C.D"
+ s i b ::e0 -> match "show ipv6 bgp X:X::X:X"
+
+Reading left to right, both of these commands would be ambiguous since 'i' does
+not explicitly select either 'ip' or 'ipv6'. However, since the user later
+provides a token that matches only one of the commands (an IPv4 or IPv6 address)
+the parser is able to look ahead and select the appropriate command. This has
+some implications for parsing the ``*argv[]`` that is passed to the command
+handler.
+
+Now consider a command definition such as:
+
+::
+
+ command <foo|VAR>
+
+'foo' only matches the string 'foo', but 'VAR' matches any input, including
+'foo'. Who wins? In situations like this the matcher will always choose the
+'better' match, so 'foo' will win.
+
+Consider also:
+
+::
+
+ show <ip|ipv6> foo
+
+User input:
+
+::
+
+ show ip foo
+
+``ip`` partially matches ``ipv6`` but exactly matches ``ip``, so ``ip`` will
+win.
+
+Adding a CLI Node
+-----------------
+
+To add a new CLI node, you should:
+
+#. define a new numerical node constant
+#. define a node structure in the relevant daemon
+#. call ``install_node()`` in the relevant daemon
+#. define and install the new node in vtysh
+#. define corresponding node entry commands in daemon and vtysh
+#. add a new entry to the ``ctx_keywords`` dictionary in ``tools/frr-reload.py``
+
+Defining the numerical node constant
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Add your new node value to the enum before ``NODE_TYPE_MAX`` in
+``lib/command.h``:
+
+.. code-block:: c
+
+ enum node_type {
+ AUTH_NODE, // Authentication mode of vty interface.
+ VIEW_NODE, // View node. Default mode of vty interface.
+ [...]
+ MY_NEW_NODE,
+ NODE_TYPE_MAX, // maximum
+ };
+
+Defining a node structure
+^^^^^^^^^^^^^^^^^^^^^^^^^
+In your daemon-specific code where you define your new commands that
+attach to the new node, add a node definition:
+
+.. code-block:: c
+
+ static struct cmd_node my_new_node = {
+ .name = "my new node name",
+ .node = MY_NEW_NODE, // enum node_type lib/command.h
+ .parent_node = CONFIG_NODE,
+ .prompt = "%s(my-new-node-prompt)# ",
+ .config_write = my_new_node_config_write,
+ };
+
+You will need to define ``my_new_node_config_write(struct vty *vty)``
+(or omit this field if you have no relevant configuration to save).
+
+Calling ``install_node()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+In the daemon's initialization function, before installing your new commands
+with ``install_element()``, add a call to ``install_node(&my_new_node)``.
+
+Defining and installing the new node in vtysh
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The build tools automatically collect command definitions for vtysh.
+However, new nodes must be coded in vtysh specifically.
+
+In ``vtysh/vtysh.c``, define a stripped-down node structure and
+call ``install_node()``:
+
+.. code-block:: c
+
+ static struct cmd_node my_new_node = {
+ .name = "my new node name",
+ .node = MY_NEW_NODE, /* enum node_type lib/command.h */
+ .parent_node = CONFIG_NODE,
+ .prompt = "%s(my-new-node-prompt)# ",
+ };
+ [...]
+ void vtysh_init_vty(void)
+ {
+ [...]
+      install_node(&my_new_node);
+ [...]
+ }
+
+Defining corresponding node entry commands in daemon and vtysh
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The command that descends into the new node is typically programmed
+with ``VTY_PUSH_CONTEXT`` or equivalent in the daemon's CLI handler function.
+(If the CLI has been updated to use the new northbound architecture,
+``VTY_PUSH_XPATH`` is used instead.)
+
+In vtysh, you must implement a corresponding node change so that vtysh
+tracks the daemon's movement through the node tree.
+
+Although the build tools typically scan daemon code for CLI definitions
+to replicate their parsing in vtysh, the node-descent function in the
+daemon must be blocked from this replication so that a hand-coded
+skeleton can be written in ``vtysh.c``.
+
+Accordingly, use one of the ``*_NOSH`` macros such as ``DEFUN_NOSH``,
+``DEFPY_NOSH``, or ``DEFUN_YANG_NOSH`` for the daemon's node-descent
+CLI definition, and use ``DEFUNSH`` in ``vtysh.c`` for the vtysh equivalent.
+
+.. seealso:: :ref:`vtysh-special-defuns`
+
+Examples:
+
+``zebra_whatever.c``
+
+.. code-block:: c
+
+ DEFPY_NOSH(my_new_node,
+ my_new_node_cmd,
+ "my-new-node foo",
+ "New Thing\n"
+ "A foo\n")
+ {
+ [...]
+ VTY_PUSH_CONTEXT(MY_NEW_NODE, bar);
+ [...]
+ }
+
+
+``ripd_whatever.c``
+
+.. code-block:: c
+
+ DEFPY_YANG_NOSH(my_new_node,
+ my_new_node_cmd,
+ "my-new-node foo",
+ "New Thing\n"
+ "A foo\n")
+ {
+ [...]
+ VTY_PUSH_XPATH(MY_NEW_NODE, xbar);
+ [...]
+ }
+
+
+``vtysh.c``
+
+.. code-block:: c
+
+ DEFUNSH(VTYSH_ZEBRA, my_new_node,
+ my_new_node_cmd,
+ "my-new-node foo",
+ "New Thing\n"
+ "A foo\n")
+ {
+ vty->node = MY_NEW_NODE;
+ return CMD_SUCCESS;
+ }
+ [...]
+ install_element(CONFIG_NODE, &my_new_node_cmd);
+
+
+Adding a new entry to the ``ctx_keywords`` dictionary
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+In the file ``tools/frr-reload.py``, the ``ctx_keywords`` dictionary
+describes the various node relationships.
+Add a new node entry at the appropriate level in this dictionary.
+
+.. code-block:: python
+
+ ctx_keywords = {
+ [...]
+ "key chain ": {
+ "key ": {}
+ },
+ [...]
+ "my-new-node": {},
+ [...]
+ }
+
+
+
+Inspection & Debugging
+----------------------
+
+Permutations
+^^^^^^^^^^^^
+It is sometimes useful to check all the possible combinations of input that
+would match an arbitrary definition string. There is a tool in
+:file:`tools/permutations` that reads CLI definition strings on ``stdin`` and
+prints out all matching input permutations. It also dumps a text representation
+of the graph, which is more useful for debugging than anything else. It looks
+like this:
+
+.. code-block:: shell
+
+ $ ./permutations "show [ip] bgp [<view|vrf> WORD]"
+
+ show ip bgp view WORD
+ show ip bgp vrf WORD
+ show ip bgp
+ show bgp view WORD
+ show bgp vrf WORD
+ show bgp
+
+This functionality is also built into VTY/VTYSH; :clicmd:`list permutations`
+will list all possible matching input permutations in the current CLI node.
+
+Graph Inspection
+^^^^^^^^^^^^^^^^
+When in the Telnet or VTYSH console, :clicmd:`show cli graph` will dump the
+entire command space of the current mode in the DOT graph language. This can be
+fed into one of the various GraphViz layout engines, such as ``dot``,
+``neato``, etc.
+
+For example, to generate an image of the entire command space for the top-level
+mode (``ENABLE_NODE``):
+
+.. code-block:: shell
+
+ sudo vtysh -c 'show cli graph' | dot -Tjpg -Grankdir=LR > graph.jpg
+
+To do the same for the BGP mode:
+
+.. code-block:: shell
+
+ sudo vtysh -c 'conf t' -c 'router bgp' -c 'show cli graph' | dot -Tjpg -Grankdir=LR > bgpgraph.jpg
+
+This information is very helpful when debugging command resolution, tracking
+down duplicate / ambiguous commands, and debugging patches to the CLI graph
+builder.
diff --git a/doc/developer/conf.py b/doc/developer/conf.py
new file mode 100644
index 0000000..495c604
--- /dev/null
+++ b/doc/developer/conf.py
@@ -0,0 +1,406 @@
+# -*- coding: utf-8 -*-
+#
+# FRR documentation build configuration file, created by
+# sphinx-quickstart on Tue Jan 31 16:00:52 2017.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys
+import os
+import re
+import pygments
+from sphinx.highlighting import lexers
+from sphinx.util import logging
+logger = logging.getLogger(__name__)
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+# sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+needs_sphinx = "1.0"
+
+# prolog for various variable substitutions
+rst_prolog = ""
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ["sphinx.ext.todo", "sphinx.ext.graphviz"]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ["_templates"]
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+# source_suffix = ['.rst']
+source_suffix = ".rst"
+
+# The encoding of source files.
+# source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = "index"
+
+# General information about the project.
+project = u"FRR"
+copyright = u"2017, FRR"
+author = u"FRR authors"
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+
+# The short X.Y version.
+version = u"?.?"
+# The full version, including alpha/beta/rc tags.
+release = u"?.?-?"
+
+
+# -----------------------------------------------------------------------------
+# Extract values from codebase for substitution into docs.
+# -----------------------------------------------------------------------------
+
+# Various installation prefixes. Values are extracted from config.status.
+# Reasonable defaults are set in case that file does not exist.
+replace_vars = {
+ "AUTHORS": author,
+ "COPYRIGHT_YEAR": "1999-2005",
+ "COPYRIGHT_STR": "Copyright (c) 1999-2005",
+ "PACKAGE_NAME": project.lower(),
+ "PACKAGE_TARNAME": project.lower(),
+ "PACKAGE_STRING": project.lower() + " latest",
+ "PACKAGE_URL": "https://frrouting.org/",
+ "PACKAGE_VERSION": "latest",
+ "INSTALL_PREFIX_ETC": "/etc/frr",
+ "INSTALL_PREFIX_SBIN": "/usr/lib/frr",
+ "INSTALL_PREFIX_STATE": "/var/run/frr",
+ "INSTALL_PREFIX_MODULES": "/usr/lib/frr/modules",
+ "INSTALL_USER": "frr",
+ "INSTALL_GROUP": "frr",
+ "INSTALL_VTY_GROUP": "frrvty",
+ "GROUP": "frr",
+ "USER": "frr",
+}
+
+# extract version information, installation location, other stuff we need to
+# use when building final documents
+val = re.compile(r'^S\["([^"]+)"\]="(.*)"$')
+try:
+ with open("../../config.status", "r") as cfgstatus:
+ for ln in cfgstatus.readlines():
+ m = val.match(ln)
+ if not m or m.group(1) not in replace_vars.keys():
+ continue
+ replace_vars[m.group(1)] = m.group(2)
+except IOError:
+ # if config.status doesn't exist, just ignore it
+ pass
+
+# manually fill out some of these we can't get from config.status
+replace_vars["COPYRIGHT_STR"] = "Copyright (c)"
+replace_vars["COPYRIGHT_STR"] += " {0}".format(replace_vars["COPYRIGHT_YEAR"])
+replace_vars["COPYRIGHT_STR"] += " {0}".format(replace_vars["AUTHORS"])
+release = replace_vars["PACKAGE_VERSION"]
+version = release.split("-")[0]
+
+# add substitutions to prolog
+for key, value in replace_vars.items():
+ rst_prolog += ".. |{0}| replace:: {1}\n".format(key, value)
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+# today = ''
+# Else, today_fmt is used as the format for a strftime call.
+# today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = [
+ "_build",
+ "building-libunwind-note.rst",
+ "building-libyang.rst",
+ "topotests-snippets.rst",
+ "topotests-markers.rst",
+ "include-compile.rst",
+]
+
+# The reST default role (used for this markup: `text`) to use for all
+# documents.
+# default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+# add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+# add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+# show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = "sphinx"
+
+# A list of ignored prefixes for module index sorting.
+# modindex_common_prefix = []
+
+# If true, keep warnings as "system message" paragraphs in the built documents.
+# keep_warnings = False
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = True
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+html_theme = "default"
+
+try:
+ import sphinx_rtd_theme
+
+ html_theme = "sphinx_rtd_theme"
+except ImportError:
+ pass
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+# html_theme_options = {
+# 'sidebarbgcolor': '#374249'
+# }
+
+# Add any paths that contain custom themes here, relative to this directory.
+# html_theme_path = []
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+# html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+# html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+html_logo = "../figures/frr-icon.svg"
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+html_favicon = "../figures/frr-logo-icon.png"
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ["_static"]
+
+# Add any extra paths that contain custom files (such as robots.txt or
+# .htaccess) here, relative to this directory. These files are copied
+# directly to the root of the documentation.
+# html_extra_path = []
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+# html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+# html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+# html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+# html_additional_pages = {}
+
+# If false, no module index is generated.
+# html_domain_indices = True
+
+# If false, no index is generated.
+# html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+# html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+# html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+# html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+# html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+# html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+# html_file_suffix = None
+
+# Language to be used for generating the HTML full-text search index.
+# Sphinx supports the following languages:
+# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
+# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
+# html_search_language = 'en'
+
+# A dictionary with options for the search language support, empty by default.
+# Now only 'ja' uses this config value
+# html_search_options = {'type': 'default'}
+
+# The name of a javascript file (relative to the configuration directory) that
+# implements a search results scorer. If empty, the default will be used.
+# html_search_scorer = 'scorer.js'
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = "FRRdoc"
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #'papersize': 'letterpaper',
+ # The font size ('10pt', '11pt' or '12pt').
+ #'pointsize': '10pt',
+ # Additional stuff for the LaTeX preamble.
+ #'preamble': '',
+ # Latex figure (float) alignment
+ #'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, "FRR.tex", u"FRR Developer's Manual", u"FRR", "manual"),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+latex_logo = "../figures/frr-logo-medium.png"
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+# latex_use_parts = False
+
+# If true, show page references after internal links.
+# latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+# latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+# latex_appendices = []
+
+# If false, no module index is generated.
+# latex_domain_indices = True
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [(master_doc, "frr", u"FRR Developer's Manual", [author], 1)]
+
+# If true, show URL addresses after external links.
+# man_show_urls = False
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (
+ master_doc,
+ "frr",
+ u"FRR Developer's Manual",
+ author,
+ "FRR",
+ "One line description of project.",
+ "Miscellaneous",
+ ),
+]
+
+# Documents to append as an appendix to all manuals.
+# texinfo_appendices = []
+
+# If false, no module index is generated.
+# texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+# texinfo_show_urls = 'footnote'
+
+# If true, do not generate a @detailmenu in the "Top" node's menu.
+# texinfo_no_detailmenu = False
+
+# contents of ../extra/frrlexer.py.
+# This is read here to support VPATH build. Since this section is execfile()'d
+# with the file location, we can safely use a relative path here to save the
+# contents of the lexer file for later use even if our relative path changes
+# due to VPATH.
+with open("../extra/frrlexer.py", "rb") as lex:
+ frrlexerpy = lex.read()
+
+frrfmt_re = re.compile(r'^\s*%(?P<spec>[^\s]+)\s+\((?P<types>.*)\)\s*$')
+
+def parse_frrfmt(env, text, node):
+ from sphinx import addnodes
+
+ m = frrfmt_re.match(text)
+ if not m:
+ logger.warning('could not parse frrfmt:: %r' % (text), location=node)
+ node += addnodes.desc_name(text, text)
+ return text
+
+ spec, types = m.group('spec'), m.group('types')
+
+ node += addnodes.desc_sig_operator('%', '%')
+ node += addnodes.desc_name(spec + ' ', spec + ' ')
+ plist = addnodes.desc_parameterlist()
+ for typ in types.split(','):
+ typ = typ.strip()
+ plist += addnodes.desc_parameter(typ, typ)
+ node += plist
+ return '%' + spec
+
+# custom extensions here
+def setup(app):
+ # object type for FRR CLI commands, can be extended to document parent CLI
+ # node later on
+ app.add_object_type("clicmd", "clicmd")
+
+ # printfrr extensions
+ app.add_object_type("frrfmt", "frrfmt", parse_node=parse_frrfmt)
+
+ if "add_css_file" in dir(app):
+ app.add_css_file("overrides.css")
+ else:
+ app.add_stylesheet("overrides.css")
+
+ # load Pygments lexer for FRR config syntax
+ #
+ # NB: in Pygments 2.2+ this can be done with `load_lexer_from_file`, but we
+ # do it manually since not all of our supported build platforms have 2.2
+ # yet.
+ #
+ # frrlexer = pygments.lexers.load_lexer_from_file('../extra/frrlexer.py', lexername="FRRLexer")
+ custom_namespace = {}
+ exec(frrlexerpy, custom_namespace)
+ lexers["frr"] = custom_namespace["FRRLexer"]()
diff --git a/doc/developer/cross-compiling.rst b/doc/developer/cross-compiling.rst
new file mode 100644
index 0000000..3bf78f7
--- /dev/null
+++ b/doc/developer/cross-compiling.rst
@@ -0,0 +1,326 @@
+Cross-Compiling
+===============
+
+FRR is capable of being cross-compiled to a number of different architectures.
+With an adequate toolchain this process is fairly straightforward, though one
+must exercise caution to validate this toolchain's correctness before attempting
+to compile FRR or its dependencies; small oversights in the construction of the
+build tools may lead to problems which quickly become difficult to diagnose.
+
+Toolchain Preliminary
+---------------------
+
+The first step to cross-compiling any program is to identify the system which
+the program (FRR) will run on. From here on this will be called the "host"
+machine, following autotools' convention, while the machine building FRR will be
+called the "build" machine. The toolchain will of course be installed onto the
+build machine and be leveraged to build FRR for the host machine to run.
+
+.. note::
+
+ The build machine used while writing this guide was ``x86_64-pc-linux-gnu``
+ and the target machine was ``arm-linux-gnueabihf`` (a Raspberry Pi 3B+).
+ Replace this with your targeted tuple below if you plan on running the
+ commands from this guide:
+
+ .. code-block:: shell
+
+ export HOST_ARCH="arm-linux-gnueabihf"
+
+ For your given target, the build system's OS may have some support for
+ building cross compilers natively, or may even offer binary toolchains built
+ upstream for the target architecture. Check your package manager or OS
+ documentation before committing to building a toolchain from scratch.
+
+This guide will not detail *how* to build a cross-compiling toolchain but
+will instead assume one already exists and is installed on the build system.
+The methods for building the toolchain itself may differ between operating
+systems so consult the OS documentation for any particulars regarding
+cross-compilers. The OSDev wiki has a `pleasant tutorial`_ on cross-compiling in
+the context of operating system development which bootstraps from only the
+native GCC and binutils on the build machine. This may be useful if the build
+machine's OS does not offer existing tools to build a cross-compiler targeting
+the host.
+
+.. _pleasant tutorial: https://wiki.osdev.org/GCC_Cross-Compiler
+
+This guide will also not demonstrate how to build all of FRR's dependencies for the
+target architecture. Instead, general instructions for using a cross-compiling
+toolchain to compile packages using CMake, Autotools, and Makefiles are
+provided; these three cases apply to almost all FRR dependencies.
+
+.. _glibc mismatch:
+
+.. warning::
+
+ Ensure the versions and implementations of the C standard library (glibc or
+ what have you) match on the host and the build toolchain. ``ldd --version``
+will help you here. Upgrade one or the other if they do not match.
+
+Testing the Toolchain
+---------------------
+
+Before any cross-compilation begins it would be prudent to test the new
+toolchain by writing, compiling and linking a simple program.
+
+.. code-block:: shell
+
+ # A small program
+ cat > nothing.c <<EOF
+ int main() { return 0; }
+ EOF
+
+ # Build and link with the cross-compiler
+ ${HOST_ARCH}-gcc -o nothing nothing.c
+
+ # Inspect the resulting binary, results may vary
+ file ./nothing
+
+ # nothing: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV),
+ # dynamically linked, interpreter /lib/ld-linux-armhf.so.3,
+ # for GNU/Linux 3.2.0, not stripped
+
+If this produced no errors then the installed toolchain is probably ready to
+start compiling the build dependencies and eventually FRR itself. There still
+may be lurking issues but fundamentally the toolchain can produce binaries and
+that's good enough to start working with it.
+
+.. warning::
+
+ If any errors occurred during the previous functional test please look back
+ and address them before moving on; this indicates your cross-compiling
+ toolchain is *not* in a position to build FRR or its dependencies. Even if
+ everything was fine, keep in mind that many errors from here on *may still
+ be related* to your toolchain (e.g. libstdc++.so or other components) and this
+ small test is not a guarantee of complete toolchain coherence.
+
+Cross-compiling Dependencies
+----------------------------
+
+When compiling FRR it is necessary to compile some of its dependencies alongside
+it on the build machine. This is so symbols from the shared libraries (which
+will be loaded at run-time on the host machine) can be linked to the FRR
+binaries at compile time; additionally, headers for these libraries are needed
+during the compile stage for a successful build.
+
+Sysroot Overview
+^^^^^^^^^^^^^^^^
+
+All build dependencies should be installed into a "root" directory on the build
+computer, hereafter called the "sysroot". This directory will be prefixed to
+paths while searching for requisite libraries and headers during the build
+process. Often this may be set via a ``--prefix`` flag when building the
+dependent packages, meaning a ``make install`` will copy compiled libraries into
+(e.g.) ``/usr/${HOST_ARCH}/usr``.
+
+If the toolchain was built on the build machine then there is likely already a
+sysroot where those tools and standard libraries were installed; it may be
+helpful to use that directory as the sysroot for this build as well.
+
+Basic Workflow
+^^^^^^^^^^^^^^
+
+Before compiling or building any dependencies, make note of which daemons are
+being targeted and which libraries will be needed. Not all dependencies are
+necessary if only building with a subset of the daemons.
+
+The following workflow will compile and install any libraries which can be built
+with Autotools. The resultant library will be installed into the sysroot
+``/usr/${HOST_ARCH}``.
+
+.. code-block:: shell
+
+ ./configure \
+ CC=${HOST_ARCH}-gcc \
+ CXX=${HOST_ARCH}-g++ \
+     --host=${HOST_ARCH} \
+ --prefix=/usr/${HOST_ARCH}
+ make
+ make install
+
+Some libraries like ``json-c`` and ``libyang`` are packaged with CMake and can
+be built and installed generally like:
+
+.. code-block:: shell
+
+ mkdir build
+ cd build
+ CC=${HOST_ARCH}-gcc \
+ CXX=${HOST_ARCH}-g++ \
+ cmake \
+ -DCMAKE_INSTALL_PREFIX=/usr/${HOST_ARCH} \
+ ..
+ make
+ make install
+
+For programs with only a Makefile (e.g. ``libcap``) the process may look a
+little different still:
+
+.. code-block:: shell
+
+ CC=${HOST_ARCH}-gcc make
+ make install DESTDIR=/usr/${HOST_ARCH}
+
+These three workflows should handle the bulk of building and installing the
+build-time dependencies for FRR. Verify that the installed files are being
+placed correctly into the sysroot and were actually built using the
+cross-compiling toolchain, not accidentally by the native one.
+
+Dependency Notes
+^^^^^^^^^^^^^^^^
+
+There are a lot of things that can go wrong during a cross-compilation. Some of
+the more common errors and a few special considerations are collected below for
+reference.
+
+libyang
+"""""""
+
+``-DENABLE_LYD_PRIV=ON`` should be provided during the CMake step.
+
+Ensure also that the version of ``libyang`` being installed corresponds to the
+version required by the targeted FRR version.
+
+gRPC
+""""
+
+This piece is requisite only if the ``--enable-grpc`` flag will be passed
+later on to FRR. One may get burned when compiling gRPC if the ``protoc``
+version on the build machine differs from the version of ``protoc`` being linked
+to during a gRPC build. The error messages from this defect look like:
+
+.. code-block:: shell
+
+ gens/src/proto/grpc/channelz/channelz.pb.h: In member function ‘void grpc::channelz::v1::ServerRef::set_name(const char*, size_t)’:
+ gens/src/proto/grpc/channelz/channelz.pb.h:9127:64: error: ‘EmptyDefault’ is not a member of ‘google::protobuf::internal::ArenaStringPtr’
+ 9127 | name_.Set(::PROTOBUF_NAMESPACE_ID::internal::ArenaStringPtr::EmptyDefault{}, ::std::string(
+
+This happens because protocol buffer code generation uses ``protoc`` to create
+classes with different getters and setters corresponding to the protobuf data
+defined by the source tree's ``.proto`` files. Clearly the cross-compiled
+``protoc`` cannot be used for this code generation because that binary is built
+for a different CPU.
+
+The solution is to install matching versions of native and cross-compiled
+protocol buffers; this way the native binary will generate code and the
+cross-compiled library will be linked to by gRPC and these versions will not
+disagree.
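+
+A quick sanity check (the paths here are illustrative) is to compare the
+native code generator's version against the protobuf installed into the
+sysroot before building gRPC:
+
+.. code-block:: shell
+
+   # version of the native code generator
+   protoc --version
+
+   # version string of the cross-compiled protobuf in the sysroot
+   grep GOOGLE_PROTOBUF_VERSION \
+      /usr/${HOST_ARCH}/include/google/protobuf/stubs/common.h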
+
+----
+
+The ``-latomic`` linker flag may also be necessary here if using ``libstdc++``,
+since GCC's C++11 implementation makes library calls in certain cases for
+``<atomic>``, so ``-latomic`` cannot be assumed.
+
+Cross-compiling FRR Itself
+--------------------------
+
+With all the necessary libraries cross-compiled and installed into the sysroot,
+the last thing to actually build is FRR itself:
+
+.. code-block:: shell
+
+ # Clone and bootstrap the build
+ git clone 'https://github.com/FRRouting/frr.git'
+ # (e.g.) git checkout stable/7.5
+ ./bootstrap.sh
+
+ # Build clippy using the native toolchain
+ mkdir build-clippy
+ cd build-clippy
+ ../configure --enable-clippy-only
+ make clippy-only
+ cd ..
+
+ # Next, configure FRR and use the clippy we just built
+ ./configure \
+ CC=${HOST_ARCH}-gcc \
+ CXX=${HOST_ARCH}-g++ \
+ --host=${HOST_ARCH} \
+ --with-sysroot=/usr/${HOST_ARCH} \
+ --with-clippy=./build-clippy/lib/clippy \
+ --sysconfdir=/etc/frr \
+ --sbindir="\${prefix}/lib/frr" \
+ --localstatedir=/var/run/frr \
+ --prefix=/usr \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --disable-doc \
+ --enable-grpc
+
+ # Send it
+ make
+
+Installation to Host Machine
+----------------------------
+
+If no errors were observed during the previous steps it is safe to ``make
+install`` FRR into its own directory.
+
+.. code-block:: shell
+
+   # Install FRR into its own "sysroot"
+ make install DESTDIR=/some/path/to/sysroot
+
+After running the above command, FRR binaries, modules and example configuration
+files will be installed into some path on the build machine. The directory
+will have folders like ``/usr`` and ``/etc``; this "root" should now be copied
+to the host and installed on top of the root directory there.
+
+.. code-block:: shell
+
+ # Tar this sysroot (preserving permissions)
+ tar -C /some/path/to/sysroot -cpvf frr-${HOST_ARCH}.tar .
+
+ # Transfer tar file to host machine
+ scp frr-${HOST_ARCH}.tar me@host-machine:
+
+ # Overlay the tarred sysroot on top of the host machine's root
+ ssh me@host-machine <<-EOF
+ # May need to elevate permissions here
+   	tar -C / -xpvf frr-${HOST_ARCH}.tar .
+ EOF
+
+Now FRR should be installed just as if ``make install`` had been run on the host
+machine. Create configuration files and assign permissions as needed. Lastly,
+ensure the correct users and groups exist for FRR on the host machine.
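+
+For example, on a Debian-flavored host the user and groups matching the
+``--enable-user``/``--enable-group``/``--enable-vty-group`` flags used above
+could be created like this (the numeric IDs are only a suggestion):
+
+.. code-block:: shell
+
+   sudo groupadd -r -g 92 frr
+   sudo groupadd -r -g 85 frrvty
+   sudo adduser --system --ingroup frr --home /var/run/frr/ \
+      --gecos "FRR suite" --shell /sbin/nologin frr
+   sudo usermod -a -G frrvty frr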
+
+Troubleshooting
+---------------
+
+Even when every precaution has been taken some things may still go wrong! This
+section details some common runtime problems.
+
+Mismatched Libraries
+^^^^^^^^^^^^^^^^^^^^
+
+If you see something like this after installing on the host:
+
+.. code-block:: console
+
+ /usr/lib/frr/zebra: error while loading shared libraries: libyang.so.1: cannot open shared object file: No such file or directory
+
+... at least one of FRR's dependencies which was linked to the binary earlier is
+not available on the host OS. Even if it has been installed, the host
+repository's version may lag behind what is needed for more recent FRR builds
+(this is likely to happen with libyang at the moment).
+
+If the matching library is not available from the host OS package manager it
+may be possible to compile it using the same toolchain used to compile FRR.
+The library may have already been built earlier when compiling FRR on the
+build machine, in which case installing it may be as simple as following the
+same workflow laid out in `Installation to Host Machine`_.
+
+Mismatched Glibc Versions
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The version and implementation of the C standard library must match on both the
+host and build toolchain. The error corresponding to this misconfiguration will
+look like:
+
+.. code-block:: console
+
+ /usr/lib/frr/zebra: /lib/${HOST_ARCH}/libc.so.6: version `GLIBC_2.32' not found (required by /usr/lib/libfrr.so.0)
+
+See the earlier warning about preventing a `glibc mismatch`_.
diff --git a/doc/developer/cspf.rst b/doc/developer/cspf.rst
new file mode 100644
index 0000000..7a5a55e
--- /dev/null
+++ b/doc/developer/cspf.rst
@@ -0,0 +1,197 @@
+Path Computation Algorithms
+===========================
+
+Introduction
+------------
+
+Both RSVP-TE and Segment Routing Flex Algo need to compute end-to-end paths
+with constraints other than the standard IGP metric. Based on the Shortest
+Path First (SPF) algorithm, a new class of Constrained SPF (CSPF) is provided
+by the FRR library.
+
+Supported constraints are as follows:
+
+- Standard IGP metric (here, CSPF provides the same result as a normal SPF)
+- Traffic Engineering (TE) IGP metric
+- Delay from the IGP Extended Metrics
+- Bandwidth for a given Class of Service (CoS), for bandwidth reservation
+
+Algorithm
+---------
+
+The CSPF algorithm is based on a priority queue which stores the candidate
+paths sorted by their respective weights. This weight corresponds to the cost
+of the current path from the source up to the current node.
+
+The algorithm is as follows:
+
+.. code-block:: c
+
+    cost = MAX_COST;
+    Priority_Queue.empty();
+    Visited_Node.empty();
+    Processed_Path.empty();
+    src = new_path(source_address);
+    src.cost = 0;
+    dst = new_path(destination_address);
+    dst.cost = MAX_COST;
+    Processed_Path.add(src);
+    Processed_Path.add(dst);
+    Priority_Queue.add(src);
+    while (Priority_Queue.count != 0) {
+        current_path = Priority_Queue.pop();
+        current_node = current_path.destination;
+        Visited_Node.add(current_node);
+        for (current_node.edges: edge) {
+            if (prune_edge(current_path, edge))
+                continue;
+            if (relax_edge(current_path, edge) && dst.cost < cost) {
+                optim_path = dst;
+                cost = dst.cost;
+            }
+        }
+    }
+
+    prune_edge(path, edge) {
+        // return true when path + edge would violate the constraints,
+        // i.e. when the edge must be pruned, e.g.
+        if (path.cost + edge.cost > constrained_cost)
+            return true;
+        else
+            return false;
+    }
+
+    relax_edge(current_path, edge) {
+        next_node = edge.destination;
+        if (Visited_Node.get(next_node))
+            return false;
+        next_path = Processed_Path.get(edge.destination);
+        if (!next_path) {
+            next_path = new_path(edge.destination);
+            Processed_Path.add(next_path);
+        }
+        total_cost = current_path.cost + edge.cost;
+        if (total_cost < next_path.cost) {
+            next_path = current_path;
+            next_path.add_edge(edge);
+            next_path.cost = total_cost;
+            Priority_Queue.add(next_path);
+        }
+        return (next_path.destination == destination);
+    }
+
+
+Definition
+----------
+
+.. c:struct:: constraints
+
+This is the constraints structure that contains:
+
+- cost: the total cost that the path must respect
+- ctype: type of constraints:
+
+ - CSPF_METRIC for standard metric
+ - CSPF_TE_METRIC for TE metric
+ - CSPF_DELAY for delay metric
+
+- bw: bandwidth that the path must respect
+- cos: Class of Service (COS) for the bandwidth
+- family: AF_INET or AF_INET6
+- type: RSVP_TE, SR_TE or SRV6_TE
+
+.. c:struct:: c_path
+
+This is the Constraint Path structure that contains:
+
+- edges: List of Edges that compose the path
+- status: FAILED, IN_PROGRESS, SUCCESS, NO_SOURCE, NO_DESTINATION, SAME_SRC_DST
+- weight: the cost from source to the destination of the path
+- dst: key of the destination vertex
+
+.. c:struct:: cspf
+
+This is the main structure for path computation. Even though it is public, you
+don't need to set the internal fields of the structure manually. Instead, use
+the following functions:
+
+.. c:function:: struct cspf *cspf_new(void);
+
+Function to create an empty cspf for a later path computation call.
+
+.. c:function:: struct cspf *cspf_init(struct cspf *algo, const struct ls_vertex *src, const struct ls_vertex *dst, struct constraints *csts);
+
+This function initializes the cspf with the source and destination vertices
+and the constraints, and returns a pointer to the cspf structure. If the input
+cspf structure is NULL, a new cspf structure is allocated and initialized.
+
+.. c:function:: struct cspf *cspf_init_v4(struct cspf *algo, struct ls_ted *ted, const struct in_addr src, const struct in_addr dst, struct constraints *csts);
+
+Same as cspf_init, but here the source and destination vertices are extracted
+from the TED database based on the respective IPv4 source and destination
+addresses.
+
+.. c:function:: struct cspf *cspf_init_v6(struct cspf *algo, struct ls_ted *ted, const struct in6_addr src, const struct in6_addr dst, struct constraints *csts);
+
+Same as cspf_init_v4 but with IPv6 source and destination addresses.
+
+.. c:function:: void cspf_clean(struct cspf *algo);
+
+Clean the internal structures of the cspf in order to reuse it for another
+path computation.
+
+.. c:function:: void cspf_del(struct cspf *algo);
+
+Delete the cspf structure. A call to the cspf_clean() function is performed
+prior to freeing the allocated memory.
+
+.. c:function:: struct c_path *compute_p2p_path(struct ls_ted *ted, struct cspf *algo);
+
+Compute a point-to-point path from the ted and the cspf.
+The function always returns a constraints path. The status of the path
+indicates whether the algorithm succeeded or failed. If the cspf structure has
+not been initialized with a call to `cspf_init() or cspf_init_XX()`, the
+algorithm returns a constraints path with the status set to FAILED.
+Note that a call to `cspf_clean()` is performed at the end of this function,
+thus it is mandatory to initialize the cspf structure again before calling the
+path computation algorithm again.
+
+
+Usage
+-----
+
+The CSPF algorithm needs a network topology that contains the various
+metrics. Link State provides such a Traffic Engineering Database.
+
+To perform a Path Computation with given constraints, proceed as follows:
+
+.. code-block:: c
+
+ struct cspf *algo;
+ struct ls_ted *ted;
+ struct in_addr src;
+ struct in_addr dst;
+ struct constraints csts;
+ struct c_path *path;
+
+ // Create a new CSPF structure
+ algo = cspf_new();
+
+ // Initialize constraints
+ csts.cost = 100;
+ csts.ctype = CSPF_TE_METRIC;
+ csts.family = AF_INET;
+ csts.type = SR_TE;
+ csts.bw = 1000000;
+ csts.cos = 3;
+
+ // Then, initialize the CSPF with source, destination and constraints
+ cspf_init_v4(algo, ted, src, dst, &csts);
+
+ // Finally, get the computed path
+ path = compute_p2p_path(ted, algo);
+
+ if (path->status == SUCCESS)
+         zlog_info("Got a valid constraints path");
+ else
+         zlog_info("Unable to compute constraints path. Got %d status", path->status);
+
+
+If you want to compute another path, you must call `cspf_init()` prior to
+`compute_p2p_path()` to change the source, destination and/or constraints.
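+
+For example, a second computation reusing the same cspf structure could look
+like this (a minimal sketch; variables are those of the example above, and the
+new destination `dst2` is hypothetical):
+
+.. code-block:: c
+
+   // Re-initialize the CSPF with a new destination and updated constraints
+   csts.cost = 200;
+   cspf_init_v4(algo, ted, src, dst2, &csts);
+
+   // Run the path computation again
+   path = compute_p2p_path(ted, algo);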
diff --git a/doc/developer/draft-zebra-00.ms b/doc/developer/draft-zebra-00.ms
new file mode 100644
index 0000000..b5d6924
--- /dev/null
+++ b/doc/developer/draft-zebra-00.ms
@@ -0,0 +1,209 @@
+.pl 10.0i
+.po 0
+.ll 7.2i
+.lt 7.2i
+.nr LL 7.2i
+.nr LT 7.2i
+.ds LF Ishiguro
+.ds RF FORMFEED[Page %]
+.ds CF
+.ds LH RFC DRAFT
+.ds RH March 1998
+.ds CH
+.hy 0
+.ad l
+Network Working Group K. Ishiguro
+Request for Comments: DRAFT Digital Magic Labs, Inc.
+ March 1998
+.sp 2
+.ce
+Zebra Protocol Draft
+.sp 2
+.fi
+.ne 4
+Status of this Memo
+.sp
+.in 3
+This draft is a very early beta version.
+.sp
+.in 0
+.ne 4
+Introduction
+.sp
+.in 3
+The zebra protocol is a communication protocol between the kernel
+routing table manager and routing protocol daemons. It is built over
+the TCP/IP protocol suite.
+.sp
+.in 0
+.ne 4
+Request message formats
+.sp
+.in 3
+zebra is a TCP-based protocol.
+.sp
+Below is the request packet format.
+.sp
+.in 0
+.DS
+0 1 2 3
+0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Length (2) | Command (1) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+.DE
+.sp
+.in 3
+Length is the total packet length.
+.sp
+Here is a summary of the command list.
+.sp
+.in 0
+.DS
+1 - ZEBRA_IPV4_ROUTE_ADD
+2 - ZEBRA_IPV4_ROUTE_DELETE
+3 - ZEBRA_IPV6_ROUTE_ADD
+4 - ZEBRA_IPV6_ROUTE_DELETE
+5 - ZEBRA_GET_ONE_INTERFACE
+6 - ZEBRA_GET_ALL_INTERFACE
+7 - ZEBRA_GET_HOSTINFO
+.DE
+.sp
+.in 0
+.ne 4
+IPv4 reply message formats
+.sp
+.in 0
+.DS
+0 1 2 3
+0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
++-+-+-+-+-+-+-+-+
+| Type (1) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Gateway (4) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+.DE
+.sp
+.in 3
+The Type field specifies the route's origin type.
+.sp
+.in 0
+.DS
+1 - ZEBRA_ROUTE_RESERVE
+2 - ZEBRA_ROUTE_CONNECT
+3 - ZEBRA_ROUTE_STATIC
+4 - ZEBRA_ROUTE_RIP
+5 - ZEBRA_ROUTE_RIPNG
+6 - ZEBRA_ROUTE_BGP
+7 - ZEBRA_ROUTE_RADIX
+.DE
+.sp
+.in 3
+After the above message there can be variable-length IPv4 prefix data.
+Each IPv4 prefix is encoded as a two-tuple of the form <masklength,
+prefix>.
+.sp
+.in 0
+.DS
++----------------------+
+|Subnet mask (1 octet) |
++----------------------+
+|IPv4 prefix (variable)|
++----------------------+
+.DE
+.sp
+.in 0
+.ne 4
+IPv6 reply message formats
+.sp
+.in 0
+.DS
+0 1 2 3
+0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
++-+-+-+-+-+-+-+-+
+| Type (1) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Gateway (16) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+.DE
+.sp
+.in 3
+The Type field specifies the route's origin type.
+.sp
+.in 0
+.DS
+1 - ZEBRA_ROUTE_RESERVE
+2 - ZEBRA_ROUTE_CONNECT
+3 - ZEBRA_ROUTE_STATIC
+4 - ZEBRA_ROUTE_RIP
+5 - ZEBRA_ROUTE_RIPNG
+6 - ZEBRA_ROUTE_BGP
+7 - ZEBRA_ROUTE_RADIX
+.DE
+.sp
+.in 0
+.DS
++----------------------+
+| ifindex (4 octet) |
++----------------------+
+| prefixlen (1 octet)|
++----------------------+
+|IPv6 prefix (variable)|
++----------------------+
+.DE
+.sp
+.in 3
+I am not sure, but it seems some operating systems' IPv6
+implementations may need the interface index when adding and
+deleting link-local routes.
+.sp
+I have added an ifindex field to specify the IPv6 route's
+interface index. If this index is zero, it will be ignored.
+.sp
+.in 0
+.ne 4
+Interface information message format.
+.sp
+.in 0
+.DS
+0 1 2 3
+0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Interface name (20) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Index (1) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Interface flag (4) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Interface metric (4) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Interface MTU (4) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| Interface Address count (4) |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+.DE
+.sp
+.in 3
+Address message format.
+.sp
+.in 0
+.ne 4
+Host information message format.
+.sp
+.in 0
+.DS
+0 1 2 3
+0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+|IPv4 forwarding|IPv6 forwarding|
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+.DE
+.sp
+.in 3
+Host information contains IPv4/IPv6 forwarding information.
diff --git a/doc/developer/fpm.rst b/doc/developer/fpm.rst
new file mode 100644
index 0000000..56d3367
--- /dev/null
+++ b/doc/developer/fpm.rst
@@ -0,0 +1,119 @@
+FPM
+===
+
+FPM stands for Forwarding Plane Manager and it's a module for use with Zebra.
+
+The encapsulation header for the messages exchanged with the FPM is
+defined by the file :file:`fpm/fpm.h` in the frr tree. The routes
+themselves are encoded in Netlink or protobuf format, with Netlink
+being the default.
+
+Netlink is the standard format for encoding messages exchanged with the
+kernel in Linux, and it is also the name of the socket type used for them.
+The FPM netlink usage differs from Linux's in the following ways:
+
+- Linux netlink sockets use datagrams in a multicast fashion, while FPM
+  uses a unicast stream.
+- FPM netlink messages might have more or less information than a normal
+  Linux netlink socket message (example: RTM_NEWROUTE might add an extra
+  route attribute to signal VxLAN encapsulation).
+
+Protobuf is one of a number of new serialization formats wherein the
+message schema is expressed in a purpose-built language. Code for
+encoding/decoding to/from the wire format is generated from the
+schema. Protobuf messages can be extended easily while maintaining
+backward-compatibility with older code. Protobuf has the following
+advantages over Netlink:
+
+- Code for serialization/deserialization is generated automatically. This
+ reduces the likelihood of bugs, allows third-party programs to be integrated
+ quickly, and makes it easy to add fields.
+- The message format is not tied to an OS (Linux), and can be evolved
+ independently.
+
+.. note::
+
+ Currently there are two FPM modules in ``zebra``:
+
+ * ``fpm``
+ * ``dplane_fpm_nl``
+
+fpm
+^^^
+
+The first FPM implementation that was built using hooks in ``zebra`` route
+handling functions. It uses its own netlink/protobuf encoding functions to
+translate ``zebra`` route data structures into formatted binary data.
+
+
+dplane_fpm_nl
+^^^^^^^^^^^^^
+
+The newer FPM implementation that was built using ``zebra``'s data plane
+framework as a plugin. It only supports netlink and it shares ``zebra``'s
+netlink functions to translate route event snapshots into formatted binary
+data.
+
+
+Protocol Specification
+----------------------
+
+FPM (in any mode) uses a TCP connection to talk with external applications.
+It operates as a TCP client and uses the CLI-configured address/port to
+connect to the FPM server (defaults to port ``2620``).
+
+FPM frames all data with a header to help the external reader figure out how
+many bytes it has to read in order to read the full message (this helps
+simulate datagrams, as in the original Linux kernel netlink usage).
+
+Frame header:
+
+::
+
+ 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +---------------+---------------+-------------------------------+
+ | Version | Message type | Message length |
+ +---------------+---------------+-------------------------------+
+ | Data... |
+ +---------------------------------------------------------------+
+
+
+Version
+^^^^^^^
+
+Currently there is only one version, so it should always be ``1``.
+
+
+Message Type
+^^^^^^^^^^^^
+
+Defines which underlying protocol we are using: netlink (``1``) or protobuf (``2``).
+
+
+Message Length
+^^^^^^^^^^^^^^
+
+The amount of data in this frame, in network byte order.
+
+
+Data
+^^^^
+
+The netlink or protobuf message payload.
+
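+To make the framing concrete, below is a minimal C sketch of a reader for
+this header. The struct and function names are illustrative, not part of
+FRR's API; it assumes, per :file:`fpm/fpm.h`, that the length field covers
+the entire frame, header included.
+
+.. code-block:: c
+
+   #include <arpa/inet.h> /* ntohs() */
+   #include <stdint.h>
+   #include <unistd.h>
+
+   /* Mirrors the frame diagram above. */
+   struct frame_hdr {
+           uint8_t version;   /* always 1 for now */
+           uint8_t msg_type;  /* 1 = netlink, 2 = protobuf */
+           uint16_t msg_len;  /* frame length, network byte order */
+   } __attribute__((packed));
+
+   /* Read one frame from fd; returns the payload length, or -1 on error.
+    * A production reader would also handle short reads in a loop. */
+   static ssize_t read_fpm_frame(int fd, uint8_t *payload, size_t bufsiz)
+   {
+           struct frame_hdr hdr;
+           ssize_t plen;
+
+           if (read(fd, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
+                   return -1;
+           if (hdr.version != 1)
+                   return -1;
+
+           plen = (ssize_t)ntohs(hdr.msg_len) - (ssize_t)sizeof(hdr);
+           if (plen < 0 || (size_t)plen > bufsiz)
+                   return -1;
+           if (read(fd, payload, plen) != plen)
+                   return -1;
+           return plen;
+   }
+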
+
+Route Status Notification from ASIC
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``dplane_fpm_nl`` module has the ability to read route netlink messages
+from the underlying FPM implementation, which can tell zebra
+whether the route has been Offloaded, Failed or Trapped.
+The end developer must send the data up the same socket that has
+been created to listen for FPM messages from Zebra. The data sent
+must have a frame header with Version set to 1, Message Type set to 1
+and an appropriate Message Length. The message data must contain
+an RTM_NEWROUTE netlink message that carries the prefix and nexthops
+associated with the route. Finally, ``rtm_flags`` must contain
+RTM_F_OFFLOAD, RTM_F_TRAP and/or RTM_F_OFFLOAD_FAILED to signify
+what has happened to the route in the ASIC.
diff --git a/doc/developer/frr-release-procedure.rst b/doc/developer/frr-release-procedure.rst
new file mode 100644
index 0000000..9378637
--- /dev/null
+++ b/doc/developer/frr-release-procedure.rst
@@ -0,0 +1,267 @@
+.. _frr-release-procedure:
+
+FRR Release Procedure
+=====================
+
+``<version>``
+   Version to be released, e.g. 7.3
+
+``origin``
+   FRR upstream repository
+
+Stage 1 - Preparation
+---------------------
+
+#. Prepare changelog for the new release
+
+ Note: use ``tools/release_notes.py`` to help draft the release notes changelog.
+
+#. Checkout the existing ``dev/<version>`` branch.
+
+ .. code-block:: console
+
+ git checkout dev/<version>
+
+#. Create and push a new branch called ``stable/<version>`` based on the
+ ``dev/<version>`` branch.
+
+ .. code-block:: console
+
+ git checkout -b stable/<version>
+
+#. Remove the development branch called ``dev/<version>``
+
+ .. code-block:: console
+
+ git push origin --delete dev/<version>
+
+#. Update Changelog for Red Hat Packages:
+
+ Edit :file:`redhat/frr.spec.in` and look for the ``%changelog`` section:
+
+ - Change last (top of list) entry from ``%{version}`` to the **last**
+ released version number. For example, if ``<version>`` is ``7.3`` and the
+ last public release was ``7.2``, you would use ``7.2``, changing the file
+ like so::
+
+ * Tue Nov 7 2017 Martin Winter <mwinter@opensourcerouting.org> - %{version}
+
+ to::
+
+ * Tue Nov 7 2017 Martin Winter <mwinter@opensourcerouting.org> - 7.2
+
+ - Add a new entry to the top of the list with the ``%{version}`` tag. Mind
+ the format: the day field is always 2 characters wide, with the first
+ character being a space if the day is a single digit.
+
+ - Add the changelog text below this entry.
+
+#. Update Changelog for Debian Packages:
+
+ Update :file:`debian/changelog`:
+
+ - Run ``dch --newversion VERSION``, where ``VERSION`` is the release version
+ number plus the Debian revision (usually ``-1``). For example, if
+ ``<version>`` is ``7.3`` then you will run ``dch --newversion 7.3-1``.
+
+ - ``dch`` will open an editor; add the changelog text below the new entry,
+ usually something like: **New upstream version**.
+
+ - Verify the changelog format using ``dpkg-parsechangelog``. In the
+ repository root:
+
+ .. code-block:: console
+
+ dpkg-parsechangelog
+
+ You should see output like this::
+
+ vagrant@local ~/frr> dpkg-parsechangelog
+ Source: frr
+ Version: 7.3-dev-0
+ Distribution: UNRELEASED
+ Urgency: medium
+ Maintainer: FRRouting-Dev <dev@lists.frrouting.org>
+ Timestamp: 1540478210
+ Date: Thu, 25 Oct 2018 16:36:50 +0200
+ Changes:
+ frr (7.3-dev-0) RELEASED; urgency=medium
+ .
+ * Your Changes Here
+
+#. Commit the changes, adding the changelog to the commit message. Follow all
+ existing commit guidelines. The commit message should be akin to::
+
+ debian, redhat: updating changelog for new release
+
+#. Change main version number:
+
+ - Edit :file:`configure.ac` and change version in the ``AC_INIT`` command
+ to ``<version>``
+
+ Add and commit this change. This commit should be separate from the commit
+ containing the changelog. The commit message should be::
+
+ FRR Release <version>
+
+ The version field should be complete; i.e. for ``8.0.0``, the version should
+ be ``8.0.0`` and not ``8.0`` or ``8``.
+
+
+Stage 2 - Staging
+-----------------
+
+#. Push the stable branch to a new remote branch prefixed with ``rc``::
+
+ git push origin stable/<version>:rc/<version>
+
+ This will trigger the NetDEF CI, which serves as a sanity check on the
+ release branch. Verify that all tests pass and that all package builds are
+ successful. To do this, go to the NetDEF CI located here:
+
+ https://ci1.netdef.org/browse/FRR-FRR
+
+ In the top left, look for ``rc-<version>`` in the "Plan branch" dropdown.
+ Select this version. Note that it may take a few minutes for the CI to kick
+ in on this new branch and appear in the list.
+
+#. Push the stable branch:
+
+ .. code-block:: console
+
+ git push origin stable/<version>:refs/heads/stable/<version>
+
+#. Create and push a git tag for the version:
+
+ .. code-block:: console
+
+ git tag -a frr-<version> -m "FRRouting Release <version>"
+ git push origin frr-<version>
+
+#. Create a new branch based on ``master``, cherry-pick the commit made earlier
+ that added the changelogs, and use it to create a PR against ``master``.
+ This way ``master`` has the latest changelog for the next cycle.
+
+#. Kick off the "Release" build plan on the CI system for the correct release.
+ Contact Martin Winter for this step. Ensure all release packages build
+ successfully.
+
+#. Kick off the Snapcraft build plan for the release.
+
+#. Build Docker images
+
+ 1. Log into the Netdef Docker build VM
+ 2. ``sudo -su builduser``
+ 3. Suppose we are releasing 8.5.0, then ``X.Y.Z`` is ``8.5.0``. Run this:
+
+ .. code-block:: console
+
+ cd /home/builduser/frr
+ TAG=X.Y.Z
+ git fetch --all
+ git checkout frr-$TAG
+ docker buildx build --platform linux/amd64,linux/arm64,linux/ppc64le,linux/s390x,linux/arm/v7,linux/arm/v6 -f docker/alpine/Dockerfile -t quay.io/frrouting/frr:$TAG --push .
+ git tag docker/$TAG
+ git push origin docker/$TAG
+
+ This will build a multi-arch image and upload it to Quay, as well as
+ create a git tag corresponding to the commit that the image was built
+ from and upload that to Github. It's important that the git tag point to
+ the exact codebase that was used to build the docker image, so if any
+ changes need to be made on top of the ``frr-$TAG`` release tag, make
+ sure these changes are committed and pointed at by the ``docker/X.Y.Z``
+ tag.
+
+
+Stage 3 - Publish
+-----------------
+
+#. Upload both the Debian and RPM packages to their respective repositories.
+
+#. Coordinate with the maintainer of FRR's RPM repository to publish the RPM
+ packages on that repository. Update the repository webpage. Verify that the
+ instructions on the webpage work and that FRR is installable from the
+ repository on a Red Hat system.
+
+ Current maintainer: *Martin Winter*
+
+#. Coordinate with the maintainer of FRR Debian package to publish the Debian
+ packages on that repository. Update the repository webpage. Verify that the
+ instructions on the webpage work and that FRR is installable from the
+ repository on a Debian system.
+
+ Current maintainer: *Jafar Al-Gharaibeh*
+
+#. Log in to the Read the Docs instance. In the "FRRouting" project, navigate
+ to the "Overview" tab. Ensure there is a ``stable-<version>`` version listed
+ and that it is enabled. Go to "Admin" and then "Advanced Settings". Change
+ "Default version" to the new version. This ensures that the documentation
+ shown to visitors is that of the latest release by default.
+
+ This step must be performed by someone with administrative access to the
+ Read the Docs instance.
+
+#. On GitHub, go to the `releases page <https://github.com/FRRouting/frr/releases>`_ and click
+ "Draft a new release". Write a release announcement. The release
+ announcement should follow the template in
+ ``release-announcement-template.md``, located next to this document. Check
+ for spelling errors, and optionally (but preferably) have other maintainers
+ proofread the announcement text.
+
+ Do not attach any packages or source tarballs to the GitHub release.
+
+ Publish the release once it is reviewed.
+
+#. Deploy Snapcraft release. Remember that this will automatically upgrade Snap
+ users.
+
+ Current maintainer: *Martin Winter*
+
+#. Build and publish the Docker containers.
+
+ Current maintainer: *Quentin Young*
+
+#. Clone the ``frr-www`` repository:
+
+ .. code-block:: console
+
+ git clone https://github.com/FRRouting/frr-www.git
+
+#. Add a new release announcement, using a previous announcement as template:
+
+ .. code-block:: console
+
+ cp content/release/<old-version>.md content/release/<new-version>.md
+
+ Paste the GitHub release announcement text into this document, and **remove
+ line breaks**. In other words, this::
+
+ This is one continuous
+ sentence that should be
+ rendered on one line
+
+ Needs to be changed to this::
+
+ This is one continuous sentence that should be rendered on one line
+
+ This is very important otherwise the announcement will be unreadable on the
+ website.
+
+ To get the number of committers and commits, here are a couple of handy commands:
+
+ .. code-block:: console
+
+ # The number of commits
+ % git log --oneline --no-merges base_8.2...base_8.1 | wc -l
+
+ # The number of committers
+ % git shortlog --summary --no-merges base_8.2...base_8.1 | wc -l
+
+ Make sure to add a link to the GitHub releases page at the top.
+
+#. Deploy the updated ``frr-www`` on the frrouting.org web server and verify
+ that the announcement text is visible.
+
+#. Update readthedocs.org (Default Version) for https://docs.frrouting.org to
+ be the version of this latest release.
+
+#. Send an email to ``announce@lists.frrouting.org``. The text of this email
+ should include text as appropriate from the GitHub release and a link to the
+ GitHub release, Debian repository, and RPM repository.
diff --git a/doc/developer/fuzzing.rst b/doc/developer/fuzzing.rst
new file mode 100644
index 0000000..8a33187
--- /dev/null
+++ b/doc/developer/fuzzing.rst
@@ -0,0 +1,164 @@
+.. _fuzzing:
+
+Fuzzing
+=======
+
+This page describes the fuzzing targets and supported fuzzers available in FRR
+and how to use them. Familiarity with fuzzing techniques and tools is assumed.
+
+Overview
+--------
+
+It is well known that networked applications tend to be difficult to fuzz on
+their network-facing attack surfaces. Approaches involving actual network
+transmission tend to be slow and are subject to intermediate devices and
+networking stacks which tend to drop fuzzed packets, especially if the fuzzing
+surface covers IP itself. Some time was spent on fuzzing FRR this way with some
+mediocre results but attention quickly turned towards skipping the actual
+networking and instead adding fuzzing targets directly in the packet processing
+code for use by more traditional in- and out-of-process fuzzers. Results from
+this approach have been very fruitful.
+
+The patches to add fuzzing targets are kept in a separate git branch. Ideally
+they would live in the main branch so that they stay up to date and do not
+need to be constantly synchronized with the main codebase. Unfortunately,
+changes to FRR to support fuzzing necessarily extend far beyond the
+entrypoints. Checksums must be disarmed, interactions with the kernel must be
+skipped, sockets and files must be avoided, desired under/overflows must be
+marked, etc. There are the usual ``LD_PRELOAD`` libraries to emulate these
+things (preeny et al) but FRR is a very kernel-reliant program and these
+libraries tend to create annoying problems when used with FRR for whatever
+reason. Keeping this code in the main codebase is cluttering, difficult to work
+with / around, and runs the risk of accidentally introducing bugs even if
+``#ifdef``'d out. Consequently it's in a separate branch that is rebased on
+``master`` from time to time.
+
+
+Code
+----
+
+The git branch with fuzzing targets is located here:
+
+https://github.com/FRRouting/frr/tree/fuzz
+
+To build libFuzzer targets, pass ``--enable-libfuzzer`` to ``configure``.
+To build AFL targets, compile with ``afl-clang`` as usual.
+
+Fuzzing with sanitizers is strongly recommended, especially ASAN, which you can
+enable by passing ``--enable-address-sanitizer`` to ``configure``.
+
+Suggested UBSAN flags: ``-fsanitize-recover=unsigned-integer-overflow,implicit-conversion -fsanitize=unsigned-integer-overflow,implicit-conversion,nullability-arg,nullability-assign,nullability-return``
+
+Recommended cflags: ``-Wno-all -g3 -O3 -funroll-loops``
+
+Design
+------
+
+All fuzzing targets have support for libFuzzer and AFL. This is done by writing
+the target as a libFuzzer entrypoint (``LLVMFuzzerTestOneInput()``) and calling
+it from the AFL entrypoint in ``main()``. New targets should follow this pattern.
+
+When adding AFL entrypoints, it's a good idea to use AFL persistent mode for
+better performance. Grep ``bgpd/bgp_main.c`` for ``__AFL_INIT()`` for an
+example of how to do this in FRR. Typically it involves moving all internal
+daemon setup into a setup function. Then this setup function is called exactly
+once for the lifetime of the process. In ``LLVMFuzzerTestOneInput()`` this
+means you need to call it at the start of the function protected by a static
+boolean that is set to true, since that function is your entrypoint. You also
+need to call it prior to ``__AFL_INIT()`` in ``main()`` because ``main()`` is
+your entrypoint in the AFL case.
+
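+A condensed sketch of this pattern (``daemon_setup()`` stands in for whatever
+one-time initialization the daemon needs):
+
+.. code-block:: c
+
+   #include <stdbool.h>
+   #include <stddef.h>
+   #include <stdint.h>
+
+   static bool setup_done;
+
+   int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
+   {
+           /* one-time daemon initialization, shared with main() */
+           if (!setup_done) {
+                   daemon_setup();
+                   setup_done = true;
+           }
+
+           /* ...feed data/size into the code path under test... */
+           return 0;
+   }
+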
+Adding support to daemons
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This section describes how to add entrypoints to daemons that do not have any
+yet.
+
+Because libFuzzer has its own ``main()`` function, when adding fuzzing support
+to a daemon that doesn't have any targets already, ``main()`` needs to be
+``#ifdef``'d out like so:
+
+.. code:: c
+
+ #ifndef FUZZING_LIBFUZZER
+
+ int main(int argc, char **argv)
+ {
+ ...
+ }
+
+ #endif /* FUZZING_LIBFUZZER */
+
+
+The ``FUZZING_LIBFUZZER`` macro is set by ``--enable-libfuzzer``.
+
+Because libFuzzer can only be linked into daemons that have
+``LLVMFuzzerTestOneInput()`` implemented, we can't pass ``-fsanitize=fuzzer``
+to all daemons in ``AM_CFLAGS``. It needs to go into a variable specific to
+each daemon. Since it can be thought of as a kind of sanitizer, for daemons
+that have libFuzzer support there are now individual flags variables for those
+daemons named ``DAEMON_SAN_FLAGS`` (e.g. ``BGPD_SAN_FLAGS``,
+``ZEBRA_SAN_FLAGS``). This variable has the contents of the generic
+``SAN_FLAGS`` plus any fuzzing-related flags. It is used in daemons'
+``subdir.am`` in place of ``SAN_FLAGS``. Daemons that don't support libFuzzer
+still use ``SAN_FLAGS``. If you want to add fuzzing support to a daemon you
+need to do this flag variable conversion; look at ``configure.ac`` for
+examples, it is fairly straightforward. Remember to update ``subdir.am`` to use
+the new variable.
+
+Do note that when fuzzing is enabled, ``SAN_FLAGS`` gains
+``-fsanitize=fuzzer-no-link``; the result is that all daemons are instrumented
+for fuzzing but only the ones with ``LLVMFuzzerTestOneInput()`` actually get
+linked with libFuzzer.
+
+
+Targets
+-------
+
+A given daemon can have lots of different paths that are interesting to fuzz.
+There's not really a great way to handle this; most fuzzers assume the program
+has one entrypoint. The approach taken in FRR for multiple entrypoints is to
+control which path is taken within ``LLVMFuzzerTestOneInput()`` using
+``#ifdef`` and passing whatever controlling macro definition you want. Take a
+look at that function for the daemon you're interested in fuzzing, pick the
+target, add ``#define MY_TARGET 1`` somewhere before the ``#ifdef`` switch,
+and recompile. A sketch of this shape is shown below.
+
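+The rough shape of such a multi-target entrypoint (the macro names here are
+made up; check the actual function in the fuzzing branch for the real ones):
+
+.. code-block:: c
+
+   int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
+   {
+   #ifdef FUZZ_PACKET_PARSER
+           /* hand data/size to the packet parsing path */
+   #elif defined(FUZZ_ZAPI)
+           /* hand data/size to the ZAPI message parser */
+   #endif
+           return 0;
+   }
+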
+.. list-table:: Fuzzing Targets
+
+ * - Daemon
+ - Target
+ - Fuzzers
+ * - bgpd
+ - packet parser
+ - libfuzzer, afl
+ * - ospfd
+ - packet parser
+ - libfuzzer, afl
+ * - pimd
+ - packet parser
+ - libfuzzer, afl
+ * - vrrpd
+ - packet parser
+ - libfuzzer, afl
+ * - vrrpd
+ - zapi parser
+ - libfuzzer, afl
+ * - zebra
+ - netlink
+ - libfuzzer, afl
+ * - zebra
+ - zserv / zapi
+ - libfuzzer, afl
+
+
+Fuzzer Notes
+------------
+
+Some interesting seed corpuses for various daemons are available `here
+<https://github.com/qlyoung/frr-fuzz/tree/master/samples>`_.
+
+For libFuzzer, you need to pass ``-rss_limit_mb=0`` if you are fuzzing with
+ASAN enabled, as you should.
+
+For AFL, afl++ is strongly recommended; afl proper isn't really maintained
+anymore.
diff --git a/doc/developer/grpc.rst b/doc/developer/grpc.rst
new file mode 100644
index 0000000..4e81adf
--- /dev/null
+++ b/doc/developer/grpc.rst
@@ -0,0 +1,524 @@
+.. _grpc-dev:
+
+***************
+Northbound gRPC
+***************
+
+To enable gRPC support one needs to add `--enable-grpc` when running
+`configure`. Additionally, when launching each daemon one needs to request
+that the gRPC module be loaded and specify which port to bind to. This can be
+done by adding `-M grpc:<port>` to the daemon's CLI arguments.
+
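+For example, `bgpd` could be started with the gRPC module listening on port
+50051 like this (the install path is illustrative)::
+
+   /usr/lib/frr/bgpd -M grpc:50051
+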
+Currently there is no gRPC "routing" so you will need to bind your gRPC
+`channel` to the particular daemon's gRPC port to interact with that daemon's
+gRPC northbound interface.
+
+The minimum version of gRPC known to work is 1.16.1.
+
+.. _grpc-languages-bindings:
+
+Programming Language Bindings
+=============================
+
+The programming language bindings supported by gRPC are listed here:
+https://grpc.io/docs/languages/
+
+After picking a programming language that supports gRPC bindings, the
+next step is to generate the FRR northbound bindings. To generate the
+northbound bindings you'll need the programming language binding
+generator tools and those are language specific.
+
+C++ Example
+-----------
+
+The next sections will use C++ as an example for accessing FRR
+northbound through gRPC.
+
+.. _grpc-c++-generate:
+
+Generating C++ FRR Bindings
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Generating FRR northbound bindings for C++ example:
+
+::
+
+ # Install gRPC (e.g., on Ubuntu 20.04)
+ sudo apt-get install libgrpc++-dev libgrpc-dev
+
+ mkdir /tmp/frr-cpp
+ cd grpc
+
+ protoc --cpp_out=/tmp/frr-cpp \
+ --grpc_out=/tmp/frr-cpp \
+ -I $(pwd) \
+ --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` \
+ frr-northbound.proto
+
+
+.. _grpc-c++-if-sample:
+
+Using C++ To Get Version and Interfaces State
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Below is a sample program to print all discovered interfaces.
+
+::
+
+ // test.cpp
+ #include <cassert>
+ #include <iostream>
+ #include <sstream>
+ #include <string>
+ #include <grpc/grpc.h>
+ #include <grpcpp/create_channel.h>
+ #include "frr-northbound.pb.h"
+ #include "frr-northbound.grpc.pb.h"
+
+ int main() {
+ frr::GetRequest request;
+ frr::GetResponse reply;
+ grpc::ClientContext context;
+ grpc::Status status;
+
+ auto channel = grpc::CreateChannel("localhost:50051",
+ grpc::InsecureChannelCredentials());
+ auto stub = frr::Northbound::NewStub(channel);
+
+ request.set_type(frr::GetRequest::ALL);
+ request.set_encoding(frr::JSON);
+ request.set_with_defaults(true);
+ request.add_path("/frr-interface:lib");
+ auto stream = stub->Get(&context, request);
+
+ std::ostringstream ss;
+ while (stream->Read(&reply))
+ ss << reply.data().data() << std::endl;
+
+ status = stream->Finish();
+ assert(status.ok());
+ std::cout << "Interface Info:\n" << ss.str() << std::endl;
+ }
+
+Below is how to compile and run the program, with the example output:
+
+::
+
+ $ g++ -o test test.cpp frr-northbound.grpc.pb.cc frr-northbound.pb.cc -lgrpc++ -lprotobuf
+ $ ./test
+ Interface Info:
+ {
+ "frr-interface:lib": {
+ "interface": [
+ {
+ "name": "lo",
+ "vrf": "default",
+ "state": {
+ "if-index": 1,
+ "mtu": 0,
+ "mtu6": 65536,
+ "speed": 0,
+ "metric": 0,
+ "phy-address": "00:00:00:00:00:00"
+ },
+ "frr-zebra:zebra": {
+ "state": {
+ "up-count": 0,
+ "down-count": 0,
+ "ptm-status": "disabled"
+ }
+ }
+ },
+ {
+ "name": "r1-eth0",
+ "vrf": "default",
+ "state": {
+ "if-index": 2,
+ "mtu": 1500,
+ "mtu6": 1500,
+ "speed": 10000,
+ "metric": 0,
+ "phy-address": "02:37:ac:63:59:b9"
+ },
+ "frr-zebra:zebra": {
+ "state": {
+ "up-count": 0,
+ "down-count": 0,
+ "ptm-status": "disabled"
+ }
+ }
+ }
+ ]
+ },
+ "frr-zebra:zebra": {
+ "mcast-rpf-lookup": "mrib-then-urib",
+ "workqueue-hold-timer": 10,
+ "zapi-packets": 1000,
+ "import-kernel-table": {
+ "distance": 15
+ },
+ "dplane-queue-limit": 200
+ }
+ }
+
+
+
+.. _grpc-python-example:
+
+Python Example
+--------------
+
+The next sections will use Python as an example for writing scripts to use
+the northbound.
+
+.. _grpc-python-generate:
+
+Generating Python FRR Bindings
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Generating FRR northbound bindings for Python example:
+
+::
+
+ # Install python3 virtual environment capability e.g.,
+ sudo apt-get install python3-venv
+
+ # Create a virtual environment for python grpc and activate
+ python3 -m venv venv-grpc
+ source venv-grpc/bin/activate
+
+ # Install grpc requirements
+ pip install grpcio grpcio-tools
+
+ mkdir /tmp/frr-python
+ cd grpc
+
+ python3 -m grpc_tools.protoc \
+ --python_out=/tmp/frr-python \
+ --grpc_python_out=/tmp/frr-python \
+ -I $(pwd) \
+ frr-northbound.proto
+
+.. _grpc-python-if-sample:
+
+Using Python To Get Capabilities and Interfaces State
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Below is a sample script to print the capabilities and all discovered
+interfaces. This demonstrates the two different RPC result types one gets
+from gRPC: unary (`GetCapabilities`) and streaming (`Get`), the latter being
+used for the interface state.
+
+::
+
+ import grpc
+ import frr_northbound_pb2
+ import frr_northbound_pb2_grpc
+
+ channel = grpc.insecure_channel('localhost:50051')
+ stub = frr_northbound_pb2_grpc.NorthboundStub(channel)
+
+ # Print Capabilities
+ request = frr_northbound_pb2.GetCapabilitiesRequest()
+ response = stub.GetCapabilities(request)
+ print(response)
+
+ # Print Interface State and Config
+ request = frr_northbound_pb2.GetRequest()
+ request.path.append("/frr-interface:lib")
+ request.type=frr_northbound_pb2.GetRequest.ALL
+ request.encoding=frr_northbound_pb2.XML
+
+ for r in stub.Get(request):
+ print(r.data.data)
+
+The previous script will output something like:
+
+::
+
+ frr_version: "7.7-dev-my-manual-build"
+ rollback_support: true
+ supported_modules {
+ name: "frr-filter"
+ organization: "FRRouting"
+ revision: "2019-07-04"
+ }
+ supported_modules {
+ name: "frr-interface"
+ organization: "FRRouting"
+ revision: "2020-02-05"
+ }
+ [...]
+ supported_encodings: JSON
+ supported_encodings: XML
+
+ <lib xmlns="http://frrouting.org/yang/interface">
+ <interface>
+ <name>lo</name>
+ <vrf>default</vrf>
+ <state>
+ <if-index>1</if-index>
+ <mtu>0</mtu>
+ <mtu6>65536</mtu6>
+ <speed>0</speed>
+ <metric>0</metric>
+ <phy-address>00:00:00:00:00:00</phy-address>
+ </state>
+ <zebra xmlns="http://frrouting.org/yang/zebra">
+ <state>
+ <up-count>0</up-count>
+ <down-count>0</down-count>
+ </state>
+ </zebra>
+ </interface>
+ <interface>
+ <name>r1-eth0</name>
+ <vrf>default</vrf>
+ <state>
+ <if-index>2</if-index>
+ <mtu>1500</mtu>
+ <mtu6>1500</mtu6>
+ <speed>10000</speed>
+ <metric>0</metric>
+ <phy-address>f2:62:2e:f3:4c:e4</phy-address>
+ </state>
+ <zebra xmlns="http://frrouting.org/yang/zebra">
+ <state>
+ <up-count>0</up-count>
+ <down-count>0</down-count>
+ </state>
+ </zebra>
+ </interface>
+ </lib>
+
+.. _grpc-ruby-example:
+
+Ruby Example
+------------
+
+The next sections will use Ruby as an example for writing scripts to use
+the northbound.
+
+.. _grpc-ruby-generate:
+
+Generating Ruby FRR Bindings
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Generating FRR northbound bindings for Ruby example:
+
+::
+
+ # Install the required gems:
+ # - grpc: the gem that will talk with FRR's gRPC plugin.
+ # - grpc-tools: the gem that provides the code generator.
+ gem install grpc
+ gem install grpc-tools
+
+ # Create your project/scripts directory:
+ mkdir /tmp/frr-ruby
+
+ # Go to FRR's grpc directory:
+ cd grpc
+
+ # Generate the ruby bindings:
+ grpc_tools_ruby_protoc \
+ --ruby_out=/tmp/frr-ruby \
+ --grpc_out=/tmp/frr-ruby \
+ frr-northbound.proto
+
+
+.. _grpc-ruby-if-sample:
+
+Using Ruby To Get Interfaces State
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Here is a sample script to print all interfaces FRR discovered:
+
+::
+
+ require 'frr-northbound_services_pb'
+
+ # Create the connection with FRR's gRPC:
+ stub = Frr::Northbound::Stub.new('localhost:50051', :this_channel_is_insecure)
+
+ # Create a new state request to get interface state:
+ request = Frr::GetRequest.new
+ request.type = :STATE
+ request.path.push('/frr-interface:lib')
+
+ # Ask FRR.
+ response = stub.get(request)
+
+ # Print the response.
+ response.each do |result|
+ result.data.data.each_line do |line|
+ puts line
+ end
+ end
+
+
+.. note::
+
+ The generated files will assume that they are in the search path (e.g.
+ inside a gem), so you'll need to either edit them to use ``require_relative``
+ or tell Ruby where to look for them. For simplicity we'll use ``-I .`` to
+ tell Ruby they are in the current directory.
+
+
+The previous script will output something like this:
+
+::
+
+ $ cd /tmp/frr-ruby
+ # Add `-I.` so ruby finds the FRR generated file locally.
+ $ ruby -I. interface.rb
+ {
+ "frr-interface:lib": {
+ "interface": [
+ {
+ "name": "eth0",
+ "vrf": "default",
+ "state": {
+ "if-index": 2,
+ "mtu": 1500,
+ "mtu6": 1500,
+ "speed": 1000,
+ "metric": 0,
+ "phy-address": "11:22:33:44:55:66"
+ },
+ "frr-zebra:zebra": {
+ "state": {
+ "up-count": 0,
+ "down-count": 0
+ }
+ }
+ },
+ {
+ "name": "lo",
+ "vrf": "default",
+ "state": {
+ "if-index": 1,
+ "mtu": 0,
+ "mtu6": 65536,
+ "speed": 0,
+ "metric": 0,
+ "phy-address": "00:00:00:00:00:00"
+ },
+ "frr-zebra:zebra": {
+ "state": {
+ "up-count": 0,
+ "down-count": 0
+ }
+ }
+ }
+ ]
+ }
+ }
+
+
+.. _grpc-ruby-bfd-profile-sample:
+
+Using Ruby To Create BFD Profiles
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In this example you'll learn how to edit configuration using JSON
+and programmatic (XPath) format.
+
+::
+
+ require 'frr-northbound_services_pb'
+
+ # Create the connection with FRR's gRPC:
+ stub = Frr::Northbound::Stub.new('localhost:50051', :this_channel_is_insecure)
+
+ # Create a new candidate configuration change.
+ new_candidate = stub.create_candidate(Frr::CreateCandidateRequest.new)
+
+ # Use JSON to configure.
+ request = Frr::LoadToCandidateRequest.new
+ request.candidate_id = new_candidate.candidate_id
+ request.type = :MERGE
+ request.config = Frr::DataTree.new
+ request.config.encoding = :JSON
+ request.config.data = <<-EOJ
+ {
+ "frr-bfdd:bfdd": {
+ "bfd": {
+ "profile": [
+ {
+ "name": "test-prof",
+ "detection-multiplier": 4,
+ "required-receive-interval": 800000
+ }
+ ]
+ }
+ }
+ }
+ EOJ
+
+ # Load configuration to candidate.
+ stub.load_to_candidate(request)
+
+ # Commit candidate.
+ stub.commit(
+ Frr::CommitRequest.new(
+ candidate_id: new_candidate.candidate_id,
+ phase: :ALL,
+ comment: 'create test-prof'
+ )
+ )
+
+ #
+ # Now lets delete the previous profile and create a new one.
+ #
+
+ # Create a new candidate configuration change.
+ new_candidate = stub.create_candidate(Frr::CreateCandidateRequest.new)
+
+ # Edit the configuration candidate.
+ request = Frr::EditCandidateRequest.new
+ request.candidate_id = new_candidate.candidate_id
+
+ # Delete previously created profile.
+ request.delete.push(
+ Frr::PathValue.new(
+ path: "/frr-bfdd:bfdd/bfd/profile[name='test-prof']",
+ )
+ )
+
+ # Add new profile with two configurations.
+ request.update.push(
+ Frr::PathValue.new(
+ path: "/frr-bfdd:bfdd/bfd/profile[name='test-prof-2']/detection-multiplier",
+ value: 5.to_s
+ )
+ )
+ request.update.push(
+ Frr::PathValue.new(
+ path: "/frr-bfdd:bfdd/bfd/profile[name='test-prof-2']/desired-transmission-interval",
+ value: 900_000.to_s
+ )
+ )
+
+ # Modify the candidate.
+ stub.edit_candidate(request)
+
+ # Commit the candidate configuration.
+ stub.commit(
+ Frr::CommitRequest.new(
+ candidate_id: new_candidate.candidate_id,
+ phase: :ALL,
+ comment: 'replace test-prof with test-prof-2'
+ )
+ )
+
+
+And here is the new FRR configuration:
+
+::
+
+ $ sudo vtysh -c 'show running-config'
+ ...
+ bfd
+ profile test-prof-2
+ detect-multiplier 5
+ transmit-interval 900
+ !
+ !
diff --git a/doc/developer/hooks.rst b/doc/developer/hooks.rst
new file mode 100644
index 0000000..b37a4ae
--- /dev/null
+++ b/doc/developer/hooks.rst
@@ -0,0 +1,171 @@
+.. highlight:: c
+
+Hooks
+=====
+
+Libfrr provides type-safe subscribable hook points where other pieces of
+code can add one or more callback functions. "type-safe" in this case
+applies to the function pointers used for subscriptions. The
+implementations checks (at compile-time) whether a callback to be added has
+the appropriate function signature (parameters) for the hook.
+
+Example:
+
+.. code-block:: c
+ :caption: mydaemon.h
+
+ #include "hook.h"
+ DECLARE_HOOK(some_update_event, (struct eventinfo *info), (info));
+
+.. code-block:: c
+ :caption: mydaemon.c
+
+ #include "mydaemon.h"
+ DEFINE_HOOK(some_update_event, (struct eventinfo *info), (info));
+ ...
+ hook_call(some_update_event, info);
+
+.. code-block:: c
+ :caption: mymodule.c
+
+ #include "mydaemon.h"
+ static int event_handler(struct eventinfo *info);
+ ...
+ hook_register(some_update_event, event_handler);
+
+Do not use parameter names starting with "hook"; these can collide with
+names used by the hook code itself.
+
+
+Return values
+-------------
+
+Callbacks to be placed on hooks always return "int" for now; hook_call will
+sum up the return values from each called function. (The default is 0 if no
+callbacks are registered.)
+
+There are no pre-defined semantics for the value, in most cases it is
+ignored. For success/failure indication, 0 should be success, and
+handlers should make sure to only return 0 or 1 (not -1 or other values).
+
+There is no built-in way to abort executing a chain after a failure of one
+of the callbacks. If this is needed, the hook can use an extra
+``bool *aborted`` argument.
+
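+A sketch of this pattern (the hook and type names are illustrative):
+
+.. code-block:: c
+
+   DECLARE_HOOK(my_check, (struct request *req, bool *aborted),
+                (req, aborted));
+
+   /* in one of the registered callbacks: */
+   static int my_check_cb(struct request *req, bool *aborted)
+   {
+           if (*aborted)
+                   return 0;       /* an earlier callback already failed */
+           if (!request_is_valid(req)) {
+                   *aborted = true;
+                   return 1;
+           }
+           return 0;
+   }
+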
+
+Priorities
+----------
+
+Hooks support a "priority" value for ordering registered calls
+relative to each other. The priority is a signed integer where lower
+values are called earlier. There are also "Koohs", which are hooks with
+reverse priority ordering (for cleanup/deinit hooks, so you can use the
+same priority value).
+
+Recommended priority value ranges are:
+
+======================== ===================================================
+Range Usage
+------------------------ ---------------------------------------------------
+ -999 ... 0 ... 999 main executable / daemon, or library
+
+-1999 ... -1000 modules registering calls that should run before
+ the daemon's bits
+
+1000 ... 1999 modules' calls that should run after daemon's
+ (includes default value: 1000)
+======================== ===================================================
+
+Note: the default value is 1000, based on the following 2 expectations:
+
+- most hook_register() usage will be in loadable modules
+- usage of hook_register() in the daemon itself may need relative ordering
+ to itself, making an explicit value the expected case
+
+The priority value is passed as an extra argument to hook_register_prio() /
+hook_register_arg_prio(). Whether a hook runs in reverse is determined
+solely by the code defining / calling the hook. (DECLARE_KOOH is actually
+the same thing as DECLARE_HOOK, it's just there to make it obvious.)
+
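+For example, reusing the ``some_update_event`` hook from the introduction, a
+module callback that must run after the daemon's own handlers could be
+registered as:
+
+.. code-block:: c
+
+   /* priority 1500: after the daemon's handlers (-999..999) and after
+    * other modules using the default priority of 1000 */
+   hook_register_prio(some_update_event, 1500, event_handler);
+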
+
+Definition
+----------
+
+.. c:macro:: DECLARE_HOOK(name, arglist, passlist)
+.. c:macro:: DECLARE_KOOH(name, arglist, passlist)
+
+ :param name: Name of the hook to be defined
+ :param arglist: Function definition style parameter list in parentheses.
+ :param passlist: List of the same parameters without their types.
+
+ Note: the second and third macro args must be the hook function's
+ parameter list, with the same names for each parameter. The second
+ macro arg is with types (used for defining things), the third arg is
+ just the names (used for passing along parameters).
+
+ This macro must be placed in a header file; this header file must be
+ included to register a callback on the hook.
+
+ Examples:
+
+ .. code-block:: c
+
+ DECLARE_HOOK(foo, (), ());
+ DECLARE_HOOK(bar, (int arg), (arg));
+ DECLARE_HOOK(baz, (const void *x, in_addr_t y), (x, y));
+
+.. c:macro:: DEFINE_HOOK(name, arglist, passlist)
+
+ Implements a hook. Each ``DECLARE_HOOK`` must be accompanied by
+ exactly one ``DEFINE_HOOK``, which needs to be placed in a source file.
+ **The hook can only be called from this source file.** This is intentional
+ to avoid overloading and/or misusing hooks for distinct purposes.
+
+ The compiled source file will include a global symbol with the name of the
+ hook prefixed by `_hook_`. Trying to register a callback for a hook that
+ doesn't exist will therefore result in a linker error, or a module
+ load-time error for dynamic modules.
+
+.. c:macro:: DEFINE_KOOH(name, arglist, passlist)
+
+ Same as ``DEFINE_HOOK``, but the sense of priorities / order of callbacks
+ is reversed. This should be used for cleanup hooks.
+
+.. c:function:: int hook_call(name, ...)
+
+ Calls the specified named hook. Parameters to the hook are passed right
+ after the hook name, e.g.:
+
+ .. code-block:: c
+
+ hook_call(foo);
+ hook_call(bar, 0);
+ hook_call(baz, NULL, INADDR_ANY);
+
+ Returns the sum of return values from all callbacks. The ``DEFINE_HOOK``
+ statement for the hook must be placed in the file before any ``hook_call``
+ use of the hook.
+
+
+Callback registration
+---------------------
+
+.. c:function:: void hook_register(name, int (*callback)(...))
+.. c:function:: void hook_register_prio(name, int priority, int (*callback)(...))
+.. c:function:: void hook_register_arg(name, int (*callback)(void *arg, ...), void *arg)
+.. c:function:: void hook_register_arg_prio(name, int priority, int (*callback)(void *arg, ...), void *arg)
+
+ Register a callback with a hook. If the caller needs to pass an extra
+ argument to the callback, the _arg variant can be used and the extra
+ parameter will be passed as first argument to the callback. There is no
+ typechecking for this argument.
+
+ The priority value is used as described above. The variants without a
+ priority parameter use 1000 as priority value.
+
+.. c:function:: void hook_unregister(name, int (*callback)(...))
+.. c:function:: void hook_unregister_arg(name, int (*callback)(void *arg, ...), void *arg)
+
+ Removes a previously registered callback from a hook. Note that there
+ is no _prio variant of these calls. The priority value is only used during
+ registration.
diff --git a/doc/developer/images/PCEPlib_design.jpg b/doc/developer/images/PCEPlib_design.jpg
new file mode 100644
index 0000000..41aada3
--- /dev/null
+++ b/doc/developer/images/PCEPlib_design.jpg
Binary files differ
diff --git a/doc/developer/images/PCEPlib_internal_deps.jpg b/doc/developer/images/PCEPlib_internal_deps.jpg
new file mode 100644
index 0000000..8380021
--- /dev/null
+++ b/doc/developer/images/PCEPlib_internal_deps.jpg
Binary files differ
diff --git a/doc/developer/images/PCEPlib_socket_comm.jpg b/doc/developer/images/PCEPlib_socket_comm.jpg
new file mode 100644
index 0000000..3d62a46
--- /dev/null
+++ b/doc/developer/images/PCEPlib_socket_comm.jpg
Binary files differ
diff --git a/doc/developer/images/PCEPlib_threading_model.jpg b/doc/developer/images/PCEPlib_threading_model.jpg
new file mode 100644
index 0000000..afe91c2
--- /dev/null
+++ b/doc/developer/images/PCEPlib_threading_model.jpg
Binary files differ
diff --git a/doc/developer/images/PCEPlib_threading_model_frr_infra.jpg b/doc/developer/images/PCEPlib_threading_model_frr_infra.jpg
new file mode 100644
index 0000000..5648a9d
--- /dev/null
+++ b/doc/developer/images/PCEPlib_threading_model_frr_infra.jpg
Binary files differ
diff --git a/doc/developer/images/PCEPlib_timers.jpg b/doc/developer/images/PCEPlib_timers.jpg
new file mode 100644
index 0000000..a178ee9
--- /dev/null
+++ b/doc/developer/images/PCEPlib_timers.jpg
Binary files differ
diff --git a/doc/developer/include-compile.rst b/doc/developer/include-compile.rst
new file mode 100644
index 0000000..b98d237
--- /dev/null
+++ b/doc/developer/include-compile.rst
@@ -0,0 +1,30 @@
+Clone the FRR git repo and use the included ``configure`` script to configure
+FRR's build time options to your liking. The full option listing can be
+obtained by running ``./configure -h``. The options shown below are examples.
+
+.. code-block:: console
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --prefix=/usr \
+ --includedir=\${prefix}/include \
+ --bindir=\${prefix}/bin \
+ --sbindir=\${prefix}/lib/frr \
+ --libdir=\${prefix}/lib/frr \
+ --libexecdir=\${prefix}/lib/frr \
+ --localstatedir=/var/run/frr \
+ --sysconfdir=/etc/frr \
+ --with-moduledir=\${prefix}/lib/frr/modules \
+ --enable-configfile-mask=0640 \
+ --enable-logfile-mask=0640 \
+ --enable-snmp=agentx \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --with-pkg-git-version \
+ --with-pkg-extra-version=-MyOwnFRRVersion
+ make
+ sudo make install
diff --git a/doc/developer/index.rst b/doc/developer/index.rst
new file mode 100644
index 0000000..bd794b1
--- /dev/null
+++ b/doc/developer/index.rst
@@ -0,0 +1,26 @@
+FRRouting Developer's Guide
+===========================
+
+.. toctree::
+ :maxdepth: 2
+
+ workflow
+ checkpatch
+ building
+ packaging
+ process-architecture
+ library
+ fuzzing
+ tracing
+ testing
+ mgmtd-dev
+ bgpd
+ fpm
+ grpc
+ ospf
+ zebra
+ vtysh
+ path
+ pceplib
+ link-state
+ northbound/northbound
diff --git a/doc/developer/ldpd-basic-test-setup.md b/doc/developer/ldpd-basic-test-setup.md
new file mode 100644
index 0000000..b25a2b6
--- /dev/null
+++ b/doc/developer/ldpd-basic-test-setup.md
@@ -0,0 +1,681 @@
+## Topology
+
+The goal of this test is to verify that all the basic functionality
+of ldpd is working as expected, be it running on Linux or OpenBSD. In
+addition to that, more advanced features are also tested, like LDP
+sessions over IPv6, MD5 authentication and pseudowire signaling.
+
+In the topology below there are 3 PE routers, 3 CE routers and one P
+router (not attached to any customer site).
+
+All routers have IPv4 addresses and OSPF is used as the IGP. The
+three routers from the bottom of the picture, P, PE2 and PE3, are also
+configured for IPv6 (dual-stack) and static IPv6 routes are used to
+provide connectivity among them.
+
+The three CEs share the same VPLS membership. LDP is used to set up the
+LSPs among the PEs and to signal the pseudowires. MD5 authentication is
+used to protect all LDP sessions.
+
+```
+ CE1 172.16.1.1/24
+ +
+ |
+ +---+---+
+ | PE1 |
+ | IOS XE|
+ | |
+ +---+---+
+ |
+ | 10.0.1.0/24
+ |
+ +---+---+
+ | P |
+ +------+ IOS XR+------+
+ | | | |
+ | +-------+ |
+ 10.0.2.0/24 | | 10.0.3.0/24
+2001:db8:2::/64 | | 2001:db8:3::/64
+ | |
+ +---+---+ +---+---+
+ | PE2 | | PE3 |
+ |OpenBSD+-------------+ Linux |
+ | | | |
+ +---+---+ 10.0.4.0/24 +---+---+
+ | 2001:db8:4::/64 |
+ + +
+ 172.16.1.2/24 CE2 CE3 172.16.1.3/24
+```
+
+## Configuration
+
+#### Linux
+1 - Enable IPv4/v6 forwarding:
+```
+# sysctl -w net.ipv4.ip_forward=1
+# sysctl -w net.ipv6.conf.all.forwarding=1
+```
+
+2 - Enable MPLS forwarding:
+```
+# modprobe mpls-router
+# modprobe mpls-iptunnel
+# echo 100000 > /proc/sys/net/mpls/platform_labels
+# echo 1 > /proc/sys/net/mpls/conf/eth1/input
+# echo 1 > /proc/sys/net/mpls/conf/eth2/input
+```
+
+3 - Set up the interfaces:
+```
+# ip link add name lo1 type dummy
+# ip link set dev lo1 up
+# ip addr add 4.4.4.4/32 dev lo1
+# ip -6 addr add 4:4:4::4/128 dev lo1
+# ip link set dev eth1 up
+# ip addr add 10.0.4.4/24 dev eth1
+# ip -6 addr add 2001:db8:4::4/64 dev eth1
+# ip link set dev eth2 up
+# ip addr add 10.0.3.4/24 dev eth2
+# ip -6 addr add 2001:db8:3::4/64 dev eth2
+```
+
+4 - Set up the bridge and pseudowire interfaces:
+```
+# ip link add type bridge
+# ip link set dev bridge0 up
+# ip link set dev eth0 up
+# ip link set dev eth0 master bridge0
+# ip link add name mpw0 type dummy
+# ip link set dev mpw0 up
+# ip link set dev mpw0 master bridge0
+# ip link add name mpw1 type dummy
+# ip link set dev mpw1 up
+# ip link set dev mpw1 master bridge0
+```
+
+> NOTE: MPLS support in the Linux kernel is very recent and it still
+doesn't support pseudowire interfaces. We are using dummy interfaces here
+just to show what the VPLS configuration should look like in the future.
+
+5 - Add static IPv6 routes for the remote loopbacks:
+```
+# ip -6 route add 2:2:2::2/128 via 2001:db8:3::2
+# ip -6 route add 3:3:3::3/128 via 2001:db8:4::3
+```
+
+6 - Edit /etc/frr/ospfd.conf:
+```
+router ospf
+ network 4.4.4.4/32 area 0.0.0.0
+ network 10.0.3.4/24 area 0.0.0.0
+ network 10.0.4.4/24 area 0.0.0.0
+!
+```
+
+7 - Edit /etc/frr/ldpd.conf:
+```
+debug mpls ldp messages recv
+debug mpls ldp messages sent
+debug mpls ldp zebra
+!
+mpls ldp
+ router-id 4.4.4.4
+ dual-stack cisco-interop
+ neighbor 1.1.1.1 password opensourcerouting
+ neighbor 2.2.2.2 password opensourcerouting
+ neighbor 3.3.3.3 password opensourcerouting
+ !
+ address-family ipv4
+ discovery transport-address 4.4.4.4
+ label local advertise explicit-null
+ !
+ interface eth2
+ !
+ interface eth1
+ !
+ !
+ address-family ipv6
+ discovery transport-address 4:4:4::4
+ ttl-security disable
+ !
+ interface eth2
+ !
+ interface eth1
+ !
+ !
+!
+l2vpn ENG type vpls
+ bridge br0
+ member interface eth0
+ !
+ member pseudowire mpw0
+ neighbor lsr-id 1.1.1.1
+ pw-id 100
+ !
+ member pseudowire mpw1
+ neighbor lsr-id 3.3.3.3
+ neighbor address 3:3:3::3
+ pw-id 100
+ !
+!
+```
+
+> NOTE: We have to disable ttl-security under the ipv6 address-family
+in order to interoperate with the IOS-XR router. GTSM is mandatory for
+LDPv6 but the IOS-XR implementation is not RFC compliant in this regard.
+
+8 - Run zebra, ospfd and ldpd.
+
+#### OpenBSD
+1 - Enable IPv4/v6 forwarding:
+```
+# sysctl net.inet.ip.forwarding=1
+# sysctl net.inet6.ip6.forwarding=1
+```
+
+2 - Enable MPLS forwarding:
+```
+# ifconfig em2 10.0.2.3/24 mpls
+# ifconfig em3 10.0.4.3/24 mpls
+```
+
+3 - Set up the interfaces:
+```
+# ifconfig lo1 alias 3.3.3.3 netmask 255.255.255.255
+# ifconfig lo1 inet6 3:3:3::3/128
+# ifconfig em2 inet6 2001:db8:2::3/64
+# ifconfig em3 inet6 2001:db8:4::3/64
+```
+
+4 - Set up the bridge and pseudowire interfaces:
+```
+# ifconfig bridge0 create
+# ifconfig bridge0 up
+# ifconfig em1 up
+# ifconfig bridge0 add em1
+# ifconfig mpw0 create
+# ifconfig mpw0 up
+# ifconfig bridge0 add mpw0
+# ifconfig mpw1 create
+# ifconfig mpw1 up
+# ifconfig bridge0 add mpw1
+```
+
+5 - Add static IPv6 routes for the remote loopbacks:
+```
+# route -n add 4:4:4::4/128 2001:db8:4::4
+# route -n add 2:2:2::2/128 2001:db8:2::2
+```
+
+6 - Edit /etc/frr/ospfd.conf:
+```
+router ospf
+ network 10.0.2.3/24 area 0
+ network 10.0.4.3/24 area 0
+ network 3.3.3.3/32 area 0
+!
+```
+
+7 - Edit /etc/frr/ldpd.conf:
+```
+debug mpls ldp messages recv
+debug mpls ldp messages sent
+debug mpls ldp zebra
+!
+mpls ldp
+ router-id 3.3.3.3
+ dual-stack cisco-interop
+ neighbor 1.1.1.1 password opensourcerouting
+ neighbor 2.2.2.2 password opensourcerouting
+ neighbor 4.4.4.4 password opensourcerouting
+ !
+ address-family ipv4
+ discovery transport-address 3.3.3.3
+ label local advertise explicit-null
+ !
+ interface em3
+ !
+ interface em2
+ !
+ !
+ address-family ipv6
+ discovery transport-address 3:3:3::3
+ ttl-security disable
+ !
+ interface em3
+ !
+ interface em2
+ !
+ !
+!
+l2vpn ENG type vpls
+ bridge br0
+ member interface em1
+ !
+ member pseudowire mpw0
+ neighbor lsr-id 1.1.1.1
+ pw-id 100
+ !
+ member pseudowire mpw1
+ neighbor lsr-id 4.4.4.4
+ neighbor address 4:4:4::4
+ pw-id 100
+ !
+!
+```
+
+8 - Run zebra, ospfd and ldpd.
+
+#### Cisco routers
+CE1 (IOS):
+```
+interface FastEthernet0/0
+ ip address 172.16.1.1 255.255.255.0
+ !
+!
+```
+
+CE2 (IOS):
+```
+interface FastEthernet0/0
+ ip address 172.16.1.2 255.255.255.0
+ !
+!
+```
+
+CE3 (IOS):
+```
+interface FastEthernet0/0
+ ip address 172.16.1.3 255.255.255.0
+ !
+!
+```
+
+PE1 - IOS-XE (1):
+```
+mpls ldp neighbor 2.2.2.2 password opensourcerouting
+mpls ldp neighbor 3.3.3.3 password opensourcerouting
+mpls ldp neighbor 4.4.4.4 password opensourcerouting
+!
+l2vpn vfi context VFI
+ vpn id 1
+ member pseudowire2
+ member pseudowire1
+!
+bridge-domain 1
+ member GigabitEthernet1 service-instance 1
+ member vfi VFI
+!
+interface Loopback1
+ ip address 1.1.1.1 255.255.255.255
+!
+interface pseudowire1
+ encapsulation mpls
+ neighbor 3.3.3.3 100
+!
+interface pseudowire2
+ encapsulation mpls
+ neighbor 4.4.4.4 100
+!
+interface GigabitEthernet3
+ ip address 10.0.1.1 255.255.255.0
+ mpls ip
+!
+router ospf 1
+ network 0.0.0.0 255.255.255.255 area 0
+!
+```
+
+P - IOS-XR (2):
+```
+interface Loopback1
+ ipv4 address 2.2.2.2 255.255.255.255
+ ipv6 address 2:2:2::2/128
+!
+interface GigabitEthernet0/0/0/0
+ ipv4 address 10.0.1.2 255.255.255.0
+!
+interface GigabitEthernet0/0/0/1
+ ipv4 address 10.0.2.2 255.255.255.0
+ ipv6 address 2001:db8:2::2/64
+ ipv6 enable
+!
+interface GigabitEthernet0/0/0/2
+ ipv4 address 10.0.3.2 255.255.255.0
+ ipv6 address 2001:db8:3::2/64
+ ipv6 enable
+!
+router static
+ address-family ipv6 unicast
+ 3:3:3::3/128 2001:db8:2::3
+ 4:4:4::4/128 2001:db8:3::4
+ !
+!
+router ospf 1
+ router-id 2.2.2.2
+ address-family ipv4 unicast
+ area 0
+ interface Loopback1
+ !
+ interface GigabitEthernet0/0/0/0
+ !
+ interface GigabitEthernet0/0/0/1
+ !
+ interface GigabitEthernet0/0/0/2
+ !
+ !
+!
+mpls ldp
+ router-id 2.2.2.2
+ neighbor
+ 1.1.1.1:0 password clear opensourcerouting
+ 3.3.3.3:0 password clear opensourcerouting
+ 4.4.4.4:0 password clear opensourcerouting
+ !
+ address-family ipv4
+ !
+ address-family ipv6
+ discovery transport-address 2:2:2::2
+ !
+ interface GigabitEthernet0/0/0/0
+ address-family ipv4
+ !
+ !
+ interface GigabitEthernet0/0/0/1
+ address-family ipv4
+ !
+ address-family ipv6
+ !
+ !
+ interface GigabitEthernet0/0/0/2
+ address-family ipv4
+ !
+ address-family ipv6
+ !
+ !
+!
+```
+
+## Verification - Control Plane
+
+Using the CLI on the Linux box, the goal is to ensure that everything
+is working as expected.
+
+First, verify that all the required adjacencies and neighbor sessions
+were established:
+
+```
+linux# show mpls ldp discovery
+Local LDP Identifier: 4.4.4.4:0
+Discovery Sources:
+ Interfaces:
+ eth1: xmit/recv
+ LDP Id: 3.3.3.3:0, Transport address: 3.3.3.3
+ Hold time: 15 sec
+ LDP Id: 3.3.3.3:0, Transport address: 3:3:3::3
+ Hold time: 15 sec
+ eth2: xmit/recv
+ LDP Id: 2.2.2.2:0, Transport address: 2.2.2.2
+ Hold time: 15 sec
+ LDP Id: 2.2.2.2:0, Transport address: 2:2:2::2
+ Hold time: 15 sec
+ Targeted Hellos:
+ 4.4.4.4 -> 1.1.1.1: xmit/recv
+ LDP Id: 1.1.1.1:0, Transport address: 1.1.1.1
+ Hold time: 45 sec
+ 4:4:4::4 -> 3:3:3::3: xmit/recv
+ LDP Id: 3.3.3.3:0, Transport address: 3:3:3::3
+ Hold time: 45 sec
+
+linux# show mpls ldp neighbor
+Peer LDP Identifier: 1.1.1.1:0
+ TCP connection: 4.4.4.4:40921 - 1.1.1.1:646
+ Session Holdtime: 180 sec
+ State: OPERATIONAL; Downstream-Unsolicited
+ Up time: 00:06:02
+ LDP Discovery Sources:
+ IPv4:
+ Targeted Hello: 1.1.1.1
+
+Peer LDP Identifier: 2.2.2.2:0
+ TCP connection: 4:4:4::4:52286 - 2:2:2::2:646
+ Session Holdtime: 180 sec
+ State: OPERATIONAL; Downstream-Unsolicited
+ Up time: 00:06:02
+ LDP Discovery Sources:
+ IPv4:
+ Interface: eth2
+ IPv6:
+ Interface: eth2
+
+Peer LDP Identifier: 3.3.3.3:0
+ TCP connection: 4:4:4::4:60575 - 3:3:3::3:646
+ Session Holdtime: 180 sec
+ State: OPERATIONAL; Downstream-Unsolicited
+ Up time: 00:05:57
+ LDP Discovery Sources:
+ IPv4:
+ Interface: eth1
+ IPv6:
+ Targeted Hello: 3:3:3::3
+ Interface: eth1
+```
+
+Note that the neighbor sessions with the P and PE2 routers were established
+over IPv6, since this is the default behavior for dual-stack LSRs, as
+specified in RFC 7552. If desired, the **dual-stack transport-connection
+prefer ipv4** command can be used to establish these sessions over IPv4
+(the command should be applied on all routers).
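+
+For reference, on the FRR LSR this preference would be configured along these
+lines (the exact configuration node is an assumption here; the IOS-XE/IOS-XR
+boxes use their own equivalents of the command):
+```
+mpls ldp
+ dual-stack transport-connection prefer ipv4
+ !
+```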
+
+Now, verify that there's a remote label for each PE address:
+```
+linux# show mpls ldp binding
+1.1.1.1/32
+ Local binding: label: 20
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 imp-null
+ 2.2.2.2 24000
+ 3.3.3.3 20
+2.2.2.2/32
+ Local binding: label: 21
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 18
+ 2.2.2.2 imp-null
+ 3.3.3.3 21
+3.3.3.3/32
+ Local binding: label: 22
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 21
+ 2.2.2.2 24003
+ 3.3.3.3 imp-null
+4.4.4.4/32
+ Local binding: label: imp-null
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 22
+ 2.2.2.2 24001
+ 3.3.3.3 22
+10.0.1.0/24
+ Local binding: label: 23
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 imp-null
+ 2.2.2.2 imp-null
+ 3.3.3.3 23
+10.0.2.0/24
+ Local binding: label: 24
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 20
+ 2.2.2.2 imp-null
+ 3.3.3.3 imp-null
+10.0.3.0/24
+ Local binding: label: imp-null
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 19
+ 2.2.2.2 imp-null
+ 3.3.3.3 24
+10.0.4.0/24
+ Local binding: label: imp-null
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 1.1.1.1 23
+ 2.2.2.2 24002
+ 3.3.3.3 imp-null
+2:2:2::2/128
+ Local binding: label: 18
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 2.2.2.2 imp-null
+ 3.3.3.3 18
+3:3:3::3/128
+ Local binding: label: 19
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 2.2.2.2 24007
+4:4:4::4/128
+ Local binding: label: imp-null
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 2.2.2.2 24006
+ 3.3.3.3 19
+2001:db8:2::/64
+ Local binding: label: -
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 2.2.2.2 imp-null
+ 3.3.3.3 imp-null
+2001:db8:3::/64
+ Local binding: label: imp-null
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 2.2.2.2 imp-null
+2001:db8:4::/64
+ Local binding: label: imp-null
+ Remote bindings:
+ Peer Label
+ ----------------- ---------
+ 3.3.3.3 imp-null
+```
+
+Check if the pseudowires are up:
+```
+linux# show l2vpn atom vc
+Interface Peer ID VC ID Name Status
+--------- --------------- ---------- ---------------- ----------
+mpw1 3.3.3.3 100 ENG UP
+mpw0 1.1.1.1 100 ENG UP
+```
+
+Check the label bindings of the pseudowires:
+```
+linux# show l2vpn atom binding
+ Destination Address: 1.1.1.1, VC ID: 100
+ Local Label: 25
+ Cbit: 1, VC Type: Ethernet, GroupID: 0
+ MTU: 1500
+ Remote Label: 16
+ Cbit: 1, VC Type: Ethernet, GroupID: 0
+ MTU: 1500
+ Destination Address: 3.3.3.3, VC ID: 100
+ Local Label: 26
+ Cbit: 1, VC Type: Ethernet, GroupID: 0
+ MTU: 1500
+ Remote Label: 26
+ Cbit: 1, VC Type: Ethernet, GroupID: 0
+ MTU: 1500
+```
+
+## Verification - Data Plane
+
+Verify that all the exchanged label mappings were installed in zebra:
+```
+linux# show mpls table
+ Inbound Outbound
+ Label Type Nexthop Label
+-------- ------- --------------- --------
+ 17 LDP 2001:db8:3::2 3
+ 19 LDP 2001:db8:3::2 24005
+ 20 LDP 10.0.3.2 24000
+ 21 LDP 10.0.3.2 3
+ 22 LDP 10.0.3.2 24001
+ 23 LDP 10.0.3.2 3
+ 24 LDP 10.0.3.2 3
+ 25 LDP 10.0.3.2 3
+
+linux# show ip route ldp
+Codes: K - kernel route, C - connected, S - static, R - RIP,
+ O - OSPF, I - IS-IS, B - BGP, P - PIM, A - Babel, L - LDP,
+ > - selected route, * - FIB route
+
+L>* 1.1.1.1/32 [0/0] via 10.0.3.2, eth2 label 24000
+L>* 3.3.3.3/32 [0/0] via 10.0.3.2, eth2 label 24001
+```
+
+Verify that all the exchanged label mappings were installed in the kernel:
+```
+$ ip -M ro
+17 via inet6 2001:db8:3::2 dev eth2 proto zebra
+19 as to 24005 via inet6 2001:db8:3::2 dev eth2 proto zebra
+20 as to 24000 via inet 10.0.3.2 dev eth2 proto zebra
+21 via inet 10.0.3.2 dev eth2 proto zebra
+22 as to 24001 via inet 10.0.3.2 dev eth2 proto zebra
+23 via inet 10.0.3.2 dev eth2 proto zebra
+24 via inet 10.0.3.2 dev eth2 proto zebra
+25 via inet 10.0.3.2 dev eth2 proto zebra
+$
+$ ip route | grep mpls
+1.1.1.1 encap mpls 24000 via 10.0.3.2 dev eth2 proto zebra metric 20
+3.3.3.3 encap mpls 24001 via 10.0.3.2 dev eth2 proto zebra metric 20
+```
+
+Now ping PE1's loopback using lo1's address as a source address:
+```
+$ ping -c 5 -I 4.4.4.4 1.1.1.1
+PING 1.1.1.1 (1.1.1.1) from 4.4.4.4 : 56(84) bytes of data.
+64 bytes from 1.1.1.1: icmp_seq=1 ttl=253 time=3.02 ms
+64 bytes from 1.1.1.1: icmp_seq=2 ttl=253 time=3.13 ms
+64 bytes from 1.1.1.1: icmp_seq=3 ttl=253 time=3.19 ms
+64 bytes from 1.1.1.1: icmp_seq=4 ttl=253 time=3.07 ms
+64 bytes from 1.1.1.1: icmp_seq=5 ttl=253 time=3.27 ms
+
+--- 1.1.1.1 ping statistics ---
+5 packets transmitted, 5 received, 0% packet loss, time 4005ms
+rtt min/avg/max/mdev = 3.022/3.140/3.278/0.096 ms
+```
+
+Verify that the ICMP echo request packets are leaving with the MPLS
+label advertised by the P router. Also, verify that the ICMP echo reply
+packets are arriving with an explicit-null MPLS label:
+```
+# tcpdump -n -i eth2 mpls and icmp
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
+10:01:40.758771 MPLS (label 24000, exp 0, [S], ttl 64) IP 4.4.4.4 > 1.1.1.1: ICMP echo request, id 13370, seq 1, length 64
+10:01:40.761777 MPLS (label 0, exp 0, [S], ttl 254) IP 1.1.1.1 > 4.4.4.4: ICMP echo reply, id 13370, seq 1, length 64
+10:01:41.760343 MPLS (label 24000, exp 0, [S], ttl 64) IP 4.4.4.4 > 1.1.1.1: ICMP echo request, id 13370, seq 2, length 64
+10:01:41.763448 MPLS (label 0, exp 0, [S], ttl 254) IP 1.1.1.1 > 4.4.4.4: ICMP echo reply, id 13370, seq 2, length 64
+10:01:42.761758 MPLS (label 24000, exp 0, [S], ttl 64) IP 4.4.4.4 > 1.1.1.1: ICMP echo request, id 13370, seq 3, length 64
+10:01:42.764924 MPLS (label 0, exp 0, [S], ttl 254) IP 1.1.1.1 > 4.4.4.4: ICMP echo reply, id 13370, seq 3, length 64
+10:01:43.763193 MPLS (label 24000, exp 0, [S], ttl 64) IP 4.4.4.4 > 1.1.1.1: ICMP echo request, id 13370, seq 4, length 64
+10:01:43.766237 MPLS (label 0, exp 0, [S], ttl 254) IP 1.1.1.1 > 4.4.4.4: ICMP echo reply, id 13370, seq 4, length 64
+10:01:44.764552 MPLS (label 24000, exp 0, [S], ttl 64) IP 4.4.4.4 > 1.1.1.1: ICMP echo request, id 13370, seq 5, length 64
+10:01:44.767803 MPLS (label 0, exp 0, [S], ttl 254) IP 1.1.1.1 > 4.4.4.4: ICMP echo reply, id 13370, seq 5, length 64
+```
diff --git a/doc/developer/library.rst b/doc/developer/library.rst
new file mode 100644
index 0000000..2e36c25
--- /dev/null
+++ b/doc/developer/library.rst
@@ -0,0 +1,21 @@
+.. _libfrr:
+
+***************************
+Library Facilities (libfrr)
+***************************
+
+.. toctree::
+ :maxdepth: 2
+
+ memtypes
+ rcu
+ lists
+ logging
+ xrefs
+ locking
+ hooks
+ cli
+ modules
+ scripting
+
+
diff --git a/doc/developer/link-state.rst b/doc/developer/link-state.rst
new file mode 100644
index 0000000..aaa253d
--- /dev/null
+++ b/doc/developer/link-state.rst
@@ -0,0 +1,499 @@
+Link State API Documentation
+============================
+
+Introduction
+------------
+
+The Link State (LS) API aims to provide a set of structures and functions to
+build and manage a Traffic Engineering Database for the various FRR daemons.
+This API has been designed for several use cases:
+
+- BGP Link State (BGP-LS): where the BGP protocol needs to collect link state
+  information from the routing daemons (IS-IS and/or OSPF) to implement RFC7752
+- Path Computation Element (PCE): where path computation algorithms are based
+  on a Traffic Engineering Database
+- ReSerVation Protocol (RSVP): where signaling needs to know the Traffic
+ Engineering topology of the network in order to determine the path of
+ RSVP tunnels
+
+Architecture
+------------
+
+The main requirements of the various use cases are as follows:
+
+- Provide a set of data models and functions to ease Link State information
+  manipulation (storage, serialization, parsing ...)
+- Ease and normalize Link State information exchange between FRR daemons
+- Provide database structures for a Traffic Engineering Database (TED)
+
+To ease Link State understanding, FRR daemons have been classified into two
+categories:
+
+- **Consumer**: Daemons that consume Link State information, e.g. BGPd
+- **Producer**: Daemons that are able to collect Link State information and
+  send it to consumer daemons, e.g. OSPFd and IS-ISd
+
+The Zebra daemon, and more precisely the ZAPI message, is used to convey the
+Link State information between *producer* and *consumer*, but Zebra acts as a
+simple pass-through and does not store any Link State information. A new ZAPI
+**Opaque** message has been designed for that purpose.
+
+Consumer and producer daemons are each free to store Link State data or not,
+and to organise the information following the Traffic Engineering Database
+model provided by the API or any other data structure, e.g. hash, RB-tree ...
+
+Link State API
+--------------
+
+This is the low level API that allows any daemon to manipulate the Link State
+elements that are stored in the Link State Database.
+
+Data structures
+^^^^^^^^^^^^^^^
+
+Three types of Link State structure have been defined:
+
+.. c:struct:: ls_node
+
+ that groups all information related to a node
+
+.. c:struct:: ls_attributes
+
+ that groups all information related to a link
+
+.. c:struct:: ls_prefix
+
+ that groups all information related to a prefix
+
+These three types of structures are those handled by BGP-LS (see RFC7752) and
+suitable to describe a Traffic Engineering topology.
+
+Each structure, in addition to the specific parameters, embeds the node
+identifier which advertises the Link State, and a bit mask used as flags to
+indicate which parameters are valid, i.e. for which the value is valid and
+corresponds to Link State information conveyed by the routing protocol.
+
+.. c:struct:: ls_node_id
+
+ defines the Node identifier as router ID IPv4 address plus the area ID for
+ OSPF or the ISO System ID plus the IS-IS level for IS-IS.
+
+Functions
+^^^^^^^^^
+
+A set of functions is provided to create, delete and compare Link State
+Nodes, Attributes and Prefixes:
+
+.. c:function:: struct ls_node *ls_node_new(struct ls_node_id adv, struct in_addr router_id, struct in6_addr router6_id)
+.. c:function:: struct ls_attributes *ls_attributes_new(struct ls_node_id adv, struct in_addr local, struct in6_addr local6, uint32_t local_id)
+.. c:function:: struct ls_prefix *ls_prefix_new(struct ls_node_id adv, struct prefix p)
+
+   Create, respectively, a new Link State Node, Attribute or Prefix.
+   The structure is dynamically allocated. The Link State Node ID (adv) is
+   mandatory, and:
+
+ - at least one of IPv4 or IPv6 must be provided for the router ID
+ (router_id or router6_id) for Node
+ - at least one of local, local6 or local_id must be provided for Attribute
+ - prefix is mandatory for Link State Prefix.
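+
+   A minimal allocation / free sketch (``adv`` would be filled in from the
+   routing protocol in practice; the IPv6 router ID is left unset here):
+
+   .. code-block:: c
+
+      struct ls_node_id adv = {};     /* node identifier from OSPF/IS-IS */
+      struct in_addr rid = { .s_addr = htonl(0x0a000001) }; /* 10.0.0.1 */
+      struct in6_addr rid6 = {};      /* unset: the IPv4 router ID suffices */
+      struct ls_node *node;
+
+      node = ls_node_new(adv, rid, rid6);
+      if (node) {
+              /* ... set optional parameters and their validity flags ... */
+              ls_node_del(node);      /* frees the structure */
+      }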
+
+.. c:function:: void ls_node_del(struct ls_node *node)
+.. c:function:: void ls_attributes_del(struct ls_attributes *attr)
+.. c:function:: void ls_prefix_del(struct ls_prefix *pref)
+
+   Remove, respectively, a Link State Node, Attribute or Prefix.
+   The data structure is freed.
+
+.. c:function:: void ls_attributes_srlg_del(struct ls_attributes *attr)
+
+ Remove SRLGs attribute if defined. Data structure is freed.
+
+.. c:function:: int ls_node_same(struct ls_node *n1, struct ls_node *n2)
+.. c:function:: int ls_attributes_same(struct ls_attributes *a1, struct ls_attributes *a2)
+.. c:function:: int ls_prefix_same(struct ls_prefix *p1, struct ls_prefix *p2)
+
+   Check, respectively, whether two Link State Nodes, Attributes or Prefixes
+   are equal.
+ Note that these routines have the same return value sense as '==' (which is
+ different from a comparison).
+
+
+Link State TED
+--------------
+
+This is the high level API that provides functions to create, update, delete a
+Link State Database to build a Traffic Engineering Database (TED).
+
+Data Structures
+^^^^^^^^^^^^^^^
+
+Traffic Engineering is modeled as a graph in order to ease Path Computation
+algorithm implementation. Denoted **G(V, E)**, a graph is composed of a list of
+**Vertices (V)** which represent the network Nodes, and a list of **Edges (E)**
+which represent the Links. An additional list of **Prefixes (P)** is also
+added, attached to the *Vertex (V)* which advertises it.
+
+*Vertex (V)* contains the list of outgoing *Edges (E)* that connect this Vertex
+with its direct neighbors and the list of incoming *Edges (E)* that connect
+the direct neighbors to this Vertex. Indeed, the *Edge (E)* is unidirectional,
+thus, it is necessary to add 2 Edges to model a bidirectional relation between
+2 Vertices. Finally, the *Vertex (V)* contains a pointer to the corresponding
+Link State Node.
+
+*Edge (E)* contains the source and destination Vertex that this Edge
+is connecting and a pointer to the corresponding Link State Attributes.
+
+A unique Key is used to identify both Vertices and Edges within the Graph.
+
+
+::
+
+ -------------- --------------------------- --------------
+ | Connected |---->| Connected Edge Va to Vb |--->| Connected |
+ --->| Vertex | --------------------------- | Vertex |---->
+ | | | |
+ | - Key (Va) | | - Key (Vb) |
+ <---| - Vertex | --------------------------- | - Vertex |<----
+ | |<----| Connected Edge Vb to Va |<---| |
+ -------------- --------------------------- --------------
+
+
+Four data structures have been defined to implement the Graph model:
+
+.. c:struct:: ls_vertex
+.. c:struct:: ls_edge
+.. c:struct:: ls_subnet
+.. c:struct:: ls_ted
+
+TED stores Vertex, Edge and Subnet elements with an RB Tree structure.
+The Vertex key corresponds to the Router ID for OSPF and ISO System ID for
+IS-IS. The Edge key corresponds to the IPv4 address, the lowest 64 bits of
+the IPv6 address or the combination of the local & remote ID of the interface.
+The Subnet key corresponds to the Prefix address (v4 or v6).
+
+An additional status for Vertex, Edge and Subnet makes it possible to determine
+the state of the element in the TED: UNSET, NEW, UPDATE, DELETE, SYNC, ORPHAN.
+The normal state is SYNC. NEW, UPDATE and DELETE are temporary states while an
+element is being processed. UNSET is normally never used, and ORPHAN serves to
+identify elements that must be removed when the TED is cleaned.
+
+Vertex, Edges and Subnets management functions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. c:function:: struct ls_vertex *ls_vertex_add(struct ls_ted *ted, struct ls_node *node)
+.. c:function:: struct ls_edge *ls_edge_add(struct ls_ted *ted, struct ls_attributes *attributes)
+.. c:function:: struct ls_subnet *ls_subnet_add(struct ls_ted *ted, struct ls_prefix *pref)
+
+   Add, respectively, a new Vertex, Edge or Subnet to the Link State Database.
+   The Vertex, Edge or Subnet is created from, respectively, the Link State
+   Node, Attribute or Prefix structure. Data structures are dynamically
+   allocated.
+
+.. c:function:: struct ls_vertex *ls_vertex_update(struct ls_ted *ted, struct ls_node *node)
+.. c:function:: struct ls_edge *ls_edge_update(struct ls_ted *ted, struct ls_attributes *attributes)
+.. c:function:: struct ls_subnet *ls_subnet_update(struct ls_ted *ted, struct ls_prefix *pref)
+
+   Update, respectively, a Vertex, Edge or Subnet with, respectively, the Link
+   State Node, Attribute or Prefix. A new data structure is created if none
+   corresponds to the Link State Node, Attribute or Prefix. If the element
+   already exists in the TED, its associated Link State information is replaced
+   by the new one if they are different, and the old associated Link State
+   information is deleted and its memory freed.
+
+.. c:function:: void ls_vertex_del(struct ls_ted *ted, struct ls_vertex *vertex)
+.. c:function:: void ls_vertex_del_all(struct ls_ted *ted, struct ls_vertex *vertex)
+.. c:function:: void ls_edge_del(struct ls_ted *ted, struct ls_edge *edge)
+.. c:function:: void ls_edge_del_all(struct ls_ted *ted, struct ls_edge *edge)
+.. c:function:: void ls_subnet_del(struct ls_ted *ted, struct ls_subnet *subnet)
+.. c:function:: void ls_subnet_del_all(struct ls_ted *ted, struct ls_subnet *subnet)
+
+   Delete, respectively, a Link State Vertex, Edge or Subnet. With the simple
+   `_del()` form of the function the data structure is freed but not the
+   associated Link State information, while the `_del_all()` version also frees
+   the associated Link State information. The TED is not modified if the
+   Vertex, Edge or Subnet is NULL or not found in the database. Note that
+   references between Vertices, Edges and Subnets are removed first.
+
+.. c:function:: struct ls_vertex *ls_find_vertex_by_key(struct ls_ted *ted, const uint64_t key)
+.. c:function:: struct ls_vertex *ls_find_vertex_by_id(struct ls_ted *ted, struct ls_node_id id)
+
+ Find Vertex in the TED by its unique key or its Link State Node ID.
+ Return Vertex if found, NULL otherwise.
+
+.. c:function:: struct ls_edge *ls_find_edge_by_key(struct ls_ted *ted, const uint64_t key)
+.. c:function:: struct ls_edge *ls_find_edge_by_source(struct ls_ted *ted, struct ls_attributes *attributes);
+.. c:function:: struct ls_edge *ls_find_edge_by_destination(struct ls_ted *ted, struct ls_attributes *attributes);
+
+   Find an Edge in the Link State Database by its key, or by the source or
+   destination (local IPv4 or IPv6 address or local ID) information of the
+   Link State Attributes. Return the Edge if found, NULL otherwise.
+
+.. c:function:: struct ls_subnet *ls_find_subnet(struct ls_ted *ted, const struct prefix prefix)
+
+   Find a Subnet in the Link State Database by its key, i.e. the associated
+   prefix. Return the Subnet if found, NULL otherwise.
+
+.. c:function:: int ls_vertex_same(struct ls_vertex *v1, struct ls_vertex *v2)
+.. c:function:: int ls_edge_same(struct ls_edge *e1, struct ls_edge *e2)
+.. c:function:: int ls_subnet_same(struct ls_subnet *s1, struct ls_subnet *s2)
+
+   Check, respectively, whether two Vertices, Edges or Subnets are equal.
+   Note that these routines have the same return value sense as '=='
+ (which is different from a comparison).
+
+
+TED management functions
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some helper functions are also provided to ease TED management:
+
+.. c:function:: struct ls_ted *ls_ted_new(const uint32_t key, char *name, uint32_t asn)
+
+   Create a new Link State Database. The key must be different from 0.
+   The name may be NULL, and the AS number set to 0 if unknown.
+
+.. c:function:: void ls_ted_del(struct ls_ted *ted)
+.. c:function:: void ls_ted_del_all(struct ls_ted *ted)
+
+   Delete an existing Link State Database. Vertices, Edges and Subnets are not
+   removed by the ls_ted_del() function, while they are by ls_ted_del_all().
+
+.. c:function:: void ls_connect_vertices(struct ls_vertex *src, struct ls_vertex *dst, struct ls_edge *edge)
+
+   Connect the Source and Destination Vertices by the given Edge. Only
+   non-NULL source and destination vertices are connected.
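+
+   Putting the management functions together, a sketch that builds a tiny
+   two-vertex TED (``node_a``, ``node_b`` and the two directional
+   ``attr_a_to_b`` / ``attr_b_to_a`` structures are assumed to have been
+   obtained from the IGP):
+
+   .. code-block:: c
+
+      struct ls_ted *ted = ls_ted_new(1, "example", 65000);
+      struct ls_vertex *va = ls_vertex_add(ted, node_a);
+      struct ls_vertex *vb = ls_vertex_add(ted, node_b);
+      struct ls_edge *ab = ls_edge_add(ted, attr_a_to_b);
+      struct ls_edge *ba = ls_edge_add(ted, attr_b_to_a);
+
+      /* Edges are unidirectional: connect one Edge per direction */
+      ls_connect_vertices(va, vb, ab);
+      ls_connect_vertices(vb, va, ba);
+
+      /* tear down everything, including the Link State information */
+      ls_ted_del_all(ted);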
+
+.. c:function:: void ls_connect(struct ls_vertex *vertex, struct ls_edge *edge, bool source)
+.. c:function:: void ls_disconnect(struct ls_vertex *vertex, struct ls_edge *edge, bool source)
+
+ Connect / Disconnect Link State Edge to the Link State Vertex which could be
+ a Source (source = true) or a Destination (source = false) Vertex.
+
+.. c:function:: void ls_disconnect_edge(struct ls_edge *edge)
+
+ Disconnect Link State Edge from both Source and Destination Vertex.
+ Note that Edge is not removed but its status is marked as ORPHAN.
+
+.. c:function:: void ls_vertex_clean(struct ls_ted *ted, struct ls_vertex *vertex, struct zclient *zclient)
+
+ Clean Vertex structure by removing all Edges and Subnets marked as ORPHAN
+ from this vertex. Corresponding Link State Update message is sent if zclient
+ parameter is not NULL. Note that associated Link State Attribute and Prefix
+ are also removed and memory freed.
+
+.. c:function:: void ls_ted_clean(struct ls_ted *ted)
+
+   Clean the Link State Database by removing all Vertices, Edges and Subnets
+   marked as ORPHAN. Note that the associated Link State Nodes, Attributes and
+   Prefixes are removed too.
+
+.. c:function:: void ls_show_vertex(struct ls_vertex *vertex, struct vty *vty, struct json_object *json, bool verbose)
+.. c:function:: void ls_show_edge(struct ls_edge *edge, struct vty *vty, struct json_object *json, bool verbose)
+.. c:function:: void ls_show_subnet(struct ls_subnet *subnet, struct vty *vty, struct json_object *json, bool verbose)
+.. c:function:: void ls_show_vertices(struct ls_ted *ted, struct vty *vty, struct json_object *json, bool verbose)
+.. c:function:: void ls_show_edges(struct ls_ted *ted, struct vty *vty, struct json_object *json, bool verbose)
+.. c:function:: void ls_show_subnets(struct ls_ted *ted, struct vty *vty, struct json_object *json, bool verbose)
+.. c:function:: void ls_show_ted(struct ls_ted *ted, struct vty *vty, struct json_object *json, bool verbose)
+
+   Respectively, show the Vertex, Edge or Subnet provided as parameter, all
+   Vertices, all Edges, all Subnets, or the whole TED. Output can be made more
+   detailed with the verbose parameter for VTY output. If both JSON and VTY
+   output are specified, JSON takes precedence over VTY.
+
+.. c:function:: void ls_dump_ted(struct ls_ted *ted)
+
+ Dump TED information to the current logging output.
+
+Link State Messages
+-------------------
+
+This part of the API provides functions and data structure to ease the
+communication between the *Producer* and *Consumer* daemons.
+
+Communications principles
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The recent ZAPI Opaque Message is used to exchange Link State data between
+daemons. For that purpose, the Link State API provides new functions to
+serialize and parse Link State information through the ZAPI Opaque message. A
+dedicated flag, named ZAPI_OPAQUE_FLAG_UNICAST, allows daemons to send a
+unicast or a multicast Opaque message, and is used as follows for the Link
+State exchange:
+
+- Multicast: To send data updates to all daemons that have subscribed to the
+  Link State Update message
+- Unicast: To send initial Link State information from a particular daemon. All
+  data are sent only to the daemon that requested Link State Synchronisation
+
+Figure 1 below illustrates the ZAPI Opaque message exchange between a
+*Producer* (an IGP like OSPF or IS-IS) and a *Consumer* (e.g. BGP). The
+message sequences are as follows:
+
+- First, both *Producer* and *Consumer* must register for their respective ZAPI
+  Opaque Messages: **Link State Sync** for the *Producer*, in order to receive
+  Database synchronisation requests from a *Consumer*, and **Link State
+  Update** for the *Consumer*, in order to receive any Link State update from
+  a *Producer*. These register messages are stored by Zebra to determine to
+  which daemon it should redistribute the ZAPI messages it receives.
+- Then, the *Consumer* sends a **Link State Synchronisation** request with the
+  Multicast method in order to receive the complete Link State Database from a
+  *Producer*. The ZEBRA daemon forwards this message to any *Producer* daemons
+  that previously registered for this message. If no *Producer* has yet
+  registered, the request is lost. Thus, if the *Consumer* receives no response
+  within a given timer, it means that no *Producer* is available right now, and
+  the *Consumer* must send the same request until it receives a Link State
+  Database Synchronisation message. This behaviour is necessary as we can't
+  control the order in which daemons are started. It is up to the *Consumer*
+  daemon to choose the timeout and the number of retries.
+- When a *Producer* receives a **Link State Synchronisation** request, it
+  starts sending all elements of its own Link State Database through
+  **Link State Database Synchronisation** messages. These messages are sent
+  with the Unicast method to avoid flooding other daemons with these elements.
+  The ZEBRA layer ensures the messages are forwarded to the right daemon.
+- When a *Producer* updates its Link State Database, it automatically sends a
+  **Link State Update** message with the Multicast method. In turn, the ZEBRA
+  daemon forwards the message to all *Consumer* daemons that previously
+  registered for this message. If no daemon is registered, the message is lost.
+- A daemon can unregister from the ZAPI Opaque message registry at any time.
+  In this case, the ZEBRA daemon stops forwarding to this daemon any messages
+  it receives, even if they previously concerned it.
+
+::
+
+ IGP ZEBRA Consumer
+ (OSPF/IS-IS) (ZAPI Opaque Thread) (e.g. BGP)
+ | | | \
+ | | Register LS Update | |
+ | |<----------------------------| Register Phase
+ | | | |
+ | | Request LS Sync | |
+ | |<----------------------------| |
+ : : : A |
+ | Register LS Sync | | | |
+ |----------------------------->| | | /
+ : : : |TimeOut
+ : : : |
+ | | | |
+ | | Request LS Sync | v \
+ | Request LS Sync |<----------------------------| |
+ |<-----------------------------| | Synchronisation
+ | LS DB Update | | Phase
+ |----------------------------->| LS DB Update | |
+ | |---------------------------->| |
+ | LS DB Update (cont'd) | | |
+ |----------------------------->| LS DB Update (cont'd) | |
+ | . |---------------------------->| |
+ | . | . | |
+ | . | . | |
+ | LS DB Update (end) | . | |
+ |----------------------------->| LS DB Update (end) | |
+ | |---------------------------->| |
+ | | | /
+ : : :
+ : : :
+ | LS DB Update | | \
+ |----------------------------->| LS DB Update | |
+ | |---------------------------->| Update Phase
+ | | | |
+ : : : /
+ : : :
+ | | | \
+ | | Unregister LS Update | |
+ | |<----------------------------| Deregister Phase
+ | | | |
+ | LS DB Update | | |
+ |----------------------------->| | |
+ | | | /
+ | | |
+
+ Figure 1: Link State messages exchange
+
+
+Data Structures
+^^^^^^^^^^^^^^^
+
+The Link State Message is defined to convey Link State parameters from
+the routing protocol (OSPF or IS-IS) to other daemons e.g. BGP.
+
+.. c:struct:: ls_message
+
+The structure is composed of:
+
+- Event of the message:
+
+ - Sync: Send the whole LS DB following a request
+  - Add: Send a new Link State element
+ - Update: Send an update of an existing Link State element
+ - Delete: Indicate that the given Link State element is removed
+
+- Type of Link State element: Node, Attribute or Prefix
+- Remote node id when known
+- Data: Node, Attributes or Prefix
+
+A Link State Message can carry only one Link State Element (Node, Attributes
+or Prefix) at a time, and only one Link State Message is sent per ZAPI
+Opaque Link State message.
+
+Functions
+^^^^^^^^^
+
+.. c:function:: int ls_register(struct zclient *zclient, bool server)
+.. c:function:: int ls_unregister(struct zclient *zclient, bool server)
+
+   Register / Unregister a daemon to receive ZAPI Link State Opaque messages.
+   Server must be set to true for a *Producer* and to false for a *Consumer*.
+
+.. c:function:: int ls_request_sync(struct zclient *zclient)
+
+ Request initial Synchronisation to collect the whole Link State Database.
+
+.. c:function:: struct ls_message *ls_parse_msg(struct stream *s)
+
+   Parse a Link State Message from a stream. Use this function when receiving
+   a new ZAPI Opaque message of type Link State.
+
+.. c:function:: void ls_delete_msg(struct ls_message *msg)
+
+   Delete an existing message. The data structure is freed.
+
+.. c:function:: int ls_send_msg(struct zclient *zclient, struct ls_message *msg, struct zapi_opaque_reg_info *dst)
+
+   Send a Link State Message as a new ZAPI Opaque message of type Link State.
+   If the destination is not NULL, the message is sent as Unicast, otherwise
+   it is broadcast to all registered daemons.
+
+.. c:function:: struct ls_message *ls_vertex2msg(struct ls_message *msg, struct ls_vertex *vertex)
+.. c:function:: struct ls_message *ls_edge2msg(struct ls_message *msg, struct ls_edge *edge)
+.. c:function:: struct ls_message *ls_subnet2msg(struct ls_message *msg, struct ls_subnet *subnet)
+
+   Create, respectively, a new Link State Message from a Link State Vertex,
+   Edge or Subnet. If the Link State Message is NULL, a new data structure is
+ dynamically allocated. Note that the Vertex, Edge and Subnet status is used
+ to determine the corresponding Link State Message event: ADD, UPDATE,
+ DELETE, SYNC.
+
+.. c:function:: int ls_msg2vertex(struct ls_ted *ted, struct ls_message *msg)
+.. c:function:: int ls_msg2edge(struct ls_ted *ted, struct ls_message *msg)
+.. c:function:: int ls_msg2subnet(struct ls_ted *ted, struct ls_message *msg)
+
+   Convert a Link State Message into, respectively, a Vertex, Edge or Subnet
+   and update the Link State Database according to the message event: SYNC,
+   ADD, UPDATE or DELETE.
+
+.. c:function:: struct ls_element *ls_msg2ted(struct ls_ted *ted, struct ls_message *msg, bool delete)
+.. c:function:: struct ls_element *ls_stream2ted(struct ls_ted *ted, struct stream *s, bool delete)
+
+   Convert a Link State Message or Stream Buffer into a Link State element
+   (Vertex, Edge or Subnet) and update the Link State Database according to the
+   message event: SYNC, ADD, UPDATE or DELETE. The function returns the generic
+   structure ls_element that points to the Vertex, Edge or Subnet which has
+   been added, updated or synchronised in the database. Note that the delete
+   boolean parameter governs the behaviour of the DELETE action: if true, the
+   Link State Element is removed from the database and NULL is returned. If
+   set to false, the database is not updated and the function sets the Link
+   State Element status to Delete and returns the element for future deletion
+   by the calling function.
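+
+   A sketch of the *Consumer* receive path built from these functions (the
+   ZAPI opaque callback wiring is omitted; ``zclient``, ``ted`` and the
+   received stream ``s`` are assumed to exist):
+
+   .. code-block:: c
+
+      /* at startup */
+      ls_register(zclient, false);    /* false = Consumer */
+      ls_request_sync(zclient);       /* ask Producers for their whole DB */
+
+      /* in the ZAPI Opaque message handler */
+      struct ls_message *msg = ls_parse_msg(s);
+
+      if (msg) {
+              ls_msg2ted(ted, msg, true);   /* apply the event to the TED */
+              ls_delete_msg(msg);
+      }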
+
+.. c:function:: int ls_sync_ted(struct ls_ted *ted, struct zclient *zclient, struct zapi_opaque_reg_info *dst)
+
+   Send the whole content of the Link State Database to the given destination.
+   Link State content is sent in this order: Vertices, Edges, then Subnets.
+   This function must be used when a daemon requests a Link State Database
+   Synchronization.
diff --git a/doc/developer/lists.rst b/doc/developer/lists.rst
new file mode 100644
index 0000000..ccac10a
--- /dev/null
+++ b/doc/developer/lists.rst
@@ -0,0 +1,777 @@
+.. _lists:
+
+Type-safe containers
+====================
+
+.. note::
+
+ This section previously used the term *list*; it was changed to *container*
+ to be more clear.
+
+Common container interface
+--------------------------
+
+FRR includes a set of container implementations with abstracted
+common APIs. The purpose of this is to easily allow swapping out one
+data structure for another while also making the code easier to read and write.
+There is one API for unsorted containers and a similar but not identical API
+for sorted containers - and heaps use a middle ground of both.
+
+For unsorted containers, the following implementations exist:
+
+- single-linked list with tail pointer (e.g. STAILQ in BSD)
+
+- double-linked list
+
+- atomic single-linked list with tail pointer
+
+
+Being partially sorted, the oddball structure:
+
+- an 8-ary heap
+
+
+For sorted containers, these data structures are implemented:
+
+- single-linked list
+
+- atomic single-linked list
+
+- skiplist
+
+- red-black tree (based on OpenBSD RB_TREE)
+
+- hash table (note below)
+
+Except for hash tables, each of the sorted data structures has a variant with
+unique and non-unique items. Hash tables always require unique items
+and mostly follow the "sorted" API but use the hash value as sorting
+key. Also, iterating while modifying does not work with hash tables.
+Conversely, the heap always has non-unique items, but iterating while modifying
+doesn't work either.
+
+
+The following sorted structures are likely to be implemented at some point
+in the future:
+
+- atomic skiplist
+
+- atomic hash table (note below)
+
+
+The APIs are all designed to be as type-safe as possible. This means that
+there will be a compiler warning when an item doesn't match the container, or
+the return value has a different type, or other similar situations. **You
+should never use casts with these APIs.** If a cast is necessary in relation
+to these APIs, there is probably something wrong with the overall design.
+
+Only the following pieces use dynamically allocated memory:
+
+- the hash table itself is dynamically grown and shrunk
+
+- skiplists store up to 4 next pointers inline but will dynamically allocate
+ memory to hold an item's 5th up to 16th next pointer (if they exist)
+
+- the heap uses a dynamically grown and shrunk array of items
+
+Cheat sheet
+-----------
+
+Available types:
+
+::
+
+ DECLARE_LIST
+ DECLARE_ATOMLIST
+ DECLARE_DLIST
+
+ DECLARE_HEAP
+
+ DECLARE_SORTLIST_UNIQ
+ DECLARE_SORTLIST_NONUNIQ
+ DECLARE_ATOMLIST_UNIQ
+ DECLARE_ATOMLIST_NONUNIQ
+ DECLARE_SKIPLIST_UNIQ
+ DECLARE_SKIPLIST_NONUNIQ
+ DECLARE_RBTREE_UNIQ
+ DECLARE_RBTREE_NONUNIQ
+
+ DECLARE_HASH
+
+Functions provided:
+
++------------------------------------+-------+------+------+---------+------------+
+| Function | LIST | HEAP | HASH | \*_UNIQ | \*_NONUNIQ |
++====================================+=======+======+======+=========+============+
+| _init, _fini | yes | yes | yes | yes | yes |
++------------------------------------+-------+------+------+---------+------------+
+| _first, _next, _next_safe, | yes | yes | yes | yes | yes |
+| | | | | | |
+| _const_first, _const_next | | | | | |
++------------------------------------+-------+------+------+---------+------------+
+| _last, _prev, _prev_safe, | DLIST | -- | -- | RB only | RB only |
+| | only | | | | |
+| _const_last, _const_prev | | | | | |
++------------------------------------+-------+------+------+---------+------------+
+| _swap_all | yes | yes | yes | yes | yes |
++------------------------------------+-------+------+------+---------+------------+
+| _anywhere | yes | -- | -- | -- | -- |
++------------------------------------+-------+------+------+---------+------------+
+| _add_head, _add_tail, _add_after | yes | -- | -- | -- | -- |
++------------------------------------+-------+------+------+---------+------------+
+| _add | -- | yes | yes | yes | yes |
++------------------------------------+-------+------+------+---------+------------+
+| _member | yes | yes | yes | yes | yes |
++------------------------------------+-------+------+------+---------+------------+
+| _del, _pop | yes | yes | yes | yes | yes |
++------------------------------------+-------+------+------+---------+------------+
+| _find, _const_find | -- | -- | yes | yes | -- |
++------------------------------------+-------+------+------+---------+------------+
+| _find_lt, _find_gteq, | -- | -- | -- | yes | yes |
+| | | | | | |
+| _const_find_lt, _const_find_gteq | | | | | |
++------------------------------------+-------+------+------+---------+------------+
+| use with frr_each() macros | yes | yes | yes | yes | yes |
++------------------------------------+-------+------+------+---------+------------+
+
+
+
+Datastructure type setup
+------------------------
+
+Each of the data structures has a ``PREDECL_*`` and a ``DECLARE_*`` macro to
+set up an "instantiation" of the container. This works somewhat similar to C++
+templating, though much simpler.
+
+**In all following text, the Z prefix is replaced with a name chosen
+for the instance of the datastructure.**
+
+The common setup pattern will look like this:
+
+.. code-block:: c
+
+ #include <typesafe.h>
+
+ PREDECL_XXX(Z);
+ struct item {
+ int otherdata;
+ struct Z_item mylistitem;
+ }
+
+ struct Z_head mylisthead;
+
+ /* unsorted: */
+ DECLARE_XXX(Z, struct item, mylistitem);
+
+ /* sorted, items that compare as equal cannot be added to list */
+ int compare_func(const struct item *a, const struct item *b);
+ DECLARE_XXX_UNIQ(Z, struct item, mylistitem, compare_func);
+
+ /* sorted, items that compare as equal can be added to list */
+ int compare_func(const struct item *a, const struct item *b);
+ DECLARE_XXX_NONUNIQ(Z, struct item, mylistitem, compare_func);
+
+ /* hash tables: */
+ int compare_func(const struct item *a, const struct item *b);
+ uint32_t hash_func(const struct item *a);
+ DECLARE_XXX(Z, struct item, mylistitem, compare_func, hash_func);
+
+``XXX`` is replaced with the name of the data structure, e.g. ``SKIPLIST``
+or ``ATOMLIST``. The ``DECLARE_XXX`` invocation can either occur in a `.h`
+file (if the container needs to be accessed from several C files) or it can be
+placed in a `.c` file (if the container is only accessed from that file.) The
+``PREDECL_XXX`` invocation defines the ``struct Z_item`` and ``struct
+Z_head`` types and must therefore occur before these are used.
+
+To switch between compatible data structures, only these two lines need to be
+changed. To switch to a data structure with a different API, some source
+changes are necessary.
+
+Common iteration macros
+-----------------------
+
+The following iteration macros work across all data structures:
+
+.. c:macro:: frr_each(Z, head, item)
+
+ Equivalent to:
+
+ .. code-block:: c
+
+ for (item = Z_first(&head); item; item = Z_next(&head, item))
+
+ Note that this will fail if the container is modified while being iterated
+ over.
+
+.. c:macro:: frr_each_safe(Z, head, item)
+
+ Same as the previous, but the next element is pre-loaded into a "hidden"
+ variable (named ``Z_safe``.) Equivalent to:
+
+ .. code-block:: c
+
+ for (item = Z_first(&head); item; item = next) {
+ next = Z_next_safe(&head, item);
+ ...
+ }
+
+ .. warning::
+
+ Iterating over hash tables while adding or removing items is not
+ possible. The iteration position will be corrupted when the hash
+      table is resized while iterating. This will cause items to be
+ skipped or iterated over twice.
+
+.. c:macro:: frr_each_from(Z, head, item, from)
+
+ Iterates over the container, starting at item ``from``. This variant is
+ "safe" as in the previous macro. Equivalent to:
+
+ .. code-block:: c
+
+ for (item = from; item; item = from) {
+ from = Z_next_safe(&head, item);
+ ...
+ }
+
+ .. note::
+
+ The ``from`` variable is written to. This is intentional - you can
+ resume iteration after breaking out of the loop by keeping the ``from``
+ value persistent and reusing it for the next loop.
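+
+   A sketch of this resume pattern (``need_a_break()`` is a hypothetical
+   predicate):
+
+   .. code-block:: c
+
+      struct item *item, *from = Z_first(&head);
+
+      frr_each_from (Z, head, item, from) {
+              if (need_a_break(item))
+                      break;
+      }
+      /* "from" already points at the item after the break position */
+      frr_each_from (Z, head, item, from) {
+              ...
+      }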
+
+.. c:macro:: frr_rev_each(Z, head, item)
+.. c:macro:: frr_rev_each_safe(Z, head, item)
+.. c:macro:: frr_rev_each_from(Z, head, item, from)
+
+ Reverse direction variants of the above. Only supported on containers that
+ implement ``_last`` and ``_prev`` (i.e. ``RBTREE`` and ``DLIST``).
+
+To iterate over ``const`` pointers, add ``_const`` to the name of the
+datastructure (``Z`` above), e.g. ``frr_each (mylist, head, item)`` becomes
+``frr_each (mylist_const, head, item)``.
+
+Common API
+----------
+
+The following documentation assumes that a container has been defined using
+``Z`` as the name, and ``itemtype`` being the type of the items (e.g.
+``struct item``.)
+
+.. c:function:: void Z_init(struct Z_head *)
+
+ Initializes the container for use. For most implementations, this just sets
+ some values. Hash tables are the only implementation that allocates
+ memory in this call.
+
+.. c:function:: void Z_fini(struct Z_head *)
+
+ Reverse the effects of :c:func:`Z_init()`. The container must be empty
+ when this function is called.
+
+ .. warning::
+
+ This function may ``assert()`` if the container is not empty.
+
+.. c:function:: size_t Z_count(const struct Z_head *)
+
+ Returns the number of items in a structure. All structures store a
+ counter in their `Z_head` so that calling this function completes
+ in O(1).
+
+ .. note::
+
+ For atomic containers with concurrent access, the value will already be
+ outdated by the time this function returns and can therefore only be
+ used as an estimate.
+
+.. c:function:: bool Z_member(const struct Z_head *, const itemtype *)
+
+ Determines whether some item is a member of the given container. The
+ item must either be valid on some container, or set to all zeroes.
+
+ On some containers, if no faster way to determine membership is possible,
+ this is simply ``item == Z_find(head, item)``.
+
+ Not currently available for atomic containers.
+
+.. c:function:: const itemtype *Z_const_first(const struct Z_head *)
+.. c:function:: itemtype *Z_first(struct Z_head *)
+
+ Returns the first item in the structure, or ``NULL`` if the structure is
+ empty. This is O(1) for all data structures except red-black trees
+ where it is O(log n).
+
+.. c:function:: const itemtype *Z_const_last(const struct Z_head *)
+.. c:function:: itemtype *Z_last(struct Z_head *)
+
+ Last item in the structure, or ``NULL``. Only available on containers
+ that support reverse iteration (i.e. ``RBTREE`` and ``DLIST``).
+
+.. c:function:: itemtype *Z_pop(struct Z_head *)
+
+ Remove and return the first item in the structure, or ``NULL`` if the
+ structure is empty. Like :c:func:`Z_first`, this is O(1) for all
+ data structures except red-black trees where it is O(log n) again.
+
+ This function can be used to build queues (with unsorted structures) or
+ priority queues (with sorted structures.)
+
+ Another common pattern is deleting all container items:
+
+ .. code-block:: c
+
+ while ((item = Z_pop(head)))
+ item_free(item);
+
+ .. note::
+
+ This function can - and should - be used with hash tables. It is not
+ affected by the "modification while iterating" problem. To remove
+ all items from a hash table, use the loop demonstrated above.
+
+.. c:function:: const itemtype *Z_const_next(const struct Z_head *, const itemtype *prev)
+.. c:function:: itemtype *Z_next(struct Z_head *, itemtype *prev)
+
+ Return the item that follows after ``prev``, or ``NULL`` if ``prev`` is
+ the last item.
+
+ .. warning::
+
+ ``prev`` must not be ``NULL``! Use :c:func:`Z_next_safe()` if
+ ``prev`` might be ``NULL``.
+
+.. c:function:: itemtype *Z_next_safe(struct Z_head *, itemtype *prev)
+
+ Same as :c:func:`Z_next()`, except that ``NULL`` is returned if
+ ``prev`` is ``NULL``.
+
+.. c:function:: const itemtype *Z_const_prev(const struct Z_head *, const itemtype *next)
+.. c:function:: itemtype *Z_prev(struct Z_head *, itemtype *next)
+.. c:function:: itemtype *Z_prev_safe(struct Z_head *, itemtype *next)
+
+ As above, but preceding item. Only available on structures that support
+ reverse iteration (i.e. ``RBTREE`` and ``DLIST``).
+
+.. c:function:: itemtype *Z_del(struct Z_head *, itemtype *item)
+
+ Remove ``item`` from the container and return it.
+
+ .. note::
+
+ This function's behaviour is undefined if ``item`` is not actually
+ on the container. Some structures return ``NULL`` in this case while
+ others return ``item``. The function may also call ``assert()`` (but
+ most don't.)
+
+.. c:function:: itemtype *Z_swap_all(struct Z_head *, struct Z_head *)
+
+ Swap the contents of 2 containers (of identical type). This exchanges the
+ contents of the two head structures and updates pointers if necessary for
+ the particular data structure. Fast for all structures.
+
+ (Not currently available on atomic containers.)
+
+.. todo::
+
+ ``Z_del_after()`` / ``Z_del_hint()``?
+
+API for unsorted structures
+---------------------------
+
+Since the insertion position is not pre-defined for unsorted data, there
+are several functions exposed to insert data:
+
+.. note::
+
+ ``item`` must not be ``NULL`` for any of the following functions.
+
+.. c:macro:: DECLARE_XXX(Z, type, field)
+
+ :param listtype XXX: ``LIST``, ``DLIST`` or ``ATOMLIST`` to select a data
+ structure implementation.
+ :param token Z: Gives the name prefix that is used for the functions
+ created for this instantiation. ``DECLARE_XXX(foo, ...)``
+ gives ``struct foo_item``, ``foo_add_head()``, ``foo_count()``, etc. Note
+ that this must match the value given in ``PREDECL_XXX(foo)``.
+ :param typename type: Specifies the data type of the list items, e.g.
+ ``struct item``. Note that ``struct`` must be added here, it is not
+ automatically added.
+ :param token field: References a struct member of ``type`` that must be
+ typed as ``struct foo_item``. This struct member is used to
+ store "next" pointers or other data structure specific data.
+
+.. c:function:: void Z_add_head(struct Z_head *, itemtype *item)
+
+ Insert an item at the beginning of the structure, before the first item.
+ This is an O(1) operation for non-atomic lists.
+
+.. c:function:: void Z_add_tail(struct Z_head *, itemtype *item)
+
+ Insert an item at the end of the structure, after the last item.
+ This is also an O(1) operation for non-atomic lists.
+
+.. c:function:: void Z_add_after(struct Z_head *, itemtype *after, itemtype *item)
+
+ Insert ``item`` behind ``after``. If ``after`` is ``NULL``, the item is
+ inserted at the beginning of the list as with :c:func:`Z_add_head`.
+ This is also an O(1) operation for non-atomic lists.
+
+ A common pattern is to keep a "previous" pointer around while iterating:
+
+ .. code-block:: c
+
+ itemtype *prev = NULL, *item;
+
+ frr_each_safe(Z, head, item) {
+ if (something) {
+ Z_add_after(head, prev, item);
+ break;
+ }
+ prev = item;
+ }
+
+ .. todo::
+
+ maybe flip the order of ``item`` & ``after``?
+ ``Z_add_after(head, item, after)``
+
+.. c:function:: bool Z_anywhere(const itemtype *)
+
+ Returns whether an item is a member of *any* container of this type.
+ The item must either be valid on some container, or set to all zeroes.
+
+ Guaranteed to be fast (pointer compare or similar.)
+
+ Not currently available for sorted and atomic containers. Might be added
+ for sorted containers at some point (when needed.)
+
+
+API for sorted structures
+-------------------------
+
+Sorted data structures do not need to have an insertion position specified,
+therefore the insertion calls are different from unsorted containers. Also,
+sorted containers can be searched for a value.
+
+.. c:macro:: DECLARE_XXX_UNIQ(Z, type, field, compare_func)
+
+ :param listtype XXX: One of the following:
+ ``SORTLIST`` (single-linked sorted list), ``SKIPLIST`` (skiplist),
+ ``RBTREE`` (RB-tree) or ``ATOMSORT`` (atomic single-linked list).
+ :param token Z: Gives the name prefix that is used for the functions
+ created for this instantiation. ``DECLARE_XXX(foo, ...)``
+ gives ``struct foo_item``, ``foo_add()``, ``foo_count()``, etc. Note
+ that this must match the value given in ``PREDECL_XXX(foo)``.
+ :param typename type: Specifies the data type of the items, e.g.
+ ``struct item``. Note that ``struct`` must be added here, it is not
+ automatically added.
+ :param token field: References a struct member of ``type`` that must be
+ typed as ``struct foo_item``. This struct member is used to
+ store "next" pointers or other data structure specific data.
+ :param funcptr compare_func: Item comparison function, must have the
+ following function signature:
+ ``int function(const itemtype *, const itemtype*)``. This function
+ may be static if the container is only used in one file.
+
+.. c:macro:: DECLARE_XXX_NONUNIQ(Z, type, field, compare_func)
+
+ Same as above, but allow adding multiple items to the container that compare
+ as equal in ``compare_func``. Ordering between these items is undefined
+ and depends on the container implementation.
+
+.. c:function:: itemtype *Z_add(struct Z_head *, itemtype *item)
+
+ Insert an item at the appropriate sorted position. If another item exists
+ in the container that compares as equal (``compare_func()`` == 0), ``item``
+ is not inserted and the already-existing item in the container is
+ returned. Otherwise, on successful insertion, ``NULL`` is returned.
+
+ For ``_NONUNIQ`` containers, this function always returns NULL since
+ ``item`` can always be successfully added to the container.
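+
+   For ``_UNIQ`` containers, an "add or use existing" sketch based on this
+   return value (``item_free()`` is a placeholder for whatever deallocation
+   applies):
+
+   .. code-block:: c
+
+      struct item *existing = Z_add(head, item);
+
+      if (existing) {
+              /* not inserted; an equal item was already present */
+              item_free(item);
+              item = existing;
+      }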
+
+.. c:function:: const itemtype *Z_const_find(const struct Z_head *, const itemtype *ref)
+.. c:function:: itemtype *Z_find(struct Z_head *, const itemtype *ref)
+
+ Search the container for an item that compares equal to ``ref``. If no
+ equal item is found, return ``NULL``.
+
+ This function is likely used with a temporary stack-allocated value for
+ ``ref`` like so:
+
+ .. code-block:: c
+
+ itemtype searchfor = { .foo = 123 };
+
+ itemtype *item = Z_find(head, &searchfor);
+
+ .. note::
+
+ The ``Z_find()`` function is only available for containers that contain
+ unique items (i.e. ``DECLARE_XXX_UNIQ``.) This is because on a container
+ with non-unique items, more than one item may compare as equal to
+ the item that is searched for.
+
+.. c:function:: const itemtype *Z_const_find_gteq(const struct Z_head *, const itemtype *ref)
+.. c:function:: itemtype *Z_find_gteq(struct Z_head *, const itemtype *ref)
+
+ Search the container for an item that compares greater or equal to
+ ``ref``. See :c:func:`Z_find()` above.
+
+.. c:function:: const itemtype *Z_const_find_lt(const struct Z_head *, const itemtype *ref)
+.. c:function:: itemtype *Z_find_lt(struct Z_head *, const itemtype *ref)
+
+ Search the container for an item that compares less than
+ ``ref``. See :c:func:`Z_find()` above.
+
+
+API for hash tables
+-------------------
+
+.. c:macro:: DECLARE_HASH(Z, type, field, compare_func, hash_func)
+
+ :param listtype HASH: Only ``HASH`` is currently available.
+ :param token Z: Gives the name prefix that is used for the functions
+ created for this instantiation. ``DECLARE_XXX(foo, ...)``
+ gives ``struct foo_item``, ``foo_add()``, ``foo_count()``, etc. Note
+ that this must match the value given in ``PREDECL_XXX(foo)``.
+ :param typename type: Specifies the data type of the items, e.g.
+ ``struct item``. Note that ``struct`` must be added here, it is not
+ automatically added.
+ :param token field: References a struct member of ``type`` that must be
+ typed as ``struct foo_item``. This struct member is used to
+ store "next" pointers or other data structure specific data.
+ :param funcptr compare_func: Item comparison function, must have the
+ following function signature:
+ ``int function(const itemtype *, const itemtype*)``. This function
+ may be static if the container is only used in one file. For hash tables,
+ this function is only used to check for equality, the ordering is
+ ignored.
+ :param funcptr hash_func: Hash calculation function, must have the
+ following function signature:
+ ``uint32_t function(const itemtype *)``. The hash value for items
+ stored in a hash table is cached in each item, so this value need not
+ be cached by the user code.
+
+ .. warning::
+
+ Items that compare as equal cannot be inserted. Refer to the notes
+ about sorted structures in the previous section.
+
+
+.. c:function:: void Z_init_size(struct Z_head *, size_t size)
+
+ Same as :c:func:`Z_init()` but preset the minimum hash table to
+ ``size``.
+
+Hash tables also support :c:func:`Z_add()` and :c:func:`Z_find()` with
+the same semantics as noted above. :c:func:`Z_find_gteq()` and
+:c:func:`Z_find_lt()` are **not** provided for hash tables.
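+
+A short end-to-end sketch (the identity hash below is only for illustration,
+a real mixing function is preferable; ``MTYPE_TMP`` stands in for a proper
+memory type):
+
+.. code-block:: c
+
+   PREDECL_HASH(itab);
+
+   struct item {
+           uint32_t id;
+           struct itab_item hitem;
+   };
+
+   static int item_cmp(const struct item *a, const struct item *b)
+   {
+           /* for HASH, only the equal / not-equal result is used */
+           return a->id == b->id ? 0 : (a->id < b->id ? -1 : 1);
+   }
+
+   static uint32_t item_hash(const struct item *a)
+   {
+           return a->id;
+   }
+
+   DECLARE_HASH(itab, struct item, hitem, item_cmp, item_hash);
+
+   /* usage */
+   struct itab_head head;
+   struct item *it, ref = { .id = 123 };
+
+   itab_init(&head);
+
+   it = XCALLOC(MTYPE_TMP, sizeof(*it));
+   it->id = 123;
+   itab_add(&head, it);            /* returns NULL on success */
+
+   it = itab_find(&head, &ref);    /* look up by value */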
+
+Hash table invariants
+^^^^^^^^^^^^^^^^^^^^^
+
+There are several ways to injure yourself using the hash table API.
+
+First, note that there are two functions related to computing uniqueness of
+objects inserted into the hash table. There is a hash function and a comparison
+function. The hash function computes the hash of the object. Our hash table
+implementation uses `chaining
+<https://en.wikipedia.org/wiki/Hash_table#Separate_chaining_with_linked_lists>`_.
+This means that your hash function does not have to be perfect; multiple
+objects having the same computed hash will be placed into a linked list
+corresponding to that key. The closer to perfect the hash function, the better
+the performance, as items will be more evenly distributed and the chain length
+will not be long on any given lookup, minimizing the number of list operations
+required to find the correct item. However, the comparison function *must* be
+perfect, in the sense that any two unique items inserted into the hash table
+must compare not equal. At insertion time, if you try to insert an item that
+compares equal to an existing item the insertion will not happen and
+``hash_get()`` will return the existing item. However, this invariant *must* be
+maintained while the object is in the hash table. Suppose you insert items
+``A`` and ``B`` into the hash table which both hash to the same value ``1234``
+but do not compare equal. They will be placed in a chain like so::
+
+ 1234 : A -> B
+
+Now suppose you do something like this elsewhere in the code::
+
+ *A = *B
+
+I.e. you copy all fields of ``B`` into ``A``, such that the comparison function
+now says that they are equal based on their contents. At this point when you
+look up ``B`` in the hash table, ``hash_get()`` will search the chain for the
+first item that compares equal to ``B``, which will be ``A``. This leads to
+insidious bugs.
+
+.. warning::
+
+ Never modify the values looked at by the comparison or hash functions after
+ inserting an item into a hash table.
+
+A similar situation can occur with the hash allocation function. ``hash_get()``
+accepts a function pointer that it will call to get the item that should be
+inserted into the list if the provided item is not already present. There is a
+builtin function, ``hash_alloc_intern``, that will simply return the item you
+provided; if you always want to store the value you pass to ``hash_get`` you
+should use this one. If you choose to provide a different one, that function
+*must* return a new item that hashes and compares equal to the one you provided
+to ``hash_get()``. If it does not, the behavior of the hash table is undefined.
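+
+A sketch of the two styles (``h`` is an existing ``struct hash *`` and ``it``
+an item; ``item_alloc()`` is a hypothetical allocation callback):
+
+.. code-block:: c
+
+   /* store exactly the pointer passed in */
+   struct item *stored = hash_get(h, it, hash_alloc_intern);
+
+   /* or: allocate a new item that hashes and compares equal to "it" */
+   struct item *copy = hash_get(h, it, item_alloc);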
+
+.. warning::
+
+ Always make sure your hash allocation function returns a value that hashes
+ and compares equal to the item you provided to ``hash_get()``.
+
+Finally, if you maintain pointers to items you have inserted into a hash table,
+then before deallocating them you must release them from the hash table. This
+is basic memory management but worth repeating as bugs have arisen from failure
+to do this.
+
+
+API for heaps
+-------------
+
+Heaps provide the same API as the sorted data structures, except:
+
+* none of the find functions (:c:func:`Z_find()`, :c:func:`Z_find_gteq()`
+ or :c:func:`Z_find_lt()`) are available.
+* iterating over the heap yields the items in semi-random order, only the
+ first item is guaranteed to be in order and actually the "lowest" item
+ on the heap. Being a heap, only the rebalancing performed on removing the
+ first item (either through :c:func:`Z_pop()` or :c:func:`Z_del()`) causes
+ the new lowest item to bubble up to the front.
+* all heap modifications are O(log n). However, cacheline efficiency and
+ latency is likely quite a bit better than with other data structures.
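+
+A minimal priority-queue sketch (assuming the heap takes the same macro
+arguments as the sorted containers; ``run_task()`` is hypothetical):
+
+.. code-block:: c
+
+   PREDECL_HEAP(taskq);
+
+   struct task {
+           unsigned int prio;      /* lower value = runs first */
+           struct taskq_item qitem;
+   };
+
+   static int task_cmp(const struct task *a, const struct task *b)
+   {
+           return (a->prio > b->prio) - (a->prio < b->prio);
+   }
+
+   DECLARE_HEAP(taskq, struct task, qitem, task_cmp);
+
+   /* pop tasks in priority order */
+   struct taskq_head head;
+   struct task *t;
+
+   taskq_init(&head);
+   /* ... items added elsewhere with taskq_add() ... */
+
+   while ((t = taskq_pop(&head)))
+           run_task(t);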
+
+Atomic lists
+------------
+
+`atomlist.h` provides an unsorted and a sorted atomic single-linked list.
+Since atomic memory accesses can be considerably slower than plain memory
+accesses (depending on the CPU type), these lists should only be used where
+necessary.
+
+The following guarantees are provided regarding concurrent access:
+
+- the operations are lock-free but not wait-free.
+
+ Lock-free means that it is impossible for all threads to be blocked. Some
+ thread will always make progress, regardless of what other threads do. (This
+ even includes a random thread being stopped by a debugger in a random
+ location.)
+
+ Wait-free implies that the time any single thread might spend in one of the
+ calls is bounded. This is not provided here since it is not normally
+ relevant to practical operations. What this means is that if some thread is
+ hammering a particular list with requests, it is possible that another
+ thread is blocked for an extended time. The lock-free guarantee still
+ applies since the hammering thread is making progress.
+
+- without a RCU mechanism in place, the point of contention for atomic lists
+ is memory deallocation. As it is, **a rwlock is required for correct
+ operation**. The *read* lock must be held for all accesses, including
+ reading the list, adding items to the list, and removing items from the
+ list. The *write* lock must be acquired and released before deallocating
+any list element. If this is not followed, a use-after-free can occur
+ as a MT race condition when an element gets deallocated while another
+ thread is accessing the list.
+
+ .. note::
+
+ The *write* lock does not need to be held for deleting items from the
+ list, and there should not be any instructions between the
+ ``pthread_rwlock_wrlock`` and ``pthread_rwlock_unlock``. The write lock
+ is used as a sequence point, not as an exclusion mechanism.
+
+- insertion operations are always safe to do with the read lock held.
+ Added items are immediately visible after the insertion call returns and
+ should not be touched anymore.
+
+- when removing a *particular* (pre-determined) item, the caller must ensure
+ that no other thread is attempting to remove that same item. If this cannot
+ be guaranteed by architecture, a separate lock might need to be added.
+
+- concurrent `pop` calls are always safe to do with only the read lock held.
+ This does not fall under the previous rule since the `pop` call will select
+ the next item if the first is already being removed by another thread.
+
+ **Deallocation locking still applies.** Assume another thread starts
+ reading the list, but gets task-switched by the kernel while reading the
+ first item. `pop` will happily remove and return that item. If it is
+ deallocated without acquiring and releasing the write lock, the other thread
+ will later resume execution and try to access the now-deleted element.
+
+- the list count should be considered an estimate. Since there might be
+ concurrent insertions or removals in progress, it might already be outdated
+ by the time the call returns. No attempt is made to have it be correct even
+ for a nanosecond.
+
+Overall, atomic lists are well-suited for MT queues; concurrent insertion,
+iteration and removal operations will work with the read lock held.
+
+Code snippets
+^^^^^^^^^^^^^
+
+Iteration:
+
+.. code-block:: c
+
+ struct item *i;
+
+ pthread_rwlock_rdlock(&itemhead_rwlock);
+ frr_each(itemlist, &itemhead, i) {
+ /* lock must remain held while iterating */
+ ...
+ }
+ pthread_rwlock_unlock(&itemhead_rwlock);
+
+Head removal (pop) and deallocation:
+
+.. code-block:: c
+
+ struct item *i;
+
+ pthread_rwlock_rdlock(&itemhead_rwlock);
+ i = itemlist_pop(&itemhead);
+ pthread_rwlock_unlock(&itemhead_rwlock);
+
+ /* i might still be visible for another thread doing an
+ * frr_each() (but won't be returned by another pop()) */
+ ...
+
+ pthread_rwlock_wrlock(&itemhead_rwlock);
+ pthread_rwlock_unlock(&itemhead_rwlock);
+ /* i now guaranteed to be gone from the list.
+ * note nothing between wrlock() and unlock() */
+ XFREE(MTYPE_ITEM, i);
+
+FAQ
+---
+
+What are the semantics of ``const`` in the container APIs?
+ ``const`` pointers to list heads and/or items are interpreted to mean that
+ both the container itself as well as the data items are read-only.
+
+Why is it ``PREDECL`` + ``DECLARE`` instead of ``DECLARE`` + ``DEFINE``?
+ The rule is that a ``DEFINE`` must be in a ``.c`` file, and linked exactly
+ once because it defines some kind of global symbol. This is not the case
+ for the data structure macros; they only define ``static`` symbols and it
+ is perfectly fine to include both ``PREDECL`` and ``DECLARE`` in a header
+ file. It is also perfectly fine to have the same ``DECLARE`` statement in
+ 2 ``.c`` files, but only **if the macro arguments are identical.** Maybe
+ don't do that unless you really need it.
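+
+ As a minimal example of the intended split, both of these can live in a
+ shared header (using the ``itemlist`` naming from the snippets above)::
+
+    /* item.h */
+    PREDECL_DLIST(itemlist);
+
+    struct item {
+            int number;
+            struct itemlist_item itm;
+    };
+
+    DECLARE_DLIST(itemlist, struct item, itm);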
+
+FRR lists
+---------
+
+.. TODO::
+
+ document
+
+BSD lists
+---------
+
+.. TODO::
+
+ refer to external docs
diff --git a/doc/developer/locking.rst b/doc/developer/locking.rst
new file mode 100644
index 0000000..bce1311
--- /dev/null
+++ b/doc/developer/locking.rst
@@ -0,0 +1,79 @@
+.. _locking:
+
+Locking
+=======
+
+FRR ships two small wrappers around ``pthread_mutex_lock()`` /
+``pthread_mutex_unlock()``. Use ``#include "frr_pthread.h"`` to get these
+macros.
+
+.. c:macro:: frr_with_mutex (mutex)
+
+ (With ``pthread_mutex_t *mutex``.)
+
+ Begin a C statement block that is executed with the mutex locked. Any
+ exit from the block (``break``, ``return``, ``goto``, end of block) will
+ cause the mutex to be unlocked::
+
+ int somefunction(int option)
+ {
+ frr_with_mutex (&my_mutex) {
+ /* mutex will be locked */
+
+ if (!option)
+ /* mutex will be unlocked before return */
+ return -1;
+
+ if (something(option))
+ /* mutex will be unlocked before goto */
+ goto out_err;
+
+ somethingelse();
+
+ /* mutex will be unlocked at end of block */
+ }
+
+ return 0;
+
+ out_err:
+ somecleanup();
+ return -1;
+ }
+
+ This is a macro that internally uses a ``for`` loop. It is explicitly
+ acceptable to use ``break`` to get out of the block. Even though a single
+ statement works correctly, FRR coding style requires that this macro always
+ be used with a ``{ ... }`` block.
+
+.. c:macro:: frr_mutex_lock_autounlock(mutex)
+
+ (With ``pthread_mutex_t *mutex``.)
+
+ Lock mutex and unlock at the end of the current C statement block::
+
+ int somefunction(int option)
+ {
+ frr_mutex_lock_autounlock(&my_mutex);
+ /* mutex will be locked */
+
+ ...
+ if (error)
+ /* mutex will be unlocked before return */
+ return -1;
+ ...
+
+ /* mutex will be unlocked before return */
+ return 0;
+ }
+
+ This is a macro that internally creates a variable with a destructor.
+ When the variable goes out of scope (i.e. the block ends), the mutex is
+ released.
+
+ .. warning::
+
+ This macro should only be used when :c:func:`frr_with_mutex` would
+ result in excessively or weirdly nested code. This generally is an
+ indicator that the code might be trying to do too many things with
+ the lock held. Try all possible avenues to reduce the amount of
+ code covered by the lock and move to :c:func:`frr_with_mutex`.
diff --git a/doc/developer/logging.rst b/doc/developer/logging.rst
new file mode 100644
index 0000000..52653d3
--- /dev/null
+++ b/doc/developer/logging.rst
@@ -0,0 +1,873 @@
+.. _logging:
+
+.. highlight:: c
+
+Logging
+=======
+
+One of the most frequent decisions to make while writing code for FRR is what
+to log, what level to log it at, and when to log it. Here is a list of
+recommendations for these decisions.
+
+
+printfrr()
+----------
+
+``printfrr()`` is FRR's modified version of ``printf()``, designed to make
+life easier when printing nontrivial datastructures. The following variants
+are available:
+
+.. c:function:: ssize_t snprintfrr(char *buf, size_t len, const char *fmt, ...)
+.. c:function:: ssize_t vsnprintfrr(char *buf, size_t len, const char *fmt, va_list)
+
+ These correspond to ``snprintf``/``vsnprintf``. If you pass NULL for buf
+ or 0 for len, no output is written but the return value is still calculated.
+
+ The return value is always the full length of the output, unconstrained by
+ `len`. It does **not** include the terminating ``\0`` character. A
+ malformed format string can result in a ``-1`` return value.
+
+.. c:function:: ssize_t csnprintfrr(char *buf, size_t len, const char *fmt, ...)
+.. c:function:: ssize_t vcsnprintfrr(char *buf, size_t len, const char *fmt, va_list)
+
+ Same as above, but the ``c`` stands for "continue" or "concatenate". The
+ output is appended to the string instead of overwriting it.
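+
+ For example, building up a message in two steps (``addr`` and ``count``
+ are stand-in variables)::
+
+    char buf[64];
+
+    snprintfrr(buf, sizeof(buf), "peer %pI4", &addr);
+    csnprintfrr(buf, sizeof(buf), ", %u routes", count);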
+
+.. c:function:: char *asprintfrr(struct memtype *mt, const char *fmt, ...)
+.. c:function:: char *vasprintfrr(struct memtype *mt, const char *fmt, va_list)
+
+ These functions allocate a dynamic buffer (using MTYPE `mt`) and print to
+ that. If the format string is malformed, they return a copy of the format
+ string, so the return value is always non-NULL and always dynamically
+ allocated with `mt`.
+
+.. c:function:: char *asnprintfrr(struct memtype *mt, char *buf, size_t len, const char *fmt, ...)
+.. c:function:: char *vasnprintfrr(struct memtype *mt, char *buf, size_t len, const char *fmt, va_list)
+
+ This variant tries to use the static buffer provided, but falls back to
+ dynamic allocation if it is insufficient.
+
+ The return value can be either `buf` or a newly allocated string using
+ `mt`. You MUST free it like this::
+
+ char *ret = asnprintfrr(MTYPE_FOO, buf, sizeof(buf), ...);
+ if (ret != buf)
+ XFREE(MTYPE_FOO, ret);
+
+.. c:function:: ssize_t bprintfrr(struct fbuf *fb, const char *fmt, ...)
+.. c:function:: ssize_t vbprintfrr(struct fbuf *fb, const char *fmt, va_list)
+
+ These are the "lowest level" functions, which the other variants listed
+ above use to implement their functionality on top. Mainly useful for
+ implementing printfrr extensions since those get a ``struct fbuf *`` to
+ write their output to.
+
+.. c:macro:: FMT_NSTD(expr)
+
+ This macro turns off/on format warnings as needed when non-ISO-C
+ compatible printfrr extensions are used (e.g. ``%.*p`` or ``%Ld``.)::
+
+ vty_out(vty, "standard compatible %pI4\n", &addr);
+ FMT_NSTD(vty_out(vty, "non-standard %-47.*pHX\n", (int)len, buf));
+
+ When the frr-format plugin is in use, this macro is a no-op since the
+ frr-format plugin supports all printfrr extensions. Since the FRR CI
+ includes a system with the plugin enabled, this means format errors will
+ not slip by undetected even with FMT_NSTD.
+
+.. note::
+
+ ``printfrr()`` does not support the ``%n`` format.
+
+AS-Safety
+^^^^^^^^^
+
+The ``printfrr()`` family of functions is AS-Safe under the following
+conditions:
+
+* the ``[v]as[n]printfrr`` variants are not AS-Safe (allocating memory)
+* floating point specifiers are not AS-Safe (system printf is used for these)
+* the positional ``%1$d`` syntax should not be used (8 arguments are supported
+ while AS-Safe)
+* extensions are only AS-Safe if their printer is AS-Safe
+
+printfrr Extensions
+-------------------
+
+``printfrr()`` format strings can be extended with suffixes after `%p` or `%d`.
+Printf features like field lengths can be used normally with these extensions,
+e.g. ``%-15pI4`` works correctly, **except if the extension consumes the
+width or precision**. Extensions that do so are listed below as ``%*pXX``
+rather than ``%pXX``.
+
+The extension specifier after ``%p`` or ``%d`` is always an uppercase letter;
+by established convention, uppercase letters and numbers form the type
+identifier, which may be followed by lowercase flags.
+
+You can grep the FRR source for ``printfrr_ext_autoreg`` to see all extended
+printers and what exactly they do. More printers are likely to be added as
+needed/useful, so the list here may be outdated.
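+
+For illustration, extension printers generally follow the pattern sketched
+below; refer to ``lib/printfrr.h`` for the authoritative macro and callback
+signatures. ``struct widget`` and the ``%pWG`` specifier are made up for
+this example:
+
+.. code-block:: c
+
+   printfrr_ext_autoreg_p("WG", printfrr_widget);
+   static ssize_t printfrr_widget(struct fbuf *buf, struct printfrr_eargs *ea,
+                                  const void *ptr)
+   {
+           const struct widget *w = ptr;
+
+           if (!w)
+                   return bprintfrr(buf, "(null)");
+           return bprintfrr(buf, "widget#%u", w->id);
+   }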
+
+.. note::
+
+ The ``zlog_*``/``flog_*`` and ``vty_out`` functions all use printfrr
+ internally, so these extensions are available there. However, they are
+ **not** available when calling ``snprintf`` directly. You need to call
+ ``snprintfrr`` instead.
+
+Networking data types
+^^^^^^^^^^^^^^^^^^^^^
+
+.. role:: frrfmtout(code)
+
+.. frrfmt:: %pI4 (struct in_addr *, in_addr_t *)
+
+ :frrfmtout:`1.2.3.4`
+
+ ``%pI4s``: :frrfmtout:`*` — print star instead of ``0.0.0.0`` (for multicast)
+
+.. frrfmt:: %pI6 (struct in6_addr *)
+
+ :frrfmtout:`fe80::1234`
+
+ ``%pI6s``: :frrfmtout:`*` — print star instead of ``::`` (for multicast)
+
+.. frrfmt:: %pEA (struct ethaddr *)
+
+ :frrfmtout:`01:23:45:67:89:ab`
+
+.. frrfmt:: %pIA (struct ipaddr *)
+
+ :frrfmtout:`1.2.3.4` / :frrfmtout:`fe80::1234`
+
+ ``%pIAs``: :frrfmtout:`*` — print star instead of zero address (for multicast)
+
+.. frrfmt:: %pFX (struct prefix *)
+
+ :frrfmtout:`1.2.3.0/24` / :frrfmtout:`fe80::1234/64`
+
+ This accepts the following types:
+
+ - :c:struct:`prefix`
+ - :c:struct:`prefix_ipv4`
+ - :c:struct:`prefix_ipv6`
+ - :c:struct:`prefix_eth`
+ - :c:struct:`prefix_evpn`
+ - :c:struct:`prefix_fs`
+
+ It does **not** accept the following types:
+
+ - :c:struct:`prefix_ls`
+ - :c:struct:`prefix_rd`
+ - :c:struct:`prefix_sg` (use :frrfmt:`%pPSG4`)
+ - :c:union:`prefixptr` (dereference to get :c:struct:`prefix`)
+ - :c:union:`prefixconstptr` (dereference to get :c:struct:`prefix`)
+
+ Options:
+
+ ``%pFXh``: (address only) :frrfmtout:`1.2.3.0` / :frrfmtout:`fe80::1234`
+
+.. frrfmt:: %pPSG4 (struct prefix_sg *)
+
+ :frrfmtout:`(*,1.2.3.4)`
+
+ This is *(S,G)* output for use in zebra. (Note prefix_sg is not a prefix
+ "subclass" like the other prefix_* structs.)
+
+.. frrfmt:: %pSU (union sockunion *)
+
+ ``%pSU``: :frrfmtout:`1.2.3.4` / :frrfmtout:`fe80::1234`
+
+ ``%pSUs``: :frrfmtout:`1.2.3.4` / :frrfmtout:`fe80::1234%89`
+ (adds IPv6 scope ID as integer)
+
+ ``%pSUp``: :frrfmtout:`1.2.3.4:567` / :frrfmtout:`[fe80::1234]:567`
+ (adds port)
+
+ ``%pSUps``: :frrfmtout:`1.2.3.4:567` / :frrfmtout:`[fe80::1234%89]:567`
+ (adds port and scope ID)
+
+.. frrfmt:: %pRN (struct route_node *, struct bgp_node *, struct agg_node *)
+
+ :frrfmtout:`192.168.1.0/24` (dst-only node)
+
+ :frrfmtout:`2001:db8::/32 from fe80::/64` (SADR node)
+
+.. frrfmt:: %pNH (struct nexthop *)
+
+ ``%pNHvv``: :frrfmtout:`via 1.2.3.4, eth0` — verbose zebra format
+
+ ``%pNHv``: :frrfmtout:`1.2.3.4, via eth0` — slightly less verbose zebra format
+
+ ``%pNHs``: :frrfmtout:`1.2.3.4 if 15` — same as :c:func:`nexthop2str()`
+
+ ``%pNHcg``: :frrfmtout:`1.2.3.4` — compact gateway only
+
+ ``%pNHci``: :frrfmtout:`eth0` — compact interface only
+
+.. frrfmt:: %dPF (int)
+
+ :frrfmtout:`AF_INET`
+
+ Prints an `AF_*` / `PF_*` constant. ``PF`` is used here to avoid confusion
+ with `AFI` constants, even though the FRR codebase prefers `AF_INET` over
+ `PF_INET` & co.
+
+.. frrfmt:: %dSO (int)
+
+ :frrfmtout:`SOCK_STREAM`
+
+Time/interval formats
+^^^^^^^^^^^^^^^^^^^^^
+
+.. frrfmt:: %pTS (struct timespec *)
+
+.. frrfmt:: %pTV (struct timeval *)
+
+.. frrfmt:: %pTT (time_t *)
+
+ The above three formats internally call the same code, support the same
+ flags, and produce identical output, with one exception: ``%pTT``
+ has no sub-second precision and the formatter will never print a
+ (nonsensical) ``.000``.
+
+ Exactly one of ``I``, ``M`` or ``R`` must immediately follow after
+ ``TS``/``TV``/``TT`` to specify whether the input is an interval, monotonic
+ timestamp or realtime timestamp:
+
+ ``%pTVI``: input is an interval, not a timestamp. Print interval.
+
+ ``%pTVIs``: input is an interval, convert to wallclock by subtracting it
+ from current time (i.e. interval has passed **s**\ ince.)
+
+ ``%pTVIu``: input is an interval, convert to wallclock by adding it to
+ current time (i.e. **u**\ ntil interval has passed.)
+
+ ``%pTVM``: input is a timestamp on CLOCK_MONOTONIC, convert to wallclock
+ time (by grabbing current CLOCK_MONOTONIC and CLOCK_REALTIME and doing the
+ math) and print the calendar date.
+
+ ``%pTVMs``: input is a timestamp on CLOCK_MONOTONIC, print interval
+ **s**\ ince that timestamp (elapsed.)
+
+ ``%pTVMu``: input is a timestamp on CLOCK_MONOTONIC, print interval
+ **u**\ ntil that timestamp (deadline.)
+
+ ``%pTVR``: input is a timestamp on CLOCK_REALTIME, print the calendar date.
+
+ ``%pTVRs``: input is a timestamp on CLOCK_REALTIME, print interval
+ **s**\ ince that timestamp.
+
+ ``%pTVRu``: input is a timestamp on CLOCK_REALTIME, print interval
+ **u**\ ntil that timestamp.
+
+ ``%pTVA``: reserved for CLOCK_TAI in case a PTP implementation is
+ interfaced to FRR. Not currently implemented.
+
+ .. note::
+
+ If ``%pTVRs`` or ``%pTVRu`` are used, this is generally an indication
+ that a CLOCK_MONOTONIC timestamp should be used instead (or added in
+ parallel.) CLOCK_REALTIME might be adjusted by NTP, PTP or similar
+ procedures, causing bogus intervals to be printed.
+
+ ``%pTVM`` on first look might be assumed to have the same problem, but
+ on closer thought the assumption is always that current system time is
+ correct. And since a CLOCK_MONOTONIC interval is also quite safe to
+ assume to be correct, the (past) absolute timestamp to be printed from
+ this can likely be correct even if it doesn't match what CLOCK_REALTIME
+ would have indicated at that point in the past. However, this logic
+ does not quite work for *future* times.
+
+ Generally speaking, almost all use cases in FRR should (and do) use
+ CLOCK_MONOTONIC (through :c:func:`monotime()`.)
+
+ Flags common to printing calendar times and intervals:
+
+ ``p``: include spaces in appropriate places (depends on selected format.)
+
+ ``%p.3TV...``: specify sub-second resolution (use with ``FMT_NSTD`` to
+ suppress gcc warning.) As noted above, ``%pTT`` will never print sub-second
+ digits since there are none. Only some formats support printing sub-second
+ digits and the default may vary.
+
+ The following flags are available for printing calendar times/dates:
+
+ (no flag): :frrfmtout:`Sat Jan 1 00:00:00 2022` - print output from
+ ``ctime()``, in local time zone. Since FRR does not currently use/enable
+ locale support, this is always the C locale. (Locale support getting added
+ is unlikely for the time being and would likely break other things worse
+ than this.)
+
+ ``i``: :frrfmtout:`2022-01-01T00:00:00.123` - ISO8601 timestamp in local
+ time zone (note there is no ``Z`` or ``+00:00`` suffix.) Defaults to
+ millisecond precision.
+
+ ``ip``: :frrfmtout:`2022-01-01 00:00:00.123` - use readable form of ISO8601
+ with space instead of ``T`` separator.
+
+ The following flags are available for printing intervals:
+
+ (no flag): :frrfmtout:`9w9d09:09:09.123` - does not match any
+ preexisting format; added because it does not lose precision (like ``t``)
+ for longer intervals without printing huge numbers (like ``h``/``m``).
+ Defaults to millisecond precision. The week/day fields are left off if
+ they're zero, ``p`` adds a space after the respective letter.
+
+ ``t``: :frrfmtout:`9w9d09h`, :frrfmtout:`9d09h09m`, :frrfmtout:`09:09:09` -
+ this replaces :c:func:`frrtime_to_interval()`. ``p`` adds spaces after
+ week/day/hour letters.
+
+ ``d``: print decimal number of seconds. Defaults to millisecond precision.
+
+ ``x`` / ``tx`` / ``dx``: Like no flag / ``t`` / ``d``, but print
+ :frrfmtout:`-` for zero or negative intervals (for use with unset timers.)
+
+ ``h``: :frrfmtout:`09:09:09`
+
+ ``hx``: :frrfmtout:`09:09:09`, :frrfmtout:`--:--:--` - this replaces
+ :c:func:`pim_time_timer_to_hhmmss()`.
+
+ ``m``: :frrfmtout:`09:09`
+
+ ``mx``: :frrfmtout:`09:09`, :frrfmtout:`--:--` - this replaces
+ :c:func:`pim_time_timer_to_mmss()`.
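+
+ For example, printing how long ago a monotonic timestamp was taken
+ (``peer->uptime`` is a stand-in for a ``struct timeval`` holding a
+ CLOCK_MONOTONIC timestamp)::
+
+    zlog_info("session established %pTVMs ago", &peer->uptime);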
+
+FRR library helper formats
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. frrfmt:: %pTH (struct event *)
+
+ Print remaining time on timer event. Interval-printing flag characters
+ listed above for ``%pTV`` can be added, e.g. ``%pTHtx``.
+
+ ``NULL`` pointers are printed as ``-``.
+
+.. frrfmt:: %pTHD (struct event *)
+
+ Print debugging information for given event. Sample output:
+
+ .. code-block:: none
+
+ {(thread *)NULL}
+ {(thread *)0x55a3b5818910 arg=0x55a3b5827c50 timer r=7.824 mld_t_query() &mld_ifp->t_query from pimd/pim6_mld.c:1369}
+ {(thread *)0x55a3b5827230 arg=0x55a3b5827c50 read fd=16 mld_t_recv() &mld_ifp->t_recv from pimd/pim6_mld.c:1186}
+
+ (The output is aligned to some degree.)
+
+FRR daemon specific formats
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following formats are only available in specific daemons, as the code
+implementing them is part of the daemon, not the library.
+
+zebra
+"""""
+
+.. frrfmt:: %pZN (struct route_node *)
+
+ Print information for a RIB node, including zebra-specific data.
+
+ :frrfmtout:`::/0 src fe80::/64 (MRIB)` (``%pZN``)
+
+ :frrfmtout:`1234` (``%pZNt`` - table number)
+
+bgpd
+""""
+
+.. frrfmt:: %pBD (struct bgp_dest *)
+
+ Print prefix for a BGP destination.
+
+ :frrfmtout:`fe80::1234/64`
+
+.. frrfmt:: %pBP (struct peer *)
+
+ :frrfmtout:`192.168.1.1(leaf1.frrouting.org)`
+
+ Print BGP peer's IP and hostname together.
+
+pimd/pim6d
+""""""""""
+
+.. frrfmt:: %pPA (pim_addr *)
+
+ Format IP address according to IP version (pimd vs. pim6d) being compiled.
+
+ :frrfmtout:`fe80::1234` / :frrfmtout:`10.0.0.1`
+
+ :frrfmtout:`*` (``%pPAs`` - replace 0.0.0.0/:: with star)
+
+.. frrfmt:: %pSG (pim_sgaddr *)
+
+ Format S,G pair according to IP version (pimd vs. pim6d) being compiled.
+ Braces are included.
+
+ :frrfmtout:`(*,224.0.0.0)`
+
+
+General utility formats
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. frrfmt:: %m (no argument)
+
+ :frrfmtout:`Permission denied`
+
+ Prints ``strerror(errno)``. Does **not** consume any input argument, don't
+ pass ``errno``!
+
+ (This is a GNU extension not specific to FRR. FRR guarantees it is
+ available on all systems in printfrr, though BSDs support it in printf too.)
+
+.. frrfmt:: %pSQ (char *)
+
+ ([S]tring [Q]uote.) Like ``%s``, but produce a quoted string. Options:
+
+ ``n`` - treat ``NULL`` as empty string instead.
+
+ ``q`` - include ``""`` quotation marks. Note: ``NULL`` is printed as
+ ``(null)``, not ``"(null)"`` unless ``n`` is used too. This is
+ intentional.
+
+ ``s`` - use escaping suitable for RFC5424 syslog. This means ``]`` is
+ escaped too.
+
+ If a length is specified (``%*pSQ`` or ``%.*pSQ``), null bytes in the input
+ string do not end the string and are just printed as ``\x00``.
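+
+ For example, quoting untrusted input while mapping ``NULL`` to ``""``
+ (``hostname`` is a stand-in variable)::
+
+    zlog_info("peer hostname: %pSQqn", hostname);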
+
+.. frrfmt:: %pSE (char *)
+
+ ([S]tring [E]scape.) Like ``%s``, but escape special characters.
+ Options:
+
+ ``n`` - treat ``NULL`` as empty string instead.
+
+ Unlike :frrfmt:`%pSQ`, this escapes many more characters that are fine for
+ a quoted string but not on their own.
+
+ If a length is specified (``%*pSE`` or ``%.*pSE``), null bytes in the input
+ string do not end the string and are just printed as ``\x00``.
+
+.. frrfmt:: %pVA (struct va_format *)
+
+ Recursively invoke printfrr, with arguments passed in through:
+
+ .. c:struct:: va_format
+
+ .. c:member:: const char *fmt
+
+ Format string to use for the recursive printfrr call.
+
+ .. c:member:: va_list *va
+
+ Formatting arguments. Note this is passed as a pointer, not - as in
+ most other places - a direct struct reference. Internally uses
+ ``va_copy()`` so repeated calls can be made (e.g. for determining
+ output length.)
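+
+ For example, a custom logging wrapper can hand its ``va_list`` through to
+ an inner format string (a sketch; ``my_log_prefix`` is made up for this
+ example)::
+
+    void my_log_prefix(const char *prefix, const char *fmt, ...)
+    {
+            struct va_format vaf;
+            va_list ap;
+
+            va_start(ap, fmt);
+            vaf.fmt = fmt;
+            vaf.va = &ap;
+            zlog_info("%s: %pVA", prefix, &vaf);
+            va_end(ap);
+    }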
+
+.. frrfmt:: %pFB (struct fbuf *)
+
+ Insert text from a ``struct fbuf *``, i.e. the output of a call to
+ :c:func:`bprintfrr()`.
+
+.. frrfmt:: %*pHX (void *, char *, unsigned char *)
+
+ ``%pHX``: :frrfmtout:`12 34 56 78`
+
+ ``%pHXc``: :frrfmtout:`12:34:56:78` (separate with [c]olon)
+
+ ``%pHXn``: :frrfmtout:`12345678` (separate with [n]othing)
+
+ Insert hexdump. This specifier requires a precision or width to be
+ specified. A precision (``%.*pHX``) takes precedence, but generates a
+ compiler warning since precisions are undefined for ``%p`` in ISO C. If
+ no precision is given, the width is used instead (and normal handling of
+ the width is suppressed).
+
+ Note that width and precision are ``int`` arguments, not ``size_t``. Use
+ like::
+
+ char *buf;
+ size_t len;
+
+ snprintfrr(out, sizeof(out), "... %*pHX ...", (int)len, buf);
+
+ /* with padding to width - would generate a warning due to %.*p */
+ FMT_NSTD(snprintfrr(out, sizeof(out), "... %-47.*pHX ...", (int)len, buf));
+
+.. frrfmt:: %*pHS (void *, char *, unsigned char *)
+
+ ``%pHS``: :frrfmtout:`hex.dump`
+
+ This is a complementary format for :frrfmt:`%*pHX` to print the text
+ representation for a hexdump. Non-printable characters are replaced with
+ a dot.
+
+.. frrfmt:: %pIS (struct iso_address *)
+
+ ([IS]o Network address) - Format ISO Network Address
+
+ ``%pIS``: :frrfmtout:`01.0203.0405`
+ The ISO Network address is printed as separated bytes. The number of bytes
+ in the address is embedded in the ``iso_address`` structure.
+
+ ``%pISl``: :frrfmtout:`01.0203.0405.0607.0809.1011.1213.14` - long format
+ that prints the long version of the ISO Network address, which includes the
+ System ID and the PSEUDO-ID of the IS-IS system
+
+ Note that the ``ISO_ADDR_STRLEN`` define gives the maximum length of the
+ string, for use in conjunction with ``snprintfrr``. Use like::
+
+ char buf[ISO_ADDR_STRLEN];
+ struct iso_address addr = {.addr_len = 4, .area_addr = {1, 2, 3, 4}};
+ snprintfrr(buf, ISO_ADDR_STRLEN, "%pIS", &addr);
+
+.. frrfmt:: %pSY (uint8_t *)
+
+ (IS-IS [SY]stem ID) - Format IS-IS System ID
+
+ ``%pSY``: :frrfmtout:`0102.0304.0506`
+
+.. frrfmt:: %pPN (uint8_t *)
+
+ (IS-IS [P]seudo [N]ode System ID) - Format IS-IS Pseudo Node System ID
+
+ ``%pPN``: :frrfmtout:`0102.0304.0506.07`
+
+.. frrfmt:: %pLS (uint8_t *)
+
+ (IS-IS [L]sp fragment [S]ystem ID) - Format IS-IS LSP fragment System ID
+
+ ``%pLS``: :frrfmtout:`0102.0304.0506.07-08`
+
+ Note that the ``ISO_SYSID_STRLEN`` define gives the maximum length of the
+ string, for use in conjunction with ``snprintfrr``. Use like::
+
+ char buf[ISO_SYSID_STRLEN];
+ uint8_t id[8] = {1, 2, 3, 4, 5, 6, 7, 8};
+ snprintfrr(buf, ISO_SYSID_STRLEN, "%pSY", id);
+
+
+Integer formats
+^^^^^^^^^^^^^^^
+
+.. note::
+
+ These formats currently only exist for advanced type checking with the
+ ``frr-format`` GCC plugin. They should not be used directly since they will
+ cause compiler warnings when used without the plugin. Use with
+ :c:macro:`FMT_NSTD` if necessary.
+
+ It is possible ISO C23 may introduce another format for these, possibly
+ ``%w64d`` discussed in `JTC 1/SC 22/WG 14/N2680 <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2680.pdf>`_.
+
+.. frrfmt:: %Lu (uint64_t)
+
+ :frrfmtout:`12345`
+
+.. frrfmt:: %Ld (int64_t)
+
+ :frrfmtout:`-12345`
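+
+For example, wrapped in :c:macro:`FMT_NSTD` to suppress the format warning
+when building without the ``frr-format`` plugin (``buf`` and ``pkts`` are
+stand-ins, with ``pkts`` a ``uint64_t``)::
+
+   FMT_NSTD(snprintfrr(buf, sizeof(buf), "%Lu packets", pkts));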
+
+Log levels
+----------
+
+Errors and warnings
+^^^^^^^^^^^^^^^^^^^
+
+If it is something that the user will want to look at and maybe do
+something, it is either an **error** or a **warning**.
+
+We're expecting that warnings and errors are in some way visible to the
+user (in the worst case by looking at the log after the network broke, but
+maybe by a syslog collector from all routers.) Therefore, anything that
+needs to get the user in the loop—and only these things—is a warning or
+an error.
+
+Note that this doesn't necessarily mean the user needs to fix something in
+the FRR instance. It also includes when we detect something else needs
+fixing, for example another router, the system we're running on, or the
+configuration. The common point is that the user should probably do
+*something*.
+
+Deciding between a warning and an error is slightly less obvious; the rule
+of thumb here is that an error will cause considerable fallout beyond its
+direct effect. Closing a BGP session due to a malformed update is an error
+since all routes from the peer are dropped; discarding one route because
+its attributes don't make sense is a warning.
+
+This also loosely corresponds to the kind of reaction we're expecting from
+the user. An error is likely to need immediate response while a warning
+might be snoozed for a bit and addressed as part of general maintenance.
+If a problem will self-repair (e.g. by retransmits), it should be a
+warning—unless the impact until that self-repair is very harsh.
+
+Examples for warnings:
+
+* a BGP update, LSA or LSP could not be processed, but operation is
+ proceeding and the broken pieces are likely to self-fix later
+* some kind of controller cannot be reached, but we can work without it
+* another router is using some unknown or unsupported capability
+
+Examples for errors:
+
+* dropping a BGP session due to malformed data
+* a socket for routing protocol operation cannot be opened
+* desynchronization from network state because something went wrong
+* *everything that we as developers would really like to be notified about,
+ i.e. some assumption in the code isn't holding up*
+
+
+Informational messages
+^^^^^^^^^^^^^^^^^^^^^^
+
+Anything that provides introspection to the user during normal operation
+is an **info** message.
+
+This includes all kinds of operational state transitions and events,
+especially if they might be interesting to the user during the course of
+figuring out a warning or an error.
+
+By itself, these messages should mostly be statements of fact. They might
+indicate the order and relationship in which things happened. Also covered
+are conditions that might be "operational issues" like a link failure due
+to an unplugged cable. If handling it is pretty much what a routing
+daemon is run for, it's not a warning or an error, just business as usual.
+
+The user should be able to see the state of these bits from operational
+state output, i.e. `show interface` or `show foobar neighbors`. The log
+message indicating the change may have been printed weeks ago, but the
+state can always be viewed. (If some state change has an info message but
+no "show" command, maybe that command needs to be added.)
+
+Examples:
+
+* all kinds of up/down state changes
+
+ * interface coming up or going down
+ * addresses being added or deleted
+ * peers and neighbors coming up or going down
+
+* rejection of some routes due to user-configured route maps
+* backwards compatibility handling because another system on the network
+ has a different or smaller feature set
+
+.. note::
+ The previously used **notify** priority is replaced with *info* in all
+ cases. We don't currently have a well-defined use case for it.
+
+
+Debug messages and asserts
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Everything that is only interesting on-demand, or only while developing,
+is a **debug** message. It might be interesting to the user for a
+particularly evasive issue, but in general these are details that an
+average user might not even be able to make sense of.
+
+Most (or all?) debug messages should be behind a `debug foobar` category
+switch that controls which subset of these messages is currently
+interesting and thus printed. If a debug message doesn't have such a
+guard, there should be a good explanation as to why.
+
+Conversely, debug messages are the only thing that should be guarded by
+these switches. Neither info, warning, nor error messages should be
+hidden in this way.
+
+**Asserts** should only be used as pretty crashes. We are expecting that
+asserts remain enabled in production builds, but please try to not use
+asserts in a way that would cause a security problem if the assert wasn't
+there (i.e. don't use them for length checks.)
+
+The purpose of asserts is mainly to help development and bug hunting. If
+the daemon crashes, then having some more information is nice, and the
+assert can provide crucial hints that cut down on the time needed to track
+an issue. That said, if the issue can be reasonably handled and/or isn't
+going to crash the daemon, it shouldn't be an assert.
+
+For anything else where internal constraints are violated but we're not
+breaking due to it, it's an error instead (not a debug.) These require
+"user action" of notifying the developers.
+
+Examples:
+
+* mismatched :code:`prev`/:code:`next` pointers in lists
+* some field that is absolutely needed is :code:`NULL`
+* any other kind of data structure corruption that will cause the daemon
+ to crash sooner or later, one way or another
+
+Thread-local buffering
+----------------------
+
+The core logging code in :file:`lib/zlog.c` allows setting up per-thread log
+message buffers in order to improve logging performance. The following rules
+apply for this buffering:
+
+* Only messages of priority *DEBUG* or *INFO* are buffered.
+* Any higher-priority message causes the thread's entire buffer to be flushed,
+ thus message ordering is preserved on a per-thread level.
+* There is no guarantee on ordering between different threads; in most cases
+ this is arbitrary to begin with since the threads essentially race each
+ other in printing log messages. If an order is established with some
+ synchronization primitive, add calls to :c:func:`zlog_tls_buffer_flush()`.
+* The buffers are only ever accessed by the thread they are created by. This
+ means no locking is necessary.
+
+Both the main/default thread and additional threads created by
+:c:func:`frr_pthread_new()` with the default :c:func:`frr_run()` handler will
+initialize thread-local buffering and call :c:func:`zlog_tls_buffer_flush()`
+when idle.
+
+If some piece of code runs for an extended period, it may be useful to insert
+calls to :c:func:`zlog_tls_buffer_flush()` in appropriate places:
+
+.. c:function:: void zlog_tls_buffer_flush(void)
+
+ Write out any pending log messages that the calling thread may have in its
+ buffer. This function is safe to call regardless of the per-thread log
+ buffer being set up / in use or not.
+
+When working with threads that do not use the :c:struct:`thread_master`
+event loop, per-thread buffers can be managed with:
+
+.. c:function:: void zlog_tls_buffer_init(void)
+
+ Set up thread-local buffering for log messages. This function may be
+ called repeatedly without adverse effects, but remember to call
+ :c:func:`zlog_tls_buffer_fini()` at thread exit.
+
+ .. warning::
+
+ If this function is called, but :c:func:`zlog_tls_buffer_flush()` is
+ not used, log message output will lag behind since messages will only be
+ written out when the buffer is full.
+
+ Exiting the thread without calling :c:func:`zlog_tls_buffer_fini()`
+ will cause buffered log messages to be lost.
+
+.. c:function:: void zlog_tls_buffer_fini(void)
+
+ Flush pending messages and tear down thread-local log message buffering.
+ This function may be called repeatedly regardless of whether
+ :c:func:`zlog_tls_buffer_init()` was ever called.
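+
+A minimal sketch of a hand-rolled worker thread using these functions
+(``run`` and ``do_work()`` are stand-ins)::
+
+   static void *worker(void *arg)
+   {
+           zlog_tls_buffer_init();
+
+           while (run) {
+                   do_work();               /* may call zlog_*() */
+                   zlog_tls_buffer_flush(); /* push out buffered messages */
+           }
+
+           zlog_tls_buffer_fini();
+           return NULL;
+   }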
+
+Log targets
+-----------
+
+The actual logging subsystem (in :file:`lib/zlog.c`) is heavily separated
+from the actual log writers. It uses an atomic linked-list (`zlog_targets`)
+with RCU to maintain the log targets to be called. This list is intended to
+function as "backend" only; it **is not used for configuration**.
+
+Logging targets provide their configuration layer on top of this and maintain
+their own capability to enumerate and store their configuration. Some targets
+(e.g. syslog) are inherently single instance and just stuff their config in
+global variables. Others (e.g. file/fd output) are multi-instance capable.
+There is another layer boundary here between these and the VTY configuration
+that they use.
+
+Basic internals
+^^^^^^^^^^^^^^^
+
+.. c:struct:: zlog_target
+
+ This struct needs to be filled in by any log target and then passed to
+ :c:func:`zlog_target_replace()`. After it has been registered,
+ **RCU semantics apply**. Most changes to associated data should make a
+ copy, change that, and then replace the entire struct.
+
+ Additional per-target data should be "appended" by embedding this struct
+ into a larger one, for use with `containerof()`, and
+ :c:func:`zlog_target_clone()` and :c:func:`zlog_target_free()` should be
+ used to allocate/free the entire container struct.
+
+ Do not use this structure to maintain configuration. It should only
+ contain (a copy of) the data needed to perform the actual logging. For
+ example, the syslog target uses this:
+
+ .. code-block:: c
+
+ struct zlt_syslog {
+ struct zlog_target zt;
+ int syslog_facility;
+ };
+
+ static void zlog_syslog(struct zlog_target *zt, struct zlog_msg *msgs[], size_t nmsgs)
+ {
+ struct zlt_syslog *zte = container_of(zt, struct zlt_syslog, zt);
+ size_t i;
+
+ for (i = 0; i < nmsgs; i++)
+ if (zlog_msg_prio(msgs[i]) <= zt->prio_min)
+ syslog(zlog_msg_prio(msgs[i]) | zte->syslog_facility, "%s",
+ zlog_msg_text(msgs[i], NULL));
+ }
+
+
+.. c:function:: struct zlog_target *zlog_target_clone(struct memtype *mt, struct zlog_target *oldzt, size_t size)
+
+ Allocates a logging target struct. Note that the ``oldzt`` argument may
+ be ``NULL`` to allocate a new target from scratch. If ``oldzt`` is not
+ ``NULL``, the generic bits in :c:struct:`zlog_target` are copied. **Target
+ specific bits are not copied.**
+
+.. c:function:: struct zlog_target *zlog_target_replace(struct zlog_target *oldzt, struct zlog_target *newzt)
+
+ Adds, replaces or deletes a logging target (either ``oldzt`` or ``newzt`` may be ``NULL``.)
+
+ Returns ``oldzt`` for freeing. The target remains possibly in use by
+ other threads until the RCU cycle ends. This implies you cannot release
+ resources (e.g. memory, file descriptors) immediately.
+
+ The replace operation is not atomic; for a brief period it is possible that
+ messages are delivered on both ``oldzt`` and ``newzt``.
+
+ .. warning::
+
+ ``oldzt`` must remain **functional** until the RCU cycle ends.
+
+.. c:function:: void zlog_target_free(struct memtype *mt, struct zlog_target *zt)
+
+ Counterpart to :c:func:`zlog_target_clone()`, frees a target (using RCU.)
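+
+Putting these together, reconfiguring a target generally looks like the
+following sketch (reusing the ``zlt_syslog`` example above; ``old`` and
+``MTYPE_MY_LOG`` are stand-ins for this illustration)::
+
+   struct zlog_target *oldzt = old ? &old->zt : NULL;
+   struct zlt_syslog *new;
+
+   /* copy the generic bits, then fill in target-specific state */
+   new = container_of(zlog_target_clone(MTYPE_MY_LOG, oldzt, sizeof(*new)),
+                      struct zlt_syslog, zt);
+   new->zt.prio_min = LOG_INFO;
+   new->zt.logfn = zlog_syslog;
+   new->syslog_facility = LOG_DAEMON;
+
+   /* swap in the new target; the old one may stay in use until the
+    * RCU cycle ends, so it is freed via RCU rather than immediately */
+   oldzt = zlog_target_replace(oldzt, &new->zt);
+   if (oldzt)
+           zlog_target_free(MTYPE_MY_LOG, oldzt);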
+
+.. c:member:: void (*zlog_target.logfn)(struct zlog_target *zt, struct zlog_msg *msgs[], size_t nmsg)
+
+ Called on a target to deliver "normal" logging messages. ``msgs`` is an
+ array of opaque structs containing the actual message. Use ``zlog_msg_*``
+ functions to access message data (this is done to allow some optimizations,
+ e.g. lazy formatting the message text and timestamp as needed.)
+
+ .. note::
+
+ ``logfn()`` must check each individual message's priority value against
+ the configured ``prio_min``. While the ``prio_min`` field is common to
+ all targets and used by the core logging code to early-drop unneeded log
+ messages, the array is **not** filtered for each ``logfn()`` call.
+
+.. c:member:: void (*zlog_target.logfn_sigsafe)(struct zlog_target *zt, const char *text, size_t len)
+
+ Called to deliver "exception" logging messages (i.e. SEGV messages.)
+ Must be Async-Signal-Safe (may not allocate memory or call "complicated"
+ libc functions.) May be ``NULL`` if the log target cannot handle this.
+
+Standard targets
+^^^^^^^^^^^^^^^^
+
+:file:`lib/zlog_targets.c` provides the standard file / fd / syslog targets.
+The syslog target is single-instance while file / fd targets can be
+instantiated as needed. There are 3 built-in targets that are fully
+autonomous without any config:
+
+- startup logging to `stderr`, until either :c:func:`zlog_startup_end()` or
+ :c:func:`zlog_aux_init()` is called.
+- stdout logging for non-daemon programs using :c:func:`zlog_aux_init()`
+- crashlogs written to :file:`/var/tmp/frr.daemon.crashlog`
+
+The regular CLI/command-line logging setup is handled by :file:`lib/log_vty.c`
+which makes the appropriate instantiations of syslog / file / fd targets.
+
+.. todo::
+
+ :c:func:`zlog_startup_end()` should do an explicit switchover from
+ startup stderr logging to configured logging. Currently, configured logging
+ starts in parallel as soon as the respective setup is executed. This results
+ in some duplicate logging.
diff --git a/doc/developer/memtypes.rst b/doc/developer/memtypes.rst
new file mode 100644
index 0000000..2e181c4
--- /dev/null
+++ b/doc/developer/memtypes.rst
@@ -0,0 +1,140 @@
+.. highlight:: c
+
+Memtypes
+========
+
+FRR includes wrappers around ``malloc()`` and ``free()`` that count the number
+of objects currently allocated, for each of a defined ``MTYPE``.
+
+To this extent, there are *memory groups* and *memory types*. Each memory
+type must belong to a memory group, this is used just to provide some basic
+structure.
+
+Example:
+
+.. code-block:: c
+ :caption: mydaemon.h
+
+ DECLARE_MGROUP(MYDAEMON);
+ DECLARE_MTYPE(MYNEIGHBOR);
+
+.. code-block:: c
+ :caption: mydaemon.c
+
+   DEFINE_MGROUP(MYDAEMON, "My daemon's memory");
+   DEFINE_MTYPE(MYDAEMON, MYNEIGHBOR, "Neighbor entry");
+   DEFINE_MTYPE_STATIC(MYDAEMON, MYNEIGHBORNAME, "Neighbor name");
+
+   struct neigh *neighbor_new(const char *name)
+   {
+           struct neigh *n = XMALLOC(MTYPE_MYNEIGHBOR, sizeof(*n));
+
+           n->name = XSTRDUP(MTYPE_MYNEIGHBORNAME, name);
+           return n;
+   }
+
+   void neighbor_free(struct neigh *n)
+   {
+           XFREE(MTYPE_MYNEIGHBORNAME, n->name);
+           XFREE(MTYPE_MYNEIGHBOR, n);
+   }
+
+
+Definition
+----------
+
+.. c:struct:: memtype
+
+ This is the (internal) type used for MTYPE definitions. The macros below
+ should be used to create these, but in some cases it is useful to pass a
+ ``struct memtype *`` pointer to some helper function.
+
+ The ``MTYPE_name`` created by the macros is declared as a pointer, i.e.
+ a function taking a ``struct memtype *`` argument can be called with an
+ ``MTYPE_name`` argument (as opposed to ``&MTYPE_name``.)
+
+ .. note::
+
+ As ``MTYPE_name`` is a variable assigned from ``&_mt_name`` and not a
+ constant expression, it cannot be used as an initializer for static
+ variables. In that case, fall back to ``&_mt_name``.
+
+.. c:macro:: DECLARE_MGROUP(name)
+
+ This macro forward-declares a memory group and should be placed in a
+ ``.h`` file. It expands to an ``extern struct memgroup`` statement.
+
+.. c:macro:: DEFINE_MGROUP(mname, description)
+
+ Defines/implements a memory group. Must be placed into exactly one ``.c``
+ file (multiple inclusion will result in a link-time symbol conflict).
+
+ Contains additional logic (constructor and destructor) to register the
+ memory group in a global list.
+
+.. c:macro:: DECLARE_MTYPE(name)
+
+ Forward-declares a memory type and makes ``MTYPE_name`` available for use.
+ Note that the ``MTYPE_`` prefix must not be included in the name, it is
+ automatically prefixed.
+
+ ``MTYPE_name`` is created as a `static const` symbol, i.e. a compile-time
+ constant. It refers to an ``extern struct memtype _mt_name``, where `name`
+ is replaced with the actual name.
+
+.. c:macro:: DEFINE_MTYPE(group, name, description)
+
+ Define/implement a memory type, must be placed into exactly one ``.c``
+ file (multiple inclusion will result in a link-time symbol conflict).
+
+ Like ``DEFINE_MGROUP``, this contains actual code to register the MTYPE
+ under its group.
+
+.. c:macro:: DEFINE_MTYPE_STATIC(group, name, description)
+
+ Same as ``DEFINE_MTYPE``, but the ``DEFINE_MTYPE_STATIC`` variant places
+ the C ``static`` keyword on the definition, restricting the MTYPE's
+ availability to the current source file. This should be appropriate in
+ >80% of cases.
+
+ .. todo::
+
+ Daemons currently have ``daemon_memory.[ch]`` files listing all of
+ their MTYPEs. This is not how it should be, most of these types
+ should be moved into the appropriate files where they are used.
+ Only a few MTYPEs should remain non-static after that.
+
+
+Usage
+-----
+
+.. c:function:: void *XMALLOC(struct memtype *mtype, size_t size)
+
+.. c:function:: void *XCALLOC(struct memtype *mtype, size_t size)
+
+.. c:function:: void *XSTRDUP(struct memtype *mtype, const char *name)
+
+ Allocation wrappers for ``malloc()``/``calloc()``/``strdup()``, taking an
+ extra mtype parameter.
+
+.. c:function:: void *XREALLOC(struct memtype *mtype, void *ptr, size_t size)
+
+ Wrapper around realloc() with MTYPE tracking. Note that ``ptr`` may
+ be NULL, in which case the function does the same as XMALLOC (regardless
+ of whether the system realloc() supports this.)
+
+.. c:function:: void XFREE(struct memtype *mtype, void *ptr)
+
+ Wrapper around free(), again taking an extra mtype parameter. This is
+ actually a macro, with the following additional properties:
+
+ - the macro contains ``ptr = NULL``
+ - if ptr is NULL, no operation is performed (as is guaranteed by system
+ implementations.) Do not surround XFREE with ``if (ptr != NULL)``
+ checks.
+
+.. c:function:: void XCOUNTFREE(struct memtype *mtype, void *ptr)
+
+ This macro is used to count the ``ptr`` as freed without actually freeing
+ it. This may be needed in some very specific cases, for example, when the
+ ``ptr`` was allocated using any of the above wrappers and will be freed
+ by some external library using simple ``free()``.
diff --git a/doc/developer/mgmtd-dev.rst b/doc/developer/mgmtd-dev.rst
new file mode 100644
index 0000000..9839aa8
--- /dev/null
+++ b/doc/developer/mgmtd-dev.rst
@@ -0,0 +1,222 @@
+..
+.. SPDX-License-Identifier: GPL-2.0-or-later
+..
+.. June 19 2023, Christian Hopps <chopps@labn.net>
+..
+.. Copyright (c) 2023, LabN Consulting, L.L.C.
+..
+
+.. _mgmtd_dev:
+
+MGMTD Development
+=================
+
+Overview
+^^^^^^^^
+
+``mgmtd`` (Management Daemon) is a new centralized management daemon for FRR.
+
+Previously, ``vtysh`` was the only centralized management service provided.
+Internally ``vtysh`` connects to each daemon and sends CLI commands (both
+configuration and operational state queries) over a socket connection. This
+service only supports CLI which is no longer sufficient.
+
+An important next step was made with the addition of YANG support. A YANG
+infrastructure was added through a new development called *northbound*. This
+*northbound* interface added the capability of daemons to be configured and
+queried using YANG models. However, this interface was per daemon and not
+centralized, which is not sufficient.
+
+``mgmtd`` harnesses this new *northbound* interface to provide a centralized
+interface for all daemons. It utilizes the daemons' YANG models to interact
+with each daemon. ``mgmtd`` currently provides the CLI interface for each
+daemon that has been converted to it, but in the future RESTCONF and NETCONF
+servers can easily be added as *front-ends* to mgmtd to support those
+protocols as well.
+
+
+Converting A Daemon to MGMTD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A daemon must first be transitioned to the new *northbound* interface if that
+has not already been done (see `this northbound conversion documentation
+<https://github.com/opensourcerouting/frr/wiki/Retrofitting-Configuration-Commands>`_
+for how to do this). Once this is done, a few simple steps are all that is
+required to move the daemon over to ``mgmtd`` control.
+
+Overview of Changes
+-------------------
+
+Adding support for a *northbound* converted daemon involves very little work. It
+requires enabling *frontend* (CLI and YANG) and *backend* (YANG) support.
+``mgmtd`` was designed to keep this as simple as possible.
+
+Front-End Interface:
+
+1. Add YANG module file to ``mgmtd/subdir.am`` (e.g., ``yang/frr-staticd.yang.c``)
+2. Add YANG module description into array defined in ``mgmtd/mgmt_main.c``
+3. Add CLI handler file[s] to ``mgmtd/subdir.am`` (e.g., ``staticd/static_vty.c``)
+4. [if needed] Exclude (#ifndef) non-configuration CLI handlers from CLI source
+ file (e.g., inside ``staticd/static_vty.c``)
+
+Back-End Interface:
+
+5. Add XPATHs mappings to a couple arrays to direct ``mgmtd`` at your daemon in
+ ``mgmtd/mgmt_be_adapter.c``
+
+
+Add YANG and CLI into MGMTD
+---------------------------
+
+As an example, here is the addition made to ``mgmtd/subdir.am`` for adding
+``staticd`` support.
+
+.. code-block:: make
+
+ if STATICD
+ nodist_mgmtd_mgmtd_SOURCES += \
+ yang/frr-staticd.yang.c \
+ yang/frr-bfdd.yang.c \
+ # end
+ nodist_mgmtd_libmgmt_be_nb_la_SOURCES += staticd/static_vty.c
+ endif
+
+And here is the addition to the modules array in ``mgmtd/mgmt_main.c``:
+
+.. code-block:: c
+
+ static const struct frr_yang_module_info *const mgmt_yang_modules[] = {
+ &frr_filter_info,
+ ...
+ #ifdef HAVE_STATICD
+ &(struct frr_yang_module_info){.name = "frr-staticd",
+ .ignore_cbs = true},
+ #endif
+ };
+
+
+CLI Handlers
+------------
+
+The daemon's CLI handlers for configuration (which, having been converted to
+*northbound*, now simply generate YANG changes) will be linked directly into
+``mgmtd``.
+
+If the operational and debug CLI commands are kept in files separate from the
+daemon's configuration CLI commands, then no extra work is required.
+Otherwise, some CPP ``#ifndef``'s will be required.
+
+Currently ``mgmtd`` supports configuration CLI but not operational state, so
+if both types of CLI handlers are present in a single file (e.g., an
+``xxx_vty.c`` or ``xxx_cli.c`` file) then ``#ifndef`` must be used to exclude
+the non-configuration CLI handlers from ``mgmtd``. The same goes for *debug*
+CLI handlers. For example:
+
+.. code-block:: c
+
+ DEFPY(daemon_one_config, daemon_one_config_cmd,
+ "daemon one [optional-arg]"
+ ...
+ {
+ ...
+ }
+
+ #ifndef INCLUDE_MGMTD_CMDDEFS_ONLY
+ DEFPY(daemon_show_oper, daemon_show_oper_cmd,
+ "show daemon oper [all]"
+ ...
+ {
+ ...
+ }
+ #endif /* ifndef INCLUDE_MGMTD_CMDDEFS_ONLY */
+
+ void daemon_vty_init(void)
+ {
+ install_element(CONFIG_NODE, &daemon_one_config_cmd);
+ ...
+
+ #ifndef INCLUDE_MGMTD_CMDDEFS_ONLY
+ install_element(ENABLE_NODE, &daemon_show_oper_cmd);
+ #endif /* ifndef INCLUDE_MGMTD_CMDDEFS_ONLY */
+
+ }
+
+
+Add Back-End XPATH mappings
+---------------------------
+
+In order for ``mgmtd`` to direct configuration to your daemon you need to add
+some XPATH mappings to ``mgmtd/mgmt_be_adapter.c``. These XPATHs determine which
+configuration changes get sent over the *back-end* interface to your daemon.
+
+Below are the strings added for staticd support:
+
+.. code-block:: c
+
+ static const struct mgmt_be_xpath_map_init mgmt_xpath_map_init[] = {
+ {
+ .xpath_regexp = "/frr-vrf:lib/*",
+ .subscr_info =
+ {
+ #if HAVE_STATICD
+ [MGMTD_BE_CLIENT_ID_STATICD] =
+ MGMT_SUBSCR_VALIDATE_CFG |
+ MGMT_SUBSCR_NOTIFY_CFG,
+ #endif
+ },
+ },
+ ...
+ {
+ .xpath_regexp =
+ "/frr-routing:routing/control-plane-protocols/control-plane-protocol/frr-staticd:staticd/*",
+ .subscr_info =
+ {
+ #if HAVE_STATICD
+ [MGMTD_BE_CLIENT_ID_STATICD] =
+ MGMT_SUBSCR_VALIDATE_CFG |
+ MGMT_SUBSCR_NOTIFY_CFG,
+ #endif
+ },
+ },
+ };
+
+ #if HAVE_STATICD
+ static struct mgmt_be_client_xpath staticd_xpaths[] = {
+ {
+ .xpath = "/frr-vrf:lib/*",
+ .subscribed = MGMT_SUBSCR_VALIDATE_CFG | MGMT_SUBSCR_NOTIFY_CFG,
+ },
+ ...
+ {
+ .xpath =
+ "/frr-routing:routing/control-plane-protocols/control-plane-protocol/frr-staticd:staticd/*",
+ .subscribed = MGMT_SUBSCR_VALIDATE_CFG | MGMT_SUBSCR_NOTIFY_CFG,
+ },
+ };
+ #endif
+
+ static struct mgmt_be_client_xpath_map
+ mgmt_client_xpaths[MGMTD_BE_CLIENT_ID_MAX] = {
+ #ifdef HAVE_STATICD
+ [MGMTD_BE_CLIENT_ID_STATICD] = {staticd_xpaths,
+ array_size(staticd_xpaths)},
+ #endif
+ };
+
+
+MGMTD Internals
+^^^^^^^^^^^^^^^
+
+This section will describe the internal functioning of ``mgmtd``; for now, a
+couple of diagrams are included to aid in source code perusal.
+
+
+The client side of a CLI change
+
+.. figure:: ../figures/cli-change-client.svg
+ :align: center
+
+
+The server (mgmtd) side of a CLI change
+
+.. figure:: ../figures/cli-change-mgmtd.svg
+ :align: center
diff --git a/doc/developer/modules.rst b/doc/developer/modules.rst
new file mode 100644
index 0000000..0feac8e
--- /dev/null
+++ b/doc/developer/modules.rst
@@ -0,0 +1,142 @@
+.. _modules:
+
+Modules
+=======
+
+FRR has facilities to load DSOs at startup via ``dlopen()``. These are used to
+implement modules, such as SNMP and FPM.
+
+Limitations
+-----------
+
+- can't load, unload, or reload during runtime. This just needs some
+ work and can probably be done in the future.
+- doesn't fix any of the "things need to be changed in the code in the
+ library" issues. Most prominently, you can't add a CLI node because
+ CLI nodes are listed in the library...
+- if your module crashes, the daemon crashes. Should be obvious.
+- **does not provide a stable API or ABI**. Your module must match a
+ version of FRR and you may have to update it frequently to match
+ changes.
+- **does not create a license boundary**. Your module will need to link
+ libzebra and include header files from the daemons, meaning it will
+ be GPL-encumbered.
+
+Installation
+------------
+
+Look for ``moduledir`` in ``configure.ac``, default is normally
+``/usr/lib64/frr/modules`` but depends on ``--libdir`` / ``--prefix``.
+
+The daemon's name is prepended when looking for a module, e.g. "snmp"
+tries to find "zebra\_snmp" first when used in zebra. This is just to
+make it nicer for the user, with the snmp module having the same name
+everywhere.
+
+Modules can be packaged separately from FRR. The SNMP and FPM modules
+are good candidates for this because they have dependencies (net-snmp /
+protobuf) that are not FRR dependencies. However, any distro packages
+should have an "exact-match" dependency onto the FRR package. Using a
+module from a different FRR version will probably blow up nicely.
+
+For snapcraft (and during development), modules can be loaded with full
+path (e.g. -M ``$SNAP/lib/frr/modules/zebra_snmp.so``). Note that
+libtool puts output files in the .libs directory, so during development
+you have to use ``./zebra -M .libs/zebra_snmp.so``.
+
+Creating a module
+-----------------
+
+... best to look at the existing SNMP or FPM modules.
+
+Basic boilerplate:
+
+::
+
+ #include "hook.h"
+ #include "module.h"
+ #include "libfrr.h"
+ #include "frrevent.h"
+
+ static int module_late_init(struct event_loop *master)
+ {
+ /* Do initialization stuff here */
+ return 0;
+ }
+
+ static int module_init(void)
+ {
+ hook_register(frr_late_init, module_late_init);
+ return 0;
+ }
+
+ FRR_MODULE_SETUP(
+ .name = "my module",
+ .version = "0.0",
+ .description = "my module",
+ .init = module_init,
+ );
+
+The ``frr_late_init`` hook will be called after the daemon has finished
+its other startup and is about to enter the main event loop; this is the
+best place for most initialisation.
+
+Compiler & Linker magic
+-----------------------
+
+There's a ``THIS_MODULE`` (like in the Linux kernel), which uses
+``visibility`` attributes to restrict it to the current module. If you
+get a linker error with ``_frrmod_this_module``, there is some linker
+SNAFU. This shouldn't be possible, though one way to get it would be to
+not include libzebra (which provides a fallback definition for the
+symbol).
+
+libzebra and the daemons each have their own ``THIS_MODULE``, as do all
+loadable modules. In any other libraries (e.g. ``libfrrsnmp``),
+``THIS_MODULE`` will use the definition in libzebra; same applies if the
+main executable doesn't use ``FRR_DAEMON_INFO`` (e.g. all testcases).
+
+The deciding factor here is "what dynamic linker unit are you using the
+symbol from." If you're in a library function and want to know who
+called you, you can't use ``THIS_MODULE`` (because that'll just tell you
+you're in the library). Put a macro around your function that adds
+``THIS_MODULE`` in the *caller's code calling your function*.
+
+The idea is to use this in the future for module unloading. Hooks
+already remember which module they were installed by, as groundwork for
+a function that removes all of a module's installed hooks.
+
+There's also the ``frr_module`` symbol in modules, pretty much a
+standard entry point for loadable modules.
+
+Command line parameters
+-----------------------
+
+Command line parameters can be passed directly to a module by appending a
+colon to the module name when loading it, e.g. ``-M mymodule:myparameter``.
+The text after the colon will be accessible in the module's code through
+``THIS_MODULE->load_args``. For example, see how the format parameter is
+configured in the ``zfpm_init()`` function inside ``zebra_fpm.c``.
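+
+A minimal sketch of consuming the parameter from a module's init function
+(``my_parse_args`` is a hypothetical helper)::
+
+    static int module_init(void)
+    {
+            if (THIS_MODULE->load_args)
+                    my_parse_args(THIS_MODULE->load_args);
+            return 0;
+    }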
+
+Hooks
+-----
+
+Hooks are just points in the code where you can register your callback
+to be called. The parameter list is specific to the hook point. Since
+there is no stable API, the hook code has some extra type safety checks
+making sure you get a compiler warning when the hook parameter list
+doesn't match your callback. Don't ignore these warnings.
+
+Relation to MTYPE macros
+------------------------
+
+The MTYPE macros, while primarily designed to decouple MTYPEs from the
+library and beautify the code, also work very nicely with loadable
+modules -- both constructors and destructors are executed when
+loading/unloading modules.
+
+This means there is absolutely no change required to MTYPEs, you can
+just use them in a module and they will even clean up themselves when we
+implement module unloading and an unload happens. In fact, it's
+impossible to create a bug where unloading fails to de-register a MTYPE.
diff --git a/doc/developer/next-hop-tracking.rst b/doc/developer/next-hop-tracking.rst
new file mode 100644
index 0000000..99e1d65
--- /dev/null
+++ b/doc/developer/next-hop-tracking.rst
@@ -0,0 +1,350 @@
+Next Hop Tracking
+==================
+
+Next hop tracking is an optimization feature that reduces the processing time
+involved in the BGP bestpath algorithm by monitoring changes to the routing
+table.
+
+Background
+-----------
+
+Recursive routes are of the form:
+
+::
+
+ p/m --> n
+ [Ex: 1.1.0.0/16 --> 2.2.2.2]
+
+where 'n' itself is resolved through another route as follows:
+
+::
+
+ p2/m --> h, interface
+ [Ex: 2.2.2.0/24 --> 3.3.3.3, eth0]
+
+Usually, BGP routes are recursive in nature and BGP nexthops get resolved
+through an IGP route. IGP usually adds its routes pointing to an interface
+(these are called non-recursive routes).
+
+When BGP receives a recursive route from a peer, it needs to validate the
+nexthop. The path is marked valid or invalid based on the reachability status
+of the nexthop. Nexthop validation is also important for the BGP decision
+process, as the metric to reach the nexthop is a parameter in best path
+selection.
+
+As it goes with routing, this is a dynamic process. The route to the nexthop
+can change, and the nexthop can become unreachable or reachable. In the
+current BGP implementation, the nexthop validation is done periodically in
+the scanner run. The default scanner run interval is one minute. Every
+minute, the scanner task walks the entire BGP table. It checks the validity
+of each nexthop with Zebra (the routing table manager) through a request and
+response message exchange between the BGP and Zebra processes. The BGP
+process is blocked for that duration. This mechanism has two major drawbacks:
+
+- The scanner task runs to completion. That can potentially starve the other
+ tasks for long periods of time, based on the BGP table size and number of
+ nexthops.
+
+- Convergence around routing changes that affect the nexthops can be long
+ (around a minute with the default intervals). The interval can be shortened
+ to achieve faster reaction time, but it makes the first problem worse, with
+ the scanner task consuming most of the CPU resources.
+
+The next-hop tracking feature makes this process event-driven. It eliminates
+periodic nexthop validation and introduces an asynchronous communication path
+between BGP and Zebra for route change notifications that can then be acted
+upon.
+
+Goal
+----
+
+The main goal is to remove the two limitations discussed in the
+previous section. Stated constructively, the goals are the following:
+
+- **Fairness**: the scanner run should not consume an unjustly high amount of
+ CPU time. This should give an overall good performance and response time to
+ other events (route changes, session events, IO/user interface).
+
+- **Convergence**: BGP must react to nexthop changes instantly and provide
+ sub-second convergence. This may involve diverting the routes from one
+ nexthop to another.
+
+Overview of changes
+------------------------
+
+The changes are in both BGP and Zebra modules. The short summary is
+the following:
+
+- Zebra implements a registration mechanism by which clients can
+  register for next hop notification. Consequently, it maintains a
+  separate table of next hops per (VRF, AF) pair, with a list of
+  interested clients per next hop.
+
+- When the main routing table changes in Zebra, it evaluates the next
+ hop table: for each next hop, it checks if the route table
+ modifications have changed its state. If so, it notifies the
+ interested clients.
+
+- BGP is one such client. It registers the next hops corresponding to
+ all of its received routes/paths. It also threads the paths against
+ each nexthop structure.
+
+- When BGP receives a next hop notification from Zebra, it walks the
+  corresponding path list, marking each path valid or invalid according
+  to the notification. It then re-computes the best path for the
+  corresponding destination. This may result in re-announcing those
+ destinations to peers.
+
+Design
+------
+
+Modules
+^^^^^^^
+
+The core design introduces an "nht" (next hop tracking) module in BGP
+and "rnh" (recursive nexthop) module in Zebra. The "nht" module
+provides the following APIs:
+
++----------------------------+--------------------------------------------------+
+| Function | Action |
++============================+==================================================+
+| bgp_find_or_add_nexthop() | find or add a nexthop in BGP nexthop table |
++----------------------------+--------------------------------------------------+
+| bgp_parse_nexthop_update() | parse a nexthop update message coming from zebra |
++----------------------------+--------------------------------------------------+
+
+The "rnh" module provides the following APIs:
+
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| Function | Action |
++============================+==========================================================================================================+
+| zebra_add_rnh() | add a recursive nexthop |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| zebra_delete_rnh() | delete a recursive nexthop |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| zebra_lookup_rnh() | lookup a recursive nexthop |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| zebra_add_rnh_client() | register a client for nexthop notifications against a recursive nexthop |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| zebra_remove_rnh_client() | remove the client registration for a recursive nexthop |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| zebra_evaluate_rnh_table() | (re)evaluate the recursive nexthop table (most probably because the main routing table has changed). |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+| zebra_cleanup_rnh_client() | Cleanup a client from the "rnh" module data structures (most probably because the client is going away). |
++----------------------------+----------------------------------------------------------------------------------------------------------+
+
+Control flow
+^^^^^^^^^^^^
+
+The next hop registration control flow is the following:
+
+::
+
+ <==== BGP Process ====>|<==== Zebra Process ====>
+ |
+ receive module nht module | zserv module rnh module
+ ----------------------------------------------------------------------
+ | | |
+ bgp_update_ | | |
+ main() | bgp_find_or_add_ | |
+ | nexthop() | |
+ | | |
+ | | zserv_nexthop_ |
+ | | register() |
+ | | | zebra_add_rnh()
+ | | |
+
+
+The next hop notification control flow is the following:
+
+::
+
+ <==== Zebra Process ====>|<==== BGP Process ====>
+ |
+ rib module rnh module | zebra module nht module
+ ----------------------------------------------------------------------
+ | | |
+ meta_queue_ | | |
+ process() | zebra_evaluate_ | |
+ | rnh_table() | |
+ | | |
+ | | bgp_read_nexthop_ |
+ | | update() |
+ | | | bgp_parse_
+ | | | nexthop_update()
+ | | |
+
+
+zclient message format
+^^^^^^^^^^^^^^^^^^^^^^
+
+``ZEBRA_NEXTHOP_REGISTER`` and ``ZEBRA_NEXTHOP_UNREGISTER`` messages are
+encoded in the following way:
+
+::
+
+ . 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | AF | prefix len |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . Nexthop prefix .
+ . .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . .
+ . .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | AF | prefix len |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . Nexthop prefix .
+ . .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
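+As a hedged sketch (using the stream primitives from FRR's
+``lib/stream.h``; the zclient message header handling is omitted and
+the function name is illustrative), encoding one (AF, prefix len,
+prefix) tuple could look like this:
+
+.. code-block:: c
+
+   /* append one (AF, prefix len, prefix) tuple to stream s */
+   static void nht_encode_prefix(struct stream *s, struct prefix *p)
+   {
+           stream_putw(s, PREFIX_FAMILY(p));                 /* AF */
+           stream_putc(s, p->prefixlen);                     /* prefix len */
+           stream_put(s, &p->u.prefix, PSIZE(p->prefixlen)); /* prefix */
+   }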
+
+The ``ZEBRA_NEXTHOP_UPDATE`` message is encoded as follows:
+
+::
+
+ . 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | AF | prefix len |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . Nexthop prefix getting resolved .
+ . .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | metric |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | #nexthops |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | nexthop type |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . resolving Nexthop details .
+ . .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | nexthop type |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ . resolving Nexthop details .
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
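+As a hedged counterpart (again using ``lib/stream.h`` primitives,
+eliding the message header, and with an illustrative function name), a
+client could start decoding an update like this:
+
+.. code-block:: c
+
+   /* read the (AF, prefix len, prefix) header of the resolved prefix;
+    * metric, #nexthops and the nexthops themselves follow */
+   static void nht_decode_header(struct stream *s, struct prefix *p)
+   {
+           p->family = stream_getw(s);
+           p->prefixlen = stream_getc(s);
+           stream_get(&p->u.prefix, s, PSIZE(p->prefixlen));
+   }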
+
+BGP data structure
+^^^^^^^^^^^^^^^^^^
+
+Legend:
+
+::
+
+ /\ struct bgp_node: a BGP destination/route/prefix
+ \/
+
+ [ ] struct bgp_path_info: a BGP path (e.g. route received from a peer)
+
+ _
+ (_) struct bgp_nexthop_cache: a BGP nexthop
+
+ /\ NULL
+ \/--+ ^
+ | :
+ +--[ ]--[ ]--[ ]--> NULL
+ /\ :
+ \/--+ :
+ | :
+ +--[ ]--[ ]--> NULL
+ :
+ _ :
+ (_)...........
+
+
+Zebra data structure
+^^^^^^^^^^^^^^^^^^^^
+
+RNH table::
+
+ . O
+ / \
+ O O
+ / \
+ O O
+
+   struct rnh
+   {
+           uint8_t flags;             /* nexthop state flags */
+           struct route_entry *state; /* route entry resolving this nexthop, if any */
+           struct list *client_list;  /* clients registered for notifications */
+           struct route_node *node;   /* back-pointer to the owning route node */
+   };
+
+User interface changes
+^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ frr# show ip nht
+ 3.3.3.3
+ resolved via kernel
+ via 11.0.0.6, swp1
+ Client list: bgp(fd 12)
+ 11.0.0.10
+ resolved via connected
+ is directly connected, swp2
+ Client list: bgp(fd 12)
+ 11.0.0.18
+ resolved via connected
+ is directly connected, swp4
+ Client list: bgp(fd 12)
+ 11.11.11.11
+ resolved via kernel
+ via 10.0.1.2, eth0
+ Client list: bgp(fd 12)
+
+ frr# show ip bgp nexthop
+ Current BGP nexthop cache:
+ 3.3.3.3 valid [IGP metric 0], #paths 3
+ Last update: Wed Oct 16 04:43:49 2013
+
+ 11.0.0.10 valid [IGP metric 1], #paths 1
+ Last update: Wed Oct 16 04:43:51 2013
+
+ 11.0.0.18 valid [IGP metric 1], #paths 2
+ Last update: Wed Oct 16 04:43:47 2013
+
+ 11.11.11.11 valid [IGP metric 0], #paths 1
+ Last update: Wed Oct 16 04:43:47 2013
+
+ frr# show ipv6 nht
+ frr# show ip bgp nexthop detail
+
+ frr# debug bgp nht
+ frr# debug zebra nht
+
+Sample test cases
+^^^^^^^^^^^^^^^^^
+
+Consider the following topology::
+
+      r2----r3
+     /  \  /
+   r1----r4
+
+- Verify that a change in IGP cost triggers NHT:
+
+  - Shut down the r1-r4 and r2-r4 links.
+  - Bring the r1-r4 and r2-r4 links back up and wait for OSPF to
+    reconverge. We should be back to the original nexthop via r4.
+
+- Verify that a nexthop becoming unreachable triggers NHT:
+
+  - Shut down all links to r4.
+
+- Verify that a nexthop becoming reachable triggers NHT:
+
+  - Bring all links to r4 back up.
+
+Future work
+^^^^^^^^^^^
+
+- route-policy for next hop validation (e.g. ignore default route)
+- damping for rapid next hop changes
+- prioritized handling of nexthop changes ((un)reachability vs. metric
+ changes)
+- handling recursion loops, e.g.::
+
+   11.11.11.11/32 -> 12.12.12.12
+   12.12.12.12/32 -> 11.11.11.11
+   11.0.0.0/8 -> <interface>
+
+- better statistics
diff --git a/doc/developer/northbound/advanced-topics.rst b/doc/developer/northbound/advanced-topics.rst
new file mode 100644
index 0000000..bee29a9
--- /dev/null
+++ b/doc/developer/northbound/advanced-topics.rst
@@ -0,0 +1,294 @@
+Auto-generated CLI commands
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to have less code to maintain, it should be possible to write a
+tool that auto-generates CLI commands based on the FRR YANG models. As a
+matter of fact, there are already a number of NETCONF-based CLIs that do
+exactly that (e.g. `Clixon <https://github.com/clicon/clixon>`__,
+ConfD’s CLI).
+
+The problem however is that there isn’t an exact one-to-one mapping
+between the existing CLI commands and the corresponding YANG nodes from
+the native models. As an example, ripd’s
+``timers basic (5-2147483647) (5-2147483647) (5-2147483647)`` command
+changes three YANG leaves at the same time. In order to auto-generate
+CLI commands and retain their original form, it’s necessary to add
+annotations in the YANG modules to specify what the commands should
+look like. Without YANG annotations, the CLI auto-generator will
+generate a command for each YANG leaf, (leaf-)list and
+presence-container. ripd’s ``timers basic`` command, for instance,
+would become three different commands, which would be undesirable.
+
+   This Tail-f®
+   `document <http://info.tail-f.com/hubfs/Whitepapers/Tail-f_ConfD-CLI__Cfg_Mode_App_Note_Rev%20C.pdf>`__
+   shows how to customize ConfD auto-generated CLI commands using YANG
+   annotations.
+
+The good news is that *libyang* allows users to create plugins to
+implement their own YANG extensions, which can be used to implement CLI
+annotations. If done properly, a CLI generator can save FRR developers
+from writing and maintaining hundreds if not thousands of DEFPYs!
+
+CLI on a separate program
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The flexible design of the northbound architecture opens the door to
+move the CLI to a separate program in the long-term future. Some
+advantages of doing so would be:
+
+- Treat the CLI as just another northbound client, instead of having
+  CLI commands embedded in the binaries of all FRR daemons.
+- Improved robustness: bugs in CLI commands (e.g. null-pointer
+  dereferences) or in the CLI code itself wouldn’t affect the FRR
+  daemons.
+- Foster innovation by allowing other CLI programs to be implemented,
+  possibly using higher level programming languages.
+
+The problem, however, is that the northbound retrofitting process will
+initially convert only the CLI configuration commands and EXEC
+commands. Retrofitting the “show” commands is a completely different
+story and isn’t expected to happen anytime soon. This will hinder
+progress towards moving the CLI to a separate program.
+
+Proposed feature: confirmed commits
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Confirmed commits allow the user to request an automatic rollback to the
+previous configuration if the commit operation is not confirmed within a
+number of minutes. This is particularly useful when the user is
+accessing the CLI through the network (e.g. using SSH) and any
+configuration change might cause an unexpected loss of connectivity
+between the user and the router (e.g. misconfiguration of a routing
+protocol). By using a confirmed commit, the user can rest assured the
+connectivity will be restored after the given timeout expires, avoiding
+the need to access the router physically to fix the problem.
+
+Example of how this feature could be provided in the CLI:
+``commit confirmed [minutes <1-60>]``. The ability to do confirmed
+commits should also be exposed in the northbound API so that the
+northbound plugins can also take advantage of it (in the case of the
+Sysrepo and ConfD plugins, confirmed commits are implemented externally
+in the *netopeer2-server* and *confd* daemons, respectively).
+
+Proposed feature: enable/disable configuration commands/sections
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since the ``lyd_node`` data structure from *libyang* can hold private
+data, it should be possible to mark configuration commands or sections
+as active or inactive. This would allow CLI users to leverage this
+feature to disable parts of the running configuration without actually
+removing the associated commands, and then re-enable the disabled
+configuration commands or sections later when necessary. Example:
+
+::
+
+ ripd(config)# show configuration running
+ Configuration:
+ [snip]
+ !
+ router rip
+ default-metric 2
+ distance 80
+ network eth0
+ network eth1
+ !
+ end
+ ripd(config)# disable router rip
+ ripd(config)# commit
+ % Configuration committed successfully (Transaction ID #7).
+
+ ripd(config)# show configuration running
+ Configuration:
+ [snip]
+ !
+ !router rip
+ !default-metric 2
+ !distance 80
+ !network eth0
+ !network eth1
+ !
+ end
+ ripd(config)# enable router rip
+ ripd(config)# commit
+ % Configuration committed successfully (Transaction ID #8).
+
+ ripd(config)# show configuration running
+ [snip]
+ frr defaults traditional
+ !
+ router rip
+ default-metric 2
+ distance 80
+ network eth0
+ network eth1
+ !
+ end
+
+This capability could be useful on a number of occasions, like disabling
+configuration commands that are no longer necessary (e.g. ACLs) but
+might be needed again at a later point in the future. Another example is
+allowing users to disable a configuration section for testing purposes,
+and then re-enable it easily without needing to copy and paste any
+commands.
+
+Configuration reloads
+~~~~~~~~~~~~~~~~~~~~~
+
+Given the limitations of the previous northbound architecture, the FRR
+daemons didn’t have the ability to reload their configuration files by
+themselves. The SIGHUP handler of most daemons would only re-read the
+configuration file and merge it into the running configuration. In most
+cases, however, what is desired is to replace the running configuration
+by the updated configuration file. The *frr-reload.py* script was
+written to work around this problem, and it does so well to a certain
+extent. The problem is that the script is full of special cases here
+and there, which makes it fragile and unreliable.
+Maintaining the script is also an additional burden for FRR developers,
+few of whom are familiar with its code or know when it needs to be
+updated to account for a new feature.
+
+In the new northbound architecture, reloading the configuration file can
+be easily implemented using a configuration transaction. Once the FRR
+northbound retrofitting process is complete, all daemons should have the
+ability to reload their configuration files upon receiving the SIGHUP
+signal, or when the ``configuration load [...] replace`` command is
+used. Once that point is reached, the *frr-reload.py* script will no
+longer be necessary and should be removed from the FRR repository.
+
+Configuration changes coming from the kernel
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This
+`post <http://discuss.tail-f.com/t/who-should-not-set-configuration-once-a-system-is-up-and-running/111>`__
+from the Tail-f® forum describes the problem of letting systems
+configure themselves behind the user’s back. Here are some selected
+snippets from it:
+
+   Traditionally, northbound interface users are the ones in charge of
+   providing configuration data for systems.
+
+   In some systems, we see a deviation from this traditional practice;
+   allowing systems to configure “themselves” behind the scenes (or
+   behind the users back).
+
+   While there might be a business case for such a practice, this kind
+   of configuration remains “dangerous” from northbound users
+   perspective and makes systems hard to predict and even harder to
+   debug. (…)
+
+   With the advent of transactional Network configuration, this
+   practice can not work anymore. The fact that systems are given the
+   right to change configuration is a key here in breaking
+   transactional configuration in a Network.
+
+FRR is immune to some of the problems described in the aforementioned
+post. Management clients can configure interfaces that don’t yet exist,
+and once an interface is deleted from the kernel, its configuration is
+retained in FRR.
+
+There are however some cases where information learned from the kernel
+(e.g. using netlink) can affect the running configuration of all FRR
+daemons. Examples: interface rename events, VRF rename events, interface
+being moved to a different VRF, etc. In these cases, since these events
+can’t be ignored, the best we can do is to send YANG notifications to
+the management clients to inform about the configuration changes. The
+management clients should then be prepared to handle such notifications
+and react accordingly.
+
+Interfaces and VRFs
+~~~~~~~~~~~~~~~~~~~
+
+As of now zebra doesn’t have the ability to create VRFs or virtual
+interfaces in the kernel. The ``vrf`` and ``interface`` commands only
+create pre-provisioned VRFs and interfaces, which are activated when
+the corresponding information is learned from the kernel. When
+configuring FRR using an external management client, like a NETCONF
+client, it might be desirable to actually create functional VRFs and
+virtual interfaces (e.g. VLAN subinterfaces, bridges, etc) that are
+installed in the kernel using OS-specific APIs (e.g. netlink, routing
+socket, etc). Work needs to be done in this area to make this possible.
+
+Shared configuration objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One of the existing problems in FRR is that it’s hard to ensure that all
+daemons are in sync with respect to the shared configuration objects
+(e.g. interfaces, VRFs, route-maps, ACLs, etc). When a route-map is
+configured using *vtysh*, the same command is sent to all relevant
+daemons (the daemons that implement route-maps), which ensures
+synchronization among them. The problem arises when a daemon starts
+after the route-maps are created. In that case the daemon wouldn’t be
+aware of the previously configured route-maps (unlike the other daemons),
+which can lead to a lot of confusion and unexpected problems.
+
+With the new northbound architecture, configuration objects can be
+manipulated using higher level abstractions, which opens more
+possibilities to solve this decades-long problem. As an example, one
+solution would be to make the FRR daemons fetch the shared configuration
+objects from zebra using the ZAPI interface during initialization. The
+shared configuration objects could be requested using a list of XPaths
+expressions in the ``ZEBRA_HELLO`` message, which zebra would respond by
+sending the shared configuration objects encoded in the JSON format.
+This solution however doesn’t address the case where zebra starts or
+restarts after the other FRR daemons. Another solution would be to
+store the shared configuration objects in the northbound SQL database
+and make all daemons fetch these objects from there. So far no work has
+been done in this area as more investigation is needed.
+
+vtysh support
+~~~~~~~~~~~~~
+
+As explained in the [[Transactional CLI]] page, the commands introduced
+by the transactional CLI are not yet available in *vtysh*. This needs
+to be addressed in the near future. Some challenges for doing that work
+include:
+
+- How to display configurations (running, candidates and rollbacks) in
+  a more clever way? The implementation of the ``show running-config``
+  command in *vtysh* is not something that should be followed as an
+  example. A better idea would be to fetch the desired configuration
+  from all daemons (encoded in JSON for example), merge them all into a
+  single ``lyd_node`` variable and then display the combined
+  configurations from this variable (the configuration merges would
+  transparently take care of combining the shared configuration
+  objects). In order to be able to manipulate the JSON configurations,
+  *vtysh* will need to load the YANG modules from all daemons at
+  startup (this might have a minimal impact on startup time). The only
+  issue with this approach is that the ``cli_show()`` callbacks from
+  all daemons are embedded in their binaries and thus not accessible
+  externally. It might be necessary to compile these callbacks into a
+  separate shared library so that they are accessible to *vtysh* too.
+  Other than that, displaying the combined configurations in the
+  JSON/XML formats should be straightforward.
+- With the current design, transaction IDs are per-daemon and not
+  global across all FRR daemons. This means that the same transaction
+  ID can represent different transactions on different daemons. Given
+  this observation, how to implement the ``rollback configuration``
+  command in *vtysh*? The easy solution would be to add a
+  ``daemon WORD`` argument to specify the context of the rollback, but
+  per-daemon rollbacks would certainly be confusing and convoluted to
+  end users. A better idea would be to attack the root of the problem:
+  change configuration transactions to be global instead of
+  per-daemon. This involves a bigger change in the northbound
+  architecture, and would have implications on how transactions are
+  stored in the SQL database (daemon-specific and shared configuration
+  objects would need to have their own tables or columns).
+- Loading configuration files in the JSON or XML formats will be
+  tricky, as *vtysh* will need to know which sections of the
+  configuration should be sent to which daemons. *vtysh* will either
+  need to fetch the YANG modules implemented by all daemons at runtime
+  or obtain this information at compile-time somehow.
+
+Detecting type mismatches at compile-time
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As described in the [[Retrofitting Configuration Commands]] page, the
+northbound configuration callbacks detect type mismatches at runtime
+when fetching data from the ``dnode`` parameter (which represents the
+configuration node being created, modified, deleted or moved). When a
+type mismatch is detected, the program aborts and displays a backtrace
+showing where the problem happened. It would be desirable to detect
+such type mismatches at compile time: the earlier problems are
+detected, the sooner they can be fixed.
+
+One possible solution to this problem would be to auto-generate C
+structures from the YANG models and provide a function that converts a
+libyang’s ``lyd_node`` variable to a C structure containing the same
+information. The northbound callbacks could then fetch configuration
+data from this C structure, which would naturally lead to type
+mismatches being detected at compile time. One of the challenges of
+doing this would be the handling of YANG lists and leaf-lists. It would
+be necessary to use dynamic data structures like hashes or rb-trees to
+hold all elements of the lists and leaf-lists, and the process of
+converting a ``lyd_node`` to an auto-generated C-structure could be
+expensive. At this point it’s unclear if it’s worth adding more
+complexity in the northbound architecture to solve this specific
+problem.
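+
+Purely for illustration (no such generator exists and every name below
+is made up), a structure generated from a small RIP configuration
+fragment might look like this:
+
+.. code:: c
+
+   /* hypothetical output of a YANG-to-C structure generator */
+   struct frr_ripd_instance_config {
+           uint8_t default_metric;
+           uint8_t distance;
+
+           /* YANG lists and leaf-lists would need dynamic containers
+            * (e.g. hashes or rb-trees) instead of fixed-size fields. */
+           struct list *networks;
+   };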
diff --git a/doc/developer/northbound/architecture.rst b/doc/developer/northbound/architecture.rst
new file mode 100644
index 0000000..e571971
--- /dev/null
+++ b/doc/developer/northbound/architecture.rst
@@ -0,0 +1,275 @@
+Introduction
+------------
+
+The goal of the new northbound API is to provide a better interface to
+configure and monitor FRR programmatically. The current design based on
+CLI commands is no longer adequate in a world where computer networks
+are becoming increasingly large, diverse and complex. Network
+scripting using *expect* and screen scraping techniques is too primitive
+and unreliable to be used in large-scale networks. What is proposed is
+to modernize FRR to turn it into an API-first routing stack, and
+reposition the CLI on top of this API. The most important change,
+however, is not the API that will be provided to external users. In
+fact, multiple APIs will be supported and users will have the ability to
+write custom management APIs if necessary. The biggest change is the
+introduction of a model-driven management architecture based on the
+`YANG <https://tools.ietf.org/html/rfc7950>`__ modeling language.
+Instead of writing code tied to any particular user interface
+(e.g. DEFUNs), YANG allows us to write API-agnostic code (in the form of
+callbacks) that can be used by any management interface. As an example,
+it shouldn’t matter if a set of configuration changes is coming from a
+`NETCONF <https://tools.ietf.org/html/rfc6241>`__ session or from a CLI
+terminal, the same callbacks should be called to process the
+configuration changes regardless of where they came from. This
+model-driven design ensures feature parity across all management
+interfaces supported by FRR.
+
+Quoting :rfc:`7950`:
+
+ YANG is a language originally designed to model data for the NETCONF
+ protocol. A YANG module defines hierarchies of data that can be used for
+ NETCONF-based operations, including configuration, state data, RPCs, and
+ notifications. This allows a complete description of all data sent between a
+ NETCONF client and server. Although out of scope for this specification,
+ YANG can also be used with protocols other than NETCONF.
+
+While the YANG and NETCONF specifications are tightly coupled with one
+another, both are independent to a certain extent and are evolving
+separately. Examples of other management protocols that use YANG include
+`RESTCONF <https://tools.ietf.org/html/rfc8040>`__,
+`gNMI <https://github.com/openconfig/reference/tree/master/rpc/gnmi>`__
+and
+`CoAP <https://www.ietf.org/archive/id/draft-vanderstok-core-comi-11.txt>`__.
+
+In addition to being management-protocol independent, some other
+advantages of using YANG in FRR are listed below:
+
+- Have a formal contract between FRR and application developers
+  (management clients). A management client that has access to the FRR
+  YANG models knows about all existing configuration options available
+  for use. This information can be used to auto-generate user-friendly
+  interfaces like Web-UIs, custom CLIs and even code bindings for
+  several different programming languages. Using
+  `PyangBind <https://github.com/robshakir/pyangbind>`__, for example,
+  it’s possible to generate Python class hierarchies from YANG models
+  and use these classes to instantiate objects that mirror the
+  structure of the YANG modules and can be serialized/deserialized
+  using different encoding formats.
+- Support different encoding formats for instance data. Currently only
+  JSON and XML are supported, but
+  `GPB <https://developers.google.com/protocol-buffers/>`__ and
+  `CBOR <http://cbor.io/>`__ are other viable options in the long term.
+  Additional encoding formats can be implemented in the *libyang*
+  library for optimal performance, or externally by translating data
+  to/from one of the supported formats (with a performance penalty).
+- Have a formal mechanism to introduce backward-incompatible changes
+  based on `semantic
+  versioning <http://www.openconfig.net/docs/semver/>`__ (not part of
+  the YANG standard, which allows backward-compatible module updates
+  only).
+- Provide seamless support for the industry-standard NETCONF/RESTCONF
+  protocols as alternative management APIs. If FRR configuration/state
+  data is modeled using YANG, supporting YANG-based protocols like
+  NETCONF and RESTCONF is much easier.
+
+As important as shifting to a model-driven management paradigm, the new
+northbound architecture also introduces the concept of configuration
+transactions. Configuration transactions allow management clients to
+commit multiple configuration changes at the same time and rest assured
+that either all changes will be applied or none will (all-or-nothing).
+Configuration transactions are implemented as pseudo-atomic operations
+and facilitate automation by removing the burden of error recovery from
+the management side. Another property of configuration transactions is
+that the configuration changes are always processed in a pre-defined
+order to ensure consistency. Configuration transactions that encompass
+multiple network devices are called network-wide transactions and are
+also supported by the new northbound architecture. When FRR is built
+using the ``--enable-config-rollbacks`` option, all committed
+transactions are recorded in the FRR rollback log, which can reside
+either in memory (volatile) or on persistent storage.
+
+ Network-wide Transactions is the most important leap in network
+ management technology since SNMP. The error recovery and sequencing
+ tasks are removed from the manager side. This is usually more than
+ half the cost in a mature system; more than the entire cost of the
+ managed devices.
+ `[source] <https://www.nanog.org/sites/default/files/tuesday_tutorial_moberg_netconf_35.pdf>`__.
+
+Figures 1 and 2 below illustrate the old and new northbound architecture
+of FRR, respectively. As can be seen, in the old architecture the CLI
+was the only interface used to configure and monitor FRR (the SNMP
+plugin wasn’t taken into account given the small number of implemented
+MIBs). This means that the only way to automate FRR was by writing
+scripts that send CLI commands and parse the text output (which usually
+doesn’t have any structure) using screen scraping and regular
+expressions.
+
+.. figure:: images/arch-before.png
+ :alt: diagram of northbound architecture prior to nbapi conversion
+
+ Old northbound architecture
+
+The new northbound architecture, on the other hand, features a
+multitude of different management APIs, all of them connected to the
+northbound layer of the FRR daemons. By default, only the CLI interface
+is compiled into the FRR daemons. The other management interfaces
+are provided as optional plugins and need to be loaded during daemon
+initialization (e.g. *zebra -M confd*). This design makes it possible to
+integrate FRR with different NETCONF solutions without introducing
+vendor lock-in. The [[Plugins - Writing Your Own]] page explains how to
+write custom northbound plugins that can be tailored to all needs
+(e.g. support custom transport protocols, different data encoding
+formats, fine-grained access control, etc).
+
+.. figure:: images/arch-after.png
+ :alt: diagram of northbound architecture after nbapi conversion
+
+ New northbound architecture
+
+Figure 3 shows the internal view of the FRR northbound architecture. In
+this image we can see that the northbound layer is an abstract entity
+positioned between the northbound callbacks and the northbound clients.
+The northbound layer is responsible for processing the requests coming
+from the northbound clients and calling the appropriate callbacks to
+satisfy these requests. The northbound plugins communicate with the
+northbound layer through a public API, which allows users to write
+third-party plugins that can be maintained separately. The northbound
+plugins, in turn, have their own APIs to communicate with external
+management clients.
+
+.. figure:: images/nb-layer.png
+ :alt: diagram of northbound architecture internals
+
+ New northbound architecture - internal view
+
+Initially the CLI (and all of its commands) will be maintained inside
+the FRR daemons. In the long term, however, the goal is to move the CLI
+to a separate program just like any other management client. The
+[[Advanced Topics]] page describes the motivations and challenges of
+doing that. Last but not least, the *libyang* block inside the
+northbound layer is the engine that makes everything possible. The
+*libyang* library will be described in more detail in the following
+sections.
+
+YANG models
+-----------
+
+The main decision to be made when using YANG is which models to
+implement. There’s a general consensus that using standard models is
+preferable to using custom (native) models. The reasoning is that
+applications based on standard models can be reused for all network
+appliances that support those models, whereas the same doesn’t apply for
+applications written based on custom models.
+
+That said, there are multiple standards bodies publishing YANG models
+and unfortunately not all of them are converging (or at least not yet).
+In the context of FRR, which is a routing stack, the two sets of YANG
+models that would make sense to implement are the ones from IETF and
+from the OpenConfig working group. The question that arises is: which
+one of them should we commit to? Or should we try to support both
+somehow, at the cost of extra development efforts?
+
+Another problem, from an implementation point of view, is that it’s
+challenging to adapt the existing code base to match standard models. A
+more reasonable solution, at least in a first moment, would be to use
+YANG deviations and augmentations to do the opposite: adapt the standard
+models to the existing code. In practice however this is not as simple
+as it seems. There are cases where the differences are too substantial
+to be worked around without restructuring the code by changing its data
+structures and their relationships. As an example, the *ietf-rip* model
+places per-interface RIP configuration parameters inside the
+*control-plane-protocol* list (which is augmented by *ietf-rip*). This
+means that it’s impossible to configure RIP interface parameters without
+first configuring a RIP routing instance. The *ripd* daemon on the other
+hand allows the operator to configure RIP interface parameters even if
+``router rip`` is not configured. If we were to implement the *ietf-rip*
+module natively, we’d need to change ripd’s CLI commands (and the
+associated code) to reflect the new configuration hierarchy.
+
+Taking into account that FRR has a huge code base and that the
+northbound retrofitting process per se will cause a lot of impact, it
+was decided to take a conservative approach and write custom YANG models
+for FRR modeled after the existing CLI commands. Having YANG models that
+closely mirror the CLI commands will allow the FRR developers to
+retrofit the code base much more easily, without introducing
+backward-incompatible changes in the CLI and reducing the likelihood of
+introducing bugs. The [[Retrofitting Configuration Commands]] page
+explains in detail how to convert configuration commands to the new
+northbound model.
+
+Even though having native YANG models is not the ideal solution, it will
+be already a big step forward for FRR to migrate to a model-driven
+management architecture, with support for configuration transactions and
+multiple management interfaces, including NETCONF and RESTCONF (through
+the northbound plugins).
+
+The new northbound also features an experimental YANG module translator
+that will allow users to translate to and from standard YANG models by
+using translation tables. The [[YANG module translator]] page describes
+this mechanism in more detail. At this point it’s unclear what can be
+achieved through module translation and whether it can be considered a
+definitive solution for supporting standard models or not.
+
+Northbound Architecture
+-----------------------
+
+.. figure:: images/lys-node.png
+   :alt: diagram of libyang's lys_node data structure
+
+   *libyang*'s ``lys_node`` data structure
+
+
+.. figure:: images/lyd-node.png
+   :alt: diagram of libyang's lyd_node data structure
+
+   *libyang*'s ``lyd_node`` data structure
+
+
+.. figure:: images/ly-ctx.png
+   :alt: diagram of libyang's ly_ctx data structure
+
+   *libyang*'s ``ly_ctx`` data structure
+
+
+.. figure:: images/transactions.png
+ :alt: diagram showing how configuration transactions work
+
+ Configuration transactions
+
+
+Testing
+-------
+
+The new northbound adds the *libyang* library as a mandatory
+dependency for FRR. To obtain and install this library, follow the steps
+below:
+
+.. code-block:: console
+
+ git clone https://github.com/CESNET/libyang
+ cd libyang
+ git checkout devel
+ mkdir build ; cd build
+ cmake -DENABLE_LYD_PRIV=ON ..
+ make
+ sudo make install
+
+
+.. note::
+
+   First, make sure to install the libyang
+   `requirements <https://github.com/CESNET/libyang#build-requirements>`__.
+
+
+FRR needs libyang version 0.16.7 or newer, which is maintained in
+the ``devel`` branch. libyang 0.15.x is maintained in the ``master``
+branch and doesn’t contain one small feature used by FRR (the
+``LY_CTX_DISABLE_SEARCHDIR_CWD`` flag). FRR also makes use of the
+libyang’s ``ENABLE_LYD_PRIV`` feature, which is disabled by default and
+needs to be enabled at compile time.
+
+It’s advisable (but not required) to install sqlite3 and build FRR with
+``--enable-config-rollbacks`` in order to have access to the
+configuration rollback feature.
+
+To test the northbound, the suggested method is to use the
+[[Transactional CLI]] with the *ripd* daemon and play with the new
+commands. The ``debug northbound`` command can be used to see which
+northbound callbacks are called in response to the ``commit`` command.
+For reference, the [[Demos]] page shows a small demonstration of the
+transactional CLI in action and what it’s capable of.
diff --git a/doc/developer/northbound/demos.rst b/doc/developer/northbound/demos.rst
new file mode 100644
index 0000000..876bd25
--- /dev/null
+++ b/doc/developer/northbound/demos.rst
@@ -0,0 +1,27 @@
+Transactional CLI
+-----------------
+
+This short demo shows some of the capabilities of the new transactional
+CLI:
+
+|asciicast1|
+
+ConfD + NETCONF + Cisco YDK
+---------------------------
+
+This is a very simple demo of *ripd* being configured by a Python
+script. The script uses NETCONF to communicate with *ripd*, which has
+the ConfD plugin loaded. The most interesting part, however, is the fact
+that the Python script is not using handcrafted XML payloads to
+configure *ripd*. Instead, the script uses Python bindings generated
+with Cisco’s YANG Development Kit (YDK).
+
+- Script used in the demo:
+ https://gist.github.com/rwestphal/defa9bd1ccf216ab082d4711ae402f95
+
+|asciicast2|
+
+.. |asciicast1| image:: https://asciinema.org/a/jL0BS5HfP2kS6N1HfgsZvfZk1.png
+ :target: https://asciinema.org/a/jL0BS5HfP2kS6N1HfgsZvfZk1
+.. |asciicast2| image:: https://asciinema.org/a/VfMElNxsjLcdvV7484E6ChxWv.png
+ :target: https://asciinema.org/a/VfMElNxsjLcdvV7484E6ChxWv
diff --git a/doc/developer/northbound/images/arch-after.png b/doc/developer/northbound/images/arch-after.png
new file mode 100644
index 0000000..01e6ae6
--- /dev/null
+++ b/doc/developer/northbound/images/arch-after.png
Binary files differ
diff --git a/doc/developer/northbound/images/arch-before.png b/doc/developer/northbound/images/arch-before.png
new file mode 100644
index 0000000..ab2bb0d
--- /dev/null
+++ b/doc/developer/northbound/images/arch-before.png
Binary files differ
diff --git a/doc/developer/northbound/images/ly-ctx.png b/doc/developer/northbound/images/ly-ctx.png
new file mode 100644
index 0000000..4d4e138
--- /dev/null
+++ b/doc/developer/northbound/images/ly-ctx.png
Binary files differ
diff --git a/doc/developer/northbound/images/lyd-node.png b/doc/developer/northbound/images/lyd-node.png
new file mode 100644
index 0000000..4ba2b48
--- /dev/null
+++ b/doc/developer/northbound/images/lyd-node.png
Binary files differ
diff --git a/doc/developer/northbound/images/lys-node.png b/doc/developer/northbound/images/lys-node.png
new file mode 100644
index 0000000..e9e46e7
--- /dev/null
+++ b/doc/developer/northbound/images/lys-node.png
Binary files differ
diff --git a/doc/developer/northbound/images/nb-layer.png b/doc/developer/northbound/images/nb-layer.png
new file mode 100644
index 0000000..4aa1fd6
--- /dev/null
+++ b/doc/developer/northbound/images/nb-layer.png
Binary files differ
diff --git a/doc/developer/northbound/images/transactions.png b/doc/developer/northbound/images/transactions.png
new file mode 100644
index 0000000..d18faf4
--- /dev/null
+++ b/doc/developer/northbound/images/transactions.png
Binary files differ
diff --git a/doc/developer/northbound/links.rst b/doc/developer/northbound/links.rst
new file mode 100644
index 0000000..e80374c
--- /dev/null
+++ b/doc/developer/northbound/links.rst
@@ -0,0 +1,233 @@
+RFCs
+~~~~
+
+- `RFC 7950 - The YANG 1.1 Data Modeling
+ Language <https://tools.ietf.org/html/rfc7950>`__
+- `RFC 7951 - JSON Encoding of Data Modeled with
+ YANG <https://tools.ietf.org/html/rfc7951>`__
+- `RFC 8342 - Network Management Datastore Architecture
+ (NMDA) <https://tools.ietf.org/html/rfc8342>`__
+- `RFC 6087 - Guidelines for Authors and Reviewers of YANG Data Model
+ Documents <https://tools.ietf.org/html/rfc6087>`__
+- `RFC 8340 - YANG Tree
+ Diagrams <https://tools.ietf.org/html/rfc8340>`__
+- `RFC 6991 - Common YANG Data
+ Types <https://tools.ietf.org/html/rfc6991>`__
+- `RFC 6241 - Network Configuration Protocol
+ (NETCONF) <https://tools.ietf.org/html/rfc6241>`__
+- `RFC 8040 - RESTCONF
+ Protocol <https://tools.ietf.org/html/rfc8040>`__
+
+YANG models
+~~~~~~~~~~~
+
+- Collection of several YANG models, including models from standards
+ organizations such as the IETF and vendor specific models:
+ https://github.com/YangModels/yang
+- OpenConfig: https://github.com/openconfig/public
+
+Presentations
+~~~~~~~~~~~~~
+
+- FRR Advanced Northbound API (May 2018)
+
+ - Slides:
+ https://www.dropbox.com/s/zhybthruwocbqaw/netdef-frr-northbound.pdf?dl=1
+
+- Ok, We Got Data Models, Now What?
+
+ - Video: https://www.youtube.com/watch?v=2oqkiZ83vAA
+ - Slides:
+ https://www.nanog.org/sites/default/files/20161017_Alvarez_Ok_We_Got_v1.pdf
+
+- Data Model-Driven Management: Latest Industry and Tool Developments
+
+ - Video: https://www.youtube.com/watch?v=n_oKGJ_jgYQ
+ - Slides:
+ https://pc.nanog.org/static/published/meetings/NANOG72/1559/20180219_Claise_Data_Modeling-Driven_Management__v1.pdf
+
+- Network Automation And Programmability: Reality Versus The Vendor
+ Hype When Considering Legacy And NFV Networks
+
+ - Video: https://www.youtube.com/watch?v=N5wbYncUS9o
+ - Slides:
+ https://www.nanog.org/sites/default/files/1_Moore_Network_Automation_And_Programmability.pdf
+
+- Lightning Talk: The API is the new CLI?
+
+ - Video: https://www.youtube.com/watch?v=ngi0erGNi58
+ - Slides:
+ https://pc.nanog.org/static/published/meetings/NANOG72/1638/20180221_Grundemann_Lightning_Talk_The_v1.pdf
+
+- Lightning Talk: OpenConfig - progress toward vendor-neutral network
+ management
+
+ - Video: https://www.youtube.com/watch?v=10rSUbeMmT4
+ - Slides:
+ https://pc.nanog.org/static/published/meetings/NANOG71/1535/20171004_Shaikh_Lightning_Talk_Openconfig_v1.pdf
+
+- Getting started with OpenConfig
+
+ - Video: https://www.youtube.com/watch?v=L7trUNK8NJI
+ - Slides:
+ https://pc.nanog.org/static/published/meetings/NANOG71/1456/20171003_Alvarez_Getting_Started_With_v1.pdf
+
+- Why NETCONF and YANG
+
+ - Video: https://www.youtube.com/watch?v=mp4h8aSTba8
+
+- NETCONF and YANG Concepts
+
+ - Video: https://www.youtube.com/watch?v=UwYYvT7DBvg
+
+- NETCONF Tutorial
+
+ - Video: https://www.youtube.com/watch?v=N4vov1mI14U
+
+Whitepapers
+~~~~~~~~~~~
+
+- Automating Network and Service Configuration Using NETCONF and YANG:
+ http://www.tail-f.com/wordpress/wp-content/uploads/2013/02/Tail-f-Presentation-Netconf-Yang.pdf
+- Creating the Programmable Network: The Business Case for NETCONF/YANG
+ in Network Devices:
+ http://www.tail-f.com/wordpress/wp-content/uploads/2013/10/HR-Tail-f-NETCONF-WP-10-08-13.pdf
+- NETCONF/YANG: What’s Holding Back Adoption & How to Accelerate It:
+ https://www.oneaccess-net.com/images/public/wp_heavy_reading.pdf
+- Achieving Automation with YANG Modeling Technologies:
+ https://www.cisco.com/c/dam/en/us/products/collateral/cloud-systems-management/network-services-orchestrator/idc-achieving-automation-wp.pdf
+
+Blog posts and podcasts
+~~~~~~~~~~~~~~~~~~~~~~~
+
+- OpenConfig and IETF YANG Models: Can they converge? -
+ http://rob.sh/post/215/
+- OpenConfig: Standardized Models For Networking -
+ https://packetpushers.net/openconfig-standardized-models-networking/
+- (Podcast) OpenConfig: From Basics to Implementations -
+ https://blog.ipspace.net/2017/02/openconfig-from-basics-to.html
+- (Podcast) How Did NETCONF Start on Software Gone Wild -
+ https://blog.ipspace.net/2017/12/how-did-netconf-start-on-software-gone.html
+- YANG Data Models in the Industry: Current State of Affairs (March
+ 2018) -
+ https://www.claise.be/2018/03/yang-data-models-in-the-industry-current-state-of-affairs-march-2018/
+- Why Data Model-driven Telemetry is the only useful Telemetry? -
+ https://www.claise.be/2018/02/why-data-model-driven-telemetry-is-the-only-useful-telemetry/
+- NETCONF versus RESTCONF: Capability Comparisons for Data
+ Model-driven Management -
+ https://www.claise.be/2017/10/netconf-versus-restconf-capabilitity-comparisons-for-data-model-driven-management-2/
+- An Introduction to NETCONF/YANG -
+ https://www.fir3net.com/Networking/Protocols/an-introduction-to-netconf-yang.html
+- Network Automation and the Rise of NETCONF -
+ https://medium.com/@k.okasha/network-automation-and-the-rise-of-netconf-e96cc33fe28
+- YANG and the Road to a Model Driven Network -
+ https://medium.com/@k.okasha/yang-and-road-to-a-model-driven-network-e9e52d47148d
+
+Software
+~~~~~~~~
+
+libyang
+^^^^^^^
+
+ libyang is a YANG data modelling language parser and toolkit written
+ (and providing API) in C.
+
+- GitHub page: https://github.com/CESNET/libyang
+- Documentation: https://netopeer.liberouter.org/doc/libyang/master/
+
+pyang
+^^^^^
+
+ pyang is a YANG validator, transformator and code generator, written
+ in python. It can be used to validate YANG modules for correctness,
+ to transform YANG modules into other formats, and to generate code
+ from the modules.
+
+- GitHub page: https://github.com/mbj4668/pyang
+- Documentation: https://github.com/mbj4668/pyang/wiki/Documentation
+
+ncclient
+^^^^^^^^
+
+ ncclient is a Python library that facilitates client-side scripting
+ and application development around the NETCONF protocol.
+
+- GitHub page: https://github.com/ncclient/ncclient
+- Documentation: https://ncclient.readthedocs.io/en/latest/
+
+YDK
+^^^
+
+ ydk-gen is a developer tool that can generate API’s that are modeled
+ in YANG. Currently, it generates language binding for Python, Go and
+ C++ with planned support for other language bindings in the future.
+
+- GitHub pages:
+
+ - Generator: https://github.com/CiscoDevNet/ydk-gen
+ - Python: https://github.com/CiscoDevNet/ydk-py
+
+ - Python samples: https://github.com/CiscoDevNet/ydk-py-samples
+
+ - Go: https://github.com/CiscoDevNet/ydk-go
+ - C++: https://github.com/CiscoDevNet/ydk-cpp
+
+- Documentation:
+
+ - Python: http://ydk.cisco.com/py/docs/
+ - Go: http://ydk.cisco.com/go/docs/
+ - C++: http://ydk.cisco.com/cpp/docs/
+
+- (Blog post) Simplifying Network Programmability with Model-Driven
+ APIs:
+ https://blogs.cisco.com/sp/simplifying-network-programmability-with-model-driven-apis
+- (Video introduction) Infrastructure as a Code Using YANG, OpenConfig
+ and YDK: https://www.youtube.com/watch?v=G1b6vJW1R5w
+
+pyangbind
+^^^^^^^^^
+
+ A plugin for pyang that creates Python bindings for a YANG model.
+
+- GitHub page: https://github.com/robshakir/pyangbind
+- Documentation: http://pynms.io/pyangbind/
+
+ConfD
+^^^^^
+
+- Official webpage (for ConfD Basic):
+ http://www.tail-f.com/confd-basic/
+- Training Videos: http://www.tail-f.com/confd-training-videos/
+- Forum: http://discuss.tail-f.com/
+
+Sysrepo
+^^^^^^^
+
+ Sysrepo is an YANG-based configuration and operational state data
+ store for Unix/Linux applications.
+
+- GitHub page: https://github.com/sysrepo/sysrepo
+- Official webpage: http://www.sysrepo.org/
+- Documentation: http://www.sysrepo.org/static/doc/html/
+
+Netopeer2
+^^^^^^^^^
+
+ Netopeer2 is a set of tools implementing network configuration tools
+ based on the NETCONF Protocol. This is the second generation of the
+ toolset, originally available as the Netopeer project. Netopeer2 is
+ based on the new generation of the NETCONF and YANG libraries -
+ libyang and libnetconf2. The Netopeer server uses sysrepo as a
+ NETCONF datastore implementation.
+
+- GitHub page: https://github.com/CESNET/Netopeer2
+
+Clixon
+^^^^^^
+
+ Clixon is an automatic configuration manager where you generate
+ interactive CLI, NETCONF, RESTCONF and embedded databases with
+ transaction support from a YANG specification.
+
+- GitHub page: https://github.com/clicon/clixon
+- Project page: http://www.clicon.org/
diff --git a/doc/developer/northbound/northbound.rst b/doc/developer/northbound/northbound.rst
new file mode 100644
index 0000000..7dddf06
--- /dev/null
+++ b/doc/developer/northbound/northbound.rst
@@ -0,0 +1,21 @@
+.. _northbound:
+
+**************
+Northbound API
+**************
+
+.. toctree::
+ :maxdepth: 2
+
+ architecture
+ transactional-cli
+ retrofitting-configuration-commands
+ operational-data-rpcs-and-notifications
+ plugins-sysrepo
+ advanced-topics
+ yang-tools
+ yang-module-translator
+ demos
+ links
+ ppr-basic-test-topology
+ ppr-mpls-basic-test-topology
diff --git a/doc/developer/northbound/operational-data-rpcs-and-notifications.rst b/doc/developer/northbound/operational-data-rpcs-and-notifications.rst
new file mode 100644
index 0000000..554bc17
--- /dev/null
+++ b/doc/developer/northbound/operational-data-rpcs-and-notifications.rst
@@ -0,0 +1,565 @@
+Operational data
+~~~~~~~~~~~~~~~~
+
+Writing API-agnostic code for YANG-modeled operational data is
+challenging. ConfD and Sysrepo, for instance, have completely different
+APIs to fetch operational data. So how can we write API-agnostic
+callbacks that can be used by both the ConfD and Sysrepo plugins, and
+any other northbound client that might be written in the future?
+
+As an additional requirement, the callbacks must be designed in a way
+that makes in-place XPath filtering possible. As an example, a
+management client might want to retrieve only a subset of a large YANG
+list (e.g. a BGP table), and for optimal performance it should be
+possible to filter out the unwanted elements locally on the managed
+device instead of returning all elements and performing the filtering
+in the management application.
+
+To meet all these requirements, the four callbacks below were introduced
+in the northbound architecture:
+
+.. code:: c
+
+ /*
+ * Operational data callback.
+ *
+ * The callback function should return the value of a specific leaf or
+ * inform if a typeless value (presence containers or leafs of type
+ * empty) exists or not.
+ *
+ * xpath
+ * YANG data path of the data we want to get
+ *
+ * list_entry
+ * pointer to list entry
+ *
+ * Returns:
+ * pointer to newly created yang_data structure, or NULL to indicate
+ * the absence of data
+ */
+ struct yang_data *(*get_elem)(const char *xpath, void *list_entry);
+
+ /*
+ * Operational data callback for YANG lists.
+ *
+ * The callback function should return the next entry in the list. The
+ * 'list_entry' parameter will be NULL on the first invocation.
+ *
+ * list_entry
+ * pointer to a list entry
+ *
+ * Returns:
+ * pointer to the next entry in the list, or NULL to signal that the
+ * end of the list was reached
+ */
+ void *(*get_next)(void *list_entry);
+
+ /*
+ * Operational data callback for YANG lists.
+ *
+ * The callback function should fill the 'keys' parameter based on the
+ * given list_entry.
+ *
+ * list_entry
+ * pointer to a list entry
+ *
+ * keys
+ * structure to be filled based on the attributes of the provided
+ * list entry
+ *
+ * Returns:
+ * NB_OK on success, NB_ERR otherwise
+ */
+ int (*get_keys)(void *list_entry, struct yang_list_keys *keys);
+
+ /*
+ * Operational data callback for YANG lists.
+ *
+ * The callback function should return a list entry based on the list
+ * keys given as a parameter.
+ *
+ * keys
+ * structure containing the keys of the list entry
+ *
+ * Returns:
+ * a pointer to the list entry if found, or NULL if not found
+ */
+ void *(*lookup_entry)(struct yang_list_keys *keys);
+
+These callbacks were designed to provide maximum flexibility, and borrow
+a lot of ideas from the ConfD API. Each callback does one and only one
+task, they are indivisible primitives that can be combined in several
+different ways to iterate over operational data. The extra flexibility
+certainly has a performance cost, but it’s the price to pay if we want
+to expose FRR operational data using several different management
+interfaces (e.g. NETCONF via either ConfD or Sysrepo+Netopeer2). In the
+future it might be possible to introduce optional callbacks that do
+things like returning multiple objects at once. They would provide
+enhanced performance when iterating over large lists, but their use
+would be limited by the northbound plugins that can be integrated with
+them.
+
+   NOTE: using the northbound callbacks as a base, the ConfD plugin can
+   provide up to 100 objects per round trip between FRR and the *confd*
+   daemon. Preliminary tests showed FRR taking ~7 seconds
+   (asynchronously, without blocking the main pthread) to return a RIP
+   table containing 100k routes to a NETCONF client connected to *confd*
+   (JSON was used as the encoding format). Work needs to be done to find
+   the bottlenecks and optimize this operation.
+
+The [[Plugins - Writing Your Own]] page explains how the northbound
+plugins can fetch operational data using the aforementioned northbound
+callbacks, and how in-place XPath filtering can be implemented.
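+
+As a hedged illustration of how these primitives compose (``nb_node``,
+``leaf_xpath`` and ``process()`` are placeholders, not real northbound
+API names), a client-side walk over a YANG list could look like this:
+
+.. code:: c
+
+   /* Visit all entries of a YANG list and fetch one leaf from each.
+    * get_next() returns an opaque iterator; for simplicity we assume
+    * here that it is also what get_elem() expects as list_entry. */
+   static void walk_list(const struct nb_node *nb_node,
+                         const char *leaf_xpath)
+   {
+           void *entry = NULL;
+
+           while ((entry = nb_node->cbs.get_next(entry)) != NULL) {
+                   struct yang_data *data;
+
+                   data = nb_node->cbs.get_elem(leaf_xpath, entry);
+                   if (data != NULL)
+                           process(data); /* placeholder consumer */
+           }
+   }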
+
+Example
+^^^^^^^
+
+Now let’s move to an example to show how these callbacks are implemented
+in practice. The following YANG container is part of the *ietf-rip*
+module and contains operational data about RIP neighbors:
+
+.. code:: yang
+
+ container neighbors {
+ description
+ "Neighbor information.";
+ list neighbor {
+ key "address";
+ description
+ "A RIP neighbor.";
+ leaf address {
+ type inet:ipv4-address;
+ description
+ "IP address that a RIP neighbor is using as its
+ source address.";
+ }
+ leaf last-update {
+ type yang:date-and-time;
+ description
+ "The time when the most recent RIP update was
+ received from this neighbor.";
+ }
+ leaf bad-packets-rcvd {
+ type yang:counter32;
+ description
+ "The number of RIP invalid packets received from
+ this neighbor which were subsequently discarded
+ for any reason (e.g. a version 0 packet, or an
+ unknown command type).";
+ }
+ leaf bad-routes-rcvd {
+ type yang:counter32;
+ description
+ "The number of routes received from this neighbor,
+ in valid RIP packets, which were ignored for any
+ reason (e.g. unknown address family, or invalid
+ metric).";
+ }
+ }
+ }
+
+We know that this is operational data because the ``neighbors``
+container is within the ``state`` container, which has the
+``config false;`` property (which is applied recursively).
+
+As expected, the ``gen_northbound_callbacks`` tool also generates
+skeleton callbacks for nodes that represent operational data:
+
+.. code:: c
+
+ {
+ .xpath = "/frr-ripd:ripd/state/neighbors/neighbor",
+ .cbs.get_next = ripd_state_neighbors_neighbor_get_next,
+ .cbs.get_keys = ripd_state_neighbors_neighbor_get_keys,
+ .cbs.lookup_entry = ripd_state_neighbors_neighbor_lookup_entry,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/neighbors/neighbor/address",
+ .cbs.get_elem = ripd_state_neighbors_neighbor_address_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/neighbors/neighbor/last-update",
+ .cbs.get_elem = ripd_state_neighbors_neighbor_last_update_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/neighbors/neighbor/bad-packets-rcvd",
+ .cbs.get_elem = ripd_state_neighbors_neighbor_bad_packets_rcvd_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/neighbors/neighbor/bad-routes-rcvd",
+ .cbs.get_elem = ripd_state_neighbors_neighbor_bad_routes_rcvd_get_elem,
+ },
+
+The ``/frr-ripd:ripd/state/neighbors/neighbor`` list within the
+``neighbors`` container has three different callbacks that need to be
+implemented. Let’s start with the first one, the ``get_next`` callback:
+
+.. code:: c
+
+ static void *ripd_state_neighbors_neighbor_get_next(void *list_entry)
+ {
+ struct listnode *node;
+
+ if (list_entry == NULL)
+ node = listhead(peer_list);
+ else
+ node = listnextnode((struct listnode *)list_entry);
+
+ return node;
+ }
+
+Given a list entry, the job of this callback is to find the next element
+from the list. When the ``list_entry`` parameter is NULL, then the first
+element of the list should be returned.
+
+*ripd* uses the ``rip_peer`` structure to represent RIP neighbors, and
+the ``peer_list`` global variable (linked list) is used to store all RIP
+neighbors.
+
+In order to be able to iterate over the list of RIP neighbors, the
+callback returns a ``listnode`` variable instead of a ``rip_peer``
+variable. The ``listnextnode`` macro can then be used to find the next
+element from the linked list.
+
+Now the second callback, ``get_keys``:
+
+.. code:: c
+
+ static int ripd_state_neighbors_neighbor_get_keys(void *list_entry,
+ struct yang_list_keys *keys)
+ {
+ struct listnode *node = list_entry;
+ struct rip_peer *peer = listgetdata(node);
+
+ keys->num = 1;
+ (void)inet_ntop(AF_INET, &peer->addr, keys->key[0].value,
+ sizeof(keys->key[0].value));
+
+ return NB_OK;
+ }
+
+This one is easy. First, we obtain the RIP neighbor from the
+``listnode`` structure. Then, we fill the ``keys`` parameter according
+to the attributes of the RIP neighbor. In this case, the ``neighbor``
+YANG list has only one key: the neighbor IP address. We then use the
+``inet_ntop()`` function to transform this binary IP address into a
+string (the lingua franca of the FRR northbound).
+
+The last callback for the ``neighbor`` YANG list is the ``lookup_entry``
+callback:
+
+.. code:: c
+
+ static void *
+ ripd_state_neighbors_neighbor_lookup_entry(struct yang_list_keys *keys)
+ {
+ struct in_addr address;
+
+ yang_str2ipv4(keys->key[0].value, &address);
+
+ return rip_peer_lookup(&address);
+ }
+
+This callback is the counterpart of the ``get_keys`` callback: given an
+array of list keys, the associated list entry should be returned. The
+``yang_str2ipv4()`` function is used to convert the list key (an IP
+address) from a string to an ``in_addr`` structure. Then the
+``rip_peer_lookup()`` function is used to find the list entry.
+
+Finally, each YANG leaf inside the ``neighbor`` list has its associated
+``get_elem`` callback:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:ripd/state/neighbors/neighbor/address
+ */
+ static struct yang_data *
+ ripd_state_neighbors_neighbor_address_get_elem(const char *xpath,
+ void *list_entry)
+ {
+ struct rip_peer *peer = list_entry;
+
+ return yang_data_new_ipv4(xpath, &peer->addr);
+ }
+
+ /*
+ * XPath: /frr-ripd:ripd/state/neighbors/neighbor/last-update
+ */
+ static struct yang_data *
+ ripd_state_neighbors_neighbor_last_update_get_elem(const char *xpath,
+ void *list_entry)
+ {
+ /* TODO: yang:date-and-time is tricky */
+ return NULL;
+ }
+
+ /*
+ * XPath: /frr-ripd:ripd/state/neighbors/neighbor/bad-packets-rcvd
+ */
+ static struct yang_data *
+ ripd_state_neighbors_neighbor_bad_packets_rcvd_get_elem(const char *xpath,
+ void *list_entry)
+ {
+ struct rip_peer *peer = list_entry;
+
+ return yang_data_new_uint32(xpath, peer->recv_badpackets);
+ }
+
+ /*
+ * XPath: /frr-ripd:ripd/state/neighbors/neighbor/bad-routes-rcvd
+ */
+ static struct yang_data *
+ ripd_state_neighbors_neighbor_bad_routes_rcvd_get_elem(const char *xpath,
+ void *list_entry)
+ {
+ struct rip_peer *peer = list_entry;
+
+ return yang_data_new_uint32(xpath, peer->recv_badroutes);
+ }
+
+These callbacks receive the list entry as a parameter and return the
+corresponding data using the ``yang_data_new_*()`` wrapper functions.
+Not much to explain here.
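+
+Putting the pieces together, here is a hedged sketch of how a
+management request for a single leaf of a specific list entry could be
+served by combining ``lookup_entry()`` and ``get_elem()`` (the key
+value and XPath below are illustrative):
+
+.. code:: c
+
+   /* Hypothetical sketch: fetching one leaf of one list entry. */
+   struct yang_list_keys keys;
+   struct yang_data *data;
+   void *entry;
+
+   keys.num = 1;
+   strlcpy(keys.key[0].value, "10.0.1.2", sizeof(keys.key[0].value));
+
+   entry = ripd_state_neighbors_neighbor_lookup_entry(&keys);
+   if (entry != NULL) {
+           data = ripd_state_neighbors_neighbor_address_get_elem(
+                   "/frr-ripd:ripd/state/neighbors/neighbor[address='10.0.1.2']/address",
+                   entry);
+           /* ... encode 'data' and send it to the management client ... */
+   }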
+
+Iterating over operational data without blocking the main pthread
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+One of the problems we have in FRR is that some “show” commands in the
+CLI can take too long, potentially long enough to trigger protocol
+timeouts and bring sessions down.
+
+To avoid this kind of problem, northbound clients are encouraged to do
+one of the following:
+
+- Create a separate pthread for handling requests to fetch operational
+  data.
+- Iterate over YANG lists and leaf-lists asynchronously, returning a
+  maximum number of elements per call instead of returning all elements
+  in one shot.
+
+In order to handle both cases correctly, the ``get_next`` callbacks need
+to use locks to prevent the YANG lists from being modified while they
+are being iterated over. If that is not done, the list entry returned by
+this callback can become a dangling pointer when used in another
+callback.
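+
+As a rough illustration of this locking discipline, below is a
+hypothetical sketch (not existing *ripd* code) that protects
+``peer_list`` with a rwlock: the main pthread takes the write lock when
+modifying the list, while the pthread fetching operational data holds
+the read lock for the entire iteration so that the entries returned by
+``get_next`` cannot dangle:
+
+.. code:: c
+
+   /* Hypothetical sketch: serializing access to peer_list between the
+    * main pthread (writer) and an operational-data pthread (reader).
+    * Requires <pthread.h>. */
+   static pthread_rwlock_t peer_list_lock = PTHREAD_RWLOCK_INITIALIZER;
+
+   /* Main pthread: modify the list under the write lock. */
+   static void rip_peer_remove(struct rip_peer *peer)
+   {
+           pthread_rwlock_wrlock(&peer_list_lock);
+           listnode_delete(peer_list, peer);
+           pthread_rwlock_unlock(&peer_list_lock);
+   }
+
+   /* Operational-data pthread: hold the read lock for the whole
+    * iteration, so entries returned by get_next() stay valid. */
+   static void iterate_rip_neighbors(void)
+   {
+           void *entry = NULL;
+
+           pthread_rwlock_rdlock(&peer_list_lock);
+           while ((entry = ripd_state_neighbors_neighbor_get_next(entry))) {
+                   /* ... get_keys() and get_elem() calls ... */
+           }
+           pthread_rwlock_unlock(&peer_list_lock);
+   }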
+
+Currently the ConfD and Sysrepo plugins run only in the main pthread.
+The plan in the short-term is to introduce a separate pthread only for
+handling operational data, and use the main pthread only for handling
+configuration changes, RPCs and notifications.
+
+RPCs and Actions
+~~~~~~~~~~~~~~~~
+
+The FRR northbound supports YANG RPCs and Actions through the ``rpc()``
+callback, which is documented as follows in the *lib/northbound.h* file:
+
+.. code:: c
+
+ /*
+ * RPC and action callback.
+ *
+ * Both 'input' and 'output' are lists of 'yang_data' structures. The
+ * callback should fetch all the input parameters from the 'input' list,
+ * and add output parameters to the 'output' list if necessary.
+ *
+ * xpath
+ * xpath of the YANG RPC or action
+ *
+ * input
+ * read-only list of input parameters
+ *
+ * output
+ * list of output parameters to be populated by the callback
+ *
+ * Returns:
+ * NB_OK on success, NB_ERR otherwise
+ */
+ int (*rpc)(const char *xpath, const struct list *input,
+ struct list *output);
+
+Note that the same callback is used for both RPCs and actions, which are
+essentially the same thing. In the case of YANG actions, the ``xpath``
+parameter can be consulted to find the data node associated with the
+operation.
+
+As part of the northbound retrofitting process, it’s suggested to model
+some EXEC-level commands using YANG so that their functionality is
+exposed to management interfaces other than the CLI. As an
+example, if the ``clear bgp`` command is modeled using a YANG RPC, and a
+corresponding ``rpc`` callback is written, then it should be possible to
+clear BGP neighbors using NETCONF and RESTCONF with that RPC (the ConfD
+and Sysrepo plugins have full support for YANG RPCs and actions).
+
+Here’s an example of a very simple RPC modeled using YANG:
+
+.. code:: yang
+
+ rpc clear-rip-route {
+ description
+ "Clears RIP routes from the IP routing table and routes
+ redistributed into the RIP protocol.";
+ }
+
+This RPC doesn’t have any input or output parameters. Below we can see
+the implementation of the corresponding ``rpc`` callback, whose skeleton
+was automatically generated by the ``gen_northbound_callbacks`` tool:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:clear-rip-route
+ */
+ static int clear_rip_route_rpc(const char *xpath, const struct list *input,
+ struct list *output)
+ {
+ struct route_node *rp;
+ struct rip_info *rinfo;
+ struct list *list;
+ struct listnode *listnode;
+
+ /* Clear received RIP routes */
+ for (rp = route_top(rip->table); rp; rp = route_next(rp)) {
+ list = rp->info;
+ if (list == NULL)
+ continue;
+
+ for (ALL_LIST_ELEMENTS_RO(list, listnode, rinfo)) {
+ if (!rip_route_rte(rinfo))
+ continue;
+
+ if (CHECK_FLAG(rinfo->flags, RIP_RTF_FIB))
+ rip_zebra_ipv4_delete(rp);
+ break;
+ }
+
+ if (rinfo) {
+ RIP_TIMER_OFF(rinfo->t_timeout);
+ RIP_TIMER_OFF(rinfo->t_garbage_collect);
+ listnode_delete(list, rinfo);
+ rip_info_free(rinfo);
+ }
+
+ if (list_isempty(list)) {
+ list_delete_and_null(&list);
+ rp->info = NULL;
+ route_unlock_node(rp);
+ }
+ }
+
+ return NB_OK;
+ }
+
+If the ``clear-rip-route`` RPC had any input parameters, they would be
+available in the ``input`` list given as a parameter to the callback.
+Similarly, the ``output`` list can be used to append output parameters
+generated by the RPC, if any are defined in the YANG model.
+
+The northbound clients (CLI and northbound plugins) have the
+responsibility to create and delete the ``input`` and ``output`` lists.
+However, in the cases where the RPC or action doesn’t have any input or
+output parameters, the northbound client can pass NULL pointers to the
+``rpc`` callback to avoid creating linked lists unnecessarily. We can
+see this happening in the example below:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:clear-rip-route
+ */
+ DEFPY (clear_ip_rip,
+ clear_ip_rip_cmd,
+ "clear ip rip",
+ CLEAR_STR
+ IP_STR
+ "Clear IP RIP database\n")
+ {
+ return nb_cli_rpc("/frr-ripd:clear-rip-route", NULL, NULL);
+ }
+
+``nb_cli_rpc()`` is a helper function that merely finds the appropriate
+``rpc`` callback based on the XPath provided in the first argument, and
+maps the northbound error code returned by the callback to a vty error
+code (e.g. ``CMD_SUCCESS``, ``CMD_WARNING``). The second and third
+arguments provided to the function refer to the ``input`` and ``output``
+lists. In this case, both arguments are set to NULL since the YANG RPC
+in question doesn’t have any input/output parameters.
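+
+For RPCs that do carry parameters, the hedged sketch below shows what
+such a callback could look like. The ``clear-rip-interface`` RPC, its
+``interface`` input leaf and its ``routes-cleared`` output leaf are made
+up for illustration purposes, and ``yang_data_list_find()`` is assumed
+to be available for looking up an argument by XPath in the input list:
+
+.. code:: c
+
+   /*
+    * Hypothetical sketch: an RPC with one input and one output
+    * parameter (none of these names exist in frr-ripd).
+    */
+   static int clear_rip_interface_rpc(const char *xpath,
+                                      const struct list *input,
+                                      struct list *output)
+   {
+           struct yang_data *ifname;
+           char xpath_out[XPATH_MAXLEN];
+           uint32_t cleared = 0;
+
+           /* Fetch the input parameter (assumed lookup helper). */
+           ifname = yang_data_list_find(input, "%s/input/interface", xpath);
+           if (ifname == NULL)
+                   return NB_ERR;
+
+           /* ... clear the RIP state associated with the interface
+            * named in 'ifname->value', counting the removed routes
+            * in 'cleared' (omitted) ... */
+
+           /* Append the output parameter. */
+           snprintf(xpath_out, sizeof(xpath_out),
+                    "%s/output/routes-cleared", xpath);
+           listnode_add(output, yang_data_new_uint32(xpath_out, cleared));
+
+           return NB_OK;
+   }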
+
+Notifications
+~~~~~~~~~~~~~
+
+YANG notifications are sent using the ``nb_notification_send()``
+function, documented in the *lib/northbound.h* file as follows:
+
+.. code:: c
+
+ /*
+ * Send a YANG notification. This is a no-op unless the 'nb_notification_send'
+ * hook was registered by a northbound plugin.
+ *
+ * xpath
+ * xpath of the YANG notification
+ *
+ * arguments
+ * linked list containing the arguments that should be sent. This list is
+ * deleted after being used.
+ *
+ * Returns:
+ * NB_OK on success, NB_ERR otherwise
+ */
+ extern int nb_notification_send(const char *xpath, struct list *arguments);
+
+The northbound doesn’t use callbacks for notifications because
+notifications are generated locally and sent to the northbound clients.
+This way, whenever a notification needs to be sent, it’s possible to
+call the appropriate function directly instead of finding a callback
+based on the XPath of the YANG notification.
+
+As an example, the *ietf-rip* module contains the following
+notification:
+
+.. code:: yang
+
+ notification authentication-failure {
+ description
+ "This notification is sent when the system
+ receives a PDU with the wrong authentication
+ information.";
+ leaf interface-name {
+ type string;
+ description
+ "Describes the name of the RIP interface.";
+ }
+ }
+
+The following convenience function was implemented in *ripd* to send
+*authentication-failure* YANG notifications:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:authentication-failure
+ */
+ void ripd_notif_send_auth_failure(const char *ifname)
+ {
+ const char *xpath = "/frr-ripd:authentication-failure";
+ struct list *arguments;
+ char xpath_arg[XPATH_MAXLEN];
+ struct yang_data *data;
+
+ arguments = yang_data_list_new();
+
+ snprintf(xpath_arg, sizeof(xpath_arg), "%s/interface-name", xpath);
+ data = yang_data_new_string(xpath_arg, ifname);
+ listnode_add(arguments, data);
+
+ nb_notification_send(xpath, arguments);
+ }
+
+Now sending the *authentication-failure* YANG notification should be as
+simple as calling the above function and providing the appropriate
+interface name. The notification will be processed by all northbound
+plugins that registered a callback for the ``nb_notification_send`` hook.
+The ConfD and Sysrepo plugins, for instance, use this hook to relay the
+notifications to the *confd*/*sysrepod* daemons, which can generate
+NETCONF notifications to subscribed clients. When no northbound plugin
+is loaded, ``nb_notification_send()`` doesn’t do anything and the
+notifications are ignored.
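+
+As an illustrative (hypothetical) call site, the RIP packet reception
+code could emit the notification as follows, where
+``rip_auth_check_packet()`` is a made-up name standing in for the real
+authentication check:
+
+.. code:: c
+
+   /* Hypothetical sketch: emitting the notification from the RIP
+    * packet reception path. */
+   static void rip_packet_process(struct interface *ifp,
+                                  struct rip_packet *packet)
+   {
+           if (!rip_auth_check_packet(ifp, packet)) {
+                   ripd_notif_send_auth_failure(ifp->name);
+                   return;
+           }
+
+           /* ... normal packet processing ... */
+   }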
diff --git a/doc/developer/northbound/plugins-sysrepo.rst b/doc/developer/northbound/plugins-sysrepo.rst
new file mode 100644
index 0000000..186c3a0
--- /dev/null
+++ b/doc/developer/northbound/plugins-sysrepo.rst
@@ -0,0 +1,137 @@
+Installation
+------------
+
+Required dependencies
+^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+   # apt-get install git cmake build-essential bison flex libpcre3-dev libev-dev \
+                     libavl-dev libprotobuf-c-dev protobuf-c-compiler libcmocka0 \
+                     libcmocka-dev doxygen libssl-dev libssh-dev
+
+libyang
+^^^^^^^
+
+::
+
+ # apt-get install libyang0.16 libyang-dev
+
+Sysrepo
+^^^^^^^
+
+::
+
+ $ git clone https://github.com/sysrepo/sysrepo.git
+ $ cd sysrepo/
+ $ mkdir build; cd build
+ $ cmake -DCMAKE_BUILD_TYPE=Release -DGEN_LANGUAGE_BINDINGS=OFF .. && make
+ # make install
+
+libnetconf2
+^^^^^^^^^^^
+
+::
+
+ $ git clone https://github.com/CESNET/libnetconf2.git
+ $ cd libnetconf2/
+ $ mkdir build; cd build
+ $ cmake .. && make
+ # make install
+
+netopeer2
+^^^^^^^^^
+
+::
+
+ $ git clone https://github.com/CESNET/Netopeer2.git
+ $ cd Netopeer2
+ $ cd server
+ $ mkdir build; cd build
+ $ cmake .. && make
+ # make install
+
+**Note:** If ``make install`` fails because it can’t find
+``libsysrepo.so.0.7``, run ``ldconfig`` to refresh the library search
+path and try again.
+
+FRR
+^^^
+
+Build and install FRR using the ``--enable-sysrepo`` configure-time
+option.
+
+Initialization
+--------------
+
+Install the FRR YANG modules in the Sysrepo datastore:
+
+::
+
+ # sysrepoctl --install /usr/local/share/yang/ietf-interfaces@2018-01-09.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-vrf.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-interface.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-route-types.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-filter.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-route-map.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-isisd.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-ripd.yang
+ # sysrepoctl --install /usr/local/share/yang/frr-ripngd.yang
+ # sysrepoctl -c frr-vrf --owner frr --group frr
+ # sysrepoctl -c frr-interface --owner frr --group frr
+ # sysrepoctl -c frr-route-types --owner frr --group frr
+ # sysrepoctl -c frr-filter --owner frr --group frr
+ # sysrepoctl -c frr-route-map --owner frr --group frr
+ # sysrepoctl -c frr-isisd --owner frr --group frr
+ # sysrepoctl -c frr-ripd --owner frr --group frr
+ # sysrepoctl -c frr-ripngd --owner frr --group frr
+
+Start netopeer2-server:
+
+::
+
+ # netopeer2-server -d &
+
+Start the FRR daemons with the sysrepo module:
+
+::
+
+ # isisd -M sysrepo --log=stdout
+
+Managing the configuration
+--------------------------
+
+The following NETCONF scripts can be used to show and edit the FRR
+configuration:
+https://github.com/rzalamena/ietf-hackathon-brazil-201907/tree/master/netconf-scripts
+
+Example:
+
+::
+
+ # ./netconf-edit.py 127.0.0.1
+ # ./netconf-get-config.py 127.0.0.1
+ <?xml version="1.0" encoding="UTF-8"?><data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><isis xmlns="http://frrouting.org/yang/isisd"><instance><area-tag>testnet</area-tag><is-type>level-1</is-type></instance></isis></data>
+
+..
+
+ NOTE: the ncclient library needs to be installed first:
+ ``apt install -y python3-ncclient``
+
+The *sysrepocfg* tool can also be used to show/edit the FRR
+configuration. Example:
+
+::
+
+ # sysrepocfg --format=json --import=frr-isisd.json --datastore=running frr-isisd
+ # sysrepocfg --format=json --export --datastore=running frr-isisd
+ {
+ "frr-isisd:isis": {
+ "instance": [
+ {
+ "area-tag": "testnet",
+ "is-type": "level-1"
+ }
+ ]
+ }
+ }
diff --git a/doc/developer/northbound/ppr-basic-test-topology.rst b/doc/developer/northbound/ppr-basic-test-topology.rst
new file mode 100644
index 0000000..a680ed7
--- /dev/null
+++ b/doc/developer/northbound/ppr-basic-test-topology.rst
@@ -0,0 +1,1632 @@
+Table of Contents
+~~~~~~~~~~~~~~~~~
+
+- `Software <#software>`__
+- `Topology <#topology>`__
+- `Configuration <#configuration>`__
+
+ - `CLI <#configuration-cli>`__
+ - `YANG <#configuration-yang>`__
+
+- `Verification - Control Plane <#verification-cplane>`__
+- `Verification - Forwarding Plane <#verification-fplane>`__
+
+Software
+~~~~~~~~
+
+The FRR PPR implementation for IS-IS is available here:
+https://github.com/opensourcerouting/frr/tree/isisd-ppr
+
+Topology
+~~~~~~~~
+
+In this topology we have an IS-IS network consisting of 12 routers. CE1
+and CE2 are the customer edge routers, connected to R11 and R14,
+respectively. Three hosts are connected to the CEs using only static
+routes.
+
+Router R11 advertises 6 PPR TLVs, which correspond to three
+bi-directional GRE tunnels:
+
+- **6000:1::1 <-> 6000:2::1:** {R11 - R21 - R22 - R23 - R14} (IPv6 Node
+  Addresses only)
+- **6000:1::2 <-> 6000:2::2:** {R11 - R21 - R32 - R41 - R33 - R23 -
+  R14} (IPv6 Node and Interface Addresses)
+- **6000:1::3 <-> 6000:2::3:** {R11 - R21 - R99 - R23 - R14}
+  (misconfigured path)
+
+PBR rules are configured on R11 and R14 to route the traffic between
+Host 1 and Host 3 using the first PPR tunnel. Traffic between Host 2 and
+Host 3 uses the regular IS-IS shortest path.
+
+Additional information:
+
+- Addresses in the 4000::/16 range refer to interface addresses, where
+  the last hextet corresponds to the node ID.
+- Addresses in the 5000::/16 range refer to loopback addresses, where
+  the last hextet corresponds to the node ID.
+- Addresses in the 6000::/16 range refer to PPR-ID addresses.
+
+::
+
+ +-------+ +-------+ +-------+
+ | | | | | |
+ | HOST1 | | HOST2 | | HOST3 |
+ | | | | | |
+ +---+---+ +---+---+ +---+---+
+ | | |
+ |fd00:10:1::/64 | |
+ +-----+ +------+ fd00:20:1::/64|
+ | |fd00:10:2::/64 |
+ | | |
+ +-+--+--+ +---+---+
+ | | | |
+ | CE1 | | CE2 |
+ | | | |
+ +---+---+ +---+---+
+ | |
+ | |
+ |fd00:10:0::/64 fd00:20:0::/64|
+ | |
+ | |
+ +---+---+ +-------+ +-------+ +---+---+
+ | |4000:101::/64| |4000:102::/64| |4000:103::/64| |
+ | R11 +-------------+ R12 +-------------+ R13 +-------------+ R14 |
+ | | | | | | | |
+ +---+---+ +--+-+--+ +--+-+--+ +---+---+
+ | | | | | |
+ |4000:104::/64 | |4000:106::/64 | |4000:108::/64 |
+ +---------+ +--------+ +--------+ +--------+ +--------+ +---------+
+ | |4000:105::/64 | |4000:107::/64 | |4000:109::/64
+ | | | | | |
+ +--+-+--+ +--+-+--+ +--+-+--+
+ | |4000:110::/64| |4000:111::/64| |
+ | R21 +-------------+ R22 +-------------+ R23 |
+ | | | | | |
+ +--+-+--+ +--+-+--+ +--+-+--+
+ | | | | | |
+ | |4000:113::/64 | |4000:115::/64 | |4000:117::/64
+ +---------+ +--------+ +--------+ +--------+ +--------+ +---------+
+ |4000:112::/64 | |4000:114::/64 | |4000:116::/64 |
+ | | | | | |
+ +---+---+ +--+-+--+ +--+-+--+ +---+---+
+ | |4000:118::/64| |4000:119::/64| |4000:120::/64| |
+ | R31 +-------------+ R32 +-------------+ R33 +-------------+ R34 |
+ | | | | | | | |
+ +-------+ +---+---+ +---+---+ +-------+
+ | |
+ |4000:121::/64 |
+ +----------+----------+
+ |
+ |
+ +---+---+
+ | |
+ | R41 |
+ | |
+ +-------+
+
+Configuration
+~~~~~~~~~~~~~
+
+PPR TLV processing needs to be enabled on all IS-IS routers using the
+``ppr on`` command. The advertisement of all PPR TLVs is done by router
+R11.
+
+CLI configuration
+^^^^^^^^^^^^^^^^^
+
+.. code:: yaml
+
+ ---
+
+ routers:
+
+ host1:
+ links:
+ eth-ce1:
+ peer: [ce1, eth-host1]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-ce1
+ ipv6 address fd00:10:1::1/64
+ !
+ ipv6 route ::/0 fd00:10:1::100
+
+ host2:
+ links:
+ eth-ce1:
+ peer: [ce1, eth-host2]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-ce1
+ ipv6 address fd00:10:2::1/64
+ !
+ ipv6 route ::/0 fd00:10:2::100
+
+ host3:
+ links:
+ eth-ce2:
+ peer: [ce2, eth-host3]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-ce2
+ ipv6 address fd00:20:1::1/64
+ !
+ ipv6 route ::/0 fd00:20:1::100
+
+ ce1:
+ links:
+ eth-host1:
+ peer: [host1, eth-ce1]
+ eth-host2:
+ peer: [host2, eth-ce1]
+ eth-rt11:
+ peer: [rt11, eth-ce1]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-host1
+ ipv6 address fd00:10:1::100/64
+ !
+ interface eth-host2
+ ipv6 address fd00:10:2::100/64
+ !
+ interface eth-rt11
+ ipv6 address fd00:10:0::100/64
+ !
+ ipv6 route ::/0 fd00:10:0::11
+
+ ce2:
+ links:
+ eth-host3:
+ peer: [host3, eth-ce2]
+ eth-rt14:
+ peer: [rt14, eth-ce2]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-host3
+ ipv6 address fd00:20:1::100/64
+ !
+ interface eth-rt14
+ ipv6 address fd00:20:0::100/64
+ !
+ ipv6 route ::/0 fd00:20:0::14
+
+ rt11:
+ links:
+ lo-ppr:
+ eth-ce1:
+ peer: [ce1, eth-rt11]
+ eth-rt12:
+ peer: [rt12, eth-rt11]
+ eth-rt21:
+ peer: [rt21, eth-rt11]
+ shell: |
+ # GRE tunnel for preferred packets (PPR)
+ ip -6 tunnel add tun-ppr mode ip6gre remote 6000:2::1 local 6000:1::1 ttl 64
+ ip link set dev tun-ppr up
+ # PBR rules
+ ip -6 rule add from fd00:10:1::/64 to fd00:20:1::/64 iif eth-ce1 lookup 10000
+ ip -6 route add default dev tun-ppr table 10000
+ frr:
+ zebra:
+ staticd:
+ isisd:
+ config: |
+ interface lo-ppr
+ ipv6 address 6000:1::1/128
+ ipv6 address 6000:1::2/128
+ ipv6 address 6000:1::3/128
+ !
+ interface lo
+ ipv6 address 5000::11/128
+ ipv6 router isis 1
+ !
+ interface eth-ce1
+ ipv6 address fd00:10:0::11/64
+ !
+ interface eth-rt12
+ ipv6 address 4000:101::11/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt21
+ ipv6 address 4000:104::11/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ ipv6 route fd00:10::/32 fd00:10:0::100
+ !
+ ppr group VOIP
+ ppr ipv6 6000:1::1/128 prefix 5000::11/128 metric 50
+ pde ipv6-node 5000::14/128
+ pde ipv6-node 5000::23/128
+ pde ipv6-node 5000::22/128
+ pde ipv6-node 5000::21/128
+ pde ipv6-node 5000::11/128
+ !
+ ppr ipv6 6000:2::1/128 prefix 5000::14/128 metric 50
+ pde ipv6-node 5000::11/128
+ pde ipv6-node 5000::21/128
+ pde ipv6-node 5000::22/128
+ pde ipv6-node 5000::23/128
+ pde ipv6-node 5000::14/128
+ !
+ !
+ ppr group INTERFACE_PDES
+ ppr ipv6 6000:1::2/128 prefix 5000::11/128
+ pde ipv6-node 5000::14/128
+ pde ipv6-node 5000::23/128
+ pde ipv6-node 5000::33/128
+ pde ipv6-interface 4000:121::41/64
+ pde ipv6-node 5000::32/128
+ pde ipv6-interface 4000:113::21/64
+ pde ipv6-node 5000::11/128
+ !
+ ppr ipv6 6000:2::2/128 prefix 5000::14/128
+ pde ipv6-node 5000::11/128
+ pde ipv6-node 5000::21/128
+ pde ipv6-node 5000::32/128
+ pde ipv6-interface 4000:121::41/64
+ pde ipv6-node 5000::33/128
+ pde ipv6-interface 4000:116::23/64
+ pde ipv6-node 5000::14/128
+ !
+ !
+ ppr group BROKEN
+ ppr ipv6 6000:1::3/128 prefix 5000::11/128 metric 1500
+ pde ipv6-node 5000::14/128
+ pde ipv6-node 5000::23/128
+ ! non-existing node!!!
+ pde ipv6-node 5000::99/128
+ pde ipv6-node 5000::21/128
+ pde ipv6-node 5000::11/128
+ !
+ ppr ipv6 6000:2::3/128 prefix 5000::14/128 metric 1500
+ pde ipv6-node 5000::11/128
+ pde ipv6-node 5000::21/128
+ ! non-existing node!!!
+ pde ipv6-node 5000::99/128
+ pde ipv6-node 5000::23/128
+ pde ipv6-node 5000::14/128
+ !
+ !
+ router isis 1
+ net 49.0000.0000.0000.0011.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ ppr advertise VOIP
+ ppr advertise INTERFACE_PDES
+ ppr advertise BROKEN
+ !
+
+ rt12:
+ links:
+ eth-rt11:
+ peer: [rt11, eth-rt12]
+ eth-rt13:
+ peer: [rt13, eth-rt12]
+ eth-rt21:
+ peer: [rt21, eth-rt12]
+ eth-rt22:
+ peer: [rt22, eth-rt12]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::12/128
+ ipv6 router isis 1
+ !
+ interface eth-rt11
+ ipv6 address 4000:101::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt13
+ ipv6 address 4000:102::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt21
+ ipv6 address 4000:105::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:106::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0012.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt13:
+ links:
+ eth-rt12:
+ peer: [rt12, eth-rt13]
+ eth-rt14:
+ peer: [rt14, eth-rt13]
+ eth-rt22:
+ peer: [rt22, eth-rt13]
+ eth-rt23:
+ peer: [rt23, eth-rt13]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::13/128
+ ipv6 router isis 1
+ !
+ interface eth-rt12
+ ipv6 address 4000:102::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt14
+ ipv6 address 4000:103::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:107::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:108::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0013.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt14:
+ links:
+ lo-ppr:
+ eth-ce2:
+ peer: [ce2, eth-rt14]
+ eth-rt13:
+ peer: [rt13, eth-rt14]
+ eth-rt23:
+ peer: [rt23, eth-rt14]
+ shell: |
+ # GRE tunnel for preferred packets (PPR)
+ ip -6 tunnel add tun-ppr mode ip6gre remote 6000:1::1 local 6000:2::1 ttl 64
+ ip link set dev tun-ppr up
+ # PBR rules
+ ip -6 rule add from fd00:20:1::/64 to fd00:10:1::/64 iif eth-ce2 lookup 10000
+ ip -6 route add default dev tun-ppr table 10000
+ frr:
+ zebra:
+ staticd:
+ isisd:
+ config: |
+ interface lo-ppr
+ ipv6 address 6000:2::1/128
+ ipv6 address 6000:2::2/128
+ ipv6 address 6000:2::3/128
+ !
+ interface lo
+ ipv6 address 5000::14/128
+ ipv6 router isis 1
+ !
+ interface eth-ce2
+ ipv6 address fd00:20:0::14/64
+ !
+ interface eth-rt13
+ ipv6 address 4000:103::14/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:109::14/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ ipv6 route fd00:20::/32 fd00:20:0::100
+ !
+ router isis 1
+ net 49.0000.0000.0000.0014.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt21:
+ links:
+ eth-rt11:
+ peer: [rt11, eth-rt21]
+ eth-rt12:
+ peer: [rt12, eth-rt21]
+ eth-rt22:
+ peer: [rt22, eth-rt21]
+ eth-rt31:
+ peer: [rt31, eth-rt21]
+ eth-rt32:
+ peer: [rt32, eth-rt21]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::21/128
+ ipv6 router isis 1
+ !
+ interface eth-rt11
+ ipv6 address 4000:104::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt12
+ ipv6 address 4000:105::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:110::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt31
+ ipv6 address 4000:112::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:113::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0021.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt22:
+ links:
+ eth-rt12:
+ peer: [rt12, eth-rt22]
+ eth-rt13:
+ peer: [rt13, eth-rt22]
+ eth-rt21:
+ peer: [rt21, eth-rt22]
+ eth-rt23:
+ peer: [rt23, eth-rt22]
+ eth-rt32:
+ peer: [rt32, eth-rt22]
+ eth-rt33:
+ peer: [rt33, eth-rt22]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::22/128
+ ipv6 router isis 1
+ !
+ interface eth-rt12
+ ipv6 address 4000:106::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt13
+ ipv6 address 4000:107::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt21
+ ipv6 address 4000:110::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:111::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:114::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:115::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0022.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt23:
+ links:
+ eth-rt13:
+ peer: [rt13, eth-rt23]
+ eth-rt14:
+ peer: [rt14, eth-rt23]
+ eth-rt22:
+ peer: [rt22, eth-rt23]
+ eth-rt33:
+ peer: [rt33, eth-rt23]
+ eth-rt34:
+ peer: [rt34, eth-rt23]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::23/128
+ ipv6 router isis 1
+ !
+ interface eth-rt13
+ ipv6 address 4000:108::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt14
+ ipv6 address 4000:109::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:111::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:116::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt34
+ ipv6 address 4000:117::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0023.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt31:
+ links:
+ eth-rt21:
+ peer: [rt21, eth-rt31]
+ eth-rt32:
+ peer: [rt32, eth-rt31]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::31/128
+ ipv6 router isis 1
+ !
+ interface eth-rt21
+ ipv6 address 4000:112::31/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:118::31/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0031.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt32:
+ links:
+ eth-rt21:
+ peer: [rt21, eth-rt32]
+ eth-rt22:
+ peer: [rt22, eth-rt32]
+ eth-rt31:
+ peer: [rt31, eth-rt32]
+ eth-rt33:
+ peer: [rt33, eth-rt32]
+ eth-sw1:
+ peer: [sw1, eth-rt32]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::32/128
+ ipv6 router isis 1
+ !
+ interface eth-rt21
+ ipv6 address 4000:113::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:114::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt31
+ ipv6 address 4000:118::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:119::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-sw1
+ ipv6 address 4000:121::32/64
+ ipv6 router isis 1
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0032.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt33:
+ links:
+ eth-rt22:
+ peer: [rt22, eth-rt33]
+ eth-rt23:
+ peer: [rt23, eth-rt33]
+ eth-rt32:
+ peer: [rt32, eth-rt33]
+ eth-rt34:
+ peer: [rt34, eth-rt33]
+ eth-sw1:
+ peer: [sw1, eth-rt33]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::33/128
+ ipv6 router isis 1
+ !
+ interface eth-rt22
+ ipv6 address 4000:115::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:116::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:119::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt34
+ ipv6 address 4000:120::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-sw1
+ ipv6 address 4000:121::33/64
+ ipv6 router isis 1
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0033.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt34:
+ links:
+ eth-rt23:
+ peer: [rt23, eth-rt34]
+ eth-rt33:
+ peer: [rt33, eth-rt34]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::34/128
+ ipv6 router isis 1
+ !
+ interface eth-rt23
+ ipv6 address 4000:117::34/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:120::34/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0034.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ rt41:
+ links:
+ eth-sw1:
+ peer: [sw1, eth-rt41]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ipv6 address 5000::41/128
+ ipv6 router isis 1
+ !
+ interface eth-sw1
+ ipv6 address 4000:121::41/64
+ ipv6 router isis 1
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0041.00
+ is-type level-1
+ topology ipv6-unicast
+ ppr on
+ !
+
+ switches:
+ sw1:
+ links:
+ eth-rt32:
+ peer: [rt32, eth-sw1]
+ eth-rt33:
+ peer: [rt33, eth-sw1]
+ eth-rt41:
+ peer: [rt41, eth-sw1]
+
+ frr:
+ base-config: |
+ hostname %(node)
+ password 1
+ log file %(logdir)/%(node).log
+ log commands
+ !
+ debug zebra rib
+ debug isis ppr
+ debug isis events
+ debug isis route-events
+ debug isis spf-events
+ debug isis lsp-gen
+ !
+
+YANG
+^^^^
+
+PPR can also be configured using NETCONF, RESTCONF and gRPC based on the
+following YANG models:
+
+- `frr-ppr.yang <https://github.com/opensourcerouting/frr/blob/isisd-ppr/yang/frr-ppr.yang>`__
+- `frr-isisd.yang <https://github.com/opensourcerouting/frr/blob/isisd-ppr/yang/frr-isisd.yang>`__
+
+As an example, here’s R11 configuration in the XML format:
+
+.. code:: xml
+
+ <lib xmlns="http://frrouting.org/yang/interface">
+ <interface>
+ <name>lo-ppr</name>
+ <vrf>default</vrf>
+ </interface>
+ <interface>
+ <name>lo</name>
+ <vrf>default</vrf>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <area-tag>1</area-tag>
+ <ipv6-routing>true</ipv6-routing>
+ </isis>
+ </interface>
+ <interface>
+ <name>eth-ce1</name>
+ <vrf>default</vrf>
+ </interface>
+ <interface>
+ <name>eth-rt12</name>
+ <vrf>default</vrf>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <area-tag>1</area-tag>
+ <ipv6-routing>true</ipv6-routing>
+ <hello>
+ <multiplier>
+ <level-1>3</level-1>
+ <level-2>3</level-2>
+ </multiplier>
+ </hello>
+ <network-type>point-to-point</network-type>
+ </isis>
+ </interface>
+ <interface>
+ <name>eth-rt21</name>
+ <vrf>default</vrf>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <area-tag>1</area-tag>
+ <ipv6-routing>true</ipv6-routing>
+ <hello>
+ <multiplier>
+ <level-1>3</level-1>
+ <level-2>3</level-2>
+ </multiplier>
+ </hello>
+ <network-type>point-to-point</network-type>
+ </isis>
+ </interface>
+ </lib>
+ <ppr xmlns="http://frrouting.org/yang/ppr">
+ <group>
+ <name>VOIP</name>
+ <ipv6>
+ <ppr-id>6000:1::1/128</ppr-id>
+ <ppr-prefix>5000::11/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::22/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <attributes>
+ <ppr-metric>50</ppr-metric>
+ </attributes>
+ </ipv6>
+ <ipv6>
+ <ppr-id>6000:2::1/128</ppr-id>
+ <ppr-prefix>5000::14/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::22/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <attributes>
+ <ppr-metric>50</ppr-metric>
+ </attributes>
+ </ipv6>
+ </group>
+ <group>
+ <name>INTERFACE_PDES</name>
+ <ipv6>
+ <ppr-id>6000:1::2/128</ppr-id>
+ <ppr-prefix>5000::11/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::33/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>4000:121::41/64</pde-id>
+ <pde-id-type>ipv6-interface</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::32/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>4000:113::21/64</pde-id>
+ <pde-id-type>ipv6-interface</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ </ipv6>
+ <ipv6>
+ <ppr-id>6000:2::2/128</ppr-id>
+ <ppr-prefix>5000::14/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::32/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>4000:121::41/64</pde-id>
+ <pde-id-type>ipv6-interface</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::33/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>4000:116::23/64</pde-id>
+ <pde-id-type>ipv6-interface</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ </ipv6>
+ </group>
+ <group>
+ <name>BROKEN</name>
+ <ipv6>
+ <ppr-id>6000:1::3/128</ppr-id>
+ <ppr-prefix>5000::11/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::99/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <attributes>
+ <ppr-metric>1500</ppr-metric>
+ </attributes>
+ </ipv6>
+ <ipv6>
+ <ppr-id>6000:2::3/128</ppr-id>
+ <ppr-prefix>5000::14/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::99/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <attributes>
+ <ppr-metric>1500</ppr-metric>
+ </attributes>
+ </ipv6>
+ </group>
+ </ppr>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <instance>
+ <area-tag>1</area-tag>
+ <area-address>49.0000.0000.0000.0011.00</area-address>
+ <multi-topology>
+ <ipv6-unicast>
+ </ipv6-unicast>
+ </multi-topology>
+ <ppr>
+ <enable>true</enable>
+ <ppr-advertise>
+ <name>VOIP</name>
+ </ppr-advertise>
+ <ppr-advertise>
+ <name>INTERFACE_PDES</name>
+ </ppr-advertise>
+ <ppr-advertise>
+ <name>BROKEN</name>
+ </ppr-advertise>
+ </ppr>
+ </instance>
+ </isis>
+
+Verification - Control Plane
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Verify that R11 has flooded the PPR TLVs correctly to all IS-IS routers:
+
+::
+
+ # show isis database detail 0000.0000.0011
+ Area 1:
+ IS-IS Level-1 link-state database:
+ LSP ID PduLen SeqNumber Chksum Holdtime ATT/P/OL
+ debian.00-00 1233 0x00000009 0x7bd4 683 0/0/0
+ Protocols Supported: IPv4, IPv6
+ Area Address: 49.0000
+ MT Router Info: ipv4-unicast
+ MT Router Info: ipv6-unicast
+ Hostname: debian
+ MT Reachability: 0000.0000.0012.00 (Metric: 10) ipv6-unicast
+ MT Reachability: 0000.0000.0021.00 (Metric: 10) ipv6-unicast
+ MT IPv6 Reachability: 5000::11/128 (Metric: 10) ipv6-unicast
+ MT IPv6 Reachability: 4000:101::/64 (Metric: 10) ipv6-unicast
+ MT IPv6 Reachability: 4000:104::/64 (Metric: 10) ipv6-unicast
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::11/128
+ ID: 6000:1::3/128 (Native IPv6)
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::99/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 1500
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::14/128
+ ID: 6000:2::3/128 (Native IPv6)
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::99/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 1500
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::11/128
+ ID: 6000:1::2/128 (Native IPv6)
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::33/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 4000:121::41 (IPv6 Interface Address), L:0 N:0 E:0
+ PDE: 5000::32/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 4000:113::21 (IPv6 Interface Address), L:0 N:0 E:0
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 0
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::14/128
+ ID: 6000:2::2/128 (Native IPv6)
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::32/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 4000:121::41 (IPv6 Interface Address), L:0 N:0 E:0
+ PDE: 5000::33/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 4000:116::23 (IPv6 Interface Address), L:0 N:0 E:0
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 0
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::11/128
+ ID: 6000:1::1/128 (Native IPv6)
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::22/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 50
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::14/128
+ ID: 6000:2::1/128 (Native IPv6)
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::22/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 50
+
+The PPR TLVs can also be inspected using a modified version of
+Wireshark, as shown below:
+
+.. figure:: https://user-images.githubusercontent.com/931662/61582441-9551e500-ab01-11e9-8f6f-400ee3fba927.png
+ :alt: s2
+
+ s2
+
+Using the ``show isis ppr`` command, verify that all routers installed
+the PPR-IDs for the paths they are part of. Example:
+
+Router RT11
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ --------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Tail-End - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Tail-End - -
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Tail-End - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Head-End Up 00:45:41
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Head-End Up 00:45:41
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Head-End Up 00:45:41
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:2::1/128 [115/50] via fe80::c2a:54ff:fe39:bff7, eth-rt21, 00:01:33
+ I>* 6000:2::2/128 [115/0] via fe80::c2a:54ff:fe39:bff7, eth-rt21, 00:01:33
+ I>* 6000:2::3/128 [115/1500] via fe80::c2a:54ff:fe39:bff7, eth-rt21, 00:01:33
+
+Router RT12
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Off-Path - -
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT13
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Off-Path - -
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT14
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ --------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Head-End Up 00:45:45
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Head-End Up 00:45:45
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Head-End Up 00:45:45
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Tail-End - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Tail-End - -
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Tail-End - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::58ea:78ff:fe00:92c1, eth-rt23, 00:01:36
+ I>* 6000:1::2/128 [115/0] via fe80::58ea:78ff:fe00:92c1, eth-rt23, 00:01:36
+ I>* 6000:1::3/128 [115/1500] via fe80::58ea:78ff:fe00:92c1, eth-rt23, 00:01:36
+
+Router RT21
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Mid-Point Up 00:45:46
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Mid-Point Up 00:45:46
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Mid-Point Up 00:45:46
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Mid-Point Up 00:45:46
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Mid-Point Up 00:45:46
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Mid-Point Down -
+
+ # show isis ppr id ipv6 6000:2::3/128 detail
+ Area 1:
+ PPR-ID: 6000:2::3/128 (Native IPv6)
+ PPR-Prefix: 5000::14/128
+ PDEs:
+ 5000::11/128 (IPv6 Node Address)
+ 5000::21/128 (IPv6 Node Address) [LOCAL]
+ 5000::99/128 (IPv6 Node Address) [NEXT]
+ 5000::23/128 (IPv6 Node Address)
+ 5000::14/128 (IPv6 Node Address)
+ Attributes:
+ Metric: 1500
+ Position: Mid-Point
+ Originator: 0000.0000.0011
+ Level: L1
+ Algorithm: 1
+ MT-ID: ipv4-unicast
+ Status: Down: PDE is unreachable
+ Last change: 00:00:37
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::142e:79ff:feeb:cffc, eth-rt11, 00:01:38
+ I>* 6000:1::2/128 [115/0] via fe80::142e:79ff:feeb:cffc, eth-rt11, 00:01:38
+ I>* 6000:1::3/128 [115/1500] via fe80::142e:79ff:feeb:cffc, eth-rt11, 00:01:38
+ I>* 6000:2::1/128 [115/50] via fe80::c88e:7fff:fe5f:a08d, eth-rt22, 00:01:38
+ I>* 6000:2::2/128 [115/0] via fe80::8b2:9eff:fe98:f66a, eth-rt32, 00:01:38
+
+Router RT22
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Mid-Point Up 00:45:47
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Off-Path - -
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Mid-Point Up 00:45:47
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::2cb5:edff:fe60:29b1, eth-rt21, 00:01:38
+ I>* 6000:2::1/128 [115/50] via fe80::e8d9:63ff:fea3:177b, eth-rt23, 00:01:38
+
+Router RT23
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Mid-Point Up 00:45:49
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Mid-Point Up 00:45:49
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Mid-Point Down -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Mid-Point Up 00:45:49
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Mid-Point Up 00:45:49
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Mid-Point Up 00:45:49
+
+ # show isis ppr id ipv6 6000:1::3/128 detail
+ Area 1:
+ PPR-ID: 6000:1::3/128 (Native IPv6)
+ PPR-Prefix: 5000::11/128
+ PDEs:
+ 5000::14/128 (IPv6 Node Address)
+ 5000::23/128 (IPv6 Node Address) [LOCAL]
+ 5000::99/128 (IPv6 Node Address) [NEXT]
+ 5000::21/128 (IPv6 Node Address)
+ 5000::11/128 (IPv6 Node Address)
+ Attributes:
+ Metric: 1500
+ Position: Mid-Point
+ Originator: 0000.0000.0011
+ Level: L1
+ Algorithm: 1
+ MT-ID: ipv4-unicast
+ Status: Down: PDE is unreachable
+ Last change: 00:02:50
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::d09f:1bff:fe31:e9c9, eth-rt22, 00:01:40
+ I>* 6000:1::2/128 [115/0] via fe80::c0c3:b3ff:fe9f:b5d3, eth-rt33, 00:01:40
+ I>* 6000:2::1/128 [115/50] via fe80::f40a:66ff:fefc:5c32, eth-rt14, 00:01:40
+ I>* 6000:2::2/128 [115/0] via fe80::f40a:66ff:fefc:5c32, eth-rt14, 00:01:40
+ I>* 6000:2::3/128 [115/1500] via fe80::f40a:66ff:fefc:5c32, eth-rt14, 00:01:40
+
+Router RT31
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Off-Path - -
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT32
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Mid-Point Up 00:45:51
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Mid-Point Up 00:45:51
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::2/128 [115/0] via 4000:113::21, eth-rt21, 00:01:42
+ I>* 6000:2::2/128 [115/0] via 4000:121::41, eth-sw1, 00:01:42
+
+Router RT33
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Mid-Point Up 00:45:52
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Mid-Point Up 00:45:52
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::2/128 [115/0] via 4000:121::41, eth-sw1, 00:01:43
+ I>* 6000:2::2/128 [115/0] via 4000:116::23, eth-rt23, 00:01:43
+
+Router RT34
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Off-Path - -
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT41
+'''''''''''
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:1::2/128 (Native IPv6) 5000::11/128 0 Mid-Point Up 00:45:55
+ 1 L1 6000:1::3/128 (Native IPv6) 5000::11/128 1500 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+ 1 L1 6000:2::2/128 (Native IPv6) 5000::14/128 0 Mid-Point Up 00:45:55
+ 1 L1 6000:2::3/128 (Native IPv6) 5000::14/128 1500 Off-Path - -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::2/128 [115/0] via fe80::b4b9:60ff:feee:3c73, eth-sw1, 00:01:46
+ I>* 6000:2::2/128 [115/0] via fe80::bc2a:d9ff:fe65:97f2, eth-sw1, 00:01:46
+
+As can be seen in the output of ``show isis ppr id ipv6 ... detail``,
+routers R21 and R23 couldn’t install the third PPR path because of an
+unreachable PDE (a configuration error).
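+
+For example, the misconfigured path can be inspected directly (a
+hypothetical invocation; the exact PPR-ID argument is an assumption
+based on the command form referenced above):
+
+::
+
+ # show isis ppr id ipv6 6000:2::3/128 detail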
+
+Verification - Forwarding Plane
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On Router R11, use the ``traceroute`` tool to ensure that the PPR paths
+were installed correctly in the network:
+
+::
+
+ root@rt11:~# traceroute 6000:2::1
+ traceroute to 6000:2::1 (6000:2::1), 30 hops max, 80 byte packets
+ 1 4000:104::21 (4000:104::21) 0.612 ms 0.221 ms 0.241 ms
+ 2 4000:110::22 (4000:110::22) 0.257 ms 0.113 ms 0.105 ms
+ 3 4000:111::23 (4000:111::23) 0.257 ms 0.151 ms 0.098 ms
+ 4 6000:2::1 (6000:2::1) 0.346 ms 0.139 ms 0.100 ms
+ root@rt11:~#
+ root@rt11:~# traceroute 6000:2::2
+ traceroute to 6000:2::2 (6000:2::2), 30 hops max, 80 byte packets
+ 1 4000:104::21 (4000:104::21) 4.383 ms 4.148 ms 0.044 ms
+ 2 4000:113::32 (4000:113::32) 0.272 ms 0.065 ms 0.064 ms
+ 3 4000:121::41 (4000:121::41) 0.263 ms 0.101 ms 0.086 ms
+ 4 4000:115::33 (4000:115::33) 0.351 ms 4000:119::33 (4000:119::33) 0.249 ms 4000:115::33 (4000:115::33) 0.153 ms
+ 5 4000:111::23 (4000:111::23) 0.232 ms 0.293 ms 0.131 ms
+ 6 6000:2::2 (6000:2::2) 0.184 ms 0.212 ms 0.140 ms
+ root@rt11:~#
+ root@rt11:~# traceroute 6000:2::3
+ traceroute to 6000:2::3 (6000:2::3), 30 hops max, 80 byte packets
+ 1 4000:104::21 (4000:104::21) 1.537 ms !N 1.347 ms !N 1.075 ms !N
+
+The failure on the third traceroute is expected since the 6000:2::3
+PPR-ID is misconfigured.
+
+Now ping Host 3 from Host 1 and use ``tcpdump`` or Wireshark to verify
+that the ICMP packets are being tunneled using GRE and following the
+{R11 - R21 - R22 - R23 - R14} path. Here’s a Wireshark capture between
+R11 and R21:
+
+.. figure:: https://user-images.githubusercontent.com/931662/61582398-d4cc0180-ab00-11e9-83a8-d219f98010b9.png
+ :alt: s1
+
+ s1
+
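+For reference, the tunneled packets can also be isolated with a capture
+filter (a minimal sketch, assuming the capture is taken on R11’s
+eth-rt21 interface; IPv6 next-header 47 is GRE):
+
+::
+
+ root@rt11:~# tcpdump -ni eth-rt21 ip6 proto 47
+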
+Using ``traceroute``, it’s also possible to see that the ICMP packets
+are being tunneled through the IS-IS network:
+
+::
+
+ root@host1:~# traceroute fd00:20:1::1 -s fd00:10:1::1
+ traceroute to fd00:20:1::1 (fd00:20:1::1), 30 hops max, 80 byte packets
+ 1 fd00:10:1::100 (fd00:10:1::100) 0.354 ms 0.092 ms 0.031 ms
+ 2 fd00:10::11 (fd00:10::11) 0.125 ms 0.022 ms 0.026 ms
+ 3 * * *
+ 4 * * *
+ 5 fd00:20:1::1 (fd00:20:1::1) 0.235 ms 0.106 ms 0.091 ms
diff --git a/doc/developer/northbound/ppr-mpls-basic-test-topology.rst b/doc/developer/northbound/ppr-mpls-basic-test-topology.rst
new file mode 100644
index 0000000..cedb795
--- /dev/null
+++ b/doc/developer/northbound/ppr-mpls-basic-test-topology.rst
@@ -0,0 +1,1991 @@
+Table of Contents
+~~~~~~~~~~~~~~~~~
+
+- `Software <#software>`__
+- `Topology <#topology>`__
+- `Configuration <#configuration>`__
+
+ - `CLI <#configuration-cli>`__
+ - `YANG <#configuration-yang>`__
+
+- `Verification - Control Plane <#verification-cplane>`__
+- `Verification - Forwarding Plane <#verification-fplane>`__
+
+Software
+~~~~~~~~
+
+The FRR PPR implementation for IS-IS is available here:
+https://github.com/opensourcerouting/frr/tree/isisd-ppr-sr
+
+Topology
+~~~~~~~~
+
+In this topology we have an IS-IS network consisting of 12 routers. CE1
+and CE2 are the customer edge routers, connected to R11 and R14,
+respectively. Three hosts are connected to the CEs using only static
+routes.
+
+Router R11 advertises 6 PPR TLVs:
+
+- **IPv6 prefixes 6000:1::1/128 and 6000:2::1/128:** {R11 - R21 - R22 -
+  R23 - R14} (IPv6 Node Addresses).
+- **MPLS SR Prefix-SIDs 500 and 501:** {R11 - R21 - R22 - R23 - R14}
+  (SR Prefix-SIDs).
+- **MPLS SR Prefix-SIDs 502 and 503:** {R11 - R21 - R31 - R32 - R41 -
+  R33 - R34 - R23 - R14} (SR Prefix-SIDs).
+
+PBR rules are configured on R11 and R14 to route the traffic between
+Host 1 and Host 3 using the first PPR tunnel, whereas all other traffic
+between CE1 and CE2 uses the second PPR tunnel.
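+
+On Linux, these PBR rules boil down to an ``ip -6 rule`` plus a
+dedicated routing table; the exact commands appear in R11’s ``shell``
+section below:
+
+::
+
+ ip -6 rule add from fd00:10:1::/64 to fd00:20:1::/64 iif eth-ce1 lookup 10000
+ ip -6 route add default dev tun-ppr table 10000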
+
+Additional information:
+
+- Addresses in the 4000::/16 range refer to interface addresses, where
+  the last hextet corresponds to the node ID.
+- Addresses in the 5000::/16 range refer to loopback addresses, where
+  the last hextet corresponds to the node ID.
+- Addresses in the 6000::/16 range refer to PPR-ID addresses.
+
+::
+
+ +-------+ +-------+ +-------+
+ | | | | | |
+ | HOST1 | | HOST2 | | HOST3 |
+ | | | | | |
+ +---+---+ +---+---+ +---+---+
+ | | |
+ |fd00:10:1::/64 | |
+ +-----+ +------+ fd00:20:1::/64|
+ | |fd00:10:2::/64 |
+ | | |
+ +-+--+--+ +---+---+
+ | | | |
+ | CE1 | | CE2 |
+ | | | |
+ +---+---+ +---+---+
+ | |
+ | |
+ |fd00:10:0::/64 fd00:20:0::/64|
+ | |
+ | |
+ +---+---+ +-------+ +-------+ +---+---+
+ | |4000:101::/64| |4000:102::/64| |4000:103::/64| |
+ | R11 +-------------+ R12 +-------------+ R13 +-------------+ R14 |
+ | | | | | | | |
+ +---+---+ +--+-+--+ +--+-+--+ +---+---+
+ | | | | | |
+ |4000:104::/64 | |4000:106::/64 | |4000:108::/64 |
+ +---------+ +--------+ +--------+ +--------+ +--------+ +---------+
+ | |4000:105::/64 | |4000:107::/64 | |4000:109::/64
+ | | | | | |
+ +--+-+--+ +--+-+--+ +--+-+--+
+ | |4000:110::/64| |4000:111::/64| |
+ | R21 +-------------+ R22 +-------------+ R23 |
+ | | | | | |
+ +--+-+--+ +--+-+--+ +--+-+--+
+ | | | | | |
+ | |4000:113::/64 | |4000:115::/64 | |4000:117::/64
+ +---------+ +--------+ +--------+ +--------+ +--------+ +---------+
+ |4000:112::/64 | |4000:114::/64 | |4000:116::/64 |
+ | | | | | |
+ +---+---+ +--+-+--+ +--+-+--+ +---+---+
+ | |4000:118::/64| |4000:119::/64| |4000:120::/64| |
+ | R31 +-------------+ R32 +-------------+ R33 +-------------+ R34 |
+ | | | | | | | |
+ +-------+ +---+---+ +---+---+ +-------+
+ | |
+ |4000:121::/64 |
+ +----------+----------+
+ |
+ |
+ +---+---+
+ | |
+ | R41 |
+ | |
+ +-------+
+
+Configuration
+~~~~~~~~~~~~~
+
+PPR TLV processing needs to be enabled on all IS-IS routers using the
+``ppr on`` command. All PPR TLVs are advertised by router R11.
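+
+In vtysh terms this amounts to the following minimal sketch, taken from
+the per-router configurations shown below:
+
+::
+
+ router isis 1
+  ppr on
+  ! on R11 only, additionally advertise the configured PPR groups:
+  ppr advertise PPR_IPV6
+ !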
+
+CLI configuration
+^^^^^^^^^^^^^^^^^
+
+.. code:: yaml
+
+ ---
+
+ routers:
+
+ host1:
+ links:
+ eth-ce1:
+ peer: [ce1, eth-host1]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-ce1
+ ipv6 address fd00:10:1::1/64
+ !
+ ipv6 route ::/0 fd00:10:1::100
+
+ host2:
+ links:
+ eth-ce1:
+ peer: [ce1, eth-host2]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-ce1
+ ipv6 address fd00:10:2::1/64
+ !
+ ipv6 route ::/0 fd00:10:2::100
+
+ host3:
+ links:
+ eth-ce2:
+ peer: [ce2, eth-host3]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-ce2
+ ipv6 address fd00:20:1::1/64
+ !
+ ipv6 route ::/0 fd00:20:1::100
+
+ ce1:
+ links:
+ eth-host1:
+ peer: [host1, eth-ce1]
+ eth-host2:
+ peer: [host2, eth-ce1]
+ eth-rt11:
+ peer: [rt11, eth-ce1]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-host1
+ ipv6 address fd00:10:1::100/64
+ !
+ interface eth-host2
+ ipv6 address fd00:10:2::100/64
+ !
+ interface eth-rt11
+ ipv6 address fd00:10:0::100/64
+ !
+ ipv6 route ::/0 fd00:10:0::11 label 16501
+
+ ce2:
+ links:
+ eth-host3:
+ peer: [host3, eth-ce2]
+ eth-rt14:
+ peer: [rt14, eth-ce2]
+ frr:
+ zebra:
+ staticd:
+ config: |
+ interface eth-host3
+ ipv6 address fd00:20:1::100/64
+ !
+ interface eth-rt14
+ ipv6 address fd00:20:0::100/64
+ !
+ ipv6 route ::/0 fd00:20:0::14 label 16500
+
+ rt11:
+ links:
+ lo:
+ mpls: yes
+ lo-ppr:
+ eth-ce1:
+ peer: [ce1, eth-rt11]
+ mpls: yes
+ eth-rt12:
+ peer: [rt12, eth-rt11]
+ mpls: yes
+ eth-rt21:
+ peer: [rt21, eth-rt11]
+ mpls: yes
+ shell: |
+ # GRE tunnel for preferred packets (PPR)
+ ip -6 tunnel add tun-ppr mode ip6gre remote 6000:2::1 local 6000:1::1 ttl 64
+ ip link set dev tun-ppr up
+ # PBR rules
+ ip -6 rule add from fd00:10:1::/64 to fd00:20:1::/64 iif eth-ce1 lookup 10000
+ ip -6 route add default dev tun-ppr table 10000
+ frr:
+ zebra:
+ staticd:
+ isisd:
+ config: |
+ interface lo-ppr
+ ipv6 address 6000:1::1/128
+ !
+ interface lo
+ ip address 10.0.0.11/32
+ ipv6 address 5000::11/128
+ ipv6 router isis 1
+ !
+ interface eth-ce1
+ ipv6 address fd00:10:0::11/64
+ !
+ interface eth-rt12
+ ipv6 address 4000:101::11/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt21
+ ipv6 address 4000:104::11/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ ipv6 route fd00:10::/32 fd00:10:0::100
+ !
+ ppr group PPR_IPV6
+ ppr ipv6 6000:1::1/128 prefix 5000::11/128 metric 50
+ pde ipv6-node 5000::14/128
+ pde ipv6-node 5000::23/128
+ pde ipv6-node 5000::22/128
+ pde ipv6-node 5000::21/128
+ pde ipv6-node 5000::11/128
+ !
+ ppr ipv6 6000:2::1/128 prefix 5000::14/128 metric 50
+ pde ipv6-node 5000::11/128
+ pde ipv6-node 5000::21/128
+ pde ipv6-node 5000::22/128
+ pde ipv6-node 5000::23/128
+ pde ipv6-node 5000::14/128
+ !
+ !
+ ppr group PPR_MPLS_1
+ ppr mpls 500 prefix 5000::11/128
+ pde prefix-sid 14
+ pde prefix-sid 23
+ pde prefix-sid 22
+ pde prefix-sid 21
+ pde prefix-sid 11
+ !
+ ppr mpls 501 prefix 5000::14/128
+ pde prefix-sid 11
+ pde prefix-sid 21
+ pde prefix-sid 22
+ pde prefix-sid 23
+ pde prefix-sid 14
+ !
+ !
+ ppr group PPR_MPLS_2
+ ppr mpls 502 prefix 5000::11/128
+ pde prefix-sid 14
+ pde prefix-sid 23
+ pde prefix-sid 34
+ pde prefix-sid 33
+ pde prefix-sid 41
+ pde prefix-sid 32
+ pde prefix-sid 31
+ pde prefix-sid 21
+ pde prefix-sid 11
+ !
+ ppr mpls 503 prefix 5000::14/128
+ pde prefix-sid 11
+ pde prefix-sid 21
+ pde prefix-sid 31
+ pde prefix-sid 32
+ pde prefix-sid 41
+ pde prefix-sid 33
+ pde prefix-sid 34
+ pde prefix-sid 23
+ pde prefix-sid 14
+ !
+ !
+ router isis 1
+ net 49.0000.0000.0000.0011.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::11/128 index 11 no-php-flag
+ ppr on
+ ppr advertise PPR_IPV6
+ ppr advertise PPR_MPLS_1
+ ppr advertise PPR_MPLS_2
+ !
+
+ rt12:
+ links:
+ lo:
+ mpls: yes
+ eth-rt11:
+ peer: [rt11, eth-rt12]
+ mpls: yes
+ eth-rt13:
+ peer: [rt13, eth-rt12]
+ mpls: yes
+ eth-rt21:
+ peer: [rt21, eth-rt12]
+ mpls: yes
+ eth-rt22:
+ peer: [rt22, eth-rt12]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.12/32
+ ipv6 address 5000::12/128
+ ipv6 router isis 1
+ !
+ interface eth-rt11
+ ipv6 address 4000:101::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt13
+ ipv6 address 4000:102::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt21
+ ipv6 address 4000:105::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:106::12/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0012.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::12/128 index 12 no-php-flag
+ ppr on
+ !
+
+ rt13:
+ links:
+ lo:
+ mpls: yes
+ eth-rt12:
+ peer: [rt12, eth-rt13]
+ mpls: yes
+ eth-rt14:
+ peer: [rt14, eth-rt13]
+ mpls: yes
+ eth-rt22:
+ peer: [rt22, eth-rt13]
+ mpls: yes
+ eth-rt23:
+ peer: [rt23, eth-rt13]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.13/32
+ ipv6 address 5000::13/128
+ ipv6 router isis 1
+ !
+ interface eth-rt12
+ ipv6 address 4000:102::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt14
+ ipv6 address 4000:103::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:107::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:108::13/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0013.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::13/128 index 13 no-php-flag
+ ppr on
+ !
+
+ rt14:
+ links:
+ lo:
+ mpls: yes
+ lo-ppr:
+ eth-ce2:
+ peer: [ce2, eth-rt14]
+ mpls: yes
+ eth-rt13:
+ peer: [rt13, eth-rt14]
+ mpls: yes
+ eth-rt23:
+ peer: [rt23, eth-rt14]
+ mpls: yes
+ shell: |
+ # GRE tunnel for preferred packets (PPR)
+ ip -6 tunnel add tun-ppr mode ip6gre remote 6000:1::1 local 6000:2::1 ttl 64
+ ip link set dev tun-ppr up
+ # PBR rules
+ ip -6 rule add from fd00:20:1::/64 to fd00:10:1::/64 iif eth-ce2 lookup 10000
+ ip -6 route add default dev tun-ppr table 10000
+ frr:
+ zebra:
+ staticd:
+ isisd:
+ config: |
+ interface lo-ppr
+ ipv6 address 6000:2::1/128
+ !
+ interface lo
+ ip address 10.0.0.14/32
+ ipv6 address 5000::14/128
+ ipv6 router isis 1
+ !
+ interface eth-ce2
+ ipv6 address fd00:20:0::14/64
+ !
+ interface eth-rt13
+ ipv6 address 4000:103::14/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:109::14/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ ipv6 route fd00:20::/32 fd00:20:0::100
+ !
+ router isis 1
+ net 49.0000.0000.0000.0014.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::14/128 index 14 no-php-flag
+ ppr on
+ !
+
+ rt21:
+ links:
+ lo:
+ mpls: yes
+ eth-rt11:
+ peer: [rt11, eth-rt21]
+ mpls: yes
+ eth-rt12:
+ peer: [rt12, eth-rt21]
+ mpls: yes
+ eth-rt22:
+ peer: [rt22, eth-rt21]
+ mpls: yes
+ eth-rt31:
+ peer: [rt31, eth-rt21]
+ mpls: yes
+ eth-rt32:
+ peer: [rt32, eth-rt21]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.21/32
+ ipv6 address 5000::21/128
+ ipv6 router isis 1
+ !
+ interface eth-rt11
+ ipv6 address 4000:104::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt12
+ ipv6 address 4000:105::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:110::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt31
+ ipv6 address 4000:112::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:113::21/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0021.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::21/128 index 21 no-php-flag
+ ppr on
+ !
+
+ rt22:
+ links:
+ lo:
+ mpls: yes
+ eth-rt12:
+ peer: [rt12, eth-rt22]
+ mpls: yes
+ eth-rt13:
+ peer: [rt13, eth-rt22]
+ mpls: yes
+ eth-rt21:
+ peer: [rt21, eth-rt22]
+ mpls: yes
+ eth-rt23:
+ peer: [rt23, eth-rt22]
+ mpls: yes
+ eth-rt32:
+ peer: [rt32, eth-rt22]
+ mpls: yes
+ eth-rt33:
+ mpls: yes
+ peer: [rt33, eth-rt22]
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.22/32
+ ipv6 address 5000::22/128
+ ipv6 router isis 1
+ !
+ interface eth-rt12
+ ipv6 address 4000:106::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt13
+ ipv6 address 4000:107::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt21
+ ipv6 address 4000:110::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:111::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:114::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:115::22/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0022.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::22/128 index 22 no-php-flag
+ ppr on
+ !
+
+ rt23:
+ links:
+ lo:
+ mpls: yes
+ eth-rt13:
+ peer: [rt13, eth-rt23]
+ mpls: yes
+ eth-rt14:
+ peer: [rt14, eth-rt23]
+ mpls: yes
+ eth-rt22:
+ peer: [rt22, eth-rt23]
+ mpls: yes
+ eth-rt33:
+ peer: [rt33, eth-rt23]
+ mpls: yes
+ eth-rt34:
+ peer: [rt34, eth-rt23]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.23/32
+ ipv6 address 5000::23/128
+ ipv6 router isis 1
+ !
+ interface eth-rt13
+ ipv6 address 4000:108::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt14
+ ipv6 address 4000:109::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:111::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:116::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt34
+ ipv6 address 4000:117::23/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0023.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing global-block 20000 27999
+ segment-routing prefix 5000::23/128 index 23 no-php-flag
+ ppr on
+ !
+
+ rt31:
+ links:
+ lo:
+ mpls: yes
+ eth-rt21:
+ peer: [rt21, eth-rt31]
+ mpls: yes
+ eth-rt32:
+ peer: [rt32, eth-rt31]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.31/32
+ ipv6 address 5000::31/128
+ ipv6 router isis 1
+ !
+ interface eth-rt21
+ ipv6 address 4000:112::31/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:118::31/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0031.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::31/128 index 31 no-php-flag
+ ppr on
+ !
+
+ rt32:
+ links:
+ lo:
+ mpls: yes
+ eth-rt21:
+ peer: [rt21, eth-rt32]
+ mpls: yes
+ eth-rt22:
+ peer: [rt22, eth-rt32]
+ mpls: yes
+ eth-rt31:
+ peer: [rt31, eth-rt32]
+ mpls: yes
+ eth-rt33:
+ peer: [rt33, eth-rt32]
+ mpls: yes
+ eth-sw1:
+ peer: [sw1, eth-rt32]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.32/32
+ ipv6 address 5000::32/128
+ ipv6 router isis 1
+ !
+ interface eth-rt21
+ ipv6 address 4000:113::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt22
+ ipv6 address 4000:114::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt31
+ ipv6 address 4000:118::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:119::32/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-sw1
+ ipv6 address 4000:121::32/64
+ ipv6 router isis 1
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0032.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::32/128 index 32 no-php-flag
+ ppr on
+ !
+
+ rt33:
+ links:
+ lo:
+ mpls: yes
+ eth-rt22:
+ peer: [rt22, eth-rt33]
+ mpls: yes
+ eth-rt23:
+ peer: [rt23, eth-rt33]
+ mpls: yes
+ eth-rt32:
+ peer: [rt32, eth-rt33]
+ mpls: yes
+ eth-rt34:
+ peer: [rt34, eth-rt33]
+ mpls: yes
+ eth-sw1:
+ peer: [sw1, eth-rt33]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.33/32
+ ipv6 address 5000::33/128
+ ipv6 router isis 1
+ !
+ interface eth-rt22
+ ipv6 address 4000:115::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt23
+ ipv6 address 4000:116::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt32
+ ipv6 address 4000:119::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt34
+ ipv6 address 4000:120::33/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-sw1
+ ipv6 address 4000:121::33/64
+ ipv6 router isis 1
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0033.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::33/128 index 33 no-php-flag
+ ppr on
+ !
+
+ rt34:
+ links:
+ lo:
+ mpls: yes
+ eth-rt23:
+ peer: [rt23, eth-rt34]
+ mpls: yes
+ eth-rt33:
+ peer: [rt33, eth-rt34]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.34/32
+ ipv6 address 5000::34/128
+ ipv6 router isis 1
+ !
+ interface eth-rt23
+ ipv6 address 4000:117::34/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ interface eth-rt33
+ ipv6 address 4000:120::34/64
+ ipv6 router isis 1
+ isis network point-to-point
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0034.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::34/128 index 34 no-php-flag
+ ppr on
+ !
+
+ rt41:
+ links:
+ lo:
+ mpls: yes
+ eth-sw1:
+ peer: [sw1, eth-rt41]
+ mpls: yes
+ frr:
+ zebra:
+ isisd:
+ config: |
+ interface lo
+ ip address 10.0.0.41/32
+ ipv6 address 5000::41/128
+ ipv6 router isis 1
+ !
+ interface eth-sw1
+ ipv6 address 4000:121::41/64
+ ipv6 router isis 1
+ isis hello-multiplier 3
+ !
+ router isis 1
+ net 49.0000.0000.0000.0041.00
+ is-type level-1
+ topology ipv6-unicast
+ segment-routing on
+ segment-routing prefix 5000::41/128 index 41 no-php-flag
+ ppr on
+ !
+
+ switches:
+ sw1:
+ links:
+ eth-rt32:
+ peer: [rt32, eth-sw1]
+ eth-rt33:
+ peer: [rt33, eth-sw1]
+ eth-rt41:
+ peer: [rt41, eth-sw1]
+
+ frr:
+ #valgrind: yes
+ base-config: |
+ hostname %(node)
+ password 1
+ log file %(logdir)/%(node).log
+ log commands
+ !
+ debug zebra rib
+ debug isis sr-events
+ debug isis ppr
+ debug isis events
+ debug isis route-events
+ debug isis spf-events
+ debug isis lsp-gen
+ !
+
+..
+
+ NOTE: it’s essential to enable MPLS processing on the loopback
+ interfaces; otherwise the tail-end routers of the PPR-MPLS tunnels
+ will drop the labeled packets they receive.
+
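+On a plain Linux system this corresponds to sysctls along these lines
+(a sketch; in the topology file above, the ``mpls: yes`` knob on the
+``lo`` links is assumed to take care of this):
+
+::
+
+ # load the MPLS forwarding module and size the label table
+ modprobe mpls_router
+ sysctl -w net.mpls.platform_labels=100000
+ # accept labeled packets arriving on the loopback interface
+ sysctl -w net.mpls.conf.lo.input=1
+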
+YANG
+^^^^
+
+PPR can also be configured using NETCONF, RESTCONF and gRPC based on the
+following YANG models:
+
+- `frr-ppr.yang <https://github.com/opensourcerouting/frr/blob/isisd-ppr/yang/frr-ppr.yang>`__
+- `frr-isisd.yang <https://github.com/opensourcerouting/frr/blob/isisd-ppr/yang/frr-isisd.yang>`__
+
+As an example, here’s R11’s configuration in the XML format:
+
+.. code:: xml
+
+ <lib xmlns="http://frrouting.org/yang/interface">
+ <interface>
+ <name>lo-ppr</name>
+ <vrf>default</vrf>
+ </interface>
+ <interface>
+ <name>lo</name>
+ <vrf>default</vrf>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <area-tag>1</area-tag>
+ <ipv6-routing>true</ipv6-routing>
+ </isis>
+ </interface>
+ <interface>
+ <name>eth-ce1</name>
+ <vrf>default</vrf>
+ </interface>
+ <interface>
+ <name>eth-rt12</name>
+ <vrf>default</vrf>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <area-tag>1</area-tag>
+ <ipv6-routing>true</ipv6-routing>
+ <hello>
+ <multiplier>
+ <level-1>3</level-1>
+ <level-2>3</level-2>
+ </multiplier>
+ </hello>
+ <network-type>point-to-point</network-type>
+ </isis>
+ </interface>
+ <interface>
+ <name>eth-rt21</name>
+ <vrf>default</vrf>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <area-tag>1</area-tag>
+ <ipv6-routing>true</ipv6-routing>
+ <hello>
+ <multiplier>
+ <level-1>3</level-1>
+ <level-2>3</level-2>
+ </multiplier>
+ </hello>
+ <network-type>point-to-point</network-type>
+ </isis>
+ </interface>
+ </lib>
+ <ppr xmlns="http://frrouting.org/yang/ppr">
+ <group>
+ <name>PPR_IPV6</name>
+ <ipv6>
+ <ppr-id>6000:1::1/128</ppr-id>
+ <ppr-prefix>5000::11/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::22/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <attributes>
+ <ppr-metric>50</ppr-metric>
+ </attributes>
+ </ipv6>
+ <ipv6>
+ <ppr-id>6000:2::1/128</ppr-id>
+ <ppr-prefix>5000::14/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>5000::11/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::21/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::22/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::23/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>5000::14/128</pde-id>
+ <pde-id-type>ipv6-node</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <attributes>
+ <ppr-metric>50</ppr-metric>
+ </attributes>
+ </ipv6>
+ </group>
+ <group>
+ <name>PPR_MPLS_1</name>
+ <mpls>
+ <ppr-id>500</ppr-id>
+ <ppr-prefix>5000::11/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>14</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>23</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>22</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>21</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>11</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ </mpls>
+ <mpls>
+ <ppr-id>501</ppr-id>
+ <ppr-prefix>5000::14/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>11</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>21</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>22</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>23</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>14</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ </mpls>
+ </group>
+ <group>
+ <name>PPR_MPLS_2</name>
+ <mpls>
+ <ppr-id>502</ppr-id>
+ <ppr-prefix>5000::11/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>14</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>23</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>34</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>33</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>41</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>32</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>31</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>21</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>11</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ </mpls>
+ <mpls>
+ <ppr-id>503</ppr-id>
+ <ppr-prefix>5000::14/128</ppr-prefix>
+ <ppr-pde>
+ <pde-id>11</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>21</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>31</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>32</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>41</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>33</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>34</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>23</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ <ppr-pde>
+ <pde-id>14</pde-id>
+ <pde-id-type>prefix-sid</pde-id-type>
+ <pde-type>topological</pde-type>
+ </ppr-pde>
+ </mpls>
+ </group>
+ </ppr>
+ <isis xmlns="http://frrouting.org/yang/isisd">
+ <instance>
+ <area-tag>1</area-tag>
+ <area-address>49.0000.0000.0000.0011.00</area-address>
+ <multi-topology>
+ <ipv6-unicast>
+ </ipv6-unicast>
+ </multi-topology>
+ <segment-routing>
+ <enabled>true</enabled>
+ <prefix-sid-map>
+ <prefix-sid>
+ <prefix>5000::11/128</prefix>
+ <sid-value>11</sid-value>
+ <last-hop-behavior>no-php</last-hop-behavior>
+ </prefix-sid>
+ </prefix-sid-map>
+ </segment-routing>
+ <ppr>
+ <enable>true</enable>
+ <ppr-advertise>
+ <name>PPR_IPV6</name>
+ </ppr-advertise>
+ <ppr-advertise>
+ <name>PPR_MPLS_1</name>
+ </ppr-advertise>
+ <ppr-advertise>
+ <name>PPR_MPLS_2</name>
+ </ppr-advertise>
+ </ppr>
+ </instance>
+ </isis>
+
+Verification - Control Plane
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Verify that R11 has flooded the PPR TLVs correctly to all IS-IS routers:
+
+::
+
+ # show isis database detail 0000.0000.0011
+ Area 1:
+ IS-IS Level-1 link-state database:
+ LSP ID PduLen SeqNumber Chksum Holdtime ATT/P/OL
+ debian.00-00 * 980 0x00000003 0x3b69 894 0/0/0
+ Protocols Supported: IPv4, IPv6
+ Area Address: 49.0000
+ MT Router Info: ipv4-unicast
+ MT Router Info: ipv6-unicast
+ Hostname: debian
+ TE Router ID: 10.0.0.11
+ Router Capability: 10.0.0.11 , D:0, S:0
+ Segment Routing: I:1 V:1, SRGB Base: 16000 Range: 8000
+ Algorithm: 0: SPF 0: Strict SPF
+ MT Reachability: 0000.0000.0012.00 (Metric: 10) ipv6-unicast
+ Adjacency-SID: 16, Weight: 0, Flags: F:1 B:0, V:1, L:1, S:0, P:0
+ MT Reachability: 0000.0000.0021.00 (Metric: 10) ipv6-unicast
+ Adjacency-SID: 17, Weight: 0, Flags: F:1 B:0, V:1, L:1, S:0, P:0
+ IPv4 Interface Address: 10.0.0.11
+ Extended IP Reachability: 10.0.0.11/32 (Metric: 10)
+ MT IPv6 Reachability: 5000::11/128 (Metric: 10) ipv6-unicast
+ Subtlvs:
+ SR Prefix-SID Index: 11, Algorithm: 0, Flags: NO-PHP
+ MT IPv6 Reachability: 4000:101::/64 (Metric: 10) ipv6-unicast
+ MT IPv6 Reachability: 4000:104::/64 (Metric: 10) ipv6-unicast
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::11/128
+ ID: 6000:1::1/128 (Native IPv6)
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::22/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 50
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::14/128
+ ID: 6000:2::1/128 (Native IPv6)
+ PDE: 5000::11/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::21/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::22/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::23/128 (IPv6 Node Address), L:0 N:0 E:0
+ PDE: 5000::14/128 (IPv6 Node Address), L:0 N:1 E:0
+ Metric: 50
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::11/128
+ ID: 500 (MPLS)
+ PDE: 14 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 23 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 22 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 21 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 11 (SR-MPLS Prefix SID), L:0 N:1 E:0
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::14/128
+ ID: 501 (MPLS)
+ PDE: 11 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 21 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 22 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 23 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 14 (SR-MPLS Prefix SID), L:0 N:1 E:0
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::11/128
+ ID: 502 (MPLS)
+ PDE: 14 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 23 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 34 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 33 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 41 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 32 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 31 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 21 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 11 (SR-MPLS Prefix SID), L:0 N:1 E:0
+ PPR: Fragment ID: 0, MT-ID: ipv4-unicast, Algorithm: SPF, F:0 D:0 A:0 U:1
+ PPR Prefix: 5000::14/128
+ ID: 503 (MPLS)
+ PDE: 11 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 21 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 31 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 32 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 41 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 33 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 34 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 23 (SR-MPLS Prefix SID), L:0 N:0 E:0
+ PDE: 14 (SR-MPLS Prefix SID), L:0 N:1 E:0
+
+Using the ``show isis ppr`` command, verify that all routers installed
+the PPR-IDs for the paths they are part of. Example:
+
+Router RT11
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ --------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Tail-End Up 00:00:42
+ 1 L1 501 (MPLS) 5000::14/128 0 Head-End Up 00:00:41
+ 1 L1 502 (MPLS) 5000::11/128 0 Tail-End Up 00:00:42
+ 1 L1 503 (MPLS) 5000::14/128 0 Head-End Up 00:00:41
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Tail-End - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Head-End Up 00:00:41
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 implicit-null
+ 17 SR (IS-IS) fe80::345f:dfff:fea4:913d implicit-null
+ 16011 SR (IS-IS) lo -
+ 16012 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16012
+ 16013 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16013
+ 16014 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16014
+ 16021 SR (IS-IS) fe80::345f:dfff:fea4:913d 16021
+ 16022 SR (IS-IS) fe80::345f:dfff:fea4:913d 16022
+ 16022 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16022
+ 16023 SR (IS-IS) fe80::345f:dfff:fea4:913d 16023
+ 16023 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16023
+ 16031 SR (IS-IS) fe80::345f:dfff:fea4:913d 16031
+ 16032 SR (IS-IS) fe80::345f:dfff:fea4:913d 16032
+ 16033 SR (IS-IS) fe80::345f:dfff:fea4:913d 16033
+ 16033 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16033
+ 16034 SR (IS-IS) fe80::345f:dfff:fea4:913d 16034
+ 16034 SR (IS-IS) fe80::2065:5ff:fe72:d6c5 16034
+ 16041 SR (IS-IS) fe80::345f:dfff:fea4:913d 16041
+ 16500 PPR (IS-IS) lo -
+ 16501 PPR (IS-IS) fe80::345f:dfff:fea4:913d 16501
+ 16502 PPR (IS-IS) lo -
+ 16503 PPR (IS-IS) fe80::345f:dfff:fea4:913d 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:2::1/128 [115/50] via fe80::345f:dfff:fea4:913d, eth-rt21, 00:00:41
+
+Router RT12
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 503 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ ----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::60ad:96ff:fe3f:9989 implicit-null
+ 17 SR (IS-IS) fe80::9cd2:25ff:febc:84c4 implicit-null
+ 18 SR (IS-IS) fe80::941c:12ff:fe55:8a12 implicit-null
+ 19 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 implicit-null
+ 16011 SR (IS-IS) fe80::60ad:96ff:fe3f:9989 16011
+ 16012 SR (IS-IS) lo -
+ 16013 SR (IS-IS) fe80::9cd2:25ff:febc:84c4 16013
+ 16014 SR (IS-IS) fe80::9cd2:25ff:febc:84c4 16014
+ 16021 SR (IS-IS) fe80::941c:12ff:fe55:8a12 16021
+ 16022 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 16022
+ 16023 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 16023
+ 16023 SR (IS-IS) fe80::9cd2:25ff:febc:84c4 16023
+ 16031 SR (IS-IS) fe80::941c:12ff:fe55:8a12 16031
+ 16032 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 16032
+ 16032 SR (IS-IS) fe80::941c:12ff:fe55:8a12 16032
+ 16033 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 16033
+ 16034 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 16034
+ 16034 SR (IS-IS) fe80::9cd2:25ff:febc:84c4 16034
+ 16041 SR (IS-IS) fe80::78a7:59ff:fedc:48b8 16041
+ 16041 SR (IS-IS) fe80::941c:12ff:fe55:8a12 16041
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT13
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 503 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ ----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::1c70:63ff:fe40:3a35 implicit-null
+ 17 SR (IS-IS) fe80::20:56ff:feff:b218 implicit-null
+ 18 SR (IS-IS) fe80::44c5:3fff:fe1e:f34a implicit-null
+ 19 SR (IS-IS) fe80::387d:34ff:fe02:87c3 implicit-null
+ 16011 SR (IS-IS) fe80::20:56ff:feff:b218 16011
+ 16012 SR (IS-IS) fe80::20:56ff:feff:b218 16012
+ 16013 SR (IS-IS) lo -
+ 16014 SR (IS-IS) fe80::1c70:63ff:fe40:3a35 16014
+ 16021 SR (IS-IS) fe80::387d:34ff:fe02:87c3 16021
+ 16021 SR (IS-IS) fe80::20:56ff:feff:b218 16021
+ 16022 SR (IS-IS) fe80::387d:34ff:fe02:87c3 16022
+ 16023 SR (IS-IS) fe80::44c5:3fff:fe1e:f34a 20023
+ 16031 SR (IS-IS) fe80::387d:34ff:fe02:87c3 16031
+ 16031 SR (IS-IS) fe80::20:56ff:feff:b218 16031
+ 16032 SR (IS-IS) fe80::387d:34ff:fe02:87c3 16032
+ 16033 SR (IS-IS) fe80::44c5:3fff:fe1e:f34a 20033
+ 16033 SR (IS-IS) fe80::387d:34ff:fe02:87c3 16033
+ 16034 SR (IS-IS) fe80::44c5:3fff:fe1e:f34a 20034
+ 16041 SR (IS-IS) fe80::44c5:3fff:fe1e:f34a 20041
+ 16041 SR (IS-IS) fe80::387d:34ff:fe02:87c3 16041
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT14
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ --------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Head-End Up 00:00:46
+ 1 L1 501 (MPLS) 5000::14/128 0 Tail-End Up 00:00:47
+ 1 L1 502 (MPLS) 5000::11/128 0 Head-End Up 00:00:46
+ 1 L1 503 (MPLS) 5000::14/128 0 Tail-End Up 00:00:47
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Head-End Up 00:00:46
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Tail-End - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad implicit-null
+ 17 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 implicit-null
+ 16011 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16011
+ 16012 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16012
+ 16013 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16013
+ 16014 SR (IS-IS) lo -
+ 16021 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20021
+ 16021 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16021
+ 16022 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20022
+ 16022 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16022
+ 16023 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20023
+ 16031 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20031
+ 16031 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16031
+ 16032 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20032
+ 16032 SR (IS-IS) fe80::bcb5:99ff:fed7:22ad 16032
+ 16033 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20033
+ 16034 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20034
+ 16041 SR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20041
+ 16500 PPR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20500
+ 16501 PPR (IS-IS) lo -
+ 16502 PPR (IS-IS) fe80::4c7b:a1ff:fe66:6ca7 20502
+ 16503 PPR (IS-IS) lo -
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::4c7b:a1ff:fe66:6ca7, eth-rt23, 00:00:02
+
+Router RT21
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:49
+ 1 L1 501 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:48
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:49
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:48
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Mid-Point Up 00:00:49
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Mid-Point Up 00:00:48
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::b886:2cff:fe84:a76f implicit-null
+ 17 SR (IS-IS) fe80::bc7e:bbff:fe7f:ecb0 implicit-null
+ 18 SR (IS-IS) fe80::e877:a2ff:feb7:4438 implicit-null
+ 19 SR (IS-IS) fe80::a0c2:82ff:fe39:204c implicit-null
+ 20 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 implicit-null
+ 16011 SR (IS-IS) fe80::e877:a2ff:feb7:4438 16011
+ 16012 SR (IS-IS) fe80::a0c2:82ff:fe39:204c 16012
+ 16013 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16013
+ 16013 SR (IS-IS) fe80::a0c2:82ff:fe39:204c 16013
+ 16014 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16014
+ 16014 SR (IS-IS) fe80::a0c2:82ff:fe39:204c 16014
+ 16021 SR (IS-IS) lo -
+ 16022 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16022
+ 16023 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16023
+ 16031 SR (IS-IS) fe80::bc7e:bbff:fe7f:ecb0 16031
+ 16032 SR (IS-IS) fe80::b886:2cff:fe84:a76f 16032
+ 16033 SR (IS-IS) fe80::b886:2cff:fe84:a76f 16033
+ 16033 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16033
+ 16034 SR (IS-IS) fe80::b886:2cff:fe84:a76f 16034
+ 16034 SR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16034
+ 16041 SR (IS-IS) fe80::b886:2cff:fe84:a76f 16041
+ 16500 PPR (IS-IS) fe80::e877:a2ff:feb7:4438 16500
+ 16501 PPR (IS-IS) fe80::ac6a:8aff:fe14:4f36 16501
+ 16502 PPR (IS-IS) fe80::e877:a2ff:feb7:4438 16502
+ 16503 PPR (IS-IS) fe80::bc7e:bbff:fe7f:ecb0 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::e877:a2ff:feb7:4438, eth-rt11, 00:00:04
+ I>* 6000:2::1/128 [115/50] via fe80::ac6a:8aff:fe14:4f36, eth-rt22, 00:00:04
+
+Router RT22
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:50
+ 1 L1 501 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:50
+ 1 L1 502 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 503 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Mid-Point Up 00:00:50
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Mid-Point Up 00:00:50
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::3432:84ff:fe9d:2e41 implicit-null
+ 17 SR (IS-IS) fe80::c436:63ff:feb3:4f5d implicit-null
+ 18 SR (IS-IS) fe80::56:41ff:fe53:a6b2 implicit-null
+ 19 SR (IS-IS) fe80::b423:eaff:fea1:8247 implicit-null
+ 20 SR (IS-IS) fe80::9c2f:11ff:fe0a:ab34 implicit-null
+ 21 SR (IS-IS) fe80::7402:b8ff:fee9:682e implicit-null
+ 16011 SR (IS-IS) fe80::b423:eaff:fea1:8247 16011
+ 16011 SR (IS-IS) fe80::3432:84ff:fe9d:2e41 16011
+ 16012 SR (IS-IS) fe80::3432:84ff:fe9d:2e41 16012
+ 16013 SR (IS-IS) fe80::c436:63ff:feb3:4f5d 16013
+ 16014 SR (IS-IS) fe80::56:41ff:fe53:a6b2 20014
+ 16014 SR (IS-IS) fe80::c436:63ff:feb3:4f5d 16014
+ 16021 SR (IS-IS) fe80::b423:eaff:fea1:8247 16021
+ 16022 SR (IS-IS) lo -
+ 16023 SR (IS-IS) fe80::56:41ff:fe53:a6b2 20023
+ 16031 SR (IS-IS) fe80::9c2f:11ff:fe0a:ab34 16031
+ 16031 SR (IS-IS) fe80::b423:eaff:fea1:8247 16031
+ 16032 SR (IS-IS) fe80::9c2f:11ff:fe0a:ab34 16032
+ 16033 SR (IS-IS) fe80::7402:b8ff:fee9:682e 16033
+ 16034 SR (IS-IS) fe80::7402:b8ff:fee9:682e 16034
+ 16034 SR (IS-IS) fe80::56:41ff:fe53:a6b2 20034
+ 16041 SR (IS-IS) fe80::7402:b8ff:fee9:682e 16041
+ 16041 SR (IS-IS) fe80::9c2f:11ff:fe0a:ab34 16041
+ 16500 PPR (IS-IS) fe80::b423:eaff:fea1:8247 16500
+ 16501 PPR (IS-IS) fe80::56:41ff:fe53:a6b2 20501
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::b423:eaff:fea1:8247, eth-rt21, 00:00:06
+ I>* 6000:2::1/128 [115/50] via fe80::56:41ff:fe53:a6b2, eth-rt23, 00:00:06
+
+Router RT23
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:52
+ 1 L1 501 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:52
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:52
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:52
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Mid-Point Up 00:00:52
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Mid-Point Up 00:00:52
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::c4ca:41ff:fe2d:de8c implicit-null
+ 17 SR (IS-IS) fe80::a02b:1eff:fed6:97e4 implicit-null
+ 18 SR (IS-IS) fe80::5c15:8aff:feea:1d07 implicit-null
+ 19 SR (IS-IS) fe80::a42f:50ff:fe9c:af9f implicit-null
+ 20 SR (IS-IS) fe80::d0dc:6eff:fe71:9f19 implicit-null
+ 20011 SR (IS-IS) fe80::5c15:8aff:feea:1d07 16011
+ 20011 SR (IS-IS) fe80::a02b:1eff:fed6:97e4 16011
+ 20012 SR (IS-IS) fe80::5c15:8aff:feea:1d07 16012
+ 20012 SR (IS-IS) fe80::a02b:1eff:fed6:97e4 16012
+ 20013 SR (IS-IS) fe80::a02b:1eff:fed6:97e4 16013
+ 20014 SR (IS-IS) fe80::c4ca:41ff:fe2d:de8c 16014
+ 20021 SR (IS-IS) fe80::5c15:8aff:feea:1d07 16021
+ 20022 SR (IS-IS) fe80::5c15:8aff:feea:1d07 16022
+ 20023 SR (IS-IS) lo -
+ 20031 SR (IS-IS) fe80::a42f:50ff:fe9c:af9f 16031
+ 20031 SR (IS-IS) fe80::5c15:8aff:feea:1d07 16031
+ 20032 SR (IS-IS) fe80::a42f:50ff:fe9c:af9f 16032
+ 20032 SR (IS-IS) fe80::5c15:8aff:feea:1d07 16032
+ 20033 SR (IS-IS) fe80::a42f:50ff:fe9c:af9f 16033
+ 20034 SR (IS-IS) fe80::d0dc:6eff:fe71:9f19 16034
+ 20041 SR (IS-IS) fe80::a42f:50ff:fe9c:af9f 16041
+ 20500 PPR (IS-IS) fe80::5c15:8aff:feea:1d07 16500
+ 20501 PPR (IS-IS) fe80::c4ca:41ff:fe2d:de8c 16501
+ 20502 PPR (IS-IS) fe80::d0dc:6eff:fe71:9f19 16502
+ 20503 PPR (IS-IS) fe80::c4ca:41ff:fe2d:de8c 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+ Codes: K - kernel route, C - connected, S - static, R - RIPng,
+ O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
+ v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
+ f - OpenFabric,
+ > - selected route, * - FIB route, q - queued route, r - rejected route
+
+ I>* 6000:1::1/128 [115/50] via fe80::5c15:8aff:feea:1d07, eth-rt22, 00:00:07
+ I>* 6000:2::1/128 [115/50] via fe80::c4ca:41ff:fe2d:de8c, eth-rt14, 00:00:07
+
+Router RT31
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:54
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:54
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 implicit-null
+ 17 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 implicit-null
+ 16011 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16011
+ 16012 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16012
+ 16013 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16013
+ 16013 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16013
+ 16014 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16014
+ 16014 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16014
+ 16021 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16021
+ 16022 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16022
+ 16022 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16022
+ 16023 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16023
+ 16023 SR (IS-IS) fe80::a067:c6ff:fe2c:3385 16023
+ 16031 SR (IS-IS) lo -
+ 16032 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16032
+ 16033 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16033
+ 16034 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16034
+ 16041 SR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16041
+ 16502 PPR (IS-IS) fe80::a067:c6ff:fe2c:3385 16502
+ 16503 PPR (IS-IS) fe80::f46d:c8ff:fe8a:a341 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT32
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:55
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:55
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::881f:d3ff:febd:9e8c implicit-null
+ 17 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 implicit-null
+ 18 SR (IS-IS) fe80::9863:abff:fed0:d7e implicit-null
+ 19 SR (IS-IS) fe80::ec65:d1ff:fe32:b508 implicit-null
+ 20 SR (IS-IS) fe80::a4e9:77ff:feaa:f690 implicit-null
+ 21 SR (IS-IS) fe80::40c4:e6ff:fe26:767f implicit-null
+ 16011 SR (IS-IS) fe80::881f:d3ff:febd:9e8c 16011
+ 16012 SR (IS-IS) fe80::40c4:e6ff:fe26:767f 16012
+ 16012 SR (IS-IS) fe80::881f:d3ff:febd:9e8c 16012
+ 16013 SR (IS-IS) fe80::40c4:e6ff:fe26:767f 16013
+ 16014 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16014
+ 16014 SR (IS-IS) fe80::ec65:d1ff:fe32:b508 16014
+ 16014 SR (IS-IS) fe80::40c4:e6ff:fe26:767f 16014
+ 16021 SR (IS-IS) fe80::881f:d3ff:febd:9e8c 16021
+ 16022 SR (IS-IS) fe80::40c4:e6ff:fe26:767f 16022
+ 16023 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16023
+ 16023 SR (IS-IS) fe80::ec65:d1ff:fe32:b508 16023
+ 16023 SR (IS-IS) fe80::40c4:e6ff:fe26:767f 16023
+ 16031 SR (IS-IS) fe80::9863:abff:fed0:d7e 16031
+ 16032 SR (IS-IS) lo -
+ 16033 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16033
+ 16033 SR (IS-IS) fe80::ec65:d1ff:fe32:b508 16033
+ 16034 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16034
+ 16034 SR (IS-IS) fe80::ec65:d1ff:fe32:b508 16034
+ 16041 SR (IS-IS) fe80::a4e9:77ff:feaa:f690 16041
+ 16502 PPR (IS-IS) fe80::9863:abff:fed0:d7e 16502
+ 16503 PPR (IS-IS) fe80::a4e9:77ff:feaa:f690 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT33
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:57
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:57
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::2832:a9ff:fec3:7078 implicit-null
+ 17 SR (IS-IS) fe80::7806:e1ff:fe72:9b1f implicit-null
+ 18 SR (IS-IS) fe80::5476:31ff:fe94:c39 implicit-null
+ 19 SR (IS-IS) fe80::a4e9:77ff:feaa:f690 implicit-null
+ 20 SR (IS-IS) fe80::68c9:2ff:fe04:5eba implicit-null
+ 21 SR (IS-IS) fe80::d053:97ff:fee2:1711 implicit-null
+ 16011 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16011
+ 16011 SR (IS-IS) fe80::5476:31ff:fe94:c39 16011
+ 16011 SR (IS-IS) fe80::d053:97ff:fee2:1711 16011
+ 16012 SR (IS-IS) fe80::d053:97ff:fee2:1711 16012
+ 16013 SR (IS-IS) fe80::68c9:2ff:fe04:5eba 20013
+ 16013 SR (IS-IS) fe80::d053:97ff:fee2:1711 16013
+ 16014 SR (IS-IS) fe80::68c9:2ff:fe04:5eba 20014
+ 16021 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16021
+ 16021 SR (IS-IS) fe80::5476:31ff:fe94:c39 16021
+ 16021 SR (IS-IS) fe80::d053:97ff:fee2:1711 16021
+ 16022 SR (IS-IS) fe80::d053:97ff:fee2:1711 16022
+ 16023 SR (IS-IS) fe80::68c9:2ff:fe04:5eba 20023
+ 16031 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16031
+ 16031 SR (IS-IS) fe80::5476:31ff:fe94:c39 16031
+ 16032 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16032
+ 16032 SR (IS-IS) fe80::5476:31ff:fe94:c39 16032
+ 16033 SR (IS-IS) lo -
+ 16034 SR (IS-IS) fe80::7806:e1ff:fe72:9b1f 16034
+ 16041 SR (IS-IS) fe80::a4e9:77ff:feaa:f690 16041
+ 16502 PPR (IS-IS) fe80::a4e9:77ff:feaa:f690 16502
+ 16503 PPR (IS-IS) fe80::7806:e1ff:fe72:9b1f 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT34
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:00:59
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:00:59
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::ac33:5dff:fe99:81ec implicit-null
+ 17 SR (IS-IS) fe80::f009:b9ff:fe05:e540 implicit-null
+ 16011 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16011
+ 16011 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20011
+ 16012 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16012
+ 16012 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20012
+ 16013 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20013
+ 16014 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20014
+ 16021 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16021
+ 16021 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20021
+ 16022 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16022
+ 16022 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20022
+ 16023 SR (IS-IS) fe80::f009:b9ff:fe05:e540 20023
+ 16031 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16031
+ 16032 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16032
+ 16033 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16033
+ 16034 SR (IS-IS) lo -
+ 16041 SR (IS-IS) fe80::ac33:5dff:fe99:81ec 16041
+ 16502 PPR (IS-IS) fe80::ac33:5dff:fe99:81ec 16502
+ 16503 PPR (IS-IS) fe80::f009:b9ff:fe05:e540 20503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Router RT41
+^^^^^^^^^^^
+
+::
+
+ # show isis ppr
+ Area Level ID Prefix Metric Position Status Uptime
+ ---------------------------------------------------------------------------------------------
+ 1 L1 500 (MPLS) 5000::11/128 0 Off-Path - -
+ 1 L1 501 (MPLS) 5000::14/128 0 Off-Path - -
+ 1 L1 502 (MPLS) 5000::11/128 0 Mid-Point Up 00:01:01
+ 1 L1 503 (MPLS) 5000::14/128 0 Mid-Point Up 00:01:01
+ 1 L1 6000:1::1/128 (Native IPv6) 5000::11/128 50 Off-Path - -
+ 1 L1 6000:2::1/128 (Native IPv6) 5000::14/128 50 Off-Path - -
+
+ # show mpls table
+ Inbound Label Type Nexthop Outbound Label
+ -----------------------------------------------------------------------
+ 16 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 implicit-null
+ 17 SR (IS-IS) fe80::2832:a9ff:fec3:7078 implicit-null
+ 16011 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16011
+ 16012 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16012
+ 16012 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16012
+ 16013 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16013
+ 16013 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16013
+ 16014 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16014
+ 16021 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16021
+ 16022 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16022
+ 16022 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16022
+ 16023 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16023
+ 16031 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16031
+ 16032 SR (IS-IS) fe80::2832:a9ff:fec3:7078 16032
+ 16033 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16033
+ 16034 SR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16034
+ 16041 SR (IS-IS) lo -
+ 16502 PPR (IS-IS) fe80::2832:a9ff:fec3:7078 16502
+ 16503 PPR (IS-IS) fe80::1c7e:c3ff:fe5e:7a54 16503
+
+ # show ipv6 route 6000::/16 longer-prefixes isis
+
+Notice how R23 uses a different SRGB compared to the other routers in
+the network. As such, this router installs different labels for PPR-IDs
+500 and 501 (e.g. 20500 instead of the 16500 that would result from the
+default SRGB).
+
+Verification - Forwarding Plane
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ping Host 3 from Host 2 and use tcpdump or wireshark to verify that the
+ICMP packets are being tunneled using MPLS LSPs and following the {R11 -
+R21 - R22 - R23 - R14} path. Here’s a wireshark capture between R11 and
+R21:
+
+.. figure:: https://user-images.githubusercontent.com/931662/64057179-2e980080-cb70-11e9-89c3-ff43e6d66cae.png
+ :alt: wireshark
+
+ wireshark
+
+Using ``traceroute`` it’s also possible to see that the ICMP packets are
+being tunneled through the IS-IS network:
+
+::
+
+ root@host2:~# traceroute -n fd00:20:1::1 -s fd00:10:2::1
+ traceroute to fd00:20:1::1 (fd00:20:1::1), 30 hops max, 80 byte packets
+ 1 fd00:10:2::100 1.996 ms 1.832 ms 1.725 ms
+ 2 * * *
+ 3 * * *
+ 4 * * *
+ 5 * * *
+ 6 * * *
+ 7 * * *
+ 8 fd00:20::100 0.154 ms 0.191 ms 0.116 ms
+ 9 fd00:20:1::1 0.125 ms 0.105 ms 0.104 ms
diff --git a/doc/developer/northbound/retrofitting-configuration-commands.rst b/doc/developer/northbound/retrofitting-configuration-commands.rst
new file mode 100644
index 0000000..b407246
--- /dev/null
+++ b/doc/developer/northbound/retrofitting-configuration-commands.rst
@@ -0,0 +1,1897 @@
+Retrofitting Configuration Commands
+-----------------------------------
+
+This page explains how to convert existing CLI configuration commands to
+the new northbound model. This documentation is meant to be the primary
+reference for developers working on the northbound retrofitting process.
+We’ll show several examples taken from the ripd northbound conversion to
+illustrate some concepts described herein.
+
+Retrofitting process
+--------------------
+
+Step 1: writing a YANG module
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The first step is to write a YANG module that faithfully models the
+commands that are going to be converted. As explained in the
+[[Architecture]] page, the goal is to introduce the new YANG-based
+Northbound API without introducing backward incompatible changes in the
+CLI. The northbound retrofitting process should be completely
+transparent to FRR users.
+
+The developer is free to choose whether to write a full YANG module or a
+partial YANG module and increment it gradually. For developers who lack
+experience with YANG it’s probably a better idea to model one command at
+time.
+
+It’s recommended to reuse definitions from standard YANG models whenever
+possible to facilitate the process of writing module translators using
+the [[YANG module translator]]. As an example, the frr-ripd YANG module
+incorporated several parts of the IETF RIP YANG module. The repositories
+below contain big collections of YANG models that might be used as a
+reference:
+
+* https://github.com/YangModels/yang
+* https://github.com/openconfig/public
+
+When writing a YANG module, it’s highly recommended to follow the
+guidelines from `RFC 6087 <https://tools.ietf.org/html/rfc6087>`__. In
+general, most commands can be modeled fairly easily. Here are a few
+guidelines specific to authors of FRR YANG models:
+
+* Use presence-containers or lists to model commands that change the
+  CLI node (e.g. ``router rip``, ``interface eth0``). This way, if the
+  presence-container or list entry is removed, all configuration
+  options below them are removed automatically (exactly like the CLI
+  behaves when a configuration object is removed using a *no* command).
+  This recommendation is orthogonal to the `YANG authoring guidelines
+  for OpenConfig models
+  <https://github.com/openconfig/public/blob/master/doc/openconfig_style_guide.md>`__,
+  where the use of presence containers is discouraged. OpenConfig YANG
+  models, however, were not designed to replicate the behavior of
+  legacy CLI commands.
+
+* When using YANG lists, be careful to identify which leaves should be
+  the list keys. In the ``offset-list WORD <in|out> (0-16) IFNAME``
+  command, for example, both the direction (``<in|out>``) and the
+  interface name should be the keys of the list. This can only be known
+  by analyzing the data structures used to store the commands (see the
+  sketch after this list).
+
+* For clarity, use non-presence containers to group leaves that are
+  associated to the same configuration command (as we’ll see later,
+  this also facilitates the process of writing ``cli_show`` callbacks).
+
+* YANG leaves of type *enumeration* should explicitly define the value
+  of each *enum* option based on the value used in the FRR source code.
+
+* Default values should be taken from the source code whenever they
+  exist.
+
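+To illustrate the guideline about list keys, here’s a minimal YANG
+sketch of how the ``offset-list`` command could be modeled. This is an
+illustration only; the leaf names and descriptions are hypothetical and
+not taken from the actual frr-ripd module:
+
+.. code:: yang
+
+   list offset-list {
+     key "interface direction";
+     description
+       "Offsets to be applied to RIP metrics (illustrative sketch).";
+     leaf interface {
+       type string;
+       description
+         "Interface to match.";
+     }
+     leaf direction {
+       type enumeration {
+         enum in {
+           value 0;
+           description
+             "Incoming updates.";
+         }
+         enum out {
+           value 1;
+           description
+             "Outgoing updates.";
+         }
+       }
+       description
+         "Whether the offset applies to incoming or outgoing updates.";
+     }
+     leaf access-list {
+       type string;
+       description
+         "Access-list used to match routes.";
+     }
+     leaf metric {
+       type uint8 {
+         range "0..16";
+       }
+       description
+         "Offset to apply to the metric.";
+     }
+   }
+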
+Some commands are more difficult to model and demand the use of more
+advanced YANG constructs like *choice*, *when* and *must* statements.
+**One key requirement is that it should be impossible to load an invalid
+JSON/XML configuration to FRR**. The YANG modules should model exactly
+what the CLI accepts in the form of commands, and all restrictions
+imposed by the CLI should be defined in the YANG models whenever
+possible. As we’ll see later, not all constraints can be expressed using
+the YANG language and sometimes we’ll need to resort to code-level
+validation in the northbound callbacks.
+
+ Tip: the :doc:`yang-tools` page details several tools and commands that
+ might be useful when writing a YANG module, like validating YANG
+ files, indenting YANG files, validating instance data, etc.
+
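+For instance, assuming the *pyang* tool is installed, a module can be
+validated and its tree representation printed with commands along these
+lines:
+
+.. code:: sh
+
+   $ pyang --strict frr-ripd.yang
+   $ pyang -f tree frr-ripd.yang
+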
+In the example YANG snippet below, we can see the use of the *must*
+statement that prevents ripd from redistributing RIP routes into itself.
+Although ripd CLI doesn’t allow the operator to enter *redistribute rip*
+under *router rip*, we don’t have the same protection when configuring
+ripd using other northbound interfaces (e.g. NETCONF). So without this
+constraint it would be possible to feed an invalid configuration to ripd
+(i.e. a bug).
+
+.. code:: yang
+
+ list redistribute {
+ key "protocol";
+ description
+ "Redistributes routes learned from other routing protocols.";
+ leaf protocol {
+ type frr-route-types:frr-route-types-v4;
+ description
+ "Routing protocol.";
+ must '. != "rip"';
+ }
+ [snip]
+ }
+
+In the example below, we use the YANG *choice* statement to ensure that
+either the ``password`` leaf or the ``key-chain`` leaf is configured,
+but not both. This is in accordance with the sanity checks performed by
+the *ip rip authentication* commands.
+
+.. code:: yang
+
+ choice authentication-data {
+ description
+ "Choose whether to use a simple password or a key-chain.";
+ leaf authentication-password {
+ type string {
+ length "1..16";
+ }
+ description
+ "Authentication string.";
+ }
+ leaf authentication-key-chain {
+ type string;
+ description
+ "Key-chain name.";
+ }
+ }
+
+Once finished, the new YANG model should be put into the FRR *yang/* top
+level directory. This will ensure it will be installed automatically by
+``make install``. It’s also encouraged (but not required) to put sample
+configurations under *yang/examples/* using either JSON or XML files.
+
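+As an illustration, a minimal JSON sample configuration for the
+frr-ripd module might look like the following (the values shown are
+illustrative only):
+
+.. code:: json
+
+   {
+     "frr-ripd:ripd": {
+       "instance": {
+         "allow-ecmp": true,
+         "default-metric": 2
+       }
+     }
+   }
+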
+Step 2: generate skeleton northbound callbacks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use the *gen_northbound_callbacks* tool to generate skeleton callbacks
+for the YANG module. Example:
+
+.. code:: sh
+
+ $ tools/gen_northbound_callbacks frr-ripd > ripd/rip_northbound.c
+
+The tool will look for the given module in the ``YANG_MODELS_PATH``
+directory defined during the installation. For each schema node of the
+YANG module, the tool will generate skeleton callbacks based on the
+properties of the node. Example:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:ripd/instance
+ */
+ static int ripd_instance_create(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ /* TODO: implement me. */
+ return NB_OK;
+ }
+
+ static int ripd_instance_delete(enum nb_event event,
+ const struct lyd_node *dnode)
+ {
+ /* TODO: implement me. */
+ return NB_OK;
+ }
+
+ /*
+ * XPath: /frr-ripd:ripd/instance/allow-ecmp
+ */
+ static int ripd_instance_allow_ecmp_modify(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ /* TODO: implement me. */
+ return NB_OK;
+ }
+
+ [snip]
+
+ const struct frr_yang_module_info frr_ripd_info = {
+ .name = "frr-ripd",
+ .nodes = {
+ {
+ .xpath = "/frr-ripd:ripd/instance",
+ .cbs.create = ripd_instance_create,
+ .cbs.delete = ripd_instance_delete,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/allow-ecmp",
+ .cbs.modify = ripd_instance_allow_ecmp_modify,
+ },
+ [snip]
+ {
+ .xpath = "/frr-ripd:ripd/state/routes/route",
+ .cbs.get_next = ripd_state_routes_route_get_next,
+ .cbs.get_keys = ripd_state_routes_route_get_keys,
+ .cbs.lookup_entry = ripd_state_routes_route_lookup_entry,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/routes/route/prefix",
+ .cbs.get_elem = ripd_state_routes_route_prefix_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/routes/route/next-hop",
+ .cbs.get_elem = ripd_state_routes_route_next_hop_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/routes/route/interface",
+ .cbs.get_elem = ripd_state_routes_route_interface_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/state/routes/route/metric",
+ .cbs.get_elem = ripd_state_routes_route_metric_get_elem,
+ },
+ {
+ .xpath = "/frr-ripd:clear-rip-route",
+ .cbs.rpc = clear_rip_route_rpc,
+ },
+ [snip]
+
+After the C source file is generated, it’s necessary to add a copyright
+header to it and indent the code using ``clang-format``.
+
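+For example, the generated file can be indented in place as follows
+(FRR ships a ``.clang-format`` style file at the top of the source
+tree):
+
+.. code:: sh
+
+   $ clang-format -i ripd/rip_northbound.c
+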
+Step 3: update the *frr_yang_module_info* array of all relevant daemons
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We must inform the northbound about which daemons will implement the new
+YANG module. This is done by updating the ``frr_daemon_info`` structure
+of these daemons, with help of the ``FRR_DAEMON_INFO`` macro.
+
+When a YANG module is specific to a single daemon, like the frr-ripd
+module, then only the corresponding daemon should be updated. When the
+YANG module is related to a subset of libfrr (e.g. route-maps), then all
+FRR daemons that make use of that subset must be updated.
+
+Example:
+
+.. code:: c
+
+ static const struct frr_yang_module_info *ripd_yang_modules[] = {
+ &frr_interface_info,
+ &frr_ripd_info,
+ };
+
+ FRR_DAEMON_INFO(ripd, RIP, .vty_port = RIP_VTY_PORT,
+ [snip]
+ .yang_modules = ripd_yang_modules,
+ .n_yang_modules = array_size(ripd_yang_modules), )
+
+Step 4: implement the northbound configuration callbacks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Implementing the northbound configuration callbacks consists mostly of
+copying code from the corresponding CLI commands and making the
+required adaptations.
+
+It’s recommended to convert one command or a small group of related
+commands per commit. Small commits are preferred to facilitate the
+review process. Both “old” and “new” commands can coexist without
+problems, so the retrofitting process can happen gradually over time.
+
+The configuration callbacks
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These are the four main northbound configuration callbacks, as defined
+in the ``lib/northbound.h`` file:
+
+.. code:: c
+
+ /*
+ * Configuration callback.
+ *
+ * A presence container, list entry, leaf-list entry or leaf of type
+ * empty has been created.
+ *
+ * For presence-containers and list entries, the callback is supposed to
+ * initialize the default values of its children (if any) from the YANG
+ * models.
+ *
+ * event
+ * The transaction phase. Refer to the documentation comments of
+ * nb_event for more details.
+ *
+ * dnode
+ * libyang data node that is being created.
+ *
+ * resource
+ * Pointer to store resource(s) allocated during the NB_EV_PREPARE
+ * phase. The same pointer can be used during the NB_EV_ABORT and
+ * NB_EV_APPLY phases to either release or make use of the allocated
+ * resource(s). It's set to NULL when the event is NB_EV_VALIDATE.
+ *
+ * Returns:
+ * - NB_OK on success.
+ * - NB_ERR_VALIDATION when a validation error occurred.
+ * - NB_ERR_RESOURCE when the callback failed to allocate a resource.
+ * - NB_ERR_INCONSISTENCY when an inconsistency was detected.
+ * - NB_ERR for other errors.
+ */
+ int (*create)(enum nb_event event, const struct lyd_node *dnode,
+ union nb_resource *resource);
+
+ /*
+ * Configuration callback.
+ *
+ * The value of a leaf has been modified.
+ *
+ * List keys don't need to implement this callback. When a list key is
+ * modified, the northbound treats this as if the list was deleted and a
+ * new one created with the updated key value.
+ *
+ * event
+ * The transaction phase. Refer to the documentation comments of
+ * nb_event for more details.
+ *
+ * dnode
+ * libyang data node that is being modified
+ *
+ * resource
+ * Pointer to store resource(s) allocated during the NB_EV_PREPARE
+ * phase. The same pointer can be used during the NB_EV_ABORT and
+ * NB_EV_APPLY phases to either release or make use of the allocated
+ * resource(s). It's set to NULL when the event is NB_EV_VALIDATE.
+ *
+ * Returns:
+ * - NB_OK on success.
+ * - NB_ERR_VALIDATION when a validation error occurred.
+ * - NB_ERR_RESOURCE when the callback failed to allocate a resource.
+ * - NB_ERR_INCONSISTENCY when an inconsistency was detected.
+ * - NB_ERR for other errors.
+ */
+ int (*modify)(enum nb_event event, const struct lyd_node *dnode,
+ union nb_resource *resource);
+
+ /*
+ * Configuration callback.
+ *
+ * A presence container, list entry, leaf-list entry or optional leaf
+ * has been deleted.
+ *
+ * The callback is supposed to delete the entire configuration object,
+ * including its children when they exist.
+ *
+ * event
+ * The transaction phase. Refer to the documentation comments of
+ * nb_event for more details.
+ *
+ * dnode
+ * libyang data node that is being deleted.
+ *
+ * Returns:
+ * - NB_OK on success.
+ * - NB_ERR_VALIDATION when a validation error occurred.
+ * - NB_ERR_INCONSISTENCY when an inconsistency was detected.
+ * - NB_ERR for other errors.
+ */
+ int (*delete)(enum nb_event event, const struct lyd_node *dnode);
+
+ /*
+ * Configuration callback.
+ *
+ * A list entry or leaf-list entry has been moved. Only applicable when
+ * the "ordered-by user" statement is present.
+ *
+ * event
+ * The transaction phase. Refer to the documentation comments of
+ * nb_event for more details.
+ *
+ * dnode
+ * libyang data node that is being moved.
+ *
+ * Returns:
+ * - NB_OK on success.
+ * - NB_ERR_VALIDATION when a validation error occurred.
+ * - NB_ERR_INCONSISTENCY when an inconsistency was detected.
+ * - NB_ERR for other errors.
+ */
+ int (*move)(enum nb_event event, const struct lyd_node *dnode);
+
+Since skeleton northbound callbacks are generated automatically by the
+*gen_northbound_callbacks* tool, the developer doesn’t need to worry
+about which callbacks need to be implemented.
+
+ NOTE: once a daemon starts, it reads its YANG modules and validates
+ that all required northbound callbacks were implemented. If any
+ northbound callback is missing, an error is logged and the program
+ exits.
+
+Transaction phases
+^^^^^^^^^^^^^^^^^^
+
+Configuration transactions and their phases were described in detail in
+the [[Architecture]] page. Here’s the definition of the ``nb_event``
+enumeration as defined in the *lib/northbound.h* file:
+
+.. code:: c
+
+ /* Northbound events. */
+ enum nb_event {
+ /*
+ * The configuration callback is supposed to verify that the changes are
+ * valid and can be applied.
+ */
+ NB_EV_VALIDATE,
+
+ /*
+ * The configuration callback is supposed to prepare all resources
+ * required to apply the changes.
+ */
+ NB_EV_PREPARE,
+
+ /*
+ * Transaction has failed, the configuration callback needs to release
+ * all resources previously allocated.
+ */
+ NB_EV_ABORT,
+
+ /*
+ * The configuration changes need to be applied. The changes can't be
+ * rejected at this point (errors are logged and ignored).
+ */
+ NB_EV_APPLY,
+ };
+
+When converting a CLI command, we must identify all error-prone
+operations and perform them in the ``NB_EV_PREPARE`` phase of the
+northbound callbacks. When the operation in question involves the
+allocation of a specific resource (e.g. file descriptors), we can store
+the allocated resource in the ``resource`` variable given to the
+callback. This way the allocated resource can be obtained in the other
+phases of the transaction using the same parameter.
+
+Here’s the ``create`` northbound callback associated to the
+``router rip`` command:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:ripd/instance
+ */
+ static int ripd_instance_create(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ int socket;
+
+ switch (event) {
+ case NB_EV_VALIDATE:
+ break;
+ case NB_EV_PREPARE:
+ socket = rip_create_socket();
+ if (socket < 0)
+ return NB_ERR_RESOURCE;
+ resource->fd = socket;
+ break;
+ case NB_EV_ABORT:
+ socket = resource->fd;
+ close(socket);
+ break;
+ case NB_EV_APPLY:
+ socket = resource->fd;
+ rip_create(socket);
+ break;
+ }
+
+ return NB_OK;
+ }
+
+Note that the socket creation is an error-prone operation since it
+depends on the underlying operating system, so the socket must be
+created during the ``NB_EV_PREPARE`` phase and stored in
+``resource->fd``. This socket is then either closed or used depending on
+the outcome of the preparation phase of the whole transaction.
+
+During the ``NB_EV_VALIDATE`` phase, the northbound callbacks must
+verify that the intended changes are valid. As an example, FRR doesn’t
+allow the operator to deconfigure active interfaces:
+
+.. code:: c
+
+ static int lib_interface_delete(enum nb_event event,
+ const struct lyd_node *dnode)
+ {
+ struct interface *ifp;
+
+ ifp = yang_dnode_get_entry(dnode);
+
+ switch (event) {
+ case NB_EV_VALIDATE:
+ if (CHECK_FLAG(ifp->status, ZEBRA_INTERFACE_ACTIVE)) {
+ zlog_warn("%s: only inactive interfaces can be deleted",
+ __func__);
+ return NB_ERR_VALIDATION;
+ }
+ break;
+ case NB_EV_PREPARE:
+ case NB_EV_ABORT:
+ break;
+ case NB_EV_APPLY:
+ if_delete(ifp);
+ break;
+ }
+
+ return NB_OK;
+ }
+
+Note however that it’s preferred to use YANG to model the validation
+constraints whenever possible. Code-level validations should be used
+only to validate constraints that can’t be modeled using the YANG
+language.
+
+Most callbacks don’t need to perform any validations nor perform any
+error-prone operations, so in these cases we can use the following
+pattern to return early if ``event`` is different than ``NB_EV_APPLY``:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:ripd/instance/distance/default
+ */
+ static int ripd_instance_distance_default_modify(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ if (event != NB_EV_APPLY)
+ return NB_OK;
+
+ rip->distance = yang_dnode_get_uint8(dnode, NULL);
+
+ return NB_OK;
+ }
+
+During development it’s recommended to use the *debug northbound* command
+to debug configuration transactions and see what callbacks are being
+called. Example:
+
+::
+
+ ripd# conf t
+ ripd(config)# debug northbound
+ ripd(config)# router rip
+ ripd(config-router)# allow-ecmp
+ ripd(config-router)# network eth0
+ ripd(config-router)# redistribute ospf metric 2
+ ripd(config-router)# commit
+ % Configuration committed successfully.
+
+ ripd(config-router)#
+
+Now the ripd log:
+
+::
+
+ 2018/09/23 12:43:59 RIP: northbound callback: event [validate] op [create] xpath [/frr-ripd:ripd/instance] value [(none)]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [validate] op [modify] xpath [/frr-ripd:ripd/instance/allow-ecmp] value [true]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [validate] op [create] xpath [/frr-ripd:ripd/instance/interface[.='eth0']] value [eth0]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [validate] op [create] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']] value [(none)]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [validate] op [modify] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']/metric] value [2]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [prepare] op [create] xpath [/frr-ripd:ripd/instance] value [(none)]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [prepare] op [modify] xpath [/frr-ripd:ripd/instance/allow-ecmp] value [true]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [prepare] op [create] xpath [/frr-ripd:ripd/instance/interface[.='eth0']] value [eth0]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [prepare] op [create] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']] value [(none)]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [prepare] op [modify] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']/metric] value [2]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [apply] op [create] xpath [/frr-ripd:ripd/instance] value [(none)]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [apply] op [modify] xpath [/frr-ripd:ripd/instance/allow-ecmp] value [true]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [apply] op [create] xpath [/frr-ripd:ripd/instance/interface[.='eth0']] value [eth0]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [apply] op [create] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']] value [(none)]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [apply] op [modify] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']/metric] value [2]
+ 2018/09/23 12:43:59 RIP: northbound callback: event [apply] op [apply_finish] xpath [/frr-ripd:ripd/instance/redistribute[protocol='ospf']] value [(null)]
+
+Getting the data
+^^^^^^^^^^^^^^^^
+
+One parameter that is common to all northbound configuration callbacks
+is the ``dnode`` parameter. This is a libyang data node structure that
+contains information relative to the configuration change that is being
+performed. For ``create`` callbacks, it contains the configuration node
+that is being added. For ``delete`` callbacks, it contains the
+configuration node that is being deleted. For ``modify`` callbacks, it
+contains the configuration node that is being modified.
+
+In order to get the actual data value out of the ``dnode`` variable, we
+need to use the ``yang_dnode_get_*()`` wrappers documented in
+*lib/yang_wrappers.h*.
+
+The advantage of passing a ``dnode`` structure to the northbound
+callbacks is that the whole candidate being committed is made available,
+so the callbacks can obtain values from other portions of the
+configuration if necessary. This can be done by providing an xpath
+expression to the second parameter of the ``yang_dnode_get_*()``
+wrappers to specify the element we want to get. The example below shows
+a callback that gets the values of two leaves that are part of the same
+list entry:
+
+.. code:: c
+
+ static int
+ ripd_instance_redistribute_metric_modify(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ int type;
+ uint8_t metric;
+
+ if (event != NB_EV_APPLY)
+ return NB_OK;
+
+ type = yang_dnode_get_enum(dnode, "../protocol");
+ metric = yang_dnode_get_uint8(dnode, NULL);
+
+ rip->route_map[type].metric_config = true;
+ rip->route_map[type].metric = metric;
+ rip_redistribute_conf_update(type);
+
+ return NB_OK;
+ }
+
+..
+
+ NOTE: if the wrong ``yang_dnode_get_*()`` wrapper is used, the code
+ will log an error and abort. An example would be using
+ ``yang_dnode_get_enum()`` to get the value of a boolean data node.
+
+No need to check if the configuration value has changed
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A common pattern in CLI commands is this:
+
+.. code:: c
+
+ DEFUN (...)
+ {
+ [snip]
+ if (new_value == old_value)
+ return CMD_SUCCESS;
+ [snip]
+ }
+
+Several commands need to check whether the new value entered by the
+user is the same as the one currently configured and, if so, ignore the
+command since nothing has changed.
+
+The northbound callbacks on the other hand don’t need to perform this
+check since they act on effective configuration changes. Using the CLI
+as an example, if the operator enters the same command multiple times,
+the northbound layer will detect that nothing has changed in the
+configuration and will avoid calling the northbound callbacks
+unnecessarily.
+
+In some cases, however, it might be desirable to check for
+inconsistencies and notify the northbound when that happens:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:ripd/instance/interface
+ */
+ static int ripd_instance_interface_create(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ const char *ifname;
+
+ if (event != NB_EV_APPLY)
+ return NB_OK;
+
+ ifname = yang_dnode_get_string(dnode, NULL);
+
+ return rip_enable_if_add(ifname);
+ }
+
+.. code:: c
+
+ /* Add interface to rip_enable_if. */
+ int rip_enable_if_add(const char *ifname)
+ {
+ int ret;
+
+ ret = rip_enable_if_lookup(ifname);
+ if (ret >= 0)
+ return NB_ERR_INCONSISTENCY;
+
+ vector_set(rip_enable_interface,
+ XSTRDUP(MTYPE_RIP_INTERFACE_STRING, ifname));
+
+ rip_enable_apply_all(); /* TODOVJ */
+
+ return NB_OK;
+ }
+
+In the example above, the ``rip_enable_if_add()`` function should never
+return ``NB_ERR_INCONSISTENCY`` in normal conditions. This is because
+the northbound layer guarantees that the same interface will never be
+added more than once (except when it’s removed and re-added). But
+to be on the safe side it’s probably wise to check for internal
+inconsistencies to ensure everything is working as expected.
+
+Default values
+^^^^^^^^^^^^^^
+
+Whenever creating a new presence-container or list entry, it’s usually
+necessary to initialize certain variables to their default values. FRR
+most of the time uses special constants for that purpose
+(e.g. ``RIP_DEFAULT_METRIC_DEFAULT``, ``DFLT_BGP_HOLDTIME``, etc). Now
+that we have YANG models, we want to fetch the default values from these
+models instead. This will allow us to change default values smoothly
+without needing to touch the code. Better yet, it will allow users to
+create YANG deviations to define custom default values easily.
+
+To fetch default values from the loaded YANG models, use the
+``yang_get_default_*()`` wrapper functions
+(e.g. ``yang_get_default_bool()``) documented in *lib/yang_wrappers.h*.
+
+Example:
+
+.. code:: c
+
+ int rip_create(int socket)
+ {
+ rip = XCALLOC(MTYPE_RIP, sizeof(struct rip));
+
+ /* Set initial values. */
+ rip->ecmp = yang_get_default_bool("%s/allow-ecmp", RIP_INSTANCE);
+ rip->default_metric =
+ yang_get_default_uint8("%s/default-metric", RIP_INSTANCE);
+ [snip]
+ }
+
+Configuration options are edited individually
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Several CLI commands edit multiple configuration options at the same
+time. Some examples taken from ripd:
+
+* ``timers basic (5-2147483647) (5-2147483647) (5-2147483647)``
+
+  - */frr-ripd:ripd/instance/timers/flush-interval*
+  - */frr-ripd:ripd/instance/timers/holddown-interval*
+  - */frr-ripd:ripd/instance/timers/update-interval*
+
+* ``distance (1-255) A.B.C.D/M [WORD]``
+
+  - */frr-ripd:ripd/instance/distance/source/prefix*
+  - */frr-ripd:ripd/instance/distance/source/distance*
+  - */frr-ripd:ripd/instance/distance/source/access-list*
+
+In the new northbound model, there’s one or more separate callbacks for
+each configuration option. This usually has implications when converting
+code from CLI commands to the northbound commands. An example of this is
+the following commit from ripd:
+`7cf2f2eaf <https://github.com/opensourcerouting/frr/commit/7cf2f2eaf43ef5df294625d1ab4c708db8293510>`__.
+The ``rip_distance_set()`` and ``rip_distance_unset()`` functions were
+torn apart and their code split into a few different callbacks.
+
+For lists and presence-containers, it’s possible to use the
+``yang_dnode_set_entry()`` function to attach user data to a libyang
+data node, and then retrieve this value in the other callbacks (for the
+same node or any of its children) using the ``yang_dnode_get_entry()``
+function. Example:
+
+.. code:: c
+
+ static int ripd_instance_distance_source_create(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ struct prefix_ipv4 prefix;
+ struct route_node *rn;
+
+ if (event != NB_EV_APPLY)
+ return NB_OK;
+
+ yang_dnode_get_ipv4p(&prefix, dnode, "./prefix");
+
+ /* Get RIP distance node. */
+ rn = route_node_get(rip_distance_table, (struct prefix *)&prefix);
+ rn->info = rip_distance_new();
+ yang_dnode_set_entry(dnode, rn);
+
+ return NB_OK;
+ }
+
+.. code:: c
+
+ static int
+ ripd_instance_distance_source_distance_modify(enum nb_event event,
+ const struct lyd_node *dnode,
+ union nb_resource *resource)
+ {
+ struct route_node *rn;
+ uint8_t distance;
+ struct rip_distance *rdistance;
+
+ if (event != NB_EV_APPLY)
+ return NB_OK;
+
+ /* Set distance value. */
+ rn = yang_dnode_get_entry(dnode);
+ distance = yang_dnode_get_uint8(dnode, NULL);
+ rdistance = rn->info;
+ rdistance->distance = distance;
+
+ return NB_OK;
+ }
+
+Commands that edit multiple configuration options at the same time can
+also use the ``apply_finish`` optional callback, documented as follows
+in the *lib/northbound.h* file:
+
+.. code:: c
+
+ /*
+ * Optional configuration callback for YANG lists and containers.
+ *
+ * The 'apply_finish' callbacks are called after all other callbacks
+ * during the apply phase (NB_EV_APPLY). These callbacks are called only
+ * under one of the following two cases:
+ * * The container or a list entry has been created;
+ * * Any change is made within the descendants of the list entry or
+ * container (e.g. a child leaf was modified, created or deleted).
+ *
+ * This callback is useful in the cases where a single event should be
+ * triggered regardless if the container or list entry was changed once
+ * or multiple times.
+ *
+ * dnode
+ * libyang data node from the YANG list or container.
+ */
+ void (*apply_finish)(const struct lyd_node *dnode);
+
+Here’s an example of how this callback can be used:
+
+.. code:: c
+
+ /*
+ * XPath: /frr-ripd:ripd/instance/timers/
+ */
+ static void ripd_instance_timers_apply_finish(const struct lyd_node *dnode)
+ {
+ /* Reset update timer thread. */
+ rip_event(RIP_UPDATE_EVENT, 0);
+ }
+
+.. code:: c
+
+ {
+ .xpath = "/frr-ripd:ripd/instance/timers",
+ .cbs.apply_finish = ripd_instance_timers_apply_finish,
+ .cbs.cli_show = cli_show_rip_timers,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/timers/flush-interval",
+ .cbs.modify = ripd_instance_timers_flush_interval_modify,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/timers/holddown-interval",
+ .cbs.modify = ripd_instance_timers_holddown_interval_modify,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/timers/update-interval",
+ .cbs.modify = ripd_instance_timers_update_interval_modify,
+ },
+
+In this example, we want to call the ``rip_event()`` function only once
+regardless if all RIP timers were modified or only one of them. Without
+the ``apply_finish`` callback we’d need to call ``rip_event()`` in the
+``modify`` callback of each timer (a YANG leaf), resulting in redundant
+calls to the ``rip_event()`` function if multiple timers are changed at
+once.
+
+Bonus: libyang user types
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When writing YANG modules, it’s advisable to create derived types for
+data types that are used in multiple places (e.g. MAC addresses, IS-IS
+networks, etc). Here’s how `RFC
+7950 <https://tools.ietf.org/html/rfc7950#page-25>`__ defines derived
+types:
+
+   YANG can define derived types from base types using the “typedef”
+   statement. A base type can be either a built-in type or a derived
+   type, allowing a hierarchy of derived types.
+
+   A derived type can be used as the argument for the “type” statement.
+
+   YANG Example:
+
+   ::
+
+      typedef percent {
+          type uint8 {
+              range "0 .. 100";
+          }
+      }
+
+      leaf completed {
+          type percent;
+      }
+
+Derived types are essentially built-in types with imposed restrictions.
+As an example, the ``ipv4-address`` derived type from IETF is defined
+using the ``string`` built-in type with a ``pattern`` constraint (a
+regular expression):
+
+::
+
+ typedef ipv4-address {
+ type string {
+ pattern
+ '(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}'
+ + '([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])'
+ + '(%[\p{N}\p{L}]+)?';
+ }
+ description
+ "The ipv4-address type represents an IPv4 address in
+ dotted-quad notation. The IPv4 address may include a zone
+ index, separated by a % sign.
+
+ The zone index is used to disambiguate identical address
+ values. For link-local addresses, the zone index will
+ typically be the interface index number or the name of an
+ interface. If the zone index is not present, the default
+ zone of the device will be used.
+
+ The canonical format for the zone index is the numerical
+ format";
+ }
+
+Sometimes, however, it’s desirable to have a binary representation of
+the derived type that is different from the associated built-in type.
+Taking the ``ipv4-address`` example above, it would be more convenient
+to manipulate this YANG type using ``in_addr`` structures instead of
+strings. libyang allows us to do that using the user types plugin:
+https://netopeer.liberouter.org/doc/libyang/master/howtoschemaplugins.html#usertypes
+
+Here’s how the ``ipv4-address`` derived type is implemented in FRR
+(*yang/libyang_plugins/frr_user_types.c*):
+
+.. code:: c
+
+ static int ipv4_address_store_clb(const char *type_name, const char *value_str,
+ lyd_val *value, char **err_msg)
+ {
+ value->ptr = malloc(sizeof(struct in_addr));
+ if (!value->ptr)
+ return 1;
+
+ if (inet_pton(AF_INET, value_str, value->ptr) != 1) {
+ free(value->ptr);
+ return 1;
+ }
+
+ return 0;
+ }
+
+.. code:: c
+
+ struct lytype_plugin_list frr_user_types[] = {
+ {"ietf-inet-types", "2013-07-15", "ipv4-address",
+ ipv4_address_store_clb, free},
+ {"ietf-inet-types", "2013-07-15", "ipv4-address-no-zone",
+ ipv4_address_store_clb, free},
+ [snip]
+ {NULL, NULL, NULL, NULL, NULL} /* terminating item */
+ };
+
+Now, in addition to the string representation of the data value, libyang
+will also store the data in the binary format we specified (an
+``in_addr`` structure).
+
+Whenever a new derived type is implemented in FRR, it’s also recommended
+to write new wrappers in the *lib/yang_wrappers.c* file
+(e.g. ``yang_dnode_get_ipv4()``, ``yang_get_default_ipv4()``, etc).
+
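+As a rough illustration, here’s a minimal sketch of what such a wrapper
+might look like, assuming the libyang 1.x ``lyd_node_leaf_list``
+structure and the binary ``in_addr`` value stored by the user type
+plugin shown above (the real wrappers in *lib/yang_wrappers.c*
+additionally handle relative xpath lookups and error checking):
+
+.. code:: c
+
+   void yang_dnode_get_ipv4(struct in_addr *addr,
+                            const struct lyd_node *dnode)
+   {
+           const struct lyd_node_leaf_list *dleaf;
+
+           /* The user type plugin stored a binary copy of the address
+            * in value.ptr; copy it into the caller's structure. */
+           dleaf = (const struct lyd_node_leaf_list *)dnode;
+           memcpy(addr, dleaf->value.ptr, sizeof(struct in_addr));
+   }
+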
+Step 5: rewrite the CLI commands as dumb wrappers around the northbound callbacks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the northbound callbacks are implemented, we need to rewrite the
+associated CLI commands on top of the northbound layer. This is the
+easiest part of the retrofitting process.
+
+For protocol daemons, it’s recommended to put all CLI commands on a
+separate C file (e.g. *ripd/rip_cli.c*). This helps to keep the code
+more clean by separating the main protocol code from the user interface.
+It should also help when moving the CLI to a separate program in the
+future.
+
+For libfrr commands, it’s not possible to centralize all commands in a
+single file because the *extract.pl* script from *vtysh* treats commands
+differently depending on the file in which they are defined (e.g. DEFUNs
+from *lib/routemap.c* are installed using the ``VTYSH_RMAP`` constant,
+which identifies the daemons that support route-maps). In this case, the
+CLI commands should be rewritten but maintained in the same file.
+
+Since all CLI configuration commands from FRR will need to be rewritten,
+this is an excellent opportunity to rework this part of the code to make
+the commands easier to maintain and extend. These are the three main
+recommendations:
+
+1. Always use DEFPY instead of DEFUN to improve code readability.
+2. Always try to join multiple DEFUNs into a single DEFPY whenever
+   possible. As an example, there’s no need to have both
+   ``distance (1-255) A.B.C.D/M`` and ``distance (1-255) A.B.C.D/M WORD``
+   when a single ``distance (1-255) A.B.C.D/M [WORD]`` would suffice.
+3. When necessary, create a separate DEFPY for ``no`` commands so that
+   part of the configuration command can be made optional for
+   convenience. Example:
+   ``no timers basic [(5-2147483647) (5-2147483647) (5-2147483647)]``.
+   In this example, everything after ``no timers basic`` is ignored by
+   FRR, so it makes sense to accept ``no timers basic`` as a valid
+   command. But it also makes sense to accept all parameters
+   (``no timers basic (5-2147483647) (5-2147483647) (5-2147483647)``)
+   to make it easier to remove the command just by prefixing a “no” to
+   it.
+
+To rewrite a CLI command as a dumb wrapper around the northbound
+callbacks, use the ``nb_cli_cfg_change()`` function. This function
+accepts as a parameter an array of ``cli_config_change`` structures that
+specify the changes that need to be performed on the candidate
+configuration. Here’s the declaration of this structure (taken from the
+*lib/northbound_cli.h* file):
+
+.. code:: c
+
+ struct cli_config_change {
+ /*
+ * XPath (absolute or relative) of the configuration option being
+ * edited.
+ */
+ char xpath[XPATH_MAXLEN];
+
+ /*
+ * Operation to apply (either NB_OP_CREATE, NB_OP_MODIFY or
+ * NB_OP_DELETE).
+ */
+ enum nb_operation operation;
+
+ /*
+ * New value of the configuration option. Should be NULL for typeless
+ * YANG data (e.g. presence-containers). For convenience, NULL can also
+ * be used to restore a leaf to its default value.
+ */
+ const char *value;
+ };
+
+The ``nb_cli_cfg_change()`` function positions the CLI command on top
+of the northbound layer. Instead of changing the running configuration
+directly, this function changes the candidate configuration, as
+described in the [[Transactional CLI]] page.
+When the transactional CLI is not in use (i.e. the default mode), then
+``nb_cli_cfg_change()`` performs an implicit ``commit`` operation after
+changing the candidate configuration.
+
+ NOTE: the ``nb_cli_cfg_change()`` function clones the candidate
+ configuration before actually editing it. This way, if any error
+ happens during the editing, the original candidate is restored to
+ avoid inconsistencies. Either all changes from the configuration
+ command are performed successfully or none are. It’s like a
+ mini-transaction but happening on the candidate configuration (thus
+ the northbound callbacks are not involved).
+
+Other important details to keep in mind while rewriting the CLI
+commands:
+
+* ``nb_cli_cfg_change()`` returns CLI error codes
+  (e.g. ``CMD_SUCCESS``, ``CMD_WARNING``), so the return value of this
+  function can be used as the return value of CLI commands.
+* Calls to ``VTY_PUSH_CONTEXT`` and ``VTY_PUSH_CONTEXT_SUB`` should be
+  converted to calls to ``VTY_PUSH_XPATH``. Similarly, the following
+  macros aren’t necessary anymore and can be removed:
+  ``VTY_DECLVAR_CONTEXT``, ``VTY_DECLVAR_CONTEXT_SUB``,
+  ``VTY_GET_CONTEXT`` and ``VTY_CHECK_CONTEXT``. The
+  ``nb_cli_cfg_change()`` function uses the ``VTY_CHECK_XPATH`` macro
+  to check if the data node being edited still exists before doing
+  anything else.
+
+The examples below provide additional details about how the conversion
+should be done.
+
+Example 1
+^^^^^^^^^
+
+In this first example, the *router rip* command becomes a dumb wrapper
+around the ``ripd_instance_create()`` callback. Note that we don’t need
+to check if the ``/frr-ripd:ripd/instance`` data path already exists
+before trying to create it. The northbound will detect when this
+presence-container already exists and do nothing. The
+``VTY_PUSH_XPATH()`` macro is used to change the vty node and set the
+context for other commands under *router rip*.
+
+.. code:: c
+
+ DEFPY_NOSH (router_rip,
+ router_rip_cmd,
+ "router rip",
+ "Enable a routing process\n"
+ "Routing Information Protocol (RIP)\n")
+ {
+ int ret;
+
+ struct cli_config_change changes[] = {
+ {
+ .xpath = "/frr-ripd:ripd/instance",
+ .operation = NB_OP_CREATE,
+ .value = NULL,
+ },
+ };
+
+ ret = nb_cli_cfg_change(vty, NULL, changes, array_size(changes));
+ if (ret == CMD_SUCCESS)
+ VTY_PUSH_XPATH(RIP_NODE, changes[0].xpath);
+
+ return ret;
+ }
+
+Example 2
+^^^^^^^^^
+
+Here we can see the use of relative xpaths (starting with ``./``), which
+are more convenient than absolute xpaths (which would be
+``/frr-ripd:ripd/instance/default-metric`` in this example). This is
+possible because the use of ``VTY_PUSH_XPATH()`` in the *router rip*
+command sets the vty base xpath to ``/frr-ripd:ripd/instance``.
+
+.. code:: c
+
+ DEFPY (rip_default_metric,
+ rip_default_metric_cmd,
+ "default-metric (1-16)",
+ "Set a metric of redistribute routes\n"
+ "Default metric\n")
+ {
+ struct cli_config_change changes[] = {
+ {
+ .xpath = "./default-metric",
+ .operation = NB_OP_MODIFY,
+ .value = default_metric_str,
+ },
+ };
+
+ return nb_cli_cfg_change(vty, NULL, changes, array_size(changes));
+ }
+
+In the command below we set the ``value`` to NULL to indicate that we
+want to reset this leaf to its default value. This is better than hardcoding
+the default value because the default might change in the future. Also,
+users might define custom defaults by using YANG deviations, so it’s
+better to write code that works correctly regardless of the default
+values defined in the YANG models.
+
+.. code:: c
+
+ DEFPY (no_rip_default_metric,
+ no_rip_default_metric_cmd,
+ "no default-metric [(1-16)]",
+ NO_STR
+ "Set a metric of redistribute routes\n"
+ "Default metric\n")
+ {
+ struct cli_config_change changes[] = {
+ {
+ .xpath = "./default-metric",
+ .operation = NB_OP_MODIFY,
+ .value = NULL,
+ },
+ };
+
+ return nb_cli_cfg_change(vty, NULL, changes, array_size(changes));
+ }
+
+Example 3
+^^^^^^^^^
+
+This example shows how one command can change multiple leaves at the
+same time.
+
+.. code:: c
+
+ DEFPY (rip_timers,
+ rip_timers_cmd,
+ "timers basic (5-2147483647)$update (5-2147483647)$timeout (5-2147483647)$garbage",
+ "Adjust routing timers\n"
+ "Basic routing protocol update timers\n"
+ "Routing table update timer value in second. Default is 30.\n"
+ "Routing information timeout timer. Default is 180.\n"
+ "Garbage collection timer. Default is 120.\n")
+ {
+ struct cli_config_change changes[] = {
+ {
+ .xpath = "./timers/update-interval",
+ .operation = NB_OP_MODIFY,
+ .value = update_str,
+ },
+ {
+ .xpath = "./timers/holddown-interval",
+ .operation = NB_OP_MODIFY,
+ .value = timeout_str,
+ },
+ {
+ .xpath = "./timers/flush-interval",
+ .operation = NB_OP_MODIFY,
+ .value = garbage_str,
+ },
+ };
+
+ return nb_cli_cfg_change(vty, NULL, changes, array_size(changes));
+ }
+
+Example 4
+^^^^^^^^^
+
+This example shows how to create a list entry:
+
+.. code:: c
+
+ DEFPY (rip_distance_source,
+ rip_distance_source_cmd,
+ "distance (1-255) A.B.C.D/M$prefix [WORD$acl]",
+ "Administrative distance\n"
+ "Distance value\n"
+ "IP source prefix\n"
+ "Access list name\n")
+ {
+ char xpath_list[XPATH_MAXLEN];
+ struct cli_config_change changes[] = {
+ {
+ .xpath = ".",
+ .operation = NB_OP_CREATE,
+ },
+ {
+ .xpath = "./distance",
+ .operation = NB_OP_MODIFY,
+ .value = distance_str,
+ },
+ {
+ .xpath = "./access-list",
+ .operation = acl ? NB_OP_MODIFY : NB_OP_DELETE,
+ .value = acl,
+ },
+ };
+
+ snprintf(xpath_list, sizeof(xpath_list), "./distance/source[prefix='%s']",
+ prefix_str);
+
+ return nb_cli_cfg_change(vty, xpath_list, changes, array_size(changes));
+ }
+
+The ``xpath_list`` variable is used to hold the xpath that identifies
+the list entry. The keys of the list entry should be embedded in this
+xpath and don’t need to be part of the array of configuration changes.
+All entries from the ``changes`` array use relative xpaths which are
+based on the xpath of the list entry.
+
+The ``access-list`` optional leaf can be either modified or deleted
+depending on whether the optional *WORD* parameter is present or not.
+
+When deleting a list entry, all non-key leaves can be ignored:
+
+.. code:: c
+
+ DEFPY (no_rip_distance_source,
+ no_rip_distance_source_cmd,
+ "no distance (1-255) A.B.C.D/M$prefix [WORD$acl]",
+ NO_STR
+ "Administrative distance\n"
+ "Distance value\n"
+ "IP source prefix\n"
+ "Access list name\n")
+ {
+ char xpath_list[XPATH_MAXLEN];
+ struct cli_config_change changes[] = {
+ {
+ .xpath = ".",
+ .operation = NB_OP_DELETE,
+ },
+ };
+
+ snprintf(xpath_list, sizeof(xpath_list), "./distance/source[prefix='%s']",
+ prefix_str);
+
+ return nb_cli_cfg_change(vty, xpath_list, changes, 1);
+ }
+
+Example 5
+^^^^^^^^^
+
+This example shows a DEFPY statement that performs two validations
+before calling ``nb_cli_cfg_change()``:
+
+.. code:: c
+
+ DEFPY (ip_rip_authentication_string,
+ ip_rip_authentication_string_cmd,
+ "ip rip authentication string LINE$password",
+ IP_STR
+ "Routing Information Protocol\n"
+ "Authentication control\n"
+ "Authentication string\n"
+ "Authentication string\n")
+ {
+ struct cli_config_change changes[] = {
+ {
+ .xpath = "./frr-ripd:rip/authentication/password",
+ .operation = NB_OP_MODIFY,
+ .value = password,
+ },
+ };
+
+ if (strlen(password) > 16) {
+ vty_out(vty,
+ "%% RIPv2 authentication string must be shorter than 16\n");
+ return CMD_WARNING_CONFIG_FAILED;
+ }
+
+ if (yang_dnode_exists(vty->candidate_config->dnode, "%s%s",
+ VTY_GET_XPATH,
+ "/frr-ripd:rip/authentication/key-chain")) {
+ vty_out(vty, "%% key-chain configuration exists\n");
+ return CMD_WARNING_CONFIG_FAILED;
+ }
+
+ return nb_cli_cfg_change(vty, NULL, changes, array_size(changes));
+ }
+
+These two validations are not strictly necessary since the configuration
+change is validated using libyang afterwards. The issue with the libyang
+validation is that the error messages from libyang are too verbose:
+
+::
+
+ ripd# conf t
+ ripd(config)# interface eth0
+ ripd(config-if)# ip rip authentication string XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ % Failed to edit candidate configuration.
+
+ Value "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" does not satisfy the constraint "1..16" (range, length, or pattern).
+ Failed to create node "authentication-password" as a child of "rip".
+ YANG path: /frr-interface:lib/interface[name='eth0'][vrf='Default-IP-Routing-Table']/frr-ripd:rip/authentication-password
+
+On the other hand, the original error message from ripd is much cleaner:
+
+::
+
+ ripd# conf t
+ ripd(config)# interface eth0
+ ripd(config-if)# ip rip authentication string XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ % RIPv2 authentication string must be shorter than 16
+
+The second validation is a bit more complex. If we try to create the
+``authentication/password`` leaf when the ``authentication/key-chain``
+leaf already exists (both are under a YANG *choice* statement), libyang
+will automatically delete the ``authentication/key-chain`` and create
+``authentication/password`` in its place. This is different from the
+original ripd behavior where the *ip rip authentication key-chain*
+command must be removed before configuring the *ip rip authentication
+string* command.
+
+In the spirit of not introducing any backward-incompatible changes in
+the CLI, converted commands should retain some of their validation
+checks to preserve their original behavior.
+
+Step 6: implement the ``cli_show`` callbacks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The traditional method used by FRR to display the running configuration
+consists of looping through all CLI nodes and calling their ``func``
+callbacks one by one, which in turn read the configuration from internal
+variables and dump them to the terminal in the form of CLI commands.
+
+The problem with this approach is twofold. First, since the callbacks
+read the configuration from internal variables, they can’t display
+anything other than the running configuration. Second, they don’t have
+the ability to display default values when requested by the user
+(e.g. *show configuration candidate with-defaults*).
+
+The new northbound architecture solves these problems by introducing a
+new callback: ``cli_show``. Here’s the signature of this function (taken
+from the *lib/northbound.h* file):
+
+.. code:: c
+
+ /*
+ * Optional callback to show the CLI command associated to the given
+ * YANG data node.
+ *
+ * vty
+ * the vty terminal to dump the configuration to
+ *
+ * dnode
+ * libyang data node that should be shown in the form of a CLI
+ * command
+ *
+ * show_defaults
+ * specify whether to display default configuration values or not.
+ * This parameter can be ignored most of the time since the
+ * northbound doesn't call this callback for default leaves or
+ * non-presence containers that contain only default child nodes.
+ * The exception are commands associated to multiple configuration
+ * options, in which case it might be desirable to hide one or more
+ * parts of the command when this parameter is set to false.
+ */
+ void (*cli_show)(struct vty *vty, struct lyd_node *dnode,
+ bool show_defaults);
+
+One of the main differences to the old CLI ``func`` callbacks is that
+the ``cli_show`` callbacks are associated to YANG data paths and not to
+CLI nodes. This means we can define one separate callback for each CLI
+command, making the code more modular and easier to maintain (among
+other advantages that will be more clear later). For enhanced code
+readability, it’s recommended to position the ``cli_show`` callbacks
+immediately after their associated command definitions (DEFPYs).
+
+The ``cli_show`` callbacks are used by the ``nb_cli_show_config_cmds()``
+function to display configurations stored inside ``nb_config``
+structures. The configuration being displayed can be anything from the
+running configuration (*show configuration running*), a candidate
+configuration (*show configuration candidate*) or a rollback
+configuration (*show configuration transaction (1-4294967296)*). The
+``nb_cli_show_config_cmds()`` function works by iterating over all data
+nodes from the given configuration and calling the ``cli_show`` callback
+for the nodes where it’s defined. If a list has dozens of entries, the
+``cli_show`` callback associated to this list will be called multiple
+times with the ``dnode`` parameter pointing to different list entries on
+each iteration.
+
+For backward compatibility with the *show running-config* command, we
+can’t get rid of the CLI ``func`` callbacks at this point in time.
+However, we can make the CLI ``func`` callbacks call the corresponding
+``cli_show`` callbacks to avoid code duplication. The
+``nb_cli_show_dnode_cmds()`` function can be used for that purpose. Once
+the CLI retrofitting process finishes for all FRR daemons, we can remove
+the legacy CLI ``func`` callbacks and turn *show running-config* into a
+shorthand for *show configuration running*.
+
+Regarding displaying configuration with default values, this is
+something that is taken care of by the ``nb_cli_show_config_cmds()``
+function itself. When the *show configuration* command is used without
+the *with-defaults* option, ``nb_cli_show_config_cmds()`` will skip
+calling ``cli_show`` callbacks for data nodes that contain only default
+values (e.g. default leaves or non-presence containers that contain only
+default child nodes). There are however some exceptional cases where the
+implementer of the ``cli_show`` callback should take into consideration
+if default values should be displayed or not. This and other concepts
+will be explained in more detail in the examples below.
+
+.. _example-1-1:
+
+Example 1
+^^^^^^^^^
+
+Command: ``default-metric (1-16)``
+
+YANG representation:
+
+.. code:: yang
+
+ leaf default-metric {
+ type uint8 {
+ range "1..16";
+ }
+ default "1";
+ description
+ "Default metric of redistributed routes.";
+ }
+
+Placement of the ``cli_show`` callback:
+
+.. code:: diff
+
+ {
+ .xpath = "/frr-ripd:ripd/instance/default-metric",
+ .cbs.modify = ripd_instance_default_metric_modify,
+ + .cbs.cli_show = cli_show_rip_default_metric,
+ },
+
+Implementation of the ``cli_show`` callback:
+
+.. code:: c
+
+ void cli_show_rip_default_metric(struct vty *vty, struct lyd_node *dnode,
+ bool show_defaults)
+ {
+ vty_out(vty, " default-metric %s\n",
+ yang_dnode_get_string(dnode, NULL));
+ }
+
+In this first example, the *default-metric* command was modeled using a
+YANG leaf, and we added a new ``cli_show`` callback attached to the YANG
+path of this leaf.
+
+The callback makes use of the ``yang_dnode_get_string()`` function to
+obtain the string value of the configuration option. The following would
+also be possible:
+
+.. code:: c
+
+ vty_out(vty, " default-metric %u\n",
+ yang_dnode_get_uint8(dnode, NULL));
+
+Both options are possible because libyang stores both a binary
+representation and a textual representation of all values stored in a
+data node (``lyd_node``). For simplicity, it’s recommended to always use
+``yang_dnode_get_string()`` in the ``cli_show`` callbacks.
+
+.. _example-2-1:
+
+Example 2
+^^^^^^^^^
+
+Command: ``router rip``
+
+YANG representation:
+
+.. code:: yang
+
+ container instance {
+ presence "Present if the RIP protocol is enabled.";
+ description
+ "RIP routing instance.";
+ [snip]
+ }
+
+Placement of the ``cli_show`` callback:
+
+.. code:: diff
+
+ {
+ .xpath = "/frr-ripd:ripd/instance",
+ .cbs.create = ripd_instance_create,
+ .cbs.delete = ripd_instance_delete,
+ + .cbs.cli_show = cli_show_router_rip,
+ },
+
+Implementation of the ``cli_show`` callback:
+
+.. code:: c
+
+ void cli_show_router_rip(struct vty *vty, struct lyd_node *dnode,
+ bool show_defaults)
+ {
+ vty_out(vty, "!\n");
+ vty_out(vty, "router rip\n");
+ }
+
+In this example, the ``cli_show`` callback doesn’t need to obtain any
+value from the ``dnode`` parameter since presence-containers don’t hold
+any data (apart from their child nodes, but they have their own
+``cli_show`` callbacks).
+
+.. _example-3-1:
+
+Example 3
+^^^^^^^^^
+
+Command: ``timers basic (5-2147483647) (5-2147483647) (5-2147483647)``
+
+YANG representation:
+
+.. code:: yang
+
+ container timers {
+ description
+ "Settings of basic timers";
+ leaf flush-interval {
+ type uint32 {
+ range "5..2147483647";
+ }
+ units "seconds";
+ default "120";
+ description
+ "Interval before a route is flushed from the routing
+ table.";
+ }
+ leaf holddown-interval {
+ type uint32 {
+ range "5..2147483647";
+ }
+ units "seconds";
+ default "180";
+ description
+ "Interval before better routes are released.";
+ }
+ leaf update-interval {
+ type uint32 {
+ range "5..2147483647";
+ }
+ units "seconds";
+ default "30";
+ description
+ "Interval at which RIP updates are sent.";
+ }
+ }
+
+Placement of the ``cli_show`` callback:
+
+.. code:: diff
+
+ {
+ + .xpath = "/frr-ripd:ripd/instance/timers",
+ + .cbs.cli_show = cli_show_rip_timers,
+ + },
+ + {
+ .xpath = "/frr-ripd:ripd/instance/timers/flush-interval",
+ .cbs.modify = ripd_instance_timers_flush_interval_modify,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/timers/holddown-interval",
+ .cbs.modify = ripd_instance_timers_holddown_interval_modify,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/timers/update-interval",
+ .cbs.modify = ripd_instance_timers_update_interval_modify,
+ },
+
+Implementation of the ``cli_show`` callback:
+
+.. code:: c
+
+ void cli_show_rip_timers(struct vty *vty, struct lyd_node *dnode,
+ bool show_defaults)
+ {
+ vty_out(vty, " timers basic %s %s %s\n",
+ yang_dnode_get_string(dnode, "./update-interval"),
+ yang_dnode_get_string(dnode, "./holddown-interval"),
+ yang_dnode_get_string(dnode, "./flush-interval"));
+ }
+
+This command is a bit different since it changes three leaves at the
+same time. This means we need to have a single ``cli_show`` callback in
+order to display the three leaves together on the same line.
+
+The new ``cli_show_rip_timers()`` callback was added attached to the
+*timers* non-presence container that groups the three leaves. Without
+the *timers* non-presence container we’d need to display the *timers
+basic* command inside the ``cli_show_router_rip()`` callback, which
+would break our requirement of having a separate ``cli_show`` callback
+for each configuration command.
+
+.. _example-4-1:
+
+Example 4
+^^^^^^^^^
+
+Command:
+``redistribute <kernel|connected|static|ospf|isis|bgp|eigrp|nhrp|table|vnc|babel|sharp> [{metric (0-16)|route-map WORD}]``
+
+YANG representation:
+
+.. code:: yang
+
+ list redistribute {
+ key "protocol";
+ description
+ "Redistributes routes learned from other routing protocols.";
+ leaf protocol {
+ type frr-route-types:frr-route-types-v4;
+ description
+ "Routing protocol.";
+ must '. != "rip"';
+ }
+ leaf route-map {
+ type string {
+ length "1..max";
+ }
+ description
+ "Applies the conditions of the specified route-map to
+ routes that are redistributed into the RIP routing
+ instance.";
+ }
+ leaf metric {
+ type uint8 {
+ range "0..16";
+ }
+ description
+ "Metric used for the redistributed route. If a metric is
+ not specified, the metric configured with the
+ default-metric attribute in RIP router configuration is
+ used. If the default-metric attribute has not been
+ configured, the default metric for redistributed routes
+ is 0.";
+ }
+ }
+
+Placement of the ``cli_show`` callback:
+
+.. code:: diff
+
+ {
+ .xpath = "/frr-ripd:ripd/instance/redistribute",
+ .cbs.create = ripd_instance_redistribute_create,
+ .cbs.delete = ripd_instance_redistribute_delete,
+ + .cbs.cli_show = cli_show_rip_redistribute,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/redistribute/route-map",
+ .cbs.modify = ripd_instance_redistribute_route_map_modify,
+ .cbs.delete = ripd_instance_redistribute_route_map_delete,
+ },
+ {
+ .xpath = "/frr-ripd:ripd/instance/redistribute/metric",
+ .cbs.modify = ripd_instance_redistribute_metric_modify,
+ .cbs.delete = ripd_instance_redistribute_metric_delete,
+ },
+
+Implementation of the ``cli_show`` callback:
+
+.. code:: c
+
+ void cli_show_rip_redistribute(struct vty *vty, struct lyd_node *dnode,
+ bool show_defaults)
+ {
+ vty_out(vty, " redistribute %s",
+ yang_dnode_get_string(dnode, "./protocol"));
+ if (yang_dnode_exists(dnode, "./metric"))
+ vty_out(vty, " metric %s",
+ yang_dnode_get_string(dnode, "./metric"));
+ if (yang_dnode_exists(dnode, "./route-map"))
+ vty_out(vty, " route-map %s",
+ yang_dnode_get_string(dnode, "./route-map"));
+ vty_out(vty, "\n");
+ }
+
+Similar to the previous example, the *redistribute* command changes
+several leaves at the same time, and we need a single callback to
+display all leaves on a single line, in accordance with the CLI command. In
+this case, the leaves are already grouped by a YANG list so there’s no
+need to add a non-presence container. The new ``cli_show`` callback was
+attached to the YANG path of the list.
+
+It’s also worth noting the use of the ``yang_dnode_exists()`` function
+to check if optional leaves exist in the configuration before displaying
+them.
+
+.. _example-5-1:
+
+Example 5
+^^^^^^^^^
+
+Command:
+``ip rip authentication mode <md5 [auth-length <rfc|old-ripd>]|text>``
+
+YANG representation:
+
+.. code:: yang
+
+ container authentication-scheme {
+ description
+ "Specify the authentication scheme for the RIP interface";
+ leaf mode {
+ type enumeration {
+ [snip]
+ }
+ default "none";
+ description
+ "Specify the authentication mode.";
+ }
+ leaf md5-auth-length {
+ when "../mode = 'md5'";
+ type enumeration {
+ [snip]
+ }
+ default "20";
+ description
+ "MD5 authentication data length.";
+ }
+ }
+
+Placement of the ``cli_show`` callback:
+
+.. code:: diff
+
+ + {
+ + .xpath = "/frr-interface:lib/interface/frr-ripd:rip/authentication-scheme",
+ + .cbs.cli_show = cli_show_ip_rip_authentication_scheme,
+ },
+ {
+ .xpath = "/frr-interface:lib/interface/frr-ripd:rip/authentication-scheme/mode",
+ .cbs.modify = lib_interface_rip_authentication_scheme_mode_modify,
+ },
+ {
+ .xpath = "/frr-interface:lib/interface/frr-ripd:rip/authentication-scheme/md5-auth-length",
+ .cbs.modify = lib_interface_rip_authentication_scheme_md5_auth_length_modify,
+ .cbs.delete = lib_interface_rip_authentication_scheme_md5_auth_length_delete,
+ },
+
+Implementation of the ``cli_show`` callback:
+
+.. code:: c
+
+ void cli_show_ip_rip_authentication_scheme(struct vty *vty,
+ struct lyd_node *dnode,
+ bool show_defaults)
+ {
+ switch (yang_dnode_get_enum(dnode, "./mode")) {
+ case RIP_NO_AUTH:
+ vty_out(vty, " no ip rip authentication mode\n");
+ break;
+ case RIP_AUTH_SIMPLE_PASSWORD:
+ vty_out(vty, " ip rip authentication mode text\n");
+ break;
+ case RIP_AUTH_MD5:
+ vty_out(vty, " ip rip authentication mode md5");
+ if (show_defaults
+ || !yang_dnode_is_default(dnode, "./md5-auth-length")) {
+ if (yang_dnode_get_enum(dnode, "./md5-auth-length")
+ == RIP_AUTH_MD5_SIZE)
+ vty_out(vty, " auth-length rfc");
+ else
+ vty_out(vty, " auth-length old-ripd");
+ }
+ vty_out(vty, "\n");
+ break;
+ }
+ }
+
+This is the most complex ``cli_show`` callback we have in ripd. Its
+complexity comes from the following:
+
+- The ``ip rip authentication mode ...`` command changes two YANG
+  leaves at the same time.
+- Part of the command should be hidden when the ``show_defaults``
+  parameter is set to false.
+
+This is the behavior we want to implement:
+
+::
+
+ ripd(config)# interface eth0
+ ripd(config-if)# ip rip authentication mode md5
+ ripd(config-if)#
+ ripd(config-if)# show configuration candidate
+ Configuration:
+ !
+ [snip]
+ !
+ interface eth0
+ ip rip authentication mode md5
+ !
+ end
+ ripd(config-if)#
+ ripd(config-if)# show configuration candidate with-defaults
+ Configuration:
+ !
+ [snip]
+ !
+ interface eth0
+ [snip]
+ ip rip authentication mode md5 auth-length old-ripd
+ !
+ end
+
+Note that ``auth-length old-ripd`` should be hidden unless the
+configuration is shown using the *with-defaults* option. This is why the
+``cli_show_ip_rip_authentication_scheme()`` callback needs to consult
+the value of the *show_defaults* parameter. It’s expected that only a
+very small minority of all ``cli_show`` callbacks will need to consult
+the *show_defaults* parameter (there’s a chance this might be the only
+case!).
+
+In the case of the *timers basic* command seen before, we need to
+display the value of all leaves even if only one of them has a value
+different from the default. Hence the ``cli_show_rip_timers()`` callback
+was able to completely ignore the *show_defaults* parameter.
+
+Step 7: consolidation
+~~~~~~~~~~~~~~~~~~~~~
+
+As mentioned in the fourth step, the northbound retrofitting process can
+happen gradually over time, since both “old” and “new” commands can
+coexist without problems. Once all commands from a given daemon have
+been converted, we can proceed to the consolidation step, which consists
+of the following:
+
+- Remove the vty configuration lock, which is enabled by default in all
+  daemons. Now multiple users should be able to edit the configuration
+  concurrently, using either shared or private candidate configurations.
+  Reference commit:
+  `57dccdb1 <https://github.com/opensourcerouting/frr/commit/57dccdb18b799556214dcfb8943e248c0bf1f6a6>`__.
+- Stop using the qobj infrastructure to keep track of configuration
+  objects. This is no longer necessary; the northbound uses a similar
+  mechanism to keep track of YANG data nodes in the candidate
+  configuration. Reference commit:
+  `4e6d63ce <https://github.com/opensourcerouting/frr/commit/4e6d63cebd988af650c1c29d0f2e5a251c8d2e7a>`__.
+- Make the daemon SIGHUP handler re-read the configuration file (and
+  ensure it’s not doing anything other than that). Reference commit:
+  `5e57edb4 <https://github.com/opensourcerouting/frr/commit/5e57edb4b71ff03f9a22d9ec1412c3c5167f90cf>`__.
+
+Final Considerations
+--------------------
+
+Testing
+~~~~~~~
+
+Converting CLI commands to the new northbound model can be a complicated
+task for beginners, but the more commands one converts, the easier it
+gets. It’s highly recommended to perform as much testing as possible on
+the converted commands to reduce the likelihood of introducing
+regressions. Tools like topotests, ANVL and the `CLI
+fuzzer <https://github.com/rwestphal/frr-cli-fuzzer>`__ can be used to
+catch hidden bugs that might be present. As usual, it’s also recommended
+to use valgrind and static code analyzers to catch other types of
+problems like memory leaks.
+
+Amount of work
+~~~~~~~~~~~~~~
+
+The output below gives a rough estimate of the total number of
+configuration commands that need to be converted per daemon:
+
+.. code:: sh
+
+ $ for dir in lib zebra bgpd ospfd ospf6d isisd ripd ripngd eigrpd pimd pbrd ldpd nhrpd babeld ; do echo -n "$dir: " && cd $dir && grep -ERn "DEFUN|DEFPY" * | grep -Ev "clippy|show|clear" | wc -l && cd ..; done
+ lib: 302
+ zebra: 181
+ bgpd: 569
+ ospfd: 198
+ ospf6d: 99
+ isisd: 126
+ ripd: 64
+ ripngd: 44
+ eigrpd: 58
+ pimd: 113
+ pbrd: 9
+ ldpd: 46
+ nhrpd: 24
+ babeld: 28
+
+As can be seen, the northbound retrofitting process will demand a lot
+of work from FRR developers and should take months to complete. Everyone
+is welcome to collaborate!
diff --git a/doc/developer/northbound/transactional-cli.rst b/doc/developer/northbound/transactional-cli.rst
new file mode 100644
index 0000000..439bb6a
--- /dev/null
+++ b/doc/developer/northbound/transactional-cli.rst
@@ -0,0 +1,244 @@
+Table of Contents
+-----------------
+
+- `Introduction <#introduction>`__
+- `Configuration modes <#config-modes>`__
+- `New commands <#new-commands>`__
+
+ - `commit check <#cmd1>`__
+ - `commit <#cmd2>`__
+ - `discard <#cmd3>`__
+ - `configuration database max-transactions <#cmd4>`__
+ - `configuration load <#cmd5>`__
+ - `rollback configuration <#cmd6>`__
+ - `show configuration candidate <#cmd7>`__
+ - `show configuration compare <#cmd8>`__
+ - `show configuration running <#cmd9>`__
+ - `show configuration transaction <#cmd10>`__
+ - `show yang module <#cmd11>`__
+ - `show yang module-translator <#cmd12>`__
+ - `update <#cmd13>`__
+ - `yang module-translator load <#cmd14>`__
+ - `yang module-translator unload <#cmd15>`__
+
+Introduction
+~~~~~~~~~~~~
+
+All FRR daemons have built-in support for the CLI, which can be accessed
+either through local telnet or via the vty socket (e.g. by using
+*vtysh*). This will not change with the introduction of the Northbound
+API. However, a new command-line option will be available for all FRR
+daemons: ``--tcli``. When given, this option makes the daemon start with
+a transactional CLI, and configuration commands behave a bit
+differently. Instead of editing the running configuration, they edit
+the candidate configuration. In other words, configuration commands
+aren’t applied immediately; that has to be done in a separate step
+using the new ``commit`` command.
+
+The transactional CLI simply leverages the new capabilities provided by
+the Northbound API and exposes the concept of candidate configurations
+to CLI users too. When the transactional mode is not used, the
+configuration commands also edit the candidate configuration, but
+there’s an implicit ``commit`` after each command.
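+
+For illustration, a hypothetical transactional session might look like
+this (prompts and commands as used elsewhere in this document):
+
+::
+
+    ripd(config)# router rip
+    ripd(config-router)# default-metric 2
+    ripd(config-router)# exit
+    ripd(config)# commit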
+
+In order for the transactional CLI to work, all configuration commands
+need to be converted to the new northbound model. Commands not converted
+to the new northbound model will change the running configuration
+directly since they bypass the FRR northbound layer. For this reason,
+starting a daemon with the transactional CLI is not advisable unless all
+of its commands have already been converted. When that’s not the case,
+we can run into a situation like this:
+
+::
+
+ ospfd(config)# router ospf
+ ospfd(config-router)# ospf router-id 1.1.1.1
+ [segfault in ospfd]
+
+The segfault above can happen if ``router ospf`` edits the candidate
+configuration but ``ospf router-id 1.1.1.1`` edits the running
+configuration. The second command tries to set
+``ospf->router_id_static`` but, since the previous ``router ospf``
+command hasn’t been committed yet, the ``ospf`` global variable is set to
+NULL, which leads to the crash. Besides this problem, having a set of
+commands that edit the candidate configuration and others that edit the
+running configuration is confusing at best. The ``--tcli`` option should
+be used only by developers until the northbound retrofitting process is
+complete.
+
+Configuration modes
+~~~~~~~~~~~~~~~~~~~
+
+When using the transactional CLI (``--tcli``), FRR supports three
+different forms of the ``configure`` command:
+
+- ``configure terminal``: in this mode, a single candidate configuration
+  is shared by all users. This means that one user might delete a
+  configuration object that’s being edited by another user, in which
+  case the CLI will detect and report the problem. If one user issues
+  the ``commit`` command, all changes done by all users are committed.
+- ``configure private``: users have a private candidate configuration
+  that is edited separately from the other users. The ``commit`` command
+  commits only the changes done by the user.
+- ``configure exclusive``: similar to ``configure private``, but also
+  locks the running configuration to prevent other users from changing
+  it. The configuration lock is released when the user exits the
+  configuration mode.
+
+When using ``configure terminal`` or ``configure private``, the
+candidate configuration being edited might become outdated if another
+user commits a different candidate configuration in another session.
+TODO: show image to illustrate the problem.
+
+New commands
+~~~~~~~~~~~~
+
+The list below contains the new CLI commands introduced by the
+Northbound API. The commands are available when a daemon is started using the
+transactional CLI (``--tcli``). Currently ``vtysh`` doesn’t support any
+of these new commands.
+
+Please refer to the Demos page to see a demo of the transactional
+CLI in action.
+
+--------------
+
+``commit check``
+''''''''''''''''
+
+Check if the candidate configuration is valid or not.
+
+``commit [force] [comment LINE...]``
+''''''''''''''''''''''''''''''''''''
+
+Commit the changes done in the candidate configuration into the running
+configuration.
+
+Options:
+
+- ``force``: commit even if the candidate configuration is outdated.
+  It’s usually better to use the ``update`` command instead.
+- ``comment LINE...``: assign a comment to the configuration
+  transaction. This comment is displayed when viewing the recorded
+  transactions in the output of the ``show configuration transaction``
+  command.
+
+``discard``
+'''''''''''
+
+Discard the changes done in the candidate configuration.
+
+``configuration database max-transactions (1-100)``
+'''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Set the maximum number of transactions to store in the rollback log.
+
+``configuration load <file [<json|xml> [translate WORD]] FILENAME|transaction (1-4294967296)> [replace]``
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Load a new configuration into the candidate configuration. When loading
+the configuration from a file, it’s assumed that the configuration will
+be in the form of CLI commands by default. The ``json`` and ``xml``
+options can be used to load configurations in the JSON and XML formats,
+respectively. It’s also possible to load a configuration from a previous
+transaction by specifying the desired transaction ID
+(``(1-4294967296)``).
+
+Options:
+
+- ``translate WORD``: translate the JSON/XML configuration file using
+  the YANG module translator.
+- ``replace``: replace the candidate with the loaded configuration. The
+  default is to merge the loaded configuration into the candidate
+  configuration.
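+
+For example, to load a configuration file in the CLI format, replacing
+the current candidate (the path below is illustrative):
+
+::
+
+    ripd(config)# configuration load file /etc/frr/ripd.conf replace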
+
+``rollback configuration (1-4294967296)``
+'''''''''''''''''''''''''''''''''''''''''
+
+Roll back the running configuration to a previous configuration
+identified by its transaction ID (``(1-4294967296)``).
+
+``show configuration candidate [<json|xml> [translate WORD]] [<with-defaults|changes>]``
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Show the candidate configuration.
+
+Options:
+
+- ``json``: show the configuration in the JSON format.
+- ``xml``: show the configuration in the XML format.
+- ``translate WORD``: translate the JSON/XML output using the YANG
+  module translator.
+- ``with-defaults``: show default values that are hidden by default.
+- ``changes``: show only the changes done in the candidate
+  configuration.
+
+``show configuration compare <candidate|running|transaction (1-4294967296)> <candidate|running|transaction (1-4294967296)> [<json|xml> [translate WORD]]``
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Show the difference between two different configurations.
+
+Options:
+
+- ``json``: show the configuration differences in the JSON format.
+- ``xml``: show the configuration differences in the XML format.
+- ``translate WORD``: translate the JSON/XML output using the YANG
+  module translator.
+
+``show configuration running [<json|xml> [translate WORD]] [with-defaults]``
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Show the running configuration.
+
+Options:
+
+- ``json``: show the configuration in the JSON format.
+- ``xml``: show the configuration in the XML format.
+- ``translate WORD``: translate the JSON/XML output using the YANG
+  module translator.
+- ``with-defaults``: show default values that are hidden by default.
+
+ NOTE: ``show configuration running`` shows only the running
+ configuration as known by the northbound layer. Configuration
+ commands not converted to the new northbound model will not be
+ displayed. To show the full running configuration, the legacy
+ ``show running-config`` command must be used.
+
+``show configuration transaction [(1-4294967296) [<json|xml> [translate WORD]] [changes]]``
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+When a transaction ID (``(1-4294967296)``) is given, show the
+configuration associated with the previously committed transaction.
+
+When a transaction ID is not given, show all recorded transactions in
+the rollback log.
+
+Options:
+
+- ``json``: show the configuration in the JSON format.
+- ``xml``: show the configuration in the XML format.
+- ``translate WORD``: translate the JSON/XML output using the YANG
+  module translator.
+- ``with-defaults``: show default values that are hidden by default.
+- ``changes``: show changes compared to the previous transaction.
+
+``show yang module [module-translator WORD] [WORD <summary|tree|yang|yin>]``
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+When a YANG module is not given, show all loaded YANG modules.
+Otherwise, show detailed information about the given module.
+
+Options:
+
+- ``module-translator WORD``: change the context to modules loaded by
+  the specified YANG module translator.
+- ``summary``: display summary information about the module.
+- ``tree``: display the module in the tree (RFC 8340) format.
+- ``yang``: display the module in the YANG format.
+- ``yin``: display the module in the YIN format.
+
+``show yang module-translator``
+'''''''''''''''''''''''''''''''
+
+Show all loaded YANG module translators.
+
+``update``
+''''''''''
+
+Rebase the candidate configuration on top of the latest running
+configuration. Conflicts are resolved automatically by giving preference
+to the changes done in the candidate configuration.
+
+The candidate configuration might be outdated if the running
+configuration was updated after the candidate was created.
+
+``yang module-translator load FILENAME``
+''''''''''''''''''''''''''''''''''''''''
+
+Load a YANG module translator from the filesystem.
+
+``yang module-translator unload WORD``
+''''''''''''''''''''''''''''''''''''''
+
+Unload a YANG module translator identified by its name.
diff --git a/doc/developer/northbound/yang-module-translator.rst b/doc/developer/northbound/yang-module-translator.rst
new file mode 100644
index 0000000..aa527ce
--- /dev/null
+++ b/doc/developer/northbound/yang-module-translator.rst
@@ -0,0 +1,629 @@
+Table of Contents
+-----------------
+
+- `Introduction <#introduction>`__
+- `Deviation Modules <#deviation-modules>`__
+- `Translation Tables <#translation-tables>`__
+- `CLI Demonstration <#cli-demonstration>`__
+- `Implementation Details <#implementation-details>`__
+
+Introduction
+------------
+
+One key requirement for the FRR northbound architecture is that it
+should be possible to configure/monitor FRR using different sets of YANG
+models. This is especially important considering that the industry
+hasn’t reached a consensus to provide a single source of standard models
+for network management. At this moment both the IETF and OpenConfig
+models are widely implemented and are unlikely to converge, at least not
+in the short term. In the ideal scenario, management applications should
+be able to use either IETF or OpenConfig models to configure and monitor
+FRR programmatically (or even both at the same time!).
+
+But how can FRR support multiple sets of YANG models at the same time?
+There must be only a single source of truth that models the existing
+implementation accurately (the native models). Writing different code
+paths or callbacks for different models would not be viable: it would
+lead to a lot of duplicated code and extra maintenance overhead.
+
+In order to support different sets of YANG modules without introducing
+the overhead of writing additional code, the solution is to create a
+mechanism that dynamically translates YANG instance data from
+non-native models to native models and vice versa. Based on this idea,
+an experimental YANG module translator was implemented within the FRR
+northbound layer. The translator works by translating XPaths at runtime
+using translation tables provided by the user. The translator itself is
+modeled using YANG and users can create translators using simple JSON
+files.
+
+A YANG module translator consists of two components: deviation modules
+and translation tables.
+
+Deviation Modules
+-----------------
+
+The first step when writing a YANG module translator is to create a
+`deviations <https://tools.ietf.org/html/rfc7950#page-131>`__ module for
+each module that is going to be translated. This is necessary because
+in most cases it won’t be possible to create a perfect translator that
+covers the non-native models in their entirety. Some non-native modules
+might contain nodes that can’t be mapped to a corresponding node in the
+FRR native models. This is either because the corresponding
+functionality is not implemented in FRR or because it’s modeled in a
+different way that is incompatible.
+
+As an example, *ripd* doesn’t have BFD support yet, so we need to create
+a YANG deviation to modify the *ietf-rip* module and remove the ``bfd``
+container from it:
+
+.. code:: yang
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:interfaces/ietf-rip:interface/ietf-rip:bfd" {
+ deviate not-supported;
+ }
+
+In the example below, while both the *frr-ripd* and *ietf-rip* modules
+support RIP authentication, they model the authentication data in
+different ways, making translation impossible given the constraints of
+the current module translator. A new deviation is necessary to remove
+the ``authentication`` container from the *ietf-rip* module:
+
+.. code:: yang
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:interfaces/ietf-rip:interface/ietf-rip:authentication" {
+ deviate not-supported;
+ }
+
+..
+
+ NOTE: it should be possible to translate the
+ ``ietf-rip:authentication`` container if the *frr-ripd* module is
+ modified to model the corresponding data in a compatible way. Another
+ option is to improve the module translator to make more complex
+ translations possible, instead of requiring one-to-one XPath
+ mappings.
+
+Sometimes creating a mapping between nodes from the native and
+non-native models is possible, but the nodes have different properties
+that need to be normalized to allow the translation. In the example
+below, a YANG deviation is used to change the type and the default value
+of a node from the ``ietf-rip`` module.
+
+.. code:: yang
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:timers/ietf-rip:flush-interval" {
+ deviate replace {
+ default "120";
+ }
+ deviate replace {
+ type uint32;
+ }
+ }
+
+The deviation modules allow the management applications to know which
+parts of the custom modules (e.g. IETF/OC) can be used to configure and
+monitor FRR.
+
+In order to facilitate the process of creating YANG deviation modules,
+the *gen_yang_deviations* tool was created to automate part of the
+process. This tool creates a “not-supported” deviation for all nodes
+from the given non-native module. Example:
+
+::
+
+ $ tools/gen_yang_deviations ietf-rip > yang/ietf/frr-deviations-ietf-rip.yang
+ $ head -n 40 yang/ietf/frr-deviations-ietf-rip.yang
+ deviation "/ietf-rip:clear-rip-route" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-rip:clear-rip-route/ietf-rip:input" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-rip:clear-rip-route/ietf-rip:input/ietf-rip:rip-instance" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:originate-default-route" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:originate-default-route/ietf-rip:enabled" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:originate-default-route/ietf-rip:route-policy" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:default-metric" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:distance" {
+ deviate not-supported;
+ }
+
+ deviation "/ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol/ietf-rip:rip/ietf-rip:triggered-update-threshold" {
+ deviate not-supported;
+ }
+
+Once all existing nodes are listed in the deviation module, it’s easy to
+check the deviations that need to be removed or modified. This is more
+convenient than starting with a blank deviations module and manually
+listing all nodes that need to be deviated.
+
+After removing and/or modifying the auto-generated deviations, the next
+step is to write the module XPath translation table as we’ll see in the
+next section. Before that, it’s possible to use the *yanglint* tool to
+check how the non-native module looks like after applying the
+deviations. Example:
+
+::
+
+ $ yanglint -f tree yang/ietf/ietf-rip@2018-02-03.yang yang/ietf/frr-deviations-ietf-rip.yang
+ module: ietf-rip
+
+ augment /ietf-routing:routing/ietf-routing:control-plane-protocols/ietf-routing:control-plane-protocol:
+ +--rw rip
+ +--rw originate-default-route
+ | +--rw enabled? boolean <false>
+ +--rw default-metric? uint8 <1>
+ +--rw distance? uint8 <0>
+ +--rw timers
+ | +--rw update-interval? uint32 <30>
+ | +--rw holddown-interval? uint32 <180>
+ | +--rw flush-interval? uint32 <120>
+ +--rw interfaces
+ | +--rw interface* [interface]
+ | +--rw interface ietf-interfaces:interface-ref
+ | +--rw split-horizon? enumeration <simple>
+ +--ro ipv4
+ +--ro neighbors
+ | +--ro neighbor* [ipv4-address]
+ | +--ro ipv4-address ietf-inet-types:ipv4-address
+ | +--ro last-update? ietf-yang-types:date-and-time
+ | +--ro bad-packets-rcvd? ietf-yang-types:counter32
+ | +--ro bad-routes-rcvd? ietf-yang-types:counter32
+ +--ro routes
+ +--ro route* [ipv4-prefix]
+ +--ro ipv4-prefix ietf-inet-types:ipv4-prefix
+ +--ro next-hop? ietf-inet-types:ipv4-address
+ +--ro interface? ietf-interfaces:interface-ref
+ +--ro metric? uint8
+
+ rpcs:
+ +---x clear-rip-route
+
+..
+
+ NOTE: the same output can be obtained using the
+ ``show yang module module-translator ietf ietf-rip tree`` command in
+ FRR once the *ietf* module translator is loaded.
+
+In the example above, it can be seen that the vast majority of the
+*ietf-rip* nodes were removed because of the “not-supported” deviations.
+When a module translator is loaded, FRR calculates the coverage of the
+translator by dividing the number of YANG nodes remaining after applying
+the deviations by the number of YANG nodes before applying them (for
+instance, if 17 of 125 nodes remain, the coverage is 13.60%). The
+calculated coverage is displayed in the output of the
+``show yang module-translator`` command:
+
+::
+
+ ripd# show yang module-translator
+ Family Module Deviations Coverage (%)
+ -----------------------------------------------------------------------
+ ietf ietf-interfaces frr-deviations-ietf-interfaces 3.92
+ ietf ietf-routing frr-deviations-ietf-routing 1.56
+ ietf ietf-rip frr-deviations-ietf-rip 13.60
+
+As can be seen in the output above, the *ietf* module translator
+covers only ~13% of the original *ietf-rip* module. This is in part
+because the *ietf-rip* module models both RIPv2 and RIPng. Also,
+*ietf-rip.yang* contains several knobs that aren’t implemented in *ripd*
+yet (e.g. BFD support, per-interface timers, statistics, etc). Work can
+be done over time to increase the coverage to a more reasonable number.
+
+Translation Tables
+------------------
+
+Below is an example of a translator for the IETF family of models:
+
+.. code:: json
+
+ {
+ "frr-module-translator:frr-module-translator": {
+ "family": "ietf",
+ "module": [
+ {
+ "name": "ietf-interfaces@2018-01-09",
+ "deviations": "frr-deviations-ietf-interfaces",
+ "mappings": [
+ {
+ "custom": "/ietf-interfaces:interfaces/interface[name='KEY1']",
+ "native": "/frr-interface:lib/interface[name='KEY1'][vrf='default']"
+ },
+ {
+ "custom": "/ietf-interfaces:interfaces/interface[name='KEY1']/description",
+ "native": "/frr-interface:lib/interface[name='KEY1'][vrf='default']/description"
+ }
+ ]
+ },
+ {
+ "name": "ietf-routing@2018-01-25",
+ "deviations": "frr-deviations-ietf-routing",
+ "mappings": [
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']",
+ "native": "/frr-ripd:ripd/instance"
+ }
+ ]
+ },
+ {
+ "name": "ietf-rip@2018-02-03",
+ "deviations": "frr-deviations-ietf-rip",
+ "mappings": [
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/default-metric",
+ "native": "/frr-ripd:ripd/instance/default-metric"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/distance",
+ "native": "/frr-ripd:ripd/instance/distance/default"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/originate-default-route/enabled",
+ "native": "/frr-ripd:ripd/instance/default-information-originate"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/timers/update-interval",
+ "native": "/frr-ripd:ripd/instance/timers/update-interval"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/timers/holddown-interval",
+ "native": "/frr-ripd:ripd/instance/timers/holddown-interval"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/timers/flush-interval",
+ "native": "/frr-ripd:ripd/instance/timers/flush-interval"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/interfaces/interface[interface='KEY1']",
+ "native": "/frr-ripd:ripd/instance/interface[.='KEY1']"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/interfaces/interface[interface='KEY1']/split-horizon",
+ "native": "/frr-interface:lib/interface[name='KEY1'][vrf='default']/frr-ripd:rip/split-horizon"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/neighbors/neighbor[ipv4-address='KEY1']",
+ "native": "/frr-ripd:ripd/state/neighbors/neighbor[address='KEY1']"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/neighbors/neighbor[ipv4-address='KEY1']/last-update",
+ "native": "/frr-ripd:ripd/state/neighbors/neighbor[address='KEY1']/last-update"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/neighbors/neighbor[ipv4-address='KEY1']/bad-packets-rcvd",
+ "native": "/frr-ripd:ripd/state/neighbors/neighbor[address='KEY1']/bad-packets-rcvd"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/neighbors/neighbor[ipv4-address='KEY1']/bad-routes-rcvd",
+ "native": "/frr-ripd:ripd/state/neighbors/neighbor[address='KEY1']/bad-routes-rcvd"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/routes/route[ipv4-prefix='KEY1']",
+ "native": "/frr-ripd:ripd/state/routes/route[prefix='KEY1']"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/routes/route[ipv4-prefix='KEY1']/next-hop",
+ "native": "/frr-ripd:ripd/state/routes/route[prefix='KEY1']/next-hop"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/routes/route[ipv4-prefix='KEY1']/interface",
+ "native": "/frr-ripd:ripd/state/routes/route[prefix='KEY1']/interface"
+ },
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/ipv4/routes/route[ipv4-prefix='KEY1']/metric",
+ "native": "/frr-ripd:ripd/state/routes/route[prefix='KEY1']/metric"
+ },
+ {
+ "custom": "/ietf-rip:clear-rip-route",
+ "native": "/frr-ripd:clear-rip-route"
+ }
+ ]
+ }
+ ]
+ }
+ }
+
+The main motivation to use YANG itself to model YANG module translators
+was a practical one: to leverage *libyang* to validate the structure of the
+user input (JSON files) instead of doing that manually in the
+*lib/yang_translator.c* file (tedious and error-prone work).
+
+Module translators can be loaded using the following CLI command:
+
+::
+
+ ripd(config)# yang module-translator load /usr/local/share/yang/ietf/frr-ietf-translator.json
+ % Module translator "ietf" loaded successfully.
+
+Module translators can also be loaded/unloaded programmatically using the
+``yang_translator_load()/yang_translator_unload()`` functions within the
+northbound plugins. These functions are documented in the
+*lib/yang_translator.h* file.
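+
+As an illustration only, programmatic usage from a northbound plugin
+might look like the sketch below; the ``plugin_init()``/``plugin_exit()``
+hooks are hypothetical, and *lib/yang_translator.h* remains the
+authority for the exact prototypes:
+
+.. code:: c
+
+   #include "yang_translator.h"
+
+   static struct yang_translator *translator;
+
+   void plugin_init(void)
+   {
+           /* Load the translation tables from a JSON file. */
+           translator = yang_translator_load(
+                   "/usr/local/share/yang/ietf/frr-ietf-translator.json");
+   }
+
+   void plugin_exit(void)
+   {
+           if (translator)
+                   yang_translator_unload(translator);
+   }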
+
+Each module translator must be assigned a “family” identifier
+(e.g. IETF, OpenConfig), and can contain mappings for multiple
+interrelated YANG modules. The mappings consist of pairs of
+custom/native XPath expressions that should be equivalent, despite
+belonging to different YANG modules.
+
+Example:
+
+.. code:: json
+
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/default-metric",
+ "native": "/frr-ripd:ripd/instance/default-metric"
+ },
+
+The nodes pointed to by the custom and native XPaths must have compatible
+types. In the case of the example above, both nodes point to a YANG leaf
+of type ``uint8``, so the mapping is valid.
+
+In the example below, the “custom” XPath points to a YANG list
+(typeless), and the “native” XPath points to a YANG leaf-list of
+strings. In this exceptional case, the types are also considered to be
+compatible.
+
+.. code:: json
+
+ {
+ "custom": "/ietf-routing:routing/control-plane-protocols/control-plane-protocol[type='ietf-rip:ripv2'][name='main']/ietf-rip:rip/interfaces/interface[interface='KEY1']",
+ "native": "/frr-ripd:ripd/instance/interface[.='KEY1']"
+ },
+
+The ``KEY1..KEY4`` values have a special meaning and are used to
+preserve the list keys while performing the XPath translation.
+
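+For instance, taking the first ``ietf-interfaces`` mapping of the
+translator shown above, an XPath instantiated with a real key would be
+translated as follows (``eth0`` is substituted into the ``KEY1``
+placeholder on both sides):
+
+::
+
+    custom: /ietf-interfaces:interfaces/interface[name='eth0']
+    native: /frr-interface:lib/interface[name='eth0'][vrf='default']
+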
+Once a YANG module translator is loaded and validated at a syntactic
+level using *libyang*, further validations are performed to check for
+missing mappings (after loading the deviation modules) and incompatible
+YANG types. Example:
+
+::
+
+ ripd(config)# yang module-translator load /usr/local/share/yang/ietf/frr-ietf-translator.json
+ % Failed to load "/usr/local/share/yang/ietf/frr-ietf-translator.json"
+
+ Please check the logs for more details.
+
+::
+
+ 2018/09/03 15:18:45 RIP: yang_translator_validate_cb: YANG types are incompatible (xpath: "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/default-metric")
+ 2018/09/03 15:18:45 RIP: yang_translator_validate_cb: missing mapping for "/ietf-routing:routing/control-plane-protocols/control-plane-protocol/ietf-rip:rip/distance"
+ 2018/09/03 15:18:45 RIP: yang_translator_validate: failed to validate "ietf" module translator: 2 error(s)
+
+Overall, this translation mechanism based on XPath mappings is simple
+and functional, but only to a certain extent. The native models need to
+be reasonably similar to the models that are going to be translated,
+otherwise the translation is compromised and a good coverage can’t be
+achieved. Other translation techniques must be investigated to address
+this shortcoming and make it possible to create more powerful YANG
+module translators.
+
+YANG module translators can be evaluated based on the following metrics:
+
+- Translation potential: is it possible to make complex translations,
+  taking several variables into account?
+- Complexity: a measure of how easy or hard it is to write a module
+  translator.
+- Speed: a measure of how fast the translation can be achieved.
+  Translation speed is of fundamental importance, especially for
+  operational data.
+- Robustness: can the translator be checked for inconsistencies at load
+  time? A module translator based on scripts wouldn’t fare well on this
+  metric.
+- Round-trip conversions: can the translated data be translated back to
+  the original format without information loss?
+
+CLI Demonstration
+-----------------
+
+As of now the only northbound client that supports the YANG module
+translator is the FRR embedded CLI. The confd and sysrepo plugins need
+to be extended to support the module translator, which might be used not
+only for configuration data, but also for operational data, RPCs and
+notifications.
+
+In this demonstration, we’ll use the CLI ``configuration load`` command
+to load the following JSON configuration file specified using the IETF
+data hierarchy:
+
+.. code:: json
+
+ {
+ "ietf-interfaces:interfaces": {
+ "interface": [
+ {
+ "description": "Engineering",
+ "name": "eth0"
+ }
+ ]
+ },
+ "ietf-routing:routing": {
+ "control-plane-protocols": {
+ "control-plane-protocol": [
+ {
+ "name": "main",
+ "type": "ietf-rip:ripv2",
+ "ietf-rip:rip": {
+ "default-metric": "2",
+ "distance": "80",
+ "interfaces": {
+ "interface": [
+ {
+ "interface": "eth0",
+ "split-horizon": "poison-reverse"
+ }
+ ]
+ },
+ "originate-default-route": {
+ "enabled": "true"
+ },
+ "timers": {
+ "flush-interval": "241",
+ "holddown-interval": "181",
+ "update-interval": "31"
+ }
+ }
+ }
+ ]
+ }
+ }
+ }
+
+In order to load this configuration file, it’s necessary to load the
+IETF module translator first. Then, when entering the
+``configuration load`` command, the ``translate ietf`` parameters must
+be given to specify that the input needs to be translated using the
+previously loaded ``ietf`` module translator. Example:
+
+::
+
+ ripd(config)# configuration load file json /mnt/renato/git/frr/yang/example/ietf-rip.json
+ % Failed to load configuration:
+
+ Unknown element "interfaces".
+ ripd(config)#
+ ripd(config)# yang module-translator load /usr/local/share/yang/ietf/frr-ietf-translator.json
+ % Module translator "ietf" loaded successfully.
+
+ ripd(config)#
+ ripd(config)# configuration load file json translate ietf /mnt/renato/git/frr/yang/example/ietf-rip.json
+
+Now let’s check the candidate configuration to see if the configuration
+file was loaded successfully:
+
+::
+
+ ripd(config)# show configuration candidate
+ Configuration:
+ !
+ frr version 5.1-dev
+ frr defaults traditional
+ !
+ interface eth0
+ description Engineering
+ ip rip split-horizon poisoned-reverse
+ !
+ router rip
+ default-metric 2
+ distance 80
+ network eth0
+ default-information originate
+ timers basic 31 181 241
+ !
+ end
+ ripd(config)# show configuration candidate json
+ {
+ "frr-interface:lib": {
+ "interface": [
+ {
+ "name": "eth0",
+ "vrf": "default",
+ "description": "Engineering",
+ "frr-ripd:rip": {
+ "split-horizon": "poison-reverse"
+ }
+ }
+ ]
+ },
+ "frr-ripd:ripd": {
+ "instance": {
+ "default-metric": 2,
+ "distance": {
+ "default": 80
+ },
+ "interface": [
+ "eth0"
+ ],
+ "default-information-originate": true,
+ "timers": {
+ "flush-interval": 241,
+ "holddown-interval": 181,
+ "update-interval": 31
+ }
+ }
+ }
+ }
+
+As can be seen, the candidate configuration is equivalent to the one
+defined in the *ietf-rip.json* file; only the structure is different.
+This means that the *ietf-rip.json* file was translated successfully.
+
+The ``ietf`` module translator can also be used to do the translation in
+the other direction: transforming data from the native format to the
+IETF format. This is shown below by altering the output of the
+``show configuration candidate json`` command using the
+``translate ietf`` parameter:
+
+::
+
+ ripd(config)# show configuration candidate json translate ietf
+ {
+ "ietf-interfaces:interfaces": {
+ "interface": [
+ {
+ "name": "eth0",
+ "description": "Engineering"
+ }
+ ]
+ },
+ "ietf-routing:routing": {
+ "control-plane-protocols": {
+ "control-plane-protocol": [
+ {
+ "type": "ietf-rip:ripv2",
+ "name": "main",
+ "ietf-rip:rip": {
+ "interfaces": {
+ "interface": [
+ {
+ "interface": "eth0",
+ "split-horizon": "poison-reverse"
+ }
+ ]
+ },
+ "default-metric": 2,
+ "distance": 80,
+ "originate-default-route": {
+ "enabled": true
+ },
+ "timers": {
+ "flush-interval": 241,
+ "holddown-interval": 181,
+ "update-interval": 31
+ }
+ }
+ }
+ ]
+ }
+ }
+ }
+
+As expected, this output is identical to the configuration
+defined in the *ietf-rip.json* file. The module translator was able to
+do a round-trip conversion without information loss.
+
+Implementation Details
+----------------------
+
+A different libyang context is allocated for each YANG module
+translator. This is important to avoid collisions and ensure that
+non-native data can’t be instantiated in the running and candidate
+configurations.
diff --git a/doc/developer/northbound/yang-tools.rst b/doc/developer/northbound/yang-tools.rst
new file mode 100644
index 0000000..346efca
--- /dev/null
+++ b/doc/developer/northbound/yang-tools.rst
@@ -0,0 +1,112 @@
+YANG Tools
+~~~~~~~~~~
+
+Here's some information about various tools for working with YANG
+models.
+
+yanglint cheat sheet
+~~~~~~~~~~~~~~~~~~~~
+
+ libyang project includes a feature-rich tool called yanglint(1) for
+ validation and conversion of the schemas and YANG modeled data. The
+ source codes are located at /tools/lint and can be used to explore
+ how an application is supposed to use the libyang library.
+ yanglint(1) binary as well as its man page are installed together
+ with the library itself.
+
+Validate a YANG module:
+
+.. code:: sh
+
+ $ yanglint -p <yang-search-path> module.yang
+
+Generate tree representation of a YANG module:
+
+.. code:: sh
+
+ $ yanglint -p <yang-search-path> -f tree module.yang
+
+Validate JSON/XML instance data:
+
+.. code:: sh
+
+ $ yanglint -p <yang-search-path> module.yang data.{json,xml}
+
+Convert JSON/XML instance data to another format:
+
+.. code:: sh
+
+ $ yanglint -p <yang-search-path> -f xml module.yang data.json
+ $ yanglint -p <yang-search-path> -f json module.yang data.xml
+
+*yanglint* also features an interactive mode, which is very useful when
+you need to validate data from multiple modules at the same time. The
+*yanglint* README provides several examples:
+https://github.com/CESNET/libyang/blob/master/tools/lint/examples/README.md
+
+Man page (groff):
+https://github.com/CESNET/libyang/blob/master/tools/lint/yanglint.1
+
+pyang cheat sheet
+~~~~~~~~~~~~~~~~~
+
+ pyang is a YANG validator, transformator and code generator, written
+ in python. It can be used to validate YANG modules for correctness,
+ to transform YANG modules into other formats, and to generate code
+ from the modules.
+
+Obtaining and installing pyang:
+
+.. code:: sh
+
+ $ git clone https://github.com/mbj4668/pyang.git
+ $ cd pyang/
+ $ sudo python setup.py install
+
+Validate a YANG module:
+
+.. code:: sh
+
+ $ pyang --ietf -p <yang-search-path> module.yang
+
+Generate tree representation of a YANG module:
+
+.. code:: sh
+
+ $ pyang -f tree -p <yang-search-path> module.yang
+
+Indent a YANG file:
+
+.. code:: sh
+
+ $ pyang -p <yang-search-path> \
+ --keep-comments -f yang --yang-canonical \
+ module.yang -o module.yang
+
+Generate skeleton instance data:
+
+- XML:
+
+.. code:: sh
+
+ $ pyang -p <yang-search-path> \
+ -f sample-xml-skeleton --sample-xml-skeleton-defaults \
+ module.yang [augmented-module1.yang ...] -o module.xml
+
+- JSON:
+
+.. code:: sh
+
+ $ pyang -p <yang-search-path> \
+ -f jsonxsl module.yang -o module.xsl
+ $ xsltproc -o module.json module.xsl module.xml
+
+Validate XML instance data (works only with YANG 1.0):
+
+.. code:: sh
+
+ $ yang2dsdl -v module.xml module.yang
+
+vim
+~~~
+
+YANG syntax highlighting for vim:
+https://github.com/nathanalderson/yang.vim
diff --git a/doc/developer/ospf-api.rst b/doc/developer/ospf-api.rst
new file mode 100644
index 0000000..41c31b2
--- /dev/null
+++ b/doc/developer/ospf-api.rst
@@ -0,0 +1,383 @@
+OSPF API Documentation
+======================
+
+Disclaimer
+----------
+
+The OSPF daemon contains an API for application access to the LSA database.
+This API and documentation was created by Ralph Keller, originally as a patch
+for Zebra. Unfortunately, the page containing documentation for the API is no
+Zebra. Unfortunately, the page containing documentation for the API is no
+longer online. This page is an attempt to recreate documentation for the API
+(with lots of help from the WayBackMachine).
+
+Ralph has kindly licensed this documentation under GPLv2+. Please preserve the
+acknowledgements at the bottom of this document.
+
+Introduction
+------------
+
+This page describes an API that allows external applications to access the
+link-state database (LSDB) of the OSPF daemon. The implementation is based on
+the OSPF code from FRRouting (forked from Quagga and formerly Zebra) routing
+protocol suite and is subject to the GNU General Public License. The OSPF API
+provides you with the following functionality:
+
+- Retrieval of the full or partial link-state database of the OSPF daemon.
+ This allows applications to obtain an exact copy of the LSDB including router
+ LSAs, network LSAs and so on. Whenever a new LSA arrives at the OSPF daemon,
+ the API module immediately informs the application by sending a message. This
+ way, the application is always synchronized with the LSDB of the OSPF daemon.
+- Origination of own opaque LSAs (of type 9, 10, or 11) which are then
+ distributed transparently to other routers within the flooding scope and
+ received by other applications through the OSPF API.
+
+Opaque LSAs, which are described in :rfc:`2370`, allow you to distribute
+application-specific information within a network using the OSPF protocol. The
+information contained in opaque LSAs is transparent for the routing process but
+it can be processed by other modules such as traffic engineering (e.g.,
+MPLS-TE).
+
+Architecture
+------------
+
+The following picture depicts the architecture of the Quagga/Zebra protocol
+suite. The OSPF daemon is extended with opaque LSA capabilities and an API for
+external applications. The OSPF core module executes the OSPF protocol by
+discovering neighbors and exchanging neighbor state. The opaque module,
+implemented by Masahiko Endo, provides functions to exchange opaque LSAs
+between routers. Opaque LSAs can be generated by several modules such as the
+MPLS-TE module or the API server module. These modules then invoke the opaque
+module to flood their data to neighbors within the flooding scope.
+
+The client, which is an application potentially running on a different node
+than the OSPF daemon, links against the OSPF API client library. This client
+library establishes a socket connection with the API server module of the OSPF
+daemon and uses this connection to retrieve LSAs and originate opaque LSAs.
+
+.. figure:: ../figures/ospf_api_architecture.png
+ :alt: image
+
+ image
+
+The OSPF API server module works like any other internal opaque module (such as
+the MPLS-TE module), but listens to connections from external applications that
+want to communicate with the OSPF daemon. The API server module can handle
+multiple clients concurrently.
+
+One of the main objectives of the implementation is to make as few changes
+to the existing Zebra code as possible.
+
+Installation & Configuration
+----------------------------
+
+Download FRRouting and unpack it.
+
+Configure and build FRR (note that ``--enable-opaque-lsa`` also enables the
+ospfapi server and ospfclient).
+
+::
+
+ % sh ./configure --enable-opaque-lsa
+ % make
+
+This should also compile the client library and sample application in
+ospfclient.
+
+Make sure that you have enabled opaque LSAs in your configuration. Add the
+``ospf opaque-lsa`` statement to your :file:`ospfd.conf`:
+
+::
+
+ ! -*- ospf -*-
+ !
+ ! OSPFd sample configuration file
+ !
+ !
+ hostname xxxxx
+ password xxxxx
+
+ router ospf
+ router-id 10.0.0.1
+ network 10.0.0.1/24 area 1
+ neighbor 10.0.0.2
+ network 10.0.1.2/24 area 1
+ neighbor 10.0.1.1
+ ospf opaque-lsa <============ add this statement!
+
+Usage
+-----
+
+In the following we describe how you can use the sample application to
+originate opaque LSAs. The sample application first registers with the OSPF
+daemon the opaque type it wants to inject and then waits until the OSPF daemon
+is ready to accept opaque LSAs of that type. Then the client application
+originates an opaque LSA, waits 10 seconds and then updates the opaque LSA with
+new opaque data. After another 20 seconds, the client application deletes the
+opaque LSA from the LSDB. If the client terminates unexpectedly, the OSPF API
+module will remove all the opaque LSAs that the application registered. Since
+the opaque LSAs are flooded to other routers, we will see the opaque LSAs in
+all routers according to the flooding scope of the opaque LSA.
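+
+In rough outline, a client based on the library does something like the
+sketch below (abridged and approximate; the client library header is the
+authority for the exact prototypes, and the complete sample application
+ships with the source tree):
+
+.. code-block:: c
+
+   #include "ospf_apiclient.h"
+
+   #define ASYNCPORT 4000 /* local port used for the async channel */
+
+   int main(int argc, char *argv[])
+   {
+           struct ospf_apiclient *oclient;
+
+           /* Connect to the OSPF API server (argv[1] is the hostname,
+            * e.g. "msr2"). */
+           oclient = ospf_apiclient_connect(argv[1], ASYNCPORT);
+           if (!oclient)
+                   return 1;
+
+           /* Register the opaque type (LSA type 10, opaque type 250)
+            * that we want to originate. */
+           ospf_apiclient_register_opaque_type(oclient, 10, 250);
+
+           /* Synchronize with the LSDB of the OSPF daemon. */
+           ospf_apiclient_sync_lsdb(oclient);
+
+           /* Dispatch asynchronous notifications (LSA updates etc.). */
+           while (ospf_apiclient_handle_async(oclient) >= 0)
+                   ;
+
+           return 0;
+   }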
+
+We have a very simple demo setup, just two routers connected with an ATM
+point-to-point link. Start the modified OSPF daemons on two adjacent routers.
+First run on msr2:
+
+.. code-block:: console
+
+ # ./ospfd --apiserver -f /usr/local/etc/ospfd.conf
+
+And on the neighboring router msr3:
+
+.. code-block:: console
+
+ # ./ospfd --apiserver -f /usr/local/etc/ospfd.conf
+
+Now the two routers form an adjacency and start exchanging their databases.
+Looking at the OSPF daemon of msr2 (or msr3), you see this:
+
+.. code-block:: console
+
+ ospfd> show ip ospf database
+
+ OSPF Router with ID (10.0.0.1)
+
+ Router Link States (Area 0.0.0.1)
+
+ Link ID ADV Router Age Seq# CkSum Link count
+ 10.0.0.1 10.0.0.1 55 0x80000003 0xc62f 2
+ 10.0.0.2 10.0.0.2 55 0x80000003 0xe3e4 3
+
+ Net Link States (Area 0.0.0.1)
+
+ Link ID ADV Router Age Seq# CkSum
+ 10.0.0.2 10.0.0.2 60 0x80000001 0x5fcb
+
+Now we start the sample main application that originates an opaque LSA.
+
+.. code-block:: console
+
+ # cd ospfapi/apiclient
+ # ./main msr2 10 250 20 0.0.0.0 0.0.0.1
+
+This originates an opaque LSA of type 10 (area local), with opaque type 250
+(experimental), an opaque ID of 20 (chosen arbitrarily), interface address
+0.0.0.0 (which is used only for type 9 opaque LSAs), and area 0.0.0.1.
+
+Again looking at the OSPF database you see:
+
+.. code-block:: console
+
+ ospfd> show ip ospf database
+
+ OSPF Router with ID (10.0.0.1)
+
+ Router Link States (Area 0.0.0.1)
+
+ Link ID ADV Router Age Seq# CkSum Link count
+ 10.0.0.1 10.0.0.1 437 0x80000003 0xc62f 2
+ 10.0.0.2 10.0.0.2 437 0x80000003 0xe3e4 3
+
+ Net Link States (Area 0.0.0.1)
+
+ Link ID ADV Router Age Seq# CkSum
+ 10.0.0.2 10.0.0.2 442 0x80000001 0x5fcb
+
+ Area-Local Opaque-LSA (Area 0.0.0.1)
+
+ Opaque-Type/Id ADV Router Age Seq# CkSum
+ 250.0.0.20 10.0.0.1 0 0x80000001 0x58a6 <=== opaque LSA
+
+You can take a closer look at this opaque LSA:
+
+.. code-block:: console
+
+ ospfd> show ip ospf database opaque-area
+
+ OSPF Router with ID (10.0.0.1)
+
+
+ Area-Local Opaque-LSA (Area 0.0.0.1)
+
+ LS age: 4
+ Options: 66
+ LS Type: Area-Local Opaque-LSA
+ Link State ID: 250.0.0.20 (Area-Local Opaque-Type/ID)
+ Advertising Router: 10.0.0.1
+ LS Seq Number: 80000001
+ Checksum: 0x58a6
+ Length: 24
+ Opaque-Type 250 (Private/Experimental)
+ Opaque-ID 0x14
+ Opaque-Info: 4 octets of data
+ Added using OSPF API: 4 octets of opaque data
+ Opaque data: 1 0 0 0 <==== counter is 1
+
+Note that the main application updates the opaque LSA after 10 seconds, after
+which it looks as follows:
+
+.. code-block:: console
+
+ ospfd> show ip ospf database opaque-area
+
+ OSPF Router with ID (10.0.0.1)
+
+
+ Area-Local Opaque-LSA (Area 0.0.0.1)
+
+ LS age: 1
+ Options: 66
+ LS Type: Area-Local Opaque-LSA
+ Link State ID: 250.0.0.20 (Area-Local Opaque-Type/ID)
+ Advertising Router: 10.0.0.1
+ LS Seq Number: 80000002
+ Checksum: 0x59a3
+ Length: 24
+ Opaque-Type 250 (Private/Experimental)
+ Opaque-ID 0x14
+ Opaque-Info: 4 octets of data
+ Added using OSPF API: 4 octets of opaque data
+ Opaque data: 2 0 0 0 <==== counter is now 2
+
+Note that the payload of the opaque LSA has changed as you can see above.
+
+Then, again after another 20 seconds, the opaque LSA is flushed from the LSDB.
+
+Important note:
+^^^^^^^^^^^^^^^
+
+In order to originate an opaque LSA, there must be at least one active
+opaque-capable neighbor. Thus, you cannot originate opaque LSAs if no neighbors
+are present. If you try to originate when no neighbors are ready, you will
+receive a "not ready" error message. The reason for this restriction is that
+some routers might still hold an identical opaque LSA from a previous
+origination in their LSDB that could not be flushed due to a crash. If the
+router comes up again and starts originating a new opaque LSA, the new opaque
+LSA is considered older since it has a lower sequence number and is ignored by
+other routers (which consider the stale opaque LSA more recent). However, if
+the originating router first synchronizes the database before originating
+opaque LSAs, it will detect the older opaque LSA and can flush it first.
+
+Protocol and Message Formats
+----------------------------
+
+If you are developing your own client application and you don't want to make
+use of the client library (due to the GNU license restriction or whatever
+reason), you can implement your own client-side message handling. The OSPF API
+uses two connections between the client and the OSPF API server: one connection
+is used for a synchronous request/reply protocol and the other connection is
+used for asynchronous notifications (e.g., LSA update, neighbor status change).
+
+Each message begins with the following header:
+
+.. figure:: ../figures/ospf_api_msghdr.png
+ :alt: image
+
+ image
+
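+For reference, this header corresponds to the following C layout (a sketch;
+field names follow ``struct apimsghdr`` in FRR's ``ospfd/ospf_api.h``, widths
+as shown in the figure, all fields in network byte order):
+
+.. code-block:: c
+
+ #include <stdint.h>
+
+ /* Header prepended to every OSPF API message on both connections. */
+ struct apimsghdr {
+     uint8_t version;  /* API protocol version */
+     uint8_t msgtype;  /* message type, see the tables below */
+     uint16_t msglen;  /* length of the message body */
+     uint32_t msgseq;  /* sequence number to match replies to requests */
+ };
+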
+The message type field can take one of the following values:
+
++-------------------------------+---------+
+| Messages to OSPF daemon | Value |
++===============================+=========+
+| MSG\_REGISTER\_OPAQUETYPE | 1 |
++-------------------------------+---------+
+| MSG\_UNREGISTER\_OPAQUETYPE | 2 |
++-------------------------------+---------+
+| MSG\_REGISTER\_EVENT | 3 |
++-------------------------------+---------+
+| MSG\_SYNC\_LSDB | 4 |
++-------------------------------+---------+
+| MSG\_ORIGINATE\_REQUEST | 5 |
++-------------------------------+---------+
+| MSG\_DELETE\_REQUEST | 6 |
++-------------------------------+---------+
+
++-----------------------------+---------+
+| Messages from OSPF daemon | Value |
++=============================+=========+
+| MSG\_REPLY | 10 |
++-----------------------------+---------+
+| MSG\_READY\_NOTIFY | 11 |
++-----------------------------+---------+
+| MSG\_LSA\_UPDATE\_NOTIFY | 12 |
++-----------------------------+---------+
+| MSG\_LSA\_DELETE\_NOTIFY | 13 |
++-----------------------------+---------+
+| MSG\_NEW\_IF | 14 |
++-----------------------------+---------+
+| MSG\_DEL\_IF | 15 |
++-----------------------------+---------+
+| MSG\_ISM\_CHANGE | 16 |
++-----------------------------+---------+
+| MSG\_NSM\_CHANGE | 17 |
++-----------------------------+---------+
+
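+Expressed as a C enumeration, the two tables above could be written as follows
+(the values come from the tables; the enum name is illustrative):
+
+.. code-block:: c
+
+ enum ospf_api_msgtype {
+     /* Messages to the OSPF daemon */
+     MSG_REGISTER_OPAQUETYPE = 1,
+     MSG_UNREGISTER_OPAQUETYPE = 2,
+     MSG_REGISTER_EVENT = 3,
+     MSG_SYNC_LSDB = 4,
+     MSG_ORIGINATE_REQUEST = 5,
+     MSG_DELETE_REQUEST = 6,
+
+     /* Messages from the OSPF daemon */
+     MSG_REPLY = 10,
+     MSG_READY_NOTIFY = 11,
+     MSG_LSA_UPDATE_NOTIFY = 12,
+     MSG_LSA_DELETE_NOTIFY = 13,
+     MSG_NEW_IF = 14,
+     MSG_DEL_IF = 15,
+     MSG_ISM_CHANGE = 16,
+     MSG_NSM_CHANGE = 17,
+ };
+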
+The synchronous requests and replies have the following message formats:
+
+.. figure:: ../figures/ospf_api_msgs1.png
+ :alt: image
+
+ image
+
+The origin field allows origin-based filtering using the following origin
+types:
+
++-------------------------+---------+
+| Origin | Value |
++=========================+=========+
+| NON\_SELF\_ORIGINATED | 0 |
++-------------------------+---------+
+| SELF\_ORIGINATED | 1 |
++-------------------------+---------+
+| ANY\_ORIGIN | 2 |
++-------------------------+---------+
+
+The reply message has one of the following error codes:
+
++--------------------------+---------+
+| Error code | Value |
++==========================+=========+
+| API\_OK | 0 |
++--------------------------+---------+
+| API\_NOSUCHINTERFACE | -1 |
++--------------------------+---------+
+| API\_NOSUCHAREA | -2 |
++--------------------------+---------+
+| API\_NOSUCHLSA | -3 |
++--------------------------+---------+
+| API\_ILLEGALLSATYPE | -4 |
++--------------------------+---------+
+| API\_ILLEGALOPAQUETYPE | -5 |
++--------------------------+---------+
+| API\_OPAQUETYPEINUSE | -6 |
++--------------------------+---------+
+| API\_NOMEMORY | -7 |
++--------------------------+---------+
+| API\_ERROR | -99 |
++--------------------------+---------+
+| API\_UNDEF | -100 |
++--------------------------+---------+
+
+The asynchronous notifications have the following message formats:
+
+.. figure:: ../figures/ospf_api_msgs2.png
+ :alt: image
+
+ image
+
+
+.. Do not delete these acknowledgements!
+
+Original Acknowledgments from Ralph Keller
+------------------------------------------
+
+I would like to thank Masahiko Endo, the author of the opaque LSA extension
+module, for his great support. His wonderful ASCII graphs explaining the
+internal workings of this code, and his invaluable input proved to be crucial
+in designing a useful API for accessing the link state database of the OSPF
+daemon. Once, he even decided to take the plane from Tokyo to Zurich so that we
+could actually meet and have face-to-face discussions, which was a lot of fun.
+Clearly, without Masahiko no API would ever be completed. I also would like to
+thank Daniel Bauer who wrote an opaque LSA implementation too and was willing
+to test the OSPF API code in one of his projects.
diff --git a/doc/developer/ospf-sr.rst b/doc/developer/ospf-sr.rst
new file mode 100644
index 0000000..1c16443
--- /dev/null
+++ b/doc/developer/ospf-sr.rst
@@ -0,0 +1,347 @@
+OSPF Segment Routing
+====================
+
+This is EXPERIMENTAL support for `RFC 8665`.
+DON'T use it in a production network.
+
+Supported Features
+------------------
+
+* Automatic computation of Primary and Backup Adjacency SIDs with
+ Cisco experimental remote IP address
+* SRGB & SRLB configuration
+* Prefix configuration for Node SID with optional NO-PHP flag (the Linux
+ kernel supports both modes)
+* Node MSD configuration (with Linux kernel >= 4.10, a maximum of 32 labels
+ can be stacked)
+* Automatic provisioning of the MPLS table
+* Equal Cost Multi-Path (ECMP)
+* Static route configuration with label stacks of up to 32 labels
+* TI-LFA (for P2P interfaces only)
+
+Interoperability
+----------------
+
+* Tested on various topologies, including point-to-point and LAN interfaces,
+ in a mix of FRRouting instances and Cisco IOS-XR 6.0.x
+* OSPF LSA conformity checked with the latest Wireshark release 2.5.0-rc
+
+Implementation details
+----------------------
+
+Concepts
+^^^^^^^^
+
+Segment Routing uses 3 different Opaque LSAs in OSPF to carry the various
+pieces of information:
+
+* **Router Information:** floods the Segment Routing capabilities of the node.
+ This includes the supported algorithms, the Segment Routing Global Block
+ (SRGB) and the Maximum Stack Depth (MSD).
+* **Extended Link:** floods the Adjacency and LAN Adjacency Segment Identifiers
+* **Extended Prefix:** floods the Prefix Segment Identifier
+
+The implementation follows the previous TE and Router Information code. It uses
+the Opaque LSA functions defined in ospf_opaque.[c,h] as well as the OSPF API.
+The latter is mandatory for the implementation as it provides the callbacks to
+the Segment Routing functions (see below) when an Extended Link / Prefix or
+Router Information LSA is received.
+
+Overview
+^^^^^^^^
+
+The following files were modified or added:
+
+* ospf_ri.[c,h] have been modified to add the new TLVs for Segment Routing.
+* ospf_ext.[c,h] implements RFC 7684 as the base support for Extended Link and
+ Prefix Opaque LSAs.
+* ospf_sr.[c,h] implements the heart of Segment Routing. It adds a new Segment
+ Routing database to manage Segment Identifiers per Link and Prefix and per
+ Segment Routing enabled node, callback functions to process incoming LSAs,
+ and the installation of MPLS FIB entries through Zebra.
+
+The figure below shows the relation between the various files:
+
+* ospf_sr.c centralizes all the Segment Routing processing. It receives the
+ Opaque LSA Router Information (4.0.0.0) from ospf_ri.c and the Extended
+ Prefix (7.0.0.X) / Link (8.0.0.X) from ospf_ext.c. Once received, it parses
+ the TLVs and Sub-TLVs and stores the information in the SRDB (which is
+ defined in ospf_sr.h). For each received LSA, the NHLFE is computed and sent
+ to Zebra to add/remove MPLS label entries and FECs. The new CLI configuration
+ is also centralized in ospf_sr.c. This CLI triggers the flooding of new LSA
+ Router Information (4.0.0.0), Extended Prefix (7.0.0.X) and Link (8.0.0.X) by
+ ospf_ri.c and ospf_ext.c respectively.
+* ospf_ri.c sends received Router Information LSAs back to ospf_sr.c and
+ updates the Self Router Information LSA with parameters provided by
+ ospf_sr.c, i.e. SRGB and MSD. It uses ospf_opaque.c functions to send/receive
+ these Opaque LSAs.
+* ospf_ext.c sends received Extended Prefix and Link Opaque LSAs back to
+ ospf_sr.c and sends self Extended Prefix and Link Opaque LSAs through
+ ospf_opaque.c functions.
+
+::
+
+ +-----------+ +-------+
+ | | | |
+ | ospf_sr.c +-----+ SRDB |
+ +-----------+ +--+ | |
+ | +-^-------^-+ | +-------+
+ | | | | |
+ | | | | |
+ | | | | +--------+
+ | | | | |
+ +---v----------+ | | | +-----v-------+
+ | | | | | | |
+ | ospf_ri.c +--+ | +-------+ ospf_ext.c |
+ | LSA 4.0.0.0 | | | LSA 7.0.0.X |
+ | | | | LSA 8.0.0.X |
+ +---^----------+ | | |
+ | | +-----^-------+
+ | | |
+ | | |
+ | +--------v------------+ |
+ | | | |
+ | | ZEBRA: Labels + FEC | |
+ | | | |
+ | +---------------------+ |
+ | |
+ | |
+ | +---------------+ |
+ | | | |
+ +---------> ospf_opaque.c <---------+
+ | |
+ +---------------+
+
+ Figure 1: Overview of Segment Routing interaction
+
+Module interactions
+^^^^^^^^^^^^^^^^^^^
+
+To process incoming LSAs, the code relies on the ability to call `hook()`
+functions when LSAs are inserted into or deleted from the LSDB, and on the
+possibility to register a particular treatment for Opaque LSAs. The first
+point is provided by the OSPF API feature and the second by the Opaque
+implementation itself. Indeed, it is possible to register callback functions
+for a given Opaque LSA ID (see the `ospf_register_opaque_functab()` function
+defined in `ospf_opaque.c`). Each time a new LSA is added to the LSDB, the
+`new_lsa_hook()` function previously registered for this LSA type is called.
+For Opaque LSAs it is `ospf_opaque_lsa_install_hook()`. For deletion, it is
+`ospf_opaque_lsa_delete_hook()`.
+
+Note that an incoming LSA which is already present in the LSDB will be
+inserted after the old instance of this LSA has been removed from the LSDB.
+Thus, after the first time, each incoming LSA triggers a `delete` followed by
+an `install`. This is not very helpful for handling real LSA deletions. In
+fact, LSA deletion is done by flushing the LSA, i.e. flooding the LSA after
+setting its age to MAX_AGE. A garbage collector function then removes all LSAs
+with `age == MAX_AGE` from the LSDB. So, to handle an LSA flush, the best
+approach is to look at the LSA age to determine whether it is an installation
+or a pending deletion, i.e. the flushed LSA is first stored in the LSDB with
+MAX_AGE while waiting for the garbage collector function. The sketch below
+illustrates this dispatch.
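+
+A minimal sketch of this age-based dispatch inside an install hook
+(`handle_lsa_flush()` and `handle_lsa_install()` are hypothetical helpers;
+`IS_LSA_MAXAGE()` is the macro defined in `ospf_lsa.h` for this check):
+
+::
+
+ /* Called via ospf_opaque_lsa_install_hook() for a registered Opaque type */
+ static int my_opaque_lsa_update(struct ospf_lsa *lsa)
+ {
+     /* An LSA arriving with MAX_AGE announces a flush, not an install */
+     if (IS_LSA_MAXAGE(lsa))
+         return handle_lsa_flush(lsa);   /* hypothetical helper */
+
+     return handle_lsa_install(lsa);     /* hypothetical helper */
+ }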
+
+Router Information LSAs
+^^^^^^^^^^^^^^^^^^^^^^^
+
+To activate Segment Routing, the new CLI command `segment-routing on` has been
+introduced. When this command is activated, the function
+`ospf_router_info_update_sr()` is called to indicate to the Router Information
+process that the Segment Routing TLVs must be flooded. The same function is
+called to modify the Segment Routing Global Block (SRGB) and Maximum Stack
+Depth (MSD) TLVs. Only the Shortest Path First (SPF) algorithm is supported,
+so the code offers no possibility to modify this TLV.
+
+When an Opaque LSA of Type 4, i.e. Router Information, is stored in the LSDB,
+the function `ospf_opaque_lsa_install_hook()` calls the previously registered
+function `ospf_router_info_lsa_update()`. In turn, this function simply
+triggers `ospf_sr_ri_lsa_update()` or `ospf_sr_ri_lsa_delete()` depending on
+the LSA age. Beforehand, it verifies that the LSA Opaque Type is 4 (Router
+Information). Self Opaque LSAs are not sent back to the Segment Routing
+functions as the information is already stored.
+
+Extended Link Prefix LSAs
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+As for Router Information, Segment Routing is activated at the Extended
+Link/Prefix level with the new `segment-routing on` command. This automatically
+triggers the flooding of Extended Link LSAs for all OSPF interfaces where the
+adjacency is full. For Extended Prefix LSAs, the new CLI command
+`segment-routing prefix ...` triggers the flooding of the Prefix SID
+TLV/Sub-TLVs.
+
+When Opaque LSAs of Type 7, i.e. Extended Prefix, and Type 8, i.e. Extended
+Link, are stored in the LSDB, `ospf_ext_pref_update_lsa()` respectively
+`ospf_ext_link_update_lsa()` are called, as for Router Information LSAs. In
+turn, they trigger `ospf_sr_ext_prefix_lsa_update()` /
+`ospf_sr_ext_link_lsa_update()`, or `ospf_sr_ext_prefix_lsa_delete()` /
+`ospf_sr_ext_link_lsa_delete()` if the LSA age is equal to MAX_AGE.
+
+Zebra
+^^^^^
+
+When a new MPLS entry or a new Forwarding Equivalence Class (FEC) must be
+added to or deleted from the data plane, `add_sid_nhlfe()` respectively
+`del_sid_nhlfe()` are called. After checking the validity of the labels, they
+are sent to the ZEBRA layer through the `ZEBRA_MPLS_LABELS_ADD` command,
+respectively the `ZEBRA_MPLS_LABELS_DELETE` command for deletion. This is
+completed by a new labelled route through the `ZEBRA_ROUTE_ADD` command,
+respectively the `ZEBRA_ROUTE_DELETE` command.
+
+TI-LFA
+^^^^^^
+
+Experimental support for Topology Independent LFA (Loop-Free Alternate); see
+for example 'draft-bashandy-rtgwg-segment-routing-ti-lfa-05'. The related
+files are `ospf_ti_lfa.c/h`.
+
+The current implementation is rather naive and does not support the advanced
+optimizations suggested in e.g. RFC7490 or RFC8102. It focuses on providing
+the essential infrastructure which can also later be used to enhance the
+algorithmic aspects.
+
+Supported features:
+
+* Link and node protection
+* Intra-area support
+* Proper use of Prefix- and Adjacency-SIDs in label stacks
+* Asymmetric weights (using reverse SPF)
+* Non-adjacent P/Q spaces
+* Protection of Prefix-SIDs
+
+If configured, the routing table is enriched with additional backup paths for
+every prefix on every SPF run. The corresponding Prefix-SIDs are also updated
+with backup paths within the OSPF SR update task.
+
+Informal High-Level Algorithm Description:
+
+::
+
+ p_spaces = empty_list()
+
+ for every protected_resource (link or node):
+ p_space = generate_p_space(protected_resource)
+ p_space.q_spaces = empty_list()
+
+ for every destination that is affected by the protected_resource:
+ q_space = generate_q_space(destination)
+
+ # The label stack is stored in q_space
+ generate_label_stack(p_space, q_space)
+
+ # The p_space collects all its q_spaces
+ p_spaces.q_spaces.add(q_space)
+
+ p_spaces.add(p_space)
+
+ adjust_routing_table(p_spaces)
+
+Possible Performance Improvements:
+
+* Improve the overall data structures, get away from linked lists for vertices
+* Don't calculate a Q space for every destination, but for a minimum set of
+ backup paths that cover all destinations in the post-convergence SPF. The
+ thinking here is that once a backup path is known, it is also a backup
+ path for all nodes on the path themselves. This can be done by using the
+ leaves of a trimmed minimum spanning tree generated out of the post-
+ convergence SPF tree for that particular P space.
+* For an alternative (maybe better) optimization look at
+ https://tools.ietf.org/html/rfc7490#section-5.2.1.3 which describes using
+ the Q space of the node which is affected by e.g. a link failure. Note that
+ this optimization is topology dependent.
+
+It is highly recommended to read e.g. `Segment Routing I/II` by Filsfils to
+understand the basics of TI-LFA.
+
+Configuration
+-------------
+
+Linux Kernel
+^^^^^^^^^^^^
+
+In order to use OSPF Segment Routing, you must set up the MPLS data plane. Up
+to now, only Linux kernel versions >= 4.5 are supported.
+
+First, the MPLS modules aren't loaded by default, so you'll need to load them
+yourself:
+
+::
+
+ modprobe mpls_router
+ modprobe mpls_gso
+ modprobe mpls_iptunnel
+
+Then, you must activate MPLS on the interfaces you intend to use:
+
+::
+
+ sysctl -w net.mpls.conf.enp0s9.input=1
+ sysctl -w net.mpls.conf.lo.input=1
+ sysctl -w net.mpls.platform_labels=1048575
+
+The last line sets the maximum MPLS label value.
+
+Once ospfd starts with Segment Routing, you can check that MPLS routes are
+enabled with:
+
+::
+
+ ip -M route
+ ip route
+
+The first command shows the MPLS LFIB table while the second shows the FIB
+table, which contains routes with MPLS label encapsulation.
+
+If you disable Penultimate Hop Popping with the `no-php-flag` (see below), you
+MUST check that the RP filter is not enabled for the interfaces you intend to
+use, especially the `lo` one. For that purpose, disable RP filtering with:
+
+::
+
+ sysctl -w net.ipv4.conf.all.rp_filter=0
+ sysctl -w net.ipv4.conf.lo.rp_filter=0
+
+OSPFd
+^^^^^
+
+Here is a simple configuration example to enable Segment Routing. Note that
+`opaque capability` and `router information` must be set to activate Opaque
+LSAs prior to Segment Routing.
+
+::
+
+ router ospf
+ ospf router-id 192.168.1.11
+ capability opaque
+ segment-routing on
+ segment-routing global-block 10000 19999 local-block 5000 5999
+ segment-routing node-msd 8
+ segment-routing prefix 192.168.1.11/32 index 1100
+
+The first segment-routing statement enables it. The second one sets the SRGB
+and SRLB, the third one the MSD, and finally the last one sets the Prefix SID
+index for a given prefix.
+
+Note that only the prefix of a Loopback interface may be configured with a
+Prefix SID. It is possible to add `no-php-flag` at the end of the prefix
+command, e.g. `segment-routing prefix 192.168.1.11/32 index 1100 no-php-flag`,
+to disable Penultimate Hop Popping. This advertises to peers that they MUST
+NOT pop the MPLS label prior to sending the packet.
+
+Known limitations
+-----------------
+
+* Runs only within the default VRF
+* Only a single Area is supported. ABR is not yet supported
+* Only the SPF algorithm is supported
+* Extended Prefix Range is not supported
+* With no Penultimate Hop Popping, it is not possible to express a Segment
+ Path with an Adjacency SID due to the impossibility for the Linux kernel to
+ perform a double POP instruction.
+
+Credits
+-------
+
+* Author: Anselme Sawadogo <anselmesawadogo@gmail.com>
+* Author: Olivier Dugeon <olivier.dugeon@orange.com>
+* Copyright (C) 2016 - 2018 Orange Labs http://www.orange.com
+
+This work has been performed in the framework of the H2020-ICT-2014
+project 5GEx (Grant Agreement no. 671636), which is partially funded
+by the European Commission.
+
diff --git a/doc/developer/ospf.rst b/doc/developer/ospf.rst
new file mode 100644
index 0000000..837a0bd
--- /dev/null
+++ b/doc/developer/ospf.rst
@@ -0,0 +1,13 @@
+.. _ospfd:
+
+*****
+OSPFD
+*****
+
+.. toctree::
+ :maxdepth: 2
+
+ ospf-api
+ ospf-sr
+ cspf
+
diff --git a/doc/developer/packaging-debian.rst b/doc/developer/packaging-debian.rst
new file mode 100644
index 0000000..c2c3b7e
--- /dev/null
+++ b/doc/developer/packaging-debian.rst
@@ -0,0 +1,167 @@
+.. _packaging-debian:
+
+Packaging Debian
+================
+
+(Tested on Ubuntu 14.04, 16.04, 17.10, 18.04, Debian jessie, stretch and
+buster.)
+
+1. Install the Debian packaging tools:
+
+ .. code-block:: shell
+
+ sudo apt install fakeroot debhelper devscripts
+
+2. Checkout FRR under an **unprivileged** user account:
+
+ .. code-block:: shell
+
+ git clone https://github.com/frrouting/frr.git frr
+ cd frr
+
+ If you wish to build a package for a branch other than master:
+
+ .. code-block:: shell
+
+ git checkout <branch>
+
+3. Install build dependencies using the `mk-build-deps` tool from the
+ `devscripts` package:
+
+ .. code-block:: shell
+
+ sudo mk-build-deps --install --remove debian/control
+
+ Alternatively, you can manually install build dependencies for your
+ platform as outlined in :ref:`building`.
+
+4. Install `git-buildpackage` package:
+
+ .. code-block:: shell
+
+ sudo apt-get install git-buildpackage
+
+5. (optional) Append a distribution identifier if needed (see below under
+ :ref:`multi-dist`.)
+
+6. Build Debian Binary and/or Source Packages:
+
+ .. code-block:: shell
+
+ gbp buildpackage --git-builder=dpkg-buildpackage --git-debian-branch="$(git rev-parse --abbrev-ref HEAD)" $options
+
+ Where `$options` may contain any or all of the following items:
+
+ * build profiles specified with ``-P``, e.g.
+ ``-Ppkg.frr.nortrlib,pkg.frr.rtrlib``.
+ Multiple values are separated by commas and there must not be a space
+ after the ``-P``.
+
+ The following build profiles are currently available:
+
+ +----------------+-------------------+-----------------------------------------+
+ | Profile | Negation | Effect |
+ +================+===================+=========================================+
+ | pkg.frr.rtrlib | pkg.frr.nortrlib | builds frr-rpki-rtrlib package (or not) |
+ +----------------+-------------------+-----------------------------------------+
+ | pkg.frr.lua | pkg.frr.nolua | builds lua scripting extension |
+ +----------------+-------------------+-----------------------------------------+
+ | pkg.frr.pim6d | pkg.frr.nopim6d | builds pim6d (default enabled) |
+ +----------------+-------------------+-----------------------------------------+
+
+ * the ``-uc -us`` options to disable signing the packages with your GPG key
+
+ (git builds of the `master` or `stable/X.X` branches won't be signed by
+ default since their target release is set to ``UNRELEASED``.)
+
+ * the ``--build=type`` option accepts the following types (see the ``dpkg-buildpackage`` manual page):
+
+ * ``source`` builds the source package
+ * ``any`` builds the architecture specific binary packages
+ * ``all`` builds the architecture independent binary packages
+ * ``binary`` builds the architecture specific and independent binary packages (alias for ``any,all``)
+ * ``full`` builds everything (alias for ``source,any,all``)
+
+ Alternatively, you might want to replace ``dpkg-buildpackage`` with the
+ ``debuild`` wrapper, which also runs ``lintian`` and ``debsign`` on the
+ final packages.
+
+7. Done!
+
+ If all worked correctly, then you should end up with the Debian packages in
+ the parent directory of where the build command ran. If distributed, please
+ make sure you distribute them together with the sources (``frr_*.orig.tar.xz``,
+ ``frr_*.debian.tar.xz`` and ``frr_*.dsc``).
+
+.. note::
+
+ A package created from `master` or `stable/X.X` is slightly different from
+ a package created from the `debian` branch. The changelog for the former
+ is autogenerated and sets the Debian revision to ``-0``, which causes an
+ intentional lintian warning. The `debian` branch on the other hand has
+ a manually maintained changelog that contains proper Debian release
+ versioning.
+
+
+.. _multi-dist:
+
+Multi-Distribution builds
+=========================
+
+You can optionally append a distribution identifier in case you want to
+make multiple versions of the package available in the same repository.
+
+.. code-block:: shell
+
+ dch -l '~deb8u' 'build for Debian 8 (jessie)'
+ dch -l '~deb9u' 'build for Debian 9 (stretch)'
+ dch -l '~ubuntu14.04.' 'build for Ubuntu 14.04 (trusty)'
+ dch -l '~ubuntu16.04.' 'build for Ubuntu 16.04 (xenial)'
+ dch -l '~ubuntu18.04.' 'build for Ubuntu 18.04 (bionic)'
+
+When building packages for different distributions, the only difference in
+the package itself lies in the automatically generated shared library
+dependencies, e.g. libjson-c2 or libjson-c3. This means that the
+architecture-independent packages should **not** have a suffix appended.
+Also, the current Debian testing/unstable releases should not have any suffix
+appended.
+
+For example, at the end of 2018 (i.e. ``buster``/Debian 10 is the current
+"testing" release), the following is a complete list of `.deb` files for
+Debian 8, 9 and 10 packages for FRR 6.0.1-1 with RPKI support::
+
+ frr_6.0.1-1_amd64.deb
+ frr_6.0.1-1~deb8u1_amd64.deb
+ frr_6.0.1-1~deb9u1_amd64.deb
+ frr-dbg_6.0.1-1_amd64.deb
+ frr-dbg_6.0.1-1~deb8u1_amd64.deb
+ frr-dbg_6.0.1-1~deb9u1_amd64.deb
+ frr-rpki-rtrlib_6.0.1-1_amd64.deb
+ frr-rpki-rtrlib_6.0.1-1~deb8u1_amd64.deb
+ frr-rpki-rtrlib_6.0.1-1~deb9u1_amd64.deb
+ frr-doc_6.0.1-1_all.deb
+ frr-pythontools_6.0.1-1_all.deb
+
+Note that there are no extra versions of the `frr-doc` and `frr-pythontools`
+packages (because they are for architecture ``all``, not ``amd64``), and the
+version for Debian 10 does **not** have a ``~deb10u1`` suffix.
+
+.. warning::
+
+ Do not use the ``-`` character in the version suffix. The last ``-`` in
+ the version number is the separator between upstream version and Debian
+ version. ``6.0.1-1~foobar-2`` means upstream version ``6.0.1-1~foobar``,
+ Debian version ``2``. This is not what you want.
+
+ The only allowed characters in the Debian version are ``0-9 A-Z a-z + . ~``
+
+.. note::
+
+ The separating character for the suffix **must** be the tilde (``~``)
+ because the tilde is ordered in version-comparison before the empty
+ string. That means the order of the above packages is the following:
+
+ ``6.0.1-1`` newer than ``6.0.1-1~deb9u1`` newer than ``6.0.1-1~deb8u1``
+
+ If you use another character (e.g. ``+``), the untagged version will be
+ regarded as the "oldest"!
diff --git a/doc/developer/packaging-redhat.rst b/doc/developer/packaging-redhat.rst
new file mode 100644
index 0000000..d88f449
--- /dev/null
+++ b/doc/developer/packaging-redhat.rst
@@ -0,0 +1,98 @@
+.. _packaging-redhat:
+
+Packaging Red Hat
+=================
+
+Tested on CentOS 6, CentOS 7, CentOS 8 and Fedora 24.
+
+1. On CentOS 6, refer to :ref:`building-centos6` for details on installing
+ sufficiently up-to-date package versions to enable building FRR.
+
+ Newer automake/autoconf/bison is only needed to build the RPM and is **not**
+ needed to install the binary RPM package.
+
+2. Install the build dependencies for your platform. Refer to the
+ platform-specific build documentation on how to do this.
+
+3. Install the following additional packages::
+
+ yum install rpm-build net-snmp-devel pam-devel libcap-devel
+
+ For CentOS 7 and CentOS 8, the package will be built using python3
+ and requires additional python3 packages::
+
+ yum install python3-devel python3-sphinx
+
+ .. note::
+
+ For CentOS 8 you need to install ``platform-python-devel`` package
+ to provide ``/usr/bin/pathfix.py``::
+
+ yum install platform-python-devel
+
+
+ If ``yum`` is not present on your system, use ``dnf`` instead.
+
+ If using CentOS 8, you should enable the ``PowerTools`` repo, as it
+ is disabled by default.
+
+4. Checkout FRR::
+
+ git clone https://github.com/frrouting/frr.git frr
+
+5. Run Bootstrap and make distribution tar.gz::
+
+ cd frr
+ ./bootstrap.sh
+ ./configure --with-pkg-extra-version=-MyRPMVersion
+ make dist
+
+ .. note::
+
+ The only ``configure`` option respected when building RPMs is
+ ``--with-pkg-extra-version``.
+
+6. Create RPM directory structure and populate with sources::
+
+ mkdir rpmbuild
+ mkdir rpmbuild/SOURCES
+ mkdir rpmbuild/SPECS
+ cp redhat/*.spec rpmbuild/SPECS/
+ cp frr*.tar.gz rpmbuild/SOURCES/
+
+7. Edit :file:`rpmbuild/SPECS/frr.spec` with configuration as needed.
+
+ Look at the beginning of the file and adjust the following parameters to
+ enable or disable features as required::
+
+ ############### FRRouting (FRR) configure options #################
+ # with-feature options
+ %{!?with_pam: %global with_pam 0 }
+ %{!?with_ospfclient: %global with_ospfclient 1 }
+ %{!?with_ospfapi: %global with_ospfapi 1 }
+ %{!?with_irdp: %global with_irdp 1 }
+ %{!?with_rtadv: %global with_rtadv 1 }
+ %{!?with_ldpd: %global with_ldpd 1 }
+ %{!?with_nhrpd: %global with_nhrpd 1 }
+ %{!?with_eigrp: %global with_eigrpd 1 }
+ %{!?with_shared: %global with_shared 1 }
+ %{!?with_multipath: %global with_multipath 256 }
+ %{!?frr_user: %global frr_user frr }
+ %{!?vty_group: %global vty_group frrvty }
+ %{!?with_fpm: %global with_fpm 0 }
+ %{!?with_watchfrr: %global with_watchfrr 1 }
+ %{!?with_bgp_vnc: %global with_bgp_vnc 0 }
+ %{!?with_pimd: %global with_pimd 1 }
+ %{!?with_pim6d: %global with_pim6d 1 }
+ %{!?with_rpki: %global with_rpki 0 }
+
+8. Build the RPM::
+
+ rpmbuild --define "_topdir `pwd`/rpmbuild" -ba rpmbuild/SPECS/frr.spec
+
+ If building with RPKI, then download and install the additional RPKI
+ packages from
+ https://ci1.netdef.org/browse/RPKI-RTRLIB/latestSuccessful/artifact
+
+If all works correctly, then you should end up with the RPMs under
+:file:`rpmbuild/RPMS` and the source RPM under :file:`rpmbuild/SRPMS`.
diff --git a/doc/developer/packaging.rst b/doc/developer/packaging.rst
new file mode 100644
index 0000000..0c072e4
--- /dev/null
+++ b/doc/developer/packaging.rst
@@ -0,0 +1,10 @@
+********************
+Releases & Packaging
+********************
+
+.. toctree::
+ :maxdepth: 2
+
+ frr-release-procedure
+ packaging-debian
+ packaging-redhat
diff --git a/doc/developer/path-internals-daemon.rst b/doc/developer/path-internals-daemon.rst
new file mode 100644
index 0000000..29f0172
--- /dev/null
+++ b/doc/developer/path-internals-daemon.rst
@@ -0,0 +1,115 @@
+PATHD Internals
+===============
+
+Architecture
+------------
+
+Overview
+........
+
+The pathd daemon manages the segment routing policies. It owns the data
+structures representing them and can load modules that manipulate them, like
+the PCEP module. Its responsibility is to select a candidate path for each
+configured policy and to install it into Zebra.
+
+Zebra
+.....
+
+Zebra manages policies that are active or pending activation because the next
+hop is not available yet. In Zebra, the policy data structures and APIs are
+defined in `zebra_srte.[hc]`.
+
+The responsibilities of Zebra are:
+
+ - Store the policies' segment list.
+ - Install the policies when their next-hop is available.
+ - Notify other daemons of the status of the policies.
+
+Adding and removing policies is done using the commands `ZEBRA_SR_POLICY_SET`
+and `ZEBRA_SR_POLICY_DELETE`, passed as a parameter to the function
+`zebra_send_sr_policy`, all defined in `zclient.[hc]`.
+
+If the first segment of the policy is an unknown label, the policy is kept
+until the MPLS hook `zebra_mpls_label_created` signals that the label was
+created, and then it is installed.
+
+To get notified when a policy status changes, a client can implement the
+`sr_policy_notify_status` callback defined in `zclient.[hc]`.
+
+For encoding/decoding the various data structures used to communicate with
+Zebra, the following functions are available from `zclient.[hc]`:
+`zapi_sr_policy_encode`, `zapi_sr_policy_decode` and
+`zapi_sr_policy_notify_status_decode`. A sketch of sending a policy with this
+API follows.
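+
+The following sketch shows the call from a client daemon's side. Only
+`zebra_send_sr_policy` and the two commands are the documented API; the
+wrapper functions are illustrative and the setup of the `zapi_sr_policy`
+contents (a structure name inferred from the encode/decode functions above)
+is elided:
+
+.. code-block:: c
+
+ #include "zclient.h"
+
+ /* Install or update an SR policy in Zebra. */
+ static void announce_policy(struct zclient *zclient,
+                             struct zapi_sr_policy *zp)
+ {
+     zebra_send_sr_policy(zclient, ZEBRA_SR_POLICY_SET, zp);
+ }
+
+ /* Remove it again. */
+ static void withdraw_policy(struct zclient *zclient,
+                             struct zapi_sr_policy *zp)
+ {
+     zebra_send_sr_policy(zclient, ZEBRA_SR_POLICY_DELETE, zp);
+ }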
+
+
+Pathd
+.....
+
+
+The pathd daemon manages all the possible candidate paths for the segment
+routing policies and selects the best one following the
+`segment routing policy draft <https://tools.ietf.org/html/draft-ietf-spring-segment-routing-policy-06#section-2.9>`_.
+It also supports loadable modules for handling dynamic candidate paths and the
+creation of new policies and candidate paths at runtime.
+
+The responsibilities of the pathd base daemon, not including any optional
+modules, are:
+
+ - Store the policies and all the possible candidate paths for them.
+ - Select the best candidate path for each policy and send it to Zebra.
+ - Provide VTYSH configuration to set up policies and candidate paths.
+ - Provide a Northbound API to manipulate **configured** policies and candidate paths.
+ - Handle loadable modules for extending the functionality.
+ - Provide an API to the loadable module to manipulate policies and candidate paths.
+
+
+Threading Model
+---------------
+
+The daemon runs completely inside the main thread using the FRR event model;
+there is no extra threading involved.
+
+
+Source Code
+-----------
+
+Internal Data Structures
+........................
+
+The main data structures for policies and candidate paths are defined in
+`pathd.h` and implemented in `pathd.c`.
+
+When modifying these structures, either directly or through the functions
+exported by `pathd.h`, nothing should be deleted/freed right away. Instead,
+the deletion or modification flags must be set, and when all the changes are
+done, the function `srte_apply_changes` must be called. When called, a new
+candidate path may be elected and sent to Zebra, and all the structures
+flagged as deleted will be freed. In addition, a hook will be called so that
+dynamic modules can perform any required action when the elected candidate
+path changes. A sketch of this pattern is shown below.
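+
+A minimal sketch of the flag-then-apply pattern (only `srte_apply_changes` is
+the documented entry point; the structure name, flag name and exact signatures
+are assumptions for illustration):
+
+.. code-block:: c
+
+ /* Mark a candidate path for deletion; do NOT free it here. */
+ static void remove_candidate(struct srte_candidate *candidate)
+ {
+     candidate->flags |= F_CANDIDATE_DELETED; /* assumed flag name */
+
+     /* Re-elect the best candidate path, push it to Zebra and
+      * free every structure flagged as deleted. */
+     srte_apply_changes();
+ }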
+
+
+Northbound API
+..............
+
+The northbound API is defined in `path_nb.[ch]` and implemented in
+`path_nb_config.c` for configuration data and `path_nb_state.c` for operational
+data.
+
+
+Command Line Client
+...................
+
+The command-line client (VTYSH) is implemented in `path_cli.c`.
+
+
+Interface with Zebra
+....................
+
+All the functions interfacing with Zebra are defined and implemented in
+`path_zebra.[hc]`.
+
+
+Loadable Module API
+...................
+
+For the time being, the API the loadable module uses is defined by `pathd.h`,
+but in the future, it should be moved to a dedicated include file.
diff --git a/doc/developer/path-internals-pcep.rst b/doc/developer/path-internals-pcep.rst
new file mode 100644
index 0000000..a6b2220
--- /dev/null
+++ b/doc/developer/path-internals-pcep.rst
@@ -0,0 +1,193 @@
+PCEP Module Internals
+=====================
+
+Introduction
+------------
+
+The PCEP module for the pathd daemon implements the PCEP protocol described in
+:rfc:`5440` to update the policies and candidate paths.
+
+The protocol encoding/decoding and the basic session management is handled by
+the `pceplib external library 1.2 <https://github.com/volta-networks/pceplib/tree/devel-1.2>`_.
+
+Together with pceplib, this module supports, at least partially:
+
+ - :rfc:`5440`
+
+ Most of the protocol defined in the RFC is implemented.
+ All the messages can be parsed, but this was only tested in the context
+ of segment routing. Only a very small subset of metric types can be
+ configured, and there is a known issue with some Cisco routers not
+ following the IANA numbers for metrics.
+
+ - :rfc:`8231`
+
+ Supports delegation of a candidate path after performing the initial
+ computation request. If the PCE does not respond or cannot compute
+ a path, an empty candidate path is delegated to the PCE.
+ Only tested in the context of segment routing.
+
+ - :rfc:`8408`
+
+ Only used to communicate the support for segment routing to the PCE.
+
+ - :rfc:`8664`
+
+ All the NAI types are implemented, but only the MPLS NAIs are supported.
+ If the PCE provides segments that are not MPLS labels, the PCC will
+ return an error.
+
+Note that pceplib supports more RFCs and drafts, see pceplib
+`README <https://github.com/volta-networks/pceplib/blob/master/README.md>`_
+for more details.
+
+
+Architecture
+------------
+
+Overview
+........
+
+The module is separated into multiple layers:
+
+ - pathd interface
+ - command-line console
+ - controller
+ - PCC
+ - pceplib interface
+
+The pathd interface handles all the interactions with the daemon API.
+
+The command-line console handles all the VTYSH configuration commands.
+
+The controller manages the multiple PCC connections and the interaction between
+them and the daemon interface.
+
+The PCC handles a single connection to a PCE through a pceplib session.
+
+The pceplib interface abstracts the API of the pceplib.
+
+.. figure:: ../figures/pcep_module_threading_overview.svg
+
+
+Threading Model
+---------------
+
+The module requires multiple threads to cooperate:
+
+ - The main thread used by the pathd daemon.
+ - The controller pthread used to isolate the PCC from the main thread.
+ - The possible threads started in the pceplib library.
+
+To ensure thread safety, all the controller and PCC state data structures can
+only be read and modified in the controller thread, and all the global data
+structures can only be read and modified in the main thread. Most of the
+interactions between these threads are done through FRR timers and events.
+
+The controller is the bridge between the two threads: all the functions that
+**MUST** be called from the main thread start with the prefix `pcep_ctrl_` and
+all the functions that **MUST** be called from the controller thread start
+with the prefix `pcep_thread_`. When an asynchronous action must be taken in
+a different thread, an FRR event is sent to that thread. If some synchronous
+operation is needed, the calling thread blocks and runs a callback in the
+other thread, where the result is **COPIED** and returned to the calling
+thread.
+
+No function other than the controller functions defined for this purpose
+should be called from the main thread; the only exception is some utility
+functions from `path_pcep_lib.[hc]`.
+
+All the calls to pathd API functions **MUST** be performed in the main
+thread; for that purpose, the controller sends FRR events handled in the
+function `path_pcep.c:pcep_main_event_handler`.
+
+For the same reason, the console client only runs in the main thread. It can
+freely use the global variables, but **MUST** use the controller's
+`pcep_ctrl_` functions to interact with the PCCs. The sketch below summarizes
+the convention.
+
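+A compact illustration of the naming convention (both prototypes are
+hypothetical; only the prefixes carry meaning):
+
+.. code-block:: c
+
+ struct ctrl_state; /* controller state, from path_pcep_controller.h */
+ struct path;       /* flattened path representation, from path_pcep.h */
+
+ /* Main FRR thread only: hands work over to the controller pthread. */
+ void pcep_ctrl_sync_path(struct ctrl_state *ctrl, struct path *path);
+
+ /* Controller pthread only: to touch pathd state, it must post an
+  * FRR event back to the main thread instead of calling pathd. */
+ void pcep_thread_pcc_connected(struct ctrl_state *ctrl, int pcc_id);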
+
+Source Code
+-----------
+
+Generic Data Structures
+.......................
+
+The data structures are defined in multiple places, and where they are defined
+dictates where they can be used.
+
+The data structures defined in `path_pcep.h` can be used anywhere in the module.
+
+Internally, throughout the module, the `struct path` data structure is used
+to describe PCEP messages. It is a simplified flattened structure that can
+represent multiple complex PCEP message types. The conversion from this
+structure to the PCEP data structures used by pceplib is done in the pceplib
+interface layer.
+
+The data structures defined in `path_pcep_controller.h` should only be used
+in `path_pcep_controller.c`. Even if a structure pointer is passed as a parameter
+to functions defined in `path_pcep_pcc.h`, these should consider it as an opaque
+data structure only used to call back controller functions.
+
+The same applies to the structures defined in `path_pcep_pcc.h`: even though
+the controller owns a reference to this data structure, it should never read
+or modify it directly; it should be considered an opaque structure.
+
+The global data structure can be accessed from the pathd interface layer
+`path_pcep.c` and the command line client code `path_pcep_cli.c`.
+
+
+Interface With Pathd
+....................
+
+All the functions calling or called by the pathd daemon are implemented in
+`path_pcep.c`. These functions **MUST** run in the main FRR thread, and
+all the interactions with the controller and the PCCs **MUST** pass through
+the controller's `pcep_ctrl_` prefixed functions.
+
+To handle asynchronous events from the PCCs, a callback is passed to
+`pcep_ctrl_initialize` that is called in the FRR main thread context.
+
+
+Command Line Client
+...................
+
+All the command line configuration commands (VTYSH) are implemented in
+`path_pcep_cli.c`. All the functions there run in the main FRR thread and
+can freely access the global variables. All the interactions with the
+controller and the PCCs **MUST** pass through the controller's `pcep_ctrl_`
+prefixed functions.
+
+
+Debugging Helpers
+.................
+
+All the functions formatting data structures for debugging and logging
+purposes are implemented in `path_pcep_debug.[hc]`.
+
+
+Interface with pceplib
+......................
+
+All the functions calling the pceplib external library are defined in
+`path_pcep_lib.[hc]`. Some functions are called from the main FRR thread,
+like `pcep_lib_initialize` and `pcep_lib_finalize`; some can be called from
+either thread, like `pcep_lib_free_counters`; some must be called from the
+controller thread, like `pcep_lib_connect`. This will probably be formalized
+later on with function prefixes, as done in the controller.
+
+
+Controller
+..........
+
+The controller is defined and implemented in `path_pcep_controller.[hc]`.
+Part of the controller code runs in FRR main thread and part runs in its own
+FRR pthread started to isolate the main thread from the PCCs' event loop.
+To communicate between the threads it uses FRR events, timers and
+`event_execute` calls.
+
+
+PCC
+...
+
+Each PCC instance owns its state and runs in the controller thread. They are
+defined and implemented in `path_pcep_pcc.[hc]`. All the interactions with
+the daemon must pass through the controller's `pcep_thread_` prefixed
+functions.
diff --git a/doc/developer/path-internals.rst b/doc/developer/path-internals.rst
new file mode 100644
index 0000000..2c2df0f
--- /dev/null
+++ b/doc/developer/path-internals.rst
@@ -0,0 +1,11 @@
+.. _path_internals:
+
+*********
+Internals
+*********
+
+.. toctree::
+ :maxdepth: 2
+
+ path-internals-daemon
+ path-internals-pcep
diff --git a/doc/developer/path.rst b/doc/developer/path.rst
new file mode 100644
index 0000000..b6d2438
--- /dev/null
+++ b/doc/developer/path.rst
@@ -0,0 +1,11 @@
+.. _path:
+
+*****
+PATHD
+*****
+
+.. toctree::
+ :maxdepth: 2
+
+ path-internals
+
diff --git a/doc/developer/pceplib.rst b/doc/developer/pceplib.rst
new file mode 100644
index 0000000..774617d
--- /dev/null
+++ b/doc/developer/pceplib.rst
@@ -0,0 +1,781 @@
+.. _pceplib:
+
+*******
+PCEPlib
+*******
+
+Overview
+========
+
+The PCEPlib is a PCEP implementation library that can be used by either a PCE
+or PCC.
+
+Currently, only the FRR pathd has been implemented as a PCC with the PCEPlib.
+The PCEPlib is able to simultaneously connect to multiple PCEP peers and can
+maintain persistent PCEP connections.
+
+
+PCEPlib compliance
+==================
+
+The PCEPlib implements version 1 of the PCEP protocol, according to `RFC 5440 <https://tools.ietf.org/html/rfc5440>`_.
+
+Additionally, the PCEPlib implements the following PCEP extensions:
+
+- `RFC 8281 <https://tools.ietf.org/html/rfc8281>`_ PCEP Extensions for PCE-Initiated LSP Setup in a Stateful PCE Model
+- `RFC 8231 <https://tools.ietf.org/html/rfc8231>`_ Extensions for Stateful PCE
+- `RFC 8232 <https://tools.ietf.org/html/rfc8232>`_ Optimizations of Label Switched Path State Synchronization Procedures for a Stateful PCE
+- `RFC 8282 <https://tools.ietf.org/html/rfc8282>`_ Extensions to PCEP for Inter-Layer MPLS and GMPLS Traffic Engineering
+- `RFC 8408 <https://tools.ietf.org/html/rfc8408>`_ Conveying Path Setup Type in PCE Communication Protocol (PCEP) Messages
+- `draft-ietf-pce-segment-routing-07 <https://tools.ietf.org/html/draft-ietf-pce-segment-routing-07>`_,
+ `draft-ietf-pce-segment-routing-16 <https://tools.ietf.org/html/draft-ietf-pce-segment-routing-16>`_,
+ `RFC 8664 <https://tools.ietf.org/html/rfc8664>`_ Segment routing protocol extensions
+- `RFC 7470 <https://tools.ietf.org/html/rfc7470>`_ Conveying Vendor-Specific Constraints
+- `Draft-ietf-pce-association-group-10 <https://tools.ietf.org/html/draft-ietf-pce-association-group-10>`_
+ Establishing Relationships Between Sets of Label Switched Paths
+- `Draft-barth-pce-segment-routing-policy-cp-04 <https://tools.ietf.org/html/draft-barth-pce-segment-routing-policy-cp-04>`_
+ Segment Routing Policy Candidate Paths
+
+
+PCEPlib Architecture
+====================
+
+The PCEPlib is comprised of the following modules, each of which will be
+detailed in the following sections.
+
+- **pcep_messages**
+ - PCEP messages, objects, and TLVs implementations
+
+- **pcep_pcc**
+ - PCEPlib public PCC API with a sample PCC binary
+
+- **pcep_session_logic**
+ - PCEP Session handling
+
+- **pcep_socket_comm**
+ - Socket communications
+
+- **pcep_timers**
+ - PCEP timers
+
+- **pcep_utils**
+ - Internal utilities used by the PCEPlib modules.
+
+The interaction of these modules can be seen in the following diagram.
+
+PCEPlib Architecture:
+
+.. image:: images/PCEPlib_design.jpg
+
+
+PCEP Session Logic library
+--------------------------
+
+The PCEP Session Logic library orchestrates calls to the rest of the PCC libraries.
+
+PCEP Session Logic library responsibilities:
+
+- Handle messages received from "PCEP Socket Comm"
+- Create and manage "PCEP Session" objects
+- Set timers and react to timer expirations
+- Manage counters
+
+The PCEP Session Logic library will have 2 main triggers controlled by a
+pthread condition variable:
+
+- Timer expirations - ``on_timer_expire()`` callback
+- Messages received from PCEP SocketComm - ``message_received()`` callback
+
+The counters are created and managed using the ``pcep_utils/pcep_utils_counters.h``
+counters library. The following are the different counter groups managed:
+
+- **COUNTER_SUBGROUP_ID_RX_MSG**
+- **COUNTER_SUBGROUP_ID_TX_MSG**
+- **COUNTER_SUBGROUP_ID_RX_OBJ**
+- **COUNTER_SUBGROUP_ID_TX_OBJ**
+- **COUNTER_SUBGROUP_ID_RX_SUBOBJ**
+- **COUNTER_SUBGROUP_ID_TX_SUBOBJ**
+- **COUNTER_SUBGROUP_ID_RX_RO_SR_SUBOBJ**
+- **COUNTER_SUBGROUP_ID_TX_RO_SR_SUBOBJ**
+- **COUNTER_SUBGROUP_ID_RX_TLV**
+- **COUNTER_SUBGROUP_ID_TX_TLV**
+- **COUNTER_SUBGROUP_ID_EVENT**
+
+The counters can be obtained and reset as explained later in the PCEPlib PCC API.
+
+PCEP Socket Comm library
+------------------------
+
+PCEP communication can be configured to be handled internally in this simple
+library. When this library is instantiated by the PCEP Session Logic, callbacks
+are provided to handle received messages and error conditions.
+
+The following diagram illustrates how the library works.
+
+PCEPlib Socket Comm:
+
+.. image:: images/PCEPlib_socket_comm.jpg
+
+
+PCEP Timers library
+-------------------
+
+Timers can be configured to be handled internally by this library. When this
+library is instantiated by the PCEP Session Logic, callbacks are provided to
+handle timer expirations. The following timers are implemented and handled,
+according to `RFC 5440 <https://tools.ietf.org/html/rfc5440>`_.
+
+- Open KeepWait (fixed at 60 seconds)
+ - Set once the PCC sends an Open, and if it expires before receiving a KeepAlive or PCErr, then the PCC should send a PCErr and close the TCP connection
+
+- Keepalive timer
+ - How often the PCC should send Keepalive messages to the PCE (and vice-versa)
+ - The timer will be reset after any message is sent: any message serves as a Keepalive
+
+- DeadTimer
+ - If no messages are received before expiration, the session is declared as down
+ - Reset every time any message is received
+
+- PCReq request timer
+ - How long the PCC waits for the PCE to reply to PCReq messages.
+
+PCEPlib Timers:
+
+.. image:: images/PCEPlib_timers.jpg
+
+
+PCEP Messages library
+---------------------
+
+The PCEP Messages library has all of the implemented PCEP messages, objects,
+TLVs, and related functionality.
+
+The following header files can be used for creating and handling received PCEP
+entities.
+
+- pcep-messages.h
+- pcep-objects.h
+- pcep-tlvs.h
+
+
+PCEP Messages
++++++++++++++
+
+The following PCEP messages can be created and received:
+
+- ``struct pcep_message* pcep_msg_create_open(...);``
+- ``struct pcep_message* pcep_msg_create_open_with_tlvs(...);``
+- ``struct pcep_message* pcep_msg_create_request(...);``
+- ``struct pcep_message* pcep_msg_create_request_ipv6(...);``
+- ``struct pcep_message* pcep_msg_create_reply(...);``
+- ``struct pcep_message* pcep_msg_create_close(...);``
+- ``struct pcep_message* pcep_msg_create_error(...);``
+- ``struct pcep_message* pcep_msg_create_error_with_objects(...);``
+- ``struct pcep_message* pcep_msg_create_keepalive(...);``
+- ``struct pcep_message* pcep_msg_create_report(...);``
+- ``struct pcep_message* pcep_msg_create_update(...);``
+- ``struct pcep_message* pcep_msg_create_initiate(...);``
+
+Refer to ``pcep_messages/include/pcep-messages.h`` and the API section
+below for more details.
+
+
+PCEP Objects
+++++++++++++
+
+The following PCEP objects can be created and received:
+
+- ``struct pcep_object_open* pcep_obj_create_open(...);``
+- ``struct pcep_object_rp* pcep_obj_create_rp(...);``
+- ``struct pcep_object_notify* pcep_obj_create_notify(...);``
+- ``struct pcep_object_nopath* pcep_obj_create_nopath(...);``
+- ``struct pcep_object_association_ipv4* pcep_obj_create_association_ipv4(...);``
+- ``struct pcep_object_association_ipv6* pcep_obj_create_association_ipv6(...);``
+- ``struct pcep_object_endpoints_ipv4* pcep_obj_create_endpoint_ipv4(...);``
+- ``struct pcep_object_endpoints_ipv6* pcep_obj_create_endpoint_ipv6(...);``
+- ``struct pcep_object_bandwidth* pcep_obj_create_bandwidth(...);``
+- ``struct pcep_object_metric* pcep_obj_create_metric(...);``
+- ``struct pcep_object_lspa* pcep_obj_create_lspa(...);``
+- ``struct pcep_object_svec* pcep_obj_create_svec(...);``
+- ``struct pcep_object_error* pcep_obj_create_error(...);``
+- ``struct pcep_object_close* pcep_obj_create_close(...);``
+- ``struct pcep_object_srp* pcep_obj_create_srp(...);``
+- ``struct pcep_object_lsp* pcep_obj_create_lsp(...);``
+- ``struct pcep_object_vendor_info* pcep_obj_create_vendor_info(...);``
+- ``struct pcep_object_ro* pcep_obj_create_ero(...);``
+- ``struct pcep_object_ro* pcep_obj_create_rro(...);``
+- ``struct pcep_object_ro* pcep_obj_create_iro(...);``
+- ``struct pcep_ro_subobj_ipv4* pcep_obj_create_ro_subobj_ipv4(...);``
+- ``struct pcep_ro_subobj_ipv6* pcep_obj_create_ro_subobj_ipv6(...);``
+- ``struct pcep_ro_subobj_unnum* pcep_obj_create_ro_subobj_unnum(...);``
+- ``struct pcep_ro_subobj_32label* pcep_obj_create_ro_subobj_32label(...);``
+- ``struct pcep_ro_subobj_asn* pcep_obj_create_ro_subobj_asn(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_nonai(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_ipv4_node(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_ipv6_node(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_ipv4_adj(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_ipv6_adj(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_unnumbered_ipv4_adj(...);``
+- ``struct pcep_ro_subobj_sr* pcep_obj_create_ro_subobj_sr_linklocal_ipv6_adj(...);``
+
+Refer to ``pcep_messages/include/pcep-objects.h`` and the API section
+below for more details.
+
+
+PCEP TLVs
++++++++++
+
+The following PCEP TLVs (Tag, Length, Value) can be created and received:
+
+- Open Object TLVs
+ - ``struct pcep_object_tlv_stateful_pce_capability* pcep_tlv_create_stateful_pce_capability(...);``
+ - ``struct pcep_object_tlv_lsp_db_version* pcep_tlv_create_lsp_db_version(...);``
+ - ``struct pcep_object_tlv_speaker_entity_identifier* pcep_tlv_create_speaker_entity_id(...);``
+ - ``struct pcep_object_tlv_path_setup_type* pcep_tlv_create_path_setup_type(...);``
+ - ``struct pcep_object_tlv_path_setup_type_capability* pcep_tlv_create_path_setup_type_capability(...);``
+ - ``struct pcep_object_tlv_sr_pce_capability* pcep_tlv_create_sr_pce_capability(...);``
+
+- LSP Object TLVs
+ - ``struct pcep_object_tlv_ipv4_lsp_identifier* pcep_tlv_create_ipv4_lsp_identifiers(...);``
+ - ``struct pcep_object_tlv_ipv6_lsp_identifier* pcep_tlv_create_ipv6_lsp_identifiers(...);``
+ - ``struct pcep_object_tlv_symbolic_path_name* pcep_tlv_create_symbolic_path_name(...);``
+ - ``struct pcep_object_tlv_lsp_error_code* pcep_tlv_create_lsp_error_code(...);``
+ - ``struct pcep_object_tlv_rsvp_error_spec* pcep_tlv_create_rsvp_ipv4_error_spec(...);``
+ - ``struct pcep_object_tlv_rsvp_error_spec* pcep_tlv_create_rsvp_ipv6_error_spec(...);``
+ - ``struct pcep_object_tlv_nopath_vector* pcep_tlv_create_nopath_vector(...);``
+ - ``struct pcep_object_tlv_vendor_info* pcep_tlv_create_vendor_info(...);``
+ - ``struct pcep_object_tlv_arbitrary* pcep_tlv_create_tlv_arbitrary(...);``
+
+- SRPAG (SR Association Group) TLVs
+ - ``struct pcep_object_tlv_srpag_pol_id *pcep_tlv_create_srpag_pol_id_ipv4(...);``
+ - ``struct pcep_object_tlv_srpag_pol_id *pcep_tlv_create_srpag_pol_id_ipv6(...);``
+ - ``struct pcep_object_tlv_srpag_pol_name *pcep_tlv_create_srpag_pol_name(...);``
+ - ``struct pcep_object_tlv_srpag_cp_id *pcep_tlv_create_srpag_cp_id(...);``
+ - ``struct pcep_object_tlv_srpag_cp_pref *pcep_tlv_create_srpag_cp_pref(...);``
+
+Refer to ``pcep_messages/include/pcep-tlvs.h`` and the API section
+below for more details.
+
+
+PCEP PCC
+--------
+
+This module has a Public PCC API library (explained in detail later) and a
+sample PCC binary. The APIs in this library encapsulate other PCEPlib libraries
+for simplicity. With this API, the PCEPlib PCC can be started and stopped, and
+the PCEPlib event queue can be accessed. The PCEP Messages library is not
+encapsulated, and should be used directly.
+
+
+Internal Dependencies
+---------------------
+
+The following diagram illustrates the internal PCEPlib library dependencies.
+
+PCEPlib internal dependencies:
+
+.. image:: images/PCEPlib_internal_deps.jpg
+
+
+External Dependencies
+---------------------
+
+Originally the PCEPlib was based on the open source `libpcep project <https://www.acreo.se/open-software-libpcep>`_,
+but that dependency has been reduced to just one source file (pcep-tools.[ch]).
+
+
+PCEPlib Threading model
+-----------------------
+
+The PCEPlib can be run in stand-alone mode whereby a thread is launched for
+timers and socket comm, as is illustrated in the following diagram.
+
+PCEPlib Threading model:
+
+.. image:: images/PCEPlib_threading_model.jpg
+
+The PCEPlib can also be configured to use external timer and socket
+infrastructure, like the FRR threads and tasks. In this case, no internal
+threads are launched for timers and socket comm, as is illustrated in the
+following diagram.
+
+PCEPlib Threading model with external infra:
+
+.. image:: images/PCEPlib_threading_model_frr_infra.jpg
+
+
+Building
+--------
+
+The autotools build system is used and is integrated with the FRR build system.
+
+Testing
+-------
+
+The Unit Tests for an individual library are executed with the ``make check``
+command. The Unit Test binary will be written to the project ``build`` directory.
+All Unit Tests are executed with Valgrind, and any memory issues reported by
+Valgrind will cause the Unit Test to fail.
+
+
+PCEPlib PCC API
+===============
+
+The following sections describe the PCEPlib PCC API.
+
+
+PCEPlib PCC Initialization and Destruction
+------------------------------------------
+
+The PCEPlib can be initialized to handle memory, timers, and socket comm
+internally in what is called stand-alone mode, or with an external
+infrastructure, like FRR.
+
+PCEPlib PCC Initialization and Destruction in stand-alone mode
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+PCEPlib PCC initialization and destruction functions:
+
+- ``bool initialize_pcc();``
+- ``bool initialize_pcc_wait_for_completion();``
+- ``bool destroy_pcc();``
+
+The PCC can be initialized with either ``initialize_pcc()`` or
+``initialize_pcc_wait_for_completion()``.
+
+- ``initialize_pcc_wait_for_completion()`` blocks until ``destroy_pcc()``
+ is called from a separate pthread.
+- ``initialize_pcc()`` is non-blocking and will be stopped when
+ ``destroy_pcc()`` is called.
+
+Both initialize functions will launch 3 pthreads:
+
+- 1 Timer pthread
+- 1 SocketComm pthread
+- 1 SessionLogic pthread
+
+When ``destroy_pcc()`` is called, all pthreads will be stopped and all
+resources will be released.
+
+All 3 functions return true upon success, and false otherwise.
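+
+A minimal stand-alone usage sketch (the header name ``pcep_pcc_api.h`` is an
+assumption; session creation and the event loop are elided):
+
+.. code-block:: c
+
+ #include <stdbool.h>
+ #include <stdlib.h>
+ #include "pcep_pcc_api.h" /* assumed public PCC API header */
+
+ int main(void)
+ {
+     /* Launches the Timer, SocketComm and SessionLogic pthreads. */
+     if (!initialize_pcc())
+         return EXIT_FAILURE;
+
+     /* ... create sessions and consume the event queue here ... */
+
+     /* Stops all pthreads and releases all resources. */
+     if (!destroy_pcc())
+         return EXIT_FAILURE;
+
+     return EXIT_SUCCESS;
+ }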
+
+PCEPlib PCC Initialization and Destruction with FRR infrastructure
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+PCEPlib PCC initialization and destruction functions:
+
+- ``bool initialize_pcc_infra(struct pceplib_infra_config *infra_config);``
+- ``bool destroy_pcc();``
+
+The ``pceplib_infra_config`` struct has the following fields:
+
+- **void *pceplib_infra_mt**
+ - FRR Memory type pointer for infra related memory management
+
+- **void *pceplib_messages_mt**
+ - FRR Memory type pointer for PCEP messages related memory management
+
+- **pceplib_malloc_func mfunc**
+ - FRR malloc function pointer
+
+- **pceplib_calloc_func cfunc**
+ - FRR calloc function pointer
+
+- **pceplib_realloc_func rfunc**
+ - FRR realloc function pointer
+
+- **pceplib_strdup_func sfunc**
+ - FRR strdup function pointer
+
+- **pceplib_free_func ffunc**
+ - FRR free function pointer
+
+- **void *external_infra_data**
+ - FRR data used by FRR timers and sockets infrastructure
+
+- **ext_timer_create timer_create_func**
+ - FRR timer create function pointer
+
+- **ext_timer_cancel timer_cancel_func**
+ - FRR timer cancel function pointer
+
+- **ext_socket_write socket_write_func**
+ - FRR socket write function pointer, indicating fd is ready to be written to
+
+- **ext_socket_read socket_read_func**
+ - FRR socket read function pointer, indicating fd is ready to be read from
+
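+Filling this structure could look as follows (a sketch: every ``my_*`` symbol
+is a placeholder for the host application's own implementation; the field
+names are those listed above):
+
+.. code-block:: c
+
+ struct pceplib_infra_config cfg = {
+     .pceplib_infra_mt = my_infra_memtype,
+     .pceplib_messages_mt = my_messages_memtype,
+     .mfunc = my_malloc,
+     .cfunc = my_calloc,
+     .rfunc = my_realloc,
+     .sfunc = my_strdup,
+     .ffunc = my_free,
+     .external_infra_data = my_context,
+     .timer_create_func = my_timer_create,
+     .timer_cancel_func = my_timer_cancel,
+     .socket_write_func = my_socket_write,
+     .socket_read_func = my_socket_read,
+ };
+
+ if (!initialize_pcc_infra(&cfg)) {
+     /* handle initialization failure */
+ }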
+
+PCEPlib PCC configuration
+-------------------------
+
+PCEPlib PCC configuration functions:
+
+- ``pcep_configuration *create_default_pcep_configuration();``
+- ``void destroy_pcep_configuration(pcep_configuration *config);``
+
+A ``pcep_configuration`` object with default values is created with
+``create_default_pcep_configuration()``. These values can be tailored to
+specific use cases.
+
+Created ``pcep_configuration`` objects are destroyed with
+``destroy_pcep_configuration()``.
+
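+For example (a sketch; the two parameters set here are documented below, with
+the dead timer following the recommended 4 * keepalive value):
+
+.. code-block:: c
+
+ pcep_configuration *config = create_default_pcep_configuration();
+
+ /* Tailor the defaults to the use case. */
+ config->keep_alive_seconds = 30;
+ config->dead_timer_seconds = 120;
+
+ /* ... use the configuration to set up sessions ... */
+
+ destroy_pcep_configuration(config);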
+
+PCEPlib PCC configuration parameters
+++++++++++++++++++++++++++++++++++++
+
+The ``pcep_configuration`` object is defined in ``pcep_session_logic/include/pcep_session_logic.h``
+The attributes in the ``pcep_configuration`` object are detailed as follows.
+
+PCEP Connection parameters:
+
+- **dst_pcep_port**
+ - Defaults to 0, in which case the default PCEP TCP destination port
+ 4189 will be used.
+ - Set to use a specific PCEP TCP destination port.
+
+- **src_pcep_port**
+ - Defaults to 0, in which case the default PCEP TCP source port
+ 4189 will be used.
+ - Set to use a specific PCEP TCP source port.
+
+- **Source IP**
+ - Defaults to IPv4 INADDR_ANY
+ - Set **src_ip.src_ipv4** and **is_src_ipv6=false** to set the source IPv4.
+ - Set **src_ip.src_ipv6** and **is_src_ipv6=true** to set the source IPv6.
+
+- **socket_connect_timeout_millis**
+ - Maximum amount of time to wait to connect to the PCE TCP socket
+ before failing, in milliseconds.
+
+PCEP Versioning:
+
+- **pcep_msg_versioning->draft_ietf_pce_segment_routing_07**
+ - Defaults to false, in which case draft 16 versioning will be used.
+ - Set to true to use draft 07 versioning.
+
+PCEP Open Message Parameters:
+
+- **keep_alive_seconds**
+ - Sent to PCE in PCEP Open Msg
+ - Recommended value = 30, Minimum value = 1
+ - Disabled by setting value = 0
+
+- **dead_timer_seconds**
+ - Sent to PCE in PCEP Open Msg
+ - Recommended value = 4 * keepalive timer value
+
+- Supported value ranges for PCEP Open Message received from the PCE
+ - **min_keep_alive_seconds**, **max_keep_alive_seconds**
+ - **min_dead_timer_seconds**, **max_dead_timer_seconds**
+
+- **request_time_seconds**
+ - When a PCC sends a PcReq to a PCE, the amount of time a PCC will
+ wait for a PcRep reply from the PCE.
+
+- **max_unknown_requests**
+ - If a PCC/PCE receives PCRep/PCReq messages with unknown requests
+ at a rate equal or greater than MAX-UNKNOWN-REQUESTS per minute,
+ the PCC/PCE MUST send a PCEP CLOSE message.
+ - Recommended value = 5
+
+- **max_unknown_messages**
+ - If a PCC/PCE receives unrecognized messages at a rate equal or
+ greater than MAX-UNKNOWN-MESSAGES per minute, the PCC/PCE MUST
+ send a PCEP CLOSE message
+ - Recommended value = 5
+
+Stateful PCE Capability TLV configuration parameters (RFC 8231, 8232, 8281, and
+draft-ietf-pce-segment-routing-16):
+
+- **support_stateful_pce_lsp_update**
+ - If this flag is true, then a Stateful PCE Capability TLV will
+ be added to the PCEP Open object, with the LSP Update Capability
+ U-flag set true.
+ - The rest of these parameters are used to configure the Stateful
+ PCE Capability TLV
+
+- **support_pce_lsp_instantiation**
+ - Sets the I-flag true, indicating the PCC allows instantiation
+ of an LSP by a PCE.
+
+- **support_include_db_version**
+ - Sets the S-bit true, indicating the PCC will include the
+ LSP-DB-VERSION TLV in each LSP object. See lsp_db_version below.
+
+- **support_lsp_triggered_resync**
+ - Sets the T-bit true, indicating the PCE can trigger resynchronization
+ of LSPs at any point in the life of the session.
+
+- **support_lsp_delta_sync**
+ - Sets the D-bit true, indicating the PCEP speaker allows incremental
+ (delta) State Synchronization.
+
+- **support_pce_triggered_initial_sync**
+ - Sets the F-bit true, indicating the PCE SHOULD trigger initial (first)
+ State Synchronization
+
+LSP DB Version TLV configuration parameters:
+
+- **lsp_db_version**
+ - If this parameter has a value other than 0, and the above
+ support_include_db_version flag is true, then an LSP DB
+ Version TLV will be added to the PCEP Open object.
+ - This parameter should only be set if LSP-DB survived a restart
+ and is available.
+ - This value will be copied over to the pcep_session upon initialization.
+
+SR PCE Capability sub-TLV configuration parameters (draft-ietf-pce-segment-routing-16):
+
+- **support_sr_te_pst**
+ - If this flag is true, then an SR PCE Capability sub-TLV will be
+ added to a Path Setup type Capability TLV, which will be added
+ to the PCEP Open object.
+ - The PST used in the Path Setup type Capability will be 1,
+ indicating the Path is setup using Segment Routing Traffic Engineering.
+
+Only set the following fields if the **support_sr_te_pst** flag is true.
+
+- **pcc_can_resolve_nai_to_sid**
+ - Sets the N-flag true, indicating that the PCC is capable of resolving
+ a Node or Adjacency Identifier to a SID
+
+- **max_sid_depth**
+ - If set other than 0, then the PCC imposes a limit on the Maximum
+ SID depth.
+ - If this parameter is other than 0, then the X bit will be true,
+ and the parameter value will be set in the MSD field.
+
+
+PCEPlib PCC connections
+-----------------------
+
+PCEPlib PCC connect and disconnect functions:
+
+- ``pcep_session *connect_pce(pcep_configuration *config, struct in_addr *pce_ip);``
+- ``pcep_session *connect_pce_ipv6(pcep_configuration *config, struct in6_addr *pce_ip);``
+- ``void disconnect_pce(pcep_session *session);``
+
+When connecting to a PCE, a ``pcep_session`` will be returned on success, NULL
+otherwise.
+
+Refer to the above PCC configuration parameters section for setting the source
+and destination PCEP TCP ports, and the source IP address and version.
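+
+A minimal sketch of connecting over IPv4, assuming ``config`` was created as
+described above::
+
+  #include <arpa/inet.h>
+
+  struct in_addr pce_ip;
+
+  inet_pton(AF_INET, "192.0.2.1", &pce_ip);
+
+  pcep_session *session = connect_pce(config, &pce_ip);
+  if (session == NULL) {
+      /* connection failed */
+  }
+
+  /* ... exchange messages, poll the event queue ... */
+
+  disconnect_pce(session);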
+
+
+PCEP Messages, Objects, and TLVs
+--------------------------------
+
+The PCEP messages, objects, and TLVs created in the PCEPlib are high-level API
+structures, meaning they need to be encoded before being sent on the wire, and
+the raw data received needs to be decoded into these structures. This makes
+using these objects much easier for the library consumer, since they do not
+need to know the detailed raw format of the PCEP entities.
+
+
+PCEP Messages
++++++++++++++
+
+Received messages (in the ``pcep_event`` explained below) are of type
+``pcep_message``, which have the following fields:
+
+- ``struct pcep_message_header *msg_header;``
+ - Defines the PCEP version and message type
+
+- ``double_linked_list *obj_list;``
+ - A double linked list of the message objects
+ - Each entry is a pointer to a ``struct pcep_object_header``, and
+ using the ``object_class`` and ``object_type`` fields, the pointer
+ can be cast to the appropriate object structure to access the
+ rest of the object fields
+
+- ``uint8_t *encoded_message;``
+ - This field is only populated for received messages or once the
+ ``pcep_encode_message()`` function has been called on the message.
+ - This field is a pointer to the raw PCEP data for the entire
+ message, including all objects and TLVs.
+
+- ``uint16_t encoded_message_length;``
+ - This field is only populated for received messages or once the
+ ``pcep_encode_message()`` function has been called on the message.
+ - This field is the length of the entire raw message, including
+ all objects and TLVs.
+ - This field is in host byte order.
+
+
+PCEP Objects
+++++++++++++
+
+A PCEP message has a double linked list of pointers to ``struct pcep_object_header``
+structures, which have the following fields:
+
+- ``enum pcep_object_classes object_class;``
+- ``enum pcep_object_types object_type;``
+- ``bool flag_p;``
+ - PCC Processing rule bit: When set, the object MUST be taken into
+ account, when cleared the object is optional
+
+- ``bool flag_i;``
+ - PCE Ignore bit: indicates to a PCC whether or not an optional
+ object was processed
+
+- ``double_linked_list *tlv_list;``
+ - A double linked list of the object TLVs
+ - Each entry is a pointer to a ``struct pcep_object_tlv_header``, and
+ using the TLV type field, the pointer can be cast to the
+ appropriate TLV structure to access the rest of the TLV fields
+
+- ``uint8_t *encoded_object;``
+ - This field is only populated for received objects or once the
+ ``pcep_encode_object()`` (called by ``pcep_encode_message()``)
+ function has been called on the object.
+ - Pointer into the encoded_message field (from the pcep_message)
+ where the raw object PCEP data starts.
+
+- ``uint16_t encoded_object_length;``
+ - This field is only populated for received objects or once the
+ ``pcep_encode_object()`` (called by ``pcep_encode_message()``)
+ function has been called on the object.
+ - This field is the length of the entire raw object.
+ - This field is in host byte order.
+
+The object class and type can be used to cast the ``struct pcep_object_header``
+pointer to the appropriate object structure so the specific object fields can
+be accessed.
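+
+For example, a sketch of walking a received message's objects; this assumes
+the double linked list exposes ``head``, ``next_node`` and ``data`` fields,
+and uses the LSP object class for illustration::
+
+  double_linked_list_node *node;
+
+  for (node = msg->obj_list->head; node != NULL; node = node->next_node) {
+      struct pcep_object_header *obj = node->data;
+
+      if (obj->object_class == PCEP_OBJ_CLASS_LSP) {
+          struct pcep_object_lsp *lsp = (struct pcep_object_lsp *)obj;
+          /* access the LSP-specific fields via lsp */
+      }
+  }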
+
+
+PCEP TLVs
++++++++++
+
+A PCEP object has a double linked list of pointers to ``struct pcep_object_tlv_header``
+structures, which have the following fields:
+
+- ``enum pcep_object_tlv_types type;``
+- ``uint8_t *encoded_tlv;``
+ - This field is only populated for received TLVs or once the
+ ``pcep_encode_tlv()`` (called by ``pcep_encode_message()``)
+ function has been called on the TLV.
+ - Pointer into the encoded_message field (from the pcep_message)
+ where the raw TLV PCEP data starts.
+
+- ``uint16_t encoded_tlv_length;``
+ - This field is only populated for received TLVs or once the
+ ``pcep_encode_tlv()`` (called by ``pcep_encode_message()``)
+ function has been called on the TLV.
+ - This field is the length of the entire raw TLV
+ - This field is in host byte order.
+
+
+Memory management
++++++++++++++++++
+
+Any of the PCEPlib Message Library functions that receive a pointer to a
+``double_linked_list``, ``pcep_object_header``, or ``pcep_object_tlv_header``,
+transfer the ownership of the entity to the PCEPlib. The memory will be freed
+internally when the encapsulating structure is freed. If the memory for any of
+these is freed by the caller, then there will be a double memory free error
+when the memory is freed internally in the PCEPlib.
+
+Any of the PCEPlib Message Library functions that receive either a pointer to a
+``struct in_addr`` or ``struct in6_addr`` will allocate memory for the IP
+address internally and copy the IP address. It is the responsibility of the
+caller to manage the memory for the IP address passed into the PCEPlib Message
+Library functions.
+
+For messages received via the event queue (explained below), the message will
+be freed when the event is freed by calling ``destroy_pcep_event()``.
+
+When sending messages, the message will be freed internally in the PCEPlib
+when the ``send_message()`` ``pcep_pcc`` API function is called with the
+``free_after_send`` flag set true.
+
+To manually delete a message, call the ``pcep_msg_free_message()`` function.
+Internally, this will call ``pcep_obj_free_object()`` and ``pcep_obj_free_tlv()``
+appropriately.
+
+
+Sending a PCEP Report message
+-----------------------------
+
+This section shows how to send a PCEP Report message from the PCC to the PCE,
+and serves as an example of how to send other messages. Refer to the sample
+PCC binary located in ``pcep_pcc/src/pcep_pcc.c`` for code examples of sending
+a PCEP Report message.
+
+The Report message must have at least an SRP, LSP, and ERO object.
+
+The PCEP Report message objects are created with the following APIs:
+
+- ``struct pcep_object_srp *pcep_obj_create_srp(...);``
+- ``struct pcep_object_lsp *pcep_obj_create_lsp(...);``
+- ``struct pcep_object_ro *pcep_obj_create_ero(...);``
+ - Create ero subobjects with the ``pcep_obj_create_ro_subobj_*(...);`` functions
+
+A PCEP Report message is created with the following API:
+
+- ``struct pcep_header *pcep_msg_create_report(double_linked_list *report_object_list);``
+
+A PCEP Report message is sent with the following API:
+
+- ``void send_message(pcep_session *session, pcep_message *message, bool free_after_send);``
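+
+Putting it together, a sketch of building and sending a Report; the object
+constructor arguments are elided here, as in the API listing above::
+
+  double_linked_list *report_list = dll_initialize();
+
+  dll_append(report_list, pcep_obj_create_srp(/* ... */));
+  dll_append(report_list, pcep_obj_create_lsp(/* ... */));
+  dll_append(report_list, pcep_obj_create_ero(/* ... */));
+
+  /* free_after_send = true: the PCEPlib frees the message internally */
+  send_message(session, pcep_msg_create_report(report_list), true);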
+
+
+PCEPlib Received event queue
+----------------------------
+
+PCEP events and messages of interest to the PCEPlib consumer will be stored
+internally in a message queue for retrieval.
+
+The following are the event types:
+
+- **MESSAGE_RECEIVED**
+- **PCE_CLOSED_SOCKET**
+- **PCE_SENT_PCEP_CLOSE**
+- **PCE_DEAD_TIMER_EXPIRED**
+- **PCE_OPEN_KEEP_WAIT_TIMER_EXPIRED**
+- **PCC_CONNECTED_TO_PCE**
+- **PCC_CONNECTION_FAILURE**
+- **PCC_PCEP_SESSION_CLOSED**
+- **PCC_RCVD_INVALID_OPEN**
+- **PCC_SENT_INVALID_OPEN**
+- **PCC_RCVD_MAX_INVALID_MSGS**
+- **PCC_RCVD_MAX_UNKOWN_MSGS**
+
+The following PCEP messages will not be posted on the message queue, as they
+are handled internally in the library:
+
+- **Open**
+- **Keep Alive**
+- **Close**
+
+Received event queue API:
+
+- ``bool event_queue_is_empty();``
+ - Returns true if the queue is empty, false otherwise
+
+- ``uint32_t event_queue_num_events_available();``
+ - Return the number of events on the queue, 0 if empty
+
+- ``struct pcep_event *event_queue_get_event();``
+ - Return the next event on the queue, NULL if empty
+ - The ``message`` pointer will only be non-NULL if ``event_type``
+ is ``MESSAGE_RECEIVED``
+
+- ``void destroy_pcep_event(struct pcep_event *event);``
+ - Free the PCEP Event resources, including the PCEP message if present
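+
+A sketch of draining the queue, e.g. from a periodic task or a dedicated
+polling loop::
+
+  while (!event_queue_is_empty()) {
+      struct pcep_event *event = event_queue_get_event();
+
+      if (event->event_type == MESSAGE_RECEIVED) {
+          /* handle event->message */
+      }
+
+      destroy_pcep_event(event);
+  }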
+
+
+PCEPlib Counters
+----------------
+
+The PCEPlib counters are managed in the ``pcep_session_logic`` library, and can
+be accessed in the ``pcep_session_counters`` field of the ``pcep_session`` structure.
+There are 2 API functions to manage the counters:
+
+- ``void dump_pcep_session_counters(pcep_session *session);``
+ - Dump all of the counters to the logs
+
+- ``void reset_pcep_session_counters(pcep_session *session);``
+
diff --git a/doc/developer/process-architecture.rst b/doc/developer/process-architecture.rst
new file mode 100644
index 0000000..06ee6a3
--- /dev/null
+++ b/doc/developer/process-architecture.rst
@@ -0,0 +1,328 @@
+.. _process-architecture:
+
+Process Architecture
+====================
+
+FRR is a suite of daemons that serve different functions. This document
+describes the internal architecture of these daemons, focusing on their
+general design patterns, and especially on how threads are used in the
+daemons that use them.
+
+Overview
+--------
+The fundamental pattern used in FRR daemons is an `event loop
+<https://en.wikipedia.org/wiki/Event_loop>`_. Some daemons use `kernel threads
+<https://en.wikipedia.org/wiki/Thread_(computing)#Kernel_threads>`_. In these
+daemons, each kernel thread runs its own event loop. The event loop
+implementation is constructed to be thread safe and to allow threads other than
+its owning thread to schedule events on it. The rest of this document describes
+these two designs in detail.
+
+Terminology
+-----------
+Because this document describes the architecture for kernel threads as well as
+the event system, a digression on terminology is in order here.
+
+Historically Quagga's loop system was viewed as an implementation of userspace
+threading. Because of this design choice, the names for various datastructures
+within the event system are variations on the term "thread". The primary
+datastructure that holds the state of an event loop in this system is called a
+"threadmaster". Events scheduled on the event loop - what would today be called
+an 'event' or 'task' in systems such as libevent - are called "threads" and the
+datastructure for them is ``struct event``. To add to the confusion, these
+"threads" have various types, one of which is "event". To hopefully avoid some
+of this confusion, this document refers to these "threads" as a 'task' except
+where the datastructures are explicitly named. When they are explicitly named,
+they will be formatted ``like this`` to differentiate from the conceptual
+names. When speaking of kernel threads, the term used will be "pthread" since
+FRR's kernel threading implementation uses the POSIX threads API.
+
+.. This should be broken into its own document under :ref:`libfrr`
+.. _event-architecture:
+
+Event Architecture
+------------------
+This section presents a brief overview of the event model as currently
+implemented in FRR. This doc should be expanded and broken off into its own
+section. For now it provides basic information necessary to understand the
+interplay between the event system and kernel threads.
+
+The core event system is implemented in :file:`lib/event.c` and
+:file:`lib/frrevent.h`. The primary
+structure is ``struct event_loop``, hereafter referred to as a
+``threadmaster``. A ``threadmaster`` is a global state object, or context, that
+holds all the tasks currently pending execution as well as statistics on tasks
+that have already executed. The event system is driven by adding tasks to this
+data structure and then calling a function to retrieve the next task to
+execute. At initialization, a daemon will typically create one
+``threadmaster``, add a small set of initial tasks, and then run a loop to
+fetch each task and execute it.
+
+These tasks have various types corresponding to their general action. The types
+are given by integer macros in :file:`frrevent.h` and are:
+
+``EVENT_READ``
+ Task which waits for a file descriptor to become ready for reading and then
+ executes.
+
+``EVENT_WRITE``
+ Task which waits for a file descriptor to become ready for writing and then
+ executes.
+
+``EVENT_TIMER``
+ Task which executes after a certain amount of time has passed since it was
+ scheduled.
+
+``EVENT_EVENT``
+ Generic task that executes with high priority and carries an arbitrary
+ integer indicating the event type to its handler. These are commonly used to
+ implement the finite state machines typically found in routing protocols.
+
+``EVENT_READY``
+ Type used internally for tasks on the ready queue.
+
+``EVENT_UNUSED``
+ Type used internally for ``struct event`` objects that aren't being used.
+ The event system pools ``struct event`` to avoid heap allocations; this is
+ the type they have when they're in the pool.
+
+``EVENT_EXECUTE``
+ Just before a task is run its type is changed to this. This is used to show
+ ``X`` as the type in the output of :clicmd:`show thread cpu`.
+
+The programmer never has to work with these types explicitly. Each type of task
+is created and queued via special-purpose functions (actually macros, but
+irrelevant for the time being) for the specific type. For example, to add an
+``EVENT_READ`` task, you would call
+
+::
+
+ event_add_read(struct event_loop *master, void (*handler)(struct event *), void *arg, int fd, struct event **ref);
+
+The ``struct event`` is then created and added to the appropriate internal
+datastructure within the ``threadmaster``. Note that the ``READ`` and
+``WRITE`` tasks are independent - a ``READ`` task only tests for
+readability, for example.
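+
+A sketch of a read handler; ``my_ctx``, ``my_read_handler`` and the global
+``master`` are hypothetical names::
+
+   static void my_read_handler(struct event *ev)
+   {
+           struct my_ctx *ctx = EVENT_ARG(ev);
+           int fd = EVENT_FD(ev);
+
+           /* ... consume data from fd ... */
+
+           /* tasks fire once; re-arm the read task to keep receiving */
+           event_add_read(master, my_read_handler, ctx, fd, NULL);
+   }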
+
+The Event Loop
+^^^^^^^^^^^^^^
+To use the event system, after creating a ``threadmaster`` the program adds an
+initial set of tasks. As these tasks execute, they add more tasks that execute
+at some point in the future. This sequence of tasks drives the lifecycle of the
+program. When no more tasks are available, the program dies. Typically at
+startup the first task added is an I/O task for VTYSH as well as any network
+sockets needed for peerings or IPC.
+
+To retrieve the next task to run the program calls ``event_fetch()``.
+``event_fetch()`` internally computes which task to execute next based on
+rudimentary priority logic. Events (type ``EVENT_EVENT``) execute with the
+highest priority, followed by expired timers and finally I/O tasks (type
+``EVENT_READ`` and ``EVENT_WRITE``). When scheduling a task a function and an
+arbitrary argument are provided. The task returned from ``event_fetch()`` is
+then executed with ``event_call()``.
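+
+A minimal sketch of the resulting fetch-execute loop, roughly what a daemon's
+main loop boils down to::
+
+   struct event event;
+
+   while (event_fetch(master, &event))
+           event_call(&event);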
+
+The following diagram illustrates a simplified version of this infrastructure.
+
+.. todo: replace these with SVG
+.. figure:: ../figures/threadmaster-single.png
+ :align: center
+
+ Lifecycle of a program using a single threadmaster.
+
+The series of "task" boxes represents the current ready task queue. The various
+other queues for other types are not shown. The fetch-execute loop is
+illustrated at the bottom.
+
+Mapping the general names used in the figure to specific FRR functions:
+
+- ``task`` is ``struct event *``
+- ``fetch`` is ``event_fetch()``
+- ``exec()`` is ``event_call()``
+- ``cancel()`` is ``event_cancel()``
+- ``schedule()`` is any of the various task-specific ``event_add_*`` functions
+
+Adding tasks is done with various task-specific function-like macros. These
+macros wrap underlying functions in :file:`event.c` to provide additional
+information added at compile time, such as the line number the task was
+scheduled from, that can be accessed at runtime for debugging, logging and
+informational purposes. Each task type has its own specific scheduling function
+that follows the naming convention ``event_add_<type>``; see :file:`frrevent.h`
+for details.
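+
+For example, scheduling a timer and a generic event looks like this sketch
+(handler, context, and event-type names are hypothetical)::
+
+   /* run my_timer_handler roughly 5 seconds from now */
+   event_add_timer(master, my_timer_handler, ctx, 5, &ctx->t_my_timer);
+
+   /* queue a high-priority generic event; the integer carries the FSM
+    * event type to the handler */
+   event_add_event(master, my_event_handler, ctx, MY_FSM_EVENT, NULL);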
+
+There are some gotchas to keep in mind:
+
+- I/O tasks are keyed off the file descriptor associated with the I/O
+ operation. This means that for any given file descriptor, only one of each
+ type of I/O task (``EVENT_READ`` and ``EVENT_WRITE``) can be scheduled. For
+ example, scheduling two write tasks one after the other will overwrite the
+ first task with the second, resulting in total loss of the first task and
+ difficult bugs.
+
+- Timer tasks are only as accurate as the monotonic clock provided by the
+ underlying operating system.
+
+- Memory management of the arbitrary handler argument passed in the schedule
+ call is the responsibility of the caller.
+
+
+Kernel Thread Architecture
+--------------------------
+Efforts have begun to introduce kernel threads into FRR to improve performance
+and stability. Naturally a kernel thread architecture has long been seen as
+orthogonal to an event-driven architecture, and the two do have significant
+overlap in terms of design choices. Since the event model is tightly integrated
+into FRR, careful thought has been put into how pthreads are introduced, what
+role they fill, and how they will interoperate with the event model.
+
+Design Overview
+^^^^^^^^^^^^^^^
+Each kernel thread behaves as a lightweight process within FRR, sharing the
+same process memory space. On the other hand, the event system is designed to
+run in a single process and drive serial execution of a set of tasks. With this
+consideration, a natural choice is to implement the event system within each
+kernel thread. This allows us to leverage the event-driven execution model with
+the currently existing task and context primitives. In this way the familiar
+execution model of FRR gains the ability to execute tasks simultaneously while
+preserving the existing model for concurrency.
+
+The following figure illustrates the architecture with multiple pthreads, each
+running their own ``threadmaster``-based event loop.
+
+.. todo: replace these with SVG
+.. figure:: ../figures/threadmaster-multiple.png
+ :align: center
+
+ Lifecycle of a program using multiple pthreads, each running their own
+ ``threadmaster``
+
+Each roundrect represents a single pthread running the same event loop
+described under :ref:`event-architecture`. Note the arrow from the ``exec()``
+box on the right to the ``schedule()`` box in the middle pthread. This
+illustrates code running in one pthread scheduling a task onto another
+pthread's threadmaster. A global lock for each ``threadmaster`` is used to
+synchronize these operations. The pthread names are examples.
+
+
+.. This should be broken into its own document under :ref:`libfrr`
+.. _kernel-thread-wrapper:
+
+Kernel Thread Wrapper
+^^^^^^^^^^^^^^^^^^^^^
+The basis for the integration of pthreads and the event system is a lightweight
+wrapper for both systems implemented in :file:`lib/frr_pthread.[ch]`. The
+header provides a core datastructure, ``struct frr_pthread``, that encapsulates
+structures from both POSIX threads and :file:`event.c`, :file:`frrevent.h`.
+In particular, this
+datastructure has a pointer to a ``threadmaster`` that runs within the pthread.
+It also has fields for a name as well as start and stop functions that have
+signatures similar to the POSIX arguments for ``pthread_create()``.
+
+Calling ``frr_pthread_new()`` creates and registers a new ``frr_pthread``. The
+returned structure has a pre-initialized ``threadmaster``, and its ``start``
+and ``stop`` functions are initialized to defaults that will run a basic event
+loop with the given threadmaster. Calling ``frr_pthread_run()`` starts the thread
+with the ``start`` function. From there, the model is the same as the regular
+event model. To schedule tasks on a particular pthread, simply use the regular
+:file:`event.c` functions as usual and provide the ``threadmaster`` pointed to
+from the ``frr_pthread``. As part of implementing the wrapper, the
+:file:`event.c` functions were made thread-safe. Consequently, it is safe to
+schedule events on a ``threadmaster`` belonging either to the calling thread
+or to *any other pthread*. This serves as the basis for inter-thread
+communication and boils down to a slightly more complicated method of message
+passing, where the messages are the regular task events as used in the
+event-driven model. The only difference is thread cancellation, which requires
+calling ``event_cancel_async()`` instead of ``event_cancel()`` to cancel a task
+currently scheduled on a ``threadmaster`` belonging to a different pthread.
+This is necessary to avoid race conditions in the specific case where one
+pthread wants to guarantee that a task on another pthread is cancelled before
+proceeding.
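+
+A sketch of the typical usage; handler and context names are hypothetical::
+
+   /* NULL attributes select the default start/stop functions, which run
+    * an event loop on the pthread's threadmaster */
+   struct frr_pthread *fpt = frr_pthread_new(NULL, "my worker", "myworker");
+
+   frr_pthread_run(fpt, NULL);
+   frr_pthread_wait_running(fpt);
+
+   /* schedule a task onto the new pthread's event loop */
+   event_add_event(fpt->master, my_event_handler, ctx, 0, NULL);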
+
+In addition, the existing commands to show statistics and other information for
+tasks within the event driven model have been expanded to handle multiple
+pthreads; running :clicmd:`show thread cpu` will display the usual event
+breakdown, but it will do so for each pthread running in the program. For
+example, :ref:`bgpd` runs a dedicated I/O pthread and shows the following
+output for :clicmd:`show thread cpu`:
+
+::
+
+ frr# show thread cpu
+
+ Thread statistics for bgpd:
+
+ Showing statistics for pthread main
+ ------------------------------------
+ CPU (user+system): Real (wall-clock):
+ Active Runtime(ms) Invoked Avg uSec Max uSecs Avg uSec Max uSecs Type Thread
+ 0 1389.000 10 138900 248000 135549 255349 T subgroup_coalesce_timer
+ 0 0.000 1 0 0 18 18 T bgp_startup_timer_expire
+ 0 850.000 18 47222 222000 47795 233814 T work_queue_run
+ 0 0.000 10 0 0 6 14 T update_subgroup_merge_check_thread_cb
+ 0 0.000 8 0 0 117 160 W zclient_flush_data
+ 2 2.000 1 2000 2000 831 831 R bgp_accept
+ 0 1.000 1 1000 1000 2832 2832 E zclient_connect
+ 1 42082.000 240574 174 37000 178 72810 R vtysh_read
+ 1 152.000 1885 80 2000 96 6292 R zclient_read
+ 0 549346.000 2997298 183 7000 153 20242 E bgp_event
+ 0 2120.000 300 7066 14000 6813 22046 T (bgp_holdtime_timer)
+ 0 0.000 2 0 0 57 59 T update_group_refresh_default_originate_route_map
+ 0 90.000 1 90000 90000 73729 73729 T bgp_route_map_update_timer
+ 0 1417.000 9147 154 48000 132 61998 T bgp_process_packet
+ 300 71807.000 2995200 23 3000 24 11066 T (bgp_connect_timer)
+ 0 1894.000 12713 148 45000 112 33606 T (bgp_generate_updgrp_packets)
+ 0 0.000 1 0 0 105 105 W vtysh_write
+ 0 52.000 599 86 2000 138 6992 T (bgp_start_timer)
+ 1 1.000 8 125 1000 164 593 R vtysh_accept
+ 0 15.000 600 25 2000 15 153 T (bgp_routeadv_timer)
+ 0 11.000 299 36 3000 53 3128 RW bgp_connect_check
+
+
+ Showing statistics for pthread BGP I/O thread
+ ----------------------------------------------
+ CPU (user+system): Real (wall-clock):
+ Active Runtime(ms) Invoked Avg uSec Max uSecs Avg uSec Max uSecs Type Thread
+ 0 1611.000 9296 173 13000 188 13685 R bgp_process_reads
+ 0 2995.000 11753 254 26000 182 29355 W bgp_process_writes
+
+
+ Showing statistics for pthread BGP Keepalives thread
+ -----------------------------------------------------
+ CPU (user+system): Real (wall-clock):
+ Active Runtime(ms) Invoked Avg uSec Max uSecs Avg uSec Max uSecs Type Thread
+ No data to display yet.
+
+Attentive readers will notice that there is a third thread, the Keepalives
+thread. This thread is responsible for -- surprise -- generating keepalives for
+peers. However, there are no statistics showing for that thread. Although the
+pthread uses the ``frr_pthread`` wrapper, it opts not to use the embedded
+``threadmaster`` facilities. Instead it replaces the ``start`` and ``stop``
+functions with custom functions. This was done because the ``threadmaster``
+facilities introduce a small but significant amount of overhead relative to the
+pthread's task. In this case since the pthread does not need the event-driven
+model and does not need to receive tasks from other pthreads, it is simpler and
+more efficient to implement it outside of the provided event facilities. The
+point to take away from this example is that while the facilities to make using
+pthreads within FRR easy are already implemented, the wrapper is flexible and
+allows usage of other models while still integrating with the rest of the FRR
+core infrastructure. Starting and stopping this pthread works the same as it
+does for any other ``frr_pthread``; the only difference is that event
+statistics are not collected for it, because there are no events.
+
+Notes on Design and Documentation
+---------------------------------
+Because of the choice to embed the existing event system into each pthread
+within FRR, at this time there is not integrated support for other models of
+pthread use such as divide and conquer. Similarly, there is no explicit support
+for thread pooling or similar higher level constructs. The currently existing
+infrastructure is designed around the concept of long-running worker threads
+responsible for specific jobs within each daemon. This is not to say that
+divide and conquer, thread pooling, etc. could not be implemented in the
+future. However, designs in this direction must be very careful to take into
+account the existing codebase. Introducing kernel threads into programs that
+have been written under the assumption of a single thread of execution must be
+done very carefully to avoid insidious errors and to ensure the program remains
+understandable and maintainable.
+
+In keeping with these goals, future work on kernel threading should be
+extensively documented here and FRR developers should be very careful with
+their design choices, as poor choices tightly integrated can prove to be
+catastrophic for development efforts in the future.
diff --git a/doc/developer/rcu.rst b/doc/developer/rcu.rst
new file mode 100644
index 0000000..4fd5658
--- /dev/null
+++ b/doc/developer/rcu.rst
@@ -0,0 +1,269 @@
+.. highlight:: c
+
+RCU
+===
+
+Introduction
+------------
+
+RCU (Read-Copy-Update) is, fundamentally, a paradigm of multithreaded
+operation (and not a set of APIs.) The core ideas are:
+
+* longer, complicated updates to structures are made only on private,
+ "invisible" copies. Other threads, when they access the structure, see an
+ older (but consistent) copy.
+
+* once done, the updated copy is swapped in a single operation so that
+ other threads see either the old or the new data but no inconsistent state
+ between.
+
+* the old instance is only released after making sure that it is impossible
+ any other thread might still be reading it.
+
+For more information, please search for general or Linux kernel RCU
+documentation; there is no way this doc can be comprehensive in explaining the
+interactions:
+
+* https://en.wikipedia.org/wiki/Read-copy-update
+* https://www.kernel.org/doc/html/latest/kernel-hacking/locking.html#avoiding-locks-read-copy-update
+* https://lwn.net/Articles/262464/
+* http://www.rdrop.com/users/paulmck/RCU/rclock_OLS.2001.05.01c.pdf
+* http://lse.sourceforge.net/locking/rcupdate.html
+
+RCU, the TL;DR
+^^^^^^^^^^^^^^
+
+#. data structures are always consistent for reading. That's the "R" part.
+#. reading never blocks / takes a lock.
+#. rcu_read_lock is not a lock in the traditional sense. Think of it as a
+ "reservation"; it notes what the *oldest* possible thing the thread might
+ be seeing is, and which thus can't be deleted yet.
+#. you create some object, finish it up, and then publish it.
+#. publishing is an ``atomic_*`` call with ``memory_order_release``, which
+ tells the compiler to make sure prior memory writes have completed before
+ doing the atomic op.
+#. ``ATOMLIST_*`` ``add`` operations do the ``memory_order_release`` for you.
+#. you can't touch the object after it is published, except with atomic ops.
+#. because you can't touch it, if you want to change it you make a new copy,
+ work on that, and then publish the new copy. That's the "CU" part.
+#. deleting the object is also an atomic op.
+#. other threads that started working before you published / deleted an object
+ might not see the new object / still see the deleted object.
+#. because other threads may still see deleted objects, the ``free()`` needs
+ to be delayed. That's what :c:func:`rcu_free()` is for.
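+
+A sketch of the read side using plain C11 atomics; ``global_config``, its
+type and ``use_config()`` are hypothetical::
+
+   #include <stdatomic.h>
+
+   _Atomic(struct my_config *) global_config;
+
+   void reader(void)
+   {
+           struct my_config *cfg;
+
+           rcu_read_lock();
+
+           /* pairs with the memory_order_release store done when publishing */
+           cfg = atomic_load_explicit(&global_config, memory_order_acquire);
+           if (cfg)
+                   use_config(cfg); /* cfg cannot be freed while RCU is held */
+
+           rcu_read_unlock();
+   }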
+
+
+When (not) to use RCU
+^^^^^^^^^^^^^^^^^^^^^
+
+RCU is designed for read-heavy workloads where objects are updated relatively
+rarely, but frequently accessed. Do *not* indiscriminately replace locking by
+RCU patterns.
+
+The "copy" part of RCU implies that, while updating, several copies of a given
+object exist in parallel. Even after the updated copy is swapped in, the old
+object remains queued for freeing until all other threads are guaranteed to
+not be accessing it anymore, due to passing a sequence point. In addition to
+the increased memory usage, there may be some bursty (due to batching) malloc
+contention when the RCU cleanup thread does its thing and frees memory.
+
+Other useful patterns
+^^^^^^^^^^^^^^^^^^^^^
+
+In addition to the full "copy object, apply changes, atomically update"
+approach, there are 2 "reduced" usage cases that can be done:
+
+* atomically updating single pieces of a particular object, e.g. some flags
+ or configuration piece
+
+* straight up read-only / immutable objects
+
+Both of these cases can be considered RCU "subsets". For example, when
+maintaining an atomic list of items, but these items only have a single
+integer value that needs to be updated, that value can be atomically updated
+without copying the entire object. However, the object still needs to be
+free'd through :c:func:`rcu_free()` since reading/updating and deleting might
+be happening concurrently. The same applies for immutable objects; deletion
+might still race with reading so they need to be free'd through RCU.
+
+FRR API
+-------
+
+Before diving into detail on the provided functions, it is important to note
+that the FRR RCU API covers the **cleanup part of RCU, not the read-copy-update
+paradigm itself**. These parts are handled by standard C11 atomic operations,
+and by extension through the atomic data structures (ATOMLIST, ATOMSORT & co.)
+
+The ``rcu_*`` functions only make sense in conjunction with these RCU access
+patterns. If you're calling the RCU API but not using these, something is
+wrong. The other way around is not necessarily true; it is possible to use
+atomic ops & datastructures with other types of locking, e.g. rwlocks.
+
+.. c:function:: void rcu_read_lock()
+.. c:function:: void rcu_read_unlock()
+
+ These functions acquire / release the RCU read-side lock. All access to
+ RCU-guarded data must be inside a block guarded by these. Any number of
+ threads may hold the RCU read-side lock at a given point in time, including
+ both no threads at all and all threads.
+
+ The functions implement a depth counter, i.e. can be nested. The nested
+ calls are cheap, since they only increment/decrement the counter.
+ Therefore, any place that uses RCU data and doesn't have a guarantee that
+ the caller holds RCU (e.g. ``lib/`` code) should just have its own
+ rcu_read_lock/rcu_read_unlock pair.
+
+ At the "root" level (e.g. un-nested), these calls can incur the cost of one
+ syscall (to ``futex()``). That puts them on about the same cost as a
+ mutex lock/unlock.
+
+ The ``thread_master`` code currently always holds RCU everywhere, except
+ while doing the actual ``poll()`` syscall. This is both an optimization as
+ well as an "easement" into getting RCU going. The current implementation
+ contract is that any ``struct event *`` callback is called with an RCU
+ holding depth of 1, and that this is owned by the thread so it may (should)
+ drop and reacquire it when doing some longer-running work.
+
+ .. warning::
+
+ The RCU read-side lock must be held **continuously** for the entire time
+ any piece of RCU data is used. This includes any access to RCU data
+ after the initial ``atomic_load``. If the RCU read-side lock is
+ released, any RCU-protected pointers as well as the data they refer to
+ become invalid, as another thread may have called :c:func:`rcu_free` on
+ them.
+
+.. c:struct:: rcu_head
+.. c:struct:: rcu_head_close
+.. c:struct:: rcu_action
+
+ The ``rcu_head`` structures are small (16-byte) bits that contain the
+ queueing machinery for the RCU sweeper/cleanup mechanisms.
+
+ Any piece of data that is cleaned up by RCU needs to have a matching
+ ``rcu_head`` embedded in it. If there is more than one cleanup operation
+ to be done (e.g. closing a file descriptor), more than one ``rcu_head`` may
+ be embedded.
+
+ .. warning::
+
+ It is not possible to reuse a ``rcu_head``. It is owned by the RCU code
+ as soon as ``rcu_*`` is called on it.
+
+ The ``_close`` variant carries an extra ``int fd`` field to store the fd to
+ be closed.
+
+ To minimize the amount of memory used for ``rcu_head``, details about the
+ RCU operation to be performed are moved into the ``rcu_action`` structure.
+ It contains e.g. the MTYPE for :c:func:`rcu_free` calls. The pointer to be
+ freed is stored as an offset relative to the ``rcu_head``, which means it
+ must be embedded as a struct field so the offset is constant.
+
+ The ``rcu_action`` structure is an implementation detail. Using
+ ``rcu_free`` or ``rcu_close`` will set it up correctly without further
+ code needed.
+
+ The ``rcu_head`` may be put in a union with other data if the other data
+ is only used during the "life" of the data, since the ``rcu_head`` is used
+ only for the "death" of the data. But note that other threads may still be
+ reading a piece of data while a thread is working to free it.
+
+.. c:function:: void rcu_free(struct memtype *mtype, struct X *ptr, field)
+
+ Free a block of memory after RCU has ensured no other thread can be
+ accessing it anymore. The pointer remains valid for any other thread that
+ has called :c:func:`rcu_read_lock` before the ``rcu_free`` call.
+
+ .. warning::
+
+ In some other RCU implementations, the pointer remains valid to the
+ *calling* thread if it is holding the RCU read-side lock. This is not
+ the case in FRR, particularly when running single-threaded. Enforcing
+ this rule also allows static analysis to find use-after-free issues.
+
+ ``mtype`` is the libfrr ``MTYPE_FOO`` allocation type to pass to
+ :c:func:`XFREE`.
+
+ ``field`` must be the name of a ``struct rcu_head`` member field in ``ptr``.
+ The offset of this field (which must be constant) is used to reduce the
+ memory size of ``struct rcu_head``.
+
+ .. note::
+
+ ``rcu_free`` (and ``rcu_close``) calls are more efficient if they are
+ put close to each other. When freeing several RCU'd resources, try to
+ move the calls next to each other (even if the data structures do not
+ directly point to each other.)
+
+ Having the calls bundled reduces the cost of adding the ``rcu_head`` to
+ the RCU queue; the RCU queue is an atomic data structure whose usage
+ will require the CPU to acquire an exclusive hold on relevant cache
+ lines.
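+
+ A sketch of the usual embedding pattern; the struct, variable and MTYPE
+ names are hypothetical::
+
+    struct item {
+            /* ... data fields ... */
+            struct rcu_head rcu;
+    };
+
+    /* remove `it` from the shared data structure (atomic op), then: */
+    rcu_free(MTYPE_ITEM, it, rcu);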
+
+.. c:function:: void rcu_close(struct rcu_head_close *head, int fd)
+
+ Close a file descriptor after ensuring no other thread might be using it
+ anymore. Same as :c:func:`rcu_free`, except it calls ``close`` instead of
+ ``free``.
+
+Internals
+^^^^^^^^^
+
+.. c:struct:: rcu_thread
+
+ Per-thread state maintained by the RCU code, set up by the following
+ functions. A pointer to a thread's own ``rcu_thread`` is saved in
+ thread-local storage.
+
+.. c:function:: struct rcu_thread *rcu_thread_prepare(void)
+.. c:function:: void rcu_thread_unprepare(struct rcu_thread *rcu_thread)
+.. c:function:: void rcu_thread_start(struct rcu_thread *rcu_thread)
+
+ Since the RCU code needs to have a list of all active threads, these
+ functions are used by the ``frr_pthread`` code to set up threads. Teardown
+ is automatic. It should not be necessary to call these functions.
+
+ Any thread that accesses RCU-protected data needs to be registered with
+ these functions. Threads that do not access RCU-protected data may call
+ these functions but do not need to.
+
+ Note that passing a pointer to RCU-protected data to some library which
+ accesses that pointer makes the library "access RCU-protected data". In
+ that case, either all of the library's threads must be registered for RCU,
+ or the code must instead pass a (non-RCU) copy of the data to the library.
+
+.. c:function:: void rcu_shutdown(void)
+
+ Stop the RCU sweeper thread and make sure all cleanup has finished.
+
+ This function is called on daemon exit by the libfrr code to ensure pending
+ RCU operations are completed. This is mostly to get a clean exit without
+ memory leaks from queued RCU operations. It should not be necessary to
+ call this function as libfrr handles this.
+
+FRR specifics and implementation details
+----------------------------------------
+
+The FRR RCU infrastructure has the following characteristics:
+
+* it is Epoch-based with a 32-bit wrapping counter. (This is somewhat
+ different from other Epoch-based approaches which may be designed to only
+ use 3 counter values, but works out to a simple implementation.)
+
+* instead of tracking CPUs as the Linux kernel does, threads are tracked. This
+ has exactly zero semantic impact, RCU just cares about "threads of
+ execution", which the kernel can optimize to CPUs but we can't. But it
+ really boils down to the same thing.
+
+* there are no ``rcu_dereference`` and ``rcu_assign_pointer`` - use
+ ``atomic_load`` and ``atomic_store`` instead. (These didn't exist when the
+ Linux RCU code was created.)
+
+* there is no ``synchronize_rcu``; this is a design choice but may be revisited
+ at a later point. ``synchronize_rcu`` blocks a thread until it is guaranteed
+ that no other threads might still be accessing data structures that they may
+ have access to at the beginning of the function call. This is a blocking
+ design and probably not appropriate for FRR. Instead, ``rcu_call`` can be
+ used to have the RCU sweeper thread make a callback after the same constraint
+ is fulfilled in an asynchronous way. Most needs should be covered by
+ ``rcu_free`` and ``rcu_close``.
diff --git a/doc/developer/release-announcement-template.md b/doc/developer/release-announcement-template.md
new file mode 100644
index 0000000..658b87e
--- /dev/null
+++ b/doc/developer/release-announcement-template.md
@@ -0,0 +1,40 @@
+<!---
+name: release-announcement-template
+about: Template to use when drafting a new release announcement. DELETE THIS
+ BLOCK BEFORE PUBLISHING.
+--->
+
+We are pleased to announce FRR <version>.
+
+<!-- Add a brief summary of major changes here -->
+
+Thank you to all contributors!
+
+Changelog
+---------
+
+<!-- List **only** user-visible changes in this section. When listing changes to individual daemons, alphabetize the list by daemon name. -->
+
+**All daemons:**
+- <!-- List changes to all daemons -->
+
+<!-- If a new daemon was added, list it at the top here -->
+**New daemon: <new>**
+- Adds support for <protocol/feature>
+
+**daemon 1**
+- <!-- List changes -->
+
+**daemon 2**
+- <!-- List changes -->
+
+**daemon N**
+- <!-- List changes -->
+
+### Internal improvements
+
+- <!-- List **only** user-invisible changes here -->
+
+### Packaging changes
+
+- <!-- List any new or removed packages here -->
diff --git a/doc/developer/scripting.rst b/doc/developer/scripting.rst
new file mode 100644
index 0000000..202f003
--- /dev/null
+++ b/doc/developer/scripting.rst
@@ -0,0 +1,628 @@
+.. _scripting:
+
+Scripting
+=========
+
+.. seealso:: User docs for scripting
+
+Overview
+--------
+
+FRR has the ability to call Lua scripts to perform calculations, make
+decisions, or otherwise extend builtin behavior with arbitrary user code. This
+is implemented using the standard Lua C bindings. The supported version of Lua
+is 5.3.
+
+C objects may be passed into Lua and Lua objects may be retrieved by C code via
+an encoding/decoding system. In this way, arbitrary data from FRR may be passed to
+scripts.
+
+The Lua environment is isolated from the C environment; user scripts cannot
+access FRR's address space unless explicitly allowed by FRR.
+
+For general information on how Lua is used to extend C, refer to Part IV of
+"Programming in Lua".
+
+https://www.lua.org/pil/contents.html#24
+
+
+Design
+------
+
+Why Lua
+^^^^^^^
+
+Lua is designed to be embedded in C applications. It is very small; the
+standard library is 220K. It is relatively fast. It has a simple, minimal
+syntax that is relatively easy to learn and can be understood by someone with
+little to no programming experience. Moreover it is widely used to add
+scripting capabilities to applications. In short it is designed for this task.
+
+Reasons against supporting multiple scripting languages:
+
+- Each language would require different FFI methods, and specifically
+ different object encoders; a lot of code
+- Languages have different capabilities that would have to be brought to
+ parity with each other; a lot of work
+- Languages have vastly different performance characteristics; this would
+ create a lot of basically unfixable issues, and result in a single de facto
+ standard scripting language (the fastest)
+- Each language would need a dedicated maintainer for the above reasons;
+ this is pragmatically difficult
+- Supporting multiple languages fractures the community and limits the audience
+ with which a given script can be shared
+
+General
+-------
+
+FRR's scripting functionality is provided in the form of Lua functions in Lua
+scripts (``.lua`` files). One Lua script may contain many Lua functions. These
+are respectively encapsulated in the following structures:
+
+.. code-block:: c
+
+ struct frrscript {
+ /* Lua file name */
+ char *name;
+
+ /* hash of lua_function_states */
+ struct hash *lua_function_hash;
+ };
+
+ struct lua_function_state {
+ /* Lua function name */
+ char *name;
+
+ lua_State *L;
+ };
+
+
+``struct frrscript``: Since all Lua functions are contained within scripts, the
+following APIs manipulate this structure. ``name`` contains the Lua script
+name, and ``lua_function_hash`` maps Lua function names to their
+``lua_function_state`` objects.
+
+``struct lua_function_state`` is an internal structure, but it essentially contains
+the name of the Lua function and its state (a stack), which is run using Lua
+library functions.
+
+In general, to run a Lua function, these steps must take place:
+
+- Initialization
+- Load
+- Call
+- Delete
+
+Initialization
+^^^^^^^^^^^^^^
+
+The ``frrscript`` object encapsulates the Lua function state(s) from
+one Lua script file. To create, use ``frrscript_new()`` which takes the
+name of the Lua script.
+The string ".lua" is appended to the script name, and the resultant filename
+will be used to look for the script when we want to load a Lua function from it.
+
+For example, to create ``frrscript`` for ``/etc/frr/scripts/bingus.lua``:
+
+.. code-block:: c
+
+ struct frrscript *fs = frrscript_new("bingus");
+
+
+The script is *not* read at this stage.
+This function cannot be used to test for a script's presence.
+
+Load
+^^^^
+
+The function to be called must first be loaded. Use ``frrscript_load()``
+which takes a ``frrscript`` object, the name of the Lua function
+and a callback function.
+The script file will be read to load and compile the function.
+
+For example, to load the Lua function ``on_foo``
+in ``/etc/frr/scripts/bingus.lua``:
+
+.. code-block:: c
+
+ int ret = frrscript_load(fs, "on_foo", NULL);
+
+
+This function returns 0 if and only if the Lua function was successfully loaded.
+A non-zero return could indicate either a missing Lua script, a missing
+Lua function, or an error when loading the function.
+
+During loading the script is validated for syntax and its environment
+is set up. By default this does not include the Lua standard library; there are
+security issues to consider, though for practical purposes untrusted users
+should not be able to write to the scripts directory anyway.
+
+Call
+^^^^
+
+After loading, a Lua function can be called any number of times.
+
+Input
+"""""
+
+Inputs to the Lua script should be given by providing a list of parenthesized
+pairs,
+where the first and second field identify the name of the variable and the
+value it is bound to, respectively.
+The types of the values must have registered encoders (more below); the compiler
+will warn you otherwise.
+
+These variables are first encoded in-order, then provided as arguments
+to the Lua function. In the example, note that ``c`` is passed in as a value
+while ``a`` and ``b`` are passed in as pointers.
+
+.. code-block:: c
+
+ int a = 100, b = 200, c = 300;
+ frrscript_call(fs, "on_foo", ("a", &a), ("b", &b), ("c", c));
+
+
+.. code-block:: lua
+
+ function on_foo(a, b, c)
+ -- a is 100, b is 200, c is 300
+ ...
+
+
+Output
+""""""
+
+.. code-block:: c
+
+ int a = 100, b = 200, c = 300;
+ frrscript_call(fs, "on_foo", ("a", &a), ("b", &b), ("c", c));
+ // a is 500, b is 200, c is 300
+
+ int *d = frrscript_get_result(fs, "on_foo", "d", lua_tointegerp);
+ // d is 800
+
+
+.. code-block:: lua
+
+ function on_foo(a, b, c)
+ b = 600
+ return { ["a"] = 500, ["c"] = 700, ["d"] = 800 }
+ end
+
+
+**Lua functions being called must return a single table of string names to
+values.**
+(Lua functions should return an empty table if there is no output.)
+The keys of the table are mapped back to names of variables in C. Note that
+the values in the table can also be tables. Since tables are Lua's primary
+data structure, this design lets us return any Lua value.
+
+After the Lua function returns, the names of variables to ``frrscript_call()``
+are matched against keys of the returned table, and then decoded. The types
+being decoded must have registered decoders (more below); the compiler will
+warn you otherwise.
+
+In the example, since ``a`` was in the returned table and ``b`` was not,
+``a`` was decoded and its value modified, while ``b`` was not decoded.
+``c`` was decoded as well, but its decoder is a noop.
+Whether and how a given variable is modified depends on whether its name was
+in the returned table and on the decoder's implementation.
+
+.. warning::
+ Always keep in mind that non const-qualified pointers in
+ ``frrscript_call()`` may be modified - this may be a source of bugs.
+ On the other hand, const-qualified pointers and other values cannot
+ be modified.
+
+
+.. tip::
+ You can make a copy of a data structure and pass that in instead,
+ so that modifications only happen to that copy.
+
+``frrscript_call()`` returns 0 if and only if the Lua function was successfully
+called. A non-zero return could indicate either a missing Lua script, a missing
+Lua function, or an error from the Lua interpreter.
+
+In the above example, ``d`` was not an input to ``frrscript_call()``, so its
+value must be explicitly retrieved with ``frrscript_get_result``.
+
+``frrscript_get_result()`` takes a decoder and a string name, which is used
+as a key to search the returned table. It returns a pointer to the decoded
+value, or NULL if the key was not found.
+In the example, ``d`` is a "new" value in C space,
+so memory allocation might take place. Hence the caller is
+responsible for memory deallocation.
+
+``frrscript_call()`` may be called multiple times without re-loading with
+``frrscript_load()``. Results are not preserved between consecutive calls.
+
+.. code-block:: c
+
+ frrscript_load(fs, "on_foo", NULL);
+
+ frrscript_call(fs, "on_foo");
+ frrscript_get_result(fs, "on_foo", ...);
+ frrscript_call(fs, "on_foo");
+ frrscript_get_result(fs, "on_foo", ...);
+
+
+Delete
+^^^^^^
+
+To delete a script and all the Lua states associated with it:
+
+.. code-block:: c
+
+ frrscript_delete(fs);
+
+
+A complete example
+""""""""""""""""""
+
+So, a typical execution call, with error checking, looks something like this:
+
+.. code-block:: c
+
+ struct frrscript *fs = frrscript_new("my_script"); // name *without* .lua
+
+ int ret = frrscript_load(fs, "on_foo", NULL);
+ if (ret != 0)
+ goto DONE; // Lua script or function might have not been found
+
+ int a = 100, b = 200, c = 300;
+ ret = frrscript_call(fs, "on_foo", ("a", &a), ("b", &b), ("c", c));
+ if (ret != 0)
+ goto DONE; // Lua function might have not successfully run
+
+ // a and b might be modified
+ assert(a == 500);
+ assert(b == 200);
+
+ // c could not have been modified
+ assert(c == 300);
+
+ // d is new
+ int *d = frrscript_get_result(fs, "on_foo", "d", lua_tointegerp);
+
+ if (!d)
+ goto DONE; // "d" might not have been in returned table
+
+ assert(*d == 800);
+ XFREE(MTYPE_SCRIPT_RES, d); // caller responsible for free
+
+ DONE:
+ frrscript_delete(fs);
+
+
+.. code-block:: lua
+
+ function on_foo(a, b, c)
+ b = 600
+ return { a = 500, c = 700, d = 800 }
+ end
+
+
+Note that ``{ a = ...`` is same as ``{ ["a"] = ...``; it is Lua shorthand to
+use the variable name as the key in a table.
+
+Encoding and Decoding
+^^^^^^^^^^^^^^^^^^^^^
+
+Earlier sections glossed over the types of values that can be passed into
+``frrscript_call()`` and how data is passed between C and Lua. Lua, as a
+dynamically typed, garbage collected language, cannot directly use C values
+without some kind of encoding / decoding system to
+translate types between the two runtimes.
+
+Lua communicates with C code using a stack. C code wishing to provide data to
+Lua scripts must provide a function that encodes the C data into a Lua
+representation and pushes it on the stack. C code wishing to retrieve data from
+Lua must provide a corresponding decoder function that retrieves a Lua
+value from the stack and converts it to the corresponding C type.
+
+Encoders and decoders are provided for common data types.
+Developers wishing to pass their own data structures between C and Lua need to
+create encoders and decoders for that data type.
+
+We try to keep them named consistently.
+There are three kinds of encoders and decoders:
+
+1. lua_push*: encodes a value onto the Lua stack.
+ Required for ``frrscript_call``.
+
+2. lua_decode*: decodes a value from the Lua stack.
+ Required for ``frrscript_call``.
+ Only non const-qualified pointers may be actually decoded (more below).
+
+3. lua_to*: allocates memory and decodes a value from the Lua stack.
+ Required for ``frrscript_get_result``.
+
+This design allows us to combine typesafe *modification* of C values as well as
+*allocation* of new C values.
+
+In the following sections, we will use the encoders/decoders for ``struct prefix`` as an example.
+
+Encoding
+""""""""
+
+An encoder function takes a ``lua_State *``, a C type and pushes that value onto
+the Lua state (a stack).
+For C structs, the usual case,
+this will typically be encoded to a Lua table, then pushed onto the Lua stack.
+
+Here is the encoder function for ``struct prefix``:
+
+.. code-block:: c
+
+ void lua_pushprefix(lua_State *L, struct prefix *prefix)
+ {
+ char buffer[PREFIX_STRLEN];
+
+ lua_newtable(L);
+ lua_pushstring(L, prefix2str(prefix, buffer, PREFIX_STRLEN));
+ lua_setfield(L, -2, "network");
+ lua_pushinteger(L, prefix->prefixlen);
+ lua_setfield(L, -2, "length");
+ lua_pushinteger(L, prefix->family);
+ lua_setfield(L, -2, "family");
+ }
+
+This function pushes a single value, a table, onto the Lua stack, whose
+equivalent in Lua is:
+
+.. code-block:: lua
+
+   { ["network"] = "1.2.3.4/24", ["length"] = 24, ["family"] = 2 }
+
+
+Decoding
+""""""""
+
+Decoders are a bit more involved. They do the reverse; a decoder function takes
+a ``lua_State *``, pops a value off the Lua stack and converts it back into its
+C type.
+
+There are two: ``lua_decode*`` and ``lua_to*``. The former does no memory
+allocation and is needed for ``frrscript_call``.
+The latter performs allocation and is optional.
+
+A ``lua_decode_*`` function takes a ``lua_State*``, an index, and a pointer
+to a C data structure, and directly modifies the structure with values from the
+Lua stack. Note that only non const-qualified pointers may be modified;
+``lua_decode_*`` for other types will be noops.
+
+Again, for ``struct prefix *``:
+
+.. code-block:: c
+
+ void lua_decode_prefix(lua_State *L, int idx, struct prefix *prefix)
+ {
+ lua_getfield(L, idx, "network");
+ (void)str2prefix(lua_tostring(L, -1), prefix);
+ /* pop the network string */
+ lua_pop(L, 1);
+ /* pop the prefix table */
+ lua_pop(L, 1);
+ }
+
+
+Note:
+ - Before ``lua_decode*`` is run, the "prefix" table is already on the top of
+ the stack. ``frrscript_call`` does this for us.
+ - However, at the end of ``lua_decode*``, the "prefix" table should be popped.
+ - The other two fields in the "network" table are disregarded, meaning that any
+ modification to them is discarded in C space. In this case, this is desired
+ behavior.
+
+.. warning::
+
+ ``lua_decode*`` functions should pop all values that ``lua_push*`` pushed onto
+ the Lua stack.
+ For encoders that pushed a table, its decoder should pop the table at the end.
+ The above is an example.
+
+
+``int`` is not a non-const-qualified pointer, so its decoder is a no-op:
+
+.. code-block:: c
+
+ void lua_decode_int_noop(lua_State *L, int idx, int i)
+ {
+         /* noop */
+ }
+
+
+A ``lua_to*`` function provides identical functionality except that it first
+allocates memory for the new C type before decoding the value from the Lua stack,
+then returns a pointer to the newly allocated C type. You only need to
+implement this function if you use ``frrscript_get_result`` to retrieve a
+result of this type.
+
+This function can and should be implemented using ``lua_decode_*``:
+
+.. code-block:: c
+
+ void *lua_toprefix(lua_State *L, int idx)
+ {
+ struct prefix *p = XCALLOC(MTYPE_SCRIPT_RES, sizeof(struct prefix));
+
+ lua_decode_prefix(L, idx, p);
+ return p;
+ }
+
+
+The returned data must always be copied off the stack and the copy must be
+allocated with ``MTYPE_SCRIPT_RES``. This way it is possible to unload the script
+(destroy the state) without invalidating any references to values stored in it.
+Note that it is the caller's responsibility to free the data.
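+
+For illustration, here is a hedged sketch of retrieving and freeing such a
+result; the exact ``frrscript_get_result`` signature and the "route_match" /
+"prefix" names are assumptions for the example:
+
+.. code-block:: c
+
+ /* Hedged sketch: retrieve a struct prefix result computed by a script */
+ struct prefix *p =
+         frrscript_get_result(fs, "route_match", "prefix", lua_toprefix);
+
+ if (p) {
+         /* ... use the prefix ... */
+         XFREE(MTYPE_SCRIPT_RES, p); /* caller frees MTYPE_SCRIPT_RES data */
+ }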
+
+
+Registering encoders and decoders for frrscript_call
+""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+To register a new type with its ``lua_push*`` and ``lua_decode*`` functions,
+add the mapping in the following macros in ``frrscript.h``:
+
+.. code-block:: diff
+
+ #define ENCODE_ARGS_WITH_STATE(L, value) \
+ _Generic((value), \
+ ...
+ - struct peer * : lua_pushpeer \
+ + struct peer * : lua_pushpeer, \
+ + struct prefix * : lua_pushprefix \
+ )((L), (value))
+
+ #define DECODE_ARGS_WITH_STATE(L, value) \
+ _Generic((value), \
+ ...
+ - struct peer * : lua_decode_peer \
+ + struct peer * : lua_decode_peer, \
+ + struct prefix * : lua_decode_prefix \
+ )((L), -1, (value))
+
+
+At compile time, the compiler will search for an encoder/decoder matching the
+type of each value passed in via ``frrscript_call``. If an encoder/decoder
+cannot be found, this will appear as a compile warning. Note that the types
+must match *exactly*.
+In the above example, we defined encoders/decoders for a value of
+``struct prefix *``, but not ``struct prefix`` or ``const struct prefix *``.
+
+``const`` values are a special case. We want to use them in our Lua scripts
+but not modify them, so creating a decoder for them would be meaningless.
+However, the compiler still needs a decoder for the value's type in order to
+be satisfied.
+For that, use ``lua_decode_noop``:
+
+.. code-block:: diff
+
+ #define DECODE_ARGS_WITH_STATE(L, value) \
+ _Generic((value), \
+ ...
+ + const struct prefix * : lua_decode_noop \
+ )(L, -1, value)
+
+
+.. note::
+
+ Encodable/decodable types are not restricted to simple values like integers,
+ strings and tables.
+ It is possible to encode a type such that the resultant object in Lua
+ is an actual object-oriented object, complete with methods that call
+ back into defined C functions. See the Lua manual for how to do this;
+ for a code example, look at how zlog is exported into the script environment.
+
+
+Script Environment
+------------------
+
+Logging
+^^^^^^^
+
+For convenience, script environments are populated by default with a ``log``
+object which contains methods corresponding to each of the ``zlog`` levels:
+
+.. code-block:: lua
+
+ log.info("info")
+ log.warn("warn")
+ log.error("error")
+ log.notice("notice")
+ log.debug("debug")
+
+The log messages will show up in the daemon's log output.
+
+
+Examples
+--------
+
+For a complete code example involving passing custom types, retrieving results,
+and doing complex calculations in Lua, look at the implementation of the
+``match script SCRIPT`` command for BGP routemaps. This example calls into a
+script with a function named ``route_match``,
+provides route prefix and attributes received from a peer and expects the
+function to return a match / no match / match and update result.
+
+An example script follows. The function matches, does not match, or updates a
+route depending on how many BGP UPDATE messages the peer has received when
+the script is called, simply as a demonstration of what can be accomplished
+with scripting.
+
+.. code-block:: lua
+
+
+ -- Example route map matching
+ -- author: qlyoung
+ --
+ -- The following variables are available in the global environment:
+ -- log
+ -- logging library, with the usual functions
+ --
+ -- route_match arguments:
+ -- table prefix
+ -- the route under consideration
+ -- table attributes
+ -- the route's attributes
+ -- table peer
+ -- the peer which received this route
+ -- integer RM_FAILURE
+ -- status code in case of failure
+ -- integer RM_NOMATCH
+ -- status code for no match
+ -- integer RM_MATCH
+ -- status code for match
+ -- integer RM_MATCH_AND_CHANGE
+ -- status code for match-and-set
+ --
+ -- route_match returns table with following keys:
+ -- integer action, required
+ -- resultant status code. Should be one of RM_*
+ -- table attributes, optional
+ -- updated route attributes
+ --
+
+ function route_match(prefix, attributes, peer,
+ RM_FAILURE, RM_NOMATCH, RM_MATCH, RM_MATCH_AND_CHANGE)
+
+ log.info("Evaluating route " .. prefix.network .. " from peer " .. peer.remote_id.string)
+
+ function on_match (prefix, attributes)
+ log.info("Match")
+ return {
+ action = RM_MATCH
+ }
+ end
+
+ function on_nomatch (prefix, attributes)
+ log.info("No match")
+ return {
+ action = RM_NOMATCH
+ }
+ end
+
+ function on_match_and_change (prefix, attributes)
+ log.info("Match and change")
+ attributes["metric"] = attributes["metric"] + 7
+ return {
+ action = RM_MATCH_AND_CHANGE,
+ attributes = attributes
+ }
+ end
+
+ special_routes = {
+ ["172.16.10.4/24"] = on_match,
+ ["172.16.13.1/8"] = on_nomatch,
+ ["192.168.0.24/8"] = on_match_and_change,
+ }
+
+
+ if special_routes[prefix.network] then
+ return special_routes[prefix.network](prefix, attributes)
+ elseif peer.stats.update_in % 3 == 0 then
+ return on_match(prefix, attributes)
+ elseif peer.stats.update_in % 2 == 0 then
+ return on_nomatch(prefix, attributes)
+ else
+ return on_match_and_change(prefix, attributes)
+ end
+ end
diff --git a/doc/developer/static-linking.rst b/doc/developer/static-linking.rst
new file mode 100644
index 0000000..5342fbf
--- /dev/null
+++ b/doc/developer/static-linking.rst
@@ -0,0 +1,98 @@
+.. _static-linking:
+
+Static Linking
+==============
+
+This document describes how to build FRR without hard dependencies on shared
+libraries. Note that it's not possible to build FRR *completely* statically.
+This document just covers how to statically link the dependencies that aren't
+likely to be present on a given platform - libfrr and libyang. The resultant
+binaries should still be fairly portable. For example, here is the DSO
+dependency list for `bgpd` after using these steps:
+
+.. code-block:: shell
+
+ $ ldd bgpd
+ linux-vdso.so.1 (0x00007ffe3a989000)
+ libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9dc10c0000)
+ libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007f9dc0eba000)
+ libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9dc0b1c000)
+ libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9dc0918000)
+ libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f9dc06e0000)
+ libjson-c.so.3 => /lib/x86_64-linux-gnu/libjson-c.so.3 (0x00007f9dc04d5000)
+ librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f9dc02cd000)
+ libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9dc00ae000)
+ libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9dbfe96000)
+ libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9dbfaa5000)
+ /lib64/ld-linux-x86-64.so.2 (0x00007f9dc1449000)
+
+Procedure
+---------
+Note that these steps have only been tested with LLVM 9 / clang.
+
+Today, libfrr can already be statically linked by passing these configure
+options::
+
+ --enable-static --enable-static-bin --enable-shared
+
+libyang is more complicated. You must build and install libyang as a static
+library. To do this, follow the usual libyang build procedure as listed in the
+FRR developer docs, but set the ``ENABLE_STATIC`` option in your cmake
+invocation. You also need to build with PIC enabled, which is otherwise
+disabled when building libyang statically.
+
+The resultant cmake command is::
+
+ cmake -DENABLE_STATIC=ON -DENABLE_LYD_PRIV=ON \
+ -DCMAKE_INSTALL_PREFIX:PATH=/usr \
+ -DCMAKE_POSITION_INDEPENDENT_CODE=TRUE \
+ -DCMAKE_BUILD_TYPE:String="Release" ..
+
+This produces a bunch of ``.a`` static archives that ultimately need to be
+linked into FRR. However, not only are there 6 archives rather than the usual
+``libyang.so``, you will now also need to link FRR with ``libpcre.a``.
+Ubuntu's ``libpcre3-dev`` package provides this library, but it hasn't been
+built with PIC enabled, so it's not usable for our purposes. Instead, download
+``libpcre`` from `SourceForge <https://sourceforge.net/projects/pcre/>`_ and
+build it like this:
+
+.. code-block:: shell
+
+ ./configure --with-pic
+ make
+
+Hopefully you get a nice, usable, PIC ``libpcre.a``.
+
+So now we have to link all these static libraries into FRR. Rather than modify
+FRR to accommodate this, the best option is to create a single archive with
+all of libyang's dependencies. Then, to avoid making any changes to the FRR
+build system, rename this archive ``libyang.a`` and copy it over the usual
+static library location. Ugly, but it works. To do this, go into your libyang
+build directory, which should contain a bunch of ``.a`` files, and copy
+``libpcre.a`` into this directory. Write the following into a shell script and
+run it:
+
+.. code-block:: shell
+
+ #!/bin/bash
+ ar -M <<EOM
+ CREATE libyang_fat.a
+ ADDLIB libyang.a
+ ADDLIB libyangdata.a
+ ADDLIB libmetadata.a
+ ADDLIB libnacm.a
+ ADDLIB libuser_inet_types.a
+ ADDLIB libuser_yang_types.a
+ ADDLIB libpcre.a
+ SAVE
+ END
+ EOM
+ ranlib libyang_fat.a
+
+``libyang_fat.a`` is your archive. Now copy it over your installed
+``libyang.a``, which on my machine is located at
+``/usr/lib/x86_64-linux-gnu/libyang.a`` (try ``locate libyang.a`` if not).
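+
+For example (using the path above; adjust for your system):
+
+.. code-block:: shell
+
+ cp libyang_fat.a /usr/lib/x86_64-linux-gnu/libyang.a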
+
+Now when you build FRR with the static options enabled as above, clang should
+pick up the static libyang and link it, leaving you with FRR binaries that have
+no hard DSO dependencies beyond common system libraries. To verify, run ``ldd``
+over the resultant binaries.
diff --git a/doc/developer/subdir.am b/doc/developer/subdir.am
new file mode 100644
index 0000000..0deb0f5
--- /dev/null
+++ b/doc/developer/subdir.am
@@ -0,0 +1,116 @@
+#
+# doc/developer
+#
+
+dev_RSTFILES = \
+ doc/developer/bgp-typecodes.rst \
+ doc/developer/bgpd.rst \
+ doc/developer/building-frr-for-alpine.rst \
+ doc/developer/building-frr-for-archlinux.rst \
+ doc/developer/building-frr-for-centos6.rst \
+ doc/developer/building-frr-for-centos7.rst \
+ doc/developer/building-frr-for-debian8.rst \
+ doc/developer/building-frr-for-debian9.rst \
+ doc/developer/building-frr-for-debian12.rst \
+ doc/developer/building-frr-for-fedora.rst \
+ doc/developer/building-frr-for-freebsd10.rst \
+ doc/developer/building-frr-for-freebsd11.rst \
+ doc/developer/building-frr-for-freebsd13.rst \
+ doc/developer/building-frr-for-freebsd9.rst \
+ doc/developer/building-frr-for-netbsd6.rst \
+ doc/developer/building-frr-for-netbsd7.rst \
+ doc/developer/building-frr-for-openbsd6.rst \
+ doc/developer/building-frr-for-opensuse.rst \
+ doc/developer/building-frr-for-openwrt.rst \
+ doc/developer/building-frr-for-ubuntu1404.rst \
+ doc/developer/building-frr-for-ubuntu1604.rst \
+ doc/developer/building-frr-for-ubuntu1804.rst \
+ doc/developer/building-frr-for-ubuntu2004.rst \
+ doc/developer/building-frr-for-ubuntu2204.rst \
+ doc/developer/building-libunwind-note.rst \
+ doc/developer/building-libyang.rst \
+ doc/developer/building.rst \
+ doc/developer/checkpatch.rst \
+ doc/developer/cli.rst \
+ doc/developer/conf.py \
+ doc/developer/cross-compiling.rst \
+ doc/developer/frr-release-procedure.rst \
+ doc/developer/grpc.rst \
+ doc/developer/hooks.rst \
+ doc/developer/include-compile.rst \
+ doc/developer/index.rst \
+ doc/developer/library.rst \
+ doc/developer/link-state.rst \
+ doc/developer/lists.rst \
+ doc/developer/locking.rst \
+ doc/developer/logging.rst \
+ doc/developer/memtypes.rst \
+ doc/developer/modules.rst \
+ doc/developer/next-hop-tracking.rst \
+ doc/developer/ospf-api.rst \
+ doc/developer/ospf-sr.rst \
+ doc/developer/ospf.rst \
+ doc/developer/packaging-debian.rst \
+ doc/developer/packaging-redhat.rst \
+ doc/developer/packaging.rst \
+ doc/developer/path-internals-daemon.rst \
+ doc/developer/path-internals-pcep.rst \
+ doc/developer/path-internals.rst \
+ doc/developer/path.rst \
+ doc/developer/rcu.rst \
+ doc/developer/scripting.rst \
+ doc/developer/static-linking.rst \
+ doc/developer/tracing.rst \
+ doc/developer/testing.rst \
+ doc/developer/topotests-snippets.rst \
+ doc/developer/topotests-markers.rst \
+ doc/developer/topotests.rst \
+ doc/developer/workflow.rst \
+ doc/developer/xrefs.rst \
+ doc/developer/zebra.rst \
+ doc/developer/northbound/advanced-topics.rst \
+ doc/developer/northbound/architecture.rst \
+ doc/developer/northbound/demos.rst \
+ doc/developer/northbound/links.rst \
+ doc/developer/northbound/northbound.rst \
+ doc/developer/northbound/operational-data-rpcs-and-notifications.rst \
+ doc/developer/northbound/plugins-sysrepo.rst \
+ doc/developer/northbound/ppr-basic-test-topology.rst \
+ doc/developer/northbound/ppr-mpls-basic-test-topology.rst \
+ doc/developer/northbound/retrofitting-configuration-commands.rst \
+ doc/developer/northbound/transactional-cli.rst \
+ doc/developer/northbound/yang-module-translator.rst \
+ doc/developer/northbound/yang-tools.rst \
+ # end
+
+EXTRA_DIST += \
+ $(dev_RSTFILES) \
+ doc/developer/draft-zebra-00.ms \
+ doc/developer/ldpd-basic-test-setup.md \
+ doc/developer/release-announcement-template.md \
+ doc/developer/_static/overrides.css \
+ # end
+
+DEVBUILD = doc/developer/_build
+$(DEVBUILD)/.doctrees/environment.pickle: $(dev_RSTFILES)
+
+#
+# nothing built automatically for "all" target.
+#
+
+#
+# standard targets
+#
+
+developer-info: $(DEVBUILD)/texinfo/frr.info
+developer-html: $(DEVBUILD)/html/.buildinfo
+developer-pdf: $(DEVBUILD)/latexpdf
+
+#
+# hook-in for clean
+#
+
+.PHONY: clean-devdocs
+clean-local: clean-devdocs
+clean-devdocs:
+ -rm -rf "$(DEVBUILD)"
diff --git a/doc/developer/testing.rst b/doc/developer/testing.rst
new file mode 100644
index 0000000..5865a6b
--- /dev/null
+++ b/doc/developer/testing.rst
@@ -0,0 +1,11 @@
+.. _testing:
+
+*******
+Testing
+*******
+
+.. toctree::
+ :maxdepth: 2
+
+ topotests
+ topotests-jsontopo
diff --git a/doc/developer/topotests-jsontopo.rst b/doc/developer/topotests-jsontopo.rst
new file mode 100644
index 0000000..e2cc72c
--- /dev/null
+++ b/doc/developer/topotests-jsontopo.rst
@@ -0,0 +1,454 @@
+.. _topotests-json:
+
+Topotests with JSON
+===================
+
+Overview
+--------
+
+The following enhancements are built on top of the current topotests framework:
+
+
+* Creating the topology and assigning IPs to the routers' interfaces
+ dynamically. This is achieved with a JSON file, in which the user specifies
+ the number of routers, the links between routers, the routers' interfaces,
+ and the protocol configuration for all routers.
+
+* Creating the configurations dynamically. This is achieved with the
+ :file:`/usr/lib/frr/frr-reload.py` utility, which takes the running
+ configuration and the newly created configuration for a particular router,
+ creates a delta file (diff file), and loads it onto the router (a sketch of
+ the invocation follows).
+
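+For reference, the underlying reload invocation looks roughly like this
+(``--test`` previews the delta, ``--reload`` applies it; the config path is
+illustrative):
+
+.. code-block:: console
+
+ # Preview the delta that would be applied, then apply it
+ $ /usr/lib/frr/frr-reload.py --test /path/to/new_r1.conf
+ $ /usr/lib/frr/frr-reload.py --reload /path/to/new_r1.conf
+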
+
+Logging of test case executions
+-------------------------------
+
+* The execution log for each test is saved in the test-specific directory
+ created under `/tmp/topotests` (e.g.,
+ `/tmp/topotests/<testdirname.testfilename>/exec.log`)
+
+* Additionally, all test logs are captured in the `topotests.xml` results
+ file. This file will be saved in `/tmp/topotests/topotests.xml`. In order to
+ extract the logs for a particular test, one can use the `analyze.py` utility
+ found in the topotests base directory.
+
+* A router's current configuration, as it is changed during the test, can be
+ displayed on the console or sent to logs by adding
+ ``show_router_config = True`` in :file:`pytest.ini`.
+
+Note: the directory ``/tmp/topotests/`` is created by topotests by default;
+the same directory is used to save execution logs.
+
+Guidelines
+----------
+
+Writing New Tests
+^^^^^^^^^^^^^^^^^
+
+This section guides you through all the recommended steps to produce a
+standard topology test.
+
+This is the recommended test writing routine:
+
+* Create a json file which will have routers and protocol configurations
+* Write and debug the tests
+* Format the new code using `black <https://github.com/psf/black>`_
+* Create a Pull Request
+
+.. Note::
+
+ BGP tests MUST use generous convergence timeouts - you must ensure that any
+ test involving BGP uses a convergence timeout that is proportional to the
+ configured BGP timers. If the timers are not reduced from their defaults this
+ means 130 seconds; however, it is highly recommended that timers be reduced
+ from the default values unless the test requires they not be.
+
+File Hierarchy
+^^^^^^^^^^^^^^
+
+Before starting to write any tests one must know the file hierarchy. The
+repository hierarchy looks like this:
+
+.. code-block:: console
+
+ $ cd frr/tests/topotests
+ $ find ./*
+ ...
+ ./example_test/
+ ./example_test/test_template_json.json # input json file, having topology, interfaces, bgp and other configuration
+ ./example_test/test_template_json.py # test script to write and execute testcases
+ ...
+ ./lib # shared test/topology functions
+ ./lib/topojson.py # library to create topology and configurations dynamically from json file
+ ./lib/common_config.py # library to create protocol's common configurations ex- static_routes, prefix_lists, route_maps etc.
+ ./lib/bgp.py # library to create and test bgp configurations
+
+Defining the Topology and initial configuration in JSON file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The first step in writing a new test is to define the topology and initial
+configuration in a JSON file. Here are some example JSON files::
+
+ BGP neighborship with single phy-link, sample JSON file:
+ {
+ "ipv4base": "192.168.0.0",
+ "ipv4mask": 30,
+ "ipv6base": "fd00::",
+ "ipv6mask": 64,
+ "link_ip_start": {"ipv4": "192.168.0.0", "v4mask": 30, "ipv6": "fd00::", "v6mask": 64},
+ "lo_prefix": {"ipv4": "1.0.", "v4mask": 32, "ipv6": "2001:DB8:F::", "v6mask": 128},
+ "routers": {
+ "r1": {
+ "links": {
+ "lo": {"ipv4": "auto", "ipv6": "auto", "type": "loopback"},
+ "r2": {"ipv4": "auto", "ipv6": "auto"},
+ "r3": {"ipv4": "auto", "ipv6": "auto"}
+ },
+ "bgp": {
+ "local_as": "64512",
+ "address_family": {
+ "ipv4": {
+ "unicast": {
+ "neighbor": {
+ "r2": {
+ "dest_link": {
+ "r1": {}
+ }
+ },
+ "r3": {
+ "dest_link": {
+ "r1": {}
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "r2": {
+ "links": {
+ "lo": {"ipv4": "auto", "ipv6": "auto", "type": "loopback"},
+ "r1": {"ipv4": "auto", "ipv6": "auto"},
+ "r3": {"ipv4": "auto", "ipv6": "auto"}
+ },
+ "bgp": {
+ "local_as": "64512",
+ "address_family": {
+ "ipv4": {
+ "unicast": {
+ "redistribute": [
+ {
+ "redist_type": "static"
+ }
+ ],
+ "neighbor": {
+ "r1": {
+ "dest_link": {
+ "r2": {}
+ }
+ },
+ "r3": {
+ "dest_link": {
+ "r2": {}
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ ...
+
+
+BGP neighborship with loopback interface, sample JSON file::
+
+ {
+ "ipv4base": "192.168.0.0",
+ "ipv4mask": 30,
+ "ipv6base": "fd00::",
+ "ipv6mask": 64,
+ "link_ip_start": {"ipv4": "192.168.0.0", "v4mask": 30, "ipv6": "fd00::", "v6mask": 64},
+ "lo_prefix": {"ipv4": "1.0.", "v4mask": 32, "ipv6": "2001:DB8:F::", "v6mask": 128},
+ "routers": {
+ "r1": {
+ "links": {
+ "lo": {"ipv4": "auto", "ipv6": "auto", "type": "loopback",
+ "add_static_route":"yes"},
+ "r2": {"ipv4": "auto", "ipv6": "auto"}
+ },
+ "bgp": {
+ "local_as": "64512",
+ "address_family": {
+ "ipv4": {
+ "unicast": {
+ "neighbor": {
+ "r2": {
+ "dest_link": {
+ "lo": {
+ "source_link": "lo"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "static_routes": [
+ {
+ "network": "1.0.2.17/32",
+ "next_hop": "192.168.0.1
+ }
+ ]
+ },
+ "r2": {
+ "links": {
+ "lo": {"ipv4": "auto", "ipv6": "auto", "type": "loopback",
+ "add_static_route":"yes"},
+ "r1": {"ipv4": "auto", "ipv6": "auto"},
+ "r3": {"ipv4": "auto", "ipv6": "auto"}
+ },
+ "bgp": {
+ "local_as": "64512",
+ "address_family": {
+ "ipv4": {
+ "unicast": {
+ "redistribute": [
+ {
+ "redist_type": "static"
+ }
+ ],
+ "neighbor": {
+ "r1": {
+ "dest_link": {
+ "lo": {
+ "source_link": "lo"
+ }
+ }
+ },
+ "r3": {
+ "dest_link": {
+ "lo": {
+ "source_link": "lo"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "static_routes": [
+ {
+ "network": "192.0.20.1/32",
+ "no_of_ip": 9,
+ "admin_distance": 100,
+ "next_hop": "192.168.0.1",
+ "tag": 4001
+ }
+ ]
+ }
+ ...
+
+BGP neighborship with Multiple phy-links, sample JSON file::
+
+ {
+ "ipv4base": "192.168.0.0",
+ "ipv4mask": 30,
+ "ipv6base": "fd00::",
+ "ipv6mask": 64,
+ "link_ip_start": {"ipv4": "192.168.0.0", "v4mask": 30, "ipv6": "fd00::", "v6mask": 64},
+ "lo_prefix": {"ipv4": "1.0.", "v4mask": 32, "ipv6": "2001:DB8:F::", "v6mask": 128},
+ "routers": {
+ "r1": {
+ "links": {
+ "lo": {"ipv4": "auto", "ipv6": "auto", "type": "loopback"},
+ "r2-link1": {"ipv4": "auto", "ipv6": "auto"},
+ "r2-link2": {"ipv4": "auto", "ipv6": "auto"}
+ },
+ "bgp": {
+ "local_as": "64512",
+ "address_family": {
+ "ipv4": {
+ "unicast": {
+ "neighbor": {
+ "r2": {
+ "dest_link": {
+ "r1-link1": {}
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "r2": {
+ "links": {
+ "lo": {"ipv4": "auto", "ipv6": "auto", "type": "loopback"},
+ "r1-link1": {"ipv4": "auto", "ipv6": "auto"},
+ "r1-link2": {"ipv4": "auto", "ipv6": "auto"},
+ "r3-link1": {"ipv4": "auto", "ipv6": "auto"},
+ "r3-link2": {"ipv4": "auto", "ipv6": "auto"}
+ },
+ "bgp": {
+ "local_as": "64512",
+ "address_family": {
+ "ipv4": {
+ "unicast": {
+ "redistribute": [
+ {
+ "redist_type": "static"
+ }
+ ],
+ "neighbor": {
+ "r1": {
+ "dest_link": {
+ "r2-link1": {}
+ }
+ },
+ "r3": {
+ "dest_link": {
+ "r2-link1": {}
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ ...
+
+
+JSON File Explained
+"""""""""""""""""""
+
+Mandatory keywords/options in JSON:
+
+* ``ipv4base`` : base IPv4 address used to generate IPs, e.g. 192.168.0.0
+* ``ipv4mask`` : mask for IPv4 addresses, e.g. 30
+* ``ipv6base`` : base IPv6 address used to generate IPs, e.g. fd00::
+* ``ipv6mask`` : mask for IPv6 addresses, e.g. 64
+* ``link_ip_start`` : base IPv4 and IPv6 addresses for physical interfaces
+* ``lo_prefix`` : base IPv4 and IPv6 addresses for loopback interfaces
+* ``routers`` : the user can add any number of routers as per the topology;
+ a router's name can be any logical name, e.g. r1 or a0.
+* ``r1`` : name of the router
+* ``lo`` : loopback interface dict; IPv4 and/or IPv6 addresses are generated
+ automatically
+* ``type`` : type of interface, used to identify loopback interfaces
+* ``links`` : physical interfaces dict; IPv4 and/or IPv6 addresses are
+ generated automatically
+* ``r2-link1`` : used when routers have multiple links. 'r2' is the router
+ name, 'link' is any logical name, and '1' identifies the link number; the
+ router name and link must be separated by a hyphen (``-``), e.g. a0-peer1
+
+Optional keywords/options in JSON:
+
+* ``bgp`` : bgp configuration
+* ``local_as`` : Local AS number
+* ``unicast`` : All SAFI configuration
+* ``neighbor``: All neighbor details
+* ``dest_link`` : Destination link to which router will connect
+* ``router_id`` : bgp router-id
+* ``source_link`` : to establish a BGP neighborship with a loopback
+ interface, add ``source_link``: ``lo``
+* ``keepalivetimer`` : keepalive timer for a BGP neighbor
+* ``holddowntimer`` : hold down timer for a BGP neighbor
+* ``static_routes`` : create static routes for routers
+* ``redistribute`` : redistribute static and/or connected routes
+* ``prefix_lists`` : create Prefix-lists for routers
+
+Building topology and configurations
+""""""""""""""""""""""""""""""""""""
+
+Topology and initial configuration as well as teardown are invoked through the
+use of a pytest fixture::
+
+
+ from lib import fixtures
+
+ tgen = pytest.fixture(fixtures.tgen_json, scope="module")
+
+
+ # tgen is defined above
+ # topo is a fixture defined in ../conftest.py and automatically available
+ def test_bgp_convergence(tgen, topo):
+ bgp_convergence = bgp.verify_bgp_convergence(tgen, topo)
+ assert bgp_convergence
+
+The `fixtures.tgen_json` function calls `topojson.setup_module_from_json()` to
+create and return a new `topogen.Topogen()` object using the JSON config file
+with the same base filename as the test (i.e., `test_file.py` ->
+`test_file.json`). Additionally, the fixture calls `tgen.stop_topology()`
+after all the tests have run, to clean up. The function is only invoked once
+per file/module (scope="module"), but the resulting object is passed to each
+function that has `tgen` as an argument.
+
+For more info on the powerful pytest fixtures feature please see `FIXTURES`_.
+
+.. _FIXTURES: https://docs.pytest.org/en/6.2.x/fixture.html
+
+Creating configuration files
+""""""""""""""""""""""""""""
+
+Each router's configuration is saved in the config file frr_json.conf. Common
+configurations such as static routes, prefix lists and route maps are protocol
+independent and can be reused as-is by any protocol; BGP config is specific to
+BGP protocol testing.
+
+* The JSON file is passed to the API Topogen(), which saves the JSON object
+ in `self.json_topo`.
+* The Topogen object is then passed to the API build_config_from_json(),
+ which looks for configuration tags in the new JSON object.
+* If a tag is found in the JSON object, configuration is created as per the
+ input and written to the file frr_json.conf.
+* Once JSON parsing is over, frr_json.conf is loaded onto the respective
+ router. Config loading is done using 'vtysh -f <file>'. The initial config
+ at this point is also saved to frr_json_initial.conf. This file can be used
+ to reset the configuration on a router during the course of execution.
+* Resetting of configuration is done using the FRR "reload.py" utility, which
+ calculates the difference between the router's running config and the
+ user's config and loads the delta file onto the router. API used -
+ reset_config_on_router() (a hedged sketch follows).
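+
+A hedged sketch of invoking the reset helper from a test (the helper name is
+taken from the list above; its import location in ``lib/common_config.py`` is
+an assumption):
+
+.. code-block:: python
+
+ # Assumed import location for the reset helper named above
+ from lib.common_config import reset_config_on_router
+
+ def test_after_config_changes(tgen, topo):
+     # ... test steps that modified the routers' running configuration ...
+
+     # Diff the running config against frr_json_initial.conf and load the
+     # delta back onto the routers.
+     reset_config_on_router(tgen)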
+
+Writing Tests
+"""""""""""""
+
+Test topologies should always be bootstrapped from
+`example_test/test_template_json.py` when possible, in order to take advantage
+of the most recent infrastructure support code.
+
+Example:
+
+
+* Define a module scoped fixture to setup/teardown and supply the tests with the
+ `Topogen` object.
+
+.. code-block:: python
+
+ import pytest
+ from lib import fixtures
+
+ tgen = pytest.fixture(fixtures.tgen_json, scope="module")
+
+
+* Define test functions using pytest fixtures
+
+.. code-block:: python
+
+ from lib import bgp
+
+ # tgen is defined above
+ # topo is a global available fixture defined in ../conftest.py
+ def test_bgp_convergence(tgen, topo):
+ "Test for BGP convergence."
+
+ # Don't run this test if we have any failure.
+ if tgen.routers_have_failure():
+ pytest.skip(tgen.errors)
+
+ bgp_convergence = bgp.verify_bgp_convergence(tgen, topo)
+ assert bgp_convergence
diff --git a/doc/developer/topotests-markers.rst b/doc/developer/topotests-markers.rst
new file mode 100644
index 0000000..9f92412
--- /dev/null
+++ b/doc/developer/topotests-markers.rst
@@ -0,0 +1,114 @@
+.. _topotests-markers:
+
+Markers
+--------
+
+To allow for automated selective testing on large scale continuous integration
+systems, all tests must be marked with at least one of the following markers:
+
+* babeld
+* bfdd
+* bgpd
+* eigrpd
+* isisd
+* ldpd
+* nhrpd
+* ospf6d
+* ospfd
+* pathd
+* pbrd
+* pimd
+* ripd
+* ripngd
+* sharpd
+* staticd
+* vrrpd
+
+The markers correspond to the daemon subdirectories in FRR's source code and
+have to be added to tests on a module level, depending on which daemons are
+used during the test.
+
+The goal is to have continuous integration systems scan code submissions,
+detect changes to files in a daemon's subdirectory, and select only tests
+using that daemon to run, shortening developers' wait for test results and
+saving test infrastructure resources.
+
+Newly written test modules and code changes to existing tests that contain no
+markers or incorrect markers will be rejected by reviewers.
+
+
+Registering markers
+^^^^^^^^^^^^^^^^^^^
+New markers are registered in the file ``tests/topotests/pytest.ini``:
+
+.. code:: python3
+
+ # tests/topotests/pytest.ini
+ [pytest]
+ ...
+ markers =
+ babeld: Tests that run against BABELD
+ bfdd: Tests that run against BFDD
+ ...
+ vrrpd: Tests that run against VRRPD
+
+
+Adding markers to tests
+^^^^^^^^^^^^^^^^^^^^^^^
+Markers are added to a test by placing a global variable in the test module.
+
+Adding a single marker:
+
+.. code:: python3
+
+ import pytest
+ ...
+
+ # add after imports, before defining classes or functions:
+ pytestmark = pytest.mark.bfdd
+
+ ...
+
+ def test_using_bfdd():
+
+
+Adding multiple markers:
+
+.. code:: python3
+
+ import pytest
+ ...
+
+ # add after imports, before defining classes or functions:
+ pytestmark = [
+ pytest.mark.bgpd,
+ pytest.mark.ospfd,
+ pytest.mark.ospf6d
+ ]
+
+ ...
+
+ def test_using_bgpd_ospfd_ospf6d():
+
+
+Selecting marked modules for testing
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Selecting by a single marker:
+
+.. code:: bash
+
+ pytest -v -m isisd
+
+Selecting by multiple markers:
+
+.. code:: bash
+
+ pytest -v -m "isisd or ldpd or nhrpd"
+
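+Marker expressions follow standard pytest syntax, so markers can also be
+combined with ``and``/``not``:
+
+.. code:: bash
+
+ pytest -v -m "bgpd and not ospfd"
+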
+
+Further Information
+^^^^^^^^^^^^^^^^^^^
+The `online pytest documentation <https://docs.pytest.org/en/stable/example/markers.html>`_
+provides further information and usage examples for pytest markers.
+
diff --git a/doc/developer/topotests-snippets.rst b/doc/developer/topotests-snippets.rst
new file mode 100644
index 0000000..fb3c928
--- /dev/null
+++ b/doc/developer/topotests-snippets.rst
@@ -0,0 +1,272 @@
+.. _topotests-snippets:
+
+Snippets
+--------
+
+This document describes common snippets of code that are frequently needed to
+perform test checks.
+
+Checking for router / test failures
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following check uses the topogen API to check for software failure (e.g.
+zebra died) and/or for errors manually set by ``Topogen.set_error()``.
+
+.. code:: py
+
+ # Get the topology reference
+ tgen = get_topogen()
+
+ # Check for errors in the topology
+ if tgen.routers_have_failure():
+ # Skip the test with the topology errors as reason
+ pytest.skip(tgen.errors)
+
+Checking FRR routers version
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This code snippet is usually run after the topology setup to make sure all
+routers instantiated in the topology have the correct software version.
+
+.. code:: py
+
+ # Get the topology reference
+ tgen = get_topogen()
+
+ # Get the router list
+ router_list = tgen.routers()
+
+ # Run the check for all routers
+ for router in router_list.values():
+ if router.has_version('<', '3'):
+ # Set topology error, so the next tests are skipped
+ tgen.set_error('unsupported version')
+
+A sample of this snippet in a test can be found `here
+<ldp-vpls-topo1/test_ldp_vpls_topo1.py>`__.
+
+Interacting with equipment
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You might want to interact with the topology equipment during the tests and
+there are different ways to do so.
+
+Notes:
+
+1. When using the Topogen API, all the equipment code derives from ``Topogear``
+ (`lib/topogen.py <lib/topogen.py>`__). If you feel brave, you can look for
+ yourself at how the abstractions mentioned here work.
+
+2. When not using the ``Topogen`` API there is only one way to interact with
+ the equipment, which is by calling the ``mininet`` API functions directly
+ to spawn commands.
+
+Interacting with the Linux sandbox
+""""""""""""""""""""""""""""""""""
+
+Without ``Topogen``:
+
+.. code:: py
+
+ global net
+ output = net['r1'].cmd('echo "foobar"')
+ print('output is: {}'.format(output))
+
+With ``Topogen``:
+
+.. code:: py
+
+ tgen = get_topogen()
+ output = tgen.gears['r1'].run('echo "foobar"')
+ print('output is: {}'.format(output))
+
+Interacting with VTYSH
+""""""""""""""""""""""
+
+Without ``Topogen``:
+
+.. code:: py
+
+ global net
+ output = net['r1'].cmd('vtysh "show ip route" 2>/dev/null')
+ print('output is: {}'.format(output))
+
+With ``Topogen``:
+
+.. code:: py
+
+ tgen = get_topogen()
+ output = tgen.gears['r1'].vtysh_cmd("show ip route")
+ print('output is: {}'.format(output))
+
+``Topogen`` also supports sending multiple lines of command:
+
+.. code:: py
+
+ tgen = get_topogen()
+ output = tgen.gears['r1'].vtysh_cmd("""
+ configure terminal
+ router bgp 10
+ bgp router-id 10.0.255.1
+ neighbor 1.2.3.4 remote-as 10
+ !
+ router bgp 11
+ bgp router-id 10.0.255.2
+ !
+ """)
+ print('output is: {}'.format(output))
+
+You might also want to run multiple commands and get only the commands that
+failed:
+
+.. code:: py
+
+ tgen = get_topogen()
+ output = tgen.gears['r1'].vtysh_multicmd("""
+ configure terminal
+ router bgp 10
+ bgp router-id 10.0.255.1
+ neighbor 1.2.3.4 remote-as 10
+ !
+ router bgp 11
+ bgp router-id 10.0.255.2
+ !
+ """, pretty_output=false)
+ print('output is: {}'.format(output))
+
+Translating vtysh JSON output into Python structures:
+
+.. code:: py
+
+ tgen = get_topogen()
+ json_output = tgen.gears['r1'].vtysh_cmd("show ip route json", isjson=True)
+ output = json.dumps(json_output, indent=4)
+ print('output is: {}'.format(output))
+
+ # You can also access the data structure as normal. For example:
+ # protocol = json_output['1.1.1.1/32']['protocol']
+ # assert protocol == "ospf", "wrong protocol"
+
+.. note::
+
+ ``vtysh_(multi)cmd`` is only available for router types of equipment.
+
+Invoking mininet CLI
+^^^^^^^^^^^^^^^^^^^^
+
+Without ``Topogen``:
+
+.. code:: py
+
+ CLI(net)
+
+With ``Topogen``:
+
+.. code:: py
+
+ tgen = get_topogen()
+ tgen.mininet_cli()
+
+Reading files
+^^^^^^^^^^^^^
+
+Loading a normal text file content in the current directory:
+
+.. code:: py
+
+ # If you are using Topogen
+ # CURDIR = CWD
+ #
+ # Otherwise find the directory manually:
+ CURDIR = os.path.dirname(os.path.realpath(__file__))
+
+ file_name = '{}/r1/show_ip_route.txt'.format(CURDIR)
+ file_content = open(file_name).read()
+
+Loading JSON from a file:
+
+.. code:: py
+
+ import json
+
+ file_name = '{}/r1/show_ip_route.json'.format(CURDIR)
+ file_content = json.loads(open(file_name).read())
+
+Comparing JSON output
+^^^^^^^^^^^^^^^^^^^^^
+
+After obtaining JSON output formatted with Python data structures, you may use
+it to assert a minimalist schema:
+
+.. code:: py
+
+ tgen = get_topogen()
+ json_output = tgen.gears['r1'].vtysh_cmd("show ip route json", isjson=True)
+
+ expect = {
+ '1.1.1.1/32': {
+ 'protocol': 'ospf'
+ }
+ }
+
+ assertmsg = "route 1.1.1.1/32 was not learned through OSPF"
+ assert json_cmp(json_output, expect) is None, assertmsg
+
+``json_cmp`` function description (it might be outdated; you can find the
+latest description in the source code at
+:file:`tests/topotests/lib/topotest.py`):
+
+.. code:: text
+
+ JSON compare function. Receives two parameters:
+ * `d1`: json value
+ * `d2`: json subset which we expect
+
+ Returns `None` when all keys that `d1` has matches `d2`,
+ otherwise a string containing what failed.
+
+ Note: key absence can be tested by adding a key with value `None`.
+
+Pausing execution
+^^^^^^^^^^^^^^^^^
+
+Preferably, choose the ``sleep`` function that ``topotest`` provides, as it
+prints a notice during the test execution to help debug topology test execution
+time.
+
+.. code:: py
+
+ # Using the topotest sleep
+ from lib import topotest
+
+ topotest.sleep(10, 'waiting 10 seconds for bla')
+ # or just tell it the time:
+ # topotest.sleep(10)
+ # It will print 'Sleeping for 10 seconds'.
+
+ # Or you can also use the Python sleep, but it won't show anything
+ from time import sleep
+ sleep(5)
+
+iproute2 Linux commands as JSON
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``topotest`` implements two helpers that parse the output of ``ip route``
+commands into JSON. They can simplify your comparisons: you only need to
+provide a Python dictionary.
+
+.. code:: py
+
+ from lib import topotest
+
+ tgen = get_topogen()
+ routes = topotest.ip4_route(tgen.gears['r1'])
+ expected = {
+ '10.0.1.0/24': {},
+ '10.0.2.0/24': {
+ 'dev': 'r1-eth0'
+ }
+ }
+
+ assertmsg = "failed to find 10.0.1.0/24 and/or 10.0.2.0/24"
+ assert json_cmp(routes, expected) is None, assertmsg
diff --git a/doc/developer/topotests.rst b/doc/developer/topotests.rst
new file mode 100644
index 0000000..b8f213b
--- /dev/null
+++ b/doc/developer/topotests.rst
@@ -0,0 +1,1429 @@
+.. _topotests:
+
+Topotests
+=========
+
+Topotests is a suite of topology tests for FRR built on top of micronet.
+
+Installation and Setup
+----------------------
+
+Topotests run under python3. Additionally, for ExaBGP (which is used
+in some of the BGP tests) an older python2 version (and the python2
+version of ``pip``) must be installed.
+
+Tested with Ubuntu 20.04, Ubuntu 18.04, and Debian 11.
+
+Instructions are the same for all setups (i.e. ExaBGP is only used for
+BGP tests).
+
+Tshark is only required if you enable any packet captures on test runs.
+
+Valgrind is only required if you enable valgrind on test runs.
+
+Installing Topotest Requirements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: shell
+
+ apt-get install \
+ gdb \
+ iproute2 \
+ net-tools \
+ python3-pip \
+ iputils-ping \
+ tshark \
+ valgrind
+ python3 -m pip install wheel
+ python3 -m pip install 'pytest>=6.2.4'
+ python3 -m pip install 'pytest-xdist>=2.3.0'
+ python3 -m pip install 'scapy>=2.4.5'
+ python3 -m pip install xmltodict
+ # Use python2 pip to install older ExaBGP
+ python2 -m pip install 'exabgp<4.0.0'
+ useradd -d /var/run/exabgp/ -s /bin/false exabgp
+
+ # To enable the gRPC topotest install:
+ python3 -m pip install grpcio grpcio-tools
+
+
+Enable Coredumps
+""""""""""""""""
+
+Optional, but this will give better output.
+
+First, disable apport (which moves core files) by setting ``enabled=0`` in
+:file:`/etc/default/apport`.
+
+Next, update security limits by changing :file:`/etc/security/limits.conf` to::
+
+ #<domain> <type> <item> <value>
+ * soft core unlimited
+ root soft core unlimited
+ * hard core unlimited
+ root hard core unlimited
+
+Reboot for options to take effect.
+
+SNMP Utilities Installation
+"""""""""""""""""""""""""""
+
+To run SNMP tests you need to install SNMP utilities and MIBs. Unfortunately
+there are some errors in the upstream MIBs which need to be patched up. The
+following steps will get you there on Ubuntu 20.04.
+
+.. code:: shell
+
+ apt install libsnmp-dev
+ apt install snmpd snmp
+ apt install snmp-mibs-downloader
+ download-mibs
+ wget https://raw.githubusercontent.com/FRRouting/frr-mibs/main/iana/IANA-IPPM-METRICS-REGISTRY-MIB -O /usr/share/snmp/mibs/iana/IANA-IPPM-METRICS-REGISTRY-MIB
+ wget https://raw.githubusercontent.com/FRRouting/frr-mibs/main/ietf/SNMPv2-PDU -O /usr/share/snmp/mibs/ietf/SNMPv2-PDU
+ wget https://raw.githubusercontent.com/FRRouting/frr-mibs/main/ietf/IPATM-IPMC-MIB -O /usr/share/snmp/mibs/ietf/IPATM-IPMC-MIB
+ # Edit /etc/snmp/snmp.conf to look like this:
+ # As the snmp packages come without MIB files due to license reasons, loading
+ # of MIBs is disabled by default. If you added the MIBs you can reenable
+ # loading them by commenting out the following line.
+ mibs +ALL
+
+
+FRR Installation
+^^^^^^^^^^^^^^^^
+
+FRR needs to be installed separately. It is assumed to be configured like the
+standard Ubuntu packages:
+
+- Binaries in :file:`/usr/lib/frr`
+- State Directory :file:`/var/run/frr`
+- Running under user ``frr``, group ``frr``
+- vtygroup: ``frrvty``
+- config directory: :file:`/etc/frr`
+- For FRR Packages, install the dbg package as well for coredump decoding
+
+No FRR config needs to be done and no FRR daemons should be run ahead of the
+test. They are all started as part of the test.
+
+Manual FRR build
+""""""""""""""""
+
+If you prefer to manually build FRR, then use the following suggested config:
+
+.. code:: shell
+
+ ./configure \
+ --prefix=/usr \
+ --localstatedir=/var/run/frr \
+ --sbindir=/usr/lib/frr \
+ --sysconfdir=/etc/frr \
+ --enable-vtysh \
+ --enable-pimd \
+ --enable-pim6d \
+ --enable-sharpd \
+ --enable-multipath=64 \
+ --enable-user=frr \
+ --enable-group=frr \
+ --enable-vty-group=frrvty \
+ --enable-snmp=agentx \
+ --with-pkg-extra-version=-my-manual-build
+
+And create ``frr`` user and ``frrvty`` group as follows:
+
+.. code:: shell
+
+ addgroup --system --gid 92 frr
+ addgroup --system --gid 85 frrvty
+ adduser --system --ingroup frr --home /var/run/frr/ \
+ --gecos "FRRouting suite" --shell /bin/false frr
+ usermod -G frrvty frr
+
+Executing Tests
+---------------
+
+Configure your sudo environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Topotests must be run as root. Normally this will be accomplished through the
+use of the ``sudo`` command. In order for topotests to be able to open new
+windows (either XTerm or byobu/screen/tmux windows), certain environment
+variables must be passed through the sudo command. One way to do this is to
+specify the ``-E`` flag to ``sudo``. This will carry over most if not all of
+your environment variables, including ``PATH``. For example:
+
+.. code:: shell
+
+ sudo -E python3 -m pytest -s -v
+
+If you do not wish to use ``-E`` (e.g., to avoid ``sudo`` inheriting
+``PATH``), you can modify your ``/etc/sudoers`` config file to specifically
+pass the environment variables required by topotests; add the following lines
+to it:
+
+.. code:: shell
+
+ Defaults env_keep="TMUX"
+ Defaults env_keep+="TMUX_PANE"
+ Defaults env_keep+="STY"
+ Defaults env_keep+="DISPLAY"
+
+If there was already an ``env_keep`` configuration present, be sure to use
+``+=`` rather than ``=`` on the first line above as well.
+
+
+Execute all tests in distributed test mode
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: shell
+
+ sudo -E pytest -s -v -nauto --dist=loadfile
+
+The above command must be executed from inside the topotests directory.
+
+All test\_\* scripts in subdirectories are detected and executed (unless
+disabled in ``pytest.ini`` file). Pytest will execute up to N tests in parallel
+where N is based on the number of cores on the host.
+
+Analyze Test Results (``analyze.py``)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default router and execution logs are saved in ``/tmp/topotests`` and an XML
+results file is saved in ``/tmp/topotests/topotests.xml``. An analysis tool
+``analyze.py`` is provided to archive and analyze these results after the run
+completes.
+
+After the test run completes, one should pick an archive directory to store
+the results in and pass this value to ``analyze.py``. On first execution the
+results are moved to that directory from ``/tmp/topotests``. Subsequent runs
+of ``analyze.py`` with the same args will use that directory's contents
+instead of copying any new results from ``/tmp``. Below is an example of
+this, which also shows the default behavior of displaying all failed and
+errored tests in the run.
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -Ar run-save
+ bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_converge
+ ospf_basic_functionality/test_ospf_lan.py::test_ospf_lan_tc1_p0
+ bgp_gr_functionality_topo2/test_bgp_gr_functionality_topo2.py::test_BGP_GR_10_p2
+ bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_routingTable
+
+Here we see that 4 tests have failed. We can dig deeper by displaying the
+captured logs and errors. First, let's redisplay the results enumerated by
+adding the ``-E`` flag:
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -Ar run-save -E
+ 0 bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_converge
+ 1 ospf_basic_functionality/test_ospf_lan.py::test_ospf_lan_tc1_p0
+ 2 bgp_gr_functionality_topo2/test_bgp_gr_functionality_topo2.py::test_BGP_GR_10_p2
+ 3 bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_routingTable
+
+Now, to look at the error message for a failed test, we use ``-T N``, where N
+is the number of the test we are interested in, along with the ``--errmsg``
+option:
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -Ar run-save -T0 --errmsg
+ bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_converge: AssertionError: BGP did not converge:
+
+ IPv4 Unicast Summary (VIEW 1):
+ BGP router identifier 172.30.1.1, local AS number 100 vrf-id -1
+ BGP table version 1
+ RIB entries 1, using 184 bytes of memory
+ Peers 3, using 2169 KiB of memory
+
+ Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
+ 172.16.1.1 4 65001 0 0 0 0 0 never Connect 0 N/A
+ 172.16.1.2 4 65002 0 0 0 0 0 never Connect 0 N/A
+ 172.16.1.5 4 65005 0 0 0 0 0 never Connect 0 N/A
+
+ Total number of neighbors 3
+
+ assert False
+
+Now, to look at the error text for a failed test, we can use ``-T RANGES``,
+where ``RANGES`` can be a number (e.g., ``5``), a range (e.g., ``0-10``), or a
+comma-separated list of numbers and ranges (e.g., ``5,10-20,30``) of the test
+cases we are interested in, along with the ``--errtext`` option. In the
+example below we'll select the first failed test case.
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -Ar run-save -T0 --errtext
+ bgp_multiview_topo1/test_bgp_multiview_topo1.py::test_bgp_converge: def test_bgp_converge():
+ "Check for BGP converged on all peers and BGP views"
+
+ global fatal_error
+ global net
+ [...]
+ else:
+ # Bail out with error if a router fails to converge
+ bgpStatus = net["r%s" % i].cmd('vtysh -c "show ip bgp view %s summary"' % view)
+ > assert False, "BGP did not converge:\n%s" % bgpStatus
+ E AssertionError: BGP did not converge:
+ E
+ E IPv4 Unicast Summary (VIEW 1):
+ E BGP router identifier 172.30.1.1, local AS number 100 vrf-id -1
+ [...]
+ E Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
+ E 172.16.1.1 4 65001 0 0 0 0 0 never Connect 0 N/A
+ E 172.16.1.2 4 65002 0 0 0 0 0 never Connect 0 N/A
+ [...]
+
+To look at the full capture for a test, including the stdout and stderr
+(which includes full debug logs), use the ``--full`` option, or specify a
+``-T RANGES`` without specifying ``--errmsg`` or ``--errtext``.
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -Ar run-save -T0
+ @classname: bgp_multiview_topo1.test_bgp_multiview_topo1
+ @name: test_bgp_converge
+ @time: 141.401
+ @message: AssertionError: BGP did not converge:
+ [...]
+ system-out: --------------------------------- Captured Log ---------------------------------
+ 2021-08-09 02:55:06,581 DEBUG: lib.micronet_compat.topo: Topo(unnamed): Creating
+ 2021-08-09 02:55:06,581 DEBUG: lib.micronet_compat.topo: Topo(unnamed): addHost r1
+ [...]
+ 2021-08-09 02:57:16,932 DEBUG: topolog.r1: LinuxNamespace(r1): cmd_status("['/bin/bash', '-c', 'vtysh -c "show ip bgp view 1 summary" 2> /dev/null | grep ^[0-9] | grep -vP " 11\\s+(\\d+)"']", kwargs: {'encoding': 'utf-8', 'stdout': -1, 'stderr': -2, 'shell': False})
+ 2021-08-09 02:57:22,290 DEBUG: topolog.r1: LinuxNamespace(r1): cmd_status("['/bin/bash', '-c', 'vtysh -c "show ip bgp view 1 summary" 2> /dev/null | grep ^[0-9] | grep -vP " 11\\s+(\\d+)"']", kwargs: {'encoding': 'utf-8', 'stdout': -1, 'stderr': -2, 'shell': False})
+ 2021-08-09 02:57:27,636 DEBUG: topolog.r1: LinuxNamespace(r1): cmd_status("['/bin/bash', '-c', 'vtysh -c "show ip bgp view 1 summary"']", kwargs: {'encoding': 'utf-8', 'stdout': -1, 'stderr': -2, 'shell': False})
+ --------------------------------- Captured Out ---------------------------------
+ system-err: --------------------------------- Captured Err ---------------------------------
+
+Filtered results
+""""""""""""""""
+
+There are 4 types of test results, [e]rrored, [f]ailed, [p]assed, and
+[s]kipped. One can select the set of results to show with the ``-S`` or
+``--select`` flags along with the letters for each type (i.e., ``-S efps``
+would select all results). By default ``analyze.py`` will use ``-S ef`` (i.e.,
+[e]rrors and [f]ailures) unless the ``--search`` filter is given in which case
+the default is to search all results (i.e., ``-S efps``).
+
+One can find all results which contain a ``REGEXP``. To filter results using a
+regular expression use the ``--search REGEXP`` option. In this case, by default,
+all result types will be searched for a match against the given ``REGEXP``. If a
+test result output contains a match it is selected into the set of results to show.
+
+An example of using ``--search`` would be to search all test results for some
+log message, perhaps a warning or error, as shown below.
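+
+For instance (the pattern is illustrative):
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -Ar run-save --search "[Ww]arning"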
+
+Using XML Results File from CI
+""""""""""""""""""""""""""""""
+
+``analyze.py`` actually only needs the ``topotests.xml`` file to run. This is
+very useful for analyzing a CI run failure, where one need only download the
+``topotests.xml`` artifact from the run and then pass that to ``analyze.py``
+with the ``-r`` or ``--results`` option.
+
+For local runs, if you wish to simply copy the ``topotests.xml`` file (leaving
+the log files where they are), you can pass the ``-a`` (or ``--save-xml``)
+option instead of the ``-A`` (or ``--save``) option.
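+
+For example, analyzing a downloaded CI artifact directly (path illustrative):
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -r ~/Downloads/topotests.xml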
+
+Analyze Results from a Container Run
+""""""""""""""""""""""""""""""""""""
+
+``analyze.py`` can also be used with ``docker`` or ``podman`` containers.
+Everything works exactly as with a host run, except that you specify the name
+of the container, or the container-id, using the ``-C`` or ``--container``
+option. ``analyze.py`` will then use the results inside that container's
+``/tmp/topotests`` directory. It will extract and save those results when you
+pass the ``-A`` or ``-a`` options, just as with the host results.
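+
+For example (the container name is illustrative):
+
+.. code:: shell
+
+ ~/frr/tests/topotests# ./analyze.py -C topotest-container -Ar run-save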
+
+
+Execute single test
+^^^^^^^^^^^^^^^^^^^
+
+.. code:: shell
+
+ cd test_to_be_run
+ sudo -E pytest ./test_to_be_run.py
+
+For example, and assuming you are inside the frr directory:
+
+.. code:: shell
+
+ cd tests/topotests/bgp_l3vpn_to_bgp_vrf
+ sudo -E pytest ./test_bgp_l3vpn_to_bgp_vrf.py
+
+For further options, refer to pytest documentation.
+
+The test will set an exit code which can be used with ``git bisect``; a
+hedged sketch follows.
+
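+A wrapper script must rebuild and reinstall FRR so that each bisect step
+actually tests that revision (paths and job count illustrative):
+
+.. code:: shell
+
+ # bisect.sh -- rebuild and reinstall FRR, then run one topotest;
+ # the script's exit code tells git bisect if this revision is good
+ make -j4 && sudo make install && \
+         sudo -E pytest tests/topotests/bgp_l3vpn_to_bgp_vrf
+
+Run it from the FRR source tree with ``git bisect run ./bisect.sh``.
+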
+For the simulated topology, see the description in the python file.
+
+Running Topotests with AddressSanitizer
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Topotests can be run with AddressSanitizer. It requires GCC 4.8 or newer.
+(Ubuntu 16.04 as suggested here is fine with GCC 5 as default). For more
+information on AddressSanitizer, see
+https://github.com/google/sanitizers/wiki/AddressSanitizer.
+
+The checks are done automatically in the library call of ``checkRouterRunning``
+(i.e., at the beginning of tests, when there is a check that all daemons are
+running). No changes or extra configuration for topotests are required besides
+compiling the suite with AddressSanitizer enabled.
+
+If a daemon crashed, then the errorlog is checked for AddressSanitizer output.
+If found, then this is added with context (calling test) to
+:file:`/tmp/AddressSanitizer.txt` in Markdown compatible format.
+
+Compiling for GCC AddressSanitizer requires using ``gcc`` as the linker as
+well (instead of ``ld``). Here is a suggested way to compile FRR with
+AddressSanitizer for the ``master`` branch:
+
+.. code:: shell
+
+ git clone https://github.com/FRRouting/frr.git
+ cd frr
+ ./bootstrap.sh
+ ./configure \
+ --enable-address-sanitizer \
+ --prefix=/usr/lib/frr --sysconfdir=/etc/frr \
+ --localstatedir=/var/run/frr \
+ --sbindir=/usr/lib/frr --bindir=/usr/lib/frr \
+ --with-moduledir=/usr/lib/frr/modules \
+ --enable-multipath=0 --enable-rtadv \
+ --enable-tcp-zebra --enable-fpm --enable-pimd \
+ --enable-sharpd
+ make
+ sudo make install
+ # Create symlink for vtysh, so topotest finds it in /usr/lib/frr
+ sudo ln -s /usr/lib/frr/vtysh /usr/bin/
+
+and create ``frr`` user and ``frrvty`` group as shown above.
+
+Debugging Topotest Failures
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Install and run tests inside ``tmux`` or ``byobu`` for best results.
+
+``XTerm`` is also fully supported. GNU ``screen`` can be used in most
+situations; however, it does not work as well with launching ``vtysh`` or a
+shell on error.
+
+For the debugging options below which launch programs or CLIs, topotest
+should be run within tmux_ (or screen_), as ``gdb``, the shell or ``vtysh``
+will be launched using that windowing program; otherwise ``xterm`` will be
+used to launch the given programs.
+
+NOTE: you must run the topotest (pytest) such that your DISPLAY, STY or TMUX
+environment variables are carried over. You can do this by passing the
+``-E`` flag to ``sudo`` or you can modify your ``/etc/sudoers`` config to
+automatically pass that environment variable through to the ``sudo``
+environment.
+
+.. _screen: https://www.gnu.org/software/screen/
+.. _tmux: https://github.com/tmux/tmux/wiki
+
+Capturing Packets
+"""""""""""""""""
+
+One can view and capture packets on any of the networks or interfaces defined by
+the topotest by specifying the ``--pcap=NET|INTF|all[,NET|INTF,...]`` CLI option
+as shown in the examples below.
+
+.. code:: shell
+
+ # Capture on all networks in isis_topo1 test
+ sudo -E pytest isis_topo1 --pcap=all
+
+ # Capture on `sw1` network
+ sudo -E pytest isis_topo1 --pcap=sw1
+
+ # Capture on `sw1` network and on interface `eth0` on router `r2`
+ sudo -E pytest isis_topo1 --pcap=sw1,r2:r2-eth0
+
+For each capture a window is opened displaying a live summary of the captured
+packets. Additionally, the entire packet stream is captured in a pcap file in
+the test's log directory, e.g.:
+
+.. code:: console
+
+ $ sudo -E pytest isis_topo1 --pcap=sw1,r2:r2-eth0
+ ...
+ $ ls -l /tmp/topotests/isis_topo1.test_isis_topo1/
+ -rw------- 1 root root 45172 Apr 19 05:30 capture-r2-r2-eth0.pcap
+ -rw------- 1 root root 48412 Apr 19 05:30 capture-sw1.pcap
+ ...
+
+Viewing Live Daemon Logs
+""""""""""""""""""""""""
+
+One can live view daemon or the frr logs in separate windows using the
+``--logd`` CLI option as shown below.
+
+.. code:: shell
+
+ # View `ripd` logs on all routers in test
+ sudo -E pytest rip_allow_ecmp --logd=ripd
+
+ # View `ripd` logs on all routers and `mgmtd` log on `r1`
+ sudo -E pytest rip_allow_ecmp --logd=ripd --logd=mgmtd,r1
+
+For each ``--logd`` option a window is opened displaying the specified log
+live as the test runs.
+
+When using a unified log file ``frr.log`` one substitutes ``frr`` for the
+daemon name in the ``--logd`` CLI option, e.g.,
+
+.. code:: shell
+
+ # View `frr` log on all routers in test
+ sudo -E pytest some_test_suite --logd=frr
+
+Spawning Debugging CLI, ``vtysh`` or Shells on Routers on Test Failure
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+One can have a debugging CLI invoked on test failures by specifying the
+``--cli-on-error`` CLI option as shown in the example below.
+
+.. code:: shell
+
+ sudo -E pytest --cli-on-error all-protocol-startup
+
+The debugging CLI can run shell or vtysh commands on any combination of
+routers. It can also open shells or vtysh in their own windows for any
+combination of routers. This is usually the most useful option when debugging
+failures. Here is the help command from within a CLI launched on error:
+
+.. code:: shell
+
+ test_bgp_multiview_topo1/test_bgp_routingTable> help
+
+ Basic Commands:
+ cli :: open a secondary CLI window
+ help :: this help
+ hosts :: list hosts
+ quit :: quit the cli
+
+ HOST can be a host or one of the following:
+ - '*' for all hosts
+ - '.' for the parent munet
+ - a regex specified between '/' (e.g., '/rtr.*/')
+
+ New Window Commands:
+ logd HOST [HOST ...] DAEMON :: tail -f on the logfile of the given DAEMON for the given HOST[S]
+ pcap NETWORK :: capture packets from NETWORK into file capture-NETWORK.pcap the command is run within a new window which also shows packet summaries. NETWORK can also be an interface specified as HOST:INTF. To capture inside the host namespace.
+ stderr HOST [HOST ...] DAEMON :: tail -f on the stderr of the given DAEMON for the given HOST[S]
+ stdlog HOST [HOST ...] :: tail -f on the `frr.log` for the given HOST[S]
+ stdout HOST [HOST ...] DAEMON :: tail -f on the stdout of the given DAEMON for the given HOST[S]
+ term HOST [HOST ...] :: open terminal[s] (TMUX or XTerm) on HOST[S], * for all
+ vtysh ROUTER [ROUTER ...] ::
+ xterm HOST [HOST ...] :: open XTerm[s] on HOST[S], * for all
+ Inline Commands:
+ [ROUTER ...] COMMAND :: execute vtysh COMMAND on the router[s]
+ [HOST ...] sh <SHELL-COMMAND> :: execute <SHELL-COMMAND> on hosts
+ [HOST ...] shi <INTERACTIVE-COMMAND> :: execute <INTERACTIVE-COMMAND> on HOST[s]
+
+ test_bgp_multiview_topo1/test_bgp_routingTable> r1 show int br
+ ------ Host: r1 ------
+ Interface Status VRF Addresses
+ --------- ------ --- ---------
+ erspan0 down default
+ gre0 down default
+ gretap0 down default
+ lo up default
+ r1-eth0 up default 172.16.1.254/24
+ r1-stub up default 172.20.0.1/28
+
+ ----------------------
+ test_bgp_multiview_topo1/test_bgp_routingTable>
+
+Additionally, one can have ``vtysh`` or a shell launched on all routers when a
+test fails. To launch the given process on each router after a test failure,
+specify one of ``--shell-on-error`` or ``--vtysh-on-error``.
+
+Spawning ``vtysh`` or Shells on Routers
+"""""""""""""""""""""""""""""""""""""""
+
+Topotest can automatically launch a shell or ``vtysh`` for any or all routers
+in a test. This is enabled by specifying one of two CLI arguments, ``--shell``
+or ``--vtysh``. Both of these options can be set to a single router value,
+multiple comma-separated values, or ``all``.
+
+When either of these options is specified, topotest will pause after setup and
+after each test to allow for inspection of the router state.
+
+Here's an example of launching ``vtysh`` on routers ``rt1`` and ``rt2``.
+
+.. code:: shell
+
+ sudo -E pytest --vtysh=rt1,rt2 all-protocol-startup
+
+Debugging with GDB
+""""""""""""""""""
+
+Topotest can automatically launch any daemon with ``gdb``, possibly setting
+breakpoints for any test run. This is enabled by specifying one or two CLI
+arguments, ``--gdb-routers`` and ``--gdb-daemons``. Additionally,
+``--gdb-breakpoints`` can be used to automatically set breakpoints in the
+launched ``gdb`` processes.
+
+Each of these options can be set to a single value, multiple comma-separated
+values, or ``all``. If ``--gdb-routers`` is empty but ``--gdb-daemons`` is set,
+then the given daemons will be launched in ``gdb`` on all routers in the test.
+Likewise, if ``--gdb-routers`` is set but ``--gdb-daemons`` is empty, then all
+daemons on the given routers will be launched in ``gdb``.
+
+Here's an example of launching ``zebra`` and ``bgpd`` inside ``gdb`` on router
+``r1`` with a breakpoint set on ``nb_config_diff``:
+
+.. code:: shell
+
+ sudo -E pytest --gdb-routers=r1 \
+ --gdb-daemons=bgpd,zebra \
+ --gdb-breakpoints=nb_config_diff \
+ all-protocol-startup
+
+Reporting Memleaks with FRR Memory Statistics
+"""""""""""""""""""""""""""""""""""""""""""""
+
+FRR reports all still-allocated FRR memory objects on exit to standard error.
+Topotest can be run to report such output as errors in order to check for
+memleaks in FRR memory allocations. Specifying the CLI argument ``--memleaks``
+enables reporting of FRR-based memory allocations still present at exit as
+errors.
+
+.. code:: shell
+
+ sudo -E pytest --memleaks all-protocol-startup
+
+
+Stderr Log from Daemons after Exit
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When running with ``--memleaks``, reporting of other, non-memory-related
+messages seen on stderr after the daemons exit can be enabled by setting the
+following environment variable::
+
+   export TOPOTESTS_CHECK_STDERR=Yes
+
+(The value doesn't matter at this time; the check is only whether the
+environment variable exists.) There is no pass/fail on this reporting; the
+output is simply reported to the console.
+
+Collect Memory Leak Information
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When running with ``--memleaks``, FRR processes report unfreed memory
+allocations upon exit. To additionally report these memory leaks to a specific
+location, define the environment variable ``TOPOTESTS_CHECK_MEMLEAK`` with a
+file prefix, e.g.:
+
+::
+
+   export TOPOTESTS_CHECK_MEMLEAK="/home/mydir/memleak_"
+
+For tests that support the ``TOPOTESTS_CHECK_MEMLEAK`` environment variable,
+this will write the information to files with the given prefix (followed by
+the test name), e.g. :file:`/home/mydir/memleak_test_bgp_multiview_topo1.txt`,
+in case of a memory leak.
+
+Detecting Memleaks with Valgrind
+""""""""""""""""""""""""""""""""
+
+Topotest can automatically launch all daemons with ``valgrind`` to check for
+memleaks. This is enabled by specifying one or two CLI arguments:
+``--valgrind-memleaks`` enables general memleak detection, and
+``--valgrind-extra`` enables extra functionality, including generating a
+suppression file. The suppression file ``tools/valgrind.supp`` is used when
+memleak detection is enabled.
+
+.. code:: shell
+
+ sudo -E pytest --valgrind-memleaks all-protocol-startup
+
+Collecting Performance Data using perf(1)
+"""""""""""""""""""""""""""""""""""""""""
+
+Topotest can automatically launch any daemon under ``perf(1)`` to collect
+performance data. The daemon is run in non-daemon mode with ``perf record -g``.
+The ``perf.data`` file will be saved in the router-specific directory under the
+test's run directory.
+
+Here's an example of collecting performance data from ``mgmtd`` on router ``r1``
+during the config_timing test.
+
+.. code:: console
+
+ $ sudo -E pytest --perf=mgmtd,r1 config_timing
+ ...
+ $ find /tmp/topotests/ -name '*perf.data*'
+ /tmp/topotests/config_timing.test_config_timing/r1/perf.data
+
+To specify different arguments for ``perf record``, one can use the
+``--perf-options`` CLI option; its value replaces the ``-g`` used by default.
+
+.. _topotests_docker:
+
+Running Tests with Docker
+-------------------------
+
+There is a Docker image which allows running the topotests.
+
+Quickstart
+^^^^^^^^^^
+
+If you have Docker installed, you can run the topotests in Docker. The easiest
+way to do this is to use the make targets from this repository.
+
+Your current user needs to have access to the Docker daemon. Alternatively, you
+can run these commands as root.
+
+.. code:: console
+
+ make topotests
+
+This command will pull the most recent topotests image from Docker Hub, compile
+FRR inside of it, and run the topotests.
+
+Advanced Usage
+^^^^^^^^^^^^^^
+
+Internally, the topotests make target uses a shell script to pull the image and
+spawn the Docker container.
+
+There are several environment variables which can be used to modify the
+behavior of the script; these can be listed by calling it with ``-h``:
+
+.. code:: console
+
+ ./tests/topotests/docker/frr-topotests.sh -h
+
+For example, a volume is used to cache build artifacts between multiple runs of
+the image. If you need to force a complete recompile, you can set
+``TOPOTEST_CLEAN``:
+
+.. code:: console
+
+ TOPOTEST_CLEAN=1 ./tests/topotests/docker/frr-topotests.sh
+
+By default, ``frr-topotests.sh`` will build FRR and run pytest. If you append
+arguments and the first one starts with ``/`` or ``./``, they will replace the
+call to pytest. If the appended arguments do not match this pattern, they will
+be provided to pytest as arguments. So, to run a specific test with more
+verbose logging:
+
+.. code:: console
+
+ ./tests/topotests/docker/frr-topotests.sh -vv -s all-protocol-startup/test_all_protocol_startup.py
+
+And to compile FRR but drop into a shell instead of running pytest:
+
+.. code:: console
+
+ ./tests/topotests/docker/frr-topotests.sh /bin/bash
+
+Development
+^^^^^^^^^^^
+
+The Docker image just includes all the components to run the topotests, but
+not the topotests themselves. So if you just want to write tests and don't
+need to change the environment provided by the Docker image, you don't need to
+build your own image.
+
+When developing new tests, there is one caveat though: the startup script of
+the container will run a ``git-clean`` on its copy of the FRR tree to avoid
+any pollution of the container with build artifacts from the host. This will
+also result in your newly written tests being unavailable in the container
+unless they are at least added to the index with ``git-add``.
+
+If you do want to test changes to the Docker image, you can locally build the
+image and run the tests without pulling from the registry using the following
+commands:
+
+.. code:: console
+
+ make topotests-build
+ TOPOTEST_PULL=0 make topotests
+
+
+.. _topotests-guidelines:
+
+Guidelines
+----------
+
+Executing Tests
+^^^^^^^^^^^^^^^
+
+To run the whole suite of tests, the following commands must be executed at
+the top-level directory of topotests:
+
+.. code:: shell
+
+ $ # Change to the top level directory of topotests.
+ $ cd path/to/topotests
+ $ # Tests must be run as root, since micronet requires it.
+ $ sudo -E pytest
+
+In order to run a specific test, you can use the following commands:
+
+.. code:: shell
+
+ $ # running a specific topology
+ $ sudo -E pytest ospf-topo1/
+ $ # or inside the test folder
+ $ cd ospf-topo1
+ $ sudo -E pytest # to run all tests inside the directory
+ $ sudo -E pytest test_ospf_topo1.py # to run a specific test
+ $ # or outside the test folder
+ $ cd ..
+ $ sudo -E pytest ospf-topo1/test_ospf_topo1.py # to run a specific one
+
+The output of the tested daemons will be available in a temporary folder on
+your machine:
+
+.. code:: shell
+
+   $ ls /tmp/topotests/ospf-topo1.test_ospf_topo1/r1
+ ...
+ zebra.err # zebra stderr output
+ zebra.log # zebra log file
+ zebra.out # zebra stdout output
+ ...
+
+You can also run memory leak tests to get reports:
+
+.. code:: shell
+
+ $ # Set the environment variable to apply to a specific test...
+ $ sudo -E env TOPOTESTS_CHECK_MEMLEAK="/tmp/memleak_report_" pytest ospf-topo1/test_ospf_topo1.py
+ $ # ...or apply to all tests adding this line to the configuration file
+ $ echo 'memleak_path = /tmp/memleak_report_' >> pytest.ini
+ $ # You can also use your editor
+ $ $EDITOR pytest.ini
+ $ # After running tests you should see your files:
+ $ ls /tmp/memleak_report_*
+ memleak_report_test_ospf_topo1.txt
+
+Writing a New Test
+^^^^^^^^^^^^^^^^^^
+
+This section will guide you through all the recommended steps to produce a
+standard topology test.
+
+This is the recommended test writing routine:
+
+- Write a topology (Graphviz recommended)
+- Obtain configuration files
+- Write the test itself
+- Format the new code using `black <https://github.com/psf/black>`_
+- Create a Pull Request
+
+Some things to keep in mind:
+
+- BGP tests MUST use generous convergence timeouts - you must ensure
+ that any test involving BGP uses a convergence timeout of at least
+ 130 seconds.
+- Topotests are run on a range of Linux versions: if your test
+  requires some OS-specific capability (like MPLS or VRF support),
+  there are test functions available in the libraries that
+  will help you determine whether your test should run or be skipped.
+- Avoid including unstable data in your test: don't rely on link-local
+ addresses or ifindex values, for example, because these can change
+ from run to run.
+- Using sleep is almost never appropriate. As an example: if the test resets
+  the peers in BGP, the test should look for the peers re-converging instead
+  of just sleeping an arbitrary amount of time and continuing on. See
+  ``verify_bgp_convergence`` as a good example of this; in particular, look at
+  its use of the ``@retry`` decorator (a polling sketch follows this list). If
+  you are having trouble figuring out what to look for, please do not be
+  afraid to ask.
+- Don't duplicate effort. There exist many protocol utility functions that can
+  be found in their eponymous module under ``tests/topotests/lib/`` (e.g.,
+  ``ospf.py``).
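+
+Here is a minimal sketch of this polling pattern using the
+``topotest.run_and_expect`` helper. It assumes a running topology with a
+``tgen`` object (as provided by the module fixture); the show command and the
+expected JSON keys are purely illustrative and should be adapted to whatever
+your test actually verifies:
+
+.. code:: py
+
+   from functools import partial
+
+   from lib import topotest
+
+   def _bgp_converged(router):
+       # Re-run the show command on every retry; json_cmp() returns None
+       # when the output matches the expected (partial) structure.
+       output = router.vtysh_cmd("show ip bgp summary json", isjson=True)
+       return topotest.json_cmp(output, {"ipv4Unicast": {"failedPeers": 0}})
+
+   test_func = partial(_bgp_converged, tgen.gears["r1"])
+   # Poll for up to ~130 seconds instead of sleeping a fixed amount of time.
+   success, result = topotest.run_and_expect(test_func, None, count=130, wait=1)
+   assert success, "r1 BGP peers failed to re-converge: {}".format(result)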
+
+
+
+Topotest File Hierarchy
+"""""""""""""""""""""""
+
+Before starting to write any tests, one must know the file hierarchy. The
+repository hierarchy looks like this:
+
+.. code:: shell
+
+ $ cd path/to/topotest
+ $ find ./*
+ ...
+ ./README.md # repository read me
+ ./GUIDELINES.md # this file
+ ./conftest.py # test hooks - pytest related functions
+ ./example-test # example test folder
+ ./example-test/__init__.py # python package marker - must always exist.
+ ./example-test/test_template.jpg # generated topology picture - see next section
+ ./example-test/test_template.dot # Graphviz dot file
+ ./example-test/test_template.py # the topology plus the test
+ ...
+ ./ospf-topo1 # the ospf topology test
+ ./ospf-topo1/r1 # router 1 configuration files
+ ./ospf-topo1/r1/zebra.conf # zebra configuration file
+ ./ospf-topo1/r1/ospfd.conf # ospf configuration file
+ ./ospf-topo1/r1/ospfroute.txt # 'show ip ospf' output reference file
+ # removed other for shortness sake
+ ...
+ ./lib # shared test/topology functions
+ ./lib/topogen.py # topogen implementation
+ ./lib/topotest.py # topotest implementation
+
+Guidelines for creating/editing topotests:
+
+- New topologies that don't fit the existing directories should get their own
+  directory
+- Always remember to add the ``__init__.py`` to new folders; this makes
+  autocomplete engines and pylint happy
+- Router (Quagga/FRR) specific code should go in ``topotest.py``
+- Generic/repeated router actions should have an abstraction in
+  ``topogen.TopoRouter``
+- Generic/repeated non-router code should go in ``topotest.py``
+- pytest-related code should go in ``conftest.py`` (e.g. specialized asserts)
+
+Defining the Topology
+"""""""""""""""""""""
+
+The first step in writing a new test is to define the topology. This step can
+be done in many ways, but the recommended one is to use Graphviz to generate a
+drawing of the topology. It allows us to see the topology graphically and to
+see the names of equipment, links, and addresses.
+
+Here is an example of a Graphviz dot file that generates the template topology
+:file:`tests/topotests/example-test/test_template.dot` (the inlined code might
+get outdated; please see the linked file)::
+
+ graph template {
+ label="template";
+
+ # Routers
+ r1 [
+ shape=doubleoctagon,
+ label="r1",
+ fillcolor="#f08080",
+ style=filled,
+ ];
+ r2 [
+ shape=doubleoctagon,
+ label="r2",
+ fillcolor="#f08080",
+ style=filled,
+ ];
+
+ # Switches
+ s1 [
+ shape=oval,
+ label="s1\n192.168.0.0/24",
+ fillcolor="#d0e0d0",
+ style=filled,
+ ];
+ s2 [
+ shape=oval,
+ label="s2\n192.168.1.0/24",
+ fillcolor="#d0e0d0",
+ style=filled,
+ ];
+
+ # Connections
+ r1 -- s1 [label="eth0\n.1"];
+
+ r1 -- s2 [label="eth1\n.100"];
+ r2 -- s2 [label="eth0\n.1"];
+ }
+
+Here is the produced graph:
+
+.. graphviz::
+
+ graph template {
+ label="template";
+
+ # Routers
+ r1 [
+ shape=doubleoctagon,
+ label="r1",
+ fillcolor="#f08080",
+ style=filled,
+ ];
+ r2 [
+ shape=doubleoctagon,
+ label="r2",
+ fillcolor="#f08080",
+ style=filled,
+ ];
+
+ # Switches
+ s1 [
+ shape=oval,
+ label="s1\n192.168.0.0/24",
+ fillcolor="#d0e0d0",
+ style=filled,
+ ];
+ s2 [
+ shape=oval,
+ label="s2\n192.168.1.0/24",
+ fillcolor="#d0e0d0",
+ style=filled,
+ ];
+
+ # Connections
+ r1 -- s1 [label="eth0\n.1"];
+
+ r1 -- s2 [label="eth1\n.100"];
+ r2 -- s2 [label="eth0\n.1"];
+ }
+
+Generating / Obtaining Configuration Files
+""""""""""""""""""""""""""""""""""""""""""
+
+In order to get the configuration files or command output for each router, we
+need to run the topology and execute commands in ``vtysh``. The quickest way
+to achieve that is to write the topology building code and run the topology.
+
+To bootstrap your test topology, do the following steps:
+
+- Copy the template test
+
+.. code:: shell
+
+ $ mkdir new-topo/
+ $ touch new-topo/__init__.py
+ $ cp example-test/test_template.py new-topo/test_new_topo.py
+
+- Modify the template according to your dot file
+
+Here is the template topology described in the previous section in Python code:
+
+.. code:: py
+
+   topodef = {
+       "s1": "r1",
+       "s2": ("r1", "r2"),
+   }
+
+If more specialized topology definitions or router initialization arguments
+are required, a build function can be used instead of a dictionary:
+
+.. code:: py
+
+ def build_topo(tgen):
+ "Build function"
+
+ # Create 2 routers
+ for routern in range(1, 3):
+ tgen.add_router("r{}".format(routern))
+
+       # Create a switch with just one router connected to it to simulate an
+       # empty network.
+ switch = tgen.add_switch("s1")
+ switch.add_link(tgen.gears["r1"])
+
+ # Create a connection between r1 and r2
+ switch = tgen.add_switch("s2")
+ switch.add_link(tgen.gears["r1"])
+ switch.add_link(tgen.gears["r2"])
+
+- Run the topology
+
+Topogen allows us to run the topology without running any tests; you can do
+that using the following example commands:
+
+.. code:: shell
+
+   $ # Running your bootstrapped topology
+ $ sudo -E pytest -s --topology-only new-topo/test_new_topo.py
+ $ # Running the test_template.py topology
+ $ sudo -E pytest -s --topology-only example-test/test_template.py
+ $ # Running the ospf_topo1.py topology
+ $ sudo -E pytest -s --topology-only ospf-topo1/test_ospf_topo1.py
+
+Parameter explanation:
+
+.. program:: pytest
+
+.. option:: -s
+
+   Disables pytest's input/output capture, allowing the interactive CLI to run
+   inline; if this is not specified, a new window will be opened for the
+   interactive CLI instead.
+
+.. option:: --topology-only
+
+ Don't run any tests, just build the topology.
+
+After executing the commands above, you should get the following terminal
+output:
+
+.. code:: shell
+
+ frr/tests/topotests# sudo -E pytest -s --topology-only ospf_topo1/test_ospf_topo1.py
+ ============================= test session starts ==============================
+ platform linux -- Python 3.9.2, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
+ rootdir: /home/chopps/w/frr/tests/topotests, configfile: pytest.ini
+ plugins: forked-1.3.0, xdist-2.3.0
+ collected 11 items
+
+ [...]
+ unet>
+
+The last line shows us that we are now using the CLI (Command Line
+Interface); from here you can call your router's ``vtysh`` or even bash.
+
+Here's the help text:
+
+.. code:: shell
+
+ unet> help
+
+ Commands:
+ help :: this help
+ sh [hosts] <shell-command> :: execute <shell-command> on <host>
+ term [hosts] :: open shell terminals for hosts
+ vtysh [hosts] :: open vtysh terminals for hosts
+ [hosts] <vtysh-command> :: execute vtysh-command on hosts
+
+Here are some example commands:
+
+.. code:: shell
+
+ unet> sh r1 ping 10.0.3.1
+ PING 10.0.3.1 (10.0.3.1) 56(84) bytes of data.
+ 64 bytes from 10.0.3.1: icmp_seq=1 ttl=64 time=0.576 ms
+ 64 bytes from 10.0.3.1: icmp_seq=2 ttl=64 time=0.083 ms
+ 64 bytes from 10.0.3.1: icmp_seq=3 ttl=64 time=0.088 ms
+ ^C
+ --- 10.0.3.1 ping statistics ---
+ 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
+ rtt min/avg/max/mdev = 0.083/0.249/0.576/0.231 ms
+
+ unet> r1 show run
+ Building configuration...
+
+ Current configuration:
+ !
+ frr version 8.1-dev-my-manual-build
+ frr defaults traditional
+ hostname r1
+ log file /tmp/topotests/ospf_topo1.test_ospf_topo1/r1/zebra.log
+ [...]
+ end
+
+ unet> show daemons
+ ------ Host: r1 ------
+ zebra ospfd ospf6d staticd
+ ------- End: r1 ------
+ ------ Host: r2 ------
+ zebra ospfd ospf6d staticd
+ ------- End: r2 ------
+ ------ Host: r3 ------
+ zebra ospfd ospf6d staticd
+ ------- End: r3 ------
+ ------ Host: r4 ------
+ zebra ospfd ospf6d staticd
+ ------- End: r4 ------
+
+After you have successfully configured your topology, you can obtain the
+configuration files (per daemon) using the following commands:
+
+.. code:: shell
+
+ unet> sh r3 vtysh -d ospfd
+
+ Hello, this is FRRouting (version 3.1-devrzalamena-build).
+ Copyright 1996-2005 Kunihiro Ishiguro, et al.
+
+ r1# show running-config
+ Building configuration...
+
+ Current configuration:
+ !
+ frr version 3.1-devrzalamena-build
+ frr defaults traditional
+ no service integrated-vtysh-config
+ !
+ log file ospfd.log
+ !
+ router ospf
+ ospf router-id 10.0.255.3
+ redistribute kernel
+ redistribute connected
+ redistribute static
+ network 10.0.3.0/24 area 0
+ network 10.0.10.0/24 area 0
+ network 172.16.0.0/24 area 1
+ !
+ line vty
+ !
+ end
+ r1#
+
+You can also log in to a node using ``nsenter``, with bash etc. A pid file for
+each node will be created in the relevant test directory. You can run scripts
+inside the node, or use vtysh's ``<tab>`` or ``<?>`` features.
+
+.. code:: shell
+
+ [unet shell]
+ # cd tests/topotests/srv6_locator
+ # ./test_srv6_locator.py --topology-only
+ unet> r1 show segment-routing srv6 locator
+ Locator:
+ Name ID Prefix Status
+ -------------------- ------- ------------------------ -------
+ loc1 1 2001:db8:1:1::/64 Up
+ loc2 2 2001:db8:2:2::/64 Up
+
+ [Another shell]
+ # nsenter -a -t $(cat /tmp/topotests/srv6_locator.test_srv6_locator/r1.pid) bash --norc
+ # vtysh
+ r1# r1 show segment-routing srv6 locator
+ Locator:
+ Name ID Prefix Status
+ -------------------- ------- ------------------------ -------
+ loc1 1 2001:db8:1:1::/64 Up
+ loc2 2 2001:db8:2:2::/64 Up
+
+Writing Tests
+"""""""""""""
+
+Test topologies should always be bootstrapped from
+:file:`tests/topotests/example_test/test_template.py` because it contains
+important boilerplate code that can't be avoided, such as:
+
+- Code to load the per-router daemon configuration files:
+
+.. code:: py
+
+ # For all routers arrange for:
+ # - starting zebra using config file from <rtrname>/zebra.conf
+ # - starting ospfd using an empty config file.
+ for rname, router in router_list.items():
+ router.load_config(TopoRouter.RD_ZEBRA, "zebra.conf")
+ router.load_config(TopoRouter.RD_OSPF)
+
+
+- The topology definition or build function
+
+.. code:: py
+
+ topodef = {
+ "s1": ("r1", "r2"),
+ "s2": ("r2", "r3")
+ }
+
+ def build_topo(tgen):
+ # topology build code
+ ...
+
+- pytest setup/teardown fixture to start the topology and supply the ``tgen``
+  argument to tests.
+
+.. code:: py
+
+
+ @pytest.fixture(scope="module")
+ def tgen(request):
+ "Setup/Teardown the environment and provide tgen argument to tests"
+
+       tgen = Topogen(topodef, request.module.__name__)
+       # or
+       tgen = Topogen(build_topo, request.module.__name__)
+
+ ...
+
+ # Start and configure the router daemons
+ tgen.start_router()
+
+ # Provide tgen as argument to each test function
+ yield tgen
+
+ # Teardown after last test runs
+ tgen.stop_topology()
+
+
+Requirements (a minimal example sketch follows this list):
+
+- Directory name for a new topotest must not contain hyphen (``-``) characters.
+ To separate words, use underscores (``_``). For example, ``tests/topotests/bgp_new_example``.
+- Test code should always be declared inside functions that begin with the
+ ``test_`` prefix. Functions beginning with different prefixes will not be run
+ by pytest.
+- Configuration files and long output commands should go into separate files
+  inside folders named after the equipment.
+- Tests must be able to run without any interaction. To make sure your test
+ conforms with this, run it without the :option:`-s` parameter.
+- Use `black <https://github.com/psf/black>`_ code formatter before creating
+ a pull request. This ensures we have a unified code style.
+- Mark test modules with pytest markers depending on the daemons used during the
+ tests (see :ref:`topotests-markers`)
+- Always use IPv4 :rfc:`5737` (``192.0.2.0/24``, ``198.51.100.0/24``,
+ ``203.0.113.0/24``) and IPv6 :rfc:`3849` (``2001:db8::/32``) ranges reserved
+ for documentation.
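+
+Here is a minimal sketch tying several of these requirements together. The
+marker, command, and expected output are illustrative only and should be
+adapted to the daemons and topology of your test:
+
+.. code:: py
+
+   import pytest
+
+   # Mark the module with the daemons used by the test
+   # (see :ref:`topotests-markers`).
+   pytestmark = [pytest.mark.ospfd]
+
+   def test_ospf_neighbors(tgen):
+       "Only functions prefixed with test_ are collected by pytest."
+       if tgen.routers_have_failure():
+           pytest.skip(tgen.errors)
+
+       # Keep the result in a variable for easy inspection under pdb, and
+       # give the assertion a message naming the failing router.
+       output = tgen.gears["r1"].vtysh_cmd("show ip ospf neighbor")
+       result = "Full" in output
+       assert result, 'Router "r1" has no Full OSPF neighbor'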
+
+Tips:
+
+- Keep results in stack variables, so people inspecting code with ``pdb`` can
+ easily print their values.
+
+Don't do this:
+
+.. code:: py
+
+ assert foobar(router1, router2)
+
+Do this instead:
+
+.. code:: py
+
+ result = foobar(router1, router2)
+ assert result
+
+- Use ``assert`` messages to indicate where the test failed.
+
+Example:
+
+.. code:: py
+
+ for router in router_list:
+ # ...
+ assert condition, 'Router "{}" condition failed'.format(router.name)
+
+Debugging Execution
+^^^^^^^^^^^^^^^^^^^
+
+The most effective ways to inspect topology tests are:
+
+- Run pytest with the ``--pdb`` option. This option will cause a pdb shell to
+  appear when an assertion fails.
+
+Example: ``pytest -s --pdb ospf-topo1/test_ospf_topo1.py``
+
+- Set a breakpoint in the test code with ``pdb``
+
+Example:
+
+.. code:: py
+
+ # Add the pdb import at the beginning of the file
+ import pdb
+ # ...
+
+ # Add a breakpoint where you think the problem is
+ def test_bla():
+ # ...
+ pdb.set_trace()
+ # ...
+
+The `Python Debugger <https://docs.python.org/2.7/library/pdb.html>`__ (pdb)
+shell allows us to run many useful operations like:
+
+- Setting breakpoints on file/function/conditions (e.g. ``break``,
+  ``condition``)
+- Inspecting variables (e.g. ``p`` (print), ``pp`` (pretty print))
+- Running python code
+
+.. tip::
+
+ The TopoGear (equipment abstraction class) implements the ``__str__`` method
+ that allows the user to inspect equipment information.
+
+Example of pdb usage:
+
+.. code:: shell
+
+ > /media/sf_src/topotests/ospf-topo1/test_ospf_topo1.py(121)test_ospf_convergence()
+ -> for rnum in range(1, 5):
+ (Pdb) help
+ Documented commands (type help <topic>):
+ ========================================
+ EOF bt cont enable jump pp run unt
+ a c continue exit l q s until
+ alias cl d h list quit step up
+ args clear debug help n r tbreak w
+ b commands disable ignore next restart u whatis
+ break condition down j p return unalias where
+
+ Miscellaneous help topics:
+ ==========================
+ exec pdb
+
+ Undocumented commands:
+ ======================
+ retval rv
+
+ (Pdb) list
+ 116 title2="Expected output")
+ 117
+ 118 def test_ospf_convergence():
+ 119 "Test OSPF daemon convergence"
+ 120 pdb.set_trace()
+ 121 -> for rnum in range(1, 5):
+ 122 router = 'r{}'.format(rnum)
+ 123
+ 124 # Load expected results from the command
+ 125 reffile = os.path.join(CWD, '{}/ospfroute.txt'.format(router))
+ 126 expected = open(reffile).read()
+ (Pdb) step
+ > /media/sf_src/topotests/ospf-topo1/test_ospf_topo1.py(122)test_ospf_convergence()
+ -> router = 'r{}'.format(rnum)
+ (Pdb) step
+ > /media/sf_src/topotests/ospf-topo1/test_ospf_topo1.py(125)test_ospf_convergence()
+ -> reffile = os.path.join(CWD, '{}/ospfroute.txt'.format(router))
+ (Pdb) print rnum
+ 1
+ (Pdb) print router
+ r1
+ (Pdb) tgen = get_topogen()
+ (Pdb) pp tgen.gears[router]
+ <lib.topogen.TopoRouter object at 0x7f74e06c9850>
+ (Pdb) pp str(tgen.gears[router])
+ 'TopoGear<name="r1",links=["r1-eth0"<->"s1-eth0","r1-eth1"<->"s3-eth0"]> TopoRouter<>'
+ (Pdb) l 125
+ 120 pdb.set_trace()
+ 121 for rnum in range(1, 5):
+ 122 router = 'r{}'.format(rnum)
+ 123
+ 124 # Load expected results from the command
+ 125 -> reffile = os.path.join(CWD, '{}/ospfroute.txt'.format(router))
+ 126 expected = open(reffile).read()
+ 127
+ 128 # Run test function until we get an result. Wait at most 60 seconds.
+ 129 test_func = partial(compare_show_ip_ospf, router, expected)
+ 130 result, diff = topotest.run_and_expect(test_func, '',
+ (Pdb) router1 = tgen.gears[router]
+ (Pdb) router1.vtysh_cmd('show ip ospf route')
+ '============ OSPF network routing table ============\r\nN 10.0.1.0/24 [10] area: 0.0.0.0\r\n directly attached to r1-eth0\r\nN 10.0.2.0/24 [20] area: 0.0.0.0\r\n via 10.0.3.3, r1-eth1\r\nN 10.0.3.0/24 [10] area: 0.0.0.0\r\n directly attached to r1-eth1\r\nN 10.0.10.0/24 [20] area: 0.0.0.0\r\n via 10.0.3.1, r1-eth1\r\nN IA 172.16.0.0/24 [20] area: 0.0.0.0\r\n via 10.0.3.1, r1-eth1\r\nN IA 172.16.1.0/24 [30] area: 0.0.0.0\r\n via 10.0.3.1, r1-eth1\r\n\r\n============ OSPF router routing table =============\r\nR 10.0.255.2 [10] area: 0.0.0.0, ASBR\r\n via 10.0.3.3, r1-eth1\r\nR 10.0.255.3 [10] area: 0.0.0.0, ABR, ASBR\r\n via 10.0.3.1, r1-eth1\r\nR 10.0.255.4 IA [20] area: 0.0.0.0, ASBR\r\n via 10.0.3.1, r1-eth1\r\n\r\n============ OSPF external routing table ===========\r\n\r\n\r\n'
+ (Pdb) tgen.cli()
+ unet>
+
+To enable more debug messages in the Topogen subsystems, additional logging
+can be displayed by modifying the test configuration file ``pytest.ini``:
+
+.. code:: ini
+
+ [topogen]
+ # Change the default verbosity line from 'info'...
+ #verbosity = info
+ # ...to 'debug'
+ verbosity = debug
+
+Instructions for using, writing, or debugging topologies can be found in
+:ref:`topotests-guidelines`. To learn/remember common code snippets, see
+:ref:`topotests-snippets`.
+
+Before creating a new topology, make sure that there isn't one already that
+does what you need. If nothing is similar, then you may create a new topology,
+preferably using the newest template
+(:file:`tests/topotests/example-test/test_template.py`).
+
+.. include:: topotests-markers.rst
+
+.. include:: topotests-snippets.rst
+
+License
+-------
+
+All the configs and scripts are licensed under an ISC-style license. See the
+Python scripts for details.
diff --git a/doc/developer/tracing.rst b/doc/developer/tracing.rst
new file mode 100644
index 0000000..76f6004
--- /dev/null
+++ b/doc/developer/tracing.rst
@@ -0,0 +1,411 @@
+.. _tracing:
+
+Tracing
+=======
+
+FRR has a small but growing number of static tracepoints available for use
+with various tracing systems. These tracepoints can assist with debugging and
+performance analysis, and help in understanding program flow. They can also be
+used for monitoring.
+
+Developers are encouraged to write new static tracepoints where sensible. They
+are not compiled in by default, and even when they are, they have no overhead
+unless enabled by a tracer, so it is okay to be liberal with them.
+
+
+Supported tracers
+-----------------
+
+Presently two types of tracepoints are supported:
+
+- `LTTng tracepoints <https://lttng.org/>`_
+- `USDT probes <http://dtrace.org/guide/chp-usdt.html>`_
+
+LTTng is a tracing framework for Linux only. It offers extremely low overhead
+and very rich tracing capabilities. FRR supports LTTng-UST, which is the
+userspace implementation. LTTng tracepoints are very rich in detail. No kernel
+modules are needed. Besides only being available for Linux, the primary
+downside of LTTng is the need to link to ``lttng-ust``.
+
+USDT probes originate from Solaris, where they were invented for use with
+dtrace. They are a kernel feature. At least Linux and FreeBSD support them. No
+library is needed; support is compiled in via a system header
+(``<sys/sdt.h>``). USDT probes are much slower than LTTng tracepoints and offer
+less flexibility in what information can be gleaned from them.
+
+LTTng is capable of tracing USDT probes but has limited support for them.
+SystemTap and dtrace both work only with USDT probes.
+
+
+Usage
+-----
+
+To compile with tracepoints, use one of the following configure flags:
+
+.. program:: configure.ac
+
+.. option:: --enable-lttng=yes
+
+ Generate LTTng tracepoints
+
+.. option:: --enable-usdt=yes
+
+ Generate USDT probes
+
+To trace with LTTng, compile with either one (prefer :option:`--enable-lttng`),
+run the target in non-forking mode (no ``-d``) and use LTTng as usual (refer to
+the LTTng user manual). When using USDT probes with LTTng, follow the example
+in `this article
+<https://lttng.org/blog/2019/10/15/new-dynamic-user-space-tracing-in-lttng/>`_.
+To trace with dtrace or SystemTap, compile with ``--enable-usdt=yes`` and
+use your tracer as usual.
+
+To see available USDT probes::
+
+ readelf -n /usr/lib/frr/bgpd
+
+Example::
+
+ root@host ~> readelf -n /usr/lib/frr/bgpd
+
+ Displaying notes found in: .note.ABI-tag
+ Owner Data size Description
+ GNU 0x00000010 NT_GNU_ABI_TAG (ABI version tag)
+ OS: Linux, ABI: 3.2.0
+
+ Displaying notes found in: .note.gnu.build-id
+ Owner Data size Description
+ GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
+ Build ID: 4f42933a69dcb42a519bc459b2105177c8adf55d
+
+ Displaying notes found in: .note.stapsdt
+ Owner Data size Description
+ stapsdt 0x00000045 NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: packet_read
+ Location: 0x000000000045ee48, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-96(%rbp) 8@-104(%rbp)
+ stapsdt 0x00000047 NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: open_process
+ Location: 0x000000000047c43b, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-224(%rbp) 2@-226(%rbp)
+ stapsdt 0x00000049 NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: update_process
+ Location: 0x000000000047c4bf, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-208(%rbp) 2@-210(%rbp)
+ stapsdt 0x0000004f NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: notification_process
+ Location: 0x000000000047c557, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-192(%rbp) 2@-194(%rbp)
+ stapsdt 0x0000004c NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: keepalive_process
+ Location: 0x000000000047c5db, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-176(%rbp) 2@-178(%rbp)
+ stapsdt 0x0000004a NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: refresh_process
+ Location: 0x000000000047c673, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-160(%rbp) 2@-162(%rbp)
+ stapsdt 0x0000004d NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: capability_process
+ Location: 0x000000000047c6f7, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-144(%rbp) 2@-146(%rbp)
+ stapsdt 0x0000006f NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: output_filter
+ Location: 0x000000000048e33a, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-144(%rbp) 8@-152(%rbp) 4@-156(%rbp) 4@-160(%rbp) 8@-168(%rbp)
+ stapsdt 0x0000007d NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: process_update
+ Location: 0x0000000000491f10, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-800(%rbp) 8@-808(%rbp) 4@-812(%rbp) 4@-816(%rbp) 4@-820(%rbp) 8@-832(%rbp)
+ stapsdt 0x0000006e NT_STAPSDT (SystemTap probe descriptors)
+ Provider: frr_bgp
+ Name: input_filter
+ Location: 0x00000000004940ed, Base: 0x00000000005a09d2, Semaphore: 0x0000000000000000
+ Arguments: 8@-144(%rbp) 8@-152(%rbp) 4@-156(%rbp) 4@-160(%rbp) 8@-168(%rbp)
+
+
+To see available LTTng probes, run the target, create a session and then::
+
+ lttng list --userspace | grep frr
+
+Example::
+
+ root@host ~> lttng list --userspace | grep frr
+ PID: 11157 - Name: /usr/lib/frr/bgpd
+ frr_libfrr:route_node_get (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:list_sort (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:list_delete_node (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:list_remove (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:list_add (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:memfree (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:memalloc (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:frr_pthread_stop (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:frr_pthread_run (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
+ frr_libfrr:thread_call (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:event_cancel_async (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:event_cancel (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:schedule_write (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:schedule_read (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:schedule_event (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:schedule_timer (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:hash_release (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:hash_insert (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_libfrr:hash_get (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:output_filter (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:input_filter (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:process_update (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:packet_read (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:refresh_process (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:capability_process (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:notification_process (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:update_process (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:keepalive_process (loglevel: TRACE_INFO (6)) (type: tracepoint)
+ frr_bgp:open_process (loglevel: TRACE_INFO (6)) (type: tracepoint)
+
+When using LTTng, you can also get zlogs as trace events by enabling
+the ``lttng_ust_tracelog:*`` event class.
+
+To see available SystemTap USDT probes, run::
+
+ stap -L 'process("/usr/lib/frr/bgpd").mark("*")'
+
+Example::
+
+ root@host ~> stap -L 'process("/usr/lib/frr/bgpd").mark("*")'
+ process("/usr/lib/frr/bgpd").mark("capability_process") $arg1:long $arg2:long
+ process("/usr/lib/frr/bgpd").mark("input_filter") $arg1:long $arg2:long $arg3:long $arg4:long $arg5:long
+ process("/usr/lib/frr/bgpd").mark("keepalive_process") $arg1:long $arg2:long
+ process("/usr/lib/frr/bgpd").mark("notification_process") $arg1:long $arg2:long
+ process("/usr/lib/frr/bgpd").mark("open_process") $arg1:long $arg2:long
+ process("/usr/lib/frr/bgpd").mark("output_filter") $arg1:long $arg2:long $arg3:long $arg4:long $arg5:long
+ process("/usr/lib/frr/bgpd").mark("packet_read") $arg1:long $arg2:long
+ process("/usr/lib/frr/bgpd").mark("process_update") $arg1:long $arg2:long $arg3:long $arg4:long $arg5:long $arg6:long
+ process("/usr/lib/frr/bgpd").mark("refresh_process") $arg1:long $arg2:long
+ process("/usr/lib/frr/bgpd").mark("update_process") $arg1:long $arg2:long
+
+When using SystemTap, you can also easily attach to an existing function::
+
+ stap -L 'process("/usr/lib/frr/bgpd").function("bgp_update_receive")'
+
+Example::
+
+ root@host ~> stap -L 'process("/usr/lib/frr/bgpd").function("bgp_update_receive")'
+ process("/usr/lib/frr/bgpd").function("bgp_update_receive@bgpd/bgp_packet.c:1531") $peer:struct peer* $size:bgp_size_t $attr:struct attr $restart:_Bool $nlris:struct bgp_nlri[] $__func__:char const[] const
+
+Complete ``bgp.stp`` example using SystemTap to show BGP peer, prefix and aspath
+using ``process_update`` USDT::
+
+ global pkt_size;
+ probe begin
+ {
+ ansi_clear_screen();
+ println("Starting...");
+ }
+ probe process("/usr/lib/frr/bgpd").function("bgp_update_receive")
+ {
+ pkt_size <<< $size;
+ }
+ probe process("/usr/lib/frr/bgpd").mark("process_update")
+ {
+ aspath = @cast($arg6, "attr")->aspath;
+ printf("> %s via %s (%s)\n",
+ user_string($arg2),
+ user_string(@cast($arg1, "peer")->host),
+ user_string(@cast(aspath, "aspath")->str));
+ }
+ probe end
+ {
+ if (@count(pkt_size))
+ print(@hist_linear(pkt_size, 0, 20, 2));
+ }
+
+Output::
+
+ Starting...
+ > 192.168.0.0/24 via 192.168.0.1 (65534)
+ > 192.168.100.1/32 via 192.168.0.1 (65534)
+ > 172.16.16.1/32 via 192.168.0.1 (65534 65030)
+ ^Cvalue |-------------------------------------------------- count
+ 0 | 0
+ 2 | 0
+ 4 |@ 1
+ 6 | 0
+ 8 | 0
+ ~
+ 18 | 0
+ 20 | 0
+ >20 |@@@@@ 5
+
+
+Concepts
+--------
+
+Tracepoints are statically defined points in code where a developer has
+determined that outside observers might gain something from knowing what is
+going on at that point. It's like logging but with the ability to dump large
+amounts of internal data with much higher performance. LTTng has a good summary
+`here <https://lttng.org/docs/#doc-what-is-tracing>`_.
+
+Each tracepoint has a "provider" and name. The provider is basically a
+namespace; for example, ``bgpd`` uses the provider name ``frr_bgp``. The name
+is arbitrary, but because providers share a global namespace on the user's
+system, all providers from FRR should be prefixed by ``frr_``. The tracepoint
+name is just the name of the event. Events are globally named by their provider
+and name. For example, the event when BGP reads a packet from a peer is
+``frr_bgp:packet_read``.
+
+To do tracing, the tracing tool of choice is told which events to listen to.
+For example, to listen to all events from FRR's BGP implementation, you would
+enable the events ``frr_bgp:*``. In the same tracing session you could also
+choose to record all memory allocations by enabling the ``malloc`` tracepoints
+in ``libc`` as well as all kernel skb operations using the various in-kernel
+tracepoints. This allows you to build as complete a view as desired of what the
+system is doing during the tracing window (subject to what tracepoints are
+available).
+
+Of particular use are the tracepoints for FRR's internal event scheduler;
+tracing these allows you to see all events executed by all event loops for the
+target(s) in question. Here are a couple of events selected from a trace of
+BGP during startup::
+
+ ...
+
+ [18:41:35.750131763] (+0.000048901) host frr_libfrr:thread_call: { cpu_id =
+ 1 }, { threadmaster_name = "default", function_name = "zclient_connect",
+ scheduled_from = "lib/zclient.c", scheduled_on_line = 3877, thread_addr =
+ 0x0, file_descriptor = 0, event_value = 0, argument_ptr = 0xA37F70, timer =
+ 0 }
+
+ [18:41:35.750175124] (+0.000020001) host frr_libfrr:thread_call: { cpu_id =
+ 1 }, { threadmaster_name = "default", function_name = "frr_config_read_in",
+ scheduled_from = "lib/libfrr.c", scheduled_on_line = 934, thread_addr = 0x0,
+ file_descriptor = 0, event_value = 0, argument_ptr = 0x0, timer = 0 }
+
+ [18:41:35.753341264] (+0.000010532) host frr_libfrr:thread_call: { cpu_id =
+ 1 }, { threadmaster_name = "default", function_name = "bgp_event",
+ scheduled_from = "bgpd/bgpd.c", scheduled_on_line = 142, thread_addr = 0x0,
+ file_descriptor = 2, event_value = 2, argument_ptr = 0xE4D780, timer = 2 }
+
+ [18:41:35.753404186] (+0.000004910) host frr_libfrr:thread_call: { cpu_id =
+ 1 }, { threadmaster_name = "default", function_name = "zclient_read",
+ scheduled_from = "lib/zclient.c", scheduled_on_line = 3891, thread_addr =
+ 0x0, file_descriptor = 40, event_value = 40, argument_ptr = 0xA37F70, timer
+ = 40 }
+
+ ...
+
+
+This is very useful for getting a time-ordered look into what the process is
+doing.
+
+
+Adding Tracepoints
+------------------
+
+Adding new tracepoints is a two step process:
+
+1. Define the tracepoint
+2. Use the tracepoint
+
+Tracepoint definitions state the "provider" and name of the tracepoint, along
+with any values it will produce, and how to format them. This is done with
+macros provided by LTTng. USDT probes do not use definitions and are inserted
+at the trace site with a single macro. However, to maintain support for both
+platforms, you must define an LTTng tracepoint when adding a new one.
+``frrtrace()`` will expand to the appropriate ``DTRACE_PROBEn`` macro when USDT
+is in use.
+
+If you are adding new tracepoints to a daemon that has no tracepoints, that
+daemon's ``subdir.am`` must be updated to conditionally link ``lttng-ust``.
+Look at ``bgpd/subdir.am`` for an example of how to do this; grep for
+``UST_LIBS``. Create new files named ``<daemon>_trace.[ch]``. Use
+``bgpd/bgp_trace.[h]`` as boilerplate. If you are adding tracepoints to a
+daemon that already has them, look for the ``<daemon>_trace.h`` file;
+tracepoints are written here.
+
+Refer to the `LTTng developer docs
+<https://lttng.org/docs/#doc-c-application>`_ for details on how to define
+tracepoints.
+
+To use them, simply add a call to ``frrtrace()`` at the point you'd like the
+event to be emitted, like so:
+
+.. code-block:: c
+
+ ...
+
+ switch (type) {
+ case BGP_MSG_OPEN:
+ frrtrace(2, frr_bgp, open_process, peer, size); /* tracepoint */
+ atomic_fetch_add_explicit(&peer->open_in, 1,
+ memory_order_relaxed);
+ mprc = bgp_open_receive(peer, size);
+
+ ...
+
+After recompiling, this tracepoint will be available either as a USDT probe or
+an LTTng tracepoint, depending on your compilation choice.
+
+
+trace.h
+^^^^^^^
+
+Because FRR supports multiple types of tracepoints, the code for creating them
+abstracts away the underlying system being used. This abstraction code is in
+``lib/trace.h``. There are two function-like macros that are used for working
+with tracepoints.
+
+- ``frrtrace()`` defines tracepoints
+- ``frrtrace_enabled()`` checks whether a tracepoint is enabled
+
+There is also ``frrtracelog()``, which is used in zlog core code to make zlog
+messages available as trace events to LTTng. This should not be used elsewhere.
+
+There is additional documentation in the header. The key thing to note is that
+you should never include ``trace.h`` in source where you plan to put
+tracepoints; include the tracepoint definition header instead (e.g.
+:file:`bgp_trace.h`).
+
+
+Limitations
+-----------
+
+Tracers do not like ``fork()`` or ``dlopen()``. LTTng has some workarounds for
+this involving interceptor libraries using ``LD_PRELOAD``.
+
+If you're running FRR in a typical daemonizing way (``-d`` to the daemons),
+you'll need to run the daemons like so:
+
+.. code-block:: shell
+
+ LD_PRELOAD=liblttng-ust-fork.so <daemon>
+
+
+If you're using systemd, you can accomplish this for all daemons by
+modifying ``frr.service`` like so:
+
+.. code-block:: diff
+
+ --- a/frr.service
+ +++ b/frr.service
+ @@ -7,6 +7,7 @@ Before=network.target
+ OnFailure=heartbeat-failed@%n
+
+ [Service]
+ +Environment="LD_PRELOAD=liblttng-ust-fork.so"
+ Nice=-5
+ Type=forking
+ NotifyAccess=all
+
+
+USDT tracepoints are relatively high overhead and probably shouldn't be used
+for "flight recorder" functionality, i.e. enabling and passively recording all
+events for monitoring purposes. It's generally okay to use LTTng like this,
+though.
diff --git a/doc/developer/vtysh.rst b/doc/developer/vtysh.rst
new file mode 100644
index 0000000..323ea57
--- /dev/null
+++ b/doc/developer/vtysh.rst
@@ -0,0 +1,212 @@
+.. _vtysh:
+
+*****
+VTYSH
+*****
+
+.. seealso:: :ref:`command-line-interface`
+
+.. _vtysh-architecture:
+
+Architecture
+============
+
+VTYSH is a shell for FRR daemons. It amalgamates all the CLI commands defined
+in each of the daemons and presents them to the user in a single shell, which
+saves the user from having to telnet to each of the daemons and use their
+individual shells. The amalgamation is achieved by
+:ref:`extracting <vtysh-command-extraction>` commands from daemons and
+injecting them into VTYSH at build time.
+
+At runtime, VTYSH maintains an instance of a CLI mode tree just like each
+daemon. However, the mode tree in VTYSH contains (almost) all commands from
+every daemon in the same tree, whereas individual daemons have trees that only
+contain commands relevant to themselves. VTYSH also uses the library CLI
+facilities to maintain the user's current position in the tree (the current
+node). Note that this position must be synchronized with all daemons; if a
+daemon receives a command that causes it to change its current node, VTYSH must
+also change its node. Since the extraction script does not understand the
+handler code of commands, but only their definitions, this and other behaviors
+must be manually programmed into VTYSH for every case where the internal state
+of VTYSH must change in response to a command. Details on how this is done are
+discussed in the :ref:`vtysh-special-defuns` section.
+
+VTYSH also handles writing and applying the integrated configuration file,
+:file:`/etc/frr/frr.conf`. Since it has knowledge of the entire command space
+of FRR, it can intelligently distribute configuration commands only to the
+daemons that understand them. Similarly, when writing the configuration file it
+takes care of combining multiple instances of configuration blocks and
+simplifying the output. This is discussed in :ref:`vtysh-configuration`.
+
+.. _vtysh-command-extraction:
+
+Command Extraction
+------------------
+
+To build ``vtysh``, the :file:`python/xref2vtysh.py` script scans through the
+:file:`frr.xref` file created earlier in the build process. This file contains
+a list of all ``DEFUN`` and ``install_element`` sites in the code, generated
+directly from the binaries (and therefore matching exactly what is really
+available.)
+
+This list is collated and transformed into ``DEFSH`` (and ``install_element``)
+statements, output to ``vtysh_cmd.c``. Each ``DEFSH``
+contains the name of the command plus ``_vtysh``, as well as a flag that
+indicates which daemons the command was found in. When the command is executed
+in VTYSH, this flag is inspected to determine which daemons to send the command
+to. This way, commands are only sent to the daemons that know about them,
+avoiding spurious errors from daemons that don't have the command defined.
+
+The extraction script contains lots of hardcoded knowledge about what sources
+to look at and what flags to use for certain commands.
+
+.. note::
+
+ The ``vtysh_scan`` Makefile variable and ``#ifndef VTYSH_EXTRACT_PL``
+ checks in source files are no longer used. Remove them when rebasing older
+ changes.
+
+.. _vtysh-special-defuns:
+
+Special DEFUNs
+--------------
+
+In addition to the vanilla ``DEFUN`` macro for defining CLI commands, there are
+several VTYSH-specific ``DEFUN`` variants that each serve different purposes.
+
+``DEFSH``
+ Used almost exclusively by generated VTYSH code. This macro defines a
+ ``cmd_element`` with no handler function; the command, when executed, is
+ simply forwarded to the daemons indicated in the daemon flag.
+
+``DEFUN_NOSH``
+ Used by daemons. Has the same expansion as a ``DEFUN``, but ``xref2vtysh.py``
+ will skip these definitions when extracting commands. This is typically used
+ when VTYSH must take some special action upon receiving the command, and the
+ programmer therefore needs to write VTYSH's copy of the command manually
+ instead of using the generated version.
+
+``DEFUNSH``
+ The same as ``DEFUN``, but with an argument that allows specifying the
+ ``->daemon`` field of the generated ``cmd_element``. This is used by VTYSH
+ to determine which daemons to send the command to.
+
+``DEFUNSH_ATTR``
+ A version of ``DEFUNSH`` that allows setting the ``->attr`` field of the
+ generated ``cmd_element``. Not used in practice.
+
+.. _vtysh-configuration:
+
+Configuration Management
+------------------------
+
+When integrated configuration is used, VTYSH manages writing, reading and
+applying the FRR configuration file. VTYSH can be made to read and apply an
+integrated configuration to all running daemons by launching it with ``-f
+<file>``. It sends the appropriate configuration lines to the relevant daemons
+in the same way that commands entered by the user on VTYSH's shell prompt are
+processed.
+
+Configuration writing is more complicated. VTYSH makes a best-effort attempt to
+combine and simplify the configuration as much as possible. A working example
+best explains this behavior.
+
+Example
+^^^^^^^
+
+Suppose we have just *staticd* and *zebra* running on the system, and use VTYSH
+to apply the following configuration snippet:
+
+.. code-block:: frr
+
+ !
+ vrf blue
+ ip protocol static route-map ExampleRoutemap
+ ip route 192.168.0.0/24 192.168.0.1
+ exit-vrf
+ !
+
+Note that *staticd* defines static route commands and *zebra* defines ``ip
+protocol`` commands. Therefore if we ask only *zebra* for its configuration, we
+get the following::
+
+ (config)# do sh running-config zebra
+ Building configuration...
+
+ ...
+ !
+ vrf blue
+ ip protocol static route-map ExampleRoutemap
+ exit-vrf
+ !
+ ...
+
+Note that the static route doesn't show up there. Similarly, if we ask
+*staticd* for its configuration, we get::
+
+ (config)# do sh running-config staticd
+
+ ...
+ !
+ vrf blue
+ ip route 192.168.0.0/24 192.168.0.1
+ exit-vrf
+ !
+ ...
+
+But when we display the configuration with VTYSH, we see::
+
+ ubuntu-bionic(config)# do sh running-config
+
+ ...
+ !
+ vrf blue
+ ip protocol static route-map ExampleRoutemap
+ ip route 192.168.0.0/24 192.168.0.1
+ exit-vrf
+ !
+ ...
+
+This is because VTYSH asks each daemon for its currently running configuration,
+and combines equivalent blocks together. In the above example, it combined the
+``vrf blue`` blocks from both *zebra* and *staticd* together into one. This is
+done in :file:`vtysh_config.c`.
+
+Protocol
+========
+
+VTYSH communicates with FRR daemons by way of Unix domain sockets. Each daemon
+creates its own socket, typically in :file:`/var/run/frr/<daemon>.vty`. The
+protocol is very simple. In the VTYSH-to-daemon direction, messages are simply
+NUL-terminated strings whose contents are CLI commands. Here is a typical
+message from VTYSH to a daemon:
+
+::
+
+ Request
+
+ 00000000: 646f 2077 7269 7465 2074 6572 6d69 6e61 do write termina
+ 00000010: 6c0a 00 l..
+
+
+The response format has some more data in it. First is a NUL-terminated string
+containing the plaintext response, which is just the output of the command that
+was sent in the request. This is displayed to the user. The plaintext response
+is followed by 3 null marker bytes, followed by a 1-byte status code that
+indicates whether the command was successful or not.
+
+::
+
+ Response
+
+ 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Plaintext Response |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Marker (0x00) | Status Code |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+
+The first ``0x00`` byte in the marker also serves to terminate the plaintext
+response.
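+
+For illustration, here is a minimal Python sketch of a client implementing
+this framing. It is not part of FRR and the socket path is just an example;
+it only demonstrates the request/response format described above:
+
+.. code:: py
+
+   import socket
+
+   def vtysh_command(path, command):
+       """Send one CLI command to a daemon socket, return (output, status)."""
+       sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+       sock.connect(path)  # e.g. "/var/run/frr/zebra.vty"
+       # Requests are NUL-terminated CLI command strings.
+       sock.sendall(command.encode("ascii") + b"\x00")
+
+       buf = b""
+       # The reply ends with three NUL marker bytes plus a 1-byte status code.
+       while len(buf) < 4 or buf[-4:-1] != b"\x00\x00\x00":
+           data = sock.recv(4096)
+           if not data:
+               break
+           buf += data
+       sock.close()
+
+       if len(buf) < 4:
+           raise ConnectionError("truncated reply from daemon")
+       # Strip the 4 trailer bytes; the rest is the plaintext response.
+       return buf[:-4].decode(errors="replace"), buf[-1]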
diff --git a/doc/developer/workflow.rst b/doc/developer/workflow.rst
new file mode 100644
index 0000000..68834ed
--- /dev/null
+++ b/doc/developer/workflow.rst
@@ -0,0 +1,1740 @@
+.. _process-and-workflow:
+
+*******************
+Process & Workflow
+*******************
+
+.. highlight:: none
+
+FRR is a large project developed by many different groups. This section
+documents standards for code style & quality, commit messages, pull requests
+and best practices that all contributors are asked to follow.
+
+This chapter is "descriptive/post-factual" in that it documents practices that
+are in use; it is not "definitive/pre-factual" in prescribing practices. This
+means that when a procedure changes, it is agreed upon, then put into practice,
+and then documented here. If this document doesn't match reality, it's the
+document that needs to be updated, not reality.
+
+Mailing Lists
+=============
+
+The FRR development group maintains multiple mailing lists for use by the
+community. Italicized lists are private.
+
++----------------------------------+--------------------------------+
+| Topic | List |
++==================================+================================+
+| Development | dev@lists.frrouting.org |
++----------------------------------+--------------------------------+
+| Users & Operators | frog@lists.frrouting.org |
++----------------------------------+--------------------------------+
+| Announcements | announce@lists.frrouting.org |
++----------------------------------+--------------------------------+
+| *Security* | security@lists.frrouting.org |
++----------------------------------+--------------------------------+
+| *Technical Steering Committee* | tsc@lists.frrouting.org |
++----------------------------------+--------------------------------+
+
+The Development list is used to discuss and document general issues related to
+project development and governance. The public
+`Slack instance <https://frrouting.slack.com>`_ and weekly technical meetings
+provide a higher bandwidth channel for discussions. The results of such
+discussions must be reflected in updates, as appropriate, to code (i.e.,
+merges), `GitHub issues`_, and for governance or process changes, updates to
+the Development list and either this file or information posted at
+https://frrouting.org/.
+
+Development & Release Cycle
+===========================
+
+Development
+-----------
+
+.. figure:: ../figures/git_branches.png
+ :align: center
+ :scale: 55%
+ :alt: Merging Git branches into a central trunk
+
+ Rough outline of FRR development workflow
+
+The master Git for FRR resides on `GitHub`_.
+
+There is one main branch for development, ``master``. For each major release
+(2.0, 3.0, etc.) a new release branch is created based on master. Significant
+bugfixes should be backported to upcoming and existing release branches no
+more than 1 year old. As a general rule, new features are not backported to
+release branches.
+
+Subsequent point releases based on a major branch are handled with git tags.
+
+Releases
+--------
+FRR employs a ``<MAJOR>.<MINOR>.<BUGFIX>`` versioning scheme.
+
+``MAJOR``
+ Significant new features or multiple minor features. This should mostly
+ cover any kind of disruptive change that is visible or "risky" to operators.
+ New features or protocols do not necessarily trigger this. (This was changed
+ for FRR 7.x after feedback from users that the pace of major version number
+ increments was too high.)
+
+``MINOR``
+ General incremental development releases, excluding "major" changes
+ mentioned above. Not necessarily fully backwards compatible, as smaller
+ (but still visible) changes or deprecated feature removals may still happen.
+ However, there shouldn't be any huge "surprises" between minor releases.
+
+``BUGFIX``
+ Fixes for actual bugs and/or security issues. Fully compatible.
+
+Releases are scheduled in a 4-month cycle on the first Tuesday of each
+March/July/November. Walking backwards from this date:
+
+ - 6 weeks earlier, ``master`` is frozen for new features, and feature PRs
+ are considered lowest priority (regardless of when they were opened.)
+
+ - 4 weeks earlier, the stable branch separates from master (named
+   ``dev/MAJOR.MINOR`` at this point) and is tagged as ``base_X.Y``.
+ Master is unfrozen and new features may again proceed.
+
+ Part of unfreezing master is editing the ``AC_INIT`` statement in
+ :file:`configure.ac` to reflect the new development version that master
+ now refers to. This is accompanied by a ``frr-X.Y-dev`` tag on master,
+ which should always be on the first commit on master *after* the stable
+ branch was forked (even if that is not the edit to ``AC_INIT``; it's more
+ important to have it on the very first commit on master after the fork.)
+
+ (The :file:`configure.ac` edit and tag push are considered git housekeeping
+ and are pushed directly to ``master``, not through a PR.)
+
+ Below is the snippet of the commands to use in this step.
+
+ .. code-block:: console
+
+ % git remote --verbose
+ upstream git@github.com:frrouting/frr (fetch)
+ upstream git@github.com:frrouting/frr (push)
+
+ % git checkout master
+ % git pull upstream master
+ % git checkout -b dev/8.2
+ % git tag base_8.2
+ % git push upstream base_8.2
+ % git push upstream dev/8.2
+ % git checkout master
+ % sed -i 's/8.2-dev/8.3-dev/' configure.ac
+ % git add configure.ac
+ % git commit -s -m "build: FRR 8.3 development version"
+ % git tag -a frr-8.3-dev -m "frr-8.3-dev"
+ % git push upstream master
+ % git push upstream frr-8.3-dev
+
+   In this step, package versions also have to be updated to reflect the new
+   development version. These version updates go through the standard
+   development process (pull requests) based on the ``master`` branch.
+
+   Change only the version number, with no other modifications. This will
+   produce packages with a version number higher than any previous version.
+   Once the release is done, whatever updates were made to changelog files on
+   the release branch need to be cherry-picked to the ``master`` branch.
+
+   Also update the reference table of essential dates (below) with the dates
+   of the next freeze, ``dev/X.Y``, RC, and release phases. This should go in
+   the ``master`` branch.
+
+ - 2 weeks earlier, a ``frr-X.Y-rc`` release candidate is tagged.
+
+ .. code-block:: console
+
+ % git remote --verbose
+ upstream git@github.com:frrouting/frr (fetch)
+ upstream git@github.com:frrouting/frr (push)
+
+ % git checkout dev/8.2
+ % git tag frr-8.2-rc
+ % git push upstream frr-8.2-rc
+
+ - on release date, the branch is renamed to ``stable/MAJOR.MINOR``.
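+
+   A sketch of the rename, assuming the same ``upstream`` remote as in the
+   previous steps (a remote branch is "renamed" by pushing the new name and
+   deleting the old one):
+
+   .. code-block:: console
+
+      % git checkout dev/8.2
+      % git branch -m dev/8.2 stable/8.2
+      % git push upstream stable/8.2
+      % git push upstream --delete dev/8.2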
+
+The 2-week window between each of these events should be used to run any and
+all testing possible for the release in progress. However, the current
+intention is to stick to the schedule even if known issues remain. Ideally
+this occurs only after all avenues of fixing issues are exhausted; to achieve
+this, a list of issues that is as exhaustive as possible needs to be available
+as early as possible, i.e. in the first 2-week window.
+
+For reference, the expected release schedule according to the above is:
+
++---------+------------+------------+------------+
+| Release | 2023-11-07 | 2024-03-05 | 2024-07-02 |
++---------+------------+------------+------------+
+| RC | 2023-10-24 | 2024-02-20 | 2024-06-18 |
++---------+------------+------------+------------+
+| dev/X.Y | 2023-10-10 | 2024-02-06 | 2024-06-04 |
++---------+------------+------------+------------+
+| freeze | 2023-09-26 | 2024-01-23 | 2024-05-21 |
++---------+------------+------------+------------+
+
+Here is a hint on how to compute the dates easily:
+
+ .. code-block:: console
+
+ ~$ # Release date is 2023-11-07 (First Tuesday each March/July/November)
+ ~$ date +%F --date='2023-11-07 -42 days' # Next freeze date
+ 2023-09-26
+ ~$ date +%F --date='2023-11-07 -28 days' # Next dev/X.Y date
+ 2023-10-10
+ ~$ date +%F --date='2023-11-07 -14 days' # Next RC date
+ 2023-10-24
+
+Each release is managed by one or more volunteer release managers from the FRR
+community. These release managers are expected to handle the branch for a
+period of one year. To spread this workload, the role should rotate among
+community members for subsequent releases. The release managers are currently
+assumed/expected to run a release management meeting during the weeks listed
+above. Barring other constraints, this would be scheduled before the regular
+weekly FRR community call such that important items can be carried over into
+that call.
+
+Bugfixes are applied to the two most recent releases. Each backported bugfix
+is expected to include some reasoning for its inclusion and must receive
+approval from the release managers for that release before being accepted
+into the release branch. This does not necessarily preclude backporting bug
+fixes to releases older than the two most recent ones.
+
+Security fixes are backported to all releases up to at least one year old.
+Security fixes may also be backported to older releases depending on severity.
+
+For detailed instructions on how to produce an FRR release, refer to
+:ref:`frr-release-procedure`.
+
+
+Long term support branches (LTS)
+--------------------------------
+
+This kind of branch is not yet officially supported and needs experimentation
+before becoming effective.
+
+The release definitions above preclude long term support of older releases;
+for instance, bug and security fixes stop being applied once a stable branch
+becomes too old.
+
+Because FRR users may need bug and security fixes backported after a stable
+branch has aged out of regular support, there is a need to support such a
+branch on a long term basis. A stable branch receiving this kind of extended
+support is a long term support branch.
+
+Having an LTS branch requires extra work and requires one person to be in
+charge of that maintenance branch for a certain amount of time. That period
+defaults to 4 months, the time between two releases, and can be extended; at
+each such interval the decision whether to continue the LTS branch can be
+revisited. In all cases, the time period will be well-defined and published.
+Also, a self-nomination from a person proposing to handle the LTS branch is
+required. The work can be shared by multiple people, but in all cases there
+must be at least one person in charge of the maintenance branch. The person
+or people responsible for a maintenance branch must be FRR maintainers. Note
+that they may choose to abandon support for the maintenance branch at any
+time. If no one takes over the responsibility for the LTS branch, its support
+will be discontinued.
+
+The duties of the LTS branch maintainer are the following:
+
+- organise meetings on a (bi-)weekly or monthly basis to handle issues and
+  pull requests related to that branch. When time permits, this may be done
+  during the regularly scheduled FRR meeting.
+
+- ensure the stability of the branch by using, and where necessary adapting,
+  the FRR CI tools (indeed, this maintenance may require creating maintenance
+  branches for topotests or for CI).
+
+It will not be possible to backport feature requests to LTS branches. Using
+LTS branches for that purpose is a tempting but bad idea: introducing
+features there would break the paradigm that all more recent releases must
+also include the feature, and would require the LTS maintainer to ensure that
+they do. Moreover, introducing feature requests may compromise the stability
+of the branch. LTS branches exist first and foremost to provide long term
+support for stability.
+
+Development Branches
+--------------------
+
+Occasionally the community will desire the ability to work together
+on a feature that is considered useful to FRR. In this case the
+parties may ask the Maintainers for the creation of a development
+branch in the main FRR repository. Requirements for this to happen
+are:
+
+- A one-paragraph description of the feature being implemented, to
+  facilitate discussion about the feature. This might include pointers
+  to relevant RFCs or presentations that explain what is planned. This
+  is intended to set a somewhat low bar for organization.
+- A branch maintainer must be named. This person is responsible for
+ keeping the branch up to date, and general communication about the
+ project with the other FRR Maintainers. Additionally this person
+ must already be a FRR Maintainer.
+- Commits to this branch must follow the normal PR and commit process
+ as outlined in other areas of this document. The goal of this is
+ to prevent the current state where large features are submitted
+ and are so large they are difficult to review.
+
+After the work on a development branch is complete, a final review can be
+made and the branch merged into master. If a development branch becomes
+unmaintained or is not actively worked on for three months, the Maintainers
+can decide to remove the branch.
+
+Debian Branches
+---------------
+
+The Debian project contains "official" packages for FRR. While FRR
+Maintainers may participate in creating these, it is entirely the Debian
+project's decision what to ship and how to work on this.
+
+As a courtesy and for FRR's benefit, this packaging work is currently visible
+in git branches named ``debian/*`` on the main FRR git repository. These
+branches are for the exclusive use by people involved in Debian packaging work
+for FRR. Direct commit access may be handed out and FRR git rules (review,
+testing, etc.) do not apply. Do not push to these branches without talking
+to the people noted under ``Maintainer:`` and ``Uploaders:`` in
+``debian/control`` on the target branch -- even if you are a FRR Maintainer.
+
+Changelog
+---------
+The changelog will be the base for the release notes. A changelog entry for
+your changes is usually not required and will be added based on your commit
+messages by the maintainers. However, you are free to include an update to the
+changelog with some better description.
+
+Accords: non-code community consensus
+=====================================
+
+The FRR repository has a place for "accords" - these are items of
+consideration for FRR that influence how we work as a community, but either
+haven't resulted in code *yet*, or may *never* result in code being written.
+They are placed in the ``doc/accords/`` directory.
+
+The general idea is to simply pass small blurbs of text through our normal PR
+procedures, giving them the same visibility, comment and review mechanisms as
+code PRs - and changing them later is another PR. Please refer to the README
+file in ``doc/accords/`` for further details. The file names of items in that
+directory are hopefully helpful in determining whether some of them might be
+relevant to your work.
+
+Submitting Patches and Enhancements
+===================================
+
+FRR accepts patches using GitHub pull requests.
+
+The base branch for new contributions and non-critical bug fixes should be
+``master``. Please ensure your pull request is based on this branch when you
+submit it.
+
+Code submitted by pull request will be automatically tested by one or more CI
+systems. Once the automated tests succeed, other developers will review your
+code for quality and correctness. After any concerns are resolved, your code
+will be merged into the branch it was submitted against.
+
+The title of the pull request should provide a high level technical
+summary of the included patches. The description should provide
+additional details that will help the reviewer to understand the context
+of the included patches.
+
+Squash commits
+--------------
+
+Before merging, make sure a PR has squashed the following kinds of commits:
+
+- Fixes/review feedback
+- Typos
+- Merges and rebases
+- Work in progress
+
+This helps to automatically generate human-readable changelog messages.
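+
+One way to do this (a sketch, assuming your PR branch is based on a local
+``master`` that is up to date with upstream) is an interactive rebase, marking
+the commits to fold as ``fixup`` or ``squash`` in the editor that opens:
+
+.. code-block:: console
+
+   % git rebase -i master
+   % git push --force-with-lease <your-fork> <your-branch>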
+
+Commit Guidelines
+-----------------
+
+There is a built-in commit linter. Basic rules:
+
+- Commit messages must be prefixed with the name of the changed subsystem,
+  followed by a colon and a space, and must start with an imperative verb.
+
+  See the `commitlint configuration <https://github.com/FRRouting/frr/tree/master/.github/commitlint.config.js>`_
+  for the full list of supported subsystems.
+
+- Commit messages must not end with a period ``.``
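+
+For illustration, a subject line that satisfies these rules (assuming
+``bgpd`` is the changed subsystem) might look like::
+
+   bgpd: add missing check for malformed attributes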
+
+Why was my pull request closed?
+-------------------------------
+
+Pull requests older than 180 days will be closed. Exceptions can be made for
+pull requests that have active review comments, or that are awaiting other
+dependent pull requests. Closed pull requests are easy to recreate, and little
+work is lost by closing a pull request that subsequently needs to be reopened.
+
+We want to limit the total number of pull requests in flight in order to:
+
+- Maintain a clean project
+- Remove old pull requests that would be difficult to rebase as the underlying code has changed over time
+- Encourage code velocity
+
+.. _license-for-contributions:
+
+License for Contributions
+-------------------------
+FRR is under a “GPLv2 or later” license. Any code submitted must be released
+under the same license (preferred) or any license which allows redistribution
+under this GPLv2 license (e.g. the MIT License).
+It is forbidden to push any code that prevents use under the GPLv3 license.
+This is a community rule, as FRR produces binaries that link with Apache 2.0
+libraries, and the Apache 2.0 and GPLv2 licenses are incompatible when
+combined. Please see `<http://www.apache.org/licenses/GPL-compatibility.html>`_
+for more information. This rule guarantees that users can distribute FRR
+binary code without any licensing issues.
+
+Pre-submission Checklist
+------------------------
+- Format code (see `Code Formatting <#code-formatting>`__)
+- Verify and acknowledge license (see :ref:`license-for-contributions`)
+- Ensure you have properly signed off (see :ref:`signing-off`)
+- Test building with various configurations:
+
+ - ``buildtest.sh``
+
+- Verify building source distribution:
+
+ - ``make dist`` (and try rebuilding from the resulting tar file)
+
+- Run unit tests:
+
+ - ``make test``
+
+- In the case of a major new feature or other significant change, document
+  plans for continued maintenance of the feature. In addition, it is a
+  requirement that automated testing be written that exercises the new
+  feature within our existing CI infrastructure. The addition of automated
+  testing to cover any pull request is also encouraged.
+
+- All new code must use the currently accepted idioms:
+
+  - If a daemon has been converted to YANG, new code must use YANG.
+  - DEFPYs must be used for new CLI commands.
+  - Typesafe lists must be used.
+  - The new printfrr formatting extensions must be used.
+
+.. _signing-off:
+
+Signing Off
+-----------
+Code submitted to FRR must be signed off. We have the same requirements for
+using the signed-off-by process as the Linux kernel. In short, you must include
+a ``Signed-off-by`` tag in every patch.
+
+An easy way to do this is to use ``git commit -s``, where ``-s`` automatically
+appends a signed-off line to the end of your commit message. If you committed
+and forgot to add the line, you can use ``git commit --amend -s`` to add the
+signed-off line to the last commit.
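+
+For example (with a hypothetical commit message):
+
+.. code-block:: console
+
+   % git commit -s -m "bgpd: fix example issue"
+   % git commit --amend -s    # add the line to the previous commit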
+
+``Signed-off-by`` is a developer's certification that they have the right to
+submit the patch for inclusion into the project. It is an agreement to the
+:ref:`Developer's Certificate of Origin <developers-certificate-of-origin>`.
+Code without a proper ``Signed-off-by`` line cannot and will not be merged.
+
+If you are unfamiliar with this process, you should read the
+`official policy at kernel.org <https://www.kernel.org/doc/html/latest/process/submitting-patches.html>`_.
+You might also find
+`this article <http://www.linuxfoundation.org/content/how-participate-linux-community-0>`_
+about participating in the Linux community on the Linux Foundation website to
+be a helpful resource.
+
+.. _developers-certificate-of-origin:
+
+In short, when you sign off on a commit, you assert your agreement to all of
+the following::
+
+ Developer's Certificate of Origin 1.1
+
+ By making a contribution to this project, I certify that:
+
+ (a) The contribution was created in whole or in part by me and I
+ have the right to submit it under the open source license
+ indicated in the file; or
+
+ (b) The contribution is based upon previous work that, to the best
+ of my knowledge, is covered under an appropriate open source
+ license and I have the right under that license to submit that
+ work with modifications, whether created in whole or in part by
+ me, under the same open source license (unless I am permitted to
+ submit under a different license), as indicated in the file; or
+
+ (c) The contribution was provided directly to me by some other
+ person who certified (a), (b) or (c) and I have not modified it.
+
+ (d) I understand and agree that this project and the contribution
+ are public and that a record of the contribution (including all
+ personal information I submit with it, including my sign-off) is
+ maintained indefinitely and may be redistributed consistent with
+ this project or the open source license(s) involved.
+
+After Submitting Your Changes
+-----------------------------
+
+- Watch for Continuous Integration (CI) test results
+
+ - You should automatically receive an email with the test results
+ within less than 2 hrs of the submission. If you don’t get the
+ email, then check status on the GitHub pull request.
+ - Please notify the development mailing list if you think something
+ doesn't work.
+
+- If the tests failed:
+
+ - In general, expect the community to ignore the submission until
+ the tests pass.
+ - It is up to you to fix and resubmit.
+
+ - This includes fixing existing unit (“make test”) tests if your
+ changes broke or changed them.
+ - It also includes fixing distribution packages for the failing
+ platforms (ie if new libraries are required).
+ - Feel free to ask for help on the development list.
+
+ - Go back to the submission process and repeat until the tests pass.
+
+- If the tests pass:
+
+ - Wait for reviewers. Someone will review your code or be assigned
+ to review your code.
+ - Respond to any comments or concerns the reviewer has. Use e-mail or
+ add a comment via github to respond or to let the reviewer know how
+ their comment or concern is addressed.
+ - An author must never delete or manually dismiss someone else's comments
+ or review. (A review may be overridden by agreement in the weekly
+ technical meeting.)
+ - When you have addressed someone's review comments, please click the
+ "re-request review" button (in the top-right corner of the PR page, next
+ to the reviewer's name, an icon that looks like "reload")
+ - The responsibility for keeping a PR moving rests with the author at
+ least as long as there are either negative CI results or negative review
+ comments. If you forget to mark a review comment as addressed (by
+ clicking re-request review), the reviewer may very well not notice and
+ won't come back to your PR.
+ - Automatically generated comments, e.g., those generated by CI systems,
+ may be deleted by authors and others when such comments are not the most
+ recent results from that automated comment source.
+ - After all comments and concerns are addressed, expect your patch
+ to be merged.
+
+- Watch out for questions on the mailing list. At this time there will
+ be a manual code review and further (longer) tests by various
+ community members.
+- Your submission is done once it is merged to the master branch.
+
+Programming Languages, Tools and Libraries
+==========================================
+
+The core of FRR is written in C (gcc or clang supported) and makes
+use of GNU compiler extensions. Additionally, the CLI generation
+tool, `clippy`, requires Python. A few other non-essential scripts are
+implemented in Perl and Python. To build distribution packages, FRR requires
+the following tools: automake, autoconf, texinfo, libtool and gawk, as well
+as various libraries (e.g. libpam and libjson-c).
+
+If your contribution requires a new library or other tool, then please
+highlight this in your description of the change. Also make sure it’s
+supported by all FRR platform OSes or provide a way to build
+without the library (potentially without the new feature) on the other
+platforms.
+
+Documentation should be written in reStructuredText. Sphinx extensions may be
+utilized but pure ReST is preferred where possible. See
+:ref:`documentation`.
+
+Use of C++
+----------
+
+While C++ is not accepted for core components of FRR, extensions, modules or
+other distinct components may want to use C++ and include FRR header files.
+There is no requirement on contributors to work to retain C++ compatibility,
+but fixes for C++ compatibility are welcome.
+
+This implies that the burden of work to keep C++ compatibility is placed with
+the people who need it, and they may provide it at their leisure to the extent
+it is useful to them. So, if only a subset of header files, or even parts of
+a header file are made available to C++, this is perfectly fine.
+
+Code Reviews
+============
+
+Code quality is paramount for any large program. Consequently we require
+reviews of all submitted patches by at least one person other than the
+submitter before the patch is merged.
+
+Because of the nature of the software, FRR's maintainer list (i.e. those with
+commit permissions) tends to contain employees / members of various
+organizations. In order to prevent conflicts of interest, we use an honor
+system in which submissions from an individual representing one company should
+be merged by someone unaffiliated with that company.
+
+Guidelines for code review
+--------------------------
+
+- As a rule of thumb, the depth of the review should be proportional to the
+ scope and / or impact of the patch.
+
+- Anyone may review a patch.
+
+- When using GitHub reviews, marking "Approve" on a code review indicates
+ willingness to merge the PR.
+
+- For individuals with merge rights, marking "Changes requested" is equivalent
+ to a NAK.
+
+- For a PR you marked with "Changes requested", please respond to updates in a
+ timely manner to avoid impeding the flow of development.
+
+- Rejected or obsolete PRs are generally closed by the submitter based
+ on requests and/or agreement captured in a PR comment. The comment
+ may originate with a reviewer or document agreement reached on Slack,
+ the Development mailing list, or the weekly technical meeting.
+
+- Reviewers may ask for new automated testing if they feel that the
+ code change is large enough/significant enough to warrant such
+ a requirement.
+
+For project members with merge permissions, the following patterns have
+emerged:
+
+- a PR with any reviews requesting changes may not be merged.
+
+- a PR with any negative CI result may not be merged.
+
+- an open "yellow" review mark ("review requested, but not done") should be
+ given some time (a few days up to weeks, depending on the size of the PR),
+ but is not a merge blocker.
+
+- a "textbubble" review mark ("review comments, but not positive/negative")
+ should be read through but is not a merge blocker.
+
+- non-trivial PRs are generally given some time (again depending on the size)
+ for people to mark an interest in reviewing. Trivial PRs may be merged
+ immediately when CI is green.
+
+
+Coding Practices & Style
+========================
+
+Commit messages
+---------------
+
+Commit messages should be formatted in the same way as Linux kernel
+commit messages. The format is roughly::
+
+ dir: short summary
+
+ extended summary
+
+``dir`` should be the top level source directory under which the change was
+made. For example, a change in :file:`bgpd/rfapi` would be formatted as::
+
+ bgpd: short summary
+
+ ...
+
+The first line should be no longer than 50 characters. Subsequent lines should
+be wrapped to 72 characters.
+
+The purpose of commit messages is to briefly summarize what the commit is
+changing. Therefore, the extended summary portion should be in the form of an
+English paragraph. Brief examples of program output are acceptable but if
+present should be short (on the order of 10 lines) and clearly demonstrate what
+has changed. The goal should be that someone with only passing familiarity with
+the code in question can understand what is being changed.
+
+Commit messages consisting entirely of program output are *unacceptable*. These
+do not describe the behavior changed. For example, putting VTYSH output or the
+result of test runs as the sole content of commit messages is unacceptable.
+
+You must also sign off on your commit.
+
+.. seealso:: :ref:`signing-off`
+
+
+Source File Header
+------------------
+
+New files must have a copyright header (see :ref:`license-for-contributions`
+above) added to the file. The header should be:
+
+.. code-block:: c
+
+ // SPDX-License-Identifier: GPL-2.0-or-later
+ /*
+ * Title/Function of file
+ * Copyright (C) YEAR Author’s Name
+ */
+
+ #include <zebra.h>
+
+A ``SPDX-License-Identifier`` header is required in all source files, i.e.
+``.c``, ``.h``, ``.cpp`` and ``.py`` files. The license boilerplate should be
+removed in these files. Some existing files are missing this header; this is
+slowly being fixed.
+
+A ``SPDX-License-Identifier`` header *and* the full license boilerplate is
+required in schema definition files, i.e. ``.yang`` and ``.proto``. The
+rationale for this is that these files are likely to be individually copied to
+places outside FRR, and having only the SPDX header would become a "dangling
+pointer".
+
+.. warning::
+
+ **DO NOT REMOVE A "Copyright" LINE OR AUTHOR NAME, EVER.**
+
+ **DO NOT APPLY AN SPDX HEADER WHEN THE LICENSE IS UNCLEAR, UNLESS YOU HAVE
+ CHECKED WITH *ALL* SIGNIFICANT AUTHORS.**
+
+Please keep ``#include <zebra.h>``. The absolute first header included in
+any C file **must** be either ``zebra.h`` or ``config.h`` (with HAVE_CONFIG_H
+guard.)
+
+
+Adding Copyright Claims to Existing Files
+-----------------------------------------
+
+When adding copyright claims for modifications to an existing file, please
+add a ``Portions:`` section as shown below. If this section already exists, add
+your new claim at the end of the list.
+
+.. code-block:: c
+
+ /*
+ * Title/Function of file
+ * Copyright (C) YEAR Author’s Name
+ * Portions:
+ * Copyright (C) 2010 Entity A ....
+ * Copyright (C) 2016 Your name [optional brief change description]
+ * ...
+ */
+
+Defensive coding requirements
+-----------------------------
+
+In general, code submitted into FRR will be rejected if it uses unsafe
+programming practices. While there is no enforced overall ruleset, the
+following requirements have achieved consensus:
+
+- ``strcpy``, ``strcat`` and ``sprintf`` are unacceptable without exception.
+  Use ``strlcpy``, ``strlcat`` and ``snprintf`` instead. (Rationale: even if
+  you know the operation cannot overflow the buffer, a future code change may
+  inadvertently introduce an overflow.)
+
+- buffer size arguments, particularly to ``strlcpy`` and ``snprintf``, must
+  use ``sizeof()`` wherever possible. In particular, do not use a size
+  constant in these cases. (Rationale: changing a buffer to another size
+  constant may leave the write operations on a now-incorrect size limit.)
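+
+  For example (a sketch; ``input`` is a hypothetical source string):
+
+  .. code-block:: c
+
+     char buf[64];
+
+     /* good: the size argument follows the buffer if it is ever resized */
+     strlcpy(buf, input, sizeof(buf));
+
+     /* bad: repeats the constant and silently breaks if buf is resized */
+     strlcpy(buf, input, 64);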
+
+- For stack allocated structs and arrays that should be zero initialized,
+ prefer initializer expressions over ``memset()`` wherever possible. This
+ helps prevent ``memset()`` calls being missed in branches, and eliminates the
+ error class of an incorrect ``size`` argument to ``memset()``.
+
+ For example, instead of:
+
+ .. code-block:: c
+
+ struct foo mystruct;
+ ...
+ memset(&mystruct, 0x00, sizeof(struct foo));
+
+ Prefer:
+
+ .. code-block:: c
+
+ struct foo mystruct = {};
+
+- Do not zero initialize stack allocated values that must be initialized with a
+ nonzero value in order to be used. This way the compiler and memory checking
+ tools can catch uninitialized value use that would otherwise be suppressed by
+ the (incorrect) zero initialization.
+
+- Usage of ``system()`` or other C library routines that may cause signals to
+  be ignored is not allowed. This includes the ``fork()`` and ``execXX()``
+  call patterns, which are what ``system()`` actually does under the covers.
+  This pattern can cause daemon shutdown to never work properly, as the
+  SIGINT sent is never received. It is better to simply prohibit code that
+  does this than to have to debug shutdown issues again.
+
+Other than these specific rules, coding practices from the Linux kernel as
+well as CERT or MISRA C guidelines may provide useful input on safe C code.
+However, these rules are not applied as-is; some of them expressly collide
+with established practice.
+
+
+Container implementations
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In particular to gain defensive coding benefits from better compiler type
+checks, there is a set of replacement container data structures to be found
+in :file:`lib/typesafe.h`. They're documented under :ref:`lists`.
+
+Unfortunately, the FRR codebase is quite large, and migrating existing code to
+use these new structures is a tedious and far-reaching process (even if it
+can be automated with coccinelle, the patches would touch whole swaths of code
+and create tons of merge conflicts for ongoing work.) Therefore, little
+existing code has been migrated.
+
+However, both **new code and refactors of existing code should use the new
+containers**. If there are any reasons this can't be done, please work to
+remove these reasons (e.g. by adding necessary features to the new containers)
+rather than falling back to the old code.
+
+In order of likelihood of removal, these are the old containers:
+
+- :file:`nhrpd/list.*`, ``hlist_*`` ⇒ ``DECLARE_LIST``
+- :file:`nhrpd/list.*`, ``list_*`` ⇒ ``DECLARE_DLIST``
+- :file:`lib/skiplist.*`, ``skiplist_*`` ⇒ ``DECLARE_SKIPLIST``
+- :file:`lib/*_queue.h` (BSD), ``SLIST_*`` ⇒ ``DECLARE_LIST``
+- :file:`lib/*_queue.h` (BSD), ``LIST_*`` ⇒ ``DECLARE_DLIST``
+- :file:`lib/*_queue.h` (BSD), ``STAILQ_*`` ⇒ ``DECLARE_LIST``
+- :file:`lib/*_queue.h` (BSD), ``TAILQ_*`` ⇒ ``DECLARE_DLIST``
+- :file:`lib/hash.*`, ``hash_*`` ⇒ ``DECLARE_HASH``
+- :file:`lib/linklist.*`, ``list_*`` ⇒ ``DECLARE_DLIST``
+- open-coded linked lists ⇒ ``DECLARE_LIST``/``DECLARE_DLIST``
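+
+As a quick orientation (a minimal sketch based on the documentation in
+:ref:`lists`; names like ``itemlist`` and ``struct item`` are examples), a
+typesafe list is declared and used roughly like this:
+
+.. code-block:: c
+
+   PREDECL_LIST(itemlist);
+
+   struct item {
+           int value;
+           struct itemlist_item ilist;   /* linkage member */
+   };
+
+   DECLARE_LIST(itemlist, struct item, ilist);
+
+   /* ... later, in a function ... */
+   struct itemlist_head head;
+   struct item *item;
+
+   itemlist_init(&head);
+   frr_each (itemlist, &head, item)
+           do_something(item->value);   /* do_something() is hypothetical */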
+
+
+Code Formatting
+---------------
+
+C Code
+^^^^^^
+
+For C code, FRR uses Linux kernel style except where noted below. Code which
+does not comply with these style guidelines will not be accepted.
+
+The project provides multiple tools to allow you to correctly style your code
+as painlessly as possible, primarily built around ``clang-format``.
+
+clang-format
+
+ In the project root there is a :file:`.clang-format` configuration file
+ which can be used with the ``clang-format`` source formatter tool from the
+ LLVM project. Most of the time, this is the easiest and smartest tool to
+ use. It can be run in a variety of ways. If you point it at a C source file
+ or directory of source files, it will format all of them. In the LLVM source
+ tree there are scripts that allow you to integrate it with ``git``, ``vim``
+ and ``emacs``, and there are third-party plugins for other editors. The
+ ``git`` integration is particularly useful; suppose you have some changes in
+ your git index. Then, with the integration installed, you can do the
+ following:
+
+ ::
+
+ git clang-format
+
+ This will format *only* the changes present in your index. If you have just
+ made a few commits and would like to correctly style only the changes made
+ in those commits, you can use the following syntax:
+
+ ::
+
+ git clang-format HEAD~X
+
+ Where X is one more than the number of commits back from the tip of your
+ branch you would like ``clang-format`` to look at (similar to specifying the
+ target for a rebase).
+
+ The ``vim`` plugin is particularly useful. It allows you to select lines in
+ visual line mode and press a key binding to invoke ``clang-format`` on only
+ those lines.
+
+ When using ``clang-format``, it is recommended to use the latest version.
+ Each consecutive version generally has better handling of various edge
+ cases. You may notice on occasion that two consecutive runs of
+ ``clang-format`` over the same code may result in changes being made on the
+ second run. This is an unfortunate artifact of the tool. Please check with
+ the kernel style guide if in doubt.
+
+ One stylistic problem with the FRR codebase is the use of ``DEFUN`` macros
+ for defining CLI commands. ``clang-format`` will happily format these macro
+ invocations, but the result is often unsightly and difficult to read.
+ Consequently, FRR takes a more relaxed position with how these are
+ formatted. In general you should lean towards using the style exemplified in
+ the section on :ref:`command-line-interface`. Because ``clang-format``
+ mangles this style, there is a Python script named ``tools/indent.py`` that
+ wraps ``clang-format`` and handles ``DEFUN`` macros as well as some other
+ edge cases specific to FRR. If you are submitting a new file, it is
+ recommended to run that script over the new file, preferably after ensuring
+ that the latest stable release of ``clang-format`` is in your ``PATH``.
+
+ Documentation on ``clang-format`` and its various integrations is maintained
+ on the LLVM website.
+
+ https://clang.llvm.org/docs/ClangFormat.html
+
+checkpatch.sh
+checkpatch.pl
+
+ .. seealso:: :ref:`checkpatch`
+
+ In the Linux kernel source tree there is a Perl script used to check
+ incoming patches for style errors. FRR uses a shell script front end and an
+ adapted version of the perl script for the same purpose. These scripts can
+ be found at :file:`tools/checkpatch.sh` and :file:`tools/checkpatch.pl`.
+ This script takes a git-formatted diff or patch file, applies it to a clean
+ FRR tree, and inspects the result to catch potential style errors. Running
+ this script on your patches before submission is highly recommended. The CI
+ system runs this script as well and will comment on the PR with the results
+ if style errors are found.
+
+ It is run like this::
+
+ ./checkpatch.sh <patch> <tree>
+
+   Where ``<patch>`` is the path to the diff or patch file and ``<tree>`` is
+   the path to your FRR source tree. The tree should be on the branch that you
+   intend to submit the patch against. The script will make a best-effort
+   attempt to save the state of your working tree and index before applying the
+   patch, and to restore it when it is done, but it is still recommended that
+   you have a clean working tree as the script does perform a hard reset on
+   your tree during its run.
+
+   Reports are generated on ``stderr`` and the exit code indicates whether
+   issues were found (2, 1) or not (0).
+
+ The script reports two classes of issues, namely WARNINGs and ERRORs. Please
+ pay attention to both of them. The script will generally report WARNINGs
+ where it cannot be 100% sure that a particular issue is real. In most cases
+ WARNINGs indicate an issue that needs to be fixed. Sometimes the script will
+ report false positives; these will be handled in code review on a
+ case-by-case basis. Since the script only looks at changed lines,
+ occasionally changing one part of a line can cause the script to report a
+ style issue already present on that line that is unrelated to the change.
+ When convenient it is preferred that these be cleaned up inline, but this is
+ not required.
+
+ In general, a developer should heed the information reported by checkpatch.
+ However, some flexibility is needed for cases where human judgement yields
+ better clarity than the script. Accordingly, it may be appropriate to
+ ignore some checkpatch.sh warnings per discussion among the submitter(s)
+ and reviewer(s) of a change. Misreporting of errors by the script is
+ possible. When this occurs, the exception should be handled either by
+ patching checkpatch to correct the false error report, or by documenting the
+ exception in this document under :ref:`style-exceptions`. If the incorrect
+ report is likely to appear again, a checkpatch update is preferred.
+
+ If the script finds one or more WARNINGs it will exit with 1. If it finds
+ one or more ERRORs it will exit with 2.
+
+ For convenience the Linux documentation for the :file:`tools/checkpatch.pl`
+ script has been included unmodified (i.e., it has not been updated to
+ reflect local changes) :doc:`here <checkpatch>`
+
+
+Please remember that while FRR provides these tools for your convenience,
+responsibility for properly formatting your code ultimately lies on the
+shoulders of the submitter. As such, it is recommended to double-check the
+results of these tools to avoid delays in merging your submission.
+
+In some cases, these tools modify or flag the format in ways that go beyond or
+even conflict [#tool_style_conflicts]_ with the canonical documented Linux
+kernel style. In these cases, the Linux kernel style takes priority;
+non-canonical issues flagged by the tools are not compulsory but rather are
+opportunities for discussion among the submitter(s) and reviewer(s) of a change.
+
+**Whitespace changes in untouched parts of the code are not acceptable
+in patches that change actual code.** To change/fix formatting issues,
+please create a separate patch that only does formatting changes and
+nothing else.
+
+Kernel and BSD styles are documented externally:
+
+- https://www.kernel.org/doc/html/latest/process/coding-style.html
+- http://man.openbsd.org/style
+
+For GNU coding style, use ``indent`` with the following invocation:
+
+::
+
+ indent -nut -nfc1 file_for_submission.c
+
+
+Historically, FRR used fixed-width integral types that do not exist in any
+standard but were defined by most platforms at some point. Officially these
+types are not guaranteed to exist. Therefore, please use the fixed-width
+integral types introduced in the C99 standard when contributing new code to
+FRR. If you need to convert a large amount of code to use the correct types,
+there is a shell script in :file:`tools/convert-fixedwidth.sh` that will do the
+necessary replacements.
+
++-----------+--------------------------+
+| Incorrect | Correct |
++===========+==========================+
+| u_int8_t | uint8_t |
++-----------+--------------------------+
+| u_int16_t | uint16_t |
++-----------+--------------------------+
+| u_int32_t | uint32_t |
++-----------+--------------------------+
+| u_int64_t | uint64_t |
++-----------+--------------------------+
+| u_char | uint8_t or unsigned char |
++-----------+--------------------------+
+| u_short | unsigned short |
++-----------+--------------------------+
+| u_int | unsigned int |
++-----------+--------------------------+
+| u_long | unsigned long |
++-----------+--------------------------+
+
+FRR also uses unnamed struct fields, enabled with ``-fms-extensions`` (cf.
+https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html). The following two
+patterns can/should be used where contextually appropriate:
+
+.. code-block:: c
+
+ struct outer {
+ struct inner;
+ };
+
+.. code-block:: c
+
+ struct outer {
+ union {
+ struct inner;
+ struct inner inner_name;
+ };
+ };
+
+
+.. _style-exceptions:
+
+Exceptions
+""""""""""
+
+FRR project code comes from a variety of sources, so there are some
+stylistic exceptions in place. They are organized here by branch.
+
+For ``master``:
+
+BSD coding style applies to:
+
+- ``ldpd/``
+
+``babeld`` uses, approximately, the following style:
+
+- K&R style braces
+- Indents are 4 spaces
+- Function return types are on their own line
+
+For ``stable/3.0`` and ``stable/2.0``:
+
+GNU coding style applies to the following parts:
+
+- ``lib/``
+- ``zebra/``
+- ``bgpd/``
+- ``ospfd/``
+- ``ospf6d/``
+- ``isisd/``
+- ``ripd/``
+- ``ripngd/``
+- ``vtysh/``
+
+BSD coding style applies to:
+
+- ``ldpd/``
+
+
+Python Code
+^^^^^^^^^^^
+
+Format all Python code with `black <https://github.com/psf/black>`_.
+
+In a line::
+
+ python3 -m black <file.py>
+
+Run this on any Python files you modify before committing.
+
+FRR's Python code has been formatted with black version 19.10b.
+
+
+YANG
+^^^^
+
+FRR uses YANG to define data models for its northbound interface. YANG models
+should follow conventions used by the IETF standard models. From a practical
+standpoint, this corresponds to the output produced by the ``yanglint`` tool
+included in the ``libyang`` project, which is used by FRR to parse and validate
+YANG models. You should run the following command on all YANG documents you
+write:
+
+.. code-block:: console
+
+ yanglint -f yang <model>
+
+The output of this command should be identical to the input file. The sole
+exception to this is comments. ``yanglint`` does not support comments and will
+strip them from its output. You may include comments in your YANG documents,
+but they should be indented appropriately (use spaces). Where possible,
+comments should be eschewed in favor of a suitable ``description`` statement.
+
+In short, a diff between your input file and the output of ``yanglint`` should
+either be empty or contain only comments.
+
+Specific Exceptions
+^^^^^^^^^^^^^^^^^^^
+
+Most of the time checkpatch errors should be corrected. Occasionally as a group
+maintainers will decide to ignore certain stylistic issues. Usually this is
+because correcting the issue is not possible without large unrelated code
+changes. When an exception is made, if it is unlikely to show up again and
+doesn't warrant an update to checkpatch, it is documented here.
+
++------------------------------------------+---------------------------------------------------------------+
+| Issue | Ignore Reason |
++==========================================+===============================================================+
+| DEFPY_HIDDEN, DEFPY_ATTR: complex macros | DEF* macros cannot be wrapped in parentheses without updating |
+| should be wrapped in parentheses | all usages of the macro, which would be highly disruptive. |
++------------------------------------------+---------------------------------------------------------------+
+
+Types of configurables
+----------------------
+
+.. note::
+
+ This entire section essentially just argues to not make configuration
+ unnecessarily involved for the user. Rather than rules, this is more of
+ a list of conclusions intended to help make FRR usable for operators.
+
+
+Almost every feature FRR has comes with its own set of switches and options.
+There are several stages at which configuration can be applied. In order of
+preference, these are:
+
+- at configuration/runtime, through YANG.
+
+ This is the preferred way for all FRR knobs. Not all daemons and features
+ are fully YANGified yet, so in some cases new features cannot rely on a
+ YANG interface. If a daemon already implements a YANG interface (even
+ partial), new CLI options must be implemented through a YANG model.
+
+ .. warning::
+
+     Unlike everything else in this section, which is guidelines with some
+     slack, implementing and using a YANG interface for new CLI options in
+     (even partially!) YANGified daemons is a hard requirement.
+
+
+- at configuration/runtime, through the CLI.
+
+ The "good old" way for all regular configuration. More involved for users
+ to automate *correctly* than YANG.
+
+- at startup, by loading additional modules.
+
+ If a feature introduces a dependency on additional libraries (e.g. libsnmp,
+ rtrlib, etc.), this is the best way to encapsulate the dependency. Having
+ a separate module allows the distribution to create a separate package
+ with the extra dependency, so FRR can still be installed without pulling
+ everything in.
+
+  A module may also be appropriate if a feature is large and reasonably well
+  isolated. Reducing the amount of running code is a security benefit, so
+  even if there are no new external dependencies, modules can be useful.
+
+ While modules cannot currently be loaded at runtime, this is a tradeoff
+ decision that was made to allow modules to change/extend code that is very
+ hard to (re)adjust at runtime. If there is a case for runtime (un)loading
+ of modules, this tradeoff can absolutely be reevaluated.
+
+- at startup, with command line options.
+
+ This interface is only appropriate for options that have an effect very
+ early in FRR startup, i.e. before configuration is loaded. Anything that
+ affects configuration load itself should be here, as well as options
+ changing the environment FRR runs in.
+
+ If a tunable can be changed at runtime, a command line option is only
+ acceptable if the configured value has an effect before configuration is
+ loaded (e.g. zebra reads routes from the kernel before loading config, so
+ the netlink buffer size is an appropriate command line option.)
+
+- at compile time, with ``./configure`` options.
+
+ This is the absolute last preference for tunables, since the distribution
+ needs to make the decision for the user and/or the user needs to rebuild
+ FRR in order to change the option.
+
+ "Good" configure options do one of three things:
+
+ - set distribution-specific parameters, most prominently all the path
+ options. File system layout is a distribution/packaging choice, so the
+ user would hopefully never need to adjust these.
+
+ - changing toolchain behavior, e.g. instrumentation, warnings,
+ optimizations and sanitizers.
+
+ - enabling/disabling parts of the build, especially if they need
+ additional dependencies. Being able to build only parts of FRR, or
+ without some library, is useful. **The only effect these options should
+ have is adding or removing files from the build result.** If a knob
+ in this category causes the same binary to exist in different variants,
+ it is likely implemented incorrectly!
+
+ .. note::
+
+ This last guideline is currently ignored by several configure options.
+ ``vtysh`` in general depends on the entire list of enabled daemons,
+ and options like ``--enable-bgp-vnc`` and ``--enable-ospfapi`` change
+ daemons internally. Consider this more of an "ideal" than a "rule".
+
+
+Whenever adding new knobs, please try reasonably hard to go up as far as
+possible on the above list. Especially ``./configure`` flags are often enough
+the "easy way out" but should be avoided when at all possible. To a lesser
+degree, the same applies to command line options.
+
+
+Compile-time conditional code
+-----------------------------
+
+Many users access FRR via binary packages from 3rd party sources;
+compile-time code puts inclusion/exclusion in the hands of the package
+maintainer. Please think very carefully before making code conditional
+at compile time, as it increases regression testing, maintenance
+burdens, and user confusion. In particular, please avoid gratuitous
+``--enable-…`` switches to the configure script - in general, code
+should be of high quality and in working condition, or it shouldn’t be
+in FRR at all.
+
+When code must be compile-time conditional, try to have the compiler make it
+conditional rather than the C pre-processor, so that the code will still be
+checked by the compiler even if disabled. For example,
+
+::
+
+ if (SOME_SYMBOL)
+ frobnicate();
+
+is preferred to
+
+::
+
+ #ifdef SOME_SYMBOL
+ frobnicate ();
+ #endif /* SOME_SYMBOL */
+
+Note that the former approach requires ensuring that ``SOME_SYMBOL`` will be
+defined (watch your ``AC_DEFINE``\ s).
+
+Debug-guards in code
+--------------------
+
+Debugging statements are an important methodology to allow developers to fix
+issues found in the code after it has been released. The caveat here is that
+the developer must remember that people will be using the code at scale and in
+ways that can be unexpected for the original implementor. As such, debugs
+**MUST** be guarded in such a way that they can be turned off. FRR has the
+ability to turn on/off debugs from the CLI and it is expected that the
+developer will use this convention to allow control of their debugs.
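+
+A sketch of the expected pattern (``IS_DEBUG_EVENT`` stands in for whatever
+flag the daemon toggles from its ``debug ...`` CLI commands; the variables are
+illustrative):
+
+.. code-block:: c
+
+   if (IS_DEBUG_EVENT)
+           zlog_debug("handling event %d for peer %s", event, peer_name);
+
+This way the (potentially hot) log statement costs almost nothing unless an
+operator has explicitly enabled the corresponding debug.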
+
+Custom syntax-like block macros
+-------------------------------
+
+FRR uses some macros that behave like the ``for`` or ``if`` C keywords. These
+macros follow these patterns:
+
+- loop-style macros are named ``frr_each_*`` (and ``frr_each``)
+- single run macros are named ``frr_with_*``
+- to avoid confusion, ``frr_with_*`` macros must always use a ``{ ... }``
+ block even if the block only contains one statement. The ``frr_each``
+ constructs are assumed to be well-known enough to use normal ``for`` rules.
+- ``break``, ``return`` and ``goto`` all work correctly. For loop-style
+ macros, ``continue`` works correctly too.
+
+Both the ``each`` and ``with`` keywords are inspired by other (higher-level)
+programming languages that provide these constructs.
+
+There are also some older iteration macros, e.g. ``ALL_LIST_ELEMENTS`` and
+``FOREACH_AFI_SAFI``. These macros in some cases do **not** fulfill the above
+pattern (e.g. ``break`` does not work in ``FOREACH_AFI_SAFI`` because it
+expands to 2 nested loops.)
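+
+A sketch of both patterns (names are illustrative; ``frr_with_mutex`` is one
+existing ``frr_with_*`` macro):
+
+.. code-block:: c
+
+   frr_each (itemlist, &head, item) {
+           if (!item->interesting)
+                   continue;       /* works as in a plain for loop */
+           handle(item);
+   }
+
+   frr_with_mutex (&mtx) {
+           counter++;              /* always braced, even for one statement */
+   }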
+
+Static Analysis and Sanitizers
+------------------------------
+Clang/LLVM and GCC come with a variety of tools that can be used to help find
+bugs in FRR.
+
+clang-analyze
+ This is a static analyzer that scans the source code looking for patterns
+ that are likely to be bugs. The tool is run automatically on pull requests
+ as part of CI and new static analysis warnings will be placed in the CI
+ results. FRR aims for absolutely zero static analysis errors. While the
+ project is not quite there, code that introduces new static analysis errors
+ is very unlikely to be merged.
+
+AddressSanitizer
+ This is an excellent tool that provides runtime instrumentation for
+ detecting memory errors. As part of CI FRR is built with this
+ instrumentation and run through a series of tests to look for any results.
+ Testing your own code with this tool before submission is encouraged. You
+ can enable it by passing::
+
+ --enable-address-sanitizer
+
+ to ``configure``.
+
+ThreadSanitizer
+ Similar to AddressSanitizer, this tool provides runtime instrumentation for
+   detecting data races. If you are working on or around multithreaded code,
+   extensive testing with this instrumentation enabled is *highly* recommended.
+ You can enable it by passing::
+
+ --enable-thread-sanitizer
+
+ to ``configure``.
+
+MemorySanitizer
+ Similar to AddressSanitizer, this tool provides runtime instrumentation for
+ detecting use of uninitialized heap memory. Testing your own code with this
+ tool before submission is encouraged. You can enable it by passing::
+
+ --enable-memory-sanitizer
+
+ to ``configure``.
+
+All of the above tools are available in the Clang/LLVM toolchain since 3.4.
+AddressSanitizer and ThreadSanitizer are available in recent versions of GCC,
+but are no longer actively maintained. MemorySanitizer is not available in GCC.
+
+.. note::
+
+ The different Sanitizers are mostly incompatible with each other. Please
+ refer to GCC/LLVM documentation for details.
+
+frr-format plugin
+ This is a GCC plugin provided with FRR that does extended type checks for
+ ``%pFX``-style printfrr extensions. To use this plugin,
+
+ 1. install GCC plugin development files, e.g.::
+
+ apt-get install gcc-10-plugin-dev
+
+ 2. **before** running ``configure``, compile the plugin with::
+
+ make -C tools/gcc-plugins CXX=g++-10
+
+ (Edit the GCC version to what you're using, it should work for GCC 9 or
+ newer.)
+
+ After this, the plugin should be automatically picked up by ``configure``.
+ The plugin does not change very frequently, so you can keep it around across
+ work on different FRR branches. After a ``git clean -x``, the ``make`` line
+ will need to be run again. You can also add ``--with-frr-format`` to the
+ ``configure`` line to make sure the plugin is used, otherwise if something
+ is not set up correctly it might be silently ignored.
+
+ .. warning::
+
+ Do **not** enable this plugin for package/release builds. It is intended
+ for developer/debug builds only. Since it modifies the compiler, it may
+ cause silent corruption of the executable files.
+
+ Using the plugin also changes the string for ``PRI[udx]64`` from the
+ system value to ``%L[udx]`` (normally ``%ll[udx]`` or ``%l[udx]``.)
+
+Additionally, the FRR codebase is regularly scanned for static analysis
+errors with Coverity and pull request changes are scanned as part of the
+Continuous Integration (CI) process. Developers can scan their commits for
+Coverity static analysis errors prior to submission using the
+``scan-build`` command. To use this command, the ``clang-tools`` package must
+be installed. For example, this can be accomplished on Ubuntu with the
+``sudo apt-get install clang-tools`` command. Then, touch the files you want scanned and
+invoke the ``scan-build`` command. For example::
+
+ cd ~/GitHub/frr
+ touch ospfd/ospf_flood.c ospfd/ospf_vty.c ospfd/ospf_opaque.c
+ cd build
+ scan-build make -j32
+
+The results of the scan, including any static analysis errors, will appear
+inline. Additionally, there will be a directory in :file:`/tmp` containing
+the analysis reports (e.g., scan-build-2023-06-09-120100-473730-1).
+
+Executing non-installed dynamic binaries
+----------------------------------------
+
+Since FRR uses the GNU autotools build system, it inherits its shortcomings.
+To execute a binary directly from the build tree under a wrapper like
+`valgrind`, `gdb` or `strace`, use::
+
+ ./libtool --mode=execute valgrind [--valgrind-opts] zebra/zebra [--zebra-opts]
+
+Replace ``valgrind``/``zebra`` as needed. The `libtool` script is found in
+the root of the build directory after `./configure` has completed. Its purpose
+is to correctly set up `LD_LIBRARY_PATH` so that libraries from the build tree
+are used. (On some systems, `libtool` is also available from PATH, but this is
+not always the case.)
+
+.. _cli-workflow:
+
+CLI changes
+-----------
+
+CLIs are a complicated, ugly beast. Additions or changes to the CLI should
+use a DEFPY to encapsulate one setting as much as possible. Additionally, as
+new DEFPYs are added to the system, documentation should be provided for the
+new commands.
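+
+A sketch of what this looks like (a hypothetical ``example timer`` command;
+see :ref:`command-line-interface` for the authoritative conventions):
+
+.. code-block:: c
+
+   DEFPY (example_timer,
+          example_timer_cmd,
+          "example timer (1-65535)$timer",
+          "Example feature\n"
+          "Set the example timer\n"
+          "Timer value in seconds\n")
+   {
+           /* the "$timer" in the grammar makes "timer" available here */
+           vty_out(vty, "timer set to %ld\n", timer);
+           return CMD_SUCCESS;
+   }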
+
+Backwards Compatibility
+-----------------------
+
+As a general principle, changes to CLI and code in the lib/ directory should be
+made in a backwards compatible fashion. This means that changes that are purely
+stylistic in nature should be avoided, e.g., renaming an existing macro or
+library function name without any functional change. When adding new parameters
+to common functions, it is also good to consider if this too should be done in
+a backward compatible fashion, e.g., by preserving the old form in addition to
+adding the new form.
+
+This is not to say that minor or even major functional changes to CLI and
+common code should be avoided, but rather that the benefit gained from a change
+should be weighed against the added cost/complexity to existing code. Also,
+when making such changes, it is good to preserve compatibility when possible
+to do so without introducing maintenance overhead/cost. It is also important
+to keep in mind that existing code includes code that may reside in private
+repositories (and is yet to be submitted) or that has yet to be migrated from
+Quagga to FRR.
+
+That said, compatibility measures can (and should) be removed when either:
+
+- they become a significant burden, e.g. when data structures change and the
+ compatibility measure would need a complex adaptation layer or becomes
+ flat-out impossible
+- some measure of time (dependent on the specific case) has passed, so that
+ the compatibility grace period is considered expired.
+
+For CLI commands, the deprecation period is 1 year.
+
+In all cases, compatibility pieces should be marked with compiler/preprocessor
+annotations to print warnings at compile time, pointing to the appropriate
+update path. A ``-Werror`` build should fail if compatibility bits are used. To
+avoid compilation issues in released code, such compiler/preprocessor
+annotations must be ignored on non-development branches. For example:
+
+.. code-block:: c
+
+ #if CONFDATE > 20180403
+ CPP_NOTICE("Use of <XYZ> is deprecated, please use <ABC>")
+ #endif
+
+Preferably, the script :file:`tools/fixup-deprecated.py` will be
+updated along with making non-backwards compatible code changes, or an
+alternate script should be introduced, to update the code to match the
+change. When the script is updated, there is no need to preserve the
+deprecated code. Note that this does not apply to user interface
+changes, just internal code, macros and libraries.
+
+Miscellaneous
+-------------
+
+When in doubt, follow the guidelines in the Linux kernel style guide, or ask on
+the development mailing list / public Slack instance.
+
+JSON Output
+^^^^^^^^^^^
+
+New JSON output in FRR needs to be backed by schema, in particular a YANG model.
+When adding new JSON, first search for an existing YANG model, either in FRR or
+a standard model (e.g., IETF) and use that model as the basis for any JSON
+structure and *especially* for key names and canonical values formats.
+
+If no YANG model exists to support the JSON, then an existing FRR YANG model
+needs to be extended, or a new one created, to support the JSON format.
+
+* All JSON keys are to be ``camelCased``, with no spaces. YANG modules almost
+  always use ``kebab-case`` (i.e., all lower case with hyphens to separate
+  words), so these identifiers need to be mapped to ``camelCase`` by removing
+  the hyphen (or symbol) and capitalizing the following letter; for example,
+  ``router-id`` becomes ``routerId`` (see the sketch after this list)
+* Commands which output JSON should produce ``{}`` if they have nothing to
+ display
+* In general, JSON commands include a ``json`` keyword, typically at the end
+  of the CLI command (e.g., ``show ip ospf json``)
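+
+As a rough illustration (not taken from any particular daemon), a command
+honoring these conventions might emit its JSON along these lines, assuming
+the helpers from :file:`lib/json.h`; the surrounding command logic and the
+``uj`` flag are invented for this sketch:
+
+.. code-block:: c
+
+   if (uj) { /* the user gave the "json" keyword */
+           json_object *json = json_object_new_object();
+
+           /* YANG leaf "router-id" maps to camelCase "routerId" */
+           json_object_string_add(json, "routerId", "192.0.2.1");
+           json_object_int_add(json, "areaCount", 3);
+
+           /* prints the object ("{}" if nothing was added) and frees it */
+           vty_json(vty, json);
+   }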
+
+Use of const
+^^^^^^^^^^^^
+
+Please consider using ``const`` when possible: it's a useful hint to
+callers about the limits to side effects from your APIs, and it makes
+it possible to use your APIs in paths that involve ``const``
+objects. If you encounter existing APIs that *could* be ``const``,
+consider including changes in your own pull request.
+
+Help with specific warnings
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+FRR's configure script enables a whole batch of extra warnings, some of which
+are not obvious to fix. Here are some notes on specific warnings:
+
+* ``-Wstrict-prototypes``: you probably just forgot the ``void`` in a function
+ declaration with no parameters, i.e. ``static void foo() {...}`` rather than
+ ``static void foo(void) {...}``.
+
+ Without the ``void``, in C, it's a function with *unspecified* parameters
+ (and varargs calling convention.) This is a notable difference to C++, where
+ the ``void`` is optional and an empty parameter list means no parameters.
+
+* ``"strict match required"`` from the frr-format plugin: check if you are
+ using a cast in a printf parameter list. The frr-format plugin cannot
+ access correct full type information for casts like
+ ``printfrr(..., (uint64_t)something, ...)`` and will print incorrect
+ warnings particularly if ``uint64_t``, ``size_t`` or ``ptrdiff_t`` are
+ involved. The problem is *not* triggered with a variable or function return
+ value of the exact same type (without a cast).
+
+ Since these cases are very rare, community consensus is to just work around
+ the warning even though the code might be correct. If you are running into
+ this, your options are (see the sketch after this list):
+
+ 1. try to avoid the cast altogether, maybe using a different printf format
+ specifier (e.g. ``%lu`` instead of ``%zu`` or ``PRIu64``).
+ 2. fix the type(s) of the function/variable/struct member being printed
+ 3. create a temporary variable with the value and print that without a cast
+ (this is the last resort and was not necessary anywhere so far.)
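+
+A sketch of options 1 and 3 above; the variable and function names are
+invented for illustration:
+
+.. code-block:: c
+
+   size_t count = get_count();
+   /* exact type, no cast: does not trigger the warning */
+   printfrr("count: %zu\n", count);
+
+   /* option 3: temporary variable instead of an inline cast */
+   uint64_t bytes64 = (uint64_t)total_bytes;
+   printfrr("bytes: %" PRIu64 "\n", bytes64);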
+
+
+.. _documentation:
+
+Documentation
+=============
+
+FRR uses Sphinx+RST as its documentation system. The document you are currently
+reading was generated by Sphinx from RST source in
+:file:`doc/developer/workflow.rst`. The documentation is structured as follows:
+
++-----------------------+-------------------------------------------+
+| Directory | Contents |
++=======================+===========================================+
+| :file:`doc/user` | User documentation; configuration guides; |
+| | protocol overviews |
++-----------------------+-------------------------------------------+
+| :file:`doc/developer` | Developer's documentation; API specs; |
+| | datastructures; architecture overviews; |
+| | project management procedure |
++-----------------------+-------------------------------------------+
+| :file:`doc/manpages` | Source for manpages |
++-----------------------+-------------------------------------------+
+| :file:`doc/figures` | Images and diagrams |
++-----------------------+-------------------------------------------+
+| :file:`doc/extra` | Miscellaneous Sphinx extensions, scripts, |
+| | customizations, etc. |
++-----------------------+-------------------------------------------+
+
+Each of these directories, with the exception of :file:`doc/figures` and
+:file:`doc/extra`, contains a Sphinx-generated Makefile and a configuration
+script :file:`conf.py` used to set various document parameters. The Makefile
+can be used for a variety of targets; invoke `make help` in any of these
+directories for a listing of available output formats. For convenience, there
+is a top-level :file:`Makefile.am` with targets to build PDF and HTML versions
+of both the developer and user documentation. That makefile is also
+responsible for building the manual pages shipped with distribution builds.
+
+Indent and styling should follow existing conventions:
+
+- 3 spaces for indents under directives
+- Cross references may contain only lowercase alphanumeric characters and
+ hyphens ('-')
+- Lines wrapped to 80 characters where possible
+
+Characters for header levels should follow the Python documentation guide:
+
+- ``#`` with overline, for parts
+- ``*`` with overline, for chapters
+- ``=``, for sections
+- ``-``, for subsections
+- ``^``, for subsubsections
+- ``"``, for paragraphs
+
+After you have made your changes, please make sure that you can invoke
+``make latexpdf`` and ``make html`` with no warnings.
+
+The documentation is currently incomplete and needs love. If you find a broken
+cross-reference or figure, a dead hyperlink, a style issue or any other
+nastiness, we gladly accept documentation patches.
+
+To build the docs, please ensure you have installed a recent version of
+`Sphinx <http://www.sphinx-doc.org/en/stable/install.html>`_. If you want to
+build LaTeX or PDF docs, you will also need a full LaTeX distribution
+installed.
+
+Code
+----
+
+FRR is a large and complex software project developed by many different people
+over a long period of time. Without adequate documentation, it can be
+exceedingly difficult to understand code segments, APIs and other interfaces.
+In the interest of keeping the project healthy and maintainable, you should
+make every effort to document your code so that other people can understand
+what it does without needing to closely read the code itself.
+
+Some specific guidelines that contributors should follow are:
+
+- Functions exposed in header files should have descriptive comments above
+ their signatures in the header file. At a minimum, a function comment should
+ contain information about the return value, parameters, and a general summary
+ of the function's purpose. Documentation on parameter values can be omitted
+ if it is (very) obvious what they are used for.
+
+ Function comments must follow the style for multiline comments laid out in
+ the kernel style guide.
+
+ Example:
+
+ .. code-block:: c
+
+ /*
+ * Determines whether or not a string is cool.
+ *
+ * text
+ * the string to check for coolness
+ *
+ * is_clccfc
+ * whether capslock is cruise control for cool
+ *
+ * Returns:
+ * 7 if the text is cool, 0 otherwise
+ */
+ int check_coolness(const char *text, bool is_clccfc);
+
+ Function comments should make it clear what parameters and return values are
+ used for.
+
+- Static functions should have descriptive comments in the same form as above
+ if what they do is not immediately obvious. Use good engineering judgement
+ when deciding whether a comment is necessary. If you are unsure, document
+ your code.
+- Global variables, static or not, should have a comment describing their use.
+- **For new code in lib/, these guidelines are hard requirements.**
+
+If you make significant changes to portions of the codebase covered in the
+Developer's Manual, add a major subsystem or feature, or gain arcane mastery of
+some undocumented or poorly documented part of the codebase, please document
+your work so others can benefit. If you add a major feature or introduce a new
+API, please document the architecture and API to the best of your abilities in
+the Developer's Manual, using good judgement when choosing where to place it.
+
+Finally, if you come across some code that is undocumented and feel like
+going above and beyond, document it! We absolutely appreciate and accept
+patches that document previously undocumented code.
+
+User
+----
+
+If you are contributing code that adds significant user-visible functionality
+please document how to use it in :file:`doc/user`. Use good judgement when
+choosing where to place documentation. For example, instructions on how to use
+your implementation of a new BGP draft should go in the BGP chapter instead of
+being its own chapter. If you are adding a new protocol daemon, please create a
+new chapter.
+
+FRR Specific Markup
+-------------------
+
+FRR has some customizations applied to the Sphinx markup that go a long way
+towards making documentation easier to use, write and maintain.
+
+CLI Commands
+^^^^^^^^^^^^
+
+When documenting CLI please use the ``.. clicmd::`` directive. This directive
+will format the command and generate index entries automatically. For example,
+the command :clicmd:`show pony` would be documented as follows:
+
+.. code-block:: rest
+
+ .. clicmd:: show pony
+
+ Prints an ASCII pony. Example output::
+
+ >>\.
+ /_ )`.
+ / _)`^)`. _.---. _
+ (_,' \ `^-)"" `.\
+ | | \
+ \ / |
+ / \ /.___.'\ (\ (_
+ < ,"|| \ |`. \`-'
+ \\ () )| )/
+ hjw |_>|> /_] //
+ /_] /_]
+
+
+When documented this way, CLI commands can be cross referenced with the
+``:clicmd:`` inline markup like so:
+
+.. code-block:: rest
+
+ :clicmd:`show pony`
+
+This is very helpful for users who want to quickly remind themselves what a
+particular command does.
+
+When documenting a CLI command that has a ``no`` form, please do not include
+the ``no`` form; i.e., ``no show pony`` would not be documented anywhere.
+Since most commands have ``no`` forms, users should be able to infer these or
+get help from vtysh's completions.
+
+When documenting commands that have lots of possible variants, just document
+the single command in summary rather than enumerating each possible variant.
+E.g. for ``show pony [foo|bar]``, do not:
+
+.. code-block:: rest
+
+ .. clicmd:: show pony
+ .. clicmd:: show pony foo
+ .. clicmd:: show pony bar
+
+Do:
+
+.. code-block:: rest
+
+ .. clicmd:: show pony [foo|bar]
+
+
+Configuration Snippets
+^^^^^^^^^^^^^^^^^^^^^^
+
+When putting blocks of example configuration please use the
+``.. code-block::`` directive and specify ``frr`` as the highlighting language,
+as in the following example. This will tell Sphinx to use a custom Pygments
+lexer to highlight FRR configuration syntax.
+
+.. code-block:: rest
+
+ .. code-block:: frr
+
+ !
+ ! Example configuration file.
+ !
+ log file /tmp/log.log
+ service integrated-vtysh-config
+ !
+ ip route 1.2.3.0/24 reject
+ ipv6 route de:ea:db:ee:ff::/64 reject
+ !
+
+
+.. _GitHub: https://github.com/frrouting/frr
+.. _GitHub issues: https://github.com/frrouting/frr/issues
+
+.. rubric:: Footnotes
+
+.. [#tool_style_conflicts] For example, lines over 80 characters are allowed
+ for text strings to make it possible to search the code for them: please
+ see `Linux kernel style (breaking long lines and strings) <https://www.kernel.org/doc/html/v4.10/process/coding-style.html#breaking-long-lines-and-strings>`_
+ and `Issue #1794 <https://github.com/FRRouting/frr/issues/1794>`_.
diff --git a/doc/developer/xrefs.rst b/doc/developer/xrefs.rst
new file mode 100644
index 0000000..e8e07df
--- /dev/null
+++ b/doc/developer/xrefs.rst
@@ -0,0 +1,215 @@
+.. _xrefs:
+
+Introspection (xrefs)
+=====================
+
+The FRR library provides an introspection facility called "xrefs." The intent
+is to provide structured access to annotated entities in the compiled binary,
+such as log messages and thread scheduling calls.
+
+Enabling and use
+----------------
+
+Support for emitting an xref is included in the macros for the specific
+entities, e.g. :c:func:`zlog_info` contains the relevant statements. The only
+requirement for the system to work is a GNU compatible linker that supports
+section start/end symbols. (The only known linker on any system FRR supports
+that does not do this is the Solaris linker.)
+
+To verify xrefs have been included in a binary or dynamic library, run
+``readelf -n binary``. For individual object files, it's
+``readelf -S object.o | grep xref_array`` instead.
+
+Structure and contents
+----------------------
+
+As a slight improvement to security and fault detection, xrefs are divided into
+a ``const struct xref *`` and an optional ``struct xrefdata *``. The required
+const part contains:
+
+.. c:member:: enum xref_type xref.type
+
+ Identifies what kind of object the xref points to.
+
+.. c:member:: int xref.line
+.. c:member:: const char *xref.file
+.. c:member:: const char *xref.func
+
+ Source code location of the xref. ``func`` will be ``<global>`` for
+ xrefs outside of a function.
+
+.. c:member:: struct xrefdata *xref.xrefdata
+
+ The optional writable part of the xref. NULL if no non-const part exists.
+
+The optional non-const part has:
+
+.. c:member:: const struct xref *xrefdata.xref
+
+ Pointer back to the constant part. Since circular pointers are close to
+ impossible to emit from inside a function body's static variables, this
+ is initialized at startup.
+
+.. c:member:: char xrefdata.uid[16]
+
+ Unique identifier, see below.
+
+.. c:member:: const char *xrefdata.hashstr
+.. c:member:: uint32_t xrefdata.hashu32[2]
+
+ Input to unique identifier calculation. These should encompass all
+ details needed to make an xref unique. If more than one string should
+ be considered, use string concatenation for the initializer.
+
+Both structures can be extended by embedding them in a larger type-specific
+struct, e.g. ``struct xref_logmsg *``.
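+
+Putting the members above together, the two structures look roughly like the
+following sketch; see :file:`lib/xref.h` for the authoritative definitions:
+
+.. code-block:: c
+
+   /* sketch only -- the enum has one value per kind of xref */
+   enum xref_type { XREFT_NONE = 0, /* ... */ };
+
+   struct xref {
+           struct xrefdata *xrefdata; /* optional writable part, or NULL */
+           enum xref_type type;
+           int line;
+           const char *file, *func;   /* "<global>" outside of functions */
+   };
+
+   struct xrefdata {
+           const struct xref *xref;   /* back-pointer, filled in at startup */
+           char uid[16];              /* unique identifier, see below */
+           const char *hashstr;       /* inputs to the identifier hash */
+           uint32_t hashu32[2];
+   };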
+
+Unique identifiers
+------------------
+
+All xrefs that have a writable ``struct xrefdata *`` part are assigned a
+unique identifier, formed as a Crockford base32 encoding of a SHA256 hash
+over:
+
+- the source filename
+- the ``hashstr`` field
+- the ``hashu32`` fields
+
+.. note::
+
+ Function names and line numbers are intentionally not included to allow
+ moving items within a file without affecting the identifier.
+
+For running executables, this hash is calculated once at startup. When
+directly reading from an ELF file with external tooling, the value must be
+calculated when necessary.
+
+The identifiers have the form ``AXXXX-XXXXX`` where ``X`` is
+``0-9, A-Z except I,L,O,U`` and ``A`` is ``G-Z except I,L,O,U`` (i.e. the
+identifiers always start with a letter.) When reading identifiers from user
+input, ``I`` and ``L`` should be replaced with ``1`` and ``O`` should be
+replaced with ``0``. There are 49 bits of entropy in this identifier.
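+
+A tiny sketch of that input normalization; this helper is invented here and
+is not an FRR API:
+
+.. code-block:: c
+
+   #include <ctype.h>
+
+   /* map a user-typed character onto the identifier alphabet */
+   static char uid_norm_char(char c)
+   {
+           c = (char)toupper((unsigned char)c);
+           if (c == 'I' || c == 'L')
+                   return '1';
+           if (c == 'O')
+                   return '0';
+           return c;
+   }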
+
+Underlying machinery
+--------------------
+
+Xrefs are nothing other than global variables with some extra glue to make
+them possible to find from the outside by looking at the binary. The first
+non-obvious part is that they can occur inside of functions, since they're
+defined as ``static``. They don't have a visible name -- they don't need one.
+
+To make finding these variables possible, another global variable, a pointer
+to the first one, is created in the same way. However, it is put in a special
+ELF section through ``__attribute__((section("xref_array")))``. This is the
+section you can see with readelf.
+
+Finally, on the level of a whole executable or library, the linker will place
+the individual pointers consecutively, since they're in the same section;
+hence the array. The start and end of this array are given by the
+linker-autogenerated ``__start_xref_array`` and ``__stop_xref_array`` symbols.
+Using these, both a constructor to run at startup as well as an ELF note are
+created.
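+
+In code, the pattern looks roughly like this sketch (using the ``struct
+xref`` layout from above; the ``used`` attribute, added here, keeps the
+otherwise unreferenced pointer alive):
+
+.. code-block:: c
+
+   /* static variable inside some function or at file scope */
+   static const struct xref my_xref = { /* type, file, line, func */ };
+
+   /* pointer to it, placed in the section the linker collects */
+   static const struct xref *const my_xref_p
+           __attribute__((used, section("xref_array"))) = &my_xref;
+
+   /* linker-generated bounds of the collected pointer array */
+   extern const struct xref *const __start_xref_array[];
+   extern const struct xref *const __stop_xref_array[];
+
+   static void walk_xrefs(void)
+   {
+           const struct xref *const *xp;
+
+           for (xp = __start_xref_array; xp < __stop_xref_array; xp++)
+                   process_xref(*xp); /* hypothetical consumer */
+   }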
+
+The ELF note is the entrypoint for externally retrieving xrefs from a binary
+without having to run it. It can be found by walking through the ELF data
+structures even if the binary has been fully stripped of debug and section
+information. SystemTap's SDT probes & LTTng's trace points work in the same
+way (though they emit one note for each probe, while xrefs only emit one note
+in total, which refers to the array.) Using xrefs does not impact SystemTap
+or LTTng; the notes have identifiers by which they can be distinguished.
+
+The ELF structure of a linked binary (library or executable) will look like
+this::
+
+ $ readelf --wide -l -n lib/.libs/libfrr.so
+
+ Elf file type is DYN (Shared object file)
+ Entry point 0x67d21
+ There are 12 program headers, starting at offset 64
+
+ Program Headers:
+ Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
+ PHDR 0x000040 0x0000000000000040 0x0000000000000040 0x0002a0 0x0002a0 R 0x8
+ INTERP 0x125560 0x0000000000125560 0x0000000000125560 0x00001c 0x00001c R 0x10
+ [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
+ LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x02aff0 0x02aff0 R 0x1000
+ LOAD 0x02b000 0x000000000002b000 0x000000000002b000 0x0b2889 0x0b2889 R E 0x1000
+ LOAD 0x0de000 0x00000000000de000 0x00000000000de000 0x070048 0x070048 R 0x1000
+ LOAD 0x14e428 0x000000000014f428 0x000000000014f428 0x00fb70 0x01a2b8 RW 0x1000
+ DYNAMIC 0x157a40 0x0000000000158a40 0x0000000000158a40 0x000270 0x000270 RW 0x8
+ NOTE 0x0002e0 0x00000000000002e0 0x00000000000002e0 0x00004c 0x00004c R 0x4
+ TLS 0x14e428 0x000000000014f428 0x000000000014f428 0x000000 0x000008 R 0x8
+ GNU_EH_FRAME 0x12557c 0x000000000012557c 0x000000000012557c 0x00819c 0x00819c R 0x4
+ GNU_STACK 0x000000 0x0000000000000000 0x0000000000000000 0x000000 0x000000 RW 0x10
+ GNU_RELRO 0x14e428 0x000000000014f428 0x000000000014f428 0x009bd8 0x009bd8 R 0x1
+
+ (...)
+
+ Displaying notes found in: .note.gnu.build-id
+ Owner Data size Description
+ GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring) Build ID: 6a1f66be38b523095ebd6ec13cc15820cede903d
+
+ Displaying notes found in: .note.FRR
+ Owner Data size Description
+ FRRouting 0x00000010 Unknown note type: (0x46455258) description data: 6c eb 15 00 00 00 00 00 74 ec 15 00 00 00 00 00
+
+Where 0x15eb6c…0x15ec74 are the offsets (relative to the note itself) where
+the xref array is in the file. Also note the owner is clearly marked as
+"FRRouting" and the type is "XREF" in hex.
+
+For SystemTap's use of ELF notes, refer to
+https://libstapsdt.readthedocs.io/en/latest/how-it-works/internals.html as an
+entry point.
+
+.. note::
+
+ Due to GCC bug 41091, the "xref_array" section is not correctly generated
+ for C++ code when compiled by GCC. A workaround is present for runtime
+ functionality, but to extract the xrefs from a C++ source file, it needs
+ to be built with clang (or a future fixed version of GCC) instead.
+
+Extraction tool
+---------------
+
+The FRR source contains a matching tool to extract xref data from compiled ELF
+binaries in ``python/xrelfo.py``. This tool uses CPython extensions
+implemented in ``clippy`` and must therefore be executed with ``clippy``
+rather than a regular Python interpreter.
+
+``xrelfo.py`` processes input from one or more ELF files (.o, .so, executable),
+libtool objects (.lo, .la, executable wrapper script) or JSON (output from
+``xrelfo.py``) and generates an output JSON file. During a standard FRR build,
+it is invoked on all binaries and libraries and the result is combined into
+``frr.json``.
+
+ELF files from any operating system, CPU architecture and endianness can be
+processed on any host. Any issues with this are bugs in ``xrelfo.py``
+(or clippy's ELF code.)
+
+``xrelfo.py`` also performs some sanity checking, particularly on log
+messages. The following options are available:
+
+.. option:: -o OUTPUT
+
+ Filename to write JSON output to. As a convention, a ``.xref`` filename
+ extension is used.
+
+.. option:: -Wlog-format
+
+ Performs extra checks on log message format strings, particularly checks
+ for ``\t`` and ``\n`` characters (which should not be used in log messages).
+
+.. option:: -Wlog-args
+
+ Generates cleanup hints for format string arguments where
+ :c:func:`printfrr()` extensions could be used, e.g. replacing ``inet_ntoa``
+ with ``%pI4``.
+
+.. option:: --profile
+
+ Runs the Python profiler to identify hotspots in the ``xrelfo.py`` code.
+
+``xrelfo.py`` uses information about C structure definitions saved in
+``python/xrefstructs.json``. This file is included with the FRR sources and
+only needs to be regenerated when some of the ``struct xref_*`` definitions
+are changed (which should be almost never). The file is written by
+``python/tiabwarfo.py``, which uses ``pahole`` to extract the necessary data
+from DWARF information.
diff --git a/doc/developer/zebra.rst b/doc/developer/zebra.rst
new file mode 100644
index 0000000..be2952e
--- /dev/null
+++ b/doc/developer/zebra.rst
@@ -0,0 +1,232 @@
+.. _zebra:
+
+*****
+Zebra
+*****
+
+.. _zebra-protocol:
+
+Overview of the Zebra Protocol
+==============================
+
+The Zebra protocol (or ``ZAPI``) is used by protocol daemons to
+communicate with the **zebra** daemon.
+
+Each protocol daemon may request and send information to and from the
+**zebra** daemon such as interface states, routing state,
+nexthop-validation, and so on. Protocol daemons may also install
+routes with **zebra**. The **zebra** daemon manages which routes are
+installed into the forwarding table with the kernel. Some daemons use
+more than one ZAPI connection. This is supported: each ZAPI session is
+identified by the tuple ``{protocol, instance, session_id}``. LDPD
+is an example: it uses a second, synchronous ZAPI session to manage
+label blocks. The default value for ``session_id`` is zero; daemons
+that use multiple ZAPI sessions must assign unique values to the
+sessions' ids.
+
+The Zebra protocol is a streaming protocol, with a common header. Version 0
+lacks a version field and is implicitly versioned. Version 1 and all subsequent
+versions have a version field. Version 0 can be distinguished from all other
+versions by examining the 3rd byte of the header, which contains a marker value
+of 255 (in Quagga) or 254 (in FRR) for all versions except version 0. The
+marker byte corresponds to the command field in version 0, and the marker value
+is a reserved command in version 0.
+
+Version History
+---------------
+
+- Version 0
+
+ Used by all versions of GNU Zebra and all versions of Quagga up to and
+ including Quagga 0.98. This version has no ``version`` field, and so is
+ implicitly versioned as version 0.
+
+- Version 1
+
+ Added ``marker`` and ``version`` fields, increased ``command`` field to 16
+ bits. Used by Quagga versions 0.99.3 through 0.99.20.
+
+- Version 2
+
+ Used by Quagga versions 0.99.21 through 0.99.23.
+
+- Version 3
+
+ Added ``vrf_id`` field. Used by Quagga from version 0.99.23 until the FRR
+ fork.
+
+- Version 4
+
+ Changed the marker value to 254 to prevent mixing and matching of Quagga
+ and FRR daemon binaries. Used by FRR versions 2.0 through 3.0.3.
+
+- Version 5
+
+ Increased VRF identifier field from 16 to 32 bits. Used by FRR versions 4.0
+ through 5.0.1.
+
+- Version 6
+
+ Removed the following commands:
+
+ * ZEBRA_IPV4_ROUTE_ADD
+ * ZEBRA_IPV4_ROUTE_DELETE
+ * ZEBRA_IPV6_ROUTE_ADD
+ * ZEBRA_IPV6_ROUTE_DELETE
+
+ Used since FRR version 6.0.
+
+
+Zebra Protocol Definition
+=========================
+
+Zebra Protocol Header Field Definitions
+---------------------------------------
+
+Length
+ Total packet length including this header.
+
+Marker
+ Static marker. The marker value, when it exists, is 255 in all versions of
+ Quagga. It is 254 in all versions of FRR. This is to allow version 0 headers
+ (which do not include version explicitly) to be distinguished from versioned
+ headers.
+
+Version
+ Zebra protocol version number. Clients should not continue processing
+ messages past the version field for versions they do not recognise.
+
+Command
+ The Zebra protocol command.
+
+
+Current Version
+^^^^^^^^^^^^^^^
+
+::
+
+ Version 5, 6
+
+ 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Length | Marker | Version |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | VRF ID |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Command |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
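+
+As a C structure, the version 5/6 header above corresponds roughly to the
+following sketch (cf. ``struct zmsghdr`` in ``lib/zclient.h``):
+
+.. code-block:: c
+
+   #include <stdint.h>
+
+   struct zmsghdr {
+           uint16_t length;  /* total packet length including this header */
+           uint8_t marker;   /* 254 in FRR, 255 in Quagga */
+           uint8_t version;  /* 5 or 6 */
+           uint32_t vrf_id;
+           uint16_t command;
+   } __attribute__((packed));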
+
+
+Past Versions
+^^^^^^^^^^^^^
+
+::
+
+ Version 0
+
+ 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Length | Command |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+::
+
+ Version 1, 2
+
+ 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Length | Marker | Version |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Command |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+
+::
+
+ Version 3, 4
+
+ 0 1 2 3
+ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | Length | Marker | Version |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | VRF ID | Command |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+
+Zebra Protocol Commands
+-----------------------
+
+The definitions of zebra protocol commands can be found at ``lib/zclient.h``.
+
+Dataplane batching
+==================
+
+Dataplane batching is an optimization feature that reduces the processing
+time involved in the user space to kernel space transition for every message we
+want to send.
+
+Design
+------
+
+With our dataplane abstraction, we create a queue of dataplane context objects
+for the messages we want to send to the kernel. In a separate pthread, we
+loop over this queue and send the context objects to the appropriate
+dataplane. The batching enhancement integrates tightly with the dataplane
+context objects, so that they can be sent in batches to dataplanes that
+support it.
+
+There is one main change in the dataplane code: instead of calling
+kernel-dependent functions one by one, it hands a list of work down to the
+kernel level for processing.
+
+Netlink
+^^^^^^^
+
+At the moment, this is the only dataplane that supports sending messages in
+batches.
+
+When messages must be sent to the kernel, they are added consecutively to the
+batch, represented by ``struct nl_batch``. Context objects are first encoded
+to their binary representation. All the encoding functions use the same
+interface: they take a context object, a buffer and the size of the buffer as
+arguments. They must handle the situation in which a message would not fit in
+the buffer and return a proper error. To achieve zero-copy (in user space, at
+least), messages are encoded into the same buffer that will be passed to the
+kernel. Hence, we can theoretically hit the boundary of the buffer.
+
+Messages stored in the batch are sent when one of the following conditions
+occurs:
+
+- An encoding function returns the buffer overflow error. The context
+  object that caused this error is re-added to the new, empty batch.
+
+- The size of the batch hits a certain limit.
+
+- The namespace of the context object currently being processed differs
+  from that of the previous ones. Those messages have to be sent through
+  distinct sockets, so they cannot share the same buffer.
+
+- The last message from the list has been processed.
+
+As mentioned earlier, there is a special threshold that is smaller than
+the size of the underlying buffer. It prevents the overflow error and thus
+eliminates the case in which a message is encoded twice.
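+
+The following self-contained sketch mirrors that control flow; the types and
+function names are invented for illustration and do not match the actual
+zebra sources:
+
+.. code-block:: c
+
+   #include <stddef.h>
+   #include <stdint.h>
+   #include <string.h>
+
+   struct batch {
+           uint8_t buf[8192];
+           size_t used;
+           int ns; /* namespace of the queued messages */
+   };
+
+   static int encode_msg(const void *msg, size_t len, uint8_t *dst,
+                         size_t room)
+   {
+           if (len > room)
+                   return -1; /* the "buffer overflow" error */
+           memcpy(dst, msg, len);
+           return (int)len;
+   }
+
+   static void flush(struct batch *b)
+   {
+           /* send b->buf[0 .. b->used) to the kernel here */
+           b->used = 0;
+   }
+
+   static void enqueue(struct batch *b, int msg_ns, const void *msg,
+                       size_t len)
+   {
+           /* different namespace: different socket, so flush first */
+           if (b->used && msg_ns != b->ns)
+                   flush(b);
+           b->ns = msg_ns;
+
+           int n = encode_msg(msg, len, b->buf + b->used,
+                              sizeof(b->buf) - b->used);
+           if (n < 0) { /* overflow: send the batch, re-add the message */
+                   flush(b);
+                   n = encode_msg(msg, len, b->buf, sizeof(b->buf));
+           }
+           if (n > 0)
+                   b->used += (size_t)n;
+
+           if (b->used >= sizeof(b->buf) - 1024) /* size threshold */
+                   flush(b);
+   }
+
+After the last message has been enqueued, the caller flushes whatever remains
+in the batch.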
+
+The buffer used in the batching is global, since allocating that much memory
+for every batch would be inefficient. However, its size can be changed
+dynamically, using the hidden vtysh command:
+``zebra kernel netlink batch-tx-buf (1-1048576) (1-1048576)``. This feature is
+only used in tests and shouldn't be used anywhere else.
+
+For every failed message in the batch, the kernel responds with an error
+message. Error messages are kept in the same order as the requests were sent,
+so parsing the response is straightforward. We use the two-pointer technique
+to match requests with responses and then set the appropriate status on the
+dataplane context objects. There is also a global receive buffer, and it is
+assumed that whatever the kernel sends will fit in this buffer. The payload of
+a netlink error message consists of an error code and the original netlink
+message of the request, so the batch response won't be bigger than the batch
+request plus some space for the headers.