author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:13 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:13 +0000
commit     389020e14594e4894e28d1eb9103c210b142509e (patch)
tree       2ba734cdd7a243f46dda7c3d0cc88c2293d9699f /doc/rbd/libvirt.rst
parent     Adding upstream version 18.2.2. (diff)
Adding upstream version 18.2.3. (upstream/18.2.3)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>

Diffstat (limited to 'doc/rbd/libvirt.rst'):
 -rw-r--r--  doc/rbd/libvirt.rst | 122
 1 file changed, 62 insertions(+), 60 deletions(-)
diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst
index e3523f8a8..a55a4f95b 100644
--- a/doc/rbd/libvirt.rst
+++ b/doc/rbd/libvirt.rst
@@ -4,11 +4,11 @@
.. index:: Ceph Block Device; livirt
-The ``libvirt`` library creates a virtual machine abstraction layer between
-hypervisor interfaces and the software applications that use them. With
-``libvirt``, developers and system administrators can focus on a common
+The ``libvirt`` library creates a virtual machine abstraction layer between
+hypervisor interfaces and the software applications that use them. With
+``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
-to many different hypervisors, including:
+to many different hypervisors, including:
- QEMU/KVM
- XEN
@@ -18,7 +18,7 @@ to many different hypervisors, including:
Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
-illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
+illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
.. ditaa::
@@ -41,10 +41,11 @@ illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
The most common ``libvirt`` use case involves providing Ceph block devices to
-cloud solutions like OpenStack or CloudStack. The cloud solution uses
+cloud solutions like OpenStack, OpenNebula or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
-devices via ``librbd``. See `Block Devices and OpenStack`_ and `Block Devices
-and CloudStack`_ for details. See `Installation`_ for installation details.
+devices via ``librbd``. See `Block Devices and OpenStack`_,
+`Block Devices and OpenNebula`_ and `Block Devices and CloudStack`_ for details.
+See `Installation`_ for installation details.
You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.
@@ -62,12 +63,12 @@ Configuring Ceph
To configure Ceph for use with ``libvirt``, perform the following steps:
-#. `Create a pool`_. The following example uses the
+#. `Create a pool`_. The following example uses the
pool name ``libvirt-pool``.::
ceph osd pool create libvirt-pool
- Verify the pool exists. ::
+ Verify the pool exists. ::
ceph osd lspools
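Taken together, the pool setup this hunk documents amounts to a short shell session; a minimal sketch, assuming a recent Ceph release where ``rbd pool init`` is the usual follow-up to pool creation (that step is not part of the patched text)::

    ceph osd pool create libvirt-pool    # pool that will hold the libvirt/QEMU images
    rbd pool init libvirt-pool           # mark the pool for RBD use (assumed step)
    ceph osd lspools                     # verify the pool is listed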
@@ -80,23 +81,23 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
and references ``libvirt-pool``. ::
ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'
-
- Verify the name exists. ::
-
+
+ Verify the name exists. ::
+
ceph auth ls
- **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
- not the Ceph name ``client.libvirt``. See `User Management - User`_ and
- `User Management - CLI`_ for a detailed explanation of the difference
- between ID and name.
+ **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
+ not the Ceph name ``client.libvirt``. See `User Management - User`_ and
+ `User Management - CLI`_ for a detailed explanation of the difference
+ between ID and name.
-#. Use QEMU to `create an image`_ in your RBD pool.
+#. Use QEMU to `create an image`_ in your RBD pool.
The following example uses the image name ``new-libvirt-image``
and references ``libvirt-pool``. ::
qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G
- Verify the image exists. ::
+ Verify the image exists. ::
rbd -p libvirt-pool ls
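As a quick sanity check after the steps in this hunk, something along these lines should confirm both the cephx user and the image; ``ceph auth get`` and ``rbd info`` are our additions, not part of the patch::

    ceph auth get client.libvirt                 # show the user's key and capabilities
    rbd info libvirt-pool/new-libvirt-image      # show size, object layout and features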
@@ -111,7 +112,7 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
The ``client.libvirt`` section name should match the cephx user you created
- above.
+ above.
If SELinux or AppArmor is enabled, note that this could prevent the client
process (qemu via libvirt) from doing some operations, such as writing logs
or operate the images or admin socket to the destination locations (``/var/
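For reference, the ``[client.libvirt]`` section this hunk refers to typically ends up looking roughly like this in ``/etc/ceph/ceph.conf``; the ``log file`` line is an assumption drawn from the surrounding upstream documentation rather than from the hunk itself::

    [client.libvirt]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log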
@@ -123,7 +124,7 @@ Preparing the VM Manager
========================
You may use ``libvirt`` without a VM manager, but you may find it simpler to
-create your first domain with ``virt-manager``.
+create your first domain with ``virt-manager``.
#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::
@@ -131,7 +132,7 @@ create your first domain with ``virt-manager``.
#. Download an OS image (if necessary).
-#. Launch the virtual machine manager. ::
+#. Launch the virtual machine manager. ::
sudo virt-manager
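On a Debian or Ubuntu host, the install step elided by the hunk boundary is usually just the distribution package; the package name is assumed here, so adjust for your platform::

    sudo apt-get install virt-manager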
@@ -142,12 +143,12 @@ Creating a VM
To create a VM with ``virt-manager``, perform the following steps:
-#. Press the **Create New Virtual Machine** button.
+#. Press the **Create New Virtual Machine** button.
#. Name the new virtual machine domain. In the exemplary embodiment, we
use the name ``libvirt-virtual-machine``. You may use any name you wish,
- but ensure you replace ``libvirt-virtual-machine`` with the name you
- choose in subsequent commandline and configuration examples. ::
+ but ensure you replace ``libvirt-virtual-machine`` with the name you
+ choose in subsequent commandline and configuration examples. ::
libvirt-virtual-machine
@@ -155,9 +156,9 @@ To create a VM with ``virt-manager``, perform the following steps:
/path/to/image/recent-linux.img
- **NOTE:** Import a recent image. Some older images may not rescan for
+ **NOTE:** Import a recent image. Some older images may not rescan for
virtual devices properly.
-
+
#. Configure and start the VM.
#. You may use ``virsh list`` to verify the VM domain exists. ::
@@ -179,11 +180,11 @@ you that root privileges are required. For a reference of ``virsh``
commands, refer to `Virsh Command Reference`_.
-#. Open the configuration file with ``virsh edit``. ::
+#. Open the configuration file with ``virsh edit``. ::
sudo virsh edit {vm-domain-name}
- Under ``<devices>`` there should be a ``<disk>`` entry. ::
+ Under ``<devices>`` there should be a ``<disk>`` entry. ::
<devices>
<emulator>/usr/bin/kvm</emulator>
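The existing file-backed ``<disk>`` entry that ``virsh edit`` shows under ``<devices>`` generally looks something like this sketch; the image path and target device are illustrative only::

    <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/path/to/image/recent-linux.img'/>
        <target dev='vda' bus='virtio'/>
    </disk>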
@@ -196,18 +197,18 @@ commands, refer to `Virsh Command Reference`_.
Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
- The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
+ The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
`Virtio`_ for details.
- **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
- the configuration file under ``/etc/libvirt/qemu`` with a text editor,
- ``libvirt`` may not recognize the change. If there is a discrepancy between
- the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
- ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
+ **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
+ the configuration file under ``/etc/libvirt/qemu`` with a text editor,
+ ``libvirt`` may not recognize the change. If there is a discrepancy between
+ the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
+ ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
properly.
-
-#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::
+
+#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::
<disk type='network' device='disk'>
<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
@@ -216,21 +217,21 @@ commands, refer to `Virsh Command Reference`_.
<target dev='vdb' bus='virtio'/>
</disk>
- Replace ``{monitor-host}`` with the name of your host, and replace the
- pool and/or image name as necessary. You may add multiple ``<host>``
+ Replace ``{monitor-host}`` with the name of your host, and replace the
+ pool and/or image name as necessary. You may add multiple ``<host>``
entries for your Ceph monitors. The ``dev`` attribute is the logical
- device name that will appear under the ``/dev`` directory of your
- VM. The optional ``bus`` attribute indicates the type of disk device to
- emulate. The valid settings are driver specific (e.g., "ide", "scsi",
+ device name that will appear under the ``/dev`` directory of your
+ VM. The optional ``bus`` attribute indicates the type of disk device to
+ emulate. The valid settings are driver specific (e.g., "ide", "scsi",
"virtio", "xen", "usb" or "sata").
-
+
See `Disks`_ for details of the ``<disk>`` element, and its child elements
and attributes.
-
+
#. Save the file.
-#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
- default), you must generate a secret. ::
+#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
+ default), you must generate a secret. ::
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
@@ -249,11 +250,11 @@ commands, refer to `Virsh Command Reference`_.
ceph auth get-key client.libvirt | sudo tee client.libvirt.key
-#. Set the UUID of the secret. ::
+#. Set the UUID of the secret. ::
sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml
- You must also set the secret manually by adding the following ``<auth>``
+ You must also set the secret manually by adding the following ``<auth>``
entry to the ``<disk>`` element you entered earlier (replacing the
``uuid`` value with the result from the command line example above). ::
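Pulled together across the hunk boundaries, the secret workflow is roughly the following; the ``virsh secret-define`` step falls outside the visible hunks and is assumed from the surrounding upstream documentation::

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
        <usage type='ceph'>
            <name>client.libvirt secret</name>
        </usage>
    </secret>
    EOF
    sudo virsh secret-define --file secret.xml     # prints the UUID of the new secret
    ceph auth get-key client.libvirt | sudo tee client.libvirt.key
    sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml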
@@ -266,14 +267,14 @@ commands, refer to `Virsh Command Reference`_.
<auth username='libvirt'>
<secret type='ceph' uuid='{uuid of secret}'/>
</auth>
- <target ...
+ <target ...
- **NOTE:** The exemplary ID is ``libvirt``, not the Ceph name
- ``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
- you use the ID component of the Ceph name you generated. If for some reason
- you need to regenerate the secret, you will have to execute
- ``sudo virsh secret-undefine {uuid}`` before executing
+ **NOTE:** The exemplary ID is ``libvirt``, not the Ceph name
+ ``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
+ you use the ID component of the Ceph name you generated. If for some reason
+ you need to regenerate the secret, you will have to execute
+ ``sudo virsh secret-undefine {uuid}`` before executing
``sudo virsh secret-set-value`` again.
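With the ``<auth>`` element added, the complete RBD ``<disk>`` entry reads roughly as follows; the monitor host and the secret UUID are placeholders::

    <disk type='network' device='disk'>
        <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
            <host name='{monitor-host}' port='6789'/>
        </source>
        <auth username='libvirt'>
            <secret type='ceph' uuid='{uuid of secret}'/>
        </auth>
        <target dev='vdb' bus='virtio'/>
    </disk>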
@@ -285,30 +286,31 @@ To verify that the VM and Ceph are communicating, you may perform the
following procedures.
-#. Check to see if Ceph is running::
+#. Check to see if Ceph is running::
ceph health
-#. Check to see if the VM is running. ::
+#. Check to see if the VM is running. ::
sudo virsh list
-#. Check to see if the VM is communicating with Ceph. Replace
- ``{vm-domain-name}`` with the name of your VM domain::
+#. Check to see if the VM is communicating with Ceph. Replace
+ ``{vm-domain-name}`` with the name of your VM domain::
sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'
#. Check to see if the device from ``<target dev='vdb' bus='virtio'/>`` exists::
-
+
virsh domblklist {vm-domain-name} --details
-If everything looks okay, you may begin using the Ceph block device
+If everything looks okay, you may begin using the Ceph block device
within your VM.
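Once those checks pass, the new device can be exercised inside the guest in the usual way; a minimal sketch, assuming the disk appeared as ``/dev/vdb`` per the ``<target>`` entry above::

    sudo mkfs.ext4 /dev/vdb
    sudo mkdir -p /mnt/ceph-rbd
    sudo mount /dev/vdb /mnt/ceph-rbd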
.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
+.. _Block Devices and OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html#datastore-internals
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user