From 19f556125be7e658a0b7d27bcd6c453768ba9337 Mon Sep 17 00:00:00 2001 From: JoeOster <52936608+JoeOster@users.noreply.github.com> Date: Mon, 25 Apr 2022 16:11:04 -0700 Subject: [PATCH 01/14] First pass changes Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com> --- docs/overview/architecture.md | 57 +++++++++++---------- docs/overview/data_integrity.md | 66 +++++++++++++----------- docs/overview/fault.md | 41 ++++++++------- docs/overview/security.md | 20 ++++---- docs/overview/storage.md | 23 +++++---- docs/overview/terminology.md | 4 +- docs/overview/transaction.md | 83 +++++++++++++++--------------- docs/overview/use_cases.md | 91 ++++++++++++++++----------------- src/rdb/raft | 2 +- 9 files changed, 198 insertions(+), 189 deletions(-) diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md index 41328c0db92..53bb3176354 100644 --- a/docs/overview/architecture.md +++ b/docs/overview/architecture.md @@ -5,10 +5,10 @@ high bandwidth and high IOPS storage containers to applications and enables next-generation data-centric workflows combining simulation, data analytics, and machine learning. -Unlike the traditional storage stacks that were primarily designed for +Unlike the traditional storage stacks that are primarily designed for rotating media, DAOS is architected from the ground up to exploit new NVM technologies and is extremely lightweight since it operates -End-to-End (E2E) in user space with full OS bypass. DAOS offers a shift +End-to-End (E2E) in user space with complete OS bypass. DAOS offers a shift away from an I/O model designed for block-based and high-latency storage to one that inherently supports fine-grained data access and unlocks the performance of the next-generation storage technologies. @@ -30,10 +30,10 @@ directories over the native DAOS API is also available. DAOS I/O operations are logged and then inserted into a persistent index maintained in SCM. 
Each I/O is tagged with a particular timestamp called
-epoch and is associated with a particular version of the dataset. No
+epoch and associated with a specific dataset version. No
read-modify-write operations are performed internally. Write operations
are non-destructive and not sensitive to alignment. Upon read request,
-the DAOS service walks through the persistent index and creates a
+the DAOS service walks through the persistent index. It creates a
complex scatter-gather Remote Direct Memory Access (RDMA) descriptor to
reconstruct the data at the requested version directly in the buffer
provided by the application.
@@ -43,18 +43,18 @@ DAOS service that manages the persistent index via direct load/store.
Depending on the I/O characteristics, the DAOS service can decide to
store the I/O in either SCM or NVMe storage. As represented in Figure
2-1, latency-sensitive I/Os, like application metadata and byte-granular
-data, will typically be stored in the former, whereas checkpoints and
+data, will typically be stored in the former, whereas checkpoints and
bulk data will be stored in the latter. This approach allows DAOS to
deliver the raw NVMe bandwidth for bulk data by streaming the data to
NVMe storage and maintaining internal metadata index in SCM.
The Persistent Memory Development Kit (PMDK) allows managing
-transactional access to SCM and the Storage Performance Development Kit
+transactional access to SCM, and the Storage Performance Development Kit
(SPDK) enables user-space I/O to NVMe devices.

![](../admin/media/image1.png)

Figure 2-1. 
DAOS Storage

-DAOS aims at delivering:
+DAOS aims to deliver:

- High throughput and IOPS at arbitrary alignment and size

@@ -96,34 +96,35 @@ DAOS aims at delivering:

## DAOS System

A data center may have hundreds of thousands of compute instances
-interconnected via a scalable high-performance network, where all, or a
-subset of the instances called storage nodes, have direct access to NVM
+interconnected via a scalable, high-performance network, where all, or
+a subset of the instances called storage nodes, have direct access to NVM
storage. A DAOS installation involves several components that can be
either collocated or distributed.

-A DAOS *system* is identified by a system name, and consists of a set of
+A DAOS *system* is identified by a system name and consists of a set of
DAOS *storage nodes* connected to the same network. The DAOS storage
nodes run one DAOS *server* instance per node, which in turn starts one
DAOS *Engine* process per physical socket. Membership of the DAOS
-servers is recorded into the system map, that assigns a unique integer
+servers is recorded into the system map, which assigns a unique integer
*rank* to each *Engine* process. Two different DAOS systems comprise
-two disjoint sets of DAOS servers, and do not coordinate with each other.
+two disjoint sets of DAOS servers and do not coordinate with each other.

The DAOS *server* is a multi-tenant daemon running on a Linux instance
(either natively on the physical node or in a VM or container) of each
*storage node*. Its *Engine* sub-processes export the locally-attached
SCM and NVM storage through the network. It listens to a management port
-(addressed by an IP address and a TCP port number), plus one or more fabric
+(addressed by an IP address and a TCP port number) plus one or more fabric
endpoints (addressed by network URIs).
+
The DAOS server is configured through a YAML file in /etc/daos,
including the configuration of its Engine sub-processes. 
The DAOS server startup can be integrated with different daemon management or
-orchestration frameworks (for example a systemd script, a Kubernetes service,
+orchestration frameworks (for example, a systemd script, a Kubernetes service,
or even via a parallel launcher like pdsh or srun).

Inside a DAOS Engine, the storage is statically partitioned across
multiple *targets* to optimize concurrency. To avoid contention, each
-target has its private storage, its own pool of service threads, and its
+target has its private storage, its pool of service threads, and its
dedicated network context that can be directly addressed over the fabric
independently of the other targets hosted on the same storage node.

@@ -133,24 +134,24 @@ independently of the other targets hosted on the same storage node.

!!! note
    When mounting the PMem devices with the `dax` option,
-    the following warning will be logged in dmesg:
-    `EXT4-fs (pmem0): DAX enabled. Warning: EXPERIMENTAL, use at your own risk`
-    This warning can be safely ignored: It is issued because
-    DAX does not yet support the `reflink` filesystem feature,
+    the following warning is logged in dmesg:
+    `EXT4-fs (pmem0): DAX enabled. Warning: EXPERIMENTAL, use at your own risk.`
+    This warning can be safely ignored; it is issued because
+    DAX does not yet support the `reflink` filesystem feature,
    but DAOS does not use this feature.

* When *N* targets per engine are configured, each target is using
  *1/N* of the capacity of the `fsdax` SCM capacity of that socket,
  independently of the other targets.

-* Each target is also using a fraction of the NVMe capacity of the NVMe
-  drives that are attached to this socket. For example, in an engine
+* Each target also uses a fraction of the NVMe capacity of the NVMe
+  drives attached to this socket. For example, in an engine
  with 4 NVMe disks and 16 targets, each target will manage 1/4 of a
  single NVMe disk. 
A target does not implement any internal data protection mechanism
against storage media failure. As a result, a target is a single point
-of failure and the unit of fault.
+of failure and the unit of fault.

A dynamic state is associated with each target: Its state can be either
"up and running", or "down and not available".

@@ -163,7 +164,7 @@ configurable, and depends on the underlying hardware (in particular,
the number of SCM modules and the number of NVMe SSDs that are served
by this engine instance). As a best practice, the number of targets of
an engine should be an integer multiple of the number of NVMe drives
-that are served by this engine.
+that this engine serves.

## SDK and Tools

@@ -173,20 +174,20 @@ to administer a DAOS system and is intended for integration with
vendor-specific storage management and open-source orchestration
frameworks. The `dmg` CLI tool is built over the DAOS management API.
On the other hand, the DAOS library (`libdaos`) implements the
-DAOS storage model. It is primarily targeted at application and I/O
+DAOS storage model. It primarily targets applications and I/O
middleware developers who want to store datasets in a DAOS system. User
utilities like the `daos` command are also built over the API to allow
users to manage datasets from a CLI.

Applications can access datasets stored in DAOS either directly through
-the native DAOS API, through an I/O middleware library (e.g. POSIX
-emulation, MPI-IO, HDF5) or through frameworks like Spark or TensorFlow
+the native DAOS API, through an I/O middleware library (e.g., POSIX
+emulation, MPI-IO, HDF5), or through frameworks like Spark or TensorFlow
that have already been integrated with the native DAOS storage model.

## Agent

-The DAOS agent is a daemon residing on the client nodes that interacts
+The DAOS agent is a daemon residing on the client nodes that interacts
with the DAOS library to authenticate the application processes. 
It is a trusted entity that can sign the DAOS library credentials using
-certificates. The agent can support different authentication frameworks,
+certificates. The agent can support different authentication frameworks
and uses a Unix Domain Socket to communicate with the DAOS library.
diff --git a/docs/overview/data_integrity.md b/docs/overview/data_integrity.md
index 1dca35482b2..eb8ab507868 100644
--- a/docs/overview/data_integrity.md
+++ b/docs/overview/data_integrity.md
@@ -10,10 +10,10 @@ attempt to recover the corrupted data using data redundancy mechanisms

## End-to-end Data Integrity

In simple terms, end-to-end means that the DAOS Client library will calculate a
-checksum for data that is being sent to the DAOS Server. The DAOS Server will
+checksum for data sent to the DAOS Server. The DAOS Server will
store the checksum and return it upon data retrieval. Then the client verifies
-the data by calculating a new checksum and comparing to the checksum received
-from the server. There are variations on this approach depending on the type of
+the data by calculating a new checksum and comparing it to the checksum received
+from the server. There are variations on this approach depending on the type of
data being protected, but the following diagram shows the basic checksum flow.

![Basic Checksum Flow](../graph/data_integrity/basic_checksum_flow.png)

@@ -21,22 +21,23 @@ data being protected, but the following diagram shows the basic checksum flow.

Data integrity is configured for each container.
See [Storage Model](./storage.md) for more information about how data is
-organized in DAOS. See the Data Integrity in
+organized in DAOS. Also, see the Data Integrity section in
the [Container User Guide](../user/container.md#data-integrity) for details on
-how to setup a container with data integrity.
+setting up a container with data integrity. 
## Keys and Value Objects

-Because DAOS is a key/value store, the data for both keys and values is
-protected, however, the approach is slightly different. For the two different
+Because DAOS is a key/value store, the data for both keys and values are
+protected; however, the approach is slightly different. For the two different
value types, single and array, the approach is also slightly different.

### Keys
+
On an update and fetch, the client calculates a checksum for the data used
as the distribution and attribute keys and will send it to the server within
the RPC. The server verifies the keys with the checksum.
While enumerating keys, the server will calculate checksums for the keys and
-pack within the RPC message to the client. The client will verify the keys
+pack them within the RPC message to the client. Finally, the client will verify the keys
received.

!!! note
@@ -47,13 +48,14 @@ received.
    has reliable data integrity protection.

### Values
+
On an update, the client will calculate a checksum for the data of the value
and will send it to the server within the RPC. If "server verify" is enabled, the
-server will calculate a new checksum for the value and compare with the checksum
+server will calculate a new checksum for the value and compare it with the checksum
received from the client to verify the integrity of the value. If the checksums
-don't match, then data corruption has occurred and an error is returned to the
-client indicating that the client should try the update again. Whether "server
-verify" is enabled or not, the server will store the checksum.
+don't match, then data corruption has occurred, and an error is returned to the
+client, indicating that the client should try the update again. Whether or not "server
+verify" is enabled, the server will store the checksum.
See [VOS](https://github.com/daos-stack/daos/blob/release/2.2/src/vos/README.md)
for more info about checksum management and storage in VOS. 
@@ -62,7 +64,7 @@ values fetched so the client can verify the values received. If the checksums don't match, then the client will fetch from another replica if available in an attempt to get uncorrupted data. -There are some slight variations to this approach for the two different types +There are slight variations to this approach for the two different types of values. The following diagram illustrates a basic example. (See [Storage Model](storage.md) for more details about the single value and array value types) @@ -70,32 +72,32 @@ of values. The following diagram illustrates a basic example. ![Basic Checksum Flow](../graph/data_integrity/basic_checksum_flow.png) #### Single Value + A Single Value is an atomic value, meaning that writes to a single value will update the entire value and reads retrieve the entire value. Other DAOS features such as Erasure Codes might split a Single Value into multiple shards to be -distributed among multiple storage nodes. Either the whole Single Value (if +distributed among multiple storage nodes. The whole Single Value (if going to a single node) or each shard (if distributed) will have a checksum calculated, sent to the server, and stored on the server. Note that it is possible for a single value, or shard of a single value, to -be smaller than the checksum derived from it. It is advised that if an -application needs many small single values to use an Array Type instead. +be smaller than the checksum derived from it. Therefore, if an +application needs many small single values, it is advised to use an Array Type instead. #### Array Values + Unlike Single Values, Array Values can be updated and fetched at any part of -an array. In addition, updates to an array are versioned, so a fetch can include +an array. In addition, updates to an array are versioned so that a fetch can include parts from multiple versions of the array. Each of these versioned parts of an -array are called extents. 
The following diagrams illustrate a couple examples
+array is called an extent. The following diagrams illustrate a couple of examples
(also see [VOS Key Array Stores](https://github.com/daos-stack/daos/blob/release/2.2/src/vos/README.md#key-array-stores)
for more information):

- A single extent update (blue line) from index 2-13. A fetched extent (orange
 line) from index 2-6. The fetch is only part of the original extent written.

![](../graph/data_integrity/array_example_1.png)

- Many extent updates and different epochs. A fetch from index 2-13 requires
 parts from each extent.

@@ -106,9 +108,9 @@ The nature of the array type requires that a more sophisticated approach to
creating checksums is used. DAOS uses a "chunking" approach where each extent
will be broken up into "chunks" with a predetermined "chunk size." Checksums
will be derived from these chunks. Chunks are aligned with an absolute offset
-(starting at 0), not an I/O offset. The following diagram illustrates a chunk
-size configured to be 4 (units is arbitrary in this example). Though not all
-chunks have a full size of 4, an absolute offset alignment is maintained.
+(starting at 0), not an I/O offset. For example, the following diagram illustrates a chunk
+size configured to be 4 (units are arbitrary in this example). Though not all
+chunks have a full size of 4, an absolute offset alignment is maintained.
The gray boxes around the extents represent the chunks.

![](../graph/data_integrity/array_with_chunks.png)

@@ -118,21 +120,24 @@ See [Object Layer](https://github.com/daos-stack/daos/blob/release/2.2/src/object/README.md)
for more details about the checksum process on object update and fetch)

## Checksum calculations
+
The actual checksum calculations are done by the
[isa-l](https://github.com/intel/isa-l) and
[isa-l_crypto](https://github.com/intel/isa-l_crypto) libraries. 
However, -these libraries are abstracted away from much of DAOS and a common checksum +these libraries are abstracted away from much of DAOS, and a common checksum library is used with appropriate adapters to the actual isa-l implementations. [common checksum library](https://github.com/daos-stack/daos/blob/release/2.2/src/common/README.md#checksum) ## Performance Impact + Calculating checksums can be CPU intensive and will impact performance. To mitigate performance impact, checksum types with hardware acceleration should be chosen. For example, CRC32C is supported by recent Intel CPUs, and many are accelerated via SIMD. ## Quality -Unit and functional testing is performed at many layers. + +Unit and functional testing are performed at many layers. | Test executable | What's tested | Key test files | | --- | --- | --- | @@ -142,15 +147,18 @@ Unit and functional testing is performed at many layers. | daos_test | daos_obj_update/fetch with checksums enabled. The -z flag can be used for specific checksum tests. Also --csum_type flag can be used to enable checksums with any of the other daos_tests | src/tests/suite/daos_checksum.c | ### Running Tests -**With daos_server not running** -``` +#### With daos_server not running + +```bash ./commont_test ./vos_test -z ./srv_checksum_tests ``` -**With daos_server running** -``` + +#### With daos_server running + +```bash export DAOS_CSUM_TEST_ALL_TYPE=1 ./daos_server -z ./daos_server -i --csum_type crc64 diff --git a/docs/overview/fault.md b/docs/overview/fault.md index c5455c273ba..779ebcc5c35 100644 --- a/docs/overview/fault.md +++ b/docs/overview/fault.md @@ -1,21 +1,22 @@ + # Fault Model DAOS relies on massively distributed single-ported storage. Each target -is thus effectively a single point of failure. DAOS achieves availability +is thus effectively a single point of failure. DAOS achieves the availability and durability of both data and metadata by providing redundancy across targets in different fault domains. 
DAOS internal pool and container
metadata are replicated via a robust consensus algorithm. DAOS objects
are then safely replicated or erasure-coded by transparently leveraging
- the DAOS distributed transaction mechanisms internally. The purpose of
-this section is to provide details on how DAOS achieves fault tolerance
+ the DAOS distributed transaction mechanisms internally. This section aims
+to provide details on how DAOS achieves fault tolerance
and guarantees object resilience.

+
## Hierarchical Fault Domains

A fault domain is a set of servers sharing the same point of failure and
-which are thus likely to fail altogether. DAOS assumes that fault domains
+thus likely to fail altogether. DAOS assumes that fault domains
are hierarchical and do not overlap. The actual hierarchy and fault
domain membership must be supplied by an external database used by DAOS
to generate the pool map.

@@ -26,20 +27,22 @@
or erasure-coded over a variable number of fault domains depending on
the selected object class.

+
## Fault Detection

DAOS engines are monitored within a DAOS system through a gossip-based
protocol called [SWIM](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1028914)
that provides accurate, efficient, and scalable fault detection.

Storage attached to each DAOS target is monitored through periodic local
-health assessment. Whenever a local storage I/O error is returned to the
+health assessments. Whenever a local storage I/O error is returned to the
DAOS server, an internal health check procedure will be called
automatically. This procedure will make an overall health assessment by
analyzing the IO error code and device SMART/Health data. If the result is negative,
-the target will be marked as faulty, and further I/Os to this target will be
+the target is marked as faulty, and further I/Os to this target will be
rejected and re-routed. 
+
## Fault Isolation

Once detected, the faulty target or engine (effectively a set of targets)
@@ -49,38 +52,38 @@ the pool map is eagerly pushed to all storage targets. At this point, the
pool enters a degraded mode that might require extra processing on access
(e.g. reconstructing data out of erasure code). Consequently, DAOS client
and storage nodes retry an RPC until they find an alternative replacement target
-from the new pool map or experiences an RPC timeout. At this point,
-all outstanding communications with the
-evicted target are aborted, and no further messages should be sent to the
+from the new pool map or experience an RPC timeout. All outstanding communications with the
+evicted target are aborted, and no further messages will be sent to the
target until it is explicitly reintegrated (possibly only after maintenance
action).

All storage targets are promptly notified of pool map changes by the pool
-service. This is not the case for client nodes, which are lazily informed
-of pool map invalidation each time they communicate with any engines. To do so,
-clients pack in every RPC their current pool map version. Servers reply not
+service. This is not the case for client nodes, which are lazily informed
+of pool map invalidation each time they communicate with any engine. To do so,
+clients pack their current pool map version in every RPC. Servers reply not
only with the current pool map version. Consequently, when a DAOS client
experiences RPC timeout, it regularly communicates with the other DAOS
-target to guarantee that its pool map is always current. Clients will then
+target to guarantee its pool map is always current. Clients will
eventually be informed of the target exclusion and enter into degraded
mode. This mechanism guarantees global node eviction and that all nodes
eventually share the same view of target aliveness. 
+
## Fault Recovery

Upon exclusion from the pool map, each target starts the rebuild process
automatically to restore data redundancy. First, each target creates a list
of local objects impacted by the target exclusion. This is done by scanning
a local object table maintained by the underlying storage layer. Then for
-each impacted object, the location of the new object shard is determined and
-redundancy of the object restored for the whole history (i.e., snapshots).
+each impacted object, the location of the new object shard is determined, and
+redundancy of the object is restored for the whole history (i.e., snapshots).
Once all impacted objects have been rebuilt, the pool map is updated a second
-time to report the target as failed out. This marks the end of collective
-rebuild process and the exit from degraded mode for this particular fault.
-At this point, the pool has fully recovered from the fault and client nodes
+time to report the target as failed out. This marks the end of the collective
+rebuild process and the exit from degraded mode for this particular fault.
+At this point, the pool has fully recovered from the fault, and client nodes
can now read from the rebuilt object shards.

-This rebuild process is executed online while applications continue accessing
+This rebuild process is executed online while applications continue accessing
and updating objects.
diff --git a/docs/overview/security.md b/docs/overview/security.md
index d03f661a372..3273589ab80 100644
--- a/docs/overview/security.md
+++ b/docs/overview/security.md
@@ -120,17 +120,17 @@ The permissions in a resource's ACE permit a certain type of user access to
the resource. The order of the permission "bits" (characters) within the
`PERMISSIONS` field of the ACE is not significant. 
-| Permission | Pool Meaning | Container Meaning |
+| Permission | Pool Meaning | Container Meaning |
| ------------- | --------------------- | --------------------------------------------- |
-| r (Read) | Alias for 't' | Read container data and attributes |
-| w (Write) | Alias for 'c' + 'd' | Write container data and attributes |
-| c (Create) | Create containers | N/A |
-| d (Delete) | Delete any container | Delete this container |
-| t (Get-Prop) | Connect/query | Get container properties |
-| T (Set-Prop) | N/A | Set/Change container properties |
-| a (Get-ACL) | N/A | Get container ACL |
-| A (Set-ACL) | N/A | Set/Change container ACL |
-| o (Set-Owner) | N/A | Set/Change container's owner user and group |
+| r (Read) | Alias for 't' | Read container data and attributes |
+| w (Write) | Alias for 'c' + 'd' | Write container data and attributes |
+| c (Create) | Create containers | N/A |
+| d (Delete) | Delete any container | Delete this container |
+| t (Get-Prop) | Connect/query | Get container properties |
+| T (Set-Prop) | N/A | Set/Change container properties |
+| a (Get-ACL) | N/A | Get container ACL |
+| A (Set-ACL) | N/A | Set/Change container ACL |
+| o (Set-Owner) | N/A | Set/Change container's owner user and group |

ACEs containing permissions not applicable to the given resource are
considered invalid.
diff --git a/docs/overview/storage.md b/docs/overview/storage.md
index 3c53a465e45..2a9866cd70c 100644
--- a/docs/overview/storage.md
+++ b/docs/overview/storage.md
@@ -101,8 +101,8 @@ when a massively distributed job is run on the datacenter. A pool
connection is revoked when the original process that issued the
connection request disconnects from the pool.

-
+
## DAOS Container

A container represents an object address space inside a pool and is
@@ -131,8 +131,8 @@ Each object class is assigned a unique identifier and is associated with
a given schema at the pool level. 
A new object class can be defined at any time with a configurable schema, which is then immutable after creation (or at least until all objects belonging to the class have been destroyed). -For convenience, several object classes that are expected to be the most -commonly used will be predefined by default when the pool is created, +For convenience, several object classes that are expected to be the most +commonly used will be predefined by default when the pool is created, as shown in the table below. @@ -158,7 +158,9 @@ be stored by the application is the full 128-bit address, which is for single use only and can be associated with only a single object schema. **DAOS Object ID Structure** +
+
 ```
 <---------------------------------- 128 bits ---------------------------------->
 --------------------------------------------------------------------------------
@@ -190,14 +192,15 @@ other peer application processes via the container `local2global()` and
 `global2local()` operations.
 
 
+
 ## DAOS Object
 
 To avoid scaling problems and overhead common to a traditional storage system,
 DAOS objects are intentionally simple. No default object metadata beyond the
-type and schema is provided. This means that the system does not maintain
+type and schema is provided. The system does not maintain
 time, size, owner, permissions or even track openers.
-To achieve high availability and horizontal scalability, many object schemas
-(replication/erasure code, static/dynamic striping, and others) are provided.
+Many object schemas
+(replication/erasure code, static/dynamic striping, and others) are provided to achieve high availability and horizontal scalability.
 The schema framework is flexible and easily expandable to allow for new custom
 schema types in the future. The layout is generated algorithmically on object
 open from the object identifier and the pool map. End-to-end integrity is
@@ -208,12 +211,11 @@ A DAOS object can be accessed through different APIs:
 
 -    **Multi-level key-array** API is the native object interface with locality
      feature. The key is split into a distribution (dkey) and an
-     attribute (akey) key. Both the dkey and akey can be of variable
+     attribute (akey) key. Both keys can be of variable
      length and type (a string, an integer or even a complex data
      structure). All entries under the same dkey are guaranteed to be
      collocated on the same target. The value associated with akey can be
-     either a single variable-length value that cannot be partially overwritten,
-     or an array of fixed-length values.
+     either a single variable-length value that cannot be partially
+     overwritten, or an array of fixed-length values.
      Both the akeys and dkeys support enumeration.
 
 -    **Key-value** API provides a simple key and variable-length value
@@ -221,5 +223,4 @@ A DAOS object can be accessed through different APIs:
 
 -    **Array API** implements a one-dimensional array of fixed-size elements
      addressed by a 64-bit offset. A DAOS array supports arbitrary extent read,
-     write and punch operations.
-
+     write and punch operations.
\ No newline at end of file
diff --git a/docs/overview/terminology.md b/docs/overview/terminology.md
index 9c001448c55..681114e4aa5 100644
--- a/docs/overview/terminology.md
+++ b/docs/overview/terminology.md
@@ -8,7 +8,7 @@
 |ACID|Atomicity, consistency, isolation, durability|
 |BIO|Blob I/O|
 |CART|Collective and RPC Transport|
-|CGO|Go tools that enable creation of Go packages that call C code|
+|CGO|Go tools that enable the creation of Go packages that call C code|
 |CN|Compute Node|
 |COTS|Commercial off-the-shelf|
 |CPU|Central Processing Unit|
@@ -32,7 +32,7 @@
 |[OFI](https://ofiwg.github.io/libfabric/)|Open Fabrics Interface|
 |OS|Operating System|
 |PM|Persistent Memory|
-|[PMDK](https://pmem.io/pmdk/)|Persistent Memory Devevelopment Kit|
+|[PMDK](https://pmem.io/pmdk/)|Persistent Memory Development Kit|
 |[RAFT](https://raft.github.io/)|Raft is a consensus algorithm used to distribute state transitions among DAOS server nodes.|
 |RAS|Reliability, Availability & Serviceability|
 |RDB|Replicated Database, containing pool metadata and maintained across DAOS servers using the Raft algorithm.|
diff --git a/docs/overview/transaction.md b/docs/overview/transaction.md
index 557d6a3f3dc..e1587452da6 100644
--- a/docs/overview/transaction.md
+++ b/docs/overview/transaction.md
@@ -7,9 +7,9 @@ concurrency control mechanism based on multi-version timestamp ordering.
 DAOS transactions are serializable and can be used on an ad-hoc basis for parts
 of the datasets that need it.
 
-The DAOS versioning mechanism allows creating persistent container snapshots
+The DAOS versioning mechanism allows the creation of persistent container snapshots
 which provide point-in-time distributed consistent views of a container which
-can be used to build producer-consumer pipeline.
+can be used to build a producer-consumer pipeline.
 
 ## Epoch and Timestamp Ordering
 
@@ -29,8 +29,7 @@ can be snapshot at any time.
 
 DAOS snapshots are very lightweight and are tagged with the epoch associated
 with the time when the snapshot was created. Once successfully created,
-a snapshot remains readable until it is explicitly destroyed. The content of
-a container can be rolled back to a particular snapshot.
+a snapshot remains readable until it is explicitly destroyed. The content of
+a container can be rolled back to a particular snapshot.
 
 The container snapshot feature allows supporting native producer/consumer
 pipelines as represented in the diagram below.
@@ -39,12 +38,11 @@ pipelines as represented in the diagram below.
 
 The producer will generate a snapshot once a consistent version of the
 dataset has been successfully written. The consumer applications may
-subscribe to container snapshot events, so that new updates can be processed
+subscribe to container snapshot events to process new updates
 as the producer commits them. The immutability of the snapshots guarantees
 that the consumer sees consistent data, even while the producer continues
-with new updates. Both the producer and consumer indeed operate on different
-versions of the container and do not need any serialization. Once the
-producer generates a new version of the dataset, the consumer may query the
+with new updates. Both the producer and consumer operate on different container versions and do not need any serialization. Once the
+producer generates a new dataset version, the consumer may query the
 differences between the two snapshots and process only the incremental changes.
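
As an illustration of the incremental processing above, the sketch below models epoch tags as plain integers; the function and parameter names are hypothetical, and only the interval logic mirrors the text:

```c
#include <stdint.h>

/* Miniature of the incremental-processing idea: with epochs modeled
 * as plain integers, the delta between two snapshots is exactly the
 * set of updates tagged with an epoch in (prev_snap, new_snap].
 * Names are illustrative, not DAOS API. */
int count_incremental(const uint64_t *update_epochs, int n,
                      uint64_t prev_snap, uint64_t new_snap)
{
    int delta = 0;

    for (int i = 0; i < n; i++)
        if (update_epochs[i] > prev_snap && update_epochs[i] <= new_snap)
            delta++;
    return delta;
}
```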
 
 ## Distributed Transactions
@@ -53,13 +51,13 @@ Unlike POSIX, the DAOS API does not impose any worst-case concurrency
 control mechanism to address conflicting I/O operations. Instead,
 individual I/O operations are tagged with a different epoch and applied
 in epoch order, regardless of execution order. This baseline model
-delivers the maximum scalability and performance to data models and
-applications that do not generate conflicting I/O workload. Typical
+delivers maximum scalability and performance to data models and
+applications that do not generate conflicting I/O workloads. Typical
 examples are collective MPI-IO operations, POSIX file read/write
-or HDF5 dataset read/write.
+or HDF5 dataset read/write.
 
 For parts of the data model that require conflict serialization,
-DAOS provides distributed serializable transaction based on multi-version
+DAOS provides distributed serializable transactions based on multi-version
 concurrency control. Transactions are typically needed when different user
 processes can overwrite the value associated with a dkey/akey pair.
 Examples are a SQL database over DAOS or a consistent POSIX namespace
@@ -71,11 +69,11 @@ one of the conflicting transactions (the transaction fails to commit
 with `-DER_RESTART`). The failed transaction then has to be restarted
 by the user/application.
 
-In the initial implementation, the transaction API does not support reading
-your own uncommitted changes. In other words, transactional object or key-value
+In the initial implementation, the transaction API does not support reading
+your own uncommitted changes. In other words, transactional object or key-value
 modifications are invisible to the subsequent operations executed in the
-context of the same transaction. The transaction API is supported for all
-object types and can be combined with the the event and scheduler interface.
+same transaction context. The transaction API is supported for all
+object types and can be combined with the event and scheduler interface.
 
 The typical flow of a transaction is as follows:
 
@@ -86,46 +84,45 @@ int           rc;
 /* allocate transaction */
 rc = daos_tx_open(dfs->coh, &th, 0, NULL);
 if (rc)
-	...
+      ...
 
 restart:
-	/* execute operations under the same transaction */
-	rc = daos_obj_fetch(..., th);
-	...
-	rc = daos_obj_update(..., th);
-	...
-	if (rc) {
-		rc = daos_tx_abort(th, NULL);
-		/* either goto restart or exit */
-	}
-
-	rc = daos_tx_commit(th, NULL);
-	if (rc) {
-		if (rc == -DER_TX_RESTART) {
-			/* conflict with another transaction, try again */
-			rc = daos_tx_restart(th, NULL);
-			goto restart;
-		}
-		...
-	}
+      /* execute operations under the same transaction */
+      rc = daos_obj_fetch(..., th);
+      ...
+      rc = daos_obj_update(..., th);
+      ...
+      if (rc) {
+            rc = daos_tx_abort(th, NULL);
+            /* either goto restart or exit */
+      }
+
+      rc = daos_tx_commit(th, NULL);
+      if (rc) {
+            if (rc == -DER_TX_RESTART) {
+                  /* conflict with another transaction, try again */
+                  rc = daos_tx_restart(th, NULL);
+                  goto restart;
+            }
+            ...
+      }
 
 /* free up all the resources allocated for the transaction */
 rc = daos_tx_close(th, NULL);
 ```
 
 daos\_tx\_open() is a local operation that will allocate the context
-for the transaction. All non-modifying operations (e.g. fetch, list) are
-served by the remote engines while all modifying operations (e.g. update,
-punch) are buffered on the client side.
+for the transaction. The remote engines serve all non-modifying operations (e.g., fetch, list) while all modifying operations (e.g., update,
+punch) are buffered on the client side.
 
-At commit time, all operations are packed into a compound RPC which is then
+At commit time, all operations are packed into a compound RPC, which is then
 sent to the engine algorithmically elected as the leader for this transaction.
 The leader engine then applies all the changes on all the storage nodes.
 In the case of a conflict with another transaction, daos\_tx\_commit() fails
-with -DER\_TX\_RESTART and the whole transaction should be re-executed by the
-client after calling daos\_tx\_restart(). False conflicts might happen, but
+with -DER\_TX\_RESTART, and the whole transaction must be re-executed by the
+client after calling daos\_tx\_restart(). False conflicts might happen but
 should be the exception rather than the norm.
 
 At any time, daos\_tx\_abort() may be invoked to cancel the transaction. Once
 the transaction is completed or aborted, all resources allocated to the
-transaction can be freed via daos\_tx\_close(). The th handle is then invalid.
+transaction can be freed via daos\_tx\_close(). The th handle is then invalid.
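
The retry flow above can be exercised without a live DAOS system. In the sketch below the commit call is a stub that fails a configurable number of times; the stub, its conflict counter, and the error-code value are all hypothetical, and only the restart control flow mirrors the documented API:

```c
/* Hypothetical stand-in for -DER_TX_RESTART (value is illustrative). */
#define DER_TX_RESTART 2001

/* Stub for daos_tx_commit(): the first `remaining_conflicts` attempts
 * fail as if another transaction had conflicted. */
static int remaining_conflicts;

static int stub_tx_commit(void)
{
    if (remaining_conflicts > 0) {
        remaining_conflicts--;
        return -DER_TX_RESTART;  /* conflict with another transaction */
    }
    return 0;                    /* commit succeeded */
}

/* Re-execute the transaction until commit succeeds, as required after
 * -DER_TX_RESTART. Returns the number of attempts that were needed. */
int commit_with_restart(int conflicts)
{
    int attempts = 0;

    remaining_conflicts = conflicts;
restart:
    attempts++;
    /* ... re-execute the fetch/update operations here ... */
    if (stub_tx_commit() == -DER_TX_RESTART)
        goto restart;  /* daos_tx_restart() would be called here */
    return attempts;
}
```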
diff --git a/docs/overview/use_cases.md b/docs/overview/use_cases.md
index 1b67447ca35..3478cd41531 100644
--- a/docs/overview/use_cases.md
+++ b/docs/overview/use_cases.md
@@ -24,8 +24,8 @@ In this section, we consider two different cluster configurations:
   the fabric. They are not used for computation and thus do not run any
   application code.
 
-At boot time, each storage node starts the DAOS server that instantiates
-service threads. In cluster A, the DAOS threads are bound to the noisy cores
+At boot time, each storage node starts the DAOS server, which instantiates
+service threads. In cluster A, the DAOS threads are bound to the noisy cores
 and interact with the FWK if mOS is used. In cluster B, the DAOS server can
 use all the cores of the storage node.
 
@@ -37,14 +37,14 @@ hierarchy (from a database or specific service) and integrates this with the
 storage information.
 
 The resource manager then uses the DAOS management API to query available
-storage and allocate a certain amount of storage (i.e. persistent memory)
+storage and allocate a certain amount of storage (i.e., persistent memory)
 for a new workflow that is to be scheduled. In cluster A, this allocation
 request may list the compute nodes where the workflow is supposed to run,
 whereas in case B, it may ask for storage nearby some allocated compute nodes.
 
 Once successfully allocated, the master server will initialize a DAOS pool
-covering the allocated storage by formatting the VOS layout (i.e. fallocate(1)
-a PMEM file & create VOS super block) and starting the pool service which
+covering the allocated storage by formatting the VOS layout (i.e., fallocate(1)
+a PMEM file & create a VOS superblock) and starting the pool service which
 will initiate the Raft engine in charge of the pool membership and metadata.
 At this point, the DAOS pool is ready to be handed off to the actual workflow.
 
@@ -67,26 +67,26 @@ Each green box represents a different container. All containers are stored
 in the same DAOS pool represented by the gray box. The simulation reads data
 from the input container and writes raw timesteps to another container.
 It also regularly dumps checkpoints to a dedicated ckpt container.
-The down-sample job reads the raw timesteps and generates sampled timesteps
-to be analyzed by the post-process which stores analysis data into yet
+The down-sample job reads the raw timesteps and generates sampled timesteps
+to be analyzed by the post-process, which stores analysis data into
 another container.
 
 
 
 ### Bulk Synchronous Checkpoint
 
-Defensive I/O is used to manage a large simulation run over a period of time
+Defensive I/O manages a large simulation run over a period of time
 larger than the platform's mean time between failure (MTBF). The simulation
 regularly dumps the current computation state to a dedicated container used
 to guarantee forward progress in the event of failures. This section
-elaborates on how checkponting could be implemented on top of the DAOS
+elaborates on how checkpointing could be implemented on top of the DAOS
 storage stack. We first consider the traditional approach relying on
 blocking barriers and then a more loosely coupled execution.
 
 Blocking Barrier
 
 When the simulation job starts, one task opens the checkpoint container
-and fetches the current global HCE. It thens obtains an epoch hold and
+and fetches the current global HCE. It then obtains an epoch hold and
 shares the data (the container handle, the current LHE and global HCE)
 with peer tasks. Each task checks for the latest computation state saved
 to the checkpoint container by reading with an epoch equal to the global
@@ -96,8 +96,8 @@ To checkpoint, each task executes a barrier to synchronize with the
 other tasks, writes its current computation state to the checkpoint
 container at epoch LHE, flushes all updates and finally executes another
 barrier. Once all tasks have completed the last barrier, one designated
-task (e.g. rank 0) commits the LHE which is then increased by one on
-successful commit. This process is repeated regularly until the simulation
+task (e.g., rank 0) commits the LHE, which is then increased by one on
+a successful commit. This process is repeated regularly until the simulation
 successfully completes.
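
The epoch bookkeeping of one blocking-barrier round can be sketched as follows (the struct and names are illustrative, not DAOS API; the barriers, writes, and flushes from the text are elided):

```c
#include <stdint.h>

/* Epoch bookkeeping only; names are illustrative, not DAOS API. */
struct ckpt_state {
    uint64_t hce;  /* highest committed epoch */
    uint64_t lhe;  /* lowest held epoch: where tasks write */
};

/* Run by the designated task (e.g., rank 0) after the second barrier:
 * committing the held epoch makes it the new highest committed epoch
 * and advances the held epoch by one for the next checkpoint round. */
void commit_checkpoint(struct ckpt_state *s)
{
    s->hce = s->lhe;
    s->lhe += 1;
}
```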
 
 Non-blocking Barrier
@@ -109,17 +109,16 @@ epoch hold and sharing the data with the other peer tasks.
 However, tasks can now checkpoint their computation state at their own pace
 without waiting for each other. After the computation of N timesteps,
 each task dumps its state to the checkpoint container at epoch LHE+1,
-flushes the changes and calls a non-blocking barrier (e.g. MPI_Ibarrier())
+flushes the changes and calls a non-blocking barrier (e.g., MPI_Ibarrier())
 once done. Then after another N timesteps, the new checkpoint is written with
 epoch LHE+2 and so on. For each checkpoint, the epoch number is incremented.
 
 Moreover, each task regularly calls MPI_Test() to check for barrier
-completion which allows them to recycle the MPI_Request. Upon barrier
-completion, one designated task (typically rank 0) also commits the
-associated epoch number. All epochs are guaranteed to be committed in
-sequence and each committed epoch is a new consistent checkpoint to
+completion, which allows them to recycle the MPI_Request. Upon barrier
+completion, one designated task (typically rank 0) also commits the associated epoch number. All epochs are guaranteed to be committed in
+sequence, and each committed epoch is a new consistent checkpoint to
 restart from. On failure, checkpointed states that have been written by
-individual tasks, but not committed, are automatically rolled back.
+individual tasks but not committed are automatically rolled back.
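
Under simplifying assumptions (epochs as plain integers, one flag per pending barrier), the in-sequence commit guarantee can be sketched as a small helper that finds the latest restartable checkpoint; everything written above a gap is what a failure would roll back. Names are hypothetical, not DAOS API:

```c
#include <stdbool.h>
#include <stdint.h>

/* done[i] records whether the checkpoint written at epoch hce+1+i has
 * completed its non-blocking barrier. Since epochs commit in sequence,
 * the latest restartable checkpoint is the highest epoch with no
 * incomplete barrier below it. */
uint64_t highest_consistent_epoch(uint64_t hce, const bool *done, int n)
{
    uint64_t epoch = hce;

    for (int i = 0; i < n && done[i]; i++)
        epoch = hce + 1 + (uint64_t)i;
    return epoch;
}
```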
 
 
 
@@ -129,54 +128,54 @@ In the previous figure, we have two examples of
 producer/consumer. The down-sample job consumes raw timesteps generated
 by the simulation job and produces sampled timesteps analyzed by the
 post-process job. The DAOS stack provides specific mechanisms for
-producer/consumer workflow which even allows the consumer to dumps the
+producer/consumer workflow, allowing the consumer to dump the
 result of its analysis into the same container as the producer.
 
 Private Container
 
 The down-sample job opens the sampled timesteps container, fetches the
-current global HCE, obtains an epoch hold and writes new sampled data to
-this container at epoch LHE. While this is occurring, the post process job
+current global HCE, obtains an epoch hold and writes newly sampled data to
+this container at epoch LHE. While this occurs, the post-process job
 opens the container storing analyzed data for write, checks for the latest
 analyzed timesteps and obtains an epoch hold on this container. It then
-opens the sampled timesteps container for read, and checks whether the next
+opens the sampled timesteps container for read and checks whether the next
 time-step to be consumed is ready. If not, it waits for a new global HCE to
 be committed (notified by asynchronous event completion on the event queue)
 and checks again. When the requested time-step is available, the down-sample
 job processes input data for this new time-step, dumps the results in its
 own container and updates the latest analyzed time-step in its metadata.
-It then commits updates to its output container and waits again for a new
-epoch to be committed and repeats the same process.
+It then commits updates to its output container, waits for a new
+epoch to be committed, and repeats the same process.
 
 Another approach is for the producer job to create explicit snapshots for
 epochs of interest and have the analysis job waiting and processing
-snapshots. This avoid processing every single committed epoch.
+snapshots. This avoids processing every single committed epoch.
 
 Shared Container
 
 We now assume that the container storing the sampled timesteps and the one
-storing the analyzed data are a single container. In other words, the
+storing the analyzed data are one and the same container. In other words, the
 down-sample job consumes input data and writes output data to the same
 container.
 
-The down-sample job opens the shared container, obtains an hold and dumps
-new sampled timesteps to the container. As before, the post-process job also
-opens the container, fetches the latest analyzed timestep, but does not
+The down-sample job opens the shared container, obtains an epoch hold and dumps
+newly sampled timesteps to the container. The post-process job also
+opens the container and fetches the latest analyzed timestep but does not
 obtain an epoch hold until a new global HCE is ready. Once the post-process
-job is notified of a new global HCE, it can analyze the new sampled timesteps,
-obtain an hold and write its analyzed data to the same container. Once this
-is done, the post-process job flushes its updates, commits the held epoch and
-releases the held epoch. At that point, it can wait again for a new global
+job is notified of a new global HCE, it can analyze the newly sampled timesteps,
+obtain an epoch hold and write its analyzed data to the same container. Once this
+is done, the post-process job flushes its updates, commits the held epoch, and
+releases it. At that point, it can wait again for a new global
 HCE to be generated by the down-sample job.
 
 
 
 ### Concurrent Producers
 
-In the previous section, we consider a producer and a consumer job concurrently
-reading and writing into the same container, but in disjoint objects. We now
+In the previous section, we consider a producer and a consumer job simultaneously
+reading and writing into the same container but in disjoint objects. We now
 consider a workflow composed of concurrent producer jobs modifying the same
-container in a conflicting and uncoordinated manner. This effectively means
+container in a conflicting and uncoordinated manner. This effectively means
 that the two producers can update the same key of the same KV object or
 document store or overlapping extents of the same byte array. This model
 requires the implementation of a concurrency-control mechanism (not part of
@@ -184,10 +183,10 @@ DAOS) to coordinate conflicting accesses. This section presents an example
 of such a mechanism based on locking, but alternative approaches can also be
 considered.
 
-A workflow is composed of two applications using a distributed lock manager
+A workflow comprises two applications using a distributed lock manager
 to serialize contended accesses to DAOS objects. Each application individually
 opens the same container and grabs an epoch hold whenever it wants to modify
-some objects in the container. Prior to modifying an object, an application
+some objects in the container. Before modifying an object, an application
 should acquire a write lock on the object. This lock carries a lock value
 block (LVB) storing the last epoch number in which this object was last
 modified and committed. Once the lock is acquired, the writer must:
@@ -205,21 +204,21 @@ object was modified, and the lock can finally be released.
 
 ## Storage Node Failure and Resilvering
 
-In this section, we consider a workflow connected to a DAOS pool and one
+This section considers a workflow connected to a DAOS pool and one
 storage node that suddenly fails. Both DAOS clients and servers communicating
 with the failed server experience RPC timeouts and inform the RAS system.
 Failing RPCs are resent repeatedly until the RAS system or the pool metadata
-service itself decides to declare the storage node dead and evicts it from
-the pool map. The pool map update, along with the new version, is propagated
+service decides to declare the storage node dead and evict it from
+the pool map. The pool map update, along with the new version, is propagated
 to all the storage nodes that lazily (in RPC replies) inform clients that a
 new pool map version is available. Both clients and servers are thus
 eventually informed of the failure and enter into recovery mode.
 
-Server nodes will cooperate to restore redundancy on different servers for
-the impacted objects, whereas clients will enter in degraded mode and read
-from other replicas, or reconstruct data from erasure code. This rebuild
+Server nodes will cooperate to restore redundancy for
+the impacted objects on different servers, while clients enter degraded mode and read
+from other replicas or reconstruct data from erasure code. This rebuild
 process is executed online while the container is still being accessed and
-modified. Once redundancy has been restored for all objects, the poolmap is
+modified. Once redundancy has been restored for all objects, the pool map is
 updated again to inform everyone that the system has recovered from the fault
-and the system can exit from degraded mode.
+and can exit from degraded mode.
 
diff --git a/src/rdb/raft b/src/rdb/raft
index c18bcb8ef18..c81505fe03d 160000
--- a/src/rdb/raft
+++ b/src/rdb/raft
@@ -1 +1 @@
-Subproject commit c18bcb8ef18d5937558e881cd48a7568982aa954
+Subproject commit c81505fe03de46d1b415f28c2cc9dbb3ed45e08d

From 4eeec6ebc2a0099e587a78a801b246b1e2e19035 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 29 Apr 2022 12:35:46 -0700
Subject: [PATCH 02/14] Grammar review

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/admin/administration.md      | 237 +++++++++++++++++-------------
 docs/admin/deployment.md          | 214 +++++++++++++--------------
 docs/admin/env_variables.md       |  29 ++--
 docs/admin/hardware.md            |  45 +++---
 docs/admin/installation.md        |  28 ++--
 docs/admin/performance_tuning.md  | 191 ++++++++++++------------
 docs/admin/pool_operations.md     | 164 +++++++++++----------
 docs/admin/predeployment_check.md | 153 ++++++++++---------
 docs/admin/tiering_uns.md         |  43 +++---
 docs/admin/troubleshooting.md     | 150 +++++++++----------
 10 files changed, 629 insertions(+), 625 deletions(-)

diff --git a/docs/admin/administration.md b/docs/admin/administration.md
index 71be72372fc..42964df2fb8 100644
--- a/docs/admin/administration.md
+++ b/docs/admin/administration.md
@@ -10,11 +10,10 @@ communicated and logged within DAOS and syslog.
 The following table describes the structure of a DAOS RAS event, including
 descriptions of mandatory and optional fields.
 
-
 | Field             | Optional/Mandatory   | Description                                              |
 |:----|:----|:----|
 | ID                | Mandatory            | Unique event identifier referenced in the manual.        |
-| Type              | Mandatory            | Event type of STATE\_CHANGE causes an update to the Management Service (MS) database in addition to event being written to SYSLOG. INFO\_ONLY type events are only written to SYSLOG.                                       |
+| Type              | Mandatory            | Event type of STATE\_CHANGE causes an update to the Management Service (MS) database in addition to the event being written to SYSLOG. INFO\_ONLY type events are only written to SYSLOG.                 |
 | Timestamp         | Mandatory            | Resolution at the microseconds and include the timezone offset to avoid locality issues.                |
 | Severity          | Mandatory            | Indicates event severity, Error/Warning/Notice.          |
 | Msg               | Mandatory            | Human readable message.                                  |
@@ -33,7 +32,7 @@ descriptions of mandatory and optional fields.
 Below is an example of a RAS event signaling an exclusion of an unresponsive
 engine:
 
-```
+```bash
 &&& RAS EVENT id: [swim_rank_dead] ts: [2021-11-21T13:32:31.747408+0000] host: [wolf-112.wolf.hpdd.intel.com] type: [STATE_CHANGE] sev: [NOTICE] msg: [SWIM marked rank as dead.] pid: [253454] tid: [1] rank: [6] inc: [63a058833280000]
 ```
 
@@ -44,22 +43,21 @@ severity, message, description, and cause.
 
 |Event|Event type|Severity|Message|Description|Cause|
 |:----|:----|:----|:----|:----|:----|
-| engine\_format\_required|INFO\_ONLY|NOTICE|DAOS engine  requires a  format|Indicates engine is waiting for allocated storage to be formatted on formatted on instance  with dmg tool.  can be either SCM or Metadata.|DAOS server attempts to bring-up an engine that has unformatted storage.|
+| engine\_format\_required|INFO\_ONLY|NOTICE|DAOS engine  requires a  format|Indicates engine is waiting for allocated storage to be formatted on instance  with dmg tool.  can be either SCM or Metadata.|DAOS server attempts to bring up an engine that has unformatted storage.|
 | engine\_died| STATE\_CHANGE| ERROR| DAOS engine  exited exited unexpectedly:  | Indicates engine instance  unexpectedly.  describes the exit state returned from exited daos\_engine process.| N/A                          |
 | engine\_asserted| STATE\_CHANGE| ERROR| TBD| Indicates engine instance  threw a runtime assertion, causing a crash. | An unexpected internal state resulted in assert failure. |
-| engine\_clock\_drift| INFO\_ONLY   | ERROR| clock drift detected| Indicates CART comms layer has detected clock skew between engines.| NTP may not be syncing clocks across DAOS system.      |
-| pool\_rebuild\_started| INFO\_ONLY| NOTICE   | Pool rebuild started.| Indicates a pool rebuild has started. The event data field contains pool map version and pool operation identifier. | When a pool rank becomes unavailable a rebuild will be triggered.   |
+| engine\_clock\_drift| INFO\_ONLY   | ERROR| clock drift detected| Indicates CART comms layer has detected clock skew between engines.| NTP may not be syncing clocks across the DAOS system.      |
+| pool\_rebuild\_started| INFO\_ONLY| NOTICE   | Pool rebuild started.| Indicates a pool rebuild has started. The event data field contains a pool map version and pool operation identifier. | When a pool rank becomes unavailable, a rebuild will be triggered.   |
 | pool\_rebuild\_finished| INFO\_ONLY| NOTICE| Pool rebuild finished.| Indicates a pool rebuild has finished successfully. The event data field includes the pool map version and pool operation identifier.  | N/A|
 | pool\_rebuild\_failed| INFO\_ONLY| ERROR| Pool rebuild failed: .| Indicates a pool rebuild has failed. The event data field includes the pool map version and pool operation identifier.  provides a string representation of DER code.| N/A                          |
-| pool\_replicas\_updated| STATE\_CHANGE| NOTICE| List of pool service replica ranks has been updated.| Indicates a pool service replica list has changed. The event contains the new service replica list in a custom payload. | When a pool service replica rank becomes unavailable a new rank is selected to replace it (if available). |
-| pool\_durable\_format\_incompat| INFO\_ONLY| ERROR| incompatible layout version:  not in [, ]| Indicates the given pool's layout version does not match any of the versions supported by the currently running DAOS software.| DAOS engine is started with pool data in local storage that has an incompatible layout version. |
-| container\_durable\_format\_incompat| INFO\_ONLY| ERROR| incompatible layout version[:  not in [, \]| Indicates the given container's layout version does not match any of the versions supported by the currently running DAOS software.| DAOS engine is started with container data in local storage that has an incompatible layout version.|
-| rdb\_durable\_format\_incompatible| INFO\_ONLY| ERROR| incompatible layout version[:  not in [, ]] OR incompatible DB UUID:  | Indicates the given RDB's layout version does not match any of the versions supported by the currently running DAOS software, or the given RDB's UUID does not match the expected UUID (usually because the RDB belongs to a pool created by a pre-2.0 DAOS version).| DAOS engine is started with rdb data in local storage that has an incompatible layout version.|
+| pool\_replicas\_updated| STATE\_CHANGE| NOTICE| List of pool service replica ranks has been updated.| Indicates a pool service replica list has changed. The event contains the new service replica list in a custom payload. | When a pool service replica rank becomes unavailable, a new rank is selected to replace it (if available). |
+| pool\_durable\_format\_incompat| INFO\_ONLY| ERROR| incompatible layout version:  not in [, ]| Indicates the given pool's layout version does not match any of the versions supported by the currently running DAOS software.| DAOS engine starts with pool data in local storage with an incompatible layout version. |
+| container\_durable\_format\_incompat| INFO\_ONLY| ERROR| incompatible layout version[:  not in [, \]| Indicates the given container's layout version does not match any of the versions supported by the currently running DAOS software.| DAOS engine starts with container data in local storage with an incompatible layout version.|
+| rdb\_durable\_format\_incompatible| INFO\_ONLY| ERROR| incompatible layout version[:  not in [, ]] OR incompatible DB UUID:  | Indicates the given RDB's layout version does not match any of the versions supported by the currently running DAOS software, or the given RDB's UUID does not match the expected UUID (usually because the RDB belongs to a pool created by a pre-2.0 DAOS version).| DAOS engine starts with rdb data in local storage with an incompatible layout version.|
 | swim\_rank\_alive| STATE\_CHANGE| NOTICE| TBD| The SWIM protocol has detected the specified rank is responsive.| A remote DAOS engine has become responsive.|
 | swim\_rank\_dead| STATE\_CHANGE| NOTICE| SWIM rank marked as dead.| The SWIM protocol has detected the specified rank is unresponsive.| A remote DAOS engine has become unresponsive.|
-| system\_start\_failed| INFO\_ONLY| ERROR| System startup failed, | Indicates that a user initiated controlled startup failed.  shows which ranks failed.| Ranks failed to start.|
-| system\_stop\_failed| INFO\_ONLY| ERROR| System shutdown failed during  action,   | Indicates that a user initiated controlled shutdown failed.  identifies the failing shutdown action and  shows which ranks failed.| Ranks failed to stop.|
-
+| system\_start\_failed| INFO\_ONLY| ERROR| System startup failed, | Indicates that a user-initiated controlled startup failed.  shows which ranks failed.| Ranks failed to start.|
+| system\_stop\_failed| INFO\_ONLY| ERROR| System shutdown failed during  action,   | Indicates that a user-initiated controlled shutdown failed.  identifies the failing shutdown action and  shows which ranks failed.| Ranks failed to stop.|
 
 ## System Logging
 
@@ -76,14 +74,15 @@ The command accepts 0-1 positional arguments.
 If no args are passed, then the log masks for each running engine will be reset
 to the value of engine "log\_mask" parameter in the server config file (as set
 at the time of daos\_server startup).
-If a single arg is passed, then this will be used as the log masks setting.
+If a single arg is passed, this will be used as the log masks setting.
 
 Example usage:
-```
+
+```bash
 dmg server set-logmasks ERR,mgmt=DEBUG
 ```
 
-The input string should look like PREFIX1=LEVEL1,PREFIX2=LEVEL2,... where the
+The input string should look like PREFIX1=LEVEL1,PREFIX2=LEVEL2,... where the
 syntax is identical to what is expected by the 'D_LOG_MASK' environment variable.
 If the 'PREFIX=' part is omitted, then the level applies to all defined
 facilities (e.g., a value of 'WARN' sets everything to WARN).
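
As a rough illustration of that syntax, a token either carries a 'PREFIX=' part or applies to all facilities. The helper below is a hypothetical sketch of splitting one token, not the actual D\_LOG\_MASK parser:

```c
#include <string.h>

/* Splits one "PREFIX=LEVEL" token from the mask string. A bare
 * "LEVEL" token yields an empty prefix, meaning the level applies to
 * all facilities. Returns 0 on success, -1 if a buffer of size n
 * would overflow. Illustrative only. */
int split_mask_token(const char *tok, char *prefix, char *level, size_t n)
{
    const char *eq = strchr(tok, '=');

    if (eq == NULL) {
        if (strlen(tok) >= n)
            return -1;
        prefix[0] = '\0';
        strcpy(level, tok);
        return 0;
    }
    if ((size_t)(eq - tok) >= n || strlen(eq + 1) >= n)
        return -1;
    memcpy(prefix, tok, (size_t)(eq - tok));
    prefix[eq - tok] = '\0';
    strcpy(level, eq + 1);
    return 0;
}
```
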
@@ -91,10 +90,9 @@ facilities (e.g., a value of 'WARN' sets everything to WARN).
 Supported priority levels for engine logging are FATAL, CRIT, ERR, WARN, NOTE,
 INFO, DEBUG.
 
-
 ## System Monitoring
 
-The DAOS servers maintain a set of metrics on I/O and internal state
+The DAOS servers maintain a set of metrics on I/O and the internal state
 of the DAOS processes. The metrics collection is very lightweight and
 is always enabled. It cannot be manually enabled or disabled.
 
@@ -126,7 +124,7 @@ collection. This endpoint presents the data in a format compatible with
 To enable remote telemetry collection, update the control plane section of
 your DAOS server configuration file:
 
-```
+```yaml
 telemetry_port: 9191
 ```
 
@@ -137,19 +135,18 @@ its local metrics via the endpoint: `http://:/metrics`
 
 ### Remote metrics collection with dmg telemetry
 
-The `dmg telemetry` administrative command can be used to query an individual DAOS
-server for metrics. Only one DAOS host may be queried at a time.
+The `dmg telemetry` administrative command queries an individual DAOS server for metrics. Only one DAOS host may be queried at a time.
 The command will return information for all engines on that server,
 identified by the "rank" attribute.
 
 The metrics have the same names as seen on the telemetry web endpoint.
 
-By default, the `dmg telemetry` command produces human readable output.
+By default, the `dmg telemetry` command produces human-readable output.
 The output can be formatted in JSON by running `dmg -j telemetry`.
 
 To list all metrics for the server with their name, type and description:
 
-```
+```bash
 dmg telemetry [-l ] [-p ] metrics list
 ```
 
@@ -157,7 +154,7 @@ If no host is provided, the default is localhost. The default port is 9191.
 
 To query the values of one or more metrics on the server:
 
-```
+```bash
 dmg telemetry [-l ] [-p ] metrics query [-m ]
 ```
 
@@ -171,6 +168,8 @@ provided, all metrics are queried.
 Prometheus is the preferred way to collect metrics from multiple DAOS servers
 at the same time.
 
+#### Configure an existing Prometheus server
+
 To integrate with Prometheus, add a new job to your Prometheus server's
 configuration file, with the `targets` set to the hosts and telemetry ports of
 your DAOS servers:
@@ -183,12 +182,14 @@ scrape_configs:
   - targets: [':']
 ```
 
-If there is not already a Prometheus server set up, DMG offers quick setup
+#### Configure a new Prometheus server
+
+If there is not already a Prometheus server set up, DMG offers quick setup
 options for DAOS.
 
 To install and configure Prometheus on the local machine:
 
-```
+```bash
 dmg telemetry config [-i <install-dir>]
 ```
 
@@ -201,13 +202,14 @@ written to `$HOME/.prometheus.yml`.
 
 To start the Prometheus server with the configuration file generated by `dmg`:
 
-```
+```bash
 prometheus --config-file=$HOME/.prometheus.yml
 ```
 
 ## Storage Operations
 
 Storage subcommands can be used to operate on host storage.
+
 ```bash
 $ dmg storage --help
 Usage:
@@ -226,6 +228,7 @@ Available commands:
 
 Storage query subcommands can be used to get detailed information about how DAOS
 is using host storage.
+
 ```bash
 $ dmg storage query --help
 Usage:
@@ -247,6 +250,7 @@ To query SCM and NVMe storage space usage and show how much space is available t
 create new DAOS pools with, run the following command:
 
 - Query Per-Server Space Utilization:
+
 ```bash
 $ dmg storage query usage --help
 Usage:
@@ -256,9 +260,10 @@ Usage:
 ```
 
 The command output shows online DAOS storage utilization, only including storage
-statistics for devices that have been formatted by DAOS control-plane and assigned
+statistics for devices that have been formatted by the DAOS control plane and assigned
 to a currently running rank of the DAOS system. This represents the storage that
 can host DAOS pools.
+
 ```bash
 $ dmg storage query usage
 Hosts   SCM-Total SCM-Free SCM-Used NVMe-Total NVMe-Free NVMe-Used
@@ -291,6 +296,7 @@ overhead).
 Useful admin dmg commands to query NVMe SSD health:
 
 - Query Per-Server Metadata:
+
 ```bash
 $ dmg storage query list-devices --help
 Usage:
@@ -304,6 +310,7 @@ Usage:
       -u, --uuid=         Device UUID (all devices if blank)
       -e, --show-evicted  Show only evicted faulty devices
 ```
+
 ```bash
 $ dmg storage query list-pools --help
 Usage:
@@ -316,6 +323,7 @@ Usage:
       -u, --uuid=     Pool UUID (all pools if blank)
       -v, --verbose   Show more detail about pools
 ```
+
 ```bash
 $ dmg storage scan --nvme-meta --help
 Usage:
@@ -334,10 +342,11 @@ stored SMD device and pool tables, respectively. The device table maps the inter
 device UUID to attached VOS target IDs. The rank number of the server where the device
 is located is also listed, along with the current device state. The current device
 states are the following:
-  - NORMAL: a fully functional device in-use by DAOS
-  - EVICTED: the device is no longer in-use by DAOS
-  - UNPLUGGED: the device is currently unplugged from the system (may or not be evicted)
-  - NEW: the device is plugged and available and not currently in-use by DAOS
+
+- NORMAL: a fully functional device in use by DAOS
+- EVICTED: the device is no longer in use by DAOS
+- UNPLUGGED: the device is currently unplugged from the system (may or may not be evicted)
+- NEW: the device is plugged and available and not currently in use by DAOS
 
 To list only devices in the EVICTED state, use the (--show-evicted|-e) option to the
 list-devices command.
@@ -349,8 +358,8 @@ are both VMD devices with transport addresses in the BDF format behind the VMD a
 0000:5d:05.5.
 
 The pool table maps the DAOS pool UUID to attached VOS target IDs and will list all
-of the server ranks that the pool is distributed on. With the additional verbose flag,
-the mapping of SPDK blob IDs to VOS target IDs will also be displayed.
+of the server ranks that the pool is distributed on. The mapping of SPDK blob IDs to VOS target IDs will also be displayed with the additional verbose flag.
+
 ```bash
 $ dmg -l boro-11,boro-13 storage query list-devices
 -------
@@ -368,6 +377,7 @@ boro-11
     UUID:2ccb8afb-5d32-454e-86e3-762ec5dca7be [TrAddr:5d0505:03:00.0]
       Targets:[1 3] Rank:1 State:NORMAL
 ```
+
 ```bash
 $ dmg -l boro-11,boro-13 storage query list-pools
 -------
@@ -390,6 +400,7 @@ boro-11
 ```
 
 - Query Storage Device Health Data:
+
 ```bash
 $ dmg storage query device-health --help
 Usage:
@@ -400,6 +411,7 @@ Usage:
 [device-health command options]
       -u, --uuid=     Device UUID
 ```
+
 ```bash
 $ dmg storage query target-health --help
 Usage:
@@ -411,6 +423,7 @@ Usage:
       -r, --rank=     Server rank hosting target
       -t, --tgtid=    VOS target ID to query
 ```
+
 ```bash
 $ dmg storage scan --nvme-health --help
 Usage:
@@ -427,12 +440,12 @@ Usage:
 The NVMe storage query device-health and target-health commands query the device
 health data, including NVMe SSD health stats and in-memory I/O error and checksum
 error counters. The server rank and device state are also listed. The device health
-data can either be queried by device UUID (device-health command) or by VOS target ID
-along with the server rank (target-health command). The same device health information
+data can be queried either by device UUID (device-health command) or by VOS target ID along with the server rank (target-health command). The same device health information
 is displayed with both command options. Additionally, vendor-specific SMART stats are
-displayed, currently for Intel devices only. Note: A reasonable timed workload > 60 min
-must be ran for the SMART stats to register (Raw values are 65535). Media wear percentage
-can be calculated by dividing by 1024 to find the percentage of the maximum rated cycles.
+displayed, currently for Intel devices only. Note: A reasonable timed workload of > 60 min
+must be run for the SMART stats to register (Raw values are 65535). Media wear percentage
+can be calculated by dividing by 1024 to find the percentage of the maximum rated cycles.
+
 ```bash
 $ dmg -l boro-11 storage query device-health --uuid=5bd91603-d3c7-4fb7-9a71-76bc25690c19
 or
@@ -487,9 +500,11 @@ boro-11
         Host Bytes Written:52114
 
 ```
+
 #### Exclusion and Hotplug
 
 - Manually exclude an NVMe SSD:
+
 ```bash
 $ dmg storage set nvme-faulty --help
 Usage:
@@ -504,6 +519,7 @@ Usage:
 
 To manually evict an NVMe SSD (auto eviction will be supported in a future release),
 the device state needs to be set to "FAULTY" by running the following command:
+
 ```bash
 $ dmg -l boro-11 storage set nvme-faulty --uuid=5bd91603-d3c7-4fb7-9a71-76bc25690c19
 -------
@@ -512,47 +528,49 @@ boro-11
   Devices
     UUID:5bd91603-d3c7-4fb7-9a71-76bc25690c19 Targets:[] Rank:1 State:FAULTY
 ```
+
 The device state will transition from "NORMAL" to "FAULTY" (shown above), which will
 trigger the faulty device reaction (all targets on the SSD will be rebuilt, and the SSD
 will remain evicted until device replacement occurs).
 
 !!! note
-    Full NVMe hot plug capability will be available and supported in DAOS 2.2 release.
+    Full NVMe hot-plug capability will be available and supported in the DAOS 2.2 release.
     Use is currently intended for testing only and is not supported for production.
 
-- To use a newly added (hot-inserted) SSD it needs to be unbound from the kernel driver
+- To use a newly added (hot-inserted) SSD, it needs to be unbound from the kernel driver
 and bound instead to a user-space driver so that the device can be used with DAOS.
 
-To rebind a SSD on a single host, run the following command (replace SSD PCI address and
+To rebind an SSD on a single host, run the following command (replace SSD PCI address and
 hostname with appropriate values):
+
 ```bash
 $ dmg storage nvme-rebind -a 0000:84:00.0 -l wolf-167
 Command completed successfully
 ```
 
-The device will now be bound to a user-space driver (e.g. VFIO) and can be accessed by
+The device will now be bound to a user-space driver (e.g., VFIO) and can be accessed by
 DAOS I/O engine processes (and used in the following `dmg storage replace nvme` command
 as a new device).
 
-- Once an engine is using a newly added (hot-inserted) SSD it can be added to the persistent
-NVMe config (stored on SCM) so that on engine restart the new device will be used.
+- Once an engine uses a newly added (hot-inserted) SSD, it can be added to the persistent NVMe config (stored on SCM) so that the new device is used on engine restart.
 
 To update the engine's persistent NVMe config with the new SSD transport address, run the
 following command (replace SSD PCI address, engine index and hostname with appropriate values):
+
 ```bash
 $ dmg storage nvme-add-device -a 0000:84:00.0 -e 0 -l wolf-167
 Command completed successfully
 ```
 
-The optional [--tier-index|-t] command parameter can be used to specify which storage tier to
-insert the SSD into, if specified then the server will attempt to insert the device into the tier
-specified by the index, if not specified then the server will attempt to insert the device into
+The optional [--tier-index|-t] command parameter specifies which storage tier to
+insert the SSD into. If specified, the server will attempt to insert the device
+into the tier with that index. If not specified, the server will attempt to insert the device into
 the bdev tier with the lowest index value (the first bdev tier).
 
-The device will now be registered in the engine's persistent NVMe config so that when restarted,
-the newly added SSD will be used.
+The device will now be registered in the engine's persistent NVMe config so that when restarted, the newly added SSD is used.
 
 - Replace an excluded SSD with a New Device:
+
 ```bash
 $ dmg storage replace nvme --help
 Usage:
@@ -568,6 +586,7 @@ Usage:
 
 To replace an NVMe SSD with an evicted device and reintegrate it into use with
 DAOS, run the following command:
+
 ```bash
 $ dmg -l boro-11 storage replace nvme --old-uuid=5bd91603-d3c7-4fb7-9a71-76bc25690c19 --new-uuid=80c9f1be-84b9-4318-a1be-c416c96ca48b
 -------
@@ -576,7 +595,8 @@ boro-11
   Devices
     UUID:80c9f1be-84b9-4318-a1be-c416c96ca48b Targets:[] Rank:1 State:NORMAL
 ```
-The old, now replaced device will remain in an "EVICTED" state until it is unplugged.
+
+The old, now replaced device will remain in an "EVICTED" state until unplugged.
 The new device will transition from a "NEW" state to a "NORMAL" state (shown above).
 
 - Reuse a FAULTY Device:
@@ -584,6 +604,7 @@ The new device will transition from a "NEW" state to a "NORMAL" state (shown abo
 In order to reuse a device that was previously set as FAULTY and evicted from the DAOS
 system, an admin can run the following command (setting the old device UUID to be the
 new device UUID):
+
 ```bash
 $ dmg -l boro-11 storage replace nvme --old-uuid=5bd91603-d3c7-4fb7-9a71-76bc25690c19 --new-uuid=5bd91603-d3c7-4fb7-9a71-76bc25690c19
 -------
@@ -592,19 +613,20 @@ boro-11
   Devices
     UUID:5bd91603-d3c7-4fb7-9a71-76bc25690c19 Targets:[] Rank:1 State:NORMAL
 ```
-The FAULTY device will transition from an "EVICTED" state back to a "NORMAL" state,
+
+The FAULTY device will transition from an "EVICTED" state back to a "NORMAL" state
 and will again be available for use with DAOS. The use case of this command will mainly
-be for testing or for accidental device eviction.
+be for testing or recovering from accidental device eviction.
 
 #### Identification
 
 The SSD identification feature is simply a way to quickly and visually locate a
-device. It requires the use of Intel VMD (Volume Management Device), which needs
-to be physically available on the hardware as well as enabled in the system BIOS.
-The feature supports two LED device events: locating a healthy device and locating
-an evicted device.
+device. It requires Intel VMD (Volume Management Device), which needs
+to be physically available on the hardware and enabled in the system BIOS.
+The feature supports two LED device events: locating a healthy device and locating an evicted device.
 
 - Locate a Healthy SSD:
+
 ```bash
 $ dmg storage identify vmd --help
 Usage:
@@ -618,6 +640,7 @@ Usage:
 
 To quickly identify an SSD in question, an administrator can run the following
 command:
+
 ```bash
 $ dmg -l boro-11 storage identify vmd --uuid=6fccb374-413b-441a-bfbe-860099ac5e8d
 
@@ -625,13 +648,13 @@ If a non-VMD device UUID is used with the command, the following error will occu
 localhost DAOS error (-1010): DER_NOSYS
 
 ```
+
 The status LED on the VMD device is now set to an "IDENTIFY" state, represented
 by a quick, 4Hz blinking amber light. The device will quickly blink by default for
-about 60 seconds and then return to the default "OFF" state. The LED event duration
-can be customized by setting the VMD_LED_PERIOD environment variable if a duration
+about 60 seconds and return to the default "OFF" state. The LED event duration
+can be customized by setting the VMD_LED_PERIOD environment variable when a duration
 other than the default value is desired.
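As a hedged sketch (the variable name comes from the text above, but the unit is an assumption here; consult the release documentation for the exact semantics):

```bash
# Hypothetical: VMD_LED_PERIOD is assumed to take a duration in seconds.
# Export it in the engine's environment before triggering the identify event.
export VMD_LED_PERIOD=120
```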
 
-
 - Locate an Evicted SSD:
 
 If an NVMe SSD is evicted, the status LED on the VMD device is set to a "FAULT"
@@ -647,14 +670,14 @@ that join the DAOS system. Once an engine has joined the DAOS system, it is
 identified by a unique system "rank". Multiple ranks can reside on the same
 host machine, accessible via the same network address.
 
-A DAOS system can be shutdown and restarted to perform maintenance and/or
-reboot hosts. Pool data and state will be maintained providing no changes are
+A DAOS system can be shut down and restarted to perform maintenance and/or
+reboot hosts. Pool data and state will be maintained, provided no changes are
 made to the rank's metadata stored on persistent memory.
 
-Storage reformat can also be performed after system shutdown. Pools will be
+Storage reformat can also be performed after a system shutdown. Pools will be
 removed and storage wiped.
 
-System commands will be handled by a DAOS Server acting as access point and
+System commands will be handled by a DAOS Server acting as an access point and
 listening on the address specified in the DMG config file "hostlist" parameter.
 See
 [`daos_control.yml`](https://github.com/daos-stack/daos/blob/release/2.2/utils/config/daos_control.yml)
@@ -666,6 +689,7 @@ At least one of the addresses in the hostlist parameters should match one of the
 that is supplied when starting `daos_server` instances.
 
 - Commands used to manage a DAOS System:
+
 ```bash
 $ dmg system --help
 Usage:
@@ -675,20 +699,21 @@ Usage:
 
 Available commands:
   cleanup       Clean up all resources associated with the specified machine
-  erase         Erase system metadata prior to reformat
+  erase         Erase system metadata before reformat
   leader-query  Query for current Management Service leader
   list-pools    List all pools in the DAOS system
   query         Query DAOS system status
   start         Perform start of stopped DAOS system
-  stop          Perform controlled shutdown of DAOS system
+  stop          Perform a controlled shutdown of the DAOS system
 ```
 
 ### Membership
 
 The system membership refers to the DAOS engine processes that have registered,
-or joined, a specific DAOS system.
+or joined, a specific DAOS system.
 
 - Query System Membership:
+
 ```bash
 $ dmg system query --help
 Usage:
@@ -702,25 +727,26 @@ Usage:
       -v, --verbose     Display more member details
 ```
 
-The `--ranks` takes a pattern describing rank ranges e.g., 0,5-10,20-100.
+The `--ranks` option takes a pattern describing rank ranges, e.g., 0,5-10,20-100.
 The `--rank-hosts` takes a pattern describing host ranges e.g. storagehost[0,5-10],10.8.1.[20-100].
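Illustrative only (this is not a dmg feature): the rank-range pattern above denotes the individual ranks that the following POSIX shell sketch expands.

```bash
# Expand a dmg-style rank-range pattern (e.g. "0,5-10,20-22") into the
# individual rank numbers it denotes. For illustration only.
expand_ranks() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"
  done | xargs
}
expand_ranks "0,5-10,20-22"   # prints: 0 5 6 7 8 9 10 20 21 22
```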
 
-The output table will provide system rank mappings to host address and instance
+The output table will provide system rank mappings to the host address and instance
 UUID, in addition to the rank state.
 
-DAOS engines run a gossip-based protocol called SWIM that provides efficient
+DAOS engines run a gossip-based SWIM protocol that provides efficient
 and scalable fault detection. When an engine is reported as unresponsive, a
-RAS event is raised and the associated engine is marked as excluded in the
+RAS event is raised, and the associated engine is marked as excluded in the
 output of `dmg system query`. The engine can be stopped (see next section)
-and then restarted to rejoin the system. An failed engine might also be excluded
-from the pools it hosted, please check the pool operation section on how to
+and then restarted to rejoin the system. A failed engine might also be excluded
+from the pools it hosted. Please check the pool operation section on how to
 reintegrate an excluded engine.
 
 ### Shutdown
 
-When up and running, the entire system can be shutdown.
+When up and running, the entire system can be shut down with the `dmg system stop` command.
 
 - Stop a System:
+
 ```bash
 $ dmg system stop --help
 Usage:
@@ -734,10 +760,10 @@ Usage:
           --force       Force stop DAOS system members
 ```
 
-The `--ranks` takes a pattern describing rank ranges e.g., 0,5-10,20-100.
+The `--ranks` option takes a pattern describing rank ranges, e.g., 0,5-10,20-100.
 The `--rank-hosts` takes a pattern describing host ranges e.g. storagehost[0,5-10],10.8.1.[20-100].
 
-The output table will indicate action and result.
+The output table will indicate the action and result.
 
 While the engines are stopped, the DAOS servers will continue to
 operate and listen on the management network.
@@ -745,22 +771,23 @@ operate and listen on the management network.
 !!! warning
     All engines monitor each other and pro-actively exclude unresponsive
     members. It is critical to properly stop a DAOS system as with dmg in
-    the case of a planned maintenance on all or a majority of the DAOS
+    the case of planned maintenance on all or a majority of the DAOS
     storage nodes. An abrupt reboot of the storage nodes might result
-    in massive exclusion that will take time to recover.
+    in a massive exclusion that will take time to recover.
 
-The force option can be passed to for cases when a clean shutown is not working.
-Monitoring is not disabled in this case and spurious exclusion might happen,
+The force option can be passed for cases when a clean shutdown is not working.
+Monitoring is not disabled in this case, and spurious exclusion might happen,
 but the engines are guaranteed to be killed.
 
-dmg also allows to stop a subsection of engines identified by ranks or hostnames.
+dmg also allows stopping a subsection of engines identified by ranks or hostnames.
 This is useful to stop (and restart) misbehaving engines.
 
 ### Start
 
-The system can be started backup after a controlled shutdown.
+The system can be started back up after a controlled shutdown.
 
 - Start a System:
+
 ```bash
 $ dmg system start --help
 Usage:
@@ -773,17 +800,17 @@ Usage:
           --rank-hosts= Hostlist representing hosts whose managed ranks are to be operated on
 ```
 
-The `--ranks` takes a pattern describing rank ranges e.g., 0,5-10,20-100.
+The `--ranks` option takes a pattern describing rank ranges, e.g., 0,5-10,20-100.
 The `--rank-hosts` takes a pattern describing host ranges e.g. storagehost[0,5-10],10.8.1.[20-100].
 
-The output table will indicate action and result.
+The output table will indicate the action and result.
 
 DAOS I/O Engines will be started.
 
 As for shutdown, a subsection of engines identified by ranks or hostname can be
 specified on the command line:
 
-If the ranks were excluded from pools (e.g., unclean shutdown), they will need to
+If the ranks were excluded from pools (e.g., after an unclean shutdown), they will need to
 be reintegrated. Please see the pool operation section for more information.
 
 ### Storage Reformat
@@ -800,7 +827,7 @@ config file's `hostlist` parameter.
 - if system membership has records of previously running ranks, storage
 allocated to those ranks will be formatted
 
-The output table will indicate action and result.
+The output table will indicate the action and result.
 
 DAOS I/O Engines will be started, and all DAOS pools will have been removed.
 
@@ -818,29 +845,27 @@ DAOS I/O Engines will be started, and all DAOS pools will have been removed.
     $ wipefs -a /dev/pmem0
     $ wipefs -a /dev/pmem0
     ```
+
     Then restart DAOS Servers and format.
 
-
 ### System Erase
 
-To erase the DAOS sorage configuration, the `dmg system erase`
-command can be used. Before doing this, the affected engines need to be
+To erase the DAOS storage configuration, use the `dmg system erase`
+command. Before doing this, the affected engines need to be
 stopped by running `dmg system stop` (if necessary with the `--force` flag).
-The erase operation will destroy any pools that may still exist, and will
+The erase operation will destroy any pools that may still exist and will
 unconfigure the storage. It will not stop the daos\_server process, so
-the `dmg` command can still be used. For example, the system can be
+the `dmg` command can still be used. For example, the system can be
 formatted again by running `dmg storage format`.
 
 !!! note
     Note that `dmg system erase` does not currently reset the SCM.
     The `/dev/pmemX` devices will remain mounted,
     and the PMem configuration will not be reset to Memory Mode.
-    To completely unconfigure the SCM, it is advisable to run
-    `daos_server storage prepare --scm-only --reset` which will
-    completely reset the PMem. A reboot will be required to finalize
+    To completely unconfigure the SCM, it is advisable to run
+    `daos_server storage prepare --scm-only --reset`. A reboot will be required to finalize
     the change of the PMem allocation goals.
 
-
 ### System Extension
 
 To add a new server to an existing DAOS system, one should install:
@@ -849,21 +874,21 @@ To add a new server to an existing DAOS system, one should install:
 - the server yaml file pointing to the access points of the running
   DAOS system
 
-The daos\_control.yml file should also be updated to include the new DAOS server.
+The daos\_control.yml file must also be updated to include the new DAOS server.
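For example, a minimal `daos_control.yml` after the extension might look like the following sketch (the hostnames are placeholders; 10001 is the default control port):

```yaml
# Hypothetical daos_control.yml including the newly added server
hostlist: ['existing-node-01', 'existing-node-02', 'new-storage-node']
port: 10001
```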
 
-Then starts the daos\_server via systemd and format the new server via
+Then start the daos\_server via systemd and format the new server via
 dmg as follows:
 
-```
-$ dmg storage format -l ${new_storage_node}
+```bash
+$ dmg storage format -l ${new_storage_node}
 ```
 
 new_storage_node should be replaced with the hostname or the IP address of the
-new storage node (comma separated list or range of hosts for multiple nodes)
+new storage node (comma-separated list or range of hosts for multiple nodes)
 to be added.
 
 Upon completion of the format operation, the new storage nodes will join
-the system (this can be checked with `dmg system query -v`).
+the system (use `dmg system query -v` to check the status).
 
 !!! note
     New pools created after the extension will automatically use the newly added
@@ -873,19 +898,19 @@ the system (this can be checked with `dmg system query -v`).
 
 ## Software Upgrade
 
-The DAOS v2.0 wire protocol and persistent layout is not compatible with
-previous DAOS versions and would require a reformat and all client and server
+The DAOS v2.0 wire protocol and persistent layout are not compatible with
+previous DAOS versions. An upgrade requires a reformat and all client and server
 nodes to be upgraded to a 2.x version.
 
 !!! warning
     Attempts to start DAOS v2.0 over a system formatted with a previous DAOS
     version will trigger a RAS event and cause all the engines to abort.
-    Similarly, a 2.0 DAOS client or engine will refuse to communicate with a
+    Similarly, a 2.0 DAOS client or engine will refuse to communicate with a
     peer that runs an incompatible version.
 
-DAOS v2.0 will maintain interoperability for both the wire protocol and
+DAOS v2.0 will maintain interoperability for the wire protocol and
 persistent layout with any future v2.x versions. That being said, it is
 required that all engines in the same system run the same DAOS version.
 
 !!! warning
-    Rolling upgrade is not supporting at this time.
+    Rolling upgrade is not supported at this time.
diff --git a/docs/admin/deployment.md b/docs/admin/deployment.md
index 49fb6834da4..ec4a415e72d 100644
--- a/docs/admin/deployment.md
+++ b/docs/admin/deployment.md
@@ -1,6 +1,6 @@
 # System Deployment
 
-The DAOS deployment workflow requires to start the DAOS server instances
+The DAOS deployment workflow requires starting the DAOS server instances
 early on to enable administrators to perform remote operations in parallel
 across multiple storage nodes via the dmg management utility. Security is
 guaranteed via the use of certificates.
@@ -9,7 +9,7 @@ storage hardware provisioning and would typically be run from a login
 node.
 
 After `daos_server` instances have been started on each storage node for the
-first time, `daos_server storage prepare --scm-only` will set PMem storage
+first time, `daos_server storage prepare --scm-only` will set PMem storage
 into the necessary state for use with DAOS when run on each host.
 Then `dmg storage format` formats persistent storage devices (specified in the
 server configuration file) on the storage nodes and writes necessary metadata
@@ -30,46 +30,41 @@ following steps:
 - [Validate](#system-validation) that the DAOS system is operational
 
 Note that starting the DAOS server instances can be performed automatically
-on boot if start-up scripts are registered with systemd.
+on boot if start-up scripts are registered with systemd.
 
 The following subsections will cover each step in more detail.
 
 ## DAOS Server Setup
 
-First of all, the DAOS server should be started to allow remote administration
-command to be executed via the dmg tool. This section describes the minimal
-DAOS server configuration and how to start it on all the storage nodes.
+First, the DAOS server must be started so that remote administration commands can be executed via the dmg tool. This section describes the minimal DAOS server configuration and how to start it on all the storage nodes.
 
 ### Example RPM Deployment Workflow
 
 A recommended workflow to get up and running is as follows:
 
-* Install DAOS Server RPMs - `daos_server` systemd services will start in
-  listening mode which means DAOS I/O engine processes will not be started as the
+- Install DAOS Server RPMs - `daos_server` systemd services will start in
+  listening mode, which means DAOS I/O engine processes will not be started as the
   server config file (default location at `/etc/daos/daos_server.yml`) has not
   yet been populated.
-
-* Run `dmg config generate -l  -a ` across the entire
+- Run `dmg config generate -l <hostlist> -a <access-points>` across the entire
   hostset (all the storage servers that are now running the `daos_server` service
   after RPM install).
   The command will only generate a config if hardware setups on all the hosts are
   similar and have been given sensible NUMA mappings.
   Adjust the hostset until you have a set with homogeneous hardware configurations.
-
-* Once a recommended config file can be generated, copy it to the server config
+- Once a recommended config file can be generated, copy it to the server config
   file default location (`/etc/daos/daos_server.yml`) on each DAOS Server host
   and restart all `daos_server` services.
   An example command to restart the services is
   `clush -w machines-[118-121,130-133] "sudo systemctl restart daos_server"`.
-  The services should prompt for format on restart and after format is triggered
+  The services should prompt for format on restart, and after format is triggered
   from `dmg`, the DAOS I/O engine processes should start.
 
 ### Server Configuration File
 
-The `daos_server` configuration file is parsed when starting the `daos_server`
-process.
+The `daos_server` configuration file is parsed when starting the `daos_server` process.
 The configuration file location can be specified on the command line
-(`daos_server -h` for usage) otherwise it will be read from the default location
+(`daos_server -h` for usage); otherwise, it will be read from the default location
 (`/etc/daos/daos_server.yml`).
 
 Parameter descriptions are specified in
@@ -78,11 +73,11 @@ and example configuration files in the
 [examples](https://github.com/daos-stack/daos/tree/release/2.2/utils/config/examples)
 directory.
 
-Any option supplied to `daos_server` as a command line option or flag will
-take precedence over equivalent configuration file parameter.
+Any option supplied to `daos_server` as a command-line option or flag will
+take precedence over equivalent configuration file parameters.
 
 For convenience, active parsed configuration values are written to a temporary
-file for reference, and the location will be written to the log.
+file for reference, and the location will be written to the log.
 
 #### Configuration Options
 
@@ -93,11 +88,11 @@ available at
 
 The location of this configuration file is determined by first checking
 for the path specified through the -o option of the `daos_server` command
-line, if unspecified then `/etc/daos/daos_server.yml` is used.
+line; if unspecified, `/etc/daos/daos_server.yml` is used.
 
 Refer to the example configuration file
 [`daos_server.yml`](https://github.com/daos-stack/daos/blob/release/2.2/utils/config/daos_server.yml)
-for latest information and examples.
+for the latest information and examples.
 
 At this point of the process, the servers: and provider: section of the yaml
 file can be left blank and will be populated in the subsequent sections.
@@ -135,53 +130,48 @@ Application Options:
 
 The command will output recommended config file if supplied requirements are
 met. Requirements will be derived based on the number of NUMA nodes present on
-the hosts if '--num-engines' is not specified on the commandline.
+the hosts if '--num-engines' is not specified on the command line.
 
 - '--num-engines' specifies the number of engine sections to populate in the
   config file output.
   Each section will specify a persistent memory (PMem) block devices that must be
   present on the host in addition to a fabric network interface and SSDs all
   bound to the same NUMA node.
-  If not set explicitly on the commandline, default is the number of NUMA nodes
+  If not set explicitly on the command line, the default is the number of NUMA nodes
   detected on the host.
 
-- '--min-ssds' specifies the minimum number of NVMe SSDs per-engine that need
+- '--min-ssds' specifies the minimum number of NVMe SSDs per engine that need
   to be present on each host.
   For each engine entry in the generated config, at least this number of SSDs
   must be bound to the NUMA node that matches the affinity of the PMem device
-  and fabric network interface associated with the engine.
-  If not set on the commandline, default is "1".
-  If set to "0" NVMe SSDs will not be added to the generated config and SSD
+  and fabric network interface associated with the engine.
+  If not set on the command line, the default is "1".
+  If set to "0", NVMe SSDs will not be added to the generated config, and SSD
   validation will be disabled.
 
-- '--net-class' specifies preference for network interface class, options are
-  'ethernet', 'infiband' or 'best-available'.
+- '--net-class' specifies a preference for network interface class, options are
+  'ethernet', 'infiniband' or 'best-available'.
   'best-available' will attempt to choose the most performant (as judged by
-  libfabric) sets of interfaces and supported provider that match the number and
+  libfabric) sets of interfaces and supported providers that match the number and
   NUMA affinity of PMem devices.
-  If not set on the commandline, default is "best-available".
+  If not set on the command line, the default is "best-available".
 
 The configuration file that is generated by the command and output to stdout
-can be copied to a file and used on the relevant hosts and used as server
+can be copied to a file on the relevant hosts and used as the server
 config to determine the starting environment for 'daos_server' instances.
 
 Config file output will not be generated in the following cases:
 
 - PMem device count, capacity or NUMA mappings differ on any of the hosts in the
   hostlist (the hostlist can be specified either in the 'dmg' config file or on
-  the commandline).
-
-- NVMe SSD count, PCI address distribution or NUMA affinity differs on any of
+  the command line).
+- NVMe SSD count, PCI address distribution, or NUMA affinity differs on any of
   the hosts in the host list.
-
 - NUMA node count can't be detected on the hosts or differs on any host in the
   host list.
-
 - PMem device count or NUMA affinity doesn't meet the 'num-engines' requirement.
-
 - NVMe device count or NUMA affinity doesn't meet the 'min-ssds' requirement.
-
-- network device count or NUMA affinity doesn't match the configured PMem
+- Network device count or NUMA affinity doesn't match the configured PMem
   devices, taking into account any specified network device class preference
   (Ethernet or InfiniBand).
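The homogeneity requirement behind the failure cases above can be sketched as a small check. This is an illustrative Python sketch, not part of the `dmg` tool; the function name and the fact dictionary shape are assumptions:

```python
def config_generate_ok(host_facts):
    # host_facts maps each host to its hardware counts, e.g.
    # {"host1": {"pmem": 2, "nvme": 8, "numa": 2}, ...}.
    # Config generation requires every host to report the same view;
    # any mismatch in PMem, NVMe, or NUMA counts aborts generation.
    views = {tuple(sorted(facts.items())) for facts in host_facts.values()}
    return len(views) == 1
```

A mismatch on any single host is enough to suppress config file output for the whole hostlist.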
 
@@ -189,24 +179,27 @@ Config file output will not be generated in the following cases:
     Some CentOS 7.x kernels from before the 7.9 release were known to have a defect
     that prevented `ndctl` from being able to report the NUMA affinity for a
     namespace.
-    This prevents generation of dual engine configs using `dmg config generate`
+    This prevents the generation of dual engine configs using `dmg config generate`
     when running with one of the above-mentioned affected kernels.
 
 #### Certificate Configuration
 
 The DAOS security framework relies on certificates to authenticate
-components and administrators in addition to encrypting DAOS control plane
-communications. A set of certificates for a given DAOS system may be
+components and administrators, and to encrypt DAOS control plane
+communications.
+
+A set of certificates for a given DAOS system may be
 generated by running the `gen_certificates.sh` script provided with the DAOS
-software if there is not an existing TLS certificate infrastructure. The
-`gen_certificates.sh` script uses the `openssl` tool to generate all of the
-necessary files. We highly recommend using OpenSSL Version 1.1.1h or higher as
+software if there is not an existing TLS certificate infrastructure.
+
+The `gen_certificates.sh` script uses the `openssl` tool to generate all the
+necessary files. We recommend using OpenSSL Version 1.1.1h or higher as
 keys and certificates generated with earlier versions are vulnerable to attack.
 
 When DAOS is installed from RPMs, this script is provided in the base `daos` RPM, and
 may be invoked in the directory to which the certificates will be written. As part
 of the generation process, a new local Certificate Authority is created to handle
-certificate signing, and three role certificates are created:
+certificate signing, and three role certificates are created:
 
 ```bash
 # /usr/lib64/daos/certgen/gen_certificates.sh
@@ -260,7 +253,7 @@ Server nodes require:
   config file below)
 
 After the certificates have been securely distributed, the DAOS configuration files must be
-updated in order to enable authentication and secure communications. These examples assume
+updated to enable authentication and secure communications. These examples assume
 that the configuration and certificate files have been installed under `/etc/daos`:
 
 ```yaml
@@ -305,7 +298,7 @@ transport_config:
 
 The DAOS Server is started as a systemd service. The DAOS Server
 unit file is installed in the correct location when installing from RPMs.
-The DAOS Server will be run as `daos-server` user which will be created
+The DAOS Server will be run as the `daos-server` user, which is created
 during RPM install.
 
 If you wish to use systemd with a development build, you must copy the service
@@ -315,38 +308,38 @@ modify the ExecStart line to point to your `daos_server` binary.
 After modifying ExecStart, run the following command:
 
 ```bash
-$ sudo systemctl daemon-reload
+sudo systemctl daemon-reload
 ```
 
 Once the service file is installed you can start `daos_server`
 with the following commands:
 
 ```bash
-$ systemctl enable daos_server.service
-$ systemctl start daos_server.service
+systemctl enable daos_server.service
+systemctl start daos_server.service
 ```
 
 To check the component status use:
 
 ```bash
-$ systemctl status daos_server.service
+systemctl status daos_server.service
 ```
 
 If DAOS Server failed to start, check the logs with:
 
 ```bash
-$ journalctl --unit daos_server.service
+journalctl --unit daos_server.service
 ```
 
-After RPM install, `daos_server` service starts automatically running as user
+After RPM installation, the `daos_server` service starts automatically, running as the user
 "daos". The server config is read from `/etc/daos/daos_server.yml` and
 certificates are read from `/etc/daos/certs`.
 With no other admin intervention other than the loading of certificates,
-`daos_server` will enter a listening state enabling discovery of storage and
+`daos_server` will enter a listening state enabling the discovery of storage and
 network hardware through the `dmg` tool without any I/O engines specified in the
 configuration file. After device discovery and provisioning, an updated
 configuration file with a populated per-engine section can be stored in
-`/etc/daos/daos_server.yml`, and after reestarting the `daos_server` service
+`/etc/daos/daos_server.yml`, and after restarting the `daos_server` service
 it is then ready for the storage to be formatted.
 
 ## DAOS Server Remote Access
@@ -356,8 +349,7 @@ performed via the `dmg` utility.
 
To set the addresses of the DAOS Servers to manage, provide either:
 
-- `-l ` on the commandline when invoking, or
-
+- `-l ` on the command line when invoking, or
 - `hostlist: ` in the control configuration file
   [`daos_control.yml`](https://github.com/daos-stack/daos/blob/release/2.2/utils/config/daos_control.yml)
 
@@ -367,7 +359,7 @@ The first entry in the hostlist (after alphabetic then numeric sorting) will be
 assumed to be the access point as set in the server configuration file.
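The "alphabetic then numeric" ordering can be sketched in Python. This is an illustrative approximation, not DAOS code; the helper names and the exact tie-breaking between alphabetic and numeric chunks are assumptions:

```python
import re

def hostlist_key(host):
    # Split a hostname into alternating alphabetic and numeric chunks
    # so that "node9" sorts before "node10" (alphabetic first, then
    # numeric comparison on the digit runs).
    parts = re.findall(r"\d+|\D+", host)
    return [int(p) if p.isdigit() else p for p in parts]

def access_point(hosts):
    # The first entry after sorting is assumed to be the access point.
    return sorted(hosts, key=hostlist_key)[0]

print(access_point(["wolf-72", "wolf-71", "wolf-100"]))  # wolf-71
```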
 
 Local configuration files stored in the user directory will be used in
-preference to the default location e.g. `~/.daos_control.yml`.
+preference to the default location e.g., `~/.daos_control.yml`.
 
 ## Hardware Provisioning
 
@@ -402,11 +394,11 @@ as resource allocations on any available PMem modules (one region per NUMA
 node/socket). The regions are activated after BIOS reads the new resource
 allocations.
 Upon completion, the storage prepare command will prompt the admin to reboot
-the storage node(s) in order for the BIOS to activate the new storage
+the storage node(s) so that the BIOS can activate the new storage
 allocations.
 The storage prepare command does not initiate the reboot itself.
 
-After running the command a reboot will be required, the command will then need
+After running the command, a reboot will be required; the command will then need
 to be run for a second time to expose the namespace device to be used by DAOS.
 
 Example usage:
@@ -421,17 +413,17 @@ Example usage:
   regions) should be available on each of the hosts.
 
 On the second run, one namespace per region is created, and each namespace may
-take up to a few minutes to create. Details of the pmem devices will be
+take up to a few minutes to create. Details of the PMem devices will be
 displayed in JSON format on command completion.
 
-Upon successful creation of the pmem devices, the Intel(R) Optane(TM)
-persistent memory is configured and one can move on to the next step.
+Upon successful creation of the PMem devices, the Intel(R) Optane(TM)
+persistent memory is configured, and one can move on to the next step.
 
-If required, the pmem devices can be destroyed with the command
+If required, the PMem devices can be destroyed with the command
 `daos_server storage prepare --scm-only --reset`.
 
 All namespaces are disabled and destroyed. The SCM regions are removed by
-resetting modules into "MemoryMode" through resource allocations.
+resetting modules into "Memory Mode" through resource allocations.
 
 Note that undefined behavior may result if the namespaces/pmem kernel
 devices are mounted before running reset (as per the printed warning).
@@ -456,11 +448,11 @@ storage selection.
 processes over the management network.
 
 `daos_server storage scan` can be used to query local `daos_server` instances
-directly (scans locally-attached SSDs and Intel Persistent Memory Modules usable
+directly (scans locally attached SSDs and Intel Persistent Memory Modules usable
 by DAOS).
 NVMe SSDs need to be made accessible first by running
 `daos_server storage prepare --nvme-only`.
-The output will be equivalent running `dmg storage scan --verbose` remotely.
+The output will be equivalent to running `dmg storage scan --verbose` remotely.
 
 ```bash
 bash-4.2$ dmg storage scan
@@ -485,7 +477,7 @@ NVMe PCI     Model                FW Revision Socket ID Capacity
 ```
 
 The NVMe PCI field above is what should be used in the server
-configuration file to identified NVMe SSDs.
+configuration file to identify NVMe SSDs.
 
 Devices with the same NUMA node/socket should be used in the same per-engine
 section of the server configuration file for best performance.
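Grouping scanned devices by socket can be sketched as follows. This is an illustrative Python sketch of the grouping logic, not DAOS code; the function name and the `(pci_addr, socket_id)` pair shape are assumptions based on the scan output columns shown above:

```python
def group_by_socket(ssds):
    # ssds: list of (pci_addr, socket_id) pairs from a storage scan.
    # Returns one PCI-address list per socket, suitable for the
    # bdev_list of the per-engine config sections.
    groups = {}
    for pci, socket in ssds:
        groups.setdefault(socket, []).append(pci)
    return groups
```

Each resulting group would then back the engine pinned to the matching NUMA node.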
@@ -704,9 +696,9 @@ memory) and "nvme" for NVMe SSDs.
 
 For class == "dcpm", the following parameters should be populated:
 
-- `scm_list` should should contain PMem interleaved-set namespaces
-  (e.g. `/dev/pmem1`).
-  Currently the size of the list is limited to 1.
+- `scm_list` should contain PMem interleaved-set namespaces
+  (e.g., `/dev/pmem1`).
+  Currently, the size of the list is limited to 1.
 - `scm_mount` gives the desired local directory to be used as the mount point
   for DAOS persistent storage mounted on the specified PMem device specified in
   `scm_list`.
@@ -810,11 +802,12 @@ This section will help you determine what to provide for the `provider`,
 `fabric_iface` and `pinned_numa_node` entries in the `daos_server.yml` file.
 
 The following commands are typical examples:
+
 ```bash
-$ dmg network scan
-$ dmg network scan -p all
-$ dmg network scan -p ofi+tcp
-$ dmg network scan --provider ofi+verbs
+dmg network scan
+dmg network scan -p all
+dmg network scan -p ofi+tcp
+dmg network scan --provider ofi+verbs
 ```
 
 In the early stages when a `daos_server` has not yet been fully configured and
@@ -825,11 +818,12 @@ Use either of these `dmg` commands in the early stages to accomplish
 this goal:
 
 ```bash
-$ dmg network scan
-$ dgm network scan -p all
+dmg network scan
+dmg network scan -p all
 ```
 
 Typical network scan results look as follows:
+
 ```bash
 $ dmg network scan
 -------
@@ -889,7 +883,7 @@ Each I/O engine is configured with a unique `fabric_iface` and optional
 `pinned_numa_node`.
 The interfaces and NUMA Sockets listed in the scan results map to the
 `daos_server.yml` `fabric_iface` and `pinned_numa_node` respectively.
-The use of `pinned_numa_node` is optional, but recommended for best
+The use of `pinned_numa_node` is optional but recommended for best
 performance.
 When specified with the value that matches the network interface, the I/O
 engine will bind itself to that NUMA node and to cores purely within that NUMA
@@ -935,11 +929,10 @@ Each storage target manages a fraction of the (interleaved) SCM storage space,
 and a fraction of one of the NVMe SSDs that are managed by this engine.
 The optimal number of storage targets per engine depends on two conditions:
 
-* For optimal balance regarding the NVMe space, the number of targets should be
+- For optimal balance regarding the NVMe space, the number of targets should be
 an integer multiple of the number of NVMe disks that are configured in the
 `bdev_list:` of the engine.
-
-* To obtain the maximum SCM performance, a certain number of targets is needed.
+- To obtain the maximum SCM performance, a certain number of targets is needed.
 This is device- and workload-dependent, but around 16 targets usually work well.
 
 While not required, it is recommended to also specify a number of
@@ -950,7 +943,6 @@ the dispatching of server-side RPCs from the main I/O service threads.
 The server should have sufficiently many physical cores to support the
 number of targets plus the additional service threads.
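The sizing conditions above can be combined into a small helper. A hedged sketch in Python — the function name, the default of 2 helper threads, and the preference for 16 targets are illustrative, not part of DAOS:

```python
def pick_targets(nvme_disks, physical_cores, helper_threads=2,
                 preferred=16):
    # Choose the largest multiple of the NVMe disk count that is
    # at most the preferred target count and still leaves room for
    # the helper XS threads on the available physical cores.
    if nvme_disks <= 0:
        raise ValueError("need at least one NVMe disk in bdev_list")
    targets = (preferred // nvme_disks) * nvme_disks
    while targets > 0 and targets + helper_threads > physical_cores:
        targets -= nvme_disks
    return targets

print(pick_targets(nvme_disks=4, physical_cores=24))  # 16
```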
 
-
 ## Storage Formatting
 
 Once the `daos_server` has been restarted with the correct storage devices,
@@ -967,12 +959,12 @@ modules and NVMe SSDs installed and prepared.
 Upon successful format, DAOS Control Servers will start DAOS I/O engines that
 have been specified in the server config file.
 
-Successful start-up is indicated by the following on stdout:
+A successful start-up is indicated by the following on stdout:
 `DAOS I/O Engine (v2.0.1) process 433456 started on rank 1 with 8 target, 2 helper XS, firstcore 0, host wolf-72.wolf.hpdd.intel.com.`
 
 ### SCM Format
 
-When the command is run, the pmem kernel devices created on SCM/PMem regions are
+When the command is run, the PMem kernel devices created on SCM/PMem regions are
 formatted and mounted based on the parameters provided in the server config file.
 
 - `scm_mount` specifies the location of the mountpoint to create.
@@ -1015,7 +1007,6 @@ the necessary DAOS metadata indicating that the server has been formatted.
 When starting, `daos_server` will skip `maintenance mode` and attempt to start
 I/O engines if valid DAOS metadata is found in `scm_mount`.
 
-
 ## Agent Setup
 
 This section addresses how to configure the DAOS Agents on the client nodes.
@@ -1039,7 +1030,7 @@ for details on creating the necessary certificates.
 
 !!! note
     It is possible to disable the use of certificates for testing purposes.
-    This should *never* be done in production environments.
+    This should _never_ be done in production environments.
     Running in insecure mode will allow arbitrary un-authenticated user processes
     to access and potentially damage the DAOS storage.
 
@@ -1064,21 +1055,23 @@ precedence over equivalent configuration file parameter.
 The following section lists the format, options, defaults, and descriptions
 available in the configuration file.
 
-
 #### Defining fabric interfaces manually
 
 By default, the DAOS Agent automatically detects all fabric interfaces on the
-client node. It selects an appropriate one for DAOS I/O based on the NUMA node
-of the client request and the interface type preferences reported by the DAOS
-management service.
+client node. It selects an appropriate one for DAOS I/O based on:
+
+- the NUMA node of the client request
+- the interface type preferences reported by the DAOS management service
 
 If the DAOS Agent does not detect the fabric interfaces correctly,
 the administrator may define them manually in the Agent configuration file.
-These `fabric_iface` entries must be organized by NUMA node.
-If using the verbs provider, the interface domain is also required.
+
+These `fabric_iface` entries must be organized by NUMA node. If using the
+verbs provider, the interface domain is also required.
 
 Example:
-```
+
+```yaml
 fabric_ifaces:
 -
   numa_node: 0
@@ -1102,26 +1095,26 @@ fabric_ifaces:
 
 ### Agent Startup
 
-The DAOS Agent is a standalone application to be run on each client node.
+The DAOS Agent is a standalone application that is run on each client node.
 By default, the DAOS Agent will be run as a systemd service.
-The DAOS Agent unit file is installed in the correct location
+The DAOS Agent unit file is installed at
 (`/usr/lib/systemd/system/daos_agent.service`) during RPM installation.
 
-After the RPM installation, and after the Agent configuration file has
+After the RPM installation, and after the Agent configuration file has
 been created, the following commands will enable the DAOS Agent to be
-started at the next reboot, will start it immediately,
-and will check the status of the Agent after being started:
+started at the next reboot, start it immediately,
+and check the status of the running Agent:
 
 ```bash
-$ sudo systemctl enable daos_agent.service
-$ sudo systemctl start  daos_agent.service
-$ sudo systemctl status daos_agent.service
+sudo systemctl enable daos_agent.service
+sudo systemctl start  daos_agent.service
+sudo systemctl status daos_agent.service
 ```
 
 If the DAOS Agent fails to start, check the systemd logs for errors:
 
 ```bash
-$ sudo journalctl --unit daos_agent.service
+sudo journalctl --unit daos_agent.service
 ```
 
 #### Starting the DAOS Agent with a non-default configuration
@@ -1130,7 +1123,7 @@ To start the DAOS Agent from the command line, for example to run with a
 non-default configuration file, run:
 
 ```bash
-$ daos_agent -o <'path to agent configuration file/daos_agent.yml'> &
+daos_agent -o <'path to agent configuration file/daos_agent.yml'> &
 ```
 
 If you wish to use systemd with a development build, you must copy the Agent service
@@ -1145,15 +1138,15 @@ as shown above.
 
 #### Disable Agent Cache (Optional)
 
-In certain circumstances (e.g. for DAOS development or system evaluation), it
+In certain circumstances (e.g., for DAOS development or system evaluation), it
 may be desirable to disable the DAOS Agent's caching mechanism in order to avoid
 stale system information being retained across reformats of a system. The DAOS
-Agent normally caches a map of rank-to-fabric URI lookups as well as client network
-configuration data in order to reduce the number of management RPCs required to
+Agent normally caches a map of rank-to-fabric URI lookups as well as client network
+configuration data to reduce the number of management RPCs required to
 start an application. When this information becomes stale, the Agent must be
-restarted in order to repopulate the cache with new information.
+restarted to repopulate the cache with new information.
 Alternatively, the caching mechanism may be disabled, with the tradeoff that
-each application launch will invoke management RPCs in order to obtain system
+each application launch will invoke management RPCs to obtain system
 connection information.
 
 To disable the DAOS Agent caching mechanism, set the following environment
@@ -1167,7 +1160,6 @@ the `[Service]` section before reloading systemd and restarting the
 
 `Environment=DAOS_AGENT_DISABLE_CACHE=true`
 
-
 [^1]: https://github.com/intel/ipmctl
 
 [^2]: https://github.com/daos-stack/daos/tree/release/2.2/utils/config
diff --git a/docs/admin/env_variables.md b/docs/admin/env_variables.md
index 2d128fb1f87..13533fbd672 100644
--- a/docs/admin/env_variables.md
+++ b/docs/admin/env_variables.md
@@ -8,15 +8,12 @@ This section lists the environment variables used by DAOS.
 
The description of each variable uses the following format:
 
--   Short description
+- Short description
+- **Type**
+- The default behavior if not set.
+- A longer description if necessary
 
--   `Type`
-
--   The default behavior if not set.
-
--   A longer description if necessary
-
-`Type` is defined by this table:
+**Type** is defined by this table:
 
 |Type   |Values                                                  |
 |-------|--------------------------------------------------------|
@@ -26,10 +23,9 @@ The description of each variable follows the following format:
 |INTEGER|Non-negative decimal integer                            |
 |STRING |String                                                  |
 
-
 ## Server environment variables
 
-Environment variables in this section only apply to the server side.
+Environment variables in this section only apply to the server side.
 
 |Variable              |Description|
 |----------------------|-----------|
@@ -39,7 +35,7 @@ Environment variables in this section only apply to the server side.
 |RDB\_AE\_MAX\_ENTRIES |Maximum number of entries in a Raft AppendEntries request. INTEGER. Default to 32.|
 |RDB\_AE\_MAX\_SIZE    |Maximum total size in bytes of all entries in a Raft AppendEntries request. INTEGER. Default to 1 MB.|
 |DAOS\_REBUILD         |Determines whether to start rebuilds when excluding targets. BOOL2. Default to true.|
-|DAOS\_MD\_CAP         |Size of a metadata pmem pool/file in MBs. INTEGER. Default to 128 MB.|
+|DAOS\_MD\_CAP         |Size of a metadata PMem pool/file in MBs. INTEGER. Default to 128 MB.|
 |DAOS\_START\_POOL\_SVC|Determines whether to start existing pool services when starting a daos\_server. BOOL. Default to true.|
 |CRT\_DISABLE\_MEM\_PIN|Disable memory pinning workaround on a server side. BOOL. Default to 0.|
 |DAOS\_SCHED\_PRIO\_DISABLED|Disable server ULT prioritizing. BOOL. Default to 0.|
@@ -54,18 +50,16 @@ Environment variables in this section apply to both the server side and the clie
 |Variable              |Description|
 |----------------------|-----------|
 |FI\_OFI\_RXM\_USE\_SRX|Enable shared receive buffers for RXM-based providers (verbs, tcp). BOOL. Auto-defaults to 1.|
-|FI\_UNIVERSE\_SIZE    |Sets expected universe size in OFI layer to be more than expected number of clients. INTEGER. Auto-defaults to 2048.|
-
+|FI\_UNIVERSE\_SIZE    |Sets expected universe size in OFI layer to be more than the expected number of clients. INTEGER. Auto-defaults to 2048.|
 
 ## Client environment variables
 
-Environment variables in this section only apply to the client side.
+Environment variables in this section only apply to the client side.
 
 |Variable                 |Description|
 |-------------------------|-----------|
 |FI\_MR\_CACHE\_MAX\_COUNT|Enable MR (Memory Registration) caching in OFI layer. Recommended to be set to 0 (disable) when CRT\_DISABLE\_MEM\_PIN is NOT set to 1. INTEGER. Default to unset.|
 
-
 ## Debug System (Client & Server)
 
 |Variable    |Description|
@@ -73,11 +67,10 @@ Environment variables in this section only apply to the client side.
 |D\_LOG\_FILE|DAOS debug logs (both server and client) are written to stdout by default. The debug location can be modified by setting this environment variable ("D\_LOG\_FILE=/tmp/daos_debug.log").|
 |D\_LOG\_FILE\_APPEND\_PID|If set and not 0, causes the main PID to be appended at the end of D\_LOG\_FILE path name (both server and client).|
 |D\_LOG\_STDERR\_IN\_LOG|If set and not 0, causes stderr messages to be merged in D\_LOG\_FILE.|
-|D\_LOG\_SIZE|DAOS debug logs (both server and client) have a 1GB file size limit by default. When this limit is reached, the current log file is closed and renamed with a .old suffix, and a new one is opened. This mechanism will repeat each time the limit is reached, meaning that available saved log records could be found in both ${D_LOG_FILE} and last generation of ${D_LOG_FILE}.old files, to a maximum of the most recent 2*D_LOG_SIZE records.  This can be modified by setting this environment variable ("D_LOG_SIZE=536870912"). Sizes can also be specified in human-readable form using `k`, `m`, `g`, `K`, `M`, and `G`. The lower-case specifiers are base-10 multipliers and the upper case specifiers are base-2 multipliers.|
-|D\_LOG\_FLUSH|Allows to specify a non-default logging level where flushing will occur. By default, only levels above WARN will cause an immediate flush instead of buffering.|
+|D\_LOG\_SIZE|DAOS debug logs (both server and client) have a 1GB file size limit by default. When this limit is reached, the current log file is closed and renamed with a .old suffix, and a new one is opened. This mechanism will repeat each time the limit is reached, meaning that available saved log records could be found in both ${D_LOG_FILE} and last generation of ${D_LOG_FILE}.old files, to a maximum of the most recent 2*D_LOG_SIZE records.  This can be modified by setting this environment variable ("D_LOG_SIZE=536870912"). Sizes can also be specified in human-readable form using `k`, `m`, `g`, `K`, `M`, and `G`. The lower-case specifiers are base-10 multipliers and the upper-case specifiers are base-2 multipliers.|
+|D\_LOG\_FLUSH|Allows specifying a non-default logging level where flushing will occur. By default, only levels above WARN will cause an immediate flush instead of buffering.|
 |D\_LOG\_TRUNCATE|By default log is appended. But if set this variable will cause log to be truncated upon first open and logging start.|
 |DD\_SUBSYS  |Used to specify which subsystems to enable. DD\_SUBSYS can be set to individual subsystems for finer-grained debugging ("DD\_SUBSYS=vos"), multiple facilities ("DD\_SUBSYS=bio,mgmt,misc,mem"), or all facilities ("DD\_SUBSYS=all") which is also the default setting. If a facility is not enabled, then only ERR messages or more severe messages will print.|
 |DD\_STDERR  |Used to specify the priority level to output to stderr. Options in decreasing priority level order: FATAL, CRIT, ERR, WARN, NOTE, INFO, DEBUG. By default, all CRIT and more severe DAOS messages will log to stderr ("DD\_STDERR=CRIT"), and the default for CaRT/GURT is FATAL.|
 |D\_LOG\_MASK|Used to specify what type/level of logging will be present for either all of the registered subsystems or a select few. Options in decreasing priority level order: FATAL, CRIT, ERR, WARN, NOTE, INFO, DEBUG. DEBUG option is used to enable all logging (debug messages as well as all higher priority level messages). Note that if D\_LOG\_MASK is not set, it will default to logging all messages excluding debug ("D\_LOG\_MASK=INFO"). Example: "D\_LOG\_MASK=DEBUG". This will set the logging level for all facilities to DEBUG, meaning that all debug messages, as well as higher priority messages will be logged (INFO, NOTE, WARN, ERR, CRIT, FATAL). Example 2: "D\_LOG\_MASK=DEBUG,MEM=ERR,RPC=ERR". This will set the logging level to DEBUG for all facilities except MEM & RPC (which will now only log ERR and higher priority level messages, skipping all DEBUG, INFO, NOTE & WARN messages)|
 |DD\_MASK    |Used to enable different debug streams for finer-grained debug messages, essentially allowing the user to specify an area of interest to debug (possibly involving many different subsystems) as opposed to parsing through many lines of generic DEBUG messages. All debug streams will be enabled by default ("DD\_MASK=all"). Single debug masks can be set ("DD\_MASK=trace") or multiple masks ("DD\_MASK=trace,test,mgmt"). Note that since these debug streams are strictly related to the debug log messages, D\_LOG\_MASK must be set to DEBUG. Priority messages higher than DEBUG will still be logged for all facilities unless otherwise specified by D\_LOG\_MASK (not affected by enabling debug masks).|
-
diff --git a/docs/admin/hardware.md b/docs/admin/hardware.md
index 5181f702859..a651ca65c07 100644
--- a/docs/admin/hardware.md
+++ b/docs/admin/hardware.md
@@ -1,34 +1,30 @@
 # Hardware Requirements
 
-
-The purpose of this section is to describe processor, storage, and
+The purpose of this section is to describe the processor, storage, and
 network requirements to deploy a DAOS system.
 
 ## Deployment Options
 
-
 A DAOS storage system is deployed as a **Pooled Storage Model**.
 The DAOS servers can run on dedicated storage nodes in separate racks.
 This is a traditional pool model where storage is uniformly accessed by
-all compute nodes. In order to minimize the number of I/O racks and to
+all compute nodes. To minimize the number of I/O racks and to
 optimize floor space, this approach usually requires high-density storage
 servers.
 
-
 ## Processor Requirements
 
-
 DAOS requires a 64-bit processor architecture and is primarily developed
 on Intel x86\_64 architecture. The DAOS software and the libraries it
 depends on (e.g., [ISA-L](https://github.com/intel/isa-l),
 [SPDK](https://spdk.io/), [PMDK](https://pmem.io/pmdk/), and
 [DPDK](https://www.dpdk.org/)) can take
-advantage of Intel Intel Streaming SIMD (SSE) and Intel Advanced Vector (AVX) extensions.
+advantage of Intel Streaming SIMD (SSE) and Intel Advanced Vector (AVX) extensions.
 
-Some success was also reported by the community on running the DAOS client
+The community also reported some success in running the DAOS client
 on 64-bit ARM processors configured in Little Endian mode. That being said,
 ARM testing is not part of the current DAOS CI pipeline and is thus not
-validated on a regular basis.
+validated regularly.
 
 ## Network Requirements
 
@@ -48,12 +44,11 @@ cluster.
 
 ## Storage Requirements
 
-
 DAOS requires each storage node to have direct access to storage-class
 memory (SCM). While DAOS is primarily tested and tuned for Intel
-Optane^TM^ Persistent Memory, the DAOS software stack is built over the
+Optane Persistent Memory, the DAOS software stack is built over the
 Persistent Memory Development Kit (PMDK) and the Direct Access (DAX) feature of the
-Linux operating systems as described in the
+Linux operating systems, as described in the
 [SNIA NVM Programming Model](https://www.snia.org/sites/default/files/technical\_work/final/NVMProgrammingModel\_v1.2.pdf).
 As a result, the open-source DAOS software stack should be
 able to run transparently over any storage-class memory supported by the
@@ -61,11 +56,11 @@ PMDK.
 
 The storage node can optionally be equipped with [NVMe](https://nvmexpress.org/)
 (non-volatile memory express)[^10] SSDs to provide capacity. HDDs,
-as well as SATA andSAS SSDs, are not supported by DAOS.
+as well as SATA and SAS SSDs, are not supported by DAOS.
 Both NVMe 3D-NAND and Optane SSDs are supported. Optane SSDs are
-preferred for DAOS installation that targets a very high IOPS rate.
+preferred for DAOS installations that target a very high IOPS rate.
 NVMe-oF devices are also supported by the
-userspace storage stack but have never been tested.
+user-space storage stack but have never been tested.
 
 A minimum 6% ratio of SCM to SSD capacity will guarantee that DAOS has
 enough space in SCM to store its internal metadata (e.g., pool metadata,
@@ -76,31 +71,29 @@ metadata, if the ratio is too low, it is possible to have bulk storage
 available but insufficient SCM for DAOS metadata.
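The 6% guideline can be checked with a one-line sketch (illustrative; the function name is an assumption, and units are whatever capacity unit you feed in):

```python
def min_scm_capacity(nvme_capacity, ratio=0.06):
    # Documented guideline: SCM capacity should be at least 6% of the
    # NVMe capacity so that DAOS metadata always fits in SCM.
    return nvme_capacity * ratio
```

For example, a node with 64 TB of NVMe capacity would call for roughly 3.84 TB of SCM.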
 
 For testing purposes, SCM can be emulated with DRAM by mounting a tmpfs
-filesystem, and NVMe SSDs can be also emulated with DRAM or a loopback
+filesystem, and NVMe SSDs can also be emulated with DRAM or a loopback
 file.
 
 ## Storage Server Design
 
-
 The hardware design of a DAOS storage server balances the network
 bandwidth of the fabric with the aggregate storage bandwidth of the NVMe
 storage devices. This relationship sets the number of NVMe drives
 depending on the read/write balance of the application workload. Since
-NVMe SSDs have read faster than they write, a 200Gbps PCIe4 x4 NIC can
-be balanced for read only workloads by 4 NVMe4 x4 SSDs, but for write
+NVMe SSDs read faster than they write, a 200Gbps PCIe4 x4 NIC can
+be balanced for read-only workloads by 4 NVMe4 x4 SSDs, but for write
 workloads by 8 NVMe4 x4 SSDs. The capacity of the SSDs will determine
 the minimum capacity of the Optane PMem DIMMs needed to provide the 6%
 ratio for DAOS metadata.
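The read/write balancing arithmetic above can be sketched as follows. The per-SSD throughput figures are assumptions chosen only to reproduce the 4-SSD/8-SSD example, not measured numbers:

```python
import math

def ssds_to_balance(nic_gbps, per_ssd_gbps):
    # Number of SSDs whose aggregate throughput matches the NIC.
    return math.ceil(nic_gbps / per_ssd_gbps)

# Illustrative per-SSD throughput for a PCIe4 x4 NVMe drive
# (assumed values: reads roughly twice as fast as writes).
READ_GBPS, WRITE_GBPS = 50, 25

print(ssds_to_balance(200, READ_GBPS))   # 4 (read-only workload)
print(ssds_to_balance(200, WRITE_GBPS))  # 8 (write workload)
```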
 
-![](media/image2.png)
+![Storage Server Design](media/image2.png)
 
 ## CPU Affinity
 
-
-Recent Intel Xeon data center platforms use two processor CPUs connected
+Recent Intel Xeon data center platforms use two processor CPUs connected
 together with the Ultra Path Interconnect (UPI). PCIe lanes in these
 servers have a natural affinity to one CPU. Although globally accessible
-from any of the system cores, NVMe SSDs and network interface cards
+from any system core, NVMe SSDs and network interface cards
 connected through the PCIe bus may provide different performance
 characteristics (e.g., higher latency, lower bandwidth) to each CPU.
 Accessing non-local PCIe devices may involve traffic over the UPI link
@@ -117,11 +110,8 @@ to that CPU from that server instance. The DAOS control plane is
 responsible for detecting the storage and network affinity and starting
 the I/O Engines accordingly.
 
-![](media/image3.png)
-
 ## Fault Domains
 
-
 DAOS relies on single-ported storage massively distributed across
 different storage nodes. Each storage node is thus a single point of
 failure. DAOS achieves fault tolerance by providing data redundancy
@@ -129,9 +119,8 @@ across storage nodes in different fault domains.
 
 DAOS assumes that fault domains are hierarchical and do not overlap. For
 instance, the first level of a fault domain could be the racks and the
-second one, the storage nodes.
+second one could be the storage nodes.
 
 For efficient placement and optimal data resilience, more fault domains
 are better. As a result, it is preferable to distribute storage nodes
 across as many racks as possible.
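The benefit of many non-overlapping fault domains can be shown with a toy placement sketch. This is not DAOS's actual placement algorithm; it only illustrates why more racks let replicas land in distinct fault domains:

```python
from collections import defaultdict
from itertools import cycle

def place_replicas(nodes_by_rack, num_replicas):
    # Walk the racks round-robin so each replica lands in a different
    # fault domain whenever enough racks are available.
    total = sum(len(nodes) for nodes in nodes_by_rack.values())
    if num_replicas > total:
        raise ValueError("not enough storage nodes for the replica count")
    racks = cycle(sorted(nodes_by_rack))
    used = defaultdict(int)  # nodes consumed per rack so far
    placement = []
    while len(placement) < num_replicas:
        rack = next(racks)
        nodes = nodes_by_rack[rack]
        if used[rack] < len(nodes):
            placement.append((rack, nodes[used[rack]]))
            used[rack] += 1
    return placement
```

With three racks and three replicas, each replica ends up in a different rack; with a single rack, all replicas share one fault domain.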
-
diff --git a/docs/admin/installation.md b/docs/admin/installation.md
index 58cef6e01f9..b4e5608f606 100644
--- a/docs/admin/installation.md
+++ b/docs/admin/installation.md
@@ -26,10 +26,10 @@ version of at least 1.10 is required.
 An exhaustive list of packages for each supported Linux distribution is
 maintained in the Docker files and/or their helpers (please click on the link):
 
--    [CentOS 7](https://github.com/daos-stack/daos/blob/release/2.2/utils/docker/Dockerfile.centos.7#L19-L79)
--    [EL 8](https://github.com/daos-stack/daos/blob/release/2.2/utils/scripts/install-el8.sh#L12-L69)
--    [openSUSE Leap 15](https://github.com/daos-stack/daos/blob/release/2.2/utils/docker/Dockerfile.leap.15#L36-L85)
--    [Ubuntu 20.04](https://github.com/daos-stack/daos/blob/release/2.2/1.2/utils/docker/Dockerfile.ubuntu.20.04#L14-L22)
+- [CentOS 7](https://github.com/daos-stack/daos/blob/release/2.2/utils/docker/Dockerfile.centos.7#L19-L79)
+- [EL 8](https://github.com/daos-stack/daos/blob/release/2.2/utils/scripts/install-el8.sh#L12-L69)
+- [openSUSE Leap 15](https://github.com/daos-stack/daos/blob/release/2.2/utils/docker/Dockerfile.leap.15#L36-L85)
+- [Ubuntu 20.04](https://github.com/daos-stack/daos/blob/release/2.2/utils/docker/Dockerfile.ubuntu.20.04#L14-L22)
 
 The command lines to install the required packages can be extracted from
 the Docker files by removing the "RUN" command, which is specific to Docker.
@@ -47,7 +47,7 @@ The DAOS repository is hosted on [GitHub](https://github.com/daos-stack/daos).
 To checkout the latest development version, simply run:
 
 ```bash
-$ git clone --recurse-submodules https://github.com/daos-stack/daos.git
+git clone --recurse-submodules https://github.com/daos-stack/daos.git
 ```
 
 This command clones the DAOS git repository (path referred as ${daospath}
@@ -59,7 +59,7 @@ If all the software dependencies listed previously are already satisfied, then
 type the following command in the top source directory to build the DAOS stack:
 
 ```bash
-$ scons-3 --config=force install
+scons-3 --config=force install
 ```
 
 If you are a developer of DAOS, we recommend following the instructions in the
@@ -70,7 +70,7 @@ Otherwise, the missing dependencies can be built automatically by invoking scons
 with the following parameters:
 
 ```bash
-$ scons-3 --config=force --build-deps=yes install
+scons-3 --config=force --build-deps=yes install
 ```
 
 By default, DAOS and its dependencies are installed under ${daospath}/install.
@@ -89,8 +89,8 @@ files in the installation path. This step is not required if standard locations
 (e.g. /bin, /sbin, /usr/lib, ...) are used.
 
 ```bash
-$ export CPATH=${daospath}/install/include/:$CPATH
-$ export PATH=${daospath}/install/bin/:${daospath}/install/sbin:$PATH
+export CPATH=${daospath}/install/include/:$CPATH
+export PATH=${daospath}/install/bin/:${daospath}/install/sbin:$PATH
 ```
 
 If using bash, PATH can be set up for you after a build by sourcing the script
@@ -119,7 +119,7 @@ $ docker build https://github.com/daos-stack/daos.git#release/2.2 \
 or from a local tree:
 
 ```bash
-$ docker build  . -f utils/docker/Dockerfile.centos.7 -t daos
+docker build  . -f utils/docker/Dockerfile.centos.7 -t daos
 ```
 
 This creates a CentOS 7 image, fetches the latest DAOS version from GitHub,
@@ -133,7 +133,7 @@ Once the image created, one can start a container that will eventually run
 the DAOS service:
 
 ```bash
-$ docker run -it -d --privileged --cap-add=ALL --name server -v /dev:/dev daos
+docker run -it -d --privileged --cap-add=ALL --name server -v /dev:/dev daos
 ```
 
 !!! note
@@ -155,7 +155,7 @@ uses 4GB of DRAM to emulate persistent memory and 16GB of bulk storage under
 The DAOS service can be started in the docker container as follows:
 
 ```bash
-$ docker exec server daos_server start \
+docker exec server daos_server start \
         -o /home/daos/daos/utils/config/examples/daos_server_local.yml
 ```
 
@@ -166,11 +166,11 @@ Once started, the DAOS server waits for the administrator to format the system.
 This can be triggered in a different shell, using the following command:
 
 ```bash
-$ docker exec server dmg -i storage format
+docker exec server dmg -i storage format
 ```
 
 Upon successful completion of the format, the storage engine is started, and pools
 can be created using the daos admin tool (see next section).
 
 For more advanced configurations involving SCM, SSD or a real fabric, please
-refer to the next section.
+refer to the next section.
diff --git a/docs/admin/performance_tuning.md b/docs/admin/performance_tuning.md
index 69c30379035..c3dac27c456 100644
--- a/docs/admin/performance_tuning.md
+++ b/docs/admin/performance_tuning.md
@@ -10,12 +10,9 @@ The CaRT `self_test` can run against the DAOS servers in a production environmen
 in a non-destructive manner. CaRT `self_test` supports different message sizes,
 bulk transfers, multiple targets, and the following test scenarios:
 
--   **Selftest client to servers** - where `self_test` issues RPCs directly
-    to a list of servers.
+- Self-test client to servers: where `self_test` issues RPCs directly to a list of servers.
 
--   **Cross-servers** - where `self_test` sends instructions to the different
-    servers that will issue cross-server RPCs. This model supports a
-    many to many communication model.
+- Cross-servers: where `self_test` sends instructions to the different servers that will issue cross-server RPCs. This supports a many-to-many communication model.
 
 ### Building CaRT self_test
 
@@ -23,10 +20,10 @@ The CaRT `self_test` and its tests are delivered as part of the daos_client
 and daos_tests [distribution packages][2]. It can also be built from scratch.
 
 ```bash
-$ git clone --recurse-submodules https://github.com/daos-stack/daos.git
-$ cd daos
-$ scons --build-deps=yes install
-$ cd install
+git clone --recurse-submodules https://github.com/daos-stack/daos.git
+cd daos
+scons --build-deps=yes install
+cd install
 ```
 
 For detailed information, please refer to the [DAOS build documentation][3]
@@ -36,13 +33,13 @@ section.
 
 Instructions to run CaRT `self_test` are as follows.
 
-**Start DAOS server**
+#### Start DAOS server
 
-`self_test` requires DAOS server to be running before attempt running
-`self_test`. For detailed instruction on how to start DAOS server, please refer
+`self_test` requires the DAOS server to be running before attempting to run
+`self_test`. For detailed instructions on starting the DAOS server, please refer
 to the [server startup][4] documentation.
 
-**Dump system attachinfo**
+#### Dump system attachinfo
 
 `self_test` will use the address information in `daos_server.attach_info_tmp`
 file. To create such file, run the following command:
@@ -51,28 +48,28 @@ file. To create such file, run the following command:
 ./bin/daos_agent dump-attachinfo -o ./daos_server.attach_info_tmp
 ```
 
-**Prepare hostfile**
+#### Prepare hostfile
 
 The list of nodes from which `self_test` will run can be specified in a
 hostfile (referred to as ${hostfile}). Hostfile used here is the same as the
-ones used by OpenMPI. For additional details, please refer to the
-[mpirun documentation][5].
+ones used by OpenMPI. Please refer to the
+[mpirun documentation][5] for additional details.
 
-**Run CaRT self_test**
+#### Run CaRT self_test
 
 The example below uses an Ethernet interface and TCP provider.
 In the `self_test` commands:
 
--   **Selftest client to servers** - Replace the argument for `--endpoint`
+- Self-test client to servers: Replace the argument for `--endpoint`
     accordingly.
 
--   **Cross-servers** - Replace the argument for `--endpoint` and
-    `--master-endpoint` accordingly.
+- Cross-servers: Replace the argument for `--endpoint` and `--master-endpoint` accordingly.
 
-For example, if you have 8 servers, you would specify `--endpoint 0-7:0` and
+For example, if you have eight servers, you would specify `--endpoint 0-7:0` and
 `--master-endpoint 0-7:0`
 
-The commands below will run `self_test` benchmark using the following message sizes:
+The commands below will run the `self_test` benchmark using the following message sizes:
+
 ```bash
 b1048576     1Mb bulk transfer Get and Put
 b1048576 0   1Mb bulk transfer Get only
@@ -85,11 +82,13 @@ i2048 0      2Kb iovec Input only
 For a full description of `self_test` usage, run:
 
 ```bash
-$ ./bin/self_test --help
+./bin/self_test --help
 ```
 
-**To run self_test in client-to-servers mode:**
+#### To run self_test in client-to-servers mode
+
 (Assuming the TCP provider over eth0)
+
 ```bash
 
 # Specify provider
@@ -99,21 +98,22 @@ export CRT_PHY_ADDR_STR='ofi+tcp'
 export OFI_INTERFACE=eth0
 
 # Specify domain; usually only required when running over ofi+verbs;ofi_rxm
-# For example in such configuration OFI_DOMAIN might be set to mlx5_0
-# run fi_info --provider='verbs;ofi_rxm' in order to find an appropriate domain
-# if only specifying OFI_INTERFACE without OFI_DOMAIN, we assume we do not need one
+# For example, in such configuration, OFI_DOMAIN might be set to mlx5_0
+# run fi_info --provider='verbs;ofi_rxm' to find an appropriate domain
+# If only specifying OFI_INTERFACE without OFI_DOMAIN, we assume we do not need one
 export OFI_DOMAIN=eth0
 
 # Export additional CART-level environment variables as described in README.env
-# if needed. For example export D_LOG_FILE=/path/to/log will allow dumping of the
-# log into the file instead of stdout/stderr
+# if needed. For example, export D_LOG_FILE=/path/to/log will allow dumping the
+# log into the file instead of stdout/stderr
 
 $ ./bin/self_test --group-name daos_server --endpoint 0-:0 \
   --message-sizes "b1048576,b1048576 0,0 b1048576,i2048,i2048 0,0 i2048" \
   --max-inflight-rpcs 16 --repetitions 100 -p /path/to/attach_info
 ```
 
-**To run self_test in cross-servers mode:**
+#### To run self_test in cross-servers mode
+
 ```bash
 
 $ ./bin/self_test --group-name daos_server --endpoint 0-:0 \
@@ -123,60 +123,60 @@ $ ./bin/self_test --group-name daos_server --endpoint 0-:0 \
 ```
 
 Note:
-Number of repetitions, max inflight rpcs, message sizes can be adjusted based on the
+The number of repetitions, max inflight rpcs, and message sizes can be adjusted based on the
 particular test/experiment.
 
-
 ## Benchmarking DAOS
 
 DAOS can be benchmarked using several widely used IO benchmarks like IOR,
-mdtest, and FIO. There are several backends that can be used with those
+mdtest, and FIO. Several backends can be used with those
 benchmarks.
 
-### ior
+### IOR
 
 IOR () with the following backends:
 
--   The IOR APIs POSIX, MPIIO and HDF5 can be used with DAOS POSIX containers
-    that are accessed over dfuse. This works without or with the I/O
+- The IOR APIs POSIX, MPIIO and HDF5 can be used with DAOS POSIX containers
+    accessed over dfuse. This works without or with the I/O
     interception library (`libioil`). Performance is significantly better when
     using `libioil`. For detailed information on dfuse usage with the IO
     interception library, please refer to the [POSIX DFUSE section][7].
 
--   A custom DFS (DAOS File System) plugin for DAOS can be used by building IOR
-    with DAOS support, and selecting API=DFS. This integrates IOR directly with the
+- A custom DFS (DAOS File System) plugin for DAOS can be used by building IOR
+    with DAOS support and selecting API=DFS. This integrates IOR directly with the
     DAOS File System (`libdfs`), without requiring FUSE or an interception library.
     Please refer to the [DAOS README][10] in the hpc/ior repository for some basic
     instructions on how to use the DFS driver.
 
--   When using the IOR API=MPIIO, the ROMIO ADIO driver for DAOS can be used by
+- When using the IOR API=MPIIO, the ROMIO ADIO driver for DAOS can be used by
     providing the `daos://` prefix to the filename. This ADIO driver bypasses `dfuse`
     and directly invokes the `libdfs` calls to perform I/O to a DAOS POSIX container.
     The DAOS-enabled MPIIO driver is available in the upstream MPICH repository and
     included with Intel MPI. Please refer to the [MPI-IO documentation][8].
 
--   An HDF5 VOL connector for DAOS is under development. This maps the HDF5 data model
-    directly to the DAOS data model, and works in conjunction with DAOS containers of
-    `--type=HDF5` (in contrast to DAOS container of `--type=POSIX` that are used for
-    the other IOR APIs). Please refer the the [HDF5 with DAOS documentation][9].
+- An HDF5 VOL connector for DAOS is under development. This maps the HDF5 data model
+    directly to the DAOS data model and works in conjunction with DAOS containers of
+    `--type=HDF5` (in contrast to the DAOS container of `--type=POSIX` used for
+    the other IOR APIs). Please refer to the [HDF5 with DAOS documentation][9].
 
-IOR has several parameters to characterize performance. The main parameters to
+IOR has several parameters to characterize performance. The main parameters to
 work with include:
+
 - transfer size (-t)
 - block size (-b)
 - segment size (-s)
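The way these parameters combine can be illustrated with a quick calculation (the `-t 1m -b 1g -s 4` values below are arbitrary for illustration, not tuned settings): per task, IOR writes block size times segments bytes, issued as transfers of the given transfer size.

```bash
# Hypothetical IOR geometry: -t 1m -b 1g -s 4, computed per task.
awk 'BEGIN {
  t = 1;                 # transfer size, MiB (-t 1m)
  b = 1024;              # block size, MiB   (-b 1g)
  s = 4;                 # segments          (-s 4)
  total = b * s;         # data written per task
  xfers = total / t;     # number of individual transfers
  printf "per-task data: %d MiB in %d transfers\n", total, xfers
}'
```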
 
-For more use cases, the IO-500 workloads are a good starting point to measure
-performance on a system: https://github.com/IO500/io500
+For more use cases, the [IO-500 workloads](https://github.com/IO500/io500) are a good starting point to measure
+performance on a system.
 
 ### mdtest
 
 mdtest is released in the same repository as IOR. The corresponding backends
-that are listed above support mdtest, except for the MPI-IO and HDF5 backends
+listed above support mdtest, except for the MPI-IO and HDF5 backends
 that were only designed to support IOR. The [DAOS README][10] in the hpc/ior
 repository includes some examples to run mdtest with DAOS.
 
-The IO-500 workloads for mdtest provide some good criteria for performance
+The IO-500 workloads for mdtest provide some excellent criteria for performance
 measurements.
 
 ### FIO
@@ -185,68 +185,71 @@ A DAOS engine is integrated into FIO and available upstream.
 To build it, just run:
 
 ```bash
-$ git clone http://git.kernel.dk/fio.git
-$ cd fio
-$ ./configure
-$ make install
+git clone http://git.kernel.dk/fio.git
+cd fio
+./configure
+make install
 ```
 
 If DAOS is installed via packages, it should be automatically detected.
-If not, please specific the path to the DAOS library and headers to configure
+If not, please specify the path to the DAOS library and headers to configure
 as follows:
+
+```bash
+CFLAGS="-I/path/to/daos/install/include" LDFLAGS="-L/path/to/daos/install/lib64" ./configure
 ```
-$ CFLAGS="-I/path/to/daos/install/include" LDFLAGS="-L/path/to/daos/install/lib64" ./configure
-```
 
-Once successfully build, once can run the default example:
+Once successfully built, one can run the default example:
+
 ```bash
-$ export POOL= # your pool UUID
-$ export CONT= # your container UUID
-$ fio ./examples/dfs.fio
+export POOL= # your pool UUID
+export CONT= # your container UUID
+fio ./examples/dfs.fio
 ```
 
-Please note that DAOS does not transfer data (i.e. zeros) over the network
+Please note that DAOS does not transfer data (i.e., zeros) over the network
 when reading a hole in a sparse POSIX file. Very high read bandwidth can
 thus be reported if fio reads unallocated extents in a file. It is thus
-a good practice to start fio with a first write phase.
+an excellent practice to start fio with a first write phase.
 
 FIO can also be used to benchmark DAOS performance using dfuse and the
-interception library with all the POSIX based engines like sync and libaio.
+interception library with all the POSIX-based engines like sync and libaio.
 
 ### daos_perf & vos_perf
 
-Finally, DAOS provides a tool called `daos_perf` which allows benchmarking to the
+Finally, DAOS provides a tool called `daos_perf`, which allows benchmarking the
 DAOS object API directly and a tool called 'vos_perf' to benchmark the internal
 VOS API, which bypasses the client and network stack and reports performance
-accessing the storage directly using VOS. For a full description of `daos_perf` or
+accessing the storage directly using VOS. For a complete description of `daos_perf` or
 `vos_perf` usage, run:
 
 ```bash
-$ daos_perf --help
+daos_perf --help
 ```
+
 ```bash
-$ vos_perf --help
+vos_perf --help
 ```
 
-The `-R` option is used to define the operation to be performanced:
+The `-R` option is used to define the operation to be performed:
 
-- `U` for `update` (i.e. write) operation
-- `F` for `fetch` (i.e. read) operation
-- `P` for `punch` (i.e. truncate) operation
+- `U` for `update` (i.e., write) operation
+- `F` for `fetch` (i.e., read) operation
+- `P` for `punch` (i.e., truncate) operation
 - `p` to display the performance result for the previous operation.
 
 For instance, -R "U;p F;p" means update the keys, print the update rate/bandwidth,
-fetch the keys and then print the fetch rate/bandwidth. The number of
+fetch the keys and then print the fetch rate/bandwidth. The number of
 object/dkey/akey/value can be passed via respectively the -o, -d, -a and -n
-options. The value size is specified via the -s parameter (e.g. -s 4K for 4K
+options. The value size is specified via the -s parameter (e.g., -s 4K for 4K
 value).
 
-For instance, to measure rate for 10M update & fetch operation in VOS mode,
-mount the pmem device and then run:
+For instance, to measure the rate for 10M update & fetch operation in VOS mode,
+mount the PMem device and then run:
 
 ```bash
-$ cd /mnt/daos0
-$ df .
+cd /mnt/daos0
+df .
 Filesystem      1K-blocks  Used  Available Use% Mounted on
 /dev/pmem0     4185374720 49152 4118216704   1% /mnt/daos0
 $ taskset -c 1 vos_perf -D . -P 100G -d 10000000 -a 1 -n 1 -s 4K -z -R "U;p F;p"
@@ -300,7 +303,7 @@ Taskset is used to change the CPU affinity of the daos\_perf process.
 The same test can be performed on the 2nd pmem device to compare the
 performance.
 
-```
+```bash
 $ cd /mnt/daos1/
 $ df .
 Filesystem      1K-blocks   Used  Available Use% Mounted on
@@ -329,9 +332,9 @@ UPDATE successfully completed:
         rate      : 81044.19   IO/sec
         latency   : 12.339     us (nonsense if credits > 1)
 Duration across processes:
         MAX duration : 123.389467 sec
         MIN duration : 123.389467 sec
         Average duration : 123.389467 sec
 Completed test=UPDATE
 Running test=FETCH
 Running FETCH test (iteration=1)
@@ -350,7 +353,7 @@ Completed test=FETCH
 Bandwidth can be tested by using a larger record size (i.e. -s option). For
 instance:
 
-```
+```bash
 $ taskset -c 36 vos_perf -D . -P 100G -d 40000 -a 1 -n 1 -s 1M -z -R "U;p F;p"
 Test :
         VOS (storage only)
@@ -395,17 +398,17 @@ Completed test=FETCH
 
 !!! note
     With 3rd Gen Intel® Xeon® Scalable processors (ICX), the PMEM_NO_FLUSH
-    environment variable can be set to 1 to take advantage of the extended
+    environment variable can be set to 1 to take advantage of the extended
     asynchronous DRAM refresh (eADR) feature
 
 In DAOS mode, daos\_perf can be used as an MPI application like IOR.
 Parameters are the same, except that `-T daos` can be used to select the daos
-mode. This option can be omitted too since this is the default.
+mode. This option can be omitted since this is the default.
 
 ## Client Performance Tuning
 
-For best performance, a DAOS client should specifically bind itself to a NUMA
-node instead of leaving core allocation and memory binding to chance.  This
+For best performance, a DAOS client should bind itself to a specific NUMA
+node instead of leaving core allocation and memory binding to chance.  This
 allows the DAOS Agent to detect the client's NUMA affinity from its PID and
 automatically assign a network interface with a matching NUMA node.  The network
 interface provided in the GetAttachInfo response is used to initialize CaRT.
@@ -421,7 +424,7 @@ OFI provider.  This request occurs as part of the initialization sequence in the
 
 Upon receipt, the Agent populates a cache of responses indexed by NUMA affinity.
 Provided a client application has bound itself to a specific NUMA node and that
-NUMA node has a network device associated with it, the DAOS Agent will provide a
+NUMA node has a network device associated with it, the DAOS Agent will provide a
 GetAttachInfo response with a network interface corresponding to the client's
 NUMA node.
 
@@ -429,13 +432,13 @@ When more than one appropriate network interface exists per NUMA node, the agent
 uses a round-robin resource allocation scheme to load balance the responses for
 that NUMA node.
 
-If a client is bound to a NUMA node that has no matching network interface, then
-a default NUMA node is used for the purpose of selecting a response.  Provided
+If a client is bound to a NUMA node with no matching network interface, then
+a default NUMA node is used to select a response.  Provided
 that the DAOS Agent can detect any valid network device on any NUMA node, the
 default response will contain a valid network interface for the client.  When a
 default response is provided, a message in the Agent's log is emitted:
 
-```
+```bash
 No network devices bound to client NUMA node X.  Using response from NUMA Y
 ```
 
@@ -444,11 +447,11 @@ the wrong NUMA node, or if expected network devices for that NUMA node are
 missing from the Agent's fabric scan.
 
 In some situations, the Agent may detect no network devices and the response
-cache will be empty.  In such a situation, the GetAttachInfo response will
-contain no interface assignment and the following info message will be found in
+cache will be empty.  In such a situation, the GetAttachInfo response will
+contain no interface assignment, and the following info message will be found in
 the Agent's log:
 
-```
+```bash
 No network devices detected in fabric scan; default AttachInfo response may be incorrect
 ```
 
@@ -463,8 +466,8 @@ desired, the cache may be disabled prior to DAOS Agent startup by setting the
 Agent's environment variable `DAOS_AGENT_DISABLE_CACHE=true` or updating the
 Agent configuration file with `disable_caching: true`.
 
-If the network configuration changes while the Agent is running, and the cache
-is enabled, the Agent must be restarted to gain visibility to these changes.
+If the network configuration changes while the Agent is running and the cache
+is enabled, the Agent must be restarted to gain visibility into these changes.
 For additional information, please refer to the
 [System Deployment: Agent Startup][6] documentation section.
 
diff --git a/docs/admin/pool_operations.md b/docs/admin/pool_operations.md
index 0506daaa700..f76e2603e09 100644
--- a/docs/admin/pool_operations.md
+++ b/docs/admin/pool_operations.md
@@ -1,4 +1,4 @@
-# Pool Operations
+# DAOS Pool Operations
 
 A DAOS pool is a storage reservation that can span any storage nodes in a
 DAOS system and is managed by the administrator. The amount of space allocated
@@ -12,25 +12,26 @@ management interface or the `dmg` utility.
 A DAOS pool can be created and destroyed through the `dmg` utility.
 
 To create a pool labeled `tank`:
+
 ```bash
-$ dmg pool create --size=TB tank
+dmg pool create --size=TB tank
 ```
 
 This command creates a pool labeled `tank` distributed across the DAOS servers
-with a target size on each server that is comprised of N * 0.94 TB of NVMe storage
-and N * 0.06 TB (i.e., 6% of NVMe) of SCM storage. The default SCM:NVMe ratio
-may be adjusted at pool creation time as described below.
+with a target size on each server comprised of N * 0.94 TB of NVMe storage
+and N * 0.06 TB (i.e., 6% of NVMe) of SCM storage. As described below, the default SCM:NVMe ratio
+may be adjusted at pool creation time.
 
 The UUID allocated to the newly created pool is printed to stdout
 as well as the pool service replica ranks.
 
 !!! note
-    The --scm-size and --nvme-size options still exist, but should be
+    The --scm-size and --nvme-size options still exist but should be
     considered deprecated and will likely be removed in a future release.
 
 The label must consist of alphanumeric characters, colon (':'), period ('.'),
 hyphen ('-') or underscore ('\_'). The maximum length is set to 127 characters.
-Labels that can be parsed as UUID (e.g. 123e4567-e89b-12d3-a456-426614174000)
+Labels that can be parsed as UUID (e.g., 123e4567-e89b-12d3-a456-426614174000)
 are forbidden.
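The rules above can be sketched as a simple client-side check (a hypothetical helper mirroring the documented constraints, not the actual dmg validation code):

```bash
# Sketch of the documented label rules (hypothetical, not dmg's validator):
# allowed charset, at most 127 characters, and no UUID-formatted strings.
is_valid_label() {
  [ "${#1}" -le 127 ] || return 1
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9:._-]+$' || return 1
  # reject labels that parse as a UUID
  printf '%s' "$1" | grep -Eqi \
    '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$' \
    && return 1
  return 0
}
is_valid_label tank && echo "tank: ok"
is_valid_label 123e4567-e89b-12d3-a456-426614174000 || echo "UUID-like: rejected"
```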
 
 ```bash
@@ -57,7 +58,7 @@ The typical output of this command is as follows:
 ```bash
 $ dmg pool create --size 50GB tank
 Creating DAOS pool with automatic storage allocation: 50 GB NVMe + 6.00% SCM
 Pool created with 6.00% SCM/NVMe ratio
 -----------------------------------------
   UUID                 : 8a05bf3a-a088-4a77-bb9f-df989fce7cc8
   Service Ranks        : [1-3]
@@ -71,18 +72,19 @@ This created a pool with UUID 8a05bf3a-a088-4a77-bb9f-df989fce7cc8,
 with pool service redundancy enabled by default
 (pool service replicas on ranks 1-3).
 
-If no redundancy is desired, use --nsvc=1 in order to specify that only
+If no redundancy is desired, use --nsvc=1 to specify that only
 a single pool service replica should be created.
 
-The -t option allows defining the ratio between SCM and NVMe SSD space.
+The -t option allows defining the SCM and NVMe SSD space ratio.
 The default value is 6%, which means the space provided after --size
 will be distributed as follows:
+
 - 6% is allocated on SCM (i.e., 3GB in the example above)
 - 94% is allocated on NVMe SSD (i.e., 47GB in the example above)
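The default split for an arbitrary --size can be sketched as:

```bash
# Sketch of the default 6% SCM / 94% NVMe split for --size 50GB.
awk 'BEGIN {
  size = 50;               # requested pool size, GB
  scm  = size * 0.06;      # SCM share (DAOS metadata)
  nvme = size - scm;       # NVMe share (bulk data)
  printf "SCM: %g GB, NVMe: %g GB\n", scm, nvme
}'
```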
 
 Note that it is difficult to determine the usable space by the user, and
-currently we cannot provide the precise value. The usable space depends not only
-on pool size, but also on number of targets, target size, object class,
+currently we cannot provide a precise value. The usable space depends not only
+on pool size, but also on the number of targets, target size, object class,
 storage redundancy factor, etc.
 
 ### Listing Pools
@@ -98,16 +100,17 @@ tank     47 GB  0%   0%        0/32
 
 This returns a table of pool labels (or UUIDs if no label was specified)
 with the following information for each pool:
+
 - the total pool size
 - the percentage of used space (i.e., 100 * used space  / total space)
-- the imbalance percentage indicating whether data distribution across
-  the difference storage nodes is well balanced. 0% means that there is
-  no imbalance and 100% means that out-of-space errors might be returned
+- the imbalance percentage, indicating whether data distribution across
+  the different storage nodes is well balanced. 0% means that there is
+  no imbalance, and 100% means that out-of-space errors might be returned
   by some storage nodes while space is still available on others.
 - the number of disabled targets (0 here) and the number of targets that
-  the pool was originally configured with (total).
+  the pool was initially configured with (total).
 
-The --verbose option provides more detailed information including the
+The --verbose option provides more detailed information, including the
 number of service replicas, the full UUIDs and space distribution
 between SCM and NVMe for each pool:
 
@@ -138,7 +141,7 @@ is integrated into the dmg utility.
 To query a pool labeled `tank`:
 
 ```bash
-$ dmg pool query tank
+dmg pool query tank
 ```
 
 The label can be replaced with the pool UUID.
@@ -158,7 +161,7 @@ Below is the output for a pool created with SCM space only.
     Rebuild done, 10 objs, 1026 recs
 ```
 
-The total and free sizes are the sum across all the targets whereas
+The total and free sizes are the sums across all the targets, whereas
 min/max/mean gives information about individual targets. A min value
 close to 0 means that one target is running out of space.
 
@@ -246,7 +249,7 @@ Self-healing policy (self_heal) exclude
 Rebuild space ratio (space_rb)  0%
 ```
 
-Some properties can be modified after pool creation via the `set-prop` option.
+After pool creation, some properties can be modified via the `set-prop` option.
 
 ```bash
 $ dmg pool set-prop tank2 reclaim:lazy
@@ -263,16 +266,18 @@ Reclaim strategy (reclaim) lazy
 
 DAOS is a versioned object store that tags every I/O with an epoch number.
 This versioning mechanism is the baseline for multi-version concurrency control and
-snapshot support. Over time, unused versions need to be reclaimed in order to
-release storage space and also simplify the metadata index. This process is
+snapshot support. Over time, unused versions need to be reclaimed to
+release storage space and simplify the metadata index. This process is
 called aggregation.
 
-The reclaim property defines what strategy to use to reclaimed unused version.
+The reclaim property defines what strategy to use to reclaim unused versions.
 Three options are supported:
 
-* "lazy"     : Trigger aggregation only when there is no IO activities or SCM free space is under pressure (default strategy)
-* "time"     : Trigger aggregation regularly despite of IO activities.
-* "disabled" : Never trigger aggregation. The system will eventually run out of space even if data is being deleted.
+|Option|Description|
+|---|---|
+|lazy|Trigger aggregation only when there is no IO activity or SCM free space is under pressure (default strategy)|
+|time|Trigger aggregation regularly despite IO activities.|
+|disabled|Never trigger aggregation. The system will eventually run out of space even if data is being deleted.|
 
 ### Self-healing Policy (self\_heal)
 
@@ -284,14 +289,14 @@ Two options are supported: "exclude" (default strategy) and "rebuild".
 ### Reserved Space for Rebuilds (space\_rb)
 
 This property defines the percentage of total space reserved on each storage
-node for self-healing purpose. The reserved space cannot be consumed by
+node for self-healing purposes. The reserved space cannot be consumed by
 applications. Valid values are 0% to 100%, the default is 0%.
 When setting this property, specifying the percentage symbol is optional:
 `space_rb:2%` and `space_rb:2` both specify two percent of storage capacity.
 
 ### Default EC Cell Size (ec\_cell\_sz)
 
-This property defines the default erasure code cell size inherited to DAOS
+This property defines the default erasure code cell size inherited by DAOS
 containers. The EC cell size can be between 1kiB and 1GiB,
 although it should typically be set to a value between 32kiB and 1MiB.
 The default is 1MiB.
@@ -309,18 +314,14 @@ credentials.
 
 Access-controlled client pool accesses include:
 
-* Connecting to the pool.
-
-* Querying the pool.
+- Connecting to the pool.
+- Querying the pool.
+- Creating containers in the pool.
+- Deleting containers in the pool.
 
-* Creating containers in the pool.
+This is reflected in the set of supported [pool permissions](https://docs.daos.io/v2.2/overview/security/#permissions).
 
-* Deleting containers in the pool.
-
-This is reflected in the set of supported
-[pool permissions](https://docs.daos.io/v2.2/overview/security/#permissions).
-
-A user must be able to connect to the pool in order to access any containers
+A user must be able to connect to the pool to access any containers
 inside, regardless of their permissions on those containers.
 
 ### Ownership
@@ -334,7 +335,7 @@ be explicitly defined by an administrator in the pool ACL.
 To create a pool with a custom ACL:
 
 ```bash
-$ dmg pool create --size  --acl-file  
+dmg pool create --size  --acl-file  
 ```
 
 The ACL file format is detailed in [here](https://docs.daos.io/v2.2/overview/security/#acl-file).
@@ -344,7 +345,7 @@ The ACL file format is detailed in [here](https://docs.daos.io/v2.2/overview/sec
 To view a pool's ACL:
 
 ```bash
-$ dmg pool get-acl --outfile= 
+dmg pool get-acl --outfile= 
 ```
 
 The output is in the same string format used in the ACL file during creation,
@@ -372,7 +373,7 @@ noted above for container creation.
 To replace a pool's ACL with a new ACL:
 
 ```bash
-$ dmg pool overwrite-acl --acl-file  
+dmg pool overwrite-acl --acl-file <acl_file.txt> <pool_label>
 ```
 
 #### Adding and Updating ACEs
@@ -380,13 +381,13 @@ $ dmg pool overwrite-acl --acl-file  
 To add or update multiple entries in an existing pool ACL:
 
 ```bash
-$ dmg pool update-acl --acl-file  
+dmg pool update-acl --acl-file <acl_file.txt> <pool_label>
 ```
 
 To add or update a single entry in an existing pool ACL:
 
 ```bash
-$ dmg pool update-acl --entry  
+dmg pool update-acl --entry <ace> <pool_label>
 ```
 
 If there is no existing entry for the principal in the ACL, the new entry is
@@ -412,19 +413,19 @@ A:G:GROUP@:rw
 To delete an entry for a given principal in an existing pool ACL:
 
 ```bash
-$ dmg pool delete-acl --principal  
+dmg pool delete-acl --principal <principal> <pool_label>
 ```
 
-The principal corresponds to the principal portion of an ACE that was
+The principal corresponds to the principal portion of an ACE that was
 set during pool creation or a previous pool ACL operation. For the delete
-operation, the principal argument must be formatted as follows:
+operation, the principal argument must be formatted as follows:
 
-* Named user: `u:username@`
-* Named group: `g:groupname@`
-* Special principals: `OWNER@`, `GROUP@`, and `EVERYONE@`
+- Named user: `u:username@`
+- Named group: `g:groupname@`
+- Special principals: `OWNER@`, `GROUP@`, and `EVERYONE@`
 
-The entry for that principal will be completely removed. This does not always
-mean that the principal will have no access. Rather, their access to the pool
+The entry for that principal will be removed entirely. This does not always
+mean that the principal will have no access. Instead, their access to the pool
 will be decided based on the remaining ACL rules.
 
 ## Pool Modifications
@@ -444,26 +445,26 @@ surviving engine.
 !!! note
     The rebuild process may consume many resources on each engine and
     is thus throttled to reduce the impact on application performance. This
-    current logic relies on CPU cycles on the storage nodes. By default, the
+    current logic relies on CPU cycles on the storage nodes. By default, the
     rebuild process is configured to consume up to 30% of the CPU cycles,
     leaving the other 70% for regular I/O operations.
 
 ### Manual Exclusion
 
 An operator can exclude one or more engines or targets from a specific DAOS pool
-using the rank the target resides, as well as the target idx on that rank.
+using the rank on which the target resides and the target index on that rank.
 If a target idx list is not provided, all targets on the rank will be excluded.
 
 To exclude a target from a pool:
 
 ```bash
-$ dmg pool exclude --rank=${rank} --target-idx=${idx1},${idx2},${idx3} 
+dmg pool exclude --rank=${rank} --target-idx=${idx1},${idx2},${idx3} $DAOS_POOL
 ```
 
-The pool target exclude command accepts 2 parameters:
+The pool target exclude command accepts two parameters:
 
-* The engine rank of the target(s) to be excluded.
-* The target Indices of the targets to be excluded from that rank (optional).
+- The engine rank of the target(s) to be excluded.
+- The target indices of the targets to be excluded from that rank (optional).
 
 Upon successful manual exclusion, the self-healing mechanism will be triggered
 to restore redundancy on the remaining engines/targets.
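 For instance, omitting the index list excludes every target on a rank (a
 sketch; the pool label `tank` is a placeholder):

 ```bash
 # Exclude all targets on engine rank 2 from the hypothetical pool "tank".
 dmg pool exclude --rank=2 tank
 ```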
@@ -477,19 +478,19 @@ A pool drain operation initiates rebuild without excluding the designated engine
 or target until after the rebuild is complete.
 This allows the drained entity to continue to perform I/O while the rebuild
 operation is ongoing. Drain additionally enables non-replicated data to be
-rebuilt onto another target whereas in a conventional failure scenario non-replicated
+rebuilt onto another target, whereas in a conventional failure scenario non-replicated
 data would not be integrated into a rebuild and would be lost.
 
 To drain a target from a pool:
 
 ```bash
-$ dmg pool drain --rank=${rank} --target-idx=${idx1},${idx2},${idx3} $DAOS_POOL
+dmg pool drain --rank=${rank} --target-idx=${idx1},${idx2},${idx3} $DAOS_POOL
 ```
 
-The pool target drain command accepts 2 parameters:
+The pool target drain command accepts two parameters:
 
-* The engine rank of the target(s) to be drained.
-* The target Indices of the targets to be drained from that rank (optional).
+- The engine rank of the target(s) to be drained.
+- The target indices of the targets to be drained from that rank (optional).
 
 ### Reintegration
 
@@ -497,22 +498,22 @@ After an engine failure and exclusion, an operator can fix the underlying issue
 and reintegrate the affected engines or targets to restore the pool to its
 original state.
 The operator can either reintegrate specific targets for an engine rank by
-supplying a target idx list, or reintegrate an entire rank by omitting the list.
+supplying a target idx list, or reintegrate an entire rank by omitting the list.
 
-```
-$ dmg pool reintegrate $DAOS_POOL --rank=${rank} --target-idx=${idx1},${idx2},${idx3}
+```bash
+dmg pool reintegrate $DAOS_POOL --rank=${rank} --target-idx=${idx1},${idx2},${idx3}
 ```
 
-The pool reintegrate command accepts 3 parameters:
+The pool reintegrate command accepts three parameters:
 
-* The label or UUID of the pool that the targets will be reintegrated into.
-* The engine rank of the affected targets.
-* The target indices of the targets to be reintegrated on that rank (optional).
+- The label or UUID of the pool that the targets will be reintegrated into.
+- The engine rank of the affected targets.
+- The target indices of the targets to be reintegrated on that rank (optional).
 
-When rebuild is triggered it will list the operations and their related engines/targets
+When the rebuild is triggered, it will list the operations and their related engines/targets
 by their engine rank and target index.
 
-```
+```bash
 Target (rank 5 idx 0) is down.
 Target (rank 5 idx 1) is down.
 ...
@@ -522,12 +523,12 @@ Target (rank 5 idx 1) is down.
 
 These should be the same values used when reintegrating the targets.
 
-```
-$ dmg pool reintegrate $DAOS_POOL --rank=5 --target-idx=0,1
+```bash
+dmg pool reintegrate $DAOS_POOL --rank=5 --target-idx=0,1
 ```
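
 When it is unclear which targets were previously excluded, each rank can be
 reintegrated in turn. A dry-run sketch (`NR_RANKS` and the pool label `tank`
 are placeholders; drop the `echo` to actually execute the commands):

 ```bash
 # Print (dry run) a reintegrate command for every engine rank in the pool.
 NR_RANKS=3          # placeholder: total number of engine ranks
 DAOS_POOL=tank      # placeholder: pool label
 for i in $(seq 0 $((NR_RANKS - 1))); do
     echo dmg pool reintegrate "$DAOS_POOL" --rank="$i"
 done
 ```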
 
 !!! warning
-    While dmg pool query and list show how many targets are disabled for each
+    While `dmg pool query` and `dmg pool list` show how many targets are disabled for each
     pool, there is currently no way to list the targets that have actually
     been disabled. As a result, it is recommended for now to try to reintegrate
     all engine ranks one after the other via `for i in seq $NR_RANKs; do dmg
@@ -541,7 +542,7 @@ $ dmg pool reintegrate $DAOS_POOL --rank=5 --target-idx=0,1
 Full Support for online target addition and automatic space rebalancing is
 planned for a future release and will be documented here once available.
 
-Until then the following command(s) are placeholders and offer limited
+Until then, the following command(s) are placeholders and offer limited
 functionality related to Online Server Addition/Rebalancing operations.
 
 An operator can choose to extend a pool to include ranks not currently in the
@@ -549,11 +550,11 @@ pool.
 This will automatically trigger a server rebalance operation where objects
 within the extended pool will be rebalanced across the new storage.
 
-```
-$ dmg pool extend $DAOS_POOL --ranks=${rank1},${rank2}...
+```bash
+dmg pool extend $DAOS_POOL --ranks=${rank1},${rank2}...
 ```
 
-The pool extend command accepts one required parameter which is a comma
+The pool extend command accepts one required parameter, which is a comma
 separated list of engine ranks to include in the pool.
 
 The pool rebalance operation will work most efficiently when the pool is
@@ -569,7 +570,7 @@ without adding new ones) is currently not supported and is under consideration.
 
 A DAOS pool is instantiated on each target by a set of pmemobj files
 managed by PMDK and SPDK blobs on SSDs. Tools to verify and repair this
-persistent data is scheduled for DAOS v2.4 and will be documented here
+persistent data are scheduled for DAOS v2.4 and will be documented here
 once available.
 
 Meanwhile, PMDK provides a recovery tool (i.e., pmempool check) to verify
@@ -587,12 +588,13 @@ user and/or group.
 To change the owner user:
 
 ```bash
-$ dmg cont set-owner --pool  --cont  --user 
+dmg cont set-owner --pool <pool_label> --cont <container_label> --user <user_name>
 ```
+
 To change the owner group:
 
 ```bash
-$ dmg cont set-owner --pool  --cont  --group 
+dmg cont set-owner --pool <pool_label> --cont <container_label> --group <group_name>
 ```
 
 The user and group names are case sensitive and must be formatted as
diff --git a/docs/admin/predeployment_check.md b/docs/admin/predeployment_check.md
index 4f11432fc5d..e295555a80f 100644
--- a/docs/admin/predeployment_check.md
+++ b/docs/admin/predeployment_check.md
@@ -1,15 +1,15 @@
 # Pre-deployment Checklist
 
-This section covers the preliminary setup required on the compute and
+This section covers the preliminary setup steps required on the compute and
 storage nodes before deploying DAOS.
 
 ## Enable IOMMU
 
-In order to run the DAOS server as a non-root user with NVMe devices, the hardware
+To run the DAOS server as a non-root user with NVMe devices, the hardware
 must support virtualized device access, and it must be enabled in the system BIOS.
 On Intel® systems, this capability is named Intel® Virtualization Technology for
 Directed I/O (VT-d). Once enabled in BIOS, IOMMU support must also be enabled in
-the Linux kernel. Exact details depend on the distribution, but the following
+the Linux kernel. The exact details depend on the distribution, but the following
 example should be illustrative:
 
 ```bash
@@ -23,8 +23,8 @@ GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
 # the bootloader:
 $ sudo grub2-mkconfig --output=/boot/grub2/grub.cfg
 
-# if the command completed with no errors, reboot the system
-# in order to make the changes take effect
+# if the command completed with no errors, reboot the system
+# to make the changes take effect
 $ sudo reboot
 ```
 
@@ -34,12 +34,11 @@ $ sudo reboot
     but note that this will require running daos_server as root.
 
 !!! warning
-    If VFIO is not enabled on RHEL 8.x and derivatives, you will run into the issue described in:
-    https://github.com/spdk/spdk/issues/1153
-
+    If VFIO is not enabled on RHEL 8.x and derivatives, you will run into the issue described in
+    [SPDK issue #1153](https://github.com/spdk/spdk/issues/1153).
     The problem manifests with the following signature in the kernel logs:
 
-    ```
+    ```bash
     [82734.333834] genirq: Threaded irq requested with handler=NULL and !ONESHOT for irq 113
     [82734.341761] uio_pci_generic: probe of 0000:18:00.0 failed with error -22
     ```
@@ -57,11 +56,14 @@ any other equivalent protocol.
 
 ### DAOS User/Groups on the Servers
 
-The `daos_server` and `daos_engine` processes run under a non-privileged userid `daos_server`.
-If that user does not exist at the time the `daos-server` RPM is installed, the user will be
-created as part of the RPM installation. A group `daos-server` will also be created as its
-primary group, as well as two additional groups `daos_metrics` and `daos_daemons` to which
-the `daos_server` user will be added.
+As part of the `daos-server` RPM installation, several users and groups are
+required and will be created if they don't already exist:
+
+- `daos_server` - non-privileged userid under which the `daos_server` and
+  `daos_engine` processes run
+- `daos-server` - primary group of the `daos_server` user
+- `daos_metrics` - secondary group of the `daos_server` user
+- `daos_daemons` - secondary group of the `daos_server` user
 
 If there are site-specific rules for the creation of users and groups, it is advisable to
 create these users and groups following the site-specific conventions _before_ installing the
@@ -69,11 +71,13 @@ create these users and groups following the site-specific conventions _before_ i
 
 ### DAOS User/Groups on the Clients
 
-The `daos_agent` process runs under a non-privileged userid `daos_agent`.
-If that user does not exist at the time the `daos-client` RPM is installed, the user will be
-created as part of the RPM installation. A group `daos-agent` will also be created as its
-primary group, as well as an additional group `daos_daemons` to which the `daos_agent` user
-will be added.
+As part of the `daos-client` RPM installation, several users and groups are
+required and will be created if they don't already exist:
+
+- `daos_agent` - non-privileged userid under which the `daos_agent` process
+  runs
+- `daos-agent` - primary group of the `daos_agent` user
+- `daos_daemons` - secondary group of the `daos_agent` user
 
 If there are site-specific rules for the creation of users and groups, it is advisable to
 create these users and groups following the site-specific conventions _before_ installing the
@@ -84,7 +88,7 @@ create these users and groups following the site-specific conventions _before_ i
 DAOS ACLs for pools and containers store the actual user and group names (instead of numeric
 IDs). Therefore the servers do not need access to a synchronized user/group database.
 The DAOS Agent (running on the client nodes) is responsible for resolving a user's
-UID/GID to user/group names, which are then added to a signed credential and sent to
+UID/GID to user/group names, which are then added to a signed credential and sent to
 the DAOS storage nodes.
 
 ## Multi-rail/NIC Setup
@@ -95,7 +99,7 @@ multiple engine instances.
 ### Subnet
 
 Since all engines need to be able to communicate, the different network
-interfaces must be on the same subnet or you must configuring routing
+interfaces must be on the same subnet, or you must configure routing
 across the different subnets.
 
 ### Infiniband Settings
@@ -104,40 +108,40 @@ Some special configuration is required to use librdmacm with multiple
 interfaces.
 
 First, the accept_local feature must be enabled on the network interfaces
-to be used by DAOS. This can be done using the following command ( must
+used by DAOS. This can be done using the following command (`<interfaces>` must
 be replaced with the interface names):
 
-```
-$ sudo sysctl -w net.ipv4.conf.all.accept_local=1
+```bash
+sudo sysctl -w net.ipv4.conf.all.accept_local=1
 ```
 
 Second, Linux must be configured to only send ARP replies on the interface
 targeted in the ARP request. This is configured via the arp_ignore parameter.
 This should be set to 2 if all the IPoIB interfaces on the client and storage
-nodes are in the same logical subnet (e.g. ib0 == 10.0.0.27, ib1 == 10.0.1.27,
+nodes are in the same logical subnet (e.g., ib0 == 10.0.0.27, ib1 == 10.0.1.27,
 prefix=16).
 
-```
-$ sysctl -w net.ipv4.conf.all.arp_ignore=2
+```bash
+sysctl -w net.ipv4.conf.all.arp_ignore=2
 ```
 
-If separate logical subnets are used (e.g. prefix = 24), then the value must be
+If separate logical subnets are used (e.g., prefix = 24), then the value must be
 set to 1.
 
-```
-$ sysctl -w net.ipv4.conf.all.arp_ignore=1
+```bash
+sysctl -w net.ipv4.conf.all.arp_ignore=1
 ```
 
-Finally, the rp_filter is set to 1 by default on several distributions (e.g. on
-CentOS 7 and EL 8) and should be set to either 0 or 2, with 2 being more secure. This is
+Finally, the rp_filter is set to 1 by default on several distributions (e.g., on
+CentOS 7 and EL 8) and should be set to either 0 or 2, with 2 being the more secure. This is
 true even if the configuration uses a single logical subnet.
 
-```
-$ sysctl -w net.ipv4.conf..rp_filter=2
+```bash
+sysctl -w net.ipv4.conf.<interface>.rp_filter=2
 ```
 
 All those parameters can be made persistent in /etc/sysctl.conf by adding a new
-sysctl file under /usr/lib/sysctl.d (e.g. /usr/lib/sysctl.d/95-daos-net.conf)
+sysctl file under /usr/lib/sysctl.d (e.g., /usr/lib/sysctl.d/95-daos-net.conf)
 with all the relevant settings.
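
 As an illustrative sketch (assuming two IPoIB interfaces named `ib0` and
 `ib1` in a single logical subnet; adjust the values per the rules above):

 ```
 # /usr/lib/sysctl.d/95-daos-net.conf (hypothetical example)
 net.ipv4.conf.all.accept_local = 1
 net.ipv4.conf.all.arp_ignore = 2
 net.ipv4.conf.ib0.rp_filter = 2
 net.ipv4.conf.ib1.rp_filter = 2
 ```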
 
 For more information, please refer to the [librdmacm documentation](https://github.com/linux-rdma/rdma-core/blob/release/2.2/Documentation/librdmacm.md)
@@ -153,14 +157,16 @@ DAOS uses a series of Unix Domain Sockets to communicate between its
 various components. On modern Linux systems, Unix Domain Sockets are
 typically stored under /run or /var/run (usually a symlink to /run) and
 are a mounted tmpfs file system. There are several methods for ensuring
-the necessary directories are setup.
+the necessary directories are set up.
 
 A sign that this step may have been missed is when starting daos_server
 or daos_agent, you may see the message:
+
 ```bash
 $ mkdir /var/run/daos_server: permission denied
 Unable to create socket directory: /var/run/daos_server
 ```
+
 #### Non-default Directory
 
 By default, daos_server and daos_agent will use the directories
@@ -168,12 +174,12 @@ By default, daos_server and daos_agent will use the directories
 the default location that daos_server uses for its runtime directory,
 uncomment and set the socket_dir configuration value in /etc/daos/daos_server.yml.
 For the daos_agent, either uncomment and set the runtime_dir configuration value in
-/etc/daos/daos_agent.yml or a location can be passed on the command line using
+/etc/daos/daos_agent.yml. Alternatively, a location can be passed on the command line using
 the --runtime_dir flag (`daos_agent -d /tmp/daos_agent`).
 
 !!! warning
     Do not change these when running under `systemd` control.
-    If these directories need to be changed, insure they match the
+    If these directories need to be changed, ensure they match the
     RuntimeDirectory setting in the /usr/lib/systemd/system/daos_agent.service
     and /usr/lib/systemd/system/daos_server.service configuration files.
     The socket directories will be created and removed by `systemd` when the
@@ -185,16 +191,19 @@ Files and directories created in /run and /var/run only survive until
 the next reboot. These directories are required for subsequent runs;
 therefore, if reboots are infrequent, an easy solution
 while still utilizing the default locations is to create the
-required directories manually. To do this execute the following commands.
+required directories manually. To do this, execute the following commands.
 
 daos_server:
+
 ```bash
 $ mkdir /var/run/daos_server
 $ chmod 0755 /var/run/daos_server
 $ chown user:user /var/run/daos_server (where user is the user you
     will run daos_server as)
 ```
+
 daos_agent:
+
 ```bash
 $ mkdir /var/run/daos_agent
 $ chmod 0755 /var/run/daos_agent
@@ -204,7 +213,7 @@ $ chown user:user /var/run/daos_agent (where user is the user you
 
 #### Default Directory (persistent)
 
-The following steps are not necessary if DAOS is installed from rpms.
+The following steps are not necessary if DAOS is installed from RPMs.
 
 If the server hosting `daos_server` or `daos_agent` will be rebooted often,
 systemd provides a persistent mechanism for creating the required
@@ -213,13 +222,11 @@ time the system is provisioned and requires a reboot to take effect.
 
 To tell systemd to create the necessary directories for DAOS:
 
--   Copy the file utils/systemd/daosfiles.conf to /etc/tmpfiles.d\
+- Copy the file utils/systemd/daosfiles.conf to /etc/tmpfiles.d\
     cp utils/systemd/daosfiles.conf /etc/tmpfiles.d
-
--   Modify the copied file to change the user and group fields
-    (currently daos) to the user daos will be run as
-
--   Reboot the system, and the directories will be created automatically
+- Modify the copied file to change the user and group fields
+    (currently daos) to the user that DAOS will be run as
+- Reboot the system, and the directories will be created automatically
     on all subsequent reboots.
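
 A sketch of the resulting tmpfiles.d entries (format: `type path mode user
 group age`; the `daos` user and group below are placeholders for the account
 DAOS will run as):

 ```
 # /etc/tmpfiles.d/daosfiles.conf (hypothetical example)
 d /var/run/daos_server 0755 daos daos -
 d /var/run/daos_agent  0755 daos daos -
 ```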
 
 ### Privileged Helper
@@ -227,7 +234,6 @@ To tell systemd to create the necessary directories for DAOS:
 DAOS employs a privileged helper binary (`daos_admin`) to perform tasks
 that require elevated privileges on behalf of `daos_server`.
 
-
 When DAOS is installed from RPM, the `daos_admin` helper is automatically installed
 to the correct location with the correct permissions. The RPM creates a "daos_server"
 system group and configures permissions such that `daos_admin` may only be invoked
@@ -236,7 +242,7 @@ from `daos_server`.
 For non-RPM installations, there are two supported scenarios:
 
 1. `daos_server` is run as root, which means that `daos_admin` is also invoked as root,
-and therefore no additional setup is necessary.
+and therefore, no additional setup is necessary.
 2. `daos_server` is run as a non-root user, which means that `daos_admin` must be
 manually installed and configured.
 
@@ -261,7 +267,7 @@ $ sudo ln -s $daospath/include \
 
 !!! note
     The RPM installation is preferred for production scenarios. Manual
-    installation is most appropriate for development and predeployment
+    installation is most appropriate for development and pre-deployment
     proof-of-concept scenarios.
 
 ### Memory Lock Limits
@@ -284,10 +290,10 @@ Note that values set in `/etc/security/limits.conf` are ignored by services
 launched by systemd.
 
 For non-RPM installations where `daos_server` is launched directly from the
-commandline (including source builds), limits should be adjusted in
+command-line (including source builds), limits should be adjusted in
 `/etc/security/limits.conf` as per
-[this article](https://access.redhat.com/solutions/61334) (which is a RHEL
-specific document but the instructions apply to most Linux distributions).
+[this article](https://access.redhat.com/solutions/61334) (which is a
+RHEL-specific document, but the instructions apply to most Linux distributions).
 
 ## Socket receive buffer size
 
@@ -316,18 +322,20 @@ using any of the methods described in
 
 ## Optimize NVMe SSD Block Size
 
-DAOS server performs NVMe I/O in 4K granularity so in order to avoid alignment
-issues it is beneficial to format the SSDs that will be used with a 4K block size.
+DAOS server performs NVMe I/O in 4K granularity, so to avoid alignment
+issues, it is beneficial to format the SSDs that will be used with a 4K block size.
 
-First the SSDs need to be bound to a user-space driver to be usable with SPDK, to do
+First, the SSDs must be bound to a user-space driver to be usable with SPDK. To do
 this, use the SPDK setup script.
 
 `setup.sh` script is provided by SPDK and will be found in the following locations:
+
 - `/usr/share/spdk/scripts/setup.sh` if DAOS-maintained spdk-tools-21.07 (or greater) RPM
 is installed
 - `/install/share/spdk/scripts/setup.sh` after build from DAOS source
 
 Bind the SSDs with the following commands:
+
 ```bash
 $ sudo /usr/share/spdk/scripts/setup.sh
 0000:01:00.0 (8086 0953): nvme -> vfio-pci
@@ -338,14 +346,16 @@ able to use with DPDK and VFIO if run as user "daos".
 To change this, please adjust limits.conf memlock limit for user "daos".
 ```
 
-Now the SSDs can be accessed by SPDK we can use the `spdk_nvme_manage` tool to format
+Now that the SSDs can be accessed by SPDK, we can use the `spdk_nvme_manage` tool to format
 the SSDs with a 4K block size.
 
 `spdk_nvme_manage` tool is provided by SPDK and will be found in the following locations:
+
 - `/usr/bin/spdk_nvme_manage` if DAOS-maintained spdk-21.07-10 (or greater) RPM is installed
 - `/install/prereq/release/spdk/bin/spdk_nvme_manage` after build from DAOS source
 
 Choose to format a SSD, use option "6" for formatting:
+
 ```bash
 $ sudo /usr/bin/spdk_nvme_manage
 NVMe Management Options
@@ -360,18 +370,20 @@ NVMe Management Options
 6
 ```
 
-Available SSDs will then be listed and you will be prompted to select one.
+Available SSDs will then be listed, and you will be prompted to select one.
 
 Select the SSD to format, enter PCI Address "01:00.00":
+
 ```bash
 0000:01:00.00 INTEL SSDPEDMD800G4 CVFT45050002800CGN 0
 Please Input PCI Address(domain:bus:dev.func):
 01:00.00
 ```
 
-Erase settings will be displayed and you will be prompted to select one.
+Erase settings will be displayed, and you will be prompted to select one.
 
 Erase the SSD using option "0":
+
 ```bash
 Please Input Secure Erase Setting:
 0: No secure erase operation requested
@@ -380,9 +392,10 @@ Please Input Secure Erase Setting:
 0
 ```
 
-Supported LBA formats will then be displayed and you will be prompted to select one.
+Supported LBA formats will then be displayed, and you will be prompted to select one.
+
+Format the SSD into a 4KB block size using option "3":
 
-Format the SSD into 4KB block size using option "3".
 ```bash
 Supported LBA formats:
 0: 512 data bytes
@@ -392,25 +405,27 @@ Supported LBA formats:
 4: 4096 data bytes + 8 metadata bytes
 5: 4096 data bytes + 64 metadata bytes
 6: 4096 data bytes + 128 metadata bytes
-Please input LBA format index (0 - 6):
+Please input LBA format index (0 - 6):
 3
 ```
 
-A warning will be displayed and you will be prompted to confirm format action.
+A warning will be displayed, and you will be prompted to confirm the format action.
 
 Confirm format request by entering "Y":
+
 ```bash
 Warning: use this utility at your own risk.
-This command will format your namespace and all data will be lost.
+This command will format your namespace and all data will be lost.
 This command may take several minutes to complete,
 so do not interrupt the utility until it completes.
 Press 'Y' to continue with the format operation.
 Y
 ```
 
-Format will now proceed and a reset notice will be displayed for the given SSD.
+The format will proceed, and a reset notice will be displayed for the given SSD.
+
+The format is complete if you see something like the following:
 
-Format is complete if you see something like the following:
 ```bash
 [2022-01-04 12:56:30.075104] nvme_ctrlr.c:1414:nvme_ctrlr_reset: *NOTICE*: [0000:01:00.0] resetting
 controller
@@ -418,9 +433,10 @@ press Enter to display cmd menu ...
 
 ```
 
-Once formats has completed, verify LBA format has been applied as expected.
+Once the format has completed, verify that the LBA format has been applied as expected.
 
 Choose to list SSD controller details, use option "1":
+
 ```bash
 NVMe Management Options
 [1: list controllers]
@@ -434,9 +450,10 @@ NVMe Management Options
 1
 ```
 
-Controller details should show new "Current LBA Format".
+Controller details should show the new "Current LBA Format".
 
 Verify "Current LBA Format" is set to "LBA Format #03":
+
 ```bash
 =====================================================
 NVMe Controller:        0000:01:00.00
@@ -468,4 +485,4 @@ Current LBA Format:          LBA Format #03
 
 Displayed details for controller show LBA format is now "#03".
 
-Perform the above process for all SSDs that will be used by DAOS.
+Perform the above process for all SSDs that DAOS will use.
diff --git a/docs/admin/tiering_uns.md b/docs/admin/tiering_uns.md
index 1a2fc31e50a..589f86772f9 100644
--- a/docs/admin/tiering_uns.md
+++ b/docs/admin/tiering_uns.md
@@ -7,46 +7,40 @@ in which DAOS containers will be represented through the Lustre namespace.
 
 The current state of work can be summarized as follows :
 
--   DAOS integration with Lustre uses the Lustre foreign file/dir feature
+- DAOS integration with Lustre uses the Lustre foreign file/dir feature
     (from LU-11376 and associated patches).
-
--   Each time a DAOS POSIX container is created using the `daos` utility and its
+- Each time a DAOS POSIX container is created using the `daos` utility and its
     '--path' UNS option, a Lustre foreign file/dir of 'symlink' type is
     created with a specific LOV/LMV EA content that will allow the
     DAOS pool and containers UUIDs to be stored.
-
--   The Lustre Client patch for LU-12682 adds DAOS specific support to the Lustre
+- The Lustre Client patch for LU-12682 adds DAOS-specific support to the Lustre
     foreign file/dir feature. It allows for the foreign file/dir of `symlink` type
     to be presented and act as an `//`
     symlink to the Linux Kernel/VFS.
-
--   The `` can be specified as the new `foreign_symlink=`
+- The `` can be specified as the new `foreign_symlink=`
     Lustre Client mount option, or also through the new `llite.*.foreign_symlink_prefix`
     Lustre dynamic tuneable. Both `` and `` are
     extracted from foreign file/dir LOV/LMV EA.
-
--   To allow for symlink resolution and transparent access to the DAOS
+- To allow for symlink resolution and transparent access to the DAOS
     container content, it is expected that a DFuse/DFS instance/mount of
     DAOS Server root exists on ``, presenting all served
     pools/containers as `/` relative paths.
-
--   `daos` foreign support is enabled at mount time with the `symlink=` option
+- `daos` foreign support is enabled at mount time with the `symlink=` option
     present or dynamically, through the `llite.*.daos_enable` setting.
 
 ### Building and using a DAOS-aware Lustre version
 
 As indicated before, a Lustre Client patch (for LU-12682) has been developed
-    to allow for the application's transparent access to the DAOS container's data
-    from a Lustre foreign file/dir.
+    to allow for the application's transparent access to the DAOS container's
+    data from a Lustre foreign file/dir.
 
-This patch can be found at https://review.whamcloud.com/35856 and has
-    been landed onto master but is still not integrated with an official
-    Lustre version. This patch must be applied on top of the selected Lustre
-    version's source tree.
+This [patch](https://review.whamcloud.com/35856) has been landed onto master
+    but is still not integrated with an official Lustre version. This patch
+    must be applied on top of the selected Lustre version's source tree.
 
 After any conflicts are resolved, Lustre must be built and
-    the generated RPMs installed on client nodes by following the instructions at
-    https://wiki.whamcloud.com/display/PUB/Building+Lustre+from+Source.
+    the generated RPMs installed on client nodes by following the
+    [Building Lustre from Source](https://wiki.whamcloud.com/display/PUB/Building+Lustre+from+Source) instructions.
 
 The Lustre client mount command must use the new
     `foreign_symlink=` option to set the prefix to be used in
@@ -61,14 +55,14 @@ The Lustre client mount command must use the new
 
 To allow non-root/admin users to use the llapi_set_dirstripe()
     API (like the `daos cont create` command with `--path` option), or the
-    `lfs setdirstripe` command, the Lustre MDS servers configuration must
+    `lfs setdirstripe` command, the Lustre MDS servers' configuration must
     be modified accordingly by running the
     `lctl set_param mdt/*/enable_remote_dir_gid=-1` command.
 
  Additionally, there is a feature available to provide a customized format
     of LOV/LMV EAs, apart from the default `/`, through the
     `llite/*/foreign_symlink_upcall` tunable. This provides the path
-    of a user-land upcall, that will indicate  where to extract
+    of a user-land upcall that will indicate where to extract
     `` and `` in the LOV/LMV EAs, using a series of [pos, len]
     tuples and constant strings. `lustre/utils/l_foreign_symlink.c` is a helper
     example in the Lustre source code.
@@ -88,7 +82,7 @@ The POSIX data mover was released with DAOS v1.2 and supports data migration
 to/from a POSIX filesystem. Parallel data migration is available through
 mpiFileUtils, which contains a DAOS backend. Serial data migration is supported
 through the daos filesystem copy utility. The first version of the data mover
-tool that contains support for HDF5 containers is scheduled for release in DAOS
+tool supporting HDF5 containers is scheduled for release in DAOS
 v2.2.
 
 ### Container Parking
@@ -99,7 +93,6 @@ POSIX filesystem. This transformation is agnostic to the data model and
 container type and retains most DAOS internal metadata. The serialized file(s)
 are written to a POSIX filesystem in an HDF5 file format. A preview of the
 serialization and deserialization tools is available in DAOS v2.0 through
-mpiFileUtils, and they will be officially released in DAOS v2.2.
+mpiFileUtils, and they will be officially released in DAOS v2.2.
 
-More details and instructions on data mover usage can be found at:
-https://github.com/daos-stack/daos/blob/release/2.2/docs/user/datamover.md
+For more details and instructions, see the [data mover documentation](https://github.com/daos-stack/daos/blob/release/2.2/docs/user/datamover.md)
diff --git a/docs/admin/troubleshooting.md b/docs/admin/troubleshooting.md
index bcbd2c8c027..9435f0ea880 100644
--- a/docs/admin/troubleshooting.md
+++ b/docs/admin/troubleshooting.md
@@ -53,7 +53,7 @@ errors are documented in the table below.
 |DER_AGENT_INCOMPAT|2029|Agent is incompatible with libdaos
 
 When an operation fails, DAOS returns a negative DER error.
-For a full list of errors, please check
+For a complete list of errors, please check
 
 (`DER_ERR_GURT_BASE` is equal to 1000, and `DER_ERR_DAOS_BASE` is equal
 to 2000).
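Given the base values above, a negative DER return code can be mapped back to its offset within the GURT or DAOS error range. The following is an illustrative sketch (the example code `-1007` is `DER_NOSPACE` from the table above):

```shell
# Sketch: map a negative DER return code to its offset within the
# GURT (base 1000) or DAOS (base 2000) error ranges described above.
der=-1007                      # e.g. DER_NOSPACE
abs=$(( -der ))
if [ "$abs" -ge 2000 ]; then
    echo "DAOS error, offset $(( abs - 2000 ))"
elif [ "$abs" -ge 1000 ]; then
    echo "GURT error, offset $(( abs - 1000 ))"
fi
# prints: GURT error, offset 7
```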
@@ -63,7 +63,7 @@ number to an error message.
 
 ## Log Files
 
-On the server side, there are three log files created as part of normal
+On the server side, there are three log files created as part of normal
 server operations:
 
 |Component|Config Parameter|Example Config Value|
@@ -93,7 +93,7 @@ section of this document.
 
 ### Privileged Helper Log
 
-By default, the privileged helper only emits ERROR-level logging which
+By default, the privileged helper only emits ERROR-level logging, which
 is captured by the control plane and included in that log. If the
 `helper_log_file` parameter is set in the server config, then
 DEBUG-level logging will be sent to the specified file.
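A minimal sketch of the corresponding server config fragment (the log path here is illustrative, not a default):

```yaml
# daos_server.yml fragment: enable DEBUG-level privileged helper logging
helper_log_file: /tmp/daos_admin.log
```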
@@ -107,96 +107,79 @@ DEBUG-level logging will be sent to the specified file.
 
 DAOS uses the debug system defined in
 [CaRT](https://github.com/daos-stack/daos/tree/release/2.2/src/cart),
-specifically the GURT library.
-Both server and client default log is `stdout`, unless
-otherwise set by `D_LOG_FILE` environment variable (client) or
+specifically, the GURT library.
+The default log for both the server and client is `stdout`, unless
+otherwise set by the `D_LOG_FILE` environment variable (client) or
 `log_file` config parameter (server).
 
 ### Registered Subsystems/Facilities
 
 The debug logging system includes a series of subsystems or facilities
 which define groups for related log messages (defined per source file).
-There are common facilities which are defined in GURT, as well as other
+There are common facilities that are defined in GURT, as well as other
 facilities that can be defined on a per-project basis (such as those for
 CaRT and DAOS). DD_SUBSYS can be used to set which subsystems to enable
-logging. By default all subsystems are enabled ("DD_SUBSYS=all").
+logging. By default, all subsystems are enabled ("DD_SUBSYS=all").
 
--   DAOS Facilities:
-    array, kv, common, tree, vos, client, server, rdb, rsvc, pool, container,
+* DAOS Facilities:
+    array, kv, common, tree, vos, client, server, rdb, rsvc, pool, container,
     object, placement, rebuild, tier, mgmt, bio, tests, dfs, duns, drpc,
     security, dtx, dfuse, il, csum
-
--   Common Facilities (GURT):
+* Common Facilities (GURT):
     MISC, MEM, SWIM, TELEM
-
--   CaRT Facilities:
+* CaRT Facilities:
     RPC, BULK, CORPC, GRP, HG, ST, IV, CTL
 
 ### Priority Logging
 
 The priority level that outputs to stderr is set with DD_STDERR. By
-default in DAOS (specific to the project), this is set to CRIT
+default in DAOS (specific to the project), this is set to CRIT
 ("DD_STDERR=CRIT") meaning that all CRIT and more severe log messages
 will dump to stderr. However, this is separate from the priority of
 logging to "/tmp/daos.log". The priority level of logging can be set
 with D_LOG_MASK, which by default is set to INFO
-("D_LOG_MASK=INFO"), which will result in all messages excluding DEBUG
-messages being logged. D_LOG_MASK can also be used to specify the
+("D_LOG_MASK=INFO"), which will result in all messages, excluding DEBUG
+messages, being logged. D_LOG_MASK can also be used to specify the
 level of logging on a per-subsystem basis as well
 ("D_LOG_MASK=DEBUG,MEM=ERR").
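For example, a client environment could combine these settings as follows (a sketch; the mask values and log path are illustrative):

```shell
# Illustrative client-side settings: INFO everywhere, but only errors
# from the MEM facility, with CRIT-and-above duplicated to stderr.
export D_LOG_MASK="INFO,MEM=ERR"
export DD_STDERR="CRIT"
export D_LOG_FILE="/tmp/daos_client.log"   # path is illustrative
echo "D_LOG_MASK=$D_LOG_MASK"
```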
 
-### Debug Masks/Streams:
+### Debug Masks/Streams
 
-DEBUG messages account for a majority of the log messages, and
-finer-granularity might be desired. Mask bits are set as the first
+DEBUG messages account for most log messages, and
+finer granularity might be desired. Mask bits are set as the first
 argument passed in D_DEBUG(mask, ...). To accomplish this, DD_MASK can
 be set to enable different debug streams. Similar to facilities, there
-are common debug streams defined in GURT, as well as other streams that
+are common debug streams defined in GURT, as well as other streams that
 can be defined on a per-project basis (CaRT and DAOS). All debug streams
 are enabled by default ("DD_MASK=all"). Convenience "group mask" values
-are defined for common use cases and convenience, and consist of a
+are defined for common use cases and convenience, and consist of a
 composition of multiple individual bits.
 
--   DAOS Debug Masks:
-
-    -   md = metadata operations
-
-    -   pl = placement operations
-
-    -   mgmt = pool management
-
-    -   epc = epoch system
-
-    -   df = durable format
-
-    -   rebuild = rebuild process
-
-    -   group_default = (group mask) io, md, pl, and rebuild operations
-
-    -   group_metadata_only = (group mask) mgmt, md operations
-
-    -   group_metadata = (group mask) group_default plus mgmt operations
-
--   Common Debug Masks (GURT):
-
-    -   any = generic messages, no classification
-
-    -   trace = function trace, tree/hash/lru operations
-
-    -   mem = memory operations
-
-    -   net = network operations
-
-    -   io = object I/Otest = test programs
+* DAOS Debug Masks:
+  * md = metadata operations
+  * pl = placement operations
+  * mgmt = pool management
+  * epc = epoch system
+  * df = durable format
+  * rebuild = rebuild process
+  * group_default = (group mask) io, md, pl, and rebuild operations
+  * group_metadata_only = (group mask) mgmt, md operations
+  * group_metadata = (group mask) group_default plus mgmt operations
+* Common Debug Masks (GURT):
+  * any = generic messages, no classification
+  * trace = function trace, tree/hash/lru operations
+  * mem = memory operations
+  * net = network operations
+  * io = object I/O
+  * test = test programs
 
 ### Common Use Cases
 
-Please note: where in these examples the export command is shown setting an environment variable,
-this is intended to convey either that the variable is actually set (for the client environment), or
+Please note: where these examples show the export command setting an environment variable,
+this is intended to convey either that the variable is set (for the client environment), or
 configured for the engines in the `daos_server.yml` file (`log_mask` per engine, and env_vars
 values per engine for the `DD_SUBSYS` and `DD_MASK` variable assignments).
 
--   Generic setup for all messages (default settings)
+- Generic setup for all messages (default settings)
 
         D_LOG_MASK=DEBUG
         DD_SUBSYS=all
@@ -205,7 +188,7 @@ values per engine for the `DD_SUBSYS` and `DD_MASK` variable assignments).
 -   Disable all logs for performance tuning
 
         D_LOG_MASK=ERR -> will only log error messages from all facilities
-        D_LOG_MASK=FATAL -> will only log system fatal messages
+        D_LOG_MASK=FATAL -> will only log fatal system messages
 
 -   Gather daos metadata logs if a pool/container resource problem is observed, using the provided group mask
 
@@ -234,7 +217,9 @@ Refer to the DAOS Environment Variables document for
 more information about the debug system environment.
 
 ## Common DAOS Problems
-### Incompatible Agent ####
+
+### Incompatible Agent
+
 When DER_AGENT_INCOMPAT is received, it means that the client library libdaos.so
 is likely mismatched with the DAOS Agent.  The libdaos.so, DAOS Agent and DAOS
 Server must be built from compatible sources so that the GetAttachInfo protocol
@@ -242,25 +227,27 @@ is the same between each component.  Depending on your situation, you will need
 to either update the DAOS Agent or the libdaos.so to the newer version in order
 to maintain compatibility with each other.
 
-### HLC Sync ###
+### HLC Sync
+
 When DER_HLC_SYNC is received, it means that sender and receiver HLC timestamps
-are off by more than maximum allowed system clock offset (1 second by default).
+are off by more than the maximum allowed system clock offset (1 second by default).
 
-In order to correct this situation synchronize all server clocks to the same
+To correct this situation, synchronize all server clocks to the same
 reference time, using services like NTP.
 
-### Shared Memory Errors ###
-When DER_SHMEM_PERMS is received it means that this I/O Engine lacked the
-permissions to access the shared memory megment left behind by a previous run of
+### Shared Memory Errors
+
+When DER_SHMEM_PERMS is received, it means that the I/O Engine lacked the
+permissions to access the shared memory segment left behind by a previous run of
 the I/O Engine on the same machine.  This happens when the I/O Engine fails to
-remove the shared memory segment upon shutdown, and, there is a mismatch between
+remove the shared memory segment upon shutdown, and there is a mismatch between
 the user/group used to launch the I/O Engine between these successive runs.  To
 remedy the problem, manually identify the shared memory segment and remove it.
 
-Issue ```ipcs``` to view the Shared Memory Segments.  The output will show a
-list of segments organized by ```key```.
+Issue `ipcs` to view the Shared Memory Segments.  The output will show a
+list of segments organized by `key`.
 
-```
+```bash
 ipcs
 
 ------ Message Queues --------
@@ -277,20 +264,20 @@ key        semid      owner      perms      nsems
 ```
 
 Shared Memory Segments with keys [0x10242048 .. (0x10242048 + number of I/O
-Engines running)] are the segments that must be removed.  Use ```ipcrm``` to
+Engines running)] are the segments that must be removed.  Use `ipcrm` to
 remove the segment.
 
 For example, to remove the shared memory segment left behind by I/O Engine
 instance 0, issue:
-```
-sudo ipcrm -M 0x10242048
-```
+
+    sudo ipcrm -M 0x10242048
+
 To remove the shared memory segment left behind by I/O Engine instance 1, issue:
-```
-sudo ipcrm -M 0x10242049
-```
+
+    sudo ipcrm -M 0x10242049
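Since the keys are sequential, the cleanup commands for all engines can be generated with a small sketch like the following (the engine count is an assumption; adjust it to the node being cleaned):

```shell
# Print the ipcrm commands for a given number of I/O Engines,
# starting from the base key 0x10242048 mentioned above.
base=$(( 0x10242048 ))
nr_engines=2   # assumption: set to the number of engines on this node
for i in $(seq 0 $(( nr_engines - 1 ))); do
    printf 'sudo ipcrm -M 0x%x\n' $(( base + i ))
done
# prints:
# sudo ipcrm -M 0x10242048
# sudo ipcrm -M 0x10242049
```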
 
 ### Server Start Issues
+
 1. Read the log located in the `control_log_file`.
 1. Verify that the `daos_server` process is not currently running.
 1. Check the SCM device path in /dev.
@@ -304,15 +291,17 @@ sudo ipcrm -M 0x10242049
 1. Generate the config file using `dmg config generate`. The various requirements will be populated without a syntax error.
 1. Try starting with `allow_insecure: true`. This will rule out the credential certificate issue.
 1. Verify that the `access_points` host is accessible and the port is not used.
-1. Check the `provider` entry. See the "Network Scan and Configuration" section of the admin guide for determining the right provider to use.
+1. Check the `provider` entry. See the "Network Scan and Configuration" section of the admin guide to determine the right provider.
 1. Check `fabric_iface` in `engines`. They should be available and enabled.
 1. Check that `socket_dir` is writeable by the daos_server.
 
 ### Errors creating a Pool
+
 1. Check which engine rank you want to create a pool in with `dmg system query --verbose` and verify their State is Joined.
-1. `DER_NOSPACE(-1007)` appears: Check the size of the NVMe and PMEM. Next, check the size of the existing pool. Then check that this new pool being created will fit into the remaining disk space.
+1. `DER_NOSPACE(-1007)` appears: Check the size of the NVMe and PMEM. Next, check the size of the existing pool. Then check that this new pool will fit into the remaining disk space.
 
 ### Problems creating a container
+
 1. Check that the path to daos is your intended binary. It's usually `/usr/bin/daos`.
 1. When the server configuration is changed, it's necessary to restart the agent.
 1. `DER_UNREACH(-1006)`: Check the socket ID consistency between PMEM and NVMe. First, determine which socket you're using with `daos_server network scan -p all`. e.g., if the interface you're using in the engine section is eth0, find which NUMA Socket it belongs to. Next, determine the disks you can use with this socket by calling `daos_server storage scan` or `dmg storage scan`. e.g., if eth0 belongs to NUMA Socket 0, use only the disks with 0 in the Socket ID column.
@@ -321,6 +310,7 @@ sudo ipcrm -M 0x10242049
 1. Call `daos pool query` and check that the pool exists and has free space.
 
 ### Applications run slow
+
 Verify if you're using Infiniband for `fabric_iface`: in the server config. The IO will be significantly slower with Ethernet.
 
 ## Common Errors and Workarounds
@@ -345,7 +335,7 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The
 
 	# 2. Make sure the admin-host allow_insecure mode matches the applicable servers.
 
-### use daos command before daos_agent started
+### use the daos command before daos_agent is started
 
 	$ daos cont create $DAOS_POOL
 	daos ERR  src/common/drpc.c:217 unixcomm_connect() Failed to connect to /var/run/daos_agent/daos_agent.sock, errno=2(No such file or directory)
@@ -358,7 +348,7 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The
 		$ sudo systemctl enable daos_agent.service
 		$ sudo systemctl start daos_agent.service
 
-### use daos command with invalid or wrong parameters
+### use the daos command with invalid or wrong parameters
 
 	# Lack of providing daos pool_uuid
 	$ daos pool list-cont
@@ -402,7 +392,7 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The
 	Creating DAOS pool with automatic storage allocation: 50 GB NVMe + 6.00% SCM
 	ERROR: dmg: pool create failed: DER_NOSPACE(-1007): No space on storage target
 
-	# Workaround: dmg storage query scan to find current available storage
+	# Workaround: dmg storage query scan to find currently available storage
 		dmg storage query usage
 		Hosts  SCM-Total SCM-Free SCM-Used NVMe-Total NVMe-Free NVMe-Used
 		-----  --------- -------- -------- ---------- --------- ---------

From 0615863c1ad394abd751ec4d4eb95b3b43e7cf05 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 29 Apr 2022 14:14:19 -0700
Subject: [PATCH 03/14] resolve comments by daosbuil1 from 04/28

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/admin/administration.md  | 6 +++---
 docs/overview/architecture.md | 2 +-
 docs/overview/use_cases.md    | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/admin/administration.md b/docs/admin/administration.md
index 42964df2fb8..c715574da25 100644
--- a/docs/admin/administration.md
+++ b/docs/admin/administration.md
@@ -295,7 +295,7 @@ overhead).
 
 Useful admin dmg commands to query NVMe SSD health:
 
-- Query Per-Server Metadata:
+##### Query Per-Server Metadata:
 
 ```bash
 $ dmg storage query list-devices --help
@@ -399,7 +399,7 @@ boro-11
 
 ```
 
-- Query Storage Device Health Data:
+##### Query Storage Device Health Data
 
 ```bash
 $ dmg storage query device-health --help
@@ -501,7 +501,7 @@ boro-11
 
 ```
 
-#### Exclusion and Hotplug
+##### Exclusion and Hotplug
 
 - Manually exclude an NVMe SSD:
 
diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md
index 53bb3176354..fb4dd44ed89 100644
--- a/docs/overview/architecture.md
+++ b/docs/overview/architecture.md
@@ -96,7 +96,7 @@ DAOS aims to deliver:
 ## DAOS System
 
 A data center may have hundreds of thousands of compute instances
-interconnected via a scalable, high-performance network, where all, or 
+interconnected via a scalable, high-performance network, where all, or
 a subset of the instances called storage nodes have direct access to NVM
 storage. A DAOS installation involves several components that can be
 either collocated or distributed.
diff --git a/docs/overview/use_cases.md b/docs/overview/use_cases.md
index 3478cd41531..817de83c769 100644
--- a/docs/overview/use_cases.md
+++ b/docs/overview/use_cases.md
@@ -68,7 +68,7 @@ in the same DAOS pool represented by the gray box. The simulation reads data
 from the input container and writes raw timesteps to another container.
 It also regularly dumps checkpoints to a dedicated ckpt container.
 Finally, the down-sample job reads the raw timesteps and generates sampled timesteps
-to be analyzed by the post-process, which stores analysis data into 
+to be analyzed by the post-process, which stores analysis data into
 another container.
 
 

From c2be695f6d1e728f9978ce2dfb831b6b0138e978 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 29 Apr 2022 14:52:33 -0700
Subject: [PATCH 04/14] fix a typo

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/admin/deployment.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/admin/deployment.md b/docs/admin/deployment.md
index ec4a415e72d..1dbf7b15487 100644
--- a/docs/admin/deployment.md
+++ b/docs/admin/deployment.md
@@ -331,7 +331,7 @@ If DAOS Server failed to start, check the logs with:
 journalctl --unit daos_server.service
 ```
 
-After RPM is installed, the `daos_server` service starts automatically running as thw user
+After RPM is installed, the `daos_server` service starts automatically running as the user
 "daos". The server config is read from `/etc/daos/daos_server.yml` and
 certificates are read from `/etc/daos/certs`.
 With no other admin intervention other than the loading of certificates,

From f58517236fd85a0c16bcac00ad28c91b365b16ff Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Mon, 2 May 2022 16:19:54 -0700
Subject: [PATCH 05/14] misc updates

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/admin/administration.md        |   7 +-
 docs/dev/contributing.md            |  56 ++++-----
 docs/dev/development.md             | 107 +++++++++--------
 docs/release/support_matrix_v2_2.md |  45 +++-----
 docs/release/upgrading.md           |   1 -
 docs/release/upgrading_v2_2.md      |  15 +--
 docs/testing/datamover.md           |  14 +--
 docs/testing/dbench.md              |   3 +-
 docs/testing/ior.md                 |  16 +--
 docs/user/container.md              | 173 ++++++++++++++--------------
 docs/user/datamover.md              |  37 +++---
 docs/user/filesystem.md             |  23 ++--
 docs/user/hdf5.md                   |   5 +-
 docs/user/interface.md              |  24 ++--
 docs/user/mpi-io.md                 |  35 +++---
 docs/user/python.md                 |  50 ++++----
 docs/user/spark.md                  |  86 +++++++-------
 docs/user/tensorflow.md             | 147 ++++++++++++-----------
 docs/user/workflow.md               |  25 ++--
 19 files changed, 434 insertions(+), 435 deletions(-)

diff --git a/docs/admin/administration.md b/docs/admin/administration.md
index c715574da25..1c555b12fbe 100644
--- a/docs/admin/administration.md
+++ b/docs/admin/administration.md
@@ -295,7 +295,7 @@ overhead).
 
 Useful admin dmg commands to query NVMe SSD health:
 
-##### Query Per-Server Metadata:
+##### Query Per-Server Metadata
 
 ```bash
 $ dmg storage query list-devices --help
@@ -498,7 +498,6 @@ boro-11
         PLL Lock Loss Count:0
         NAND Bytes Written:244081
         Host Bytes Written:52114
-
 ```
 
 ##### Exclusion and Hotplug
@@ -509,7 +508,6 @@ boro-11
 $ dmg storage set nvme-faulty --help
 Usage:
   dmg [OPTIONS] storage set nvme-faulty [nvme-faulty-OPTIONS]
-
 ...
 
 [nvme-faulty command options]
@@ -575,7 +573,6 @@ The device will now be registered in the engine's persistent NVMe config so that
 $ dmg storage replace nvme --help
 Usage:
   dmg [OPTIONS] storage replace nvme [nvme-OPTIONS]
-
 ...
 
 [nvme command options]
@@ -631,7 +628,6 @@ The feature supports two LED device events: locating a healthy device and an evi
 $ dmg storage identify vmd --help
 Usage:
   dmg [OPTIONS] storage identify vmd [vmd-OPTIONS]
-
 ...
 
 [vmd command options]
@@ -646,7 +642,6 @@ $ dmg -l boro-11 storage identify vmd --uuid=6fccb374-413b-441a-bfbe-860099ac5e8
 
 If a non-VMD device UUID is used with the command, the following error will occur:
 localhost DAOS error (-1010): DER_NOSYS
-
 ```
 
 The status LED on the VMD device is now set to an "IDENTIFY" state, represented
diff --git a/docs/dev/contributing.md b/docs/dev/contributing.md
index 7dfa955998a..e281c042e77 100644
--- a/docs/dev/contributing.md
+++ b/docs/dev/contributing.md
@@ -1,12 +1,12 @@
 # Contributing to DAOS
 
 Your contributions are most welcome! There are several good ways to suggest new
-features, offer to add a feature, or just begin a dialog about DAOS:
+features, offer to add a feature, or begin a dialog about DAOS:
 
-- Open an issue in [jira](http://jira.daos.io)
+- Open an issue in [Jira](http://jira.daos.io)
 - Suggest a feature, ask a question, start a discussion, etc. in our
   [community mailing list](https://daos.groups.io/g/daos)
-- Chat with members of the DAOS community real-time on [slack](https://daos-stack.slack.com/).
+- Chat with members of the DAOS community in real-time on [slack](https://daos-stack.slack.com/).
   An invitation to join the slack workspace is automatically sent when joining
   the community mailing list.
 
@@ -30,34 +30,34 @@ The reason for a change may be manyfold: bug, enhancement, feature, code style,
 etc. so providing information about this sets the stage for understanding the
 change. If it is a bug, include information about what usage triggers the bug
 and how it manifests (error messages, assertion failure, etc.). If it is a
-feature, include information about what improvement is being made, and how it
+feature, include information about what improvement is being made and how it
 will affect usage.
 
 Providing some high-level information about the code path that is being modified
-is useful for the reader, since the files and patch fragments are not
+is helpful for the reader since the files and patch fragments are not
 necessarily going to be listed in a sensible order in the patch. Including the
 important functions being modified provides a starting point for the reader to
 follow the logic of the change, and makes it easier to search for such changes
 in the future.
 
 If the patch is based on some earlier patch, then including the git commit hash
-of the original patch, Jira ticket number, etc. is useful for tracking the
+of the original patch, Jira ticket number, etc. helps track the
 chain of dependencies. This can be very useful if a patch is landed separately
 to different maintenance branches, if it is fixing a problem in a previously
 landed patch, or if it is being imported from an upstream kernel commit.
 
 Having long commit comments that describe the change well is a good thing.
 The commit comments will be tightly associated with the code for a long time
-into the future, even many of the original commit comments from years earlier
-are still available through changes of the source code repository.
-In contrast, bug tracking systems come and go, and cannot be relied upon to
+into the future. Many of the original commit comments from years earlier
+are still available through changes to the source code repository.
+In contrast, bug tracking systems come and go and cannot be relied upon to
 track information about a change for extended periods of time.
 
 ### Commit Message Format
 
 Unlike the content of the commit message, the format is relatively easy to
-verify for correctness. Having the same standard format allows Git tools like
-git shortlog to extract information from the patches more easily.
+verify for correctness. The same standard format allows Git tools like
+git shortlog to extract information from the patches more easily.
 
 The first line of the commit comment is the commit summary of the change.
 Changes submitted to the DAOS master branch require a DAOS Jira ticket number
@@ -66,34 +66,34 @@ with DAOS and is, therefore, part of the DAOS project within Jira.
 
 The commit summary should also have a `component:` tag immediately following the
 Jira ticket number that indicates to which DAOS subsystem the commit is
-related. Example DAOS subsystems relate to modules like client, pool,
-container, object, vos, rdb; functional components like rebuild; or auxiliary
-components like build, tests, doc. This subsystem list is not exhaustive
+related. Example DAOS subsystems relate to modules like the client, pool,
+container, object, vos, rdb; functional components like rebuild; or auxiliary
+components (build, tests, doc, etc.). This subsystem list is not exhaustive
 but provides a good guideline for consistency.
 
 The commit summary line must be 62 characters or less, including the Jira
 ticket number and component tag, so that git shortlog and git format-patch
-can fit the summary onto a single line. The summary must be followed by a blank
-line. The rest of the comments should be wrapped to 70 columns or less.
-This allows for the first line to be used as a subject in emails, and also for the
+can fit the summary onto a single line. A blank line must follow the summary.
+The rest of the comments should be wrapped at 70 columns or less.
+This allows for the first line to be used as a subject in emails and also for the
 entire body to be displayed using tools like git log or git shortlog in an 80
 column window.
 
-```
-DAOS-nnn component: short description of change under 62 columns
+```bash
+DAOS-nnn component: a short description of change under 62 columns
 
 The "component:" should be a lower-case single-word subsystem of the
 DAOS code that best encompasses the change being made.  Examples of
-components include modules: client, pool, container, object, vos, rdb,
+components include modules: client, pool, container, object, vos, rdb,
 cart; functional subsystems: recovery; and auxiliary areas: build,
-tests, docs. This list is not exhaustive, but is a guideline.
+tests, docs. This list is not exhaustive but is a guideline.
 
-The commit comment should contain a detailed explanation of changes
+The commit comment should contain a detailed explanation of the changes
 being made.  This can be as long as you'd like.  Please give details
 of what problem was solved (including error messages or problems that
 were seen), a good high-level description of how it was solved, and
 which parts of the code were changed (including important functions
-that were changed, if this is useful to understand the patch, and
+that were changed, if this is useful to understand the patch, and
 for easier searching).  Wrap lines at/under 70 columns.
 
 Signed-off-by: Your Real Name 
@@ -103,16 +103,16 @@ Signed-off-by: Your Real Name 
 
 The `Signed-off-by:` line asserts that you have permission to contribute the
 code to the project according to the Developer's Certificate of Origin.
-The -s option to `git commit` also adds the `Signed-off-by:` line automatically.
+The -s option to `git commit` also automatically adds the `Signed-off-by:` line.
 
 ### Additional commit tags
 
-A number of additional commit tags can be used to further explain who has
-contributed to the patch, and for tracking purposes. These tags are commonly
+Several additional commit tags can be used to explain who has
+contributed to the patch, and for tracking purposes. These tags are commonly
 used with Linux kernel patches. These tags should appear before the
 `Signed-off-by:` tag.
 
-```
+```bash
 Acked-by: User Name 
 Tested-by: User Name 
 Reported-by: User Name 
@@ -122,5 +122,5 @@ CC: User Name 
 
 ## Pull Requests (PR)
 
-DAOS uses the common fork & merge workflow used by most GitHub-hosted projects.
+DAOS uses the standard fork & merge workflow used by most GitHub-hosted projects.
 Please refer to the [online GitHub documentation](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests).
diff --git a/docs/dev/development.md b/docs/dev/development.md
index 3dfe7f16903..65ab847c2af 100644
--- a/docs/dev/development.md
+++ b/docs/dev/development.md
@@ -1,6 +1,6 @@
 # Development Environment
 
-This section covers specific instructions to create a developer-friendly
+This section covers specific instructions for creating a developer-friendly
 environment to contribute to the DAOS development. This includes how to
 regenerate the protobuf files or add new Go package dependencies, which is
 only required for development purposes.
@@ -8,16 +8,16 @@ only required for development purposes.
 ## Building DAOS for Development
 
 The DAOS repository is hosted on [GitHub](https://github.com/daos-stack/daos).
-To checkout the current development version, simply run:
+To check out the current development version, run:
 
 ```bash
-$ git clone --recurse-submodules https://github.com/daos-stack/daos.git
+git clone --recurse-submodules https://github.com/daos-stack/daos.git
 ```
 
-For a specific branch or tag (e.g. v1.2), add `-b v1.2` to the command
+For a specific branch or tag (e.g., v1.2), add `-b v1.2` to the command
 line above.
 
-Prerequisite when built using `--build-deps` are installed in component
+Prerequisites when built using `--build-deps` are installed in component
 specific directories under PREFIX/prereq/$TARGET_TYPE.
 
 Run the following `scons` command:
@@ -29,7 +29,7 @@ $ scons PREFIX=${daos_prefix_path}
       --config=force
 ```
 
-Installing the components into separate directories allow upgrading the
+Installing the components into separate directories allows upgrading the
 components individually by replacing `--build-deps=yes` with
 `--update-prereq={component\_name}`. This requires a change to the environment
 configuration from before. For automated environment setup, source
@@ -41,18 +41,18 @@ libraries and binaries should work without any change due to relative
 paths.  Editing the `.build-vars.sh` file to replace the old with the new can
 restore the capability of setup_local.sh to automate path setup.
 
-To run daos_server, either the rpath in daos_admin needs to be patched to
+To run daos_server, the rpath in daos_admin needs to be patched to
 the new installation location of `spdk` and `isal` or `LD_LIBRARY_PATH` needs to
 be set.  This can be done using `SL_SPDK_PREFIX` and `SL_ISAL_PREFIX` set when
 sourcing `setup_local.sh`.   This can also be done with the following
 commands:
 
-```
+```bash
 source utils/sl/setup_local.sh
 sudo -E utils/setup_daos_admin.sh [path to new location of daos]
 ```
 
-This script is intended only for developer setup of `daos_admin`.
+This script is intended only for the developer setup of `daos_admin`.
 
 With this approach, DAOS gets built using the prebuilt dependencies in
 `${daos_prefix_path}/prereq`, and required options are saved for future compilations.
@@ -63,16 +63,16 @@ source code.
 If you wish to compile DAOS with clang rather than `gcc`, set `COMPILER=clang` on
 the scons command line.   This option is also saved for future compilations.
 
-Additionally, users can specify `BUILD_TYPE=[dev|release|debug]` and scons will
+Additionally, users can specify `BUILD_TYPE=[dev|release|debug]`, and scons will
 save the intermediate build for the various `BUILD_TYPE`, `COMPILER`, and `TARGET_TYPE`
-options so a user can switch between options without a full rebuild and thus
-with minimal cost.   By default, `TARGET_TYPE` is set to `'default'` which means
+options so a user can switch between options without a complete rebuild, and thus
+with minimal cost.   By default, `TARGET_TYPE` is set to `'default'`, which means
 it uses the `BUILD_TYPE` setting.  To avoid rebuilding prerequisites for every
 `BUILD_TYPE` setting, `TARGET_TYPE` can be explicitly set to a `BUILD_TYPE` setting
-to always use that set of prerequisites.  These settings are stored in daos.conf
-so setting the values on subsequent builds is not necessary.
+to always use that set of prerequisites.  These settings are stored in daos.conf,
+so setting the values on subsequent builds is unnecessary.
 
-If needed, `ALT_PREFIX` can be set to a colon separated prefix path where to
+If needed, `ALT_PREFIX` can be set to a colon-separated prefix path in which to
 look for already built components.  If set, the build will check these
 paths for components before proceeding to build.
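 
 As an illustrative sketch (the `/opt/prereq/*` directories are hypothetical), the colon-separated entries are consulted in order:
 
```shell
# Two hypothetical prebuilt-component trees; the build checks each entry
# for already built components before falling back to building them.
export ALT_PREFIX=/opt/prereq/release:/opt/prereq/debug
# Show the search order, one prefix per line:
echo "$ALT_PREFIX" | tr ':' '\n'
# Prints:
#   /opt/prereq/release
#   /opt/prereq/debug
```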
 
@@ -85,7 +85,7 @@ built.  At present, just three such targets are defined, `client`, `server`, and
 To build only client libraries and tools, use the following command:
 
 ```bash
-$ scons [args] client install
+scons [args] client install
 ```
 
 To build the server instead, substitute `server` for `client` in the above
@@ -93,7 +93,7 @@ command.
 
 Note that such targets need to be specified each time you build as the default
 is equivalent to specifying `client server test` on the command line.  The
-`test` target is, at present, dependent on `client` and `server` as well.
+`test` target is, at present, dependent on `client` and `server`.
 
 ### Stack analyzer
 
@@ -103,28 +103,28 @@ function in DAOS.
 
 The report is enabled using the `--analyze-stack="[arg] ..."` option.
 
-To get usage information for this option, run
+To get usage information for this option, run:
 
 ```bash
-$ scons COMPILER=gcc --analyze-stack="-h"
+scons COMPILER=gcc --analyze-stack="-h"
 ```
 
-The tool normally runs post build but the `-e` option can be added to run it immediately and exit
+The tool normally runs post-build, but the `-e` option can be added to run it immediately and exit
 as in the following example:
 
 ```bash
-$ scons COMPILER=gcc --analyze-stack="-e -c 1024 -x tests" -Q
+scons COMPILER=gcc --analyze-stack="-e -c 1024 -x tests" -Q
 ```
 
 One should only use this option if a prior build with gcc has been executed.  The `-Q` option to
-scons reduces the clutter from compiler setup.
+scons reduces the clutter from the compiler setup.
 
 Additionally, the tool supports options to filter by directory and file names and specify a lower
 bound value to report.
 
 ### Building Optional Components
 
-There are a few optional components that can be included into the DAOS build.
+A few optional components can be included in the DAOS build.
 For instance, to include the `psm2` provider. Run the following `scons`
 command:
 
@@ -154,10 +154,9 @@ The version of the components can be changed by editing the
 [utils/build.config](https://github.com/daos-stack/daos/blob/release/2.2/utils/build.config)
 file.
 
-
 >**_NOTE_**
 >
->The support of the optional components is not guarantee and can be removed
+>The support of the optional components is not guaranteed and can be removed
 >without further notification.
 
 ## Go dependencies
@@ -165,19 +164,19 @@ file.
 Developers contributing Go code may need to change the external dependencies
 located in the `src/control/vendor` directory. The DAOS codebase uses
 [Go Modules](https://github.com/golang/go/wiki/Modules) to manage these
-dependencies. As this feature is built in to Go distributions starting with
+dependencies. Since this feature is built into Go distributions starting with
 version 1.11, no additional tools are needed to manage dependencies.
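 
 For orientation, the module is described by a `go.mod` file in `src/control`; a minimal sketch of its shape (the `require` entry reuses the placeholder dependency from the workflow examples and is not a real DAOS dependency):
 
```
module github.com/daos-stack/daos/src/control

go 1.11

require github.com/awesome/thing v1.2.3
```

Running `go mod vendor` copies the source for every `require` entry into the `vendor/` directory, so builds need no network access.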
 
-Among other benefits, one of the major advantages of using Go Modules is that it
+Among other benefits, one of the significant advantages of using Go Modules is that it
 removes the requirement for builds to be done within the `$GOPATH`, which
 simplifies our build system and other internal tooling.
 
 While it is possible to use Go Modules without checking a vendor directory into
-SCM, the DAOS project continues to use vendored dependencies in order to
+SCM, the DAOS project continues to use vendored dependencies to
 insulate our build system from transient network issues and other problems
-associated with nonvendored builds.
+associated with non-vendored builds.
 
-The following is a short list of example workflows. For more details, please
+The following is a short list of example workflows. For more details, please
 refer to [one](https://github.com/golang/go/wiki/Modules#quick-start) of
 [the](https://engineering.kablamo.com.au/posts/2018/just-tell-me-how-to-use-go-modules/)
 [many](https://blog.golang.org/migrating-to-go-modules) resources available online.
@@ -199,11 +198,11 @@ $ git commit -a # should pick up go.mod, go.sum, vendor/*, etc.
 # update an existing dependency
 $ cd ~/daos/src/control # or wherever your daos clone lives
 $ go get -u github.com/awesome/thing
-# make sure that github.com/awesome/thing is imported somewhere in the codebase
+# Make sure that github.com/awesome/thing is imported somewhere in the codebase
 $ ./run_go_tests.sh
-# note that go.mod and go.sum have been updated automatically
+# Note that go.mod and go.sum have been updated automatically
 #
-# when ready to commit and push for review:
+# When ready to commit and push for review:
 $ go mod vendor
 $ git commit -a # should pick up go.mod, go.sum, vendor/*, etc.
 ```
@@ -212,27 +211,27 @@ $ git commit -a # should pick up go.mod, go.sum, vendor/*, etc.
 # replace/remove an existing dependency
 $ cd ~/daos/src/control # or wherever your daos clone lives
 $ go get github.com/other/thing
-# make sure that github.com/other/thing is imported somewhere in the codebase,
+# Make sure that github.com/other/thing is imported somewhere in the codebase,
 # and that github.com/awesome/thing is no longer imported
 $ ./run_go_tests.sh
-# note that go.mod and go.sum have been updated automatically
+# Note that go.mod and go.sum have been updated automatically
 #
-# when ready to commit and push for review:
+# When ready to commit and push for review:
 $ go mod tidy
 $ go mod vendor
 $ git commit -a # should pick up go.mod, go.sum, vendor/*, etc.
 ```
 
 In all cases, after updating the vendor directory, it is a good idea to verify
-that your changes were applied as expected. In order to do this, a simple
+that your changes were applied as expected. To do this, a simple
 workflow is to clear the caches to force a clean build and then run the test
 script, which is vendor-aware and will not try to download missing modules:
 
 ```bash
-$ cd ~/daos/src/control # or wherever your daos clone lives
-$ go clean -modcache -cache
-$ ./run_go_tests.sh
-$ ls ~/go/pkg/mod # ~/go/pkg/mod should either not exist or be empty
+cd ~/daos/src/control # or wherever your daos clone lives
+go clean -modcache -cache
+./run_go_tests.sh
+ls ~/go/pkg/mod # ~/go/pkg/mod should either not exist or be empty
 ```
 
 ## Protobuf Compiler
@@ -241,14 +240,14 @@ The DAOS control plane infrastructure uses [Protocol Buffers](https://github.com
 as the data serialization format for its RPC requests. Not all developers will
 need to compile the `\*.proto` files, but if Protobuf changes are needed, the
 developer must regenerate the corresponding C and Go source files using a
-Protobuf compiler compatible with proto3 syntax.
+Protobuf compiler compatible with proto3 syntax.
 
 ### Recommended Versions
 
 The recommended installation method is to clone the git repositories, check out
-the tagged releases noted below, and install from source. Later versions may
-work, but are not guaranteed.  You may encounter installation errors when
-building from source relating to insufficient permissions.  If that occurs,
+the tagged releases noted below and install from source. Later versions may
+work but are not guaranteed.  You may encounter installation errors when
+building from source relating to insufficient permissions.  If that occurs,
 you may try relocating the repo to `/var/tmp/` in order to build and install from there.
 
 - [Protocol Buffers](https://github.com/protocolbuffers/protobuf) v3.11.4. [Installation instructions](https://github.com/protocolbuffers/protobuf/blob/master/src/README.md).
@@ -265,7 +264,7 @@ updated.
 
 Note that the generated files are checked into SCM and are not generated as part
 of the normal DAOS build process. This allows developers to ensure that the
-generated files are correct after any changes to the source files are made.
+generated files are correct after making any changes to the source files.
 
 ```bash
 $ cd ~/daos/src/proto # or wherever your daos clone lives
@@ -275,8 +274,8 @@ protoc -I /home/foo/daos/src/proto/mgmt/ --go_out=plugins=grpc:/home/foo/daos/sr
 ...
 $ git status
 ...
 #       modified:   ../control/common/proto/mgmt/acl.pb.go
 #       modified:   ../control/common/proto/mgmt/mgmt.pb.go
 ...
 ```
 
@@ -302,21 +301,21 @@ $ docker build https://github.com/daos-stack/daos.git#release/2.2 \
 or from a local tree:
 
 ```bash
-$ docker build  . -f utils/docker/Dockerfile.centos.7 -t daos
+docker build  . -f utils/docker/Dockerfile.centos.7 -t daos
 ```
 
 This creates a CentOS 7 image, fetches the latest DAOS version from GitHub,
-builds it, and installs it in the image.
+builds it and installs it in the image.
 For Ubuntu and other Linux distributions, replace Dockerfile.centos.7 with
 Dockerfile.ubuntu.20.04 and the appropriate version of interest.
 
 ### Simple Docker Setup
 
-Once the image created, one can start a container that will eventually run
+Once the image is created, one can start a container that will eventually run
 the DAOS service:
 
 ```bash
-$ docker run -it -d --privileged --cap-add=ALL --name server -v /dev:/dev daos
+docker run -it -d --privileged --cap-add=ALL --name server -v /dev:/dev daos
 ```
 
 !!! note
@@ -346,10 +345,10 @@ $ docker exec server daos_server start \
     Please make sure that the uio_pci_generic module is loaded on the host.
 
 Once started, the DAOS server waits for the administrator to format the system.
-This can be triggered in a different shell, using the following command:
+This can be triggered in a different shell using the following command:
 
 ```bash
-$ docker exec server dmg -i storage format
+docker exec server dmg -i storage format
 ```
 
 Upon successful completion of the format, the storage engine is started, and pools
diff --git a/docs/release/support_matrix_v2_2.md b/docs/release/support_matrix_v2_2.md
index e845e9bca99..1b07aeb9424 100644
--- a/docs/release/support_matrix_v2_2.md
+++ b/docs/release/support_matrix_v2_2.md
@@ -1,6 +1,5 @@
 # DAOS Version 2.2 Support
 
-
 ## Community Support and Commercial Support
 
 Community support for DAOS is available through the
@@ -8,7 +7,7 @@ Community support for DAOS is available through the
 [DAOS Slack channel](https://daos-stack.slack.com/).
 The [DAOS community JIRA tickets](https://daosio.atlassian.net/jira)
 can be searched for known issues and possible solutions.
-Community support is provided on a best effort basis
+Community support is provided on a best-effort basis
 without any guaranteed SLAs.
 
 Starting with DAOS Version 2, the Intel DAOS engineering team
@@ -25,17 +24,16 @@ for information on the DAOS partner ecosystem.
 This document describes the supported environments for Intel Level-3 support
 at the DAOS 2.2 release level.
 Information for future releases is indicative only and may change.
-Partner support offerings may impose further constraints, for example if they
+Partner support offerings may impose further constraints, for example, if they
 include DAOS support as part of a more general cluster support offering
 with its own release cycle.
 
 Some members of the DAOS community have reported successful compilation
-and basic testing of DAOS in other environments (for example on ARM64
+and basic testing of DAOS in other environments (for example, on ARM64
 platforms, or on other Linux distributions). Those activities are highly
-appreciated community contributions. However such environments are
+appreciated community contributions. However, such environments are
 not currently supported in a production environment.
 
-
 ## Hardware platforms supported for DAOS Servers
 
 DAOS Version 2.2 supports the x86\_64 architecture.
@@ -55,9 +53,9 @@ All Optane PMem modules in a DAOS server must have the same capacity.
 
 While not strictly required, DAOS servers typically include NVMe disks
 for bulk storage, which must be supported by [SPDK](https://spdk.io/).
-(NVMe storage can be emulated by files on non-NVMe storage for development
+(Files on non-NVMe storage can emulate NVMe storage for development
 and testing purposes, but this is not supported in a production environment.)
 All NVMe disks managed by a single DAOS engine must have identical capacity,
 and it is strongly recommended to use identical drive models.
 It is also strongly recommended that all DAOS engines in a DAOS system
 have identical NVMe storage configurations.
@@ -70,7 +68,7 @@ in the Administration Guide.
 Each DAOS engine needs one high-speed network port for communication in the
 DAOS data plane. DAOS Version 2.2 does not support more than one
 high-speed network port per DAOS engine.
-(It is possible that two DAOS engines on a 2-socket server share a
+(Two DAOS engines on a 2-socket server may share a
 single high-speed network port for development and testing purposes,
 but this is not supported in a production environment.)
 It is strongly recommended that all DAOS engines in a DAOS system use the same
@@ -79,7 +77,6 @@ Heterogeneous adapter population across DAOS engines has **not** been tested,
 and running with such configurations may cause unexpected behavior.
 Please refer to "Fabric Support" below for more details.
 
-
 ## Hardware platforms supported for DAOS Clients
 
 DAOS Version 2.2 supports the x86\_64 architecture.
@@ -90,7 +87,6 @@ Each DAOS client needs a network port on the same high-speed interconnect
 that the DAOS servers are connected to.
 Multiple high-speed network ports per DAOS client are supported.
 
-
 ## Operating Systems supported for DAOS Servers
 
 The DAOS software stack is built and supported on
@@ -120,16 +116,16 @@ Links to CentOS Linux 7 and RHEL 7 Release Notes:
 * [CentOS 7.9.2009](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.2009)
 * [RHEL 7.9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index)
 
-CentOS Linux 7 will reach End Of Life (EOL) on June 30st, 2024.
+CentOS Linux 7 will reach End Of Life (EOL) on June 30th, 2024.
 Refer to the [RHEL Life Cycle](https://access.redhat.com/support/policy/updates/errata/)
 description on the Red Hat support website for information on RHEL support phases.
 
 ### CentOS Linux 8
 
-CentOS Linux 8 has reached End Of Life (EOL) on December 31st, 2021.
+CentOS Linux 8 reached End Of Life (EOL) on December 31st, 2021.
 Consequently, DAOS Version 2.2 does not support CentOS Linux 8.
 DAOS clusters that have been running CentOS Linux 8 have to be migrated to
-Rocky Linux 8 or RHEL 8 in order to deploy DAOS Version 2.2.
+Rocky Linux 8 or RHEL 8 to deploy DAOS Version 2.2.
 
 ### Red Hat Enterprise Linux 8
 
@@ -140,7 +136,7 @@ DAOS Version 2.2 supports RHEL 8.5 and RHEL 8.6.
     which is expected to be superseded by RHEL 8.6 by the end of May 2022.
     DAOS 2.2 support of RHEL 8.6 may therefore not be available with
     the initial DAOS 2.2.0 release: If validation issues are discovered that
     require a fix, those fixes may only be provided in a DAOS 2.2.x bugfix release.
 
 Links to RHEL 8 Release Notes:
 
@@ -159,7 +155,7 @@ DAOS Version 2.2 supports Rocky Linux 8.5 and 8.6.
     which is expected to be superseded by Rocky Linux 8.6 by the end of May 2022.
     DAOS 2.2 support of Rocky Linux 8.6 may therefore not be available with
     the initial DAOS 2.2.0 release: If validation issues are discovered that
     require a fix, those fixes may only be provided in a DAOS 2.2.x bugfix release.
 
 Links to Rocky Linux Release Notes:
 
@@ -195,7 +191,6 @@ Alma Linux,
 Ubuntu, or
 Oracle Linux.
 
-
 ## Operating Systems supported for DAOS Clients
 
 The DAOS software stack is built and supported on
@@ -207,7 +202,6 @@ are identical to those for DAOS servers. Please refer to the previous section fo
 In future DAOS releases, DAOS client support may be added for additional
 Linux distributions and/or versions.
 
-
 ## Fabric Support
 
 DAOS relies on [libfabric](https://ofiwg.github.io/libfabric/)
@@ -215,23 +209,23 @@ for communication in the DAOS data plane.
 
 DAOS Version 2.2 requires at least libfabric Version 1.14. The RPM distribution
 of DAOS includes libfabric RPM packages with the correct version.
 It is strongly recommended to use exactly the provided libfabric version
 on all DAOS servers and all DAOS clients.
 
 Not all libfabric providers are supported.
 DAOS Version 2.2 has been validated mainly with the `verbs` provider
-for InfiniBand fabrics, and the `tcp` provider for other fabrics.
+for InfiniBand fabrics and the `tcp` provider for other fabrics.
 Future DAOS releases may add support for additional libfabric providers.
 
 Note:
 DAOS Version 2.2 has been validated with and supports the `tcp` provider.
 Please refer to the
 [Provider Feature Matrix v1.14](https://github.com/ofiwg/libfabric/wiki/Provider-Feature-Matrix-v1.14.x)
 for information on the `tcp` provider in libfabric-1.14.
 
 Note:
 The `psm2` provider for Omni-Path fabrics has known issues
-when used in a DAOS context, and is not supported for production
+when used in a DAOS context and is not supported for production
 environments. Please refer to the
 [Cornelis Networks presentation](https://daosio.atlassian.net/wiki/download/attachments/11015454821/12_Update_on_Omni-Path_Support_for_DAOS_DUG21_19Nov2021.pdf)
 at [DUG21](https://daosio.atlassian.net/wiki/spaces/DC/pages/11015454821/DUG21)
@@ -262,15 +256,14 @@ The only exception to this recommendation is the mix of single-port
 and dual-port adapters of the same generation, where only one of the ports
 of the dual-port adapter(s) is used by DAOS.
 
-
 ## DAOS Scaling
 
 DAOS is a scale-out storage solution that is designed for extreme scale.
 This section summarizes the DAOS scaling targets, some DAOS architectural limits,
 and the current testing limits of DAOS Version 2.2.
 
 Note: Scaling characteristics depend on the properties of the high-performance
-interconnect, and the libfaric provider that is used. The DAOS scaling targets
+interconnect, and the libfabric provider that is used. The DAOS scaling targets
 below assume a non-blocking, RDMA-capable fabric. Most scaling tests so far
 have been performed on InfiniBand fabrics with the libfabric `verbs` provider.
 
diff --git a/docs/release/upgrading.md b/docs/release/upgrading.md
index c1252b9a8f8..4e1f8208e43 100644
--- a/docs/release/upgrading.md
+++ b/docs/release/upgrading.md
@@ -4,4 +4,3 @@ DAOS 2.2 is under active development and has not been released yet.
 The release is planned for the first half of 2022.
 In the meantime, please refer to the upgrading information for the
 [latest](https://docs.daos.io/latest/release/upgrading/) DAOS release.
-
diff --git a/docs/release/upgrading_v2_2.md b/docs/release/upgrading_v2_2.md
index 31b91524d25..882c2360bbc 100644
--- a/docs/release/upgrading_v2_2.md
+++ b/docs/release/upgrading_v2_2.md
@@ -1,6 +1,5 @@
 # Upgrading to DAOS Version 2.2
 
-
 ## Upgrading DAOS from Version 2.2.x to Version 2.2.y
 
 Upgrading DAOS from one 2.2.x fix level to a newer 2.2.y fix level is
@@ -18,13 +17,12 @@ The recommended procedure for the upgrade is:
 - Perform the RPM update to the new DAOS fix level.
 - Start the `daos_server` daemons.
 - Validate that all engines have started successfully,
   for example using `dmg system query -v`.
 - Start the `daos_agent` daemons.
 
 DAOS fix levels include all previous fix levels. So it is possible to update
 from Version 2.2.0 to Version 2.2.2 without updating to Version 2.2.1 first.
 
-
 ## Upgrading DAOS from Version 2.0 to Version 2.2
 
 ### DAOS servers running CentOS 7.9, SLES 15.3 or openSUSE Leap 15.3
@@ -43,7 +41,7 @@ The recommended procedure for the upgrade is:
 - Perform the RPM update to the new DAOS fix level.
 - Start the `daos_server` daemons.
 - Validate that all engines have started successfully,
   for example using `dmg system query -v`.
 - Start the `daos_agent` daemons.
 
 DAOS fix levels include all previous fix levels. So it is possible to update
@@ -56,17 +54,16 @@ on CentOS Linux 8 need to be reinstalled with a supported EL8 operating system
 (Rocky Linux 8.5/8.6 or RHEL 8.5/8.6) in order to use DAOS Version 2.2.
 
 The process of reinstalling a DAOS server's EL8 operating system while maintaining
-the data on PMem and NVMe has not been validated, and is not supported.
+the data on PMem and NVMe has not been validated and is not supported.
 This implies that the upgrade to DAOS Version 2.2 in those environments is essentially
-a new installation, without maintaining the data in DAOS pools and containers.
+a new installation without maintaining the data in DAOS pools and containers.
 Please refer to the
 [Upgrading to DAOS Version 2.0](https://docs.daos.io/v2.0/release/upgrading_to_v2_0/)
-document for further information on how to save existing user data before the update.
-
+document for further information on saving existing user data before the update.
 
 ## Upgrading DAOS from Version 1.x to Version 2.2
 
-Note that there is **no** backwards compatibility of DAOS Version 2.x with
+Note that there is **no** backward compatibility of DAOS Version 2.x with
 DAOS Version 1.y, so an upgrade from DAOS Version 1.0 or 1.2 to
 DAOS Version 2.2 is essentially a new installation. Please refer to the
 [Upgrading to DAOS Version 2.0](https://docs.daos.io/v2.0/release/upgrading_to_v2_0/)
diff --git a/docs/testing/datamover.md b/docs/testing/datamover.md
index 39efdce1bb4..3a8190cdf92 100644
--- a/docs/testing/datamover.md
+++ b/docs/testing/datamover.md
@@ -205,7 +205,6 @@ $ mpirun -hostfile /path/to/hostfile -np 16 /path/to/mpifileutils/install/bin/dc
 [2021-04-30T01:22:37] Rate: 457.513 MiB/s (161197834240 bytes in 336.013 seconds)
 ```
 
-
 Pool Query to verify data was copied (free space should reduce):
 
 ```sh
@@ -223,8 +222,6 @@ Pool space info:
 Rebuild idle, 0 objs, 0 recs
 ```
 
-
-
 Move data from DAOS container to POSIX directory:
 
 ```sh
@@ -268,10 +265,10 @@ $ mpirun -hostfile /path/to/hostfile -np 16 dcp --bufsize 64MB --chunksize 128MB
 [2021-04-30T01:32:17] Rate: 423.118 MiB/s (161197834240 bytes in 363.327 seconds)
 ```
 
-
 Pool Query to verify data was copied:
 
-```
+```sh
 $ dmg pool query --pool $DAOS_POOL
 
 Pool b22220ea-740d-46bc-84ad-35ed3a28aa31, ntarget=64, disabled=0, leader=1, version=1
@@ -286,7 +283,6 @@ Pool space info:
 Rebuild idle, 0 objs, 0 recs
 ```
 
-
 Check data inside the POSIX directories:
 
 ```sh
@@ -302,18 +298,16 @@ ls -latr /tmp/daos_dfuse/daos_container_copy
 drwxr-xr-x 1 standan standan 64 Apr 30 01:26 daos_dfuse
 ```
 
-
 For more details on datamover, reference
 [DAOS Support](https://github.com/hpc/mpifileutils/DAOS-Support.md)
 on the mpifileutils website.
 
-
 ## Clean Up
 
 Remove one of the copies created using datamover:
 
 ```sh
-$ rm -rf /tmp/daos_dfuse/daos_container_copy
+rm -rf /tmp/daos_dfuse/daos_container_copy
 ```
 
 Remove dfuse mountpoint:
@@ -359,7 +353,6 @@ Pool UUID                            Svc Replicas
 b22220ea-740d-46bc-84ad-35ed3a28aa31 [1-3]
 ```
 
-
 Destroy Pool:
 
 ```sh
@@ -367,7 +360,6 @@ Destroy Pool:
 $ dmg pool destroy --pool $DAOS_POOL
 ```
 
-
 Stop Agents:
 
 ```sh
diff --git a/docs/testing/dbench.md b/docs/testing/dbench.md
index 8ccecb5b722..b4d9ac7d1a5 100644
--- a/docs/testing/dbench.md
+++ b/docs/testing/dbench.md
@@ -3,7 +3,7 @@
 Install dbench on all client nodes:
 
 ```sh
-$ sudo yum install dbench
+sudo yum install dbench
 ```
 
 From one of the client nodes:
@@ -49,6 +49,7 @@ Throughput 104.568 MB/sec  100 clients  10 procs  max_latency=285.639 ms
 ```
 
 List the dfuse mount point:
+
 ```sh
 # 'testfile' comes from ior run
 # 'test-dir.0-0' comes from mdtest run
diff --git a/docs/testing/ior.md b/docs/testing/ior.md
index 320a7a162cd..f4df2ac47eb 100644
--- a/docs/testing/ior.md
+++ b/docs/testing/ior.md
@@ -24,14 +24,14 @@ section of the Administration Guide contains further information on IOR and mdte
 ## Build ior and mdtest
 
 ```sh
-$ module load mpi/mpich-x86_64 # or any other MPI stack
-$ git clone https://github.com/hpc/ior.git
-$ cd ior
-$ ./bootstrap
-$ mkdir build;cd build
-$ ../configure --with-daos=/usr --prefix=
-$ make
-$ make install
+module load mpi/mpich-x86_64 # or any other MPI stack
+git clone https://github.com/hpc/ior.git
+cd ior
+./bootstrap
+mkdir build;cd build
+../configure --with-daos=/usr --prefix=
+make
+make install
 ```
 
 ## Run ior
diff --git a/docs/user/container.md b/docs/user/container.md
index 32ea5071c4f..752e7840f23 100644
--- a/docs/user/container.md
+++ b/docs/user/container.md
@@ -2,7 +2,7 @@
 
 DAOS containers are datasets managed by the users. Similarly to S3 buckets,
 a DAOS container is a collection of objects that can be presented to applications
-through different interfaces including POSIX I/O (files and directories),
+through different interfaces, including POSIX I/O (files and directories),
 HDF5, SQL or any other data models of your choice.
 
 A container belongs to a single pool and shares the space with other containers.
@@ -55,7 +55,7 @@ The container type (i.e., POSIX or HDF5) can be passed via the --type option.
 For convenience, a container can also be identified by a path to a filesystem
 supporting extended attributes. In this case, the pool and container UUIDs
 are stored in an extended attribute of the target file or directory that can
-then be used in subsequent command invocations to identify the container.
+be used in subsequent command invocations to identify the container.
 
 ```bash
 $ daos cont create tank --label mycont --path /tmp/mycontainer --type POSIX --oclass=SX
@@ -108,6 +108,7 @@ daefe12c-45d4-44f7-8e56-995d02549041 mycont
 ### Destroying a Container
 
 To destroy a container:
+
 ```bash
 $ daos cont destroy tank mycont
 Successfully destroyed container mycont
@@ -120,16 +121,15 @@ If the container is in use, the force option (i.e., --force or -f) must be added
 Active users of a force-deleted container will fail with a bad handle error.
 
 !!! tip
-    It is orders of magniture faster to destroy a container compared to
+    It is orders of magnitude faster to destroy a container compared to
     punching/deleting each object individually.
 
 ## Container Properties
 
-Container properties are the main mechanism that one can use to control the
+Container properties are the primary mechanism that one can use to control the
 behavior of containers. This includes the type of middleware, whether some
 features like deduplication or checksum are enabled. Some properties are
-immutable after creation creation, while some others can be dynamically
-changed.
+immutable after creation, while others can be dynamically changed.
 
 ### Listing Properties
 
@@ -137,7 +137,7 @@ The `daos` utility may be used to list all container's properties as follows:
 
 ```bash
 $ daos cont get-prop tank mycont
-# -OR- --path interface shown below
+# -OR- --path interface is shown below
 $ daos cont get-prop --path=/tmp/mycontainer
 Properties for container mycont
 Name                  Value
@@ -170,7 +170,7 @@ available [here](https://docs.daos.io/v2.2/doxygen/html/).
 
 ### Changing Properties
 
-By default, a container will inherit a set of default value for each property.
+By default, a container will inherit a default value for each property.
 Those can be overridden at container creation time via the `--properties` option.
 
 ```bash
@@ -250,54 +250,53 @@ The table below summarizes the available container properties.
 
 | **Container Property**  | **Immutable**   | **Description** |
 | ------------------------| --------------- | ----------------|
-| label			  | No              | String associate with a containers. e.g., "Cat\_Pics" or "training\_data"|
+| label     | No              | String associated with a container, e.g., "Cat\_Pics" or "training\_data"|
 | owner                   | Yes             | User acting as the owner of the container|
 | group                   | Yes             | Group acting as the owner of the container|
 | acl                     | No              | Container access control list|
 | layout\_type            | Yes             | Container type (e.g., POSIX, HDF5, ...)|
 | layout\_ver             | Yes             | Layout version to be used at the discretion of I/O middleware for interoperability|
-| rf                      | Yes             | Redundancy Factor which is the maximum number of simultaneous engine failures that objects can support without data loss|
-| rf\_lvl                 | Yes             | Redundancy Level which is the level in the fault domain hierarchy to use for object placement|
+| rf                      | Yes             | Redundancy Factor, which is the maximum number of simultaneous engine failures that objects can support without data loss|
+| rf\_lvl                 | Yes             | Redundancy Level, which is the level in the fault domain hierarchy to use for object placement|
 | health                  | No              | Current state of the container|
 | alloc\_oid              | No              | Maximum allocated object ID by container allocator|
 | ec\_cell                | Yes             | Erasure code cell size for erasure-coded objects|
 | cksum                   | Yes             | Checksum off, or algorithm to use (adler32, crc[16,32,64] or sha[1,256,512])|
 | cksum\_size             | Yes             | Checksum Size determining the maximum extent size that a checksum can cover|
-| srv\_cksum              | Yes             | Whether to verify checksum on the server before writing data (default: off)|
-
+| srv\_cksum              | Yes             | Whether to verify the checksum on the server before writing data (default: off)|
 
-Moreover, the following properties have been added as placeholders, but are not
+Moreover, the following properties have been added as placeholders but are not
 fully supported yet:
 
 | **Container Property**  | **Immutable**   | **Description** |
 | ------------------------| --------------- | ----------------|
-| max\_snapshot           | No              | Impose a upper limit on number of snapshots to retain (default: 0, no limitation)|
+| max\_snapshot           | No              | Impose an upper limit on the number of snapshots to retain (default: 0, no limitation)|
 | compression             | Yes             | Online compression off, or algorithm to use (off, lz4, deflate[1-4])|
 | dedup                   | Yes             | Inline deduplication off, or algorithm to use (hash or memcmp)|
 | dedup\_threshold        | Yes             | Minimum I/O size to consider for deduplication|
 | encryption              | Yes             | Inline encryption off, or algorithm to use (XTS[128,256], CBC[128,192,256] or GCM[128,256])|
 
-Please refer to the next sections for more details on each property.
+Please refer to the following sections for more details on each property.
 
 ### Redundancy Factor
 
 Objects in a DAOS container may belong to different object classes and
 have different levels of data protection. While this model gives a lot of control
 to the users, it also requires carefully selecting a suitable class for each
-object. If objects with different data protection level are also stored in the
-same container, the user should also be prepared for the case where some objects
-might suffer from data loss after several cascading failures, while some others
-with higher level of data protection may not. This incurs extra complexity that
+object. If objects with different data protection levels are stored in the
+same container, the user should be prepared for some objects to suffer
+data loss after several cascading failures while others with a higher
+level of data protection do not. This incurs extra complexity that
 not all I/O middleware necessarily wants to deal with.
 
 To lower the bar of adoption while still keeping the flexibility, two container
 properties have been introduced:
 
-- the redundancy factor (rf) that describes the number of concurrent engine
+- the redundancy factor (rf) describes the number of concurrent engine
   exclusions that objects in the container are protected against. The rf value
   is an integer between 0 (no data protection) and 5 (support up to 5
   simultaneous failures).
-- a `health` property representing whether any object content might have been
+- a `health` property represents whether any object content might have been
   lost due to cascading engine failures. The value of this property can be
   either `HEALTHY` (no data loss) or `UNCLEAN` (data might have been lost).
 
@@ -334,8 +333,8 @@ will fail.
 For rf2, only objects with at least 3-way replication or erasure code with two
 parities or more can be stored in the container.
 
-As long as the number of simulatenous engine failures is below the redundancy
-factor, the container is reported as healthy. if not, then the container is
+As long as the number of simultaneous engine failures is below the redundancy
+factor, the container is reported as healthy. If not, the container is
 marked as unclean and cannot be accessed.
 
 ```bash
@@ -356,15 +355,15 @@ follows:
 ```bash
 $ dfuse --pool tank --container mycont1 -m /tmp/dfuse
 dfuse ERR  src/client/dfuse/dfuse_core.c:873 dfuse_cont_open(0x19b9b00) daos_cont_open() failed: DER_RF(-2031): 'Failures exceed RF'
-Failed to connect to container (5) Input/output error
+Failed to connect to container (5) Input/output error
 ```
 
-If the excluded engines can be reintegrated in the pool by the administrator,
+If the excluded engines can be reintegrated into the pool by the administrator,
 then the container state will automatically switch back to healthy and can be
 accessed again.
 
 If the user is willing to access an unhealthy container (e.g., to recover data),
-the force flag can be passed on container open or the container state can be
+the force flag can be passed when opening the container, or the container state can be
 forced to healthy via `daos cont set-prop tank mycont1 --properties health:healthy`.
 
 The redundancy level (rf\_lvl) is another property that was introduced to
@@ -372,28 +371,28 @@ specify the fault domain level to use for placement.
 
 ### Data Integrity
 
-DAOS allows to detect and fix (when data protection is enabled) silent data
+DAOS allows detecting and fixing (when data protection is enabled) silent data
 corruptions. This is done by calculating checksums for both data and metadata
-in the DAOS library on the client side and storing those checksums persistently
+in the DAOS library on the client side and storing those checksums persistently
 in SCM. The checksums will then be validated on access and on update/write as
-well on the server side if server verify option is enabled.
+well on the server side if the server verify option is enabled.
 
-Corrupted data will never be returned to the application. When a corruption is
+Corrupted data will never be returned to the application. When corruption is
 detected, DAOS will try to read from a different replica, if any.  If the
-original data cannot be recovered, then an error will be reported to the
+original data cannot be recovered, an error will be reported to the
 application.
 
 To enable and configure checksums, the following container properties are used
-during container create.
+during container creation.
 
 - cksum (`DAOS_PROP_CO_CSUM`): the type of checksum algorithm to use.
   Supported values are adler32, crc[16|32|64] or sha[1|256|512]. By default,
-  checksum is disabled for new containers.
+  checksums are disabled for new containers.
 - cksum\_size (`DAOS_PROP_CO_CSUM_CHUNK_SIZE`): defines the chunk size used for
   creating checksums of array types. (default is 32K).
 - srv\_cksum (`DAOS_PROP_CO_CSUM_SERVER_VERIFY`): Because of the probable decrease to
   IOPS, in most cases, it is not desired to verify checksums on an object
-  update on the server side. It is sufficient for the client to verify on
+  update on the server side. It is sufficient for the client to verify on
   a fetch because any data corruption, whether on the object update,
   storage, or fetch, will be caught. However, there is an advantage to
   knowing if corruption happens on an update. The update would fail
@@ -401,7 +400,7 @@ during container create.
   error to upper levels.
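To make the chunked-checksum scheme described above concrete, here is a small illustrative Python sketch (not DAOS code): it mimics how a `cksum_size` chunk size bounds the extent that each checksum covers, using crc32 as a stand-in for the configured algorithm.

```python
import zlib

CHUNK_SIZE = 32 * 1024  # analogous to cksum_size (default 32K)

def chunk_checksums(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Compute one crc32 per chunk, so each checksum covers at
    most chunk_size bytes of the extent."""
    return [
        zlib.crc32(data[off:off + chunk_size])
        for off in range(0, len(data), chunk_size)
    ]

def verify(data: bytes, checksums, chunk_size: int = CHUNK_SIZE):
    """Return the indices of corrupted chunks (empty list if clean)."""
    actual = chunk_checksums(data, chunk_size)
    return [i for i, (a, b) in enumerate(zip(actual, checksums)) if a != b]

data = bytes(100 * 1024)           # 100 KiB of zeros -> 4 chunks of 32 KiB
sums = chunk_checksums(data)
corrupted = bytearray(data)
corrupted[40 * 1024] ^= 0xFF       # flip one byte inside the second chunk
print(verify(bytes(corrupted), sums))   # only chunk index 1 mismatches
```

Because the corruption is localized to one chunk, only that chunk's checksum fails verification; the rest of the extent remains readable.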
 
 For instance, to create a new container with crc64 checksum enabled and
-checksum verification on the server side, one can use the following command
+checksum verification on the server side, one can use the following command
 line:
 
 ```bash
@@ -440,8 +439,8 @@ administrator at pool creation time.
 
 ### Deduplication (Preview)
 
-Data deduplication (dedup) is a process that allows to eliminate duplicated
-data copies in order to decrease capacity requirements. DAOS has some initial
+Data deduplication (dedup) is a process that allows eliminating duplicated
+data copies to decrease capacity requirements. DAOS has some initial
 support of inline dedup.
 
 When dedup is enabled, each DAOS server maintains a per-pool table indexing
@@ -452,9 +451,9 @@ If an extent is found, then two options are provided:
 
 - Transferring the data from the client to the server and doing a memory compare
   (i.e., memcmp) of the two extents to verify that they are indeed identical.
-- Trusting the hash function and skipping the data transfer. To minimize issue
+- Trusting the hash function and skipping the data transfer. To minimize the issue
   with hash collision, a cryptographic hash function (i.e., SHA256) is used in
-  this case. The benefit of this approarch is that the data to be written does
+  this case. The benefit of this approach is that the data to be written does
   not need to be transferred to the server. Data processing is thus greatly
   accelerated.
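The hash-based variant described above can be sketched as follows. This is an illustrative in-memory Python model of a dedup table, not the DAOS implementation; it assumes SHA-256 as the hash and a 4K `dedup_threshold`.

```python
import hashlib

class DedupTable:
    """Toy per-pool index mapping extent hashes to stored extent IDs."""

    def __init__(self, threshold: int = 4 * 1024):
        self.threshold = threshold   # minimum I/O size to consider (dedup_threshold)
        self.index = {}              # sha256 digest -> extent id
        self.next_id = 0

    def write(self, extent: bytes):
        """Return (extent_id, deduplicated)."""
        if len(extent) < self.threshold:
            return self._store(extent), False
        digest = hashlib.sha256(extent).digest()
        if digest in self.index:     # trust the hash, skip the data transfer
            return self.index[digest], True
        eid = self._store(extent)
        self.index[digest] = eid
        return eid, False

    def _store(self, extent: bytes) -> int:
        eid = self.next_id
        self.next_id += 1
        return eid

table = DedupTable()
block = bytes(8 * 1024)
eid1, dup1 = table.write(block)
eid2, dup2 = table.write(block)      # identical extent: deduplicated
print(eid1 == eid2, dup2)            # True True
```

In the hash-trusting mode, the second write never ships the payload again; the cryptographic hash makes an accidental collision between distinct extents astronomically unlikely.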
 
@@ -462,17 +461,20 @@ The inline dedup feature can be enabled on a per-container basis. To enable and
 configure dedup, the following container properties are used:
 
 - dedup (`DAOS_PROP_CO_DEDUP`): Type of dedup mechanism to use. Supported values
-  are off (default), memcmp (memory compare) or hash (hash-based using SHA256).
+  are:
+  - off (default)
+  - memcmp (memory compare)
+  - hash (hash-based using SHA256).
 - dedup\_threshold (`DAOS_PROP_CO_DEDUP_THRESHOLD`): defines the minimal I/O size
   to consider the I/O for dedup (default is 4K).
 
 !!! warning
     Dedup is a feature preview in 2.0 and has some known
-    limitations. Aggregation of deduplicated extents isn't supported and the
+    limitations. Aggregation of deduplicated extents isn't supported, and the
     checksum tree isn't persistent yet. This means that aggregation is disabled
-    for a container with dedplication enabled and duplicated extents won't be
+    for a container with deduplication enabled, and duplicated extents won't be
     matched after a server restart.
-    NVMe isn't supported for dedup enabled container, so please make sure not
+    NVMe isn't supported for dedup-enabled containers, so please avoid
     using dedup on the pool with NVMe enabled.
 
 ### Compression (unsupported)
@@ -502,8 +504,7 @@ $ daos cont destroy-snap tank mycont -e 262508437483290624
 ```
 
 The max\_snapshot (`DAOS_PROP_CO_SNAPSHOT_MAX`) property is used to limit the
-maximum number of snapshots to retain. When a new snapshot is taken, and the
-threshold is reached, the oldest snapshot will be automatically deleted.
+maximum number of snapshots to retain. When a new snapshot is taken and the
+threshold has been reached, the oldest snapshot is automatically deleted.
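The retention behavior of `max_snapshot` can be sketched with a short illustrative Python model (not DAOS code), where snapshots are identified by their epochs:

```python
from collections import deque

def take_snapshot(snaps: deque, epoch: int, max_snapshot: int = 0):
    """Append a snapshot epoch; if max_snapshot is non-zero and the
    threshold is exceeded, drop the oldest snapshot. 0 means no limit."""
    snaps.append(epoch)
    if max_snapshot and len(snaps) > max_snapshot:
        snaps.popleft()   # the oldest snapshot is pruned automatically
    return snaps

snaps = deque([100, 200, 300])
take_snapshot(snaps, 400, max_snapshot=3)
print(list(snaps))   # [200, 300, 400] -> epoch 100 was pruned
```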
 
 Rolling back the content of a container to a snapshot is planned for future
 DAOS versions.
@@ -543,24 +544,23 @@ Client user and group access for containers is controlled by
 
 Access-controlled container accesses include:
 
-* Opening the container for access.
+- Opening the container for access.
 
-* Reading and writing data in the container.
+- Reading and writing data in the container.
 
-  * Reading and writing objects.
+  - Reading and writing objects.
 
-  * Getting, setting, and listing user attributes.
+  - Getting, setting, and listing user attributes.
 
-  * Getting, setting, and listing snapshots.
+  - Getting, setting, and listing snapshots.
 
-* Deleting the container (if the pool does not grant the user permission).
+- Deleting the container (if the pool does not grant the user permission).
 
-* Getting and setting container properties.
+- Getting and setting container properties.
 
-* Getting and modifying the container ACL.
-
-* Modifying the container's owner.
+- Getting and modifying the container ACL.
 
+- Modifying the container's owner.
 
 This is reflected in the set of supported
 [container permissions](https://docs.daos.io/v2.2/overview/security/#permissions).
@@ -572,11 +572,11 @@ to one does not guarantee access to the other. However, a user must have
 permission to connect to a container's pool before they can access the
 container in any way, regardless of their permissions on that container.
 Once the user has connected to a pool, container access decisions are based on
-the individual container ACL. A user need not have read/write access to a pool
-in order to open a container with read/write access, for example.
+the individual container ACL. For example, a user need not have read/write access to a pool
+to open a container with read/write access.
 
-There is one situation in which the pool can grant a container-level permission:
-Container deletion. If a user has Delete permission on a pool, this grants them
+There is one situation in which the pool can grant container-level permission:
+Container deletion. If a user has Delete permission on a pool, this grants them
 the ability to delete *any* container in the pool, regardless of their
 permissions on that container.
 
@@ -589,9 +589,9 @@ permission in the container's ACL.
 To create a container labeled mycont in a pool labeled tank with a custom ACL:
 
 ```bash
-$ export DAOS_POOL="tank"
-$ export DAOS_CONT="mycont"
-$ daos cont create $DAOS_POOL --label $DAOS_CONT --acl-file=
+export DAOS_POOL="tank"
+export DAOS_CONT="mycont"
+daos cont create $DAOS_POOL --label $DAOS_CONT --acl-file=
 ```
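For illustration, an ACL file is a plain-text list of Access Control Entries, one per line. The entries below are hypothetical examples (the principal `friend@` and the exact permission strings are placeholders; see the ACL file format documentation for the authoritative syntax):

```
# ACE format: TYPE:FLAGS:PRINCIPAL:PERMISSIONS (example entries only)
A::OWNER@:rwdtTaAo
A:G:GROUP@:rwtT
A::friend@:rw
A::EVERYONE@:r
```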
 
 The ACL file format is detailed in the
@@ -602,7 +602,7 @@ The ACL file format is detailed in the
 To view a container's ACL:
 
 ```bash
-$ daos cont get-acl $DAOS_POOL $DAOS_CONT
+daos cont get-acl $DAOS_POOL $DAOS_CONT
 ```
 
 The output is in the same string format used in the ACL file during creation,
@@ -618,7 +618,7 @@ noted above for container creation.
 To replace a container's ACL with a new ACL:
 
 ```bash
-$ daos cont overwrite-acl $DAOS_POOL $DAOS_CONT --acl-file=
+daos cont overwrite-acl $DAOS_POOL $DAOS_CONT --acl-file=
 ```
 
 #### Adding and Updating ACEs
@@ -626,13 +626,13 @@ $ daos cont overwrite-acl $DAOS_POOL $DAOS_CONT --acl-file=
 To add or update multiple entries in an existing container ACL:
 
 ```bash
-$ daos cont update-acl $DAOS_POOL $DAOS_CONT --acl-file=
+daos cont update-acl $DAOS_POOL $DAOS_CONT --acl-file=
 ```
 
 To add or update a single entry in an existing container ACL:
 
 ```bash
-$ daos cont update-acl $DAOS_POOL $DAOS_CONT --entry 
+daos cont update-acl $DAOS_POOL $DAOS_CONT --entry 
 ```
 
 If there is no existing entry for the principal in the ACL, the new entry is
@@ -644,24 +644,24 @@ is replaced with the new one.
 To delete an entry for a given principal in an existing container ACL:
 
 ```bash
-$ daos cont delete-acl $DAOS_POOL $DAOS_CONT --principal=
+daos cont delete-acl $DAOS_POOL $DAOS_CONT --principal=
 ```
 
 The `principal` argument refers to the
 [principal](https://docs.daos.io/v2.2/overview/security/#principal), or
-identity, of the entry to be removed.
+identity, of the entry to be removed.
 
 For the delete operation, the `principal` argument must be formatted as follows:
 
-* Named user: `u:username@`
-* Named group: `g:groupname@`
-* Special principals:
-  * `OWNER@`
-  * `GROUP@`
-  * `EVERYONE@`
+- Named user: `u:username@`
+- Named group: `g:groupname@`
+- Special principals:
+  - `OWNER@`
+  - `GROUP@`
+  - `EVERYONE@`
 
-The entry for that principal will be completely removed. This does not always
-mean that the principal will have no access. Rather, their access to the
+The entry for that principal will be removed entirely. This does not always
+mean that the principal will have no access. Instead, their access to the
 container will be decided based on the remaining ACL rules.
 
 ### Ownership
@@ -673,20 +673,21 @@ They may be set on container creation and changed later.
 #### Privileges
 
 The owner-user (`OWNER@`) has some implicit privileges on their container.
-These permissions are silently included alongside any permissions that the
-user was explicitly granted by entries in the ACL.
+These permissions are silently included alongside any permissions explicitly
+granted to the user by entries in the ACL.
 
 The owner-user will always have the following implicit capabilities:
 
-* Open container
-* Set ACL (A)
-* Get ACL (a)
+- Open container
+- Set ACL (A)
+- Get ACL (a)
 
 Because the owner's special permissions are implicit, they do not need to be
 specified in the `OWNER@` entry. After
 [determining](https://docs.daos.io/v2.2/overview/security/#enforcement)
 the user's privileges from the container ACL, DAOS checks whether the user
-requesting access is the owner-user. If so, DAOS grants the owner's
+requesting access is the owner-user. If so, DAOS grants the owner's
 implicit permissions to that user, in addition to any permissions granted by
 the ACL.
 
@@ -700,7 +701,7 @@ creating the container. However, a specific user and/or group may be specified
 at container creation time.
 
 ```bash
-$ daos cont create $DAOS_POOL --label $DAOS_CONT --user= --group=
+daos cont create $DAOS_POOL --label $DAOS_CONT --user= --group=
 ```
 
 The user and group names are case sensitive and must be formatted as
@@ -711,13 +712,13 @@ The user and group names are case sensitive and must be formatted as
 To change the owner user:
 
 ```bash
-$ daos cont set-owner $DAOS_POOL $DAOS_CONT --user=
+daos cont set-owner $DAOS_POOL $DAOS_CONT --user=
 ```
 
 To change the owner group:
 
 ```bash
-$ daos cont set-owner $DAOS_POOL $DAOS_CONT --group=
+daos cont set-owner $DAOS_POOL $DAOS_CONT --group=
 ```
 
 The user and group names are case sensitive and must be formatted as
diff --git a/docs/user/datamover.md b/docs/user/datamover.md
index c769d3a9fa6..35e1d62e2a8 100644
--- a/docs/user/datamover.md
+++ b/docs/user/datamover.md
@@ -11,22 +11,16 @@ in an HDF5 file(s) and can be restored to a new DAOS container.
 
 These tools are implemented within the `daos` command.
 
-* `daos filesystem copy` - Copy between POSIX containers and POSIX filesystems
-  using the `libdfs` library.
-* `daos container clone` - Copy any container to a new container using the
-  Object API (`libdaos` library).
+- `daos filesystem copy` - Copy between POSIX containers and POSIX filesystems using the `libdfs` library.
+- `daos container clone` - Copy any container to a new container using the Object API (`libdaos` library).
 
 These tools have MPI support and are implemented in the external
 [MpiFileutils](https://github.com/hpc/mpifileutils) repository.
 
-* `dcp` - Copy between POSIX containers and POSIX filesystems using the
-  `libdfs` library, or copy between any two DAOS containers using the
-  Object API (`libdaos` library).
-* `dsync` - Similar to `dcp`, but attempts to only copy the difference
-  between the source and destination.
-* `daos-serialize` - Serialize any DAOS container to an HDF5 file(s).
-* `daos-deserialize` - Deserialize any DAOS container that was serialized with
-  `daos-serialize`.
+- `dcp` - Copy between POSIX containers and POSIX filesystems using the `libdfs` library, or copy between any two DAOS containers using the Object API (`libdaos` library).
+- `dsync` - Similar to `dcp`, but attempts to copy only the difference between the source and destination.
+- `daos-serialize` - Serialize any DAOS container to an HDF5 file(s).
+- `daos-deserialize` - Deserialize any DAOS container that was serialized with `daos-serialize`.
 
 More documentation and uses cases for these tools can be found
 [here](https://github.com/hpc/mpifileutils/blob/release/2.2/DAOS-Support.md).
@@ -49,21 +43,24 @@ There are two mandatory command-line options; these are:
     In DAOS 1.2, only directories are supported as the source or destination.
     Files, directories, and symbolic links are copied from the source directory.
 
-#### Examples
+#### Examples
 
 Copy a POSIX container to a POSIX filesystem:
+
 ```shell
-$ daos filesystem copy --src daos:/// --dst 
+daos filesystem copy --src daos:/// --dst 
 ```
 
 Copy from a POSIX filesystem to a sub-directory in a POSIX container:
+
 ```shell
-$ daos filesystem copy --src  --dst daos:////
+daos filesystem copy --src  --dst daos:////
 ```
 
 Copy from a POSIX container by specifying a UNS path:
+
 ```shell
-$ daos filesystem copy --src  --dst 
+daos filesystem copy --src  --dst 
 ```
 
 ### `daos container clone`
@@ -77,14 +74,16 @@ There are two mandatory command-line options; these are:
 
 The destination container must not already exist.
 
-#### Examples
+#### Examples
 
 Clone a container to a new container with a given UUID:
+
 ```shell
-$ daos container clone --src // --dst //
+daos container clone --src // --dst //
 ```
 
 Clone a container to a new container with an auto-generated UUID:
+
 ```shell
-$ daos container clone --src // --dst /
+daos container clone --src // --dst /
 ```
diff --git a/docs/user/filesystem.md b/docs/user/filesystem.md
index dcb28a46819..58327135aa4 100644
--- a/docs/user/filesystem.md
+++ b/docs/user/filesystem.md
@@ -121,7 +121,7 @@ to DAOS. It should be run with the credentials of the user, and typically will
 be started and stopped on each compute node as part of the prolog and epilog
 scripts of any resource manager or scheduler in use.
 
-### Core binding and threads.
+### Core binding and threads
 
 DFuse will launch one thread per available core by default, although this can be
 changed by the `--thread-count` option. To change the cores that DFuse runs on
@@ -213,7 +213,7 @@ To create a new container and link it into the namespace of an existing one,
 use the following command.
 
 ```bash
-$ daos container create  --type POSIX --path 
+daos container create  --type POSIX --path 
 ```
 
 The pool should already exist, and the path should specify a location
@@ -225,7 +225,7 @@ not supplied, it will be created.
 To destroy a container again, the following command should be used.
 
 ```bash
-$ daos container destroy --path 
+daos container destroy --path 
 ```
 
 This will both remove the link between the containers and remove the container
@@ -237,8 +237,9 @@ links to containers without also removing the container itself.
 Information about a container, for example, the presence of an entry point between
 containers, or the pool and container uuids of the container linked to can be
 read with the following command.
+
 ```bash
-$ daos container info --path 
+daos container info --path 
 ```
 
 Please find below an example.
@@ -339,11 +340,11 @@ owned by the container owner, regardless of the user used to create them.  Permi
 checked on connect, so if permissions are revoked users need to
 restart DFuse for these to be picked up.
 
-#### Pool permissions.
+#### Pool permissions
 
 DFuse needs 'r' permission for pools only.
 
-#### Container permissions.
+#### Container permissions
 
 DFuse needs 'r' and 't' permissions to run: read for accessing the data, 't' to read container
 properties to know the container type. For older layout versions (containers created by DAOS v2.0.x
@@ -356,7 +357,7 @@ Write permission for the container is optional; however, without it the containe
 When done, the file system can be unmounted via fusermount:
 
 ```bash
-$ fusermount3 -u /tmp/daos
+fusermount3 -u /tmp/daos
 ```
 
 When this is done, the local DFuse daemon should shut down the mount point,
@@ -376,14 +377,14 @@ leading to improved performance.
 To use the interception library, set `LD_PRELOAD` to point to the shared library
 in the DAOS install directory:
 
-```
+```sh
 LD_PRELOAD=/path/to/daos/install/lib/libioil.so
 LD_PRELOAD=/usr/lib64/libioil.so # when installed from RPMs
 ```
 
 For instance:
 
-```
+```bash
 $ dd if=/dev/zero of=./foo bs=1G count=20
 20+0 records in
 20+0 records out
@@ -414,13 +415,13 @@ library will print to stderr on the first two intercepted read calls, the first
 two write calls and the first two stat calls.  To have all calls printed set the
 value to -1.  A value of 0 means to print the summary at program exit only.
 
-```
+```sh
 D_IL_REPORT=2
 ```
 
 For instance:
 
-```
+```bash
 $ D_IL_REPORT=1 LD_PRELOAD=/usr/lib64/libioil.so dd if=/dev/zero of=./bar bs=1G count=20
 [libioil] Intercepting write of size 1073741824
 20+0 records in
diff --git a/docs/user/hdf5.md b/docs/user/hdf5.md
index b27e37044a8..29d7ef5c173 100644
--- a/docs/user/hdf5.md
+++ b/docs/user/hdf5.md
@@ -1,7 +1,7 @@
 # HDF5 Support
 
 The Hierarchical Data Format Version 5 (HDF5) specification and tools are
-maintained by the HDF Group (https://www.hdfgroup.org/).
+maintained by the [HDF Group](https://www.hdfgroup.org/).
 Applications that use HDF5 can utilize DAOS in two ways:
 
 ## HDF5 over MPI-IO
@@ -21,11 +21,10 @@ is available from the HDF Group.
 
 Please refer to the [HDF5 DAOS VOL Connector Users
 Guide](https://github.com/HDFGroup/vol-daos/blob/master/docs/users_guide.pdf)
-for instructions on how to build and use HDF5 with this DAOS VOL connector.
+for instructions on building and using HDF5 with this DAOS VOL connector.
 
 The presentation [Advancing HDF5's Parallel I/O for Exascale with
 DAOS](https://www.hdfgroup.org/wp-content/uploads/2020/10/HDF5_HUG_2020_DAOS.pdf)
 from the HDF Users Group 2020 describes the HDF5 DAOS VOL Connector Project
 and its current status.  The [video](https://youtu.be/P_V7y_G4vM0)
 of that presentation is also available online.
-
diff --git a/docs/user/interface.md b/docs/user/interface.md
index c4a8775b89a..4c8951041dd 100644
--- a/docs/user/interface.md
+++ b/docs/user/interface.md
@@ -18,10 +18,10 @@ The pydaos.raw submodule provides access to DAOS API functionality via Ctypes
 and was developed with an emphasis on test use cases. While the majority of unit
 tests are written in C, higher-level tests are written primarily using the
 Python API. Interfaces are provided for accessing DAOS management and DAOS API
-functionality from Python. This higher level interface allows a faster
-turnaround time on implementing test cases for DAOS.
+functionality from Python. This higher-level interface allows a faster
+turnaround time for implementing test cases for DAOS.
 
-#### Layout
+### Layout
 
 The Python API is split into several files based on functionality:
 
@@ -31,6 +31,7 @@ The Python API is split into several files based on functionality:
   [daos_cref.py](https://github.com/daos-stack/daos/tree/release/2.2/src/client/pydaos/raw/daos_cref.py)
 
 High-level abstraction classes exist to manipulate DAOS storage:
+
 ```python
 class DaosPool(object)
 class DaosContainer(object)
@@ -56,6 +57,7 @@ from the specified transaction only), and object query.
 DAOS object.
 
 Several classes exist for management purposes as well:
+
 ```python
 class DaosContext(object)
 class DaosLog
@@ -83,7 +85,7 @@ Ctypes is a built-in Python module for interfacing Python with existing
 libraries written in C/C++. The Python API is built as an object-oriented
 wrapper around the DAOS libraries utilizing ctypes.
 
-Ctypes documentation can be found here 
+For additional information, see the [Ctypes documentation](https://docs.python.org/3/library/ctypes.html).
 
 The following demonstrates a simplified example of creating a Python wrapper
 for the C function `daos_pool_tgt_exclude_out`, with each input parameter to the
@@ -104,11 +106,12 @@ an input parameter, a corresponding Python class can be created. For struct `d_t
 
 ```c
 struct d_tgt_list {
-	d_rank_t	*tl_ranks;
-	int32_t		*tl_tgts;
-	uint32_t	tl_nr;
+    d_rank_t  *tl_ranks;
+    int32_t   *tl_tgts;
+    uint32_t   tl_nr;
 };
 ```
+
 ```python
 class DTgtList(ctypes.Structure):
     _fields_ = [("tl_ranks", ctypes.POINTER(ctypes.c_uint32)),
@@ -145,9 +148,9 @@ my_lib.daos_pool_tgt_exclude_out(c_uuid, c_grp, c_tgt_list, None)
 
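As a runnable sketch of this wrapping pattern, the `DTgtList` structure shown above can be built and inspected purely from Python, without loading `libdaos` (illustrative only):

```python
import ctypes

# ctypes mirror of struct d_tgt_list, as shown in the example above.
class DTgtList(ctypes.Structure):
    _fields_ = [("tl_ranks", ctypes.POINTER(ctypes.c_uint32)),
                ("tl_tgts", ctypes.POINTER(ctypes.c_int32)),
                ("tl_nr", ctypes.c_uint32)]

ranks = (ctypes.c_uint32 * 2)(1, 2)   # array of target ranks
tgts = (ctypes.c_int32 * 2)(0, -1)    # array of target indexes
tlist = DTgtList(ctypes.cast(ranks, ctypes.POINTER(ctypes.c_uint32)),
                 ctypes.cast(tgts, ctypes.POINTER(ctypes.c_int32)),
                 2)
print(tlist.tl_nr, tlist.tl_ranks[0], tlist.tl_tgts[1])   # 2 1 -1
```

The same populated structure would then be passed by reference to the C function, as in the `daos_pool_tgt_exclude_out` example.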
 #### Error Handling
 
-The API was designed using the EAFP (Easier to Ask
-Forgiveness than get Permission) idiom. A given function will
-raise a custom exception on error state, `DaosApiError`.
+The API was designed using the EAFP (**E**asier to **A**sk
+**F**orgiveness than get **P**ermission) idiom. A given function will
+raise a custom exception, `DaosApiError`, on error.
 A user of the API is expected to catch and handle this exception as needed:
 
 ```python
@@ -174,6 +177,7 @@ self.d_log.DEBUG("Debugging code")
 self.d_log.WARNING("Be aware, may be issues")
 self.d_log.ERROR("Something went very wrong")
 ```
+
 ## Go Bindings
 
 API bindings for Go[^2] are also available.
diff --git a/docs/user/mpi-io.md b/docs/user/mpi-io.md
index 4ca776122e9..934bd17d3d4 100644
--- a/docs/user/mpi-io.md
+++ b/docs/user/mpi-io.md
@@ -7,11 +7,10 @@ includes a chapter on MPI-IO.
 [ROMIO](https://www.mcs.anl.gov/projects/romio/) is a well-known
 implementation of MPI-IO and is included in many MPI implementations.
 DAOS provides its own MPI-IO ROMIO ADIO driver.
-This driver has been merged in the upstream MPICH repository, see the
+This driver has been merged into the upstream MPICH repository. See the
 [adio/ad\_daos](https://github.com/pmodels/mpich/tree/main/src/mpi/romio/adio/ad_daos)
 section in the MPICH git repository for details.
 
-
 ## Supported MPI Version
 
 ### MPICH
@@ -23,8 +22,8 @@ It is included in mpich-3.4.1 (released Jan 2021) and in
 !!! note
     Starting with DAOS 1.2, the `--svc` parameter (number of service replicas)
     is no longer needed, and the DAOS API has been changed accordingly.
-    Patches have been contributed to MPICH that detect the DAOS API version
-    to gracefully handle this change. MPICH 3.4.2 includes those changes,
+    Patches have been contributed to MPICH that detect the DAOS API version
+    to handle this change gracefully. MPICH 3.4.2 includes those changes,
     and works out of the box with DAOS 2.0.
     MPICH 3.4.1 does not include those changes. Please check the latest commits
     [here](https://github.com/pmodels/mpich/commits/main?author=mchaarawi)
@@ -50,8 +49,8 @@ make -j8; make install
 
 This assumes that DAOS is installed into the `/usr` tree, which is the case for
 the DAOS RPM installation. Other configure options can be added, modified, or
-removed as needed, like the network communicatio device, fortran support,
-etc. For those, please consule the mpich user guide.
+removed as needed, like the network communication devices, Fortran support,
+etc. For those, please consult the mpich user guide.
 
 Set the `PATH` and `LD_LIBRARY_PATH` to where you want to build your client
 apps or libs that use MPI to the path of the installed MPICH.
@@ -63,7 +62,8 @@ includes DAOS support since the
 [2019.8 release](https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-library-release-notes-linux.html).
 
 Note that Intel MPI uses `libfabric` and includes it as part of the Intel MPI installation:
-* 2019.8 and 2019.9 includes `libfabric-1.10.1-impi`
+
+* 2019.8 and 2019.9 include `libfabric-1.10.1-impi`
 * 2021.1, 2021.2 and 2021.3 includes `libfabric-1.12.1-impi`
 
 Care must be taken to ensure that the version of libfabric that is used
@@ -71,10 +71,10 @@ is at a level that includes the patches that are critical for DAOS.
 DAOS 1.0.1 includes `libfabric-1.9.0`, DAOS 1.2 includes `libfabric-1.12`,
 and DAOS 2.0 includes `libfabric-1.14`.
 
-To use DAOS with Intel MPI, the `libfabric` that is supplied by DAOS
+To use DAOS with Intel MPI, the `libfabric` that DAOS supplies
 (and that is installed into `/usr/lib64` by default) must be used.
 Intel MPI provides a mechanism to indicate that the Intel MPI version of
-`libfabric` should **not** be used, by setting this variable **before**
+`libfabric` should **not** be used, by setting this variable **before**
 loading the Intel MPI environment:
 
 ```bash
@@ -89,7 +89,7 @@ system library search path back as the first path in the library search path:
 export LD_LIBRARY_PATH="/usr/lib64/:$LD_LIBRARY_PATH"
 ```
 
-There are other environment variables that need to be set on the client side to
+Other environment variables need to be set on the client side to
 ensure proper functionality with the DAOS MPIIO driver, including:
 
 ```bash
@@ -109,7 +109,6 @@ it will likely pick up DAOS support in an upcoming release.
 Since its MPI-IO implementation is based on ROMIO,
 it will likely pick up DAOS support in an upcoming release.
 
-
 ## Testing MPI-IO with DAOS
 
 Build any client (HDF5, ior, mpi test suites) normally with the mpicc command
@@ -123,11 +122,12 @@ container uuids/labels.
 
 Create a container with a path on dfuse or lustre, or any file system that supports extended
 attributes:
+
 ```bash
 daos cont create mypool --label mycont --path=/mnt/dfuse/ --type POSIX
 ```
 
-Then using that path, one can start creating files using the DAOS MPIIO driver by just appending
+Then using that path, one can start creating files using the DAOS MPIIO driver by appending
 `daos:` to the filename/path. For example:
 `daos:/mnt/dfuse/file`
 `daos:/mnt/dfuse/dir1/file`
@@ -136,10 +136,13 @@ Then using that path, one can start creating files using the DAOS MPIIO driver b
 
 Another way to use the DAOS MPIIO driver is using an environment variable to set the prefix itself
 for the file:
+
 ```bash
 export DAOS_UNS_PREFIX="path"
 ```
+
 That prefix path can be:
+
 1. The UNS prefix if that exists (similar to the UNS mode above): /mnt/dfuse
 2. A direct path using the pool and container label (or uuid): daos://pool/container/
 
@@ -149,14 +152,14 @@ would pass `daos:/dir1/file' to MPI_File_open().
 
 ### Using Pool and Container Environment Variables
 
-This mode is meant just for quick testing to use the MPIIO DAOS driver bypassing the UNS and setting
-direct access with pool and container environment variables. At the client side, the following
+This mode is intended for quick testing only: it uses the MPIIO DAOS driver, bypassing the UNS and setting up
+direct access with pool and container environment variables. On the client side, the following
 environment variables need to be set:
 `export DAOS_POOL={uuid/label}; export DAOS_CONT={uuid/label}; export DAOS_BYPASS_DUNS=1`.
-The user still need to append the `daos:` prefix to the file being passed to MPI_File_open().
+The user still needs to append the `daos:` prefix to the file passed to MPI_File_open().
 
 ## Known limitations
 
 Limitations of the current implementation include:
 
--   No support for MPI file atomicity, preallocate, or shared file pointers.
+* No support for MPI file atomicity, preallocate, or shared file pointers.
diff --git a/docs/user/python.md b/docs/user/python.md
index 7673f4c6451..3b2b3d3bb39 100644
--- a/docs/user/python.md
+++ b/docs/user/python.md
@@ -5,18 +5,18 @@ provides the DAOS API to python users. It aims at providing a pythonic interface
 to the DAOS objects by exposing them via native python data structures.
 This section focuses on the main PyDAOS interface that comes with its own
 container type and layout. It does not cover the python bindings for the native
-DAOS API which is available via the [PyDAOS.raw](#Native_Programming_Interface)
+DAOS API, which is available via the [PyDAOS.raw](#Native_Programming_Interface)
 submodule.
 
 ## Design
 
 PyDAOS is a python module primarily written in C. It exposes DAOS key-value
-store objects as a python dictionary. Other data structures (e.g. Array
-compatible with numpy) are under consideration.
+store objects as a python dictionary. Other data structures (e.g., Array
+compatible with NumPy) are under consideration.
 Python objects allocated by PyDAOS are:
 
 - **persistent** and identified by a string name. The namespace is shared
-  by all the objects and implementing by a root key-value store storing the
+  by all the objects and implemented by a root key-value store storing the
   association between names and objects.
 
 - immediately **visible** upon creation to any process running on the same
@@ -24,15 +24,14 @@ Python objects allocated by PyDAOS are:
 
 - not consuming any significant amount of memory. Objects have a **very low
   memory footprint** since the actual content is stored remotely.  This allows
-  to manipulate gigantic datasets that are way bigger than the amount of
+  manipulation of massive datasets that are far larger than the amount of
   memory available on the node.
 
-
 ## Python Container
 
 To create a python container in a pool labeled tank:
 
-```
+```bash
 $ daos cont create tank --label neo --type PYTHON
   Container UUID : 3ee904b3-8868-46ed-96c7-ef608093732c
   Container Label: neo
@@ -44,7 +43,7 @@ Successfully created container 3ee904b3-8868-46ed-96c7-ef608093732c
 One can then connect to the container by passing the pool and container
 labels to the DCont constructor:
 
-```
+```python
 >>> import pydaos
 >>> dcont = pydaos.DCont("tank", "neo")
 >>> print(dcont)
@@ -57,9 +56,9 @@ tank/neo
 
 ## DAOS Dictionaries
 
-The first type of data structures exported by the PyDAOS module is DAOS
+The first data structure exported by the PyDAOS module is the DAOS
 Dictionary (DDict) that aims at mimicking the python dict interface. Leveraging
-mutablemapping and UserDict has been considered during design, but eventually
+MutableMapping and UserDict were considered during design, but eventually
 ruled out for performance reasons. The DDict class is built over DAOS key-value
 stores and supports all the methods of the regular python dictionary class.
 One limitation is that only strings and bytes can be stored.
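The string/bytes restriction can be pictured with a small stand-in class — an illustration of the documented semantics only, not the PyDAOS implementation:

```python
class DDictModel(dict):
    """Toy model of the DDict constraint: string keys, bytes values."""

    def __setitem__(self, key, value):
        if not isinstance(key, str):
            raise TypeError("keys must be strings")
        if isinstance(value, str):
            value = value.encode()  # values are stored and returned as bytes
        elif not isinstance(value, bytes):
            raise TypeError("values must be strings or bytes")
        super().__setitem__(key, value)

d = DDictModel()
d["Barcelona"] = "Camp Nou"
print(d["Barcelona"])  # b'Camp Nou'
```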
@@ -67,7 +66,7 @@ One limitation is that only strings and bytes can be stored.
 A new DDict object can be allocated by calling the dict() method on the parent
 python container.
 
-```
+```python
 >>> dd = dcont.dict("stadium", {"Barcelona" : "Camp Nou", "London" : "Wembley"})
 >>> print(dd)
 stadium
@@ -80,7 +79,7 @@ key-value pairs.
 
Once the dictionary is created, it is persistent and cannot be overridden:
 
-```
+```python
 >>> dd1 = dcont.dict("stadium")
 Traceback (most recent call last):
   File "", line 1, in 
@@ -95,15 +94,15 @@ pydaos.PyDError: failed to create DAOS dict: DER_EXIST
 
 To retrieve an existing dictionary, use the get() method:
 
-```
+```python
 >>> dd1 = dcont.get("stadium")
 ```
 
 New records can be inserted one at a time via put operation. Existing
-records can be fetched via the get() operation. Similarly to python dictionary,
+records can be fetched via the get() operation. Similar to the Python dictionary,
 direct assignment is also supported.
 
-```
+```python
 >>> dd["Milano"] = "San Siro"
 >>> dd["Rio"] = "Maracanã"
 >>> print(dd["Milano"])
@@ -116,7 +115,7 @@ Key-value pairs can also be inserted/looked up in bulk via the bput()/bget()
 methods, taking a python dict as an input. The bulk operations are issued in
 parallel (up to 16 operations in flight) to maximize the operation rate.
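The bounded-parallelism idea described above can be sketched generically with a thread pool capped at 16 in-flight operations (an illustration of the pattern only, not the PyDAOS internals):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_INFLIGHT = 16  # matches the documented cap of 16 operations in flight

def bulk_put(store: dict, records: dict) -> None:
    """Issue one put per record, at most MAX_INFLIGHT running concurrently."""
    with ThreadPoolExecutor(max_workers=MAX_INFLIGHT) as pool:
        # Consume the iterator so all puts complete before the pool shuts down.
        for _ in pool.map(lambda kv: store.__setitem__(*kv), records.items()):
            pass

store = {}
bulk_put(store, {"Madrid": b"Santiago-Bernabeu", "Manchester": b"Old Trafford"})
print(len(store))  # 2
```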
 
-```
+```python
 >>> dd.bput({"Madrid" : "Santiago-Bernabéu", "Manchester" : "Old Trafford"})
 >>> print(len(dd))
 6
@@ -129,7 +128,7 @@ to either None or the empty string. Once deleted, the key won't be reported
 during iteration. It also supports the del operation via the del() and pop()
 methods.
 
-```
+```python
 >>> del dd["Madrid"]
 >>> dd.pop("Rio")
 >>> print(len(dd))
@@ -138,7 +137,7 @@ methods.
 
The key space can be traversed via Python iterators.
 
-```
+```python
 >>> for key in dd: print(key, dd[key])
 ...
 Manchester b'Old Trafford'
@@ -150,25 +149,26 @@ London b'Wembley'
 The content of a DAOS dictionary can be exported to a regular python dictionary
 via the dump() method.
 
-```
+```python
 >>> d = dd.dump()
 >>> print(d)
 {'Manchester': b'Old Trafford', 'Barcelona': b'Camp Nou', 'Milano': b'San Siro', 'London': b'Wembley'}
 ```
 
 !!! warning
-    Care is required when using the dump() method for large DAOS dictionary.
+    Care is required when using the dump() method for a large DAOS dictionary.
 
-The resulting python dictionary will be reported as equivalent to the original
+The resulting python dictionary will be reported as equivalent to the original
 DAOS dictionary.
 
-```
+```python
 >>> d == dd
 True
 ```
+
And it will be reported as different once the two objects diverge.
 
-```
+```python
 >>> dd["Rio"] = "Maracanã"
 >>> d == dd
 False
@@ -176,7 +176,7 @@ False
 
 One can also directly test whether a key is in the dictionary.
 
-```
+```python
 >>> "Rio" in dd
 True
 >>> "Paris" in dd
@@ -185,6 +185,6 @@ False
 
 ## Arrays
 
-Class representing of DAOS array leveraging the numpy's dispatch mechanism.
+A class representing a DAOS array, leveraging NumPy's dispatch mechanism.
 See [https://numpy.org/doc/stable/user/basics.dispatch.html](https://numpy.org/doc/stable/user/basics.dispatch.html) for more info.
Work in progress.
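As a pointer to what the dispatch mechanism looks like, here is a minimal array-like wrapper that NumPy functions can consume via the `__array__` hook — illustrative only, and not the PyDAOS implementation, which is still in development:

```python
import numpy as np

class ArrayLike:
    """Minimal array-like object: NumPy converts it via __array__."""

    def __init__(self, data):
        self._data = list(data)

    def __array__(self, dtype=None):
        # NumPy calls this when the object is passed to a NumPy function.
        return np.asarray(self._data, dtype=dtype)

a = ArrayLike([1, 2, 3])
print(np.sum(a))  # 6
```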
diff --git a/docs/user/spark.md b/docs/user/spark.md
index b0a4fe23398..b6c03acd918 100644
--- a/docs/user/spark.md
+++ b/docs/user/spark.md
@@ -20,18 +20,18 @@ environment. Otherwise, they can be deployed by following the
 There are two artifacts to download, daos-java and hadoop-daos, from maven.
 Here are maven dependencies.
 
-You can download them with below commands if you have maven installed.
+You can download them with the commands below if you have maven installed.
+
 ```bash
 mvn dependency:get -Dartifact=io.daos:daos-java: -Ddest=./
 mvn dependency:get -Dartifact=io.daos:hadoop-daos: -Ddest=./
 ```
 
-Or search these artifacts from maven central(https://search.maven.org) and
+Or search these artifacts from [Maven Central](https://search.maven.org) and
 download them manually.
 
 You can also build artifacts by yourself.
-see [Build DAOS Hadoop Filesystem](#builddaos) for details.
-
+See [Build DAOS Hadoop Filesystem](#builddaos) for details.
 
 ## Deployment
 
@@ -41,7 +41,7 @@ see [Build DAOS Hadoop Filesystem](#builddaos) for details.
 on every compute node that runs Spark or Hadoop.
 Place them in a directory, e.g., `$SPARK_HOME/jars` for Spark and
 `$HADOOP_HOME/share/hadoop/common/lib` for Hadoop, which is accessible to all
-the nodes or copy them to every node.
+the nodes or copy them to every node.

### `daos-site-example.xml`

@@ -53,30 +53,30 @@ Spark and `$HADOOP_HOME/etc/hadoop` for Hadoop.

### Environment Variable

-Export all DAOS-specific env variables in your application, e.g.,
+Export all DAOS-specific env variables in your application, e.g.,
`spark-env.sh` for Spark and `hadoop-env.sh` for Hadoop. Or you can simply put
env variables in your `.bashrc`.

-Besides, you should have LD\_LIBRARY\_PATH include DAOS library path so that Java
+Besides, you should have LD\_LIBRARY\_PATH include the DAOS library path so that Java
can link to DAOS libs, like below.

```bash
-$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib64:/lib
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib64:/lib
```

### DAOS URI

-In `daos-site.xml`, we default the DAOS URI as simplest form, "daos:///". For
-other form of URIs, please check [DAOS More URIs](#uris).
+In `daos-site.xml`, we default the DAOS URI to the simplest form, "daos:///". For
+other forms of URIs, please check [DAOS More URIs](#uris).

If the DAOS pool and container have not been created, we can use the following
command to create them and get the pool UUID and container UUID.

```bash
-$ export DAOS_POOL="mypool"
-$ export DAOS_CONT="mycont"
-$ dmg pool create --scm-size= --nvme-size= $DAOS_POOL
-$ daos cont create --label $DAOS_CONT --type POSIX $DAOS_POOL
+export DAOS_POOL="mypool"
+export DAOS_CONT="mycont"
+dmg pool create --scm-size= --nvme-size= $DAOS_POOL
+daos cont create --label $DAOS_CONT --type POSIX $DAOS_POOL
```

After that, configure `daos-site.xml` with the pool and container created.

@@ -98,7 +98,7 @@ After that, configure `daos-site.xml` with the pool and container created.

```

-Please put `daos-site.xml` in right place, e.g., Java classpath, and
+Please put `daos-site.xml` in the right place, e.g., Java classpath, and
loadable by Hadoop DAOS FileSystem.
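After creating the pool and container, the `daos-site.xml` wiring looks roughly like the fragment below. This is a sketch only — the property names here are assumptions for illustration; the authoritative keys and defaults are listed in `daos-site-example.xml` shipped with hadoop-daos.

```xml
<configuration>
  <!-- Illustrative fragment: property names are placeholders; consult
       daos-site-example.xml for the real keys. -->
  <property>
    <name>fs.daos.pool.id</name>
    <value>mypool</value> <!-- pool label or UUID created with dmg -->
  </property>
  <property>
    <name>fs.daos.container.id</name>
    <value>mycont</value> <!-- container label or UUID created with daos -->
  </property>
</configuration>
```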
### Validating Hadoop Access

@@ -107,7 +107,7 @@ If everything goes well, you should see `/user` directory being listed after
issuing the below command.

```bash
-$ hadoop fs -ls /
+hadoop fs -ls /
```

You can also play around with other Hadoop commands, like -copyFromLocal and

@@ -120,7 +120,7 @@ To access DAOS Hadoop filesystem in Spark, add the jar files to the
classpath of the Spark executor and driver. This can be configured in Spark's
configuration file `spark-defaults.conf`.

-```
+```bash
spark.executor.extraClassPath /path/to/daos-java-.jar:/path/to/hadoop-daos-.jar
spark.driver.extraClassPath /path/to/daos-java-.jar:/path/to/hadoop-daos-.jar
```

@@ -129,7 +129,7 @@ spark.driver.extraClassPath /path/to/daos-java-.jar:/path/to/hadoop

All Spark APIs that work with the Hadoop filesystem will work with DAOS. We can
use the `daos:///` URI to access files stored in DAOS. For example, to read the
-`people.json` file from the root directory of DAOS filesystem, we can use the
+`people.json` file from the root directory of the DAOS filesystem, we can use the
following pySpark code:

```python

@@ -171,22 +171,23 @@ initialized and configured.

The simple form of URI is "daos:///\[/sub path\]". "\" is your OS file
path created with the `daos` command or Java DAOS UNS method,
`DaosUns.create()`. The "\[sub path\]" is optional. You can
-create the UNS path with below command.
+create the UNS path with the below command.

```bash
-$ daos cont create --label $DAOS_CONT --path --type POSIX $DAOS_POOL
+daos cont create --label $DAOS_CONT --path --type POSIX $DAOS_POOL
```
+
Or

```bash
-$ java -Dpath="your_path" -Dpool_id=$DAOS_POOL -cp ./daos-java--shaded.jar io.daos.dfs.DaosUns create
+java -Dpath="your_path" -Dpool_id=$DAOS_POOL -cp ./daos-java--shaded.jar io.daos.dfs.DaosUns create
```

-After creation, you can use below command to see what DAOS properties set to
+After creation, you can use the below command to see what DAOS properties are set on
the path.
```bash
-$ getfattr -d -m -
+getfattr -d -m -
```

#### DAOS Non-UNS Path

@@ -194,15 +195,16 @@ $ getfattr -d -m -

Check [Set DAOS URI and Pool/Container](#non-uns).

#### Special UUID Path
+
DAOS supports a specialized URI with pool/container UUIDs embedded. The format
is "daos://pool UUID/container UUID". As you can see, we don't need to find the
-UUIDs from neither UNS path nor configuration like above two types of URIs.
+UUIDs from either the UNS path or configuration, as with the above two types of URIs.

You may want to connect to two DAOS servers or two DFS instances mounted to
-different containers in one DAOS server from same JVM. Then, you need to add
+different containers in one DAOS server from the same JVM. Then, you need to add
authority to your URI to make it unique since Hadoop caches filesystem instance
-keyed by "schema + authority" in global (JVM). It applies to the both types of
-URIs described above.
+keyed by "schema + authority" in global (JVM). It applies to both types of
+URIs described above.

### Run Map-Reduce in Hadoop

@@ -213,9 +215,9 @@ because Hadoop considers the default filesystem is DAOS since you configured
DAOS UNS URI. YARN has some working directories defaulting to local path
without schema, like "/tmp/yarn", which is then constructed as
"daos:///tmp/yarn". With this URI, Hadoop cannot connect to DAOS since no
-pool/container UUIDs can be found if daos-site.xml is not provided too.
+pool/container UUIDs can be found if daos-site.xml is not provided.

-Then append below configuration to this file and
+Then append the below configuration to this file and
`$HADOOP_HOME/etc/hadoop/yarn-site.xml`.

```xml

@@ -242,21 +244,21 @@ Then replicate `daos-site.xml`, `core-site.xml`, `yarn-site.xml` and

#### Known Issues

-If you use Omni-Path PSM2 provider in DAOS, you'll get connection issue in
-Yarn container due to PSM2 resource not being released properly in time.
-The PSM2 provides has known issues with DAOS and is not supported in
+If you use the Omni-Path PSM2 provider in DAOS, you'll get a connection issue in
+the Yarn container due to the PSM2 resource not being released properly in time.
+The PSM2 provider has known issues with DAOS and is not supported in
production environments.

### Tune More Configurations

If your DAOS URI is the non-UNS, you can follow descriptions of each
-config item to set your own values in loadable `daos-site.xml`.
+config item to set your own values in a loadable `daos-site.xml`.

If your DAOS URI is the UNS path, your configurations, except those set by DAOS
-UNS creation, in `daos-site.xml` can still be effective. To make configuration
+UNS creation, in `daos-site.xml` can still be effective. To make the configuration
source consistent, an alternative to the configuration file `daos-site.xml` is
to set all configurations to the UNS path. You put the configs to the same UNS
-path with below command.
+path with the below command.

```bash
# install attr package if get "command not found" error

@@ -270,20 +272,20 @@ $ java -Dpath="your path" -Dattr=user.daos.hadoop -Dvalue="fs.daos.server.group=
-cp ./daos-java--shaded.jar io.daos.dfs.DaosUns setappinfo
```

-For the "value" property, you need to follow pattern, key1=value1:key2=value2..
+For the "value" property, you need to follow the pattern key1=value1:key2=value2..
.. And key\* should be from
[daos-site-example.xml](https://github.com/daos-stack/daos/blob/release/release/2.2/src/client/java/hadoop-daos/src/main/resources/daos-site-example.xml).
If value\* contains characters of '=' or ':', you need to escape the value with
below command.

```bash
-$ java -Dop=escape-app-value -Dinput="daos_server:1=2" -cp ./daos-java--shaded.jar io.daos.dfs.DaosUns util
+java -Dop=escape-app-value -Dinput="daos_server:1=2" -cp ./daos-java--shaded.jar io.daos.dfs.DaosUns util
```

You'll get escaped value, "daos_server\u003a1\u003d2", for "daos_server:1=2".
-If you configure the same property in both `daos-site.xml` and UNS path, the
-value in `daos-site.xml` takes priority. If user sets Hadoop configuration
+If you configure the same property in both `daos-site.xml` and the UNS path, the
+value in `daos-site.xml` takes priority. If the user sets Hadoop configuration
before initializing Hadoop DAOS FileSystem, the user's configuration takes
priority.

@@ -292,12 +294,12 @@ priority.

For some libfabric providers, like PSM2, signal chaining should be enabled to
better interoperate with DAOS and its dependencies which may install its own
signal handlers. It ensures that signal calls are intercepted so that they do
-not actually replace the JVM's signal handlers if the handlers conflict with
+not replace the JVM's signal handlers if the handlers conflict with
those already installed by the JVM. Instead, these calls save the new signal
-handlers, or "chain" them behind the JVM-installed handlers. Later, when any of
+handlers, or "chain" them behind the JVM-installed handlers. Later, when any of
these signals are raised and found not to be targeted at the JVM, the DAOS's
handlers are invoked.

```bash
-$ export LD_PRELOAD=/jre/lib/amd64/libjsig.so
+export LD_PRELOAD=/jre/lib/amd64/libjsig.so
```

diff --git a/docs/user/tensorflow.md b/docs/user/tensorflow.md
index 553ca6efdf0..3b12e61e1cb 100644
--- a/docs/user/tensorflow.md
+++ b/docs/user/tensorflow.md
@@ -1,119 +1,136 @@ # DAOS Tensorflow-IO

-Tensorflow-IO is an open-source Python sub-library of the Tensorflow framework
-which offers a wide range of file systems and formats (e.g HDFS, HTTP) otherwise unavailable
+Tensorflow-IO is an open-source Python sub-library of the Tensorflow framework
+which offers a wide range of file systems and formats (e.g., HDFS, HTTP) otherwise unavailable
in Tensorflow's built-in support.
-For a more complete look on the functionalities of the Tensorflow-IO library,
+For a more complete look at the functionalities of the Tensorflow-IO library,
visit the official
[Tensorflow Documentation](https://www.tensorflow.org/api_docs/python/tf/io/gfile).

-[The DFS Plugin](https://github.com/daos-stack/tensorflow-io-daos/tree/devel/tensorflow_io/core/filesystems/dfs) supports
+[The DFS Plugin](https://github.com/daos-stack/tensorflow-io-daos/tree/devel/tensorflow_io/core/filesystems/dfs) supports
many of Tensorflow-IO API functionalities and adds support to the DAOS FileSystem.
-This constitutes several operations including reading and writing datasets
+This constitutes several operations, including reading and writing datasets
on the DAOS filesystem for AI workloads that use Tensorflow as a framework.

## Supported API

-Tensorflow-IO offers users several operations for loading data, and manipulating
+Tensorflow-IO offers users several operations for loading data and manipulating
file-systems. These operations include :

-* FileSystem Operations e.g creation and deletion of files, querying files,
-  directories etc.
-* File-specific operations for:
-  * RandomAccessFiles
-  * WritableFiles
-  * ReadOnlyMemoryRegion (which is left unimplemented in the case of the DFS plugin)
-The DFS Plugin translates the key operations offered by Tensorflow IO to their DAOS Filesystem equivalent, while utilizing
-DAOS underlying functionalities and features to ensure a high I/O bandwidth for its users.
+- FileSystem Operations, e.g., creation and deletion of files, querying files, directories, etc.
+- File-specific operations for:
+  - RandomAccessFiles
+  - WritableFiles
+  - ReadOnlyMemoryRegion (which is left unimplemented in the case of the DFS plugin)
+
+The DFS Plugin translates the key operations offered by Tensorflow IO to their DAOS Filesystem equivalent while utilizing
+DAOS's underlying functionalities and features to ensure high I/O bandwidth for its users.
## Setup

-In order to utilize the DFS Plugin for the meantime, the Tensorflow-IO library will need to be
+
+To utilize the DFS Plugin, the Tensorflow-IO library will need to be
built from [source](https://github.com/daos-stack/tensorflow-io-daos/tree/devel).

### Prerequisites
+
Assuming you are in a terminal in the repository root directory:

-* Install latest versions of the following dependencies by running
-  * Centos 8
-    ```
-    $ yum install -y python3 python3-devel gcc gcc-c++ git unzip which make
+- Install the latest versions of the following dependencies by running
+  - Centos 8
+
+    ```bash
+    yum install -y python3 python3-devel gcc gcc-c++ git unzip which make
    ```
-  * Ubuntu 20.04
-    ```
-    $ sudo apt-get -y -qq update
-    $ sudo apt-get -y -qq install gcc g++ git unzip curl python3-pip
+
+  - Ubuntu 20.04
+
+    ```bash
+    sudo apt-get -y -qq update
+    sudo apt-get -y -qq install gcc g++ git unzip curl python3-pip
    ```
-* Download the Bazel installer
-  ```
-  $ curl -sSOL https://github.com/bazelbuild/bazel/releases/download/\$(cat .bazelversion)/bazel-\$(cat .bazelversion)-installer-linux-x86_64.sh
-  ```
-* Install Bazel
-  ```
-  $ bash -x -e bazel-$(cat .bazelversion)-installer-linux-x86_64.sh
+
+- Download the Bazel installer
+
+  ```bash
+  curl -sSOL https://github.com/bazelbuild/bazel/releases/download/\$(cat .bazelversion)/bazel-\$(cat .bazelversion)-installer-linux-x86_64.sh
  ```
-* Update Pip and install pytest
+
+- Install Bazel
+
+  ```bash
+  bash -x -e bazel-$(cat .bazelversion)-installer-linux-x86_64.sh
  ```
-  $ python3 -m pip install -U pip
-  $ python3 -m pip install pytest
+
+- Update Pip and install pytest
+
+  ```bash
+  python3 -m pip install -U pip
+  python3 -m pip install pytest
  ```

### Building

Assuming you are in a terminal in the repository root directory:

-* Configure and install tensorflow (the current version should be tensorflow2.6.2)
-  ```
+- Configure and install tensorflow (the current version should be tensorflow2.6.2)
+
+  ```bash
  $ ./configure.sh
## Set python3 as default.
  $ ln -s /usr/bin/python3 /usr/bin/python
  ```

-* At this point, all libraries and dependencies should be installed.
-  * Make sure the environment variable **LIBRARY_PATH** includes the paths to all daos libraries
-  * Make sure the environment variable **LD_LIBRARY_PATH** includes the paths to:
-    * All daos libraries
-    * The tensorflow framework (libtensorflow and libtensorflow_framework)
-    * If not, find the required libraries and add their paths to the environment variable
-    ```
+- At this point, all libraries and dependencies should be installed.
+  - Make sure the environment variable **LIBRARY_PATH** includes the paths to all daos libraries
+  - Make sure the environment variable **LD_LIBRARY_PATH** includes the paths to:
+    - All daos libraries
+    - The tensorflow framework (libtensorflow and libtensorflow_framework)
+    - If not, find the required libraries and add their paths to the environment variable
+
+    ```bash
    export LD_LIBRARY_PATH=":$LD_LIBRARY_PATH"
    ```
-  * Make sure the environment variable **CPLUS_INCLUDE_PATH** and **C_INCLUDE_PATH** includes the paths to:
-    * The tensorflow headers (usually in /usr/local/lib64/python3.6/site-packages/tensorflow/include)
-    * If not, find the required headers and add their paths to the environment variable
-    ```
+
+  - Make sure the environment variables **CPLUS_INCLUDE_PATH** and **C_INCLUDE_PATH** include the paths to:
+    - The tensorflow headers (usually in /usr/local/lib64/python3.6/site-packages/tensorflow/include)
+    - If not, find the required headers and add their paths to the environment variable
+
+    ```bash
    export CPLUS_INCLUDE_PATH=":$CPLUS_INCLUDE_PATH"
    export C_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$C_INCLUDE_PATH
    ```
-  * Build the project using bazel
-    ```
+  - Build the project using Bazel
+
+    ```bash
    bazel build --action_env=LIBRARY_PATH=$LIBRARY_PATH -s --verbose_failures //tensorflow_io/... //tensorflow_io_gcs_filesystem/...
    ```
-  This should take a few minutes.
-
-  Note that sandboxing may result in build failures when using
-  Docker Containers for DAOS due to mounting issues, if that’s the case,
-  add **--spawn_strategy=standalone** to the above build command to
-  bypass sandboxing. (When disabling sandbox, an error may be thrown for
-  an undefined type z_crc_t due to a conflict in header files.
-  In that case, find the crypt.h file in the bazel cache in subdirectory
-  /external/zlib/contrib/minizip/crypt.h and add the following line to the
-  file **typedef unsigned long z_crc_t;** then re-build).
+  This should take a few minutes.
+  Note that sandboxing may result in build failures when using
+  Docker Containers for DAOS due to mounting issues. If that’s the case,
+  add **--spawn_strategy=standalone** to the above build command to
+  bypass sandboxing. (When disabling the sandbox, an error may be thrown for
+  an undefined type z_crc_t due to a conflict in header files.
+  In that case, find the crypt.h file in the bazel cache in subdirectory
+  /external/zlib/contrib/minizip/crypt.h and add the following line to the
+  file, **typedef unsigned long z_crc_t;**, then re-build.)

### Testing
+
Assuming you are in a terminal in the repository root directory:

-* Run the following command for the simple serial test to validate building. Note that any tests need to be run with the TFIO_DATAPATH flag to specify the location of the binaries.
-  ```
-  $ TFIO_DATAPATH=bazel-bin python3 -m pytest -s -v tests/test_serialization.py
+- Run the following command for the simple serial test to validate the build. Note that any tests need to be run with the TFIO_DATAPATH flag to specify the location of the binaries.
+
+  ```bash
+  TFIO_DATAPATH=bazel-bin python3 -m pytest -s -v tests/test_serialization.py
  ```
-* Run the following commands to run the dfs plugin test:
-  ```
+
+- Run the following commands to run the dfs plugin test:
+
+  ```bash
  # To create the required pool and container and export required env variables for the dfs tests.
  $ source tests/test_dfs/dfs_init.sh
  # To run dfs tests

@@ -124,7 +141,7 @@ Assuming you are in a terminal in the repository root directory:

## User Guide

-To use the Tensorflow-IO Library, you'll need to import the required packages
+To use the Tensorflow-IO Library, you'll need to import the required packages
as follows:

```python

@@ -139,7 +156,7 @@ dfs:////

OR

-dfs://Path where Path includes the path to the DAOS container
+dfs://Path, where Path includes the path to the DAOS container

```python
filename = "dfs://POOL_LABEL/CONT_LABEL/FILE_NAME.ext"

diff --git a/docs/user/workflow.md b/docs/user/workflow.md
index 184ccde4503..8f83982dabb 100644
--- a/docs/user/workflow.md
+++ b/docs/user/workflow.md
@@ -2,23 +2,20 @@

## Use Cases

-A DAOS pool is a persistent storage reservation that is allocated to a
+A DAOS pool is a persistent storage reservation allocated to a
project or specific job. Pools are allocated, shrunk, grown and destroyed by
the administrators. The typical workflow consists of:

-* New project members meet and define storage requirements including space,
-bandwidth, IOPS & data protection needs.
+- New project members meet and define storage requirements, including space, bandwidth, IOPS & data protection needs.

-* Administrators collect those requirements, create a pool for the new
-project and set relevant ACL to grant access to project members.
+- Administrators collect those requirements, create a pool for the new project, and set relevant ACLs to grant access to project members.

-* Administrators notify the project members that the pool has been created and
-provide the pool label to the users.
+- Administrators notify the project members that the pool has been created and provide the pool label to the users.

-Users can then create containers (i.e. datasets or buckets) in their pool.
+Users can then create containers (i.e., datasets or buckets) in their pool.
+Containers will share the pool space and have their own ACL, managed by
the container owner.

Since pool creation is relatively fast, it is also possible to integrate it

@@ -35,7 +32,7 @@ command-line interface for users to interact with their pool and
containers. It supports a `-j` option to generate a parseable json output.

The `daos` utility follows the same syntax as `dmg` (reserved for administrator)
-and takes a resource (e.g. pool, container, filesystem) and a command (e.g.
+and takes a resource (e.g., pool, container, filesystem) and a command (e.g.,
query, create, destroy) plus a set of command-specific options.

```bash

@@ -68,7 +65,7 @@ Available commands:

### Access Validation

-To validate the pool can be successfully accessed prior to running
+To validate that the pool can be successfully accessed before running
applications, the daos pool autotest suite can be executed. To run it against
a pool labeled `tank`, run the following command:

@@ -98,7 +95,7 @@ All steps passed.

!!! note
    The command is executed in a development environment,
-    performance differences will vary, based on your system.
+    performance differences will vary based on your system.

!!! warning
    Smaller pools may show DER_NOSPACE(-1007): 'No space

@@ -125,7 +122,7 @@ Rebuild idle, 0 objs, 0 recs
```

In addition to the space information, details on the pool rebuild status and
-number of targets is also provided.
+number of targets are also provided.

This information can also be retrieved programmatically via the
`daos_pool_query()` function of the libdaos library and python equivalent.

@@ -158,6 +155,6 @@ Attributes for pool 004abf7c-26c8-4cba-9059-8b3be39161fc:
No attributes found.
```

-Pool attributes can be manipulaged programmatically via the
+Pool attributes can be manipulated programmatically via the
`daos_pool_[get|get|list|del]_attr()` functions exported by the libdaos
library and python equivalent (see PyDAOS).
From 210d757e3d2563c2a8bcf5bbafb40d5cf6f9bc79 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Tue, 3 May 2022 10:03:47 -0700
Subject: [PATCH 06/14] Misc format adjustments

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/admin/deployment.md            |  15 +-
 docs/admin/performance_tuning.md    |   6 +-
 docs/admin/predeployment_check.md   |  17 +-
 docs/admin/troubleshooting.md       | 249 ++++++++-------
 docs/overview/architecture.md       |  52 ++--
 docs/overview/data_integrity.md     |   5 +-
 docs/overview/security.md           |  17 +-
 docs/overview/storage.md            |  12 +-
 docs/overview/use_cases.md          |  23 +-
 docs/release/support_matrix_v2_2.md |  83 ++---
 docs/testing/cartselftest.md        | 452 ++++++++++++++--------------
 docs/testing/datamover.md           |   3 +-
 docs/user/container.md              |   8 -
 docs/user/filesystem.md             |  72 ++---
 docs/user/interface.md              |   4 +-
 docs/user/mpi-io.md                 |   6 +-
 docs/user/python.md                 |   2 -
 docs/user/tensorflow.md             |   6 +-
 docs/user/workflow.md               |   2 -
 19 files changed, 506 insertions(+), 528 deletions(-)

diff --git a/docs/admin/deployment.md b/docs/admin/deployment.md
index 1dbf7b15487..d7f0f31d5d4 100644
--- a/docs/admin/deployment.md
+++ b/docs/admin/deployment.md
@@ -19,14 +19,10 @@ To sum up, the typical workflow of a DAOS system deployment consists of the
following steps:

- Configure and start the [DAOS server](#daos-server-setup).
-
- [Provision Hardware](#hardware-provisioning) on all the storage nodes via the
  dmg utility.
-
- [Format](#storage-formatting) the DAOS system
-
- [Set up and start the agent](#agent-setup) on the client nodes
-
- [Validate](#system-validation) that the DAOS system is operational

Note that starting the DAOS server instances can be performed automatically

@@ -139,7 +135,6 @@ the hosts if '--num-engines' is not specified on the command line.
  bound to the same NUMA node.
  If not set explicitly on the command line, the default is the number of NUMA
  nodes detected on the host.
-
- '--min-ssds' specifies the minimum number of NVMe SSDs per engine that
  needs to be present on each host.
  For each engine entry in the generated config, at least this number of SSDs

@@ -148,7 +143,6 @@ the hosts if '--num-engines' is not specified on the command line.
  If not set on the command line, the default is "1".
  If set to "0", the NVMe SSDs will not be added to the generated config and
  SSD validation will be disabled.
-
- '--net-class' specifies a preference for network interface class, options
  are 'ethernet', 'infiniband' or 'best-available'.
  'best-available' will attempt to choose the most performant (as judged by

@@ -308,7 +302,7 @@ modify the ExecStart line to point to your `daos_server` binary.

After modifying ExecStart, run the following command:

```bash
-udo systemctl daemon-reload
+sudo systemctl daemon-reload
```

Once the service file is installed you can start `daos_server`

@@ -405,9 +399,7 @@ Example usage:

- `clush -w wolf-[118-121,130-133] daos_server storage prepare --scm-only`
  after running, the user should be prompted for a reboot.
-
- `clush -w wolf-[118-121,130-133] reboot`
-
- `clush -w wolf-[118-121,130-133] daos_server storage prepare --scm-only`
  after running, PMem devices (/dev/pmemX namespaces created on the new SCM
  regions) should be available on each of the hosts.

@@ -791,11 +783,12 @@ information, please refer to the [DAOS build documentation][6].

### Network Configuration

-#### Network Scan
-
The `dmg` utility supports the `network scan` function to display the network
interfaces, related OFI fabric providers and associated NUMA node for each
device.
+
+#### Network Scan
+
This information is used to configure the global fabric provider and the unique
local network interface for each I/O engine on the storage nodes.
This section will help you determine what to provide for the `provider`, diff --git a/docs/admin/performance_tuning.md b/docs/admin/performance_tuning.md index c3dac27c456..0224d14e76e 100644 --- a/docs/admin/performance_tuning.md +++ b/docs/admin/performance_tuning.md @@ -62,7 +62,6 @@ In the `self_test` commands: - Selftest client to servers: Replace the argument for `--endpoint` accordingly. - - Cross-servers: Replace the argument for `--endpoint` and `--master-endpoint` accordingly. For example, if you have eight servers, you would specify `--endpoint 0-7:0` and @@ -141,19 +140,16 @@ IOR () with the following backends: interception library (`libioil`). Performance is significantly better when using `libioil`. For detailed information on dfuse usage with the IO interception library, please refer to the [POSIX DFUSE section][7]. - - A custom DFS (DAOS File System) plugin for DAOS can be used by building IOR with DAOS support and selecting API=DFS. This integrates IOR directly with the DAOS File System (`libdfs`), without requiring FUSE or an interception library. Please refer to the [DAOS README][10] in the hpc/ior repository for some basic instructions on how to use the DFS driver. - - When using the IOR API=MPIIO, the ROMIO ADIO driver for DAOS can be used by providing the `daos://` prefix to the filename. This ADIO driver bypasses `dfuse` and directly invkes the `libdfs` calls to perform I/O to a DAOS POSIX container. The DAOS-enabled MPIIO driver is available in the upstream MPICH repository and included with Intel MPI. Please refer to the [MPI-IO documentation][8]. - - An HDF5 VOL connector for DAOS is under development. 
This maps the HDF5 data model directly to the DAOS data model and works in conjunction with DAOS containers of `--type=HDF5` (in contrast to the DAOS container of `--type=POSIX` used for @@ -459,7 +455,7 @@ In either situation, the admin may execute the command `daos_agent net-scan` with appropriate debug flags to gain more insight into the configuration problem. -**Disabling the GetAttachInfo cache:** +### Disabling the GetAttachInfo cache The default configuration enables the Agent GetAttachInfo cache. If it is desired, the cache may be disabled prior to DAOS Agent startup by setting the diff --git a/docs/admin/predeployment_check.md b/docs/admin/predeployment_check.md index e295555a80f..8e0ebe79539 100644 --- a/docs/admin/predeployment_check.md +++ b/docs/admin/predeployment_check.md @@ -38,13 +38,12 @@ $ sudo reboot https://github.com/spdk/spdk/issues/1153) The problem manifests with the following signature in the kernel logs: - ```bash - [82734.333834] genirq: Threaded irq requested with handler=NULL and !ONESHOT for irq 113 - [82734.341761] uio_pci_generic: probe of 0000:18:00.0 failed with error -22 - ``` - - As a consequence, the use of VFIO on these distributions is a requirement - since UIO is not supported. + ```bash + [82734.333834] genirq: Threaded irq requested with handler=NULL and !ONESHOT for irq 113 + [82734.341761] uio_pci_generic: probe of 0000:18:00.0 failed with error -22 + As a consequence, the use of VFIO on these distributions is a requirement + since UIO is not supported. + ``` ## Time Synchronization @@ -61,7 +60,7 @@ As part of the `daos-server` RPM Install, several users and groups are required - `daos-server` will also be created as its primary group - `daos_metrics` secondary group - `daos_daemons` secondary group -- `daos_server` - non-privileged user +- `daos_server` non-privileged user Note: The `daos_server` and `daos_engine` processes run under the `daos_server` userid. 
@@ -75,7 +74,7 @@ As part of the `daos-client RPM Install, several users and groups are required a - `daos-agent` will also be created as its primary group - `daos_daemons` secondary group -- `daos_agent` - non-privileged user, member of daos_daemons +- `daos_agent` non-privileged user, member of daos_daemons Note: The `daos_agent` process runs under a non-privileged userid `daos_agent`. diff --git a/docs/admin/troubleshooting.md b/docs/admin/troubleshooting.md index 9435f0ea880..ddc6b777a38 100644 --- a/docs/admin/troubleshooting.md +++ b/docs/admin/troubleshooting.md @@ -179,39 +179,51 @@ this is intended to convey either that the variable is set (for the client envir configured for the engines in the `daos_server.yml` file (`log_mask` per engine, and env_vars values per engine for the `DD_SUBSYS` and `DD_MASK` variable assignments). -- Generic setup for all messages (default settings) +* Generic setup for all messages (default settings) - D_LOG_MASK=DEBUG - DD_SUBSYS=all - DD_MASK=all + ```sh + D_LOG_MASK=DEBUG + DD_SUBSYS=all + DD_MASK=all + ``` -- Disable all logs for performance tuning +* Disable all logs for performance tuning - D_LOG_MASK=ERR -> will only log error messages from all facilities - D_LOG_MASK=FATAL -> will only log fatal system messages + ```sh + D_LOG_MASK=ERR -> will only log error messages from all facilities + D_LOG_MASK=FATAL -> will only log fatal system messages + ``` -- Gather daos metadata logs if a pool/container resource problem is observed, using the provided group mask +* Gather daos metadata logs if a pool/container resource problem is observed, using the provided group mask - D_LOG_MASK=DEBUG -> log at DEBUG level from all facilities - DD_MASK=group_metadata -> limit logging to include deault and metadata-specific streams. Or, specify DD_MASK=group_metadata_only for just metadata-specific log entries. 
+ ```sh + D_LOG_MASK=DEBUG -> log at DEBUG level from all facilities + DD_MASK=group_metadata -> limit logging to include deault and metadata-specific streams. Or, specify DD_MASK=group_metadata_only for just metadata-specific log entries. + ``` -- Disable a noisy debug logging subsystem +* Disable a noisy debug logging subsystem - D_LOG_MASK=DEBUG,MEM=ERR -> disables MEM facility by - restricting all logs from that facility to ERROR or higher priority only - D_LOG_MASK=DEBUG,SWIM=ERR,RPC=ERR,HG=ERR -> disables SWIM and RPC/HG facilities + ```sh + D_LOG_MASK=DEBUG,MEM=ERR -> disables MEM facility by + restricting all logs from that facility to ERROR or higher priority only + D_LOG_MASK=DEBUG,SWIM=ERR,RPC=ERR,HG=ERR -> disables SWIM and RPC/HG facilities + ``` -- Enable a subset of facilities of interest +* Enable a subset of facilities of interest - DD_SUBSYS=rpc,tests - D_LOG_MASK=DEBUG -> required to see logs for RPC and TESTS - less severe than INFO (the majority of log messages) + ```sh + DD_SUBSYS=rpc,tests + D_LOG_MASK=DEBUG -> required to see logs for RPC and TESTS + less severe than INFO (the majority of log messages) + ``` -- Fine-tune the debug messages by setting a debug mask +* Fine-tune the debug messages by setting a debug mask - D_LOG_MASK=DEBUG - DD_MASK=mgmt -> only logs DEBUG messages related to pool - management + ```sh + D_LOG_MASK=DEBUG + DD_MASK=mgmt -> only logs DEBUG messages related to pool + management + ``` Refer to the DAOS Environment Variables document for more information about the debug system environment. @@ -270,11 +282,15 @@ remove the segment. 
For example, to remove the shared memory segment left behind by I/O Engine instance 0, issue: - sudo ipcrm -M 0x10242048 + ```bash + sudo ipcrm -M 0x10242048 + ``` To remove the shared memory segment left behind by I/O Engine instance 1, issue: - sudo ipcrm -M 0x10242049 + ```bash + sudo ipcrm -M 0x10242049 + ``` ### Server Start Issues @@ -317,110 +333,113 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The ### Use dmg command without daos_admin privilege - # Error message or timeout after dmg system query - $ dmg system query - ERROR: dmg: Unable to load Certificate Data: could not load cert: stat /etc/daos/certs/admin.crt: no such file or directory - - # Workaround - - # 1. Make sure the admin-host /etc/daos/daos_control.yml is correctly configured. - # including: - # hostlist: - # port: - # transport\config: - # allow_insecure: - # ca\cert: /etc/daos/certs/daosCA.crt - # cert: /etc/daos/certs/admin.crt - # key: /etc/daos/certs/admin.key - - # 2. Make sure the admin-host allow_insecure mode matches the applicable servers. + ```bash + # Error message or timeout after dmg system query + $ dmg system query + ERROR: dmg: Unable to load Certificate Data: could not load cert: stat /etc/daos/certs/admin.crt: no such file or directory + + # Workaround + # 1. Make sure the admin-host /etc/daos/daos_control.yml is correctly configured. + # including: + # hostlist: + # port: + # transport\config: + # allow_insecure: + # ca\cert: /etc/daos/certs/daosCA.crt + # cert: /etc/daos/certs/admin.crt + # key: /etc/daos/certs/admin.key + # 2. Make sure the admin-host allow_insecure mode matches the applicable servers. 
+``` ### use the daos command before daos_agent started - $ daos cont create $DAOS_POOL - daos ERR src/common/drpc.c:217 unixcomm_connect() Failed to connect to /var/run/daos_agent/daos_agent.sock, errno=2(No such file or directory) - mgmt ERR src/mgmt/cli_mgmt.c:222 get_attach_info() failed to connect to /var/run/daos_agent/daos_agent.sock DER_MISC(-1025): 'Miscellaneous error' - failed to initialize daos: Miscellaneous error (-1025) - - - # Work around to check for daos_agent certification and start daos_agent - #check for /etc/daos/certs/daosCA.crt, agent.crt and agent.key - $ sudo systemctl enable daos_agent.service - $ sudo systemctl start daos_agent.service + ```bash + $ daos cont create $DAOS_POOL + daos ERR src/common/drpc.c:217 unixcomm_connect() Failed to connect to /var/run/daos_agent/daos_agent.sock, errno=2(No such file or directory) + mgmt ERR src/mgmt/cli_mgmt.c:222 get_attach_info() failed to connect to /var/run/daos_agent/daos_agent.sock DER_MISC(-1025): 'Miscellaneous error' + failed to initialize daos: Miscellaneous error (-1025) + # Work around to check for daos_agent certification and start daos_agent + #check for /etc/daos/certs/daosCA.crt, agent.crt and agent.key + $ sudo systemctl enable daos_agent.service + $ sudo systemctl start daos_agent.service +``` ### use the daos command with invalid or wrong parameters - # Lack of providing daos pool_uuid - $ daos pool list-cont - pool UUID required - rc: 2 - daos command (v1.2), libdaos 1.2.0 - usage: daos RESOURCE COMMAND [OPTIONS] - resources: - pool pool - container (cont) container - filesystem (fs) copy to and from a POSIX filesystem - object (obj) object - shell Interactive obj ctl shell for DAOS - version print command version - help print this message and exit - use 'daos help RESOURCE' for resource specifics - - # Invalid sub-command cont-list - $ daos pool cont-list --pool=$DAOS_POOL - invalid pool command: cont-list - error parsing command line arguments - daos command (v1.2), libdaos 
1.2.0 - usage: daos RESOURCE COMMAND [OPTIONS] - resources: - pool pool - container (cont) container - filesystem (fs) copy to and from a POSIX filesystem - object (obj) object - shell Interactive obj ctl shell for DAOS - version print command version - help print this message and exit - use 'daos help RESOURCE' for resource specifics - - # Working daos pool command - $ daos pool list-cont --pool=$DAOS_POOL - bc4fe707-7470-4b7d-83bf-face75cc98fc + ```bash + # Lack of providing daos pool_uuid + $ daos pool list-cont + pool UUID required + rc: 2 + daos command (v1.2), libdaos 1.2.0 + usage: daos RESOURCE COMMAND [OPTIONS] + resources: + pool pool + container (cont) container + filesystem (fs) copy to and from a POSIX filesystem + object (obj) object + shell Interactive obj ctl shell for DAOS + version print command version + help print this message and exit + use 'daos help RESOURCE' for resource specifics + + # Invalid sub-command cont-list + $ daos pool cont-list --pool=$DAOS_POOL + invalid pool command: cont-list + error parsing command line arguments + daos command (v1.2), libdaos 1.2.0 + usage: daos RESOURCE COMMAND [OPTIONS] + resources: + pool pool + container (cont) container + filesystem (fs) copy to and from a POSIX filesystem + object (obj) object + shell Interactive obj ctl shell for DAOS + version print command version + help print this message and exit + use 'daos help RESOURCE' for resource specifics + + # Working daos pool command + $ daos pool list-cont --pool=$DAOS_POOL + bc4fe707-7470-4b7d-83bf-face75cc98fc + ``` ## dmg pool create failed due to no space - $ dmg pool create --size=50G mypool - Creating DAOS pool with automatic storage allocation: 50 GB NVMe + 6.00% SCM - ERROR: dmg: pool create failed: DER_NOSPACE(-1007): No space on storage target - - # Workaround: dmg storage query scan to find currently available storage - dmg storage query usage - Hosts SCM-Total SCM-Free SCM-Used NVMe-Total NVMe-Free NVMe-Used - ----- --------- -------- 
-------- ---------- --------- --------- - boro-8 17 GB 6.0 GB 65 % 0 B 0 B N/A - - $ dmg pool create --size=2G mypool - Creating DAOS pool with automatic storage allocation: 2.0 GB NVMe + 6.00% SCM - Pool created with 100.00% SCM/NVMe ratio - ----------------------------------------- - UUID : b5ce2954-3f3e-4519-be04-ea298d776132 - Service Ranks : 0 - Storage Ranks : 0 - Total Size : 2.0 GB - SCM : 2.0 GB (2.0 GB / rank) - NVMe : 0 B (0 B / rank) - - $ dmg storage query usage - Hosts SCM-Total SCM-Free SCM-Used NVMe-Total NVMe-Free NVMe-Used - ----- --------- -------- -------- ---------- --------- --------- - boro-8 17 GB 2.9 GB 83 % 0 B 0 B N/A + ```bash + $ dmg pool create --size=50G mypool + Creating DAOS pool with automatic storage allocation: 50 GB NVMe + 6.00% SCM + ERROR: dmg: pool create failed: DER_NOSPACE(-1007): No space on storage target + + # Workaround: dmg storage query scan to find currently available storage + dmg storage query usage + Hosts SCM-Total SCM-Free SCM-Used NVMe-Total NVMe-Free NVMe-Used + ----- --------- -------- -------- ---------- --------- --------- + boro-8 17 GB 6.0 GB 65 % 0 B 0 B N/A + + $ dmg pool create --size=2G mypool + Creating DAOS pool with automatic storage allocation: 2.0 GB NVMe + 6.00% SCM + Pool created with 100.00% SCM/NVMe ratio + ----------------------------------------- + UUID : b5ce2954-3f3e-4519-be04-ea298d776132 + Service Ranks : 0 + Storage Ranks : 0 + Total Size : 2.0 GB + SCM : 2.0 GB (2.0 GB / rank) + NVMe : 0 B (0 B / rank) + + $ dmg storage query usage + Hosts SCM-Total SCM-Free SCM-Used NVMe-Total NVMe-Free NVMe-Used + ----- --------- -------- -------- ---------- --------- --------- + boro-8 17 GB 2.9 GB 83 % 0 B 0 B N/A ### dmg pool destroy timeout - # dmg pool destroy Timeout or failed due to pool has active container(s) - # Workaround pool destroy --force option + # dmg pool destroy Timeout or failed due to pool has active container(s) + # Workaround pool destroy --force option - $ dmg pool destroy 
--pool=$DAOS_POOL --force - Pool-destroy command succeeded + $ dmg pool destroy --pool=$DAOS_POOL --force + Pool-destroy command succeeded ## Bug Report diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md index fb4dd44ed89..625f6061c82 100644 --- a/docs/overview/architecture.md +++ b/docs/overview/architecture.md @@ -51,46 +51,32 @@ Persistent Memory Development Kit (PMDK) allows managing transactional access to SCM, and the Storage Performance Development Kit (SPDK) enables user-space I/O to NVMe devices. -![](../admin/media/image1.png) +![Figure 2-1. DAOS Storage](../admin/media/image1.png) Figure 2-1. DAOS Storage DAOS aims to deliver: -- High throughput and IOPS at arbitrary alignment and size - -- Fine-grained I/O operations with true zero-copy I/O to SCM - -- Support for massively distributed NVM storage via scalable +- High throughput and IOPS at arbitrary alignment and size +- Fine-grained I/O operations with true zero-copy I/O to SCM +- Support for massively distributed NVM storage via scalable collective communications across the storage servers - -- Non-blocking data and metadata operations to allow I/O and +- Non-blocking data and metadata operations to allow I/O and computation to overlap - -- Advanced data placement taking into account fault domains - -- Software-managed redundancy supporting both replication and erasure +- Advanced data placement taking into account fault domains +- Software-managed redundancy supporting both replication and erasure code with an online rebuild - -- End-to-end data integrity - -- Scalable distributed transactions with guaranteed data consistency +- End-to-end data integrity +- Scalable distributed transactions with guaranteed data consistency and automated recovery - -- Dataset snapshot - -- Security framework to manage access control to storage pools - -- Software-defined storage management to provision, configure, modify +- Dataset snapshot +- Security framework to manage access control to 
storage pools +- Software-defined storage management to provision, configure, modify and monitor storage pools over COTS hardware - -- Native support for Hierarchical Data Format (HDF)5, MPI-IO and POSIX +- Native support for Hierarchical Data Format (HDF)5, MPI-IO and POSIX namespace over the DAOS data model - -- Tools for disaster recovery - -- Seamless integration with the Lustre parallel filesystem - -- Mover agent to migrate datasets among DAOS pools and from parallel +- Tools for disaster recovery +- Seamless integration with the Lustre parallel filesystem +- Mover agent to migrate datasets among DAOS pools and from parallel filesystems to DAOS and vice versa ## DAOS System @@ -128,7 +114,7 @@ target has its private storage, its pool of service threads, and its dedicated network context that can be directly addressed over the fabric independently of the other targets hosted on the same storage node. -* The SCM modules are configured in *AppDirect interleaved* mode. +- The SCM modules are configured in *AppDirect interleaved* mode. They are thus presented to the operating system as a single PMem namespace per socket (in `fsdax` mode). @@ -140,11 +126,11 @@ independently of the other targets hosted on the same storage node. DAX does not yet support the `relink` filesystem feature, but DAOS does not use this feature. -* When *N* targets per engine are configured, +- When *N* targets per engine are configured, each target is using *1/N* of the capacity of the `fsdax` SCM capacity of that socket, independently of the other targets. -* Each target also uses a fraction of the NVMe capacity of the NVMe +- Each target also uses a fraction of the NVMe capacity of the NVMe drives attached to this socket. For example, in an engine with 4 NVMe disks and 16 targets, each target will manage 1/4 of a single NVMe disk. 
diff --git a/docs/overview/data_integrity.md b/docs/overview/data_integrity.md index eb8ab507868..d9f95a9467f 100644 --- a/docs/overview/data_integrity.md +++ b/docs/overview/data_integrity.md @@ -96,14 +96,13 @@ more information): A single extent update (blue line) from index 2-13. A fetched extent (orange line) from index 2-6. The fetch is only part of the original extent written. -![](../graph/data_integrity/array_example_1.png) +![Fetch 1](../graph/data_integrity/array_example_1.png) Many extent updates and different epochs. A fetch from index 2-13 requires parts from each extent. ![Array Example 2](../graph/data_integrity/array_example_2.png) - The nature of the array type requires that a more sophisticated approach to creating checksums is used. DAOS uses a "chunking" approach where each extent will be broken up into "chunks" with a predetermined "chunk size." Checksums @@ -113,7 +112,7 @@ size configured to be 4 (units are arbitrary in this example). Though not all chunks have a total size of 4, an absolute offset alignment is maintained. The gray boxes around the extents represent the chunks. -![](../graph/data_integrity/array_with_chunks.png) +![Fetch 2](../graph/data_integrity/array_with_chunks.png) ( See [Object Layer](https://github.com/daos-stack/daos/blob/release/2.2/src/object/README.md) diff --git a/docs/overview/security.md b/docs/overview/security.md index 3273589ab80..5c426ffae3d 100644 --- a/docs/overview/security.md +++ b/docs/overview/security.md @@ -155,19 +155,19 @@ It is _not_ possible to deny access to a specific group in this way, due to ##### ACE Examples * `A::daos_user@:rw` - * Allow the UNIX user named `daos_user` to have read-write access. + * Allow the UNIX user named `daos_user` to have read-write access. * `A:G:project_users@:tc` - * Allow anyone in the UNIX group `project_users` to access a pool's + * Allow anyone in the UNIX group `project_users` to access a pool's contents and create containers. 
* `A::OWNER@:rwdtTaAo` - * Allow the UNIX user who owns the container to have full control. + * Allow the UNIX user who owns the container to have full control. * `A:G:GROUP@:rwdtT` - * Allow the UNIX group that owns the container to read and write data, delete + * Allow the UNIX group that owns the container to read and write data, delete the container, and manipulate container properties. * `A::EVERYONE@:r` - * Allow any user not covered by other rules to have read-only access. + * Allow any user not covered by other rules to have read-only access. * `A::daos_user@:` - * Deny the UNIX user named `daos_user` any access to the resource. + * Deny the UNIX user named `daos_user` any access to the resource. #### Enforcement @@ -209,7 +209,7 @@ non-whitespace character on the line. For example: -``` +```sh # ACL for my container # Owner can't touch data - just do admin-type things A::OWNER@:dtTaAo @@ -231,8 +231,7 @@ To calculate the internal data size of an ACL, use the following formula for each ACE: * The base size of an ACE is 256 Bytes. -* If the ACE principal is *not* one of the special principals: +* If the ACE principal is _not_ one of the special principals: * Add the length of the principal string + 1. * If that value is not 64-Byte aligned, round up to the nearest 64-Byte boundary. - diff --git a/docs/overview/storage.md b/docs/overview/storage.md index 2a9866cd70c..116935d6d78 100644 --- a/docs/overview/storage.md +++ b/docs/overview/storage.md @@ -157,7 +157,7 @@ scalable object ID allocator is provided in the DAOS API. The object ID to be stored by the application is the full 128-bit address, which is for single use only and can be associated with only a single object schema. -**DAOS Object ID Structure** +## DAOS Object ID Structure
 
@@ -209,7 +209,7 @@ and storage.
 
 A DAOS object can be accessed through different APIs:
 
--    **Multi-level key-array** API is the native object interface with locality
+- **Multi-level key-array** API is the native object interface with locality
      feature. The key is split into a distribution (dkey) and an
      attribute (akey) key. Both keys can be variable
      length and type (a string, an integer or even a complex data
@@ -217,10 +217,8 @@ A DAOS object can be accessed through different APIs:
      collocated on the same target. The value associated with akey can be
      either a single variable-length value or an array of fixed-length values.
      Both the akeys and dkeys support enumeration.
-
--    **Key-value** API provides a simple key and variable-length value
+- **Key-value** API provides a simple key and variable-length value
      interface. It supports the traditional put, get, remove and list operations.
-
--    **Array API** implements a one-dimensional array of fixed-size elements
+- **Array API** implements a one-dimensional array of fixed-size elements
      addressed by a 64-bit offset. A DAOS array supports arbitrary extent read,
-     write and punch operations.
\ No newline at end of file
+     write and punch operations.
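The three access methods above can be sketched with a toy model. The following is illustrative Python only, not the libdaos C API: the class and method names, and the hash-based placement, are assumptions made for this example (real DAOS placement uses the algorithmic object layout), but it shows the key property that all akeys under one dkey land on the same target.

```python
# Conceptual sketch of the multi-level key-array model: an object maps
# dkey -> {akey -> value}, and placement depends on the dkey only, so
# all akeys under the same dkey are collocated on one target.
class DaosObjectModel:
    def __init__(self, num_targets=4):
        self.num_targets = num_targets
        self.store = {}  # dkey -> {akey -> value}

    def target_of(self, dkey):
        # Placement is a function of the dkey alone (hash is a stand-in
        # for the real layout algorithm).
        return hash(dkey) % self.num_targets

    def update(self, dkey, akey, value):
        self.store.setdefault(dkey, {})[akey] = value

    def fetch(self, dkey, akey):
        return self.store[dkey][akey]


obj = DaosObjectModel()
obj.update("row-0", "temperature", [21.5, 22.0])  # array of fixed-size values
obj.update("row-0", "label", "sensor-a")          # single variable-length value
assert obj.fetch("row-0", "label") == "sensor-a"
# Both akeys of "row-0" resolve to the same target:
assert obj.target_of("row-0") == obj.target_of("row-0")
```

The key-value API can be viewed as the degenerate case of this model with a single implicit dkey, and the array API as akeys addressing fixed-size elements by 64-bit offset.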
diff --git a/docs/overview/use_cases.md b/docs/overview/use_cases.md
index 817de83c769..ecbd0e33363 100644
--- a/docs/overview/use_cases.md
+++ b/docs/overview/use_cases.md
@@ -7,9 +7,9 @@ This document contains the following sections:
 
 - Storage Management and Workflow Integration
 - Workflow Execution
-    -  Bulk Synchronous Checkpoint
-    - Producer/Consumer
-    - Concurrent Producers
+  - Bulk Synchronous Checkpoint
+  - Producer/Consumer
+  - Concurrent Producers
 - Storage Node Failure and Resilvering
 
 
@@ -18,9 +18,9 @@ This document contains the following sections:
 
 In this section, we consider two different cluster configurations:
 
-* Cluster A: All or a majority of the compute nodes have local persistent
+- Cluster A: All or a majority of the compute nodes have local persistent
   memory. In other words, each compute node is also a storage node.
-* Cluster B: Storage nodes are dedicated to storage and disseminated across
+- Cluster B: Storage nodes are dedicated to storage and disseminated across
   the fabric. They are not used for computation and thus do not run any
   application code.
 
@@ -83,7 +83,7 @@ elaborates on how checkpointing could be implemented on the DAOS
 storage stack. We first consider the traditional approach relying on
 blocking barriers and then a more loosely coupled execution.
 
-Blocking Barrier
+### Blocking Barrier
 
 When the simulation job starts, one task opens the checkpoint container
 and fetches the current global HCE. It then obtains an epoch hold and
@@ -100,7 +100,7 @@ task (e.g., rank 0) commits the LHE, which is then increased by one on
 a successful commit. This process is repeated regularly until the simulation
 successfully completes.
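The blocking-barrier cycle described above can be condensed into a short sketch. This is illustrative Python under stated assumptions: `checkpoint_cycle` is a hypothetical name, the sequential dict stands in for the parallel tasks, and the real flow uses DAOS epoch hold/flush/commit calls plus an inter-task barrier.

```python
# One checkpoint iteration: all tasks write at the lowest held epoch
# (LHE = HCE + 1), flush and reach a barrier, then a single task
# commits the LHE, which becomes the new highest committed epoch (HCE).
def checkpoint_cycle(hce, num_tasks):
    lhe = hce + 1  # epoch held for this round of checkpoint writes
    writes = {task: f"ckpt-data-{task}@epoch{lhe}" for task in range(num_tasks)}
    # ... all tasks flush their writes, then block on a barrier ...
    hce = lhe      # rank 0 commits the LHE on behalf of everyone
    return hce, writes


hce = 7
hce, writes = checkpoint_cycle(hce, num_tasks=4)
assert hce == 8 and len(writes) == 4
```

Repeating `checkpoint_cycle` models the "repeated regularly until the simulation successfully completes" loop; the LHE increases by one on each successful commit.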
 
-Non-blocking Barrier
+### Non-blocking Barrier
 
 We now consider another approach to checkpointing where the execution is
 more loosely coupled. As in the previous case, one task is responsible for
@@ -131,7 +131,7 @@ post-process job. The DAOS stack provides specific mechanisms for
 producer/consumer workflow, allowing the consumer to dump the
 result of its analysis into the same container as the producer.
 
-Private Container
+#### Private Container
 
 The down-sample job opens the sampled timesteps container, fetches the
 current global HCE, obtains an epoch hold and writes newly sampled data to
@@ -151,7 +151,7 @@ Another approach is for the producer job to create explicit snapshots for
 epochs of interest and have the analysis job waiting and processing
 snapshots. This avoids processing every single committed epoch.
 
-Shared Container
+#### Shared Container
 
 We now assume that the container storing the sampled timesteps and the one
 storing the analyzed data is a single container. In other words, the
@@ -191,9 +191,9 @@ should acquire a write lock on the object. This lock carries a lock value
 block (LVB) storing the last epoch number in which this object was last
 modified and committed. Once the lock is acquired, the writer must:
 
-* read from an epoch equal to the greatest of the epoch specified in the
+- read from an epoch equal to the greatest of the epoch specified in the
   LVB and the handle LRE.
-* submit new writes with an epoch higher than the one in the LVB and the
+- submit new writes with an epoch higher than the one in the LVB and the
   currently held epoch.
 
 After all the I/O operations have been completed, flushed, and committed by
@@ -221,4 +221,3 @@ process is executed online while the container is still being accessed and
 modified. Once redundancy has been restored for all objects, the pool map is
 updated again to inform everyone that the system has recovered from the fault
 and can exit from degraded mode.
-
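The two writer-side rules for concurrent producers — read from the greatest of the LVB epoch and the handle LRE, and write at an epoch strictly higher than both the LVB epoch and the currently held epoch — can be expressed as a small sketch. Illustrative Python only: `read_epoch` and `write_epoch` are hypothetical helper names, and `max(...) + 1` is just one value satisfying the "strictly higher" rule.

```python
# Epoch selection after acquiring the write lock, whose lock value
# block (LVB) records the epoch of the last committed modification.
def read_epoch(lvb_epoch, handle_lre):
    # Read from the greatest of the LVB epoch and the handle's
    # lowest referenced epoch (LRE).
    return max(lvb_epoch, handle_lre)

def write_epoch(lvb_epoch, held_epoch):
    # New writes must use an epoch strictly higher than both the
    # LVB epoch and the currently held epoch.
    return max(lvb_epoch, held_epoch) + 1


assert read_epoch(lvb_epoch=42, handle_lre=40) == 42
assert write_epoch(lvb_epoch=42, held_epoch=45) == 46
```

On release, the lock's LVB would be updated to the committed write epoch so that the next writer observes it.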
diff --git a/docs/release/support_matrix_v2_2.md b/docs/release/support_matrix_v2_2.md
index 1b07aeb9424..408d3ac6a5b 100644
--- a/docs/release/support_matrix_v2_2.md
+++ b/docs/release/support_matrix_v2_2.md
@@ -73,7 +73,7 @@ single high-speed network port for development and testing purposes,
 but this is not supported in a production environment.)
 It is strongly recommended that all DAOS engines in a DAOS system use the same
 model of high-speed fabric adapter.
-Heterogeneous adapter population across DAOS engines has **not** been tested,
+Heterogeneous adapter population across DAOS engines has **not** been tested,
 and running with such configurations may cause unexpected behavior.
 Please refer to "Fabric Support" below for more details.
 
@@ -98,7 +98,7 @@ and [openSUSE Leap 15.3](https://en.opensuse.org/openSUSE:Roadmap).
 The following subsections provide details on the Linux distributions
 which DAOS Version 2.2 supports on DAOS servers.
 
-Note that all DAOS servers in a DAOS server cluster (also called _DAOS system_)
+Note that all DAOS servers in a DAOS server cluster (also called *DAOS system*)
 must run the same Linux distribution. DAOS clients that access a DAOS server
 cluster can run the same or different Linux distributions.
 
@@ -107,14 +107,14 @@ cluster can run the same or different Linux distributions.
 With DAOS Version 2.2, CentOS 7.9 and RHEL 7.9 are supported on DAOS servers
 with 2nd gen Intel Xeon Scalable processors (Cascade Lake).
 
-CentOS 7.9 or RHEL 7.9 are **not** supported on DAOS servers
+CentOS 7.9 or RHEL 7.9 are **not** supported on DAOS servers
 with 3rd gen Intel Xeon Scalable processors (Ice Lake)
 or newer Intel Xeon processor generations.
 
 Links to CentOS Linux 7 and RHEL 7 Release Notes:
 
-* [CentOS 7.9.2009](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.2009)
-* [RHEL 7.9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index)
+- [CentOS 7.9.2009](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.2009)
+- [RHEL 7.9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index)
 
 CentOS Linux 7 will reach End Of Life (EOL) on June 30th, 2024.
 Refer to the [RHEL Life Cycle](https://access.redhat.com/support/policy/updates/errata/)
@@ -140,8 +140,8 @@ DAOS Version 2.2 supports RHEL 8.5 and RHEL 8.6.
 
 Links to RHEL 8 Release Notes:
 
-* [RHEL 8.5](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/index)
-* [RHEL 8.6](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/index)
+- [RHEL 8.5](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/index)
+- [RHEL 8.6](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/index)
 
 Refer to the [RHEL Life Cycle](https://access.redhat.com/support/policy/updates/errata/)
 description on the Red Hat support website for information on RHEL support phases.
@@ -159,8 +159,8 @@ DAOS Version 2.2 supports Rocky Linux 8.5 and 8.6.
 
 Links to Rocky Linux Release Notes:
 
-* [Rocky Linux 8.5](https://docs.rockylinux.org/release_notes/8.5/)
-* [Rocky Linux 8.6](https://docs.rockylinux.org/release_notes/8.6/)
+- [Rocky Linux 8.5](https://docs.rockylinux.org/release_notes/8.5/)
+- [Rocky Linux 8.6](https://docs.rockylinux.org/release_notes/8.6/)
 
 ### openSUSE Leap 15
 
@@ -168,7 +168,7 @@ DAOS Version 2.2 is supported on openSUSE Leap 15.3.
 
 Links to openSUSE Leap 15 Release Notes:
 
-* [openSUSE Leap 15.3](https://doc.opensuse.org/release-notes/x86_64/openSUSE/Leap/15.3/)
+- [openSUSE Leap 15.3](https://doc.opensuse.org/release-notes/x86_64/openSUSE/Leap/15.3/)
 
 ### SUSE Linux Enterprise Server 15
 
@@ -176,7 +176,7 @@ DAOS Version 2.2 is supported on SLES 15 SP3.
 
 Links to SLES 15 Release Notes:
 
-* [SLES 15 SP3](https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP3/)
+- [SLES 15 SP3](https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP3/)
 
 Refer to the [SLES Life Cycle](https://www.suse.com/lifecycle/)
 description on the SUSE support website for information on SLES support phases.
@@ -184,12 +184,13 @@ description on the SUSE support website for information on SLES support phases.
 ### Unsupported Linux Distributions
 
 DAOS does not support
-openSUSE Tumbleweed,
-Fedora,
-CentOS Stream,
-Alma Linux,
-Ubuntu, or
-Oracle Linux.
+
+- openSUSE Tumbleweed
+- Fedora
+- CentOS Stream
+- Alma Linux
+- Ubuntu
+- Oracle Linux
 
 ## Operating Systems supported for DAOS Clients
 
@@ -240,9 +241,9 @@ Version 5.6-1. Versions older than 5.4-3 are not supported by DAOS 2.2.
 
 Links to MLNX\_OFED Release Notes:
 
-* [MLNX\_OFED 5.4-3](https://docs.nvidia.com/networking/display/MLNXOFEDv543100/Release+Notes)
-* [MLNX\_OFED 5.5-1](https://docs.nvidia.com/networking/display/MLNXOFEDv551032/Release+Notes)
-* [MLNX\_OFED 5.6-1](https://docs.nvidia.com/networking/display/MLNXOFEDv561000/Release+Notes)
+- [MLNX\_OFED 5.4-3](https://docs.nvidia.com/networking/display/MLNXOFEDv543100/Release+Notes)
+- [MLNX\_OFED 5.5-1](https://docs.nvidia.com/networking/display/MLNXOFEDv551032/Release+Notes)
+- [MLNX\_OFED 5.6-1](https://docs.nvidia.com/networking/display/MLNXOFEDv561000/Release+Notes)
 
 It is strongly recommended that all DAOS servers and all DAOS clients
 run the same version of MLNX\_OFED, and that the InfiniBand adapters are
@@ -250,7 +251,7 @@ updated to the firmware levels that are included in that MLNX\_OFED
 distribution.
 It is also strongly recommended that the same model of
 InfiniBand fabric adapter is used in all DAOS servers.
-DAOS Version 2.2 has **not** been tested with heterogeneous InfiniBand
+DAOS Version 2.2 has **not** been tested with heterogeneous InfiniBand
 adapter configurations.
 The only exception to this recommendation is the mix of single-port
 and dual-port adapters of the same generation, where only one of the ports
@@ -271,16 +272,16 @@ DAOS scaling targets
 (these are order of magnitude figures that indicate what the DAOS architecture
 should support - see below for the scales at which DAOS 2.2 has been validated):
 
-* DAOS client nodes in a DAOS system:   105 (hundreds of thousands)
-* DAOS servers in a DAOS system:        103 (thousands)
-* DAOS engines per DAOS server:         100 (less than ten)
-* DAOS targets per DAOS engine:         101 (tens)
-* SCM storage devices per DAOS engine:  101 (tens)
-* NVMe storage devices per DAOS engine: 101 (tens)
-* DAOS pools in a DAOS system:          102 (hundreds)
-* DAOS containers in a DAOS pool:       102 (hundreds)
-* DAOS objects in a DAOS container:     1010 (tens of billions)
-* Application tasks accessing a DAOS container: 106 (millions)
+- DAOS client nodes in a DAOS system:   10^5 (hundreds of thousands)
+- DAOS servers in a DAOS system:        10^3 (thousands)
+- DAOS engines per DAOS server:         10^0 (less than ten)
+- DAOS targets per DAOS engine:         10^1 (tens)
+- SCM storage devices per DAOS engine:  10^1 (tens)
+- NVMe storage devices per DAOS engine: 10^1 (tens)
+- DAOS pools in a DAOS system:          10^2 (hundreds)
+- DAOS containers in a DAOS pool:       10^2 (hundreds)
+- DAOS objects in a DAOS container:     10^10 (tens of billions)
+- Application tasks accessing a DAOS container: 10^6 (millions)
 
 Note that DAOS has an architectural limit of 2^16 = 65536 storage targets
 in a DAOS system, because the number of storage targets is encoded in
@@ -288,15 +289,15 @@ in a DAOS system, because the number of storage targets is encoded in
 
 DAOS Version 2.2 has been validated at the following scales:
 
-* DAOS client nodes in a DAOS system:   256
-* DAOS servers in a DAOS system:        128
-* DAOS engines per DAOS server:         1, 2 and 4
-* DAOS targets per DAOS engine:         4-16
-* SCM storage devices per DAOS engine:  6 (Optane PMem 100), 8 (Optane PMem 200)
-* NVMe storage devices per DAOS engine: 0 (PMem-only pools), 4-12
-* DAOS pools in a DAOS system:          100
-* DAOS containers in a DAOS pool:       100
-* DAOS objects in a DAOS container:     6 billion (in mdtest benchmarks)
-* Application tasks accessing a DAOS container: 3072 (using verbs)
+- DAOS client nodes in a DAOS system:   256
+- DAOS servers in a DAOS system:        128
+- DAOS engines per DAOS server:         1, 2 and 4
+- DAOS targets per DAOS engine:         4-16
+- SCM storage devices per DAOS engine:  6 (Optane PMem 100), 8 (Optane PMem 200)
+- NVMe storage devices per DAOS engine: 0 (PMem-only pools), 4-12
+- DAOS pools in a DAOS system:          100
+- DAOS containers in a DAOS pool:       100
+- DAOS objects in a DAOS container:     6 billion (in mdtest benchmarks)
+- Application tasks accessing a DAOS container: 3072 (using verbs)
 
 This test coverage will be expanded in subsequent DAOS releases.
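The 16-bit target-ID limit noted above is easy to sanity-check when planning a deployment. A minimal sketch (the helper name is illustrative, not part of any DAOS tool):

```python
# DAOS encodes the storage target ID in a 16-bit field,
# so a single DAOS system can address at most 2**16 = 65536 targets.
MAX_TARGETS = 2 ** 16

def fits_target_limit(servers: int, engines_per_server: int,
                      targets_per_engine: int) -> bool:
    """Return True if the planned configuration stays within the
    architectural limit of 65536 storage targets per DAOS system."""
    return servers * engines_per_server * targets_per_engine <= MAX_TARGETS

# The largest validated scale above: 128 servers x 4 engines x 16 targets
print(fits_target_limit(128, 4, 16))
```

A 128-server, 4-engine, 16-target configuration uses 8192 targets, comfortably inside the limit.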
diff --git a/docs/testing/cartselftest.md b/docs/testing/cartselftest.md
index 371865e6eef..8782ab90747 100644
--- a/docs/testing/cartselftest.md
+++ b/docs/testing/cartselftest.md
@@ -6,228 +6,230 @@ It is advisable to use the `-u` (or `--use-daos-agent-env`) option
 in order to obtain the fabric environment from the running
 `daos_agent` process. See `self_test --help` for details.
 
-	# set env
-	export FI_UNIVERSE_SIZE=2048
-
-	# for 4 servers --endpoint 0-3:0-1 ranks:tags.
-	self_test --group-name daos_server --use-daos-agent-env --endpoint 0-1:0-1
-
-	Adding endpoints:
-	  ranks: 0-1 (# ranks = 2)
-	  tags: 0-1 (# tags = 2)
-	Warning: No --master-endpoint specified; using this command line application as the master endpoint
-	Self Test Parameters:
-	  Group name to test against: daos_server
-	  # endpoints:                4
-	  Message sizes:              [(200000-BULK_GET 200000-BULK_PUT), (200000-BULK_GET 0-EMPTY), (0-EMPTY 200000-BULK_PUT), (200000-BULK_GET 1000-IOV), (1000-IOV 200000-BULK_PUT), (1000-IOV 1000-IOV), (1000-IOV 0-EMPTY), (0-EMPTY 1000-IOV), (0-EMPTY 0-EMPTY)]
-	  Buffer addresses end with:  
-	  Repetitions per size:       40000
-	  Max inflight RPCs:          1000
-
-	CLI [rank=0 pid=40050]  Attached daos_server
-	##################################################
-	Results for message size (200000-BULK_GET 200000-BULK_PUT) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 197.56
-		RPC Throughput (RPCs/sec): 518
-		RPC Latencies (us):
-			Min    : 38791
-			25th  %: 1695365
-			Median : 1916632
-			75th  %: 2144087
-			Max    : 2969361
-			Average: 1907415
-			Std Dev: 373832.81
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 1889518
-			0:1 - 1712934
-			1:0 - 1924995
-			1:1 - 2110649
-
-	##################################################
-	Results for message size (200000-BULK_GET 0-EMPTY) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 112.03
-		RPC Throughput (RPCs/sec): 587
-		RPC Latencies (us):
-			Min    : 4783
-			25th  %: 1480053
-			Median : 1688064
-			75th  %: 1897392
-			Max    : 2276555
-			Average: 1681303
-			Std Dev: 314999.11
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 2001222
-			0:1 - 1793990
-			1:0 - 1385306
-			1:1 - 1593675
-
-	##################################################
-	Results for message size (0-EMPTY 200000-BULK_PUT) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 112.12
-		RPC Throughput (RPCs/sec): 588
-		RPC Latencies (us):
-			Min    : 6302
-			25th  %: 1063532
-			Median : 1654468
-			75th  %: 2287784
-			Max    : 3488227
-			Average: 1680617
-			Std Dev: 880402.68
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 1251585
-			0:1 - 1323953
-			1:0 - 2099173
-			1:1 - 2043352
-
-	##################################################
-	Results for message size (200000-BULK_GET 1000-IOV) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 112.54
-		RPC Throughput (RPCs/sec): 587
-		RPC Latencies (us):
-			Min    : 5426
-			25th  %: 1395359
-			Median : 1687404
-			75th  %: 1983402
-			Max    : 2426175
-			Average: 1681970
-			Std Dev: 393256.99
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 2077476
-			0:1 - 1870102
-			1:0 - 1318136
-			1:1 - 1529193
-
-	##################################################
-	Results for message size (1000-IOV 200000-BULK_PUT) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 112.66
-		RPC Throughput (RPCs/sec): 588
-		RPC Latencies (us):
-			Min    : 5340
-			25th  %: 442729
-			Median : 1221371
-			75th  %: 2936906
-			Max    : 3502405
-			Average: 1681142
-			Std Dev: 1308472.80
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 3006315
-			0:1 - 2913808
-			1:0 - 434763
-			1:1 - 465469
-
-	##################################################
-	Results for message size (1000-IOV 1000-IOV) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 80.71
-		RPC Throughput (RPCs/sec): 42315
-		RPC Latencies (us):
-			Min    : 1187
-			25th  %: 20187
-			Median : 23322
-			75th  %: 26833
-			Max    : 30246
-			Average: 23319
-			Std Dev: 4339.87
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 26828
-			0:1 - 26839
-			1:0 - 20275
-			1:1 - 20306
-
-	##################################################
-	Results for message size (1000-IOV 0-EMPTY) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 42.68
-		RPC Throughput (RPCs/sec): 44758
-		RPC Latencies (us):
-			Min    : 935
-			25th  %: 15880
-			Median : 21444
-			75th  %: 28434
-			Max    : 34551
-			Average: 22035
-			Std Dev: 7234.26
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 28418
-			0:1 - 28449
-			1:0 - 16301
-			1:1 - 16318
-
-	##################################################
-	Results for message size (0-EMPTY 1000-IOV) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 42.91
-		RPC Throughput (RPCs/sec): 44991
-		RPC Latencies (us):
-			Min    : 789
-			25th  %: 20224
-			Median : 22195
-			75th  %: 24001
-			Max    : 26270
-			Average: 21943
-			Std Dev: 3039.50
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 24017
-			0:1 - 23987
-			1:0 - 20279
-			1:1 - 20309
-
-	##################################################
-	Results for message size (0-EMPTY 0-EMPTY) (max_inflight_rpcs = 1000):
-
-	Master Endpoint 2:0
-	-------------------
-		RPC Bandwidth (MB/sec): 0.00
-		RPC Throughput (RPCs/sec): 47807
-		RPC Latencies (us):
-			Min    : 774
-			25th  %: 16161
-			Median : 20419
-			75th  %: 25102
-			Max    : 29799
-			Average: 20633
-			Std Dev: 5401.96
-		RPC Failures: 0
-
-		Endpoint results (rank:tag - Median Latency (us)):
-			0:0 - 25103
-			0:1 - 25099
-			1:0 - 16401
-			1:1 - 16421
+```bash
+ # set env
+ export FI_UNIVERSE_SIZE=2048
+
+ # for 4 servers --endpoint 0-3:0-1 ranks:tags.
+ self_test --group-name daos_server --use-daos-agent-env --endpoint 0-1:0-1
+
+ Adding endpoints:
+   ranks: 0-1 (# ranks = 2)
+   tags: 0-1 (# tags = 2)
+ Warning: No --master-endpoint specified; using this command line application as the master endpoint
+ Self Test Parameters:
+   Group name to test against: daos_server
+   # endpoints:                4
+   Message sizes:              [(200000-BULK_GET 200000-BULK_PUT), (200000-BULK_GET 0-EMPTY), (0-EMPTY 200000-BULK_PUT), (200000-BULK_GET 1000-IOV), (1000-IOV 200000-BULK_PUT), (1000-IOV 1000-IOV), (1000-IOV 0-EMPTY), (0-EMPTY 1000-IOV), (0-EMPTY 0-EMPTY)]
+   Buffer addresses end with:  
+   Repetitions per size:       40000
+   Max inflight RPCs:          1000
+
+ CLI [rank=0 pid=40050]  Attached daos_server
+ ##################################################
+ Results for message size (200000-BULK_GET 200000-BULK_PUT) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 197.56
+  RPC Throughput (RPCs/sec): 518
+  RPC Latencies (us):
+   Min    : 38791
+   25th  %: 1695365
+   Median : 1916632
+   75th  %: 2144087
+   Max    : 2969361
+   Average: 1907415
+   Std Dev: 373832.81
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 1889518
+   0:1 - 1712934
+   1:0 - 1924995
+   1:1 - 2110649
+
+ ##################################################
+ Results for message size (200000-BULK_GET 0-EMPTY) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 112.03
+  RPC Throughput (RPCs/sec): 587
+  RPC Latencies (us):
+   Min    : 4783
+   25th  %: 1480053
+   Median : 1688064
+   75th  %: 1897392
+   Max    : 2276555
+   Average: 1681303
+   Std Dev: 314999.11
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 2001222
+   0:1 - 1793990
+   1:0 - 1385306
+   1:1 - 1593675
+
+ ##################################################
+ Results for message size (0-EMPTY 200000-BULK_PUT) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 112.12
+  RPC Throughput (RPCs/sec): 588
+  RPC Latencies (us):
+   Min    : 6302
+   25th  %: 1063532
+   Median : 1654468
+   75th  %: 2287784
+   Max    : 3488227
+   Average: 1680617
+   Std Dev: 880402.68
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 1251585
+   0:1 - 1323953
+   1:0 - 2099173
+   1:1 - 2043352
+
+ ##################################################
+ Results for message size (200000-BULK_GET 1000-IOV) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 112.54
+  RPC Throughput (RPCs/sec): 587
+  RPC Latencies (us):
+   Min    : 5426
+   25th  %: 1395359
+   Median : 1687404
+   75th  %: 1983402
+   Max    : 2426175
+   Average: 1681970
+   Std Dev: 393256.99
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 2077476
+   0:1 - 1870102
+   1:0 - 1318136
+   1:1 - 1529193
+
+ ##################################################
+ Results for message size (1000-IOV 200000-BULK_PUT) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 112.66
+  RPC Throughput (RPCs/sec): 588
+  RPC Latencies (us):
+   Min    : 5340
+   25th  %: 442729
+   Median : 1221371
+   75th  %: 2936906
+   Max    : 3502405
+   Average: 1681142
+   Std Dev: 1308472.80
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 3006315
+   0:1 - 2913808
+   1:0 - 434763
+   1:1 - 465469
+
+ ##################################################
+ Results for message size (1000-IOV 1000-IOV) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 80.71
+  RPC Throughput (RPCs/sec): 42315
+  RPC Latencies (us):
+   Min    : 1187
+   25th  %: 20187
+   Median : 23322
+   75th  %: 26833
+   Max    : 30246
+   Average: 23319
+   Std Dev: 4339.87
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 26828
+   0:1 - 26839
+   1:0 - 20275
+   1:1 - 20306
+
+ ##################################################
+ Results for message size (1000-IOV 0-EMPTY) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 42.68
+  RPC Throughput (RPCs/sec): 44758
+  RPC Latencies (us):
+   Min    : 935
+   25th  %: 15880
+   Median : 21444
+   75th  %: 28434
+   Max    : 34551
+   Average: 22035
+   Std Dev: 7234.26
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 28418
+   0:1 - 28449
+   1:0 - 16301
+   1:1 - 16318
+
+ ##################################################
+ Results for message size (0-EMPTY 1000-IOV) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 42.91
+  RPC Throughput (RPCs/sec): 44991
+  RPC Latencies (us):
+   Min    : 789
+   25th  %: 20224
+   Median : 22195
+   75th  %: 24001
+   Max    : 26270
+   Average: 21943
+   Std Dev: 3039.50
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 24017
+   0:1 - 23987
+   1:0 - 20279
+   1:1 - 20309
+
+ ##################################################
+ Results for message size (0-EMPTY 0-EMPTY) (max_inflight_rpcs = 1000):
+
+ Master Endpoint 2:0
+ -------------------
+  RPC Bandwidth (MB/sec): 0.00
+  RPC Throughput (RPCs/sec): 47807
+  RPC Latencies (us):
+   Min    : 774
+   25th  %: 16161
+   Median : 20419
+   75th  %: 25102
+   Max    : 29799
+   Average: 20633
+   Std Dev: 5401.96
+  RPC Failures: 0
+
+  Endpoint results (rank:tag - Median Latency (us)):
+   0:0 - 25103
+   0:1 - 25099
+   1:0 - 16401
+   1:1 - 16421
+```
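For long `self_test` runs it can be convenient to pull just the median latencies out of the report rather than reading each block. A rough post-processing sketch, assuming the output format shown above:

```python
import re

def median_latencies(report: str) -> list[int]:
    """Extract the 'Median : <us>' values from cart self_test output."""
    return [int(m) for m in re.findall(r"Median\s*:\s*(\d+)", report)]

# A fragment of the latency block produced by self_test
sample = """
RPC Latencies (us):
 Min    : 38791
 Median : 1916632
 Max    : 2969361
"""
print(median_latencies(sample))  # [1916632]
```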
diff --git a/docs/testing/datamover.md b/docs/testing/datamover.md
index 3a8190cdf92..e39800f22e8 100644
--- a/docs/testing/datamover.md
+++ b/docs/testing/datamover.md
@@ -8,9 +8,8 @@ Create Second container:
 # Create Second container
 $ daos container create --pool $DAOS_POOL --type POSIX --label cont2
 Successfully created container 158469db-70d2-4a5d-aac9-3c06cbfa7459
-```
-
 export DAOS_CONT2=
+```
 
 Pool Query before copy:
 
diff --git a/docs/user/container.md b/docs/user/container.md
index 752e7840f23..829cf066046 100644
--- a/docs/user/container.md
+++ b/docs/user/container.md
@@ -545,21 +545,13 @@ Client user and group access for containers is controlled by
 Access-controlled container accesses include:
 
 - Opening the container for access.
-
 - Reading and writing data in the container.
-
   - Reading and writing objects.
-
   - Getting, setting, and listing user attributes.
-
   - Getting, setting, and listing snapshots.
-
 - Deleting the container (if the pool does not grant the user permission).
-
 - Getting and setting container properties.
-
 - Getting and modifying the container ACL.
-
 - Modifying the container's owner.
 
 This is reflected in the set of supported
diff --git a/docs/user/filesystem.md b/docs/user/filesystem.md
index 58327135aa4..f4d5fc48b96 100644
--- a/docs/user/filesystem.md
+++ b/docs/user/filesystem.md
@@ -31,38 +31,38 @@ a per-file or per-directory basis.
 
 The DFS API closely represents the POSIX API. The API includes operations to:
 
-* Mount: create/open superblock and root object
-* Un-mount: release open handles
-* Lookup: traverse a path and return an open file/dir handle
-* IO: read & write with an iovec
-* Stat: retrieve attributes of an entry
-* Mkdir: create a dir
-* Readdir: enumerate all entries under a directory
-* Open: create/Open a file/dir
-* Remove: unlink a file/dir
-* Move: rename
-* Release: close an open handle of a file/dir
-* Extended Attributes: set, get, list, remove
+- Mount: create/open superblock and root object
+- Un-mount: release open handles
+- Lookup: traverse a path and return an open file/dir handle
+- IO: read & write with an iovec
+- Stat: retrieve attributes of an entry
+- Mkdir: create a dir
+- Readdir: enumerate all entries under a directory
+- Open: create/Open a file/dir
+- Remove: unlink a file/dir
+- Move: rename
+- Release: close an open handle of a file/dir
+- Extended Attributes: set, get, list, remove
 
 ### POSIX Compliance
 
 The following features from POSIX will not be supported:
 
-* Hard links
-* mmap support with MAP\_SHARED will be consistent from single client only. Note
+- Hard links
+- mmap support with MAP\_SHARED will be consistent from single client only. Note
   that this is supported through DFUSE only (i.e. not through the DFS API).
-* Char devices, block devices, sockets and pipes
-* User/group quotas
-* setuid(), setgid() programs, supplementary groups, ACLs are not supported
+- Char devices, block devices, sockets and pipes
+- User/group quotas
+- setuid(), setgid() programs, supplementary groups, ACLs are not supported
   within the DFS namespace.
-* [access/change/modify] time not updated appropriately, potentially on close only.
-* Flock (maybe at dfuse local node level only)
-* Block size in stat buf is not accurate (no account for holes, extended attributes)
-* Various parameters reported via statfs like number of blocks, files,
+- [access/change/modify] time not updated appropriately, potentially on close only.
+- Flock (maybe at dfuse local node level only)
+- Block size in stat buf is not accurate (no account for holes, extended attributes)
+- Various parameters reported via statfs like number of blocks, files,
   free/available space
-* POSIX permissions inside an encapsulated namespace
-  * Still enforced at the DAOS pool/container level
-  * Effectively means that all files belong to the same "project"
+- POSIX permissions inside an encapsulated namespace
+  - Still enforced at the DAOS pool/container level
+  - Effectively means that all files belong to the same "project"
 
 It is possible to use `libdfs` in a parallel application from multiple nodes.
 DFS provides two modes that offer different levels of consistency. The modes can
@@ -80,7 +80,7 @@ accessed in balanced mode only. If the container was created with relaxed mode,
 it can be accessed in relaxed or balanced mode. In either mode, there is a
 consistency semantic issue that is not properly handled:
 
-* Open-unlink semantics: This occurs when a client obtains an open handle on an
+- Open-unlink semantics: This occurs when a client obtains an open handle on an
   object (file or directory), and accesses that object (reads/writes data or
   create other files), while another client removes that object that the other
   client has opened from under it. In DAOS, we don't track object open handles
@@ -90,15 +90,15 @@ consistency semantic issue that is not properly handled:
 
 Other consistency issues are handled differently between the two consistency mode:
 
-* Same Operation Executed Concurrently (Supported in both Relaxed and Balanced
+- Same Operation Executed Concurrently (Supported in both Relaxed and Balanced
   Mode): For example, clients try to create or remove the same file
   concurrently, one should succeed and others will fail.
-* Create/Unlink/Rename Conflicts (Supported in Balanced Mode only): For example,
+- Create/Unlink/Rename Conflicts (Supported in Balanced Mode only): For example,
   a client renames a file, but another unlinks the old file at the same time.
-* Operation Atomicity (Supported only in Balanced mode): If a client crashes in
+- Operation Atomicity (Supported only in Balanced mode): If a client crashes in
   the middle of the rename, the state of the container should be consistent as
   if the operation never happened.
-* Visibility (Supported in Balanced and Relaxed mode): A write from one client
+- Visibility (Supported in Balanced and Relaxed mode): A write from one client
   should be visible to another client with a simple coordination between the
   clients.
 
@@ -176,7 +176,7 @@ Additionally, there are several optional command-line options:
 | --sys-name=         | DAOS system name                 |
 | --foreground               | run in foreground                |
 | --singlethreaded           | run single threaded              |
-| --thread-count=     | Number of threads to use         |
+| --thread-count=\<count\>   | Number of threads to use         |
 
 When DFuse starts, it will register a single mount with the kernel, at the
 location specified by the `--mountpoint` option. This mount will be
@@ -274,12 +274,12 @@ on the DFuse command line and fine grained control via container attributes.
 
 The following types of data will be cached by default.
 
-* Kernel caching of dentries
-* Kernel caching of negative dentries
-* Kernel caching of inodes (file sizes, permissions etc)
-* Kernel caching of file contents
-* Readahead in dfuse and inserting data into kernel cache
-* MMAP write optimization
+- Kernel caching of dentries
+- Kernel caching of negative dentries
+- Kernel caching of inodes (file sizes, permissions etc)
+- Kernel caching of file contents
+- Readahead in dfuse and inserting data into kernel cache
+- MMAP write optimization
 
 !!! warning
     Caching is enabled by default in dfuse. This might cause some parallel
diff --git a/docs/user/interface.md b/docs/user/interface.md
index 4c8951041dd..293503df9d6 100644
--- a/docs/user/interface.md
+++ b/docs/user/interface.md
@@ -25,9 +25,9 @@ turnaround time for implementing test cases for DAOS.
 
 The Python API is split into several files based on functionality:
 
-* The Python object API:
+- The Python object API:
   [daos_api.py](https://github.com/daos-stack/daos/tree/release/2.2/src/client/pydaos/raw/daos_api.py).
-* The mapping of C structures to Python classes
+- The mapping of C structures to Python classes
   [daos_cref.py](https://github.com/daos-stack/daos/tree/release/2.2/src/client/pydaos/raw/daos_cref.py)
 
 High-level abstraction classes exist to manipulate DAOS storage:
diff --git a/docs/user/mpi-io.md b/docs/user/mpi-io.md
index 934bd17d3d4..99b06a87dea 100644
--- a/docs/user/mpi-io.md
+++ b/docs/user/mpi-io.md
@@ -63,8 +63,8 @@ includes DAOS support since the
 
 Note that Intel MPI uses `libfabric` and includes it as part of the Intel MPI installation:
 
-* 2019.8 and 2019.9 include` libfabric-1.10.1-impi`
-* 2021.1, 2021.2 and 2021.3 includes `libfabric-1.12.1-impi`
+- 2019.8 and 2019.9 include `libfabric-1.10.1-impi`
+- 2021.1, 2021.2 and 2021.3 include `libfabric-1.12.1-impi`
 
 Care must be taken to ensure that the version of libfabric that is used
 is at a level that includes the patches that are critical for DAOS.
@@ -162,4 +162,4 @@ The user still needs to append the `daos:` prefix to the file passed to MPI_File
 
 Limitations of the current implementation include:
 
-* No support for MPI file atomicity, preallocate, or shared file pointers.
+- No support for MPI file atomicity, preallocate, or shared file pointers.
diff --git a/docs/user/python.md b/docs/user/python.md
index 3b2b3d3bb39..e0c5ed83f86 100644
--- a/docs/user/python.md
+++ b/docs/user/python.md
@@ -18,10 +18,8 @@ Python objects allocated by PyDAOS are:
 - **persistent** and identified by a string name. The namespace is shared
   by all the objects and implemented by a root key-value store storing the
   association between names and objects.
-
 - immediately **visible** upon creation to any process running on the same
   or a different node.
-
 - not consuming any significant amount of memory. Objects have a **very low
   memory footprint** since the actual content is stored remotely.  This allows
   manipulation of massive datasets that are way bigger than the amount of
diff --git a/docs/user/tensorflow.md b/docs/user/tensorflow.md
index 3b12e61e1cb..a0bed6ce3bb 100644
--- a/docs/user/tensorflow.md
+++ b/docs/user/tensorflow.md
@@ -152,11 +152,11 @@ import tensorflow_io as tfio
 To use the DFS Plugin, all that needs to be done is to supply the paths of the required
 files/directories in the form of a DFS URI:
 
+```sh
 dfs://<pool>/<cont>/<path>
-
-OR
-
+# OR
 dfs://Path, where Path includes the path to the DAOS container
+```
 
 ```python
 filename = "dfs://POOL_LABEL/CONT_LABEL/FILE_NAME.ext"
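When generating many such paths programmatically, a tiny helper keeps the URI scheme consistent. A sketch (the `dfs_uri` function is illustrative, not part of tensorflow-io):

```python
def dfs_uri(pool: str, cont: str, path: str = "") -> str:
    """Build a dfs:// URI of the form dfs://<pool>/<cont>/<path>
    as expected by the tensorflow-io DFS plugin."""
    uri = f"dfs://{pool}/{cont}"
    if path:
        # Avoid a double slash if the caller passes a leading '/'
        uri += "/" + path.lstrip("/")
    return uri

print(dfs_uri("POOL_LABEL", "CONT_LABEL", "FILE_NAME.ext"))
# dfs://POOL_LABEL/CONT_LABEL/FILE_NAME.ext
```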
diff --git a/docs/user/workflow.md b/docs/user/workflow.md
index 8f83982dabb..496add58464 100644
--- a/docs/user/workflow.md
+++ b/docs/user/workflow.md
@@ -9,9 +9,7 @@ the administrators.
 The typical workflow consists of:
 
 - New project members meet and define storage requirements, including space, bandwidth, IOPS & data protection needs.
-
 - Administrators collect those requirements, create a pool for the new project and set relevant ACL to grant access to project members.
-
 - Administrators notify the project members that the pool has been created and provide the pool label to the users.
 
 Users can then create containers (i.e., datasets or buckets) in their pool.

From 364bafead0fc49ef88aab5be763337cccea74d70 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Tue, 3 May 2022 12:58:13 -0700
Subject: [PATCH 07/14] Misc format adjustments

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/overview/fault.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/overview/fault.md b/docs/overview/fault.md
index 779ebcc5c35..917bead5c9c 100644
--- a/docs/overview/fault.md
+++ b/docs/overview/fault.md
@@ -63,7 +63,7 @@ of pool map invalidation each time they communicate with any engine. To do so,
 clients pack their current pool map version in every RPC. Servers reply not
 only with the requested data, but also with the current pool map version. Consequently, when a DAOS client
 experiences RPC timeout, it regularly communicates with the other DAOS
-target to guarantee its pool map is always current. Clients will 
+target to guarantee its pool map is always current. Clients will
 eventually be informed of the target exclusion and enter into degraded mode.
 
 This mechanism guarantees global node eviction and that all nodes eventually

From cde022006d30aafcd72fc0c8b1891bfc131f60ca Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Tue, 3 May 2022 13:18:52 -0700
Subject: [PATCH 08/14] Update troubleshooting.md

---
 docs/admin/troubleshooting.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/admin/troubleshooting.md b/docs/admin/troubleshooting.md
index ddc6b777a38..3bee14aca26 100644
--- a/docs/admin/troubleshooting.md
+++ b/docs/admin/troubleshooting.md
@@ -337,7 +337,6 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The
   # Error message or timeout after dmg system query
   $ dmg system query
   ERROR: dmg: Unable to load Certificate Data: could not load cert: stat /etc/daos/certs/admin.crt: no such file or directory
-  
   # Workaround
     # 1. Make sure the admin-host /etc/daos/daos_control.yml is correctly configured.
       # including:

From b8cbb6fd3a279b5a981a8764c99e8386e9d88206 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Tue, 3 May 2022 13:19:57 -0700
Subject: [PATCH 09/14] Update tensorflow.md

---
 docs/user/tensorflow.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user/tensorflow.md b/docs/user/tensorflow.md
index a0bed6ce3bb..eb331d85f6c 100644
--- a/docs/user/tensorflow.md
+++ b/docs/user/tensorflow.md
@@ -46,7 +46,7 @@ Assuming you are in a terminal in the repository root directory:
   - Ubuntu 20.04
 
        ```bash
-       sudo apt-get -y -qq update 
+       sudo apt-get -y -qq update
        sudo apt-get -y -qq install gcc g++ git unzip curl python3-pip
        ```
 

From 9e6dcf8533d99c1782baeda04f735a50946cb67f Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Tue, 3 May 2022 13:21:55 -0700
Subject: [PATCH 10/14] Update administration.md

---
 docs/admin/administration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/admin/administration.md b/docs/admin/administration.md
index 1c555b12fbe..b3e0c84305b 100644
--- a/docs/admin/administration.md
+++ b/docs/admin/administration.md
@@ -840,7 +840,7 @@ DAOS I/O Engines will be started, and all DAOS pools will have been removed.
     $ wipefs -a /dev/pmem0
     $ wipefs -a /dev/pmem0
     ```
-    
+
     Then restart DAOS Servers and format.
 
 ### System Erase

From 228184ecea2adbd3141df9e3155b576b34921eb3 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 6 May 2022 09:20:25 -0700
Subject: [PATCH 11/14] Revert "Update troubleshooting.md"

This reverts commit cde022006d30aafcd72fc0c8b1891bfc131f60ca.
---
 docs/admin/troubleshooting.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/admin/troubleshooting.md b/docs/admin/troubleshooting.md
index 3bee14aca26..ddc6b777a38 100644
--- a/docs/admin/troubleshooting.md
+++ b/docs/admin/troubleshooting.md
@@ -337,6 +337,7 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The
   # Error message or timeout after dmg system query
   $ dmg system query
   ERROR: dmg: Unable to load Certificate Data: could not load cert: stat /etc/daos/certs/admin.crt: no such file or directory
+  
   # Workaround
     # 1. Make sure the admin-host /etc/daos/daos_control.yml is correctly configured.
       # including:

From 31593e5ab498b10b4a80a42b1079a41b4a982291 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 6 May 2022 09:21:59 -0700
Subject: [PATCH 12/14] Revert "Update administration.md"

This reverts commit 9e6dcf8533d99c1782baeda04f735a50946cb67f.
---
 docs/admin/administration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/admin/administration.md b/docs/admin/administration.md
index b3e0c84305b..1c555b12fbe 100644
--- a/docs/admin/administration.md
+++ b/docs/admin/administration.md
@@ -840,7 +840,7 @@ DAOS I/O Engines will be started, and all DAOS pools will have been removed.
     $ wipefs -a /dev/pmem0
     $ wipefs -a /dev/pmem0
     ```
-
+    
     Then restart DAOS Servers and format.
 
 ### System Erase

From 984c3a3581b568ff423da8b90b69ebc72942d789 Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 6 May 2022 09:22:12 -0700
Subject: [PATCH 13/14] Revert "Update tensorflow.md"

This reverts commit b8cbb6fd3a279b5a981a8764c99e8386e9d88206.
---
 docs/user/tensorflow.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user/tensorflow.md b/docs/user/tensorflow.md
index eb331d85f6c..a0bed6ce3bb 100644
--- a/docs/user/tensorflow.md
+++ b/docs/user/tensorflow.md
@@ -46,7 +46,7 @@ Assuming you are in a terminal in the repository root directory:
   - Ubuntu 20.04
 
        ```bash
-       sudo apt-get -y -qq update
+       sudo apt-get -y -qq update 
        sudo apt-get -y -qq install gcc g++ git unzip curl python3-pip
        ```
 

From a67534c3dc28ed4b934009e339100ce87117f55a Mon Sep 17 00:00:00 2001
From: JoeOster <52936608+JoeOster@users.noreply.github.com>
Date: Fri, 6 May 2022 09:27:42 -0700
Subject: [PATCH 14/14] fixing space issues

Signed-off-by: JoeOster <52936608+JoeOster@users.noreply.github.com>
---
 docs/admin/administration.md  | 2 +-
 docs/admin/troubleshooting.md | 2 +-
 docs/user/tensorflow.md       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/admin/administration.md b/docs/admin/administration.md
index 1c555b12fbe..b3e0c84305b 100644
--- a/docs/admin/administration.md
+++ b/docs/admin/administration.md
@@ -840,7 +840,7 @@ DAOS I/O Engines will be started, and all DAOS pools will have been removed.
     $ wipefs -a /dev/pmem0
     $ wipefs -a /dev/pmem0
     ```
-    
+
     Then restart DAOS Servers and format.
 
 ### System Erase
diff --git a/docs/admin/troubleshooting.md b/docs/admin/troubleshooting.md
index ddc6b777a38..4a476e88c68 100644
--- a/docs/admin/troubleshooting.md
+++ b/docs/admin/troubleshooting.md
@@ -337,7 +337,7 @@ Verify if you're using Infiniband for `fabric_iface`: in the server config. The
   # Error message or timeout after dmg system query
   $ dmg system query
   ERROR: dmg: Unable to load Certificate Data: could not load cert: stat /etc/daos/certs/admin.crt: no such file or directory
-  
+
   # Workaround
     # 1. Make sure the admin-host /etc/daos/daos_control.yml is correctly configured.
       # including:
diff --git a/docs/user/tensorflow.md b/docs/user/tensorflow.md
index a0bed6ce3bb..eb331d85f6c 100644
--- a/docs/user/tensorflow.md
+++ b/docs/user/tensorflow.md
@@ -46,7 +46,7 @@ Assuming you are in a terminal in the repository root directory:
   - Ubuntu 20.04
 
        ```bash
-       sudo apt-get -y -qq update 
+       sudo apt-get -y -qq update
        sudo apt-get -y -qq install gcc g++ git unzip curl python3-pip
        ```