From 7d3f7385aabed3dbd5d3524e1e175f90095b7262 Mon Sep 17 00:00:00 2001
From: Christoph Anton Mitterer
Date: Wed, 10 Jul 2013 16:09:48 +0200
Subject: [PATCH 1/9] description of how reads/writes work in detail

Adding detailed descriptions of the following to the md(4) manpage:
* How reads/writes work in detail, especially with respect to the
  minimum/maximum number of bytes that are always fully read/written.
* How reads/writes work when the array is degraded and/or when a rebuild
  takes place.
* Some general concepts of how the chunk size affects reads/writes.

Signed-off-by: Christoph Anton Mitterer
---
 md.4 | 152 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 152 insertions(+)

diff --git a/md.4 b/md.4
index 2574c37e..9553970b 100644
--- a/md.4
+++ b/md.4
@@ -352,6 +352,158 @@ transient.
 The list of faulty sectors can be flushed, and the active list of
 failure modes can be cleared.
 
+.SS HOW MD READS/WRITES DEPENDING ON THE LEVEL AND CHUNK SIZE
+
+The following explains how MD reads\ /\ writes data depending on the MD\ level;
+\fIespecially how many bytes are consecutively read\ /\ written fully at once
+from\ /\ to the underlying device(s)\fP.
+.br
+Further block layers below MD may influence and change this of course.
+
+Generally, the number of bytes read\ /\ written is \fIindependent of the chunk
+size\fP.
+
+.TP
+.B LINEAR
+Reads\ /\ writes as many bytes as requested by the block layer (for example MD,
+dm-crypt, LVM or a filesystem) above MD.
+
+As data is neither striped nor mirrored in chunks over the devices, no IO
+distribution takes place on reads\ /\ writes.
+
+There is no resynchronisation nor can the MD be degraded.
+.PP
+
+.TP
+.B RAID0
+Reads\ /\ writes as many bytes as requested by the block layer (for example MD,
+dm-crypt, LVM or a filesystem) above MD \fIup to the chunk size\fP (obviously,
+if any of the block layers above is not aligned with MD, even less will at most
+be read\ /\ written).
+
+As data is striped in chunks over the devices, IO distribution takes place on
+reads\ /\ writes.
+
+There is no resynchronisation nor can the MD be degraded.
+.PP
+
+.TP
+.B RAID1
+Reads\ /\ writes as many bytes as requested by the block layer (for example MD,
+dm-crypt, LVM or a filesystem) above MD.
+
+As data is mirroed in chunks over the devices, IO distribution takes place on
+reads, with MD automatically selecting the optimal device (for example that
+with the minimum seek time).
+.br
+On writes, data must be written to all the deivces, though.
+
+On resynchronisation data will be IO distributedly read from the devices that
+are synchronised and written to all those needed to be synchonised.
+
+When degraded, failed devices won’t be used for reads\ /\ writes.
+.PP
+
+.TP
+.B RAID10
+Reads\ /\ writes as many bytes as requested by the block layer (for example MD,
+dm-crypt, LVM or a filesystem) above MD \fIup to the chunk size\fP (obviously,
+if any of the block layers above is not aligned with MD, even less will at most
+be read\ /\ written).
+
+As data is mirroed and striped in chunks over some of the devices, IO
+distribution takes place on reads, with MD automatically selecting the optimal
+device (for example that with the minimum seek time).
+.br
+On writes, data must be written to all of the respectively mirrored deivces,
+though.
+
+On resynchronisation data will be IO distributedly read from the devices that
+are synchronised and written to those needed to be synchonised.
+
+When degraded, failed devices won’t be used for reads\ /\ writes.
+.PP + +.TP +.B RAID4, RAID5, and RAID6 +\fIWhen not degraded on reads\fP: +.br +Reads as many bytes as requested by the block +layer (for example MD, dm-crypt, LVM or a filesystem) above MD \fIup to the +chunk size\fP (obviously, if any of the block layers above is not aligned with +MD, even less will at most be read). +.br +\fIWhen degraded on reads\fP \fBor\fP \fIalways on writes\fP: +.br +Reads\ /\ writes \fIgenerally\fP in blocks of 4\ KiB (hoping that block layers +below MD will optimise this). + +\fIWhen not degraded\fP: +As data is striped in chunks over the devices, IO distribution takes place on +reads (using the different data chunks but not the parity chunk(s)). +.br +On writes, data and parity must be written to the respective devices (that is +1\ device with the respective data chunk and 1\ (in case of RAID4 or RAID5) or +2\ (in case of RAID6) device(s) with the respective parity chunk(s). These +writes but also any necessary reads are done blocks of 4\ KiB. +.br +\fIWhen degraded or on resynchronisation\fP: +Failed devices won’t be used for reads\ /\ writes. +.br +In order to read from within a failed data chunk, the respective blocks of +4\ KiB are read from all the other corresponding data and parity chunks and the +failed data is calculated from these. +.br +Resynchronising works analogously with the addition of writing the missing data +or parity, which happens again in blocks of 4\ KiB. +.PP + + +.TP +.B Chunk Size +The chunk size has no effect for the non-striped levels LINEAR and RAID1. +.br +Fruther, MD’s reads\ /\ writes are in general \fInot\fP in blocks of the chunk +size (see above). + +For the levels RAID0, RAID10, RAID4, RAID5 and RAID6 it controls the number of +consecutive data bytes placed on one device before the following data bytes +continue at a “next” device. +.br +Obviously it also controls the size of any parity chunks, but \fIthe actual +parity data itself is split into blocks of 4\ KiB\fP (within a parity chunk). + +With striped levels, IO distribution on reads\ /\ writes takes place over the +devices where possible. +.br +The main effect of the chunk size is basically how much data is consecutively +read\ /\ written from\ /\ to a single deivce, (typically) before it has to seek +to an arbitrary other (on random reads\ /\ writes) or the “next” (on sequential +reads\ /\ writes) chunk (on the same device). Due to the striping, “next chunk” +doesn’t necessarily mean directly consecutive data (as this may be on the “next” +device), but rather the “next” of consecutive data found \fIon the respective +device\fP. + +The ideal chunk size depends greatly on the IO scenario, some general guidelines +include: +.br +• On sequential reads\ /\ writes, having to read\ /\ write from\ /\ to less +chunks is faster (for example since less seeks may be necessary) and thus a +larger chunk size may be better. +.br +This applies analogously for “pseudo-random” reads\ /\ writes, that is not +strictly sequential ones but such that take place in a very close consecutive +area. + +• For very large sequential reads\ /\ writes, this may apply less, since larger +chunk sizes tend to result in larger IO requests to the underlying devices. + +• For reads\ /\ writes, the stripe size (that is ) should ideally match the typical size for reads\ /\ writes in the +respective scenario. 
+.PP + + .SS UNCLEAN SHUTDOWN When changes are made to a RAID1, RAID4, RAID5, RAID6, or RAID10 array From df3af895ee342206d0339f9fe435bf39f59ffb25 Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Tue, 16 Jul 2013 17:43:44 +0200 Subject: [PATCH 2/9] fix some typos Signed-off-by: Christoph Anton Mitterer --- md.4 | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/md.4 b/md.4 index 9553970b..e237d92f 100644 --- a/md.4 +++ b/md.4 @@ -392,14 +392,14 @@ There is no resynchronisation nor can the MD be degraded. Reads\ /\ writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD. -As data is mirroed in chunks over the devices, IO distribution takes place on +As data is mirrored in chunks over the devices, IO distribution takes place on reads, with MD automatically selecting the optimal device (for example that with the minimum seek time). .br -On writes, data must be written to all the deivces, though. +On writes, data must be written to all the devices, though. On resynchronisation data will be IO distributedly read from the devices that -are synchronised and written to all those needed to be synchonised. +are synchronised and written to all those needed to be synchronised. When degraded, failed devices won’t be used for reads\ /\ writes. .PP @@ -463,7 +463,7 @@ or parity, which happens again in blocks of 4\ KiB. .B Chunk Size The chunk size has no effect for the non-striped levels LINEAR and RAID1. .br -Fruther, MD’s reads\ /\ writes are in general \fInot\fP in blocks of the chunk +Further, MD’s reads\ /\ writes are in general \fInot\fP in blocks of the chunk size (see above). For the levels RAID0, RAID10, RAID4, RAID5 and RAID6 it controls the number of @@ -477,7 +477,7 @@ With striped levels, IO distribution on reads\ /\ writes takes place over the devices where possible. .br The main effect of the chunk size is basically how much data is consecutively -read\ /\ written from\ /\ to a single deivce, (typically) before it has to seek +read\ /\ written from\ /\ to a single device, (typically) before it has to seek to an arbitrary other (on random reads\ /\ writes) or the “next” (on sequential reads\ /\ writes) chunk (on the same device). Due to the striping, “next chunk” doesn’t necessarily mean directly consecutive data (as this may be on the “next” From cf4b381e9f29cd5befa2868315389ccabcc367fb Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Tue, 16 Jul 2013 17:52:00 +0200 Subject: [PATCH 3/9] =?UTF-8?q?change=20"=E2=80=A6=20/=20=E2=80=A6"=20styl?= =?UTF-8?q?e=20to=20"=E2=80=A6/=E2=80=A6"?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Christoph Anton Mitterer --- md.4 | 63 ++++++++++++++++++++++++++++++------------------------------ 1 file changed, 31 insertions(+), 32 deletions(-) diff --git a/md.4 b/md.4 index e237d92f..706eeab3 100644 --- a/md.4 +++ b/md.4 @@ -354,42 +354,42 @@ failure modes can be cleared. .SS HOW MD READS/WRITES DEPENDING ON THE LEVEL AND CHUNK SIZE -The following explains how MD reads\ /\ writes data depending on the MD\ level; -\fIespecially how many bytes are consecutively read\ /\ written fully at once -from\ /\ to the underlying device(s)\fP. +The following explains how MD reads/writes data depending on the MD\ level; +\fIespecially how many bytes are consecutively read/written fully at once +from/to the underlying device(s)\fP. .br Further block layers below MD may influence and change this of course. 
-Generally, the number of bytes read\ /\ written is \fIindependent of the chunk +Generally, the number of bytes read/written is \fIindependent of the chunk size\fP. .TP .B LINEAR -Reads\ /\ writes as many bytes as requested by the block layer (for example MD, +Reads/writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD. As data is neither striped nor mirrored in chunks over the devices, no IO -distribution takes place on reads\ /\ writes. +distribution takes place on reads/writes. There is no resynchronisation nor can the MD be degraded. .PP .TP .B RAID0 -Reads\ /\ writes as many bytes as requested by the block layer (for example MD, +Reads/writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD \fIup to the chunk size\fP (obviously, if any of the block layers above is not aligned with MD, even less will at most -be read\ /\ written). +be read/written). As data is striped in chunks over the devices, IO distribution takes place on -reads\ /\ writes. +reads/writes. There is no resynchronisation nor can the MD be degraded. .PP .TP .B RAID1 -Reads\ /\ writes as many bytes as requested by the block layer (for example MD, +Reads/writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD. As data is mirrored in chunks over the devices, IO distribution takes place on @@ -401,15 +401,15 @@ On writes, data must be written to all the devices, though. On resynchronisation data will be IO distributedly read from the devices that are synchronised and written to all those needed to be synchronised. -When degraded, failed devices won’t be used for reads\ /\ writes. +When degraded, failed devices won’t be used for reads/writes. .PP .TP .B RAID10 -Reads\ /\ writes as many bytes as requested by the block layer (for example MD, +Reads/writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD \fIup to the chunk size\fP (obviously, if any of the block layers above is not aligned with MD, even less will at most -be read\ /\ written). +be read/written). As data is mirroed and striped in chunks over some of the devices, IO distribution takes place on reads, with MD automatically selecting the optimal @@ -421,7 +421,7 @@ though. On resynchronisation data will be IO distributedly read from the devices that are synchronised and written to those needed to be synchonised. -When degraded, failed devices won’t be used for reads\ /\ writes. +When degraded, failed devices won’t be used for reads/writes. .PP .TP @@ -435,8 +435,8 @@ MD, even less will at most be read). .br \fIWhen degraded on reads\fP \fBor\fP \fIalways on writes\fP: .br -Reads\ /\ writes \fIgenerally\fP in blocks of 4\ KiB (hoping that block layers -below MD will optimise this). +Reads/writes \fIgenerally\fP in blocks of 4\ KiB (hoping that block layers below +MD will optimise this). \fIWhen not degraded\fP: As data is striped in chunks over the devices, IO distribution takes place on @@ -448,7 +448,7 @@ On writes, data and parity must be written to the respective devices (that is writes but also any necessary reads are done blocks of 4\ KiB. .br \fIWhen degraded or on resynchronisation\fP: -Failed devices won’t be used for reads\ /\ writes. +Failed devices won’t be used for reads/writes. 
.br In order to read from within a failed data chunk, the respective blocks of 4\ KiB are read from all the other corresponding data and parity chunks and the @@ -463,8 +463,8 @@ or parity, which happens again in blocks of 4\ KiB. .B Chunk Size The chunk size has no effect for the non-striped levels LINEAR and RAID1. .br -Further, MD’s reads\ /\ writes are in general \fInot\fP in blocks of the chunk -size (see above). +Further, MD’s reads/writes are in general \fInot\fP in blocks of the chunk size +(see above). For the levels RAID0, RAID10, RAID4, RAID5 and RAID6 it controls the number of consecutive data bytes placed on one device before the following data bytes @@ -473,13 +473,13 @@ continue at a “next” device. Obviously it also controls the size of any parity chunks, but \fIthe actual parity data itself is split into blocks of 4\ KiB\fP (within a parity chunk). -With striped levels, IO distribution on reads\ /\ writes takes place over the +With striped levels, IO distribution on reads/writes takes place over the devices where possible. .br The main effect of the chunk size is basically how much data is consecutively -read\ /\ written from\ /\ to a single device, (typically) before it has to seek -to an arbitrary other (on random reads\ /\ writes) or the “next” (on sequential -reads\ /\ writes) chunk (on the same device). Due to the striping, “next chunk” +read/written from/to a single device, (typically) before it has to seek to an +arbitrary other (on random reads/writes) or the “next” (on sequential +reads/writes) chunk (on the same device). Due to the striping, “next chunk” doesn’t necessarily mean directly consecutive data (as this may be on the “next” device), but rather the “next” of consecutive data found \fIon the respective device\fP. @@ -487,19 +487,18 @@ device\fP. The ideal chunk size depends greatly on the IO scenario, some general guidelines include: .br -• On sequential reads\ /\ writes, having to read\ /\ write from\ /\ to less -chunks is faster (for example since less seeks may be necessary) and thus a -larger chunk size may be better. +• On sequential reads/writes, having to read/write from/to less chunks is faster +(for example since less seeks may be necessary) and thus a larger chunk size may +be better. .br -This applies analogously for “pseudo-random” reads\ /\ writes, that is not -strictly sequential ones but such that take place in a very close consecutive -area. +This applies analogously for “pseudo-random” reads/writes, that is not strictly +sequential ones but such that take place in a very close consecutive area. -• For very large sequential reads\ /\ writes, this may apply less, since larger +• For very large sequential reads/writes, this may apply less, since larger chunk sizes tend to result in larger IO requests to the underlying devices. -• For reads\ /\ writes, the stripe size (that is ) should ideally match the typical size for reads\ /\ writes in the +• For reads/writes, the stripe size (that is ) should ideally match the typical size for reads/writes in the respective scenario. .PP From 5c942f497ad2137c0766d601b4fdd5391153b872 Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Tue, 16 Jul 2013 18:04:33 +0200 Subject: [PATCH 4/9] use PAGE_SIZE instead of 4 KiB Signed-off-by: Christoph Anton Mitterer --- md.4 | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/md.4 b/md.4 index 706eeab3..b50292c8 100644 --- a/md.4 +++ b/md.4 @@ -435,8 +435,8 @@ MD, even less will at most be read). 
.br \fIWhen degraded on reads\fP \fBor\fP \fIalways on writes\fP: .br -Reads/writes \fIgenerally\fP in blocks of 4\ KiB (hoping that block layers below -MD will optimise this). +Reads/writes \fIgenerally\fP in blocks of \fBPAGE_SIZE\fP (hoping that block +layers below MD will optimise this). \fIWhen not degraded\fP: As data is striped in chunks over the devices, IO distribution takes place on @@ -445,17 +445,17 @@ reads (using the different data chunks but not the parity chunk(s)). On writes, data and parity must be written to the respective devices (that is 1\ device with the respective data chunk and 1\ (in case of RAID4 or RAID5) or 2\ (in case of RAID6) device(s) with the respective parity chunk(s). These -writes but also any necessary reads are done blocks of 4\ KiB. +writes but also any necessary reads are done in blocks of \fBPAGE_SIZE\fP. .br \fIWhen degraded or on resynchronisation\fP: Failed devices won’t be used for reads/writes. .br In order to read from within a failed data chunk, the respective blocks of -4\ KiB are read from all the other corresponding data and parity chunks and the -failed data is calculated from these. +\fBPAGE_SIZE\fP are read from all the other corresponding data and parity chunks +and the failed data is calculated from these. .br Resynchronising works analogously with the addition of writing the missing data -or parity, which happens again in blocks of 4\ KiB. +or parity, which happens again in blocks of \fBPAGE_SIZE\fP. .PP @@ -471,7 +471,9 @@ consecutive data bytes placed on one device before the following data bytes continue at a “next” device. .br Obviously it also controls the size of any parity chunks, but \fIthe actual -parity data itself is split into blocks of 4\ KiB\fP (within a parity chunk). +parity data itself is split into blocks of\fP +.BI PAGE_SIZE +(within a parity chunk). With striped levels, IO distribution on reads/writes takes place over the devices where possible. From 0a2b2cc113b911f60900e0df1eb54b0354380feb Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Thu, 18 Jul 2013 00:20:26 +0200 Subject: [PATCH 5/9] don't use words that Neil doesn't know ;-) Signed-off-by: Christoph Anton Mitterer --- md.4 | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/md.4 b/md.4 index b50292c8..d52f4ab5 100644 --- a/md.4 +++ b/md.4 @@ -398,8 +398,8 @@ with the minimum seek time). .br On writes, data must be written to all the devices, though. -On resynchronisation data will be IO distributedly read from the devices that -are synchronised and written to all those needed to be synchronised. +On resynchronisation data will be read in an IO distributed way from the devices +that are synchronised and written to all those needed to be synchronised. When degraded, failed devices won’t be used for reads/writes. .PP From 9b4fb7f5b202a624bee2c208108ca3eeedd0e603 Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Thu, 18 Jul 2013 00:52:59 +0200 Subject: [PATCH 6/9] chunks aren't relevant in the mirroring Signed-off-by: Christoph Anton Mitterer --- md.4 | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/md.4 b/md.4 index d52f4ab5..2c3d060e 100644 --- a/md.4 +++ b/md.4 @@ -392,9 +392,9 @@ There is no resynchronisation nor can the MD be degraded. Reads/writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD. 
-As data is mirrored in chunks over the devices, IO distribution takes place on -reads, with MD automatically selecting the optimal device (for example that -with the minimum seek time). +As data is mirrored over the devices, IO distribution takes place on reads, with +MD automatically selecting the optimal device (for example that with the minimum +seek time). .br On writes, data must be written to all the devices, though. @@ -411,9 +411,9 @@ dm-crypt, LVM or a filesystem) above MD \fIup to the chunk size\fP (obviously, if any of the block layers above is not aligned with MD, even less will at most be read/written). -As data is mirroed and striped in chunks over some of the devices, IO -distribution takes place on reads, with MD automatically selecting the optimal -device (for example that with the minimum seek time). +As data is mirroed over some of the devices and also striped in chunks over some +of the devices, IO distribution takes place on reads, with MD automatically +selecting the optimal device (for example that with the minimum seek time). .br On writes, data must be written to all of the respectively mirrored deivces, though. From f2f18d0e2b018bcc2ad2849af3f5dba3b3dde5be Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Thu, 18 Jul 2013 01:01:17 +0200 Subject: [PATCH 7/9] clarifying that this is only a heuristic Signed-off-by: Christoph Anton Mitterer --- md.4 | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/md.4 b/md.4 index 2c3d060e..35b36304 100644 --- a/md.4 +++ b/md.4 @@ -393,8 +393,8 @@ Reads/writes as many bytes as requested by the block layer (for example MD, dm-crypt, LVM or a filesystem) above MD. As data is mirrored over the devices, IO distribution takes place on reads, with -MD automatically selecting the optimal device (for example that with the minimum -seek time). +MD trying to heuristically select the optimal device (for example that with the +minimum seek time). .br On writes, data must be written to all the devices, though. @@ -412,8 +412,9 @@ if any of the block layers above is not aligned with MD, even less will at most be read/written). As data is mirroed over some of the devices and also striped in chunks over some -of the devices, IO distribution takes place on reads, with MD automatically -selecting the optimal device (for example that with the minimum seek time). +of the devices, IO distribution takes place on reads, with MD trying to +heuristically select the optimal device (for example that with the minimum seek +time). .br On writes, data must be written to all of the respectively mirrored deivces, though. From f60e525eec68f6444c51160cef43d04484844622 Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Thu, 18 Jul 2013 02:58:05 +0200 Subject: [PATCH 8/9] no IO distribution on recovery for RAID1/10 Signed-off-by: Christoph Anton Mitterer --- md.4 | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/md.4 b/md.4 index 35b36304..e37f1ceb 100644 --- a/md.4 +++ b/md.4 @@ -398,8 +398,9 @@ minimum seek time). .br On writes, data must be written to all the devices, though. -On resynchronisation data will be read in an IO distributed way from the devices -that are synchronised and written to all those needed to be synchronised. +On resynchronisation data will be read from the “first” usable device (that is +the device with the lowest role number that has not failed) and written to all +those needed to be synchronised (there is no IO distribution). 
When degraded, failed devices won’t be used for reads/writes. .PP @@ -419,8 +420,10 @@ time). On writes, data must be written to all of the respectively mirrored deivces, though. -On resynchronisation data will be IO distributedly read from the devices that -are synchronised and written to those needed to be synchonised. +On resynchronisation data will be read from the “first” usable device (that is +the device with the lowest role number that holds the data and that has not +failed) and written to all those needed to be synchronised (there is no IO +distribution). When degraded, failed devices won’t be used for reads/writes. .PP From 814d44172785849642f8d31e02668d6abccd7b16 Mon Sep 17 00:00:00 2001 From: Christoph Anton Mitterer Date: Thu, 18 Jul 2013 03:13:30 +0200 Subject: [PATCH 9/9] use lists Signed-off-by: Christoph Anton Mitterer --- md.4 | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/md.4 b/md.4 index e37f1ceb..fbf102ac 100644 --- a/md.4 +++ b/md.4 @@ -492,20 +492,22 @@ device\fP. The ideal chunk size depends greatly on the IO scenario, some general guidelines include: -.br -• On sequential reads/writes, having to read/write from/to less chunks is faster +.RS +.IP \(bu 2 +On sequential reads/writes, having to read/write from/to less chunks is faster (for example since less seeks may be necessary) and thus a larger chunk size may be better. .br This applies analogously for “pseudo-random” reads/writes, that is not strictly sequential ones but such that take place in a very close consecutive area. - -• For very large sequential reads/writes, this may apply less, since larger +.IP \(bu 2 +For very large sequential reads/writes, this may apply less, since larger chunk sizes tend to result in larger IO requests to the underlying devices. - -• For reads/writes, the stripe size (that is ) should ideally match the typical size for reads/writes in the respective scenario. +.RE .PP
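
As an aside to the "Chunk Size" text added by this series: the following is a
minimal Python sketch (not taken from the md driver, and not part of the patch
series itself) of how a logical byte offset maps onto a member device under
plain RAID0 striping. It assumes equally sized members and the usual
round-robin chunk placement; the function name raid0_map and the example
figures (512 KiB chunks, 4 devices) are made up for illustration.

    #!/usr/bin/env python3
    # Illustrative only: compute which member device and device offset a
    # logical byte offset of a striped array lands on, assuming plain RAID0
    # with equally sized members and round-robin chunk placement.

    def raid0_map(offset, chunk_size, n_devices):
        """Return (device_index, device_offset) for a logical byte offset."""
        chunk_index = offset // chunk_size       # which chunk of the array
        within_chunk = offset % chunk_size       # position inside that chunk
        device_index = chunk_index % n_devices   # chunks are dealt out in turn
        stripe_index = chunk_index // n_devices  # full stripes before this one
        return device_index, stripe_index * chunk_size + within_chunk

    if __name__ == "__main__":
        chunk = 512 * 1024            # 512 KiB chunk size (example value)
        devices = 4                   # example member count
        stripe = chunk * devices      # stripe size = chunk size * data devices
        print("stripe size: %d KiB" % (stripe // 1024))
        for off in (0, chunk - 1, chunk, 3 * chunk, stripe, stripe + 5):
            dev, dev_off = raid0_map(off, chunk, devices)
            print("logical %10d -> device %d, offset %d" % (off, dev, dev_off))

A stripe-sized, stripe-aligned request is the largest IO that still touches
every member exactly once (one chunk each), which is the point of the guideline
that the stripe size (the chunk size multiplied by the number of data-bearing
devices) should match the typical request size.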