author | Jens Axboe <axboe@kernel.dk> | 2012-11-22 13:50:29 +0100
---|---|---
committer | Jens Axboe <axboe@kernel.dk> | 2012-11-22 13:50:29 +0100
commit | 51ede0b1e9c9b570b942b50b44d0455183a0d5ec (patch)
tree | 1cc069f169869580c8e51b532fac39f122d81c34 /io_ddir.h
parent | ec5c6b125c1eab992882602158bab54957aa733d (diff)
download | fio-51ede0b1e9c9b570b942b50b44d0455183a0d5ec.tar.gz
Rework file random map
Fio slows down at the end of a random IO run, when the random
map is in use and gets fuller, causing a drop in IOPS. This is
largely because the file random map is an array of bits, and
with random access to single bits of the array at a time,
locality is awful. The effect is observable throughout a run,
which gradually gets slower and slower; it just becomes more
apparent near the end, where the last 10% are fairly bad. This
holds even with a bunch of tricks to reduce that cost.
Implement an N-level bitmap, where layer N uses a single bit
to represent 32/64-bits at layer N-1. The number of layers
depends on the number of blocks.
In theory this has slightly higher overhead initially; in
practice it performs about the same. As a bonus, the throughput
remains constant during the run, and even improves near the
end.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_ddir.h')
-rw-r--r-- | io_ddir.h | 2
1 files changed, 1 insertions, 1 deletions
@@ -30,7 +30,7 @@ enum td_ddir {
 #define td_trim(td)	((td)->o.td_ddir & TD_DDIR_TRIM)
 #define td_rw(td)	(((td)->o.td_ddir & TD_DDIR_RW) == TD_DDIR_RW)
 #define td_random(td)	((td)->o.td_ddir & TD_DDIR_RAND)
-#define file_randommap(td, f)	(!(td)->o.norandommap && (f)->file_map)
+#define file_randommap(td, f)	(!(td)->o.norandommap && (f)->io_bitmap)
 
 static inline int ddir_sync(enum fio_ddir ddir)
 {