A Systematic Configuration Space Exploration of the Linux Kyber I/O Scheduler

NVMe SSDs have become the de-facto storage choice for high-performance I/O-intensive workloads. Often, these workloads are run in a shared setting, such as in multi-tenant clouds where they share access to fast NVMe storage. In such a shared setting, ensuring quality of service among competing workloads can be challenging. To offer performance differentiation to I/O requests, various SSD-optimized I/O schedulers have been designed. However, many of them are either not publicly available or are yet to be proven in a production setting. Among the widely-tested I/O schedulers available in the Linux kernel, it has been shown that \kyber is one of the best-fit schedulers for SSDs due to its low CPU overheads and high scalability. However, \kyber has various configuration options, and there is limited knowledge on how to configure \kyber to improve applications' performance. In this paper, we systematically characterize how \kyber's configurations affect the performance of I/O workloads and how this effect differs across file systems and storage devices. We report 11 observations and distill 5 guidelines, which indicate that (i) \kyber can deliver up to 26.3% lower read latency than the \none scheduler with interfering write workloads; (ii) with a file system, \kyber can be configured to deliver up to 35.9% lower read latency at the cost of 34.5%--50.3% lower write throughput, allowing users to trade off read latency against write throughput; and (iii) \kyber leads to performance losses when it is used with multiple throughput-bound workloads and the SSD is not the bottleneck. Our benchmarking scripts and results are open-sourced and available at: https://github.com/stonet-research/hotcloudperf24-kyber-artifact-public.


INTRODUCTION
Modern high-performance solid-state drives (SSDs) are able to deliver millions of I/O operations per second (IOPS) with single-digit microsecond-level latency [5,11,12]. These devices are widely used in multi-tenant cloud environments for their improved performance over hard disks [30,36,46]. Cloud providers need to provide quality of service (QoS) guarantees for I/O services, such as throughput or tail latency service-level objectives, across multiple tenants. These guarantees are usually achieved by scheduling I/O requests with an I/O scheduler [29,34].
However, existing Linux I/O schedulers designed for hard disks do not work well with these high-performance SSDs and induce significant CPU and scalability overheads [38,42]. To reduce these overheads, many state-of-the-art I/O schedulers have been designed for SSDs [21,24,26,27,31-33,35,39,41,43]. Despite these studies, using these published I/O schedulers is challenging. Many of them do not have publicly available source code, are written for a specific kernel version, or assume specific hardware support from SSDs [24,43]. Thus, users need to implement these I/O schedulers in the Linux kernel themselves, which is not trivial, preventing their widespread use.
Compared to these state-of-the-art I/O schedulers, the state-of-the-practice plug-and-play Linux I/O schedulers [7], Kyber [6], MQ-Deadline [8], and BFQ [2], are the most accessible schedulers. In our past studies, we demonstrated that Kyber has low CPU overhead and high scalability on fast SSDs and recommended using Kyber on high-performance SSDs [37,38]. We also identified that Kyber's configuration significantly impacts workload performance in terms of latency and throughput, and that this impact differs between workloads [37]. Kyber provides two configurable parameters, the read and write target latencies, allowing users to set the target latencies that Kyber should try to deliver. The effect of Kyber's configurations, and the variation of this effect across workloads, create challenges in using Kyber in practice. There is no existing study on configuring Kyber for specific software and hardware settings; specifically, on how to find an optimized Kyber configuration for a given combination of (1) workload, (2) file system, and (3) type of SSD.
In our study, we cover these three aspects to show the effect of Kyber's configurations on its performance. Firstly, workloads have different I/O patterns and latency/throughput requirements [23,24]. Existing studies of Kyber focus on its CPU and latency overhead, scalability, and its ability to deliver low latency for foreground workloads [23,33,38,42]. There is a lack of systematic studies on how Kyber's configurations affect the performance of interfering concurrent workloads with diverse demands in terms of expected read/write latencies and throughputs. Moreover, predicting the achieved performance from the latency targets is not trivial, since the user-specified latency targets are not guaranteed by Kyber. Thus, there is a gap between the target performance (Kyber's configurable read/write target latencies) and the achieved performance (latency and throughput). Secondly, real-world workloads usually work with file systems instead of directly accessing the storage device. File systems change the I/O patterns of the workloads; thus, the effect of Kyber on workload performance with different file systems is unknown. Thirdly, different types of SSDs have significantly different performance properties, such as peak throughput, latency, and read/write interference behavior [23,35]. For example, flash-based SSDs have unpredictable performance and read/write interference, while non-flash-based ultra-low latency (ULL) SSDs, such as Intel Optane SSDs, have stable performance and no read/write interference [44,47].
In conclusion, the lack of understanding of how Kyber's configurations affect the achieved performance with different workloads, file systems, and types of SSDs makes it unclear how to optimize Kyber in practice. Specifically, we investigate the following research questions (RQs): (RQ1) How does Kyber affect the performance of workloads when workloads run concurrently and interfere with each other? We investigate how Kyber affects the performance of different workloads by studying the relation between the target latencies and the workloads' achieved performance. (RQ2) How should Kyber's parameters be configured for diverse types of NVMe SSDs and diverse file systems to meet workloads' requirements?
The key motivation is to find out if and how our findings on the performance effects of Kyber's configurations generalize to different file systems and types of SSDs.
We also provide guidelines on how to configure Kyber to meet workloads' requirements in practice across diverse software and hardware environments.
To address these questions, we conduct a first-of-its-kind systematic study of the Linux Kyber I/O scheduler with various kinds of workloads, file systems, and types of SSDs to establish guidelines on how to configure Kyber in practice. Our key contributions in this work include: • We extensively study how Kyber with different configurations affects workload performance using different combinations of latency-sensitive and throughput-bound workloads on 2 types of SSDs, resulting in 11 observations. To the best of our knowledge, we are the first to investigate the effect of Kyber's configurations on workloads.

BACKGROUND

Flash-based SSDs are composed of a controller that is connected to an array of flash chips. Each flash chip is organized in a hierarchy of dies, planes, blocks, and pages. SSDs have high internal parallelism, as both dies and planes can operate in parallel. The NVMe protocol [10] exposes this parallelism to workloads with a multi-queue interface that allows SSDs to execute multiple I/O requests in parallel. Nevertheless, to fully utilize this parallelism, workloads need to issue multiple concurrent I/O requests to the SSD. A challenge here is that a plane cannot execute different types of commands (read or write) in parallel. If a read is issued to a die where a write is already being executed, the read is blocked until the write finishes, leading to a 10-40× longer read latency. This performance degradation is called read/write interference [17,45]. Moreover, the physical constraints of flash chips do not allow in-place updates or intra-block random writes. Pages in a block can only be written sequentially, and written pages need to be erased before they can be rewritten. Erasures happen at the unit of blocks, not at the unit of pages. To imitate the block interface provided by hard disks, the Flash Translation Layer (FTL) in SSD controllers maps logical addresses provided in the block interface to physical addresses in the flash chips. On an update, the data is written to a new page, and the old page is marked invalid. These internal operations lead to additional interference with user I/O requests, causing unpredictable performance. In conclusion, flash-based SSDs have (1) high parallelism, (2) unpredictable performance, and (3) read/write interference.
There are also non-flash-based SSDs, such as Intel Optane SSDs [4], made with 3D XPoint technology [1,44]. 3D XPoint has two big differences from flash: (1) it is byte-addressable, so an I/O request can be broken into smaller pieces and processed in parallel by multiple channels to achieve low latency; and (2) it supports in-place updates and can thus provide stable performance without the internal translation operations that flash-based SSDs need [47].

Kyber Internals. Kyber is an I/O scheduler designed for fast and highly parallel storage devices, inspired by active queue management techniques from network routing [6,13]. Kyber prioritizes reads over writes based on the heuristic that a process that issues a read request usually waits for the issued read to finish, whereas a process that issues a write request usually continues executing without waiting for the write to finish. Figure 1 shows the architecture of the Linux Kyber I/O scheduler. Kyber maintains two queues for each CPU core, one for reads and one for writes. Kyber inserts I/O requests into the queues on the same core where the application issued the requests. These read/write queues are associated with a global token bucket. The tokens limit the number of concurrent requests issued to the SSD to achieve high responsiveness: an I/O request is dispatched to the NVMe device driver only when there are available tokens. The number of tokens remains the same if both read and write target latencies are satisfied, and is increased if the read or write P99 latency exceeds the target latency; increasing the number of tokens increases the priority of that request type. The number of tokens for a particular type of request (read or write) is reduced when (1) the achieved P90 latency for that request type is lower than the target latency and (2) the achieved P99 latency for the other type is higher than the target latency. Kyber aims to deliver the user-configured target latencies. However, there is no guarantee that Kyber achieves them. Furthermore, it has not been studied how these target latencies affect the achieved workload throughput and latency. The default read and write target latencies are 2 ms and 10 ms, respectively, whereas achievable latencies for NVMe SSDs range from 10 to 80 µs and differ between types of SSDs [4,11]. Therefore, there is a huge gap between the default target latencies and the best latencies that NVMe SSDs can deliver. It is unknown how this gap and the performance differences of SSDs affect workload performance on NVMe SSDs when using Kyber. The aim of this paper is to investigate how Kyber's target latencies and the performance of different SSDs affect the achieved workload throughput and latency.
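In the Linux kernel, Kyber's two parameters are exposed through sysfs as read_lat_nsec and write_lat_nsec (in nanoseconds). The following minimal sketch shows how one could select Kyber on a device and set its target latencies programmatically; the device name nvme0n1 is an assumption, and root privileges are required.

    from pathlib import Path

    def set_kyber(dev: str, read_lat_us: int, write_lat_us: int) -> None:
        # Select the Kyber scheduler for the device, then set its two
        # target latencies. Kyber's sysfs knobs take nanoseconds.
        queue = Path(f"/sys/block/{dev}/queue")
        (queue / "scheduler").write_text("kyber")
        (queue / "iosched" / "read_lat_nsec").write_text(str(read_lat_us * 1_000))
        (queue / "iosched" / "write_lat_nsec").write_text(str(write_lat_us * 1_000))

    # Example: the kernel defaults correspond to 2 ms reads and 10 ms writes.
    set_kyber("nvme0n1", read_lat_us=2_000, write_lat_us=10_000)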

METHODOLOGY
Hardware and Software. Our benchmarking environment is shown in Table 1. We use fio [3] as a workload generator with the io_uring interface [14]. All I/O requests are issued with the O_DIRECT flag so that they bypass the page cache. We use two metrics to evaluate performance: throughput and latency. We measure throughput in I/O operations per second (IOPS) and latency as the 99th-percentile tail latency (P99 latency). Before running the experiments, we precondition the flash SSD according to [16].

Synthetic Workloads and Methodology. Workloads in cloud environments have diverse I/O requirements, such as latency-sensitive workloads (e.g., online database queries) and throughput-bound workloads; we refer to them as L-apps and T-apps, respectively. Both issue KiB-sized read or write requests. For the L-apps, we issue a single outstanding request (we use queue depth, or QD, to represent the number of outstanding requests in later sections). The T-apps issue 256 outstanding requests to saturate the SSDs. In the following sections, we use R1 and W1 to represent the L-app read and write workloads, and R256 and W256 to represent the T-app read and write workloads; the number after R and W represents the QD of the workload.
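As a concrete illustration of this setup, the sketch below (a simplification, not the artifact's scripts) launches one fio job with io_uring and O_DIRECT and extracts throughput and P99 latency from fio's JSON output; the 4 KiB block size and 30-second runtime are assumptions.

    import json
    import subprocess

    def run_fio(dev: str, rw: str, qd: int, runtime_s: int = 30) -> dict:
        # One fio job against the raw block device, bypassing the page cache.
        out = subprocess.run(
            ["fio", "--name=job", f"--filename=/dev/{dev}",
             "--ioengine=io_uring", "--direct=1", f"--rw={rw}",
             "--bs=4k", f"--iodepth={qd}", "--time_based",
             f"--runtime={runtime_s}", "--output-format=json"],
            check=True, capture_output=True, text=True).stdout
        job = json.loads(out)["jobs"][0]
        op = "read" if "read" in rw else "write"
        return {
            "kiops": job[op]["iops"] / 1_000.0,                       # throughput
            "p99_us": job[op]["clat_ns"]["percentile"]["99.000000"]   # tail latency
                      / 1_000.0,
        }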

BASELINE PERFORMANCE WITH THE NONE SCHEDULER
As we explained in §2, flash-based SSDs have read/write interference, which means that a write blocks concurrent reads to the same die. In this section, we establish the baseline performance of the evaluated flash SSDs with and without interference. We report the read/write throughput and latency with different workload combinations (i.e., L-app, T-app). We use the None scheduler, a no-op scheduler which passes I/O requests to the NVMe device driver in a first-in-first-out manner. Each workload is pinned to a dedicated CPU core to avoid interference from the process scheduler.
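For reference, switching a device to the None scheduler and pinning fio processes to dedicated cores can be scripted as below; this is a sketch assuming a device named nvme0n1 and the taskset utility, not our exact harness.

    import subprocess
    from pathlib import Path

    def set_scheduler(dev: str, sched: str = "none") -> None:
        # "none" passes requests straight to the NVMe driver in FIFO order.
        Path(f"/sys/block/{dev}/queue/scheduler").write_text(sched)

    def run_pinned(fio_args: list[str], core: int) -> subprocess.Popen:
        # Pin the fio process to one core so the process scheduler does not
        # add interference between the two concurrent workloads.
        return subprocess.Popen(["taskset", "-c", str(core), "fio", *fio_args])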
Table 2 shows the throughput (in KIOPS) and P99 tail latency (in µs) of different workload combinations. We show the (combination of) workloads in the second column, and the throughput and latency of the read and write workloads in the third to sixth columns. We have three observations. Asymmetric read/write performance. The flash SSDs have asymmetric read/write performance (Observation 1, O-1). With a single CPU core and no interfering workloads, the flash SSD delivers up to 364.3 KIOPS random read throughput at QD=256 and 77.5 µs P99 random read latency at QD=1 (row 1). When fio issues random writes, the flash SSD delivers up to 70.0 KIOPS throughput at QD=256 (row 4) and 23.1 µs P99 latency at QD=1 (row 3). In short, the flash SSD has different throughput and latency for reads and writes without interference. Next, we show how this performance changes with the interference of a second workload.
Writes have a huge impact on read performance. A concurrent write workload significantly degrades the performance of a co-running read workload. When a latency-sensitive read workload R1 is mixed with a latency-sensitive write workload W1 (row 5), the read throughput drops 76.5% (from 17.0 to 4.0 KIOPS) and the latency increases 24.2× (from 77.5 to 1,879.2 µs) compared to R1 without interference (row 1). The read performance degradation is more significant with a throughput-bound write workload W256 (row 6), showing 98.2% lower throughput (from 17.0 to 0.3 KIOPS) and 200.8× higher latency (from 77.5 to 15,217.5 µs) than R1 without interference. When a read throughput-bound workload and a write throughput-bound workload compete for throughput (row 8), the read workload has 77.2% lower throughput (from 364.3 to 83.2 KIOPS) than without the interference of concurrent writes. To conclude, read performance is highly sensitive to the write workload; changing the write workload has a significant effect on read throughput and latency (O-2). Reads have a less significant impact on write performance. A co-running read workload has less impact on write performance than write workloads have on read workloads. When W1 runs with R1 in the background (row 5), the write workload has comparable throughput (from 62.3 to 65.0 KIOPS), and the latency only increases by 16.0% (23.1 to 26.8 µs) compared to running W1 in isolation. With a throughput-bound workload R256 in the background (row 7), the write latency increases 39.0% (from 23.1 to 32.1 µs), much lower than the latency increase of reads in this setting (200.8×). Thus, write performance is less sensitive to the read workload than read performance is to the interference of writes (O-3). The key finding here is that the None scheduler cannot mitigate read/write interference. When a latency-sensitive read workload runs concurrently with a write workload, the read workload has a significantly higher P99 tail latency than when running in isolation. When two throughput-bound read and write workloads compete for throughput, None does not provide any functionality to tune the throughput share between them. Kyber offers configuration options that let users prioritize reads or writes over each other. Thus, in the following sections, we investigate if and how Kyber affects read/write interference, and how it affects the throughput share between throughput-bound read and write workloads under different configurations. We repeat the same benchmark on an Intel Optane P900 SSD (the results are not plotted in the paper). We have two observations. Firstly, the Optane SSD has symmetric read/write performance: unlike the flash SSD, it delivers comparable throughput and latency for both reads and writes. Secondly, the Optane SSD has less read/write interference. R1's P99 latency increases from 15.4 to 44.8 µs with a concurrently running W256 compared to running R1 in isolation, which is far less severe than on the Samsung 980 PRO (a 24.2× latency increase even with a single outstanding write). We show how Kyber and its configurations affect the performance of these workloads in §5.

PERFORMANCE EFFECT OF KYBER'S CONFIGURATIONS WITHOUT A FILE SYSTEM
We start our analysis with a performance characterization of the impact of different Kyber configurations on fio-based micro-benchmarks. We run these micro-benchmarks without any file system. Specifically, we investigate how different Kyber configurations affect the P99 latency of L-apps and the throughput of T-apps when concurrent read and write workloads interfere with each other; see Table 3 for all combinations. Such a setup is common in multi-tenant clouds.
For each benchmark, we start two concurrent fio processes: one issues reads and one issues writes. Each fio process is pinned to a separate, dedicated CPU core to prevent them from competing for CPU resources. We perform a grid search to investigate how Kyber's configurations affect the achieved performance of the fio workloads across the search space. We set the lowest read and write target latencies to 50 µs and 20 µs, respectively, based on the minimum P99 latency of the flash SSD (§4), and gradually increase the target latencies to 100 ms. We report our performance results in Figure 2 and Figure 3 as heatmaps, where the x-axis represents the write target latency and the y-axis represents the read target latency. The temperature in the heatmaps is the measured performance with the read and write target latencies set to the corresponding values on the y- and x-axes. How does Kyber affect the performance of different combinations of workloads? We report that Kyber's configurations do not have a significant effect on the performance of the workload combination R1-W1 (thus it is not plotted in the paper) (O-4). We do not observe a relation between the P99 read/write latencies and Kyber's configurations: the P99 read latency varies between 1.3 and 1.6 ms and the P99 write latency varies between 23.4 and 27.3 µs.
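A minimal version of this grid search, reusing the set_kyber() and run_fio() sketches from above, could look as follows; the grid values and the 60-second writer runtime are assumptions, and the reader is left unpinned for brevity.

    import itertools
    import subprocess

    # Target-latency grid (in µs), spanning 50 µs (read) / 20 µs (write)
    # up to 100 ms, as in our search space.
    READ_TARGETS_US = [50, 500, 5_000, 50_000, 100_000]
    WRITE_TARGETS_US = [20, 200, 2_000, 20_000, 100_000]

    def grid_search(dev: str) -> list[tuple[int, int, float]]:
        # One heatmap cell per (read, write) target pair: the R1 L-app's
        # P99 read latency while a W256 T-app interferes.
        cells = []
        for r_us, w_us in itertools.product(READ_TARGETS_US, WRITE_TARGETS_US):
            set_kyber(dev, r_us, w_us)
            writer = subprocess.Popen(
                ["taskset", "-c", "1", "fio", "--name=w256",
                 f"--filename=/dev/{dev}", "--ioengine=io_uring", "--direct=1",
                 "--rw=randwrite", "--bs=4k", "--iodepth=256",
                 "--time_based", "--runtime=60"],
                stdout=subprocess.DEVNULL)
            try:
                stats = run_fio(dev, rw="randread", qd=1, runtime_s=30)
            finally:
                writer.terminate()
                writer.wait()
            cells.append((r_us, w_us, stats["p99_us"]))
        return cells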
The reason that Kyber does not have a significant effect on R1-W1 is that Kyber's mechanism is only effective with multiple outstanding I/O requests, yet with R1-W1 there is only one outstanding read and one outstanding write. Since Kyber always allows at least one read and one write to be sent to the SSD, it does not throttle any request with R1-W1. We report that Kyber is only effective when there is more than one outstanding read or more than one outstanding write (Guideline 1, G-1). Can Kyber provide bounded P99 latency for the L-app when an L-app interferes with a T-app? Figure 2 shows how Kyber's configuration affects fio-workload performance when a read L-app (R1) and a write T-app (W256) run concurrently. The temperature in Figure 2a shows the read P99 latency (in ms, darker is better) of the read L-app, and the temperature in Figure 2b shows the throughput (in KIOPS, lighter is better) of the write T-app. In our experiments, Kyber mitigates read/write interference at the cost of write throughput (O-5). When the read target latency is set to 50 µs and the write target latency is set to 100 ms, the achieved read P99 latency is 1.4 ms, 26.3% lower than the read latency of R1-W1 with the None scheduler (1.8 ms; row 5, Table 2). Thus, Kyber delivers low read latency with background throughput-bound write workloads when the read target latency is set to the lowest read P99 latency that the SSD can achieve (50 µs in our case) and the write target latency to a value higher than the achieved write latency with the None scheduler (15.6 ms; row 6, Table 2). However, the cost of achieving low read latency is lower write throughput (from a peak of 152.1 to 74.6 KIOPS, 50.9% lower). We suggest using Kyber in multi-tenant situations when low read latency is considered more important than high throughput (G-2). How do Kyber's configurations affect the throughput share of two throughput-bound fio workloads?
Figure 3 shows how Kyber's configurations affect the interference between a read T-app and a write T-app. The temperature in Figure 3a shows the read throughput and the temperature in Figure 3b shows the write throughput (in KIOPS, lighter is better). Firstly, decreasing the read target latency leads to higher read throughput. With a fixed write target latency (fixed x value), as the read target latency decreases, the read throughput increases (O-6). For example, with the write target latency set to 20 µs (first column in Figure 3a), as the read target latency decreases from 100 ms to 50 µs, the read throughput increases from 2.6 KIOPS to 181.1 KIOPS, a 69.7× increase. Secondly, when the read target latency is lower than 10 ms and the write target latency is lower than 5 ms, changing Kyber's configuration does not lead to a statistically significant difference in read and write throughput. The reason is that the achieved read and write latencies are 5 ms and 18 ms (not visualized); when all candidate target values are much lower than the lowest achievable latency, all configurations in this part of the configuration space lead to comparable performance (we call this space the dead configuration space). In conclusion, by tuning Kyber's configuration, the throughput share between reads and writes can be controlled. We suggest that users (1) run the grid-search micro-benchmark of Figure 2 and Figure 3 to find out how the target latencies affect the performance of a specific SSD, and (2) avoid tuning Kyber in the dead configuration space (G-3). How does this effect change with different SSDs? We repeat our experiments on the Intel Optane SSD to investigate how this effect varies across SSDs. Firstly, similar to the Samsung SSD, we observe that with R1-W256, prioritizing reads by setting a low read target latency and a high write target latency leads to lower P99 read latency at the cost of write throughput. When Kyber is configured to prioritize reads, it delivers 62.5% lower latency (from 44.8 to 16.8 µs) than the None scheduler, at the cost of 67.1% lower write throughput (from 228.2 to 75.0 KIOPS) (O-7).
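G-3's dead configuration space can be expressed as a simple predicate, sketched below under the assumption that the achievable latencies under contention have already been measured (e.g., with the None scheduler):

    def in_dead_space(read_target_us: float, write_target_us: float,
                      achieved_read_us: float, achieved_write_us: float) -> bool:
        # If BOTH targets sit below what the device actually achieves under
        # load, every such pair is equally violated and Kyber behaves the
        # same for all of them, so tuning inside this region is wasted effort.
        return (read_target_us < achieved_read_us and
                write_target_us < achieved_write_us)

    # Example: with ~5 ms achieved reads and ~18 ms achieved writes under
    # contention, a (1 ms R, 2 ms W) configuration lies in the dead space.
    assert in_dead_space(1_000, 2_000, 5_000, 18_000)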
Secondly, when two throughput-bound workloads interfere with each other (R256-W256), we report that prioritizing reads lowers the write throughput (from 202.5 to 68.9 KIOPS) without increasing the read throughput, and prioritizing writes behaves symmetrically. However, when neither reads nor writes are prioritized, setting the read and write target latencies to the same value leads to high read and write throughput (215.4 and 203.1 KIOPS, respectively) at the same time. The explanation for this phenomenon is that the Optane SSD has low read/write interference [44]: when the SSD is not saturated, adding a concurrent write workload to a read workload does not have a significant effect on the read workload's performance, and in this setting (R256-W256) the SSD is not saturated. In short, limiting read (or write) throughput does not increase the write (or read) throughput on the Optane SSD. With the Optane SSD, a misconfiguration when the SSD is not saturated leads to a throughput drop for reads or writes without any throughput increase for the other (O-8). Thus, we suggest using Kyber with Optane SSDs only when the SSDs are the bottleneck.

PERFORMANCE EFFECT OF KYBER'S CONFIGURATIONS WITH FILE SYSTEMS
In the previous section, we investigated how Kyber affects the performance of fio workloads without using any file system. However, real-world workloads usually access SSDs via file systems. In this section, we characterize how Kyber's configuration affects I/O performance with three different file systems: ext4 [9], f2fs [28], and xfs [22]. The goal is to investigate whether the observations from our micro-benchmarks (§5) generalize to file systems. We evaluate two workload combinations from Table 3, R1-W256 and R256-W256, with four Kyber configurations where the target read and write latencies are set to (50 µs R, 20 µs W), (50 µs R, 100 ms W), (100 ms R, 20 µs W), and (100 ms R, 100 ms W), the four extreme configurations of the configuration search space in the previous section. The performance of the fio workloads is reported in Figure 4.
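The file-system variants of our benchmarks only change the fio target from the raw device to a file on a freshly formatted file system; a sketch of that setup step is below, with the mount point and force-format flags as assumptions.

    import subprocess
    from pathlib import Path

    MKFS = {"ext4": ["mkfs.ext4", "-F"],
            "f2fs": ["mkfs.f2fs", "-f"],
            "xfs":  ["mkfs.xfs", "-f"]}

    def with_filesystem(dev: str, fs: str, mnt: str = "/mnt/bench") -> None:
        # Format the device with one of the three evaluated file systems and
        # mount it; fio then targets a file under `mnt` instead of /dev/...
        # (O_DIRECT still bypasses the page cache).
        subprocess.run(MKFS[fs] + [f"/dev/{dev}"], check=True)
        Path(mnt).mkdir(parents=True, exist_ok=True)
        subprocess.run(["mount", f"/dev/{dev}", mnt], check=True)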
How does Kyber affect I/O performance with the use of a file system? We first investigate how Kyber's configurations affect the performance of R1-W256 with different file systems. Figure 4a and Figure 4b show the P99 read latency (in µs, lower is better) and write throughput (in KIOPS, higher is better), respectively, with workload R1-W256. Kyber delivers the lowest read P99 latency with configuration (50 µs R, 100 ms W): 13.0%-35.9% lower latency than the worst configuration (160.4-160.8 µs vs. 184.9-249.9 µs). This lower P99 read latency comes at the cost of lower write throughput (72.9-75.5 KIOPS, or 34.9%-50.1% lower) compared to the highest write throughput (116.0-147.0 KIOPS) (O-9). If the workloads access the SSD via a file system, Kyber can be configured to deliver up to 35.9% lower read P99 latency than in other configurations with concurrent background writes (G-4).
Next, we investigate how Kyber's configurations affect the throughput when a read throughput-bound workload and a write throughput-bound workload run concurrently. Figure 4c and Figure 4d show the read and write throughput with workload setting R256-W256. We have two observations. Firstly, the workloads with ext4 and xfs have similar performance. Configuring Kyber to prioritize reads (e.g., 50 µs read and 100 ms write target latencies, the second bar in each group) lowers the write throughput without a corresponding gain in read throughput. The same occurs when we configure Kyber to prioritize writes over reads (e.g., 100 ms read and 20 µs write target latencies, the third bar in each group) (O-10). Secondly, Kyber's configurations do not have a significant effect on the read throughput of f2fs (the read throughput is 229.7-239.3 KIOPS). However, prioritizing reads causes the write throughput to decrease from 108.7 KIOPS to 67.9 KIOPS, 37.5% lower (O-11). Thus, we recommend that users configure Kyber with the same read and write target latencies; in our setup, the read and write target latencies are set to (50 µs R, 20 µs W) or (100 ms R, 100 ms W) to achieve both read and write peak throughput (G-5).
In conclusion, when a latency-sensitive read workload runs concurrently with a throughput-bound write workload via a file system, Kyber can be configured to deliver low read P99 latency by setting a low read target latency and a high write target latency to prioritize reads. When throughput-bound read and write workloads run concurrently, we suggest setting the read and write target latencies to similar values to achieve both high read and high write throughput.
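As a recap of G-4 and G-5, the two regimes map onto the set_kyber() sketch from earlier as follows (the device name is an assumption):

    # G-4: latency-sensitive reads with background writes -- prioritize reads.
    set_kyber("nvme0n1", read_lat_us=50, write_lat_us=100_000)

    # G-5: two throughput-bound workloads -- do not prioritize either side;
    # in our setup, (50 µs R, 20 µs W) or (100 ms R, 100 ms W) both reached
    # peak read and write throughput.
    set_kyber("nvme0n1", read_lat_us=100_000, write_lat_us=100_000)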

RELATED WORK
I/O schedulers for flash SSDs. Our study focuses on the state-of-the-practice Linux I/O scheduler Kyber. However, there are many state-of-the-art I/O schedulers for SSDs in Linux.
There are also I/O schedulers that are optimized to deliver low latency for latency-sensitive workloads in shared environments [24,31,33]. K2 [33] strictly prioritizes high-priority requests and trades throughput for latency. blk-switch [24] provides low latency for high-priority workloads while preserving high total throughput. FastResponse [31] co-designs the I/O scheduler with the storage stack to reduce I/O interference.
Flash-based SSDs have many idiosyncrasies because of their complex internal architectures. Various I/O schedulers are built to exploit these idiosyncrasies to increase SSDs' write performance and lifespan by using fine-grained access [41], reducing SSD garbage-collection overhead [19,20,26], and reducing read/write interference [27,35].
Performance characterization of Linux I/O schedulers. Many studies characterize the performance of Linux I/O schedulers with NVMe SSDs [37,38,42]. Whitaker et al. [42] characterize the performance of the Linux I/O schedulers on ULL SSDs based on 3D XPoint technology; their findings include that Linux I/O schedulers lead to higher latency, lower throughput, and higher energy overhead than running without an I/O scheduler. Ren et al. [37] extended this work by characterizing the performance overhead, scalability, and QoS with more common flash-based SSDs. Additionally, they characterize how Kyber's configurations affect the interference between foreground read workloads and background write workloads. We extend this work on Kyber by characterizing the performance of Kyber with different combinations of workloads and by investigating how these effects generalize to different file systems. We present an in-depth, systematic study that gives guidelines on how to configure Kyber for specific SSDs and workloads.

CONCLUSION AND FUTURE WORK
In this paper, we investigate how Kyber's configurations affect the performance of different workloads with various file systems and storage devices. Our results show that Kyber can be configured to deliver low read latency when there is a concurrently running write workload. Kyber can also be used to balance the throughput share between read and write throughput-bound workloads when the applications run directly on top of block devices.
This work can be extended by (1) evaluating how Kyber's configuration affects the performance of applications with mixed read and write workloads, (2) designing a tool that finds the best Kyber configuration automatically, and (3) designing algorithms that dynamically reconfigure Kyber when the workload changes.

Table 3: Workload combinations. Each cell names the combination of a read workload (rows) and a write workload (columns).

                        L-app write (W1)    T-app write (W256)
  L-app read (R1)       R1-W1               R1-W256
  T-app read (R256)     R256-W1             R256-W256

Table 2: Baseline performance of the Samsung 980 PRO SSD with the None scheduler.