Protean: Resource-efficient Instruction Prefetching

Increases in code footprint and control flow complexity have made low-latency instruction fetch challenging. Dedicated Instruction Prefetchers (DIPs) can provide performance gains (up to 5%) for a subset of applications that are poorly served by today's ubiquitous Fetch-Directed Instruction Prefetching (FDIP). However, DIPs incur the significant overhead of in-core metadata storage (for all workloads) and energy and performance loss from excess prefetches (for many workloads), leading to 11% of workloads actually losing performance. This work addresses how to provide the benefits of a DIP without paying its costs for the workloads where the DIP provides no benefit. Our key insight is that workloads that benefit from DIPs can tolerate increased Branch Target Buffer (BTB) misses. This allows us to dynamically re-purpose the existing BTB storage between the BTB and the DIP. We train a simple performance-counter-based decision tree to select the optimal configuration at runtime, which allows us to achieve different energy/performance optimization goals. As a result, we pay essentially no area overhead when a DIP is needed, and can use the larger BTB when it is beneficial, or even power it off when not needed. We look at our impact on two groups of benchmarks: those where the right configuration choice can improve performance or energy, and those where the wrong choice could hurt them. For the benchmarks with improvement potential, when optimizing for performance we obtain 86% of the oracle potential, and when optimizing for energy, 98% of the potential, both while avoiding essentially all performance and energy losses on the remaining benchmarks. This demonstrates that our technique is able to dynamically adapt to different performance/energy goals and obtain essentially all of the potential gains of DIPs without the overheads they experience today.


INTRODUCTION
Several studies have shown that the increasing code footprint of datacenter workloads causes frequent instruction cache misses [16,18,25,50,54,59]. Even with well-provisioned frontends (effective Branch Prediction Units (BPUs), large BTBs and I-caches, etc.), instruction supply often falls short for modern server and cloud applications. Dedicated Instruction Prefetchers (DIPs) have been proposed to alleviate this bottleneck, and a plethora of works has explored both hardware and software techniques. Common hardware-based approaches include record-and-replay prefetchers [6,9,10,17,27,28,31,49,50] and branch-predictor-directed prefetchers [7,12,34,35,43,58], while software-based techniques often use profiling to change the code layout [5,39,40,44] or insert prefetch instructions [4,9,29,37,38]. While there has been extensive work on DIPs, it is crucial to evaluate them together with the baseline instruction prefetcher that already exists in nearly all modern processors. Essentially all modern processors employ decoupled frontends for effective instruction prefetch [1,3,11,15,20,41,56]. This Fetch Directed Instruction Prefetching (FDIP) [46-48] leverages the existing Branch Prediction Unit (BPU) and BTB to run ahead and identify future basic blocks to prefetch into the L1I. As modern processors include large BTBs and highly accurate BPUs, FDIP is extremely effective for the majority of applications, particularly those that fit in the existing frontend structures. Further, as FDIP leverages the existing BTB and BPU resources, it requires very little additional metadata storage compared to most DIPs.
The Potential and Peril of DIPs. While today's well-provisioned frontends make FDIP highly effective in most cases, workloads with particularly large instruction footprints can overwhelm the BTB and BPU and make FDIP ineffective [8,9,25,54]. In these cases, DIPs have been shown to deliver performance benefits on top of FDIP [6,28,33-35,49]. According to our simulations of 2122 benchmark traces, a DIP can provide more than a 2% performance improvement for 1% of the benchmarks, with a maximum benefit of 5%. However, these benefits come with the cost of extra in-core metadata storage (area), cache pollution and contention (performance), and excess prefetch requests (energy). Specifically, while we observe that 5% of benchmarks see a >1% performance improvement, almost 12% suffer a >1% performance loss, and 33% see a >1% energy increase. In addition, the 84% of benchmarks that see no performance/energy gains or losses still pay the in-core area overhead. These results show that with a representative (aggressive) FDIP baseline [23,24], and with wrong-path instruction prefetching included, the overall benefit of DIPs is limited, as is the number of benchmarks that benefit. However, as pointed out in many studies, instruction footprint is growing, which gives reason to believe that the performance difference between FDIP and DIPs will increase in future workloads.
Goal. Our goal is to provide the performance and energy benefits of DIPs for those workloads that benefit from them, while avoiding the costs (performance, energy, and area) for the majority of workloads that do not. To achieve this goal, we introduce Protean (named for Proteus, the shape-shifting Greek god), which dynamically allocates the existing BTB storage between the BTB (to support the baseline FDIP) and the DIP metadata (for applications where the DIP is effective), or power gates it to save energy (for applications that do not require the large BTB capacity). (See Figure 1.) This allows us to match the instruction prefetching needs of the application or program phase without extra storage or the dynamic costs of the DIP when it is not beneficial.
Insight. We observe that most benchmarks that benefit from a DIP can tolerate BTB misses owing to Post Fetch Correction (PFC) of direct branches. This is because PFC [24] is able to re-steer the frontend early on a BTB miss to a direct branch, since direct branches carry their target address encoded in the instruction. As a result, such BTB misses do not wait until the execute stage for re-steering and only need to flush the Fetch Target Queue (FTQ), not the ROB and pipeline, making them far less performance critical. We observe that applications that benefit from a DIP are also insensitive to increased (direct) BTB misses for this reason, making it effective to share BTB capacity.
Solution. Protean dynamically allocates the existing BTB storage between BTB and DIP metadata, or power-gates it, to obtain the best benefit according to the specified performance/energy optimization goal. For our baseline design with a three-bank BTB, we can choose from among the six configurations shown in Figure 1.
To do so, we need: compatible BTB and DIP metadata layouts, so that they can share the same storage; a mechanism that enables us to define a desired performance/energy goal; and a method to dynamically and efficiently choose the correct configuration at runtime. Our final design uses a scoring metric that enables us to vary the desired performance/energy optimization goal, which, in turn, allows us to train a machine learning algorithm to choose the correct configuration for that goal at runtime. The result is a system that can efficiently and accurately share BTB storage between the BTB and the DIP to achieve a given performance/energy optimization goal, and thereby obtain nearly all of the potential benefits of DIPs while avoiding nearly all of their costs where they provide no benefit.
Our work makes the following contributions:
• We show that the BTB and instruction prefetching interact in a useful way: benchmarks that require a DIP can tolerate direct BTB misses due to effective Post Fetch Correction.
• We show how the storage used for the BTB can be re-purposed for storing DIP metadata, leading to a range of BTB/DIP configurations for essentially the same area cost as today's standard frontend.
• We propose a scoring methodology that allows us to evaluate and train for a range of performance/energy goals.
• We show how a decision tree can be used to accurately choose the best BTB/DIP configuration for a range of performance/energy goals.
• We demonstrate that this system is able to achieve nearly all of the potential DIP performance/energy benefits for the applications where there is a potential gain, while avoiding nearly all of the potential losses.
• We note that while our work enables the limited benefits available from DIPs today, it also provides a framework for obtaining the benefits of future DIPs and addressing future workloads by simply re-training the decision tree.

BACKGROUND
We start by introducing FDIP (Section 2.1) and demonstrating how FDIP provides nearly all of the potential benefits, which corroborates previous results (Section 2.2). We then look at how a state-of-the-art Dedicated Instruction Prefetcher (DIP) can improve on FDIP (Section 2.3), and see that it pays significant penalties in area, energy, and performance for the majority of applications where it provides no benefit (Section 2.4). This background motivates our goal of obtaining the benefits of DIPs without their costs, which we address in the remainder of the paper.
FDIP places predicted instructions in a Fetch Target Queue (FTQ, see Figure 2), which decouples the generated instruction prefetches from instruction fetch and enables the branch predictor to run ahead during fetch stalls caused by instruction cache misses. While the FTQ logically holds predicted instructions, in practice they are stored at a coarser granularity, e.g., 32-byte [24] or 64-byte [32] basic blocks. Because FDIP leverages the existing frontend resources, it is almost metadata-free (requiring only 186 bytes for the FTQ [24]) and benefits from improvements to the BPU and BTB.
With FDIP, the pipeline takes the entry at the head of the FTQ as a demand fetch, while the other entries in the FTQ are issued to the instruction cache as prefetches. As long as FDIP can accurately run far enough ahead, basic blocks are likely to have been prefetched into the instruction cache by the time they reach the head of the FTQ, thereby resulting in a cache hit at demand fetch. To avoid repeated cache searches when entries reach the head of the FTQ, Ishii et al. [24] propose issuing all FTQ entries as demand loads into the instruction cache such that the loaded lines are locked, and then storing the resulting cache way in the FTQ's metadata. The way information then allows them to avoid a tag search when the FTQ entry reaches the head.
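The FTQ mechanism above can be summarized in a few lines of pseudocode. This is only an illustrative sketch: the cache interface (prefetch_and_lock, read) and the entry layout are our own assumptions, not ChampSim's or the implementation of [24].

```python
# Minimal sketch of the decoupled FDIP frontend described above (illustrative;
# the cache interface and entry format are assumptions, not the actual design).
from collections import deque

class FTQ:
    def __init__(self, size=24):
        self.entries = deque(maxlen=size)   # each entry: (block_addr, cache_way)

    def run_ahead(self, predict_next_block, l1i):
        # The BPU/BTB keep predicting basic blocks while fetch may be stalled.
        while len(self.entries) < self.entries.maxlen:
            block = predict_next_block()
            way = l1i.prefetch_and_lock(block)   # tag-only lookup; record the way [24]
            self.entries.append((block, way))

    def demand_fetch(self, l1i):
        # The head of the FTQ is the demand fetch; the stored way skips the tag search.
        if self.entries:
            block, way = self.entries.popleft()
            return l1i.read(block, way)
        return None
```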
One challenge with FDIP is how far ahead to prefetch. Since the predicted control flow accuracy decreases with each predicted branch, prefetching further ahead increases the likelihood of wrong-path prefetching, leading to cache pollution and port contention. To address this, Reinman et al. advocated limiting prefetching to 10 basic blocks [47], while Perais et al. limit in-flight prefetches to 4 [43].
One particularly important FDIP detail is Post Fetch Correction (PFC) [24], which re-steers the frontend as early as possible, thereby minimizing the impact of direct BTB misses and branch mispredictions. PFC detects if an unconditional direct branch is mispredicted as not-taken or if a predicted-taken direct branch misses in the BTB (dashed arrow in Figure 2 from decode). These cases can be detected early, in the decode/pre-decode stage, which means that only the FTQ needs to be flushed after such a miss, as opposed to waiting until branch resolution in the execute stage, which requires costly flushing of ROB entries. This is particularly important to model, as BTB misses and mispredicted direct branches appear far more costly without PFC, which can result in an over-emphasis of the impact of BTB size and BTB prefetching.
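To make the two early re-steer conditions concrete, the following sketch shows how a pre-decode stage might apply PFC. The structure and names are illustrative assumptions, not the implementation of [24].

```python
# Sketch of the Post Fetch Correction (PFC) check at pre-decode (illustrative).
from dataclasses import dataclass

@dataclass
class PredecodedBranch:
    is_direct: bool          # target encoded in the instruction itself
    is_unconditional: bool
    predicted_taken: bool
    btb_hit: bool
    encoded_target: int      # valid only for direct branches

def pfc_resteer_target(br: PredecodedBranch):
    """Return an early re-steer target if PFC applies, else None.

    PFC only requires an FTQ flush (cheap); all other cases wait until
    execute, which costs a full ROB/pipeline flush.
    """
    if not br.is_direct:
        return None  # indirect branches cannot be corrected early
    # Case 1: unconditional direct branch mispredicted as not-taken.
    if br.is_unconditional and not br.predicted_taken:
        return br.encoded_target
    # Case 2: predicted-taken direct branch that missed in the BTB.
    if br.predicted_taken and not br.btb_hit:
        return br.encoded_target
    return None
```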

FDIP Delivers Excellent Prefetching
Current processors with large BTBs and highly accurate branch predictors provide a solid foundation for FDIP. Figure 3 shows that a realistic FDIP implementation [24] comes very close to a state-of-the-art DIP, the Entangled Instruction Prefetcher (EIP) [49], and even does well against an infinite-capacity instruction cache. Indeed, the majority of our 2122 benchmark traces show that FDIP is within 1% of the ideal instruction cache (51%) or the state-of-the-art DIP (95%), which makes it hard to justify the area and dynamic costs of a DIP by itself. As discussed by Ishii et al. [24], this demonstrates how effective FDIP implementations are, but also shows that there is a small subset (5%) of applications where a DIP is beneficial. Our work not only enables the benefits of DIPs for this subset today, but also provides a framework for leveraging future DIPs and for addressing future workloads by simply re-training our decision trees.

EIP: A Dedicated Instruction Prefetcher
While our approach is independent of any particular DIP, we evaluate it with the winner of the 1st Instruction Prefetching Championship [22], the Entangled Instruction Prefetcher (EIP) [49].
EIP is a state-of-the-art Dedicated Instruction Prefetcher that has been shown to deliver considerable performance improvement over an FDIP baseline for a subset of applications [49]. EIP is a "record and replay" based DIP, which works by finding earlier basic blocks and using them to trigger timely prefetches for later misses. EIP achieves this by keeping a history buffer of the start addresses of recent basic blocks and the times at which they were demanded. On an instruction cache miss, EIP uses the history to find a basic block that executed sufficiently far before the miss, and then "entangles" the start address of that earlier basic block with the desired prefetch address via the entanglement mapping table. For each demand access at the head of the FTQ, EIP checks the entanglement table to see if there are any entangled prefetches and issues them. If a prefetch is late, EIP finds an even earlier basic block in the history and entangles the prefetch with it instead. To improve efficiency, EIP uses a compressed storage format that allows multiple cache lines to be entangled with each basic block, and stores the length of the basic block for better prefetching coverage. EIP uses this information both to issue prefetches for the subsequent cache lines in the current basic block and for basic blocks which are entangled with the current basic block.
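The record-and-replay flow can be sketched as follows. This is a minimal illustration of the mechanism described above; it omits EIP's set-associative table organization, compressed entry format, and late-prefetch/confidence tuning, and all names are our own.

```python
# Minimal sketch of EIP-style record and replay (illustrative only).
from collections import deque, defaultdict

class TinyEIP:
    def __init__(self, history_len=64, lead_time=32):
        self.history = deque(maxlen=history_len)   # (timestamp, bb_start)
        self.entangled = defaultdict(set)          # bb_start -> {lines to prefetch}
        self.lead_time = lead_time                 # how far ahead a trigger must be

    def record_basic_block(self, now, bb_start):
        self.history.append((now, bb_start))

    def on_l1i_miss(self, now, miss_line):
        # Record: find the most recent basic block that executed sufficiently
        # far before the miss and entangle it with the missing line.
        for t, bb_start in reversed(self.history):
            if now - t >= self.lead_time:
                self.entangled[bb_start].add(miss_line)
                break

    def on_demand(self, bb_start, issue_prefetch):
        # Replay: when this basic block is demanded at the FTQ head,
        # issue prefetches for every line entangled with it.
        for line in self.entangled.get(bb_start, ()):
            issue_prefetch(line)
```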
EIP has two main advantages over FDIP. First, because EIP can adjust how early it initiates prefetch requests over a very large range, as opposed to FDIP, which can only run ahead in the predicted program flow, it is able to generate prefetches far earlier. This allows it to schedule prefetches early enough to cover BTB misses or BPU mispredictions that stall the frontend. Second, because it uses separate metadata storage (the entanglement table), EIP is not affected by the BPU and BTB capacity limitations that FDIP suffers from for applications with large code footprints. As a result, the state-of-the-art EIP provides at most a 5% performance improvement over the FDIP baseline in our experiments.

The cost of a DIP
Although EIP is effective for some applications, it also entails significant costs, particularly for the applications that see no benefit on top of the FDIP baseline. These costs include increased energy consumption due to excess prefetches, and performance loss due to cache pollution and to data cache misses being delayed by port contention. The complexity of this trade-off is shown in Figure 4.
We see that almost 5% of our 2122 benchmark traces show a >1% performance gain with EIP (left), while almost 12% show a performance loss of >1% (right). The remaining 84% (middle) see no loss or benefit with EIP. For energy, the impact ranges from a savings of 0.15% (up to 2.7%) for those with a performance benefit (left) to an increase of 3.6% (up to 4.5%) for those with a performance loss (right). And for the vast majority that do not benefit from EIP (middle), we see an energy increase of 0.6% (up to 3.3%). This demonstrates that while EIP can benefit a subset of applications, overall it is hard to justify its area and dynamic performance and energy costs. (See Section 7 for simulation details.)
While the dynamic performance and energy costs of EIP could be avoided by deactivating it when not needed, the area cost is fixed. The proposed 4K-entry entanglement table (which represents 97% of EIP's area overhead) requires roughly 40KB of metadata storage [49]. As few applications benefit from EIP, an industrial research team (Ishii et al. [23,24]) concluded, based on an ISO-area comparison, that a DIP such as EIP on top of FDIP is difficult to justify. It is further worth noting that not all area in a processor is of equal cost: adding 40KB of metadata to a multi-MB cache is indeed a small overhead, but adding a similar amount inside the frontend of the core is a much more daunting challenge from a timing point of view. As DIPs such as EIP are tightly integrated into the core, and they benefit only a small set of applications, this makes them hard to justify in practice.
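As a rough sanity check on that area figure, assuming roughly 80 bits per entanglement entry (comparable in width to the ~79-bit entries discussed with Figure 5; the actual EIP entry packs several entangled lines, so this is only an order-of-magnitude check):

```latex
4096\ \text{entries} \times 80\ \text{bits/entry} = 327{,}680\ \text{bits} \approx 40\,\text{KB}.
```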

Challenge: The Benefits of a DIP without its cost
The preceding analysis of FDIP and DIPs shows that DIPs can provide a meaningful benefit for a subset of applications, but that they come with a significant dynamic and static cost. This leads us to the two main questions of this work: (1) Can we avoid the static area costs by sharing existing frontend metadata storage? (Section 3) (2) Can we obtain the benefits of a DIP without its dynamic performance and energy costs by dynamically enabling/disabling it at runtime? We go a step further and identify the best configuration when enabling/disabling the DIP (Section 4).

AVOIDING THE STATIC COSTS
To reduce the static area cost of a DIP, we propose sharing metadata storage with another frontend structure. For this to work, we need to identify a structure that is compatible (in terms of the physical storage format) and complementary (i.e., not needed at the same time). There is a range of frontend storage structures one could consider: TLBs, the BPU, BTBs, etc. The first-level TLBs and BTBs are latency-critical, and, even though the logic changes needed to dynamically re-purpose their storage are small, doing so might have a significant impact on the cycle time. First-level TLBs also have low capacities, for example, 256 4kB pages in Intel's Alder Lake [2], and translation is still required to issue FDIP prefetches. The BPU is also latency-critical, and the cost of its mispredictions is high, as they are detected late in the execute stage and require ROB/FTQ flushes.
Sharing the second-level BTB storage, however, is more promising. Not only are L2 BTBs larger (up to 12k entries in Intel's Alder Lake [2]), but there is reason to believe that their performance is complementary to that of a DIP. This is because a DIP is needed when FDIP is unsuccessful, and, when that happens, it is because either the BTB or the BPU is not effective. While this might suggest sharing the BPU storage instead, the cost of BTB misses is far lower than that of BPU mispredictions, as they can often be corrected early in the pipeline with PFC.

Are BTB and DIP data compatible?
To dynamically share the BTB storage with a DIP, the existing BTB storage must have a readout width, associativity, and capacity that are compatible with those required for the DIP. We examine EIP and focus on sharing storage for its entanglement table, as it accounts for almost 98% of its metadata storage. While the exact details of BTB implementations are proprietary, we compare a typical BTB [53] to EIP (Figure 5). We assume a baseline 12-way, 12k-entry BTB, as in Intel's Alder Lake, built of three 4-way, 4k-entry banks, with each bank separately configurable as DIP metadata storage, BTB storage, or power-gated. This is consistent with Intel's recent designs, which allow adjusting BTB capacity on demand [21].

Figure 5: BTB [53] and EIP [49] metadata storage formats are largely compatible.
Figure 5 shows that the BTB entry is 79 bits, which is compatible with the EIP entanglement entry. However, we see that the number of tag bits varies, indicating that the lookup logic (indexing and tag comparisons) will require a configurable shifter on the index and tag comparators. While the rough width of each entry is a good match, specific designs may end up wasting a few bits of each entry if they do not align perfectly. This significant storage compatibility indicates that it is plausible to dynamically allocate the three storage banks between BTB storage, DIP metadata storage, and powered-off states. As we assume a minimum of 4k entries for BTB storage, we end up with the six configurations shown in Figure 1. Alternatively, one could allow DIP and BTB entries to compete for space within the banks, although we have not investigated this.
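To make the configuration count concrete, the following snippet enumerates the per-bank role assignments under our reading of Figure 1 (three interchangeable 4k-entry banks, at least one of which must hold BTB entries); it is illustrative only.

```python
# Enumerate (BTB banks, DIP banks, off banks) assignments of three identical
# 4k-entry banks, requiring at least one BTB bank (our reading of Figure 1).
from itertools import product

configs = sorted({
    (roles.count("BTB"), roles.count("DIP"), roles.count("OFF"))
    for roles in product(("BTB", "DIP", "OFF"), repeat=3)
    if "BTB" in roles
})
for btb, dip, off in configs:
    print(f"{4 * btb}k BTB + {4 * dip}k DIP + {off} bank(s) off")
# Prints the 6 configurations, e.g. "12k BTB + 0k DIP + 0 bank(s) off",
# "8k BTB + 4k DIP + 0 bank(s) off", "4k BTB + 0k DIP + 2 bank(s) off", ...
```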
While adding reconfigurability to the BTB will incur some overhead, it is unlikely to significantly affect the critical path, as we are modifying the second-level BTB and Intel has already demonstrated similar reconfigurability. If the added latency did become an issue, we could readily support a design where one bank (the one that is always assigned to BTB entries) retains the baseline latency, as it is never reconfigured. This would allow our decision tree (Section 5.1) to choose a low-latency configuration as appropriate.

Is the BTB complementary to the DIP?
For the performance loss of sharing the BTB to be minimal, we need the effects of the BTB and the DIP to be complementary. That is, the performance loss from reduced BTB capacity must be negligible when the DIP is beneficial. To investigate this, we look at how reducing the BTB capacity affects the potential benefit of improved instruction delivery, i.e., how much a smaller BTB would reduce the potential benefit a DIP could deliver. Specifically, we use an infinite instruction cache with our baseline 12k BTB to show the limit of what an ideal DIP could achieve over FDIP, and consider whether the applications that could benefit from a DIP suffer from reduced BTB capacity.
Figure 6a shows that approximately half of our benchmarks could benefit from a DIP, as their performance goes up with the infinite instruction cache, i.e., the black line is above zero. More interestingly, the vast majority of the benchmarks see little loss from reducing the BTB capacity from 12k to 8k entries (red dots are close to the black line). However, Figure 6b shows that some benchmarks suffer significantly from reducing the BTB further to 4k entries (red benchmarks below 0 on the right). This suggests that using 4k entries of BTB capacity for a DIP (e.g., 8k BTB + 4k DIP) will not significantly reduce the potential benefit, but that there are many benchmarks for which giving up 8k entries may be problematic (e.g., 4k BTB + 8k DIP). We also explored the potential benefit of a larger 64k-entry BTB directly in Figure 6a, but found that there was more to be gained from better instruction fetch.
To understand why some benchmarks are particularly sensitive to the 4k BTB vs. the 8k BTB, we look at the types of BTB misses they experience. We observe that benchmarks that see a significant loss in performance potential with a smaller BTB do so because they have significantly more indirect-branch BTB misses at the smaller BTB size (red dots in Figure 7, left). This leads to a performance loss because PFC can eliminate most of the performance loss from direct-branch BTB misses, but not from indirect ones. We confirmed this by modifying the simulator to provide the same PFC benefit for indirect BTB misses (which is not possible to implement), and observed that about half of the performance loss is recovered (blue dots in Figure 6b). What is most interesting to note is that half the performance loss comes from the indirect BTB misses, despite the fact that the direct BTB misses increase enormously more with the smaller BTB size. Specifically, Figure 7 shows that the indirect MPKI for the sensitive benchmarks (red dots) increases by 1, while the direct BTB MPKI increases by over 20. And yet, indirect and direct misses contribute a similar amount to the performance loss. This indicates that our ability to re-purpose the BTB for the DIP is heavily enabled by PFC's ability to cheaply correct direct BTB misses, and that L1I hits are more important than BTB hits, as re-steering can happen quickly even if the FTQ goes down the wrong path. Finally, this suggests that for the applications that suffer increased indirect-branch misses, the BTB size should not be decreased, as these applications are limited by the BTB and not the cache.

AVOIDING THE DYNAMIC COSTS
To explore the potential of dynamically sharing metadata storage between the BTB and a DIP, we first look at the normalized performance and energy for each benchmark and configuration in Figure 8. We see that there is no single configuration (color) that provides the best performance (right) and energy (bottom) for all benchmarks. This demonstrates that the optimal configuration varies depending on the benchmark and the desired performance-energy trade-off.
To choose the best configuration we need to define an objective function that allows us to trade off performance and energy. In this work we use, as the score, the perpendicular distance of a configuration from a line through the origin of the performance-energy space. Example scores are shown in Figure 8 by the solid black line (Performance-only goal, i.e., distance in the performance dimension only) and the dashed line (Balanced performance/energy goal, i.e., distance counted equally in the performance and energy dimensions). The further the configuration lies to the lower-right of the line, the higher the score. This allows us to change the performance-energy optimization goal by simply varying the slope of the line, as shown in Figure 9, from 4:1 (Performance over energy) to 1:1 (Balanced, energy equal to performance) to 1:4 (Energy over performance). Figure 9 shows that the configurations with the best scores vary with the goal.
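One way to formalize this score, under our reading of Figure 8 (performance gain on one axis, energy increase on the other, both in percentage points relative to the 12k BTB baseline), is the signed perpendicular distance from the goal line. The slope values below are our interpretation of the 4:1, 1:1, and 1:4 goals; the exact normalization used in our figures may differ slightly.

```python
import math

def score(perf_gain, energy_increase, slope):
    """Signed perpendicular distance of a configuration from the goal line.

    perf_gain:       performance vs. the 12k BTB baseline, in percentage points
    energy_increase: energy vs. the baseline, in percentage points (negative = savings)
    slope:           performance:energy weighting of the line through the origin,
                     e.g. 4.0 (Performance), 1.0 (Balanced), 0.25 (Energy).
    Configurations to the lower-right of the line (more performance, less energy)
    score higher.
    """
    return (slope * perf_gain - energy_increase) / math.sqrt(1.0 + slope ** 2)

# Example: +1% performance at +0.5% energy scores well under the Performance goal
# but negatively under the Energy goal.
print(score(1.0, 0.5, 4.0), score(1.0, 0.5, 0.25))
```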
As previously discussed, only a minority of the benchmarks see a performance increase with a DIP. More generally, only a minority of the benchmarks see a potential score increase for a given optimization goal, and this subset varies with the goal. For example, Figure 9 shows that if we define sensitive benchmarks as those where the best configuration choice can increase the score by more than 1.0 (dashed line), only 8% of the benchmarks are sensitive for the Performance goal, while that number is 68% for the Energy goal and 66% for the Balanced goal. Conversely, a large number of benchmarks can be severely impacted by bad configuration choices. These configurations are to the upper-left of the dashed lines in Figure 9, and extend further in Figure 8.
We define three groups of benchmarks: Sensitive-good benchmarks have a best configuration that improves their score by 1.0 or more; Sensitive-bad benchmarks have a worst configuration that hurts their score by more than 1.0 and no configuration that improves the score by more than 1.0; and Insensitive benchmarks have a best configuration that improves the score by less than 1.0 and a worst configuration that hurts the score by no more than 1.0. We found that 41/123/193 of the Performance/Balanced/Energy Sensitive-good applications (2-10% of the total) also had a configuration that could hurt their score by more than 1.0.
The distribution of benchmark sensitivity for the different optimization goals is shown in Table 1. The Performance goal has many fewer Sensitive-good benchmarks, as the slope of its trade-off line (Figure 9a) is steep enough that the large number of 4k BTB configurations (blue points) fall within the dashed lines, leading to more benchmarks being classified as Insensitive. As the slope of the optimization goal is lowered (more emphasis on energy, Figures 9b and 9c), more of the 4k BTB configurations are classified as Sensitive-good, resulting in fewer Insensitive benchmarks for these two optimization goals.
The potential benefit of dynamically choosing the configuration is shown in Figure 10, which compares an Oracle selection of the best dynamic policy for each Sensitive-good benchmark to the five Static configurations across all benchmarks, normalized to the 12k BTB baseline. We see that the Sensitive-good benchmarks have a potential 1.5% performance gain on average (max 5.5%) with a 1.5% average energy savings for the Performance goal (a), and a 2.7% energy savings (max 6.1%) with 0% performance loss for the Energy goal (b). These results are better than any of the static single-policy configurations shown to their right. (Note that we do not include the Sensitive-bad or Insensitive benchmarks in the Oracle configurations, as they are essentially unchanged in the Oracle configuration, while they may see significant losses in the Static configurations.) The impact on potential benefit across the different optimization goals is shown in Figure 10c, where the swing from performance to energy as the goal changes is shown in the marked averages. It is worth noting that the best configuration is the same for 80% of all benchmarks regardless of goal, as the majority have little potential for improvement, and the benefits are limited to only the Sensitive-good subset.

DYNAMIC RUNTIME CONFIGURATION
There are two main challenges with dynamically choosing the configuration at run-time: first, building a sufficiently cheap, yet accurate, classifier, and second, choosing a reconfiguration period that is short enough to capture phase behavior while being long enough to amortize the warm-up incurred from configuration changes.

Classification
In this work we train a Decision Tree (DT) to select the configuration based on readily available performance counters. A DT is a tree structure that is walked to determine the final choice. Each non-leaf node in the tree contains the ID of the input to evaluate and its threshold value for deciding which way to go. The leaf nodes contain the final output, or choice, of the DT. This makes DTs compact in storage (they store input IDs and thresholds at each node) and cheap to evaluate (the latency is determined by the depth of the tree).
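A sketch of this evaluation is shown below; the node layout is illustrative, but it follows the description above (a feature ID and threshold per internal node, a configuration choice per leaf).

```python
# Walking a decision tree stored as flat node records (illustrative layout).
from dataclasses import dataclass
from typing import Optional

@dataclass
class DTNode:
    feature: Optional[int]   # performance-counter ID to test; None for a leaf
    threshold: float         # compared against the counter value (16-bit in storage)
    left: Optional[int]      # child index if counter <= threshold
    right: Optional[int]     # child index if counter > threshold
    config: Optional[int]    # leaf only: chosen BTB/DIP configuration (0..5)

def evaluate(tree: list[DTNode], counters: list[float]) -> int:
    node = tree[0]
    while node.config is None:               # latency = depth of the tree
        if counters[node.feature] <= node.threshold:
            node = tree[node.left]
        else:
            node = tree[node.right]
    return node.config
```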
To address our range of optimization goals, we simply train one DT for each goal. This results in three decision trees, one per optimization goal (Performance/Balanced/Energy). At runtime, we periodically evaluate the DT for the current optimization goal using the current performance counter inputs to determine the next configuration. Our DTs use 15 different performance counters, with the three most important ones for each optimization goal shown in Table 2. We did not include the current configuration in the training, as that would combinatorially explode the training; despite this, the DT is very accurate using the chosen performance counters.
For training, we simulate all six configurations for the benchmarks in the training set and calculate the score for each of the three optimization goals for each time period after 20M instructions of warm-up. We split the training benchmarks into windows of 20M instructions and use the performance counter values during each window to identify the optimal configuration for the next window. The optimal choice for each time period is determined by picking the configuration with the highest score. This approach does not include the cost of warming the BTB/DIP storage structures on configuration changes, and we find the final online accuracy to be slightly lower as a result (see Section 7).
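A sketch of this labeling and training flow is shown below, assuming the per-window counters and per-configuration scores have already been collected from the six simulations. We show a scikit-learn-style classifier purely for illustration; the exact training setup and hyperparameters are not implied by this sketch.

```python
# Offline training sketch: label each 20M-instruction window with the configuration
# that maximizes the score for a given goal, then fit one tree per goal.
# `windows` pairs the counters observed in one window with the per-configuration
# scores of the following window, e.g. (counter_vector, {config_id: score}).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_for_goal(windows):
    X = np.array([counters for counters, _ in windows])      # 15 perf counters
    y = np.array([max(scores, key=scores.get)                # best config label
                  for _, scores in windows])
    return DecisionTreeClassifier().fit(X, y)

# One tree per optimization goal (Performance / Balanced / Energy), each trained
# on the scores computed with that goal's slope.
```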
To evaluate training robustness, we performed 20 random permutations of an 80/20 training/test split of the benchmarks. The resulting DTs provided accuracies between 97% and 98.8%, demonstrating robust training. We also explored 50/50 training/test splits with random permutations and observed slightly reduced accuracies, between 92% and 97%. Training on just 8% or 20% of benchmarks resulted in 90% DT accuracy for the Balanced and Energy optimization goals, but lower for Performance, showing that the DT is robust even with little training data. Further, we found that random forests performed similarly, while the accuracy of a single-layer neural network was lower, at 87% to 95%, for the 80/20 split.
The resulting DTs contain 1291/773/603 nodes with a maximum depth of 23/22/19 for the Performance/Balanced/Energy trade-offs, respectively, meaning they require approximately 4KB of storage (assuming 16 bits for the threshold and 1 byte for the ID of the feature to compare) and up to 23 memory accesses per evaluation. Due to the simple nature of the DTs, either HW or SW could evaluate the DT in less than 2500 cycles (a depth of 25 with at most one 100-cycle memory access per level), which is a negligible overhead compared to the 20M-instruction windows between evaluations. The DT could always be made smaller (using cost-complexity pruning) at some cost in accuracy; however, evaluating a DT of this size has negligible time and storage overhead, so we did not explore this further.
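The storage and latency estimates follow directly from the node counts and depths, using the per-node layout above (16-bit threshold plus 8-bit feature ID; an implicit child layout is assumed, so pointers are not counted):

```latex
1291 \times 3\,\text{B} \approx 3.9\,\text{KB} \approx 4\,\text{KB},
\qquad
23 \times 100\ \text{cycles} = 2300 < 2500\ \text{cycles}.
```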

Dynamic Reconfiguration Interval
To evaluate the potential gains from shorter reconfiguration intervals vs. the cost of more frequent BTB/DIP metadata zeroing and warm-up, we compared a system with no warm-up penalty to one which zeros the portion of the metadata that is reconfigured. In both cases we use an Oracle decision to isolate the potential of shorter intervals from the accuracy of the decision. The Performance, Energy, and Balanced results are shown in Figure 11.
The results show that without the warm-up cost, shorter intervals can improve the score significantly across all three optimization goals (blue, top line), as they can capture the benefit of shorter phases. However, the warm-up cost from such frequent changes (red, bottom line) negates this benefit. Indeed, we see that the warm-up cost is sufficiently high that only when we use our full simulation trace (i.e., a single 100M-instruction interval) do we amortize the warm-up cost. This suggests that intervals of ∼100M instructions or longer between re-configurations would be preferred. However, as our simulation traces are on average only 100M instructions, we use 20M-instruction intervals in the rest of the paper and include the cost of warm-up.

METHODOLOGY
We use ChampSim [19] with the architectural parameters in Table 3. Our L2 BTB has 12k entries, as in Intel's Alder Lake performance core [2], and we assume it is composed of three separate banks, each of which can be power-gated, as in [21,26]. We use 2122 traces from the 1st Instruction Prefetching Championship [22] and the Championship Value Prediction workshop [13]. The traces contain a diverse set of workloads including server, client, cryptography, integer/floating-point compute, and SPEC. We model a 24-entry FTQ, with each entry holding 32 aligned bytes (8 instructions), for a total capacity of 192 instructions, following [24]. The FTQ issues prefetches from the entries that are not at the head, and a small prefetch queue tries to coalesce requests before issuing them to the L1I. The fetch unit issues a demand request for the entry at the head of the FTQ, which means that the instruction cache is potentially probed twice for each instruction: once on a prefetch and once on demand. However, prefetch probes are significantly lower-energy tag-array reads, while the demand accesses also read the data. Writes to the tag and data arrays are similar in the case of either a demand or a prefetch miss.
We implement Post Fetch Correction (PFC) for BTB misses (direct calls/jumps/returns) and branch mispredictions (direct calls/jumps). BTB misses are resolved earlier (1 cycle after fetch, i.e., at pre-decode [24]), whereas branch mispredictions for direct calls/jumps are resolved after the full 4-cycle decode, as shown in Figure 2. We track returns and indirect branches in the BTB, but get their targets from the return address stack and the indirect target predictor, respectively. BTB misses on indirect branches result in ROB flushes after execute, as they cannot be corrected early. (To explore sensitivity to PFC latencies, we also evaluated a 4-cycle re-steer on correctly predicted branches with BTB misses. This had negligible impact for our Energy and Balanced optimization goals, but reduced the number of Sensitive-good Performance benchmarks by half and reduced their potential score by 21%, demonstrating that the ability to trade off BTB space depends on the PFC latency and the optimization goal.)
ChampSim does not natively model the impact of wrong-path execution on caches or TLBs; it only includes the pipeline execution delay. This means that FDIP generates prefetches until the FTQ is filled or an incorrectly predicted branch or BTB miss is encountered, thereby avoiding wrong-path prefetching. The EIP [49] and FDIP [24] evaluations use trace-driven simulators with this behavior. However, as wrong-path prefetching has been shown to be beneficial by bringing in lines that will be needed shortly [34,45], the lack of it can hurt the baseline and thereby inflate the apparent benefit of DIPs.
To address this, we added wrong-path instruction prefetching for BTB misses on predicted-taken branches and for branch mispredictions. When on the wrong path, our frontend continues predicting basic blocks using the BP and BTB and issuing prefetches. As soon as the BTB miss is resolved (at decode) or the misprediction is resolved (at execute), execution returns to the correct path. With wrong-path prefetching we observed that the potential gain of a DIP over the baseline was reduced from 2% (max 9.8%) to 1.52% (max 5.3%) for Sensitive-good benchmarks, but Protean's ability to take advantage of that potential was not significantly affected, showing the robustness of our approach.
With wrong-path prefetching for instructions, we found that FDIP could consume all L1I MSHRs and thereby prevent the DIP from issuing prefetches. Indeed, some benchmarks had up to 12% of execution cycles in which no MSHRs were available. We therefore limited FDIP to 8 of the 16 MSHRs and reserved the remaining 8 for the DIP. While MSHR allocation could be adjusted dynamically, we found that statically allocating 8 MSHRs to FDIP across all applications had negligible impact on performance (at most 0.02% of cycles with no MSHRs available), and actually delivered a performance benefit of 0.038% (up to 2.3%) across all applications. As a result, our proposed configurations all statically assign 8 L1I MSHRs to FDIP and the remaining 8 to the DIP when it is active, vs. our baseline 12k BTB, which shares 16 MSHRs between the two.
As Protean can work with essentially any DIP whose metadata is largely compatible with the BTB storage, we chose the state-of-the-art Entangled Instruction Prefetcher (EIP) [49] and use the implementation provided by the authors [14]. We support two configurations that share the BTB storage for the EIP entanglement table: 4k entries, 4-way (one bank) or 8k entries, 8-way (two banks); the original EIP used a 4k-entry, 16-way table. We use physical-address prefetching in EIP to avoid the severe TLB contention EIP otherwise causes, as observed by Vavouliotis et al. [57]. During execution, EIP accesses its entanglement table after every demand access to generate prefetches.
All simulations start with 20M instructions of warm-up in the baseline 12k BTB configuration, followed by 100M instructions of simulation, or execution to the end for the shorter traces. After each 20M-instruction period we evaluate the DT for the chosen optimization goal using the performance counter values from the previous period, and choose the next configuration based on its output. We zero an L2 BTB bank if it is reconfigured to a different role (e.g., BTB to DIP, DIP to BTB, or BTB to off). We do not move entries or adjust the LRU ordering of the BTB entries across the ways.
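Putting the runtime pieces together, the per-interval control step looks roughly as follows. This is an illustrative sketch: choose_config is the decision-tree walk sketched in Section 5.1 for the active optimization goal, and the bank interface is our own assumption.

```python
# Sketch of one Protean reconfiguration step at the end of a 20M-instruction interval.
def reconfigure(choose_config, counters, current_roles, roles_for, banks):
    """Pick the next configuration and zero any bank whose role changes."""
    new_config = choose_config(counters)     # DT trained for the current goal
    new_roles = roles_for(new_config)        # e.g. ("BTB", "BTB", "DIP")
    for bank, (old, new) in enumerate(zip(current_roles, new_roles)):
        if old != new:                       # BTB <-> DIP <-> off role change
            banks[bank].zero()               # no entry migration; warm-up restarts
    return new_config, new_roles
```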
We modeled both the dynamic and static energy of the BTBs, ITLB, DTLB, L2 TLB, L1I, L1D, L2, and LLC using CACTI with a 22nm process [36]. Note that we have chosen to focus our energy evaluation on the on-chip storage structures related to instruction delivery, to investigate the impact in the core itself. However, our decision tree could easily be retrained to include energy effects from other parts of the system.

RESULTS
We evaluate Protean's ability to achieve the benefits of a DIP without its costs for our three optimization goals (Performance, Balanced, and Energy) across three metrics: Performance (IPC), Energy, and Score, in Figure 12. Results are normalized to the baseline 12k BTB configuration without a DIP. For each optimization goal and metric we present results with perfect configuration selection (Oracle) and using our online Decision Tree (DT). Results are presented for the Sensitive-good and Sensitive-bad benchmarks and include metadata warm-up costs on configuration changes. Each column shows Score, Performance, and Energy for the optimization goal labeled at the top of the column. For Score and Performance higher is better, while for Energy lower is better, which is why the Oracle extends further down in the bottom-right plot.
Classification. The online classification accuracy of the DT is a bit lower than during training (95-97.5% vs. 97-98.5%, see Section 5.1). This is because the online DT evaluation uses performance counter inputs that include warm-up effects from configuration changes, while our offline training data did not include warm-up. One extreme case of this is an outlier in the Sensitive-bad set whose score is -16 (the outlier in the Energy optimization column of Figure 12). For this benchmark, the DT repeatedly switches between the 4k/8k/12k BTB configurations, while the Oracle choice is to stay with the 12k BTB. As a result, Protean reduces performance by 14% and increases energy by 13% for this case. Overall, the 6 similar outliers are a small subset of the 674 benchmarks in the Sensitive-bad group, which demonstrates that we succeed in avoiding most of the downsides. The distribution of the configurations selected across the Sensitive-good benchmarks (across individual windows of 20M instructions) is shown in Table 4.
Impact. Across all benchmarks, the DT achieves an absolute score difference vs. the Oracle of less than 0.25/0.50 for 90%/95% of all benchmarks, highlighting the robustness of the classifier. Table 5 demonstrates that Protean comes very close to achieving the full potential benefits of the state-of-the-art DIP, while Table 6 shows that this is achieved while avoiding nearly all of its costs. We see that when optimizing for Performance, Balanced, or Energy, Protean achieves an average of 86/96/98% of the maximum potential (Oracle) for the Sensitive-good benchmarks, without requiring extra in-core metadata storage and while avoiding nearly all of the DIP's run-time costs. For the Sensitive-bad benchmarks, Protean avoids almost all of the downsides of static configurations. This demonstrates that Protean is successful in obtaining nearly all the benefits of a DIP while avoiding nearly all of its costs.

CONCLUSIONS
State-of-the-art Dedicated Instruction Prefetchers (DIPs) provide benefits for a very limited number of applications at significant static (area) and dynamic (energy and bandwidth) cost for most others. This work shows how we can achieve these benefits where possible without paying the costs. Protean accomplishes this by sharing the existing frontend BTB storage between the DIP and the BTB. We demonstrate that this is practical because the applications that benefit from DIPs are less sensitive to BTB misses due to effective Post-Fetch Correction. This allows us to dynamically re-assign BTB storage banks based on the application behavior, resulting in six frontend configurations.
We show that a Decision Tree can accurately and robustly pick a good configuration at run-time, and that we can target a range of energy/performance goals by simply training for the desired trade-off. While our use of an aggressive, but realistic, baseline FDIP prefetcher with Post-Fetch Correction limits the maximum benefits of the particular state-of-the-art DIP we considered, we were able to achieve 86% of the performance potential, 98% of the energy potential, and 96% of the balanced performance/energy potential, while avoiding essentially all of the penalties.
While the benefits today are limited by current-generation DIPs, Protean provides a general framework that can easily be re-trained for future advances. Indeed, with the development of more effective DIPs, and as applications become more challenging for FDIP, we expect Protean to be of even greater value in enabling designs that obtain the benefits of DIPs without paying their area and run-time costs.

Figure 1 :
Figure 1: Our proposed dynamic reconfiguration of the BTB to share space with a Dedicated Instruction Prefetcher (DIP) or save energy. We assume a 12k-entry, 3-bank baseline BTB storage, leading to 6 configurations.

Figure 2 :
Figure 2: Baseline design with Post-Fetch Correction (PFC) in the decode stage. The head of the FTQ (demand), FDIP prefetches, and DIP prefetches share 2 ports to the instruction cache.

Figure 3 :
Figure 3: Benefits of an infinite instruction cache or a state-of-the-art DIP over the baseline FDIP. Across our 2122 benchmark traces we see that FDIP instruction prefetching is within 1% of an infinite instruction cache for 51% of the benchmarks and within 1% of a state-of-the-art DIP for almost 95% of the benchmarks. This makes it hard to justify the area and dynamic costs of a DIP by itself.

Figure 4 :
Figure 4: The impact of a specific DIP (the Entangled Instruction Prefetcher) normalized to the FDIP baseline across our 2122 benchmark traces, on performance (blue) and energy (red). Left: the 5% of benchmarks that exhibit a performance gain of >1% with the DIP. Middle: the 84% of benchmarks that have less than a 1% performance change. Right: the remaining 11% of benchmarks that show a performance loss of >1%.
(a) Reducing the BTB to 8k entries (red) has little impact on the potential benefits of improved instruction fetch (black). (b) Many benchmarks see a significant loss with a 4k-entry BTB (red), but providing PFC for indirect branches addresses most of this (blue).

Figure 6 :
Figure 6: Exploring whether BTB and DIP effects are complementary. For most applications (top, red dots), reducing the BTB size to 8k entries has very little impact on the potential improvements of better instruction fetch (infinite instruction cache, black line). Reducing the BTB size to 4k entries causes a significant number of benchmarks to no longer be able to reach that potential (bottom, red dots below 0). By applying PFC to indirect branches, which is not realistic, we see that those benchmarks regain half of their potential (bottom, blue dots). This, combined with the dramatically greater increase in direct BTB misses shown in Figure 7, indicates that much of our ability to re-purpose the BTB storage for the DIP without losing performance potential is due to the benefit of effective PFC. We also see that the benefits of increasing BTB size alone are far smaller (top, blue line).

Figure 7 :
Figure 7: Increase in indirect (left) and overall (right) BTB MPKI for benchmarks with an infinite instruction cache and a 4k BTB over the baseline of a 12k BTB and normal instruction cache. Red dots show benchmarks sensitive to reducing the BTB capacity, from Figure 6b.

Figure 8 :
Figure 8: Configuration space normalized to the 12k BTB FDIP baseline with 16 MSHRs. The score for the furthest red point is shown for a performance-only objective function (solid black line) and an equal performance-energy objective function (dashed black line).

Figure 9 :
Figure 9: Impact of the optimization goal on configuration choice. The performance/energy trade-off is shown by the slope of the lines. Insensitive benchmarks' best and worst configurations are marked in white and red, respectively. The best configurations for the Sensitive-good applications are marked in black.

Figure 10 :
Figure 10: Potential improvements from choosing the best configuration for each Sensitive-good benchmark (Oracle, left) vs. a single policy for all benchmarks (Static, right), for performance (a) and energy (b), and the average impact of changing the optimization goal on the score in the performance/energy configuration space (c).

Figure 11 :
Figure 11: Impact of the reconfiguration interval with and without the cost of warming the metadata, with an Oracle configuration selector. Without the cost of warm-up (blue lines), shorter intervals are able to take advantage of application phases. But with the warm-up costs (red line), longer intervals are required to amortize the warm-up overhead. As the traces are limited to 100M instructions, the 100M interval has only one re-configuration (after a warm-up interval of 20M instructions).

Figure 12 :
Figure 12: Ability of Protean to achieve the Performance and Energy benefits of a DIP (Sensitive-good) without suffering from its costs (Sensitive-bad), nor requiring its additional storage. Oracle (left, blue) shows the potential with perfect configuration selection compared to our online Decision Tree (right, red). Left column: Performance goal, Middle column: Balanced goal, Right column: Energy goal. Results are normalized to the baseline 12k BTB with 16 shared L1I MSHRs. For Energy, 1 outlier extends beyond the plot, as discussed in the text.

Table 1 :
Benchmark sensitivities by optimization goal (Figure 9).

Table 2 :
Top 3 Performance Counters

Table 4 :
Configurations for Sensitive-good Benchmarks

Table 5 :
Protean's ability to achieve the potential benefits vs. the Oracle for Sensitive-good benchmarks. Positive numbers indicate better results: increased score, increased performance, or energy savings.

Table 6 :
Protean's ability to avoid the potential costs vs. the Oracle for Sensitive-bad benchmarks.