
Understanding RowHammer Under Reduced Wordline Voltage:

An Experimental Study Using Real DRAM Devices


A. Giray Yağlıkçı1 Haocong Luo1 Geraldo F. de Oliveira1 Ataberk Olgun1 Minesh Patel1
Jisung Park1 Hasan Hassan1 Jeremie S. Kim1 Lois Orosa1,2 Onur Mutlu1
1 ETH Zürich 2 Galicia Supercomputing Center (CESGA)
arXiv:2206.09999v1 [cs.AR] 20 Jun 2022

Abstract

RowHammer is a circuit-level DRAM vulnerability, where repeatedly activating and precharging a DRAM row, and thus alternating the voltage of a row's wordline between low and high voltage levels, can cause bit flips in physically nearby rows. Recent DRAM chips are more vulnerable to RowHammer: with technology node scaling, the minimum number of activate-precharge cycles to induce a RowHammer bit flip reduces and the RowHammer bit error rate increases. Therefore, it is critical to develop effective and scalable approaches to protect modern DRAM systems against RowHammer. To enable such solutions, it is essential to develop a deeper understanding of the RowHammer vulnerability of modern DRAM chips. However, even though the voltage toggling on a wordline is a key determinant of RowHammer vulnerability, no prior work experimentally demonstrates the effect of wordline voltage (VPP) on the RowHammer vulnerability. Our work closes this gap in understanding.

This is the first work to experimentally demonstrate on 272 real DRAM chips that lowering VPP reduces a DRAM chip's RowHammer vulnerability. We show that lowering VPP 1) increases the number of activate-precharge cycles needed to induce a RowHammer bit flip by up to 85.8% with an average of 7.4% across all tested chips and 2) decreases the RowHammer bit error rate by up to 66.9% with an average of 15.2% across all tested chips. At the same time, reducing VPP only marginally worsens a DRAM cell's access latency, charge restoration, and data retention time, within the guardbands of system-level nominal timing parameters, for 208 out of 272 tested chips. We conclude that reducing VPP is a promising strategy for reducing a DRAM chip's RowHammer vulnerability without requiring modifications to DRAM chips.

1. Introduction

Manufacturing process technology scaling continuously increases DRAM storage density by reducing circuit component sizes and enabling tighter packing of DRAM cells. Such advancements reduce DRAM chip cost but worsen DRAM reliability [1, 2]. Kim et al. [3] show that modern DRAM chips are susceptible to a read disturbance effect, called RowHammer, where repeatedly activating and precharging a DRAM row (i.e., aggressor row) many times (i.e., hammering the aggressor row) can cause bit flips in physically nearby rows (i.e., victim rows) at consistently predictable bit locations [3–15].

Many works [3, 4, 6–48] demonstrate that RowHammer is a serious security vulnerability that can be exploited to mount system-level attacks, such as escalating privilege or leaking private data. To make matters worse, recent experimental studies on real DRAM chips [3, 8, 9, 11, 12, 36, 37, 43] find that the RowHammer vulnerability is more severe in newer DRAM chip generations. For example, 1) the minimum aggressor row activation count necessary to cause a RowHammer bit flip (HCfirst) is only 4.8K and 10K for some newer LPDDR4 and DDR4 DRAM chips (manufactured in 2019–2020), which is 14.4× and 6.9× lower than the HCfirst of 69.2K for some older DRAM chips (manufactured in 2010–2013) [11]; and 2) the fraction of DRAM cells that experience a bit flip in a DRAM row (BER) after hammering two aggressor rows 30K times is 2×10^-6 for some newer DRAM chips from 2019–2020, which is 500× larger than that for some older chips manufactured in 2016–2017 (4×10^-9) [11]. As the RowHammer vulnerability worsens, ensuring RowHammer-safe operation becomes more expensive across a broad range of system-level design metrics, including performance overhead, energy consumption, and hardware complexity [8, 9, 11, 12, 36, 43, 49–52].

To find effective and efficient solutions for RowHammer, it is essential to develop a deeper understanding of the RowHammer vulnerability of modern DRAM chips [8, 9, 12]. Prior works [3, 4, 6–12, 15] hypothesize that the RowHammer vulnerability originates from circuit-level interference 1) between wordlines that are physically nearby each other and 2) between a wordline and physically nearby DRAM cells. Existing circuit-level models [7, 10, 15] suggest that toggling of the voltage on a wordline is a key determinant of how much repeated aggressor row activations disturb physically nearby circuit components. However, it is still unclear 1) how the magnitude of the wordline voltage (VPP) affects modern DRAM chips' RowHammer vulnerability and 2) whether it is possible to reduce RowHammer vulnerability by reducing VPP without significantly worsening other issues related to reliable DRAM operation. Therefore, our goal is to experimentally understand how VPP affects RowHammer vulnerability and DRAM operation.

Our Hypothesis. We hypothesize that lowering VPP can reduce RowHammer vulnerability without significantly impacting reliable DRAM operation. To test this hypothesis, we experimentally demonstrate how RowHammer vulnerability varies with VPP by conducting rigorous experiments on 272 real DDR4 DRAM chips from three major DRAM vendors. To isolate the effect of VPP and to avoid failures in DRAM chip I/O circuitry, we scale only VPP and supply the rest of the DRAM circuitry using the nominal supply voltage (VDD).

Key Findings. Our experimental results yield six novel observations about VPP's effect on RowHammer (§5). Our key observation is that a DRAM chip's RowHammer vulnerability reduces when VPP is scaled down: 1) HCfirst increases by 7.4% (85.8%), and 2) the BER caused by a RowHammer attack reduces by 15.2% (66.9%), on average (at maximum) across all tested DDR4 DRAM chips.

To investigate the potential adverse effects of reducing VPP on reliable DRAM operation, we conduct experiments using both real DDR4 DRAM chips and SPICE [53] simulations that measure how reducing VPP affects a DRAM cell's 1) row activation latency, 2) charge restoration process, and 3) data retention time. Our measurements yield nine novel observations (§6). We make two key observations. First, VPP reduction only marginally worsens the access latency, charge restoration process, and data retention time of most DRAM chips: 208 out of 272 tested DRAM chips reliably operate using nominal timing parameters due to the built-in safety margins (i.e., guardbands) that DRAM manufacturers already provide in nominal timing parameters. Second, the 64 DRAM chips that exhibit erroneous behavior at reduced VPP can reliably operate using 1) a longer row activation latency, i.e., 24 ns / 15 ns for 48 / 16 chips (§6.1), 2) simple single-error-correcting codes [54] (§6.3), or 3) doubling the refresh rate for only 16.4% of DRAM rows.

We make the following major contributions in this paper:
◦ We present the first experimental RowHammer characterization study under reduced wordline voltage (VPP).
◦ Our experiments on 272 real DDR4 DRAM chips show that when a DRAM module is operated at a reduced VPP, an attacker 1) needs to hammer a row in the module more times (by 7.4% / 85.8%) to induce a bit flip and 2) can cause fewer (15.2% / 66.9%) RowHammer bit flips in the module (on average / at maximum across all tested modules).
◦ We present the first experimental study of how reducing VPP affects DRAM access latency, the charge restoration process, and data retention time.
◦ Our experiments on real DRAM chips show that reducing VPP slightly worsens DRAM access latency, the charge restoration process, and data retention time. Most (208 out of 272) DRAM chips reliably operate under reduced VPP, while the remaining 64 chips reliably operate using increased row activation latency, simple error-correcting codes, or doubling the refresh rate for only 16.4% of the rows.
2. Background

We provide a high-level overview of DRAM design and operation as relevant to our work. For a more detailed overview, we refer the interested reader to prior works [3, 55–79].

2.1. DRAM Background

DRAM Organization. Fig. 1 illustrates a DRAM module's hierarchical organization. At the lowest level of the hierarchy, a single DRAM cell comprises 1) a storage capacitor that stores a single bit of data encoded using the charge level in the capacitor and 2) an access transistor that is used to read from and write to the storage capacitor. The DRAM cell is connected to a bitline that is used to access the data stored in the cell and a wordline that controls access to the cell.

Figure 1: Organization of a typical modern DRAM module.

DRAM cells are organized as a two-dimensional array to form a bank. Each cell in a bank is addressed by its row and column. Each DRAM cell in a DRAM row is connected to a common wordline via its access transistor. A bitline connects a column of DRAM cells to a DRAM sense amplifier to read or write data. A row of sense amplifiers is called a row buffer. Multiple (e.g., 16 [80]) DRAM banks are put together to form a single DRAM chip. Multiple chips form a rank. Chips in a rank operate in lock-step such that each chip serves a portion of the data for each DRAM access. A DRAM module may have one or more ranks, communicating with the memory controller over the memory channel.

DRAM Operation. The memory controller services main memory requests using three key operations.

1) Row Activation. The memory controller sends an ACT command along with a row address to a bank, and the DRAM chip asserts the corresponding wordline to activate the DRAM row. Asserting a wordline connects each cell capacitor in the activated row to its corresponding bitline, perturbing the bitline voltage. Then, the sense amplifier senses and amplifies the voltage perturbation until the cell charge is restored. The data is accessible when the bitline voltage is amplified to a certain level. The latency from the start of row activation until the data is reliably readable is called row activation latency (tRCD). A DRAM cell loses its charge during row activation, and thus its initial charge needs to be restored before the row is closed. The latency from the start of row activation until the completion of the DRAM cell's charge restoration is called charge restoration latency (tRAS). DRAM manufacturers provide a built-in safety margin in the nominal timing parameters to account for the worst-case latency in tRCD and tRAS operations [58, 60, 81].

2) Read/Write. The memory controller sends a RD/WR command along with a column address to perform a read or write to the activated row in the DRAM bank. A RD command serves data from the row buffer to the memory channel. A WR command writes data into the row buffer, which subsequently modifies the data stored in the DRAM cell. The latency of performing a read/write operation is called column access latency (tCL) / column write latency (tCWL).

3) Precharge. The memory controller sends a PRE command to an active bank. The DRAM chip de-asserts the active row's wordline and precharges the bitlines to prepare the DRAM bank for a new row activation. The timing parameter for precharge is called precharge latency (tRP), which is the latency between issuing a PRE command and when the DRAM bank is ready for a new row activation.

DRAM Refresh. A DRAM cell inherently leaks charge and thus can retain data for only a limited amount of time, called data retention time. To prevent data loss due to such leakage, the memory controller periodically issues REF (refresh) commands that ensure every DRAM cell is refreshed at a fixed interval, called the refresh window (tREFW) (e.g., every 64 ms [80, 82, 83] or 32 ms [84]).
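To make the interplay among these timing parameters concrete, the following minimal Python sketch spaces the three commands of one read access according to tRCD, tRAS, and tRP. The timing values are illustrative DDR4-like numbers and issue() is a hypothetical logging callback; this is not code from the paper or from SoftMC.

# A minimal sketch of how a memory controller orders DRAM commands while
# honoring the timing parameters described above (tRCD, tRAS, tRP).
# The values and helper names are illustrative, not from the paper.

TIMINGS_NS = {"tRCD": 13.5, "tRAS": 35.0, "tRP": 13.5}  # example DDR4-like values

def read_one_column(issue, row, col, t=0.0):
    """Issue ACT -> RD -> PRE to one bank, spacing commands by the timings."""
    issue(t, "ACT", row)                      # assert the row's wordline
    issue(t + TIMINGS_NS["tRCD"], "RD", col)  # data is reliably readable after tRCD
    # The row may be closed only after charge restoration completes (tRAS).
    t_pre = t + TIMINGS_NS["tRAS"]
    issue(t_pre, "PRE", row)                  # de-assert wordline, precharge bitlines
    # The next ACT to this bank is legal only after the precharge latency (tRP).
    return t_pre + TIMINGS_NS["tRP"]

# Example: two back-to-back accesses to different rows of the same bank.
log = lambda t, cmd, addr: print(f"{t:6.1f} ns  {cmd}  {addr}")
t_next = read_one_column(log, row=0x10, col=3)
read_one_column(log, row=0x11, col=7, t=t_next)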
2.2. DRAM Voltage Control

Modern DRAM chips (e.g., DDR4 [80], DDR5 [83], GDDR5X [85], and GDDR6 [86] standard compliant ones) use two separate voltage rails: 1) supply voltage (VDD), which is used to operate the core DRAM array and peripheral circuitry (e.g., the sense amplifiers, row/column decoders, precharge and I/O logic), and 2) wordline voltage (VPP), which is exclusively used to assert a wordline during a DRAM row activation. VPP is generally significantly higher (e.g., 2.5 V [87–90]) than VDD (e.g., 1.25–1.5 V [87–90]) in order to ensure 1) full activation of all access transistors of a row when the wordline is asserted and 2) low leakage when the wordline is de-asserted. VPP is internally generated from VDD in older DRAM chips (e.g., DDR3 [82]). However, newer DRAM chips (e.g., DDR4 onwards [80, 83, 85, 86]) expose both VDD and VPP rails to external pins, allowing VDD and VPP to be independently driven with different voltage sources.

2.3. The RowHammer Vulnerability

Modern DRAM is susceptible to a circuit-level vulnerability known as RowHammer [3–15], where a cell's stored data can be corrupted by repeatedly activating physically nearby (aggressor) rows. RowHammer results in unwanted software-visible bit flips and breaks memory isolation [3, 8, 9]. RowHammer poses a significant threat to system security, reliability, and DRAM technology scaling. First, RowHammer leads to data corruption, system crashes, and security attacks if not appropriately mitigated. Many prior works [8, 9, 16–48] show that RowHammer can be exploited to mount system-level attacks to compromise system security (e.g., to acquire root privileges or leak private data). Second, RowHammer vulnerability worsens as DRAM technology scales to smaller node sizes [3, 8, 9, 11, 12, 36, 37, 43]. This is because process technology shrinkage reduces the size of circuit elements, exacerbating charge leakage paths in and around each DRAM cell. Prior works [11, 12, 36, 43] experimentally demonstrate with modern DRAM chips that RowHammer is and will continue to be an increasingly significant reliability, security, and safety problem going forward [8, 9], given that the minimum aggressor row activation count necessary to cause a RowHammer bit flip (HCfirst) is only 4.8K in modern DRAM chips [11] and it continues to reduce.

We describe two major error mechanisms that lead to RowHammer, as explained by prior works [4, 6, 7, 10, 15, 91, 92]: 1) electron injection/diffusion/drift and 2) capacitive crosstalk. The electron injection/diffusion/drift mechanism creates temporary charge leakage paths that degrade the voltage of a cell's storage capacitor [6, 10, 15, 91]. A larger voltage difference between a wordline and a DRAM cell or between two wordlines exacerbates the electron injection/diffusion/drift error mechanism. The capacitive crosstalk mechanism exacerbates charge leakage paths in and around a DRAM cell's capacitor [4, 15, 91, 92] due to the parasitic capacitance between two wordlines or between a wordline and a DRAM cell.

2.4. Wordline Voltage's Impact on DRAM Reliability

RowHammer. As explained in §2.3, a larger VPP exacerbates both the electron injection/diffusion/drift and capacitive crosstalk mechanisms. Therefore, we hypothesize that the RowHammer vulnerability of a DRAM chip increases as VPP increases. Unfortunately, there is no prior work that tests this hypothesis and quantifies the effect of VPP on real DRAM chips' RowHammer vulnerability. §3 discusses this hypothesis in further detail, and §5 experimentally examines the effects of changing VPP on the RowHammer vulnerability of real DRAM chips.

Row Activation and Charge Restoration. An access transistor turns on (off) when its gate voltage is higher (lower) than a threshold. An access transistor's gate is connected to a wordline (Fig. 1) and driven by VPP (ground) when the row is activated (precharged).^1 Between VPP and ground, a larger access transistor gate voltage forms a stronger channel between the bitline and the capacitor. A strong channel allows fast DRAM row activation and full charge restoration. Based on these properties, we hypothesize that a larger VPP provides smaller row activation latency and increased data retention time, leading to more reliable DRAM operation.^2 Unfortunately, there is no prior work that tests this hypothesis and quantifies VPP's effect on real DRAM chips' reliable operation (i.e., row activation and charge restoration characteristics). §6 studies the effect of reduced VPP on DRAM operation reliability using both real-device characterizations and SPICE [53, 95] simulations.

^1 To increase DRAM cell retention time, modern DRAM chips may apply a negative voltage to the wordline [93, 94] when the wordline is not asserted. Doing so reduces the leakage current and this improves data retention.
^2 Increasing/decreasing VPP does not affect the reliability of RD/WR and PRE operations since the DRAM circuit components involved in these operations are powered using only VDD.
3. Motivation

RowHammer is a critical vulnerability for modern DRAM-based computing platforms [3–48]. Many prior works [3, 5, 13, 30, 45, 48, 50–52, 65, 80, 91, 96–114] propose RowHammer mitigation mechanisms that aim to prevent RowHammer bit flips. Unfortunately, RowHammer solutions need to consider a large number of design space constraints that include cost, performance impact, energy and power overheads, hardware complexity, technology scalability, security guarantees, and changes to existing DRAM standards and interfaces. Recent works [8, 9, 11, 12, 36, 43, 49–52] suggest that many existing proposals may fall short in one or more of these dimensions. As a result, there is a critical need for developing better RowHammer mitigation mechanisms.

To enable more effective and efficient RowHammer mitigation mechanisms, it is critical to develop a comprehensive understanding of how RowHammer bit flips occur [8, 9, 12]. In this work, we observe that although the wordline voltage (VPP) is expected to affect the amount of disturbance caused by a RowHammer attack [3, 4, 6–15], no prior work experimentally studies its real-world impact on a DRAM chip's RowHammer vulnerability.^3 Therefore, our goal is to understand how VPP affects RowHammer vulnerability and DRAM operation.

^3 Both VPP and VDD can affect a DRAM chip's RowHammer vulnerability. However, changing VDD can negatively impact DRAM reliability in ways that are unrelated to RowHammer (e.g., I/O circuitry instabilities) because VDD supplies power to all logic elements within the DRAM chip. In contrast, VPP affects only the wordline voltage, so VPP can influence RowHammer without adverse effects on unrelated parts of the DRAM chip.

To achieve this goal, we start with the hypothesis that VPP can be used to reduce a DRAM chip's RowHammer vulnerability without impacting the reliability of normal DRAM operations. Reducing a DRAM chip's RowHammer vulnerability via VPP scaling has two key advantages. First, as a circuit-level RowHammer mitigation approach, VPP scaling is complementary to existing system-level and architecture-level RowHammer mitigation mechanisms [3, 5, 13, 30, 45, 48, 50–52, 65, 80, 91, 96–114]. Therefore, VPP scaling can be used alongside these mechanisms to increase their effectiveness and/or reduce their overheads. Second, VPP scaling can be implemented with a fixed hardware cost for a given power budget, irrespective of the number and types of DRAM chips used in a system.

We test this hypothesis through the first experimental RowHammer characterization study under reduced VPP. In this study, we test 272 real DDR4 DRAM chips from three major DRAM manufacturers. Our study is inspired by state-of-the-art analytical models for RowHammer, which suggest that the effect of RowHammer's underlying error mechanisms depends on VPP [7, 10, 15]. §5 reports our findings, which yield valuable insights into how VPP impacts the circuit-level RowHammer characteristics of modern DRAM chips, both confirming our hypothesis and supporting VPP scaling as a promising new dimension toward robust RowHammer mitigation.

4. Experimental Methodology

We describe our methodology for two analyses. First, we experimentally characterize the behavior of 272 real DDR4 DRAM chips from three major manufacturers under reduced VPP in terms of RowHammer vulnerability (§4.2), row activation latency (tRCD) (§4.3), and data retention time (§4.4). Second, to verify our observations from real-device experiments, we investigate reduced VPP's effect on both DRAM row activation and charge restoration using SPICE [53, 95] simulations (§4.5).

4.1. Real-Device Testing Infrastructure

We conduct real-device characterization experiments using an infrastructure based on SoftMC [64, 115], the state-of-the-art FPGA-based open-source infrastructure for DRAM characterization. We extensively modify SoftMC to test modern DDR4 DRAM chips. Fig. 2 shows a picture of our experimental setup. We attach heater pads to the DRAM chips that are located on both sides of a DDR4 DIMM. We use a MaxWell FT200 PID temperature controller [116] connected to the heater pads to maintain the DRAM chips under test at a preset temperature level with a precision of ±0.1 °C. We program a Xilinx Alveo U200 FPGA board [117] with the modified version of SoftMC. The FPGA board is connected to a host machine through a PCIe port for running our tests. We connect the DRAM module to the FPGA board via a commercial interposer board from Adexelec [118] with current measurement capability. The interposer board enforces the power to be supplied through a shunt resistor on the VPP rail. We remove this shunt resistor to electrically disconnect the VPP rails of the DRAM module and the FPGA board. Then, we supply power to the DRAM module's VPP power rail from an external TTi PL068-P power supply [119], which enables us to control VPP with a precision of ±1 mV. We start testing each DRAM module at the nominal VPP of 2.5 V. We gradually reduce VPP in 0.1 V steps down to the lowest VPP at which the DRAM module can successfully communicate with the FPGA (VPPmin).

Figure 2: Our experimental setup based on SoftMC [64, 115].

To show that our observations are not specific to a certain DRAM architecture/process but rather common across different designs and generations, we test DDR4 DRAM modules from all three major manufacturers with different die revisions, purchased from the retail market. Table 1 provides the chip density, die revision (Die Rev.), chip organization (Org.), and manufacturing date of the tested DRAM modules.^4 We report the manufacturing date of these modules in the form week-year. All tested modules are listed in Table 3 in Appendix A.

^4 Die Rev. and Date columns are blank if undocumented.

Table 1: Summary of the tested DDR4 DRAM chips.

  Mfr.         #DIMMs  #Chips  Density  Die Rev.  Org.  Date
  Mfr. A       1       8       4Gb      -         ×8    48-16
  (Micron)     4       64      8Gb      B         ×4    11-19
               3       24      4Gb      F         ×8    07-21
               2       16      4Gb      -         ×8    -
  Mfr. B       2       16      8Gb      B         ×8    52-20
  (Samsung)    1       8       8Gb      C         ×8    19-19
               3       24      8Gb      D         ×8    10-21
               1       8       4Gb      E         ×8    08-17
               1       8       4Gb      F         ×8    02-21
  Mfr. C       2       16      8Gb      -         ×8    -
  (SK Hynix)   2       16      16Gb     A         ×8    51-20
               3       24      4Gb      B         ×8    02-21
               2       16      4Gb      C         ×8    -
               3       24      8Gb      D         ×8    48-20

Temperature. We conduct RowHammer and tRCD tests at 50 °C and retention tests at 80 °C to ensure both stable and representative testing conditions.^5 We conduct tRCD tests at 50 °C because 50 °C is our infrastructure's minimum stable temperature due to cooling limitations.^6 We conduct retention tests at 80 °C to capture any effects of increased charge leakage [74] at the upper bound of regular operating temperatures [80].^7

^5 A recent work [12] shows a complex interaction between RowHammer and temperature, suggesting that one should repeat characterization at many different temperature levels to find the worst-case RowHammer vulnerability. Since such characterization requires many months-long testing time, we leave it to future work to study the temperature, voltage, and RowHammer interaction in detail.
^6 We do not repeat the tRCD tests at different temperature levels because prior work [60] shows small variation in tRCD with varying temperature.
^7 DDR4 DRAM chips are refreshed at 2× the nominal refresh rate when the chip temperature reaches 85 °C [80]. Thus, we choose 80 °C as a representative high temperature within the regular operating temperature range. For a detailed analysis of the effect of temperature on data retention in DRAM, we refer the reader to [74, 77, 120].

Disabling Sources of Interference. To understand fundamental device behavior in response to VPP reduction, we make sure that VPP is the only control variable in our experiments so that we can accurately measure the effects of VPP on RowHammer, row activation latency (tRCD), and data retention time. To do so, we follow four steps, similar to prior rigorous RowHammer [11, 12], row activation latency [58, 60, 81], and data retention time [74, 77] characterization methods. First, we disable DRAM refresh to ensure no disturbance on the desired access pattern. Second, we ensure that during our RowHammer and tRCD experiments, no bit flips occur due to data retention failures by conducting each experiment within a time period of less than 30 ms (i.e., much shorter than the nominal tREFW of 64 ms). Third, we test DRAM modules without error-correction code (ECC) support to ensure neither on-die ECC [121–127] nor rank-level ECC [32, 128] can affect our observations by correcting VPP-reduction-induced bit flips. Fourth, we disable known on-DRAM-die RowHammer defenses (i.e., TRR [36, 43, 83, 88, 129, 130]) by not issuing refresh commands throughout our tests [11, 12, 36, 43] (as all TRR defenses require refresh commands to work).

Data Patterns. We use six commonly used data patterns [3, 11, 12, 60, 66, 67, 72, 74, 81, 131, 132]: row stripe (0xFF/0x00), checkerboard (0xAA/0x55), and thickchecker (0xCC/0x33). We identify the worst-case data pattern (WCDP) for each row among these six patterns at nominal VPP separately for each of the RowHammer (§4.2), row activation latency (tRCD) (§4.3), and data retention time (§4.4) tests. We use each row's corresponding WCDP for a given test at reduced VPP levels.
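As a concrete illustration of the VPP sweep described in §4.1, the sketch below steps VPP down from the nominal 2.5 V in 0.1 V steps until the module stops responding. power_supply.set_voltage() and softmc.communicates() are hypothetical wrappers around the external power supply and the modified SoftMC host software, not actual APIs from the paper's artifacts.

# Sketch of the VPP sweep described in Section 4.1: start at the nominal
# 2.5 V and step down by 0.1 V until the module stops responding (VPPmin).
# power_supply and softmc are hypothetical wrappers, named for illustration.

def find_vppmin(power_supply, softmc, v_nominal=2.5, v_step=0.1):
    vpp = v_nominal
    vppmin = v_nominal
    while vpp > 0:
        power_supply.set_voltage(vpp)   # drive the module's VPP rail externally
        if not softmc.communicates():   # module no longer responds to commands
            break
        vppmin = vpp                    # lowest VPP that still works so far
        vpp = round(vpp - v_step, 2)    # 0.1 V steps; round avoids FP drift
    return vppmin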
4.2. RowHammer Experiments

We perform multiple experiments to understand how VPP affects the RowHammer vulnerability of a DRAM chip.

Metrics. We measure the RowHammer vulnerability of a DRAM chip using two metrics: 1) the minimum aggressor row activation count necessary to cause a RowHammer bit flip (HCfirst) and 2) the fraction of DRAM cells that experience a bit flip in a DRAM row (BER) caused by a double-sided RowHammer attack with a fixed hammer count of 300K per aggressor row.^8

^8 We choose the 300K hammer count because 1) it is low enough to be used in a system-level RowHammer attack in a real system, and 2) it is high enough to provide us with a large number of bit flips to make meaningful observations in all DRAM modules we tested.

WCDP. We choose the WCDP as the data pattern that causes the lowest HCfirst. If there are multiple data patterns that cause the lowest HCfirst, we choose the data pattern that causes the largest BER for the fixed hammer count of 300K.^9

^9 To investigate if the WCDP changes with reduced VPP, we repeat WCDP determination experiments for different VPP values for 16 DRAM chips. We observe that the WCDP changes for only 2.4% of tested rows, causing less than 9% deviation in HCfirst for 90% of the affected rows. We leave a detailed sensitivity analysis of WCDP to VPP for future work.

RowHammer Tests. Alg. 1 describes the core test loop of each RowHammer test that we run. The algorithm performs a double-sided RowHammer attack on each row within a DRAM bank. A double-sided RowHammer attack activates the two attacker rows that are physically adjacent to a victim row (i.e., the victim row's two immediate neighbors) in an alternating manner. We define hammer count (HC) as the number of times each physically-adjacent row is activated. In this study, we perform double-sided attacks instead of single- [3] or many-sided attacks (e.g., as in TRRespass [36], U-TRR [43], and BlackSmith [44]) because a double-sided attack is the most effective RowHammer attack when no RowHammer defense mechanism is employed: it reduces HCfirst and increases BER compared to both single- and many-sided attacks [3, 11, 12, 36, 43, 44]. Due to time limitations, 1) we test 4K rows per DRAM module (four chunks of 1K rows evenly distributed across a DRAM bank) and 2) we run each test ten times and record the smallest (largest) observed HCfirst (BER) to account for the worst case.

Alg. 1: Test for HCfirst and BER for a Given VPP

  // RAvictim: victim row address
  // WCDP: worst-case data pattern
  // HC: number of activations per aggressor row
  Function measure_BER(RAvictim, WCDP, HC):
      initialize_row(RAvictim, WCDP)
      initialize_aggressor_rows(RAvictim, bitwise_inverse(WCDP))
      hammer_doublesided(RAvictim, HC)
      BERrow = compare_data(RAvictim, WCDP)
      return BERrow

  // Vpp: wordline voltage for the experiment
  // WCDP_list: the list of WCDPs (one WCDP per row)
  // row_list: the list of tested rows
  Function test_loop(Vpp, WCDP_list):
      set_vpp(Vpp)
      foreach RAvictim in row_list do
          HC = 300K        // initial hammer count to test
          HCstep = 150K    // how much to increment/decrement HC
          while HCstep > 100 do
              BERrowmax = 0
              for i <- 0 to num_iterations do
                  BERrow = measure_BER(RAvictim, WCDP, HC)
                  record_BER(Vpp, RAvictim, WCDP, HC, BERrow, i)
                  BERrowmax = max(BERrowmax, BERrow)
              end
              if BERrowmax == 0 then
                  HC += HCstep    // increase HC if no bit flips occur
              else
                  HC -= HCstep    // reduce HC if a bit flip occurs
              end
              HCstep = HCstep / 2
          end
          record_HCfirst(Vpp, RAvictim, WCDP, HC)
      end

Finding Physically Adjacent Rows. DRAM-internal address mapping schemes [37, 87] are used by DRAM manufacturers to translate logical DRAM addresses (e.g., row, bank, and column) that are exposed over the DRAM interface (to the memory controller) to physical DRAM addresses (e.g., the physical location of a row). Internal address mapping schemes allow 1) post-manufacturing row repair techniques to repair erroneous DRAM rows by remapping these rows to spare rows and 2) DRAM manufacturers to organize DRAM internals in a cost-optimized way, e.g., by organizing internal DRAM buffers hierarchically [67, 133]. The mapping scheme can vary substantially across different DRAM chips [3, 12, 14, 37, 55, 67, 68, 72, 74, 124, 134–137]. For every victim DRAM row we test, we identify the two neighboring physically-adjacent DRAM row addresses that the memory controller can use to access the aggressor rows in a double-sided RowHammer attack. To do so, we reverse-engineer the physical row organization using techniques described in prior works [11, 12].
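The core of Alg. 1 is a binary search over the hammer count. The following Python rendering of that search may help clarify it; measure_ber() is a hypothetical stand-in for the FPGA-side routine that initializes the victim and aggressor rows, performs the double-sided hammering, and returns the victim row's BER.

# Python rendering of Alg. 1's binary search over the hammer count (HC).
# measure_ber(victim_row, wcdp, hc) is a hypothetical wrapper around the
# FPGA-side test routine; it returns the victim row's bit error rate.

def find_hcfirst(measure_ber, victim_row, wcdp, num_iterations=10):
    hc, hc_step = 300_000, 150_000   # initial hammer count and step (Alg. 1)
    while hc_step > 100:
        # Worst case over repeated runs at this hammer count.
        ber_max = max(measure_ber(victim_row, wcdp, hc)
                      for _ in range(num_iterations))
        if ber_max == 0:
            hc += hc_step            # no bit flips: hammer more
        else:
            hc -= hc_step            # bit flips observed: hammer less
        hc_step //= 2
    return hc                        # converges near HCfirst for this row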
4.3. Row Activation Latency (tRCD) Experiments

We conduct experiments to find how a DRAM chip's row activation latency (tRCD) changes with reduced VPP.

Metric. We measure the minimum time delay required (tRCDmin) between a row activation and the following read operation to ensure that there are no bit flips in the entire DRAM row.

WCDP. We choose the WCDP as the data pattern that leads to the largest observed tRCDmin.

tRCD Tests. Alg. 2 describes the core test loop of each tRCD test that we run. The algorithm sweeps tRCD starting from the nominal tRCD of 13.5 ns with steps of 1.5 ns.^10 We decrement (increment) tRCD by 1.5 ns until we observe at least one (no) bit flip in the entire DRAM row in order to pinpoint tRCDmin. To test a DRAM row for a given tRCD, the algorithm 1) initializes the row with the row's WCDP, 2) performs an access using the given tRCD for each column in the row, and 3) checks if the access results in any bit flips. After testing each column in a DRAM row, the algorithm identifies the row's tRCDmin as the minimum tRCD that does not cause any bit flip in the entire DRAM row. Due to time limitations, we 1) test the same set of rows as we use in the RowHammer tests (§4.2) and 2) run each test ten times and record the largest tRCDmin for each row across all runs.^11

^10 Our version of SoftMC can send a DRAM command every 1.5 ns due to the clock frequency limitations in the FPGA's physical DRAM interface.
^11 To understand whether the reliable DRAM row activation latency changes over time, we repeat these tests for 24 DRAM chips after one week, during which the chips are tested for RowHammer vulnerability. We observe that only 2.1% of tested DRAM rows experience only a small variation (<1.5 ns) in tRCD. This result is consistent with the results of prior works [60, 69, 81].

Alg. 2: Test for Row Activation Latency for a Given VPP

  // Vpp: wordline voltage for the experiment
  // WCDP_list: the list of WCDPs (one WCDP per row)
  // row_list: the list of tested rows
  Function test_loop(Vpp, WCDP_list, row_list):
      set_vpp(Vpp)
      foreach RA in row_list do
          tRCD = 13.5 ns
          found_faulty, found_reliable = False, False
          while not found_faulty or not found_reliable do
              is_faulty = False
              for i <- 0 to num_iterations do
                  foreach column C in row RA do
                      initialize_row(RA, WCDP_list[RA])
                      activate_row(RA, tRCD)    // activate the row using tRCD
                      read_data = read_col(C)
                      close_row(RA)
                      BERcol = compare(WCDP_list[RA], read_data)
                      if BERcol > 0 then is_faulty = True
                  end
              end
              if is_faulty then {tRCD += 1.5 ns; found_faulty = True;}
              else {tRCDmin = tRCD; tRCD -= 1.5 ns; found_reliable = True;}
          end
          record_tRCDmin(RA, tRCDmin)
      end

4.4. Data Retention Time Experiments

We conduct data retention time experiments to understand the effects of VPP on DRAM cell data retention characteristics. We test the same set of DRAM rows as we use in the RowHammer tests (§4.2) for a set of fixed refresh windows from 16 ms to 16 s in increasing powers of two.

Metric. We measure the fraction of DRAM cells that experience a bit flip in a DRAM row (retention-BER) due to violating a DRAM row's data retention time, using a reduced refresh rate.

WCDP. We choose the WCDP as the data pattern which causes a bit flip at the smallest refresh window (tREFW) among the six data patterns. If we find more than one such data pattern, we choose the one that leads to the largest BER for a tREFW of 16 s.

Data Retention Time Tests. Alg. 3 describes how we perform data retention tests to measure retention-BER for a given VPP and refresh rate. The algorithm 1) initializes a DRAM row with the WCDP, 2) waits as long as the given refresh window, and 3) reads and compares the data in the DRAM row to the row's initial data.

Alg. 3: Test for Data Retention Times for a Given VPP

  // Vpp: wordline voltage for the experiment
  // WCDP_list: the list of WCDPs (one WCDP per row)
  // row_list: the list of tested rows
  Function test_loop(Vpp, WCDP_list, row_list):
      set_vpp(Vpp)
      tREFW = 16 ms
      while tREFW <= 16 s do
          for i <- 0 to num_iterations do
              foreach RA in row_list do
                  initialize_row(RA, WCDP_list[RA])
                  wait(tREFW)
                  read_data = read_row(RA)
                  BERrow = compare_data(WCDP_list[RA], read_data)
                  record_retention_errors(RA, tREFW, BERrow)
              end
          end
          tREFW = tREFW × 2
      end

4.5. SPICE Model

To provide insights into our real-chip-based experimental observations about the effect of reduced VPP on row activation latency and data retention time, we conduct a set of SPICE [53, 95] simulations to estimate the bitline and cell voltage levels during two relevant DRAM operations: row activation and charge restoration. To do so, we adopt and modify a SPICE model used in a relevant prior work [60] that studies the impact of changing VDD (but not VPP) on DRAM row access and refresh operations. Table 2 summarizes our SPICE model, which we open-source [138]. We use LTspice [95] with the 22 nm PTM transistor model [139, 140] and scale the simulation parameters according to the ITRS roadmap [141, 142].^12 To account for manufacturing process variation, we perform Monte-Carlo simulations by randomly varying the component parameters by up to 5% for each simulation run. We run the simulation 10K times at VPP levels from 1.5 V to 2.5 V with a step size of 0.1 V, similar to prior works [65, 76].

^12 We do not expect SPICE simulation and real-world experimental results to be identical because a SPICE model cannot simulate a real DRAM chip's exact behavior without proprietary design and manufacturing information.

Table 2: Key parameters used in SPICE simulations.

  Component          Parameters
  DRAM Cell          C: 16.8 fF, R: 698 Ω
  Bitline            C: 100.5 fF, R: 6980 Ω
  Cell Access NMOS   W: 55 nm, L: 85 nm
  Sense Amp. NMOS    W: 1.3 µm, L: 0.1 µm
  Sense Amp. PMOS    W: 0.9 µm, L: 0.1 µm

4.6. Statistical Significance of Experimental Results

To evaluate the statistical significance of our methodology, we investigate the variation in our measurements by examining the coefficient of variation (CV) across ten iterations. CV is a standardized metric to measure the extent of variability in a set of measurements in relation to the mean of the measurements. CV is calculated as the ratio of the standard deviation to the mean value [143]. A smaller CV shows a smaller variation across measurements, indicating higher statistical significance. The coefficient of variation is 0.08, 0.13, and 0.24 for the 90th, 95th, and 99th percentiles of all of our experimental results, respectively.
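For reference, CV can be computed in a few lines; the measurement values below are illustrative, not data from the paper.

# Coefficient of variation (CV) as used in Section 4.6: the ratio of the
# standard deviation to the mean of repeated measurements.

import statistics

measurements = [10.2, 10.4, 9.9, 10.1, 10.3, 10.0, 10.2, 10.1, 10.3, 10.0]
cv = statistics.pstdev(measurements) / statistics.mean(measurements)
print(f"CV = {cv:.3f}")  # smaller CV -> less variation across the ten runs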
5. RowHammer Under Reduced VPP

We provide the first experimental characterization of how wordline voltage (VPP) affects the RowHammer vulnerability of a DRAM row in terms of 1) the fraction of DRAM cells that experience a bit flip in a DRAM row (BER) (§5.1) and 2) the minimum aggressor row activation count necessary to cause a RowHammer bit flip (HCfirst) (§5.2). To conduct this analysis, we provide experimental results from 272 real DRAM chips, using the methodology described in §4.1 and §4.2.

5.1. Effect of VPP on RowHammer BER

Fig. 3 shows the RowHammer BER a DRAM row experiences at a fixed hammer count of 300K under different voltage levels, normalized to the row's RowHammer BER at the nominal VPP (2.5 V). Each line represents a different DRAM module. The band of shade around each line marks the 90% confidence interval of the normalized BER value across all tested DRAM rows. We make Obsvs. 1 and 2 from Fig. 3.

Figure 3: Normalized BER values across different VPP levels. Each curve represents a different DRAM module.

Observation 1. Fewer DRAM cells experience bit flips due to RowHammer under reduced wordline voltage.

We observe that RowHammer BER decreases as VPP reduces in 81.2% of tested rows across all tested modules. This reduction in BER reaches up to 66.9% (B3 at VPP = 1.6 V) with an average of 15.2% (not shown in the figure) across all modules we test. We conclude that the disturbance caused by hammering a DRAM row becomes weaker, on average, with reduced VPP.

Observation 2. In contrast to the dominant trend, reducing VPP can sometimes increase BER.

We observe that BER increases in 15.4% of tested rows with reduced VPP by up to 11.7% (B5 at VPP = 2.0 V). We suspect that the BER increase we observe occurs due to a weakened charge restoration process rather than an actual increase in read disturbance (due to RowHammer). §6.3 analyzes the impact of reduced VPP on the charge restoration process.

Variation in BER Reduction Across DRAM Rows. We investigate how BER reduction with reduced VPP varies across DRAM rows. To do so, we measure the BER reduction of each DRAM row at VPPmin (§4.1). Fig. 4 shows a population density distribution of DRAM rows (y-axis) based on their BER at VPPmin, normalized to their BER at the nominal VPP level (x-axis), for each manufacturer. We make Obsv. 3 from Fig. 4.

Figure 4: Population density distribution of DRAM rows based on their normalized BER values at VPPmin.

Observation 3. BER reduction with reduced VPP varies across different DRAM rows and different manufacturers.

DRAM rows exhibit a large range of normalized BER values (0.43–1.11, 0.33–1.03, and 0.74–0.94 in chips from Mfrs. A, B, and C, respectively). BER reduction also varies across different manufacturers. For example, BER reduces by more than 5% for all DRAM rows of Mfr. C, while BER variation with reduced VPP is smaller than 2% in 49.6% of the rows of Mfr. A.

Based on Obsvs. 1–3, we conclude that a DRAM row's RowHammer BER tends to decrease with reduced VPP, while both the amount and the direction of the change in BER vary across different DRAM rows and manufacturers.

5.2. Effect of VPP on HCfirst

Fig. 5 shows the HCfirst a DRAM row exhibits under different voltage levels, normalized to the row's HCfirst at the nominal VPP (2.5 V). Each line represents a different DRAM module. The band of shade around each line marks the 90% confidence interval of the normalized HCfirst values across all tested DRAM rows in the module. We make Obsvs. 4 and 5 from Fig. 5.

Figure 5: Normalized HCfirst values across different VPP levels. Each curve represents a different DRAM module.

Observation 4. DRAM cells experience RowHammer bit flips at higher hammer counts under reduced wordline voltage.

We observe that the HCfirst of a DRAM row increases as VPP reduces in 69.3% of tested rows across all tested modules. This increase in HCfirst reaches up to 85.8% (B3 at VPP = 1.6 V) with an average of 7.4% (not shown in the figure) across all tested modules. We conclude that the disturbance caused by hammering a DRAM row becomes weaker with reduced VPP.

Observation 5. In contrast to the dominant trend, reducing VPP can sometimes cause bit flips at lower hammer counts.

We observe that HCfirst reduces in 14.2% of tested rows with reduced VPP by up to 9.1% (C8 at VPP = 1.6 V). Similar to Obsv. 2, we suspect that this behavior is caused by the weakened charge restoration process (see §6.3).

Variation in HCfirst Increase Across DRAM Rows. We investigate how the HCfirst increase varies with reduced VPP across DRAM rows. To do so, we measure the HCfirst increase of each DRAM row at VPPmin (§4.1). Fig. 6 shows a population density distribution of DRAM rows (y-axis) based on their HCfirst at VPPmin, normalized to their HCfirst at the nominal VPP level (x-axis), for each manufacturer. We make Obsv. 6 from Fig. 6.

Figure 6: Population density distribution of DRAM rows based on their normalized HCfirst values at VPPmin.

Observation 6. The HCfirst increase with reduced VPP varies across different DRAM rows and different manufacturers.

DRAM rows in chips from the same manufacturer exhibit a large range of normalized HCfirst values (0.94–1.52, 0.92–1.86, and 0.91–1.35 for Mfrs. A, B, and C, respectively). The HCfirst increase also varies across different manufacturers. For example, HCfirst increases with reduced VPP for 83.5% of DRAM rows in modules from Mfr. C, while 50.9% of DRAM rows exhibit this behavior in modules from Mfr. A.

Based on Obsvs. 4–6, we conclude that a DRAM row's HCfirst tends to increase with reduced VPP, while both the amount and the direction of the change in HCfirst vary across different DRAM rows and manufacturers.

Summary of Findings. Based on our analyses of both BER and HCfirst, we conclude that a DRAM chip's RowHammer vulnerability can be reduced by operating the chip at a VPP level that is lower than the nominal VPP value.
6. DRAM Reliability Under Reduced VPP

To investigate the effect of reduced VPP on reliable DRAM operation, we provide the first experimental characterization of how VPP affects the reliability of three VPP-related fundamental DRAM operations: 1) DRAM row activation (§6.1), 2) charge restoration (§6.2), and 3) DRAM refresh (§6.3). To conduct these analyses, we provide both 1) experimental results from real DRAM devices, using the methodology described in §4.1, §4.3, and §4.4, and 2) SPICE simulation results, using the methodology described in §4.5.

6.1. DRAM Row Activation Under Reduced VPP

Motivation. DRAM row activation latency (tRCD) should theoretically increase with reduced VPP (§2.2). We investigate how the tRCD of real DRAM chips changes with reduced VPP.

Novelty. We provide the first experimental analysis of the isolated impact of VPP on activation latency. Prior work [60] tests DDR3 DRAM chips under reduced supply voltage (VDD), which may or may not change the internally-generated VPP level. In contrast, we modify only the wordline voltage (VPP) without modifying VDD to avoid the possibility of negatively impacting DRAM reliability due to I/O circuitry instabilities (§2.2).

Experimental Results. Fig. 7 demonstrates the variation in tRCDmin (§4.3) on the y-axis under reduced VPP on the x-axis, across 30 DRAM modules. We annotate the nominal tRCD value (13.5 ns) [80] with a black horizontal line. We make Obsv. 7 from Fig. 7.

Figure 7: Minimum reliable tRCD values across different VPP levels. Each curve represents a different DRAM module.

Observation 7. Reliable row activation latency generally increases with reduced VPP. However, 208 (25) out of 272 (30) DRAM chips (modules) complete row activation before the nominal activation latency.

The minimum reliable activation latency (tRCDmin) increases with reduced VPP across all tested modules. tRCDmin exceeds the nominal tRCD of 13.5 ns for only 5 of the 30 tested modules (A0–A2, B2, and B5). Among these, the modules from Mfrs. A and B contain 16 and 8 chips per module, respectively. Therefore, we conclude that 208 of the 272 tested DRAM chips do not experience bit flips when operated using the nominal tRCD. We observe that since tRCDmin increases with reduced VPP, the available tRCD guardband reduces by 21.9% with reduced VPP, on average across all DRAM modules that reliably work with the nominal tRCD. We also observe that the three and two modules from Mfrs. A and B, which exhibit tRCDmin values larger than the nominal tRCD, reliably operate when we use a tRCD of 24 ns and 15 ns, respectively.

To verify our experimental observations and provide deeper insight into the effect of VPP on activation latency, we perform SPICE simulations (as described in §4.5). Fig. 8a shows a waveform of the bitline voltage during the row activation process. The time on the x-axis starts when an activation command is issued. Each color corresponds to the bitline voltage at a different VPP level. We annotate the bitline's supply voltage (VDD) and the voltage threshold that the bitline voltage should exceed for the activation to be reliably completed (VTH). We make Obsv. 8 from Fig. 8a.

Figure 8: (a) Waveform of the bitline voltage during row activation and (b) probability density distribution of tRCDmin values, for different VPP levels.

Observation 8. Row activation successfully completes under reduced VPP with an increased activation latency.

Fig. 8a shows that, as VPP decreases, the bitline voltage takes longer to increase to VTH, resulting in a slower row activation. For example, tRCDmin increases from 11.6 ns to 13.6 ns (on average across 10^4 Monte-Carlo simulation iterations) when VPP is reduced from 2.5 V to 1.7 V. This happens due to two reasons. First, a lower VPP creates a weaker channel in the access transistor, requiring a longer time for the capacitor and bitline to share charge. Second, the charge sharing process (0–5 ns in Fig. 8a) leads to a smaller change in bitline voltage when VPP is reduced due to the weakened charge restoration process that we explain in §6.2.

Fig. 8b shows the probability density distribution of tRCDmin values under reduced VPP across a total of 10^4 Monte-Carlo simulation iterations for different VPP levels (color-coded). Vertical lines annotate the worst-case reliable tRCDmin values across all iterations of our Monte-Carlo simulation (§4.5) for different VPP levels. We make Obsv. 9 from Fig. 8b.

Observation 9. SPICE simulations agree with our activation latency-related observations based on experiments on real DRAM chips: tRCDmin increases with reduced VPP.

We analyze the variation in 1) the probability density distribution of tRCDmin and 2) the worst-case (largest) reliable tRCDmin value when VPP is reduced. Fig. 8b shows that the probability density distribution of tRCDmin both shifts to larger values and becomes wider with reduced VPP. The worst-case (largest) tRCDmin increases from 12.9 ns to 13.3 ns, 14.2 ns, and 16.9 ns when VPP is reduced from 2.5 V to 1.9 V, 1.8 V, and 1.7 V, respectively.^13 For a realistic nominal value of 13.5 ns, tRCD's guardband reduces from 4.4% to 1.5% as VPP reduces from 2.5 V to 1.9 V. As §4.5 explains, SPICE simulation results do not exactly match measured real-device characteristics (shown in Obsv. 7) because a SPICE model cannot simulate a real DRAM chip's exact behavior without proprietary design and manufacturing information.

^13 SPICE simulation results do not show reliable operation when VPP <= 1.6 V, yet real DRAM chips do operate reliably as we show in §6.1 and §6.3.

From Obsvs. 7–9, we conclude that 1) the reliable row activation latency increases with reduced VPP, 2) the increase in reliable row activation latency does not immediately require increasing the nominal tRCD, but reduces the available guardband by 21.9% for 208 out of 272 tested chips, and 3) the observed bit flips can be eliminated by increasing tRCD to 24 ns and 15 ns for the erroneous modules from Mfrs. A and B.
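The guardband figures quoted in Obsv. 9 follow directly from the nominal tRCD of 13.5 ns and the worst-case tRCDmin values; a small Python check of that arithmetic (ours, not the paper's code):

# tRCD guardband = slack of the worst-case tRCDmin relative to the nominal
# tRCD of 13.5 ns, as quoted in Obsv. 9.

T_RCD_NOMINAL = 13.5  # ns, nominal DDR4 tRCD used in the paper

def guardband(t_rcd_min):
    return (T_RCD_NOMINAL - t_rcd_min) / T_RCD_NOMINAL

print(f"{guardband(12.9):.1%}")  # ~4.4% at VPP = 2.5 V (tRCDmin = 12.9 ns)
print(f"{guardband(13.3):.1%}")  # ~1.5% at VPP = 1.9 V (tRCDmin = 13.3 ns)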
creasing the nominal tRCD , but reduces the available guardband Similar to the variation in tRCD values that we discuss in
by 21.9 % for 208 out of 272 tested chips, and 3) observed bit Obsv. 9, the probability density distribution of tRAS values also
flips can be eliminated by increasing tRCD to 24 ns and 15 ns for shifts to larger values (i.e., tRAS exceeds the nominal value when
erroneous modules from Mfrs. A and B. VPP is lower than 2.0V) and becomes wider as VPP reduces. This
happens as a result of reduced cell voltage, weakened channel
6.2. DRAM Charge Restoration Under Reduced VPP in the access transistor, and reduced voltage level at the end of
Motivation. A DRAM cell’s charge restoration process is af- the charge sharing process, as we explain in Obsv. 9.
fected by VPP because, similar to the row activation process, From Obsvs. 10 and 11, we conclude that reducing VPP can
a DRAM cell capacitor’s charge is restored through the chan- negatively affect the charge restoration process. Reduced VPP ’s
nel formed in the access transistor, which is controlled by the negative impact on charge restoration can potentially be miti-
wordline. Due to access transistor’s characteristics, reducing gated by leveraging the guardbands in DRAM timing parame-
VPP without changing VDD reduces gate-to-source voltage (VGS ) ters [58,60,69,72,81] and using intelligent DRAM refresh tech-
and forms a weaker channel. To understand the impact of VPP niques, where a partially restored DRAM row can be refreshed
reduction on the charge restoration process, we investigate how more frequently than other rows, so that the row’s charge is re-
charge restoration of a DRAM cell varies with reduced VPP . stored before it experiences a data retention bit flip [75,144,145].
Experimental Results. Since our FPGA infrastructure cannot We leave exploring such solutions to future work.
probe a DRAM cell capacitor’s voltage level, we conduct this
study in our SPICE simulation environment (§4.5). Fig. 9a 6.3. DRAM Row Refresh Under Reduced VPP
shows the waveform plot of capacitor voltage (y-axis) over Motivation. §6.2 demonstrates that the charge restored in a
time (x-axis), following a row activation event (at t=0). Fig. 9b DRAM cell after a row activation can be reduced as a result
shows the probability density distribution (y-axis) of the mini- of VPP reduction. This phenomenon is important for DRAM-
mum latency required (tRASmin ) to reliably complete the charge based memories because reduced charge in a cell might reduce
restoration process on the x-axis under different VPP levels. We a DRAM cell’s data retention time, causing retention bit flips
make Obsvs. 10 and 11 from Fig. 9a and 9b. if the cell is not refreshed more frequently. To understand the
a) Waveform plot of DRAM cell capacitor voltage during charge restoration impact of VPP reduction on real DRAM chips, we investigate
Capacitor Voltage (V)

1.20 VDD the effect of reduced VPP on data retention related bit flips using
DRAM Cell

1.00 the methodology described in §4.4.


0.80
1.7V 1.9V 2.1V 2.3V 2.5V Novelty. This is the first work that experimentally analyzes the
0.60 1.8V 2.0V 2.2V 2.4V
isolated impact of VPP on DRAM cell retention times. Prior
0 20 40 60 80 100
Time (ns) work [60] tests DDR3 DRAM chips under reduced VDD , which
b) Histogram of minimum reliable charge restoration latency (tRASmin) may or may not change the internally-generated VPP level.
0.16 2.0V 1.9V 1.8V 1.7V
Probability Density

0.14
0.12 Experimental Results. Fig. 10 demonstrates reduced VPP ’s ef-
Nominal tRAS

0.10 1.7V 2.2V


0.08 1.8V 2.3V fect on data retention BER on real DRAM chips. Fig. 10a shows
0.06 1.9V 2.4V
0.04 2.0V 2.5V how the data retention BER (y-axis) changes with increasing
0.02 2.1V
0.00 refresh window (log-scaled in x-axis) for different VPP levels
20 25 30 35 40 45 50 55 60 65
Minimum Reliable Charge Restoration Latency (tRASmin) (ns) (color-coded). Each curve in Fig. 10a shows the average BER
Figure 9: (a) Waveform of the cell capacitor voltage following a across all DRAM rows, and error bars mark the 90 % confi-
row activation and (b) probability density distribution of tRASmin dence interval. The x-axis starts from 64 ms because we do not
values, for different VPP levels. observe any bit flips at tREFW values smaller than 64 ms. To pro-
Observation 10. A DRAM cell's capacitor voltage can saturate at a lower voltage level when VPP is reduced.

We observe that a DRAM cell capacitor's voltage saturates at VDD (1.2 V) when VPP is 2.0 V or higher. However, the cell capacitor's voltage saturates at a voltage level that is lower by 4.1 %, 11.0 %, and 18.1 % when VPP is 1.9 V, 1.8 V, and 1.7 V, respectively. This happens because the access transistor turns off when the voltage difference between its gate and source is smaller than a threshold level. For example, when VPP is set to 1.7 V, the access transistor allows charge restoration until the cell voltage reaches 0.98 V. When the cell voltage reaches this level, the voltage difference between the gate (1.7 V) and the source (0.98 V) is not large enough to form a strong channel, causing the cell voltage to saturate at 0.98 V. This reduction in voltage can potentially 1) increase the row activation latency (tRCD) and 2) reduce the cell's retention time. We 1) already account for the reduced saturation voltage's effect on tRCD in §6.1 and 2) investigate its effect on retention time in §6.3.
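The saturation behavior in Obsv. 10 can be summarized with a first-order relation (our simplification, not an expression the SPICE model gives):

Vcell,sat ≈ min(VDD, VPP − Vth),

where Vth is the access transistor's effective threshold voltage. Plugging in the VPP = 1.7 V case above, the observed 0.98 V saturation level implies an effective Vth of roughly 1.7 V − 0.98 V = 0.72 V. Note that Vth is not truly constant: it increases with the source (cell) voltage due to the body effect, which is why the saturation levels in Obsv. 10 do not track a single fixed Vth exactly.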
Observation 11. The increase in a DRAM cell's charge restoration latency with reduced VPP can increase the tRAS timing parameter, depending on the VPP level.

Fig. 9b shows that the tRASmin distribution shifts toward larger latencies as VPP reduces, exceeding the nominal tRAS at the lowest VPP levels we simulate. Safely operating a chip at such VPP levels therefore requires either increasing the tRAS timing parameter or employing mechanisms that speed up charge restoration. We leave exploring such solutions to future work.
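To illustrate the mechanism behind Obsvs. 10 and 11 in code, the sketch below numerically charges a cell capacitor through a simple square-law transistor model. It is a toy stand-in for the paper's SPICE setup (§4.5), not a reproduction of it: VTH is inferred from Obsv. 10, and every other device parameter is an assumed value chosen only to reproduce the qualitative trend (lower VPP gives a lower saturation voltage and a longer restoration latency).

VDD, VTH = 1.2, 0.72      # volts; VTH inferred from Obsv. 10 (1.7 V - 0.98 V)
K = 50e-6                 # A/V^2, assumed transistor transconductance parameter
C = 25e-15                # F, assumed cell capacitance
DT = 1e-11                # s, Euler integration time step

def restore(vpp, v0=0.6, t_max=200e-9):
    """Return (saturation voltage, time to restore the cell to 90% of it)."""
    v_sat = min(VDD, vpp - VTH)        # cell cannot exceed bitline or VPP - VTH
    v, t = v0, 0.0                     # assume the cell starts half-charged
    while v < 0.9 * v_sat and t < t_max:
        i = 0.5 * K * max(0.0, vpp - v - VTH) ** 2  # square-law channel current
        v = min(v + i * DT / C, VDD)   # bitline (driven to VDD) clamps the cell
        t += DT
    return v_sat, t

for vpp in (1.7, 1.8, 1.9, 2.0, 2.5):
    v_sat, t90 = restore(vpp)
    print(f"VPP={vpp:.1f} V: saturates near {v_sat:.2f} V, ~{t90*1e9:.1f} ns to 90%")

Consistent with Obsv. 10, this model saturates at VDD for VPP of 2.0 V and above, and below VDD otherwise; consistent with Obsv. 11, the restoration time grows rapidly as VPP approaches Vcell,sat + Vth.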
6.3. DRAM Row Refresh Under Reduced VPP

Motivation. §6.2 demonstrates that the charge restored in a DRAM cell after a row activation can be reduced as a result of VPP reduction. This phenomenon is important for DRAM-based memories because reduced charge in a cell might reduce a DRAM cell's data retention time, causing retention bit flips if the cell is not refreshed more frequently. To understand the impact of VPP reduction on real DRAM chips, we investigate the effect of reduced VPP on data-retention-related bit flips using the methodology described in §4.4.

Novelty. This is the first work that experimentally analyzes the isolated impact of VPP on DRAM cell retention times. Prior work [60] tests DDR3 DRAM chips under reduced VDD, which may or may not change the internally-generated VPP level.

Experimental Results. Fig. 10 demonstrates reduced VPP's effect on data retention BER on real DRAM chips. Fig. 10a shows how the data retention BER (y-axis) changes with increasing refresh window (log-scaled x-axis) for different VPP levels (color-coded). Each curve in Fig. 10a shows the average BER across all DRAM rows, and error bars mark the 90 % confidence interval. The x-axis starts from 64 ms because we do not observe any bit flips at tREFW values smaller than 64 ms. To provide deeper insight into reduced VPP's effect on data retention BER, Fig. 10b demonstrates the population density distribution of data retention BER across tested rows for a tREFW of 4 s. Dotted vertical lines mark the average BER across rows for each VPP level. We make Obsvs. 12 and 13 from Fig. 10.

[Figure 10 (plot omitted): Reduced VPP's effect on (a) data retention BER across different refresh rates and (b) the distribution of data retention BER across different DRAM rows for a fixed tREFW of 4 s. Panel (a) plots data retention BER versus refresh window (64 ms to 16K ms, log-scaled) for Mfrs. A, B, and C at VPP = 1.5 V to 2.5 V; panel (b) plots the fraction of DRAM rows versus data retention BER (%) for the same VPP levels.]
Observation 12. More DRAM cells tend to experience data retention bit flips when VPP is reduced.

Fig. 10a shows that the data retention BER curves are higher (e.g., dark purple compared to yellow) for smaller VPP levels (e.g., 1.5 V compared to 2.5 V). To provide deeper insight, Fig. 10b shows that the average data retention BER across all tested rows when tREFW = 4 s increases from 0.3 %, 0.2 %, and 1.4 % at a VPP of 2.5 V to 0.8 %, 0.5 %, and 2.5 % at a VPP of 1.5 V for Mfrs. A, B, and C, respectively. We hypothesize that this happens because of the weakened charge restoration process under reduced VPP (§6.2).
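For concreteness, the BER metric in this section can be read as follows (a sketch of the metric's definition, not our actual FPGA test program): write a known pattern to a row, pause refresh for tREFW, read the row back, and count flipped bits.

def retention_ber(written: bytes, readback: bytes) -> float:
    """Fraction of bits that flipped between write and post-tREFW read-back.
    (int.bit_count requires Python 3.10+.)"""
    flipped = sum((a ^ b).bit_count() for a, b in zip(written, readback))
    return flipped / (8 * len(written))

# e.g., for a row written with 0xAA and read back after the refresh pause:
# ber = retention_ber(b"\xaa" * row_size_bytes, readback_bytes)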
Observation 13. Even though DRAM cells experience retention bit flips at smaller retention times when VPP is reduced, 23 of the 30 tested modules experience no data retention bit flips at the nominal refresh window (64 ms).

Data retention BER is very low at the tREFW of 64 ms even for a VPP of 1.5 V. We observe that no DRAM module from Mfr. A exhibits a data retention bit flip at the 64 ms tREFW, and only three and four modules from Mfrs. B (B6, B8, and B9) and C (C1, C3, C5, and C9), respectively, experience bit flips across all 30 DRAM modules we test.
We investigate the significance of the observed data retention bit flips and whether it is possible to mitigate these bit flips using error correcting codes (ECC) [54] or other existing methods to avoid data retention bit flips (e.g., selectively refreshing a small fraction of DRAM rows at a higher refresh rate [75, 144, 145]). To do so, we analyze the nature of data retention bit flips when each tested module is operated at the module's VPPmin for two tREFW values: 64 ms and 128 ms, which are the smallest refresh windows that yield non-zero BER for different DRAM modules.

To evaluate whether data retention bit flips can be avoided using ECC, we assume a realistic data word size of 64 bits [32, 123–125, 127, 128, 137]. We make Obsv. 14 from this analysis.
Observation 14. Data retention errors can be avoided using simple single error correcting codes at the smallest tREFW that yields non-zero BER.

We observe that no 64-bit data word contains more than one bit flip for the smallest tREFW that yields non-zero BER. We conclude that simple single error correction double error detection (SECDED) ECC can correct all erroneous data words.
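To make Obsv. 14 concrete, the listing below sketches a (72, 64) extended Hamming SECDED code [54], the class of code the observation refers to: it corrects any single-bit error and detects (but cannot correct) any double-bit error in a 64-bit data word. This is a generic textbook construction for illustration; commodity controllers implement SECDED in hardware, and the exact code a given system uses is vendor-specific.

def secded_encode(data_bits):
    """Encode 64 data bits into a 72-bit extended Hamming codeword.
    Index 0 holds overall parity; power-of-two indices hold Hamming parity."""
    assert len(data_bits) == 64
    code = [0] * 72
    it = iter(data_bits)
    for i in range(1, 72):
        if i & (i - 1):                   # not a power of two: data position
            code[i] = next(it)
    for j in range(7):                    # parity positions 1, 2, 4, ..., 64
        p = 1 << j
        code[p] = sum(code[i] for i in range(1, 72) if i & p and i != p) % 2
    code[0] = sum(code[1:]) % 2           # overall parity enables double detection
    return code

def secded_decode(code):
    """Return (data_bits, status); corrects single errors, flags double errors."""
    syndrome = 0
    for i in range(1, 72):
        if code[i]:
            syndrome ^= i                 # XOR of set-bit positions
    overall = sum(code) % 2
    if syndrome and not overall:          # even number of flips: uncorrectable
        status = "uncorrectable double-bit error"
    elif syndrome or overall:             # odd number of flips: assume single
        code = code.copy()
        code[syndrome] ^= 1               # syndrome 0 means bit 0 itself flipped
        status = "corrected single-bit error"
    else:
        status = "no error"
    data = [code[i] for i in range(1, 72) if i & (i - 1)]
    return data, status

# Example: any single-bit flip in the 72-bit codeword is corrected, matching
# Obsv. 14's finding that no 64-bit word contains more than one retention flip.
word = [1, 0] * 32
cw = secded_encode(word)
cw[17] ^= 1                               # inject one retention bit flip
decoded, status = secded_decode(cw)
assert decoded == word and status == "corrected single-bit error"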
To evaluate whether data retention bit flips can be avoided by selectively refreshing a small fraction of DRAM rows, we analyze the distribution of these bit flips across different DRAM rows. Fig. 11a (Fig. 11b) shows the distribution of DRAM rows that experience a data retention bit flip when tREFW is 64 ms (128 ms) but not at a smaller tREFW, based on their data retention bit flip characteristics. The x-axis shows the number of 64-bit data words with one bit flip in a DRAM row. The y-axis shows the fraction of DRAM rows (in log scale) exhibiting the behavior specified on the x-axis, for each manufacturer (color-coded). We make Obsv. 15 from Fig. 11.

[Figure 11 (plot omitted): Data retention bit flip characteristics of DRAM rows in DRAM modules that exhibit bit flips at (a) 64 ms and (b) 128 ms refresh windows but not at lower tREFW values when operated at VPPmin. Each subplot shows the distribution of DRAM rows (log-scaled fraction, per manufacturer) based on the number of erroneous 64-bit words that the rows exhibit.]

Observation 15. Only a small fraction (16.4 % / 5.0 %) of DRAM rows contain erroneous data words at the smallest tREFW (64 ms / 128 ms) that yields non-zero BER.

Fig. 11a shows that modules from Mfr. A do not exhibit any bit flips when tREFW is 64 ms, while 15.5 % and 0.2 % of DRAM rows in modules from Mfrs. B and C exhibit four and one 64-bit words with a single bit flip, respectively; and 0.01 % of DRAM rows from Mfr. B contain 116 data words with one bit flip. Fig. 11b shows that 0.1 %, 4.7 %, and 0.2 % of rows from Mfrs. A, B, and C contain 1, 2, and 1 erroneous data words, respectively, when the refresh window is 128 ms. We conclude that all of these data retention bit flips can be avoided by doubling the refresh rate¹⁴ only for 16.4 % / 5.0 % of DRAM rows [75, 144, 145] when tREFW is 64 ms / 128 ms.

From Obsvs. 12–15, we conclude that a DRAM row's data retention time can reduce when VPP is reduced. However, 1) most (i.e., 23 out of 30) of the tested modules do not exhibit any bit flips at the nominal tREFW of 64 ms, and 2) the bit flips observed in the remaining seven modules can be mitigated using existing SECDED ECC [54] or selective refresh methods [75, 144, 145].

¹⁴ We test our chips at fixed refresh rates in increasing powers of two (§4.4). Therefore, our experiments do not capture whether eliminating a bit flip is possible by increasing the refresh rate by less than 2×. We leave a finer-granularity data retention time analysis to future work.
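The selective-refresh alternative discussed above can be sketched just as simply (in the spirit of mechanisms like RAIDR [75], not their actual implementations). Only the small fraction of rows profiled as weak pays the doubled refresh rate, so the overhead scales with the 16.4 % / 5.0 % row fractions from Obsv. 15 rather than with the whole chip; the weak-row set and the bookkeeping below are assumptions.

def needs_refresh(row, now_ms, last_refresh_ms, weak_rows, nominal_trefw_ms=64):
    """True if `row` is due for refresh. Rows profiled as weak (e.g., flipping
    at the 64 ms window under reduced VPP) use half the nominal refresh window,
    i.e., double the refresh rate; all other rows keep the nominal tREFW."""
    window_ms = nominal_trefw_ms // 2 if row in weak_rows else nominal_trefw_ms
    return now_ms - last_refresh_ms[row] >= window_ms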

7. Limitations of Wordline Voltage Scaling

We highlight four key limitations of wordline voltage scaling and our experimental characterization.

First, in our experiments, we observe that none of the tested DRAM modules reliably operates at a VPP lower than a certain voltage level, called VPPmin. This happens because an access transistor cannot connect the DRAM cell capacitor to the bitline when the access transistor's gate-to-source voltage difference is not larger than the transistor's threshold voltage. Therefore, each DRAM chip has a minimum VPP level at which it can reliably operate (e.g., lowest at 1.4 V for A0 and highest at 2.4 V for A5). With this limitation, we observe a 7.4 % / 15.2 % average increase / reduction in HCfirst / BER across all tested DRAM chips at their respective VPPmin levels. A DRAM chip's RowHammer vulnerability can potentially reduce further if access transistors are designed to operate at smaller VPP levels.

Second, we cannot investigate the root cause of all the results we observe because 1) DRAM manufacturers do not describe the exact circuit design details of their commodity DRAM chips [14, 36, 127, 137] and 2) our infrastructure's physical limitations prevent us from observing a DRAM chip's exact internal behavior (e.g., it is not possible to directly measure a cell's capacitor voltage).

Third, this paper does not thoroughly analyze the three-way interaction between VPP, temperature, and RowHammer. There is already a complex two-way interaction between RowHammer and temperature, requiring studies that test each DRAM cell at all allowed temperature levels [12]. Since a three-way interaction study requires even more characterization, which would take several months of testing time, we leave studying the interaction between VPP, temperature, and RowHammer to future work.

Fourth, we experimentally demonstrate that the RowHammer vulnerability can be mitigated by reducing VPP at the cost of a 21.9 % average reduction in the tRCD guardband of the tested DRAM chips. Although reducing the guardband can hurt DRAM manufacturing yield, we leave studying VPP reduction's effect on yield to future work because we do not have access to DRAM manufacturers' proprietary yield statistics.
8. Key Takeaways

We summarize the key findings of our experimental analyses of the wordline voltage (VPP)'s effect on the RowHammer vulnerability and reliable operation of modern DRAM chips. From our new observations, we draw two key takeaways.

Takeaway 1: Effect of VPP on RowHammer. We observe that scaling down VPP reduces a DRAM chip's RowHammer vulnerability, such that RowHammer BER decreases by 15.2 % (up to 66.9 %) and HCfirst increases by 7.4 % (up to 85.8 %) on average across all DRAM rows. Only 15.4 % and 14.2 % of DRAM rows exhibit opposite BER and HCfirst trends, respectively (§5.1 and §5.2).
Takeaway 2: Effect of VPP on DRAM reliability. We observe that reducing VPP 1) reduces the existing guardband for row activation latency by 21.9 % on average across the tested chips and 2) causes DRAM cell charge to saturate at 1 V instead of 1.2 V (VDD) (§6.2), leading 0 %, 15.5 %, and 0.2 % of DRAM rows to experience SECDED ECC-correctable data retention bit flips at the nominal refresh window of 64 ms in DRAM modules from Mfrs. A, B, and C, respectively (§6.3).
Finding Optimal Wordline Voltage. Our two key takeaways suggest that reducing a DRAM chip's RowHammer vulnerability via VPP reduction can require 1) accessing DRAM rows with a slightly larger latency, 2) employing error correcting codes (ECC), or 3) refreshing a small subset of rows at a higher refresh rate. Therefore, one can define different Pareto-optimal operating conditions for different performance and reliability requirements. For example, a security-critical system can choose a lower VPP to reduce RowHammer vulnerability, whereas a performance-critical and error-tolerant system might prefer lower access latency over higher RowHammer tolerance. DRAM designs and systems that are informed about the trade-offs between VPP, access latency, and retention time can make better-informed design decisions (e.g., fundamentally enable lower access latency) or employ better-informed memory controller policies (e.g., using a longer tRCD, employing SECDED ECC, or doubling the refresh rate only for a small fraction of rows when the chip operates at reduced VPP), as sketched below. We believe such designs are important to explore in future work. We hope that the new insights we provide can lead to the design of stronger DRAM-based systems against RowHammer along with better-informed DRAM-based system designs.
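As one deliberately simple illustration of such a policy (our sketch, not a procedure the paper prescribes), the listing below picks a per-module operating point from characterization data like Table 3's: the lowest reliably working VPP that is at least as good as nominal on both RowHammer metrics, falling back to nominal otherwise. A production policy would additionally weigh tRCD slack, ECC availability, and refresh cost.

def recommend_vpp(points, nominal=2.5):
    """`points` maps a candidate VPP (V) to (hc_first, ber, reliable) measured
    at that VPP. Return the lowest VPP that works reliably and is at least as
    good as nominal on both RowHammer metrics."""
    hc_nom, ber_nom, _ = points[nominal]
    ok = [vpp for vpp, (hc, ber, reliable) in points.items()
          if reliable and hc >= hc_nom and ber <= ber_nom]
    return min(ok, default=nominal)   # nominal always satisfies its own bounds

# e.g., for module A4 in Table 3, BER worsens at its VPPmin of 1.5 V, so this
# policy keeps the nominal 2.5 V -- matching the table's VPPRec column.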
9. Related Work

To our knowledge, this is the first work that experimentally studies how reducing wordline voltage affects a real DRAM chip's 1) RowHammer vulnerability, 2) row activation latency, 3) charge restoration process, and 4) data retention time. We divide prior work into three categories: 1) explorations of reduced-voltage DRAM operation, 2) experimental characterization studies of the RowHammer vulnerability of real DRAM chips, and 3) RowHammer attacks and defenses.

Reduced-Voltage DRAM Operation. Prior works [60, 146, 147] propose operating DRAM with reduced VDD to improve energy efficiency. [146] and [147] propose dynamic voltage and frequency scaling (DVFS) for DRAM chips, and [146] provides results in a real system. [60] proposes to scale down VDD without reducing DRAM chip frequency. To do so, [60] experimentally demonstrates the interaction between VDD and DRAM row access latency in real DDR3 DRAM chips. These three works neither focus on the RowHammer vulnerability nor distinguish between VDD and VPP. Unlike these works, we focus on the impact of VPP (isolated from VDD) on the RowHammer and reliable operation characteristics of real DDR4 DRAM chips.

Experimental RowHammer Characterization. Prior works extensively characterize the RowHammer vulnerability of real DRAM chips [3, 6, 11, 12, 36, 43]. These works experimentally demonstrate (using real DDR3, DDR4, and LPDDR4 DRAM chips) how a DRAM chip's RowHammer vulnerability varies with 1) DRAM refresh rate [3, 36, 43], 2) the physical distance between aggressor and victim rows [3, 11], 3) DRAM generation and technology node [3, 11, 12, 43], 4) temperature [6, 12], 5) the time the aggressor row stays active [6, 12], and 6) the physical location of the victim DRAM cell [12]. None of these works analyzes how reduced VPP affects RowHammer vulnerability in real DRAM chips. Our characterization study furthers the analyses in these works by uncovering new insights into RowHammer behavior and DRAM operation.

RowHammer Attacks and Defenses. Many prior works [3, 4, 6–48] show that RowHammer can be exploited to mount system-level attacks that compromise system security and safety (e.g., to acquire root privileges or leak private data). To protect against these attacks, many prior works [3, 5, 13, 30, 45, 48, 50–52, 65, 80, 91, 96–114] propose RowHammer mitigation mechanisms that prevent RowHammer bit flips from compromising a system. The novel observations we make in this work can be leveraged to reduce RowHammer vulnerability and complement existing RowHammer defense mechanisms, further increasing their effectiveness and reducing their overheads.

10. Conclusion

We present the first experimental RowHammer characterization study under reduced wordline voltage (VPP). Our results, using 272 real DDR4 DRAM chips from three major manufacturers, show that RowHammer vulnerability can be reduced by reducing VPP. Using real-device experiments and SPICE simulations, we demonstrate that although reduced VPP slightly worsens DRAM access latency, the charge restoration process, and data retention time, most (208 out of 272) of the tested chips work reliably under reduced VPP by leveraging already-existing guardbands in nominal timing parameters and employing existing ECC or selective refresh techniques. Our findings provide new insights into the increasingly critical RowHammer problem in modern DRAM chips. We hope that they lead to the design of systems that are more robust against RowHammer attacks.

Acknowledgments

We thank our shepherd Karthik Pattabiraman and the anonymous reviewers of DSN 2022 for valuable feedback. We thank the SAFARI Research Group members for valuable feedback and the stimulating intellectual environment they provide. We acknowledge the generous gifts provided by our industrial partners, including Google, Huawei, Intel, Microsoft, and VMware, and support from the Microsoft Swiss Joint Research Center.
References
[1] O. Mutlu, "Memory Scaling: A Systems Architecture Perspective," in IMW, 2013.
[2] J. Meza, Q. Wu, S. Kumar, and O. Mutlu, "Revisiting Memory Errors in Large-Scale Production Data Centers: Analysis and Modeling of New Trends from the Field," in DSN, 2015.
[3] Y. Kim, R. Daly, J. Kim, C. Fallin, J. H. Lee, D. Lee, C. Wilkerson, K. Lai, and O. Mutlu, "Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors," in ISCA, 2014.
[4] M. Redeker, B. F. Cockburn, and D. G. Elliott, "An Investigation into Crosstalk Noise in DRAM Structures," in MTDT, 2002.
[5] B. Aichinger, "DDR Memory Errors Caused by Row Hammer," in HPEC, 2015.
[6] K. Park, C. Lim, D. Yun, and S. Baeg, "Experiments and Root Cause Analysis for Active-Precharge Hammering Fault in DDR3 SDRAM under 3xnm Technology," Microelectronics Reliability, 2016.
[7] K. Park, D. Yun, and S. Baeg, "Statistical Distributions of Row-Hammering Induced Failures in DDR3 Components," Microelectronics Reliability, 2016.
[8] O. Mutlu, "The RowHammer Problem and Other Issues We May Face as Memory Becomes Denser," in DATE, 2017.
[9] O. Mutlu and J. S. Kim, "RowHammer: A Retrospective," TCAD, 2019.
[10] T. Yang and X.-W. Lin, "Trap-Assisted DRAM Row Hammer Effect," EDL, 2019.
[11] J. S. Kim, M. Patel, A. G. Yağlıkçı, H. Hassan, R. Azizi, L. Orosa, and O. Mutlu, "Revisiting RowHammer: An Experimental Analysis of Modern Devices and Mitigation Techniques," in ISCA, 2020.
[12] L. Orosa, A. G. Yağlıkçı, H. Luo, A. Olgun, J. Park, H. Hassan, M. Patel, J. S. Kim, and O. Mutlu, "A Deeper Look into RowHammer's Sensitivities: Experimental Analysis of Real DRAM Chips and Implications on Future Attacks and Defenses," in MICRO, 2021.
[13] M. Qureshi, "Rethinking ECC in the Era of Row-Hammer," DRAMSec, 2021.
[14] S. Saroiu, A. Wolman, and L. Cojocar, "The Price of Secrecy: How Hiding Internal DRAM Topologies Hurts Rowhammer Defenses," in IRPS, 2022.
[15] A. J. Walker, S. Lee, and D. Beery, "On DRAM RowHammer and the Physics of Insecurity," IEEE TED, 2021.
[16] M. Seaborn and T. Dullien, "Exploiting the DRAM Rowhammer Bug to Gain Kernel Privileges," Black Hat, 2015.
[17] V. van der Veen, Y. Fratantonio, M. Lindorfer, D. Gruss, C. Maurice, G. Vigna, H. Bos, K. Razavi, and C. Giuffrida, "Drammer: Deterministic Rowhammer Attacks on Mobile Platforms," in CCS, 2016.
[18] D. Gruss, C. Maurice, and S. Mangard, "Rowhammer.js: A Remote Software-Induced Fault Attack in Javascript," arXiv:1507.06955 [cs.CR], 2016.
[19] K. Razavi, B. Gras, E. Bosman, B. Preneel, C. Giuffrida, and H. Bos, "Flip Feng Shui: Hammering a Needle in the Software Stack," in USENIX Security, 2016.
[20] P. Pessl, D. Gruss, C. Maurice, M. Schwarz, and S. Mangard, "DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks," in USENIX Security, 2016.
[21] Y. Xiao, X. Zhang, Y. Zhang, and R. Teodorescu, "One Bit Flips, One Cloud Flops: Cross-VM Row Hammer Attacks and Privilege Escalation," in USENIX Security, 2016.
[22] E. Bosman, K. Razavi, H. Bos, and C. Giuffrida, "Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector," in S&P, 2016.
[23] S. Bhattacharya and D. Mukhopadhyay, "Curious Case of Rowhammer: Flipping Secret Exponent Bits Using Timing Analysis," in CHES, 2016.
[24] R. Qiao and M. Seaborn, "A New Approach for RowHammer Attacks," in HOST, 2016.
[25] Y. Jang, J. Lee, S. Lee, and T. Kim, "SGX-Bomb: Locking Down the Processor via Rowhammer Attack," in SOSP, 2017.
[26] M. T. Aga, Z. B. Aweke, and T. Austin, "When Good Protections Go Bad: Exploiting Anti-DoS Measures to Accelerate Rowhammer Attacks," in HOST, 2017.
[27] A. Tatar, C. Giuffrida, H. Bos, and K. Razavi, "Defeating Software Mitigations Against Rowhammer: A Surgical Precision Hammer," in RAID, 2018.
[28] D. Gruss, M. Lipp, M. Schwarz, D. Genkin, J. Juffinger, S. O'Connell, W. Schoechl, and Y. Yarom, "Another Flip in the Wall of Rowhammer Defenses," in S&P, 2018.
[29] M. Lipp, M. T. Aga, M. Schwarz, D. Gruss, C. Maurice, L. Raab, and L. Lamster, "Nethammer: Inducing Rowhammer Faults Through Network Requests," arXiv:1805.04956 [cs.CR], 2018.
[30] V. van der Veen, M. Lindorfer, Y. Fratantonio, H. P. Pillai, G. Vigna, C. Kruegel, H. Bos, and K. Razavi, "GuardION: Practical Mitigation of DMA-Based Rowhammer Attacks on ARM," in DIMVA, 2018.
[31] P. Frigo, C. Giuffrida, H. Bos, and K. Razavi, "Grand Pwning Unit: Accelerating Microarchitectural Attacks with the GPU," in S&P, 2018.
[32] L. Cojocar, K. Razavi, C. Giuffrida, and H. Bos, "Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks," in S&P, 2019.
[33] S. Ji, Y. Ko, S. Oh, and J. Kim, "Pinpoint Rowhammer: Suppressing Unwanted Bit Flips on Rowhammer Attacks," in ASIACCS, 2019.
[34] S. Hong, P. Frigo, Y. Kaya, C. Giuffrida, and T. Dumitraş, "Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks," in USENIX Security, 2019.
[35] A. Kwong, D. Genkin, D. Gruss, and Y. Yarom, "RAMBleed: Reading Bits in Memory Without Accessing Them," in S&P, 2020.
[36] P. Frigo, E. Vannacci, H. Hassan, V. van der Veen, O. Mutlu, C. Giuffrida, H. Bos, and K. Razavi, "TRRespass: Exploiting the Many Sides of Target Row Refresh," in S&P, 2020.
[37] L. Cojocar, J. Kim, M. Patel, L. Tsai, S. Saroiu, A. Wolman, and O. Mutlu, "Are We Susceptible to Rowhammer? An End-to-End Methodology for Cloud Providers," in S&P, 2020.
[38] Z. Weissman, T. Tiemann, D. Moghimi, E. Custodio, T. Eisenbarth, and B. Sunar, "JackHammer: Efficient Rowhammer on Heterogeneous FPGA–CPU Platforms," arXiv:1912.11523 [cs.CR], 2020.
[39] Z. Zhang, Y. Cheng, D. Liu, S. Nepal, Z. Wang, and Y. Yarom, "PThammer: Cross-User-Kernel-Boundary Rowhammer through Implicit Accesses," in MICRO, 2020.
[40] SAFARI Research Group, "RowHammer — GitHub Repository," https://github.com/CMU-SAFARI/rowhammer, 2021.
[41] F. Yao, A. S. Rakin, and D. Fan, "DeepHammer: Depleting the Intelligence of Deep Neural Networks Through Targeted Chain of Bit Flips," in USENIX Security, 2020.
[42] F. de Ridder, P. Frigo, E. Vannacci, H. Bos, C. Giuffrida, and K. Razavi, "SMASH: Synchronized Many-Sided Rowhammer Attacks from JavaScript," in USENIX Security, 2021.
[43] H. Hassan, Y. C. Tugrul, J. S. Kim, V. v. d. Veen, K. Razavi, and O. Mutlu, "Uncovering in-DRAM RowHammer Protection Mechanisms: A New Methodology, Custom RowHammer Patterns, and Implications," in MICRO, 2021.
[44] P. Jattke, V. van der Veen, P. Frigo, S. Gunter, and K. Razavi, "Blacksmith: Scalable Rowhammering in the Frequency Domain," in S&P, 2022.
[45] M. Marazzi, P. Jattke, F. Solt, and K. Razavi, "ProTRR: Principled yet Optimal In-DRAM Target Row Refresh," in S&P, 2022.
[46] M. C. Tol, S. Islam, B. Sunar, and Z. Zhang, "Toward Realistic Backdoor Injection Attacks on DNNs using RowHammer," arXiv:2110.07683v2 [cs.LG], 2022.
[47] W. Burleson, O. Mutlu, and M. Tiwari, "Invited: Who is the Major Threat to Tomorrow's Security? You, the Hardware Designer," in DAC, 2016.
[48] F. Brasser, L. Davi, D. Gens, C. Liebchen, and A.-R. Sadeghi, "Can't Touch This: Software-Only Mitigation Against Rowhammer Attacks Targeting Kernel Memory," in USENIX Security, 2017.
[49] O. Mutlu and L. Subramanian, "Research Problems and Opportunities in Memory Systems," SUPERFRI, 2014.
[50] A. G. Yağlıkçı, M. Patel, J. S. Kim, R. Azizibarzoki, A. Olgun, L. Orosa, H. Hassan, J. Park, K. Kanellopoullos, T. Shahroodi, S. Ghose, and O. Mutlu, "BlockHammer: Preventing RowHammer at Low Cost by Blacklisting Rapidly-Accessed DRAM Rows," in HPCA, 2021.
[51] Y. Park, W. Kwon, E. Lee, T. J. Ham, J. H. Ahn, and J. W. Lee, "Graphene: Strong yet Lightweight Row Hammer Protection," in MICRO, 2020.
[52] A. G. Yağlıkçı, J. S. Kim, F. Devaux, and O. Mutlu, "Security Analysis of the Silver Bullet Technique for RowHammer Prevention," arXiv:2106.07084 [cs.CR], 2021.
[53] L. Nagel and D. O. Pederson, "SPICE (Simulation Program with Integrated Circuit Emphasis)," 1973.
[54] R. W. Hamming, "Error Detecting and Error Correcting Codes," The Bell System Technical Journal, 1950.
[55] B. Keeth and R. Baker, DRAM Circuit Design: A Tutorial. Wiley, 2001.
[56] B. Keeth, R. J. Baker, B. Johnson, and F. Lin, DRAM Circuit Design: Fundamental and High-Speed Topics. John Wiley & Sons, 2007.
[57] D. Lee, Y. Kim, V. Seshadri, J. Liu, L. Subramanian, and O. Mutlu, "Tiered-Latency DRAM: A Low Latency and Low Cost DRAM Architecture," in HPCA, 2013.
[58] D. Lee, Y. Kim, G. Pekhimenko, S. Khan, V. Seshadri, K. Chang, and O. Mutlu, "Adaptive-Latency DRAM: Optimizing DRAM Timing for the Common-Case," in HPCA, 2015.
[59] V. Seshadri, Y. Kim, C. Fallin, D. Lee, R. Ausavarungnirun, G. Pekhimenko, Y. Luo, O. Mutlu, P. B. Gibbons, M. A. Kozuch, and T. Mowry, "RowClone: Fast and Energy-Efficient In-DRAM Bulk Data Copy and Initialization," in MICRO, 2013.
[60] K. K. Chang, A. G. Yağlıkçı, S. Ghose, A. Agrawal, N. Chatterjee, A. Kashyap, D. Lee, M. O'Connor, H. Hassan, and O. Mutlu, "Understanding Reduced-Voltage Operation in Modern DRAM Devices: Experimental Characterization, Analysis, and Mechanisms," in SIGMETRICS, 2017.
[61] K. K. Chang, P. J. Nair, D. Lee, S. Ghose, M. K. Qureshi, and O. Mutlu, "Low-Cost Inter-Linked Subarrays (LISA): Enabling Fast Inter-Subarray Data Movement in DRAM," in HPCA, 2016.
[62] S. Ghose, A. G. Yaglikci, R. Gupta, D. Lee, K. Kudrolli, W. Liu, H. Hassan, K. Chang, N. Chatterjee, A. Agrawal, M. O'Connor, and O. Mutlu, "What Your DRAM Power Models Are Not Telling You: Lessons from a Detailed Experimental Study," in SIGMETRICS, 2018.
[63] H. Hassan, G. Pekhimenko, N. Vijaykumar, V. Seshadri, D. Lee, O. Ergin, and O. Mutlu, "ChargeCache: Reducing DRAM Latency by Exploiting Row Access Locality," in HPCA, 2016.
[64] H. Hassan, N. Vijaykumar, S. Khan, S. Ghose, K. Chang, G. Pekhimenko, D. Lee, O. Ergin, and O. Mutlu, "SoftMC: A Flexible and Practical Open-Source Infrastructure for Enabling Experimental DRAM Studies," in HPCA, 2017.
[65] H. Hassan, M. Patel, J. S. Kim, A. G. Yağlıkçı, N. Vijaykumar, N. Mansouri Ghiasi, S. Ghose, and O. Mutlu, "CROW: A Low-Cost Substrate for Improving DRAM Performance, Energy Efficiency, and Reliability," in ISCA, 2019.
[66] S. Khan, D. Lee, Y. Kim, A. R. Alameldeen, C. Wilkerson, and O. Mutlu, "The Efficacy of Error Mitigation Techniques for DRAM Retention Failures: A Comparative Experimental Study," in SIGMETRICS, 2014.
[67] S. Khan, D. Lee, and O. Mutlu, "PARBOR: An Efficient System-Level Technique to Detect Data-Dependent Failures in DRAM," in DSN, 2016.
[68] S. Khan, C. Wilkerson, Z. Wang, A. R. Alameldeen, D. Lee, and O. Mutlu, "Detecting and Mitigating Data-Dependent DRAM Failures by Exploiting Current Memory Content," in MICRO, 2017.
[69] J. S. Kim, M. Patel, H. Hassan, and O. Mutlu, "Solar-DRAM: Reducing DRAM Access Latency by Exploiting the Variation in Local Bitlines," in ICCD, 2018.
[70] J. S. Kim, M. Patel, H. Hassan, and O. Mutlu, "The DRAM Latency PUF: Quickly Evaluating Physical Unclonable Functions by Exploiting the Latency–Reliability Tradeoff in Modern Commodity DRAM Devices," in HPCA, 2018.
[71] Y. Kim, W. Yang, and O. Mutlu, "Ramulator: A Fast and Extensible DRAM Simulator," CAL, 2016.
[72] D. Lee, S. Khan, L. Subramanian, S. Ghose, R. Ausavarungnirun, G. Pekhimenko, V. Seshadri, and O. Mutlu, "Design-Induced Latency Variation in Modern DRAM Chips: Characterization, Analysis, and Latency Reduction Mechanisms," in SIGMETRICS, 2017.
[73] D. Lee, L. Subramanian, R. Ausavarungnirun, J. Choi, and O. Mutlu, "Decoupled Direct Memory Access: Isolating CPU and IO Traffic by Leveraging a Dual-Data-Port DRAM," in PACT, 2015.
[74] J. Liu, B. Jaiyen, Y. Kim, C. Wilkerson, and O. Mutlu, "An Experimental Study of Data Retention Behavior in Modern DRAM Devices," in ISCA, 2013.
[75] J. Liu, B. Jaiyen, R. Veras, and O. Mutlu, "RAIDR: Retention-Aware Intelligent DRAM Refresh," in ISCA, 2012.
[76] H. Luo, T. Shahroodi, H. Hassan, M. Patel, A. G. Yaglikci, L. Orosa, J. Park, and O. Mutlu, "CLR-DRAM: A Low-Cost DRAM Architecture Enabling Dynamic Capacity-Latency Trade-Off," in ISCA, 2020.
[77] M. Patel, J. S. Kim, and O. Mutlu, "The Reach Profiler (REAPER): Enabling the Mitigation of DRAM Retention Failures via Profiling at Aggressive Conditions," in ISCA, 2017.
[78] M. Qureshi, D.-H. Kim, S. Khan, P. Nair, and O. Mutlu, "AVATAR: A Variable-Retention-Time (VRT) Aware Refresh for DRAM Systems," in DSN, 2015.
[79] V. Seshadri, D. Lee, T. Mullins, H. Hassan, A. Boroumand, J. Kim, M. A. Kozuch, O. Mutlu, P. B. Gibbons, and T. C. Mowry, "Ambit: In-Memory Accelerator for Bulk Bitwise Operations Using Commodity DRAM Technology," in MICRO, 2017.
[80] JEDEC, JESD79-4C: DDR4 SDRAM Standard, 2020.
[81] K. K. Chang, A. Kashyap, H. Hassan, S. Ghose, K. Hsieh, D. Lee, T. Li, G. Pekhimenko, S. Khan, and O. Mutlu, "Understanding Latency Variation in Modern DRAM Chips: Experimental Characterization, Analysis, and Optimization," in SIGMETRICS, 2016.
[82] JEDEC, JESD79-3: DDR3 SDRAM Standard, 2012.
[83] JEDEC, JESD79-5: DDR5 SDRAM Standard, 2020.
[84] JEDEC, JESD209-4B: Low Power Double Data Rate 4 (LPDDR4) Standard, 2017.
[85] JEDEC, JESD232A: Graphics Double Data Rate (GDDR5X) Standard, 2016.
[86] JEDEC, JESD250C: Graphics Double Data Rate 6 (GDDR6) Standard, 2021.
[87] Y. Kim, V. Seshadri, D. Lee, J. Liu, and O. Mutlu, "A Case for Exploiting Subarray-Level Parallelism (SALP) in DRAM," in ISCA, 2012.
[88] Micron, "DDR4 SDRAM Datasheet," 2016, p. 380.
[89] Micron, "DDR4 SDRAM RDIMM MTA18ASF2G72PZ – 16GB," 2016.
[90] Micron Technology, "SDRAM, 4Gb: x4, x8, x16 DDR4 SDRAM Features," 2014.
[91] S.-W. Ryu, K. Min, J. Shin, H. Kwon, D. Nam, T. Oh, T.-S. Jang, M. Yoo, Y. Kim, and S. Hong, "Overcoming the Reliability Limitation in the Ultimately Scaled DRAM using Silicon Migration Technique by Hydrogen Annealing," in IEDM, 2017.
[92] T. Sakurai, "Closed-Form Expressions for Interconnection Delay, Coupling, and Crosstalk in VLSIs," IEEE Transactions on Electron Devices, 1993.
[93] D. Frank, R. Dennard, E. Nowak, P. Solomon, Y. Taur, and H.-S. P. Wong, "Device Scaling Limits of Si MOSFETs and Their Application Dependencies," Proceedings of the IEEE, 2001.
[94] D.-S. Lee, Y.-H. Jun, and B.-S. Kong, "Simultaneous Reverse Body and Negative Word-Line Biasing Control Scheme for Leakage Reduction of DRAM," IEEE Journal of Solid-State Circuits, 2011.
[95] Linear Technology Corp., "LTspice IV," http://www.linear.com/LTspice.
[96] Apple Inc., "About the Security Content of Mac EFI Security Update 2015-001," https://support.apple.com/en-us/HT204934, 2015.
[97] Z. B. Aweke, S. F. Yitbarek, R. Qiao, R. Das, M. Hicks, Y. Oren, and T. Austin, "ANVIL: Software-Based Protection Against Next-Generation Rowhammer Attacks," in ASPLOS, 2016.
[98] D.-H. Kim, P. J. Nair, and M. K. Qureshi, "Architectural Support for Mitigating Row Hammering in DRAM Memories," CAL, 2014.
[99] M. Son, H. Park, J. Ahn, and S. Yoo, "Making DRAM Stronger Against Row Hammering," in DAC, 2017.
[100] E. Lee, I. Kang, S. Lee, G. Edward Suh, and J. Ho Ahn, "TWiCe: Preventing Row-Hammering by Exploiting Time Window Counters," in ISCA, 2019.
[101] J. M. You and J.-S. Yang, "MRLoc: Mitigating Row-Hammering Based on Memory Locality," in DAC, 2019.
[102] S. M. Seyedzadeh, A. K. Jones, and R. Melhem, "Mitigating Wordline Crosstalk Using Adaptive Trees of Counters," in ISCA, 2018.
[103] R. K. Konoth, M. Oliverio, A. Tatar, D. Andriesse, H. Bos, C. Giuffrida, and K. Razavi, "ZebRAM: Comprehensive and Compatible Software Protection Against Rowhammer Attacks," in OSDI, 2018.
[104] I. Kang, E. Lee, and J. H. Ahn, "CAT-TWO: Counter-Based Adaptive Tree, Time Window Optimized for DRAM Row-Hammer Prevention," IEEE Access, 2020.
[105] K. Bains, J. Halbert, C. Mozak, T. Schoenborn, and Z. Greenfield, "Row Hammer Refresh Command," 2015, U.S. Patent 9,117,544.
[106] K. S. Bains and J. B. Halbert, "Distributed Row Hammer Tracking," 2016, U.S. Patent 9,299,400.
[107] K. S. Bains and J. B. Halbert, "Row Hammer Monitoring Based on Stored Row Hammer Threshold Value," 2016, U.S. Patent 9,384,821.
[108] H. Gomez, A. Amaya, and E. Roa, "DRAM Row-Hammer Attack Reduction Using Dummy Cells," in NORCAS, 2016.
[109] F. Devaux and R. Ayrignac, "Method and Circuit for Protecting a DRAM Memory Device from the Row Hammer Effect," 2021, U.S. Patent 10,885,966.
[110] C. Yang, C. K. Wei, Y. J. Chang, T. C. Wu, H. P. Chen, and C. S. Lai, "Suppression of RowHammer Effect by Doping Profile Modification in Saddle-Fin Array Devices for Sub-30-nm DRAM Technology," TDMR, 2016.
[111] C.-M. Yang, C.-K. Wei, H.-P. Chen, J.-S. Luo, Y. J. Chang, T.-C. Wu, and C.-S. Lai, "Scanning Spreading Resistance Microscopy for Doping Profile in Saddle-Fin Devices," IEEE Transactions on Nanotechnology, 2017.
[112] S. Gautam, S. Manhas, A. Kumar, M. Pakala, and E. Yieh, "Row Hammering Mitigation Using Metal Nanowire in Saddle Fin DRAM," IEEE TED, 2019.
[113] Z. Greenfield and T. Levy, "Throttling Support for Row-Hammer Counters," 2016, U.S. Patent 9,251,885.
[114] G. Saileshwar, B. Wang, M. Qureshi, and P. J. Nair, "Randomized Row-Swap: Mitigating Row Hammer by Breaking Spatial Correlation Between Aggressor and Victim Rows," in ASPLOS, 2022.
[115] SAFARI Research Group, "SoftMC — GitHub Repository," https://github.com/CMU-SAFARI/softmc, 2021.
[116] Maxwell, "FT20X," https://www.maxwell-fa.com/upload/files/base/8/m/311.pdf.
[117] Xilinx, "Xilinx Alveo U200 FPGA Board," https://www.xilinx.com/products/boards-and-kits/alveo/u200.html, 2021.
[118] Adexelec, "DDR4-SOD-V1 260-pin 1.2V, DDR4 SODIMM Vertical Extender with CSR Option," http://www.adexelec.com/ddr4-sod-v1.
[119] TTi, "PL & PL-P Series DC Power Supplies Data Sheet - Issue 5," https://resources.aimtti.com/datasheets/AIM-PL+PL-P_series_DC_power_supplies_data_sheet-Iss5.pdf.
[120] T. Hamamoto, S. Sugiura, and S. Sawada, "On the Retention Time Distribution of Dynamic Random Access Memory (DRAM)," IEEE TED, 1998.
[121] Micron Technology Inc., "ECC Brings Reliability and Power Efficiency to Mobile Devices," White Paper, 2017.
[122] P. J. Nair, V. Sridharan, and M. K. Qureshi, "XED: Exposing On-Die Error Detection Information for Strong Memory Reliability," in ISCA, 2016.
[123] M. Patel, G. F. Oliveira, and O. Mutlu, "HARP: Practically and Effectively Identifying Uncorrectable Errors in Main Memory Chips That Use On-Die ECC," in MICRO, 2021.
[124] M. Patel, J. Kim, T. Shahroodi, H. Hassan, and O. Mutlu, "Bit-Exact ECC Recovery (BEER): Determining DRAM On-Die ECC Functions by Exploiting DRAM Data Retention Characteristics (Best Paper)," in MICRO, 2020.
[125] M. Patel, J. S. Kim, H. Hassan, and O. Mutlu, "Understanding and Modeling On-Die Error Correction in Modern DRAM: An Experimental Study Using Real Devices," in DSN, 2019.
[126] U. Kang, H.-S. Yu, C. Park, H. Zheng, J. Halbert, K. Bains, S. Jang, and J. S. Choi, "Co-Architecting Controllers and DRAM to Enhance DRAM Process Scaling," in The Memory Forum, 2014.
[127] M. Patel, "Enabling Effective Error Mitigation in Modern Memory Chips that Use On-Die Error-Correcting Codes," Ph.D. dissertation, ETH Zurich, 2021.
[128] J. Kim, M. Sullivan, S. Lym, and M. Erez, "All-Inclusive ECC: Thorough End-to-End Protection for Reliable Computer Memory," in ISCA, 2016.
[129] JEDEC, JESD209-5A: LPDDR5 SDRAM Standard, 2020.
[130] J. Lee, "Green Memory Solution," Investor's Forum, Samsung Electronics, 2014.
[131] S. Khan, C. Wilkerson, D. Lee, A. R. Alameldeen, and O. Mutlu, "A Case for Memory Content-Based Detection and Mitigation of Data-Dependent Failures in DRAM," CAL, 2016.
[132] L. Mukhanov, D. S. Nikolopoulos, and G. Karakonstantis, "DStress: Automatic Synthesis of DRAM Reliability Stress Viruses using Genetic Algorithms (Best Paper Nominee)," in MICRO, 2020.
[133] A. van de Goor and I. Schanstra, "Address and Data Scrambling: Causes and Impact on Memory Tests," in DELTA, 2002.
[134] A. Barenghi, L. Breveglieri, N. Izzo, and G. Pelosi, "Software-Only Reverse Engineering of Physical DRAM Mappings for Rowhammer Attacks," in IVSW, 2018.
[135] M. Horiguchi, "Redundancy Techniques for High-Density DRAMs," in ISIS, 1997.
[136] K. Itoh, VLSI Memory Chip Design. Springer, 2001.
[137] M. Patel, T. Shahroodi, A. Manglik, A. G. Yaglikci, A. Olgun, H. Luo, and O. Mutlu, "A Case for Transparent Reliability in DRAM Systems," arXiv:2204.10378 [cs.AR], 2022.
[138] SAFARI Research Group, "RowHammer Under Reduced Wordline Voltage — GitHub Repository," https://github.com/CMU-SAFARI/RowHammer-Under-Reduced-Wordline-Voltage, 2022.
[139] PTM, "Predictive Technology Model," http://ptm.asu.edu/.
[140] W. Zhao and Y. Cao, "New Generation of Predictive Technology Model for sub-45 nm Early Design Exploration," IEEE TED, 2006.
[141] International Technology Roadmap for Semiconductors, "ITRS Reports," http://www.itrs2.net/itrs-reports.html, 2015.
[142] T. Vogelsang, "Understanding the Energy Consumption of Dynamic Random Access Memories," in MICRO, 2010.
[143] B. Everitt, DRAM Circuit Design: Fundamental and High-Speed Topics. Cambridge University Press, 1998.
[144] A. Das, H. Hassan, and O. Mutlu, "VRL-DRAM: Improving DRAM Performance via Variable Refresh Latency," in DAC, 2018.
[145] Y. Wang, A. Tavakkol, L. Orosa, S. Ghose, N. M. Ghiasi, M. Patel, J. S. Kim, H. Hassan, M. Sadrosadati, and O. Mutlu, "Reducing DRAM Latency via Charge-Level-Aware Look-Ahead Partial Restoration," in MICRO, 2018.
[146] H. David, C. Fallin, E. Gorbatov, U. R. Hanebutte, and O. Mutlu, "Memory Power Management via Dynamic Voltage/Frequency Scaling," in ICAC, 2011.
[147] Q. Deng, D. Meisner, L. Ramos, T. F. Wenisch, and R. Bianchini, "MemScale: Active Low-Power Modes for Main Memory," in ASPLOS, 2011.
[148] Micron, "DDR4 SDRAM RDIMM MTA18ASF2G72PZ – 16GB," https://www.micro-semiconductor.com/datasheet/7c-MTA18ASF2G72PZ-2G9E1.pdf.
[149] Crucial, "CT4G4DFS8266," https://www.crucial.com/memory/eol_ddr4/ct4g4dfs8266.
[150] CORSAIR, "SKU CMV4GX4M1A2133C15 Specification," https://tinyurl.com/CMV4GX4M1A2133C15.
[151] Samsung, "288pin Unbuffered DIMM based on 8Gb D-die, Rev 1.1," https://semiconductor.samsung.com/resources/data-sheet/DDR4_8Gb_D_die_Unbuffered_DIMM_Rev1.1_Jun.18.pdf, 2018.
[152] GSKill, "F4-2400C17S-8GNT Specifications," https://www.gskill.com/product/165/186/1535961538/F4-2400C17S-8GNT.
[153] Samsung, "288pin Registered DIMM based on 8Gb B-die, Rev 1.91," https://semiconductor.samsung.com/resources/data-sheet/20170731_DDR4_8Gb_B_die_Registered_DIMM_Rev1.91_May.17.pdf, 2017.
[154] Samsung, "M471A5143EB0-CPB Specifications," https://semiconductor.samsung.com/dram/module/sodimm/m471a5143eb0-cpb/.
[155] CORSAIR, "CMK16GX4M2B3200C16," https://www.corsair.com/eu/en/Categories/Products/Memory/VENGEANCE-LPX/p/CMK16GX4M2B3200C16.
[156] Samsung, "260pin Unbuffered SODIMM based on 8Gb C-die," https://semiconductor.samsung.com/resources/data-sheet/DDR4_8Gb_C_die_Unbuffered_SODIMM_Rev1.5_Apr.18.pdf, 2018.
[157] Kingston, "KSM32RD8/16HDR Specifications," https://www.kingston.com/dataSheets/KSM32RD8_16HDR.pdf, 2020.
[158] Memory.NET, "HMAA4GU6AJR8N," https://memory.net/product/hmaa4gu6ajr8n-xn-sk-hynix-1x-32gb-ddr4-3200-udimm-pc4-25600u-dual-rank-x8-module/.
Appendix A. Tested DRAM Modules
Table 3 shows the characteristics of the DDR4 DRAM modules we test and analyze.¹⁵ For each DRAM module, we provide the 1) DRAM chip manufacturer, 2) DIMM name, 3) DIMM model,¹⁶ 4) die density, 5) data transfer frequency, 6) chip organization, 7) die revision, specified in the module's serial presence detect (SPD) registers, 8) manufacturing date, specified on the module's label in the form week-year, and 9) RowHammer vulnerability characteristics of the module. Table 3 reports the RowHammer vulnerability characteristics of each DIMM under two wordline voltage (VPP) levels: i) nominal VPP (2.5 V) and ii) the lowest VPP at which the DRAM module can successfully communicate with the FPGA (VPPmin). We quantify a DIMM's RowHammer vulnerability at a given VPP in terms of two metrics: i) the minimum aggressor row activation count necessary to cause a RowHammer bit flip (HCfirst) and ii) the fraction of DRAM cells that experience a bit flip in a DRAM row (BER). Based on these two metrics at nominal VPP and VPPmin, Table 3 also provides a recommended VPP level (VPPRec) and the corresponding RowHammer characteristics in the right-most three columns.

Table 3: Tested DRAM modules and their characteristics when VPP = 2.5 V (nominal) and VPP = VPPmin. VPPmin is specified for each module.
Columns, left to right: DIMM Name | DIMM Model | Die Density | Frequency (MT/s) | Chip Org. | Die Revision | Mfr. Date | Minimum HCfirst / BER at VPP = 2.5 V | VPPmin | Minimum HCfirst / BER at VPP = VPPmin | Recommended VPP (VPPRec) | Minimum HCfirst / BER at VPP = VPPRec
Mfr. A (Micron):
A0 | MTA18ASF2G72PZ-2G3B1QK [148] | 8Gb | 2400 | x4 | B | 11-19 | 39.8K / 1.24e-03 | 1.4 | 42.2K / 1.00e-03 | 1.4 | 42.2K / 1.00e-03
A1 | MTA18ASF2G72PZ-2G3B1QK [148] | 8Gb | 2400 | x4 | B | 11-19 | 42.2K / 9.90e-04 | 1.4 | 46.4K / 7.83e-04 | 1.4 | 46.4K / 7.83e-04
A2 | MTA18ASF2G72PZ-2G3B1QK [148] | 8Gb | 2400 | x4 | B | 11-19 | 41.0K / 1.24e-03 | 1.7 | 39.8K / 1.35e-03 | 2.1 | 42.1K / 1.55e-03
A3 | CT4G4DFS8266.C8FF [149] | 4Gb | 2666 | x8 | F | 07-21 | 16.7K / 3.33e-02 | 1.4 | 16.5K / 3.52e-02 | 1.7 | 17.0K / 3.48e-02
A4 | CT4G4DFS8266.C8FF [149] | 4Gb | 2666 | x8 | F | 07-21 | 14.4K / 3.18e-02 | 1.5 | 14.4K / 3.33e-02 | 2.5 | 14.4K / 3.18e-02
A5 | CT4G4SFS8213.C8FBD1 | 4Gb | 2400 | x8 | - | 48-16 | 140.7K / 1.39e-06 | 2.4 | 145.4K / 3.39e-06 | 2.4 | 145.4K / 3.39e-06
A6 | CT4G4DFS8266.C8FF [149] | 4Gb | 2666 | x8 | F | 07-21 | 16.5K / 3.50e-02 | 1.5 | 16.5K / 3.66e-02 | 2.5 | 16.5K / 3.50e-02
A7 | CMV4GX4M1A2133C15 [150] | 4Gb | 2133 | x8 | - | - | 16.5K / 3.42e-02 | 1.8 | 16.5K / 3.52e-02 | 2.5 | 16.5K / 3.42e-02
A8 | MTA18ASF2G72PZ-2G3B1QG [148] | 8Gb | 2400 | x4 | B | 11-19 | 35.2K / 2.38e-03 | 1.4 | 39.8K / 2.07e-03 | 1.4 | 39.8K / 2.07e-03
A9 | CMV4GX4M1A2133C15 [150] | 4Gb | 2133 | x8 | - | - | 14.3K / 3.33e-02 | 1.5 | 14.3K / 3.48e-02 | 1.6 | 14.6K / 3.47e-02
Mfr. B (Samsung):
B0 | M378A1K43DB2-CTD [151] | 8Gb | 2666 | x8 | D | 10-21 | 7.9K / 1.18e-01 | 2.0 | 7.6K / 1.22e-01 | 2.5 | 7.9K / 1.18e-01
B1 | M378A1K43DB2-CTD [151] | 8Gb | 2666 | x8 | D | 10-21 | 7.3K / 1.26e-01 | 2.0 | 7.6K / 1.28e-01 | 2.0 | 7.6K / 1.28e-01
B2 | F4-2400C17S-8GNT [152] | 4Gb | 2400 | x8 | F | 02-21 | 11.2K / 2.52e-02 | 1.6 | 12.0K / 2.22e-02 | 1.6 | 12.0K / 2.22e-02
B3 | M393A1K43BB1-CTD6Y [153] | 8Gb | 2666 | x8 | B | 52-20 | 16.6K / 2.73e-03 | 1.6 | 21.1K / 1.09e-03 | 1.6 | 21.1K / 1.09e-03
B4 | M393A1K43BB1-CTD6Y [153] | 8Gb | 2666 | x8 | B | 52-20 | 21.0K / 2.95e-03 | 1.8 | 19.9K / 2.52e-03 | 2.0 | 21.1K / 2.68e-03
B5 | M471A5143EB0-CPB [154] | 4Gb | 2133 | x8 | E | 08-17 | 21.0K / 7.78e-03 | 1.8 | 21.0K / 6.02e-03 | 2.0 | 21.1K / 8.67e-03
B6 | CMK16GX4M2B3200C16 [155] | 8Gb | 3200 | x8 | - | - | 10.3K / 1.14e-02 | 1.7 | 10.5K / 9.82e-03 | 1.7 | 10.5K / 9.82e-03
B7 | M378A1K43DB2-CTD [151] | 8Gb | 2666 | x8 | D | 10-21 | 7.3K / 1.32e-01 | 2.0 | 7.6K / 1.33e-01 | 2.0 | 7.6K / 1.33e-01
B8 | CMK16GX4M2B3200C16 [155] | 8Gb | 3200 | x8 | - | - | 11.6K / 2.88e-02 | 1.7 | 10.5K / 2.37e-02 | 1.8 | 11.7K / 2.58e-02
B9 | M471A5244CB0-CRC [156] | 8Gb | 2133 | x8 | C | 19-19 | 11.8K / 2.68e-02 | 1.7 | 8.8K / 2.39e-02 | 1.8 | 12.3K / 2.54e-02
Mfr. C (SK Hynix):
C0 | F4-2400C17S-8GNT [152] | 4Gb | 2400 | x8 | B | 02-21 | 19.3K / 7.29e-03 | 1.7 | 23.4K / 6.61e-03 | 1.7 | 23.4K / 6.61e-03
C1 | F4-2400C17S-8GNT [152] | 4Gb | 2400 | x8 | B | 02-21 | 19.3K / 6.31e-03 | 1.7 | 20.6K / 5.90e-03 | 1.7 | 20.6K / 5.90e-03
C2 | KSM32RD8/16HDR [157] | 8Gb | 3200 | x8 | D | 48-20 | 9.6K / 2.82e-02 | 1.5 | 9.2K / 2.34e-02 | 2.3 | 10.0K / 2.89e-02
C3 | KSM32RD8/16HDR [157] | 8Gb | 3200 | x8 | D | 48-20 | 9.3K / 2.57e-02 | 1.5 | 8.9K / 2.21e-02 | 2.3 | 9.7K / 2.66e-02
C4 | HMAA4GU6AJR8N-XN [158] | 16Gb | 3200 | x8 | A | 51-20 | 11.6K / 3.22e-02 | 1.5 | 11.7K / 2.88e-02 | 1.5 | 11.7K / 2.88e-02
C5 | HMAA4GU6AJR8N-XN [158] | 16Gb | 3200 | x8 | A | 51-20 | 9.4K / 3.28e-02 | 1.5 | 12.7K / 2.85e-02 | 1.5 | 12.7K / 2.85e-02
C6 | CMV4GX4M1A2133C15 [150] | 4Gb | 2133 | x8 | C | - | 14.2K / 3.08e-02 | 1.6 | 15.5K / 2.25e-02 | 1.6 | 15.5K / 2.25e-02
C7 | CMV4GX4M1A2133C15 [150] | 4Gb | 2133 | x8 | C | - | 11.7K / 3.24e-02 | 1.6 | 13.6K / 2.60e-02 | 1.6 | 13.6K / 2.60e-02
C8 | KSM32RD8/16HDR [157] | 8Gb | 3200 | x8 | D | 48-20 | 11.4K / 2.69e-02 | 1.6 | 9.5K / 2.57e-02 | 2.5 | 11.4K / 2.69e-02
C9 | F4-2400C17S-8GNT [152] | 4Gb | 2400 | x8 | B | 02-21 | 12.6K / 2.18e-02 | 1.7 | 15.2K / 1.63e-02 | 1.7 | 15.2K / 1.63e-02

¹⁵ All tested DRAM modules implement the DDR4 DRAM standard [80]. We make our best effort in identifying the DRAM chips used in our tests. We identify the DRAM chip density and die revision through the original manufacturer markings on the chip. For certain DIMMs we tested, the original DRAM chip markings are removed by the DIMM manufacturer. In this case, we can only identify the chip manufacturer and density by reading the information stored in the SPD. However, these DIMM manufacturers also tend to remove the die revision information from the SPD. Therefore, we cannot identify the die revision of five DIMMs and the manufacturing date of six DIMMs we test, shown as '-' in the table.
¹⁶ DIMM models CMV4GX4M1A2133C15 and F4-2400C17S-8GNT appear under more than one DRAM chip manufacturer because different batches of these modules use DRAM chips from different manufacturers (i.e., Micron and SK Hynix, and Samsung and SK Hynix, respectively).
