
Received 29 July 2022; revised 2 October 2022 and 30 October 2022; accepted 22 November 2022. Date of publication 6 December 2022; date of current version 7 April 2023.
Digital Object Identifier 10.1109/OJSSCS.2022.3227009

Cryptanalysis of Strong Physically Unclonable Functions

LILIYA KRALEVA^1, MOHAMMAD MAHZOUN^2, RALUCA POSTEUCA^1, DILARA TOPRAKHISAR^1, TOMER ASHUR^3, AND INGRID VERBAUWHEDE^1 (Fellow, IEEE)


(Invited Paper)
^1 imec-COSIC, KU Leuven, 3001 Leuven, Belgium

^2 Department of Mathematics and Computer Science, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands

^3 Cryptomeria, Leuven, Belgium

CORRESPONDING AUTHOR: R. POSTEUCA (e-mail: raluca.posteuca@esat.kuleuven.be)


This work was supported in part by CyberSecurity Research Flanders under Grant VR20192203; in part by the Research Council of the KU Leuven under Grant C16/15/058; in part by the European Commission through the Horizon 2020 Research and Innovation Program under the Belfort ERC Advanced Grant, Grant Agreement 101020005, and under Grant Agreement 695305; and in part by Intel through the Intel Project on Cryptographic Frontiers. The work of Liliya Kraleva was supported by the Research Foundation Flanders (FWO). The work of Tomer Ashur was supported by FWO under Grant 12ZH420N.

ABSTRACT Physically unclonable functions (PUFs) are being proposed as a low-cost alternative to
permanently store secret keys or provide device authentication without requiring nonvolatile memory, large
e-fuses, or other dedicated processing steps. In the literature, PUFs are split into two main categories.
The so-called strong PUFs are mainly used for authentication purposes; hence, also called authentication
PUFs. They promise to be lightweight by avoiding extensive digital post-processing and cryptography. The
so-called weak PUFs, also called key generation PUFs, can only provide authentication when combined
with a cryptographic authentication protocol. Over the years, multiple research results have demonstrated
that Strong PUFs can be modeled and attacked by machine learning (ML) techniques. Hence, the general
assumption is that the security of a strong PUF is solely dependent on its security against ML attacks. The
goal of this article is to debunk this myth, by analyzing and breaking three recently published Strong PUFs
(Suresh et al., VLSI Circuits 2020; Liu et al., ISSCC 2021; and Jeloka et al., VLSI Circuits 2017). The
attacks presented in this article have practical complexities and use generic symmetric key cryptanalysis
techniques.

INDEX TERMS Cascaded PUFs, cryptanalysis, physically unclonable functions (PUFs), strong PUF.

I. INTRODUCTION
PHYSICALLY unclonable functions (PUFs) are the method of choice for hardware applications requiring device authentication. Since securely storing a secret key in an integrated circuit (IC) is expensive and simply hard-coding it is vulnerable to physical attacks, PUFs offer a third option: as the manufacturing process of ICs is subject to environmental variances, one can parameterize a cryptographic algorithm by harvesting the resulting randomness.
PUF taxonomy distinguishes between two types of PUFs, namely, Weak- and Strong PUFs. While both types are described in the literature as Challenge–Response Protocols, they differ by the challenge domain's size, i.e., the number of challenge–response pairs (CRPs). Weak PUFs support a relatively small number of CRPs, while the number of CRPs supported by a Strong PUF is much larger. Thus, Weak PUFs are usually used for storing a (small number of) cryptographic key(s), whereas Strong PUFs are often perceived as a building block in an authentication protocol.
The focus of this article is on analyzing the security of Strong PUFs as a device implementing a random n-to-m function. Broadly speaking, the workings of such a device consist of an n-bit challenge and an m-bit response; in Strong PUF-literature, typically, m = 1. To compute the response, the PUF uses a finite amount of intrinsic randomness harvested from some physical properties of the hardware implementing it (e.g., the start-up value of a SRAM or the delay of a multiplexer in the case of an arbiter PUF or the
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

VOLUME 3, 2023 32
KRALEVA et al.: CRYPTANALYSIS OF STRONG PUFs

frequency of a ring oscillator). Following a sequence of successful attacks using machine learning (ML) techniques, the most recent trend is to build Strong PUFs from an IC template cascading nonlinear components. The behavior of the nonlinear components is determined by the intrinsic randomness mentioned above and the intuition behind this approach is that the cascade amplifies the nonlinear effect. It appears from the literature that this approach indeed thwarts ML attacks; however, as we show in this article, it is not enough by itself to provide the desired properties.

A. PREVIOUS WORK
The first PUF allowing for a large number of CRPs (thus initiating the research line on Strong PUFs) was the arbiter PUF introduced in [1]. The arbiter PUF exploits signal delay variations unique to each IC to parameterize a challenge–response protocol (i.e., assigning a unique behavior to each device). This article further introduces the adversary objectives and capabilities. It is assumed that the adversary has physical access to the IC and their goal is to clone the PUF. The PUF is considered secure if the adversary is unable to:^1
1) apply an exhaustive search over the entire CRP space;
2) produce a counterfeit that perfectly simulates the behavior of the attacked PUF;
3) apply a timing attack based on a measurement of the delays in the attacked strong PUF, followed by a prediction of the outputs;
4) apply noninvasive attacks (e.g., algorithmic attacks); in this case, the adversary models the PUF and tries to predict the response associated with a specific challenge with "very high" probability.^2
A series of strong PUFs were subsequently proposed in the literature, e.g., ring oscillator [2], XOR-Arbiter [2], or the OT-based PUFs [3]. At CCS'10, Rührmair et al. [4] developed a ML model attempting to predict the response of previously published strong PUFs. This resulted in breaking all Strong PUFs published until that point.
Other important work in the field of Strong PUFs' analysis is presented by Delvaux et al. [5], in which they analyzed eight Strong PUFs in an integrated framework. This work was then extended in [6], in which 11 more Strong PUFs were added to the integrated framework, showing numerous security and practicality issues for all the 19 analyzed primitives. Both papers focus on the analysis of Strong PUFs as authentication protocols, underlining that the CRP size of all the analyzed Strong PUFs is not suitable for ensuring a sufficient level of security. The conclusion of these two papers is that "proper compensation seems to be in conflict with the lightweight objective" of a Strong PUF.
Moreover, Delvaux [7] presented an analysis of five Arbiter-PUF-based authentication protocols, using ML techniques, concluding that the use of lightweight obfuscation logic provides insufficient protection against machine-learning attacks for all five analyzed primitives.
Consequently, the primary focus in PUF-design has shifted toward explicitly showing resistance to ML-modeling. A recent trend in this direction is cascaded Strong PUFs suggested in [8]. The idea is to use a composition of random subfunctions (i.e., a cascade), conjecturing that the overall nonlinear effect thwarts ML-modeling. The three PUFs we investigate in this article are all of this type. Besides the three primitives addressed in this article, many more strong PUFs are proposed in the literature, always aiming at increasing the ML resistance. One such example is the "Double-Arbiter" PUF [9], which introduces a new type of Strong PUF based on the Arbiter PUF. The aim of this new primitive is to increase the unpredictability of the response, which is measured as the tolerance of the primitive to ML attacks. Another example of a cascaded PUF is the LP-PUF introduced in [10]. The LP-PUF consists of three layers, namely, an Arbiter layer, a mixing layer, and a XOR layer, leading to the generation of 1-bit responses. As the author observes, the structure of the LP-PUF resembles the substitution-permutation-network (SPN), a technique used for the design of block ciphers. The same observation could apply for many cascaded Strong PUFs presented in the literature, leading to a natural inclusion of symmetric-key cryptanalysis techniques in the security evaluation of a Strong PUF. Although ML represents an important technique to assess the security of a primitive, it represents only a first step in the security analysis. ML is a black-box approach and it does not take into account the structure of a Strong PUF. In this work, we go beyond this approach, by using tools from symmetric-key cryptanalysis and by taking insights from the description of the primitive. More precisely, in this article, we choose to analyze three representative strong PUFs (Suresh et al. [11], VLSI Circuits 2020; Liu et al. [12], ISSCC 2021; and Jeloka et al. [13], VLSI Circuits 2017) because they have nice CMOS circuit implementations.

B. OUR CONTRIBUTION
This article serves as a tutorial for the design of secure PUFs from a symmetric-key point of view. We motivate the need for such a tutorial by investigating—from an algorithmic point of view—three recently published cascaded Strong PUFs and show that they all exhibit undesirable properties undermining their security.
We stress that our aim with this article is to provide general guidelines to the design of secure algorithms and that we did not attempt to provide a thorough cryptanalysis of the three algorithms. Indeed, more powerful attacks probably exist, and directly fixing the issues we raise is unlikely to result in secure devices.

^1 We stress that these attack scenarios are reproduced from [1] and are therefore informal. We provide a formal discussion in Section V.
^2 The term very high probability is used in the original paper. We interpret it to mean better than a random guess.
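To make the modeling threat concrete, the classical arbiter PUF admits a well-known additive-delay abstraction: the response is the sign of a linear function of a parity transform of the challenge, so even a plain perceptron learns it from observed CRPs. The sketch below is a toy illustration under that abstraction; the stage count, sample sizes, and training schedule are our own illustrative choices, not parameters taken from [1] or [4].

```python
# Toy additive-delay model of an n-stage arbiter PUF and a perceptron
# that learns it from CRPs. All parameter values are illustrative only.
import random

random.seed(1)
N = 16  # number of arbiter stages (illustrative)

# Secret per-device stage delays (the "intrinsic randomness").
w = [random.gauss(0.0, 1.0) for _ in range(N + 1)]

def features(challenge):
    """Standard parity transform: phi_i = prod_{j >= i} (1 - 2*c_j)."""
    phi = [1.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        phi[i] = phi[i + 1] * (1 - 2 * challenge[i])
    return phi

def puf(challenge):
    """1-bit response: sign of the accumulated delay difference."""
    s = sum(wi * fi for wi, fi in zip(w, features(challenge)))
    return 1 if s > 0 else 0

# Collect CRPs and train a perceptron on the parity features.
crps = [[random.randint(0, 1) for _ in range(N)] for _ in range(3000)]
train, test = crps[:2500], crps[2500:]
model = [0.0] * (N + 1)
for _ in range(50):  # epochs
    for c in train:
        phi = features(c)
        guess = 1 if sum(m * f for m, f in zip(model, phi)) > 0 else 0
        err = puf(c) - guess
        if err:
            model = [m + err * f for m, f in zip(model, phi)]

acc = sum(
    (1 if sum(m * f for m, f in zip(model, features(c))) > 0 else 0) == puf(c)
    for c in test
) / len(test)
print(f"model accuracy on unseen challenges: {acc:.2f}")
```

On this toy instance the learned linear model typically agrees with the device on well over 90% of unseen challenges, which is precisely the kind of noninvasive modeling attack listed as item 4) above.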

FIGURE 1. Schematic description of the PUF. The figure is copied without any modifications from [11] and we use it under the provisions of fair use.

II. SURESH ET AL.
In this section, we present the details of the strong PUF proposed by Suresh et al. [11]. We then show that the cascaded nature of this algorithm collapses the 2^128 CRP domain into a much smaller subspace of equivalence classes.

A. DESCRIPTION
Suresh et al. [11] offered a cascaded algorithm in 14-nm CMOS with a claimed 10^28 ≈ 2^93 CRP space. It takes a 128-bit challenge and returns a 1-bit response. The algorithm is abstracted into three layers as depicted in Fig. 1.
The first layer consists of 64 random 4-to-1 functions (denoted ES1 boxes in the original paper). It takes a 256-bit input and returns a 64-bit output. The second layer involves 16 AES Sboxes, each taking an 8-bit input and returning an 8-bit output. The third layer again consists of 64 random 4-to-1 functions (denoted ES2 boxes in the original paper). Finally, the 1-bit response is taken as the parity of the third layer's output.
The original paper also refers to the first and third layers as Stage 1 and Stage 2, respectively. In lieu of a complete specification, which was not provided in [11], we proceed by assuming that the bit permutations are according to the wirings illustrated in Fig. 1.

B. INTUITION
According to [11], the PUF offers a challenge space in the range 10^28–10^31 ≈ 2^93–2^103. As this is the only quantifiable claim, we understand it to be the advertised security. Since the response of the PUF consists of only a single bit, predicting the output with probability better than 1/2 amounts to a successful attack.
Our first observation is that the first layer is an entropy choke point, i.e., no matter how much randomness was invested in it, it cannot cascade more than 64 bits of entropy to the rest of the device. We say that two inputs are in the same collision class if they result in the same output after the first layer (Stage 1). Such two values will result in the same computation throughout the rest of the device and subsequently the same response for both inputs.
Our second observation is that each pair (ES1_{32+n}, ES1_n) of ES1 functions is isolated from all other ES1 functions. Therefore, we consider an alternative representation where the first layer consists of 32 functions each mapping a 4-bit input to a 2-bit output. Then, the output values are 00, 01, 10, and 11 and they induce four equivalence classes each containing four values on average.

C. RECOVERING THE EQUIVALENCE CLASSES
By using the two observations presented in the previous section, we show how 2^11.8 ≈ 10^3.55 queries are enough to group the 128-bit inputs into 2^64 sets. Each of these sets contains on average 2^64 values all resulting in the same response. Thus, learning the response to one challenge leaks the response to all the other 2^64 − 1 values from the same equivalence class.
Without loss of generality, we consider the equivalence classes of the pair (ES1_63, ES1_31). The adversary fixes the last 124 bits of the input to an arbitrary value and iterates the first four. Two sets S_0 and S_1 are initialized and each 4-bit input is added to the set S_i if and only if the response is i. However, those are not yet the desired equivalence classes. Having the same response for two different plaintexts can be caused by any of the following three reasons.
1) A collision after Stage 1.
2) A collision after Stage 2, without a collision after Stage 1.
3) The same Hamming weight after Stage 2, regardless of the state after Stage 1 or Stage 2.
The last two cases are false positives that need to be filtered out. In order to do so, the last 124 bits are fixed to a different arbitrary value.
Note that the elements in the same class will always be mapped together to one of the sets (but not necessarily the same one for different values of the last 124 bits). The equivalence classes are then computed by identifying which values are always mapped together to the same set. The probability of recovering the correct equivalence classes is approximately 99% when the process is repeated seven times. For comparison, even three repetitions result in a success probability of 65% to identify the right set. The probability was computed empirically, using Monte Carlo simulations.
The complexity of determining an equivalence class for a single pair of ES1 functions with 99% success rate is 7 · 16 ≈ 2^6.8 ≈ 10^2.05 chosen queries. This is repeated for each pair of ES1 functions independently, leading to an overall complexity of 32 × 2^6.8 = 2^11.8 ≈ 10^3.55 chosen challenges for recovering all pair-equivalence classes with expected success probability of (1 − 0.01)^32 = 0.73 (or 73%).
The next step after recovering all pair-equivalence classes is to link them into state-equivalence classes. We define a

VOLUME 3, 2023 32
KRALEVA et al.: CRYPTANALYSIS OF STRONG PUFs

state-equivalence class as a set of challenges for which the state after Stage 1 is equal, therefore leading to the same response for both inputs.
Since each pair-equivalence class has an average of four elements, the average number of elements in one state-equivalence class is 4^32 = 2^64 ≈ 10^19.27. Additionally, the state after Stage 1 is 64-bit long, leading to 2^64 ≈ 10^19.27 different state-equivalence classes.
At this point, learning the response to any challenge leaks the response to all other challenges within the same state-equivalence class. Note that the classes are built such that, upon observing a new challenge, it requires negligible effort to identify which state-equivalence class it belongs to.
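The recovery procedure above can be illustrated with a short simulation. The model below is not the circuit of [11] (no complete specification is available); it only keeps the structural property the attack exploits, namely a first layer of 32 independent 4-bit-to-2-bit random functions (the entropy choke point) followed by an arbitrary fixed digest. All names and parameters are our own.

```python
# Toy illustration of the pair-equivalence-class recovery of Section II-C.
# Stage 1 is 32 independent random 4 -> 2 functions; everything after it is
# an arbitrary deterministic function (here: one bit of a SHA-256 digest).
import hashlib
import random

random.seed(7)
NPAIRS = 32  # the 32 ES1 pairs of the alternative representation

# Stage 1: one random 4 -> 2 function per pair.
es1 = [[random.randrange(4) for _ in range(16)] for _ in range(NPAIRS)]

def response(challenge):
    """challenge: 32 nibbles -> 1-bit response."""
    state = bytes(es1[k][c] for k, c in enumerate(challenge))
    return hashlib.sha256(state).digest()[0] & 1

# Attack on pair 0: fix the remaining nibbles to several arbitrary values
# and record which 4-bit prefixes always fall into the same response set.
TRIALS = 16  # more repetitions than the seven in the text, for a clean demo
signatures = {p: [] for p in range(16)}
for _ in range(TRIALS):
    suffix = [random.randrange(16) for _ in range(NPAIRS - 1)]
    for p in range(16):
        signatures[p].append(response([p] + suffix))

classes = {}
for p in range(16):
    classes.setdefault(tuple(signatures[p]), []).append(p)
recovered = sorted(classes.values())

# Ground truth: prefixes mapped to the same 2-bit value by the first box.
by_output = {}
for p in range(16):
    by_output.setdefault(es1[0][p], []).append(p)
truth = sorted(by_output.values())

print("recovered:", recovered)
print("truth:    ", truth)
```

Because members of the same class can never be separated by any suffix, the recovered partition can only err by merging classes; repeating the experiment with more suffixes drives that failure probability down, matching the 99%-after-seven-repetitions figure reported above.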

D. DISCUSSION
Our approach exploits the fact that the input space collapses from 2^128 to 2^64 after Stage 1. We see that by investing a relatively small amount of effort (the analysis of 2^11.8 chosen challenges) an adversary can learn the outputs of Stage 1 for any new challenge, effectively removing the first layer of random functions. Moreover, if the adversary learns the response associated to a challenge, then they also know that the same response is associated to all the challenges in the same state-equivalence class with the initial one. This is an undesirable behavior; or alternatively, in case this is an acceptable behavior, it can be achieved using cheaper components.
Note that we did not investigate the properties of the other layers, and it is likely that this basic attack can be improved further.

III. LIU ET AL.
The second design we analyze is [12] due to Liu et al. We show that this device is vulnerable to two generic attacks. The first attack can be applied if the device exhibits an inherent bias, which can easily be detected and exploited. The second attack is independent of the bias of the device and allows the adversary to guess the responses associated to a group of well-chosen challenges.

A. DESCRIPTION
Liu et al. proposed a cascade of 5-to-5-bit random functions formed in two layers of five with a bit-permutation between them, resulting in a 25-to-25-bit function which we denote by S(x); see Phase 1 in Fig. 2. S(x) is used to digest the 100-bit input by first splitting it into four 25-bit blocks, then consuming the blocks iteratively with a feed-backward operation from each block to the next one; see Phase 2 in Fig. 2.
A finalization function, which we denote by C(x), is used to compute the response. First, a sequence of 5-to-1-bit random functions is applied to the output coming from the last call to S(x), resulting in a 5-bit output. Then, the parity of these five bits is returned as the response; see Phase 3 in Fig. 2.

FIGURE 2. Schematic description of the PUF. The figure is copied without any modifications from [12] and we use it under the provisions of fair use.

Formally, denote by x = (x0, x1, x2, x3) the 100-bit challenge, where xi is the ith part of length 25. Then

PUF(x) = C(S(S(S(S(x0) ⊕ x1) ⊕ x2) ⊕ x3))    (1)

is a succinct description of the function.
As before, we assume that the adversary does not have access to the description of the internal functions. Again the adversary seeks to predict the 1-bit response associated to a newly seen challenge with a probability better than 1/2.

B. INTUITION
From the formal description in (1) arises a natural observation.
Observation 1: Fix the first 75 bits of the challenge, namely, x0, x1, and x2; then, (1) is reduced to

PUF(x) = C(S(c ⊕ x3))    (2)

where c = S(S(S(x0) ⊕ x1) ⊕ x2) is fixed but unknown.
For brevity, we define an auxiliary function

f(x) = C(S(x)).    (3)

We underline that PUF(x) = f(x ⊕ c), where c = S(S(S(x0) ⊕ x1) ⊕ x2).
Definition 1: We define a map of a function f as the ordered set M_f = {(x, f(x)) ∀x}, where the order is defined as follows:

(x1, f(x1)) < (x2, f(x2)) ⇔ x1 < x2.

Note that the map M_f was constructed such that the input before the last application of the S function takes all possible values. In particular, all the maps have the same elements, but arranged in a different order. This property is formally described in the following general observation.

Observation 2: Let x0, x1, and x2 be randomly chosen and fixed. Then, the map M_{f_c} = {(x ⊕ c, f(x ⊕ c)) ∀x} is an affine translation by c = S(S(S(x0) ⊕ x1) ⊕ x2) of the map M_f.
By constructing one arbitrary M_f in full, the adversary learns its distribution, i.e., the number of 0 or 1 responses. Due to Observation 2, any map M_{f_c} has the same distribution as M_f. If the device exhibits an inherent bias, then the distribution of M_f can be trivially used by the adversary to predict the output to any challenge not in M_f with a probability better than 1/2. For example, if the number of 0 responses associated to M_f is 2^25 − 2^20, then the probability of having a 0 response to an arbitrary challenge is Pr = (2^25 − 2^20)/2^25 = 1 − 2^−5 ≈ 0.97 (97%).

C. ATTACK DESCRIPTION
To construct M_f, the adversary first chooses an arbitrary 75-bit value which they use to fix x0||x1||x2. Then, iterating over the remaining 25 bits, the adversary queries the PUF 2^25 ≈ 10^7.53 times and records the responses in M_f.
Constructing M_f requires 2^25 ≈ 10^7.53 chosen challenges. The memory complexity for storing M_f can be optimized to 2^22 bytes, i.e., 4.19 MB, by querying the challenges in a natural order and indexing the responses accordingly.
Fixing M_f as a reference system, and recalling Observation 2, we see that determining c is sufficient for translating CRPs from M_f to M_{f_c}. To do so, the adversary observes 25 CRPs from the target equivalence class (i.e., these 25 CRPs share the same value in the first 75 bits). Then, by means of exhaustive search on c, the adversary filters candidates where M_f[i ⊕ c] = M_{f_c}[i] using the 25 queries. This exhaustive search could be viewed as solving a system of equations with 25 unknown bits. Therefore, the minimum number of equations such that this system is independent is 25. Our experiments show that 25 CRPs are enough for the correct c to be the only surviving candidate with high probability. At this point, the adversary can determine with full certainty that M_{f_c}[i] = M_f[i ⊕ c] for all values of i. This way the adversary can learn the responses to all the remaining challenges from M_{f_c}.

D. DISCUSSION
Note that the fact that the device is biased is not a problem in itself. Daemen and Rijmen analyzed in [14] the bias of ideal m-to-n-bit functions and showed that it is approximately normally distributed with mean 0 and variance 2^−n. For an ideal 100-to-1-bit function, such a bias would not be detectable. However, what Observation 2 shows is that this device actually models a random 25-to-1-bit function (in the best-case scenario), making it significantly easier to detect the bias.
A second observation that we did not pursue in this article is that the cascade structure will amplify the bias introduced by the random 5-to-5-bit functions. Each of these functions is an entropy choke point in itself and the cascade will result in bias that is even larger than what is predicted by Daemen and Rijmen [14] for the ideal case.

Algorithm 1: Pseudocode for the PUF From [13]
    Input: C = {r_1, . . . , r_t}
    Output: F(C) := T_{r_t}
    T ← S
    for (r_i, r_{i+1}) ∈ C do
        for 0 ≤ j ≤ m − 1 do
            if P_{r_i, j} > P_{r_{i+1}, j} then
                T_{r_{i+1}, j} ← T_{r_i, j}
            else
                T_{r_i, j} ← T_{r_{i+1}, j}
            end
        end
    end
    return T_{r_t}

Moreover, we see again that by investing a relatively small amount of effort (the processing of 2^25 + 25 chosen challenges and an exhaustive search over a space of 2^25) an adversary can predict the response to unknown queries with high probability. This is again an undesirable behavior; or alternatively, in case this is an acceptable behavior, it can be achieved using cheaper components.
Note that we did not investigate the properties of the other layers, nor the ones of the component functions, and it is likely that this basic attack can be improved further.

IV. JELOKA ET AL.
The third design we analyze was introduced by Jeloka et al. [13]. We show that the responses of this device preserve input correlations with high probability.

A. DESCRIPTION
Jeloka et al. proposed an SRAM-based PUF with a claimed CRP space that grows exponentially in the number of rows and the challenge length. The device has a secret initial state S ∈ F_2^{n×m} and a secret matrix of powers P ∈ Z_n^{n×m}. Each column of P is viewed as a permutation on the integers {0, 1, . . . , n − 1}. Larger numbers are associated with "more power." A challenge C = {r_1, . . . , r_t} ⊆ {0, 1, . . . , n − 1} is defined as a sequence of rows, where each two consecutive rows (r_i, r_{i+1}) "fight" and the cell with the bigger associated power wins the fight and overwrites its value over that of the other cell. The fight takes place independently for each column. More precisely, in a fight between row i and row j, for each column k, if P_{i,k} > P_{j,k}, then S_{j,k} ← S_{i,k}; otherwise, S_{i,k} ← S_{j,k}. The corresponding response to a challenge C, denoted by F(C), represents the m bits of the last row in the challenge.
Jeloka et al. [13] described the PUF in an abstract way for any dimensions (n, m) and challenge length t (see Algorithm 1). The authors suggest that n = m = 64 and t = 6 are sufficient to attain some unspecified level of security.

B. INTUITION
We show how two challenges that are "similar" will result in the same response with high probability. Concretely, we

show that two challenges (sequences) differing only in their first value will produce the same 64-bit response with probability 2^−0.77. Note that since the response is 64-bit long, one would expect this probability to be 2^−64.

C. ATTACK DESCRIPTION
First, we analyze the state S with a single column. In the case where m = 1, the response F(C) to the challenge C is a 1-bit value F(C) = b.
Observation 3: Let X̄ = (x1, x2, x3, x4) and F(a, X̄) be the result of the fight between the rows a, x1, x2, x3, and x4 (a is the first row of the fight and the output is x4). Let G(X̄) be the result of the fight without considering the first row a. Then

F(a, X̄) ≠ G(X̄) ⇐⇒ F(a, X̄) = S_a and G(X̄) ≠ S_a.

The probability that a changes the response is

Pr[F(a, X̄) ≠ G(X̄)] = (1/2) · (1/5!) = 1/240.

The only case in which the results of the two functions differ is when the response is S_a, i.e., the powers have the form P_{x1} < P_a with P_a > P_{x1} > P_{x2} > P_{x3} > P_{x4} with p = 1/120, and G(X̄) ≠ S_a with p = 1/2.
Observation 3 shows that the first row in the challenge has small influence on the final response. Therefore, two challenges that only differ in the first row have different responses with the following probability:

Pr[F(a, X̄) ≠ F(b, X̄)] = 1/120.

The cases in which F(a, X̄) ≠ F(b, X̄) are as follows.
1) F(a, X̄) = S_a and F(b, X̄) ≠ S_a, with probability 1/240.
2) F(a, X̄) = G(X̄) and b is the strongest: F(b, X̄) ≠ G(X̄), with probability 1/240.
So, the total probability that both queries give the same result in a single column is 1 − 1/120 = 119/120. Since the powers in different columns are independent, the probability of a correct guess for m = 64 columns is (119/120)^64 = 2^−0.77.

D. DISCUSSION
In this case, we see that the PUF does not offer good diffusion properties and that each query allows to predict with high probability the response to multiple other queries. This is an undesirable property, as one can expect from an authentication device to produce an uncorrelated response even for correlated entries.
For brevity, we did not model the case where the two challenges differ in positions other than the first one or when they differ in more than one position. As a general observation, we offer that the probability of collision is higher when the change occurs in earlier positions. This violates Jeloka et al.'s claim that longer sequences result in better security.

V. STRONG PUFS WHEN VIEWED AS SYMMETRIC-KEY ALGORITHMS
The work presented in this article highlights a gap between two communities concerned with the development and implementation of secure cryptographic algorithms. The disparity is reflected in different areas, such as design strategies, security analysis, and even in the way a new algorithm is presented to a larger audience. We discuss some of these differences, hoping to initiate a larger discussion and exchange of ideas between the two communities.

A. ABSTRACTION LEVEL
The design process resulting in a secure device involves several levels of abstraction. In the context of this article, we focus on three of those.
1) Mathematical Level: In this abstraction level, the mathematical properties of abstract classes of functions are investigated. This kind of work involves, for example, methods from probability theory, combinatorics, statistics, algebra (both linear and modern), Fourier analysis, complexity theory, etc., and is normally published in pure mathematics- or mathematically oriented cryptography venues.
2) Algorithmic Level: At this level, the abstract functions are used as conceptual building blocks to form an algorithmic model. In addition to understanding the properties of the building blocks, the designer must also understand how they interact when combined together (for security reasons), and the platform that they will be running on (for efficiency reasons). In addition to the algorithm description itself, the algorithmic model includes a well-defined adversarial model and security claims pertaining to it. This is at the core of what cryptographers do, and a standard practice is to submit such works to cryptographic venues for peer review and third-party evaluation.
3) Implementation Level: Having completed the vetting process of the algorithmic model, the algorithm is implemented first in simulation and later in a tangible form. Even assuming the ideal security of the algorithm, the implementation process itself is susceptible to issues undermining the device's security (e.g., timing attacks and side channels). It is therefore not enough to understand the efficiency metrics and one must also understand the idiosyncrasies of secure implementations.
Strong PUFs aim to achieve goals on the implementation level (e.g., secure key storage). The PUFs we looked into stem from a deep understanding of the execution platform, namely, IC, and are therefore very efficient. However, the mathematical and algorithmic levels have been systematically overlooked. This is why Rührmair et al. and Delvaux were able to use ML attacks in [4] and [7] and why this article can attack more recent works.

B. ADVERSARIAL MODELS
To be able to speak of the security offered by an algorithm, the conversation must first establish a shared understanding of what "security" is. As the notion of security only makes sense with respect to the existence of a bad actor (i.e., an adversary), it makes sense to start the discussion there. An inherent imbalance between defenders and attackers (designers and adversaries, respectively) is that attack methods are plentiful, and it is enough for one of them to succeed for the defender to fail in their role. Since attackers are creative and resourceful, it is futile to attempt to predict what methods they will use. Instead, cryptographers work with adversarial models. An adversarial model presumes only the capabilities of the adversary, but not how the adversary will use these capabilities.
Depending on its capabilities, an adversary can be classified as either passive or active. Whereas a passive adversary is capable of only observing the communication channel, an active adversary is additionally able to delete, add, and alter the data sent over the channel. In the context of PUF design, we identified the following relevant models from [15] and [16].
1) Passive Adversaries:
a) Ciphertext-Only Attack: The adversary can only observe the outputs of the system; thus, the outputs of a secure cryptosystem should provide no information regarding the corresponding inputs or the secret key/randomness. This model is the easiest to carry out in practice, since the only requirement is passively eavesdropping on the communication channel.
b) Known-Plaintext Attack: The adversary is in possession of some inputs and the corresponding outputs generated under the same secret key.
2) Active Adversaries:
a) Chosen-Plaintext Attack: The adversary is capable of obtaining the outputs corresponding to inputs of the adversary's choice.
b) Adaptive Chosen-Plaintext Attack: The adversary may select the inputs depending on the outputs received from previous requests.
Since, in many cases, the input or parts thereof are public (e.g., HTTP headers), known-plaintext attacks are often regarded as practical. Furthermore, in 2-party authentication protocols such as the ones considered for Strong PUFs, both active models also appear reasonably feasible.

C. ALGORITHMIC DESCRIPTION
Symmetric-key algorithms normally define the input, output, and key spaces, and an algorithmic description to produce the corresponding output given the input and the secret key. A Strong PUF can be modeled in the same way by considering the challenge as the input, the PUF's response as the output, and the intrinsic randomness as the key.
A fundamental assumption in cryptography is the reputed Kerckhoffs' principle, which states that "a cryptosystem should be secure even if everything about the system, except the key, is public knowledge." In the algorithmic description, the key is seen as part of the input that is unknown to the adversary.
It follows from Kerckhoffs' principle that the algorithmic description can be provided independently of the key (just like an IC can be manufactured irrespective of the values it will receive through its input wires).
The dissemination of new algorithms is a sort of "conversation" between the designers and their audience. To support this conversation, the designers provide a design rationale motivating their decisions. A reference implementation and/or test vectors are provided to alleviate any ambiguity in the algorithmic description. If additional data was generated or used by the designers in the design process, it is also provided for examination and reproducibility.

D. SECURITY CLAIMS
When the key is modeled as an additional input, it becomes apparent that any algorithm using a finite number of secret bits is vulnerable to brute-force attacks, since it is always possible (but not necessarily feasible) to exhaustively iterate over the key space. From the observation that the key size provides an upper bound on the effort required to attack the algorithm arises an intuitive definition of what constitutes an attack.
Definition 2: A cryptosystem is said to be broken if an adversary can achieve their goals with less effort than would be required by a brute-force attack.
Translating this to the case of Strong PUFs, let C be an arbitrary^3 n-bit challenge, f(C) its 1-bit response, and Q the set of challenges that have already been made and whose responses are known to the adversary. A reasonable security claim would be that if C ∉ Q, no adversary can predict f(C) with probability better than 1/2, even when C is chosen after observing all the responses to the challenges in Q (adaptive chosen challenge); formally

    Pr[f(C) | C, Q] = 1/2.    (4)

Another way to view this security claim is through the notion of advantage: for any n-bit uniformly random challenge C

    ADV = 1/2 + |Q|/2^n.    (5)

3. We note the terms arbitrary and random are not interchangeable in cryptography. A value is said to be random if it is sampled rigorously from a given distribution, usually the uniform one; it is said to be arbitrary when there is no importance to how it was sampled. For example, one's birthday is an arbitrary value, but for an encryption algorithm to be secure, the key must be chosen randomly from the uniform distribution of k-bit vectors.
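The keyed-function view of a Strong PUF from Section C can be made concrete. The sketch below is our own illustration, not code from the article: an idealized Strong PUF is modeled with a public "algorithm" (HMAC-SHA256 truncated to one bit), while the device's intrinsic randomness plays the role of the Kerckhoffs key. A replay adversary predicts perfectly on challenges in Q; on a fresh challenge, the ideal model leaves it with nothing better than a coin flip, which is exactly what the claim in (4) demands:

```python
import hashlib
import hmac
import secrets

class IdealStrongPUF:
    """Toy model of a Strong PUF as a keyed function (illustration only).

    The challenge is the input, the 1-bit response is the output, and the
    device's intrinsic randomness acts as the secret key; the algorithm
    itself is public, in the spirit of Kerckhoffs' principle.
    """

    def __init__(self, intrinsic_randomness: bytes) -> None:
        self._key = intrinsic_randomness

    def response(self, challenge: bytes) -> int:
        digest = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return digest[0] & 1

puf = IdealStrongPUF(secrets.token_bytes(32))

# Q: the challenges already queried, together with their known responses.
Q = {i.to_bytes(2, "big"): puf.response(i.to_bytes(2, "big")) for i in range(64)}

def replay_adversary(challenge: bytes) -> int:
    """Answers from Q when possible; otherwise it can only guess."""
    return Q.get(challenge, 0)

# On every challenge in Q the replay adversary predicts with probability 1.
assert all(replay_adversary(c) == r for c, r in Q.items())
```

The PUFs attacked in this article fail precisely because, unlike this idealized model, correlated challenges produce correlated responses, so an adversary gains more than one response bit per query.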

KRALEVA et al.: CRYPTANALYSIS OF STRONG PUFs

Equation (5) captures the intuitive notion that the only way for an adversary to gain any knowledge is by querying the device on that specific challenge.^4
With an adversarial model, an algorithmic description, and a clear notion of security, the designer can provide security claims. Such a security claim for the PUF presented in Section II can take the following form: "the Strong PUF can resist any chosen-plaintext attack that runs in less than 2^50 time and requires fewer than 2^93 chosen queries." If an attacker generates all the state-equivalence classes, i.e., requiring 2^64 queries, they can guess the response associated with any future challenge with probability 1. Therefore, such an attack, which uses fewer queries and less running time than what is permissible, is interpreted as breaking the algorithm.
A subtle point that we have seen overlooked is that, in addition to being correct, the security claim must also be sensible. For example, since a PUF does not have a way to verify that a challenge has been received from a valid server (rather than from an adversary), it does not make sense to ignore chosen- and adaptive-chosen-challenge attacks. Likewise, our observations in this article do not invalidate the claims about resistance to ML attacks, yet these PUFs are not secure and should not be deployed in field settings.
Finally, we note that, in light of the Strong PUFs we found in the literature, it does not make sense to consider the complexity of a brute-force attack as a function of the secret/random material. As the amount of randomness used is much larger than the CRP space, the adversary can clone the PUF trivially by querying it completely. Thus, (4) and (5) are the more natural choice.

E. CONCLUDING REMARKS ON SECURITY-EFFICIENCY TRADEOFFS
Surveying the recent literature on Strong PUFs, we noticed that the general trend in designing them is to employ a series of random-based operations, such as random Boolean functions, Sboxes, or linear layers. While the need for efficient algorithms is understandable, the security/efficiency tradeoff must be handled carefully, as operations resulting in an insecure mechanism are by definition inefficient. Moreover, in symmetric-key cryptography, two of the most important properties analyzed in a new design are the confusion and the diffusion ensured by the component functions of a cipher. These aspects are covered in a series of books, such as [17], [18], and [19].
In the PUFs we surveyed in this article, we see that the adversarial model is not stated and, instead, heuristics are used to assess the device's security. Among symmetric-key cryptographers, this approach is considered obsolete. Modern techniques for the design of symmetric-key algorithms build on over 50 years of research in this domain to offer well-understood tradeoffs between efficiency and security. In other words, it is unlikely that a casual, nonsystematic approach would yield a secure algorithm (regardless of its efficiency).^5
We refer readers interested in understanding the state of the art in lightweight cryptography to the Wiki maintained by the cryptography group at the University of Luxembourg [20], noting that after more than a decade of research into this area, it is unlikely that the state of the art can be significantly improved without a paradigm shift. Interestingly, we observe that the amount of randomness exploited in PUF designs far exceeds what is common in symmetric-key cryptography. Whereas contemporary symmetric-key primitives have key sizes ranging between 80 and 256 bits, the algorithms we surveyed above use randomness that is measured in the order of thousands of bits. More randomness is usually associated with better security through an increase in the key size. It would be interesting to explore in future work whether a different tradeoff can be obtained by fixing the security level and somehow exploiting the additional randomness to improve efficiency.

VI. CONCLUSION
PUF design shares many common characteristics with the design of symmetric-key cryptographic algorithms. Motivated by the undesirable properties we found in three recently published Strong PUFs, we attempted to provide in this article a tutorial on the approach taken by symmetric-key researchers to ensure the security of their algorithms. We hope that this article will serve as a starting point for discussion between the two communities.

4. Cryptographers sometimes use the word leakage to describe "the knowledge an adversary may gain." However, this term is already loaded with meaning in the electrical engineering community, hence we omit it to avoid confusion.
5. For a version of this message, see Bruce Schneier's blogpost https://www.schneier.com/blog/archives/2015/05/amateurs_produc.html.

REFERENCES
[1] J. W. Lee, D. Lim, B. Gassend, G. E. Suh, M. van Dijk, and S. Devadas, "A technique to build a secret key in integrated circuits for identification and authentication applications," in Symp. VLSI Circuits Dig. Tech. Papers, 2004, pp. 176–179.
[2] G. E. Suh and S. Devadas, "Physical unclonable functions for device authentication and secret key generation," in Proc. 44th ACM/IEEE Design Autom. Conf., 2007, pp. 9–14.
[3] U. Rührmair, "Oblivious transfer based on physical unclonable functions," in Trust and Trustworthy Computing. Heidelberg, Germany: Springer, 2010, pp. 430–440.
[4] U. Rührmair, F. Sehnke, J. Sölter, G. Dror, S. Devadas, and J. Schmidhuber, "Modeling attacks on physical unclonable functions," in Proc. 17th ACM Conf. Comput. Commun. Security, 2010, pp. 237–249. [Online]. Available: https://doi.org/10.1145/1866307.1866335
[5] J. Delvaux, D. Gu, D. Schellekens, and I. Verbauwhede, "Secure lightweight entity authentication with strong PUFs: Mission impossible?" in Proc. 16th Int. Workshop Cryptograph. Hardw. Embedded Syst., 2014, pp. 451–475. [Online]. Available: https://doi.org/10.1007/978-3-662-44709-3_25
[6] J. Delvaux, R. Peeters, D. Gu, and I. Verbauwhede, "A survey on lightweight entity authentication with strong PUFs," ACM Comput. Surveys, vol. 48, no. 2, p. 26, 2015. [Online]. Available: https://doi.org/10.1145/2818186
[7] J. Delvaux, "Machine-learning attacks on PolyPUFs, OB-PUFs, RPUFs, LHS-PUFs, and PUF–FSMs," IEEE Trans. Inf. Forensics Security, vol. 14, pp. 2043–2058, 2019.
[8] A. Vijayakumar, V. C. Patil, C. B. Prado, and S. Kundu, "Machine learning resistant strong PUF: Possible or a pipe dream?" in Proc. IEEE Int. Symp. Hardw. Oriented Security Trust (HOST), 2016, pp. 19–24.

[9] T. Machida, D. Yamamoto, M. Iwamoto, and K. Sakiyama, "A new arbiter PUF for enhancing unpredictability on FPGA," Sci. World J., vol. 2015, Sep. 2015, Art. no. 864812.
[10] N. Wisiol, "Towards attack resilient arbiter PUF-based strong PUFs," Cryptol. ePrint Archive, IACR, Lyon, France, Rep. 2021/1004, 2021. [Online]. Available: https://eprint.iacr.org/2021/1004
[11] V. B. Suresh, R. Kumar, M. Anders, H. Kaul, V. De, and S. Mathew, "A 0.2% BER, 10^28 challenge-response machine-learning resistant strong-PUF in 14nm CMOS featuring stability-aware adversarial challenge selection," in Proc. IEEE Symp. VLSI Circuits, 2020, pp. 1–2. [Online]. Available: https://doi.org/10.1109/VLSICircuits18222.2020.9162890
[12] K. Liu et al., "36.3 A modeling attack resilient strong PUF with feedback-SPN structure having <0.73% bit error rate through in-cell hot-carrier injection burn-in," in Proc. IEEE Int. Solid-State Circuits Conf. (ISSCC), vol. 64, 2021, pp. 502–504.
[13] S. Jeloka, K. Yang, M. Orshansky, D. Sylvester, and D. Blaauw, "A sequence dependent challenge-response PUF using 28nm SRAM 6T bit cell," in Proc. Symp. VLSI Circuits, 2017, pp. C270–C271.
[14] J. Daemen and V. Rijmen, "Probability distributions of correlation and differentials in block ciphers," J. Math. Cryptol., vol. 1, no. 3, pp. 221–242, 2007. [Online]. Available: https://doi.org/10.1515/JMC.2007.011
[15] A. Menezes, P. C. van Oorschot, and S. A. Vanstone, Handbook of Applied Cryptography. Boca Raton, FL, USA: CRC Press, 1996. [Online]. Available: http://cacr.uwaterloo.ca/hac/
[16] J. Katz and Y. Lindell, Introduction to Modern Cryptography. Boca Raton, FL, USA: Chapman Hall/CRC Press, 2007.
[17] J. Daemen and V. Rijmen, The Design of Rijndael—The Advanced Encryption Standard (AES) (Information Security and Cryptography), 2nd ed. Heidelberg, Germany: Springer, 2020. [Online]. Available: https://doi.org/10.1007/978-3-662-60769-5
[18] J. Katz and Y. Lindell, Introduction to Modern Cryptography, 2nd ed. Boca Raton, FL, USA: CRC Press, 2014. [Online]. Available: https://www.crcpress.com/Introduction-to-Modern-Cryptography-Second-Edition/Katz-Lindell/p/book/9781466570269
[19] L. Knudsen and M. Robshaw, The Block Cipher Companion. Heidelberg, Germany: Springer, Jan. 2011.
[20] Univ. Luxembourg, Luxembourg City, Luxembourg. Lightweight Block Ciphers. (2017). [Online]. Available: https://www.cryptolux.org/index.php/Lightweight_Block_Ciphers

LILIYA KRALEVA received the bachelor's degree in applied mathematics and the master's degree in discrete algebraic structure from Sofia University "St. Kliment Ohridski," Sofia, Bulgaria, in 2015 and 2017, respectively, and the Ph.D. degree from KU Leuven, Leuven, Belgium, in 2022 with an FWO grant.
During her studies, she completed a module for the Teacher of Mathematics and studied one semester with the Linnaeus University, Vaxjo, Sweden, through the ERASMUS Exchange Program. The topic of her research was "Cryptanalysis techniques for lightweight symmetric-key primitives."

MOHAMMAD MAHZOUN received the bachelor's degree in computer science from the University of Tehran, Tehran, Iran, in 2018, and the master's degree in Master Parisien de Recherche en Informatique from the Université Paris Cité, Paris, France, in 2020. He is currently pursuing the Ph.D. degree with the Eindhoven University of Technology, Eindhoven, The Netherlands, supervised by Tomer Ashur.
He finished his master's thesis on the "Design and analysis of multi-input functional encryption schemes" with Michel Abdalla and David Pointcheval. In addition to research in cryptography, he worked as a DevOps Director with TomanPay, Tehran, and a Site Reliability Engineer with Cafe Bazaar, Tehran, while studying for his bachelor's degree. His research focuses on the design and cryptanalysis of algebraic ciphers.

RALUCA POSTEUCA received the bachelor's and master's degrees from the University of Bucharest, Bucureşti, Romania, in 2011 and 2013, respectively. She is currently pursuing the Ph.D. degree with COSIC, KU Leuven, Leuven, Belgium, under the supervision of Vincent Rijmen and Tomer Ashur.
Her work focuses on the design and analysis of symmetric-key primitives, with emphasis on lightweight primitives.

DILARA TOPRAKHISAR received the bachelor's degree from Sabanci University, Istanbul, Turkey, in 2019, and the master's degree in behavior of algebraic ciphers in fully homomorphic encryption from the Eindhoven University of Technology, Eindhoven, The Netherlands, in 2021. She is currently pursuing the Ph.D. degree with COSIC, KU Leuven, Leuven, Belgium.
Her main research interest is symmetric-key cryptography: countermeasures against side-channel analysis and fault attacks, and algebraic ciphers.

TOMER ASHUR received the Ph.D. degree from KU Leuven, Leuven, Belgium, in 2017. He completed a dissertation on Cryptanalysis of Symmetric-Key Primitives. He is the Director of Cryptomeria Research. He was previously an Assistant Professor with the Eindhoven University of Technology, Eindhoven, the Netherlands, and an FWO fellow with KU Leuven. Prior to this, he was a grad student and a Teaching Assistant with the University of Haifa, Haifa, Israel; the CIO of Mediton Healthcare Services, Dubai, UAE; a Project Manager with Katz Delivering Services, New York, NY, USA; the Head of Support with Safend Inc., New York; and a Communication Officer (OF-2) with the Israel Defense Forces, Jerusalem, Israel. He does not own a piano.

INGRID VERBAUWHEDE (Fellow, IEEE) is a Professor with the Research Group COSIC, KU Leuven, where she leads the Secure Embedded Systems and Hardware Group. She is a pioneer in the field of efficient and secure implementations of cryptographic algorithms on many different platforms: ASIC, FPGA, embedded, and cloud. With her research she bridges the gaps between electronics, the mathematics of cryptography, and the security of trusted computing. Her group owns and operates an advanced electronic security evaluation lab. Her list of publications and patents is available at www.esat.kuleuven.be/cosic/publications.
Dr. Verbauwhede received the IEEE 2017 Computer Society Technical Achievement Award and the IEEE 2023 Don Pederson Solid-State Circuits Award. She is a recipient of two ERC Advanced Grants, in 2016 and 2021. She is a member of the Royal Academy of Belgium. She is a fellow of IACR.

