Controlled Physical Random Functions

Blaise Gassend, Dwaine Clarke, Marten van Dijk, and Srinivas Devadas
Massachusetts Institute of Technology
Laboratory for Computer Science
Cambridge, MA 02139, USA
{gassend,declarke,marten,devadas}@mit.edu
...lems, and briefly describe a software licensing application.

2. Definitions

Definition 1 A Physical Random Function (PUF)2 is a function that maps challenges to responses, that is embodied by a physical device, and that verifies the following properties:

1. Easy to evaluate: The physical device is easily capable of evaluating the function in a short amount of time.

2. Hard to characterize: From a polynomial number of plausible physical measurements (in particular, determination of chosen challenge-response pairs), an attacker who no longer has the device, and who can only use a polynomial amount of resources (time, matter, etc.), can only extract a negligible amount of information about the response to a randomly chosen challenge.

In the above definition, the terms short and polynomial are relative to the size of the device, which is the security parameter. In particular, short means linear or low degree polynomial. The term plausible is relative to the current state of the art in measurement techniques and is likely to change as improved methods are devised.

In previous literature [Rav01] PUFs were referred to as Physical One Way Functions, and realized using 3-dimensional micro-structures and coherent radiation. We believe this terminology to be confusing because PUFs do not match the standard meaning of one way functions [MvOV96].

Definition 2 A PUF is said to be Controlled if it can only be accessed via an algorithm that is physically linked to the PUF in an inseparable way (i.e., any attempt to circumvent the algorithm will lead to the destruction of the PUF). In particular this algorithm can restrict the challenges that are presented to the PUF and can limit the information about responses that is given to the outside world.

The definition of control is quite strong. In practice, linking the PUF to the algorithm in an inseparable way is far from trivial. However, we believe that it is much easier to do than to link a conventional secret key to an algorithm in an inseparable way, which is what current smartcards attempt. Control turns out to be the fundamental idea that allows PUFs to go beyond simple authenticated identification applications. How this is done is the main focus of this paper.

Definition 3 A type of PUF is said to be Manufacturer Resistant if it is technically impossible to produce two identical PUFs of this type given only a polynomial amount of resources.

Manufacturer resistant PUFs are the most interesting form of PUF as they can be used to make unclonable systems.

2 PUF actually stands for Physical Unclonable Function. It has the advantage of being easier to pronounce, and it avoids confusion with Pseudo-Random Functions.

3. Implementing a Controlled Physical Random Function

In this section, we describe ways in which PUFs and CPUFs could be implemented. In each case, a silicon IC enforces the control on the PUF.

3.1. Digital PUF

It is possible to produce a PUF with classical cryptographic primitives provided a key can be kept secret. If an IC is equipped with a secret key k and a pseudo-random hash function h, and tamper resistant technology is used to make k impossible to extract from the IC, then the function

x -> h(k, x)

is a PUF. If control logic is embedded on the tamper resistant IC along with the PUF, then we have effectively created a CPUF.

However, this kind of CPUF is not very satisfactory. First, it requires high quality tamper-proofing. There are systems available to provide such tamper-resistance. For example, IBM's PCI Cryptographic Coprocessor encapsulates a 486-class processing subsystem within a tamper-sensing and tamper-responding environment where one can run security-sensitive processes [SW99]. Smart cards also incorporate barriers to protect the hidden key(s), many of which have been broken [And01]. In general, however, effective tamper resistant packages are expensive and bulky.

Secondly, the digital PUF is not manufacturer resistant. The PUF manufacturer is free to produce multiple ICs with the same secret key, and someone who manages to violate the IC's tamper-resistant packaging and extract the secret key can easily produce a clone of the PUF.

Because of these two weaknesses, a digital PUF does not offer any security advantage over conventional cryptographic primitives, and it is therefore better to use a conventional crypto-system.
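To make this concrete, here is a minimal sketch (ours, not the paper's) of such a digital PUF in Python, with HMAC-SHA256 standing in for the pseudo-random hash function h:

import hmac
import hashlib
import os

class DigitalPUF:
    """Keyed-hash PUF: x -> h(k, x). Illustrative only."""
    def __init__(self) -> None:
        # k would be generated and sealed inside the IC at manufacture;
        # tamper-resistant packaging is the only thing keeping it secret.
        self._k = os.urandom(32)

    def evaluate(self, challenge: bytes) -> bytes:
        return hmac.new(self._k, challenge, hashlib.sha256).digest()

puf = DigitalPUF()
response = puf.evaluate(b"some challenge")

Anyone who extracts _k from the package can clone this PUF, which is exactly the weakness discussed above.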
3.2. Silicon PUF

3.2.1. Statistical Variation of Delay

By exploiting statistical variations in the delays of devices (gates and wires) within the IC, we can create a manufacturer resistant PUF [GCvDD02]. Manufactured ICs, from either the same lot or wafer, have inherent delay variations. There are random variations in dies across a wafer, and from wafer to wafer due to, for instance, process temperature and pressure variations during the various manufacturing steps. The magnitude of delay variation due to this random component can be 5% or more.

On-chip measurement of delays can be carried out with very high accuracy, and therefore the signal-to-noise ratio when delays of corresponding wires across two or more ICs are compared is quite high. The delays of the set of devices in a circuit are unique across multiple ICs implementing the same circuit with very high probability, if the set of devices is large [GCvDD02]. These delays correspond to an implicit hidden key, as opposed to the explicitly hidden key in a digital PUF. While environmental variations can cause changes in the delays of devices, relative measurement of delays, essentially using delay ratios, provides robustness against environmental variations such as varying ambient temperature and power supply variations.

3.2.2. Challenge-Response Pairs

Given a PUF, challenge-response pairs can be generated, where the challenge can be a digital input stimulus, and the response depends on the transient behavior of the PUF. For instance, we can combine a number of challenge-dependent delay measures into a digital response. The number of potential challenges grows exponentially with the number of inputs to the IC. Therefore, while two ICs may have a high probability of having the same response to a particular challenge, if we apply enough challenges, we can distinguish between the two ICs.

3.2.3. Attacks on Silicon PUFs

There are many possible attacks on manufacturer resistant PUFs: duplication, model building using direct measurement, and model building using adaptively-chosen challenge generation. We briefly discuss these and show that significant barriers exist for each of these attacks. A more detailed description can be found in [GCvDD02].

The adversary can attempt to duplicate a PUF by fabricating a counterfeit IC containing the PUF. However, due to statistical variation, unless the PUF is very simple, the adversary will have to fabricate a huge number of ICs and precisely characterize each one in order to create and discover a counterfeit.

Assume that the adversary has unrestricted access to the IC containing the PUF. The adversary can attempt to create a model of the IC by measuring or otherwise determining very precisely the delays of each device and wire within the IC. Direct measurement of device delays requires the adversary to open the package of the IC and remove several layers, such as field oxide and metal. One can also create a package which has a significant effect on the delays of each device within the IC; the removal of the package will immediately destroy the PUF, since the delays will change appreciably.

The adversary could try to build a model of the PUF by measuring the response of the PUF to a polynomial number of adaptively-chosen challenges.3 We believe this to be the most plausible form of attack. However, there is a significant barrier to this form of attack as well, because creating timing models of a circuit accurate to within measurement error is a very difficult problem that has received a lot of attention from the simulation community. Manageable-sized timing models can be produced which are within 10% of the real delays, but not within the measurement accuracy of 0.1%.

3 Clearly, a model can be built by exhaustively enumerating all possible challenges, but this is intractable.

In addition to attacking the PUF directly, the adversary can attempt to violate a CPUF's control. This includes trying to get direct access to the PUF, or trying to violate the control algorithm (which includes the private and authenticated execution environment that we will be discussing in Section 5). The best way we have found to prevent this attack is for the algorithm (i.e., the digital part of the IC) to be embedded within the physical system that defines the PUF. In the Silicon PUF case, this can be accomplished by overlaying PUF delay wires over any digital circuitry that needs to be protected. Damaging any one of those wires would change the PUF, rendering the adversary's attack useless. This strategy obviates the need for active intrusion sensors that are present in conventional secure devices to destroy key material in the event that an invasive attack occurs. For non-invasive attacks, such as irradiating the IC or making it undergo voltage spikes and clock glitches, conventional prevention methods must be used.

3.3. Improving a PUF Using Control

Using control, it is possible to make a silicon PUF more robust and reliable. Figure 1 summarizes the control that can be placed around the PUF to improve it. The full details of these improvements can be found in [GCvDD02].

A random hash function placed before the PUF prevents the adversary from performing a chosen challenge attack on the PUF. This prevents a model-building adversary from selecting challenges that allow him to extract parameters more easily. An Error Correcting Code (ECC) can be used to take noisy physical measurements and turn them into consistent responses. Finally, an output random hash function decorrelates the response from actual physical measurements, thus making a model-building adversary's task even harder.
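To make the data flow of Figure 1 concrete, the following Python sketch (our illustration; the paper does not specify these components) uses SHA-256 for the two random hashes, a deliberately noisy stand-in for the silicon PUF, and a simple majority vote in place of the ECC:

import hashlib
import random

def h(*parts: bytes) -> bytes:
    # Random hash function stand-in (SHA-256 over length-prefixed parts).
    m = hashlib.sha256()
    for p in parts:
        m.update(len(p).to_bytes(4, "big") + p)
    return m.digest()

def noisy_silicon_puf(challenge: bytes) -> list:
    # Stand-in for the silicon PUF: a fixed, device-specific bit pattern
    # per challenge, with a 2% chance of flipping each measured bit.
    device = random.Random(challenge)
    bits = [device.randrange(2) for _ in range(64)]
    return [b ^ (random.random() < 0.02) for b in bits]

def improved_puf(challenge: bytes, personality: bytes) -> bytes:
    pre = h(personality, challenge)              # input random hash
    # "ECC" stand-in: majority vote over repeated noisy measurements;
    # the real scheme uses redundancy information computed when the
    # CRP is created.
    votes = [noisy_silicon_puf(pre) for _ in range(5)]
    stable = bytes(1 if sum(col) >= 3 else 0 for col in zip(*votes))
    return h(stable)                             # output random hash

print(improved_puf(b"challenge", b"personality-0").hex())

The input hash blocks chosen-challenge attacks, the vote smooths out measurement noise, and the output hash hides the raw measurements from a model-building adversary.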
[Figure 1. The improved PUF: the challenge, together with an ID and a personality, enters an input random hash; the PUF output passes through an ECC (using redundancy information) and an output random hash to produce the response.]

4. Challenge Response Pair Management

[Figure: the user communicates with the CPUF chip over an untrusted communication channel.]

The interface between the chip and the untrusted communication channel is a PUF. Given a challenge, a PUF can compute a corresponding response.

The model involves the following principals:

user: the user is in possession of her own private list of CRPs originally generated by the PUF. The list is private because only the user and the PUF know the responses to each of the challenges in the list. We assume that the user's challenges can be public, and that the user has established several CRPs with the PUF.

owner: the owner is the principal that controls access to the CPUF. The owner has its own private list of CRPs. The owner can be considered to be the principal that bought the CPUF chip from the manufacturer.

certifier: the certifier has its own private list of CRPs for the CPUF, and is trusted by the user. The manufacturer of the CPUF chip can act as a certifier to other users. After the user has established its own private list of CRPs, it may act as a certifier to another user, if the
second user trusts the first user. For example, if the user trusts the owner of the chip, the owner of the chip can also act as a certifier.

We have five scenarios:

bootstrapping: the manufacturer of the CPUF gets the initial CRP from the CPUF.

introduction: a user, who does not have any CRPs for the CPUF, securely obtains a CRP from a certifier.

private renewal: after obtaining a CRP from a certifier, the user can use this CRP to generate his own private list of CRPs.

renewal: after generating his own private list of CRPs, the user can use one of these to generate more private CRPs.

anonymous introduction: in anonymous introduction, a user, who does not have any CRPs for the CPUF, securely obtains a certified, anonymous CRP for the CPUF. The user is given a CRP that is certified by the certifier. However, in anonymous introduction, the owner of the CPUF does not want to reveal to the user which CPUF the user is being given a CRP to. Thus, at the end of the protocol, the user knows that he has been given a CRP that is certified by the certifier, and can use this CRP to generate other CRPs with the CPUF and run applications using the CPUF. However, if the user colludes with the certifier, or other users with certified, anonymous CRPs to the CPUF, he will not be able to use the CRPs to determine that he is communicating with the same CPUF as them.

4.2.1. Bootstrapping

Figure 3 illustrates the model for bootstrapping. When a CPUF has just been produced, the manufacturer generates a CRP for it. We assume that, when the manufacturer generates this CRP, it is in physical contact with the chip, and thus, the communication channel is private and authentic. None of the other protocols make this assumption.

[Figure 3. Model for Bootstrapping: the manufacturer communicates directly with the CPUF chip.]

4.2.2. Introduction

Figure 4 illustrates the model for CPUF introduction. In introduction, the certifier gives a CRP for the CPUF to the user over a channel that is authentic and private.

As the certifier knows the CRP the user is given, the certifier can read all of the messages the user exchanges with the CPUF using this CRP. The user thus needs to use the private renewal protocol to generate his own private list of CRPs.

Furthermore, since, in this scheme, the CPUF honors messages that are MAC'ed with a key generated from the response of the CRP the certifier has given to the user, the user and the certifier can collude to determine that they are communicating with the same CPUF. They, and other users who use the same certifier, may then be able to use this information to track and monitor the CPUF's transactions. The CPUF's owner can introduce the CPUF to the user using the anonymous introduction protocol to deal with this problem.

[Figure 4. Model for Introduction: the certifier communicates with the user.]

4.2.3. Private Renewal

Figure 5 illustrates the model for private renewal. The user is assumed to already have a certified CRP. However, he wants to generate a private list of CRPs. In this model, the communication channel between the user and the CPUF is untrusted.

[Figure 5. Model for Private Renewal: the user communicates with the CPUF chip over an untrusted communication channel.]

4.2.4. Renewal

The model for renewal is the same as that for private renewal. The user is assumed to have already generated a private list of CRPs, and would like to generate more private CRPs with the CPUF. He may need more CRPs for his applications, say.

4.2.5. Anonymous Introduction

Figure 6 illustrates the model for anonymous introduction. Again, the user is the principal that does not have CRPs for the CPUF yet, and would like to establish its own private list of CRPs. The communication channels between the certifier, owner and user are secure (private and authentic).
The communication channels between each of these principals and the CPUF are untrusted. In our version of the protocol, the certifier and owner communicate with each other, the owner and user communicate with each other, and the owner communicates with the CPUF. The certifier and user can potentially collude to determine if their CRPs are for the same CPUF.

[Figure 6. Model for Anonymous Introduction: the certifier, owner, and user communicate over secure channels; the owner communicates with the CPUF chip over an untrusted communication channel.]

5. Protocols

We will now describe the protocols that are necessary in order to use PUFs. These protocols must be designed to make it impossible to get the response to a chosen challenge. Indeed, if that were possible, then we would be vulnerable to a man-in-the-middle attack that breaks nearly all applications. The strategy that we describe is designed to be deterministic and state-free to make it as widely applicable as possible. Slightly simpler protocols are possible if these constraints are relaxed.

5.1. Man-in-the-Middle Attack

Before looking at the protocols, let us have a closer look at the man-in-the-middle attack that we must defend against. The ability to prevent this man-in-the-middle attack is the fundamental difference between controlled and uncontrolled PUFs.

The scenario is the following. Alice wants to use a challenge-response pair (CRP) that she has to interact with a CPUF in a controlled way (we are assuming that the CRP is the only shared secret between Alice and the CPUF). Oscar, the adversary, has access to the PUF, and has a method that allows him to extract from it the response to a challenge of his choosing. He wants to impersonate the CPUF that Alice wants to interact with.

At some point in her interaction with the CPUF, Alice will have to give the CPUF the challenge for her CRP so that the CPUF can calculate the response that it is to share with her. Oscar can read this challenge because up to this point in the protocol Alice and the CPUF do not share any secret. Oscar can now get the response to Alice's challenge from the CPUF, since he has a method of doing so. Once Oscar has the response, he can impersonate the CPUF because he knows everything Alice knows about the PUF. This is not at all what Alice intended.

We should take note that in the above scenario, there is one thing that Oscar has proven to Alice: he has proven that he has access to the CPUF. In some applications, such as the key cards from [Rav01], proving that someone has access to the CPUF is probably good enough. However, for more powerful examples such as certified execution, which we will cover in section 6.2, where we are trying to protect Alice from the very owner of the CPUF, free access to the PUF is no longer sufficient.

More subtle forms of the man-in-the-middle attack exist. Suppose that Alice wants to use the CPUF to do what we will refer to in section 6.2 as certified execution. Essentially, Alice is sending the CPUF a program to execute. This program executes on the CPUF, and uses the shared secret that the CPUF calculates to interact with Alice in a secure way. Here, Oscar can replace Alice's program by a program of his own choosing, and get his program to execute on the CPUF. Oscar's program then uses the shared secret to produce messages that look like the messages that Alice is expecting, but that are in fact forgeries.

5.2. Defeating the Man-in-the-Middle Attack

5.2.1. Basic CPUF Access Primitives

In the rest of this section, we will assume that the CPUF is able to execute some form of program in a private (nobody can see what the program is doing) and authentic (nobody can modify what the program is doing) way.4 In some CPUF implementations where we do not need the ability to execute arbitrary algorithms, the program's actions might in fact be implemented in hardware or by some other means; the exact implementation details make no difference to the following discussion.

4 In fact the privacy requirement can be substantially reduced. Only the key material that is being manipulated needs to remain hidden.

In this paper we will write programs in pseudo-code in which a few basic functions are used:

Output(arg1, ...) is used to send results out of the CPUF. Anything that is sent out of the CPUF is potentially visible to the whole world, except during bootstrapping, where the manufacturer is in physical possession of the CPUF.

EncryptAndMAC(message, key) is used to encrypt and MAC message with key.

PublicEncrypt(message, key) is used to encrypt message with key, the public key.
MAC(message, key) MACs message with key.

The CPUF's control is designed so that the PUF can only be accessed by programs, and only by using two primitive functions: GetResponse and GetSecret. If f is the PUF and h is a publicly available random hash function (in practice, some pseudo-random function), then the primitives are defined as:

GetResponse(PreChallenge) = f(h(h(Program), PreChallenge))
GetSecret(Challenge) = h(h(Program), f(Challenge))

In these primitives, Program is the program that is being run in an authentic way. Just before starting the program, the CPUF calculates h(Program), and later uses this value when GetResponse and GetSecret are invoked. We shall show in the next section that these two primitives are sufficient to implement the CRP management scenarios that were detailed in section 4. We shall also see that GetResponse is essentially used for CRP generation, while GetSecret is used by applications that want to produce a shared secret from a CRP.
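These primitives are straightforward to model in software. In the Python sketch below (our illustration; in a real CPUF, f is the silicon PUF and this computation happens on-chip), program_hash plays the role of h(Program), fixed by the CPUF before execution begins:

import hmac
import hashlib

def h(*parts: bytes) -> bytes:
    # Random hash function stand-in (SHA-256 over length-prefixed parts).
    m = hashlib.sha256()
    for p in parts:
        m.update(len(p).to_bytes(4, "big") + p)
    return m.digest()

class CPUF:
    def __init__(self, device_secret: bytes) -> None:
        self._device_secret = device_secret      # models the physical PUF

    def _f(self, challenge: bytes) -> bytes:
        # The PUF itself; in silicon this would be a delay measurement.
        return hmac.new(self._device_secret, challenge,
                        hashlib.sha256).digest()

    def get_response(self, program_hash: bytes, pre_challenge: bytes) -> bytes:
        # GetResponse(PreChallenge) = f(h(h(Program), PreChallenge))
        return self._f(h(program_hash, pre_challenge))

    def get_secret(self, program_hash: bytes, challenge: bytes) -> bytes:
        # GetSecret(Challenge) = h(h(Program), f(Challenge))
        return h(program_hash, self._f(challenge))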
Figure 7 summarizes the possible ways of going between pre-challenges, challenges, responses and shared secrets. In this diagram, moving down is easy: you just have to calculate a few hashes. Moving up is hard, because it would involve reversing those hashes, which happen to be one-way hashes. Going from left to right is easy for the program whose hash is used in the GetResponse or GetSecret primitives, and hard for all other programs. Going from right to left is hard if we assume that the PUF can't invert a one-way function. We will not use this fact, as the adversary's task wouldn't be any easier if it were easy.

[Figure 7. This diagram shows the different ways of moving between Pre-Challenges, Challenges, Responses and Shared-Secrets. The dotted arrow indicates what the PUF does, but since the PUF is controlled, nobody can go along the arrow directly. GRP and GSP are the programs that call GetResponse and GetSecret respectively. The challenge and the response depend on the GRP that created them, and the shared secret depends on the GSP.]

5.2.2. Using a CRP to Get a Shared Secret

To show that the man-in-the-middle attack has been defeated, we shall show that a user who has a CRP can use it to establish a shared secret with the PUF (previously, the man-in-the-middle could determine the value of what should have been a shared secret).

The user sends a program like the one below to the CPUF, where Challenge is the challenge from the CRP that the user already knows.

begin program
Secret = GetSecret(Challenge);
/* Program that uses Secret as *
 * a shared secret with the user */
end program

Note that h(program) includes everything that is contained between begin program and end program. That includes the actual value of Challenge. The same code with a different value for Challenge would have a different program hash.

The user can determine Secret because he knows the response to Challenge, and so he can calculate h(h(program), response). Now we must show that a man-in-the-middle cannot determine Secret.

By looking at the program that is being sent to the CPUF, the adversary can determine the challenge from the CRP that is being used. This is the only starting point he has to try to find the shared secret. Unfortunately for him, the adversary cannot get anything useful from the challenge. Because the challenge is deduced from the pre-challenge via a random function, the adversary cannot get the pre-challenge directly. Getting the Response directly is impossible, because the only way to get a response out of the CPUF is to start with a pre-challenge. Therefore, the adversary must get the shared secret directly from the challenge.

However, only a program that hashes to the same value as the user's program can get from the challenge to the secret directly by using GetSecret (any other program would get a different secret that can't be used to find out the response or the sought-after secret, because it is the output of a random function). Since the hash function that we are using is collision resistant, the only program that the attacker can use to get the shared secret is the user's program. If the user program is written in such a way that it does not leak the secret to the adversary, then the man-in-the-middle attack fails. Of course, it is perfectly possible that the user's program could leak the shared secret if it is badly written. But this is a problem with any secure program, and is not specific to PUFs. Our goal isn't to prevent a program from giving away its secret, but to make it possible for a well written program to produce a shared secret.
5.3. Challenge Response Pair Management Protocols

Now we shall see how GetResponse and GetSecret can be used to implement the key management primitives that were described in section 4.5 It is worth noting that the CPUF need not preserve any state between program executions.

5 The implementations that are presented contain the minimum amount ...
5.3.1. Bootstrapping

The manufacturer makes the CPUF run the following program, where PreChall is set to some arbitrary value.

begin program
Response = GetResponse(PreChall);
Output(Response);
end program

The user gets the challenge for his newly created CRP by calculating h(h(program), PreChall); the response is the output of the program.

5.3.2. Renewal

The user sends the CPUF a program that generates a new response with GetResponse, and outputs it encrypted and MAC'ed under a secret derived, with GetSecret, from a CRP that the user already knows. Only the holder of the old CRP can decrypt the output, and the MAC proves that the NewResponse that the user is getting originated from the CPUF. The user gets the challenge for his newly created CRP by calculating h(h(program), PreChall).

5.3.3. Introduction

Introduction is particularly easy. The certifier simply sends a CRP to the user over some agreed upon secure channel. In many cases, the certifier will use renewal to generate a new CRP, and then send that to the user. The user will then use private renewal to produce a CRP that the certifier does not know.
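The renewal flow of section 5.3.2 can be traced on the Python model from section 5.2 (a sketch under our assumptions; the EncryptAndMAC stand-in here is a toy, not secure):

import hmac
import hashlib

def encrypt_and_mac(message: bytes, key: bytes) -> bytes:
    # Toy stand-in for EncryptAndMAC: XOR keystream plus HMAC.
    stream = hashlib.sha256(key).digest() * (len(message) // 32 + 1)
    ciphertext = bytes(m ^ s for m, s in zip(message, stream))
    return ciphertext + hmac.new(key, ciphertext, hashlib.sha256).digest()

# The renewal program corresponds to:
#   begin program
#     NewResponse = GetResponse(PreChall);
#     Output(EncryptAndMAC(NewResponse, GetSecret(OldChall)));
#   end program
renewal_hash = h(b"renewal program text, embedding PreChall and OldChall")

new_response = cpuf.get_response(renewal_hash, b"PreChall")
secret = cpuf.get_secret(renewal_hash, challenge)    # old CRP's challenge
output = encrypt_and_mac(new_response, secret)
# The user recomputes secret = h(renewal_hash, response), checks the MAC,
# decrypts NewResponse, and stores NewChallenge = h(renewal_hash, b"PreChall").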
3. The certifier decrypts the program, checks that it is the official anonymous introduction program, then hashes it to calculate CertSecret. He can then verify that Mesg4 is authentic with the MAC. He finally signs Mesg4, and sends the result to the owner.

4. The owner unblinds the message, and ends up with a signed version of Mesg3. He can check the signature, and the MAC in Mesg3, to make sure that the certifier isn't communicating his identity to the user. He finally sends the unblinded message to the user. This message is in fact a version of Mesg3 signed by the certifier.

5. The user checks the signature, and decrypts Mesg2 with his secret key to get a CRP.

...
Mesg1 = (NewChallenge, NewResponse);
Mesg2 = PublicEncrypt(Mesg1, UserPubKey);
Mesg3 = (Mesg2, MAC(Mesg2, OwnerSecret));
Mesg4 = Blind(Mesg3, OwnerSecret);
Mesg5 = (Mesg4, MAC(Mesg4, CertSecret));
Mesg6 = EncryptAndMAC(Mesg5, OwnerSecret);
Output(Mesg6);
end program

Figure 8. The anonymous introduction program.

Remarks:

UserPubKey and CertChallenge must be encrypted, otherwise it is possible to correlate the message that Alice sends to the CPUF with the certifier's challenge or with the user's public key.

Seed must be encrypted to prevent the certifier or the user from knowing how to voluntarily get into the personality that the user is being shown.

PreChallengeSeed must be encrypted to prevent the certifier from finding out the newly created challenge when he inspects the program in step 3.

The encryption between Mesg5 and Mesg6 is needed to prevent correlation of the message from the CPUF to the owner and the message from the owner to the certifier.

6 In this protocol, to avoid over-complication, we have assumed that Alice does not need to know Bob's public key in order to sign a message. For real-world protocols such as the one that David Chaum describes in [Cha85] this is not true. Therefore, an actual implementation of our anonymous introduction protocol might have to include the certifier's public key in the program that is sent to the CPUF. In that case, it should be encrypted to prevent correlation of messages going to the CPUF with a specific transaction with the certifier.

Interestingly, we are not limited to one layer of encapsulation. A principal who has gained access to a personality of a CPUF through anonymous introduction can introduce other parties to this PUF. In particular, he can send the signed CRP that he received back to the certifier and get the certifier to act as a certifier for his personality when he anonymously introduces the CPUF to other parties.
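The nesting of the six messages of Figure 8 can be seen in the following Python sketch (symbolic stand-ins only, recording structure rather than performing cryptography; Blind marks where a real blinding operation, such as Chaum-style RSA blinding [Cha85], would go):

# Each helper just wraps its arguments so the layering is visible.
def public_encrypt(message, key):  return ("PubEnc", key, message)
def mac(message, key):             return ("MAC", key, message)
def blind(message, key):           return ("Blind", key, message)
def encrypt_and_mac(message, key): return ("EncMAC", key, message)

new_challenge, new_response = b"NewChallenge", b"NewResponse"
user_pub_key, owner_secret, cert_secret = b"UPK", b"OSec", b"CSec"

mesg1 = (new_challenge, new_response)
mesg2 = public_encrypt(mesg1, user_pub_key)    # only the user can read the CRP
mesg3 = (mesg2, mac(mesg2, owner_secret))      # lets the owner check in step 4
mesg4 = blind(mesg3, owner_secret)             # certifier signs without seeing mesg3
mesg5 = (mesg4, mac(mesg4, cert_secret))       # proves to the certifier (step 3)
                                               # that it came from this CPUF
mesg6 = encrypt_and_mac(mesg5, owner_secret)   # prevents correlating CPUF-to-owner
                                               # and owner-to-certifier traffic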
6. Applications

We believe there are many applications for which CPUFs can be used, and we describe a few here. Other applications can be imagined by studying the literature on secure coprocessors, in particular [Yee94]. We note that the general applications for which this technology can be used include all the applications today in which there is a single symmetric key on the chip.

6.1. Smartcard Authentication

The easiest application to implement is authentication. One widespread application is smartcards. Current smartcards have hidden digital keys that can sometimes be extracted using many different kinds of attacks [And01]. With a unique PUF on the smartcard that can be used to authenticate the chip, a digital key is not required: the smartcard hardware is itself the secret key. This key cannot be duplicated, so a person can lose control of it, retrieve it, and continue using it. The smartcard can be turned off if the owner thinks that it is permanently lost, by getting the application authority to forget what it knows of the secret signature that is associated with the unique smartcard.

The following basic protocol is an outline of a protocol that a bank could use to authenticate messages from PUF smartcards. This protocol guarantees that the message the bank receives originated from the smartcard. It does not, however, authenticate the bearer of the smartcard. Some other means, such as a PIN or biometrics, must be used by the smartcard to determine if its bearer is allowed to use it.

1. The bank sends the following program to the smartcard, where R is a single use number and Challenge is the bank's challenge:

begin program
Secret = GetSecret(Challenge);
/* The smartcard somehow *
 * generates Message to send *
 * to the bank */
Output(Message,
MAC((Message, R), Secret));
end program

2. The bank checks the MAC to verify the authenticity and freshness of the message that it gets back from the PUF.

The number R is useful in the case where the smartcard has state that is preserved between executions. In that case, it is important to ensure the freshness of the message. If the privacy of the smartcard's message is a requirement, the bank can also encrypt the message with the same key that is used for the MAC.

6.2. Certified Execution

At present, computation power is a commodity that undergoes massive waste. Most computer users only use a fraction of their computer's processing power, though they use it in a bursty way, which justifies the constant demand for higher performance. A number of organizations, such as SETI@home and distributed.net, are trying to tap that wasted computing power to carry out large computations in a highly distributed way. This style of computation is unreliable, as the person requesting the computation has no way of knowing that it was executed without any tampering.

With chip authentication, it would be possible for a certificate to be produced that proves that a specific computation was carried out on a specific chip. The person requesting the computation can then rely on the trustworthiness of the chip manufacturer, who can vouch that he produced the chip, instead of relying on the owner of the chip.

There are two ways in which the system could be used. Either the computation is done directly on the secure chip, or it is done on a faster insecure chip that is being monitored in a highly interactive way by supervisory code on the secure chip.

To illustrate this application, we present a simple example in which the computation is done directly on the chip. A user, Alice, wants to run a computationally expensive program over the weekend on Bob's 128-bit, 300MHz, single-tasking computer. Bob's computer has a single chip, which has a PUF. Alice has already established CRPs with the PUF chip.

1. Alice sends the following program to the CPUF, where Challenge is the challenge from her CRP:

begin program
Secret = GetSecret(Challenge);
/* The certified computation *
 * is performed, and the result *
 * is placed in Result */
Output(Result,
MAC(Result, Secret));
end program

2. Alice checks the MAC to verify the authenticity of the message that she gets back from the PUF.

Unlike the smartcard application, we did not include a single use random number in this protocol. This is because we are assuming that we are doing pure computation that cannot become stale (any day we run the same computation it will give the same result).
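On Alice's side, accepting a certified execution reduces to recomputing the shared secret from her CRP and checking the MAC. A sketch, continuing the Python model from section 5.2 (HMAC-SHA256 stands in for MAC):

import hmac
import hashlib

def verify_certified_result(result: bytes, tag: bytes,
                            program_hash: bytes, response: bytes) -> bool:
    # Secret = h(h(program), f(Challenge)) = h(h(program), Response),
    # which Alice can compute because she knows Response.
    secret = h(program_hash, response)
    expected = hmac.new(secret, result, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

The same check, applied to the pair (Message, R), serves the smartcard protocol of section 6.1, where R additionally guarantees freshness.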
In this application, Alice is trusting that the chip in Bob's computer performs the computation correctly. This is easier to ensure if all the resources used to perform the computation (memory, CPU, etc.) are on the PUF chip, and included in the PUF characterization. We are currently researching and designing more sophisticated architectures in which the PUF chip can securely utilize off-chip resources, using some ideas from [Lie00] and a memory authentication scheme that can be implemented in a hardware processor [GSC+03].

There is also the possibility of a PUF chip using the capabilities of other networked PUF chips and devices using certified executions. The PUF would have CRPs for each of the computers it would be using, and perform computations using protocols similar to the one described in this section.

6.3. Software Licensing

We are exploring ways in which a piece of code could be made to run only on a chip that has a specific identity defined by a PUF. In this way, pirated code would fail to run. One method that we are considering is to encrypt the code using the PUF's responses on an instruction-per-instruction basis. The instructions would be decrypted inside the PUF chip, and could only be decrypted by the intended chip. As the operating system and off-chip storage are untrustworthy, special architectural support will be needed to protect the intellectual property, as in [Lie00].
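As a toy illustration of the idea (entirely hypothetical; the actual mechanism is ongoing work), each instruction could be XORed with a keystream block derived from a PUF response and the instruction's address:

import hashlib

def keystream_block(response: bytes, address: int, width: int) -> bytes:
    return hashlib.sha256(response + address.to_bytes(8, "big")).digest()[:width]

def crypt_code(instructions, response: bytes):
    # XOR is its own inverse, so this both encrypts and decrypts.
    return [bytes(a ^ b for a, b in
                  zip(ins, keystream_block(response, i, len(ins))))
            for i, ins in enumerate(instructions)]

code = [b"\x90\x90\x90\x90", b"\xC3\x00\x00\x00"]
encrypted = crypt_code(code, b"puf-response")
assert crypt_code(encrypted, b"puf-response") == code

Only a chip that can regenerate the response from its own PUF can decrypt and execute the instructions.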
7. Conclusion

In this paper we have introduced the notion of Controlled Physical Random Functions (CPUFs) and shown how they can be used to establish a shared secret with a specific physical device. The proposed infrastructure is flexible enough to allow multiple mutually mistrusting parties to securely use the same device. Moreover, provisions have been made to preserve the privacy of the device's owner by allowing her to show apparently different PUFs at different times.

We have also described two examples of how CPUFs can be applied. They hold promise in creating smartcards with an unprecedented level of security. They also enable these smartcards or other processors to run user programs in a secure manner, producing a certificate that gives the user confidence in the results generated. While we have not described software licensing and intellectual property protection applications in this paper, the protocols for these applications will have some similarity to those described herein, and are a subject of ongoing work.

References

[And01] Ross J. Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. John Wiley and Sons, 2001.

[Cha85] David Chaum. Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28:1030-1040, 1985.

[GCvDD02] Blaise Gassend, Dwaine Clarke, Marten van Dijk, and Srinivas Devadas. Silicon physical random functions. In Proceedings of the 9th ACM Conference on Computer and Communications Security, November 2002.

[GSC+03] Blaise Gassend, G. Edward Suh, Dwaine Clarke, Marten van Dijk, and Srinivas Devadas. Caches and Merkle trees for efficient memory authentication. In Proceedings of the 9th International Symposium on High-Performance Computer Architecture, February 2003.

[Lie00] David Lie et al. Architectural support for copy and tamper resistant software. In Proceedings of the 9th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), pages 169-177, November 2000.

[MvOV96] Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996.

[Rav01] P. S. Ravikanth. Physical One-Way Functions. PhD thesis, Massachusetts Institute of Technology, 2001.

[Sch96] Bruce Schneier. Applied Cryptography. Wiley, 1996.

[SW99] S. W. Smith and S. H. Weingart. Building a high-performance, programmable secure coprocessor. In Computer Networks (Special Issue on Computer Network Security), volume 31, pages 831-860, April 1999.

[Yee94] Bennet S. Yee. Using Secure Coprocessors. PhD thesis, Carnegie Mellon University, 1994.