
Project

PUF Constructions with Limited Information Leakage

Silicon Physical Unclonable Functions (PUFs) arose from MIT research more than 15 years ago with great fanfare and promise. The idea was to use tiny electronic circuits to detect manufacturing variation on-chip, instead of using external test equipment, and to produce a large number of challenge/response pairs (CRPs) that can uniquely authenticate a silicon device. Unlike a normal mathematical function, a PUF is a physical function: when the same challenge is applied repeatedly to the same silicon PUF device, some of the response bits may flip, with the number of bit-flips increasing under environmental changes. Over the years, the physical and noisy nature of PUF circuits has proven to be a barrier to using PUFs securely.


For lightweight PUF authentication use cases, noisy response bits are “forgiven” using a threshold-based comparison. Unfortunately, traditional cryptographic operations such as confusion, diffusion, and modular exponentiation cannot be applied directly (without error correction) to the PUF response bits: these operations are designed to make a single bit-flip at their input spread to many bits at their output, which would cause PUF noise to spread as well. Authentication PUF circuits are therefore composed of mostly linearly-evaluated building blocks in order to constrain noise spread, which leaves these circuits vulnerable to machine learning attacks.
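The threshold-based comparison can be sketched in a few lines; this is a minimal illustration (the function names, the 32-bit response length, and the threshold are ours, not taken from the thesis):

```python
# Hedged sketch of threshold-based PUF authentication; names and the
# 32-bit example values are illustrative.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def authenticate(reference: bytes, measured: bytes, threshold: int) -> bool:
    """Accept iff the fresh (noisy) response is within `threshold` bit-flips
    of the reference response recorded during provisioning."""
    return hamming_distance(reference, measured) <= threshold

# A 32-bit reference response and a re-measurement with 3 noisy bit-flips:
ref = bytes([0b10110010, 0b01101100, 0b11110000, 0b00011011])
noisy = bytes([0b10110011, 0b01101100, 0b11110100, 0b00011001])
assert authenticate(ref, noisy, threshold=4)       # noise is "forgiven"
assert not authenticate(ref, noisy, threshold=2)   # too many flips
```

Note that no diffusing operation touches the response bits, which is exactly why the comparison tolerates noise.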


For PUF key generation use cases, the PUF response bits need to be bit-exact. An encoded form of error correction “parity” bits needs to be exposed in order to correct the bit-flips. Unfortunately, PUF noise levels are often higher than in traditional error correction domains such as wireless and storage, requiring specialized error correction approaches. Depending on the coding scheme, the exposed bits often reveal information in a manner that may compromise the PUF-derived key: there is a delicate trade-off between error correction capability and the min-entropy of the secret key that is leaked. Several proposed coding schemes that addressed environmental variation were later found to leak information about the key in the form of induced min-entropy loss.
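As a concrete (and deliberately simplified) illustration of exposed helper data, a code-offset construction with a repetition code might look like the following sketch; the block length, bit layout, and names are illustrative choices, not the thesis's construction:

```python
# Hedged sketch: code-offset helper data with a length-5 repetition code.
# The helper string is public; if the PUF bits are biased, it leaks
# information about the key (the min-entropy loss discussed above).

REP = 5  # repetition length; majority vote corrects up to 2 flips per block

def gen_helper(puf_bits: list[int], key_bits: list[int]) -> list[int]:
    """Code-offset: helper = PUF bits XOR repetition-encoded key (exposed)."""
    codeword = [b for b in key_bits for _ in range(REP)]
    return [p ^ c for p, c in zip(puf_bits, codeword)]

def recover_key(noisy_bits: list[int], helper: list[int]) -> list[int]:
    """XOR off the public helper, then majority-vote each repetition block."""
    noisy_codeword = [p ^ h for p, h in zip(noisy_bits, helper)]
    return [int(sum(noisy_codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(noisy_codeword), REP)]
```

Regeneration with a couple of bit-flips still recovers the key, but each 5-bit helper block publicly pins down 4 parity relations among the PUF bits, which is where the entropy accounting becomes delicate.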


This thesis describes PUF authentication and PUF key generation schemes that minimize information leakage, including schemes designed with resistance against machine learning attacks.


First, we take a PUF that was previously broken from a machine learning attack standpoint, and instantiate it in a manner that is secure against those classes of machine learning attacks. Our scheme uses a server-managed CRP lockdown protocol: unlike prior approaches, an adaptive chosen-challenge adversary with machine learning capabilities cannot obtain new CRPs without the server’s implicit permission, and is therefore faced with the problem of reverse-engineering the PUF behavior from a limited amount of machine learning training data. We present a degenerate instantiation that is secure against computationally unrestricted adversaries using conventional SRAM technology. We are also the first to present an authentication PUF instantiation whose learning difficulty is exponential with respect to the circuit size in the context of the probably-approximately-correct (PAC) complexity-theoretic learning framework. We validate our approach using silicon PUF ASIC data, and demonstrate the feasibility of supporting 10, 1000, and 1 million authentications.
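The lockdown flow can be sketched roughly as below. This is a heavily hedged toy model, not the thesis's protocol: the PUF is stood in by a keyed hash purely so the flow is runnable (in hardware the "secret" is the chip's manufacturing variation, and the server holds a pre-characterized model or CRP table), and all class and parameter names are ours.

```python
import hashlib
import secrets

class ToyPUF:
    """Deterministic stand-in for a silicon PUF (illustrative only)."""
    def __init__(self, secret: bytes):
        self._secret = secret  # stands in for manufacturing variation

    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._secret + challenge).digest()[:16]

class Server:
    """Rations authentications: no new CRP is exposed once the budget runs out."""
    def __init__(self, model: ToyPUF, max_auths: int):
        self.model, self.remaining = model, max_auths

    def start(self) -> bytes:
        if self.remaining == 0:
            raise RuntimeError("CRP budget exhausted")
        self.remaining -= 1
        return secrets.token_bytes(16)  # server's share of the challenge

    def verify(self, c_server: bytes, c_device: bytes, response: bytes) -> bool:
        expected = self.model.respond(c_server + c_device)
        return secrets.compare_digest(expected, response)

def device_auth(puf: ToyPUF, c_server: bytes) -> tuple[bytes, bytes]:
    # The device contributes its own nonce, so an adversary cannot fully
    # choose the challenge; the server's budget caps new CRP exposure.
    c_device = secrets.token_bytes(16)
    return c_device, puf.respond(c_server + c_device)
```

The two properties the paragraph highlights show up directly: the challenge is co-determined by both parties (limiting adaptive chosen-challenge queries), and the server's budget bounds the training data a machine-learning adversary can ever collect.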


Then, we consider a scenario where a hash function is applied to the noisy PUF output. Such a configuration would help prevent an adversary from using machine learning algorithms to reverse-engineer the PUF from the exposed CRPs. The verifier, however, would also have problems authenticating, since the post-hash response to a challenge is equally obfuscated. To address this problem, we present the first architecture where the “noise” seen by the machine-learning-equipped adversary and the noise seen by the verifier are bifurcated: we allow the adversary’s noise n_a → 0.50 while keeping the verifier’s noise n_v constant as security is scaled up. We present supporting data using 28 nm FPGA PUF noise results as well as machine learning attack results.
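The verifier's dilemma follows from the avalanche behavior of any cryptographic hash (SHA-256 is used here only as a stand-in, and the all-zero response is an arbitrary example): a single noisy response bit scrambles roughly half of the hashed output, so a naive post-hash comparison fails.

```python
import hashlib

def bit_flips(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

resp = bytes(32)                    # a 256-bit PUF response (all zeros, for illustration)
noisy = bytes([0x01]) + bytes(31)   # the same response with one bit flipped

h0 = hashlib.sha256(resp).digest()
h1 = hashlib.sha256(noisy).digest()

print(bit_flips(resp, noisy))   # 1 flip at the hash input ...
print(bit_flips(h0, h1))        # ... roughly 128 of 256 bits flip at the output
```

This is precisely the n_a → 0.50 effect the adversary should face; the architectural question the thesis answers is how to keep the legitimate verifier's effective noise n_v constant at the same time.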


Finally, we consider what has become a popular alternative to the code-offset XOR masking used in PUF key generation: Index-Based Syndrome (IBS) coding. Because of a PUF’s noisy nature, error correction using a single-stage block code is particularly inefficient to implement. Several popular approaches were proposed around the time of the original IBS publication, including a soft-decision scheme exposing “confidence” information associated with PUF output values, a strong-bit mask scheme exposing the locations of bits that are less likely to flip, and a repetition coding scheme exposing an XOR mask of the PUF bits with a repetition code codeword. All three schemes were later found to induce more min-entropy leakage than originally anticipated, while the original IBS scheme was affirmed in its security and de-biasing claims. We conclude the thesis by describing the key results and proof from the original IBS publication, whose security and de-biasing properties, despite its simplicity, have spawned many derivatives including C-IBS, DSC, and Generalized IBS.
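The core IBS idea can be sketched as follows. This is a hedged, simplified illustration (the block size, the sign convention for soft values, and the function names are our assumptions): per code bit, the public helper datum is an index into a block of q real-valued PUF outputs, pointing at the output that most confidently expresses that bit; unlike a code-offset XOR mask, the index by itself does not directly reveal the bit's value.

```python
# Hedged sketch of Index-Based Syndrome (IBS) coding over soft PUF outputs.
# Convention assumed here: a positive soft value reads as bit 1, negative as 0.

Q = 8  # PUF outputs consumed per code bit (illustrative block size)

def ibs_encode(soft_outputs: list[float], code_bits: list[int]) -> list[int]:
    """For each code bit, publish the in-block index of the most confident match."""
    indices = []
    for i, bit in enumerate(code_bits):
        block = soft_outputs[i * Q:(i + 1) * Q]
        # bit 1 -> most positive output; bit 0 -> most negative output
        key = (lambda j: block[j]) if bit else (lambda j: -block[j])
        indices.append(max(range(Q), key=key))
    return indices

def ibs_decode(soft_outputs: list[float], indices: list[int]) -> list[int]:
    """Re-read the PUF and take the sign of each indexed output."""
    return [int(soft_outputs[i * Q + idx] > 0.0)
            for i, idx in enumerate(indices)]
```

Because the encoder always picks the most extreme value in each block, the indexed bit is the one least likely to flip on regeneration, which is where IBS gets its error-correction strength and its tolerance of biased PUF outputs.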


Date: 5 Feb 2013 → 5 Jul 2018
Keywords: PUF, Authentication, Error Correction, Silicon Manufacturing Variation, FPGAs, ASICs, Stress Testing, Signal Processing, Machine Learning, Information Theoretic Security
Disciplines: Applied mathematics in specific fields, Computer architecture and networks, Distributed computing, Information sciences, Information systems, Programming languages, Scientific computing, Theoretical computer science, Visual computing, Other information and computing sciences, Modelling, Biological system engineering, Signal processing, Control systems, robotics and automation, Design theories and methods, Mechatronics and robotics, Computer theory
Project type: PhD project