Rust for Embedded Security: IoT and Hardware Security (2026)
Learn to use Rust for embedded and IoT security applications. Master secure firmware development, hardware security modules, and IoT device protection using Rust’s memory safety guarantees.
Key Takeaways
- Embedded Rust: Understand Rust for embedded systems
- IoT Security: Secure IoT device development
- Hardware Security: Work with security modules
- Firmware Security: Build secure firmware
- Resource Constraints: Optimize for limited resources
- Best Practices: Security considerations for embedded systems
Table of Contents
- Why Rust for Embedded Security
- Embedded Rust Basics
- IoT Security Patterns
- Hardware Security Modules
- Firmware Security
- Advanced Scenarios
- Troubleshooting Guide
- Real-World Case Study
- FAQ
- Conclusion
TL;DR
Use Rust for embedded and IoT security applications. Learn secure firmware development, hardware security integration, and IoT device protection using Rust’s safety guarantees.
Prerequisites
- Rust 1.80+ installed
- Understanding of embedded systems concepts
- Familiarity with hardware basics
- Knowledge of IoT security principles
Safety and Legal
- Follow hardware security standards
- Test in isolated environments
- Comply with IoT security regulations
- Document security implementations
Embedded Threat Model for IoT Devices
Understanding who the attacker is and what they can do.
Embedded security is fundamentally different from server security. Attackers have physical access, control the network, and may compromise the supply chain.
Threat Actors
| Threat Actor | Capabilities | Attack Vectors | Defense Layers |
|---|---|---|---|
| Physical Attacker | Device theft, hardware access | JTAG/SWD debug, flash dumping, fault injection, side-channels | Secure boot, encrypted storage, tamper detection |
| Network Attacker | MITM, eavesdropping, replay | Packet sniffing, protocol manipulation, replay attacks | TLS, certificate pinning, nonce-based auth |
| Supply Chain Attacker | Malicious firmware, backdoors | Compromised update server, malicious components | Code signing, secure boot, hardware root of trust |
| Insider / Compromised Update Server | Signed malicious updates | Legitimate update channel abuse | Multi-signature updates, rollback protection, audit logs |
Attack Scenarios by Threat Actor
Scenario 1: Physical Attacker (Device Theft)
Attacker Goal: Extract encryption keys from stolen device
Attack Steps:
- Debug Interface Abuse: Connect JTAG/SWD to read memory
- Flash Dumping: Extract firmware from flash memory
- Fault Injection: Glitch voltage/clock to bypass security checks
- Side-Channel Analysis: Measure power consumption during crypto operations
Defense Strategy:
- ✅ Disable debug interfaces in production (fuse JTAG/SWD)
- ✅ Encrypt flash storage (keys in secure element)
- ✅ Tamper detection (voltage/clock monitors)
- ✅ Side-channel resistant crypto (constant-time operations)
Code Example:
// ✅ GOOD: Disable debug interface in production
#[cfg(not(debug_assertions))]
fn disable_debug_interface() {
unsafe {
// Example for STM32: DBGMCU_CR only gates debug clocks; permanent
// protection is readout protection (RDP level 2) via option bytes
const DBGMCU_CR: *mut u32 = 0xE004_2004 as *mut u32;
core::ptr::write_volatile(DBGMCU_CR, 0x0); // Disable debug clocks
}
}
// ✅ GOOD: Store keys in secure element, not flash
pub struct SecureKeyStorage {
secure_element: SecureElement,
}
impl SecureKeyStorage {
pub fn get_encryption_key(&self) -> Result<[u8; 32], Error> {
// ✅ Key never leaves secure element
self.secure_element.derive_key(KEY_SLOT_0)
}
}
Scenario 2: Network Attacker (MITM)
Attacker Goal: Intercept and modify communication between device and server
Attack Steps:
- ARP Spoofing: Redirect traffic through attacker’s machine
- TLS Downgrade: Force device to use weak encryption
- Certificate Substitution: Present fake certificate
- Replay Attack: Capture and replay valid messages
Defense Strategy:
- ✅ Mutual TLS (device and server authenticate each other)
- ✅ Certificate pinning (hardcode expected server cert)
- ✅ Nonce-based authentication (prevent replay)
- ✅ Timestamp validation (reject old messages)
Code Example:
// ✅ GOOD: Certificate pinning
const EXPECTED_SERVER_CERT_HASH: [u8; 32] = [
0x12, 0x34, 0x56, 0x78, // ... (SHA-256 of server cert)
];
pub fn verify_server_certificate(cert: &[u8]) -> Result<(), Error> {
let cert_hash = sha256(cert);
if cert_hash != EXPECTED_SERVER_CERT_HASH {
return Err(Error::CertificateMismatch);
}
Ok(())
}
// ✅ GOOD: Nonce-based authentication (prevent replay)
pub struct NonceAuthenticator {
last_nonce: u64,
}
impl NonceAuthenticator {
pub fn verify_message(&mut self, message: &[u8], nonce: u64) -> Result<(), Error> {
// ✅ Reject replayed messages
if nonce <= self.last_nonce {
return Err(Error::ReplayAttack);
}
self.last_nonce = nonce;
Ok(())
}
}
Scenario 3: Supply Chain Attacker (Malicious Firmware)
Attacker Goal: Inject backdoor into firmware during manufacturing or updates
Attack Steps:
- Compromise Build System: Inject malicious code during compilation
- Malicious Component: Replace legitimate chip with backdoored version
- Update Server Compromise: Serve malicious firmware updates
- Insider Attack: Developer with signing keys goes rogue
Defense Strategy:
- ✅ Secure boot chain (verify every stage)
- ✅ Multi-signature updates (require 2+ signatures)
- ✅ Reproducible builds (verify firmware matches source)
- ✅ Hardware root of trust (immutable ROM bootloader)
Code Example:
// ✅ GOOD: Multi-signature verification (require 2 of 3 keys)
pub struct MultiSigVerifier {
public_keys: [[u8; 64]; 3], // 3 public keys
}
impl MultiSigVerifier {
pub fn verify_firmware(&self, firmware: &[u8], signatures: &[[u8; 64]]) -> Result<(), Error> {
let mut valid_signatures = 0;
// ✅ Pair each signature with its key slot (avoids out-of-bounds
// indexing if more signatures than keys are supplied)
for (sig, key) in signatures.iter().zip(self.public_keys.iter()) {
if verify_signature(firmware, sig, key).is_ok() {
valid_signatures += 1;
}
}
// ✅ Require at least 2 valid signatures
if valid_signatures >= 2 {
Ok(())
} else {
Err(Error::InsufficientSignatures)
}
}
}
Scenario 4: Compromised Update Server (Insider)
Attacker Goal: Use legitimate update channel to deploy malicious firmware
Attack Steps:
- Steal Signing Keys: Compromise developer machine
- Sign Malicious Update: Create valid signature for backdoor
- Deploy via Official Channel: Use legitimate update mechanism
- Downgrade Attack: Roll back to vulnerable version
Defense Strategy:
- ✅ Hardware-backed signing (keys in HSM, not on disk)
- ✅ Rollback protection (monotonic version counter)
- ✅ Audit logging (track all update attempts)
- ✅ Staged rollout (test on subset before full deployment)
Code Example:
// ✅ GOOD: Rollback protection with monotonic counter
pub struct RollbackProtection {
secure_counter: SecureMonotonicCounter,
}
impl RollbackProtection {
pub fn verify_firmware_version(&self, new_version: u32) -> Result<(), Error> {
let current_version = self.secure_counter.read()?;
// ✅ Reject downgrades
if new_version <= current_version {
return Err(Error::RollbackAttempt);
}
Ok(())
}
pub fn commit_update(&mut self, new_version: u32) -> Result<(), Error> {
// ✅ Advance counter to the new version (irreversible); a bare
// increment would lose track of the actual version number
self.secure_counter.advance_to(new_version)?;
Ok(())
}
}
Attack Surface Analysis
| Attack Surface | Exposure | Mitigation | Priority |
|---|---|---|---|
| Debug Interfaces (JTAG/SWD) | High (physical access) | Disable in production, fuse pins | Critical |
| Flash Memory | High (physical access) | Encrypt storage, secure element | Critical |
| Network Communication | High (always exposed) | TLS, certificate pinning | Critical |
| Update Mechanism | Medium (trusted channel) | Code signing, rollback protection | High |
| Power Supply | Low (requires expertise) | Voltage monitors, glitch detection | Medium |
| Side Channels | Low (requires lab equipment) | Constant-time crypto, noise injection | Medium |
Defense-in-Depth for Embedded Devices
Layer 1: Hardware Root of Trust
- Immutable ROM bootloader
- Secure element for key storage
- Tamper detection circuits
Layer 2: Secure Boot Chain
- Verify bootloader signature (ROM)
- Verify firmware signature (bootloader)
- Verify application signature (firmware)
Layer 3: Runtime Protection
- Memory protection unit (MPU)
- Encrypted storage
- Secure communication (TLS)
Layer 4: Update Security
- Code signing
- Rollback protection
- Multi-signature verification
Layer 5: Monitoring & Response
- Tamper detection
- Anomaly detection
- Secure logging
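The five layers can be wired together as a boot-time gate: each layer must pass before the next is enabled. The sketch below is illustrative only (the type and function names are hypothetical, not a real HAL API):

```rust
#[derive(Debug, PartialEq)]
enum BootError {
    SecureBootFailed,
    TamperDetected,
}

struct LayerChecks {
    rom_verified: bool, // Layers 1-2: root of trust + secure boot chain
    tamper_ok: bool,    // Layer 5: tamper monitors report no intrusion
}

fn boot_with_defense_in_depth(checks: &LayerChecks) -> Result<(), BootError> {
    // Layers 1-2: refuse to run if the boot chain was not verified
    if !checks.rom_verified {
        return Err(BootError::SecureBootFailed);
    }
    // Layer 5: refuse to run if tamper circuits fired
    if !checks.tamper_ok {
        return Err(BootError::TamperDetected);
    }
    // Layers 3-4 (MPU config, update policy) would be enabled here
    Ok(())
}
```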
Key Takeaways
Embedded Threat Model Rules:
- Assume physical access - Attacker can open the device
- Network is hostile - All communication is monitored/modified
- Supply chain is compromised - Verify everything
- Updates are attack vectors - Sign and version-check
- Side channels leak secrets - Use constant-time crypto
Critical Rule: Embedded security is about defense-in-depth. No single layer is sufficient. Combine hardware, software, and operational defenses.
Why Rust for Embedded Security
Safety Benefits
Memory Safety:
- No buffer overflows
- No use-after-free
- Prevent entire vulnerability classes
Concurrency Safety:
- Safe multithreading
- No data races
- Predictable behavior
Resource Efficiency
Small Binaries:
- Minimal runtime overhead
- Efficient memory usage
- Optimized for embedded systems
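A minimal illustration of the memory-safety claim: in Rust, an out-of-bounds read is either a compile error or a checked failure, never silent corruption of adjacent memory:

```rust
fn read_sensor_sample(buffer: &[u8; 8], index: usize) -> Option<u8> {
    // ✅ get() returns None instead of reading past the buffer;
    // the C equivalent (buffer[index]) would silently read adjacent memory
    buffer.get(index).copied()
}
```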
Embedded Rust Basics
no_std Rust
⚠️ Important: Embedded Rust uses no_std (no standard library) because embedded systems lack OS features like heap allocation, file systems, and threads.
#![no_std]
#![no_main]
use core::panic::PanicInfo;
// ✅ RECOMMENDED: Use panic-halt for production (smaller binary)
// Cargo.toml: panic-halt = "0.2"
// ⚠️ Gated to release builds: an unconditional import would conflict
// with the custom debug panic handler below (duplicate #[panic_handler])
#[cfg(not(debug_assertions))]
use panic_halt as _;
// Alternative: panic-abort (even smaller, no unwinding)
// [profile.release]
// panic = "abort"
// ❌ NOT RECOMMENDED: Custom panic handler (for debugging only)
#[cfg(debug_assertions)]
#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
// ⚠️ In production, panics should be minimized
// Use Result<T, E> instead of unwrap() / expect()
// Log panic info (if serial logging available)
// ⚠️ PanicInfo implements Display, not defmt::Format, so wrap it
#[cfg(feature = "defmt")]
defmt::error!("Panic: {}", defmt::Display2Format(info));
loop {
// Halt CPU
core::hint::spin_loop();
}
}
#[no_mangle]
pub extern "C" fn _start() -> ! {
// Entry point for embedded application
main();
loop {
core::hint::spin_loop();
}
}
fn main() {
// Your embedded application code
}
Panic Handler Options:
| Crate | Binary Size | Behavior | Use Case |
|---|---|---|---|
| panic-halt | Smallest | Infinite loop | ✅ Production (recommended) |
| panic-abort | Smallest | Abort immediately | ✅ Production (no unwinding) |
| panic-semihosting | Medium | Print to debugger | Debug only |
| Custom handler | Varies | Custom behavior | Advanced use cases |
Key Principle: In production embedded code, panics should never happen. Use Result<T, E> for all fallible operations.
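A sketch of this Result-first style, with hypothetical peripheral states standing in for real hardware checks:

```rust
#[derive(Debug, PartialEq)]
enum InitError {
    ClockNotReady,
    SensorTimeout,
}

// ❌ Panicking style: unwrap() would trigger the panic handler in the field
// ✅ Result style: the caller decides how to degrade gracefully
fn init_sensor(clock_ready: bool, sensor_responds: bool) -> Result<(), InitError> {
    if !clock_ready {
        return Err(InitError::ClockNotReady);
    }
    if !sensor_responds {
        return Err(InitError::SensorTimeout);
    }
    Ok(())
}
```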
Hardware Abstraction
⚠️ CRITICAL: Rust Cannot Protect Against Bad Hardware Access
Important Distinctions:
- ✅ Rust guarantees: Memory safety, no data races, type safety
- ❌ Rust does NOT guarantee: Correct register access, hardware timing, peripheral configuration
Unsafe blocks are unavoidable in embedded Rust because hardware access requires raw pointers and volatile operations.
// ⚠️ IMPORTANT: Unsafe blocks are required for hardware access
// Rust's safety guarantees do NOT extend to hardware correctness
// Example: GPIO register access
pub struct GpioRegister {
address: *mut u32,
}
impl GpioRegister {
// ✅ SAFE: Volatile read (prevents compiler optimization)
pub unsafe fn read(&self) -> u32 {
// ⚠️ Unsafe because:
// - Raw pointer dereference
// - Hardware may have side effects (e.g., clearing interrupt flag)
// - Wrong address = undefined behavior
core::ptr::read_volatile(self.address)
}
// ✅ SAFE: Volatile write
pub unsafe fn write(&mut self, value: u32) {
// ⚠️ Unsafe because:
// - Raw pointer dereference
// - Wrong value can damage hardware (e.g., incorrect clock config)
// - Wrong timing can cause glitches
core::ptr::write_volatile(self.address, value);
}
// ❌ WRONG: Non-volatile access (compiler may optimize away)
pub unsafe fn read_wrong(&self) -> u32 {
*self.address // ❌ Compiler may cache this!
}
}
// ✅ BETTER: Type-safe register abstraction
pub struct GpioPin<const PIN: u8> {
register: *mut u32,
}
impl<const PIN: u8> GpioPin<PIN> {
pub fn set_high(&mut self) {
unsafe {
let current = core::ptr::read_volatile(self.register);
core::ptr::write_volatile(self.register, current | (1 << PIN));
}
}
pub fn set_low(&mut self) {
unsafe {
let current = core::ptr::read_volatile(self.register);
core::ptr::write_volatile(self.register, current & !(1 << PIN));
}
}
}
// ⚠️ HARDWARE BUGS BYPASS RUST GUARANTEES
// Example: Race condition in hardware
pub struct UartRegister {
data: *mut u8,
status: *mut u8,
}
impl UartRegister {
pub unsafe fn send_byte(&mut self, byte: u8) {
// ✅ Rust guarantees: No memory corruption
// ❌ Rust does NOT guarantee: Correct hardware timing
// Wait for TX ready (hardware-specific)
while (core::ptr::read_volatile(self.status) & 0x80) == 0 {
core::hint::spin_loop();
}
// ⚠️ HARDWARE BUG: If TX buffer fills between check and write,
// data may be lost. Rust cannot prevent this.
core::ptr::write_volatile(self.data, byte);
}
}
Key Takeaways:
What Rust Protects:
- ✅ Memory safety (no buffer overflows in Rust code)
- ✅ Type safety (no type confusion)
- ✅ Concurrency safety (no data races in safe Rust)
What Rust Does NOT Protect:
- ❌ Incorrect register addresses (wrong pointer = hardware damage)
- ❌ Incorrect register values (wrong config = hardware malfunction)
- ❌ Hardware timing issues (race conditions in peripherals)
- ❌ Hardware bugs (silicon errata bypass all software guarantees)
- ❌ Physical attacks (voltage glitching, side channels)
Critical Rule:
unsafe blocks in embedded Rust are unavoidable for hardware access. Rust’s safety guarantees end at the hardware boundary. Always consult the datasheet and test on real hardware.
Using Hardware Abstraction Layers (HALs)
✅ RECOMMENDED: Use existing HALs instead of raw register access
// ✅ GOOD: Use HAL (e.g., stm32f4xx-hal)
use stm32f4xx_hal::{pac, prelude::*};
fn main() -> ! {
let dp = pac::Peripherals::take().unwrap();
let gpioa = dp.GPIOA.split();
// ✅ Type-safe, no raw pointers
let mut led = gpioa.pa5.into_push_pull_output();
loop {
led.set_high();
delay_ms(1000);
led.set_low();
delay_ms(1000);
}
}
Benefits of HALs:
- ✅ Type-safe abstractions (compile-time checks)
- ✅ Tested on real hardware
- ✅ Documented APIs
- ✅ Community support
When to use raw register access:
- ❌ HAL doesn’t support your peripheral
- ❌ HAL has bugs (rare)
- ❌ Extreme performance requirements (avoid abstraction overhead)
Physical Attacks on Embedded Devices
Embedded security is not just software—physical attacks are a major threat.
Physical Attack Vectors
| Attack Type | Difficulty | Cost | Mitigation | Detectability |
|---|---|---|---|---|
| Debug Interface (JTAG/SWD) | Low | $50 | Disable/fuse in production | Easy |
| Flash Dumping | Low | $100 | Encrypt flash, secure element | Easy |
| Fault Injection (Glitching) | Medium | $500-5K | Voltage/clock monitors | Medium |
| Side-Channel (Power Analysis) | High | $10K-100K | Constant-time crypto, noise | Hard |
| Chip Decapping (Invasive) | Very High | $50K+ | Physical tamper detection | Very Hard |
Attack 1: Debug Interface Abuse (JTAG/SWD)
What it is: Connecting a debugger to read/write memory and flash
Attack Steps:
- Open device case
- Identify debug pins (JTAG/SWD)
- Connect debugger (e.g., ST-Link, J-Link)
- Dump flash memory
- Extract encryption keys, firmware
Defense:
// ✅ GOOD: Disable debug interface in production
#[cfg(not(debug_assertions))]
fn disable_debug_ports() {
unsafe {
// Example for STM32: Disable JTAG/SWD
const RCC_APB2ENR: *mut u32 = 0x4002_1018 as *mut u32;
const AFIO_MAPR: *mut u32 = 0x4001_0004 as *mut u32;
// Enable AFIO clock
let rcc = core::ptr::read_volatile(RCC_APB2ENR);
core::ptr::write_volatile(RCC_APB2ENR, rcc | (1 << 0));
// Disable both JTAG and SWD (SWJ_CFG = 0b100 disables both;
// 0b010 would only disable JTAG and leave SWD enabled)
let mapr = core::ptr::read_volatile(AFIO_MAPR);
core::ptr::write_volatile(AFIO_MAPR, (mapr & !(0b111 << 24)) | (0b100 << 24));
}
}
// ✅ BETTER: Use fuses to permanently disable (one-time programmable)
// This requires special tools and is irreversible
// Consult your MCU's reference manual for fuse programming
Detection:
- ✅ Tamper-evident seals on device case
- ✅ Tamper detection switch (opens circuit when case opened)
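The tamper-switch idea can be sketched as a latch: once the case-open switch ever reads "open", the flag stays set even if the attacker closes the case again. The pin read is passed in as a plain bool so the logic is testable off-target (names are illustrative):

```rust
/// Latches a tamper event from a case-open switch.
struct TamperLatch {
    tampered: bool,
}

impl TamperLatch {
    fn new() -> Self {
        TamperLatch { tampered: false }
    }

    /// Poll the switch; `case_open` would come from a GPIO read on hardware.
    fn poll(&mut self, case_open: bool) -> bool {
        if case_open {
            self.tampered = true; // ✅ latch: re-closing the case cannot hide the event
        }
        self.tampered
    }
}
```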
Attack 2: Flash Dumping
What it is: Reading firmware directly from flash memory
Attack Steps:
- Desolder flash chip
- Connect to flash reader
- Dump contents
- Reverse engineer firmware
Defense:
// ✅ GOOD: Encrypt flash contents
pub struct EncryptedFlashStorage {
key: [u8; 32], // Stored in secure element, not flash
}
impl EncryptedFlashStorage {
pub fn write_encrypted(&mut self, address: u32, data: &[u8]) -> Result<(), Error> {
// ✅ Encrypt before writing to flash
let encrypted = aes_gcm_encrypt(data, &self.key)?;
flash_write(address, &encrypted)?;
Ok(())
}
// ⚠️ Vec requires an allocator; strict no_std targets can use heapless::Vec
pub fn read_decrypted(&self, address: u32, len: usize) -> Result<Vec<u8>, Error> {
// ✅ Read and decrypt
let encrypted = flash_read(address, len)?;
let decrypted = aes_gcm_decrypt(&encrypted, &self.key)?;
Ok(decrypted)
}
}
Best Practice: Store encryption keys in secure element (ATECC608, SE050), not in flash.
Attack 3: Fault Injection (Voltage/Clock Glitching)
What it is: Causing CPU to skip instructions by manipulating power or clock
Attack Example:
// ❌ VULNERABLE: Security check can be glitched
pub fn verify_pin(entered_pin: u32, correct_pin: u32) -> bool {
if entered_pin == correct_pin {
return true; // ⚠️ Glitch here → always returns true
}
false
}
// ✅ BETTER: Redundant checks (wrapped in black_box so the compiler
// cannot collapse the three identical comparisons into one)
pub fn verify_pin_secure(entered_pin: u32, correct_pin: u32) -> bool {
let check1 = core::hint::black_box(entered_pin) == correct_pin;
let check2 = core::hint::black_box(entered_pin) == correct_pin;
let check3 = core::hint::black_box(entered_pin) == correct_pin;
// ✅ All three checks must pass (harder to glitch all three)
check1 && check2 && check3
}
// ✅ BEST: Hardware glitch detection
pub fn enable_glitch_detection() {
unsafe {
// Enable voltage and clock monitors (MCU-specific)
// Example: STM32 Programmable Voltage Detector (PVD)
const PWR_CR: *mut u32 = 0x4000_7000 as *mut u32;
let cr = core::ptr::read_volatile(PWR_CR);
core::ptr::write_volatile(PWR_CR, cr | (1 << 4)); // Enable PVD
}
}
Hardware Defenses:
- ✅ Voltage monitors (detect undervoltage)
- ✅ Clock monitors (detect frequency anomalies)
- ✅ Watchdog timers (detect CPU hangs)
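The watchdog idea from the list, sketched as a counter the main loop must "pet" before it expires. On a real MCU the watchdog is a hardware peripheral that resets the chip; this host-testable sketch only models the pet/expire logic:

```rust
struct SoftWatchdog {
    ticks_since_pet: u32,
    timeout_ticks: u32,
}

impl SoftWatchdog {
    fn new(timeout_ticks: u32) -> Self {
        SoftWatchdog { ticks_since_pet: 0, timeout_ticks }
    }

    /// Called from the main loop to prove the CPU is still making progress.
    fn pet(&mut self) {
        self.ticks_since_pet = 0;
    }

    /// Called from a timer interrupt; returns true when the system should reset.
    fn tick(&mut self) -> bool {
        self.ticks_since_pet += 1;
        self.ticks_since_pet >= self.timeout_ticks
    }
}
```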
Attack 4: Side-Channel Attacks (Power Analysis)
What it is: Measuring power consumption to extract encryption keys
How it works:
- Different operations consume different power
- AES encryption power trace reveals key bits
- Requires oscilloscope and statistical analysis
Defense:
// ❌ VULNERABLE: Non-constant-time comparison (timing leak)
pub fn verify_password(input: &[u8], expected: &[u8]) -> bool {
if input.len() != expected.len() {
return false;
}
for (a, b) in input.iter().zip(expected.iter()) {
if a != b {
return false; // ❌ Early return leaks timing info
}
}
true
}
// ✅ PROTECTED: Constant-time comparison
use subtle::ConstantTimeEq;
pub fn verify_password_secure(input: &[u8], expected: &[u8]) -> bool {
if input.len() != expected.len() {
return false;
}
// ✅ Constant-time comparison (no early return)
input.ct_eq(expected).into()
}
Additional Defenses:
- ✅ Use hardware crypto accelerators (less power variation)
- ✅ Add random delays (noise injection)
- ✅ Use secure elements (side-channel resistant)
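If the subtle crate is unavailable, the same no-early-return property can be hand-rolled by accumulating XOR differences across every byte (a sketch of the principle; an audited library is still preferable):

```rust
fn ct_eq_bytes(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is usually not secret
    }
    let mut diff: u8 = 0;
    // ✅ Every byte is always compared; no data-dependent early return
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```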
Physical Attack Summary
Key Takeaways:
- Debug interfaces must be disabled in production
- Flash encryption is mandatory for sensitive data
- Fault injection requires redundant checks and hardware monitors
- Side-channel attacks require constant-time crypto and hardware acceleration
- Physical security is defense-in-depth - no single measure is sufficient
Critical Rule: If an attacker has physical access and unlimited time, they can extract secrets. Physical security is about raising the cost and detecting attacks, not making extraction impossible.
IoT Security Patterns
⚠️ CRITICAL: Cryptography Is Only As Strong As Key Storage and Entropy
Common Misconception: “I used AES-256, so my device is secure.”
Reality: Cryptography depends on:
- Key storage location (flash = insecure, secure element = secure)
- RNG quality (bad entropy = predictable keys)
- Side-channel resistance (power analysis can extract keys)
Key Storage Comparison:
| Storage Location | Security | Cost | Use Case |
|---|---|---|---|
| Flash (plaintext) | ❌ Insecure | Free | ❌ Never use for keys |
| Flash (encrypted) | ⚠️ Weak | Free | Temporary keys only |
| Secure Element (ATECC, SE050) | ✅ Strong | $1-5 | ✅ Production (recommended) |
| TPM | ✅ Strong | $5-20 | Enterprise devices |
| HSM | ✅ Very Strong | $100+ | High-security applications |
Secure Communication
⚠️ WARNING: This example assumes secure key storage (secure element).
// ❌ BAD: Encryption key in flash (insecure)
pub struct InsecureIoTDevice {
encryption_key: [u8; 32], // ❌ Stored in flash, easily extracted
}
impl InsecureIoTDevice {
pub fn encrypt_data(&self, data: &[u8]) -> Result<Vec<u8>, Error> {
// ❌ Key is in flash, attacker can dump it
encrypt_aes256(data, &self.encryption_key)
}
}
// ✅ GOOD: Encryption key in secure element
pub struct SecureIoTDevice {
secure_element: SecureElement, // ✅ Keys never leave secure element
}
impl SecureIoTDevice {
pub fn encrypt_data(&self, data: &[u8]) -> Result<Vec<u8>, Error> {
// ✅ Encryption happens inside secure element
// Key never exposed to main CPU
self.secure_element.encrypt_aes256(data, KEY_SLOT_0)
}
pub fn decrypt_data(&self, data: &[u8]) -> Result<Vec<u8>, Error> {
// ✅ Decryption happens inside secure element
self.secure_element.decrypt_aes256(data, KEY_SLOT_0)
}
}
⚠️ CRITICAL: RNG Quality Matters
Bad entropy = predictable keys = broken crypto
// ❌ TERRIBLE: Predictable RNG (DO NOT USE)
pub fn generate_key_insecure() -> [u8; 32] {
let mut key = [0u8; 32];
// ❌ Predictable seed (system time, counter, etc.)
let mut seed = 12345u32; // ❌ Fixed seed = same keys every time!
for i in 0..32 {
seed = seed.wrapping_mul(1103515245).wrapping_add(12345);
key[i] = (seed >> 16) as u8;
}
key
}
// ✅ GOOD: Hardware RNG (true random)
pub fn generate_key_secure() -> Result<[u8; 32], Error> {
let mut key = [0u8; 32];
// ✅ Hardware RNG (uses thermal noise, unpredictable)
hardware_rng_fill(&mut key)?;
Ok(key)
}
// ✅ BEST: Secure element RNG (highest quality)
pub fn generate_key_best(secure_element: &mut SecureElement) -> Result<(), Error> {
// ✅ Key generated inside secure element (never exposed)
// Uses hardware RNG with post-processing
secure_element.generate_key(KEY_SLOT_0)?;
Ok(())
}
RNG Quality Comparison:
| RNG Type | Entropy Source | Predictability | Use Case |
|---|---|---|---|
| Fixed seed | None | ❌ 100% predictable | ❌ Never use |
| Software PRNG | Initial seed | ⚠️ Predictable if state known | Testing only |
| Hardware RNG | Thermal noise | ✅ Unpredictable | ✅ Production |
| Secure element RNG | Hardware + post-processing | ✅ Highest quality | ✅ High-security |
Device Authentication
⚠️ WARNING: Certificate verification requires secure key storage.
// ❌ BAD: Private key in flash (insecure)
pub fn authenticate_device_insecure(certificate: &[u8], private_key: &[u8]) -> bool {
// ❌ Private key in flash, easily extracted
let signature = sign_with_private_key(certificate, private_key);
verify_signature(certificate, &signature)
}
// ✅ GOOD: Private key in secure element
pub fn authenticate_device_secure(
certificate: &[u8],
secure_element: &SecureElement,
) -> Result<bool, Error> {
// ✅ Sign using key in secure element (never exposed)
let signature = secure_element.sign_ecdsa(certificate, KEY_SLOT_0)?;
// ✅ Verify signature
Ok(verify_ecdsa_signature(certificate, &signature, &PUBLIC_KEY))
}
Key Takeaways
Cryptography Rules for Embedded:
- Keys must be in secure element - Flash storage is insecure
- Use hardware RNG - Software RNG is predictable
- Side-channel resistance matters - Use constant-time crypto
- Certificate pinning is mandatory - Don’t trust any CA
- Crypto is only as strong as key storage - AES-256 in flash = broken
Critical Rule: “I used AES-256” is not a security guarantee. Key storage, RNG quality, and side-channel resistance are equally important.
Hardware Security Modules and Secure Elements
Secure elements are specialized chips for cryptographic operations and key storage.
Secure Element Options
| Chip | Interface | Features | Cost | Use Case |
|---|---|---|---|---|
| ATECC608 | I2C | ECDSA, SHA-256, AES-128, key storage | $0.50-1 | ✅ IoT devices (recommended) |
| SE050 | I2C/SPI | RSA, ECC, AES, secure boot | $2-5 | High-security IoT |
| TPM 2.0 | SPI/I2C | Full TPM spec, attestation | $5-20 | Enterprise devices |
| A71CH | I2C | ECC, AES, secure boot | $1-3 | Industrial IoT |
Secure Element Best Practices
✅ DO:
- Generate keys inside secure element - Never import private keys
- Use hardware RNG - Secure element has high-quality RNG
- Store keys in secure slots - Use key slot 0-15
- Lock configuration - Prevent modification after deployment
- Use attestation - Prove device authenticity
❌ DON’T:
- Export private keys - Keys should never leave secure element
- Store keys in flash - Even encrypted keys are vulnerable
- Use software crypto - Hardware is faster and side-channel resistant
- Skip key rotation - Rotate keys periodically
- Trust default config - Always lock configuration
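The "keys never leave the element" rule can be enforced at the type level: the driver API only ever returns signatures and public keys, never private keys. The trait and mock below are illustrative, not a real ATECC608/SE050 driver:

```rust
/// Key slot on the secure element (e.g., 0-15 on an ATECC608-class chip).
type KeySlot = u8;

/// ✅ Note what is absent: there is no method that exports a private key.
trait SecureElementApi {
    fn generate_key(&mut self, slot: KeySlot) -> Result<(), ()>;
    fn sign(&self, slot: KeySlot, msg: &[u8]) -> Result<[u8; 64], ()>;
    fn public_key(&self, slot: KeySlot) -> Result<[u8; 64], ()>;
}

/// Mock for host-side testing; a real driver would talk I2C to the chip
/// and return actual ECDSA signatures rather than zeros.
struct MockElement {
    provisioned: [bool; 16],
}

impl SecureElementApi for MockElement {
    fn generate_key(&mut self, slot: KeySlot) -> Result<(), ()> {
        self.provisioned[slot as usize] = true;
        Ok(())
    }
    fn sign(&self, slot: KeySlot, _msg: &[u8]) -> Result<[u8; 64], ()> {
        if self.provisioned[slot as usize] { Ok([0u8; 64]) } else { Err(()) }
    }
    fn public_key(&self, slot: KeySlot) -> Result<[u8; 64], ()> {
        if self.provisioned[slot as usize] { Ok([0u8; 64]) } else { Err(()) }
    }
}
```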
Firmware Security
Secure Boot Chain (Complete Explanation)
Secure boot ensures only trusted code runs on the device.
Secure Boot Architecture
┌─────────────────────────────────────────────────────────────┐
│ SECURE BOOT CHAIN │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ ROM │ verify │ Bootloader │ │
│ │ (Immutable) ├────────>│ (Mutable) │ │
│ │ │ │ │ │
│ │ - Root of │ │ - Verify │ │
│ │ Trust │ │ firmware │ │
│ │ - Public key │ │ - Load app │ │
│ └──────────────┘ └──────┬───────┘ │
│ │ │
│ │ verify │
│ ▼ │
│ ┌──────────────┐ │
│ │ Firmware │ │
│ │ (Mutable) │ │
│ │ │ │
│ │ - Application│ │
│ │ - Signed │ │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Stage 1: ROM Bootloader (Root of Trust)
Characteristics:
- ✅ Immutable - Burned into ROM, cannot be modified
- ✅ Trusted - First code to execute after reset
- ✅ Minimal - Only verifies next stage (bootloader)
- ✅ Contains public key - For bootloader verification
Location: On-chip ROM (e.g., 0x0000_0000)
Code Example (Conceptual):
// ⚠️ This code runs in ROM (immutable, factory-programmed)
// You cannot modify this—it's part of the silicon
#[no_mangle]
pub extern "C" fn rom_bootloader() -> ! {
// ✅ Step 1: Read bootloader from flash
let bootloader = read_flash(BOOTLOADER_ADDRESS, BOOTLOADER_SIZE);
// ✅ Step 2: Read bootloader signature
let signature = read_flash(BOOTLOADER_SIG_ADDRESS, SIGNATURE_SIZE);
// ✅ Step 3: Verify signature using public key in ROM
const ROM_PUBLIC_KEY: [u8; 64] = [
// ✅ Public key is immutable (burned into ROM)
0x04, 0x1a, 0x2b, 0x3c, // ... (ECDSA P-256 public key)
];
if verify_ecdsa_signature(&bootloader, &signature, &ROM_PUBLIC_KEY).is_err() {
// ❌ Signature invalid → halt (do not execute untrusted code)
panic_halt();
}
// ✅ Step 4: Jump to bootloader (signature valid)
jump_to_address(BOOTLOADER_ADDRESS);
}
Key Point: ROM bootloader is the root of trust. If this is compromised, the entire chain fails.
Stage 2: Bootloader (Mutable but Signed)
Characteristics:
- ⚠️ Mutable - Stored in flash, can be updated
- ✅ Signed - Signature verified by ROM
- ✅ Verifies firmware - Checks application signature
- ✅ Handles updates - Can install new firmware
Location: Flash memory (e.g., 0x0800_0000)
Code Example:
// ✅ This code runs in flash (mutable, but signature-verified by ROM)
#[no_mangle]
pub extern "C" fn bootloader_main() -> ! {
// ✅ Step 1: Read firmware from flash
let firmware = read_flash(FIRMWARE_ADDRESS, FIRMWARE_SIZE);
// ✅ Step 2: Read firmware signature
let signature = read_flash(FIRMWARE_SIG_ADDRESS, SIGNATURE_SIZE);
// ✅ Step 3: Verify firmware signature
const BOOTLOADER_PUBLIC_KEY: [u8; 64] = [
// ✅ Public key for firmware verification
0x04, 0x5d, 0x6e, 0x7f, // ... (ECDSA P-256 public key)
];
if verify_ecdsa_signature(&firmware, &signature, &BOOTLOADER_PUBLIC_KEY).is_err() {
// ❌ Firmware signature invalid → enter recovery mode
enter_recovery_mode();
}
// ✅ Step 4: Check rollback protection (version counter)
let firmware_version = read_firmware_version(&firmware);
let secure_counter = read_secure_counter();
if firmware_version < secure_counter {
// ❌ Downgrade attack detected → halt
panic_halt();
}
// ✅ Step 5: Jump to firmware (signature valid, version OK)
jump_to_address(FIRMWARE_ADDRESS);
}
Stage 3: Firmware (Application)
Characteristics:
- ⚠️ Mutable - Updated frequently
- ✅ Signed - Signature verified by bootloader
- ✅ Version-controlled - Rollback protection
Location: Flash memory (e.g., 0x0801_0000)
Code Example:
// ✅ This is your application code (verified by bootloader)
#[no_mangle]
pub extern "C" fn firmware_main() -> ! {
// ✅ Application code runs here
// Bootloader has already verified signature
init_peripherals();
init_security();
loop {
run_application();
}
}
Public Key Storage Locations
| Key Type | Storage Location | Mutability | Verified By |
|---|---|---|---|
| ROM Public Key | On-chip ROM | ✅ Immutable | Hardware (fuses) |
| Bootloader Public Key | Bootloader flash | ⚠️ Mutable (but signed) | ROM bootloader |
| Firmware Public Key | Firmware flash | ⚠️ Mutable (but signed) | Bootloader |
Immutable vs Mutable Stages
| Stage | Mutability | Attack Vector | Defense |
|---|---|---|---|
| ROM Bootloader | ✅ Immutable | None (burned in silicon) | Hardware root of trust |
| Bootloader | ⚠️ Mutable | Malicious update | Signature verification by ROM |
| Firmware | ⚠️ Mutable | Malicious update | Signature verification by bootloader |
Bootloader Downgrade Attacks
Attack: Attacker installs old, vulnerable firmware version
Defense: Monotonic Version Counter
// ✅ GOOD: Rollback protection with secure counter
pub struct SecureVersionCounter {
// ✅ Stored in OTP (One-Time Programmable) memory or secure element
counter_address: *mut u32,
}
impl SecureVersionCounter {
pub fn read(&self) -> u32 {
unsafe {
core::ptr::read_volatile(self.counter_address)
}
}
pub fn increment(&mut self) -> Result<(), Error> {
let current = self.read();
// ✅ Write new value (can only increase, never decrease)
// ⚠️ Simplified: real OTP counters are unary (one fuse bit burned per
// increment); a plain binary write does not work in true OTP memory
unsafe {
core::ptr::write_volatile(self.counter_address, current + 1);
}
// ✅ Verify write succeeded
if self.read() != current + 1 {
return Err(Error::CounterWriteFailed);
}
Ok(())
}
pub fn verify_version(&self, firmware_version: u32) -> Result<(), Error> {
let min_version = self.read();
if firmware_version < min_version {
// ❌ Downgrade attack detected
return Err(Error::RollbackAttempt);
}
Ok(())
}
}
// ✅ Usage in bootloader
pub fn verify_and_boot_firmware() -> ! {
let firmware = read_flash(FIRMWARE_ADDRESS, FIRMWARE_SIZE);
let signature = read_flash(FIRMWARE_SIG_ADDRESS, SIGNATURE_SIZE);
// ✅ Step 1: Verify signature
if verify_signature(&firmware, &signature).is_err() {
panic_halt();
}
// ✅ Step 2: Check version (rollback protection)
let firmware_version = parse_firmware_version(&firmware);
let counter = SecureVersionCounter { counter_address: OTP_COUNTER_ADDR };
if counter.verify_version(firmware_version).is_err() {
// ❌ Downgrade attack → halt
panic_halt();
}
// ✅ Step 3: Boot firmware
jump_to_address(FIRMWARE_ADDRESS);
}
Secure Boot Best Practices
✅ DO:
- Use hardware root of trust (immutable ROM bootloader)
- Store public keys in ROM (cannot be modified)
- Implement rollback protection (monotonic counter)
- Verify every stage (ROM → bootloader → firmware)
- Use strong crypto (ECDSA P-256 or Ed25519)
❌ DON’T:
- Store private keys on device (only public keys)
- Allow unsigned code execution (always verify)
- Skip version checks (enables downgrade attacks)
- Use weak crypto (RSA-1024, SHA-1)
- Trust mutable storage (flash can be modified)
Key Takeaways
Secure Boot Chain Rules:
- ROM is root of trust - Immutable, first code to run
- Each stage verifies next - Chain of trust
- Public keys are immutable - Stored in ROM or OTP
- Rollback protection is mandatory - Monotonic version counter
- Downgrade attacks are real - Always check version
Critical Rule: Secure boot is only as strong as its weakest link. If ROM is compromised (e.g., factory backdoor), the entire chain fails.
Secure Updates with Power-Failure Safety
⚠️ CRITICAL: Firmware updates must be atomic and power-failure safe.
The Problem:
- Power loss during update = bricked device
- Partial writes corrupt firmware
- No way to recover without JTAG
Solution: A/B Partition Scheme
┌────────────────────────────────────────────┐
│ FLASH MEMORY LAYOUT │
├────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ │
│ │ Bootloader │ (64 KB) │
│ │ (Protected) │ │
│ └──────────────┘ │
│ │
│ ┌──────────────┐ │
│ │ Partition A │ (256 KB) │
│ │ (Active) │ ← Currently running │
│ └──────────────┘ │
│ │
│ ┌──────────────┐ │
│ │ Partition B │ (256 KB) │
│ │ (Inactive) │ ← Write new firmware │
│ └──────────────┘ │
│ │
└────────────────────────────────────────────┘
// ✅ GOOD: Power-failure safe firmware updater
pub struct SecureFirmwareUpdater {
public_key: [u8; 64],
active_partition: Partition,
}
#[derive(Clone, Copy)]
pub enum Partition {
A,
B,
}
impl Partition {
fn address(&self) -> u32 {
match self {
Partition::A => 0x0801_0000, // Partition A start
Partition::B => 0x0805_0000, // Partition B start
}
}
fn size(&self) -> usize {
256 * 1024 // 256 KB
}
}
impl SecureFirmwareUpdater {
// ✅ Power-failure safe update process
pub fn update_firmware(&mut self, update: &[u8], signature: &[u8]) -> Result<(), Error> {
// ✅ Step 1: Verify signature (before writing anything)
if verify_ecdsa_signature(update, signature, &self.public_key).is_err() {
return Err(Error::InvalidSignature);
}
// ✅ Step 2: Verify version (rollback protection)
let new_version = parse_firmware_version(update)?;
let current_version = read_secure_counter()?;
if new_version <= current_version {
return Err(Error::RollbackAttempt);
}
// ✅ Step 3: Determine inactive partition
let inactive_partition = match self.active_partition {
Partition::A => Partition::B,
Partition::B => Partition::A,
};
// ✅ Step 4: Write to inactive partition (safe if power fails)
self.write_firmware_to_partition(update, inactive_partition)?;
// ✅ Step 5: Verify written firmware (checksum)
self.verify_partition(inactive_partition, update)?;
// ✅ Step 6: Atomically switch active partition (single write)
self.switch_active_partition(inactive_partition)?;
// ✅ Step 7: Increment version counter (irreversible)
increment_secure_counter(new_version)?;
        // ✅ Step 8: Reboot into the new firmware (system_reset never returns,
        // so it satisfies the Result return type as the tail expression)
        system_reset()
}
// ✅ Write firmware to partition (can be interrupted safely)
fn write_firmware_to_partition(&self, firmware: &[u8], partition: Partition) -> Result<(), Error> {
let address = partition.address();
// ✅ Erase partition first
flash_erase(address, partition.size())?;
// ✅ Write firmware in chunks (can be interrupted)
const CHUNK_SIZE: usize = 256;
for (i, chunk) in firmware.chunks(CHUNK_SIZE).enumerate() {
let offset = i * CHUNK_SIZE;
flash_write(address + offset as u32, chunk)?;
// ✅ Optional: Watchdog kick to prevent timeout
watchdog_kick();
}
Ok(())
}
// ✅ Verify written firmware matches expected
fn verify_partition(&self, partition: Partition, expected: &[u8]) -> Result<(), Error> {
let address = partition.address();
let written = flash_read(address, expected.len())?;
if written != expected {
return Err(Error::VerificationFailed);
}
Ok(())
}
// ✅ Atomically switch active partition (single write)
fn switch_active_partition(&mut self, new_partition: Partition) -> Result<(), Error> {
// ✅ Write to protected flash location (bootloader config)
const ACTIVE_PARTITION_ADDR: u32 = 0x0800_FFF0;
        let value: u32 = match new_partition {
Partition::A => 0xAAAA_AAAA,
Partition::B => 0xBBBB_BBBB,
};
// ✅ Single atomic write (power-failure safe)
flash_write(ACTIVE_PARTITION_ADDR, &value.to_le_bytes())?;
self.active_partition = new_partition;
Ok(())
}
}
// ✅ Bootloader reads active partition on boot
pub fn bootloader_select_partition() -> Partition {
const ACTIVE_PARTITION_ADDR: u32 = 0x0800_FFF0;
let value = unsafe {
core::ptr::read_volatile(ACTIVE_PARTITION_ADDR as *const u32)
};
match value {
0xAAAA_AAAA => Partition::A,
0xBBBB_BBBB => Partition::B,
_ => {
// ✅ Default to Partition A if invalid
Partition::A
}
}
}
Power-Failure Scenarios
| Scenario | Result | Recovery |
|---|---|---|
| Power loss before write | ✅ No change | Boot normally |
| Power loss during write | ✅ Old firmware intact | Boot from active partition |
| Power loss after write, before switch | ✅ Old firmware still active | Boot normally, retry update |
| Power loss after switch | ✅ New firmware active | Boot from new partition |
Update Process Flow
1. Verify signature ────────> [FAIL] → Reject update
↓
2. Check version ───────────> [FAIL] → Reject (rollback)
↓
3. Write to inactive ───────> [POWER LOSS] → Old firmware OK
↓
4. Verify write ────────────> [FAIL] → Retry or reject
↓
5. Switch partition ────────> [POWER LOSS] → Old firmware OK
(atomic) ↓
6. Increment counter ───────> [POWER LOSS] → New firmware OK
↓
7. Reboot ──────────────────> New firmware running
Key Takeaways
Power-Failure Safe Update Rules:
- Use A/B partitions - Never overwrite running firmware
- Verify before writing - Check signature and version first
- Atomic partition switch - Single write operation
- Verify after writing - Checksum before switching
- Rollback on failure - Always have working firmware
Critical Rule: Firmware updates must be atomic and power-failure safe. A bricked device in the field is unrecoverable without physical access.
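The "verify after writing" rule above needs a concrete checksum. Below is a minimal sketch of a bitwise CRC-32 (IEEE polynomial) that a size-constrained bootloader could run over a partition before switching; `crc32` and `partition_matches` are illustrative names, not part of any particular HAL, and a table-driven CRC would trade flash for speed.

```rust
/// Bitwise CRC-32 (reflected IEEE 802.3 polynomial). Slow but tiny,
/// which suits a bootloader where code size matters more than speed.
fn crc32(data: &[u8]) -> u32 {
    let mut crc: u32 = 0xFFFF_FFFF;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            // 0xEDB8_8320 is the reflected IEEE polynomial.
            crc = if crc & 1 != 0 { (crc >> 1) ^ 0xEDB8_8320 } else { crc >> 1 };
        }
    }
    !crc
}

/// Compare what was read back from flash against the image we intended
/// to write (an illustrative stand-in for `verify_partition`).
fn partition_matches(written: &[u8], expected: &[u8]) -> bool {
    written.len() == expected.len() && crc32(written) == crc32(expected)
}

fn main() {
    let firmware = b"firmware image v2";
    let mut corrupted = firmware.to_vec();
    corrupted[3] ^= 0x01; // single bit flip, e.g. an interrupted flash write
    assert!(partition_matches(firmware, firmware));
    assert!(!partition_matches(&corrupted, firmware));
}
```

Note that a CRC only catches accidental corruption; the cryptographic signature check in step 1 is still what defends against a malicious image.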
Advanced Scenarios
Scenario 1: Secure IoT Gateway
Implementation:
- Encrypted communication
- Device authentication
- Secure key management
- OTA update support
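The device-authentication bullet usually implies replay protection as well: even a correctly MAC'd message must be rejected if it is resent. A minimal host-side sketch of a counter-based replay check follows; `ReplayGuard` is an illustrative name, and authenticity of the counter itself is assumed to come from a MAC or signature over the whole message, verified separately.

```rust
use std::collections::HashMap;

/// Tracks the highest message counter accepted per device ID. Any
/// message whose counter is not strictly greater than the last accepted
/// one is treated as a replay or an out-of-order delivery.
struct ReplayGuard {
    last_seen: HashMap<u32, u64>, // device_id -> last accepted counter
}

impl ReplayGuard {
    fn new() -> Self {
        Self { last_seen: HashMap::new() }
    }

    /// Accept the message and advance the counter, or reject it.
    /// Devices must start their counters at 1, since 0 is the default.
    fn check(&mut self, device_id: u32, counter: u64) -> Result<(), &'static str> {
        let last = self.last_seen.entry(device_id).or_insert(0);
        if counter <= *last {
            return Err("replayed or out-of-order message");
        }
        *last = counter;
        Ok(())
    }
}

fn main() {
    let mut guard = ReplayGuard::new();
    assert!(guard.check(7, 1).is_ok());
    assert!(guard.check(7, 2).is_ok());
    assert!(guard.check(7, 2).is_err()); // same counter -> replay
    assert!(guard.check(7, 1).is_err()); // old counter -> replay
    assert!(guard.check(8, 1).is_ok());  // counters are per device
}
```

On the gateway itself the map would need to survive reboots (or be re-synchronized after one), otherwise a reboot reopens the replay window.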
Scenario 2: Hardware Wallet
Features:
- Secure key storage
- Transaction signing
- PIN protection
- Tamper detection
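For the PIN-protection bullet, two details matter in practice: the comparison must not leak timing (or an attacker can brute-force one digit at a time), and the attempt counter must be decremented before the check, so a fault injected mid-comparison still costs an attempt. A host-side sketch under those assumptions; `PinLock` is illustrative, and on real hardware the counter and PIN material would live in tamper-resistant storage with the PIN stored as a derived key, not plaintext.

```rust
/// Constant-time byte comparison: always scans the full length and
/// accumulates differences, so timing does not reveal where the first
/// mismatch occurs.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

struct PinLock {
    stored_pin: [u8; 4], // real devices store a derived key, not the PIN
    attempts_left: u8,
}

impl PinLock {
    fn verify(&mut self, candidate: &[u8]) -> Result<(), &'static str> {
        if self.attempts_left == 0 {
            return Err("device locked");
        }
        // Decrement BEFORE comparing: a power glitch during the
        // comparison cannot preserve the attempt budget.
        self.attempts_left -= 1;
        if ct_eq(candidate, &self.stored_pin) {
            self.attempts_left = 3; // reset budget on success
            Ok(())
        } else {
            Err("wrong PIN")
        }
    }
}

fn main() {
    let mut lock = PinLock { stored_pin: *b"4921", attempts_left: 3 };
    assert!(lock.verify(b"0000").is_err());
    assert!(lock.verify(b"4921").is_ok());
    assert_eq!(lock.attempts_left, 3);
}
```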
Troubleshooting Guide
Problem: Binary Too Large
Solution:
- Use LTO (Link Time Optimization)
- Disable debug symbols
- Remove unused code
- Optimize for size
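The size reductions above map to a release profile along these lines; exact savings vary by target and codebase, so treat the values as a starting point rather than a prescription.

```toml
[profile.release]
opt-level = "z"     # optimize aggressively for size ("s" is a milder option)
lto = true          # link-time optimization across crates
codegen-units = 1   # better cross-unit optimization, slower compiles
strip = true        # remove debug symbols from the binary
panic = "abort"     # drop unwinding machinery
```

Note that `panic = "abort"` changes failure behavior: pair it with a watchdog or a panic handler that resets cleanly, as discussed in the safety checklist below.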
Problem: Performance Issues
Solution:
- Profile embedded code
- Optimize hot paths
- Use hardware acceleration
- Review algorithm choices
Real-World Case Study
Case Study: Secure IoT device using Rust
Implementation:
- Rust firmware
- Hardware security module
- Encrypted communication
- Secure boot
Results:
- No memory safety vulnerabilities
- Secure communication
- Efficient resource usage
- Easy to maintain
FAQ
Q: Is Rust suitable for all embedded systems?
A: Rust works well for:
- Modern microcontrollers
- IoT devices
- Systems with sufficient resources
- Security-critical applications
Q: How do I debug embedded Rust?
A: Multiple debugging options:
✅ RECOMMENDED: probe-rs (Rust-native debugger)
- Pure Rust debugger (no OpenOCD needed)
- Fast flashing and debugging
- Works with VS Code, CLI
- Supports most ARM Cortex-M chips
# Install the probe-rs CLI tools
cargo install probe-rs-tools --locked
# Flash and run
cargo embed --release
# Debug with probe-rs
probe-rs debug --chip STM32F401RETx target/thumbv7em-none-eabihf/release/firmware
Other Options:
- GDB + OpenOCD: Traditional debugging (slower)
- Serial logging: `defmt` or `log` crate with UART
- Hardware debuggers: ST-Link, J-Link, Black Magic Probe
- Semihosting: Print to debugger console (very slow)
Recommended Setup:
# Cargo.toml
[dependencies]
defmt = "0.3" # Efficient logging
defmt-rtt = "0.4" # Real-Time Transfer (fast)
panic-probe = "0.3" # Panic handler for probe-rs
[profile.dev]
debug = 2 # Full debug info
opt-level = "s" # Optimize for size
Key Takeaway: Use probe-rs for Rust-native debugging—it’s faster and easier than GDB+OpenOCD.
Code Review Checklist for Embedded Rust Security
Resource Constraints
- Memory usage optimized for target hardware
- Stack usage monitored and limited
- Heap usage minimized where possible
- Code size optimized (LTO, etc.)
Security Patterns
- Secure boot implementation verified
- Firmware updates are authenticated
- Secrets stored securely (TPM, secure element)
- No hardcoded credentials
Safety
- No panic!() in production code (use Result)
- Watchdog timer configured
- Proper error recovery mechanisms
- Input validation for all external data
Testing
- Tests run on target hardware
- Hardware-in-the-loop testing performed
- Power consumption tested
- Stress testing performed
Deployment
- Binary signatures verified
- Secure update mechanisms
- Rollback capabilities tested
- Logging doesn’t leak secrets
Conclusion
Rust provides excellent foundations for embedded security applications. Its safety guarantees and resource efficiency make it ideal for IoT and hardware security.
Action Steps
- Learn embedded Rust basics
- Practice with hardware
- Implement security patterns
- Test on real devices
- Deploy securely
Next Steps
- Explore embedded Rust ecosystem
- Study IoT security standards
- Learn about secure boot
- Practice with real hardware
Related Topics
Remember: Embedded security requires careful consideration of resource constraints, hardware capabilities, and security requirements. Test thoroughly on target hardware.
Cleanup
# Clean up embedded build artifacts
rm -rf target/
rm -f *.elf *.bin *.hex
rm -f *.map *.lst
# Clean up any hardware-specific files
find . -name "*_embedded*" -delete
Validation: Verify no embedded build artifacts remain in the project directory.