
Rust for Embedded Security: IoT and Hardware Security (2026)


Learn to use Rust for embedded and IoT security applications. Master secure firmware development, hardware security modules, and IoT device protection using Rust’s memory safety guarantees.

Key Takeaways

  • Embedded Rust: Understand Rust for embedded systems
  • IoT Security: Secure IoT device development
  • Hardware Security: Work with security modules
  • Firmware Security: Build secure firmware
  • Resource Constraints: Optimize for limited resources
  • Best Practices: Security considerations for embedded systems

Table of Contents

  1. Why Rust for Embedded Security
  2. Embedded Rust Basics
  3. IoT Security Patterns
  4. Hardware Security Modules
  5. Firmware Security
  6. Advanced Scenarios
  7. Troubleshooting Guide
  8. Real-World Case Study
  9. FAQ
  10. Conclusion

TL;DR

Use Rust for embedded and IoT security applications. Learn secure firmware development, hardware security integration, and IoT device protection using Rust’s safety guarantees.


Prerequisites

  • Rust 1.80+ installed
  • Understanding of embedded systems concepts
  • Familiarity with hardware basics
  • Knowledge of IoT security principles

Security Guidelines

  • Follow hardware security standards
  • Test in isolated environments
  • Comply with IoT security regulations
  • Document security implementations

Embedded Threat Model for IoT Devices

Understanding who the attacker is and what they can do.

Embedded security is fundamentally different from server security. Attackers have physical access, control the network, and may compromise the supply chain.

Threat Actors

| Threat Actor | Capabilities | Attack Vectors | Defense Layers |
| --- | --- | --- | --- |
| Physical Attacker | Device theft, hardware access | JTAG/SWD debug, flash dumping, fault injection, side-channels | Secure boot, encrypted storage, tamper detection |
| Network Attacker | MITM, eavesdropping, replay | Packet sniffing, protocol manipulation, replay attacks | TLS, certificate pinning, nonce-based auth |
| Supply Chain Attacker | Malicious firmware, backdoors | Compromised update server, malicious components | Code signing, secure boot, hardware root of trust |
| Insider / Compromised Update Server | Signed malicious updates | Legitimate update channel abuse | Multi-signature updates, rollback protection, audit logs |

Attack Scenarios by Threat Actor

Scenario 1: Physical Attacker (Device Theft)

Attacker Goal: Extract encryption keys from stolen device

Attack Steps:

  1. Debug Interface Abuse: Connect JTAG/SWD to read memory
  2. Flash Dumping: Extract firmware from flash memory
  3. Fault Injection: Glitch voltage/clock to bypass security checks
  4. Side-Channel Analysis: Measure power consumption during crypto operations

Defense Strategy:

  • Disable debug interfaces in production (fuse JTAG/SWD)
  • Encrypt flash storage (keys in secure element)
  • Tamper detection (voltage/clock monitors)
  • Side-channel resistant crypto (constant-time operations)

Code Example:

// ✅ GOOD: Disable debug interface in production
#[cfg(not(debug_assertions))]
fn disable_debug_interface() {
    unsafe {
        // Illustrative for STM32: DBGMCU_CR only controls debug behavior in
        // low-power modes. Permanently locking out SWD/JTAG requires option
        // bytes (e.g. RDP level 2 on STM32) — consult the reference manual.
        const DBGMCU_CR: *mut u32 = 0xE004_2004 as *mut u32;
        core::ptr::write_volatile(DBGMCU_CR, 0x0);
    }
}

// ✅ GOOD: Store keys in secure element, not flash
pub struct SecureKeyStorage {
    secure_element: SecureElement,
}

impl SecureKeyStorage {
    pub fn get_encryption_key(&self) -> Result<[u8; 32], Error> {
        // ✅ Key never leaves secure element
        self.secure_element.derive_key(KEY_SLOT_0)
    }
}

Scenario 2: Network Attacker (MITM)

Attacker Goal: Intercept and modify communication between device and server

Attack Steps:

  1. ARP Spoofing: Redirect traffic through attacker’s machine
  2. TLS Downgrade: Force device to use weak encryption
  3. Certificate Substitution: Present fake certificate
  4. Replay Attack: Capture and replay valid messages

Defense Strategy:

  • Mutual TLS (device and server authenticate each other)
  • Certificate pinning (hardcode expected server cert)
  • Nonce-based authentication (prevent replay)
  • Timestamp validation (reject old messages)

Code Example:

// ✅ GOOD: Certificate pinning
const EXPECTED_SERVER_CERT_HASH: [u8; 32] = [
    0x12, 0x34, 0x56, 0x78, // ... (SHA-256 of server cert)
];

pub fn verify_server_certificate(cert: &[u8]) -> Result<(), Error> {
    let cert_hash = sha256(cert);
    
    if cert_hash != EXPECTED_SERVER_CERT_HASH {
        return Err(Error::CertificateMismatch);
    }
    
    Ok(())
}

// ✅ GOOD: Nonce-based authentication (prevent replay)
pub struct NonceAuthenticator {
    last_nonce: u64,
}

impl NonceAuthenticator {
    pub fn verify_message(&mut self, message: &[u8], nonce: u64) -> Result<(), Error> {
        // ✅ Reject replayed messages
        if nonce <= self.last_nonce {
            return Err(Error::ReplayAttack);
        }
        
        self.last_nonce = nonce;
        Ok(())
    }
}

Scenario 3: Supply Chain Attacker (Malicious Firmware)

Attacker Goal: Inject backdoor into firmware during manufacturing or updates

Attack Steps:

  1. Compromise Build System: Inject malicious code during compilation
  2. Malicious Component: Replace legitimate chip with backdoored version
  3. Update Server Compromise: Serve malicious firmware updates
  4. Insider Attack: Developer with signing keys goes rogue

Defense Strategy:

  • Secure boot chain (verify every stage)
  • Multi-signature updates (require 2+ signatures)
  • Reproducible builds (verify firmware matches source)
  • Hardware root of trust (immutable ROM bootloader)

Code Example:

// ✅ GOOD: Multi-signature verification (require 2 of 3 keys)
pub struct MultiSigVerifier {
    public_keys: [[u8; 64]; 3], // 3 public keys
}

impl MultiSigVerifier {
    pub fn verify_firmware(&self, firmware: &[u8], signatures: &[[u8; 64]]) -> Result<(), Error> {
        let mut valid_signatures = 0;
        
        // ✅ Pair each signature with its corresponding key; `zip` stops at
        // the shorter list, so extra signatures cannot index out of bounds
        for (sig, key) in signatures.iter().zip(self.public_keys.iter()) {
            if verify_signature(firmware, sig, key).is_ok() {
                valid_signatures += 1;
            }
        }
        
        // ✅ Require at least 2 valid signatures
        if valid_signatures >= 2 {
            Ok(())
        } else {
            Err(Error::InsufficientSignatures)
        }
    }
}

Scenario 4: Compromised Update Server (Insider)

Attacker Goal: Use legitimate update channel to deploy malicious firmware

Attack Steps:

  1. Steal Signing Keys: Compromise developer machine
  2. Sign Malicious Update: Create valid signature for backdoor
  3. Deploy via Official Channel: Use legitimate update mechanism
  4. Downgrade Attack: Roll back to vulnerable version

Defense Strategy:

  • Hardware-backed signing (keys in HSM, not on disk)
  • Rollback protection (monotonic version counter)
  • Audit logging (track all update attempts)
  • Staged rollout (test on subset before full deployment)

Code Example:

// ✅ GOOD: Rollback protection with monotonic counter
pub struct RollbackProtection {
    secure_counter: SecureMonotonicCounter,
}

impl RollbackProtection {
    pub fn verify_firmware_version(&self, new_version: u32) -> Result<(), Error> {
        let current_version = self.secure_counter.read()?;
        
        // ✅ Reject downgrades
        if new_version <= current_version {
            return Err(Error::RollbackAttempt);
        }
        
        Ok(())
    }
    
    pub fn commit_update(&mut self, _new_version: u32) -> Result<(), Error> {
        // ✅ Advance the counter (irreversible); it stores the minimum
        // version accepted at the next boot
        self.secure_counter.increment()?;
        Ok(())
    }
}

Attack Surface Analysis

| Attack Surface | Exposure | Mitigation | Priority |
| --- | --- | --- | --- |
| Debug Interfaces (JTAG/SWD) | High (physical access) | Disable in production, fuse pins | Critical |
| Flash Memory | High (physical access) | Encrypt storage, secure element | Critical |
| Network Communication | High (always exposed) | TLS, certificate pinning | Critical |
| Update Mechanism | Medium (trusted channel) | Code signing, rollback protection | High |
| Power Supply | Low (requires expertise) | Voltage monitors, glitch detection | Medium |
| Side Channels | Low (requires lab equipment) | Constant-time crypto, noise injection | Medium |

Defense-in-Depth for Embedded Devices

Layer 1: Hardware Root of Trust

  • Immutable ROM bootloader
  • Secure element for key storage
  • Tamper detection circuits

Layer 2: Secure Boot Chain

  • Verify bootloader signature (ROM)
  • Verify firmware signature (bootloader)
  • Verify application signature (firmware)

Layer 3: Runtime Protection

  • Memory protection unit (MPU)
  • Encrypted storage
  • Secure communication (TLS)

Layer 4: Update Security

  • Code signing
  • Rollback protection
  • Multi-signature verification

Layer 5: Monitoring & Response

  • Tamper detection
  • Anomaly detection
  • Secure logging
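
The five layers can be thought of as a single boot-time gate: any failing layer stops the device from starting. The sketch below is illustrative (names like `BootState` and `evaluate_boot` are assumptions, and the inputs would really come from tamper circuits, signature checks, and a secure counter), but it shows the decision logic:

```rust
#[derive(Debug, PartialEq)]
pub enum BootError {
    TamperDetected,   // Layer 1/5: hardware tamper flag
    BadSignature,     // Layer 2: secure boot chain
    RollbackAttempt,  // Layer 4: update security
}

/// Inputs to the boot decision; on real hardware these come from
/// tamper-detection registers, signature verification, and OTP counters.
pub struct BootState {
    pub tamper_flag_set: bool,
    pub firmware_sig_valid: bool,
    pub firmware_version: u32,
    pub min_allowed_version: u32,
}

/// Defense-in-depth as code: every layer must pass, and the first
/// failing layer halts the boot.
pub fn evaluate_boot(state: &BootState) -> Result<(), BootError> {
    if state.tamper_flag_set {
        return Err(BootError::TamperDetected);
    }
    if !state.firmware_sig_valid {
        return Err(BootError::BadSignature);
    }
    if state.firmware_version < state.min_allowed_version {
        return Err(BootError::RollbackAttempt);
    }
    Ok(())
}
```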

Key Takeaways

Embedded Threat Model Rules:

  1. Assume physical access - Attacker can open the device
  2. Network is hostile - All communication is monitored/modified
  3. Supply chain is compromised - Verify everything
  4. Updates are attack vectors - Sign and version-check
  5. Side channels leak secrets - Use constant-time crypto

Critical Rule: Embedded security is about defense-in-depth. No single layer is sufficient. Combine hardware, software, and operational defenses.


Why Rust for Embedded Security

Safety Benefits

Memory Safety:

  • No buffer overflows
  • No use-after-free
  • Prevent entire vulnerability classes

Concurrency Safety:

  • Safe multithreading
  • No data races
  • Predictable behavior

Resource Efficiency

Small Binaries:

  • Minimal runtime overhead
  • Efficient memory usage
  • Optimized for embedded systems
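
A tiny host-side illustration of the memory-safety point: slice reads in Rust are bounds-checked, so an out-of-range index yields `None` instead of reading adjacent memory, which is the failure mode behind classic C buffer-overflow vulnerabilities. The function name is illustrative:

```rust
/// Bounds-checked read: `get` returns None rather than reading past the
/// buffer, so there is no silent out-of-bounds access.
pub fn read_sensor_sample(buffer: &[u8], index: usize) -> Option<u8> {
    buffer.get(index).copied()
}
```

An in-bounds index returns `Some(byte)`; an out-of-bounds index returns `None`, forcing the caller to handle the error path explicitly.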

Embedded Rust Basics

no_std Rust

⚠️ Important: Embedded Rust uses no_std (no standard library) because embedded systems lack OS features like heap allocation, file systems, and threads.

#![no_std]
#![no_main]

use core::panic::PanicInfo;

// ✅ RECOMMENDED: Use panic-halt for production (smaller binary)
// Cargo.toml: panic-halt = "0.2"
#[cfg(not(debug_assertions))]
use panic_halt as _;

// Alternative: panic-abort (even smaller, no unwinding)
// [profile.release]
// panic = "abort"

// ❌ NOT RECOMMENDED: Custom panic handler (for debugging only).
// Gating both handlers on debug_assertions avoids defining two
// #[panic_handler] items in the same build.
#[cfg(debug_assertions)]
#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
    // ⚠️ In production, panics should be minimized
    // Use Result<T, E> instead of unwrap() / expect()
    
    // Log panic info (if defmt logging is available)
    #[cfg(feature = "defmt")]
    defmt::error!("Panic: {}", defmt::Debug2Format(info));
    
    loop {
        // Halt CPU
        core::hint::spin_loop();
    }
}

#[no_mangle]
pub extern "C" fn _start() -> ! {
    // Entry point for embedded application
    main();
    
    loop {
        core::hint::spin_loop();
    }
}

fn main() {
    // Your embedded application code
}

Panic Handler Options:

| Crate | Binary Size | Behavior | Use Case |
| --- | --- | --- | --- |
| panic-halt | Smallest | Infinite loop | ✅ Production (recommended) |
| panic-abort | Smallest | Abort immediately | ✅ Production (no unwinding) |
| panic-semihosting | Medium | Print to debugger | Debug only |
| Custom handler | Varies | Custom behavior | Advanced use cases |

Key Principle: In production embedded code, panics should never happen. Use Result<T, E> for all fallible operations.
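
A minimal sketch of that principle, contrasting a panicking read with a `Result`-based one (the error type and sensor names are illustrative):

```rust
#[derive(Debug, PartialEq)]
pub enum SensorError {
    /// The peripheral did not respond in time.
    Timeout,
}

// ❌ Panics on failure: on an embedded target this ends in the panic handler
pub fn read_temperature_or_panic(raw: Option<i16>) -> i16 {
    raw.expect("sensor read failed") // expect()/unwrap() are what we avoid
}

// ✅ Returns a Result the caller must handle; no panic path
pub fn read_temperature(raw: Option<i16>) -> Result<i16, SensorError> {
    raw.ok_or(SensorError::Timeout)
}
```

Callers then propagate the error with `?` instead of crashing the device.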

Hardware Abstraction

⚠️ CRITICAL: Rust Cannot Protect Against Bad Hardware Access

Important Distinctions:

  • Rust guarantees: Memory safety, no data races, type safety
  • Rust does NOT guarantee: Correct register access, hardware timing, peripheral configuration

Unsafe blocks are unavoidable in embedded Rust because hardware access requires raw pointers and volatile operations.

// ⚠️ IMPORTANT: Unsafe blocks are required for hardware access
// Rust's safety guarantees do NOT extend to hardware correctness

// Example: GPIO register access
pub struct GpioRegister {
    address: *mut u32,
}

impl GpioRegister {
    // ✅ CORRECT: Volatile read (prevents compiler optimization)
    pub unsafe fn read(&self) -> u32 {
        // ⚠️ Unsafe because:
        // - Raw pointer dereference
        // - Hardware may have side effects (e.g., clearing interrupt flag)
        // - Wrong address = undefined behavior
        core::ptr::read_volatile(self.address)
    }
    
    // ✅ CORRECT: Volatile write
    pub unsafe fn write(&mut self, value: u32) {
        // ⚠️ Unsafe because:
        // - Raw pointer dereference
        // - Wrong value can damage hardware (e.g., incorrect clock config)
        // - Wrong timing can cause glitches
        core::ptr::write_volatile(self.address, value);
    }
    
    // ❌ WRONG: Non-volatile access (compiler may optimize away)
    pub unsafe fn read_wrong(&self) -> u32 {
        *self.address // ❌ Compiler may cache this!
    }
}

// ✅ BETTER: Type-safe register abstraction
pub struct GpioPin<const PIN: u8> {
    register: *mut u32,
}

impl<const PIN: u8> GpioPin<PIN> {
    pub fn set_high(&mut self) {
        unsafe {
            let current = core::ptr::read_volatile(self.register);
            core::ptr::write_volatile(self.register, current | (1 << PIN));
        }
    }
    
    pub fn set_low(&mut self) {
        unsafe {
            let current = core::ptr::read_volatile(self.register);
            core::ptr::write_volatile(self.register, current & !(1 << PIN));
        }
    }
}

// ⚠️ HARDWARE BUGS BYPASS RUST GUARANTEES
// Example: Race condition in hardware
pub struct UartRegister {
    data: *mut u8,
    status: *mut u8,
}

impl UartRegister {
    pub unsafe fn send_byte(&mut self, byte: u8) {
        // ✅ Rust guarantees: No memory corruption
        // ❌ Rust does NOT guarantee: Correct hardware timing
        
        // Wait for TX ready (hardware-specific)
        while (core::ptr::read_volatile(self.status) & 0x80) == 0 {
            core::hint::spin_loop();
        }
        
        // ⚠️ HARDWARE BUG: If TX buffer fills between check and write,
        // data may be lost. Rust cannot prevent this.
        core::ptr::write_volatile(self.data, byte);
    }
}

Key Takeaways:

What Rust Protects:

  • ✅ Memory safety (no buffer overflows in Rust code)
  • ✅ Type safety (no type confusion)
  • ✅ Concurrency safety (no data races in safe Rust)

What Rust Does NOT Protect:

  • ❌ Incorrect register addresses (wrong pointer = hardware damage)
  • ❌ Incorrect register values (wrong config = hardware malfunction)
  • ❌ Hardware timing issues (race conditions in peripherals)
  • ❌ Hardware bugs (silicon errata bypass all software guarantees)
  • ❌ Physical attacks (voltage glitching, side channels)

Critical Rule: unsafe blocks in embedded Rust are unavoidable for hardware access. Rust’s safety guarantees end at the hardware boundary. Always consult the datasheet and test on real hardware.

Using Hardware Abstraction Layers (HALs)

✅ RECOMMENDED: Use existing HALs instead of raw register access

// ✅ GOOD: Use HAL (e.g., stm32f4xx-hal)
use stm32f4xx_hal::{pac, prelude::*};

fn main() -> ! {
    let dp = pac::Peripherals::take().unwrap(); // None only if take() is called twice
    let gpioa = dp.GPIOA.split();
    
    // ✅ Type-safe, no raw pointers
    let mut led = gpioa.pa5.into_push_pull_output();
    
    loop {
        led.set_high();
        delay_ms(1000);
        led.set_low();
        delay_ms(1000);
    }
}

Benefits of HALs:

  • ✅ Type-safe abstractions (compile-time checks)
  • ✅ Tested on real hardware
  • ✅ Documented APIs
  • ✅ Community support

When to use raw register access:

  • ❌ HAL doesn’t support your peripheral
  • ❌ HAL has bugs (rare)
  • ❌ Extreme performance requirements (avoid abstraction overhead)
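
The portability benefit can be sketched with a driver written against a pin trait. The trait below is a simplified stand-in for embedded-hal's `OutputPin` (the real trait's setters return `Result`); the mock pin shows how such a driver can be tested on a host machine with no hardware:

```rust
/// Simplified stand-in for embedded-hal's `OutputPin` trait.
pub trait OutputPin {
    fn set_high(&mut self);
    fn set_low(&mut self);
}

/// A driver written against the trait runs on any MCU whose HAL
/// provides a conforming pin type.
pub struct StatusLed<P: OutputPin> {
    pin: P,
    lit: bool,
}

impl<P: OutputPin> StatusLed<P> {
    pub fn new(pin: P) -> Self {
        Self { pin, lit: false }
    }
    pub fn on(&mut self) {
        self.pin.set_high();
        self.lit = true;
    }
    pub fn off(&mut self) {
        self.pin.set_low();
        self.lit = false;
    }
    pub fn is_lit(&self) -> bool {
        self.lit
    }
}

/// Mock pin for host-side testing (no hardware required).
pub struct MockPin {
    pub state: bool,
}

impl OutputPin for MockPin {
    fn set_high(&mut self) { self.state = true; }
    fn set_low(&mut self) { self.state = false; }
}
```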

Physical Attacks on Embedded Devices

Embedded security is not just software—physical attacks are a major threat.

Physical Attack Vectors

| Attack Type | Difficulty | Cost | Mitigation | Detectability |
| --- | --- | --- | --- | --- |
| Debug Interface (JTAG/SWD) | Low | $50 | Disable/fuse in production | Easy |
| Flash Dumping | Low | $100 | Encrypt flash, secure element | Easy |
| Fault Injection (Glitching) | Medium | $500-5K | Voltage/clock monitors | Medium |
| Side-Channel (Power Analysis) | High | $10K-100K | Constant-time crypto, noise | Hard |
| Chip Decapping (Invasive) | Very High | $50K+ | Physical tamper detection | Very Hard |

Attack 1: Debug Interface Abuse (JTAG/SWD)

What it is: Connecting a debugger to read/write memory and flash

Attack Steps:

  1. Open device case
  2. Identify debug pins (JTAG/SWD)
  3. Connect debugger (e.g., ST-Link, J-Link)
  4. Dump flash memory
  5. Extract encryption keys, firmware

Defense:

// ✅ GOOD: Disable debug interface in production
#[cfg(not(debug_assertions))]
fn disable_debug_ports() {
    unsafe {
        // Example for STM32F1: SWJ_CFG remap in AFIO_MAPR
        const RCC_APB2ENR: *mut u32 = 0x4002_1018 as *mut u32;
        const AFIO_MAPR: *mut u32 = 0x4001_0004 as *mut u32;
        
        // Enable AFIO clock
        let rcc = core::ptr::read_volatile(RCC_APB2ENR);
        core::ptr::write_volatile(RCC_APB2ENR, rcc | (1 << 0));
        
        // SWJ_CFG = 0b100 disables both JTAG-DP and SW-DP
        // (0b010 would disable JTAG only and leave SWD enabled)
        let mapr = core::ptr::read_volatile(AFIO_MAPR);
        core::ptr::write_volatile(AFIO_MAPR, (mapr & !(0b111 << 24)) | (0b100 << 24));
    }
}

// ✅ BETTER: Use fuses to permanently disable (one-time programmable)
// This requires special tools and is irreversible
// Consult your MCU's reference manual for fuse programming

Detection:

  • ✅ Tamper-evident seals on device case
  • ✅ Tamper detection switch (opens circuit when case opened)

Attack 2: Flash Dumping

What it is: Reading firmware directly from flash memory

Attack Steps:

  1. Desolder flash chip
  2. Connect to flash reader
  3. Dump contents
  4. Reverse engineer firmware

Defense:

// ✅ GOOD: Encrypt flash contents
pub struct EncryptedFlashStorage {
    key: [u8; 32], // Derived at boot from the secure element; held only in RAM
}

impl EncryptedFlashStorage {
    pub fn write_encrypted(&mut self, address: u32, data: &[u8]) -> Result<(), Error> {
        // ✅ Encrypt before writing to flash
        let encrypted = aes_gcm_encrypt(data, &self.key)?;
        flash_write(address, &encrypted)?;
        Ok(())
    }
    
    pub fn read_decrypted(&self, address: u32, len: usize) -> Result<Vec<u8>, Error> {
        // ✅ Read and decrypt
        // ⚠️ Vec requires the `alloc` crate under no_std; on heapless
        // targets, decrypt into a caller-provided buffer instead
        let encrypted = flash_read(address, len)?;
        let decrypted = aes_gcm_decrypt(&encrypted, &self.key)?;
        Ok(decrypted)
    }
}

Best Practice: Store encryption keys in secure element (ATECC608, SE050), not in flash.

Attack 3: Fault Injection (Voltage/Clock Glitching)

What it is: Causing CPU to skip instructions by manipulating power or clock

Attack Example:

// ❌ VULNERABLE: Security check can be glitched
pub fn verify_pin(entered_pin: u32, correct_pin: u32) -> bool {
    if entered_pin == correct_pin {
        return true; // ⚠️ Glitch here → always returns true
    }
    false
}

// ✅ BETTER: Redundant checks
pub fn verify_pin_secure(entered_pin: u32, correct_pin: u32) -> bool {
    // ✅ black_box hinders the compiler from collapsing the redundant
    // comparisons into a single check (which would defeat the redundancy)
    let check1 = core::hint::black_box(entered_pin) == core::hint::black_box(correct_pin);
    let check2 = core::hint::black_box(entered_pin) == core::hint::black_box(correct_pin);
    let check3 = core::hint::black_box(entered_pin) == core::hint::black_box(correct_pin);
    
    // ✅ All three checks must pass (harder to glitch all three)
    check1 && check2 && check3
}

// ✅ BEST: Hardware glitch detection
pub fn enable_glitch_detection() {
    unsafe {
        // Enable voltage and clock monitors (MCU-specific)
        // Example: STM32 Programmable Voltage Detector (PVD)
        const PWR_CR: *mut u32 = 0x4000_7000 as *mut u32;
        let cr = core::ptr::read_volatile(PWR_CR);
        core::ptr::write_volatile(PWR_CR, cr | (1 << 4)); // Enable PVD
    }
}

Hardware Defenses:

  • ✅ Voltage monitors (detect undervoltage)
  • ✅ Clock monitors (detect frequency anomalies)
  • ✅ Watchdog timers (detect CPU hangs)

Attack 4: Side-Channel Attacks (Power Analysis)

What it is: Measuring power consumption to extract encryption keys

How it works:

  • Different operations consume different power
  • AES encryption power trace reveals key bits
  • Requires oscilloscope and statistical analysis

Defense:

// ❌ VULNERABLE: Non-constant-time comparison (timing leak)
pub fn verify_password(input: &[u8], expected: &[u8]) -> bool {
    if input.len() != expected.len() {
        return false;
    }
    
    for (a, b) in input.iter().zip(expected.iter()) {
        if a != b {
            return false; // ❌ Early return leaks timing info
        }
    }
    
    true
}

// ✅ PROTECTED: Constant-time comparison
use subtle::ConstantTimeEq;

pub fn verify_password_secure(input: &[u8], expected: &[u8]) -> bool {
    if input.len() != expected.len() {
        return false;
    }
    
    // ✅ Constant-time comparison (no early return)
    input.ct_eq(expected).into()
}

Additional Defenses:

  • ✅ Use hardware crypto accelerators (less power variation)
  • ✅ Add random delays (noise injection)
  • ✅ Use secure elements (side-channel resistant)
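
The idea behind constant-time comparison can also be shown with a hand-rolled XOR-accumulate loop. This is a sketch of the principle only; in production prefer an audited crate like `subtle`, since a compiler may still rewrite naive code:

```rust
/// Accumulate differences with XOR/OR so every byte is always inspected;
/// there is no data-dependent early return inside the loop.
pub fn ct_bytes_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is usually not secret
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // nonzero iff any byte pair differs
    }
    diff == 0
}
```

Unlike the early-return version, the loop's running time depends only on the input length, not on where the first mismatch occurs.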

Physical Attack Summary

Key Takeaways:

  1. Debug interfaces must be disabled in production
  2. Flash encryption is mandatory for sensitive data
  3. Fault injection requires redundant checks and hardware monitors
  4. Side-channel attacks require constant-time crypto and hardware acceleration
  5. Physical security is defense-in-depth - no single measure is sufficient

Critical Rule: If an attacker has physical access and unlimited time, they can extract secrets. Physical security is about raising the cost and detecting attacks, not making extraction impossible.


IoT Security Patterns

⚠️ CRITICAL: Cryptography Is Only As Strong As Key Storage and Entropy

Common Misconception: “I used AES-256, so my device is secure.”

Reality: Cryptography depends on:

  1. Key storage location (flash = insecure, secure element = secure)
  2. RNG quality (bad entropy = predictable keys)
  3. Side-channel resistance (power analysis can extract keys)

Key Storage Comparison:

| Storage Location | Security | Cost | Use Case |
| --- | --- | --- | --- |
| Flash (plaintext) | ❌ Insecure | Free | ❌ Never use for keys |
| Flash (encrypted) | ⚠️ Weak | Free | Temporary keys only |
| Secure Element (ATECC, SE050) | ✅ Strong | $1-5 | ✅ Production (recommended) |
| TPM | ✅ Strong | $5-20 | Enterprise devices |
| HSM | ✅ Very Strong | $100+ | High-security applications |

Secure Communication

⚠️ WARNING: This example assumes secure key storage (secure element).

// ❌ BAD: Encryption key in flash (insecure)
pub struct InsecureIoTDevice {
    encryption_key: [u8; 32], // ❌ Stored in flash, easily extracted
}

impl InsecureIoTDevice {
    pub fn encrypt_data(&self, data: &[u8]) -> Result<Vec<u8>, Error> {
        // ❌ Key is in flash, attacker can dump it
        encrypt_aes256(data, &self.encryption_key)
    }
}

// ✅ GOOD: Encryption key in secure element
pub struct SecureIoTDevice {
    secure_element: SecureElement, // ✅ Keys never leave secure element
}

impl SecureIoTDevice {
    pub fn encrypt_data(&self, data: &[u8]) -> Result<Vec<u8>, Error> {
        // ✅ Encryption happens inside secure element
        // Key never exposed to main CPU
        self.secure_element.encrypt_aes256(data, KEY_SLOT_0)
    }
    
    pub fn decrypt_data(&self, data: &[u8]) -> Result<Vec<u8>, Error> {
        // ✅ Decryption happens inside secure element
        self.secure_element.decrypt_aes256(data, KEY_SLOT_0)
    }
}

⚠️ CRITICAL: RNG Quality Matters

Bad entropy = predictable keys = broken crypto

// ❌ TERRIBLE: Predictable RNG (DO NOT USE)
pub fn generate_key_insecure() -> [u8; 32] {
    let mut key = [0u8; 32];
    
    // ❌ Predictable seed (system time, counter, etc.)
    let mut seed = 12345u32; // ❌ Fixed seed = same keys every time!
    
    for i in 0..32 {
        seed = seed.wrapping_mul(1103515245).wrapping_add(12345);
        key[i] = (seed >> 16) as u8;
    }
    
    key
}

// ✅ GOOD: Hardware RNG (true random)
pub fn generate_key_secure() -> Result<[u8; 32], Error> {
    let mut key = [0u8; 32];
    
    // ✅ Hardware RNG (uses thermal noise, unpredictable)
    hardware_rng_fill(&mut key)?;
    
    Ok(key)
}

// ✅ BEST: Secure element RNG (highest quality)
pub fn generate_key_best(secure_element: &mut SecureElement) -> Result<(), Error> {
    // ✅ Key generated inside secure element (never exposed)
    // Uses hardware RNG with post-processing
    secure_element.generate_key(KEY_SLOT_0)?;
    Ok(())
}

RNG Quality Comparison:

| RNG Type | Entropy Source | Predictability | Use Case |
| --- | --- | --- | --- |
| Fixed seed | None | ❌ 100% predictable | ❌ Never use |
| Software PRNG | Initial seed | ⚠️ Predictable if state known | Testing only |
| Hardware RNG | Thermal noise | ✅ Unpredictable | ✅ Production |
| Secure element RNG | Hardware + post-processing | ✅ Highest quality | ✅ High-security |

Device Authentication

⚠️ WARNING: Certificate verification requires secure key storage.

// ❌ BAD: Private key in flash (insecure)
pub fn authenticate_device_insecure(certificate: &[u8], private_key: &[u8]) -> bool {
    // ❌ Private key in flash, easily extracted
    let signature = sign_with_private_key(certificate, private_key);
    verify_signature(certificate, &signature)
}

// ✅ GOOD: Private key in secure element
pub fn authenticate_device_secure(
    certificate: &[u8],
    secure_element: &SecureElement,
) -> Result<bool, Error> {
    // ✅ Sign using key in secure element (never exposed)
    let signature = secure_element.sign_ecdsa(certificate, KEY_SLOT_0)?;
    
    // ✅ Verify signature
    Ok(verify_ecdsa_signature(certificate, &signature, &PUBLIC_KEY))
}

Key Takeaways

Cryptography Rules for Embedded:

  1. Keys must be in secure element - Flash storage is insecure
  2. Use hardware RNG - Software RNG is predictable
  3. Side-channel resistance matters - Use constant-time crypto
  4. Certificate pinning is mandatory - Don’t trust any CA
  5. Crypto is only as strong as key storage - AES-256 in flash = broken

Critical Rule: “I used AES-256” is not a security guarantee. Key storage, RNG quality, and side-channel resistance are equally important.


Hardware Security Modules and Secure Elements

Secure elements are specialized chips for cryptographic operations and key storage.

Secure Element Options

| Chip | Interface | Features | Cost | Use Case |
| --- | --- | --- | --- | --- |
| ATECC608 | I2C | ECDSA, SHA-256, AES-128, key storage | $0.50-1 | ✅ IoT devices (recommended) |
| SE050 | I2C/SPI | RSA, ECC, AES, secure boot | $2-5 | High-security IoT |
| TPM 2.0 | SPI/I2C | Full TPM spec, attestation | $5-20 | Enterprise devices |
| A71CH | I2C | ECC, AES, secure boot | $1-3 | Industrial IoT |

Secure Element Best Practices

✅ DO:

  1. Generate keys inside secure element - Never import private keys
  2. Use hardware RNG - Secure element has high-quality RNG
  3. Store keys in secure slots - Use key slot 0-15
  4. Lock configuration - Prevent modification after deployment
  5. Use attestation - Prove device authenticity

❌ DON’T:

  1. Export private keys - Keys should never leave secure element
  2. Store keys in flash - Even encrypted keys are vulnerable
  3. Use software crypto - Hardware is faster and side-channel resistant
  4. Skip key rotation - Rotate keys periodically
  5. Trust default config - Always lock configuration
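
These rules can be baked into the API shape itself. The trait below is a hypothetical sketch, not a real driver crate: keys are addressed only by slot and no method returns key material, so "generate inside the element" and "never export" are enforced by the type system. The mock implementation exists only to exercise the API on a host machine:

```rust
/// Keys are addressed by slot; raw key bytes have no representation in
/// the API, so they cannot leak into flash or logs.
#[derive(Clone, Copy)]
pub struct KeySlot(pub u8);

pub trait SecureElement {
    /// Generate a key *inside* the element (DO rule 1: never import keys).
    fn generate_key(&mut self, slot: KeySlot) -> Result<(), ()>;
    /// Sign with the slot's key; only the signature leaves the chip.
    fn sign(&self, slot: KeySlot, msg: &[u8]) -> Result<[u8; 64], ()>;
    // Deliberately no export_key() method (DON'T rule 1).
}

/// Host-side mock: tracks which of 16 slots are provisioned.
/// Panics on slot indices >= 16 (kept simple for illustration).
pub struct MockElement {
    pub provisioned: [bool; 16],
}

impl SecureElement for MockElement {
    fn generate_key(&mut self, slot: KeySlot) -> Result<(), ()> {
        self.provisioned[slot.0 as usize] = true;
        Ok(())
    }
    fn sign(&self, slot: KeySlot, msg: &[u8]) -> Result<[u8; 64], ()> {
        if !self.provisioned[slot.0 as usize] {
            return Err(()); // an empty slot cannot sign
        }
        // Placeholder "signature" for the mock — not real cryptography
        let mut sig = [0u8; 64];
        sig[0] = msg.first().copied().unwrap_or(0);
        Ok(sig)
    }
}
```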

Firmware Security

Secure Boot Chain (Complete Explanation)

Secure boot ensures only trusted code runs on the device.

Secure Boot Architecture

┌─────────────────────────────────────────────────────────────┐
│                    SECURE BOOT CHAIN                         │
├─────────────────────────────────────────────────────────────┤
│                                                               │
│  ┌──────────────┐         ┌──────────────┐                  │
│  │   ROM        │  verify │  Bootloader  │                  │
│  │  (Immutable) ├────────>│  (Mutable)   │                  │
│  │              │         │              │                  │
│  │ - Root of    │         │ - Verify     │                  │
│  │   Trust      │         │   firmware   │                  │
│  │ - Public key │         │ - Load app   │                  │
│  └──────────────┘         └──────┬───────┘                  │
│                                   │                           │
│                                   │ verify                    │
│                                   ▼                           │
│                          ┌──────────────┐                    │
│                          │  Firmware    │                    │
│                          │  (Mutable)   │                    │
│                          │              │                    │
│                          │ - Application│                    │
│                          │ - Signed     │                    │
│                          └──────────────┘                    │
│                                                               │
└─────────────────────────────────────────────────────────────┘

Stage 1: ROM Bootloader (Root of Trust)

Characteristics:

  • Immutable - Burned into ROM, cannot be modified
  • Trusted - First code to execute after reset
  • Minimal - Only verifies next stage (bootloader)
  • Contains public key - For bootloader verification

Location: On-chip ROM (e.g., 0x0000_0000)

Code Example (Conceptual):

// ⚠️ This code runs in ROM (immutable, factory-programmed)
// You cannot modify this—it's part of the silicon

#[no_mangle]
pub extern "C" fn rom_bootloader() -> ! {
    // ✅ Step 1: Read bootloader from flash
    let bootloader = read_flash(BOOTLOADER_ADDRESS, BOOTLOADER_SIZE);
    
    // ✅ Step 2: Read bootloader signature
    let signature = read_flash(BOOTLOADER_SIG_ADDRESS, SIGNATURE_SIZE);
    
    // ✅ Step 3: Verify signature using public key in ROM
    const ROM_PUBLIC_KEY: [u8; 64] = [
        // ✅ Public key is immutable (burned into ROM)
        0x04, 0x1a, 0x2b, 0x3c, // ... (ECDSA P-256 public key)
    ];
    
    if verify_ecdsa_signature(&bootloader, &signature, &ROM_PUBLIC_KEY).is_err() {
        // ❌ Signature invalid → halt (do not execute untrusted code)
        panic_halt();
    }
    
    // ✅ Step 4: Jump to bootloader (signature valid)
    jump_to_address(BOOTLOADER_ADDRESS);
}

Key Point: ROM bootloader is the root of trust. If this is compromised, the entire chain fails.

Stage 2: Bootloader (Mutable but Signed)

Characteristics:

  • ⚠️ Mutable - Stored in flash, can be updated
  • Signed - Signature verified by ROM
  • Verifies firmware - Checks application signature
  • Handles updates - Can install new firmware

Location: Flash memory (e.g., 0x0800_0000)

Code Example:

// ✅ This code runs in flash (mutable, but signature-verified by ROM)

#[no_mangle]
pub extern "C" fn bootloader_main() -> ! {
    // ✅ Step 1: Read firmware from flash
    let firmware = read_flash(FIRMWARE_ADDRESS, FIRMWARE_SIZE);
    
    // ✅ Step 2: Read firmware signature
    let signature = read_flash(FIRMWARE_SIG_ADDRESS, SIGNATURE_SIZE);
    
    // ✅ Step 3: Verify firmware signature
    const BOOTLOADER_PUBLIC_KEY: [u8; 64] = [
        // ✅ Public key for firmware verification
        0x04, 0x5d, 0x6e, 0x7f, // ... (ECDSA P-256 public key)
    ];
    
    if verify_ecdsa_signature(&firmware, &signature, &BOOTLOADER_PUBLIC_KEY).is_err() {
        // ❌ Firmware signature invalid → enter recovery mode
        enter_recovery_mode();
    }
    
    // ✅ Step 4: Check rollback protection (version counter)
    let firmware_version = read_firmware_version(&firmware);
    let secure_counter = read_secure_counter();
    
    if firmware_version < secure_counter {
        // ❌ Downgrade attack detected → halt
        panic_halt();
    }
    
    // ✅ Step 5: Jump to firmware (signature valid, version OK)
    jump_to_address(FIRMWARE_ADDRESS);
}

Stage 3: Firmware (Application)

Characteristics:

  • ⚠️ Mutable - Updated frequently
  • Signed - Signature verified by bootloader
  • Version-controlled - Rollback protection

Location: Flash memory (e.g., 0x0801_0000)

Code Example:

// ✅ This is your application code (verified by bootloader)

#[no_mangle]
pub extern "C" fn firmware_main() -> ! {
    // ✅ Application code runs here
    // Bootloader has already verified signature
    
    init_peripherals();
    init_security();
    
    loop {
        run_application();
    }
}

Public Key Storage Locations

| Key Type | Storage Location | Mutability | Verified By |
| --- | --- | --- | --- |
| ROM Public Key | On-chip ROM | ✅ Immutable | Hardware (fuses) |
| Bootloader Public Key | Bootloader flash | ⚠️ Mutable (but signed) | ROM bootloader |
| Firmware Public Key | Firmware flash | ⚠️ Mutable (but signed) | Bootloader |

Immutable vs Mutable Stages

| Stage | Mutability | Attack Vector | Defense |
| --- | --- | --- | --- |
| ROM Bootloader | ✅ Immutable | None (burned in silicon) | Hardware root of trust |
| Bootloader | ⚠️ Mutable | Malicious update | Signature verification by ROM |
| Firmware | ⚠️ Mutable | Malicious update | Signature verification by bootloader |

Bootloader Downgrade Attacks

Attack: Attacker installs old, vulnerable firmware version

Defense: Monotonic Version Counter

// ✅ GOOD: Rollback protection with secure counter
pub struct SecureVersionCounter {
    // ✅ Stored in OTP (One-Time Programmable) memory or secure element
    counter_address: *mut u32,
}

impl SecureVersionCounter {
    pub fn read(&self) -> u32 {
        unsafe {
            core::ptr::read_volatile(self.counter_address)
        }
    }
    
    pub fn increment(&mut self) -> Result<(), Error> {
        let current = self.read();
        
        // ✅ Write new value (monotonic: may only increase, never decrease)
        // NOTE: real OTP or secure-element counters require a device-specific
        // programming sequence; a bare volatile write is illustrative only
        unsafe {
            core::ptr::write_volatile(self.counter_address, current + 1);
        }
        
        // ✅ Verify write succeeded
        if self.read() != current + 1 {
            return Err(Error::CounterWriteFailed);
        }
        
        Ok(())
    }
    
    pub fn verify_version(&self, firmware_version: u32) -> Result<(), Error> {
        let min_version = self.read();
        
        if firmware_version < min_version {
            // ❌ Downgrade attack detected
            return Err(Error::RollbackAttempt);
        }
        
        Ok(())
    }
}

// ✅ Usage in bootloader
pub fn verify_and_boot_firmware() -> ! {
    let firmware = read_flash(FIRMWARE_ADDRESS, FIRMWARE_SIZE);
    let signature = read_flash(FIRMWARE_SIG_ADDRESS, SIGNATURE_SIZE);
    
    // ✅ Step 1: Verify signature
    if verify_signature(&firmware, &signature).is_err() {
        panic_halt();
    }
    
    // ✅ Step 2: Check version (rollback protection)
    let firmware_version = parse_firmware_version(&firmware);
    let counter = SecureVersionCounter { counter_address: OTP_COUNTER_ADDR };
    
    if counter.verify_version(firmware_version).is_err() {
        // ❌ Downgrade attack → halt
        panic_halt();
    }
    
    // ✅ Step 3: Boot firmware
    jump_to_address(FIRMWARE_ADDRESS);
}

Secure Boot Best Practices

✅ DO:

  1. Use hardware root of trust (immutable ROM bootloader)
  2. Store public keys in ROM (cannot be modified)
  3. Implement rollback protection (monotonic counter)
  4. Verify every stage (ROM → bootloader → firmware)
  5. Use strong crypto (ECDSA P-256 or Ed25519)

❌ DON’T:

  1. Store private keys on device (only public keys)
  2. Allow unsigned code execution (always verify)
  3. Skip version checks (enables downgrade attacks)
  4. Use weak crypto (RSA-1024, SHA-1)
  5. Trust mutable storage (flash can be modified)

Key Takeaways

Secure Boot Chain Rules:

  1. ROM is root of trust - Immutable, first code to run
  2. Each stage verifies next - Chain of trust
  3. Public keys are immutable - Stored in ROM or OTP
  4. Rollback protection is mandatory - Monotonic version counter
  5. Downgrade attacks are real - Always check version

Critical Rule: Secure boot is only as strong as its weakest link. If ROM is compromised (e.g., factory backdoor), the entire chain fails.


Secure Updates with Power-Failure Safety

⚠️ CRITICAL: Firmware updates must be atomic and power-failure safe.

The Problem:

  • Power loss during update = bricked device
  • Partial writes corrupt firmware
  • No way to recover without JTAG

Solution: A/B Partition Scheme

┌────────────────────────────────────────────┐
│          FLASH MEMORY LAYOUT               │
├────────────────────────────────────────────┤
│                                            │
│  ┌──────────────┐                         │
│  │  Bootloader  │  (64 KB)                │
│  │  (Protected) │                         │
│  └──────────────┘                         │
│                                            │
│  ┌──────────────┐                         │
│  │  Partition A │  (256 KB)               │
│  │  (Active)    │  ← Currently running    │
│  └──────────────┘                         │
│                                            │
│  ┌──────────────┐                         │
│  │  Partition B │  (256 KB)               │
│  │  (Inactive)  │  ← Write new firmware   │
│  └──────────────┘                         │
│                                            │
└────────────────────────────────────────────┘
// ✅ GOOD: Power-failure safe firmware updater
pub struct SecureFirmwareUpdater {
    public_key: [u8; 64],
    active_partition: Partition,
}

#[derive(Clone, Copy)]
pub enum Partition {
    A,
    B,
}

impl Partition {
    fn address(&self) -> u32 {
        match self {
            Partition::A => 0x0801_0000, // Partition A start
            Partition::B => 0x0805_0000, // Partition B start
        }
    }
    
    fn size(&self) -> usize {
        256 * 1024 // 256 KB
    }
}

impl SecureFirmwareUpdater {
    // ✅ Power-failure safe update process
    pub fn update_firmware(&mut self, update: &[u8], signature: &[u8]) -> Result<(), Error> {
        // ✅ Step 1: Verify signature (before writing anything)
        if verify_ecdsa_signature(update, signature, &self.public_key).is_err() {
            return Err(Error::InvalidSignature);
        }
        
        // ✅ Step 2: Verify version (rollback protection)
        let new_version = parse_firmware_version(update)?;
        let current_version = read_secure_counter()?;
        
        if new_version <= current_version {
            return Err(Error::RollbackAttempt);
        }
        
        // ✅ Step 3: Determine inactive partition
        let inactive_partition = match self.active_partition {
            Partition::A => Partition::B,
            Partition::B => Partition::A,
        };
        
        // ✅ Step 4: Write to inactive partition (safe if power fails)
        self.write_firmware_to_partition(update, inactive_partition)?;
        
        // ✅ Step 5: Verify written firmware (checksum)
        self.verify_partition(inactive_partition, update)?;
        
        // ✅ Step 6: Atomically switch active partition (single write)
        self.switch_active_partition(inactive_partition)?;
        
        // ✅ Step 7: Advance the secure version counter to the new version (irreversible)
        increment_secure_counter(new_version)?;
        
        // ✅ Step 8: Reboot into the new firmware (system_reset never returns)
        system_reset();
    }
    
    // ✅ Write firmware to partition (can be interrupted safely)
    fn write_firmware_to_partition(&self, firmware: &[u8], partition: Partition) -> Result<(), Error> {
        let address = partition.address();
        
        // ✅ Erase partition first
        flash_erase(address, partition.size())?;
        
        // ✅ Write firmware in chunks (can be interrupted)
        const CHUNK_SIZE: usize = 256;
        for (i, chunk) in firmware.chunks(CHUNK_SIZE).enumerate() {
            let offset = i * CHUNK_SIZE;
            flash_write(address + offset as u32, chunk)?;
            
            // ✅ Optional: Watchdog kick to prevent timeout
            watchdog_kick();
        }
        
        Ok(())
    }
    
    // ✅ Verify written firmware matches expected
    fn verify_partition(&self, partition: Partition, expected: &[u8]) -> Result<(), Error> {
        let address = partition.address();
        let written = flash_read(address, expected.len())?;
        
        if written != expected {
            return Err(Error::VerificationFailed);
        }
        
        Ok(())
    }
    
    // ✅ Atomically switch active partition (single write)
    fn switch_active_partition(&mut self, new_partition: Partition) -> Result<(), Error> {
        // ✅ Write to protected flash location (bootloader config)
        const ACTIVE_PARTITION_ADDR: u32 = 0x0800_FFF0;
        
        let value: u32 = match new_partition {
            Partition::A => 0xAAAA_AAAA,
            Partition::B => 0xBBBB_BBBB,
        };
        
        // ✅ Single atomic write (power-failure safe)
        flash_write(ACTIVE_PARTITION_ADDR, &value.to_le_bytes())?;
        
        self.active_partition = new_partition;
        Ok(())
    }
}

// ✅ Bootloader reads active partition on boot
pub fn bootloader_select_partition() -> Partition {
    const ACTIVE_PARTITION_ADDR: u32 = 0x0800_FFF0;
    
    let value = unsafe {
        core::ptr::read_volatile(ACTIVE_PARTITION_ADDR as *const u32)
    };
    
    match value {
        0xAAAA_AAAA => Partition::A,
        0xBBBB_BBBB => Partition::B,
        _ => {
            // ✅ Default to Partition A if invalid
            Partition::A
        }
    }
}

Power-Failure Scenarios

| Scenario | Result | Recovery |
| --- | --- | --- |
| Power loss before write | ✅ No change | Boot normally |
| Power loss during write | ✅ Old firmware intact | Boot from active partition |
| Power loss after write, before switch | ✅ Old firmware still active | Boot normally, retry update |
| Power loss after switch | ✅ New firmware active | Boot from new partition |
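
The "retry update" recovery path above is often implemented as a trial-boot scheme: the bootloader gives new firmware a limited number of boots, and reverts to the fallback partition unless the firmware confirms itself after a successful self-test. A minimal sketch, assuming hypothetical names (`PartitionFlags`, `select_on_boot`) and using `char` flags as a simplified stand-in for the flash config words:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum BootState {
    Confirmed,
    Trial { boots_remaining: u8 },
}

pub struct PartitionFlags {
    pub active: char,   // 'A' or 'B' -- stand-in for the flash config word
    pub fallback: char, // last-known-good partition
    pub state: BootState,
}

impl PartitionFlags {
    /// Bootloader calls this on every reset to pick a partition.
    pub fn select_on_boot(&mut self) -> char {
        match self.state {
            BootState::Confirmed => self.active,
            // Trial budget exhausted without confirmation -> revert
            BootState::Trial { boots_remaining: 0 } => {
                self.active = self.fallback;
                self.state = BootState::Confirmed;
                self.active
            }
            // Still in trial: spend one boot and run the new firmware
            BootState::Trial { boots_remaining } => {
                self.state = BootState::Trial { boots_remaining: boots_remaining - 1 };
                self.active
            }
        }
    }

    /// New firmware calls this once its self-test passes.
    pub fn confirm(&mut self) {
        self.state = BootState::Confirmed;
    }
}
```

On real hardware these flags live in flash and must be updated with the same single-write atomicity as the partition switch itself.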

Update Process Flow

1. Verify signature ────────> [FAIL] → Reject update

2. Check version ───────────> [FAIL] → Reject (rollback)

3. Write to inactive ───────> [POWER LOSS] → Old firmware OK

4. Verify write ────────────> [FAIL] → Retry or reject

5. Switch partition ────────> [POWER LOSS] → Old firmware OK
   (atomic)
6. Increment counter ───────> [POWER LOSS] → New firmware OK

7. Reboot ──────────────────> New firmware running

Key Takeaways

Power-Failure Safe Update Rules:

  1. Use A/B partitions - Never overwrite running firmware
  2. Verify before writing - Check signature and version first
  3. Atomic partition switch - Single write operation
  4. Verify after writing - Checksum before switching
  5. Rollback on failure - Always have working firmware

Critical Rule: Firmware updates must be atomic and power-failure safe. A bricked device in the field is unrecoverable without physical access.


Advanced Scenarios

Scenario 1: Secure IoT Gateway

Implementation:

  • Encrypted communication
  • Device authentication
  • Secure key management
  • OTA update support
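
Device authentication in a gateway is commonly a challenge-response exchange over a shared per-device key. The sketch below shows the structure only: `toy_tag` is NOT real cryptography, just a placeholder for a proper MAC such as HMAC-SHA-256, and all names are illustrative.

```rust
// ⚠️ NOT real crypto: placeholder for a real MAC (e.g. HMAC-SHA-256)
fn toy_tag(key: &[u8; 16], nonce: &[u8; 8]) -> [u8; 8] {
    let mut tag = [0u8; 8];
    for i in 0..8 {
        tag[i] = key[i] ^ key[i + 8] ^ nonce[i].rotate_left(3);
    }
    tag
}

pub struct Gateway {
    device_key: [u8; 16], // per-device key, provisioned at manufacture
}

impl Gateway {
    pub fn new(device_key: [u8; 16]) -> Self {
        Self { device_key }
    }

    /// Verify the device's response to a fresh random nonce.
    pub fn verify_device(&self, nonce: &[u8; 8], response: &[u8; 8]) -> bool {
        let expected = toy_tag(&self.device_key, nonce);
        // Constant-time comparison: accumulate differences, decide once
        let mut diff = 0u8;
        for (a, b) in expected.iter().zip(response.iter()) {
            diff |= a ^ b;
        }
        diff == 0
    }
}

/// Device side: answer the gateway's challenge with the shared key.
pub fn device_respond(device_key: &[u8; 16], nonce: &[u8; 8]) -> [u8; 8] {
    toy_tag(device_key, nonce)
}
```

The nonce must be fresh per challenge (replay protection), and in production the per-device key should live in a secure element rather than general flash.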

Scenario 2: Hardware Wallet

Features:

  • Secure key storage
  • Transaction signing
  • PIN protection
  • Tamper detection
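
PIN protection combines a constant-time comparison (to resist timing side-channels) with a lockout counter (to resist brute force). A minimal sketch with hypothetical names (`PinGuard`, `ct_eq`); a real wallet would verify a KDF-derived value inside a secure element, not a raw PIN:

```rust
const MAX_ATTEMPTS: u8 = 3;

#[derive(Debug, PartialEq)]
pub enum PinError {
    LockedOut,
    WrongPin,
}

pub struct PinGuard {
    stored_pin: [u8; 4], // illustrative: real devices store a KDF output
    failed_attempts: u8, // must persist across resets in real hardware
}

/// Constant-time byte comparison: XOR-accumulate, decide once at the end.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

impl PinGuard {
    pub fn new(stored_pin: [u8; 4]) -> Self {
        Self { stored_pin, failed_attempts: 0 }
    }

    pub fn verify(&mut self, pin: &[u8]) -> Result<(), PinError> {
        // ✅ Check lockout BEFORE comparing (fail closed)
        if self.failed_attempts >= MAX_ATTEMPTS {
            return Err(PinError::LockedOut);
        }
        if ct_eq(pin, &self.stored_pin) {
            self.failed_attempts = 0;
            Ok(())
        } else {
            self.failed_attempts += 1;
            Err(PinError::WrongPin)
        }
    }
}
```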

Troubleshooting Guide

Problem: Binary Too Large

Solution:

  • Use LTO (Link Time Optimization)
  • Disable debug symbols
  • Remove unused code
  • Optimize for size
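
The list above maps to a handful of Cargo profile settings. One possible size-focused release profile (tune for your target and measure the result):

```toml
[profile.release]
opt-level = "z"     # optimize aggressively for size ("s" is a milder option)
lto = true          # link-time optimization removes cross-crate dead code
codegen-units = 1   # slower builds, smaller and faster binaries
strip = true        # strip symbols from the final binary
panic = "abort"     # drop unwinding machinery (common on embedded targets)
```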

Problem: Performance Issues

Solution:

  • Profile embedded code
  • Optimize hot paths
  • Use hardware acceleration
  • Review algorithm choices

Real-World Case Study

Case Study: Secure IoT device using Rust

Implementation:

  • Rust firmware
  • Hardware security module
  • Encrypted communication
  • Secure boot

Results:

  • No memory safety vulnerabilities
  • Secure communication
  • Efficient resource usage
  • Easy to maintain

FAQ

Q: Is Rust suitable for all embedded systems?

A: Rust works well for:

  • Modern microcontrollers
  • IoT devices
  • Systems with sufficient resources
  • Security-critical applications

Q: How do I debug embedded Rust?

A: Multiple debugging options:

✅ RECOMMENDED: probe-rs (Rust-native debugger)

  • Pure Rust debugger (no OpenOCD needed)
  • Fast flashing and debugging
  • Works with VS Code, CLI
  • Supports most ARM Cortex-M chips
# Install the probe-rs tools (provides probe-rs, cargo-embed, cargo-flash)
cargo install probe-rs-tools --locked

# Flash and run
cargo embed --release

# Debug with probe-rs
probe-rs debug --chip STM32F401RETx target/thumbv7em-none-eabihf/release/firmware

Other Options:

  • GDB + OpenOCD: Traditional debugging (slower)
  • Serial logging: defmt or log crate with UART
  • Hardware debuggers: ST-Link, J-Link, Black Magic Probe
  • Semihosting: Print to debugger console (very slow)

Recommended Setup:

# Cargo.toml
[dependencies]
defmt = "0.3"           # Efficient logging
defmt-rtt = "0.4"       # Real-Time Transfer (fast)
panic-probe = "0.3"     # Panic handler for probe-rs

[profile.dev]
debug = 2               # Full debug info
opt-level = "s"         # Optimize for size

Key Takeaway: Use probe-rs for Rust-native debugging—it’s faster and easier than GDB+OpenOCD.


Code Review Checklist for Embedded Rust Security

Resource Constraints

  • Memory usage optimized for target hardware
  • Stack usage monitored and limited
  • Heap usage minimized where possible
  • Code size optimized (LTO, etc.)

Security Patterns

  • Secure boot implementation verified
  • Firmware updates are authenticated
  • Secrets stored securely (TPM, secure element)
  • No hardcoded credentials

Safety

  • No panic!() in production code (use Result)
  • Watchdog timer configured
  • Proper error recovery mechanisms
  • Input validation for all external data

Testing

  • Tests run on target hardware
  • Hardware-in-the-loop testing performed
  • Power consumption tested
  • Stress testing performed

Deployment

  • Binary signatures verified
  • Secure update mechanisms
  • Rollback capabilities tested
  • Logging doesn’t leak secrets
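
One way to enforce "logging doesn't leak secrets" at the type level is a manual `Debug` implementation that redacts sensitive fields, so even a forgotten `{:?}` log line cannot print the key. A sketch with illustrative names:

```rust
use core::fmt;

pub struct DeviceCredentials {
    pub device_id: u32,
    api_key: [u8; 16], // secret -- must never reach logs
}

impl DeviceCredentials {
    pub fn new(device_id: u32, api_key: [u8; 16]) -> Self {
        Self { device_id, api_key }
    }

    /// Controlled access for the code that actually needs the key.
    pub fn api_key(&self) -> &[u8; 16] {
        &self.api_key
    }
}

// Manual Debug impl: `{:?}` can never print the key
impl fmt::Debug for DeviceCredentials {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("DeviceCredentials")
            .field("device_id", &self.device_id)
            .field("api_key", &"[REDACTED]")
            .finish()
    }
}
```

The same pattern applies to `defmt::Format` on embedded targets; crates like `zeroize` additionally clear secrets from memory on drop.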

Conclusion

Rust provides excellent foundations for embedded security applications. Its safety guarantees and resource efficiency make it ideal for IoT and hardware security.

Action Steps

  1. Learn embedded Rust basics
  2. Practice with hardware
  3. Implement security patterns
  4. Test on real devices
  5. Deploy securely

Next Steps

  • Explore embedded Rust ecosystem
  • Study IoT security standards
  • Learn about secure boot
  • Practice with real hardware

Remember: Embedded security requires careful consideration of resource constraints, hardware capabilities, and security requirements. Test thoroughly on target hardware.


Cleanup

# Clean up embedded build artifacts
rm -rf target/
rm -f *.elf *.bin *.hex
rm -f *.map *.lst

# Clean up any hardware-specific files
find . -name "*_embedded*" -delete

Validation: Verify no embedded build artifacts remain in the project directory.

FAQs

Can I use these labs in production?

No—treat them as educational. Adapt, review, and security-test before any production use.

How should I follow the lessons?

Start from the Learn page order or use Previous/Next on each lesson; both follow the same sequence.

What if I lack test data or infra?

Use synthetic data and local/lab environments. Never target networks or data you don't own or have written permission to test.

Can I share these materials?

Yes, with attribution and respecting any licensing for referenced tools or datasets.