Testing and Fuzzing Rust Security Code (2026)
Learn to test and fuzz Rust applications for security, including unit tests, integration tests, property-based testing, and fuzzing with cargo-fuzz.
Key Takeaways
- Unit Testing: Test individual components in isolation
- Integration Testing: Test component interactions
- Property-Based Testing: Test with generated inputs
- Fuzzing: Discover security vulnerabilities automatically
- Security Testing: Focus on security-relevant test cases
- CI/CD Integration: Automate testing in pipelines
Table of Contents
- Testing Fundamentals
- Unit Testing
- Integration Testing
- Property-Based Testing
- Fuzzing with cargo-fuzz
- Security Testing
- Advanced Scenarios
- Troubleshooting Guide
- Real-World Case Study
- FAQ
- Conclusion
TL;DR
Master Rust testing and fuzzing for security tools. Learn unit tests, integration tests, property-based testing, and fuzzing to ensure code security and reliability.
Prerequisites
- Rust 1.80+ installed
- Understanding of Rust basics
- Familiarity with testing concepts
Safety and Legal
- Test only code you own or have permission to test
- Use isolated environments for fuzzing
- Follow responsible disclosure for vulnerabilities
- Document security test results
Testing Fundamentals
🎯 What Each Testing Technique Finds (Critical Concept)
Understanding the differences is essential for security engineers.
Different testing techniques find different types of bugs. Using the wrong technique wastes time and misses vulnerabilities.
| Testing Technique | Best At Finding | Example Bugs | When To Use |
|---|---|---|---|
| Unit Tests | Logic bugs, business logic errors | Off-by-one, wrong algorithm, incorrect validation | Always - foundation of testing |
| Integration Tests | Workflow bugs, component interaction issues | API contract violations, state management bugs | For multi-component systems |
| Property-Based Tests | Edge cases, boundary conditions | Integer overflow, empty input handling, special characters | For validation logic, parsers |
| Fuzzing | Crashes, panics, OOM, undefined behavior | Buffer overflows, assertion failures, infinite loops | For input parsing, untrusted data |
| Mutation Testing | Test quality, missing assertions | Weak tests that pass even with bugs | Advanced - to validate test suite |
Detailed Comparison
Unit Tests: Logic Verification
What they find:
- ✅ Incorrect business logic
- ✅ Wrong calculations
- ✅ Invalid state transitions
- ✅ API contract violations
What they miss:
- ❌ Edge cases you didn’t think of
- ❌ Crashes from unexpected input
- ❌ Performance issues
- ❌ Concurrency bugs
Example:
#[test]
fn test_port_validation() {
    assert!(is_valid_port(80));  // ✅ Tests happy path
    assert!(!is_valid_port(0));  // ✅ Tests known edge case
    // ❌ Misses: What about 1? u16::MAX? Reserved ports you never thought of?
}
Property-Based Tests: Edge Case Discovery
What they find:
- ✅ Edge cases you forgot
- ✅ Boundary conditions
- ✅ Input combinations
- ✅ Invariant violations
What they miss:
- ❌ Crashes from malformed input (fuzzing finds these)
- ❌ Performance issues
- ❌ Real-world attack patterns
Example:
proptest! {
    #[test]
    fn test_any_port(port in any::<u16>()) {
        // ✅ Tests across the entire u16 range
        // ✅ Finds edge cases like 0 and 65535
        let result = is_valid_port(port);
        if port > 0 {
            // A u16 can never exceed 65535, so any nonzero port is valid
            prop_assert!(result);
        }
    }
}
Fuzzing: Crash Discovery
What they find:
- ✅ Crashes and panics
- ✅ Out-of-memory conditions
- ✅ Undefined behavior
- ✅ Assertion failures
- ✅ Infinite loops
- ✅ Stack overflows
What they miss:
- ❌ Logic bugs that don’t crash
- ❌ Incorrect but valid output
- ❌ Performance regressions
Example:
fuzz_target!(|data: &[u8]| {
    // ✅ Finds crashes from malformed input
    // ✅ Discovers panics from unexpected data
    // ❌ Won't find logic bugs if the code doesn't crash
    let _ = parse_packet(data);
});
Real-World Security Example
Scenario: Parsing a network packet
fn parse_packet(data: &[u8]) -> Result<Packet, Error> {
    if data.len() < 20 {
        return Err(Error::TooShort);
    }
    let version = data[0];
    let length = u16::from_be_bytes([data[2], data[3]]);
    // ... more parsing
}
What each technique finds:
| Technique | Finds | Example |
|---|---|---|
| Unit Test | ✅ Version validation works | assert_eq!(parse_packet(&[0x04, ...]).version, 4) |
| Property Test | ✅ All lengths handled | Tests with random lengths 0-65535 |
| Fuzzing | ✅ Panic on data[2] access | Crashes with 1-byte input |
| Integration Test | ✅ End-to-end parsing works | Full packet from real capture |
- Without fuzzing: you'd miss the crash on short input (1-2 bytes)
- Without property tests: you'd miss edge cases like length=0 or length=65535
- Without unit tests: you'd miss incorrect version parsing logic
When To Use What
Use Unit Tests When:
- ✅ Testing specific functions
- ✅ Verifying business logic
- ✅ Checking error handling
- ✅ Fast feedback needed
Use Property-Based Tests When:
- ✅ Testing validation logic
- ✅ Checking invariants
- ✅ Exploring edge cases
- ✅ Testing parsers (combined with fuzzing)
Use Fuzzing When:
- ✅ Parsing untrusted input
- ✅ Handling network data
- ✅ Processing file formats
- ✅ Security-critical code
Use Integration Tests When:
- ✅ Testing workflows
- ✅ Verifying component interaction
- ✅ End-to-end scenarios
- ✅ API contracts
Common Mistakes
❌ Mistake 1: Fuzzing instead of unit tests
// ❌ BAD: Using fuzzing for logic testing
fuzz_target!(|port: u16| {
    // This is slow and doesn't test specific logic
    let _ = is_valid_port(port);
});

// ✅ GOOD: Use unit tests for logic
#[test]
fn test_port_validation() {
    assert!(is_valid_port(80));
    assert!(!is_valid_port(0));
}
❌ Mistake 2: Only unit testing parsers
// ❌ BAD: Only testing the happy path
#[test]
fn test_parse_packet() {
    let data = vec![0x04, 0x00, 0x00, 0x14, ...]; // Valid packet
    assert!(parse_packet(&data).is_ok());
}

// ✅ GOOD: Also fuzz for crashes
fuzz_target!(|data: &[u8]| {
    let _ = parse_packet(data); // Finds crashes from malformed input
});
❌ Mistake 3: Expecting fuzzing to find logic bugs
// ❌ BAD: Fuzzing won't find this logic bug
fn calculate_discount(price: u32, discount: u32) -> u32 {
    price - (price * discount / 100) // BUG: underflows when discount > 100 (should clamp)
}
// In release builds the subtraction wraps silently — nothing crashes,
// so the fuzzer never flags the wrong result

// ✅ GOOD: Use unit tests for logic
#[test]
fn test_discount_logic() {
    assert_eq!(calculate_discount(100, 50), 50);
    assert_eq!(calculate_discount(100, 150), 0); // ✅ Catches the bug!
}
Key Takeaway
Testing is layered defense:
- Unit tests → Verify logic is correct
- Property tests → Find edge cases you missed
- Fuzzing → Discover crashes from malformed input
- Integration tests → Ensure components work together
Use all techniques together for comprehensive security testing.
Security Rule: If your code processes untrusted input (network, files, user input), you MUST fuzz it. Unit tests alone are insufficient.
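As a tiny self-contained illustration of the layered idea (the `is_valid_port` helper is hypothetical): a unit-test layer spot-checks known values, while a property-style layer covers the whole input space — here the u16 space is small enough to brute-force exhaustively.

```rust
// Hypothetical validator; the u16 type already caps the value at 65535.
fn is_valid_port(port: u16) -> bool {
    port > 0
}

fn main() {
    // Unit-test layer: spot-check known values
    assert!(is_valid_port(80));
    assert!(is_valid_port(65535));
    assert!(!is_valid_port(0));

    // Property-test layer: the u16 space is small enough to check exhaustively
    for port in 0..=u16::MAX {
        assert_eq!(is_valid_port(port), port != 0);
    }
    println!("all 65536 port values checked");
}
```

Fuzzing and integration testing then cover what neither layer sees: crashes on malformed input, and whole-workflow behavior.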
Rust Testing Framework
Rust has built-in testing support:
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_functionality() {
        assert_eq!(add(2, 2), 4);
    }
}
Run tests:
cargo test
Unit Testing
Testing Functions
// A u16 already caps the value at 65535, so only 0 needs rejecting.
pub fn validate_port(port: u16) -> bool {
    port > 0
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_valid_ports() {
        assert!(validate_port(80));
        assert!(validate_port(443));
        assert!(validate_port(65535));
    }

    #[test]
    fn test_invalid_ports() {
        assert!(!validate_port(0));
        // Note: validate_port(65536) would not even compile —
        // 65536 does not fit in a u16, so the type system rules it out.
    }
}
Testing Error Cases
use thiserror::Error;

#[derive(Error, Debug)]
pub enum SecurityError {
    #[error("Invalid port: {0}")]
    InvalidPort(u16),
}

pub fn parse_port(port: u16) -> Result<u16, SecurityError> {
    // A u16 cannot exceed 65535, so only 0 is invalid.
    if port == 0 {
        return Err(SecurityError::InvalidPort(port));
    }
    Ok(port)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_valid_port() {
        assert_eq!(parse_port(80).unwrap(), 80);
        assert_eq!(parse_port(65535).unwrap(), 65535);
    }

    #[test]
    fn test_parse_invalid_port() {
        assert!(parse_port(0).is_err());
    }
}
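Beyond `is_err()`, it's worth pinning down the exact error variant. A stdlib-only sketch of the same `parse_port` idea (deriving `PartialEq` by hand instead of using `thiserror`, so it runs standalone):

```rust
// Stand-in for the SecurityError/parse_port pair above, stdlib only.
#[derive(Debug, PartialEq)]
enum SecurityError {
    InvalidPort(u16),
}

fn parse_port(port: u16) -> Result<u16, SecurityError> {
    // Only zero needs rejecting; u16 caps at 65535 by construction.
    if port == 0 {
        return Err(SecurityError::InvalidPort(port));
    }
    Ok(port)
}

fn main() {
    // Assert the exact variant, not just "some error"
    assert_eq!(parse_port(0), Err(SecurityError::InvalidPort(0)));
    assert!(matches!(parse_port(0), Err(SecurityError::InvalidPort(0))));
    assert_eq!(parse_port(443), Ok(443));
    println!("error-variant assertions passed");
}
```

Asserting the variant catches regressions where the function starts returning the wrong kind of error — something `is_err()` alone would never notice.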
Integration Testing
Creating Integration Tests
Create tests/integration_test.rs:
use your_crate::SecurityTool;

#[test]
fn test_end_to_end_workflow() {
    let tool = SecurityTool::new();
    let result = tool.scan("127.0.0.1").unwrap();
    assert!(!result.is_empty());
}
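Since `your_crate::SecurityTool` is a placeholder, here is a self-contained sketch of the same end-to-end pattern with a stand-in tool (all names are hypothetical). Note that integration tests should exercise the error path as well as the happy path:

```rust
// Hypothetical stand-ins for the crate under test.
struct SecurityTool;

#[derive(Debug)]
struct Finding {
    port: u16,
}

impl SecurityTool {
    fn new() -> Self {
        SecurityTool
    }

    // Pretend scan: rejects empty targets, reports one open port.
    fn scan(&self, target: &str) -> Result<Vec<Finding>, String> {
        if target.is_empty() {
            return Err("empty target".to_string());
        }
        Ok(vec![Finding { port: 80 }])
    }
}

fn main() {
    let tool = SecurityTool::new();
    // Happy path: the whole workflow produces findings
    let findings = tool.scan("127.0.0.1").unwrap();
    assert!(!findings.is_empty());
    assert_eq!(findings[0].port, 80);
    // Error path: failures should surface as errors, not panics
    assert!(tool.scan("").is_err());
    println!("workflow ok");
}
```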
Property-Based Testing
Using proptest
Add to Cargo.toml:
[dev-dependencies]
proptest = "1.4"
Example:
use proptest::prelude::*;

proptest! {
    #[test]
    fn test_port_validation(port in any::<u16>()) {
        let result = validate_port(port);
        if port > 0 {
            // u16 cannot exceed 65535, so any nonzero port is valid
            prop_assert!(result);
        } else {
            prop_assert!(!result);
        }
    }
}
Fuzzing with cargo-fuzz
Setup
# Install cargo-fuzz
cargo install cargo-fuzz
# Initialize fuzzing
cargo fuzz init
Create Fuzz Target
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Test your function with fuzzed input
    if let Ok(input) = std::str::from_utf8(data) {
        parse_security_config(input);
    }
});
Run fuzzing:
cargo fuzz run your_fuzz_target
⚠️ Critical: Avoid unwrap() in Fuzz Targets
This is a common fuzzing mistake that hides bugs.
// ❌ BAD: unwrap() turns every expected parse error into a crash report
fuzz_target!(|data: &[u8]| {
    if let Ok(input) = std::str::from_utf8(data) {
        parse_config(input).unwrap(); // ❌ Expected errors now panic — noise drowns out real bugs!
    }
});

// ✅ GOOD: Let panics propagate, ignore expected errors
fuzz_target!(|data: &[u8]| {
    if let Ok(input) = std::str::from_utf8(data) {
        let _ = parse_config(input); // ✅ Errors are ignored, real panics still propagate
    }
});

// ✅ BETTER: Handle both Ok and Err paths explicitly
fuzz_target!(|data: &[u8]| {
    if let Ok(input) = std::str::from_utf8(data) {
        match parse_config(input) {
            Ok(_) => {}  // Valid parse
            Err(_) => {} // Expected error (not a crash)
        }
        // Panics still propagate and are caught by the fuzzer
    }
});
Why this matters:
- `unwrap()` converts expected errors into panics
- The fuzzer can't distinguish between "expected error" and "crash"
- You lose valuable information about what failed
Using arbitrary for Structured Fuzzing
For complex input types, use the arbitrary crate:
# In fuzz/Cargo.toml (or whichever crate defines the fuzzed types)
[dependencies]
arbitrary = { version = "1.3", features = ["derive"] }
Example:
use arbitrary::Arbitrary;

#[derive(Arbitrary, Debug)]
struct PacketHeader {
    version: u8,
    flags: u16,
    length: u32,
    checksum: u32,
}

fuzz_target!(|header: PacketHeader| {
    // ✅ Fuzzer generates structurally valid PacketHeader values
    // ✅ Much better than raw bytes for complex types
    process_header(&header);
});
Benefits:
- Generates structurally valid inputs
- Explores deeper code paths
- Finds bugs in business logic, not just parsing
Crash Triage Workflow (Critical for Security)
What to do after fuzzing finds a crash
Finding a crash is just the beginning. Security engineers must:
- Reproduce the crash
- Minimize the input
- Classify severity
- Create regression test
- Fix and verify
Step 1: Reproduce the Crash
Fuzzer saves crashing inputs to fuzz/artifacts/:
# Fuzzer found a crash
$ cargo fuzz run parse_packet
...
Crash detected! Saved to: fuzz/artifacts/parse_packet/crash-abc123
# Reproduce the crash
$ cargo fuzz run parse_packet fuzz/artifacts/parse_packet/crash-abc123
Verify it’s reproducible:
# Run 10 times to check for flakiness
for i in {1..10}; do
cargo fuzz run parse_packet fuzz/artifacts/parse_packet/crash-abc123
done
Step 2: Minimize the Input
Large crashing inputs are hard to debug. Minimize them:
# Minimize the crashing input
cargo fuzz tmin parse_packet fuzz/artifacts/parse_packet/crash-abc123
This produces the smallest input that still crashes:
Original input: 4,582 bytes
Minimized input: 3 bytes
# Much easier to debug!
Manual minimization (if tmin fails):
// Original crash: 1000-byte input
// Try removing bytes systematically
let minimal = &[0x00, 0x01, 0x02]; // Found: 3 bytes trigger crash
Step 3: Classify Severity
Not all crashes are equal. Classify for triage:
| Crash Type | Severity | Exploitability | Example |
|---|---|---|---|
| Panic (bounds check) | Medium | Low (DoS only) | index out of bounds |
| Panic (unwrap) | Medium | Low (DoS only) | unwrap() on None |
| Stack overflow | High | Medium (DoS, possible RCE) | Infinite recursion |
| OOM (Out of Memory) | High | Medium (DoS) | Unbounded allocation |
| Unsafe code crash | Critical | High (possible RCE) | Segfault in unsafe block |
| Assertion failure | Low-Medium | Low | assert! failed |
Triage questions:
1. Does it crash in safe or unsafe code?
   - Safe code: Usually DoS only
   - Unsafe code: Possible memory corruption (RCE)
2. Can an attacker trigger it remotely?
   - Network input: High priority
   - Local file: Medium priority
   - CLI argument: Low priority
3. Does it leak sensitive data?
   - Memory dumps, error messages
4. Can it be triggered repeatedly?
   - Amplification attack potential
Example triage:
// Crash: index out of bounds in safe code
fn parse_header(data: &[u8]) -> Header {
    let version = data[0]; // ❌ Panics if data.len() == 0
    // ...
}

// Severity: MEDIUM
// - Safe Rust (no memory corruption)
// - DoS only (panic crashes the program)
// - Remotely triggerable (network input)
// - Easy to fix (add a bounds check)
Step 4: Create Regression Test
Turn every crash into a test to prevent reintroduction:
#[test]
#[should_panic(expected = "index out of bounds")]
fn test_crash_empty_input() {
    // Regression test for fuzzer crash
    let data = &[]; // Minimized crashing input
    parse_header(data);
}

// Better: Fix the bug and test the fix
#[test]
fn test_empty_input_handled() {
    let data = &[];
    assert!(parse_header(data).is_err()); // ✅ Returns an error instead of panicking
}
Organize crash tests:
#[cfg(test)]
mod fuzz_regression_tests {
    use super::*;

    // All crashes found by fuzzing

    #[test]
    fn test_crash_2024_01_15_empty_input() {
        // Date helps track when the bug was found
        assert!(parse_header(&[]).is_err());
    }

    #[test]
    fn test_crash_2024_01_16_short_input() {
        assert!(parse_header(&[0x00]).is_err());
    }
}
Step 5: Fix and Verify
Fix the bug:
// Before (panics on empty input)
fn parse_header(data: &[u8]) -> Header {
    let version = data[0]; // ❌ Panics
    // ...
}

// After (handles empty input)
fn parse_header(data: &[u8]) -> Result<Header, Error> {
    if data.is_empty() {
        return Err(Error::TooShort);
    }
    let version = data[0]; // ✅ Safe
    // ...
}
Verify the fix:
# 1. Regression test passes
cargo test test_crash_empty_input
# 2. Fuzzer no longer crashes
cargo fuzz run parse_header -- -max_total_time=300
# 3. Minimized input no longer crashes
cargo fuzz run parse_header fuzz/artifacts/parse_header/crash-abc123
Complete Crash Triage Checklist
- Reproduce: Crash is reproducible from saved artifact
- Minimize: Input reduced to smallest crashing case
- Classify: Severity and exploitability assessed
- Regression test: Test added to prevent reintroduction
- Fix: Bug fixed with proper error handling
- Verify: Fuzzer runs clean for 5+ minutes
- Document: Crash details recorded in commit message
Real-World Example
Fuzzer found crash in packet parser:
1. Reproduce:
$ cargo fuzz run parse_packet fuzz/artifacts/parse_packet/crash-abc123
✅ Reproducible panic: "index out of bounds"
2. Minimize:
$ cargo fuzz tmin parse_packet fuzz/artifacts/parse_packet/crash-abc123
✅ Minimized from 1024 bytes to 2 bytes: [0x00, 0x01]
3. Classify:
- Panic in safe code (bounds check)
- Severity: MEDIUM (DoS only)
- Remotely triggerable (network input)
- Fix priority: HIGH
4. Regression test:
#[test]
fn test_short_packet() {
    assert!(parse_packet(&[0x00, 0x01]).is_err());
}
5. Fix:
- Added length check before indexing
- Returns Error::TooShort for short packets
6. Verify:
✅ Regression test passes
✅ Fuzzer runs 10 minutes without crash
✅ Minimized input returns error (no panic)
Key Takeaway
Crash triage is more important than fuzzing itself.
- Finding crashes is easy (fuzzer does it automatically)
- Understanding and fixing crashes requires skill
- Every crash should become a regression test
- Classify severity to prioritize fixes
Security Rule: Never ignore fuzzer crashes. Even “harmless” panics can be DoS vulnerabilities.
Sanitizers: Essential for Security Testing
Sanitizers detect bugs that tests and fuzzing might miss.
Rust’s memory safety prevents most memory bugs, but:
- `unsafe` code can still have issues
- Logic bugs exist in safe code
- FFI (C interop) can introduce vulnerabilities
- Leaks and undefined behavior can occur
Available Sanitizers
| Sanitizer | Detects | When To Use |
|---|---|---|
| AddressSanitizer (ASan) | Use-after-free, buffer overflows, memory corruption | Always for unsafe code |
| LeakSanitizer (LSan) | Memory leaks | Long-running services, resource management |
| ThreadSanitizer (TSan) | Data races, race conditions | Concurrent code |
| MemorySanitizer (MSan) | Uninitialized memory reads | Unsafe code, FFI |
| UndefinedBehaviorSanitizer (UBSan) | Undefined behavior (integer overflow, null deref) | Unsafe code, arithmetic |
AddressSanitizer (ASan) - Most Important
Detects memory corruption in unsafe code:
# Enable AddressSanitizer
export RUSTFLAGS="-Z sanitizer=address"
export ASAN_OPTIONS="detect_leaks=1"
# Run tests with ASan
cargo +nightly test --target x86_64-unknown-linux-gnu
# Run fuzzing with ASan (automatically enabled)
cargo +nightly fuzz run --sanitizer address your_target
Example bug ASan catches:
unsafe fn buggy_code() {
    let mut data = vec![1, 2, 3];
    let ptr = data.as_mut_ptr();
    drop(data); // Free the Vec's heap buffer
    // ❌ Use-after-free (ASan detects this!)
    *ptr = 42;
}

// Without ASan: Might work, might crash, might corrupt memory
// With ASan: Immediate error with a stack trace
LeakSanitizer (LSan) - Memory Leak Detection
Detects memory leaks:
# Enable LeakSanitizer (included with ASan)
export RUSTFLAGS="-Z sanitizer=leak"
cargo +nightly test --target x86_64-unknown-linux-gnu
Example leak LSan catches:
fn leaky_code() {
    let data = Box::new([0u8; 1024]);
    std::mem::forget(data); // ❌ Leak (LSan detects this!)
}

// LSan output:
// Direct leak of 1024 byte(s) in 1 object(s) allocated from:
//     #0 in leaky_code
ThreadSanitizer (TSan) - Race Condition Detection
Detects data races in concurrent code:
# Enable ThreadSanitizer
export RUSTFLAGS="-Z sanitizer=thread"
cargo +nightly test --target x86_64-unknown-linux-gnu
Example race TSan catches (note: this requires `unsafe` — safe Rust rejects the equivalent `Arc<Cell<_>>` version at compile time because `Cell` is not `Sync`, so TSan earns its keep on `unsafe` and FFI code):

use std::thread;

static mut COUNTER: u32 = 0;

fn racy_code() {
    let handles: Vec<_> = (0..10)
        .map(|_| {
            thread::spawn(|| unsafe {
                // ❌ Unsynchronized read-modify-write (TSan detects this!)
                COUNTER += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}

// TSan output:
// WARNING: ThreadSanitizer: data race
UndefinedBehaviorSanitizer (UBSan)
Detects undefined behavior:
# Enable UBSan
export RUSTFLAGS="-Z sanitizer=undefined"
cargo +nightly test --target x86_64-unknown-linux-gnu
Example UB UBSan catches:
fn undefined_behavior() {
    unsafe {
        let x: i32 = i32::MAX;
        // ❌ UB: unchecked overflow (note: plain `x + 1` is NOT UB in Rust —
        // it panics in debug builds and wraps in release builds)
        let _y = x.unchecked_add(1);
        let ptr: *const i32 = std::ptr::null();
        let _value = *ptr; // ❌ UB: null pointer dereference (UBSan detects!)
    }
}
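For contrast, the same overflow in safe Rust is defined behavior (a panic in debug builds, two's-complement wrap in release builds), and the `checked_*`/`wrapping_*`/`saturating_*` APIs make the intended semantics explicit:

```rust
fn main() {
    let x: i32 = i32::MAX;
    // Checked: returns None instead of overflowing
    assert_eq!(x.checked_add(1), None);
    // Wrapping: defined two's-complement wraparound
    assert_eq!(x.wrapping_add(1), i32::MIN);
    // Saturating: clamps at the numeric boundary
    assert_eq!(x.saturating_add(1), i32::MAX);
    println!("safe arithmetic is defined behavior");
}
```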
Practical Sanitizer Usage
1. Run sanitizers in CI:
# .github/workflows/sanitizers.yml
name: Sanitizers
on: [push, pull_request]

jobs:
  address-sanitizer:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          override: true
      - name: Run tests with AddressSanitizer
        run: |
          export RUSTFLAGS="-Z sanitizer=address"
          cargo +nightly test --target x86_64-unknown-linux-gnu

  leak-sanitizer:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          override: true
      - name: Run tests with LeakSanitizer
        run: |
          export RUSTFLAGS="-Z sanitizer=leak"
          cargo +nightly test --target x86_64-unknown-linux-gnu
2. Combine fuzzing with sanitizers:
# Fuzz with AddressSanitizer (default)
cargo +nightly fuzz run --sanitizer address parse_packet
# Fuzz with LeakSanitizer
cargo +nightly fuzz run --sanitizer leak parse_packet
# Fuzz with all sanitizers (slower but thorough)
for san in address leak memory thread; do
cargo +nightly fuzz run --sanitizer $san parse_packet -- -max_total_time=300
done
3. Sanitizer configuration:
# ASan options
export ASAN_OPTIONS="detect_leaks=1:abort_on_error=1:symbolize=1"
# LSan options
export LSAN_OPTIONS="suppressions=lsan.supp:print_suppressions=0"
# TSan options
export TSAN_OPTIONS="halt_on_error=1:second_deadlock_stack=1"
When To Use Each Sanitizer
AddressSanitizer (ASan):
- ✅ Always use for unsafe code
- ✅ Use for FFI (C interop)
- ✅ Use for fuzzing
- ⚠️ Moderate overhead (~2x slowdown)
LeakSanitizer (LSan):
- ✅ Long-running services
- ✅ Resource-intensive code
- ✅ After refactoring
- ✅ Very low overhead
ThreadSanitizer (TSan):
- ✅ Concurrent code
- ✅ Multi-threaded services
- ✅ Lock-free data structures
- ⚠️ High overhead (~10x slower)
MemorySanitizer (MSan):
- ✅ Unsafe code
- ✅ FFI boundaries
- ⚠️ Requires instrumented stdlib
UndefinedBehaviorSanitizer (UBSan):
- ✅ Unsafe code
- ✅ Arithmetic-heavy code
- ✅ Low overhead
Sanitizer Limitations
What sanitizers DON’T catch:
- ❌ Logic bugs (use unit tests)
- ❌ Performance issues (use profiling)
- ❌ Incorrect but safe code (use property tests)
- ❌ API misuse (use type system)
Key Takeaway
Sanitizers are essential for security-critical Rust code.
- Use ASan for all unsafe code and fuzzing
- Use LSan for long-running services
- Use TSan for concurrent code
- Combine sanitizers with fuzzing for maximum coverage
Security Rule: If your code has `unsafe` blocks or FFI, you MUST run sanitizers in CI.
Security Testing
Testing Input Validation
#[test]
fn test_sql_injection_prevention() {
    let malicious_input = "'; DROP TABLE users; --";
    let result = sanitize_input(malicious_input);
    assert!(!result.contains("DROP"));
}

#[test]
fn test_buffer_overflow_prevention() {
    let large_input = "A".repeat(1_000_000);
    let result = process_input(&large_input);
    assert!(result.is_ok());
}
⚠️ Important: Fuzzing ≠ Penetration Testing
Common misconception that needs clarification:
| Fuzzing | Penetration Testing |
|---|---|
| Automated input generation | Manual testing by security experts |
| Finds crashes and panics | Finds logic flaws and misconfigurations |
| Tests code robustness | Tests entire system security |
| Discovers implementation bugs | Discovers design flaws |
| Fast (millions of inputs/sec) | Slow (hours to days) |
| No business context | Understands business logic |
What fuzzing finds:
- ✅ Buffer overflows
- ✅ Assertion failures
- ✅ Panics and crashes
- ✅ Out-of-memory conditions
- ✅ Infinite loops
What fuzzing misses:
- ❌ Authentication bypass (logic flaw)
- ❌ Authorization issues (design flaw)
- ❌ Business logic errors (incorrect but doesn’t crash)
- ❌ Cryptographic misuse (works but insecure)
- ❌ Configuration issues (deployment problem)
Example:
// Fuzzing WON'T find this vulnerability
fn check_admin(user: &User) -> bool {
    user.role == "admin" // ❌ BUG: Should check permissions, not a role string
}

// Fuzzing WILL find this vulnerability
fn parse_packet(data: &[u8]) -> Packet {
    let len = data[0]; // ❌ Panics if data.is_empty()
    // ...
}
Key Takeaway:
- Fuzzing is ONE tool in your security toolkit
- You still need: code review, penetration testing, threat modeling, security audits
- Fuzzing complements but doesn’t replace other security practices
Advanced Scenarios
Scenario 1: Testing Async Code
#[tokio::test]
async fn test_async_function() {
    let result = async_operation().await;
    assert!(result.is_ok());
}
Scenario 2: Testing with Mocks
Use mockall for mocking:
use mockall::mock;

mock! {
    NetworkClient {}

    impl NetworkClientTrait for NetworkClient {
        fn send(&self, data: &[u8]) -> Result<(), Error>;
    }
}

#[test]
fn test_with_mock() {
    let mut mock = MockNetworkClient::new();
    mock.expect_send().returning(|_| Ok(()));
    let result = use_client(&mock);
    assert!(result.is_ok());
}
Troubleshooting Guide
Problem: Tests Failing Intermittently
Solution:
- Check for race conditions
- Verify async test timeouts
- Review shared state
- Add proper synchronization
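One common cause of intermittent failures is tests mutating shared global state while `cargo test` runs them on multiple threads. A stdlib-only sketch of the usual fix — serializing access with a `Mutex` (the `serial_test` crate automates the same idea with an attribute); all names here are illustrative:

```rust
use std::sync::Mutex;
use std::thread;

// Shared state that several tests (or threads) touch; the Mutex
// serializes access so parallel execution stays deterministic.
static SHARED: Mutex<i32> = Mutex::new(0);

fn set_and_read(value: i32) -> i32 {
    // Hold the lock across the whole write-then-read sequence
    let mut guard = SHARED.lock().unwrap();
    *guard = value;
    *guard * 2
}

fn main() {
    // Even with many threads, each call sees a consistent value
    let handles: Vec<_> = (0..8)
        .map(|i| thread::spawn(move || set_and_read(i)))
        .collect();
    for h in handles {
        let doubled = h.join().unwrap();
        assert_eq!(doubled % 2, 0);
    }
    println!("no interleaving observed");
}
```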
Problem: Fuzzing Too Slow
Solution:
- Reduce input size limits
- Optimize code paths
- Use corpus minimization
- Adjust fuzzing parameters
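The bullets above map to concrete cargo-fuzz/libFuzzer knobs (the target name `parse_packet` is illustrative): `cargo fuzz cmin` shrinks the corpus to a minimal covering set, while `-max_len` and `-jobs` control input size and parallelism.

```shell
# Minimize the corpus to the smallest set with the same coverage
cargo fuzz cmin parse_packet

# Cap input length (bytes) so cycles aren't wasted on huge inputs
cargo fuzz run parse_packet -- -max_len=4096

# Run several fuzzing jobs in parallel on a multi-core machine
cargo fuzz run parse_packet -- -jobs=4 -workers=4
```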
Real-World Case Study
Case Study: A layered testing approach uncovered multiple critical vulnerabilities
Testing Approach:
- Unit tests for all functions
- Integration tests for workflows
- Property-based tests for validation
- Fuzzing for parsing logic
Results:
- Found buffer overflow
- Discovered race condition
- Identified parsing issues
- Improved code quality significantly
Code Review Checklist for Rust Testing & Fuzzing
Test Coverage
- Unit tests for all public functions
- Integration tests for component interactions
- Property-based tests for edge cases
- Test coverage > 80% for critical paths
Fuzzing
- Fuzz targets defined for all parsers
- Proper seed corpus provided
- Crash reproduction tests added
- Fuzzing integrated into CI/CD with time limits
- Sanitizers enabled for fuzzing
- Corpus stored and reused across runs
Test Quality
- Tests are deterministic (no flaky tests)
- Tests use proper assertions
- Tests clean up resources
- Test data is isolated
Security Testing
- Security-focused test cases included
- Input validation tested thoroughly
- Error handling tested
- Edge cases and boundary conditions tested
Performance
- Tests run in reasonable time
- Integration tests use appropriate timeouts
- Fuzzing runs don’t block CI (time-limited)
- Test execution is optimized
Fuzzing in CI/CD: Critical Limits
Unlimited fuzzing in CI = broken pipelines
Fuzzing can run forever, finding deeper bugs over time. But CI/CD pipelines need bounded execution.
The Problem
# ❌ BAD: This will run forever
- name: Fuzz
  run: cargo fuzz run parse_packet
  # Never finishes! CI times out or runs for hours
Solution 1: Time Budget
Set strict time limits for CI fuzzing:
# ✅ GOOD: Time-limited fuzzing
- name: Fuzz (5 minutes per target)
  run: |
    for target in parse_packet parse_header parse_body; do
      cargo +nightly fuzz run $target -- -max_total_time=300 || exit 1
    done
Recommended time budgets:
| CI Type | Per-Target Time | Total Fuzzing Time |
|---|---|---|
| PR checks | 1-2 minutes | 5-10 minutes |
| Main branch | 5-10 minutes | 30-60 minutes |
| Nightly | 30-60 minutes | 4-8 hours |
| Weekly | 2-4 hours | 24-48 hours |
Solution 2: Corpus Reuse
Don’t start from scratch every time:
# ✅ GOOD: Reuse corpus across runs
- name: Restore corpus cache
  uses: actions/cache/restore@v3
  with:
    path: fuzz/corpus
    key: fuzz-corpus-${{ github.sha }}
    restore-keys: |
      fuzz-corpus-

- name: Fuzz with existing corpus
  run: |
    cargo +nightly fuzz run parse_packet -- -max_total_time=300

- name: Save corpus
  uses: actions/cache/save@v3
  with:
    path: fuzz/corpus
    key: fuzz-corpus-${{ github.sha }}
Benefits:
- Builds on previous fuzzing runs
- Finds deeper bugs over time
- Faster coverage of known paths
Solution 3: Separate Fuzzing Jobs
Don’t block PRs on long fuzzing:
# Fast checks for PRs
pr-checks:
  runs-on: ubuntu-latest
  steps:
    - name: Quick fuzz (2 min)
      run: cargo +nightly fuzz run parse_packet -- -max_total_time=120

# Deep fuzzing for main branch
deep-fuzz:
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/main'
  steps:
    - name: Extended fuzz (1 hour)
      run: cargo +nightly fuzz run parse_packet -- -max_total_time=3600

# Continuous fuzzing (separate system)
# Use OSS-Fuzz, ClusterFuzz, or dedicated fuzzing infrastructure
Solution 4: Fail Fast on Crashes
Stop immediately if crash found:
- name: Fuzz with crash detection
  run: |
    cargo +nightly fuzz run parse_packet -- \
      -max_total_time=300 \
      -timeout=10 \
      -rss_limit_mb=2048 \
      || (echo "Fuzzing found crash!" && exit 1)
Complete CI/CD Fuzzing Example
name: Fuzzing
on:
  pull_request:
  push:
    branches: [main]
  schedule:
    - cron: '0 0 * * 0' # Weekly deep fuzz

jobs:
  quick-fuzz:
    name: Quick Fuzz (PR)
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          override: true
      - name: Install cargo-fuzz
        run: cargo install cargo-fuzz
      - name: Restore corpus
        uses: actions/cache/restore@v3
        with:
          path: fuzz/corpus
          key: fuzz-corpus-${{ github.sha }}
          restore-keys: fuzz-corpus-
      - name: Quick fuzz (2 min per target)
        run: |
          for target in $(cargo fuzz list); do
            echo "Fuzzing $target for 2 minutes..."
            cargo +nightly fuzz run $target -- \
              -max_total_time=120 \
              -timeout=10 \
              || exit 1
          done
      - name: Save corpus
        if: always()
        uses: actions/cache/save@v3
        with:
          path: fuzz/corpus
          key: fuzz-corpus-${{ github.sha }}

  extended-fuzz:
    name: Extended Fuzz (Main)
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          override: true
      - name: Install cargo-fuzz
        run: cargo install cargo-fuzz
      - name: Restore corpus
        uses: actions/cache/restore@v3
        with:
          path: fuzz/corpus
          key: fuzz-corpus-${{ github.sha }}
          restore-keys: fuzz-corpus-
      - name: Extended fuzz (10 min per target)
        run: |
          for target in $(cargo fuzz list); do
            echo "Fuzzing $target for 10 minutes..."
            cargo +nightly fuzz run $target -- \
              -max_total_time=600 \
              -timeout=10 \
              -rss_limit_mb=2048 \
              || exit 1
          done
      - name: Save corpus
        if: always()
        uses: actions/cache/save@v3
        with:
          path: fuzz/corpus
          key: fuzz-corpus-${{ github.sha }}
      - name: Upload crashes
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: fuzz-crashes
          path: fuzz/artifacts/

  deep-fuzz:
    name: Deep Fuzz (Weekly)
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule'
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          override: true
      - name: Install cargo-fuzz
        run: cargo install cargo-fuzz
      - name: Restore corpus
        uses: actions/cache/restore@v3
        with:
          path: fuzz/corpus
          key: fuzz-corpus-${{ github.sha }}
          restore-keys: fuzz-corpus-
      - name: Deep fuzz (1 hour per target)
        run: |
          for target in $(cargo fuzz list); do
            echo "Deep fuzzing $target for 1 hour..."
            cargo +nightly fuzz run $target -- \
              -max_total_time=3600 \
              -timeout=30 \
              -rss_limit_mb=4096 \
              || exit 1
          done
      - name: Save corpus
        if: always()
        uses: actions/cache/save@v3
        with:
          path: fuzz/corpus
          key: fuzz-corpus-${{ github.sha }}
      - name: Upload crashes
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: deep-fuzz-crashes
          path: fuzz/artifacts/
Fuzzing Budget Guidelines
PR Checks (Fast Feedback):
- Time: 1-2 minutes per target
- Goal: Catch obvious regressions
- Corpus: Reuse existing
- Sanitizers: ASan only
Main Branch (Moderate):
- Time: 5-10 minutes per target
- Goal: Find new bugs before release
- Corpus: Build on previous runs
- Sanitizers: ASan + LSan
Nightly (Deep):
- Time: 30-60 minutes per target
- Goal: Deep exploration
- Corpus: Continuous growth
- Sanitizers: All (ASan, LSan, MSan, UBSan)
Weekly (Exhaustive):
- Time: 2-4 hours per target
- Goal: Maximum coverage
- Corpus: Long-term accumulation
- Sanitizers: All + custom configurations
Key Takeaways
- Always set time limits (`-max_total_time=N`)
- Reuse corpus across CI runs (cache it)
- Fail fast on crashes (don’t continue)
- Separate fast/slow fuzzing (PR vs nightly)
- Monitor fuzzing time (adjust budgets as needed)
CI Rule: If fuzzing takes >10 minutes in PR checks, move it to nightly runs.
Advanced: Continuous Fuzzing
For critical projects, use dedicated fuzzing infrastructure:
- OSS-Fuzz (Google): Free for open-source projects
- ClusterFuzz (Google): Self-hosted continuous fuzzing
- Mayhem (ForAllSecure): Commercial fuzzing platform
- Custom: Dedicated fuzzing servers running 24/7
Benefits:
- Runs continuously (not just on commits)
- Finds bugs over days/weeks
- Automatic corpus management
- Crash deduplication
- Integration with bug trackers
FAQ
Q: How much testing is enough?
A: Aim for:
- High code coverage (>80%)
- Test critical paths
- Cover error cases
- Include security test cases
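To actually measure the >80% target, a coverage tool is needed; one common choice is cargo-llvm-cov (assumed installed via `cargo install`):

```shell
# Install the coverage tool (once)
cargo install cargo-llvm-cov

# Run the test suite and print a line-coverage summary
cargo llvm-cov

# HTML report for drilling into uncovered branches
cargo llvm-cov --html

# Fail CI when line coverage drops below 80%
cargo llvm-cov --fail-under-lines 80
```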
Q: Should I fuzz all code?
A: Focus on:
- Input parsing code
- Protocol handlers
- Security-sensitive functions
- Complex logic
Conclusion
Comprehensive testing and fuzzing are essential for security tools. Use unit tests, integration tests, property-based testing, and fuzzing to ensure reliability and security.
Action Steps
- Write unit tests for functions
- Create integration tests
- Add property-based tests
- Set up fuzzing
- Integrate into CI/CD
Next Steps
- Explore advanced fuzzing techniques
- Learn mutation testing (see below)
- Study coverage-guided fuzzing
- Practice with real projects
Advanced Concept: Mutation Testing
Mutation testing validates your test suite quality.
While other testing techniques find bugs in your code, mutation testing finds bugs in your tests.
How it works:
- Mutate your code (introduce small bugs)
- Run your tests
- Check if tests catch the mutation
Example:
// Original code
fn is_valid_port(port: u16) -> bool {
    port > 0 && port <= 65535
}

// Mutation 1: Change > to >=
fn is_valid_port(port: u16) -> bool {
    port >= 0 && port <= 65535 // ❌ Mutation
}

// Mutation 2: Change && to ||
fn is_valid_port(port: u16) -> bool {
    port > 0 || port <= 65535 // ❌ Mutation
}

// If your tests still pass, they're weak!
Mutation testing tools for Rust:
- cargo-mutants - Actively maintained, good Rust support
- mutagen - Experimental, attribute-based mutations
Usage:
# Install cargo-mutants
cargo install cargo-mutants
# Run mutation testing
cargo mutants
# Output:
# 10 mutants tested
# 8 caught by tests (80%)
# 2 survived (20%) ← Your tests are weak here!
When to use:
- ✅ After writing tests (validate test quality)
- ✅ For critical security code
- ✅ To find missing assertions
- ⚠️ Slow (runs tests many times)
- ⚠️ Advanced technique (not for beginners)
Example weak test:
#[test]
fn test_port_validation() {
    let _ = is_valid_port(80); // ❌ No assertion!
    // Mutation testing reveals this test is useless
}

// ✅ Strong test
#[test]
fn test_port_validation() {
    assert!(is_valid_port(80));  // ✅ Assertion present
    assert!(!is_valid_port(0));  // ✅ Tests the boundary
}
Key Takeaway:
- Mutation testing is meta-testing (testing your tests)
- Use it to validate critical test suites
- Expensive but valuable for security-critical code
- Complements (doesn’t replace) other testing techniques
Remember: Testing is an investment in code quality and security. Start early and maintain comprehensive test coverage.
Cleanup
# Clean up test artifacts
rm -rf target/
rm -rf fuzz/corpus/    # caution: this deletes the corpus that CI reuses across runs
rm -rf fuzz/artifacts/
# Clean up any test-generated files
find . -name "*.test" -delete
find . -name "*_test_output*" -delete
Validation: Verify no test artifacts remain in the project directory.