Benchmarks

Measured boot times, storage throughput comparisons, and performance baselines from real hardware on the workbench - not synthetic scores from a controlled lab.

[Image: Stopwatch beside an open laptop with a terminal showing boot timing output]

Numbers settle arguments that opinion cannot. After more than a decade of testing older machines - everything from 2009 netbooks with spinning drives to 2017 business laptops with NVMe slots - I have learned that the single most misleading metric in PC performance is the one people fixate on first: boot time to the desktop. It looks simple, feels objective, and hides at least three variables that change the result dramatically depending on how you measure. This section lays out what I actually measure, how the tests are structured, and why the numbers come out the way they do. If you are looking for step-by-step fixes based on these results, the guides section translates the data into practical walkthroughs for each hardware scenario.

Below you will find cold boot versus warm boot methodology, the storage and software factors that dominate timing results, benchmark strips from common hardware configurations, and comparison cards that match the test machines used across the site. Every number published here is the median of five consecutive runs on the same hardware with the same configuration - no cherry-picked best case, no rounding to look impressive.

Why boot time alone can mislead

A stopwatch from power button to desktop wallpaper tells you one thing: how long it took to reach a screen with icons on it. It does not tell you whether the machine is actually usable at that point. On a lot of older hardware - particularly machines with 4 GB of RAM and a mechanical hard drive - the desktop appears but the system spends another thirty to ninety seconds loading background services, indexing files, and fighting for disk I/O before you can reliably open a browser or file manager. That post-login delay is invisible to most timing methods but very visible to anyone sitting in front of the machine waiting.

I time two things separately: time to desktop (TTD) and time to interactive (TTI). TTD is the moment the desktop environment finishes rendering. TTI is the moment I can open a browser and begin loading a page without the system stalling or the cursor freezing. On a machine with an SSD and a lightweight Linux distro, those two numbers are usually within a second of each other. On a machine with a 5400 RPM hard drive and Windows 10 with default startup programs, the gap can be over a minute. That gap is where most user frustration lives, and it is the gap these benchmarks are designed to expose.

Cold boot vs warm boot

A cold boot starts from a full power-off state - no power to RAM, no suspend image, no fast startup cache. The firmware initialises the hardware from scratch, runs POST, hands off to the bootloader, and the operating system loads everything into memory from storage. This is the slowest and most revealing test because it exposes every bottleneck in the chain: slow firmware initialisation, a hard drive that needs spin-up time, a bloated startup sequence, and an operating system that loads more than the hardware can handle comfortably.

A warm boot - which includes Windows fast startup, suspend-to-disk, and resume-from-hibernate scenarios - skips some or all of that chain by reloading a saved memory image. The numbers look much better, but they mask the real condition of the system. A machine that cold boots in 90 seconds and warm boots in 15 seconds has not become faster - it has just skipped the part where you would notice how slow it is. I publish cold boot numbers as the primary metric because they reflect what happens after a power cut, a BIOS update, a kernel update, or any other event that forces a full restart. Warm boot figures are noted where relevant but never used as the headline number.

Methodology note: Windows fast startup is disabled for all cold boot measurements. Hybrid shutdown writes a hibernation image that makes the next boot look faster than it is. All timings use a physical stopwatch confirmed against systemd-analyze (Linux) or Event Viewer boot timestamps (Windows).
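The cross-check against systemd-analyze can be scripted rather than read by eye. Below is a minimal sketch that parses the one-line summary printed by `systemd-analyze time` into per-stage seconds; the sample string and its figures are illustrative, not measurements from the test machines.

```python
import re

def parse_systemd_analyze(output: str) -> dict:
    """Parse the one-line summary from `systemd-analyze time` into
    per-stage seconds. Stages vary by machine (a BIOS boot reports no
    firmware entry), so the result holds whatever stages appear."""
    stages = {}
    # Matches segments such as "4.512s (firmware)" or "1min 2.345s (userspace)"
    for m in re.finditer(r"(?:(\d+)min\s+)?([\d.]+)s\s+\(([a-z]+)\)", output):
        minutes, seconds, stage = m.groups()
        stages[stage] = int(minutes or 0) * 60 + float(seconds)
    return stages

# Illustrative output string - not a measurement from the test machines.
sample = ("Startup finished in 4.512s (firmware) + 2.101s (loader) "
          "+ 3.337s (kernel) + 11.050s (userspace) = 21.000s")
print(parse_systemd_analyze(sample))
```

Summing the parsed stages should land close to the stopwatch figure; a large discrepancy is usually a sign that fast startup or a hibernation resume slipped into the run.
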

Storage bottlenecks vs software bloat

Two machines with identical CPUs and RAM can boot 50 seconds apart if one has an SSD and the other has a mechanical drive. The storage medium is the single biggest variable in boot time, and it is not close. A 5400 RPM hard drive delivers around 80 to 100 MB/s sequential read in good condition - but boot is not sequential. The operating system reads thousands of small files scattered across the disk, and random read performance on a spinning drive drops to 0.5 to 1.5 MB/s. An SSD handles those same random reads at 20 to 40 MB/s. That difference alone accounts for the majority of the boot time gap on most older machines.
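The scale of that gap is easy to sanity-check with arithmetic. The sketch below uses the mid-range throughput figures quoted above and a hypothetical 500 MB of scattered boot reads; real boots mix sequential and random I/O and benefit from caching, so treat the HDD figure as a worst-case bound rather than a prediction.

```python
def read_time_seconds(total_mb: float, throughput_mb_s: float) -> float:
    """Seconds to read total_mb of data at a sustained throughput."""
    return total_mb / throughput_mb_s

# Hypothetical boot workload: 500 MB of small, scattered reads.
# Throughputs are mid-range values from the paragraph above.
boot_mb = 500
hdd_random = 1.0   # MB/s - 5400 RPM drive on random reads
ssd_random = 30.0  # MB/s - SATA SSD on the same access pattern

print(f"HDD: {read_time_seconds(boot_mb, hdd_random):.0f}s")
print(f"SSD: {read_time_seconds(boot_mb, ssd_random):.0f}s")
```

Even with generous allowances for caching, a thirty-fold throughput difference on the access pattern that dominates boot explains why no amount of software tuning closes the gap on a mechanical drive.
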

Software bloat is the second factor, and it compounds the storage problem. A fresh Windows 10 install loads a handful of services at startup. After two years of updates, manufacturer utilities, browser helpers, and driver updaters, that list can triple. Each service reads files from disk, and on a mechanical drive each read competes for the same head position. The result is a cascading queue of small I/O operations that makes every individual service slower to load than it would be in isolation. An SSD reduces the per-read penalty, but cutting unnecessary startup entries reduces the total number of reads - both changes help, and they stack.

For a deeper look at what separates a storage-bound machine from a software-bound one, see the dedicated comparison:

  • What really slows an old laptop down - A breakdown of the specific bottlenecks by hardware generation, with test data showing where the time actually goes during a cold boot sequence.

Primary boot time metrics

  • HDD - Windows 10 (stock): ~68s TTD / ~112s TTI. 5400 RPM mechanical drive, default startup programs, 4 GB RAM. The gap between the desktop appearing and the system becoming usable is where most frustration lives.
  • SSD - Windows 10 (stock): ~19s TTD / ~24s TTI. Same machine, SATA SSD swap, no other changes. TTD drops by 72%, and the TTI gap shrinks because random reads no longer bottleneck background services.
  • SSD - Windows 10 (cleaned): ~16s TTD / ~19s TTI. Same SSD, startup programs trimmed to essentials only. The remaining 3-second TTI gap comes from Windows services that cannot be safely disabled.
  • SSD - Xubuntu 24.04: ~11s TTD / ~12s TTI. Lightweight XFCE desktop on the same hardware. Fewer background services, a smaller memory footprint, and a near-instant interactive state after the desktop renders.
  • SSD - antiX 23: ~8s TTD / ~9s TTI. Minimal IceWM desktop. Fastest to interactive of any tested configuration, but the desktop environment trades polish for speed; best suited to single-purpose machines.
  • USB 2.0 live boot - Xubuntu: ~48s TTD / ~55s TTI. Booting from a USB 2.0 flash drive. Useful for testing and diagnostics, but not a permanent solution. USB 3.0 sticks on USB 3.0 ports cut this time roughly in half.

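The percentage improvements quoted in the strips fall straight out of the raw medians. A short script reproduces them; the figures are copied from the strips above, and the config labels are shorthand.

```python
def pct_drop(before: float, after: float) -> float:
    """Percentage reduction from before to after."""
    return (before - after) / before * 100

# Median TTD/TTI figures (seconds) copied from the strips above.
configs = {
    "HDD Win10 stock":   (68, 112),
    "SSD Win10 stock":   (19, 24),
    "SSD Win10 cleaned": (16, 19),
    "SSD Xubuntu 24.04": (11, 12),
    "SSD antiX 23":      (8, 9),
}

base_ttd, base_tti = configs["HDD Win10 stock"]
for name, (ttd, tti) in configs.items():
    print(f"{name}: TTD -{pct_drop(base_ttd, ttd):.0f}%, "
          f"TTI -{pct_drop(base_tti, tti):.0f}%")
```
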
Reading the numbers

The pattern across every test configuration is consistent: storage type dominates, operating system weight is second, and startup software load is third. If you only make one change to an older machine, replace the mechanical drive. If you make two, replace the drive and trim or replace the operating system. The SSD-plus-lightweight-Linux combination consistently delivers cold boot to interactive in under 15 seconds on hardware from 2012 onward - faster than most new budget laptops manage out of the box.

These benchmarks are from a controlled set of test machines, not a survey. Your specific hardware will produce different absolute numbers, but the relative improvements hold. A machine that cold boots in 90 seconds on a mechanical drive will not drop to exactly 19 seconds with an SSD, but it will drop by roughly the same percentage. The ratios are what matter, not the exact figures.
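Applying the ratio to your own machine is one line of arithmetic. This sketch scales a measured HDD cold-boot time by the HDD-to-SSD TTD ratio from the strips above; it is a rough estimate, not a guarantee.

```python
def scaled_estimate(your_hdd_seconds: float,
                    bench_before: float = 68, bench_after: float = 19) -> float:
    """Scale your measured HDD cold-boot time by the benchmark's
    HDD-to-SSD TTD ratio. A rough estimate, not a guarantee."""
    return your_hdd_seconds * (bench_after / bench_before)

# A machine that cold boots in 90 s on its mechanical drive:
print(f"{scaled_estimate(90):.0f}s")  # roughly 25 s after an SSD swap
```
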

Comparison cards

Cold Boot vs Warm Boot Explained

A detailed walkthrough of what happens during each boot type, why the numbers differ so dramatically, and when warm boot figures are useful versus when they are misleading. Includes a side-by-side timing breakdown showing where each second goes in both scenarios on the same hardware.


What Really Slows an Old Laptop Down

Boot time is the symptom, but the cause varies by hardware generation. This comparison isolates the specific bottlenecks - storage throughput, RAM contention, firmware initialisation time, and startup program load - and shows which factor dominates on each class of machine. Includes test data from Celeron, Core i3, and Core i5 systems spanning 2011 to 2016.

How these benchmarks are conducted

Every measurement follows the same protocol. The machine is powered off completely - not suspended, not hibernated, not using Windows fast startup. Power is disconnected for at least 10 seconds to ensure RAM is cleared. The stopwatch starts when the power button is pressed. TTD is recorded when the desktop background and panel are fully rendered. TTI is recorded when a browser window opens and begins loading a local HTML file without the cursor freezing or the system stalling.

Five consecutive cold boot runs are performed per configuration with no changes between runs. The median value is published. Outliers - usually the first boot after an OS install or update - are noted but excluded from the median. All machines are tested on the same power strip with the same peripherals disconnected. The goal is consistency, not perfection. Real-world numbers will vary based on peripheral load, ambient temperature, and drive age, but the relative comparisons hold.
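The median-and-outlier step is simple enough to automate. This sketch takes the five timings from a configuration, flags runs far above the overall median, and reports the median of the rest. The 1.5x threshold is my assumption for illustration, not something the protocol specifies, and the sample run values are hypothetical.

```python
from statistics import median

def report_runs(times, outlier_factor=1.5):
    """Median of consecutive cold-boot timings. Runs far above the
    overall median - typically the first boot after an install or
    update - are flagged and excluded. The 1.5x factor is an
    assumption; the protocol only says outliers are noted."""
    med = median(times)
    outliers = [t for t in times if t > outlier_factor * med]
    kept = [t for t in times if t <= outlier_factor * med]
    return median(kept), outliers

# Five hypothetical TTD runs; the 41 s first boot is post-update noise.
print(report_runs([41.0, 19.2, 18.8, 19.5, 18.9]))
```
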

Test conditions checklist

  • Windows fast startup and hybrid shutdown disabled
  • Full power-off between each run, 10-second drain
  • No external peripherals connected during timing
  • Five runs per configuration, median reported
  • TTD and TTI recorded separately for every run
  • Wi-Fi connected but no background sync during timing
  • Same BIOS firmware version maintained per machine across all tests

Related resources

Every number on this page comes from physical hardware on the workbench - the same machines used in the guides and support documentation across this site. I do not publish synthetic scores, manufacturer claims, or numbers from virtual machines. When the results change - because a new kernel version shifts boot timing or a firmware update affects POST speed - the benchmarks get updated with the new data and the previous figures are noted for reference. The point is not to sell a particular OS or upgrade path. The point is to give you reliable data so you can make the right call for the hardware sitting in front of you.
