How We Test Things
Last reviewed: April 2026 · Written by Sungyu Kim
Every review and comparison on OnVerdict starts with the same question: "What would I actually want to know if I were buying this with my own money?" That question drives the testing plan. This page documents exactly how that testing is done, so you can decide whether to trust the results.
Three Testing Tiers (And Why We're Explicit About Which One Applies)
Not every product gets the same depth of review, and pretending otherwise would be dishonest. That's why every article states which tier its testing falls into:
- Tier 1 — Owned and daily-driven. I bought the product with my own money (or rarely, borrowed it long-term from a friend) and used it as my primary device for a minimum of two weeks. This is the only tier that produces real battery-life numbers, long-term thermal notes, or "after three weeks the keyboard started doing X" observations.
- Tier 2 — Structured hands-on session. A multi-hour bench session with a defined benchmark suite, usually at a retail Apple Store, friend's desk, or a vendor loan. Produces reliable performance numbers and subjective feel notes, but no long-term durability data.
- Tier 3 — Spec analysis + published benchmarks. No hands-on access. Used only for comparisons where one side is already covered in Tier 1 or 2, and I need to evaluate a specific alternative (usually a competitor in a cross-platform piece). Every claim cites a published benchmark source.
When an article uses Tier 3 testing, it says so in the article body — never buried, never implied.
Performance Benchmarks
For any product where performance matters, I run a standard suite:
- Geekbench 6 — single-core and multi-core CPU, Metal/OpenCL GPU. Three runs, median reported.
- Cinebench 2024 — single and multi-core. 10-minute sustained run to expose thermal throttling.
- Blender 4.x — Classroom and BMW27 scenes, CPU and GPU paths. Wall-clock time reported.
- Xcode build benchmark — a Swift package compile from cold cache. My own repo, same commit hash every run.
- Final Cut Pro export — identical 12-minute 4K H.265 timeline, same export preset. Wall-clock time reported.
- WebXPRT 4 — browser responsiveness for laptops and tablets.
All benchmarks are run on AC power unless the battery number is what's being tested. Ambient temperature is kept between 20 and 23°C. No external cooling. Tests are run on a clean install wherever practical.
Battery Life Testing
I don't report manufacturer battery numbers as if they were real. The battery figures on OnVerdict come from one of these methods:
- Mixed-use logged day: the laptop or phone is used as my primary device with screen-time tracking on. Battery percentage is logged at 9am, noon, 3pm, 6pm, and 9pm. Repeated across at least three days; median time-to-20% is reported.
- Video loop test: 1080p H.264 file looping in the native video app at 50% brightness, Wi-Fi on, no other apps. Run to 10% battery.
- Web browsing loop: a custom script cycling through a fixed list of 40 high-traffic sites in Safari at 50% brightness. (The list was originally drawn from Alexa's top-sites ranking, which has since been retired; it's kept unchanged so results stay comparable.)
The method used is always named in the review. If only manufacturer figures are quoted, the review says "manufacturer figure, not tested."
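The mixed-use method above reduces to a small interpolation problem: given percentage checkpoints through the day, estimate when the battery crossed 20%. Here is a minimal Python sketch under that assumption; the `time_to_threshold` and `median_runtime` helpers are illustrative, not the actual logging tooling:

```python
import statistics

def time_to_threshold(samples, threshold=20.0):
    """Estimate hours elapsed when the battery first crosses
    `threshold` percent, by linear interpolation between checkpoints.

    `samples` is a list of (hours_since_unplug, percent) pairs, e.g.
    the 9am/noon/3pm/6pm/9pm log. Returns None if the battery never
    dropped below the threshold during the logged window.
    """
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if p0 > threshold >= p1:
            frac = (p0 - threshold) / (p0 - p1)  # position in interval
            return t0 + frac * (t1 - t0)
    return None

def median_runtime(days, threshold=20.0):
    """Median time-to-threshold across several logged days.

    Assumes every logged day actually reached the threshold.
    """
    return statistics.median(time_to_threshold(d, threshold) for d in days)
```

For example, a day logged as 100% at unplug, 70% after 3 hours, 40% after 6, and 15% after 9 interpolates to a time-to-20% of 8.4 hours; the reported figure is the median of at least three such days.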
How I Score Comparisons
For head-to-head comparison articles, I avoid artificial "product A wins 4 categories, product B wins 3" scorecards. Real buyers don't weight categories equally — someone shopping for a video-editing laptop cares about sustained GPU performance and display color accuracy more than keyboard travel or fan noise.
Instead, every comparison is structured as: (1) what are the real differences, (2) which buyer profile does each difference actually matter to, and (3) based on the most likely buyer for this cross-shop, which product wins. This produces a recommendation, not a scorecard.
Price, Availability, and Spec Sourcing
Every price and spec cited on OnVerdict comes from one of these sources:
- The manufacturer's own product page (Apple, Dell, HP, Samsung, etc.)
- A linked third-party benchmark from a site like Notebookcheck, AnandTech, or Geekbench Browser
- My own measurement (battery life, decibel readings, temperature readings)
Prices fluctuate, and I try to note when an article's pricing was last verified. When prices are outdated, please email me — I'll update them.
Affiliate Links and Conflicts of Interest
Every Amazon link on this site is tagged with the onverdict20-20 affiliate ID and is labeled (paid link) in-line. I earn a small commission if you buy through those links. That commission is the same percentage regardless of which product you buy, so I have no financial reason to push the more expensive option over the cheaper one. In practice, my comparisons recommend the cheaper product roughly 60% of the time, because that's what actually fits most buyers.
I do not accept sponsored content, native ads, or review units provided in exchange for favorable coverage. Display ads on this site are served through Google AdSense and are not negotiated directly with advertisers.
Corrections Policy
I make mistakes. When a reader points one out, the process is:
- Verify the correction against the primary source.
- Fix the article immediately.
- Add a "Correction:" note at the bottom of the article with the date and what changed.
- If the correction materially affects the recommendation, flag it at the top of the article too.
Corrections never silently disappear. If you want to flag one, email [email protected] or use the contact page.
What AI Is (And Isn't) Used For
I use AI tools for spell-checking, draft outlining, and occasionally rewriting a paragraph I'm not happy with. Every word of published body copy is read and rewritten by a human before it goes live. No article on this site is generated end-to-end by AI, and none ever will be.
If you ever read something on OnVerdict that feels like AI slop — vague, hedged, suspiciously symmetric — email me. I want to know, and I'll rewrite it.