NotebookCheck Needs to Fix Their Useless Rating System

Everything is “good” according to NotebookCheck. They’ve come across devices with glaring problems, but even those get a high final score.

How are consumers supposed to distinguish between better and worse when three-quarters of all scores are between 80 and 90?

Here’s how NotebookCheck categorizes the scores:

00 – 49 % – Insufficient
50 – 62 % – Sufficient
63 – 74 % – Satisfactory
75 – 87 % – Good
88 – 100 % – Excellent

This means not a single one of the devices they reviewed in 2018 was rated below “Satisfactory.”

Don’t be so easily satisfied, NotebookCheck!

The Problem

[Table: NotebookCheck rating category weights]

Currently, NotebookCheck uses the Weighted Sum Model:

total score = weight₁ × score₁ + weight₂ × score₂ + …

where each scoreₙ is a percentage (decimal from 0 to 1).

It is severely flawed for this purpose. Suppose a ‘Multimedia’ laptop has an abysmal score in one category — say ‘Temperature’, because it severely overheats. In real life that would be a deal-breaker for most consumers, yet even if that sub-score were 0, the device could still receive a 92/100 overall rating!
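To see why, here is a minimal sketch of the Weighted Sum Model. The category names and weights are hypothetical, not NotebookCheck’s actual ones; the 8 % temperature weight is chosen so the numbers line up with the 92/100 figure above.

```python
# Weighted Sum Model: total = Σ weightᵢ × scoreᵢ
# Hypothetical category weights (sum to 1); 8 % for temperature
# reproduces the 92/100 figure mentioned above.
weights = {"performance": 0.30, "display": 0.22, "mobility": 0.20,
           "ergonomics": 0.20, "temperature": 0.08}

scores = {category: 1.0 for category in weights}  # perfect everywhere...
scores["temperature"] = 0.0                       # ...except it severely overheats

total = sum(weights[c] * scores[c] for c in weights)
print(round(total * 100))  # → 92
```

A total sub-score of zero in a low-weight category costs only that category’s weight — 8 points here — no matter how disqualifying the flaw is in practice.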

A Solution

The Weighted Product Model much more accurately reflects how consumers judge a product:

total score = score₁^weight₁ × score₂^weight₂ × …

where each scoreₙ is a percentage (decimal from 0 to 1).

A horrendous sub-score now has the power to impact the overall score in the way a deal-breaker does.
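Rerunning the same hypothetical weights and scores through the Weighted Product Model shows the difference: a zero in any category now zeroes the whole rating, exactly as a deal-breaker should.

```python
import math

# Weighted Product Model: total = Π scoreᵢ ^ weightᵢ
# Same hypothetical weights and scores as the weighted-sum example.
weights = {"performance": 0.30, "display": 0.22, "mobility": 0.20,
           "ergonomics": 0.20, "temperature": 0.08}

scores = {category: 1.0 for category in weights}
scores["temperature"] = 0.0  # the deal-breaker

total = math.prod(scores[c] ** weights[c] for c in weights)
print(round(total * 100))  # → 0
```

Because the sub-scores are multiplied rather than added, no amount of excellence elsewhere can paper over a catastrophic failure in one category.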

Don’t get me wrong, NotebookCheck is still the best review website for portable computing devices. I commend them for using quantitative measurements and not just qualitative opinions. However, I hope they adjust how they score their reviews, because the current method really isn’t much help.

Comments

Feadurn:
Thanks for the post. Can we have the raw dataset you used for your plots?