That bold percentage on the box. "97% of women reported visibly firmer skin in four weeks." It sounds like science. It reads like proof. In most cases, it is neither.
Two Kinds of Testing That Sound the Same
Skincare brands test products in two fundamentally different ways, and the packaging rarely tells you which one produced the number you are reading.
The first is an instrumental clinical study. Researchers use validated scientific instruments: corneometers to measure hydration, cutometers to measure elasticity, VISIA imaging to map wrinkle depth and pigmentation, spectrophotometers to quantify skin tone. These tools produce objective data: reproducible measurements that do not depend on how anyone feels about the product.
The second is a consumer perception survey. A group of people, typically 20 to 40 participants, use the product for a set period. Then they answer structured questions. "Did your skin feel smoother?" "Did you notice improvement in firmness?" "Would you say your skin looks more radiant?" Their subjective answers get converted into the impressive percentages printed on the box.
Both qualify as "clinical testing" under current regulatory frameworks. The brand is not lying when it uses that phrase. But the gap between these two methods is enormous, and the packaging is designed so you never think to ask which one was used.
Why Consumer Survey Numbers Always Look Impressive
Consumer perception surveys are engineered to produce high percentages. This is not an accident. The questions are positively framed. If you apply any decently formulated moisturizer for four weeks, your skin will feel softer. If someone then asks "Did your skin feel more hydrated after using this product?" you will almost certainly say yes. That response reflects the baseline effect of consistent moisturization, not the unique efficacy of that specific product.
Sample sizes compound the problem. Thirty participants is standard for consumer perception studies. Some brands test with as few as 20. At that scale, two or three negative responses are the difference between "97% saw results" and "87% saw results." Neither number carries real statistical weight. Proper clinical research typically requires sample sizes above 50, with a control group, randomization, and often double-blinding to minimize bias.
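To see how little weight a 30-person panel carries, you can put a confidence interval around the headline number. The sketch below uses the standard Wilson score interval from basic statistics; the 29-of-30 split is an illustrative assumption consistent with a "97%" claim, not a figure from any real study:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# "97% saw results" from a 30-person panel: 29 of 30 said yes.
# The true proportion could plausibly be anywhere from roughly 83% to 99%.
lo, hi = wilson_interval(29, 30)
print(f"95% CI: {lo:.0%} to {hi:.0%}")
```

Run the same calculation with 97 positives out of 100 participants and the interval tightens considerably, which is exactly why larger controlled studies carry more weight.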
There is also the placebo effect. People who know they are testing a new "advanced" serum report more positive results than people using the same formula in a plain container. Expectation drives perception, and perception drives those survey numbers.
The questions themselves reveal the strategy. Surveys rarely ask "Did this product reduce your wrinkles?" That is too specific and too easy to answer honestly. Instead, they ask "Did your skin appear more youthful?" or "Did you feel your complexion looked more even?" The vaguer the question, the more positive the response. Survey design is its own science, and brands hire firms that know exactly how to frame questions for maximum positive yield.
A consumer perception survey measures opinions. An instrumental study measures skin. Those are not the same thing.
What Brands Are Required to Disclose
The FDA does not require skincare brands to prove efficacy claims before selling a product. Under the MoCRA regulations now fully in effect, cosmetics must be safe, properly labeled, and manufacturers must register facilities and report adverse events. But "clinically proven" has no legal definition in cosmetics labeling. A brand can run a 22-person consumer survey, collect favorable self-reported impressions, and print "clinically proven results" on millions of units.
The fine print, when it exists, is where the truth lives. Look at the bottom of advertisements or the back of packaging for the footnote. It often reads something like: "Based on a consumer perception study of 31 women over 4 weeks." That single sentence tells you everything about the rigor behind the headline. When you see "instrumental study," "dermatologist-assessed using [specific device]," or a reference to a published paper, the underlying evidence is fundamentally different.
The FTC, which oversees advertising claims, requires that beauty marketing be backed by "competent and reliable scientific evidence." In practice, enforcement targets only the most egregious cases, leaving a vast gray area where perception-based claims go unchallenged. A 2025 review in Dermatological Reviews found that the majority of claims on bestselling cosmetics could not be verified through published peer-reviewed research.
Some brands include no methodology footnote at all. That absence is itself information.
How to Evaluate a Skincare Claim in 30 Seconds
Not every claim is hollow. Some brands invest in rigorous instrumental testing. Here is how to tell the difference when you are standing in the aisle or scrolling a product page:
- Find the methodology. "Consumer perception study" or "self-assessment" means survey. "Instrumental evaluation," "in-vivo study," or "clinical assessment via [device name]" means objective measurement.
- Check the sample size. Under 50 participants with no control group? The results have limited statistical power. Over 100 participants with controlled conditions? More credible.
- Read the exact wording. "Reported skin felt smoother" is subjective. "Measured 28% increase in stratum corneum hydration via corneometry at 8 weeks" is objective. The specificity of the language tracks directly with the quality of the evidence.
- Look at the timeframe. A 24-hour hydration boost tells you almost nothing about long-term efficacy. Eight-week and twelve-week study durations are the minimum for meaningful results on anti-aging or pigmentation claims.
- Notice what is missing. If a retinol product only claims "skin felt smoother" but never mentions measured wrinkle reduction, the brand chose that wording deliberately. They tested for the claim they could win, not the claim you actually care about.
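The checklist above is essentially a screening procedure, and it can be sketched as one. The keyword lists and thresholds below are illustrative assumptions drawn from this article, not an official rubric:

```python
import re

def evaluate_claim(footnote: str) -> list[str]:
    """Screen a claim footnote in the order a shopper would: method, sample size, duration."""
    text = footnote.lower()
    flags = []
    # 1. Methodology: survey language vs. instrumental language
    if any(k in text for k in ("consumer perception", "self-assessment", "agreed", "reported")):
        flags.append("survey-based: opinions, not measurements")
    if any(k in text for k in ("instrumental", "corneomet", "cutomet", "in-vivo")):
        flags.append("instrumental: objective measurement")
    # 2. Sample size
    m = re.search(r"(\d+)\s*(?:women|men|participants|subjects|people)", text)
    if m:
        n = int(m.group(1))
        flags.append("small sample (n<50): limited statistical power" if n < 50
                     else f"larger sample (n={n}): more credible")
    # 3. Duration
    m = re.search(r"(\d+)\s*weeks?", text)
    if m and int(m.group(1)) < 8:
        flags.append("short duration (<8 weeks) for anti-aging or pigmentation claims")
    return flags
```

Feeding it the footnote from earlier, "Based on a consumer perception study of 31 women over 4 weeks," raises all three red flags: survey-based, small sample, short duration.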
Price Does Not Predict Testing Quality
There is no correlation between what a product costs and how rigorously it was tested. La Roche-Posay and CeraVe fund extensive peer-reviewed research on their formulations and publish the results openly. Their products cost under $25. Meanwhile, some of the most aggressively marketed luxury serums priced above $150 rely entirely on consumer perception surveys for their headline claims.
SkinCeuticals has published over 200 clinical studies across its product range. The Ordinary includes exact concentrations and key actives on every label, making it straightforward to cross-reference their formulations against independent research. These brands treat transparency as a feature. Others treat opacity as a strategy.
When a brand charges $200 for a serum and the only supporting evidence is "94% of 28 women agreed their skin looked more radiant," ask yourself what that price is actually paying for. Often, the answer is the marketing budget that produced the claim in the first place.
Start With the Footnote, Not the Headline
Every time you choose a product based on a percentage claim, you are making a decision with incomplete information. The "97% saw results" headline is not technically false. Those people did report those impressions. But self-reported feelings are not measurements. Perception is not proof. And a survey of 30 people is not a clinical trial.
Train yourself to flip the box, scroll to the fine print, and read the footnote first. Check whether the study was instrumental or perceptual. Look at the sample size. Note the duration. These three data points take 30 seconds to find and will fundamentally change how you evaluate every product you consider buying.
Skinventry gives you the ingredient breakdown before you buy, so the marketing claims have context. Because the best defense against a compelling number is knowing exactly how it was generated.