01. EXECUTIVE SUMMARY
The $15 Million Question
According to Gartner, organizations attribute an average of $15 million per year in losses to poor data quality. Decision errors stemming from flawed information can erode up to 3% of annual company profits. And perhaps most troubling: 60% of organizations admit they do not even know the full cost of their bad data.
But the real cost is not in bad data alone. It is in the insights you never found, or worse, the ones you found too late. In a world where nearly 30,000 new products are introduced every year, the difference between a product that captures a market and one that quietly disappears from shelves often comes down to a single thing: the quality of the consumer insight behind it.
This article examines why insight-driven companies consistently outperform their peers, why most companies still get insights wrong despite spending billions on research, and what a practical framework for finding, validating, and acting on consumer insights actually looks like, whether you are a multinational corporation or a small business with no research budget at all.
02. The Problem
The Insight Paradox
Clayton Christensen, the late Harvard Business School professor renowned for his theory of disruptive innovation, estimated that 95% of new products fail. Harvard Business Review’s own research paints a similarly stark picture: approximately 75% of consumer packaged goods and retail products fail to earn even $7.5 million during their first year. Less than 3% of new CPG products exceed $50 million in first-year sales (the benchmark of a highly successful launch).
The consultant Jack Trout found that American families, on average, repeatedly buy the same 150 items, which constitute as much as 85% of their household needs. Getting something new onto that mental shortlist is extraordinarily difficult.
Here is the paradox: the global market research industry generated approximately $140 billion in revenue in 2024, according to Statista and Backlinko. Companies are spending more on research than ever before. Yet the failure rates remain stubbornly, almost defiantly, high. The Insights-as-a-Service market alone is projected to grow from $5.82 billion in 2024 to $18.85 billion by 2030, according to Grand View Research.
The problem, then, is not a lack of research. It is a lack of framework: a structured, disciplined approach to finding insights that are genuinely actionable, rigorously validated, and connected to execution.
95%
New products fail
$140B
Research industry
30K
Products launched/yr
$15M
Cost of bad data/yr
"Less than 3% of new consumer packaged goods exceed first-year sales of $50 million — considered the benchmark of a highly successful launch."
— Harvard Business Review, April 2011
03. The Approach
The 80/20 Principle: Not All Projects Need the Same Research
When working with large corporations, the research investment question is not “should we research?” but rather “where should we concentrate our research resources?” The answer lies in the Pareto principle: the observation, first articulated by Italian economist Vilfredo Pareto, that roughly 80% of effects stem from 20% of causes.
Applied to product development and market research, this means that a company’s portfolio of projects should be divided into two tiers. Large, high-stakes projects (the ones that will define the brand’s trajectory for years) demand rigorous, well-funded research. The cost of being wrong on these projects is measured in millions. Research budgets for these initiatives should never be the first line item cut.
For small and medium projects, the calculus is different. These do not necessarily require dedicated market research. Instead, the resources that would have been spread thinly across every project should be redirected, concentrated on the big bets where the stakes justify the investment.
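The two-tier split can be expressed as a simple decision rule: rank projects by the estimated cost of being wrong and reserve dedicated research for the top slice. The sketch below is illustrative only; the project names, dollar figures, and 20% threshold are hypothetical assumptions, not data from this article.

```python
# Sketch: tier a project portfolio by the estimated cost of being wrong,
# concentrating research budget on the top ~20% of projects (Pareto split).
# All project names and figures below are hypothetical.

projects = {
    "flagship-relaunch": 12_000_000,  # estimated cost of a wrong bet ($)
    "new-flavor-line":    4_500_000,
    "label-refresh":        300_000,
    "seasonal-pack":        150_000,
    "loyalty-tweak":         80_000,
}

def tier_projects(portfolio, big_bet_share=0.2):
    """Mark the top `big_bet_share` of projects (by stakes) as Tier 1."""
    ranked = sorted(portfolio.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = max(1, round(len(ranked) * big_bet_share))
    tier1 = {name for name, _ in ranked[:cutoff]}
    return {name: (1 if name in tier1 else 2) for name, _ in ranked}

tiers = tier_projects(projects)
# Tier 1 projects get dedicated, well-funded research;
# Tier 2 projects rely on existing third-party data and judgment.
```

The point of the sketch is the discipline, not the arithmetic: the tiering forces an explicit decision about where the research budget is concentrated, rather than spreading it thinly across every project.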
Procter & Gamble, which invests approximately $2 billion annually in research and development, exemplifies this principle. The company’s product success rate rose dramatically when it adopted Clayton Christensen’s “jobs to be done” framework: a structured approach to understanding not what consumers say they want, but what they are actually trying to accomplish. The framework allowed P&G to focus its research investment on the insights that would drive the largest returns.
“When the cost of being wrong is millions, the research budget should never be the first thing you cut. When the stakes are lower, redirect those resources to the bets that matter most.”
04. The MNC Playbook
How Multinationals Find Insights That Survive
Multinational corporations with significant research budgets, companies like Unilever, which spent 15.5% of revenue (approximately $5.17 billion) on marketing in the first half of 2025 alone, operate within a structured insight discovery process. They work with major research firms such as Nielsen, Kantar, and Ipsos. Their internal teams are trained not just to commission research, but to brief it precisely, interpret it critically, and challenge it when the findings seem too convenient.
The most important discipline in this process is midpoint validation. An insight that has not been validated midway through the project is not an insight; it is a hypothesis. The first filter is consumer belief: does the consumer actually believe this? Can they articulate it in their own language? If the insight only makes sense in a boardroom presentation but not in a consumer’s kitchen, it will not survive contact with the market.
The second checkpoint comes at 3 to 6 months. If the insight has not generated measurable traction (in awareness, trial, or repeat purchase) by this point, it is time to reassess. Not every failed metric means the insight was wrong; sometimes the execution, the pricing, or the competitive response was the problem. But the checkpoint forces an honest conversation.
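The checkpoint described above can be made concrete as a pre-agreed traction review. The metric names and threshold values in this sketch are hypothetical placeholders; the real thresholds would come from the category baseline and the launch plan.

```python
# Sketch: evaluate an insight at the 3-6 month checkpoint against
# pre-agreed traction floors. All thresholds here are illustrative.

CHECKPOINT_THRESHOLDS = {
    "awareness": 0.15,        # share of target audience aware of the product
    "trial": 0.05,            # share that has purchased at least once
    "repeat_purchase": 0.30,  # share of triers who bought again
}

def checkpoint_review(metrics, thresholds=CHECKPOINT_THRESHOLDS):
    """Return the metrics that missed their floor; an empty dict means on track.

    A miss does not prove the insight was wrong (execution, pricing, or a
    competitive response may be the cause), but it forces the honest
    conversation the checkpoint exists for.
    """
    return {name: metrics.get(name, 0.0)
            for name, floor in thresholds.items()
            if metrics.get(name, 0.0) < floor}

misses = checkpoint_review({"awareness": 0.22, "trial": 0.03,
                            "repeat_purchase": 0.35})
# misses == {"trial": 0.03} -> reassess trial drivers before scaling further
```

Agreeing on the thresholds before launch is the real discipline; deciding what counts as traction after the numbers arrive invites motivated reasoning.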
Case Study:
Coca-Cola C2
In 2004, Coca-Cola identified a seemingly perfect market gap: men aged 20-40 who liked the taste of Coke but not its calories, and liked the zero-calorie aspect of Diet Coke but not its taste or feminine image. C2, with half the calories and all the flavor, launched with a $50 million advertising campaign.
The insight was wrong. Men did not want half the calories. They wanted full flavor with no calories. C2’s few sales came mostly at the expense of Coke and Diet Coke. The company learned from the mistake: a year later, it launched Coke Zero, which remains on shelves today.
Source: Harvard Business Review, “Why Most Product Launches Fail,” April 2011
Case Study:
Segway
Dean Kamen’s Segway was announced as nothing less than an alternative to the automobile. The inventor predicted sales of 10,000 units per week. The actual result: approximately 24,000 units sold in its first five years, at a price point of $5,000 that consumers were never willing to pay.
The fundamental failure was not in the technology. It was in the absence of insight validation. No one systematically tested whether consumers would pay $5,000 for a motorized scooter, regardless of how advanced the engineering was.
Source: Harvard Business Review, “Why Most Product Launches Fail,” April 2011
05. The SME Playbook
Finding Insights Without a Budget
Small and medium enterprises face a fundamentally different reality. They typically cannot afford to commission research from Nielsen or Kantar. Many skip market research entirely, not because they do not value it, but because the budget simply is not there.
The alternative is not to go without insights, but to find them differently. The first option is to leverage third-party insights (syndicated research, industry reports, and publicly available data) that have already been collected and analyzed by others. This is not custom research, but it provides a foundation.
Beyond that, there are zero-budget methods that genuinely work. Onclusive’s 2025 research describes social media as a “live research lab”: faster, broader, and more authentic than most traditional methods. Posting a well-crafted question to your personal or professional network on social media can surface unfiltered consumer language and genuine reactions that a formal survey might never capture.
Reading group chats, comment sections, and online communities where your target consumers naturally congregate is another underutilized approach. The language people use when they are not being formally surveyed (when they are complaining, recommending, or debating) often contains the rawest and most honest insights available.
And then there is the simplest method of all: reading. Books, industry reports, academic papers, and competitor analyses are all publicly accessible sources of insight that require nothing more than time and intellectual curiosity.
06. Framework
The Validation Framework: From Question to Conviction
Whether you are a multinational with a $50 million research budget or a startup with nothing but curiosity, the validation process follows the same intellectual discipline. It mirrors the scientific method, adapted for consumer research:
01
Start with a Question
Not a hypothesis, not an assumption, a genuine question about consumer behavior, unmet needs, or market dynamics. The quality of the insight is bounded by the quality of the question.
02
Form a Hypothesis
Based on available data, observation, and experience, articulate what you believe the answer might be. Be specific enough that the hypothesis can be tested.
03
Draw a Preliminary Conclusion
Gather evidence through research, social listening, interviews, or data analysis and form a preliminary conclusion.
04
Attempt to Disprove It
This is the critical step that most organizations skip. Actively try to find evidence that contradicts your conclusion. If you cannot disprove it, the probability that it is a genuine insight increases significantly.
“If you cannot disprove it, the probability that it is a genuine insight is high. The discipline is not in finding evidence that supports your belief, it is in honestly searching for evidence that contradicts it.”
Thrive Thinking’s 2024 research on insight validation confirms this approach: insight validation bridges qualitative and quantitative methods, using each to confirm or reject hypotheses generated by the other. The key is that insights must be tested against actual consumer behavior, not just stated preferences, which are notoriously unreliable.
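The “attempt to disprove” step maps directly onto null-hypothesis testing. The sketch below shows one way it might look in practice: an exact one-sided binomial test of a hypothesized choice rate against observed trial data. The hypothesis wording, sample size, and counts are invented for illustration.

```python
# Sketch: the "attempt to disprove" step as a one-sided exact binomial test.
# Hypothetical hypothesis: "at least 40% of target consumers will choose the
# reduced-calorie variant in a blind choice task." The data below are invented.
from math import comb

def binomial_p_at_most(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing a result this
    weak or weaker if the hypothesized rate p were actually true."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, chose_variant = 200, 58            # hypothetical blind-choice results
p_value = binomial_p_at_most(chose_variant, n, 0.40)

# A small p-value means the data contradict the hypothesized 40% rate:
# the disproof attempt succeeded and the "insight" should be discarded.
verdict = "disproved" if p_value < 0.05 else "survives (for now)"
```

Note the asymmetry the quote above insists on: the test is framed to give the hypothesis every chance to fail, not to accumulate confirming evidence.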
07. The Hidden Risks
Where Insight Research Goes Wrong
Even well-intentioned research can produce misleading results. Three risks deserve particular attention, because they are common, often invisible, and can invalidate an entire research program.
01
Wrong Audience
Before commissioning any research, you must clearly identify who will use the resulting insight and who the research should represent. If the insight is meant to drive a national product launch but the research only captures urban consumers, the conclusions will be systematically biased.
02
Geographic Sampling Bias
To save costs, research teams often filter their sample to specific cities or districts. When this filtering is done incorrectly — selecting locations based on convenience rather than representativeness — the results cannot be generalized to the broader population. The Qualitative Research Consultants Association (QRCA) warned in 2024 that geographic selection bias can severely distort research findings.
03
Temporal Bias
Conducting research at the wrong time of year (surveying sunscreen preferences in winter, or holiday shopping behavior in March) introduces systematic errors that no amount of statistical correction can fully resolve.
The mitigation for all three risks is the same: work with experienced research firms to identify the correct behavioral samples before data collection begins. The sample design is not an administrative detail. It is the foundation on which every subsequent insight will rest. Get it wrong, and everything built on top of it is unreliable.
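One cheap sanity check on geographic sampling is to compare the sample’s regional mix against known population shares with a chi-square goodness-of-fit statistic. The regions, shares, and counts in this sketch are hypothetical; in practice the population shares would come from census or syndicated data.

```python
# Sketch: flag geographic sampling bias by comparing a sample's regional mix
# to known population shares. All regions, shares, and counts are hypothetical.

population_share = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}
sample_counts    = {"urban": 410,  "suburban": 70,   "rural": 20}  # convenience sample

def chi_square_stat(counts, shares):
    """Chi-square goodness-of-fit statistic of observed counts vs. expected."""
    n = sum(counts.values())
    stat = 0.0
    for region, share in shares.items():
        expected = n * share
        observed = counts.get(region, 0)
        stat += (observed - expected) ** 2 / expected
    return stat

stat = chi_square_stat(sample_counts, population_share)
# With 3 regions (2 degrees of freedom), the 5% critical value is ~5.99;
# a statistic far above it signals a sample that cannot be generalized.
biased = stat > 5.99
```

A check like this catches the crude failures (an “urban-only” convenience sample dressed up as national data) before any money is spent on fieldwork; it does not replace proper sample design, which remains the foundation.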
08. The Execution Gap
Good Insights, Bad Outcomes
There is a painful truth that the research industry rarely discusses openly: sometimes the insight is right, but the outcome is still wrong. A perfectly validated consumer insight can be undermined by poor execution: the in-store experience, the sales team’s behavior, the pricing decision, the competitor’s response.
Case Study:
Mosquito Magnet
American Biophysics’ Mosquito Magnet had a genuine, validated insight: the West Nile virus scare had elevated mosquitoes from irritating nuisances to life-threatening disease carriers. The product, which used carbon dioxide to lure mosquitoes into a trap, quickly became a top seller, generating $70 million in annual revenue.
But when the company expanded manufacturing from a low-volume Rhode Island facility to a mass-production plant in China, quality dropped catastrophically. Consumers revolted. American Biophysics was eventually sold for the bargain-basement price of $6 million. The insight was right. The execution destroyed the business.
Source: Harvard Business Review, April 2011
Case Study:
Microsoft Vista
Microsoft launched Windows Vista in 2007 with $500 million in marketing spend and predicted that 50% of users would run the premium edition within 2 years. The software had so many compatibility and performance problems that even Microsoft’s most loyal customers revolted. Apple’s “I’m a Mac” campaign amplified the negative perception.
The lesson is clear: no amount of marketing budget can overcome a product that does not deliver on its promise. The insight about what consumers wanted was not wrong — the execution simply failed to match it.
Source: Harvard Business Review, April 2011
The insight is necessary but not sufficient. It must be paired with execution capability: the ability to translate what you have learned about the consumer into a product, a service, and an experience that consistently delivers on the promise the insight identified.
09. The Redex Perspective
From Insight to Outcome
At Redex, we believe that consumer insights are the foundation of every successful product, service, and digital experience. But we also believe (and the evidence overwhelmingly supports this) that insights alone are not enough. The gap between a validated insight and a successful market outcome is filled by execution: the technology, the processes, the teams, and the discipline to deliver on what the research revealed.
This is why our positioning as an end-to-end transformation partner matters. We advise. We build. We do both. We help organizations design the research framework, validate the insights, and then build the digital platforms, AI systems, and operational processes that turn those insights into measurable business outcomes.
We are tech-agnostic. We use the right tools for the right problem, not the tools that generate the highest vendor commission. And we measure everything, because an insight that cannot be measured after implementation is an insight that cannot be improved.
Key Takeaways
- Apply the 80/20 principle: concentrate research investment on the projects where the cost of being wrong is highest.
- An insight that has not been validated midway through the project is not an insight. It is a hypothesis.
- SMEs can find genuine insights through social listening, community engagement, and structured reading. Budget is not the barrier, discipline is.
- Use the scientific method: Question → Hypothesis → Conclusion → Attempt to Disprove. If you cannot disprove it, you may have found something real.
- Guard against sampling bias: wrong audience, wrong geography, and wrong timing can invalidate an entire research program.
- The insight is necessary but not sufficient. Execution must match the quality of the research.
REFERENCES
- Christensen, C. Harvard Business School, Product failure rates
- Harvard Business Review. “Why Most Product Launches Fail,” April 2011
- Gartner. Data quality and decision error costs, 2018
- Grand View Research. Insights-as-a-Service market forecast, 2024–2030
- Trout, J. Consumer shopping habit research
- Onclusive. Social media as consumer research, 2025
- QRCA Views. Geographic selection bias in research, 2024
- Thrive Thinking. Insight validation frameworks, 2024
- Research and Markets. Market research services forecast, 2025–2026