01. EXECUTIVE SUMMARY
The Map You Don't Have
In our previous article, “Does AI Make Your Employees Experts?”, we introduced the concept of the jagged technological frontier: the irregular, unpredictable boundary between tasks where AI excels and tasks where it actively degrades performance. The research from Harvard Business School and BCG was unambiguous: when 758 consultants used GPT-4 on tasks inside the frontier, they saw a 25% speed increase and 40% quality improvement. But on tasks outside the frontier, AI users performed 19 percentage points worse than those working without AI.
The implication is clear: every organization has a unique jagged frontier, shaped by its industry, workflows, data maturity, and the specific AI tools it deploys. Yet McKinsey’s 2025 State of AI survey found that 88% of companies now use AI while only 6% achieve significant value. The gap is not a technology problem but a mapping problem. Most organizations are deploying AI blindly, without knowing where their frontier actually lies.
This article provides the practical framework for fixing that. As the operational companion to that strategic overview, it lays out a step-by-step protocol for auditing your organization’s tasks, scoring them against AI suitability, and building a living map that guides every AI investment decision. The frameworks draw on research from Harvard, MIT, BCG, and UX Matters, adapted for enterprise implementation.
- 88%: Companies using AI
- 6%: Achieving real value
- 19pp: Worse outside the frontier
- 5: Steps to map it
02. The Problem
Why You Need a Frontier Map
Organizations without a frontier map make three predictable mistakes. First, they deploy AI on high-visibility tasks rather than high-suitability tasks: choosing use cases that impress the board rather than ones where AI genuinely outperforms humans. Second, they fail to protect tasks that sit outside the frontier, allowing AI to quietly erode quality in areas where human judgment is irreplaceable. Third, they treat the frontier as static, never updating their understanding as AI capabilities evolve.
The cost of these mistakes is not hypothetical. The Harvard/BCG study found that consultants who used AI on outside-frontier tasks were not just slightly worse. They were confidently worse. AI gave them fluent, professional-looking outputs that masked fundamental errors. In a consulting context, that means delivering polished recommendations built on flawed analysis. In healthcare, it could mean confident misdiagnosis. In finance, it could mean well-formatted reports with incorrect risk assessments.
The Confidence Trap
Microsoft Research (2022, 2025) documented a phenomenon called “automation-induced complacency”: when AI produces fluent, well-structured outputs, humans become less likely to verify them. A 2025 Harvard study confirmed that students using AI for research tasks showed measurably weaker critical thinking over time. The better AI gets at producing plausible outputs, the more dangerous it becomes on tasks where it lacks genuine competence.
A frontier map does not just tell you where to use AI. It tells you where not to use it, and that second insight is often more valuable than the first. As Victor Yocco wrote in the AI Value Rubric framework: “The most effective AI strategy involves mastering the power of no.”
"The most effective AI strategy involves mastering the power of no. Move the conversation from 'Can we build this with AI?' to 'Should we build this with AI?'"
— Victor Yocco, UX Matters, December 2025
03. The Approach
The 5-Step Frontier Discovery Protocol
Mapping your jagged frontier is not a one-day workshop but a structured discovery process that takes 4–8 weeks depending on organizational complexity. The protocol below synthesizes research-backed methodologies from the Harvard/BCG study, the AI Value Rubric, and enterprise AI readiness frameworks into a single actionable sequence.
Step 1
Task Inventory
Decompose every role in a department into discrete, observable tasks. A marketing manager’s role might break into 40–60 distinct tasks: writing campaign briefs, analyzing performance data, reviewing creative assets, managing vendor relationships, presenting to stakeholders, etc. Be granular. ‘Writing’ is too broad; ‘drafting initial email copy for product launches’ is the right level. Aim for tasks that take 15 minutes to 4 hours.
Step 2
AI Experiment
For each inventoried task, run a controlled test: have the same person complete the task both with and without AI assistance, or compare outputs from AI-assisted and unassisted team members. The Harvard/BCG study used 18 realistic consulting tasks; you should aim for at least 15–20 tasks per role. Document completion time, output quality, and the worker’s confidence level for each attempt.
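The paired trials from Step 2 can be captured in a simple log. The sketch below is a minimal illustration in Python; the task names and numbers are invented for demonstration, not data from the study. It computes, per task, how much time AI saved and how quality changed:

```python
from statistics import mean

# Hypothetical trial records: each task attempted with and without AI.
# Task names, times, and quality ratings are illustrative only.
trials = [
    {"task": "draft campaign brief",    "ai": True,  "minutes": 22, "quality": 4.2},
    {"task": "draft campaign brief",    "ai": False, "minutes": 35, "quality": 3.6},
    {"task": "diagnose churn anomaly",  "ai": True,  "minutes": 18, "quality": 2.1},
    {"task": "diagnose churn anomaly",  "ai": False, "minutes": 40, "quality": 3.8},
]

def deltas(trials):
    """Per task: time saved and quality change when AI assistance is used."""
    out = {}
    for task in sorted({t["task"] for t in trials}):
        ai = [t for t in trials if t["task"] == task and t["ai"]]
        no = [t for t in trials if t["task"] == task and not t["ai"]]
        out[task] = {
            "time_saved_min": mean(t["minutes"] for t in no) - mean(t["minutes"] for t in ai),
            "quality_delta": mean(t["quality"] for t in ai) - mean(t["quality"] for t in no),
        }
    return out
```

In this invented sample, the AI-assisted brief is both faster and better (inside the frontier), while the churn diagnosis is faster but markedly worse: exactly the "confidently worse" pattern the protocol is designed to surface.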
Step 3
Blind Quality Assessment
Have domain experts evaluate the outputs without knowing which were AI-assisted. This is critical. The BCG study found that AI outputs often ‘look’ more polished, creating a halo effect that masks substantive errors. Use a standardized rubric with dimensions for accuracy, completeness, originality, and practical applicability. Rate each on a 1–5 scale.
Step 4
Frontier Classification
Based on experiment results, classify each task into one of three zones: Inside the Frontier (AI improves both speed and quality), Outside the Frontier (AI degrades quality or introduces errors), or Edge of the Frontier (mixed results, context-dependent). The ‘edge’ category is important: these tasks require the most careful human-AI collaboration design.
Step 5
Pattern Recognition
Analyze what makes tasks fall inside vs. outside your frontier. Look for patterns in task characteristics: Does AI struggle with tasks requiring organizational context? Tasks with ambiguous success criteria? Tasks requiring multi-stakeholder judgment? These patterns become your organization’s ‘frontier rules’: heuristics that help classify new tasks without running full experiments.
04. Framework
The Task Scoring Framework
Not every task warrants a full controlled experiment. For rapid triage, we recommend a six-dimension scoring framework adapted from the AI Value Rubric (Yocco, 2025) and the Harvard/BCG frontier research. Score each task on a 1–5 scale across all six dimensions, then plot the results to identify your highest-value AI opportunities.
| Dimension | 1 | 3 | 5 |
| --- | --- | --- | --- |
| Frequency | Annual / rare | Weekly | Daily / hourly |
| Time / Effort | Minutes, trivial | 30–60 min, moderate | Hours / days per instance |
| Business Impact | Minimal effect | Moderate efficiency gain | Revenue / compliance critical |
| Ambiguity | Clear rules, deterministic | Some judgment needed | Highly subjective, contextual |
| Error Cost | Easily reversible | Moderate rework | Irreversible / reputational |
| AI Aptitude | Rule-based solution better | AI adds partial value | Requires synthesis & reasoning |
The first three dimensions (Frequency, Time/Effort, and Business Impact) measure the value of automating the task. The last three (Ambiguity, Error Cost, and AI Aptitude) measure the risk and feasibility of AI involvement. A task that scores high on value dimensions but also high on Ambiguity and Error Cost is a classic “edge of frontier” task that requires careful human-AI collaboration design rather than simple delegation.
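The value/risk split can be made explicit in code. This is a minimal sketch: the `TaskScore` structure and the `needs_design` threshold values are my own illustrative choices, not part of the published framework. The sample scores are those of the "draft client strategy recommendations" task from the worked example later in the article:

```python
from dataclasses import dataclass

@dataclass
class TaskScore:
    """Six-dimension task score, each dimension on a 1-5 scale."""
    frequency: int
    effort: int
    impact: int
    ambiguity: int
    error_cost: int
    ai_aptitude: int

    @property
    def value(self) -> int:
        # First three dimensions: the value of automating the task.
        return self.frequency + self.effort + self.impact

    @property
    def risk(self) -> int:
        # Ambiguity and error cost: the risk of AI involvement.
        return self.ambiguity + self.error_cost

    @property
    def total(self) -> int:
        return self.value + self.risk + self.ai_aptitude

# Scores from the worked example's "draft client strategy recommendations" task.
task = TaskScore(frequency=3, effort=5, impact=5,
                 ambiguity=4, error_cost=4, ai_aptitude=4)

# High value AND high risk -> classic "edge of frontier" task that needs
# human-AI collaboration design. Cut-offs here are illustrative assumptions.
needs_design = task.value >= 10 and task.risk >= 7
```

Keeping value and risk as separate sub-scores, rather than collapsing them into the total, is what lets the framework flag high-total tasks that still should not be delegated.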
Interpreting the Scores
- Total 24–30: Prime AI candidate, provided the score is driven by high frequency, high effort, and strong AI aptitude rather than by Ambiguity and Error Cost. These are your “Agentic Stars” — tasks where AI delivers clear, measurable ROI.
- Total 18–23: Promising but requires design. The task has value but may have elevated ambiguity or error cost. Implement with human-in-the-loop safeguards.
- Total 12–17: Marginal. The effort of AI implementation may not justify the return. Consider simpler automation or traditional software solutions.
- Total 6–11: Not an AI use case. Either the task is too rare to justify investment, or the risks outweigh the benefits. Keep human-only.
05. Score Explanation
The 4-Zone Frontier Map
Once you have scored your tasks, plot them on a 2-axis map. The horizontal axis represents AI Suitability (a composite of AI Aptitude minus Ambiguity and Error Cost). The vertical axis represents Business Value (a composite of Frequency, Time/Effort, and Business Impact). This creates four distinct zones, each requiring a different AI strategy.
Automate Zone
High Value + High AI Suitability
Repetitive, rule-based, high-volume tasks where AI consistently outperforms humans. Delegate fully with periodic quality audits. Examples: data entry reconciliation, standard report generation, email classification.
Accelerate Zone
High Value + Moderate AI Suitability
Structured tasks with moderate complexity where AI provides a strong first draft that humans refine. The “centaur” pattern: AI handles the heavy lifting, humans add judgment. Examples: market analysis drafts, code scaffolding, proposal outlines.
Augment Zone
High Value + Low AI Suitability
Complex, judgment-heavy tasks where AI serves as a research assistant or sounding board, but humans drive the work. The “cyborg” pattern: tight integration with constant human oversight. Examples: strategic planning, client negotiations, creative direction.
Protect Zone
Any Value + Negative AI Impact
Tasks where AI involvement actively degrades performance. The “outside the frontier” territory. Keep humans fully in charge. Examples: ethical judgment calls, relationship-dependent negotiations, novel problem diagnosis, organizational politics.
The key insight from the Harvard/BCG research is that the boundary between these zones is not intuitive. Tasks that seem simple may fall outside the frontier, while tasks that seem complex may fall inside it. That is precisely why the frontier is “jagged” and why empirical testing, not assumption, must drive your classification.
"The frontier is jagged. Some tasks that appear complex are easily handled by AI, while seemingly simple tasks can fall outside its capabilities. The boundary cannot be predicted from task difficulty alone."
— Dell'Acqua et al., Organization Science, 2026
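The two-axis zone logic can be sketched as a small classifier. The composite axes follow the article's definitions (AI Suitability = AI Aptitude minus Ambiguity and Error Cost; Business Value = Frequency + Time/Effort + Business Impact), but the band edges are illustrative assumptions of mine, calibrated so that the worked example in the next section lands in its stated zones:

```python
def classify_zone(frequency, effort, impact, ambiguity, error_cost, ai_aptitude):
    """Map a scored task (six 1-5 dimensions) to a frontier zone.

    Band edges are illustrative assumptions, not values from the research.
    """
    suitability = ai_aptitude - ambiguity - error_cost   # range -9 .. +3
    value = frequency + effort + impact                  # range  3 .. 15
    if value < 9:
        return "Not an AI use case"   # too rare / low impact to justify investment
    if suitability >= 0:
        return "Automate"             # AI consistently outperforms; delegate + audit
    if suitability >= -5:
        return "Accelerate"           # AI drafts, human refines (centaur)
    if suitability >= -7:
        return "Augment"              # human leads, AI assists (cyborg)
    return "Protect"                  # AI involvement degrades performance

# The three worked-example tasks, in score order (freq, effort, impact, ambig, error, apt):
print(classify_zone(4, 5, 3, 2, 2, 5))   # earnings-call summaries
print(classify_zone(3, 5, 5, 4, 4, 4))   # strategy recommendation drafts
print(classify_zone(2, 5, 5, 5, 5, 2))   # M&A readiness assessment
```

Because Ambiguity and Error Cost subtract from suitability, a task with maximal risk scores is pushed toward Protect no matter how high its total.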
06. Example
Practical Scoring: A Worked Example
To illustrate how the scoring framework works in practice, consider a mid-size consulting firm mapping the frontier for its strategy team. Below are three tasks from the same role, each landing in a different zone.
| Task | Freq | Effort | Impact | Ambig | Error | AI Apt | Total | Zone |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Summarize competitor earnings calls | 4 | 5 | 3 | 2 | 2 | 5 | 21 | Automate |
| Draft client strategy recommendations | 3 | 5 | 5 | 4 | 4 | 4 | 25 | Accelerate |
| Assess organizational readiness for M&A | 2 | 5 | 5 | 5 | 5 | 2 | 24 | Protect |
Notice that the M&A readiness assessment scores 24 total, higher than the earnings call summary at 21. But it lands in the Protect Zone because its Ambiguity (5) and Error Cost (5) scores are maximal while its AI Aptitude (2) is low. A naive “highest total score = best AI candidate” approach would have prioritized the wrong task. The framework’s multi-dimensional structure prevents this mistake.
The strategy recommendations task is the most interesting case. It scores high across the board, including high ambiguity and error cost. But its AI Aptitude is also high (4), because AI excels at synthesizing large volumes of research into structured recommendations. This is a classic Accelerate Zone task: AI produces a strong first draft that a senior consultant then refines, challenges, and contextualizes with client-specific knowledge.
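The naive-total pitfall is easy to verify in a few lines, using the scores from the table above:

```python
# The three worked-example tasks: (name, freq, effort, impact, ambig, error, aptitude)
tasks = [
    ("Summarize competitor earnings calls",     4, 5, 3, 2, 2, 5),
    ("Draft client strategy recommendations",   3, 5, 5, 4, 4, 4),
    ("Assess organizational readiness for M&A", 2, 5, 5, 5, 5, 2),
]

totals = {name: sum(scores) for name, *scores in tasks}

# Ranking by total score alone puts the M&A assessment (24) ahead of the
# earnings-call summary (21), even though the former belongs in the Protect
# Zone: the total hides the maximal Ambiguity and Error Cost scores.
by_total = sorted(totals, key=totals.get, reverse=True)
```

This is why the framework plots value and suitability as separate axes rather than ranking tasks on a single number.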
07. Collaboration Pattern
Choosing Your Collaboration Pattern: Centaur or Cyborg?
The Harvard/BCG study identified two distinct patterns among the highest-performing AI users. Centaurs maintain a clear division of labor. They strategically decide which tasks to hand to AI and which to keep for themselves, switching between human and machine work like the mythical half-human, half-horse creature. Cyborgs blend human and AI work at a much more granular level, moving back and forth across the frontier within a single task: starting a sentence for AI to complete, or using AI to validate a human-generated hypothesis.
- The Centaur Pattern
Clear handoffs between human and AI. Best for tasks where the human and AI contributions are distinct and separable.
- AI drafts → Human reviews and refines
- Human designs strategy → AI executes analysis
- AI gathers research → Human synthesizes insights
- Human sets criteria → AI scores and ranks options
Best for: Automate and Accelerate Zone tasks
- The Cyborg Pattern
Deep interweaving of human and AI at the sub-task level. Best for complex, creative, or judgment-heavy work.
- Human writes opening → AI continues → Human edits → AI refines
- AI generates options → Human selects → AI elaborates → Human validates
- Human hypothesizes → AI stress-tests → Human adjusts → AI documents
- AI identifies patterns → Human interprets → AI visualizes → Human narrates
Best for: Augment Zone and Edge-of-Frontier tasks
Ethan Mollick, the Wharton professor who co-authored the BCG study, recommends that most knowledge workers develop both patterns and switch between them based on the task. The centaur pattern is faster and more efficient for well-understood tasks inside the frontier. The cyborg pattern is more appropriate for edge-of-frontier tasks where the human needs to maintain active judgment throughout the process. Neither pattern works for Protect Zone tasks. Those should remain human-only.
08. Common patterns
Common Frontier Patterns Across Industries
While every organization’s frontier is unique, our research and client experience reveal consistent patterns in what falls inside versus outside the frontier. These patterns can serve as starting hypotheses for your own mapping exercise, but they must be validated empirically.
| Typically Inside Frontier | Typically Outside Frontier |
| --- | --- |
| Summarizing large documents or datasets | Assessing organizational culture or politics |
| Generating first drafts of structured content | Making ethical or moral judgment calls |
| Code generation for well-defined functions | Debugging novel, system-level architecture issues |
| Data cleaning and transformation | Interpreting ambiguous stakeholder requirements |
| Competitive intelligence gathering | Building trust in sensitive client relationships |
| Standard financial modeling and projections | Assessing counterparty risk in novel situations |
| Translating content between languages | Adapting messaging to local cultural nuances |
| Generating test cases from specifications | Prioritizing features based on strategic vision |
The Pattern Behind the Patterns
Tasks that fall inside the frontier share common characteristics: they involve processing structured or semi-structured information, they have clear success criteria, and the “right answer” can be verified objectively. Tasks that fall outside share different traits: they require organizational context that AI lacks, they involve navigating human relationships, or their success depends on judgment that cannot be reduced to rules.
The critical insight: task complexity is not the dividing line. Some highly complex tasks (like synthesizing 200 pages of research into a structured brief) fall inside the frontier, while some seemingly simple tasks (like deciding which stakeholder to consult first) fall outside it.
09. Red flags
7 Warning Signs a Task Is Outside Your Frontier
During your mapping exercise, watch for these red flags. When a task exhibits two or more of these characteristics, it is likely outside your frontier and should be classified in the Protect Zone until empirically proven otherwise.
01
The task requires information AI doesn't have
Organizational history, unwritten cultural norms, relationship dynamics between specific people, or confidential context that was never documented.
02
"Confidently wrong" outputs are dangerous
If AI produces a plausible but incorrect output and the downstream cost is high (misdiagnosis, bad investment, legal liability), the task needs human primacy.
03
Success criteria are subjective or political
When “good” depends on who is evaluating, what their priorities are, or how the output will be received by multiple stakeholders with competing interests.
04
The task builds or depends on trust
Client relationships, team morale, stakeholder buy-in. These require emotional intelligence and authentic human connection that AI cannot replicate.
05
Novelty is the point
When the task requires genuine creative breakthrough, not recombination of existing patterns. AI excels at sophisticated pattern matching; it does not innovate from first principles.
06
The task involves ethical or moral reasoning
Decisions about fairness, equity, harm, or values require human moral agency. Delegating these to AI creates accountability gaps.
07
Historical patterns are unreliable guides
In genuinely novel situations (new markets, unprecedented crises, paradigm shifts), AI’s reliance on training data becomes a liability rather than an asset.
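The two-or-more rule above can be expressed as a simple triage helper. The flag labels below paraphrase the seven warning signs; the function and its return strings are my own illustrative naming:

```python
RED_FLAGS = {
    "requires information AI doesn't have",
    "confidently-wrong outputs are dangerous",
    "success criteria are subjective or political",
    "builds or depends on trust",
    "novelty is the point",
    "involves ethical or moral reasoning",
    "historical patterns are unreliable guides",
}

def triage(flags_present: set) -> str:
    """Two or more red flags -> Protect Zone until empirically proven otherwise."""
    unknown = flags_present - RED_FLAGS
    if unknown:
        raise ValueError(f"unrecognized flags: {unknown}")
    if len(flags_present) >= 2:
        return "Protect (pending experiment)"
    return "Proceed to full scoring"
```

A helper like this is deliberately conservative: it only routes tasks away from AI pending evidence, never toward it.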
10. The Playbook
Building Frontier Literacy Across Your Organization
A frontier map is only useful if the people making daily AI decisions can read it. Frontier literacy, the ability to intuitively sense whether a task is inside or outside the frontier, must become a core organizational competency. This is not about teaching everyone to use ChatGPT. It is about teaching everyone to judge when to use it and when not to.
Level 1
Awareness (All Employees)
Understand that the frontier exists and is jagged. Know that AI excels at some tasks and fails at others, and that the boundary is not intuitive. Be able to identify the four zones (Automate, Accelerate, Augment, Protect) and give examples from their own work.
Level 2
Application (Team Leads & Managers)
Able to score tasks using the six-dimension framework. Can design appropriate human-AI collaboration patterns (centaur vs. cyborg) for their team’s tasks. Know how to run controlled experiments to test frontier boundaries. Can identify the seven warning signs of outside-frontier tasks.
Level 3
Architecture (AI Champions & Executives)
Can design organization-wide frontier maps across departments. Understand how the frontier shifts as AI capabilities evolve and can anticipate which currently-outside tasks may move inside. Can build governance frameworks that protect the Protect Zone while accelerating adoption in the Automate and Accelerate Zones.
The Harvard/BCG study found that the highest-performing AI users were not the most technically skilled. They were the ones with the best judgment about when to use AI. Building this judgment at scale is the single highest-leverage investment an organization can make in its AI strategy.
"The best AI users are not the most technically skilled — they are the ones who have developed the best judgment about when to delegate to AI and when to rely on their own expertise."
— Ethan Mollick, Wharton School, 2023
11. The Living Map
Keeping Your Frontier Current
The jagged frontier is not static. Every major AI model release shifts the boundary. GPT-4 moved tasks inside the frontier that GPT-3.5 could not handle. Multimodal capabilities brought image analysis and document understanding inside. Agentic AI is now moving multi-step workflows inside. Your frontier map must evolve with the technology.
Recommended Review Cadence
01
Quarterly
Review edge-of-frontier tasks. Test whether recent AI improvements have moved any into the Automate or Accelerate zones. Update scoring for tasks where team feedback suggests the classification has shifted.
02
Biannually
Full re-scoring of all inventoried tasks. Run fresh controlled experiments on a sample of tasks from each zone. Update the frontier map and redistribute to all teams.
03
On Major AI Release
When a significant new model or capability launches, immediately re-test Protect Zone and Augment Zone tasks. These are the most likely to shift categories with capability improvements.
Gartner predicts that by 2029, agentic AI will autonomously resolve 80% of common customer service issues. That means tasks currently in the Augment Zone for customer service teams may migrate to the Automate Zone within three years. Organizations that maintain living frontier maps will capture this value first; those that treat their initial map as permanent will be left behind.
12. The RedEx Perspective
Frontier Mapping in Practice
At RedEx Consulting, we have guided multiple organizations through frontier mapping exercises across industries including energy, financial services, healthcare, and technology. Three lessons emerge consistently from this work.
First, start with one department, not the whole organization. A single team’s frontier map takes 4–6 weeks to build properly. Attempting to map the entire organization simultaneously produces superficial results. Start with a high-stakes department, one where AI is already being used informally, and build depth before breadth.
Second, the Protect Zone is your most important discovery. Every client we have worked with has found tasks in the Protect Zone that they were already delegating to AI. These are the hidden risks, the places where AI is quietly degrading quality while producing outputs that look polished. Identifying and correcting these is often the single highest-ROI outcome of the mapping exercise.
Third, the map changes the conversation. Before frontier mapping, AI discussions in most organizations are polarized. Enthusiasts who want to automate everything versus skeptics who resist all change. The frontier map replaces ideology with evidence. It gives both camps a shared vocabulary and a data-driven basis for decisions. That organizational alignment is often more valuable than any individual task optimization.
Key Takeaways
- Every organization has a unique jagged frontier: the boundary between tasks where AI helps and tasks where it hurts. You cannot predict it from intuition; you must discover it empirically.
- Use the 5-Step Frontier Discovery Protocol: Task Inventory → AI Experiment → Blind Quality Assessment → Frontier Classification → Pattern Recognition.
- Score tasks across 6 dimensions: Frequency, Time/Effort, Business Impact, Ambiguity, Error Cost, and AI Aptitude. High value + high risk tasks need human-AI collaboration design, not simple delegation.
- Classify tasks into 4 zones: Automate (full AI), Accelerate (AI drafts, human refines), Augment (human leads, AI assists), and Protect (human only).
- Choose collaboration patterns deliberately: Centaur (clear handoffs) for Automate/Accelerate tasks, Cyborg (deep interweaving) for Augment/Edge tasks, and no AI for Protect tasks.
- The Protect Zone is your most important discovery. It reveals where AI is quietly degrading quality behind polished outputs.
- Treat your frontier map as a living document. Review quarterly, re-score biannually, and re-test on every major AI release.
REFERENCES
- 2026. Dell’Acqua, F. et al. “Navigating the Jagged Technological Frontier,” Organization Science.
- 2023. Mollick, E. “Centaurs and Cyborgs on the Jagged Frontier,” One Useful Thing.
- 2023. Noy, S. & Zhang, W. “Experimental Evidence on the Productivity Effects of Generative AI,” Science.
- 2026. Mollick, E. “Four Guiding Principles for Using AI at Work,” Big Think.
- 2025. Torres, T. “How to Choose Which Tasks to Automate with AI,” Product Talk.
Ready to map your organization's jagged frontier?
We run structured frontier mapping engagements that give your leadership team a clear, evidence-based view of where AI creates value and where it creates risk. Start with one department. See results in 6 weeks.