May 13, 2026

Beyond Base Camp: A Framework for Thinking Deliberately About Generative AI Adoption

Strategic Perspective

Of late, I have been thinking a good bit about how generative AI will change knowledge work: its anticipated impact on the way we work as individuals, and how organizations can integrate this technology into their workflows more strategically.

As companies everywhere scramble to harness the power of generative AI, how we adopt this transformative technology becomes a critical question. Are leaders tapping into its potential with a clear vision to unlock new capabilities or merely reacting to the hype and FOMO, desperate to gain a competitive edge?

The evidence suggests the latter is distressingly common. Examples abound: mental wellness AI chatbots pulled down by their creators after they recognized the technology wasn’t ready to safely help people in crisis; publishers retracting books and journals investigating waves of papers with references fabricated by AI; publicly traded companies that had gone all-in on AI eventually deciding they needed to put some effort into hiring humans, even if just to reassure customers that “there will be always a human if you want”. Despite many organizations experimenting with common AI tools such as ChatGPT and Copilot, or exploring flashier use cases, few have successfully integrated AI into routine workflows at scale (just 5% according to this MIT report, or 16% based on this IBM CEO study). These are symptoms of a deeper problem: organizations chasing efficiency without forethought and attempting to extract cost and speed savings without investing in quality.

This article introduces a framework for approaching this challenge from a perspective that balances optimizing existing work and creating new forms of value.

A quick note: while many analogies and examples are drawn from fields I work in and am therefore familiar with, I believe the key elements hold true and are transferable across domains.


The Holy Grail of Optimization

If you have led projects in any sort of corporate environment, you are likely familiar with the Holy Trinity of project management: cost, time, and quality. You have probably also heard of the “Pick Two” rule: the idea that you can optimize for any (one or) two of these parameters, but not all three simultaneously. This thinking has governed operational wisdom for decades.

So it is natural that when organizations encounter a new technology with transformative potential, they typically frame its value in these terms. The Holy Grail then, according to this thinking, is achieving optimization across all three.

If the vast sums of capital being poured into generative AI ventures and the astronomical valuations of AI companies are any indication, there is strong belief across industries that the many prayers at the altar of optimization have finally been answered, because AI can make work:

  • Faster: Process acceleration, compressing timelines
  • Cheaper: Resource optimization, doing more with less
  • Better: Quality enhancement, improving outputs

Based on the dizzying speed at which the abilities of generative AI models have evolved in just the past 3–4 years, we know that this technology is capable of creating substantial gains across each of these parameters. The important question becomes how and where organizations choose to allocate those gains.

Where Does the AI Surplus Go?

Consider a project that costs $15,000, takes six weeks, and delivers a certain level of quality under a baseline scenario. Let’s say this is what it takes for skilled humans to do good work, without AI support. Effective use of AI can potentially improve one or more of these factors, resulting in overall gains compared to this baseline: we will call this the AI surplus. Where you invest that surplus is a strategic choice. And as you may already suspect, not all choices are equal.

Figure 1 · Allocation Choices

AI creates a surplus on top of the human-quality baseline. How you allocate it is a strategic choice; not all combinations are equal.

  • Race to the Bottom (risk): Cheaper + Faster, with quality staying at baseline (at best). All gains go to cost and speed. Quality receives no investment; under pressure, it degrades. Volume crowds out value at scale.
  • Quality-Led Acceleration (pattern): Faster + Better, with cost staying at baseline. Same investment; faster delivery and better output. AI handles pattern recognition; experts add judgment, context, strategic framing.
  • Capability-Led Efficiency (pattern): Cheaper + Better, with the timeline staying at baseline. Same timeline; lower cost and/or enhanced value, with quality preserved through expert workflows.

A note on “Cheaper”: it can mean two very different things. Strategic: AI augments expert workflows so the team handles more, frees experts for higher-value work, or reduces cost through smarter allocation. Extractive: AI replaces people, with the savings extracted as headcount reduction. The geometry alone doesn’t tell you which; the Implementation Spectrum (tactical vs. strategic execution) does.

Based on the Pick Two rule, an organization could choose to allocate the gains from AI in various ways: it could use the surplus to improve cost and speed while quality stays at baseline (at best), use it to achieve faster AND better output at the same cost, or attempt to reduce costs AND/OR improve quality while maintaining the same timeline.

The Implementation Spectrum

Within each parameter, the implementation of AI can range from tactical to strategic. This is not simply about which parameters you are optimizing, but rather about how thoughtfully you execute.

Figure 2 · Depth of Execution

Within each parameter, implementations range from tactical to strategic. The spectrum shows how thoughtfully you execute.

  • Tactical: immediate returns; quick wins, proof of concept, responses to specific pain points. Valid when building organizational buy-in or testing feasibility. The danger: when tactical becomes the ceiling rather than a starting point; surface polish without substance, speed without safeguards, cost cuts without quality investment.
  • Strategic: long-term value; expert integration, quality preservation, sustainable processes. Looks like: AI augments expert workflows, feedback loops compound learning over time, and governance, security, and compliance are treated as features.

Tactical implementations focus on immediate returns, often in response to urgent pressures. They serve legitimate purposes and address specific pain points: for example, proof-of-concept initiatives can test feasibility, and quick wins in pilot projects can build organizational buy-in. The danger is when tactical becomes the ceiling rather than a starting point.

Strategic implementations orient toward long-term value: expert integration, quality preservation, sustainable processes. They require more intentionality but create differentiation that compounds over time. Strategic doesn’t mean flashy: some of the most effective approaches are deliberately unexciting, prioritizing governance and compliance over novelty.

Figure 3 · How They Connect

Geometry × Spectrum: the optimization triangle (Faster, Cheaper, Better) shows which parameters you are investing in (breadth); the implementation spectrum (tactical to strategic) shows how thoughtfully you are executing (depth). Together they reveal your true strategic position.

The Race to the Bottom

So, what happens when organizations allocate all gains to cost and speed, with nothing going to quality?

Once AI enables the $15,000 deliverable for $10,000, a competitor offers it for $8,000, then $6,000; and tolerance for quality degradation rises as the price drops. This phenomenon is not unique to AI; it happens with any efficiency-enabling technology. But due to AI’s ability to produce output quickly at scale, and the rapid pace of adoption, it can accelerate this race to the bottom dramatically.

The attention economy we live in today incentivizes this phenomenon, with websites and social media feeds being flooded by mass-produced “AI slop” that’s quickly crowding out other content. And it isn’t just random blogs or LinkedIn posts: by mid-2025, 71% of social media content and 35% of new websites were AI-generated, and I would imagine those numbers have only gone up since. As I mentioned earlier, scientific publishing has fared no better, with AI hallucinations and fake citations increasingly polluting the evidence base. A Nature analysis estimated that tens of thousands of 2025 publications may include AI-fabricated references. Springer Nature retracted a machine learning textbook in 2025 after reviewers found that the majority of its cited references couldn’t be verified. At top AI conferences like NeurIPS and ICLR, reviewers flagged hundreds of hallucinated citations slipping through peer review. These are prestigious publishers, journals, and conferences: sources that researchers and practitioners rely on during high-stakes decision-making.

Many of the botched AI implementation efforts follow an almost predictable pattern: companies aggressively lean on AI to find efficiencies and cut costs, armed with little evidence and a lot of wishful thinking; things go wrong; customer trust erodes; and the “savings” turn out to have been borrowed against the future.

The words of the Klarna CEO after the company’s ill-fated effort to replace human agents in customer service are telling: “Cost, unfortunately, seems to have been a too predominant evaluation factor when organizing this. What you end up having is lower quality.” Talk about buy now, pay later.

Quality: The Integrity Dimension

This reveals something important about allocation choices: not all combinations are equal. Quality functions as the integrity dimension: when it receives investment, value is created or preserved; when it doesn’t, value erodes.

Combinations that maintain quality at or above baseline while improving other parameters can produce real results. However, quality requires investment: attention to detail, subject matter expertise, adequate time for review.

When AI-driven efficiency gains focus on making things quicker and cheaper, with limited investment into making the products and services better, quality often degrades below the previous baseline, because speed and cost pressures crowd out the care required for good work.

This is not to say that “Faster” or “Cheaper” are inherently bad. There are scenarios where you might want to leverage AI to optimize along these parameters for sound tactical reasons: for instance, implementing a pilot workflow to demonstrate a successful use case as a quick win to build organizational buy-in for broader adoption. The danger lies in mistaking the tactical for the strategic or letting cost and speed quietly become the only parameters that matter.

Beyond Base Camp: The New Dimension

The pursuit of optimization is about doing existing things better. So, ultimately, however effectively you pursue it, you are still playing within the rules of the same game. This would probably look something like the following:

Figure 4 · Beyond Base Camp

From Base Camp to a New Dimension: a solid Base Camp is the foundation; the ascent rises from it, into a new dimension of value.

  • Level 1, Focused Optimization (Base Camp): surplus to one parameter.
  • Level 2, Dual Integration (Base Camp): surplus to two; which two matters.
  • Level 3, Full Optimization (Base Camp): all three; the pre-AI Holy Grail. Today: Base Camp.
  • Level 4+, Beyond Base Camp (new dimension): the triangle becomes the foundation; new value emerges; the summit is unknown.

If you were to picture the project management triangle as the floor, progressively fuller optimization within this plane establishes what is essentially a solid foundation: let’s call this Base Camp.

Level 1: Focused Optimization

Allocating the AI surplus to one parameter. A single vertex, but potentially powerful with strategic execution.

Level 2: Dual Integration

Allocating the surplus to two parameters. Here, which two matters enormously. Faster + Better with expert oversight creates value; Cheaper + Faster without quality investment creates waste.

Level 3: Full Optimization

The completely optimized triangle. Congrats, you have mastered how to do things well across all three parameters. In the pre-AI world, this was The Holy Grail. Today, this is Base Camp.

Level 4+: Beyond Base Camp

What happens when we add transformational potential to the mix? We unlock the ability to create entirely new forms of value. The Base Camp becomes a foundation. Something new emerges upward. Rather than simply being better at doing things, this becomes about doing better things.

What would this new level look like? Thinking about this, I am drawn to an example from my early days as a wide-eyed medical writer when I used to work on GLP-1 receptor agonists, the class of medicines that weight-loss drugs like Ozempic and Zepbound belong to. Initially, the primary use of GLP-1 RAs wasn’t in weight loss; they were first approved to treat diabetes, and typically prescribed as later-line options (i.e., only after other drugs had failed), because they were competing in a crowded treatment landscape dominated by established therapies. The weight loss was a nice side benefit; so was the protective effect on heart health. As more data came out about the various benefits, guidelines and payers caught on, leading to widespread adoption. These drugs quickly established themselves as the new standard of care in diabetes and obesity, becoming one of the best-performing drug classes of all time. Sure, the near-ubiquitous marketing campaigns and celebrity endorsements played a part in making them household names, but the meteoric rise of these medications was built on decades of foundational research and meticulously designed clinical trials spanning multiple diseases, where each well-timed new finding informed and reinforced the next: cardiovascular signals shaping renal trial design, weight-loss data driving studies in sleep apnea among obese patients. The result was an ecosystem of cardiometabolic benefits that no single trial could have established alone. Grounded in that same underlying biology, drugs targeting the GLP-1 receptor are now approved or being studied in cardiovascular, renal, respiratory, and neurodegenerative conditions. One could even see more than a passing resemblance between the years-long arms race between Lilly and Novo Nordisk to dominate this space and what we are now watching unfold among OpenAI, Anthropic, and Google.

If we were trying to imagine what this kind of moonshot potential might look like in AI, one early example worth mentioning is the startup FutureHouse and its attempt to build an “AI scientist”: a suite of AI agents with skill sets that span every step of the scientific discovery process, from initial literature review and hypothesis generation through experiment design and the eventual publication of the discovery. This ambitious initiative aims to supercharge scientific research in collaboration with human researchers, to achieve the kinds of breakthroughs and discoveries that may require synthesis of vast amounts of literature across multiple disciplines.

It is early days for any AI system attempting work of this scope, and it remains to be seen whether these specific initiatives pan out. Perhaps this reflects the uncharted terrain where we may be headed. To push the mountaineering metaphor a bit further: as we climb, the summit remains unknown, which is the nature of true innovation.

The Human in the Loop

Nothing in the framework above happens without the people who know the work. The optimization, the strategic execution, the ascent into new territory: all of it depends on something AI can surface but cannot supply: judgment.

As agentic AI systems become capable of carrying out increasingly complex tasks and functioning with more autonomy, it may be tempting to think this brings down the curtain on the age of experts, though I believe such predictions are premature, at least in the immediate future. (That is not to say it won’t happen; maybe the small decisions we make in outsourcing our thinking to the machines will add up over time, leading to the slow demise of true expertise, or advanced general intelligence could be the catalyst that brings it to a swift end.) A more likely scenario is that the role of the human expert will simply evolve.

But AI alone often produces undifferentiated output. Several studies suggest that overreliance on generative AI models may push us toward reduced diversity of ideas and more homogeneous ways of communicating and thinking. When everyone resorts to the same playbook, differentiation becomes impossible; and quality and trust, once compromised at scale, are expensive to rebuild. Real people with differing perspectives derived from unique experiences offer something AI can’t, at least not yet: empathy, nuance, and the ability to handle messy situations.

Figure 5 · The Human in the Loop

The Evolution of Expertise: each wave of technology shifted what expertise means without eliminating the need for it.

  • Pre-Internet: raw knowledge (libraries, textbooks, institutional memory). The expert’s edge: knowing the answer. The premium was on what you carried in your head; the expert who had memorized the reference tables, the formulary, the case law was indispensable.
  • Internet & Search: synthesis (search engines, databases, digital archives). The expert’s edge: finding, evaluating, connecting. The premium shifted to navigating abundance; the expert who could synthesize across sources, evaluate quality, spot connections, and build a coherent picture from fragments became far more valuable.
  • Generative AI: augmented intelligence (LLMs, agentic systems, AI-assisted workflows). The expert’s edge: asking the right questions, discerning value. The premium moves to judgment: which synthesis is valuable, when the AI is confidently wrong, and what questions to ask to uncover answers we haven’t considered before.

The takeaway: AI can enable experts to apply what they know at a scale and depth that wasn’t possible before. The human ability to discern when a finding is novel versus when the AI is confidently wrong is what transforms AI output into something worth trusting.

If we look back a few decades, the arrival of the internet and search engines shifted what expertise meant rather than eliminating the need for subject matter expertise. The premium moved from raw knowledge (“knowing the answer”) to synthesis (“knowing how to find, evaluate, and connect the answers”). The expert who had memorized the reference tables wasn’t suddenly irrelevant, but the expert who could synthesize across sources became far more valuable.

Generative AI is bringing about a similar shift. This time, value will come from the ability to discern which synthesis and framing is actually valuable, to use deep domain expertise to separate the wheat from the chaff in AI output, and perhaps most importantly, to ask the right questions to uncover answers we haven’t even considered before.

The key value proposition of AI for organizations, then, is not just about freeing up human time (though that would be nice from the individual perspective) or reducing the cognitive load and drudgery associated with the less fun aspects of our work (that sounds quite nice too); it is also about surfacing patterns and connections across large swathes of data that humans might not be able to see on their own. The human ability, rooted in domain expertise, to discern when a finding is novel versus when the AI is just confidently wrong will help transform AI-surfaced patterns into useful, actionable insights grounded in reality. This augmented intelligence, I believe, will define the future of knowledge work.

Key Principles

Invest in quality

If you are allocating all gains from AI to cost and speed without investing in quality, you are racing to the bottom. Prioritize quality, which means reinvesting in proper guardrails and strategic human oversight of AI.

The new dimension rises from a strong foundation

Harnessing the transformative value of AI will require clear-eyed thinking and deliberate implementation. Build readiness and the breakthrough will follow, even if in ways we did not anticipate.

Deepen before expanding

Strategic implementation along one parameter often beats tactical implementation at two. A strong Base Camp enables the climb.

Human expertise evolves

AI shifts the premium from doing the work to discerning what is worth doing. Human expertise refocuses to where it creates the most value: asking the right questions and recognizing when a confident-sounding answer is wrong.

Conclusion

The framework I have outlined here is a way of looking at where you stand on the spectrum of adopting AI, how thoughtfully you are executing your AI strategy, and whether you are using it to create piecemeal efficiencies and quick wins, or to optimize more deliberately to build foundational workflows that set you up for success and new opportunities.

Cost, time, and quality represent the holy trinity of parameters along which AI can create gains, but where you allocate the AI surplus is a strategic choice that will determine the long-term outcome. Execution matters: tactical implementation has its place, but strategic depth is what compounds over time. And beyond optimization lies a new dimension: the potential to create value that didn’t exist before, not just do existing things better.

Organizations that treat AI purely as a cost-cutting tool are bound to find themselves in a crowded race to the bottom, whereas those that invest in quality and integrate human expertise into how they build strategic depth will be well-placed to establish a strong Base Camp. And for those who build that foundation while remaining alert to transformational opportunity, the climb beyond it becomes possible, toward peaks we may not even be able to see yet.


From doing things better to doing better things.

Like what you see at Sciencera? Please spread the word :)