Three public conversations have stayed with me because they were both controversial and revealing.
One concerned GLP-1 agonists: medications developed for diabetes that moved rapidly into the public imagination as treatments for obesity. Almost overnight, their use became a subject of speculation and moral positioning. People volunteered that they had “never taken Ozempic” as if it were a declaration of character. Medical privacy thinned. Prevention was discounted. Motive was questioned.
The second conversation concerned work.
Hybrid working was introduced at scale through necessity. In many organisations, it functioned better than expected. Productivity held, absenteeism fell, commuting times disappeared and new ways of collaborating emerged. Once the immediate crisis passed, the dominant narrative shifted. Hybrid was reframed as an exception, a pause from normality. The benefits were rarely examined with the same seriousness as the perceived costs.
At the same time, a third conversation accelerated rapidly: artificial intelligence. New, unproven at organisational scale and capital-intensive, it is already reshaping hiring decisions, particularly at the entry level. Here, urgency is replacing caution and investment is flowing quickly. Questions about downstream effects are emerging, but also being deferred.
At first glance, these stories do not belong together. Yet together they trigger a particular discomfort: the kind that appears when different decisions are evaluated by different standards, despite comparable levels of uncertainty.
Suffering as a Gatekeeper: The GLP-1 Asymmetry
We know that smoking causes lung cancer. The evidence is overwhelming. When someone develops lung cancer, we do not interrogate their past behaviour before offering treatment. We do not ask whether they “deserve” treatment. We do not speculate publicly about how long they smoked or whether they tried hard enough to quit. Their privacy is protected and care is delivered. Alongside treatment, as a society, we invest heavily in smoking-cessation programmes, public-health warnings, taxation on tobacco products and smoke-free spaces.
Now contrast this with GLP-1 agonists.
Obesity is a recognised risk factor for type 2 diabetes, cardiovascular disease and metabolic syndrome. GLP-1s reduce appetite, improve glycaemic control and lower future risk. They are prescribed under medical supervision, monitored and associated with reductions in population-level disease burden.
Yet their use attracts disproportionate scrutiny. The general public feel entitled to speculate about who is using them and why. Patients conceal treatment to avoid judgement. Medication has been reframed as “cheating.” The question is not “does this reduce risk?” but “has this person earned the right not to be fat-shamed?”
Here is the asymmetry, stated plainly. For obese patients, suffering is expected. Diet and exercise should hurt. Social judgement should motivate change. GLP-1s disrupt that logic. They make it possible to lose weight without visible suffering, thus breaching the moral contract with society.
HIV is one of the few other illnesses where similar logic has operated. Treatment was once morally conditional on how the illness was acquired. Only when HIV was reframed as a public-health issue rather than a moral failure did investment in research and care become non-negotiable.
Obesity has not yet made that transition.
Reversibility Versus Control: The Hybrid and AI Asymmetry
The same pattern appears in organisations. Hybrid working was implemented rapidly, across sectors and geographies, under conditions of global duress and minimal preparation. It altered long-standing assumptions about presence, supervision and productivity. It redistributed autonomy toward employees. It also generated something rare in management: a large-scale, real-world experiment.
From a strategic perspective, hybrid work had several defining features: it required little capital investment, it was highly reversible and it produced vast amounts of live data. The downside of being wrong was real and could have been destabilising, but hybrid working did not hard-wire organisations into irreversible structural commitments. If leaders chose to redesign, rather than retreat, the core systems of the organisation remained intact.
Once the immediate crisis passed, many organisations rolled hybrid working back without serious retrospective analysis, precisely because it could be abandoned without forcing a structural reckoning. The question was “how do we return to normal?” not “what did we learn?”
Now contrast that with the decision to replace early-career hiring with AI. Early-career roles are how organisations build future capability. They are where people learn how decisions are made, how culture is transmitted, how judgement is formed and how institutional knowledge accumulates.
When those roles are removed, the change looks efficient: costs fall, outputs remain stable, senior teams continue to function. Over time, talent pipelines thin, future leadership cohorts are never formed and the informal knowledge held by employees evaporates. Instead of learning by doing, organisations become increasingly dependent on external systems, vendors and tools they do not fully control. By the time these effects become visible, they are difficult to reverse.
Organisations cannot rehire a cohort that was never trained. Cultural knowledge that was never absorbed cannot be recreated. Structural capability gaps cannot be quickly repaired.
These two cases illustrate a critical strategic distinction. Some decisions create visible disruption in the short term while preserving an organisation’s ability to adapt. Others feel smooth and efficient at first, but quietly remove future options.
Despite this difference, the dominant narrative around AI is urgency. Leaders speak openly about fear of being left behind. Investment decisions of extraordinary magnitude are justified by inevitability rather than evidence. Questions about long-term capability, learning and resilience are raised, then deferred.
The asymmetry is consistent. A human-centred, reversible change attracts scepticism and moral framing. A system-centred, low-reversibility change is treated as strategic necessity.
What Strategy Teaches About Uncertainty
When causal relationships are unclear and data is incomplete, organisations stabilise themselves through familiar decision patterns.
Decision-making shifts toward narrative coherence. Leaders construct stories that make sense of ambiguity, align with cultural expectations and confer legitimacy. These narratives provide psychological and organisational stability when forecasts are unreliable and causal chains are contested.
Uncertainty is compressed into single trajectories that feel decisive and inevitable, and confidence substitutes for calibration. Decisions are evaluated by visible outcomes rather than by the quality of the assumptions, options and trade-offs embedded in the process.
Value is inferred from effort. Visible struggle, sacrifice, or endurance signals seriousness and commitment. Risk reduction, preventative action or invisible optimisation carries less weight when it lacks symbolic force.
Scrutiny follows power. Decisions that shift autonomy downward to employees, patients or individuals are examined through moral and cultural lenses. Decisions that concentrate control upward toward systems, capital or central authority are framed as strategic imperatives and progress with limited challenge.
These dynamics emerge reliably in conditions of uncertainty where causal clarity is low, feedback loops are delayed and the consequences of error are unevenly distributed over time.
What an Alternative Actually Looks Like
In uncertain environments, waiting for perfect information is not a strategy, but neither is mistaking urgency for clarity. Strategy under uncertainty is therefore not about choosing the “right” decision but about designing a sequence of moves that allows an organisation to advance while learning, adapting and limiting irreversible damage.
This means accepting an uncomfortable truth: uncertainty rarely presents good options. Leaders are often forced to act among imperfect, even unpalatable, choices. The strategic task is to reshape the landscape of what becomes possible next: How reversible is this step? What assumptions are we making? Who carries the downside if we are wrong? What new options does this action create?
Seen through this lens, hybrid work is not a cultural deviation to be corrected. It is an opportunity to design new ways of working, test assumptions about productivity and trust, and refine models before committing further. Its failure is not operational but analytical: too many organisations are closing the experiment without extracting the learning.
Similarly, AI adoption does not demand hesitation, but it does demand structure. The risk lies not in speed but in treating high-commitment, low-reversibility decisions as if they were easily undone, while deferring clarity about what evidence would justify slowing down, changing course or stopping altogether.
Good strategy also chooses where failure is allowed to occur. It is there that leadership earns its name.
Further Reading
This reflection draws on ideas from strategy and decision-making literature that examines how leaders act when evidence is incomplete, outcomes are uncertain and the cost of error is unevenly distributed.
- Courtney, H., Kirkland, J., & Viguerie, P. Strategy Under Uncertainty. Harvard Business Review. A foundational framework for making strategic choices when the future cannot be reliably predicted.
- Mankins, M., & Gottfredson, M. Strategy-Making in Turbulent Times. Harvard Business Review. Explores strategy as a continuous, option-creating process rather than a fixed plan in volatile environments.
- Kahneman, D., Lovallo, D., & Sibony, O. Before You Make That Big Decision. Harvard Business Review / MIT Sloan Management Review. Examines how bias, noise and premature closure distort judgement in high-stakes decisions.
- Kahneman, D., Lovallo, D., & Sibony, O. A Structured Approach to Strategic Decisions. MIT Sloan Management Review. Argues for disciplined decision architecture when intuition and confidence are unreliable guides.
- Cosier, R., & Schwenk, C. Agreement and Thinking Alike: Ingredients for Poor Decisions. Academy of Management Executive. A classic critique of consensus-seeking and moral comfort in strategic decision-making.
- Eapen, T., Finkenstadt, D., Folk, J., & Venkataswamy, L. How Generative AI Can Augment Human Creativity. Harvard Business Review. Offers a counterpoint to cost-only narratives by framing AI as an option-expanding tool rather than a simple labour substitute.
1 Comment
Debashish Ganguli, March 3, 2026
My challenges normally revolve around checking underlying beliefs, narrative over analysis, and short-term visibility over long-term optionality when we make decisions. Differentiating between learning decisions and commitment decisions has also never been easy.
My observations may not be in harmony with the article; however, the above crossed my mind while I was reading it.