The Discerning Mind: Mastering GenAI In Different Environments
In 2016, Cal Newport’s Deep Work gave us a lifeline. He distinguished between "Shallow Work" (distracted, low-value tasks) and "Deep Work" (focused, cognitively demanding efforts that create real value). His call was clear: reclaim focus to thrive. For many, myself included, this reframing of knowledge work provided a helpful heuristic.
But the game of work is changing with generative AI. The old enemy was distraction; the new, more insidious temptation is cognitive outsourcing. This isn't just an abstract threat; it's a daily choice impacting the quality of our work and our ability to think deeply. What new frameworks and actionable strategies do we need to help us navigate the new landscape, ensuring we can harness AI’s power without dulling our cognitive faculties? As I explored in a previous post on moving from Deep Work to Generative Work, we need to fundamentally rethink how we work when we're no longer the sole cognitive engine.
Deep Work taught us to protect our minds for solitary deep thinking. Generative Work is about leading a collaboration with AI, offloading routine cognition to free our human capacity for judgment, synthesis, and insight. But what does generative work with AI look like when the problems are messy, the stakes are high, and the answers aren't in any training data? A study published this year, "AI, Help Me Think—But for Myself," provides compelling insights into this question by directly comparing two distinct approaches to AI-supported decision-making. It’s not the final word, but it’s a practical illustration that points towards the principles of Generative Work, especially when we venture into complex territory.
A Tale of Two AIs and Pointers for Human-Led Thinking
The aforementioned study aimed to see how different AI designs influenced people's investment decisions and, crucially, their thought processes. Participants were asked to adjust an investment portfolio using one of two AI assistants:
RecommendAI: This AI acted like a typical "easy button," delivering pre-formed, actionable recommendations. One participant likened it to "a friend who is deep into finance, and he tells me, I’ve heard about this health care stuff, maybe have a look at it.”
ExtendAI: This AI took a different path. It asked participants to first write out their own reasoning and strategy. Only then did it offer tailored feedback.
The results were illuminating. While some appreciated RecommendAI's directness, many found it led to passive acceptance. As one user put it, "My main issue with RecommendAI was that it directly gave me some kind of ‘do this, then do this,’ which I... followed without thinking too much about it.” They clicked, but they didn't necessarily understand or own the decision.
ExtendAI, by contrast, introduced what we might call "productive friction." It was, as one user admitted, "Kind of a pain. But I had to think more carefully.” Another captured the sentiment perfectly: "Yeah, no free lunch, right? So you have to do some work.” This initial human-led effort, this act of articulating one's own thoughts first, resulted in fewer, but better, portfolio changes and a deeper sense of ownership and understanding. As one participant reflected about the process of articulating their thoughts for ExtendAI, "I think if the goal is understanding, then you do have to put in some work yourself."
This finding, where leading with your own cognition enhances the process, is a powerful hint. To build upon these observations and develop a more robust framework for Generative Work, I want to propose another interpretive lens: psychologist Robin Hogarth’s distinction between “kind” and “wicked” learning environments. This concept helps explain why an approach like ExtendAI feels more effective in complex situations and how our own learning processes, particularly our intuitions, function differently within them.
The Critical Lens: Kind vs. Wicked Environments
Kind Learning Environments: Where Intuition Serves Us Well
Think of kind learning environments as those where the rules are clear, stable, and feedback is immediate and accurate. Learning to play chess, perfecting your spelling, or mastering basic arithmetic all happen in kind environments. You make a move, you see the consequence. You type a word, autocorrect flags an error. Over time, through repeated exposure and clear feedback, we develop reliable intuitions. Our brains become adept at recognizing patterns, and these learned patterns allow us to make quick, effective judgments. This is where AI, itself a master pattern-recognizer trained on vast datasets, also excels. For tasks like summarizing well-structured documents for key facts, drafting boilerplate text, or checking code syntax—where the underlying patterns are strong and the desired output is predictable—AI can be a powerful accelerator, and letting the model take the lead can be highly efficient. Our intuition aligns with the AI's pattern matching.
Wicked Learning Environments: Where Intuition Can Betray Us
Wicked learning environments are a different beast entirely. Here, feedback is often delayed, distorted, ambiguous, or even entirely missing. The rules of the game can shift without warning, making past experience an unreliable guide, or worse, a source of misleading intuitions. Information is noisy, cause and effect are tangled, and the path forward is shrouded in uncertainty. Strategic leadership, complex negotiations, innovating new products for evolving markets, or making long-term policy decisions—these are all quintessential wicked domains.
In such environments, the very patterns our brains strive to learn may be illusory or transient. An action taken today might not reveal its true consequences for months or years, and by then, the context may have changed so much that it’s hard to draw clear lessons. Attempting to rely on the same kind of rapid intuition that serves us in kind environments can lead us astray. We might mistake a lucky outcome for skillful strategy or overgeneralize from a single, unrepresentative event. Human judgment, careful deliberation, scenario planning, and a willingness to question our own assumptions become paramount.
The investment task in the study, while not purely wicked, certainly leaned in that direction. Market behavior is notoriously difficult to predict, and the signals are often noisy. Those participants who relied on RecommendAI’s quick answers were, in essence, treating a somewhat wicked problem as if it were kind. They risked outsourcing their judgment to an AI whose "intuitions" (patterns learned from historical data) might not apply to the current, unfolding situation.
In contrast, the ExtendAI approach, by compelling users to articulate their own reasoning first, nudged them towards the more deliberate, reflective mode of thinking that wicked environments demand. They were better primed to grapple with ambiguity because they had first grappled with their own thoughts, even if, as one said, "I had a hard time describing why I’m doing things” when their reasoning was still vague.
This distinction, illuminated by Hogarth's framework, is crucial for Generative Work. It suggests that simply applying AI in the same way across all tasks is a recipe for trouble. We need to discern the nature of the environment before we decide how to partner with AI in a particular situation.
Generative Work in Wicked Environments: Your Actionable Playbook
The GenAI study offers early clues, and Hogarth's framework provides the lens. Now, how do we translate this into daily practice? The key is to adapt your approach based on the environment:
Discern the Environment, Then Decide Your Approach:
In Kind Environments: For tasks with clear rules, known patterns, and immediate feedback (like drafting routine emails, summarizing factual reports, or generating lists based on explicit criteria), feel more comfortable letting AI take the lead on initial generation. Your role becomes more supervisory: review, edit, and verify the output. This is efficient offloading.
In Wicked Environments: For complex, ambiguous problems, always lead with your own thinking. Before prompting AI for solutions, do your own deep work. Articulate your understanding of the problem, your initial hypotheses, your core assumptions, values, and desired outcomes. Write it down. This act of human sense-making is irreplaceable. Then, bring in the AI to check, challenge, or augment your foundation.
Try This: Before any significant task, ask: "Is this more like a kind or a wicked problem?" This simple question will guide your AI engagement strategy.
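If you interact with models through code rather than a chat window, the "lead with your own thinking" pattern can be baked into your tooling. Here is a minimal sketch of an ExtendAI-style flow; it is my own illustration, not the study's implementation, and `call_model` is a hypothetical stand-in for whatever LLM client you use:

```python
def build_feedback_prompt(task: str, user_reasoning: str) -> str:
    """Compose a prompt that asks the model to critique the user's own
    reasoning, rather than asking it for a recommendation outright."""
    return (
        f"Task: {task}\n\n"
        f"My current reasoning:\n{user_reasoning}\n\n"
        "Do not recommend an action. Instead, give feedback on my reasoning: "
        "point out unstated assumptions, missing considerations, and places "
        "where my logic may not hold."
    )

def extend_ai_session(task: str, call_model) -> str:
    """Run one reflect-first exchange: the user must articulate their
    reasoning before any model output is requested (productive friction)."""
    user_reasoning = input("Before asking the AI, write out your own reasoning:\n")
    return call_model(build_feedback_prompt(task, user_reasoning))
```

The key design choice mirrors the study: the model never sees the task without your reasoning attached, so it can only extend your thinking, not replace it.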
Wield AI as a "Cognitive Sparring Partner" (Especially in Wicked Environments):
In wicked environments, AI's best role is to provoke deeper thought, not provide definitive answers. If it always agrees with you, it's probably not pushing you enough. As one study participant noted critically about ExtendAI, "I feel like ExtendAI would be more useful if it was less stuck in my way of doing things.” We need AI that gently pushes us out of our ruts. Actively seek out tension.
Try This (Wicked): "Play devil's advocate against my proposed solution." "Generate three plausible scenarios where this plan fails spectacularly."
Try This (Kind, for ideation): "Generate 10 headlines for an article about X, focusing on benefit Y." Then you curate and refine.
Time Your AI Collaboration Wisely: The "Thoughts in the Making" Sweet Spot (Crucial for Wicked):
The GenAI study suggested that AI input is most valuable when your own reasoning is still forming. Not so early that it anchors you prematurely (like RecommendAI often did), and not so late that you're too committed to your path to genuinely consider alternatives. This is especially true for wicked problems.
Try This (Wicked): After initial research and drafting your own core ideas, use AI to help refine, question, or expand—before you feel locked into a final decision. In kind environments, you might use AI earlier for straightforward generation.
Insist on Understanding; Challenge the "Black Box" (Critically in Wicked):
RecommendAI's suggestions often felt opaque. In wicked problems, you must understand the "why" behind any AI-influenced shift. Even in kind environments, if an AI produces something unexpected or erroneous, understanding why can help you refine prompts or identify limitations.
Try This: "Explain the reasoning or data patterns that led to that suggestion." "What are the key assumptions underpinning that output?" If the AI can't explain it satisfyingly (especially for wicked problems), treat the suggestion with extreme caution.
Embrace "Productive Friction" and Design for Reflection (Especially for Wicked):
The "pain" of ExtendAI was its power. It forced deeper engagement. "No free lunch, right?" In wicked environments, processes or tools that make you slow down, articulate, and justify your reasoning are invaluable.
Try This (Wicked): Instead of asking for a final report, ask AI to help you structure a decision-making framework. "Help me create a table comparing these three strategic options across criteria A, B, and C, listing potential pros and cons for each." Then you do the weighing and deciding. In kind environments, the "friction" might simply be the step of carefully reviewing AI output before use.
The Path Forward: Thinking With, Not Instead Of, Especially in Wicked Environments
The tools are evolving rapidly. But our responsibility to think critically, discerning when to delegate and when to dive deep, only grows—especially when facing the ambiguities of wicked environments. Generative Work isn't about letting AI take the wheel; it's about becoming a more skilled driver, using AI as a sophisticated navigation system that can point out hazards and alternative routes, while we keep our hands firmly on the steering wheel, our eyes on the unpredictable road ahead.
This is the essence of Generative Work: protecting and enhancing uniquely human skills like nuance, ethical consideration, judgment under uncertainty, and deep reflection, by strategically collaborating with AI based on the nature of the task and its environment.
In a world overflowing with AI-generated answers, the real challenge—and the greatest value—lies in asking the right questions and wrestling with the complexities ourselves, with AI as our dedicated, thought-provoking assistant. That’s the work worth doing.
(For more on developing these skills and frameworks, stay tuned as we continue to explore the landscape of Generative Work.)