The Soft Problem is the Hard Problem

As product teams, organizations, and integrators work to harness the potential of generative AI, attention tends to center on the impressive capabilities of the tools and the speed of technological change. The tools themselves will continue to improve and grow more sophisticated. But the primary challenge in unlocking their value lies in our capacity to adapt how we work and lead. Developing the human leadership skills to guide and collaborate with AI, the 'soft problem', will prove to be the more enduring and difficult one, and it requires dedicated focus.

Highly skilled knowledge professionals are accustomed to self-direction, managing complex tasks, and often leading initiatives within their domain. They are adept at interpreting needs and shaping their work to meet broader goals. However, even for those who operate with significant autonomy or manage projects, the act of explicitly directing an artificial intelligence as a primary creation partner presents a novel challenge.

The critical skill becomes not just accomplishing complex work, but clearly articulating the problem and desired outcome for a non-human entity and guiding its generative process. This form of direct, iterative leadership of an AI system is a distinct capability. The act of "prompting" AI effectively encompasses a set of sophisticated cognitive skills. These include:

  • Framing a problem with enough clarity for the AI to work from.

  • Possessing a well-developed sense of what high-quality output looks like.

  • Iterating productively on initial results that may be messy or incomplete.

  • Guiding an intelligent system towards a progressively better and more useful outcome.

What is needed is the capacity to lead a creative process in natural language, a skill many individuals haven't had extensive opportunities to develop through traditional career paths. Even professionals like coders, whose work involves writing instructions, are finding that directing AI through natural language calls for a different kind of thinking. Certain roles, such as designers or analysts already accustomed to abstract thought or systems leadership, may find this transition more intuitive. For a broad segment of the workforce, however, learning to operate in this new generative mode presents a substantial learning challenge. Helping them acquire these skills is paramount.

Furthermore, mastering interaction with generative AI is akin to acquiring any complex professional skill: it demands deliberate practice. This involves more than casual use; it means setting specific learning goals, actively seeking feedback (even if through self-reflection on AI output quality), identifying areas for improvement in prompting or evaluation, and consistently pushing beyond one's current comfort zone. Simply providing access to AI tools without structuring opportunities for this kind of focused, iterative learning will likely yield superficial engagement rather than deep capability.

Another significant, though often unstated, barrier is a feeling of discomfort. For many years, knowledge work has been closely associated with the visible application of effort—the hours spent writing, analyzing, coding, or designing from an initial blank slate. When a tool can accomplish similar tasks in seconds, it can evoke suspicion or a feeling that the process is "too easy," akin to cheating. This reaction often stems from deeply ingrained ideas about professional identity. Many have learned to equate the visible process of thinking with the act of working. When AI begins to participate in, or even perform parts of, that thinking process, it can create a sense of losing something essential. Individuals may quietly question the value of their own contributions if an AI can draft initial content. This discomfort can lead to underuse or misuse of AI tools, or to generally shallow engagement with them. The tools' utility is clear, yet their use can conflict with established beliefs about how value is created and demonstrated.

The persuasive nature of large language models also warrants attention. They generate text with notable fluency and confidence, even when their information may be inaccurate or lack context. Without proper support systems and a developed critical perspective, employees might inadvertently begin to offload their thinking prematurely. This could involve:

  • Accepting AI-generated outputs without sufficient scrutiny.

  • Relying on AI for initial idea generation instead of first formulating their own thoughts.

  • Overlooking important nuances related to ethics, context, or factual accuracy.

These behaviors impact individual productivity and also introduce an organizational risk by potentially diminishing the rigor of critical thinking. While generative AI can be a powerful aid to clearer thought, it can also subtly displace active human cognition if not managed with care and intention.

For organizations aiming to truly succeed with generative AI, the objective should extend beyond simple tool adoption. The core goal becomes helping people transition into this new way of working. This involves:

  • Teaching individuals how to effectively lead and direct intelligent tools.

  • Building their confidence in applying judgment, providing clear direction, and engaging in iterative refinement.

  • Creating shared standards for what constitutes effective prompting, thoughtful engagement, and productive human-AI collaboration.

  • Fostering a culture of deliberate practice around AI interaction, where employees are encouraged and guided to set learning goals, experiment, reflect on their methods, and continuously refine their ability to work generatively with these tools.

Finding the pathways to our best thinking in partnership with AI means focusing on the soft problem, which is turning out, not surprisingly, to be the hard one: cultivating the habits of mind that shape how we approach and perform our work.
