Scaffolds for Thinking: From Prompt Engineering to AI Product Design
We are all responsible for our level of critical thinking.
In an age of instant answers and AI at our fingertips, the temptation is to outsource that thinking. But the truth is, if we want useful, insightful answers from AI—especially on complex or ambiguous questions—we need to bring clarity ourselves.
Early prompt-engineering researchers understood this. They realized that the key to better answers wasn't just in what we asked, but in how clearly we asked it. One of the most influential prompting strategies to emerge from this insight is called the Cognitive Verifier.[i]
What Is the Cognitive Verifier?
The idea is simple but powerful: before answering your question, the AI first generates three clarifying questions to refine your intent. Then, once you’ve answered them, it composes a final response that combines all the pieces.
Why does this matter? Because research shows that:
Users often ask vague or overly broad questions.
LLMs (Large Language Models) produce better, more accurate answers when they break down a problem into smaller, more specific parts.
So the Cognitive Verifier acts like a pause button—forcing both the model and the user to slow down, get specific, and make sure they’re aligned.
Here’s what a classic prompt might look like:
Prompt: “When I ask a question, generate three clarifying questions before answering. Combine the answers to form your final response.”
An even more thoughtful version might include:
“Assume I don’t know much about the topic and define any terms that aren’t general knowledge.”
This approach helped users get significantly better results from early models like GPT-3. But users had to remember to apply it manually, every time.
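As a sketch, here is how the classic Cognitive Verifier prompt might be packaged as a reusable template. The function name and exact wording are illustrative assumptions, not part of the original paper:

```python
def cognitive_verifier_prompt(question: str) -> str:
    """Wrap a user question in the Cognitive Verifier pattern:
    ask for three clarifying questions before any answer, and
    request that the answers be combined into the final response."""
    return (
        "When I ask a question, generate three clarifying questions "
        "before answering. Combine the answers to form your final response. "
        "Assume I don't know much about the topic and define any terms "
        "that aren't general knowledge.\n\n"
        f"My question: {question}"
    )


prompt = cognitive_verifier_prompt("How is AI used in elementary education?")
```

The point is that the pattern is just text: a fixed instruction wrapped around whatever the user asks, which is exactly why it was easy for early users to apply by hand—and easy to forget.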
From Prompt to Product: Built into ChatGPT
Today, the best versions of this strategy are built right into the tools, for example in ChatGPT’s Deep Research feature.
When you submit a research query, the model doesn’t immediately begin pulling sources or generating a report. Instead, it first checks:
"Just to clarify before I begin…"
Then it follows with a set of smart questions that help both of you understand what’s really being asked.
Here’s an example of how it looks in ChatGPT:
You: “Can you do deep research on how AI is being used in elementary education?”
ChatGPT: Just to clarify before I begin the research:
Are you interested in a particular grade range or region?
Should I focus on classroom teaching tools, curriculum design, or administrative uses?
Do you want only recent developments, or a broader overview?
Once you answer those, the model begins its research with a much clearer sense of direction. The result? Sharper, more useful insights.
Why It Matters for Designers
If you’re building an AI product, this is a critical design principle: wherever the thinking gets fuzzy, insert a scaffold.
The Cognitive Verifier is more than a clever trick. It’s a blueprint for thoughtful interaction. And it’s especially valuable in situations where:
The user might not know exactly what they want.
The question has multiple possible interpretations.
The answer depends heavily on context or goals.
By adding this kind of clarification loop into your product—whether manually prompted or built-in—you create space for better thinking and better results.
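A clarification loop like this can be sketched in a few lines. The shape below is a minimal, hypothetical illustration: `ask_model` stands in for any chat-completion call and `ask_user` for whatever UI collects the user's answers; neither is a real library API.

```python
from typing import Callable


def clarification_loop(ask_model: Callable[[str], str],
                       ask_user: Callable[[str], str],
                       query: str) -> str:
    """Two-pass interaction: first get clarifying questions from the
    model, collect the user's answers, then ask for the final response."""
    questions = ask_model(
        f"Before answering, list three clarifying questions for: {query}"
    )
    answers = ask_user(questions)
    return ask_model(
        f"Original question: {query}\n"
        f"Clarifying questions: {questions}\n"
        f"User's answers: {answers}\n"
        "Now give your final, combined answer."
    )


# Stub callables stand in for a real model and a real UI.
echo_model = lambda p: p
fixed_user = lambda q: "Grades K-5, classroom tools only."
result = clarification_loop(echo_model, fixed_user,
                            "AI in elementary education")
```

The design choice worth noticing: the loop lives in the product, not in the user's head. The user never has to remember the pattern, because the scaffold fires on every query.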
Another Scaffold: The Alternative Approaches Pattern
While the Cognitive Verifier helps clarify intent, there's another powerful thinking scaffold that deserves attention: the Alternative Approaches Pattern.
Before we dive in, a quick definition: System 1 thinking is fast, automatic, and intuitive—it's your gut reaction, your first impression, the shortcut your brain takes to save time and energy. It's useful, but not always right.
Even when we engage in deliberate, reflective thinking—what psychologists call System 2—we often reach for the most available solution in our minds: a familiar model, a past experience, or a pattern we've used before. This isn't laziness; it's how our brains economize effort. But it means we often default to refining a single idea, rather than exploring whether a better one exists.
That’s why we benefit from scaffolds that challenge our assumptions and widen our view. We don’t just need support to think more carefully—we need nudges to think more diversely.
The Alternative Approaches pattern prompts the model to suggest different ways to accomplish the same task—especially when users might default to a familiar method without realizing better options exist. It works by asking the model to:
Generate a list of viable alternatives within the user's constraints
Compare and contrast the pros and cons of each option
Prompt the user to choose which path they’d like to take
Here's what a prompt might look like:
Prompt: “If there are alternative ways to accomplish this within [scope], list and compare them with pros and cons. Then ask me which I’d like to proceed with.”
This is especially useful in decision-making, planning, and strategy, where the first idea isn’t always the best one.
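Like the Cognitive Verifier, this pattern is just a text template, so it can be wrapped in a helper. The sketch below is an illustrative assumption—the function name and parameters are mine, not from the source:

```python
def alternative_approaches_prompt(task: str, scope: str) -> str:
    """Wrap a task in the Alternative Approaches pattern: enumerate
    viable options within the given scope, compare pros and cons,
    and let the user choose before proceeding."""
    return (
        f"I want to: {task}\n"
        f"If there are alternative ways to accomplish this within {scope}, "
        "list and compare them with pros and cons. "
        "Then ask me which I'd like to proceed with."
    )


p = alternative_approaches_prompt(
    "migrate our reporting database",
    "our current cloud budget",
)
```

A product that built this in would fire such a prompt automatically whenever the user commits to a plan—nudging them to compare before they refine.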
Unlike the Cognitive Verifier, which is already embedded in tools like Deep Research, the Alternative Approaches pattern is not yet widely implemented by default. It still relies on users to prompt it manually.
That makes it another key design opportunity for AI products: tools that help users not just clarify their questions, but question their answers.
Key Takeaways:
The Cognitive Verifier is a prompting strategy that breaks down complex questions through clarifying sub-questions.
It improves outcomes by encouraging both user and model to slow down and think better.
Deep Research tools now incorporate this strategy automatically, making it easier to get high-quality responses.
Patterns like Alternative Approaches—which help users break out of their default thinking and compare solutions—still largely rely on the user prompting for them.
This remains a powerful design opportunity: future AI tools could build these patterns in by default, helping users not just clarify intent, but consider better ways of achieving it.
Designers should consider these kinds of scaffolds not as friction, but as essential architecture for quality thinking.
The next time you’re about to ask AI a big question, pause. Let it ask you back. That’s where the real thinking begins.
[i] White, Jules, et al. “A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.” arXiv:2302.11382, 21 Feb. 2023. https://doi.org/10.48550/arXiv.2302.11382.