Generative AI, Knowledge Work, and Critical Thinking
I recently read through a new study, "The Impact of Generative AI on Critical Thinking," which offers some interesting insights into how AI is reshaping knowledge work. In considering it, I tried to overlay the analogy I’ve used to make sense of generative AI since it emerged: a shift in our roles from being primarily in the writerly mode to being primarily in the directorial/editorial mode. According to the study, a few shifts in how we apply critical thinking seem to be emerging:
From Information Gathering to Verification: As GenAI increasingly retrieves and organizes information, our focus shifts toward verifying its accuracy. Like an editor checking facts in a junior writer's work, we rely on our existing expertise (in writing or coding, for example) to vet the quality of the sources used and the accuracy of any information the AI derives.
From Task Execution to Stewardship: Much as an editor oversees writers, our roles evolve toward shaping, refining, and guiding GenAI outputs. This stewardship requires new skills in prompting and steering AI effectively, along with accountability for the final outcomes of work that AI agents perform on our behalf.
These shifts in responsibility carry risks that should be managed proactively. While leaning into these new skills is critical, we must take care not to lose the skills that made us subject matter experts in the first place. Just as editors who stop writing entirely may lose their sharpness at composing a complex argument, over-reliance on GenAI risks atrophy of the critical thinking and practical skills we still need to maintain.
For experienced professionals—skilled writers, coders, or analysts—this means adopting deliberate practices to stay sharp in the crafts we once performed ourselves:
Regularly alternate between creating from scratch and editing AI output to preserve fluency.
Keep “source code” thinking alive: sketch logic, structure, or argument before prompting AI.
Use AI not just for completion, but for diagnosis: compare your own output to that of generative AI, and train yourself to spot higher-order differences in clarity, logic, tone, or ethics.
And, because becoming a skilled editor requires having been a competent writer, we also need strategies for first acquiring the skills required to review work in any given domain. How might AI promote critical thinking during learning rather than stifle it?
For learners, an effective practice might be:
Write independently for focused periods (30–60 mins).
After each period, use AI to get targeted feedback on grammar, clarity, and structure.
Critically assess AI feedback, integrating valuable suggestions while reinforcing independent critical thinking and decision-making skills.
The key to thriving in this new environment is actively developing new ways of working with AI while maintaining the critical thinking skills that make the final product greater than the sum of the human and AI parts that produced it.