Hidden Beliefs May Be Blocking Your Engagement With AI
At least since the invention of the printing press, we’ve lived in what scholars call a writerly culture. In that world, authority, expertise, and identity were shaped by the written word. Our schools, professions, and institutions rewarded those who could craft compelling arguments, construct detailed reports, and somewhat more recently, master the syntax of code.
For centuries, writing was thinking. To be a knowledge worker meant proving your value by how well you could create, refine, and express ideas in written form. And for many of us, it defined who we were.
Enter generative AI.
Suddenly, this deeply ingrained sense of identity is being challenged. Tools can now assist with tasks long associated with human expertise and cognitive effort: drafting, coding, summarizing, and ideating. For many professionals, it feels like the ground is shifting beneath their feet.
If someone built a career on being the person who could write clearly, solve problems logically, or generate insights, what does it mean to hand over some part of that process to a machine?
This is where the theory of Immunity to Change, developed by Robert Kegan and Lisa Lahey, becomes relevant. It helps explain why so many individuals struggle to embrace new behaviors even when they express a desire to change. Beneath the surface, there are often hidden commitments and assumptions designed to protect their identity.
Consider some common examples of thoughts that might enter our heads from time to time these days:
“If I rely on AI to help me write, it will appear that I’m not smart enough to do it myself.”
“If I let a tool assist me, my work will lose its meaning.”
“If a machine can generate insights, too, maybe I’m no longer essential.”
These are not trivial thoughts. They are unspoken assumptions shaped by decades of education and professional reinforcement. They act as a kind of internal immune system, preserving a self-concept that was built around effort, mastery, and authorship.
But we should ask ourselves: are these assumptions 100% true?
To get to the heart of it, two more useful questions might be:
What kind of professionals are we becoming when we integrate these tools into our processes?
What safe, easy tests can we run to challenge our beliefs about what this means for us and the future of our work?
This shift, from being the expert who produces to being someone who also collaborates with a generative system, may be one of the defining transitions of our time.
Are we ready to examine the assumptions that keep us from engaging with that possibility?