Why not using AI is no longer a professional virtue
This is the first piece in Max’s Playbook because it sets a standard I want to be judged by. For years, “I wrote this myself” signaled seriousness. Today, proudly claiming to write “without AI” is starting to sound like proudly claiming to work without search or spell-check. It may be true. It may even be principled. It’s just not a meaningful professional distinction anymore.
What matters is whether the work is accurate, intentional, and responsible in its consequences. And I’ve come to believe that refusing AI by default often increases friction without increasing quality, making it harder to meet the standard the job actually requires.
Thinking is not writing, and confusing the two is the core mistake
The strongest “no AI” argument has always been about authorship: if I delegate the writing, I delegate the thinking; if I delegate the thinking, I’m no longer accountable. I understand that instinct, because I’ve had it myself.
But it rests on a confusion. Thinking and writing are linked, not identical. Writing is one way I think, but my judgment doesn’t live in my fingertips. It lives in choices: what I include, what I cut, what I claim, what I verify, what I refuse to say.
AI can help me produce text. It can’t decide what is strategically true, ethically acceptable, or contextually wise. It can’t own the risk. Those responsibilities stay with me regardless of how the first draft was generated.
AI is infrastructure across the writing chain
Most debates imagine one moment: the blank page becomes a paragraph. In practice, writing is a chain. Mine usually runs: selecting the problem; scanning what’s already known; reading enough to be careful; synthesizing a point of view; drafting; revising; pressure-testing; trimming; polishing; tailoring for an audience; sanity-checking details and tone.
AI can support several links in that chain without taking over authorship. It can widen selection (“what angles am I missing?”), speed up reading with summaries I treat as maps (not truth), impose structure on messy notes, draft alternatives so I can choose, and help with compression or rephrasing while I keep control of meaning.
The shift is simple: AI isn’t just a drafting shortcut. It’s a general assistance layer that reduces mechanical friction so more attention can go to judgment and clarity.
Refusing AI often increases friction without improving quality
We like the idea that friction equals depth. Sometimes it does. Often it doesn’t.
Refusing AI by default tends to produce predictable outcomes: I spend more time on tasks that don’t deserve it - rewording, restructuring, generating variants - or I ship later than I should. Neither is automatically “more virtuous.” It’s just more expensive.
Quality doesn’t come from typing every sentence unaided. It comes from defining the point, holding the line, and removing what doesn’t earn its place. AI can help me reach a workable shape faster, but it can’t do the work that matters most: making the hard choices.
A brief fictional example: a comms lead drafts a CEO note after a sensitive internal change. The risk isn’t elegance, but misinterpretation. One sentence can read cold, evasive, or careless. AI can propose tonal options in minutes. That doesn’t solve the problem, but it buys time for the real job: deciding posture, anticipating reactions, and tightening meaning so it can’t be misread.
It’s the same pattern you see across knowledge work: when production becomes more tool-driven, professionalism concentrates in question selection, interpretation, and accountability.
The professional virtue now is responsibility for outcomes, not pride in process
I’ve grown skeptical of virtue claims about process. “I did it the hard way” isn’t a deliverable. It doesn’t protect stakeholders. It doesn’t make a message truer. It doesn’t reduce risk.
The standard is outcome ownership. If I publish something, I’m responsible for what it does: what it convinces, what it confuses, what it escalates, what it harms. That responsibility doesn’t change based on the tools I used. If anything, AI makes it more important, because speed and volume magnify mistakes.
So I prefer a more honest posture than “I don’t use AI”: I use assistance where it reduces friction, and I remain fully accountable for meaning, accuracy, and consequences.
Clear limits: what AI should not do
Using AI becomes irresponsible when boundaries dissolve. Mine are simple.
I don’t use AI to outsource conviction. If I don’t know what I believe, a model can’t supply it; it can only simulate confidence. I don’t use AI to manufacture facts. Anything that must be correct gets checked against reliable sources, not accepted on plausibility. I don’t use AI to decide what is fair to say; that’s a human judgment call, shaped by context and consequences.
I also don’t pretend AI output is neutral. It tends toward plausible, smoothed, consensus language. Sometimes that helps. Often it’s exactly what I need to resist. Distinctiveness usually lives in deliberate edges: the framing, the omission, the emphasis. That’s my job.
Craft isn’t threatened by tools, only by abdication
This isn’t a manifesto. It’s a personal adjustment to a new baseline. The old virtue - autonomous writing as proof of seriousness - made sense when “autonomous” was synonymous with “intentional.” Today, autonomy can quietly drift into unnecessary friction, slower cycles, and pride that doesn’t show up in outcomes.
If I care about craft, I should care about what craft is for: clarity, precision, honesty, and impact without collateral damage. AI doesn’t replace those standards. It tests them. It makes text easier to produce, which means the differentiator is less about generating sentences and more about judgment, editorial discipline, and responsibility.
So I’m no longer impressed, least of all in myself, by “I don’t use AI.” I’m impressed by work that is accurate, clear, and accountable. The virtue is the seriousness of the result.
Published on January 3, 2026