I did a Q&A with Pat Hanrahan last week. For anyone who doesn't know the name: Turing Award winner. Co-creator of the programmable GPU pipeline that every graphics card on the planet runs. Co-founder of Tableau. Stanford professor for decades. One of maybe fifty people alive who fundamentally shaped how computers work.
I asked him about the tension in CS programs right now between students who think AI tools are essential and students who think all code should be handwritten. His answer was basically a description of our entire development workflow. He just didn't know it.
You can watch the full session at turing.rsvp.
the prompt is a spec
Here's what Pat said about how he uses Claude:
"When I use it, I think the prompt is a spec. What do I want? So you have to think really clearly. What are you trying to do? That's really good. A lot of people don't know what they're trying to do. They just start hacking away. They go off in random directions and just see what happens."
This is the /discussion and /create-plan pipeline. Before any code gets written, you have a conversation with the agent until you have genuine clarity on what you're building. Then you write a structured plan with pseudocode, file references, and a task list. The plan is the spec. The prompt is a spec.
In the first post, I called the failure mode for skipping this step "the death loop" — when an agent tries to implement something without enough clarity and you waste the day trying to fix it. Pat described the same thing from the other direction: people who "just start hacking away" and "go off in random directions." Same problem. Same cause. Not enough thinking before building.
A Turing Award winner and two non-traditional founders arrived at the same conclusion independently: the quality of your output is entirely downstream of the quality of your spec.
reading code is more important than writing it
This one hit hard. Pat said:
"I think reading code is more important than writing in my experience. So having it generate a lot of code that you read just seems really awesome to me. And I'm really good at reading code because I've worked in industries. You have to read code. You have to do code reviews. You spend more time reading code than you do writing."
I haven't written a line of code myself in about six months. I wrote about that in the second post and said it still feels weird to say. But here's Pat Hanrahan, someone who has been programming his entire life and still programs every day, saying that reading code has always been the more important skill. AI didn't change that. It just made it obvious.
Pat has an unfair advantage seeing this. He's spent decades reviewing student code as a professor, doing code reviews across industries, and reading other researchers' implementations. The "reading > writing" reality has been his entire career. AI didn't teach him this — it just made it visible to everyone else.
Our workflow is built around this. Agents write code. I read diffs. I run a Codex adversarial review loop where two different models review each other's output. The entire /implement → review cycle is a reading exercise. The writing is automated. The reading is where the judgment lives.
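The shape of that loop can be sketched like this. This is a hypothetical sketch, not Pane's actual implementation: the reviewer functions are stand-ins for calls to two different models.

```python
# Hypothetical sketch of an adversarial review loop: each of two models
# reviews the diff the *other* model produced. The reviewer functions
# are stand-ins for real model calls; nothing here is Pane's real code.

def adversarial_review(diffs, reviewers):
    """diffs: {model_name: diff_text}
    reviewers: {model_name: fn(diff) -> list of issue strings}
    Returns (author, reviewer, issue) findings to resolve before merge."""
    findings = []
    for author, diff in diffs.items():
        for reviewer_name, review in reviewers.items():
            if reviewer_name == author:
                continue  # a model never reviews its own output
            for issue in review(diff):
                findings.append((author, reviewer_name, issue))
    return findings
```

The point of the structure is the cross-check: each model's blind spots get a second, differently-trained reader, and an empty findings list is what lets the diff move forward.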
Pat also drew a parallel to mathematics that I keep thinking about:
"The mathematicians that actually prove theorems — mostly they read theorems and comment. So it's the same thing."
Mathematicians don't spend their days writing proofs from scratch. They read proofs, understand their structure, comment on implications, and push the field forward by reasoning about what exists. That's the job now for developers too. You're not writing code from scratch. You're reasoning about code that agents produce. The skill is taste, judgment, and architectural thinking — not typing speed.
spec, read, verify
Pat compressed his whole framework into three words:
"Spec, read, verify. And I think that's a better way to program than the way most people learn to program."
Three words. Three phases. Map them to our pipeline:
- Spec = /discussion + /create-plan. Think clearly. Write the spec before any code exists.
- Read = review diffs, read agent output, check that the implementation matches the spec. This is most of what I do in Pane all day: cycling between sessions with Ctrl+Up/Down, reading what agents produced.
- Verify = the Codex adversarial review loop, typechecking, linting, build verification. Two different models catching each other's mistakes. Automated quality gates that agents run through before I even see the PR.
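The verify phase above can be sketched as a simple gate runner. This is a hedged illustration, not Pane's actual tooling: the commands are placeholder examples, and the runner is injectable so the logic can be shown without a real project.

```python
# Hypothetical sketch of a "verify" gate runner: each gate is a shell
# command, and any nonzero exit blocks the PR. The commands listed are
# assumptions (typical TypeScript tooling), not Pane's actual gates.
import subprocess

GATES = [
    ("typecheck", ["npx", "tsc", "--noEmit"]),
    ("lint", ["npx", "eslint", "."]),
    ("build", ["npm", "run", "build"]),
]

def run_gates(gates=GATES, runner=subprocess.run):
    """Run every gate; return the names of the ones that failed."""
    failures = []
    for name, cmd in gates:
        result = runner(cmd)
        if result.returncode != 0:
            failures.append(name)
    return failures  # empty list means the diff is ready for human reading
```

Running every gate rather than stopping at the first failure is a deliberate choice: the agent gets the full list of problems in one pass instead of a fix-rerun loop.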
He also said something that maps to the second post's section on voice-first development:
"It just gets rid of all that boring stuff that lets you think about the higher level."
That's it. That's the whole thesis. Agents do the building. We do the thinking. A Turing Award winner said the same thing in different words without knowing our workflow exists.
the tension in CS programs
The question that prompted all of this was mine, about CS students. Someone in the chat called it the "question of the day."
There's a real rift right now: students who build with AI tools every day, and students (and professors) who think code should be handwritten to "really learn."
Pat was honest about the hard part:
"If you're brand new to programming, you don't know how to read code and you don't know what's good code and bad code. It's a little unclear to me how you learn it with this stuff."
But his instinct was toward a different kind of pedagogy:
"We used to give these really complicated big projects and tell people write a lot of code. Maybe we just give them lots of small exercises and just test their conceptual understanding."
He suggested giving students three implementations of the same function and asking which is strongest and why. Reading, evaluating, reasoning about tradeoffs. Not writing a CSV parser for the twentieth time.
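A hypothetical version of that exercise, invented here to illustrate the idea: three ways to remove duplicates from a list. Which is strongest, and why?

```python
# Three implementations of the same function, as a reading exercise.
# The task: remove duplicates from a list.

def dedupe_a(items):
    # Set round-trip: O(n), but scrambles the original order.
    return list(set(items))

def dedupe_b(items):
    # Nested membership check: keeps order, but O(n^2) on large inputs.
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

def dedupe_c(items):
    # Seen-set: keeps order and stays O(n). Usually the strongest answer,
    # though the or-trick costs some readability.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]
```

Grading that answer requires exactly the skills Pat is pointing at: reading all three, spotting the order-scrambling bug in the first, and articulating the complexity tradeoff in the other two.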
This connects to something I wrote in the first post: "The technical pedigree filter is broken. What matters now is whether you can solve real problems, build trust, and ship relentlessly." Pat's framing is more nuanced because he's a professor thinking about how to teach. Mine is more blunt because I'm a founder thinking about who can ship. But we're pointing at the same shift.
The skill that matters isn't writing code. It's understanding code. Specifying what you want. Reading what you get back. Verifying it works. The students who are training that muscle with AI tools are learning the skill that actually compounds. The students hand-writing every line are practicing a skill that's being automated by the month.
That's not my hot take. That's a Turing Award winner's observation.
what this means
I've now written three posts about our development workflow. The first was the specific process — four commands, the death loop, what actually matters. The second was the evolution — three commands, voice-first, harness engineering, software factories. This one is the validation.
When someone who won the Turing Award, co-created GPU computing, and has been programming and teaching programming for decades independently describes your workflow, it's not a coincidence. It means the workflow is downstream of something fundamental about how humans and AI should collaborate. Spec, read, verify. Discussion, plan, implement. Same structure. Same insight.
The models will keep getting better. The commands will keep collapsing. But "think clearly, read carefully, verify rigorously" isn't going anywhere. That's the foundation.
Watch the full Q&A with Pat Hanrahan: turing.rsvp
All of our Claude Code commands and slash commands are open source: github.com/Dcouple-Inc/Pane/.claude
Previous posts: