I just watched this excellent interview with Michael Dean on the How I Write podcast.
Michael is an architect and writer, and his writing project is fascinating.
He’s built a framework for thinking about writing that adapts Christopher Alexander’s concept of pattern languages to writing.
If you’re unfamiliar, Alexander created a way of thinking about design and functionality that gave us a modular, nested framework for how to build
spaces — from whole cities down to features within rooms. A “pattern” is a loose and modifiable guideline for how a component of a system should work.
More defined than a rule-of-thumb, but less rigid than a rule. So patterns can be refined and adjusted to adapt to different settings.
Thinking about writing this way is interesting. Language has similarities to other complex systems: letters, words, phrases, sentences, paragraphs, stories, narratives. It’s made of modular components that nest together in a hierarchy, where ideas (“wholes”) emerge from the interactions between parts, even across different levels of the hierarchy.
Michael’s system gets more abstract than the simple physical form of the words and sentences, into things like voice and tone, cohesion, motifs, stakes, rhythm, and repetition.
While sometimes the mess is a certifiably inefficient disaster resulting from laziness, the “organized chaos” of a messy space acts like a mental buffer.
Here’s computer scientist Jim Gray on the purpose of buffering in a programming context, from his book Transaction Processing:
The main idea behind buffering is to exploit locality. Everybody employs it without even thinking about it. A desk should serve as a buffer of the things one needs to perform the current tasks.
Keeping things “in the buffer” redounds to productivity (and ideally, creativity). If something is closer at hand, it lowers the transaction costs of retrieval.
Memorization works this way, too. People question the benefits of rote memorization in school, but this is a useful metaphor for understanding its value. Memorizing reusable data keeps it “in RAM” for faster retrieval.
Faster retrieval reduces friction, which means faster feedback loops, faster learning.
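To make the buffering metaphor concrete, here’s a minimal sketch in Python (my illustration, not Gray’s): a desk-sized cache where recently used items stay close at hand, so repeat retrievals skip the expensive trip to storage.

```python
from collections import OrderedDict
import time

def fetch_from_storage(key):
    # Stand-in for an expensive retrieval: disk, network, or walking
    # to the filing cabinet instead of glancing at the desk.
    time.sleep(0.1)
    return f"value-for-{key}"

class DeskBuffer:
    """A tiny LRU buffer: recently used items stay on the desk."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # back on top of the pile
            return self.items[key]       # fast path: no retrieval cost
        value = fetch_from_storage(key)  # slow path: pay the transaction cost
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # least recently used falls off the desk
        return value

desk = DeskBuffer()
desk.get("q3-report")  # slow: first access pays the retrieval cost
desk.get("q3-report")  # fast: it's already in the buffer
```

This is exactly the locality Gray describes: the second access is nearly free because the item never left the buffer.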
My plea at Meta was “No grand plans, follow the gradient of user value”.
I love this. If you keep persistently pushing up the gradient toward more value, you win in the long term. Durable and sustainable success is the kind that happens gradually.
This reminds me of a conversation I was having earlier today about Fulcrum and our positive net retention. Our product fit was good enough that no one ever left. That didn’t mean infinite growth or hockey-stick revenue, but it created a durable foundation from which to grow gradually.
Bottom-up adoption, continuous shipping of new features, and modest evolution of pricing and packaging over time combined to enable a gradual climb up the gradient.
You don’t always need the comprehensive 5-year strategy doc or the holistic product redesign or the earth-shaking press release. You just need the next nugget of feedback on the adjacent missing link in the value chain, one you can iterate toward solving.
Keep climbing the gradient, and success will follow.
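The metaphor comes from optimization, and a toy version makes it concrete. Here’s a hypothetical sketch in Python (mine, nothing from Fulcrum): plain numerical gradient ascent, where there’s no grand plan, just the small step that increases value right now, taken over and over.

```python
def value(x):
    # A made-up "user value" landscape with a single peak at x = 7.
    return -(x - 7.0) ** 2 + 50.0

def local_slope(f, x, eps=1e-6):
    # Estimate the gradient numerically: which small change raises value?
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.0  # wherever the product happens to be today
for _ in range(200):
    x += 0.05 * local_slope(value, x)  # small, persistent step uphill

print(round(x, 2), round(value(x), 2))  # ends up near the peak: 7.0 50.0
```

No single step is dramatic, but two hundred of them land at the top; that’s the gradual, durable climb.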
Here’s a useful way of thinking about the domains of our three branches of government, from Yuval Levin’s American Covenant:
It is only a slight exaggeration to say that the Congress is expected to frame for the future, the president is expected to act in the present, and the courts are expected to assess the past. These boundaries are not perfectly clean, of course.
How distinct this delineation of roles is between the branches of government is up for debate, but this is a useful way to think about the Framers’
intention in designing the balanced separation of powers.
Legislators frame laws for the future
The president acts on them today
Judges compare what’s happening today and planned for tomorrow against precedents set in the past
We tend to run into trouble when any of the branches strays outside its primary domain.
When the executive is designing Big Plans for the future, or when the judiciary is issuing punishments in the present, or (in what I’d say is our worst problem today) the legislative isn’t doing anything, we get into fraught territory.
While each of these has its primary focus on a particular time horizon, they aren’t the sole arbiters of decision making with respect to their domain. Our complex array of checks allows each to assert influence over other areas. I just find this a useful compression of the general model of the system.
Build assistive docs to help the Agent. Remember that the Agent behaves a lot like a human:
A human with minimal guidance doesn’t infer the right choices
PRDs, app flows, tech stacks / API docs, frontend/backend guidelines all help it do what you want
Project Rules over .cursorrules (sketch below)
More flexible
Sync across teams
Fine-tune based on tech stack
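For illustration, here’s a hypothetical sketch of a project rule, based on my understanding of Cursor’s convention of storing rules as files under `.cursor/rules/` (the file name, globs, and guidelines here are invented; check Cursor’s current docs for the exact format):

```
---
description: Frontend conventions for the web app
globs: ["src/web/**/*.tsx"]
alwaysApply: false
---

- Use functional React components with TypeScript; no class components.
- Fetch data only through the shared client in src/web/lib/api.ts.
- Follow the naming conventions in docs/frontend-guidelines.md.
```

Because rules live as plain files in the repo, they sync through version control and can be scoped per directory or per stack, which is what makes them more flexible than a single .cursorrules file.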
I’ve had much better results getting in the weeds with PRDs well before having agents go off and build. And you can have multiple AIs work on and
refine your documents, too.
Some technologies are unpredicted, but evolve. Others are predicted but don’t seem to materialize (or not yet). Then there are those that are expected AND
appear. The unexpected tend to be the most disruptive — no one’s had the chance to prepare.
But the expected, if they do finally arrive, have been ruminated on for a long time. When we eventually realize the expected, we’re more socially prepared for its impacts, though we’re often wrong about the specifics of those impacts until they actually show up.
Kevin Kelly writes about this in the context of AI, a technology long predicted, but always with a bent toward the negative: toward the destructive social consequences of creating artificial beings.
Artificial beings – robots, AI – are in the Expected category. They have been so long anticipated that there has been no other technology or invention as widely or thoroughly anticipated before it arrived as AI. What invention might even be second to AI in terms of anticipation? Flying machines may have been longer desired, but there was relatively little thought put into imagining what their consequences might be. Whereas from the start of the machine age, humans have not only expected intelligent machines, but have expected significant social ramifications from them as well. We’ve spent a full century contemplating what robots and AI would do when it arrived. And, sorry to say, most of our predictions are worrisome.
Here’s the example list from Arthur C. Clarke’s 1963 book, Profiles of the Future:
This is a phenomenal extended (3 hour!) interview with Dana Gioia on his background, poetry, his writing process, and the habits he’s curated that have made him such a prolific and interesting writer.
When capturing hundreds of problems during product discovery, we generate at least as many potential solutions. We can’t build everything at once, so
how do we decide what to tackle first? Ryan Singer has a suggestion:
The counterintuitive thing is, we often feel like our task is to get to a “yes.” But what we actually need is a way to say “no.” It’s the ability to eliminate many, many things that aligns us on the one thing. It’s the “no, no, no, … YES!” that gives us the power to move forward and to stick with a project.
Finding the “reasons to build” for any given solution is easy — every idea is “good” on some continuum. When faced with a hundred ideas, each with compelling reasons to build it, we’re left with the fuzzy question of “how good?”
Ryan’s suggestion inverts the question: look for the reasons to not build it. Is there a workaround? Is it merely annoying but not a dealbreaker? What are users doing today instead? What are the real consequences of the status quo?
This inversion approach is more likely to highlight the acute pains — the missing solutions causing real negative consequences, the ones with no good alternatives or workarounds.
The first key to prioritizing is to triage the obvious “not now”s first. If we can cut our list down significantly, we can focus attention on where we’ll make the biggest impact.
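As a toy sketch of that triage in Python (my hypothetical model, not Ryan’s actual process): run every idea through the inverted questions and only debate whatever survives the “no”s.

```python
ideas = [
    {"name": "bulk export",  "workaround_exists": True,  "dealbreaker": False},
    {"name": "offline sync", "workaround_exists": False, "dealbreaker": True},
    {"name": "dark mode",    "workaround_exists": True,  "dealbreaker": False},
]

def reasons_to_say_no(idea):
    # Each inverted question that comes back "yes" is a reason this can wait.
    reasons = []
    if idea["workaround_exists"]:
        reasons.append("users have a workaround today")
    if not idea["dealbreaker"]:
        reasons.append("annoying, but not a dealbreaker")
    return reasons

survivors = [i for i in ideas if not reasons_to_say_no(i)]
print([i["name"] for i in survivors])  # ['offline sync']: acute pain, no workaround
```

The point isn’t the code; it’s that elimination criteria shrink a hundred “maybes” down to the few ideas worth a real debate.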
In this AI era, I’ve been thinking a lot about what it means for humans in the loop of formerly human tasks. When AI is inserted in all layers of the
stack, what’s left for us?
Sari Azout hits on something I agree with: that the intangibles are (at least for now) resistant to AI. And these areas tend to be where we humans
find joy in creativity in the first place. Taste, building context, intersecting divergent ideas, a respect for the tactile, the ephemeral, the
unpredictable.
First, you need to cultivate a deeper relationship with your gut. The more our world becomes measurable and quantifiable, the more we need spaces that preserve what can’t be measured—the hunches we can’t explain, the patterns we feel but can’t prove. A jazz musician knows when to break rules in ways no theory explains. A good copywriter can feel what words will land without having a single data point to prove it. Taste isn’t some mysterious gift bestowed at birth—it’s simply what happens when you pay close attention to what moves you.
Our minds are good at finding patterns in the unquantifiable.
As I watch my kids learn, it strikes me how much we learn by copying. Imitation isn’t the enemy of originality — it’s the foundation of it. We learn
by copying, refine through practice, and ultimately create something uniquely ours. My latest post on Res Extensa:
In our rush to be original, we often dismiss copying as somehow lesser than “true” learning. But mimicry isn’t just a shortcut — it’s fundamental to how we master skills. We see it in my daughter’s creative reproductions, in my son’s workbench discoveries, and in every artist who’s traced the footsteps of masters before them.
The path to originality paradoxically begins with imitation. First, we copy to build competence. Then we understand. Finally, we create.