Stuart Buck with a fascinating piece on the seemingly simple but deceptively hard process of replicating studies. Two labs were collaborating on a breast cancer study, and ran into surprising challenges getting the same results with (what they thought were) the same inputs:
They were frustrated: “Despite using seemingly identical methods, reagents, and specimens, our two laboratories quite reproducibly were unable to replicate each other’s fluorescence-activated cell sorting (FACS) profiles of primary breast cells.”
They tried everything: The instrumentation. The “specific sources of tissues…, media composition, source of serum and additives, tissue processing, and methods of staining cell populations.” The protocols to ensure they were using “identical enzymes, antibodies, and reagents.”
After doing all of this for a year, they were still stumped. So they met in person to “work side by side so we could observe every step of each other’s methods.”
In the end, they figured out that the ONLY reason for their discrepant results was this: The rate of stirring collagenase. At one lab, the tissue was stirred at a rate of 300 to 500 revolutions per minute for six to eight hours. At the other lab, it was stirred at a much more gentle rate of 80 revolutions per minute for 18 to 24 hours.
That was it. That was the one and only difference that explained why Harvard and Berkeley labs were getting such different results from an identical experiment. No one had even thought to mention the rate of stirring, because it seemed so routine and unlikely to matter.
There’s already a replication crisis in research. This shows that even when researchers are trying hard to replicate honest work, they can be confounded by intricate minutiae of process and incidental detail. It all matters.
We learn about “The Enlightenment” as a singular entity, a historical age associated with rationality, scientific inquiry, humanism, and liberty. The Enlightenment and scientific revolution were defining moments that spawned an unprecedented period of progress and human flourishing. But in his book The Beginning of Infinity, David Deutsch adds useful texture for better understanding the motivations of the Enlightenment’s contributors.
He divides the movement into two broad forms: the “British” and the “Continental”.
Both branches agree on the core principles of rationality, progress, and freedom. Where they disagree is on how to achieve these goals. They pursue the same ends, but disagree on the means. The British model builds on the concept of fallibilism: progress happens through conjecture, empirical evidence, and falsification. The Continental relies on pure reason, and our theoretical ability to find final, objective truth. Thinkers like Kant, Rousseau, and Voltaire best fit in the Continental camp. The likes of John Locke, Edmund Burke, Karl Popper, and Adam Smith in the British.
Here’s a summary of qualities that differentiate these two approaches to pursuing human progress:
| Continental Enlightenment | British Enlightenment |
| --- | --- |
| Utopianism | Fallibilism |
| Society can be perfected | Society can only be indefinitely improved |
| Problems are soluble, NOT inevitable | Problems are soluble, AND inevitable |
| Perfect the state through design | Improve the state through gradual evolution |
| Top-down | Bottom-up |
| Comprehensive reform of institutions | Messy improvement of imperfect forms |
Deutsch himself favors the British form. As with issues of contemporary politics and philosophy, it’s important to understand not only the goals a particular philosophy seeks, but also how it proposes we achieve them.
Jason Fried recently wrote that we should teach iteration as a subject, or technique at least, in schools.
Another subject wildly undertaught is evolution. Not just the “creation vs. evolution” Big Picture story of how humans got here that we’ve spent centuries arguing over. I mean the underlying mechanisms of random variation, error correction, and fitness-to-environment testing that creates emergent order:
Out of the random variation, which is the result of mutations/copying-errors (which can be the result of exposure to radiation, metals or chemical substances), only a small percentage actually increases the fitness of an individual. Those mutations tend to prevail and become widespread, whereas mutations that lead to a disadvantage will likely be weeded out of the genepool. Even though the variation is originally random, a non-random subset of it – the fitness-benefitting components – ends up conserved through natural selection. This mechanism results in organisms adapting to better survive and reproduce in their environment.
So many other fields can benefit from a deeper understanding of these mechanisms — economics, sociology, architecture, language. In fact, there’s a similarity here to teaching iteration: teaching that accumulation and trial and error are present in every real system you’ll encounter in the future.
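To make the mechanism concrete, here’s a toy simulation of variation plus selection (my own sketch in Python, not from the piece quoted above): genomes are bit strings, the “environment” is a fixed target, mutation is random, and selection is not.

```python
import random

TARGET = [1] * 20        # a hypothetical environment the population adapts toward
MUTATION_RATE = 0.01     # chance that any given "gene" copies with an error

def fitness(genome):
    """How well the genome fits the environment: count of matching positions."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Random variation: each bit flips with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from pure noise.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(100)]

for generation in range(200):
    # Non-random selection: the fitter half reproduces, the rest is weeded out.
    parents = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
    population = [mutate(random.choice(parents)) for _ in range(100)]

# Fitness climbs toward 20 even though every individual change was random.
print(max(fitness(g) for g in population))
```

Even with purely random mutation, the population reliably climbs toward the target — the non-random filter is doing all the work.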
This is a phenomenal interview with Richard Rhodes, author of the legendary The Making of the Atomic Bomb, an expansive history of the Manhattan Project and the development of nuclear weapons technology.
Dwarkesh Shah’s show The Lunar Society is generally excellent and highly recommended. Just listen to how long he lets Rhodes answer and expound on questions without interruption. These are my favorite types of long-form interviews.
This piece from Anton Howes gets at one of the key insights about how innovation works: it doesn’t happen through sudden bursts of insight from thin air — it requires the combination of the right simmering ingredients and a person in search of solutions to specific problems:
Santorio’s claim, it seems, is safe. But in this lies an important lesson for all would-be inventors. The inverted flask experiment had been around for centuries, and even been understood since ancient times as being caused by hot and cold. So its application as a thermometer was extremely low-hanging fruit. The likelihood of it being interpreted as a temperature-measuring device might have increased somewhat in the mid-sixteenth century, when we find the first mentions of it being done using a glass flask rather than an opaque metal container. Yet even then, the visible rise and fall of the liquid in the open bucket, rather than the flask, could always have been noted and measured against a scale in much the same way. What Antonini’s letter also shows us is that even when a scale was applied to the experiment, an ingenious person who knew their cutting-edge science like he did could still fail to appreciate the potential of what they had done.
In this case, the “inverted flask” had existed for many years, and Santorio was actively searching for ways to measure temperature. Innovation requires the right mixture of “prior art” and willful intent in search of solutions. Progress doesn’t happen automatically!
Ezra Klein recently hosted Stripe founder Patrick Collison on his podcast for a deep dive into his thinking on progress studies.
Tracking down what generates progress, and what drives substantial breakthroughs in scientific research, is a hard problem. Clearly there’s no monocausal explanation. I like Patrick’s idea here that specific attributes of research culture might be key contributors:
If we kind of accept that, and we try to ask ourselves, well, specifically, what are the mechanisms? You know, what’s actually going on? It’s hard for me to say. It seems like the transmission of research culture by individual researchers matters a great deal.
And you see these kinds of pockets of the cultural transmission repeatedly crop up, where Gerty and Carl Cori — you probably haven’t heard of — they ran a little biology lab in Missouri, and no fewer than six of their trainees, of students they trained, went on themselves again to win Nobel Prizes.
And if we tell ourselves a standard kind of mechanistic story as to, well, it’s the funding level, it’s how much are we investing in science, or it’s something about whether there’s an institution in the coarser sense, that can possibly be amenable to it, it’s very hard to explain these eddies where you see these pockets of excellence really produce these outsized returns. So I think it’s a complicated question.
I think all of aggregate culture, funding, institutional characteristics, and so on all contribute to it. But if I had to isolate a single variable, it seems to me that the research culture set by specific people and the tacit knowledge transmitted through direct experience is probably the number-one thing.
Cultural factors are some of the hardest contributors to measure, which could be part of why it’s so difficult to explain where the Enlightenment and human progress came from in the first place.
Online magazine BigThink has just published a full issue dedicated to progress studies. Lots of great stuff here from noteworthy folks like Tyler Cowen, Hannah Ritchie, Kevin Kelly, and many more.
Geologists on the whole are inconsistent drivers. When a roadcut presents itself, they tend to lurch and weave. To them, the roadcut is a portal, a fragment of a regional story, a proscenium arch that leads their imaginations into the earth and through the surrounding terrane.
This is a book I’d love to revisit. So many great bits of history.
From the world of geophysics: a massive seismic research project has been underway around the island of Réunion, a shield-volcano island sitting over an Indian Ocean hotspot. Researchers have been using a stream of data collected from a web of seismometers in the region to map out the superheated plumes of mantle material that rise up from near the core.
In 2012, a team of geophysicists and seismologists set out to map the plume, deploying a giant network of seismometers across the vast depths of the Indian Ocean seafloor. Nearly a decade later, the team has revealed that the mantle is stranger than expected. The team reported in June in Nature Geoscience that the plume isn’t a simple column. Instead, a titanic mantle plume “tree” rises from the fringes of the planet’s molten heart, with superheated branchlike structures appearing to grow diagonally out of it. As these branches approach the crust, they seem to sprout smaller, vertically rising branches — super hot plumes that underlie known volcanic hot spots at the surface.
The data has produced higher-resolution 3D models of these plumes than we’ve ever had, showing how fractal, tree-like structures emerge even in geophysical processes deep within the earth. These patterns are everywhere in nature, even in slow-moving rock. The article comes with some cool graphics showing what these structures look like, stretching from the core up to the surface and forming continent-sized columns.
Geologic timescales are impossible to comprehend in their scale. So I love it when writers accelerate the events for effect:
Some scientists suspect that plumes from the African giant blob spent at least 120 million years tearing the ancient supercontinent of Gondwana into shards. As the plumes rose into its base, they heated it and weakened it; like moles making hills, they caused the land atop these plumes to dome upward, then slide downhill. Australia was unzipped from India and Antarctica, Madagascar from Africa, and the Seychelles microcontinent from India — an act of destruction that made the Indian Ocean.
Jonathan Rauch on pluralism and the necessity of disagreement in the search for truth.
His book Kindly Inquisitors was first published in 1993, but is as relevant today as ever. The book is a defense of what he calls “liberal science”, our decentralized process for knowledge discovery that relies on relentless-but-gradual error correction:
Liberal science, by its very nature, has little tolerance for fundamentalism; conversely fundamentalism is a threat to liberal science. Fundamentalism, defined by Rauch as the “search for certainty rather than for errors,” is the antithesis of scientific inquiry. Fundamentalism seeks a monopoly on knowledge from which it can deny the beliefs put forth by all others. Rauch even notes that there are fundamentalist free-marketeers—those who refuse to accept the possibility that cherished economic axioms may be flawed, or at least in need of revision—and he challenges them to enhance their intellectual rigor. If classical liberals are willing to accept the self-correcting actions of the marketplace to properly allocate valued resources, they should also allow the self-correcting mechanisms of liberal science to separate knowledge from supposition.
Due to its nature as a decentralized system, liberal science frees knowledge from authoritarian control by self-appointed commissars of truth. “In an imperfect world, the best insurance we have against truth’s being politicized is to put no one in particular in charge of it,” notes Rauch. Liberal science achieves this end. It avoids despotism in the intellectual realm as it does in those of politics and economics.
I set up RoamHacker’s Roam42 suite for SmartBlocks a few weeks back, and it’s game-changing. I’m still a novice with it and have only used a few of its tools, but this sort of extensibility and programmability is what’s making Roam the most interesting text platform.
This is a solid, brief guide on how to frame Jobs to Be Done statements.
“Help me brush my teeth in the morning” is not a great example of a Job to Be Done statement.
“Help me brush my teeth in the morning” is joined at the hip to an existing solution (a toothbrush) and there’s only so far you’ll be able to expand your thinking within that bubble.
A way to describe the Job to Be Done when a person is brushing their teeth that could lead to more innovative product design is:
Vannevar Bush’s seminal report to President Truman, making the case for government support for foundational scientific research (and pushing to create the NSF).
Jason Crawford is maintaining this list on Roots of Progress, an archive of inventions that seemingly could’ve been uncovered earlier than they were, based on what precursor knowledge would’ve been required. This one about stirrups is wild:
It’s fascinating that there aren’t even clear explanations for why these inventions took as long as they did to discover. It points to the random, serendipitous, evolutionary nature of innovation. Many things requiring little to no prerequisite breakthrough in materials, processes, or manufacturing techniques simply go undiscovered for centuries, or are obscured by social or cultural factors.
One of the key insights coming out of the progress studies movement seems like a simple idea on the surface, but it’s an important core thesis: progress is not an inevitability. We don’t see new inventions, innovations, and improvements to quality of life by accident. They’re the result of deliberate effort by people searching for ways to improve life. Using names like “Moore’s Law” perhaps makes it sound like computer chip improvements “just happen,” but researchers at Intel or TSMC would beg to differ on how automatic those developments were.
For at least the last 150 years, steady, expansive progress has been the default. Since the Industrial Revolution, scientific discovery has marched forward, and since the days of the Enlightenment, science and progress have been generally accepted as net benefits to humanity1.
I think what we see today isn’t so much a reversal on this position, but perhaps a sense of passivity and taking progress for granted. That scientific advancement just happens to us without deliberate effort.
Jason Crawford had some interesting thoughts on this subject on Roots of Progress, first posing a question on where progress comes from:
Most of the arguments in response supporting the second case — that progress comes from flukes and strokes of luck — fell into two categories:
Failure of imagination — “I can’t imagine any big breakthroughs, so they must be flukes or strokes of luck”; or
Materialism — “Progress happens through exploitation of resources, so it will peter out as we run out of physical material”
Exploring innovation’s inner workings is helpful in understanding the issues with these two arguments.
In his phenomenal book How Innovation Works, Matt Ridley describes innovation as a gradual, bottom-up, evolutionary process of thousands of small steps forward. We tend to look back on the history of progress and point to pillar breakthroughs like Orville and Wilbur at Kitty Hawk, Edison’s lightbulb, Marconi’s radio, or Pasteur’s vaccines, treating each as a Big Bang moment of inspiration that happened in one fell swoop. Ridley tells us this is a flaw in human reasoning; we love narratives and stories, so we spice up the reality of how these inventions came to be. The truth on the ground was much more gradual and dispersed in each of these cases. Hundreds of precursor steps had to happen, proffered by hundreds of different individuals dispersed around the globe. The Big Bang invention story takes as given the source branches lower on the tree that sprouted these successor innovations.
It’s not as if innovators aren’t actively pursuing discovery, as if solutions just fell into their laps while they sat in their living rooms. The Wright Brothers knew that they were trying to get a flying machine off the ground. What they didn’t predict, though, was the impact that flight would have on global economics, war, trade, recreation, and every other dimension of modern life. The same is typically true of other instances of progress. From our position in the 21st century, it seems obvious that flight would have massive ramifications for the global order. But this is easy to see in hindsight. As Ridley puts it:
Technology is absurdly predictable in retrospect, wholly unpredictable in prospect.
I think Jason raises a good point on why we have trouble imagining where the next breakthrough will come from:
The historical reason is that the big breakthroughs of the past were not easy to imagine or predict before they happened. In a different context, Eliezer Yudkowsky points out that even the creators of inventions such as the airplane or the nuclear reactor felt that their breakthroughs were fifty years out, or even impossible, shortly before they happened. Now is no different.
One of the biggest factors in this trick of hindsight is that innovation in situ is a gradual phenomenon. Only in hindsight do we look back on it as a “eureka” process of going from 0 to 1 in a flash of inspiration. I think if you reframe your understanding of progress and innovation around a steady, gradual march of deliberate advances, you begin to see why waiting around for it to happen in big bursts is an incorrect model.
Another insight from Ridley is that innovation is rooted in trial and error. In a world where we’ve become hyper-concerned with risk and protecting against downsides (just look at our expanding regulatory complex — only growing, never shrinking), we slow ourselves down from making the errors necessary for progress. Edison, a man who turned innovation into a product in itself, had this to say about his process, acknowledging error as a baked-in prerequisite for discovery:
I’ve not failed, I’ve just found 10,000 ways it won’t work.
Regarding the point on material constraints, it’s worth reading into the concept of dematerialization. The best work on the topic I know of is Andrew McAfee’s More From Less, which dives deep on this topic of how much modern progress is able to not only continue but in most cases accelerate, all while using fewer resources than once required. Take the simple aluminum can: the first ones weighed 85 grams. Modern refinement and manufacturing processes have reduced that to 11. The book is filled with cases like this of dematerialization outpacing our increase in consumption. It’s not a universal law of innovation, but rather a pattern that we see with continued progress2.
Getting back to Jason’s original question — I believe that progress does have substantive causes. But between innovation’s gradual nature and the sea of trial and error, it’s hard to notice it while it’s happening. While you’re sitting in the present without the clarity of hindsight, it can feel like progress comes from flukes and strokes of luck3. But the deliberate effort and small victories add up (and compound) over time to enormous progress.
One of the goals of the progress studies movement is to expose what the sources of innovation are, to teach people how innovations came to be. And it’s important to recognize that innovation is intentional. It happens because we choose to work on making our lives better.
A final quote from the beginning of How Innovation Works (emphasis mine):
Innovation, like evolution, is a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance – and that happen to be useful.
For most of human history, the unknown was treated as mystical and divine, rather than something that could be analyzed, understood, and deliberately improved. ↩
To be clear, serendipity does play a role. Teflon, famously, was accidentally discovered by Roy Plunkett during his time at DuPont. He was attempting to create a new refrigerant, and ended up coating the inside of a pressurized bottle with the slick material. But he was putting in the work and seeking a discovery; he just ended up with a different one. As one of my favorite quotes from explorer Roald Amundsen goes: “Victory awaits him who has everything in order — luck, people call it.” ↩
This conversation with José Luis Ricón Fernández de la Puente on Erik Torenberg’s podcast covered a wider range of topics than I think I’ve ever heard discussed on a single podcast. A brief sampling of the subjects touched on: scientific progress, economics, GDP growth, health care, regulations, longevity research.
Also see José’s blog for more in-depth coverage on his research topics.
Jeff Atwood on Robert X. Cringely’s descriptions of three groups of people you need to “attack a market”:
Whether invading countries or markets, the first wave of troops to see battle are the commandos. Woz and Jobs were the commandos of the Apple II. Don Estridge and his twelve disciples were the commandos of the IBM PC. Dan Bricklin and Bob Frankston were the commandos of VisiCalc.
Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given them by the commandos. The second-wave troops take the prototype, test it, refine it, make it manufacturable, write the manuals, market it, and ideally produce a profit.
What happens then is that the commandos and the infantry head off in the direction of Berlin or Baghdad, advancing into new territories, performing their same jobs again and again, though each time in a slightly different way. But there is still a need for a military presence in the territory they leave behind, which they have liberated. These third-wave troops hate change. They aren’t troops at all but police.
Behind all this is the astonishing, baffling breadth of what sleep does for the body. The fact that learning, metabolism, memory, and myriad other functions and systems are affected makes an alteration as basic as the presence of ROS quite interesting. But even if ROS is behind the lethality of sleep loss, there is no evidence yet that sleep’s cognitive effects, for instance, come from the same source. And even if antioxidants prevent premature death in flies, they may not affect sleep’s other functions, or if they do, it may be for different reasons.
Adam Elkus with a great essay on the current moment:
“Is this as bad as 1968?” is an utterly meaningless question precisely for this underlying reason. People do not invoke 1968 because of the objective similarities between 2020 and 1968. They do so because we have crossed a threshold at which basic foundations of social organization we take for granted now seem up for grabs. This is an inherently subjective determination, based on the circumstances of our present much as people in 1968 similarly judged the state of their worlds to be in flux. 1968 is an arbitrary signpost on an unfamiliar road we are driving down at breakneck speeds. You can blast “Gimme Shelter” on the car stereo for the aesthetic, but it’s not worth much more than that.
Devon Zuegel with ideas on how to better utilize your calendar for things beyond appointments and meetings. There are a few ideas here I’d like to commit to, especially using the calendar as a recall tool for memory.
Just as our distant ancestors were too gullible (factually, if not strategically) about their sources of knowledge on the physical world around them, we today are too gullible on how much we can trust the many experts on which we rely. Oh we are quite capable of skepticism about our rivals, such as rival governments and their laws and officials. Or rival professions and their experts. Or rival suppliers within our profession. But without such rivalry, we revert to gullibility, at least regarding “our” prestigious experts who follow proper procedures.
I really enjoyed this post from Jerry Neumann exploring the structure of how technological and scientific progress happens.
Referencing the well-known work of Karl Popper and Thomas Kuhn, he demonstrates how technological change follows a power-law distribution in the frequency and impact of its advances. Kuhn’s argument was that progress happens either in small, incremental improvements or in massive, revolutionary leaps:
Kuhn looked at the history of scientific progress and saw that Popper’s heroic scientific machinery was rarely how science happened in the real world. Kuhn’s theory was descriptive: it explained why science seems to have two different processes at work, one of the gradual accumulation of knowledge through normal science and the other of jarring change through revolution. These two processes are not versions of one another, they are truly different, in Kuhn’s view. He says the proponents of normal science fiercely resist revolutionary science and so revolutions can only occur when normal science hits an almost existential dead end.
Kuhn was a proponent of the idea that large, tectonic movements in scientific progress were the result of new theories overwhelming the inertia of the status quo, as the old guard aged, shrank in influence, and eventually died out. “Science advances one funeral at a time.”
But Neumann here peels apart what a “technology tree” looks like in reality, and how changes to the modular components result in technological output at the “leaf” level. Using microprocessors as an example, they’re the result of combined sets of discoveries connected in a trunk-and-branch type configuration:
In this model, making incremental improvement to a fundamental technology (like transistor technology or lithography) has a cascading impact up the tree.
An interesting insight here is how he uses this explanation to refute not only what Kuhn’s theory describes, but also Clayton Christensen’s theory of sustaining versus disruptive innovation, which is widely accepted as truth in the tech community.
If innovation outcomes are power-law distributed then there aren’t really two processes at all, it just seems that way. Kuhn, not to mention Clay Christensen, might have been seriously misreading the situation. It may seem like change faces resistance until it is big enough that the resistance can be swept away, but the truth may be that every change faces resistance and every change must sweep it aside, no matter if the change is tiny, medium-sized, or large. We just tend to see the high frequency of small changes and the large impact of the unusual big changes.
When framed this way, it makes a lot of intuitive sense. Impact and frequency are often the two qualities we index on when reacting to discoveries. For those in the middle of the distribution — discoveries that are less frequent than the small incremental ones, and less impactful than the big sea changes — the response tends to be: “meh.” Perhaps the distribution doesn’t follow the strictly bimodal pattern we thought; maybe it’s just our attention being focused on the far left and right of the curve.
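To see how a single process can masquerade as two, here’s a toy simulation (my own illustration, not Neumann’s actual model): draw every innovation’s “impact” from one heavy-tailed distribution and look at what ends up mattering.

```python
import random

random.seed(1)

# One generating process: every innovation's "impact" comes from the same
# heavy-tailed distribution. No separate "normal" and "revolutionary" machinery.
impacts = sorted(random.paretovariate(1.2) for _ in range(100_000))

top = impacts[-100:]                  # the 0.1% largest changes
median = impacts[len(impacts) // 2]   # the typical, forgettable change

print(f"top 0.1% of changes account for {sum(top) / sum(impacts):.0%} of total impact")
print(f"median impact: {median:.2f}   largest impact: {impacts[-1]:.0f}")
```

Most draws are tiny and a handful are enormous. If you only notice the very frequent and the very impactful, the middle disappears and the output looks bimodal even though it came from one distribution.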
To me, this is an important insight, considering that our acceptance of the Kuhn / Christensen theories leads us to design our organizations and processes around that model. Organizations create innovation labs and bring in McKinsey consultants to help them “do innovation.” Then these groups are incentivized to discount or ignore ideas that aren’t massive in scope — a selection bias against patently good ideas with potential, because they’re seeking the next world-shifting discovery.
There’s a strong case for orienting research and development in a more linear fashion: don’t over-index on the Big Ideas, but also don’t fall into the trap of favoring small, incremental steps over exploratory, free-form research.
The topic of research funding has been getting renewed attention amid the coronavirus pandemic. Fast Grants launched a few weeks ago and has already awarded grants to 97 different research proposals.
Roots of Progress breaks down various funding methods that have powered scientific research.
Ken Burns is producing a documentary series adapted from Siddhartha Mukherjee’s book The Gene: An Intimate History. It’s a history of genetics and the human genome. It was one of my favorite books from 2017. Looking forward to watching this.
This is a neat clip from Walt Disney’s Disneyland TV series. Wernher von Braun explains the future technology that’ll take us to the Moon, in 1955, several years before the Mercury program even began.
This article is excerpted from Steve Stewart-Williams’s latest book, The Ape that Understood the Universe. On the purposes of altruism and kin selection:
The details of Hamilton’s theory are complex, but the basic idea is fairly simple. The starting point is the observation that organisms share a larger fraction of their genes with relatives than they do with unrelated individuals. This has an important implication, namely that any gene that contributes to the development of a tendency to help one’s relatives has a better than average chance of being located as well in the recipients of that help. As a result, by helping one’s relatives to survive and reproduce, one can indirectly help to spread the genes that gave rise to that very tendency.
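The compact statement of Hamilton’s idea is the rule rB > C: a gene for helping can spread when the relatedness r between helper and recipient, times the benefit B to the recipient, exceeds the cost C to the helper. A tiny check with made-up numbers (my own illustration, not from the book):

```python
def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: a helping gene is favored when r * B > C."""
    return relatedness * benefit > cost

# Hypothetical numbers: sacrificing 1 unit of your own reproductive success is
# "worth it" genetically if it buys a full sibling (r = 0.5) more than 2 units.
print(altruism_favored(relatedness=0.5, benefit=3, cost=1))    # True
print(altruism_favored(relatedness=0.125, benefit=3, cost=1))  # False (a first cousin)
```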
I’m currently reading Rory Sutherland’s Alchemy, which is full of evolutionary concepts and factoids like this, even though it’s ostensibly about marketing, bias, and decision making. Stewart-Williams’s book sounds right up there; will add it to the list.
Bloomberg has been publishing this video series on future technologies called “Giant Leap.” It’s well-done and a nice use of YouTube as a medium.
This one explores a number of new companies doing R&D in microgravity manufacturing — from biological organ “printing” to creation of high-quality fiber optic materials. There are still challenges ahead to unlock the growth of space as a manufacturing environment, but it feels like we’re on the cusp of a new platform for industrial growth.
An interesting technical breakdown on how Figma built their multiplayer tech (the collaboration capability where you can see other users’ mouse cursors and highlights in the same document, in real time).
A fascinating paper. This research suggests the possibility that group-conforming versus individualistic cultures may have roots in diet and agricultural practices. From the abstract:
Cross-cultural psychologists have mostly contrasted East Asia with the West. However, this study shows that there are major psychological differences within China. We propose that a history of farming rice makes cultures more interdependent, whereas farming wheat makes cultures more independent, and these agricultural legacies continue to affect people in the modern world. We tested 1162 Han Chinese participants in six sites and found that rice-growing southern China is more interdependent and holistic-thinking than the wheat-growing north. To control for confounds like climate, we tested people from neighboring counties along the rice-wheat border and found differences that were just as large. We also find that modernization and pathogen prevalence theories do not fit the data.
An interesting thread to follow, but worthy of skepticism given the challenge of aggregating enough concrete data to prove anything definitively. There’s an intuitively sensible argument here about the fundamental differences between wheat and rice subsistence farming:
The two biggest differences between farming rice and wheat are irrigation and labor. Because rice paddies need standing water, people in rice regions build elaborate irrigation systems that require farmers to cooperate. In irrigation networks, one family’s water use can affect their neighbors, so rice farmers have to coordinate their water use. Irrigation networks also require many hours each year to build, dredge, and drain—a burden that often falls on villages, not isolated individuals.
I’ve talked before about my astonishment with the immune system’s complexity and power. This piece talks about tuft cells and how they use their chemosensory powers to identify parasites and alert the immune system to respond:
Howitt’s findings were significant because they pointed to a possible role for tuft cells in the body’s defenses — one that would fill a conspicuous hole in immunologists’ understanding. Scientists understood quite a bit about how the immune system detects bacteria and viruses in tissues. But they knew far less about how the body recognizes invasive worms, parasitic protozoa and allergens, all of which trigger so-called type 2 immune responses. Howitt and Garrett’s work suggested that tuft cells might act as sentinels, using their abundant chemosensory receptors to sniff out the presence of these intruders. If something seems wrong, the tuft cells could send signals to the immune system and other tissues to help coordinate a response.
Given the massive depth of knowledge about biological processes, anatomy, and medical research, it’s incredible how much we still don’t know about how organisms work. Evolution, selection, and time can create some truly complex systems.
A beautiful visualization project from Nature converts 150 years of scientific papers into a 3-dimensional network diagram, making concrete the network of citations and references linking together the history of discoveries.
Blot is a super-minimal open source blogging system based on plain text files in a folder. It supports markdown, Word docs, images, and HTML — just drag the files into the folder and it generates web pages. I love simple tools like this.
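For flavor, here’s a toy sketch of the files-in-a-folder idea in Python (my own illustration of the general concept, not Blot’s actual implementation; it assumes a hypothetical posts/ folder and the markdown package):

```python
import pathlib
import markdown  # pip install markdown

# Every .md file dropped into posts/ becomes an .html page in site/.
posts = pathlib.Path("posts")   # hypothetical folder of plain text posts
site = pathlib.Path("site")
site.mkdir(exist_ok=True)

for source in posts.glob("*.md"):
    html = markdown.markdown(source.read_text())
    (site / source.with_suffix(".html").name).write_text(f"<article>{html}</article>")
```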
An interesting post from Robert Simmon of Planet. These examples of visualizations and graphics of physical phenomena (maps, cloud diagrams, drawings of insects, planetary motion charts) were all hand-drawn, in an era when specialized photography and sensing weren’t always options.
A common thread between each of these visualizations is the sheer amount of work that went into each of them. The painstaking effort of transforming a dataset into a graphic by hand grants a perspective on the data that may be hindered by a computer intermediary. It’s not a guarantee of accurate interpretation (see Chapplesmith’s flawed conclusions), but it forces an intimate examination of the evidence. Something that’s worth remembering in this age of machine learning and button-press visualization.
I especially love that Apollo mission “lunar trajectory” map.
Descartes Labs built a wildfire detection algorithm and tool that leans on thermal-band data from NOAA’s GOES weather satellites to detect wildfires by temperature:
While the pair of GOES satellites provides us with a dependable source of imagery, we still needed to figure out how to identify and detect fires within the images themselves. We started simple: wildfires are hot. They are also hotter than anything around them, and hotter than at any point in the recent past. Crucially, we also know that wildfires start small and are pretty rare for a given location, so our strategy is to model what the earth looks like in the absence of a wildfire, and compare it to the situation that the pair GOES satellites presents to us. Put another way our wildfire detector is essentially looking for thermal anomalies.
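Here’s a toy version of that anomaly idea (my own sketch, not Descartes Labs’ detector): build a per-pixel baseline from recent history, then flag pixels that run far hotter than their own past.

```python
import numpy as np

rng = np.random.default_rng(0)
history = rng.normal(300, 2, size=(48, 64, 64))   # 48 past scenes, 64x64 pixels, in kelvin
current = rng.normal(300, 2, size=(64, 64))       # the latest scene
current[20:23, 40:43] += 25                        # inject a hypothetical hot spot

# Model "what the earth looks like in the absence of a wildfire" per pixel.
baseline_mean = history.mean(axis=0)
baseline_std = history.std(axis=0)
z_score = (current - baseline_mean) / baseline_std

fire_mask = z_score > 8                            # hotter than at any point in the recent past
print(np.argwhere(fire_mask))                      # rows/cols of pixels flagged as thermal anomalies
```

The post describes the real detector also comparing each pixel to its surroundings (“hotter than anything around them”); this sketch only does the temporal half.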
I’m a historian of innovation. I write mostly about the causes of Britain’s Industrial Revolution, focusing on the lives of the individual innovators who made it happen. I’m interested in everything from the exploits of sixteenth-century alchemists to the schemes of Victorian engineers. My research explores why they became innovators, and the institutions they created to promote innovation even further.
This connects nicely with the recent “progress studies” movement.
Another interesting post from Roots of Progress, following up from the previous one, which asked why it took so long to invent the bicycle.
This question on invention is an interesting one. My first reaction is to agree with Jason in general that the leisure time and latitude permitted by times of plenty give us more room for study and experimentation — the steps that lead to incremental discovery. However, there have been many breakthrough discoveries happened upon by accident.
Oftentimes progress is spurred forward by intentional inventions leading to unintentional second-order effects — sometimes negative ones, but most of the time positive, unpredicted outcomes:
More generally, it’s impossible to predict which discoveries or inventions are going to be important, at the time they are made, or to see all of the most important applications that will come. When Newcomen invented the steam engine, I don’t think he had any idea that over a century later a descendant of his machine would power railroads and steamboats. Edison invented the phonograph but didn’t predict the recorded music industry. Rockefeller established the oil industry to produce kerosene, then decades later pivoted to gasoline for automobiles. And when DARPA wrote the first grant to invent the Internet, they had no idea how much bandwidth would one day be consumed by cat pictures. So even if people were motivated purely by utility, or wanted to be, we wouldn’t know which directions to pursue. We make progress only through a wandering, unpredictable process of exploration.
The correlation certainly leans toward “plenty.” It’s another reason the study of progress warrants formal research into its mechanics, so we can keep that flywheel spinning.
The relationship that eventually mattered most to Einstein’s legacy was symmetry. Scientists often describe symmetries as changes that don’t really change anything, differences that don’t make a difference, variations that leave deep relationships invariant. Examples are easy to find in everyday life. You can rotate a snowflake by 60 degrees and it will look the same. You can switch places on a teeter-totter and not upset the balance. More complicated symmetries have led physicists to the discovery of everything from neutrinos to quarks — they even led to Einstein’s own discovery that gravitation is the curvature of space-time, which, we now know, can curl in on itself, pinching off into black holes.
Symmetry has helped physicists predict eventual discoveries (like the Higgs boson and gravitational waves), but nature also fails to preserve some symmetries we’d expect:
In some cases, symmetries present in the underlying laws of nature appear to be broken in reality. For instance, when energy congeals into matter via the good old E = mc2, the result is equal amounts of matter and antimatter — a symmetry. But if the energy of the Big Bang created matter and antimatter in equal amounts, they should have annihilated each other, leaving not a trace of matter behind. Yet here we are.
The perfect symmetry that should have existed in the early hot moments of the universe somehow got destroyed as it cooled down, just as a perfectly symmetrical drop of water loses some of its symmetry when it freezes into ice. (A snowflake may look the same in six different orientations, but a melted snowflake looks the same in every direction.)
“Everyone’s interested in spontaneously broken symmetries,” Trodden said. “The law of nature obeys a symmetry, but the solution you’re interested in does not.”
A French startup company called Glowee is working on being able to produce light using bioluminescence:
Glowee reinvents light production with technology nature has already created to make lighting more sustainable and healthier for both humans and the environment. Having identified the genetic coding that creates bioluminescence, Glowee inserts this code into common, non-toxic, and non-pathogenic bacteria to produce clean, safe, synthetic bioluminescence. Once engineered and grown, the bacteria are encapsulated into a transparent shell, alongside a medium composed of the nutrients they need to live and make light. This lighting solution can indefinitely and exponentially grow with little infrastructure needed and does not require any extraction of natural resources.
Because of the relatively low output of these biological light sources, they want to focus first on nighttime applications like illuminated street furniture and street lighting. But it’s a clever demonstration of how we could engineer light sources fueled by something other than electricity.
Imagine having to “feed” the lights in your house instead of simply paying a generation facility for watts delivered through wires.
NASA has developed a portable atomic clock that would allow deep space probes to navigate on their own. As Geoff Manaugh notes here, when you’re traveling in space with no access to a frame of reference, travel time from a point of origin is how one orients:
One might say that the ship is navigating time as much as it is traveling through space—steering through the time between things rather than simply following the lines that connect one celestial object to another.
The general problem of ship orientation and navigation in deep space is a fascinating one, and it has led to ideas like using “dead stars” as fixed directional beacons, a kind of thanato-stellar GPS. This is “the long-sought technology known as pulsar navigation,” Nature reported last year. “For decades, aerospace engineers have dreamed of using these consistently repeating signals for navigation, just as they use the regular ticking of atomic clocks on satellites for GPS.” You head toward something that’s only consistent because it’s dead.
Neuroscientist Karl Friston is the world’s leading authority on brain imaging and is at the forefront of our understanding of how brains actually work. He’s the creator of the free energy principle, an idea that attempts to provide a unifying framework for what drives all life: minimizing free energy.
The predictive processing model is a cognitive framework for modeling how the brain synthesizes information from two channels:
The “bottom-up” stream of raw data coming in through our senses for processing
The “top down” stream of predictions about the world
These two channels merge together in a continuous interplay inside the brain and allow us to make sense of the world, with each system continually feeding back to the other in a process we’d refer to as “learning”.
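Here’s a toy sketch of that loop (my own illustration, not from Friston or Clark): a single top-down belief gets nudged by prediction errors from noisy bottom-up samples.

```python
import random

true_temperature = 21.0      # the hidden state of the world (hypothetical)
belief = 10.0                # top-down prior: what the brain expects to sense
precision = 0.1              # how much weight errors get (higher = trust the senses more)

for step in range(100):
    sensation = true_temperature + random.gauss(0, 1.0)  # bottom-up noisy input
    prediction_error = sensation - belief                # mismatch between the two streams
    belief += precision * prediction_error               # update the model of the world

print(round(belief, 1))  # converges near 21.0: the prior has been corrected by the data
```

That error-weighted nudge is the “learning” the two streams do to each other; the real framework runs this exchange across many cognitive layers at once.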
This Slate Star Codex post is a review of Andy Clark’s Surfing Uncertainty, and has a fascinating analysis of how the two systems interact. It’s a great summary of the concept and one of the best concise descriptions of how the brain works that I’ve ever seen. Here’s its description of the bottom-up / top-down interplay:
The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.
The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”. It uses its knowledge of concepts to make predictions – not in the form of verbal statements, but in the form of expected sense data. It makes some guesses about what you’re going to see, hear, and feel next, and asks “Like this?” These predictions gradually move down all the cognitive layers to generate lower-level predictions. If that uniformed guy was a policeman, how would that affect the various objects in the scene? Given the answer to that question, how would it affect the distribution of edges in the scene? Given the answer to that question, how would it affect the raw-sense data received?
The author looks at disorders and other phenomena through the predictive processing lens to see how they hold up — things like learning, dreaming, the placebo effect, priming, schizophrenia, and autism:
Autistic people classically can’t stand tags on clothing – they find them too scratchy and annoying. Remember the example from Part III about how you successfully predicted away the feeling of the shirt on your back, and so manage never to think about it when you’re trying to concentrate on more important things? Autistic people can’t do that as well. Even though they have a layer in their brain predicting “will continue to feel shirt”, the prediction is too precise; it predicts that next second, the shirt will produce exactly the same pattern of sensations it does now. But realistically as you move around or catch passing breezes the shirt will change ever so slightly – at which point autistic people’s brains will send alarms all the way up to consciousness, and they’ll perceive it as “my shirt is annoying”.
This group is building some interesting tools to expose and enable sharing and collaboration on academic papers.
We develop software to help illuminate academic papers. Just as Pierre de Fermat scribbled his famous last theorem in the margins, professional scientists, academics and citizen scientists can annotate equations, figures, ideas and write in the margins.
They have a tool called Margins, which allows researchers to upload, annotate, and share academic papers, and another neat one called Librarian, a Chrome extension for comments and annotations for arXiv papers.
In his new book Loonshots, author Safi Bahcall uses the concept of phase transitions to analyze how companies work. When a substance changes phase, like water going from solid to liquid, the same exact substance is forced to take on a new structural form when the surrounding environment changes.
As Bahcall points out in the book, companies exhibit a similar behavior in their inventions and strategy. He contrasts two different types of innovations that companies tend to be built to produce: “P” type innovations, where a company is great at producing new products, and “S” type innovations, where they can stay ahead of the pack by developing new business strategies for the same products. There are many examples presented in the book of both types of innovation done right — Juan Trippe and Pan Am, Steve Jobs, Edwin Land and Polaroid, Bob Crandall and American Airlines — each of them was (or has been) a pillar innovator with a specialty in P or S types.
Being great at a single type works well for a time, until the environment changes too much around you.
In the history of business, there are few examples of organizations able to straddle both phases simultaneously. Early on in the book there’s the example of Vannevar Bush, the engineer who led the historic Office of Scientific Research and Development during World War II. The OSRD was legendary for the systems and inventions developed during the war, many of which helped tip the war in favor of the Allies. From the OSRD wiki page:
The research was widely varied, and included projects devoted to new and more accurate bombs, reliable detonators, work on the proximity fuze, guided missiles, radar and early-warning systems, lighter and more accurate hand weapons, more effective medical treatments, more versatile vehicles, and, most secret of all, the S-1 Section, which later became the Manhattan Project and developed the first atomic weapons.
What makes companies so focused on short-term innovation, in either product or strategy? Humans (and organizations) are certainly known to be bad at taking the long view in planning and decision making.
It’s a fascinating idea — that a successful, hard-to-kill organization becomes one by having a particular structure, one that can be water and ice at the same time. What Bush figured out 70 years ago was that the organization is what’s important. He focused on making organizations that could make great things, a focus on the process rather than its products:
This bit from a 1990 piece after his death sums it up:
He was an academic entrepreneur who co-founded Raytheon and was a vice president at the Massachusetts Institute of Technology who consolidated the school’s reputation as having the nation’s finest engineering program. It’s not just that Bush was a brilliant engineer; it’s that Bush knew how to map, build and manage the relationships and organizations necessary to get things done. He knew how to craft the human networks that could build the technological networks.
After reading The Breakthrough, I’ve been doing more reading on immunotherapy, how it works, and what the latest science looks like. Another book in my to-read list is An Elegant Defense, a deeper study of how the immune system works. The human defensive system of white blood cells is a truly incredible evolutionary machine — a beautiful and phenomenally complex version of antifragility.
This stuff is crazy. Using modern compute, data science, and gene sequencing, you can now design proteins from your laptop:
Amazingly, we’re pretty close to being able to create any protein we want from the comfort of our jupyter notebooks, thanks to developments in genomics, synthetic biology, and most recently, cloud labs. In this article I’ll develop Python code that will take me from an idea for a protein all the way to expression of the protein in a bacterial cell, all without touching a pipette or talking to a human. The total cost will only be a few hundred dollars! Using Vijay Pande from A16Z’s terminology, this is Bio 2.0.
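The article’s pipeline goes much further (design, synthesis orders, cloud labs), but for a small taste of “biology from a notebook,” here’s the most basic step in Python using Biopython (my choice of library, which the author may or may not use): translating a toy DNA sequence into the protein it encodes.

```python
from Bio.Seq import Seq  # assumes Biopython is installed (pip install biopython)

coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")  # a short toy coding sequence
protein = coding_dna.translate(to_stop=True)                 # ribosome-in-software

print(protein)  # MAIVMGR — the peptide encoded up to the first stop codon
```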
This is a fun one. I’ve been at Spatial Networks almost 10 years now. When I joined we were maybe 10 or 12 people, now we’re about 60 and still going up. It’s exciting to see the hard work paying off and validated — but like I say to our team all the time: it feels like we’re just getting started.
Since The Origin of Species, Darwin’s theory of natural selection has been the foundation of our thinking about the evolution of life. Along the way there have been challengers to the broadness of that theory, and David Quammen’s The Tangled Tree brings together three core “modern” concepts that are beginning to take hold, providing a deeper understanding of how lifeforms evolve.
The book mostly follows the research of the late Carl Woese, a microbiologist who spent his career studying microorganisms, looking for connections between creatures at the micro and macro scales. Beginning with Darwin’s tree of life, he sought to follow our individual branches back to the roots, looking for the cause of early splits and fractures in the genetic timeline that led us to where we are now.
The Tangled Tree traces the path of three separate yet interrelated discoveries over the past several decades:
The discovery of the Archaea — through the work of Woese and his associates, we now know that what was formerly a two-kingdom world of “prokaryotes” and “eukaryotes” was more complex than that. Hidden within the prokaryote kingdom was actually a genetically distinct kingdom dubbed “archaea.” These are fascinating creatures more like alien life than visually-similar bacteria, often found at the most extreme habitats like volcanic vents and permafrost layers fathoms deep.
Symbiogenesis — It was once thought that the organelles within cells developed on their own through natural selection and genetic mutation. This theory posits that certain components within cells were once their own independent (yet symbiotic) organisms, eventually subsumed by the host to become a single genetic lineage.
Horizontal gene transfer — This process is the most radical of all, and is the most germane to modern science, particularly when it comes to combating bacteria that can mutate and become invulnerable to current antibiotics. The process involves genes moving between branches of the tree, versus in the strictly linear ancestor → descendant fashion we’re all familiar with from biology class. Humans likely have had material inserted into our genomes in the relatively recent past from life far different from ourselves.
Quammen weaves together all of these ideas through the stories of their discoverers. There are probably a hundred different scientists mentioned in the book, many of whom collaborated along the way, sharing research findings and data to build a case that evolution doesn’t work exactly how we thought it did.
The diversity of life is difficult to comprehend, and the book brought out many statistics and factoids that stayed with me long after reading. How do four nucleotide bases, arranged into genes and proteins, manifest as “life”? The sheer quantity of life growing and evolving beyond our level of perception is mind-boggling. The total mass of bacteria on earth exceeds that of all plants and animals combined. Within a typical human body, bacterial cells outnumber all other “human” cells by a 3-to-1 ratio. A bacterium known as Prochlorococcus marinus is the most abundant lifeform, with 3 octillion individuals presumed to exist.
I’ve never been deeply interested in biology compared to other sciences, but The Tangled Tree was a thought-provoking, fascinating look at how much there is yet to be understood right at our fingertips. While we’re trying to understand the origins of the universe and what star systems look like millions of light years away, there’s also a mysterious, terrifyingly complex world within our own bodies.
The physicist Richard Feynman’s famous 1974 commencement address on theories that pretend to be scientific:
“But this long history of learning how to not fool ourselves—of having utter scientific integrity—is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.
“The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”
A wide-ranging conversation on linguistics, human scientific advancement, and enlightenment thinking with Steven Pinker and John McWhorter.
Linguistics is endlessly fascinating.
I might be an outlier, but I absolutely love YouTube as a medium for this kind of content. This sort of long form video is an example of a fantastic new thing that couldn’t exist or thrive prior to YouTube.
As I started The Gene, I assumed it’d be framed as a history of genetics. There’s a significant amount of history on the discoveries made over the last few centuries as scientists gained an understanding of how hereditary traits are encoded and transmitted. But my favorite parts of the narrative are when Mukherjee looks at the gene as a fundamental building block, making comparisons to bits and atoms.
It reminded me of another book I’d like to revisit: James Gleick’s The Information. That book is to bits what The Gene was to genetics. Claude Shannon’s information theory shares so many parallels with genetics: both required technology to see nanoscopic things, rested on huge amounts of prior knowledge in physics, chemistry, and mathematics, and involved breaking down building blocks into ever more tiny requisite parts. Nearly all of our understanding of each of these sciences was gained since about 1950. We’re only just figuring out the fundamentals of both, and the potential for engineering them to our whims — through advancements in computing and AI on one end and gene splicing and gene therapy on the other. Genes are biological information. So I wonder what the next few decades will look like as the two disciplines start to converge.
Graham Hawkes has a fascinating approach to undersea research and exploration. Rather than focusing on deep ocean submersibles (which he’s built plenty of), his company is currently building underwater airplanes: craft that fly through the water with hydrodynamic wings and thrusters, capable of flying alongside dolphins and manta rays. Hawkes is obsessed with the ocean, and is fond of telling space explorers that their “rockets are pointing in the wrong direction”. It’s amazing how little is known about the ocean floor, and how relatively little funding we put into ocean exploration.
The R&D work Hawkes is doing is amazing, focusing more effort on underwater flight than deep ocean dives. While they have built craft for the purpose of superdeep dives, that doesn’t seem to be Hawkes’ passion. They’ve designed and built several craft to study hydrodynamics, provide research platforms for scientists, and modes of transportation for recreation or studying the seafloor. The Merlin and the Challenger are two vessels funded by Richard Branson, under the moniker Virgin Oceanic.
I found myself obsessed with Hawkes and his work, and spent Sunday morning trawling the internet, reading interviews and backstories and watching videos of his projects. The notion of underwater flight is fascinating to me, and makes me wonder why the technology hasn’t caught on as a popular attraction in the tropics, letting divers fly through reefs and wrecks. I imagine flying over the Great Barrier Reef for hundreds of miles, sightseeing, stopping along the way for closer looks. Or diving to depth between the Cayman Islands, soaring over the bottom with sea turtles and schools of fish.
His company is running a Kickstarter campaign to fund a field test expedition to Lake Tahoe with his two-seater, Super Falcon, to perform “hydrobatic” maneuvers in the deep parts of the lake. If you’re as interested as I am in this stuff, here are some other links to check out:
(In my browsing yesterday, I also read about the Aquarius Reef Base, an undersea research station operated by NOAA since the 1980s. It sits on the bed of Conch Reef off the coast of Key Largo. The project is in danger of being shuttered soon, so they’ve launched a funding campaign to try and save the project.)