This newsletter goes out to more than 1,000 ambitious frontier people. Please share it with a friend who would like it. (And welcome to all the folks who joined this week!)
Hello my internet friend,
Podcast this week! #88 Andy Clark: How Brains Model The World.
Andy is a super sharp neurophilosopher who helped develop two models I love:
Predictive processing, a theory of how our brain makes sense of the world
Embodied cognition, a theory that we think with our whole body, not just our brain.
I wanted to chat with Andy to understand cognitive niches: what kinds of information can live in the brain. Most importantly, Andy showed me how predictive processing is necessary for us to create a self-reflective species. Some highlights from the convo below:
EMBODIED COGNITION
Rhys: What is embodied cognition?
Andy: Embodied cognition is the idea that we share the load between the brain and the body.
Robot walkers are awkward. It’s the AI brain versus the body. But for us biological walkers, the brain and the body co-evolved so that our brain is very neatly giving minimal commands to a body that actually does a lot of the work.
Rhys: Do you see any overlap between the Buddhist concept of “no self” and embodied cognition? If our brain is embodied and extended, is there a self?
Andy: I’m very attracted to the Buddhist idea of the soft self.
There's no concrete self or selfhood. It's very much negotiable. The notion of the self is just some kind of construct. It's a useful construct that helps me negotiate the world, but it's not something pre-given.
That comes out of the predictive processing too. Just about everything about ourselves turns out to be some kind of construct, including the self itself.
Rhys: I really like the term “the soft self.” How does the soft self relate to consciousness? If there is no self, why do we care about consciousness?
Andy: There's a move in the consciousness debate at the moment to shift the problem away from the hard problem of consciousness. That “hard problem” is: How is this ineffable experience that I'm having now, this experience of blueness and windiness and so on, how is that possible?
The debate is shifting to the meta hard problem of consciousness. This problem is: What kind of computational organization would give rise to a creature that would make reports like “How is it possible to have this kind of ineffable blue experience?”
PREDICTIVE PROCESSING
Rhys: What is predictive processing?
Andy: I like to give this example of sine wave speech. You can’t understand the speech until you have the model of what the speech is.
Learning to perceive the world is a process like trying to understand sine wave speech. You're hit with all this stuff. It's kind of noisy. What the brain has to learn to do is get a high-level grip on the sort of patterns that might matter, then use that to try to predict the shape of the signal. As that happens, you separate out the signal from the noise. The meaningful salient bits of structure get to emerge.
Rhys: Why is the predictive processing model important?
Andy: Predictive processing is important to create a self-reflective species.
It’s having the right picture of the nature of our relationship to reality. It's seeing the distance that separates us from whatever structure is really out there in the world. Predictive processing pushes back against the idea that there's simply a world out there, and the job of the brain is to get it right, and therefore we should probably all end up with the same picture of the world.
But if you think from a predictive processing perspective, we perceive the world because of the expectations that we have. Everyone has a very different history and we know about very different things. Then it becomes immediately clear that we're going to perceptually experience different worlds.
This is politically significant. There's work by Lisa Feldman Barrett talking about how this could be important in rather difficult situations, like police officers encountering suspects in a dark alleyway, and perhaps mistaking a handheld phone for a drawn weapon.
The predictive processing account of what could happen there is this: We're not just trying to predict the signals coming from the external world, we're trying to predict the ones coming from our body. If your body is in a highly anxious sort of frightened, scared state, the Bayesian brain takes that as evidence for what might be out there. You can see how you could very easily skew things and make terrible mistakes as a result of overweighting bodily evidence in a situation like that. I think understanding how these pictures of the world come into view is going to be hugely important for us to flourish as a self-reflective species.
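The overweighting Andy describes can be made concrete with a toy Bayesian update. This is a minimal sketch with made-up numbers (the prior and likelihoods are illustrative assumptions, not figures from Andy or from Barrett's work): the same ambiguous visual input yields a very different threat estimate once the brain treats an anxious body state as evidence for threat.

```python
def posterior_threat(p_threat, p_visual, p_body):
    """Bayes' rule with two independent evidence channels.

    p_threat: prior probability that a threat is present
    p_visual: (P(visual signal | threat), P(visual signal | no threat))
    p_body:   (P(body state | threat),   P(body state | no threat))
    """
    num = p_threat * p_visual[0] * p_body[0]
    den = num + (1 - p_threat) * p_visual[1] * p_body[1]
    return num / den

prior = 0.01             # threats are rare (illustrative number)
ambiguous = (0.5, 0.5)   # the visual signal says nothing either way

# A calm body is mild evidence against threat...
calm = posterior_threat(prior, ambiguous, (0.2, 0.8))     # ~0.0025
# ...but an anxious body, read as evidence *for* threat, skews the estimate.
anxious = posterior_threat(prior, ambiguous, (0.9, 0.1))  # ~0.083

print(calm, anxious)  # the anxious posterior is roughly 33x higher
```

Nothing about the external world changed between the two calls; only the bodily evidence did. That is the skew Andy is pointing at.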
Andy: Indeed, this can actually help us break our own models of the world. When I think about something I'm sucked into the biases that I have. That's the nature of how the inner machinery works.
But if I create my models as an external object, then other people can approach it and I can approach it as a detached object. That gives us a chance to break apart these models rather than just always being sucked into their attractor basins. You can keep them at arm’s length to poke and prod them.
That's why we've got the institutions of science. Peer review is an attempt to turn your theory into something publicly inspectable. Then you will run it past a whole bunch of other people, and they'll poke and prod at it.
This is a way to make sure we do justice to the actual complexity of the world that we're trying to understand, as opposed to simply being sucked into the most minimal model that seems to accommodate the data points you happen to take most seriously. Because that's what your brain wants to do: take a few data points seriously, find a minimal model, and sit with it. I think what we've done is create all these structures that push back against that, and therefore ultimately let us do more things and think more things.
MEMES AND WHAT INFORMATION WANTS
Rhys: What is the most optimized home for a meme?
Andy: Anything that does multi-level reduction of prediction error.
Think about any kind of aha moment. Things seem to fall together and then an idea is suddenly solidified for you. I think those are cases where you're not just getting rid of prediction error with respect to one of your models of the world, you’re getting rid of error in a way that cascades and gets rid of more error.
Andy: The other way to come at this question is this: the goodness of a predictive model is its accuracy minus its complexity.
So you're always being driven towards something simple if it will do the job, and I think a lot of the more dangerous memes that pass around are the ones piggybacking on that. They've got simplicity nailed, and they're pretending, at least, to explain an awful lot of stuff that perhaps they're not fundamentally getting to grips with. But the attraction of simplicity is written pretty deep into the Bayesian brain.
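Andy's "accuracy minus complexity" line is the intuition behind standard model-selection scores. A hedged sketch (the specific penalty form below is the BIC convention, my assumption for illustration, not Andy's formula): accuracy is the model's log-likelihood on the data, and complexity is a penalty that grows with the number of parameters.

```python
import math

def model_score(log_likelihood, n_params, n_data):
    """Goodness of a model = accuracy minus complexity.
    BIC-style: penalize each extra parameter by 0.5 * log(n_data)."""
    return log_likelihood - 0.5 * n_params * math.log(n_data)

n = 100
# Two models that fit 100 observations equally well:
simple = model_score(-120.0, n_params=2, n_data=n)
complex_ = model_score(-120.0, n_params=10, n_data=n)
print(simple > complex_)  # at equal accuracy, simplicity wins

# A "dangerous meme" in this framing: a 1-parameter story that is
# slightly less accurate still outscores the careful complex model.
meme = model_score(-122.0, n_params=1, n_data=n)
print(meme > complex_)
```

The score never asks whether the simple story is fundamentally getting to grips with the world, only whether it buys accuracy cheaply, which is exactly the opening memes exploit.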
Thanks Andy! Much more in the podcast itself.
LINKS
1) Cool to see 3,500 robots create delivery orders. (Or is it a single AI with 3,500 appendages?)
2) Zero COVID Risk Is the Wrong Standard.
This article helpfully describes a kind of COVID thinking often seen on the left, Zeroism:
Zeroism refuses to grapple with trade-offs in practical terms, preferring instead to frame the question in pure moral terms. A classic example is the wave of media shaming of people who visited beaches last year, even though the risk of spread is tiny. These episodes allow people to redirect their anxiety into displays of personal virtue and contempt for those who flout newly established social norms.
Zeroism is a natural response to national or global trauma. The shock of 9/11 produced a form of Zeroist thinking, in which almost any burden — from war to airport screenings — could be justified because no risk of terrorism was tolerable. This was the explicit premise of Dick Cheney’s “One Percent Doctrine,” which deemed even a 1-in-100 chance of a terrorist attack an unacceptable risk, justifying drastic action.
Zeroism often expresses itself in claims that a policy choice comes down to “life or death.” Sometimes those aren’t the only two choices.
When I don’t wear a mask outside and pass by someone, there is a possibility that I make them uncomfortable. I have “produced harm.” My harm (their discomfort) may arise because they are immunocompromised and weren’t able to get the vaccine, or simply because they are having anxiety in opening up, even if they are young and vaccinated.
In fact, for any given outdoor walk-by, there is a small chance that I have COVID, a (very) small chance I pass it to them, and a small chance that it leads to their death.
It is always possible to catastrophize a given interaction to show how it produces harm. Whether it's “omg, workplace language could lead to genocide” or “omg, cancel culture could lead us to public executions like in the Cultural Revolution.”
Be wary of catastrophizing and optimizing for zero-harm Zeroism. Harm may be real. But it also may be small. Or it may be “good harm” in service of new norms (like “getting back to normal without wearing masks outside”).
3) Babylon Bee: Attorney General Garland Replaces Federal Executions With Bus Tickets To Chicago
4) The Onion: Senate Passes Bill Wishing Younger Generations Best Of Luck Stopping Climate Change
5) Rhys: Nerdy Jewish Kid's "Iron Dome" Yarmulke Fails To Protect Him From School Bully's Fist
JOBS AND OPPORTUNITIES
Mariana Mazzucato’s team is hiring research consultants for the UN’s Economic Health For All initiative. Apply by July 14th.
Syndicate DAO is a cool DAO that is decreasing syndicate costs by 100x. They’re hiring for a bunch of roles here.
Grant to help develop Single Sign-On with Ethereum.
SciFounders have opened up their second batch for science researchers to commercialize their work. $400k investment for 10% of equity. Learn more and apply here.
Borg is an excellent company that has developed Hive.one. They are building a PageRank for people and are hiring for a bunch of great roles in Berlin and remote.
Support the collective health, openness, and civility of public conversation on Twitter through the 2021 Information Operations Fellowship. 6-month stint for academic researchers.
Knox Networks is a new stealth company developing and white-labeling Central Bank Digital Currencies (CBDCs). They’re hiring a bunch of developers and product folks.
EVENTS
EthCC is back! Paris, July 20-22, 2021.
Effective Altruist Events Calendar (recurring)
Interintellect Salons (recurring)
The Stoa (recurring)
MUSIC
I’ve been listening to a lot of “meme music” recently. It’s Gen Z music about Minecraft and other such things. Here’s an example song: a 90-second outro video for YouTube. Fun!
Hope you have a good week! Warmth, Rhys
If you like this newsletter, check out the online community of systems thinkers that I helped co-found, Roote.
❤️ Thanks to my generous patrons ❤️
Jonathan Washburn, Ben Wilcox, Audra Jacobi, Sam Jonas, Patrick Walker, Shira Frank, David Hanna, Benjamin Bratton, Michael Groeneman, Haseeb Qureshi, Jim Rutt, Zoe Harris, David Ernst, Brian Crain, Matt Lindmark, Colin Wielga, Malcolm Ocean, John Lindmark, Collin Brown, Ref Lindmark, James Waugh, Mark Moore, Matt Daley, Peter Rogers, Darrell Duane, Denise Beighley, Scott Levi, Harry Lindmark, Simon de la Rouviere, and Katie Powell.