A batch of the best highlights from what Quinn's read.
The danger, and you see it often in investing, is when people become too McNamara-like – so obsessed with data and so confident in their models that they leave no room for error or surprise. No room for things to be crazy, dumb, unexplainable, and to remain that way for a long time. Always asking, “Why is this happening?” and expecting there to be a rational answer. Or worse, always mistaking what happened for what you think should have happened.
The ones who thrive long term are those who understand the real world is a never-ending chain of absurdity, confusions, messy relationships, and imperfect people.
Does Not Compute
collabfund.com
Source of the Meaning Crisis: Contradictions Between Societal Progress and Global Crises
Summary:
The current societal malaise and victimhood culture are attributed to a complex interplay of factors, including a disconnection from the positives of societal progress and the simultaneous awareness of global crises like climate change.
The author explains that while the world is objectively getting better, incessant exposure to negative news triggers hyper-vigilance and a threat response, creating a cognitive dissonance between coming alive and merely staying alive through triage. Toggling between these contradictory modes fosters confusion and psychological distress, a state he calls "crazy making."
Transcript:
Speaker 2
You wrote a book recently called Recapture the Rapture, which tries to address the seeming psychological ills of our society. Can you try to summarize your thesis on why victimhood culture has become so dominant, and on the disconnection and general malaise people are feeling? Is it a function of, you know, fear of the future? We've been hearing doom and gloom about climate change and all these other growing risks. Or is it something more fundamental going on inside us psychologically that is giving rise to this? I mean, I think without a doubt: what on earth is going wrong these days, and why are so many people sad, suffering, disconnected?
Speaker 1
I think that's just a massive, multi-variable situation. But one of the things that I mentioned in that book was just: things are getting exponentially better, and things are getting exponentially worse, at the very same time. And trying to map two intersecting, contradicting, overlapping exponential curves? Confusing. It beggars the imagination. I mean, with the whole three-body problem in physics, which I know you must be deeply aware of: it's very hard to predict even the sun and moon and stars. And we are eight billion bodies, all with volition, you know, and pesky human nature. So you're trying to map what is going on as things are simultaneously both. Steven Pinker and Hans Rosling and all that lot are like, "If it bleeds, it leads; you've been massively misled. The world is safer, better, cheaper, more prosperous than it's ever been. Ta-da." And you're like, oh, thank God. And then you click over to polar bears and, you know, thawing glaciers and all of these things, and you're like, oh no, which is it? Right? So we have that initial experience, which naturally triggers hyper-vigilance and threat response: oh, shit. Right? Are we coming alive? All this wonderful stuff: my own personal life, my personal growth, my relationships, my career, where am I coming alive? That's the inquiry I'm in. Or are we staying alive, and I need to be practicing triage, right? Toggling back and forth between those two is crazy making.
#11 - Jamie Wheal — Tackling the Meaning Crisis
Win-Win with Liv Boeree
Explore vs. Exploit: Finding Solutions Quickly Can Get You Stuck in a Local Optimum
Transcript:
Speaker 1
So when I started doing the work in AI, one of the really very general ideas that comes up again and again in computer science is this idea of the explore/exploit trade-off. And the idea is that you can't get a system that simultaneously optimizes for actually being able to do things effectively (that's the exploit part) and for being able to search through all the possibilities (that's the explore part). So let me try to describe it this way. I guess we're a podcast, so you're going to have to imagine this; usually I wave my arms around a lot here. Imagine that you have some problem you want to solve or some hypothesis that you want to discover. You can think about it as if there's a big box full of all the possible hypotheses and all the possible solutions to your problem, or possible policies that you could have, for instance, in a reinforcement learning context. And now you're in a particular place in that box. That's what you know now. That's the hypotheses you have now. That's the policies you have now. Now what you want to do is get somewhere else. You want to be able to find a new idea, a new solution. And the question is, how do you do that? And the idea is that there are actually two different kinds of strategies you could use. One of them is you could just search for solutions that are very similar to the ones you already have, making small changes in what you already think to accommodate new evidence or a new problem. And that has the advantage that you're going to be able to find a pretty good solution pretty quickly. But it has a disadvantage: there might be a much better solution that's much further away in that high-dimensional space. Any interesting space is going to be too large to search completely systematically; you're always going to have to choose which kinds of possibilities you want to consider. So it could be that there's a really good solution, but it's much more different from where you currently are. And the trouble is that if you just do something like what's called hill climbing, where you just look locally, you're likely to get stuck in what's called a local optimum.
Alison Gopnik on Child Development, Elderhood, Caregiving, and A.I.
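
Gopnik's hill-climbing picture is easy to make concrete. Below is a minimal Python sketch (the toy objective and all names are hypothetical, not from the episode): a pure hill climber accepts only small nearby improvements and tends to stall on the closest bump, while adding exploration, here via random restarts, gives up some short-term efficiency for a better chance of finding a distant, higher peak.

```python
import math
import random

def f(x):
    """A bumpy toy objective with several local optima (illustrative only)."""
    return math.sin(5 * x) + 0.5 * math.sin(13 * x) - (x - 2) ** 2 / 10

def hill_climb(x, step=0.05, iters=5000):
    """Pure exploitation: accept only small nearby moves that improve f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def hill_climb_with_restarts(n_restarts=20, lo=-3.0, hi=5.0):
    """Add exploration: launch climbers from random starting points, keep the best."""
    return max((hill_climb(random.uniform(lo, hi)) for _ in range(n_restarts)), key=f)

random.seed(0)
local = hill_climb(-2.0)               # usually stalls on a nearby local bump
explored = hill_climb_with_restarts()  # far more likely to reach the global peak
print(f"exploit only:     x = {local:.3f}, f(x) = {f(local):.3f}")
print(f"with exploration: x = {explored:.3f}, f(x) = {f(explored):.3f}")
```

Random restarts are just one crude way to buy exploration; simulated annealing or epsilon-greedy action selection make the same trade in other settings, and all of them pay an up-front cost in efficiency to reduce the risk Gopnik describes of getting stuck in a local optimum.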