Join 📚 Quinn's Highlights

A batch of the best highlights from what Quinn's read.

Layers of Information Continually Accumulate *Within* Objects Over Time

Summary: Information can be shared between objects as evidence of a common history, indicating that objects are deeply rooted in time. As the biosphere has evolved, it has increased the layers of information processing and abstraction, generating objects that are deeper in time. Consequently, some features of these objects appear less physical and more abstract. Each individual accumulates information over time, making parts of them brand new and parts billions of years old.

Transcript:

Speaker 1: And so information, as we talked about, has an interesting property that seems very abstract. It seems that information can move between objects, like we're speaking the same language, but when you can share information between objects, like you and I speaking, what that is, is evidence of a common history. These things that we call information and abstractions, I think, are just evidence that these objects are actually deep in time. So things look more abstract the deeper in time they are. And it's one of the reasons I think that as the biosphere has evolved over time, it's increased the layers of information processing and abstraction that it's built. But really what it is, is you're generating these objects that are deeper and deeper in time. And so some of their features look less and less physical, because they're not physical now; they're physical in the structure that's extended in time.

Speaker 2: You have a lovely line in one of your papers where you say that each of us is our own age, but in many ways we're thousands and thousands of years old, because we have accumulated all that information in sets of genes to be who we are today.

Speaker 1: That's right. So parts of you are brand new. And parts of us are all brand new from this conversation, because we've exchanged information and generated new structures. Parts of us are billions of years old.

Big Ideas — Time

Simplifying Complexity

Perspectives on Organizational Strategy & Coordination: Optimizing for Few Coherent Goals vs. Many Incoherent Goals

Transcript:

Speaker 1: I think this is one of the things where the corporate world is actually much better than the academic world or the educational world, because their goal is profit. So it's very clear. It's much harder to say what the goal of an educational institution is. It feels like it should be obvious, but within the general goal of "we want to produce successful, well-rounded people," there's a lot of disagreement about what the goals are. And so shaping the institutional incentives around those goals becomes extremely difficult, because not only do we have to worry about perverse incentives, but we have to worry about vigorous disagreement about the kinds of things that are valued in the first place. And I think exactly what you're talking about, Thi, is something that if you went to a bunch of university administrators, let's say, or medical school administrators or doctors, and you said, what is the point of what you're doing? Is it to produce wise, well-rounded people? Is it to minimize costs to insurance companies? Is it to increase donor contributions? What is it? There are all these competing goals, and so there's constant infighting among different people who have different versions of what the best version of their institution is, and it's so difficult to articulate what that is.

Speaker 2: I wonder if we're on different sides of this, because are you worried about the hardness of it? It sounds like you think it's a problem that it's hard to come to agreement and articulate a goal, where I actually prefer the university that disagrees, that has many inchoate and plural goals, and worry that when it articulates an outcome clearly and starts orienting around that outcome, that's when it starts shedding a lot of what was good about the kind of pluralistic mode. So let me give you an example from my life, right? A university I've been employed at has started moving toward orienting everything around student success, where student success is defined as graduation rate, graduation speed, salary after graduation. When you define that outcome, it becomes really easy to target, and as you say, the people that target it well tend to rise. People that are willing to go all in on targeting that stuff, instead of caring about all the other weird shit that education might be for, tend to have better recordable outcomes and tend to rise in the university structure. So I actually am happier for something as complicated as education, in which different groups have different conceptions of values about what they're doing, and we don't actually try to settle it, and we don't hold them all to a high articulability constraint, because I think the business school and the CS department have more easily articulable outcomes than the creative writing department or the art history department. A lot of the stuff that I'm writing right now is about this defense of the inarticulable.

Speaker 1: It's a hard question to answer, because I think there are multiple levels of organization going on here. There's a top administrator level, because these institutions tend to be pretty hierarchical. I think at the top of the hierarchy there has to be some sort of reasonably well-defined goal, even if it doesn't specify what every individual component of the organization or institution would do. And I think that trickles down to those levels and creates incentives. Regardless of whether or not it's a good thing, I think there has to be some sort of coherence at the very top level, even if it doesn't dictate what each individual component is doing.

Paul Smaldino & C. Thi Nguyen on Problems With Value Metrics & Governance at Scale

COMPLEXITY: Physics of Life

Explore vs. Exploit: Finding Solutions Quickly Can Get You Stuck in a Local Optimum

Transcript:

Speaker 1: So when I started doing the work in AI, one of the really very general ideas that comes up again and again in computer science is this idea of the explore-exploit trade-off. And the idea is that you can't get a system that simultaneously optimizes for actually being able to do things effectively (that's the exploit part) and for being able to search through all the possibilities (that's the explore part). So let me try to describe it this way. I guess we're a podcast, so you're going to have to imagine this; usually I wave my arms around a lot here. Imagine that you have some problem you want to solve or some hypothesis that you want to discover. And you can think about it as if there's a big box full of all the possible hypotheses and all the possible solutions to your problem, or possible policies that you could have, in a reinforcement learning context, for instance. And now you're in a particular place in that box. That's what you know now. That's the hypotheses you have now. That's the policies you have now. Now what you want to do is get somewhere else. You want to be able to find a new idea, a new solution. And the question is, how do you do that? And the idea is that there are actually two different kinds of strategies you could use. One of them is you could just search for solutions that are very similar to the ones you already have. You could just make small changes in what you already think to accommodate new evidence or a new problem. And that has the advantage that you're going to be able to find a pretty good solution pretty quickly. But it has a disadvantage, and the disadvantage is that there might be a much better solution that's much further away in that high-dimensional space. And any interesting space is going to be too large to search completely systematically. You're always going to have to choose which kinds of possibilities you want to consider. So it could be that there's a really good solution, but it's much more different from where you currently are. And the trouble is that if you just do something like what's called hill climbing, where you just look locally, you're likely to get stuck in what's called a local optimum.
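A minimal sketch of the trap Gopnik describes, not from the episode: the toy landscape, function names, and restart strategy are illustrative assumptions. Pure hill climbing from a poor starting point settles on a nearby small hill, while adding a little exploration, here via random restarts, finds the taller one.

```python
import math
import random

def landscape(x: float) -> float:
    """Toy objective: a small hill near x=2 and a taller hill near x=8."""
    return 4 * math.exp(-(x - 2) ** 2) + 10 * math.exp(-(x - 8) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 200) -> float:
    """Pure exploitation: keep moving to the best nearby point."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=landscape)
        if best == x:  # no better neighbour: stop, possibly at a local optimum
            break
        x = best
    return x

def climb_with_restarts(n_restarts: int = 10) -> float:
    """Mixing in exploration: hill-climb from several random starting points."""
    starts = (random.uniform(0.0, 10.0) for _ in range(n_restarts))
    return max((hill_climb(s) for s in starts), key=landscape)

random.seed(0)
stuck = hill_climb(1.0)  # starts in the basin of the small hill
found = climb_with_restarts()
print(f"hill climbing alone: x={stuck:.2f}, value={landscape(stuck):.2f}")   # ~ (2, 4)
print(f"with random restarts: x={found:.2f}, value={landscape(found):.2f}")  # ~ (8, 10)
```

Random restarts are only one way to buy exploration; epsilon-greedy action selection or simulated annealing would serve the same role. The point is just that some budget has to go to trying possibilities that look worse locally.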

Alison Gopnik on Child Development, Elderhood, Caregiving, and A.I.

COMPLEXITY: Physics of Life

...catch up on these, and many more highlights