Join 📚 Quinn's Highlights

A batch of the best highlights from what Quinn's read.

The Factors That Hinder Knowledge Transfer Are Often Structural

Summary: Barriers to knowledge transfer or knowledge sharing are often structural, rather than merely the result of skill-set limitations. Lessons can be transferred through storytelling, through reviews and debriefs, through analysis and research, and by allocating workload strategically. Placing the workload at the top, instead of burdening lower-level employees with excessive reading, is crucial. It is essential to identify the structural barriers causing the hindrance and address them.

Transcript:

Speaker 1: And if something is learnt at one end of the state, I need to transfer that to the other. Now you can do that with story, but you can also do that with lessons, with direction, reviews and debriefs. You can do that through analysis and research. And you can do that by putting the workload where it should be, you know, up at the top, rather than on the poor people down below, expected to read 160 documents a year about everything. Because are they really going to remember that? Not in my experience. So what we have to do is take those really important lessons and then think: well, what are the structures that are causing that to happen?

Speaker 2: And now we've already kind of hinted around this. What are the barriers to knowledge transfer or knowledge sharing? And is it just the skill set of listening and conversation? Is that the biggest barrier?

Speaker 1: I think a lot of the barriers that we deal with are structural.

Organizational Structures That Enable Knowledge Flow With Stuart French

Because You Need to Know Podcast ™

Explore vs. Exploit: Finding Solutions Quickly Can Get You Stuck in a Local Optimum

Transcript:

Speaker 1: So when I started doing the work in AI, one of the really, very, very general ideas that comes across again and again in computer science is this idea of the explore/exploit trade-off. And the idea is that you can't get a system that simultaneously optimizes for actually being able to do things effectively (that's the exploit part) and for being able to search through all the possibilities (that's the explore part).

So let me try to describe it this way. I guess we're a podcast, so you're going to have to imagine this; usually I wave my arms around a lot here. Imagine that you have some problem you want to solve or some hypothesis that you want to discover. And you can think about it as if there's a big box full of all the possible hypotheses and all the possible solutions to your problem, or possible policies that you could have, for instance, in a reinforcement learning context. And now you're in a particular place in that box. That's what you know now. That's the hypotheses you have now. That's the policies you have now. Now what you want to do is get somewhere else. You want to be able to find a new idea, a new solution. And the question is: how do you do that?

And the idea is that there are actually two different kinds of strategies you could use. One of them is you could just search for solutions that are very similar to the ones you already have. And you could just make small changes in what you already think to accommodate new evidence or a new problem. And that has the advantage that you're going to be able to find a pretty good solution pretty quickly. But it has a disadvantage. And the disadvantage is that there might be a much better solution that's much further away in that high-dimensional space. And any interesting space is going to be too large to just search completely systematically.

You're always going to have to choose which kinds of possibilities you want to consider. So it could be that there's a really good solution, but it's much more different from where you currently are. And the trouble is that if you just do something like what's called hill climbing, where you just look locally, you're likely to get stuck in what's called a local optimum.
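The hill-climbing picture above can be sketched in a few lines of Python. This is an illustrative toy, not anything from the podcast: the landscape values, function names, and restart strategy are all invented for the example. Pure exploitation (greedy local search) stops at the first peak it reaches, while a bit of exploration (restarting from scattered points) can find the higher peak elsewhere in the space.

```python
# A toy 1-D "fitness landscape": a local peak (value 3) and a global peak (value 9).
landscape = [1, 2, 3, 2, 1, 5, 9, 5, 1]

def hill_climb(start):
    """Exploit: greedily step to the better neighbour until none improves."""
    pos = start
    while True:
        neighbours = [i for i in (pos - 1, pos + 1) if 0 <= i < len(landscape)]
        best = max(neighbours, key=lambda i: landscape[i])
        if landscape[best] <= landscape[pos]:
            return pos  # stuck: every neighbour is the same or worse
        pos = best

def with_restarts(step=2):
    """Explore: hill-climb from several scattered starts, keep the best result."""
    starts = range(0, len(landscape), step)
    return max((hill_climb(s) for s in starts), key=lambda i: landscape[i])

print(landscape[hill_climb(0)])    # 3 -- greedy search from the left gets stuck
print(landscape[with_restarts()])  # 9 -- a wider search finds the global peak
```

Starting at the left end, greedy search climbs to the small peak and stops, because every neighbouring move looks worse; only by also sampling starting points far from where you currently are does the search reach the global optimum.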

Alison Gopnik on Child Development, Elderhood, Caregiving, and A.I.

COMPLEXITY: Physics of Life

One misconception about highly successful cultures is that they are happy, lighthearted places. This is mostly not the case. They are energized and engaged, but at their core their members are oriented less around achieving happiness than around solving hard problems together. This task involves many moments of high-candor feedback, uncomfortable truth-telling, when they confront the gap between where the group is, and where it ought to be. Larry Page created one of these moments when he posted his “These ads suck” note in the Google kitchen. Popovich delivers such feedback to his players every day, usually at high volume.

The Culture Code

Daniel Coyle

...catch up on these and many more highlights