📚 Josh Beckman's Highlights

A batch of the best highlights from what Josh has read.

Over time, the gifts accrue and you have created a reputation.

Linchpin

Seth Godin

The seemingly earnest popularity of these videos, especially among younger users, is part of an untethering of video content from coherent meaning that seems to be happening on short-form video apps. It feels like the longer people make videos for an algorithm, the more the videos start to degrade into just random visual stimuli, which is unnerving, but also kind of interesting.

The Doomscrolling Is the Point

Garbage Day

The way double descent is normally presented, increasing the number of model parameters can make performance worse before it gets better. But there is another even more shocking phenomenon called *data double descent*, where increasing the number of *training samples* can cause performance to get worse before it gets better. These two phenomena are essentially mirror images of each other. That’s because the explosion in test error depends on the ratio of parameters to training samples.

Double Descent in Human Learning

chris-said.io
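
As a rough illustration of the quote above (a sketch, not code from the linked post): minimum-norm linear regression with Gaussian features is a standard setting where data double descent appears. Holding the model size fixed at p = d parameters and sweeping the number of training samples n, test error spikes when n crosses the interpolation threshold n = p, then falls again. Everything here, the dimensions, noise level, and sample sizes, is an assumed toy setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a linear target with p = d parameters; we sweep the
# number of training samples n past the interpolation threshold n = d.
d = 100                                   # input dimension = parameter count
w_true = rng.normal(size=d) / np.sqrt(d)  # ground-truth weights, E||w||^2 = 1
sigma = 0.1                               # label noise

def avg_test_mse(n, n_test=2000, trials=20):
    errs = []
    for _ in range(trials):
        X = rng.normal(size=(n, d))
        y = X @ w_true + sigma * rng.normal(size=n)
        w_hat = np.linalg.pinv(X) @ y     # minimum-norm least-squares fit
        X_te = rng.normal(size=(n_test, d))
        errs.append(np.mean((X_te @ (w_hat - w_true)) ** 2))
    return float(np.mean(errs))

for n in [20, 50, 80, 95, 100, 105, 120, 200, 400]:
    print(f"n = {n:3d}  test MSE = {avg_test_mse(n):.3f}")
```

Test error falls as n grows, blows up near n = d (where the model interpolates the noise exactly and the minimum-norm solution becomes unstable), then falls again: the same parameters-to-samples ratio effect the excerpt describes.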

...catch up on these, and many more highlights