📚 Josh Beckman's Highlights

A batch of the best highlights from what Josh has read.

The way double descent is normally presented, increasing the number of model parameters can make performance worse before it gets better. But there is another even more shocking phenomenon called *data double descent*, where increasing the number of *training samples* can cause performance to get worse before it gets better. These two phenomena are essentially mirror images of each other. That’s because the explosion in test error depends on the ratio of parameters to training samples.

Double Descent in Human Learning

chris-said.io
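
The ratio claim is easy to see in the simplest setting: minimum-norm linear regression with a fixed number of parameters p, where test error spikes as the number of training samples n approaches p and then recovers as n grows past it. Here is a minimal sketch of that effect, using only numpy; the model, noise level, and sample sizes are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 50         # fixed number of parameters (features)
n_test = 1000  # held-out test set size
sigma = 0.5    # label noise

# Ground-truth linear model (assumed for the demo)
w_true = rng.normal(size=p)

def test_mse(n_train):
    """Fit minimum-norm least squares on n_train samples; return test MSE."""
    X = rng.normal(size=(n_train, p))
    y = X @ w_true + sigma * rng.normal(size=n_train)
    # pinv gives the minimum-norm solution, which interpolates when n < p
    w_hat = np.linalg.pinv(X) @ y
    X_test = rng.normal(size=(n_test, p))
    y_test = X_test @ w_true + sigma * rng.normal(size=n_test)
    return np.mean((X_test @ w_hat - y_test) ** 2)

for n in [10, 25, 40, 50, 60, 100, 250]:
    # average over several trials to smooth the curve
    mse = np.mean([test_mse(n) for _ in range(20)])
    print(f"n={n:4d} (n/p={n/p:.1f})  test MSE ~ {mse:.2f}")
```

Running this, test error is moderate for small n, explodes near n = p = 50 (where the design matrix is nearly singular), and falls again once n exceeds p: more data first hurting, then helping.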

If everyone is competing for the same things, and money is once again our lingua franca, it seems urgent and inevitable that a new subcultural logic must emerge. But that will require a new currency and new infrastructure.

Dub Housing

Kneeling Bus

Nobody is going to turn down getting their eyes scanned by a chrome orb, that is just science. Oh a guy is building a superintelligent AI and also compiling a permanent electronic identification database of all humans, that’s great, that’s not an alarming science-fiction premise at all. I feel like if you made that movie you’d have to have some plausible back story about how the AI convinced people to hand over their iris scans to the AI, but in the real world you don’t.

Money Stuff: The Moon Emoji Is Securities Fraud

Matt Levine

Catch up on these, and many more highlights.