A batch of the best highlights from what Josh has read.
The way double descent is normally presented, increasing the number of model parameters can make performance worse before it gets better. But there is another even more shocking phenomenon called *data double descent*, where increasing the number of *training samples* can cause performance to get worse before it gets better. These two phenomena are essentially mirror images of each other. That’s because the explosion in test error depends on the ratio of parameters to training samples.
Double Descent in Human Learning
chris-said.io
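The highlight above hinges on a quantitative claim: test error blows up when the number of training samples approaches the number of model parameters. A minimal sketch (mine, not from the article) of that effect, using minimum-norm least squares on synthetic data with a fixed parameter count `p` and a varying sample count `n`, looks like this:

```python
# Sketch of data double descent: with a fixed number of parameters p,
# test error peaks when the number of training samples n crosses p,
# then improves again as n grows well past p.
import numpy as np

rng = np.random.default_rng(0)
p = 50                                   # number of model parameters (fixed)
w_true = rng.normal(size=p)              # ground-truth weights for synthetic data
noise = 0.5

def test_mse(n_train, n_test=2000):
    X = rng.normal(size=(n_train, p))
    y = X @ w_true + noise * rng.normal(size=n_train)
    # lstsq returns the minimum-norm solution when n_train < p (interpolation regime)
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    Xt = rng.normal(size=(n_test, p))
    yt = Xt @ w_true + noise * rng.normal(size=n_test)
    return np.mean((Xt @ w_hat - yt) ** 2)

for n in [10, 25, 40, 48, 50, 52, 60, 100, 200]:
    print(f"n = {n:3d}  test MSE ~ {test_mse(n):.2f}")
# The printed error typically spikes near n = p (ratio of parameters to samples near 1)
# and falls again on either side -- the mirror-image "data double descent" peak.
```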
TikTok’s entire strategy is to turn viral content into public domain “trends” that anyone can participate in, thus robbing any singular creator of ownership.
Erysichthon’s dominionist hubris may be how we got to this climate juncture, but the path out of our mess is not backwards. It’s some yet-undiscovered third way that rides the dipole between animism and reason. Learning to perceive nature without objectifying it, while at the same time learning to truly live in it.
Rain Lilies and Robots
Field Notes from Christopher Brown