📚 Josh Beckman's Highlights

A batch of the best highlights from what Josh has read.

When adding jitter to scheduled work, we do not select the jitter on each host randomly. Instead, we use a consistent method that produces the same number every time on the same host. This way, if there is a service being overloaded, or a race condition, it happens the same way in a pattern. We humans are good at identifying patterns, and we're more likely to determine the root cause. Using a random method ensures that if a resource is being overwhelmed, it only happens - well, at random. This makes troubleshooting much more difficult.

Timeouts, Retries, and Backoff With Jitter

Amazon Web Services, Inc.
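The highlight above can be sketched in code. This is a minimal illustration of deterministic per-host jitter, not the AWS implementation; the function name, the hostname-hashing approach, and the jitter window are assumptions chosen to show the idea: the same host always computes the same offset, so any overload recurs in a recognizable pattern.

```python
import hashlib
import socket

def host_jitter(max_jitter_seconds: int = 60) -> int:
    """Derive a stable per-host jitter offset by hashing the hostname.

    Unlike random.randint(), this returns the same value every time on
    the same host, so a resource being overwhelmed fails in a repeating,
    diagnosable pattern rather than at random. (Illustrative sketch.)
    """
    hostname = socket.gethostname()
    digest = hashlib.sha256(hostname.encode("utf-8")).digest()
    # Interpret the first 8 bytes of the digest as an integer and map
    # it into the jitter window [0, max_jitter_seconds).
    return int.from_bytes(digest[:8], "big") % max_jitter_seconds
```

A cron job could then sleep for `host_jitter()` seconds before starting, spreading hosts across the window while keeping each host's offset constant across runs.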

Data caching is probably the most effective and most essential layer you should be working on, before moving up to more advanced caching.

Production Ready GraphQL

Marc-Andre Giroux

Maybe they’re just creating massive amounts of “digital representations of human approval,” because this is what they were historically trained to seek (kind of like how humans sometimes do whatever it takes to get drugs that will get their brains into certain states).

How We Could Stumble Into AI Catastrophe

Cold Takes
