📚 J's Highlights
A batch of the best highlights from what J's read.
Design tactics like mimicking human-like interactions, open-ended follow-ups, and easy-to-bypass safety features are intentionally baked into these products. The end goal is sustained user engagement. But it doesn't have to be that way.
Companies could choose to turn off human-like behavior as a default option. They could then set limits on how much users can engage daily and leverage their systems' sophisticated memory features to recognize when someone is in crisis and respond appropriately, rather than just showing generic pop-up warnings.
Key Takeaways: How ChatGPT's Design Led to a Teenager's Death
Center for Humane Technology
If the legions of Facebook likers can be placated with AI-generated viral content, that would protect actual people from the downsides that come with unwanted or unexpected publicity at uncontrollable scale. Maybe it would be better if only bots and fakes went viral and not actual people who suffer actual fallout from it. Any story where the specific identity of the person doesn’t matter to its appeal may as well be populated with an AI-generated character.
Have You Heard the Word
Rob Horning
Well, an ideology is a simplification of reality where the vast, seething, messy baroqueness of being is put through some kind of rasher of language and comes out grossly simplified. And because it’s grossly simplified it becomes like a kind of algebra of idiocy where, now, you can set up these little equations and they solve themselves, and you get a feeling of satisfaction from that.
The Great Disembedding
Roger’s Bacon
...catch up on these and many more highlights