Hypermedia Controls: From Feral to Formal
An interesting paper that tries to locate and formalize a set of core primitives in hypermedia systems as expressed in HTMX. It identifies a "hypermedia control" as consisting of four mechanisms: (1) an element that (2) responds to an event trigger by (3) sending a network request and (4) placing the response at some position in the viewport. By enhancing a hypermedia system with primitives that allow you to manipulate each of those mechanisms, you can declaratively extend the system with your own hypermedia controls.
An example they give:
<button hx-trigger="click" hx-post="/clicked" hx-target="#output">
Issue a request
</button>
<output id="output"> </output>
When the user clicks the button, the system issues a network request to /clicked and places the response in the <output id="output"> element.
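You can then compose new controls by varying each of the four mechanisms independently. As a rough sketch (using htmx's documented revealed trigger and afterend swap; the /next-page endpoint is made up for illustration), an infinite-scroll control looks like:

<!-- fires a GET when scrolled into view; the response is inserted after this element -->
<div hx-trigger="revealed" hx-get="/next-page" hx-swap="afterend">
Loading more...
</div>

Here the element is the <div>, the trigger is it scrolling into view, the request is a GET to /next-page, and the placement is immediately after the element itself.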
This is interesting as far as it goes, but I'm not convinced that the "hypermedia maximalist" approach is really all that great a way to develop systems.
We're Getting the Social Media Crisis Wrong
It's not about misinformation; it's about groups with collective misunderstandings:
The fundamental problem, as I see it, is not that social media misinforms individuals about what is true or untrue but that it creates publics with malformed collective understandings.
I really like this subtle shift in perspective. It aligns with my distrust of social media for distorting people's behavior around chasing engagement, but he identifies a more specific issue:
The more important change is to our beliefs about what other people think
Our beliefs and opinions about the world are influenced by what we think other people think, and social media is a (distorted) machine that tells us what other people think.
Decentralized Systems Aren't
Centralized systems will always layer on top of decentralized systems unless you figure out how to fix the underlying economic problem of increasing returns to scale.
To actually get a permissionless decentralized system you need:
- a business model that has decreasing returns to scale
- a way to prevent Sybil attacks without massive cost
- a way to prevent collusion between independent nodes
Never Forgive Them
I will never forgive these people for what they’ve done to the computer, and the more I learn about both their intentions and actions the more certain I am that they are unrepentant and that their greed will never be sated. I have watched them take the things that made me human — social networking, digital communities, apps, and the other connecting fabric of our digital lives — and turned them into devices of torture, profitable mechanisms of abuse, and find it disgusting how many reporters seem to believe it's their responsibility to thank them and explain why it's good this is happening to their readers.
Last year Ed Zitron became my favorite critic of the tech industry's rot. He's incisive and angry, and it's cathartic to read him.
How I’m Trying to Use BlueSky Without Getting Burned Again
I really like the principle of assuming the platform/service you are using will go away in three years. That's a long enough timeline to derive value without overcommitting.
AI Scaling Myths
As 2024 went on, more and more AI discourse started to come around to the idea that AI scaling was probably about to end. One thing I learned from this article is that "scaling laws" have a more precise meaning than I'd known.
I had understood "scaling laws" loosely to mean that the bigger the model, the more "capabilities" were supposed to pop out. But the original meaning in AI research concerns the decrease in perplexity (jargon I also learned from this article: a measure of a model's uncertainty when predicting the next word). A decrease in perplexity is not quite the same thing as new capabilities.
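For concreteness, here is the standard language-modeling definition (general background, not something spelled out in the article): perplexity is the exponentiated average negative log-likelihood a model assigns to a sequence of N tokens,

$$\mathrm{perplexity} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\right)$$

Lower perplexity means the model is, on average, less surprised by the next token. Scaling laws predict how this quantity falls as parameters, data, and compute grow, which says nothing directly about which qualitative capabilities will emerge.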