I'm writing this update on the train back from the Eurosky Live event in Berlin. It was amazing to see so many people in person, and I had a great time with the sessions and conversations. Shoutout to the organisers (Sebastian, Sherif and many more) for a great event, and for bringing both policy people and the developers building a new social networking ecosystem together into a single coherent conference.

A recurring theme at Eurosky Live, especially from the policy side, was how Europe can take back control over social networking infrastructure from Big Tech platforms. This framing casts open social protocols as a tool in a conflict with Big Tech over control of the current social networking ecosystem. But open protocols create power structures of their own, structures that are not reducible to conflicts with Big Tech platforms.

Robin Berjon's opening keynote offered an interesting perspective on this. He referenced social scientist and Nobel laureate Elinor Ostrom, with a slide saying:

The properties that define the architecture of a protocol and those that define the rules in an institution are the same

That quote stuck with me, so instead of a news report about the event I'm giving in to the brain worms that tell me to write about how this thinking (that protocol architecture and institutional governance are the same thing) relates to the changes Bluesky is making to their reporting system.

Bluesky reporting system

Bluesky has updated their reporting system, with more reporting options:

Updates to Our Moderation Process - Bluesky
We're improving our in-app reporting and introducing new moderation systems and processes to better serve the Bluesky community.
https://bsky.social/about/blog/11-19-2025-moderation-updates

The update expands the reporting system from 6 reporting options to 39. When a person files a report, they first select a category (like 'violence' or 'adult content'), and then choose from a set of more specific options within that category.

An illustration of how the new reporting system works. When reporting a post, you select a category and then choose a specific reason within that category. These more specific reporting reasons help our moderation team address issues with better accuracy.

Bluesky says that "this granularity helps our moderation systems and teams act faster and with greater precision. It also allows for more accurate tracking of trends and harms across the network."
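
The two-step flow can be sketched as a small two-level taxonomy: a report is only valid if its specific reason belongs to the chosen category. This is an illustrative sketch, not Bluesky's actual API; the category and reason names here are hypothetical, not Bluesky's real identifiers.

```python
# Hypothetical sketch of a two-level report taxonomy (category -> specific
# reasons), modelled on the category-then-reason flow described above.
# All names are illustrative, not Bluesky's actual identifiers.
REPORT_TAXONOMY = {
    "violence": ["threats", "glorification", "incitement"],
    "adult-content": ["unlabelled-adult-content", "non-consensual-imagery"],
    "illegal-content": ["human-trafficking", "sale-of-illegal-goods"],
}

def reasons_for(category: str) -> list[str]:
    """Return the specific reasons a reporter can pick within a category."""
    return REPORT_TAXONOMY.get(category, [])

def file_report(category: str, reason: str, subject_uri: str) -> dict:
    """Build a report record; reject reasons outside the chosen category."""
    if reason not in reasons_for(category):
        raise ValueError(f"{reason!r} is not a reason under {category!r}")
    return {"category": category, "reason": reason, "subject": subject_uri}
```

The point of the extra level is exactly the granularity Bluesky describes: the moderation team receives a specific reason rather than a broad bucket, which also makes trend tracking per reason possible.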

One example is the newly added option to flag human trafficking content, which Bluesky says is "reflecting requirements under laws like the UK’s Online Safety Act". This option is implemented globally, not just in the UK.

This creates a new dynamic in social networking, and illustrates the thinking of Ostrom that Berjon referenced in his Eurosky Live presentation: the architecture of a protocol and the institutional rules are the same thing.

The modular nature of atproto makes content moderation one of the many components of the network that can be individually implemented and operated by any service. This gives jurisdictional flexibility: the protocol does not enforce specific categories of moderation, those are determined by the jurisdictions a service operates in. This architecture lets different services make different choices about report categories, creating the technical conditions for a wide variety of implementations.
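
One way to picture this modularity: each moderation service exposes its own set of report categories, and what a user can report under depends on which services are in play. This is a hypothetical sketch under my own assumptions, not atproto's actual data model; the service names and category sets are invented for illustration.

```python
# Hypothetical sketch of jurisdiction-flexible moderation on an open
# protocol: each moderation service chooses which report categories it
# exposes, and a client sees the union of categories from the services
# it uses. Service names and categories are illustrative only.
from dataclasses import dataclass

@dataclass
class ModerationService:
    name: str
    categories: set[str]

bluesky_mod = ModerationService(
    "bluesky-moderation",
    {"violence", "adult-content", "human-trafficking"},  # applied globally
)
eu_labeler = ModerationService(
    "example-eu-labeler",
    {"violence", "dsa-illegal-content"},  # shaped by EU rules
)

def available_report_categories(services: list[ModerationService]) -> set[str]:
    """Categories a user can report under, given the services they use."""
    return set().union(*(s.categories for s in services))
```

Nothing in this sketch forces the two services to agree: the protocol layer only carries whatever categories each service defines, which is the jurisdictional flexibility described above.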

But these same affordances for flexibility also give Bluesky the discretionary choice to apply the UK regulatory requirement network-wide. This is an active choice: Bluesky and atproto have an entire system for geo-specific moderation (see my deep dive on that system here), but Bluesky judged (correctly, imho!) that this harm matters globally and should be an individually reportable category everywhere.

This choice matters, as Bluesky's dominant role in the ecosystem means that their choices shape the choices that other moderation services in the ecosystem make as well. Other moderation services building on atproto will likely adopt similar structures, and user expectations about what "should" be a reportable category get shaped by Bluesky's choices.

This is where the thinking by Ostrom that Berjon referenced in his presentation at Eurosky live comes back in, where "the properties that define the architecture of a protocol and those that define the rules in an institution are the same". From a user's perspective, the reporting UI is the moderation architecture they experience. There's no meaningful distinction between "the protocol" and "the governance" from their standpoint.

I don't think this is technological determinism; atproto does not determine governance outcomes. But from a user's perspective they are effectively the same thing, as how the protocol is implemented is the governance they experience. You cannot meaningfully separate the protocol architecture from how it is institutionally implemented.

So when Bluesky implements regional laws on a global scale, and when new moderation services on atproto are incentivised to follow Bluesky in implementing the same reporting categories, regional laws can shape global networking norms. This happens via an intermediary step: regional laws shape the dominant service's choices, which in turn shape ecosystem norms. This is an emergent property of open protocols, and the reason why protocol architecture becomes equivalent to institutional governance.

This matters for how we think about the political project of building alternatives to Big Tech platforms. We do need to constrain Big Tech platform power. But in doing so we create new systems where decentralised protocols don't eliminate power dynamics; instead, those dynamics get reconfigured in confusing, fun, illegible, and exciting new ways.