· 6 min read

One of the most popular aspects of atproto for developers is the firehose: an aggregated stream of all the public data updates in the network. Independent developers have used the firehose to build real-time monitoring tools (like Firesky), feed generators, labeling services, bots, entire applications, and more.

But the firehose wire format is also one of the more complex parts of atproto, involving decoding binary CBOR data and CAR files, which can be off-putting to new developers. Additionally, the volume of data has increased rapidly as the network has grown, consistently producing hundreds of events per second.

The full synchronization firehose is core network infrastructure and not going anywhere, but to address these concerns we developed an alternative streaming solution, Jetstream, which has a few key advantages:

  • simple JSON encoding
  • reduced bandwidth, with optional compression
  • ability to filter by collection (NSID) or repo (DID)

A Jetstream server consumes from the firehose and fans out to many subscribers. It is open source, implemented in Go, and simple to self-host. An official Go client library is included, and community client libraries have been developed.

Jetstream was originally written as a side project by one of our engineers, Jaz. You can read more about their design goals and efficiency gains on their blog. It has been successful enough that we are promoting it to a team-maintained project, and are running several public instances:

  • jetstream1.us-east.bsky.network
  • jetstream2.us-east.bsky.network
  • jetstream1.us-west.bsky.network
  • jetstream2.us-west.bsky.network

You can read more technical details about Jetstream in the GitHub repo.
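
To give a sense of how simple consumption is, here is a minimal sketch of subscribing from Node.js (assuming Node 22+ for the built-in WebSocket; the /subscribe endpoint and its wantedCollections and wantedDids query parameters are documented in the repo):

// Subscribe to Jetstream, filtered to post records only
const url =
  'wss://jetstream1.us-east.bsky.network/subscribe' +
  '?wantedCollections=app.bsky.feed.post' // add wantedDids=... to filter by repo

const ws = new WebSocket(url)

ws.onmessage = (ev) => {
  const event = JSON.parse(String(ev.data))
  // "commit" events carry the decoded record as plain JSON
  if (event.kind === 'commit' && event.commit?.operation === 'create') {
    console.log(`${event.did} created ${event.commit.collection}/${event.commit.rkey}`)
  }
}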

Why Now?

Why are we promoting Jetstream at this time?

Two factors came to a head in early September: we released an example project for building new applications on atproto (Statusphere), and we had an unexpectedly large surge in traffic in Brazil. Suddenly we had a situation where new developers would be subscribing to a torrential full-network firehose (over a thousand events per second), just to pluck out a handful of individual events from a handful of accounts. Everything about this continued to function, even on a laptop on a WiFi connection, but it feels a bit wild as an introduction to the protocol.

We knew from early on that while the current firehose is extremely powerful, it was not well-suited to some use cases. Until recently, it hadn’t been a priority to develop alternatives. The firehose is a bit overpowered, but it does Just Work.

Has the Relay encountered scaling problems or become unaffordable to operate?

Nope! The current Relay implementation ('bigsky', written in Go, in the indigo git repo) absorbed a 10x surge in daily event rate, with over 200 active subscribers, and continues to chug along reliably. We have demonstrated how even a full-network Relay can be operated affordably.

We do expect to refactor our Relay implementation and make changes to the firehose wire format to support sharding. But the overall network architecture was designed to support global scale and millions of events per second, and we don't see any serious barriers to reaching that size. Bandwidth costs are manageable today. At larger network size (events times subscribers), bandwidth will grow in cost. We expect that the economic value of the network will provide funding and aligned incentives to cover the operation of core network infrastructure, including Relays. In practical terms, we expect funded projects and organizations depending on the firehose to pay infrastructure providers to ensure reliable operation (eg, an SLA), or to operate their own Relay instances.

Tradeoffs and Use Cases

Jetstream has efficiency and simplicity advantages, but they come with some tradeoffs. We think it is a pragmatic option for many projects, but that developers need to understand what they are getting into.

Events do not include cryptographic signatures or Merkle tree nodes, meaning the data is not self-authenticating. "Authenticated Transfer" is right in the AT Protocol acronym, so this is a pretty big deal! The trust relationship between a Jetstream operator and a consuming client is pretty different from that of a Relay. Not all deployment scenarios and use-cases require verification, and we suspect many projects are already skipping that aspect when consuming from the firehose. If you are running Jetstream locally, or have a tight trust relationship with a service provider, these may be acceptable tradeoffs.

Unlike the firehose (aka, Repository Event Stream), Jetstream is not formally part of the protocol. We are not as committed to maintaining it as a stable API or critical piece of infrastructure long-term, and we anticipate adopting some of the advantages it provides into the protocol firehose over time.

On the plus side, Jetstream is easier and cheaper to operate than a Relay instance. Folks relying on Jetstream can always run their own copy on their own servers.

Some of the use cases we think Jetstream is a good fit for:

  • casual, low-stakes projects and social toys: interactive bots, and "fun" badging labelers (eg, Kiki/Bouba)
  • experimentation and prototyping: student projects, proofs of concept, demos
  • informal metrics and visualizations
  • developing new applications: filtering by collection is particularly helpful when working with new Lexicons and debugging
  • internal systems: if you have multiple services consuming from the firehose, a single local Jetstream instance can be used to fan out to multiple subscribers

Some projects it is probably not the right tool for:

  • mirroring, backups, and archives
  • any time it is important to know "who said what"
  • moderation or anti-abuse actions
  • research studies

What Else?

The ergonomics of working with the firehose and "backfilling" bulk data from the network are something we would like to improve in the protocol itself. This might include mechanisms for doing "selective sync" of specific collections within a repo, while still getting full verification of authenticity.

It would be helpful to have a mechanism to identify which repos in the network have any records of a specific type, without inspecting every account individually. For example, enumerating all of the labelers or feed generators in the network. This is particularly important for new applications with a small initial user base.

We are working to complete the atproto specifications for the firehose and for account hosting status.

· 9 min read

As the AT Protocol matures, developers are building alternative Bluesky clients and entirely novel applications with independent Lexicons. We love to see it! This is very aligned with our vision for the ATmosphere, and we intend to encourage more of this through additional developer documentation and tooling.

One of the major components of the protocol is the concept of "Lexicons," which are machine-readable schemas for both API endpoints and data records. The goal with Lexicons is to make it possible for independent projects to work with the same data types reliably. Users should be able to choose which software they use to interact with the network, and it is important that developers are able to call shared APIs and write shared data records with confidence.

While the Lexicon concept has been baked into the protocol from the beginning, some aspects are still being finalized, and best practices around extensions, collaboration, and governance are still being explored.

A recent incident in the live network brought many of these abstract threads into focus. Because norms and precedent are still being established, we thought it would be good to dig into the specific situation and give some updates.

What Happened?

On October 10, Bluesky released version 1.92 of our main app. This release added support for "pinned posts," a long-requested feature, by adding a pinnedPost field to the app.bsky.actor.profile record. This field is declared as a com.atproto.repo.strongRef, which is an object containing both the URI and a hash (CID) of the referenced data record.

📢 App Version 1.92 is rolling out now (1/5). Pinned posts are here! Plus lots of UI improvements, including new font options, and the ability to filter your searches by language. Open this thread for more details. 🧵


— Bluesky (@bsky.app) Oct 10, 2024 at 3:24 PM

All the way back in April 2024, independent developers had already implemented pinned posts in a handful of client apps. They did so by using a pinnedPost field on the app.bsky.actor.profile record, as a simple string URL. This worked fine for several months, and multiple separate client apps (Klearsky, Tokimeki, and Hagoromo) collaborated informally and used this same extension of the profile record type.

What we're doing is simple: creating a custom field called pinnedPost on app.bsky.actor.profile and setting the post's AT URI there... the catch is that getProfile doesn't return custom fields (fair enough), which is a bit awkward, so it's still a work in progress.

— mimonelu 🦀 みもねる (@mimonelu.net) Apr 29, 2024 at 8:45 AM

One of the interesting dynamics was that multiple independent Bluesky apps were collaborating to use the same extension field.

I've updated the list of Bluesky clients! 🆕 Features! PinnedPost (3rd-party, non-official feature): Klearsky, TOKIMEKI, Hagoromo (羽衣)


— どるちぇ (@l-tan.blue) May 3, 2024 at 9:36 AM

Which all worked great! Until the Bluesky update conflicted with the existing records, causing errors for some users. Under the new schema, the previously-written records suddenly became "invalid". And new records, valid under the new schema, could be invalid from the perspective of independent software.

Analysis

The issue with conflicting records was an unintentional mistake on our part. While we knew that other apps had experimented with pinned posts, and separately knew that conflicts with Lexicon extension fields were possible in theory, we didn't check or ask around for feedback when updating the profile schema. While the Bluesky app is open-source and this new schema had even been discussed by developers in the app ahead of time, we didn't realize we had a name collision until the app update was shipped out to millions of users. If we had known about the name collision in advance, we would have chosen a different field name or worked with the dev community to resolve the issue.

There has not been clear guidance to developers about how to interoperate with and extend Lexicons defined by others. While we have discussed these questions publicly a few times, the specifications are somewhat buried, and we are just starting to document guidance and best practices.

At the heart of this situation is a tension over who controls and maintains Lexicons. The design of the system is that authority is rooted in the domain name corresponding to the schema NSID (in reverse notation). In this example, the app.bsky.actor.profile schema is controlled by the owners of bsky.app – the Bluesky team. Ideally schema maintainers will collaborate with other developers to update the authoritative schemas with additional fields as needed.

There is some flexibility in the validation rules to allow forwards-compatible evolution of schemas. Off-schema attributes can be inserted, ignored during schema validation, and passed through to downstream clients. Consequently it’s possible (and acceptable) for other clients to use off-schema attributes, which is the situation that happened here.
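
Concretely, the collision looked roughly like this (the record values below are hypothetical placeholders):

// Off-schema extension used by the independent clients: a plain string AT URI
{
  "$type": "app.bsky.actor.profile",
  "pinnedPost": "at://did:plc:exampleuser/app.bsky.feed.post/3kexamplekey"
}

// The same field name as later declared in the official schema: a strongRef object
{
  "$type": "app.bsky.actor.profile",
  "pinnedPost": {
    "uri": "at://did:plc:exampleuser/app.bsky.feed.post/3kexamplekey",
    "cid": "bafyreihypotheticalcid"
  }
}

Records valid under one shape fail validation under the other, which is exactly the conflict described above.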

While this specific case resulted in interoperability problems, we want to point out that these same apps are separately demonstrating a strong form of interoperation by including data from multiple schemas (whtwnd.com, linkat.blue, etc) all in a single app. This is exactly the kind of robust data reuse and collaboration we hoped the Lexicon system would enable.

🌈 TOKIMEKI UPDATE!!! (Web/Android v1.3.5 / iOS TF) 🆕 Added an Atmosphere space to the profile screen! On the AT Protocol, all kinds of services beyond Bluesky can be freely developed, and several useful ones have already been released. You can now see links to the non-Bluesky services a user uses. Currently two are supported: Linkat (link collections) and WhiteWind (blogs). It can be hidden from Settings → General. Web | Android


— 🌈 TOKIMEKI Bluesky (@tokimeki.blue) Oct 10, 2024 at 10:50 PM

Current Recommendations

What do we recommend to developers looking to extend record schemas today?

Our current recommendation is to define new Lexicons for "sidecar" records. Instead of adding fields to app.bsky.actor.profile, define a new record schema (eg com.yourapp.profile) and put the fields there. When rendering a profile view, fetch this additional record at the same time. Some records always have a fixed record key, like self, so they can be fetched with a simple GET. For records like app.bsky.feed.post, which have TID record keys, the sidecar records can have the same record key as the original post, so they also can be fetched with a simple GET. We use this pattern at scale in the bsky Lexicons with app.bsky.feed.threadgate, which extends the post schema, and allows data updates without changing the version (CID) of the post record itself.
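
Fetching a sidecar record is a single getRecord call. Here is a minimal sketch (com.yourapp.profile is the illustrative NSID from above, not a real Lexicon):

import { AtpAgent } from '@atproto/api'

const agent = new AtpAgent({ service: 'https://bsky.social' })

// Fetch the hypothetical sidecar profile record for an account
async function getSidecarProfile(repo: string) {
  const res = await agent.com.atproto.repo.getRecord({
    repo, // DID or handle of the account
    collection: 'com.yourapp.profile', // hypothetical sidecar collection
    rkey: 'self', // fixed record key, matching the app.bsky.actor.profile convention
  })
  return res.data.value
}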

There is some overhead to doing additional fetches, but these can be mitigated with caching or by building a shim API server (with updated API Lexicons) to blend the additional data into "view" requests. If needed, support could be improved with generic APIs to automatically hydrate "related records" with matching TIDs across collections in the same repository.

If sidecar records are not an option, and developers feel they must add data directly to existing record types, we very strongly recommend against field names that might conflict. Even if you think other developers might want to use the same extension, you should intentionally choose long unique prefixes for field names to prevent conflicts both with the "authoritative" Lexicon author, and other developers who might try to make the same extension. What we currently recommend is using a long, unique, non-generic project name prefix, or even a full NSID for the field name. For example, app.graysky.pinnedPost or grayskyPinnedPost are acceptable, but not pinnedPost or extPinnedPost.

While there has been some clever and admirable use of extension fields (the SkyFeed configuration mechanism in app.bsky.feed.generator records comes to mind), we don't see inserting fields into data specified by other parties as a reliable or responsible practice in the long run. We acknowledge that there is a demonstrated demand for a simple extension mechanism, and safer ways to insert extension data in records might be specified in the future.

Proposals and discussion welcome! There is an existing thread on GitHub.

Progress with Lexicons

While not directly related to extension fields, we have a bunch of ongoing work with the overall system.

We are designing a mechanism for Lexicon resolution. This will allow anybody on the public internet to authoritatively resolve the schema for a given NSID. This process should not need to happen very often, and we want to incorporate lessons from previous live schema validation systems (including XML), but there does need to be a way to demonstrate authority.

We are planning to build an aggregator and automated documentation system for Lexicons, similar to package management systems like pkg.go.dev and lib.rs. These will make it easier to discover and work with independent Lexicons across the ATmosphere and provide baseline documentation of schemas for developers. They can also provide collective benefits such as archiving, flagging abuse and security problems, and enabling research.

We are writing a style guide for authoring Lexicons, with design patterns, tips and common gotchas, and considerations for evolution and extensibility.

The validation behaviors for the unknown and union Lexicon types have been clarified in the specifications.

The schema validation behavior when records are created at PDS instances has been updated, and will be reflected in the specifications soon (a summary is available).

Generic run-time Lexicon validation support was added to the Go SDK (indigo), and test vectors were added to the atproto interop tests repository.

Finally, an end-to-end tutorial on building an example app ("Statusphere") using custom Lexicons was added to the updated atproto documentation website.

Overall, the process for designing and publishing new schemas from scratch should be clearer soon, and the experience of finding and working with existing schemas should be significantly improved as well.

· 5 min read

We are very happy to release the initial specification of OAuth for AT Protocol! This is expected to be the primary authentication and authorization system between atproto client apps and PDS instances going forward, replacing the current flow using App Passwords and createSession over time.

OAuth is a framework of standards under active development by the IETF. We selected a particular "profile" of RFCs, best practices, and draft specifications to preserve security in the somewhat unique atproto ecosystem. In particular, unlike most existing OAuth deployments and integrations, the atproto network is composed of many independent server instances, client apps, developers, and end users, who generally have not cross-registered their client software ahead of time. This necessitates both automated discovery of the user’s Authorization Server, and automated registration of client metadata with the server. In some ways this situation is closer to that between email clients and email providers than it is between traditionally pre-registered OAuth or OIDC clients (such as GitHub apps or "Sign In With Google"). This unfortunately means that generic OAuth client libraries may not work out-of-the-box with the atproto profile yet. We have built on top of draft standards (including "OAuth Client ID Metadata Document") and are optimistic that library support will improve with time.
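
To make the discovery problem concrete, here is a rough sketch of resolving a handle to its PDS endpoint, which has to happen before an OAuth flow can even begin (error handling omitted; a production client must also support the DNS TXT resolution method and did:web identities):

// Resolve handle -> DID -> DID document -> PDS service endpoint
async function resolvePds(handle: string): Promise<string | undefined> {
  // 1. Handle to DID, via the HTTPS well-known method
  const didRes = await fetch(`https://${handle}/.well-known/atproto-did`)
  const did = (await didRes.text()).trim()

  // 2. DID to DID document (for did:plc, via the PLC directory)
  const didDoc = await fetch(`https://plc.directory/${did}`).then((r) => r.json())

  // 3. DID document to PDS endpoint
  return didDoc.service?.find((s: any) => s.id === '#atproto_pds')?.serviceEndpoint
}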

We laid out an OAuth Roadmap earlier this summer, and are entering the "Developer Preview Phase". In the coming months we expect to tweak the specification based on feedback from developers and standards groups, and to fill in a few details. We expect the broad shape of the specification to remain the same, and encourage application and SDK developers to start working with the specification now, and stop using the legacy App Password system for new projects.

What is Ready Today?

OAuth has been deployed to several components of the atproto network over the past weeks. The Bluesky-developed PDS now implements the server component (including the Authorization Interface), the TypeScript client SDK supports the client components, and several independent developers and projects have implemented login flows. At this time the Bluesky Social app has not yet been updated to use OAuth.

We have a few resources for developers working with OAuth:

The current OAuth profile does not specify how granular permissions ("scopes") work with atproto. Instead, we have defined a small set of "transitional" scopes which provide the same levels of client access as the current auth system:

  • transition:generic: the same level of permissions as an App Password
  • transition:chat.bsky: an add-on (must be included in combination with transition:generic) which adds access to the chat.bsky.* Lexicons for DMs; same behavior as an App Password with the DM access option selected
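
Clients request these scopes through their client metadata document. As a rough illustration only (field names follow the "OAuth Client ID Metadata Document" draft and the atproto profile; consult the specification for the authoritative set), a web client's metadata might look like:

{
  "client_id": "https://app.example.com/client-metadata.json",
  "client_name": "Example Client",
  "application_type": "web",
  "grant_types": ["authorization_code", "refresh_token"],
  "response_types": ["code"],
  "redirect_uris": ["https://app.example.com/oauth/callback"],
  "scope": "atproto transition:generic transition:chat.bsky",
  "dpop_bound_access_tokens": true,
  "token_endpoint_auth_method": "none"
}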

Next Steps

There are two larger components which will integrate OAuth into the atproto ecosystem:

  • The first is to expand the PDS account web interface to manage active OAuth sessions. This will allow users to inspect active sessions, the associated clients, and to revoke those sessions (eg, "remote log out"). This is user interface work specific to the PDS implementation, and will not change or impact the existing OAuth specification.
  • The second is to design an atproto-native scopes system which integrates with Lexicons, record collections, and other protocol features. This will allow granular app permissions in an extensible manner. This is expected to be orthogonal to the current OAuth specification, and it should be relatively easy for client apps to transition to more granular permissions, though it will likely require logout and re-authentication by users.

These are big priorities for user security, and the path to implementation and deployment is clear.

While that work is in progress, we are interested in feedback from SDK developers and early adopters. What pain points do you encounter? Are there requirements which could be relaxed without reducing user security?

The overall design of this OAuth profile is similar to that of other social web protocols, such as ActivityPub. There are some atproto-specific aspects, but we are open to collaboration and harmonization between profiles to simplify and improve security on the web generally.

Finally, a number of the specifications we adopt and build upon are still drafts undergoing active development. We are interested in feedback on our specification, and intend to work with standards bodies (including the IETF) and tweak our profile if necessary to ensure compliance with final versions of any relevant standards and best practices.

· 11 min read

Today we are merging some changes to how the TypeScript @atproto/api package works with authentication sessions. The changes are mostly backwards compatible, but some parts are now deprecated, and there are some breaking changes for advanced uses.

The motivation for these changes is the need to make the @atproto/api package compatible with OAuth session management. We don't have OAuth client support "launched" and documented quite yet, so you can keep using the current app password authentication system. When we do "launch" OAuth support and begin encouraging its usage in the near future (see the OAuth Roadmap), these changes will make it easier to migrate.

In addition, the redesigned session management system fixes a bug that could cause the session data to become invalid when Agent clones are created (e.g. using agent.withProxy()).

New Features

We've restructured the XrpcClient HTTP fetch handler so that it is specified during instantiation of the XRPC client, through the constructor, instead of relying on a statically defined default implementation.

With this refactor, the XRPC client is now more modular and reusable. Session management, retries, cryptographic signing, and other request-specific logic can be implemented in the fetch handler itself rather than by the calling code.

A new abstract class named Agent has been added to @atproto/api. This class will be the base class for all Bluesky agent classes in the @atproto ecosystem. It is meant to be extended by implementations that provide session management and fetch handling. Here is the class hierarchy:

[Image: AT Protocol API class hierarchy]

As you adapt your code to these changes, make sure to use the Agent type wherever you expect to receive an agent, and use the AtpAgent type (class) only to instantiate your client. The reason for this is to be forward compatible with the OAuth agent implementation that will also extend Agent, and not AtpAgent.

import { Agent, AtpAgent } from '@atproto/api'

async function setupAgent(service: string, username: string, password: string): Promise<Agent> {
  const agent = new AtpAgent({
    service,
    persistSession: (evt, session) => {
      // handle session update
    },
  })

  await agent.login(username, password)

  return agent
}

import { Agent } from '@atproto/api'

async function doStuffWithAgent(agent: Agent, arg: string) {
  return agent.resolveHandle(arg)
}

import { Agent, AtpAgent } from '@atproto/api'

class MyClass {
  agent: Agent

  constructor() {
    // AtpAgent requires a service URL at construction time
    this.agent = new AtpAgent({ service: 'https://bsky.social' })
  }
}

Breaking changes

Most of the changes introduced in this version are backward-compatible. However, there are a couple of breaking changes you should be aware of:

  • Customizing fetch: The ability to customize the fetch: FetchHandler property of @atproto/xrpc's Client and @atproto/api's AtpAgent classes has been removed. Previously, the fetch property could be set to a function that would be used as the fetch handler for that instance, and was initialized to a default fetch handler. That property is still accessible in a read-only fashion through the fetchHandler property and can only be set during the instance creation. Attempting to set/get the fetch property will now result in an error.
  • The fetch() method, as well as WhatWG compliant Request and Headers constructors, must be globally available in your environment. Use a polyfill if necessary.
  • The AtpBaseClient has been removed. The AtpServiceClient has been renamed AtpBaseClient. Any code using either of these classes will need to be updated.
  • Instead of wrapping an XrpcClient in its xrpc property, the AtpBaseClient (formerly AtpServiceClient) class, created through lex-cli, now extends the XrpcClient class. This means that a client instance now passes the instanceof XrpcClient check. The xrpc property now returns the instance itself and has been deprecated.
  • setSessionPersistHandler is no longer available on the AtpAgent or BskyAgent classes. The session handler can only be set through the persistSession option of the AtpAgent constructor.
  • The new class hierarchy is as follows:
    • BskyAgent extends AtpAgent: adds no functionality (hence its deprecation).
    • AtpAgent extends Agent: adds password-based session management.
    • Agent extends AtpBaseClient: an abstract class that adds syntactic sugar methods for the app.bsky Lexicons, as well as abstract session management methods and atproto-specific utilities (labelers & proxy headers, cloning capability).
    • AtpBaseClient extends XrpcClient: generated code that adds fully typed, Lexicon-defined namespaces (instance.app.bsky.feed.getPosts()) to the XrpcClient.
    • XrpcClient is the base class.

Non-breaking changes

  • The com.* and app.* namespaces have been made directly available to every Agent instance.

Deprecations

  • The default export of the @atproto/xrpc package has been deprecated. Use named exports instead.
  • The Client and ServiceClient classes are now deprecated. They are replaced by a single XrpcClient class.
  • The default export of the @atproto/api package has been deprecated. Use named exports instead.
  • The BskyAgent has been deprecated. Use the AtpAgent class instead.
  • The xrpc property of the AtpBaseClient instances has been deprecated. The instance itself should be used as the XRPC client.
  • The api property of the AtpAgent and BskyAgent instances has been deprecated. Use the instance itself instead.

Migration

The @atproto/api package

If you were relying on the AtpBaseClient solely to perform validation, use this:

Before:

import { AtpBaseClient, ComAtprotoSyncSubscribeRepos } from '@atproto/api'

const baseClient = new AtpBaseClient()

baseClient.xrpc.lex.assertValidXrpcMessage('io.example.doStuff', {
  // ...
})

After:

import { lexicons } from '@atproto/api'

lexicons.assertValidXrpcMessage('io.example.doStuff', {
  // ...
})

If you are extending the BskyAgent to perform custom session manipulation, define your own Agent subclass instead:

Before:

import { BskyAgent } from '@atproto/api'

class MyAgent extends BskyAgent {
  private accessToken?: string

  async createOrRefreshSession(identifier: string, password: string) {
    // custom logic here

    this.accessToken = 'my-access-jwt'
  }

  async doStuff() {
    return this.call('io.example.doStuff', {
      headers: {
        'Authorization': this.accessToken && `Bearer ${this.accessToken}`
      }
    })
  }
}

After:

import { Agent } from '@atproto/api'

class MyAgent extends Agent {
  private accessToken?: string
  public did?: string

  constructor(private readonly service: string | URL) {
    super({
      service,
      headers: {
        Authorization: () =>
          this.accessToken ? `Bearer ${this.accessToken}` : null,
      },
    })
  }

  clone(): MyAgent {
    const agent = new MyAgent(this.service)
    agent.accessToken = this.accessToken
    agent.did = this.did
    return this.copyInto(agent)
  }

  async createOrRefreshSession(identifier: string, password: string) {
    // custom logic here

    this.did = 'did:example:123'
    this.accessToken = 'my-access-jwt'
  }
}

If you are monkey patching the xrpc service client to perform client-side rate limiting, you can now do this in the FetchHandler function:

Before:

import { BskyAgent } from '@atproto/api'
import { RateLimitThreshold } from "rate-limit-threshold"

const agent = new BskyAgent()
const limiter = new RateLimitThreshold(
  3000,
  300_000
)

const origCall = agent.api.xrpc.call
agent.api.xrpc.call = async function (...args) {
  await limiter.wait()
  return origCall.call(this, ...args)
}

After:

import { AtpAgent, AtpAgentOptions } from '@atproto/api'
import { RateLimitThreshold } from "rate-limit-threshold"

class LimitedAtpAgent extends AtpAgent {
  constructor(options: AtpAgentOptions) {
    const fetch = options.fetch ?? globalThis.fetch
    const limiter = new RateLimitThreshold(
      3000,
      300_000
    )

    super({
      ...options,
      fetch: async (...args) => {
        await limiter.wait()
        return fetch(...args)
      },
    })
  }
}

If you configure a static fetch handler on the BskyAgent class - for example to modify the headers of every request - you can now do this by providing your own fetch function:

Before:

import { BskyAgent, defaultFetchHandler } from '@atproto/api'

BskyAgent.configure({
  fetch: async (httpUri, httpMethod, httpHeaders, httpReqBody) => {
    const ua = httpHeaders["User-Agent"]

    httpHeaders["User-Agent"] = ua ? `${ua} ${userAgent}` : userAgent

    return defaultFetchHandler(httpUri, httpMethod, httpHeaders, httpReqBody)
  },
})

After:

import { AtpAgent, AtpAgentOptions } from '@atproto/api'

class MyAtpAgent extends AtpAgent {
  constructor(options: AtpAgentOptions) {
    const fetch = options.fetch ?? globalThis.fetch

    super({
      ...options,
      fetch: async (url, init) => {
        const headers = new Headers(init.headers)

        // `userAgent` is assumed to be defined elsewhere in your code
        const ua = headers.get("User-Agent")
        headers.set("User-Agent", ua ? `${ua} ${userAgent}` : userAgent)

        return fetch(url, { ...init, headers })
      },
    })
  }
}

The @atproto/xrpc package

The Client and ServiceClient classes are now deprecated. If you need a lexicon based client, you should update the code to use the XrpcClient class instead.

The deprecated ServiceClient class now extends the new XrpcClient class. Because of this, the fetch FetchHandler can no longer be configured on the Client instances (including the default export of the package). If you are not relying on the fetch FetchHandler, the new changes should have no impact on your code. Beware that the deprecated classes will eventually be removed in a future version.

Since its use has completely changed, the FetchHandler type has also completely changed. The new FetchHandler type is now a function that receives a url pathname and a RequestInit object and returns a Promise<Response>. This function is responsible for making the actual request to the server.

export type FetchHandler = (
  this: void,
  /**
   * The URL (pathname + query parameters) to make the request to, without the
   * origin. The origin (protocol, hostname, and port) must be added by this
   * {@link FetchHandler}, typically based on authentication or other factors.
   */
  url: string,
  init: RequestInit,
) => Promise<Response>

A noticeable change that has been introduced is that the uri field of the ServiceClient class has not been ported to the new XrpcClient class. It is now the responsibility of the FetchHandler to determine the full URL to make the request to. The same goes for the headers, which should now be set through the FetchHandler function.

If you do rely on the legacy Client.fetch property to perform custom logic upon request, you will need to migrate your code to use the new XrpcClient class. The XrpcClient class has a similar API to the old ServiceClient class, but with a few differences:

  • The Client + ServiceClient duality was removed in favor of a single XrpcClient class. This means that:
    • There no longer exists a centralized lexicon registry. If you need a global lexicon registry, you can maintain one yourself using a new Lexicons instance (from @atproto/lexicon).
    • The FetchHandler is no longer a statically defined property of the Client class. Instead, it is passed as an argument to the XrpcClient constructor.
  • The XrpcClient constructor now requires a FetchHandler function as the first argument, and an optional Lexicon instance as the second argument.
  • The setHeader and unsetHeader methods were not ported to the new XrpcClient class. If you need to set or unset headers, you should do so in the FetchHandler function provided in the constructor arg.
Before:

import client, { defaultFetchHandler } from '@atproto/xrpc'

client.fetch = function (
  httpUri: string,
  httpMethod: string,
  httpHeaders: Headers,
  httpReqBody: unknown,
) {
  // Custom logic here
  return defaultFetchHandler(httpUri, httpMethod, httpHeaders, httpReqBody)
}

client.addLexicon({
  lexicon: 1,
  id: 'io.example.doStuff',
  defs: {},
})

const instance = client.service('http://my-service.com')

instance.setHeader('my-header', 'my-value')

await instance.call('io.example.doStuff')

After:

import { XrpcClient } from '@atproto/xrpc'

const instance = new XrpcClient(
  async (url, init) => {
    const headers = new Headers(init.headers)

    headers.set('my-header', 'my-value')

    // Custom logic here

    const fullUrl = new URL(url, 'http://my-service.com')

    return fetch(fullUrl, { ...init, headers })
  },
  [
    {
      lexicon: 1,
      id: 'io.example.doStuff',
      defs: {},
    },
  ],
)

await instance.call('io.example.doStuff')

If your fetch handler does not require any "custom logic", and all you need is an XrpcClient that makes its HTTP requests towards a static service URL, the previous example can be simplified to:

import { XrpcClient } from '@atproto/xrpc'

const instance = new XrpcClient('http://my-service.com', [
  {
    lexicon: 1,
    id: 'io.example.doStuff',
    defs: {},
  },
])

If you need to add static headers to all requests, you can instead instantiate the XrpcClient as follows:

import { XrpcClient } from '@atproto/xrpc'

const instance = new XrpcClient(
  {
    service: 'http://my-service.com',
    headers: {
      'my-header': 'my-value',
    },
  },
  [
    {
      lexicon: 1,
      id: 'io.example.doStuff',
      defs: {},
    },
  ],
)

If you need the headers or service url to be dynamic, you can define them using functions:

import { XrpcClient } from '@atproto/xrpc'

const instance = new XrpcClient(
  {
    service: () => 'http://my-service.com',
    headers: {
      'my-header': () => 'my-value',
      'my-ignored-header': () => null, // ignored
    },
  },
  [
    {
      lexicon: 1,
      id: 'io.example.doStuff',
      defs: {},
    },
  ],
)

· 4 min read

We’re launching microgrants for labeling services on Bluesky!

Moderation is the backbone of healthy social spaces online. Bluesky has our own moderation team dedicated to providing around-the-clock coverage to uphold our community guidelines, and additionally, we recognize that there is no one-size-fits-all approach to moderation. No single company can get online safety right for every country, culture, and community in the world. So we’ve also been building something bigger — an ecosystem of moderation and open-source safety tools that gives communities power to create their own spaces, with their own norms and preferences.

Labeling services on Bluesky allow users and communities to participate in a stackable ecosystem of services. Users can create and subscribe to filters from independent moderation services, which are layered on top of Bluesky’s own service. You can read more about how stackable moderation works on Bluesky here.

To support the first labelers in our ecosystem and encourage more, we are launching a microgrants program for labeling services. To apply for a grant, please fill out this form.

Program Details

For this program, we have an allocation of $10,000. We will be distributing $500 per labeling service that is approved for a grant. Please submit an application here.

Applications are accepted on a rolling basis, and we will announce both the recipients of the grants and when all of the grants have been distributed. We pay out grants via public GitHub Sponsorships.

In addition, we’ve also partnered with Amazon Web Services (AWS) to offer $5,000 in AWS Activate1 credits to labeling services as well. These credits are applied to your AWS bill to help cover costs from cloud services, including machine learning, compute, databases, storage, containers, dev tools, and more. Simply check a box in your grant application if you’re interested in receiving these credits as well.

If you’re an organization interested in running a labeler but do not currently have the technical capacity to implement one, please reach out to our team at partnerships@blueskyweb.xyz. We may be able to assist in matching you with a developer.

Initial Labeling Grant Recipients

We're kicking off the program with grants to three initial recipients:

XBlock

@XBlock.aendra.dev is an attempt to help give users control over the types of content they see on Bluesky. Screenshots serve a variety of uses on social media, but quite often are intended to create discourse or drive dogpiles. By letting users toggle the visibility of screenshots from various platforms, XBlock aims to give users a "volume dial" for certain types of content.

Aegis

@aegis.blue is a volunteer-run labeling service providing community moderation predominantly to Bluesky's LGBTQIA+ and marginalized users. Featuring a diverse team of both industry and aspiring experts, Aegis lives by the motto, "We Keep Us Safe." More info can be found on their website at https://aegis.blue/Home.

News Detective

News Detective fights misinformation by combining the experience of professional factcheckers with the wisdom of crowds. A crowd of volunteer factcheckers transparently investigates posts, and professional factcheckers make sure only the highest quality factchecks make it through the system. Users who use News Detective will be able to see factchecks (including explanations and sources) on posts they come across and request factchecks on posts they find questionable. They can also watch News Detectives discuss the posts and even participate in factchecking to create a more honest, democratic, and transparent internet. Incubated at MIT DesignX, MIT Sandbox, and HacksHackers.

Contact

Please feel free to leave questions or comments on the GitHub discussion for this announcement here, or on the Bluesky post here.

Footnotes

  1. AWS Activate Credits are subject to the program's terms and conditions.

· 11 min read

Discuss this post in our GitHub Discussion forums here

This roadmap is an update on our progress and lays out our general goals and focus for the coming months. This document is written for developers working on atproto clients, implementations, and applications (including Bluesky-specific projects). This is not a product announcement: while some product features are hinted at, we aren't promising specific timelines here. As always, most Bluesky software is free and open source, and observant folks can follow along with our progress week by week in GitHub.

In the big picture, we made a lot of progress on the protocol in early 2024. We opened up federation on the production network, demonstrated account migration, specified and launched stackable moderation (labeling and Ozone), shared our plan for OAuth, specified a generic proxying mechanism, built a new API documentation website (docs.bsky.app), and more.

After this big push on the protocol, the Bluesky engineering team is spending a few months catching up on some long-requested features like GIFs, video, and DMs. At the same time, we do have a few "enabling" pieces of protocol work underway, and continue to make progress towards a milestone of protocol maturity and stability.

Summary-level notes:

  • Federation is now open: you don't need to pre-register in Discord any more. 
  • It is increasingly possible to build independent apps and integrations on atproto. One early example is https://whtwnd.com/, a blogging web app built on atproto.
  • The timeline for a formal standards body process is being pushed back until we have additional independent active projects building on the protocol.

Current Work

Proxying of Independent Lexicons: earlier this year we added a generic HTTP proxying mechanism, which allows clients to specify which onward service (eg, AppView) instance they want to communicate with. To date this has been limited to known Lexicons, but we will soon relax this restriction and allow arbitrary XRPC query and procedure requests. Combined with support for records using independent Lexicon schemas (now allowed), this finally enables building new independent atproto applications. PR for this work

Open Federation: the Bluesky Relay service initially required pre-registration before new PDS instances were crawled. This was a very informal process (using Discord) to prevent automated abuse, but we have removed this requirement, making it even easier to set up PDS instances. We will also bump the per-PDS account limits, though we will still enforce some limits to minimize automated abuse; these limits can be bumped for rapidly growing communities and projects.

Email 2FA: while OAuth is our main focus for improving account security (OAuth flows will enable arbitrary MFA, including passkeys, hardware tokens, authenticators, etc), we are rapidly rolling out a basic form of 2FA, using an emailed code in addition to account password for sign-in. This will be an optional opt-in functionality. Announcement with details

OAuth: we continue to make progress implementing our plan for OAuth. Ultimately this will completely replace the current account sign-up, session, and app-password API endpoints, though we will maintain backwards compatibility for a long period. With OAuth, account lifecycle, sign-in, and permission flows will be implementation-specific web views. This means that PDS implementations can add any sign-up screening or MFA methods they see fit, without needing support in the com.atproto.* Lexicons. Detailed Proposal

Product Features

These are not directly protocol-related, but are likely to impact many developers, so we wanted to give a heads up on these.

Harassment Mitigations: additional controls and mechanisms to reduce the prevalence, visibility, and impact of abusive mentions and replies, particularly coming from newly created single-purpose or throw-away accounts. May expand on the existing thread-gating and reply-gating functionality.

Post Embeds: the ability to embed Bluesky posts in external public websites. Including oEmbed support. This has already shipped! See embed.bsky.app

Basic "Off-Protocol" Direct Messages (DMs): having some mechanism to privately contact other Bluesky accounts is the most requested product feature. We looked closely at alternatives like linking to external services, re-using an existing protocol like Matrix, or rushing out on-protocol encrypted DMs, but ultimately decided to launch a basic centralized system to take the time pressure off our team and make our user community happy. We intend to iterate and fully support E2EE DMs as part of atproto itself, without a centralized service, and will take the time to get the user experience, security, and privacy polished. This will be a distinct part of the protocol from the repository abstraction, which is only used for public content.

Better GIF and Video support: the first step is improving embeds from external platforms (like Tenor for GIFs, and YouTube for video). Both the post-creation flow and embed-view experience will be improved.

Feed Interaction Metrics: feed services currently have no feedback on how users are interacting with the content that they curate. There is no way for users to tell specific feeds that they want to see more or less of certain kinds of content, or whether they have already seen content. We are adding a new endpoint for clients to submit behavior metrics to feed generators as a feedback mechanism. This feedback will be most useful for personalized feeds, and less useful for topic or community-oriented feeds. It also raises privacy and efficiency concerns, so sending of this metadata will both be controlled by clients (optional), and will require feed generator opt-in in the feed declaration record.
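
Since the endpoint is still being designed, treat the following as a purely hypothetical sketch of what a client-side submission might look like (the sendInteractions name and payload shape are assumptions, not a finalized API):

import { AtpAgent } from '@atproto/api'

// Hypothetical: report that a feed item was seen, so the feed generator
// can avoid re-serving it (requires the feed to have opted in).
async function reportSeen(agent: AtpAgent, postUri: string, feedContext?: string) {
  await agent.app.bsky.feed.sendInteractions({
    interactions: [
      {
        item: postUri, // AT-URI of the feed item
        event: 'app.bsky.feed.defs#interactionSeen',
        feedContext,
      },
    ],
  })
}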

Topic/Community Feeds: one of the more common uses for feed generators is to categorize content by topic or community. These feeds are not personalized (they look the same to all users), are not particularly "algorithmic" (posts are either in the feed or not), and often have relatively clear inclusion criteria (though they may be additionally curated or filtered). We are exploring ways to make it easier to create, curate, and explore this type of feed.

User/Labeler Messaging: currently, independent moderators have no private mechanism to communicate with accounts which have reported content, or accounts which moderation actions have been taken against. All reports, including appeals, are uni-directional, and accounts have no record of the reports they have submitted. While Bluesky can send notification emails to accounts hosted on our own PDS instance, this does not work cross-provider with self-hosted PDS instances or independent labelers.

Protocol Stability Milestone

A lot of progress has been made in recent months on the parts of the protocol relevant to large-scale public conversation. The core concepts of autonomous identity (DIDs and handles), self-certifying data (repositories), content curation (feed generators), and stackable moderation (labelers) have now all been demonstrated on the live network.

While we will continue to make progress on additional objectives (see below), we feel we are approaching a milestone in development and stability of these components of the protocol. There are a few smaller tasks to resolve towards this milestone.

Takedowns: we have a written proposal for how content and account takedowns will work across different pieces of infrastructure in the network. Takedowns are a stronger intervention that complements the labeling system. Bluesky already has mechanisms to enact takedowns on our own infrastructure when needed, but some details of how inter-provider takedown requests are communicated still need to be worked out.

Remaining Written Specifications: a few parts of the protocol have not been written up in the specifications at atproto.com.

Guidance on Building Apps and Integrations: while we hope the protocol will be adopted and built upon in unexpected ways, it would be helpful to have some basic pointers and advice on creating new applications and integrations. These will probably be informal tutorials and example code to start.

Account and Identity Firehose Events: while account and identity state are authoritatively managed across the DID, DNS, and PDS systems, it is efficient and helpful for changes to this state to be broadcast over the repository event stream ("firehose"). The semantics and behavior of the existing #identity event type will be updated and clarified, and an additional #account event type will be added to communicate PDS account deletion and takedown state to downstream services (Relay, and on to AppView, feed generator, labelers, etc). Downstream services might still need to resolve state from an authoritative source after being notified on the firehose.
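
As a rough sketch of what the two event types carry (the details were still being finalized when this was written, so these shapes are illustrative assumptions, not the authoritative Lexicon):

// Illustrative TypeScript shapes for the firehose identity and account events
type IdentityEvent = {
  seq: number // firehose sequence number
  did: string // account whose identity state changed
  time: string // timestamp of the change
  handle?: string // optionally the new handle; consumers should still re-resolve
}

type AccountEvent = {
  seq: number
  did: string
  time: string
  active: boolean // whether the account is currently active on its host
  status?: string // eg "takendown" or "deactivated" when not active
}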

Private Account Data Iteration: the app.bsky Lexicons currently include a preferences API, as well as some additional private state like mutes. The design of the current API is somewhat error-prone, difficult for independent developers to extend, and has unclear expectations around providing access to service providers (like independent AppViews). We are planning to iterate on this API, though it might not end up part of the near-term protocol milestone.

Protocol Tech Debt: there are a few other small technical issues to resolve or clean up; these are tracked in this GitHub discussion

On the Horizon

There are a few other pieces of protocol work which we are starting to plan out, but which are not currently scheduled to complete in 2024. It is very possible that priorities and schedules will be shuffled, but we mostly want to call these out as things we do want to complete, but will take a bit more time.

Protocol-Native DMs: as mentioned above, we want to have a "proper" DM solution as part of atproto, which is decentralized, E2EE, and follows modern security best practices.

Limited-Audience (Non-Public) Content: to start, we have prioritized the large-scale public conversation use cases in our protocol design, centered around the public data repository concept. While we support using the right tool for the job, and atproto is not trying to encompass every possible social modality, there are many situations and use-cases where having limited-audience content in the same overall application would be helpful. We intend to build a mechanism for group-private content sharing. It will likely be distinct from public data repositories and the Relay/firehose mechanism, but retain other parts of the protocol stack.

Firehose Bandwidth Efficiency: as the network grows, and the volume and rate of repository commits increases, the cost of subscribing to the entire Relay firehose increases. There are a number of ways to significantly improve bandwidth requirements: removing MST metadata for most use-cases; filtering by record types or subsets of accounts; batch compression; etc.

Record Versioning (Post Editing): atproto already supports updating records in repositories: one example is updating bsky profile records. And preparations were made early in the protocol design to support post editing while avoiding misleading edits. Ideally, it would also be possible to (optionally) keep old versions of records around in the repository, and allow referencing and accessing multiple versions of the same record.

PLC Transparency Log: we are exploring technical and organizational mechanisms to further de-centralize the DID PLC directory service. The most promising next step looks to be publishing a transparency log of all directory operations. This will make it easier for other organizations to audit the behavior of the directory and maintain verifiable replicas. The recent "tiling" transparency log design used for https://sunlight.dev/ (described here) is particularly promising. Compatibility with RFC 6962 (Certificate Transparency) could allow future integration with an existing ecosystem of witnesses and auditors.

Identity Key Self-Management UX: the DID PLC system has a concept of "rotation keys" to control the identity itself (in the form of the DID document). We would like to make it possible for users to optionally register additional keys on their personal devices, password managers, or hardware security keys. If done right, this should improve the resilience of the system and reduce some of the burden of responsibility on PDS operators. While this is technically possible today, it will require careful product design and security review to make this a safe and widely-adopted option.

Standards Body Timeline

As described in our 2023 Protocol Roadmap, we hope to bring atproto to an existing standards body to solidify governance and interoperability of the lower levels of the protocol. We had planned to start the formal process this summer, but as we talked to more people experienced with this process, we realized that we should wait until the design of the protocol has been explored by more developers. It would be ideal to have a couple organizations with atproto experience collaborate on the standards process together. If you are interested in being part of the atproto standards process, leave a message in the discussion thread for this post, or email protocol@blueskyweb.xyz.

While there has been a flowering of many projects built around the app.bsky microblogging application, there have been very few additional Lexicons and applications built from scratch. Some of this stemmed from restrictions on data schemas and proxying behavior on the Bluesky-hosted PDS instances, only relaxed just recently. We hope that new apps and Lexicons will exercise the full capabilities and corner-cases of the protocol.

We will continue to participate in adjacent standards efforts to make connections and get experience. Bluesky staff will attend IETF 120 in July, and are always happy to discuss responsible DNS integrations, OAuth, and HTTP API best practices.

· 4 min read

In March, we announced the AT Protocol Grant program, aimed at fostering the growth and sustainability of the atproto developer ecosystem, as well as the first three recipients.

We’re excited to share the second batch of grant recipients with you today!

In this batch, we distributed $4,800 total in grants. There is $2,200 remaining in the initial allocation of $10,000. Congratulations to all of the recipients so far, and thank you to everyone who has applied — we're very lucky to have such a great developer ecosystem.

Reminder: You can apply for a grant for your own project by filling out this form. We’re excited to hear what you’re working on!


Blacksky Algorithms, by @rudyfraser.com ($1000)

Blacksky is a suite of services on AT Protocol that all serve to amplify, protect, and provide moderation services for the network’s Black users. It also doubles as a working atproto implementation in Rust. AT Protocol creates the opportunity to build community in public (e.g. custom feeds, PDS-based handles) while retaining enough agency to build protective measures (e.g. blocking viewers of a feed, third-party labeling) for the unique issues that community may face, such as anti-blackness. Rudy has worked on Blacksky for the last 10 months; he has built a custom firehose subscriber and feedgen, and has almost completed a PDS implementation from scratch.

SkyBridge, by @videah.net ($800)

SkyBridge is an in-progress proxy web server that translates Mastodon API calls into appropriate Bluesky ones, with the main goal of making existing Mastodon tools/apps/clients such as Ivory compatible with Bluesky. The project is currently going through a significant rewrite from Dart into Rust.

TOKIMEKI, by @holybea.social ($500)

TOKIMEKI is a third-party web client for Bluesky and one of the most popular third-party clients in Japan. Development and release began in March 2023. It supports multi-column layouts and multiple accounts, like TweetDeck, and features lightweight behavior, a variety of themes, and unique features such as bookmarks.

Bluesky Case Bots, by Free Law Project ($500)

Free Law Project is a small non-profit dedicated to making the legal sector more fair and competitive. We run a number of bots on Bluesky that post whenever there are updates in important American legal cases, such as the Trump indictments, cases related to AI, and much more.

Morpho, by Orual ($500)

Morpho is a native Android app for Bluesky and the AT Protocol, written in Kotlin. The primary goals are to have improved performance and accessibility relative to the official app, and to accommodate de-Googled Android devices and security/privacy-minded users while retaining a full set of features.

hexPDS, by nova ($500)

hexPDS is intended to be a production-grade PDS, written in Elixir/Rust. So far, some DID:PLC related operations have been completed, and MST logic and block storage are in progress. The next milestone is loading a repo’s .CAR file and parsing the records.

ATrium, by @sugyan.com ($500)

ATrium is a collection of Rust libraries for implementing AT Protocol applications. It generates code from Lexicon schema and provides type definitions, etc. for use with Rust to handle XRPC requests.

SkeetStats, by @ameliamnesia.xyz ($500)

SkeetStats is currently running and used actively by 400+ users. It tracks account stats for opted-in users and displays them in a variety of charts and graphs, and has a bot that allows users to opt in/out.


Thank you to everyone who has applied for a grant so far! As mentioned above, we have $2,200 remaining in the initial allocation of $10,000 for this program. If you’re an atproto developer and haven’t applied for a grant yet, we encourage you to do so — the application is here.

· 12 min read

Moderation is a crucial aspect of any social network. However, traditional moderation systems often lack transparency and user control, leaving communities vulnerable to sudden policy changes and potential mismanagement. To build a better social media ecosystem, it is necessary to try new approaches.

Today, we’re releasing an open labeling system on Bluesky. “Labeling” is a key part of moderation; it is a system for marking content that may need to be hidden, blurred, taken down, or annotated in applications. Labeling is how a lot of centralized moderation works under the hood, but nobody has ever opened it up for anyone to contribute. By building an open source labeling system, our goal is to empower developers, organizations, and users to actively participate in shaping the future of moderation.

In this post, we’ll dive into the details on how labeling and moderation works in the AT Protocol.

An open network of services

The AT Protocol is an open network of services that anyone can provide, essentially opening up the backend architecture of a large-scale social network. The core services form a pipeline where data flows from where it’s hosted, through a data firehose, and out to the various application indexes.

Data flows from independent account hosts into a firehose and then to applications.

This event-driven architecture is similar to other high-scale systems, where you might traditionally use tools like Kafka for your data firehose. However, our open system allows anyone to run a piece of the backend. This means that there can be many hosts, firehoses, and indexes, all operated by different entities and exchanging data with each other.

Account hosts will sync with many firehoses.

Why would you want to run one of these services?

  • You’d run a PDS (Personal Data Server) if you want to self-host your data and keys to get increased control and privacy.
  • You’d run a Relay if you want a full copy of the network, or to crawl subsets of the network for targeted applications or services.
  • You’d run an AppView if you want to build custom applications with tailored views and experiences, such as a custom view for microblogging or for photos.

So what if you want to run your own moderation?

Decentralized moderation

On traditional social media platforms, moderation is often tightly coupled with other aspects of the system, such as hosting, algorithms, and the user interface. This tight coupling reduces the resilience of social networks as businesses change ownership or as policies shift due to financial or political pressures, leaving users with little choice but to accept the changes or stop using the service.

Decentralized moderation provides a safeguard against these risks. It relies on three principles:

  1. Separation of roles. Moderation services operate separately from other services – particularly hosting and identity – to limit the potential for overreach.
  2. Distributed operation. Multiple organizations providing moderation services reduces the risk of a single entity failing to serve user interests.
  3. Interoperation. Users can choose between their preferred clients and associated moderation services without losing access to their communities.

In the AT Protocol, the PDS stores and manages user data, but it isn’t designed to handle moderation directly. A PDS could remove or filter content, but we chose not to rely on this for two main reasons. First, users can easily switch between PDS providers thanks to the account-migration feature. This means any takedowns performed by a PDS might only have a short-term effect, as users could move their data to another provider. Second, data hosting services aren't always the best equipped to deal with the challenges of content moderation, and those with local expertise and community building skills who want to participate in moderation may lack the technical capacity to run a server.

This is different from ActivityPub servers (Mastodon), which manage both data hosting and moderation as a bundled service, and do not make it as easy to switch servers as the AT Protocol does. By separating data storage from moderation, we let each service focus on what it does best.

Where moderation is applied

Moderation is done by a dedicated service called the Labeler (or “Labeling service”).

Labelers produce “labels” which are associated with specific pieces of user-generated content, such as individual posts, accounts, lists, or feeds. These labels make an assertion about the content, such as whether it contains sensitive material, is unpleasant, or is misleading.

These labels get synced to the AppViews where they can be attached to responses at the client’s request.

Labels are synced into AppViews where they can be attached to responses.

The clients read those labels to decide what to hide, blur, or drop. Since clients choose their labelers and how to interpret the labels, they can decide which moderation systems to support. The chosen labels do not have to be broadcast, except to the AppView and PDS which fulfill the requests. A user’s labeler subscriptions are not public, though the PDS and AppView can privately infer which users are subscribed to which services.

In the Bluesky app, we hardcode our in-house moderation to provide a strong foundation that upholds our community guidelines. We will continue to uphold our existing policies in the Bluesky app, even as this new architecture is made available. With the introduction of labelers, users will be able to subscribe to additional moderation services on top of the existing foundation of our in-house moderation.

The Bluesky application hardcodes its labeling and then stacks community labelers on top.

For the best user experience, we suggest that clients in the AT Protocol ecosystem follow this pattern: have at least one built-in moderation service, and allow additional user-chosen mod services to be layered in on top.

The Bluesky app is a space that we create and maintain, and we want to provide a positive environment for our users, so our moderation service is built in. On top of that, the additional services that users can subscribe to create a lot of options within the app. However, if users disagree with Bluesky’s application-level moderation, they can choose to use another client on the network with its own moderation system. There are additional nuances to infrastructure-level moderation, which we will discuss below, but most content moderation happens at the application level.

How are labels defined?

A limited core set of labels is defined at the protocol level. These labels handle generic cases (“Content Warning”) and common adult content cases (“Pornography,” “Violence”).

A label can cover the content of a post with a warning.

Labelers may additionally define their own custom labels. These definitions are relatively straightforward; they give the label a localized name and description, and define the effects they can have.

interface LabelDefinition {
  identifier: string
  severity: 'inform' | 'alert'
  blurs: 'content' | 'media' | 'none'
  defaultSetting: 'hide' | 'warn' | 'ignore'
  adultContent: boolean
  locales: Record<string, LabelStrings>
}

interface LabelStrings {
  name: string
  description: string
}

Using these definitions, it’s possible to create labels which are informational (“Satire”), topical (“Politics”), curational (“Dislike”), or moderational (“Rude”).
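For example, a “Satire” label like the one mentioned above might be defined as follows. This is a minimal sketch; the identifier, settings, and strings are invented for illustration:

const satireLabel: LabelDefinition = {
  identifier: 'satire',
  severity: 'inform',     // informational, not alarming
  blurs: 'none',          // don't blur the post or its media
  defaultSetting: 'warn', // show a notice unless the user opts out
  adultContent: false,
  locales: {
    en: {
      name: 'Satire',
      description: 'Parody or satirical content that should not be taken at face value.',
    },
  },
}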

Users can then tune how the application handles these labels to get the outcomes they want.

Users configure whether they want to use each label.

Learn more about label definitions in the API Docs on Labelers and Moderation.

Running a labeler

We recently open-sourced Ozone, the powerful labeling service that we use in-house to moderate Bluesky. This is a significant step forward in transparency and community involvement: we’re sharing the same professional-grade tooling that our own moderation team relies on daily.

Ozone is designed for traditional moderation, where a team of moderators receives reports and takes action on them, but you’re free to apply other models with custom software. We recommend that anyone interested in running a labeler try out Ozone. It simplifies the process by helping labelers set up their service account, field reports, and publish labels from the same web interface used by Bluesky’s own moderation team, ensuring that the necessary technical requirements are met. Detailed instructions for setting up and operating Ozone can be found in the readme.

Beyond Ozone’s interface, you can also explore alternative ways of labeling content. A few examples we've considered:

  • Community-driven voting systems (great for curation)
  • Network analysis (e.g., for detecting botnets)
  • AI models

If you want to create your own labeler, you need to build a web service that implements two endpoints to serve its labels to the wider network (a query sketch follows the list below):

  • com.atproto.label.subscribeLabels: a realtime subscription of all labels
  • com.atproto.label.queryLabels: for looking up labels you’ve published on user-generated content
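As a minimal sketch, here is what querying a labeler over XRPC might look like. The hostname is hypothetical, and the parameter and response field names follow our reading of the com.atproto.label.queryLabels lexicon:

// A sketch, not a complete client. Assumes a hypothetical labeler
// at labeler.example.com serving standard XRPC over HTTPS.
async function lookupLabels() {
  const params = new URLSearchParams({
    // a trailing '*' makes the pattern match a prefix
    uriPatterns: 'at://did:plc:example123/app.bsky.feed.post/*',
    limit: '10',
  })
  const res = await fetch(
    `https://labeler.example.com/xrpc/com.atproto.label.queryLabels?${params}`
  )
  const { labels } = await res.json()
  for (const label of labels) {
    // src: the labeler's DID; uri: the labeled content; val: the label value
    console.log(label.src, label.uri, label.val)
  }
}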

Reporting content is an important part of moderation, and reports should be sent privately to whoever is able to act on them. The Labeling service's API includes a specific endpoint designed for this use case. To receive user reports, a labeler can implement:

  • com.atproto.moderation.createReport: files a report from a particular user

When a user submits a report, they choose which of their active Labelers to report to. This gives users the ability to decide who should be informed of an issue. Reports serve as an additional signal for labelers, and they can handle them in a manner that best suits their needs, whether through human review or automated resolution. Appeals from users are essentially another type of report that provides feedback to Labelers. It's important to note that a Labeler is not required to accept reports.
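As a rough sketch, filing a report against a post could look like the following. The host and identifiers are hypothetical, and the request body follows our reading of the createReport lexicon:

// A sketch only: a real call is authenticated as the reporting user,
// typically routed through their PDS rather than called directly.
async function reportPost() {
  await fetch('https://labeler.example.com/xrpc/com.atproto.moderation.createReport', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      reasonType: 'com.atproto.moderation.defs#reasonSpam',
      reason: 'Repeated unsolicited advertising in replies',
      subject: {
        $type: 'com.atproto.repo.strongRef',
        uri: 'at://did:plc:example123/app.bsky.feed.post/3kabc123xyz', // hypothetical post
        cid: 'bafyreihypotheticalexamplecid', // version (CID) of the reported record
      },
    }),
  })
}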

In addition to the technical implementation, your labeler should also have a dedicated Bluesky account associated with it. This account serves as your labeler's public presence within the Bluesky app, allowing you to share information about the types of labels you plan to publish and how users should interpret them. By publishing an app.bsky.labeler.service record, you effectively "convert" your account into a Bluesky labeler, enabling users to discover and subscribe to your labeling service.
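The service record itself might look roughly like this. This is a sketch based on our reading of the app.bsky.labeler.service lexicon, and the label values are illustrative:

// The record lives in the labeler account's own repo under the
// app.bsky.labeler.service collection.
const serviceRecord = {
  $type: 'app.bsky.labeler.service',
  policies: {
    // label values this service may emit; custom values can be described
    // with labelValueDefinitions (omitted here)
    labelValues: ['satire', 'rude'],
  },
  createdAt: new Date().toISOString(),
}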

More details about labels and labelers can be found in the atproto specs.

Infrastructure moderation

Labeling is the basic building block of composable moderation, but there are other aspects involved. In the AT Protocol network, various services, such as the PDS, Relay, and AppView, have ultimate discretion over what content they carry, though it's not the most straightforward avenue for content moderation. Services that are closer to users, such as the client and labelers, are designed to be more actively involved in community and content moderation. These service providers have a better understanding of the specific community norms and social dynamics within their user base. By handling content moderation at this level, clients and labelers can make more informed decisions that align with the expectations and values of their communities.

Infrastructure providers such as Relays play a different role in the network, and are designed to be a common service provider that serves many kinds of applications. Relays perform simple data aggregation, and as the network grows, may eventually come to serve a wide range of social apps, each with their own unique communities and social norms. Consequently, Relays focus on combating network abuse and mitigating infrastructure-level harms, rather than making granular content moderation decisions.

An example of harm handled at the infrastructure layer is content that is illegal to host, such as child sexual abuse material (CSAM). Service providers should actively detect and remove content that cannot be hosted in the jurisdictions in which they operate. Bluesky already actively monitors its infrastructure for illegal content, and we're working on systems to advise other services (like PDS hosts) about issues we find.

Labels drive moderation in the client. The Relay and AppView apply infrastructure moderation.

This separation between backend infrastructure and application concerns is similar to how the web itself works. PDSs function like personal websites or blogs, which are hosted by various hosting providers. Just as individuals can choose their hosting provider and move their website if needed, users on the AT Protocol can select their PDS and migrate their data if they wish to change providers. Multiple companies can then run Relays and AppViews over PDSs; like content delivery networks and search engines, these serve as backbone infrastructure that aggregates and indexes information. To provide a unified experience to the end user, application and labeling systems then provide a robust, opinionated approach to content moderation, the way individual websites and applications set their own community guidelines.

In summary

Bluesky's open labeling system is a significant step towards a more transparent, user-controlled, and resilient way to do moderation. We’ve opened up the way centralized moderation works under the hood for anyone to contribute, and provided a seamless integration into the Bluesky app for independent moderators. In addition, by open sourcing our internal moderation tools, we're allowing anyone to use, run, and contribute to improving them.

This open labeling system is a fundamentally new approach that has never been tried in the realm of social media moderation. In an industry where innovation has been stagnant for far too long, we are experimenting with new solutions to address the complex challenges faced by online communities. Exploring new approaches is essential if we want to make meaningful progress in tackling the problems that plague social platforms today, and we have designed and implemented what we believe to be a powerful and flexible approach.

Our goal has always been to build towards a more transparent and resilient social media ecosystem that can better represent an open society. We encourage developers, users, and organizations to get involved in shaping the future of moderation on Bluesky by running their own labeling services, contributing to the open-source Ozone project, or providing feedback on this system of stackable moderation. Together, we can design a more user-controlled social media ecosystem that empowers individuals and communities to create better online spaces.

· 3 min read

We’re excited to announce the AT Protocol Grants program, aimed at fostering the growth and sustainability of the atproto developer ecosystem.

In the first iteration of this program, we’ll distribute a total of $10,000 in microgrants of $500 to $2,000 per project based on factors like cost, usage, and more.

To apply for a grant, please fill out this form.

Program Details

Over the last few months, we’ve seen independent developers create projects ranging from browser extensions and clients to PDS implementations and atproto tooling. Many of them have become widely adopted in the Bluesky community, too! As we continue on our path toward sustainability, we’re launching this grants program to encourage and support developers building on the AT Protocol.

We will be distributing a total of $10,000, and will publicly announce all grant recipients. We have already distributed $3,000, and the recipients of those grants are detailed below. This is a rolling application, though we will announce when all $10,000 of the initial allocated amount has been distributed.

We’ll evaluate each application based on the submitted project plan and the potential impact. The project should be useful to some user group, whether that’s fellow developers or Bluesky users. To be eligible for a grant, your project must be open source. We pay out grants via public GitHub Sponsorships.

In addition, we’ve partnered with Amazon Web Services (AWS) to offer $5,000 in AWS Activate¹ credits to atproto developers. These credits are applied to your AWS bill to help cover costs from cloud services, including machine learning, compute, databases, storage, containers, dev tools, and more. Simply check a box in your atproto grant application if you’re interested in receiving these credits as well.

Initial AT Protocol Grant recipients

Ahead of Bluesky’s public launch in February, Bluesky PBC extended grants to three developers as a pilot program. We awarded $1,000 each to the following projects and developers:

AT Protocol Python SDK — Ilya Siamionau

AT Protocol Dart SDK — Shinya Kato

Listed on the homepage of the Bluesky API documentation site, these two SDKs have quickly become popular packages with atproto developers. We’re also especially impressed by their own documentation sites!

SkyFeed — redsolver

SkyFeed has helped bring Bluesky’s vision for custom feeds to life — now, there are more than 40,000 custom feeds that users can subscribe to, and the vast majority of them are built with SkyFeed.

Contact

We’re excited to continue to find ways to help developers make their projects built on atproto sustainable. Again, you can submit an application for an AT Protocol grant here.

Please feel free to leave questions or comments on the GitHub discussion for this announcement here.

Footnotes

  1. AWS Activate Credits are subject to the program's terms and conditions.

· 3 min read

This is a guest blog post by Skygaze, creators of the For You feed. You can check out For You, the custom feed that learns what you like, at https://skygaze.io/feed.

Last Sunday, 70 engineers came together at the YC office in San Francisco for the first Bluesky AI Hackathon. The teams took full advantage of Bluesky’s complete data openness to build 17 pretty spectacular projects, many of which genuinely surprised us. My favorites are below. Thank you to Replicate for donating $50 of LLM and image model credits to each participant and sponsoring a $1000 prize for the winning team!

The 17 projects covered a wide range of categories: location-based feeds, feeds with dynamic author lists, collaborative image generation, text moderation, NSFW image labeling, creator tools, and more. The top three stood out for their creativity, practicality, and completeness (despite having only ~6 hours to build), and we’ll share a bit about them below.

Convo Detox

@paritoshk.bsky.social and team came in first place with Convo Detox, a bot that predicts when a thread is at high risk of becoming toxic and interjects to defuse tension. We were particularly impressed by the team’s use of a self-hosted model, trained on Reddit data, built specifically to predict conversations that are likely to get heated. As a proof of concept they deployed it as a bot that can be summoned via mention, but in the near future this would make for a great third-party moderation label.

SF IRL

This is a bot that detects and promotes tech events happening in SF. In addition to flagging events, it keeps track of the accounts posting about SF tech and serves a feed with all of the posts from those accounts. We think simple approaches to dynamic author lists are a very interesting 90/10 solution for customized feeds and (if designed reasonably) could be both easier for the feed maintainer and higher quality for feed consumers.

NSFW Image Detection

On Bluesky, users can set whether they want adult content to show up in their app. Beyond this level of customization, whether or not an image is labeled as NSFW can be customized as well — people have a wide variety of preferences. This team trained a model to classify images into a large number of NSFW categories, which would theoretically fit nicely into the third-party moderation labeler interface. It’s neat that their choice of architecture extends naturally to processing text in tandem with images.

Other Projects

Other noteworthy projects included translation bots, deepfake detectors, a friend matchmaker, and an image generation tool that let people build image prompts together in reply threads. It was genuinely impressive and exciting to see what folks with no previous AT Proto experience were able to put together (and often deploy!) in only a few hours.

Additional Resources

We prepared some starter templates for the hackathon, and want to share them below for anyone who couldn’t attend the event in person!

And if you’re interested in hosting your own Bluesky hackathon but don’t know where to start, please feel free to copy all of our invite copy, starter repos, and datasets.