A new tool called CLR:SKY is now available for users on Bluesky, a decentralized social media platform. Its purpose is simple but significant: offer real-time feedback on the tone of a post before it's published. Instead of removing harmful posts after the damage is done, the tool nudges users to reconsider them before they ever go live.
Developed by online safety expert Fay M. Johnson, CLR:SKY operates quietly in the background as users type. If a post starts to lean aggressive or inflammatory, a live icon—called the Toxicity Weather Report—reflects that shift. The stormier the symbol, the more charged the tone. That feedback gives users a moment to pause and reconsider how their message may be received.
How It Works
CLR:SKY includes three key features, each designed to promote more thoughtful communication:
Toxicity Weather Report
This live sentiment meter updates in real time while you type. The icon changes based on tone, giving visual feedback without judgment or interruption (a sketch of one possible tone-to-icon mapping follows the feature list).
GenAI Rewrite Suggestions
If users want help rephrasing a post, CLR:SKY can provide instant rewrites. These suggestions maintain the core message but add empathy and clarity. It’s an optional tool, not an editor.
Perspective Assistant
For users involved in longer threads or debates, this feature adds insight into opposing viewpoints. It encourages replies that engage instead of escalate.
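CLR:SKY hasn't published how its scoring works, so the following is only a minimal sketch: it assumes a toxicity classifier that returns a score between 0 and 1, and shows how that score could drive a weather-style indicator. The toxicity_score placeholder, the thresholds, and the icon labels are illustrative assumptions, not CLR:SKY's actual implementation.

```python
# Hypothetical sketch of the "Toxicity Weather Report" idea: map a
# toxicity score in [0.0, 1.0] onto a weather-style icon as a draft
# is typed. The scorer, thresholds, and labels below are assumptions;
# CLR:SKY's actual model and cutoffs are not public.

def toxicity_score(text: str) -> float:
    """Placeholder scorer. A real integration would call a trained
    classifier or hosted moderation model instead of a word list."""
    heated_words = {"idiot", "stupid", "hate", "pathetic"}
    words = text.lower().split()
    hits = sum(1 for word in words if word.strip(".,!?") in heated_words)
    return min(1.0, 4 * hits / max(len(words), 1))


def weather_for_score(score: float) -> str:
    """Translate a score into the icon a user would see while typing."""
    if score < 0.25:
        return "clear skies"      # neutral or friendly tone
    if score < 0.50:
        return "partly cloudy"    # tone is drifting, still fine
    if score < 0.75:
        return "rain"             # charged language detected
    return "thunderstorm"         # likely to read as hostile


if __name__ == "__main__":
    draft = "You'd have to be an idiot to believe this."
    print(weather_for_score(toxicity_score(draft)))  # -> partly cloudy
```

A real client would presumably re-score the draft on a short debounce as the user types and swap the icon in place, surfacing only the weather symbol rather than a raw number.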
Each of these features works inside a user's Bluesky experience, requiring no new account or sign-up process. Users simply log in through CLR:SKY with their Bluesky credentials. The platform does not store user data, which heads off the most common privacy concern with third-party tools.
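CLR:SKY hasn't documented its integration, but signing in to Bluesky from a third-party tool generally means creating a session over the AT Protocol. A minimal sketch using the community atproto Python SDK is below; the handle and app password are placeholders, and the final call is only there to show that publishing happens through the authenticated session rather than a separate account.

```python
# Minimal sketch of signing in to Bluesky from a third-party tool
# using the community `atproto` Python SDK (pip install atproto).
# CLR:SKY's actual integration isn't documented; this only shows what
# credential-based login against the AT Protocol looks like.
from atproto import Client

client = Client()

# Use an app password (Settings -> App Passwords), not the main
# account password, when authorizing third-party tools.
client.login("example.bsky.social", "xxxx-xxxx-xxxx-xxxx")

# Once the session exists, the tool can publish the reviewed draft.
client.send_post(text="Draft that passed the tone check.")
```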
Who’s Behind It
Fay M. Johnson created CLR:SKY after years leading trust and safety teams at Twitter, Meta, and Nextdoor. While at Harvard’s Applied Social Media Lab, Johnson focused on how digital tools could shape healthier online behavior. That research led directly to CLR:SKY’s launch.
According to Johnson, “Content moderation isn’t about shutting people down. It’s about creating space for people to express their views without turning disagreement into conflict. We’re not telling people what to say—we’re helping them see how it might land.”
The system doesn’t prevent anyone from posting. It simply makes the tone visible in a new way. That alone, research shows, can make a difference. A recent study found that real-time toxicity warnings reduced harmful language by over 30%. CLR:SKY builds on that research by applying it in a user-controlled environment.
A Shift in Strategy for Online Speech
Most social media platforms lean on after-the-fact moderation. That includes takedowns, reports, or content filters applied after a post spreads. CLR:SKY flips the script. It adds an internal moment of reflection—just enough time for users to adjust their tone if they want to.
The timing is relevant. Public conversation across platforms has become more polarized. Tensions flare quickly. With this new tool, Bluesky users get an early heads-up when their post may veer into risky territory.
This doesn’t mean watering things down. Johnson emphasizes that disagreement is still part of the conversation. The tool’s intent is to help people hold firm to their views while engaging in a way that keeps the dialogue going, rather than cutting it off.
What It Means for Bluesky
Bluesky has marketed itself as a decentralized, user-first alternative to mainstream platforms. CLR:SKY fits that philosophy by giving control back to the user. There’s no outside moderator. Just tools that work quietly, prompting awareness without pushing an agenda.
The integration also shows how third-party developers can add meaningful value to Bluesky’s growing infrastructure. As more tools like this emerge, the platform could serve as a testing ground for new approaches to digital speech and online safety.
CLR:SKY introduces a low-friction way to rethink how people engage online. Instead of policing content after it’s public, it gives users a tool to self-check their message before hitting publish. For Bluesky users, that might mean fewer regrets, fewer fights, and more room for productive conversation.
Whether the broader social media world will follow this path remains to be seen. But for now, Bluesky has a new feature that treats tone not as a filter, but as something people can tune themselves.