About Bluesky:
Bluesky is on a mission to build a safer, decentralized social media app for everyone. As a System Integrity Engineer, you'll help create the backend tools and systems that protect the platform from spam and abuse.

Role Overview:
We are looking for a System Integrity Engineer to lead technical efforts in designing and maintaining scalable safety systems across our app and protocol. You will be responsible for integrating new algorithms and ML solutions, developing user-friendly tools, and optimizing workflows to protect our users and enforce community guidelines. You will work closely with the Trust & Safety (T&S) lead, as well as frontend and backend engineers, to build an app that prioritizes safety, security, and transparency.

Key Responsibilities:
- Moderation Systems
  - Implement systems in our moderation tool to ensure accurate and timely review of user reports, along with proactive detections that reduce the need for users to file reports.
  - Lead integration with third-party safety services; maintain and enhance automated systems, ensuring seamless integration with new surfaces and tools.
  - Manage the transition of moderation workflows from email to an in-app inbox system.
- Regulatory Compliance
  - Implement systems to restrict illegal or sanctioned content.
  - Help design and integrate detections for underage users, such as ID verification tools, to drive a safer product experience for minors on the app.
- Tool Development and System Maintenance
  - Develop user-friendly tools for bulk content detection, labeling, and takedowns.
  - Consolidate multiple tools into a unified interface for improved efficiency and usability.
  - Ensure uptime and reliability of all safety systems, with robust alerting mechanisms for potential downtime.
What We're Looking For:
- Strong fundamental programming and software engineering skills.
- Extensive experience with either:
  - TypeScript, React, and Node.js (e.g. 5+ years), or
  - Go (e.g. 3+ years).
- Experience building web applications in production with 1,000+ users.
- Willingness to be on-call.
- Experience integrating ML systems and building tools for content moderation and safety.
- Strong understanding of content moderation challenges, including CSAM detection, spam prevention, and region-specific content restrictions.
- Proficiency in developing workflows and automation systems for routing, tagging, and processing user reports.
Note:
- Please be aware that this role may involve exposure to sensitive content. This may include graphic, offensive, or otherwise challenging material associated with content moderation tasks.
To learn more about us, check out:
We're a fully remote team, though significant overlap of working hours with US/Pacific is required. For full-time roles, we offer health, dental, and vision insurance.

Please attach your cover letter, resume, and links to your GitHub, GitLab, or a portfolio of past work within the same attachment.