Launch! The Rewire API
Takeaways from the product launch of the Rewire API
Last week saw the launch of the Rewire API, our flagship product for detecting toxic content online. It was fantastic to be able to share our vision publicly, to answer your questions, and to give a live demonstration of just how effective our tech is.
This blog post recaps some of the key takeaways from the session. Read on to find out why online safety is so important, the problems many content moderation solutions encounter, and the innovative approaches Rewire is using to tackle them. And, for those who missed out on the launch this time, watch this space for future live sessions!
The toxic content problem is growing.
Our research shows that toxic content is a difficult problem to measure, let alone tackle. Harmful content often goes underreported, with measures such as surveys failing to capture the scale of the problem. Even so, the available statistics are staggering. Toxic content is a growing problem in terms of volume, impact, and the variety of harms it causes.
This problem has a tangible impact: on users experiencing harm to their mental health; on communities suffering disruption; on brands who face reputational damage; on governments who fail in their duty to keep citizens safe; and on platforms, who face financial, legal, and marketing risks.
Facebook is now removing 10x more hate than in 2017.
1 in 2 people in the US have personally experienced online harassment.
9 in 10 adults want social media platforms to do more to tackle abuse targeted at children.
Our approach: safety by design.
Rewire offers a step-change in content moderation with our safety by design approach. This means embedding our AI across a whole platform, where it can find toxic content at the first point of contact—so human moderators don’t have to. Because we believe that if users are reporting toxic content, we’ve already made a mistake.
Instead, the Rewire API can be used to curate content as it appears, in real time. Platforms can remove or down-rank toxic content, or offer content warnings. With this, the Rewire API ensures that users are kept safe, and that moderators can focus their attention on other content. The result is a consistent, trustworthy, and scalable content moderation solution.
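As a rough illustration of the moderation flow described above, the sketch below maps a toxicity score to one of the actions mentioned (remove, down-rank, or warn). The endpoint URL, the `classify` request shape, the `toxicity` response field, and the thresholds are all hypothetical assumptions for illustration, not Rewire's actual interface.

```python
# Hypothetical sketch of real-time moderation with a toxicity-scoring API.
# Endpoint, payload, response fields, and thresholds are assumptions.
import json
from urllib import request

API_URL = "https://api.rewire.example/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def classify(text: str) -> float:
    """Send content to the (assumed) API and return a toxicity score in [0, 1]."""
    req = request.Request(
        API_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["toxicity"]  # assumed response field

def action_for(score: float) -> str:
    """Map a toxicity score to a moderation action (illustrative thresholds)."""
    if score >= 0.9:
        return "remove"      # clearly toxic: take it down
    if score >= 0.7:
        return "downrank"    # likely toxic: reduce its reach
    if score >= 0.5:
        return "warn"        # borderline: show a content warning
    return "allow"

# Example: a platform would call classify() on each new post as it appears,
# then apply action_for() to the returned score.
```

Keeping the score-to-action mapping separate from the API call lets a platform tune thresholds per surface (comments, DMs, live chat) without touching the classification step.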
Proprietary training makes our AI unique.
Rewire’s proprietary training process has been developed through years of research and testing. Our unique approach involves three stages, which together produce the most robust online safety products on the market.
1. How we acquire data: We target our data collection to address any gaps in our AI, keeping it up to date with changing language.
2. How we label data: We have built a custom annotation interface designed specifically for online safety, with proprietary taxonomies and labelling systems which guide the work of our own expert annotators.
3. How we use data: We believe the best AI must adapt to specific needs. That’s why we iteratively train and re-train our AI using human- and model-in-the-loop training, which combines the power of AI with the expertise of our trained annotators.
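The human- and model-in-the-loop cycle in step 3 resembles a classic active-learning loop: the model flags content it is unsure about, expert annotators label it, and the model is retrained on the expanded set. The sketch below is a generic, toy illustration of that pattern; every function and the keyword heuristic are stand-in assumptions, not Rewire's actual pipeline.

```python
# Toy sketch of a human-and-model-in-the-loop training cycle.
# All functions and data here are illustrative stand-ins.

def train(labelled):
    """Stand-in for model training: just memorise labelled texts."""
    return {text: label for text, label in labelled}

def predict_with_confidence(model, text):
    """Stand-in prediction: known texts are confident, unknown ones are not."""
    if text in model:
        return model[text], 1.0
    return "non-toxic", 0.5  # low confidence -> candidate for annotation

def annotate(text):
    """Stand-in for the expert annotation interface (toy keyword rule)."""
    return "toxic" if "hate" in text else "non-toxic"

def training_cycle(labelled, unlabelled, threshold=0.8):
    """One loop: route uncertain items to humans, then retrain."""
    model = train(labelled)
    for text in unlabelled:
        _, confidence = predict_with_confidence(model, text)
        if confidence < threshold:                    # model is unsure...
            labelled.append((text, annotate(text)))   # ...ask a human
    return train(labelled)                            # retrain on expanded set
```

The point of the pattern is that annotator effort is spent only where the model is uncertain, which is one way targeted data collection (step 1) and expert labelling (step 2) feed back into training (step 3).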
Like what you see?
Start working with Rewire’s technology today by scheduling a demo with our team. Or, if you’d like to chat, contact us directly.
We look forward to working with you!