The Rewire API
If you manage an online platform or community, you know how important it is to keep your users safe. But truly effective solutions to detect toxic online content at scale are hard to come by. Rewire is here to change that with our groundbreaking API, which finds and stops toxic content at the source.
How it works
Cutting-edge AI technologies power the Rewire API, our flagship tool for finding and stopping toxic content online.
We believe that when it comes to content moderation, there shouldn’t be room for error—because error means people are exposed to harm. That’s why the Rewire API is more accurate, robust, and fair than other content moderation systems on the market. It can provide trustworthy assessments of all of your content in real time—empowering you to keep your users safe.
Accurate and Trustworthy
Reliably detect all forms of toxic content with pinpoint precision.
Lightning-Fast and Scalable
Get real-time assessments for any amount of content.
Use our API in any application with just a few lines of code.
Total content coverage, from hate speech to profanity and abuse.
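Integrating a moderation API like this typically takes only a small helper that packages content and an access key into an HTTP request. The sketch below is purely illustrative: the endpoint URL, field names, and response shape are assumptions, not Rewire's documented API.

```python
import json
import urllib.request

# NOTE: hypothetical endpoint and payload format, for illustration only.
API_URL = "https://api.example.com/v1/moderate"
API_KEY = "YOUR_ACCESS_KEY"  # obtained by signing up

def build_request(text: str) -> urllib.request.Request:
    """Package a piece of user content for a toxicity assessment."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Sending the request (commented out: needs a live endpoint and real key):
# with urllib.request.urlopen(build_request("some user comment")) as resp:
#     assessment = json.load(resp)  # e.g. {"toxic": true, "label": "hate"}
```

In a real deployment the same helper would be called once per piece of user-generated content, with the returned assessment driving the platform's moderation action.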
We tested the Rewire API against five major competitors to assess how well it detects hate speech. We used four datasets, combining real-world hate speech with hand-crafted English-language test cases, for the most robust benchmarking.
Here you can see the Rewire API outperforming all competitors in terms of F1: a score which combines precision (how much of the content identified as hateful actually is hateful) and recall (how much of all the hateful content the system successfully identifies). The Rewire API scores consistently high on both counts, marking it out as the leading content moderation solution.
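The F1 score described above is the harmonic mean of precision and recall, computed from detection counts. The worked numbers below are illustrative, not benchmark results.

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    # Precision: share of flagged content that is truly hateful.
    precision = true_positives / (true_positives + false_positives)
    # Recall: share of all hateful content that gets flagged.
    recall = true_positives / (true_positives + false_negatives)
    # F1: harmonic mean, high only when both precision and recall are high.
    return 2 * precision * recall / (precision + recall)

# Illustrative example: a system that catches 80 of 100 hateful items
# (recall 0.8) and is right in 80 of its 100 flags (precision 0.8)
# scores an F1 of 0.8.
print(f1_score(true_positives=80, false_positives=20, false_negatives=20))
```

Because F1 punishes an imbalance between the two components, a system cannot score well by over-flagging (high recall, low precision) or by flagging only the most obvious cases (high precision, low recall).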
*Please contact us for further details.
The Rewire API performs consistently and most accurately across different target groups
We also measured how our AI detects hate directed at different protected groups (such as women, trans people, and Black people), and the results were crystal clear: the Rewire API is almost 100% consistent across groups, while on some groups competitors' content moderation tools performed only 25% as well.
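One simple way to make the cross-group consistency claim concrete is to compute a detection score per target group and take the ratio of the worst group to the best: a ratio near 1.0 means near-equal performance, while 0.25 would match the competitor figure cited above. The groups and counts below are made up for illustration, not Rewire's benchmark data.

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Share of hateful content aimed at a group that was caught."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical per-group detection counts: (caught, missed).
per_group = {
    "group A": (96, 4),
    "group B": (94, 6),
    "group C": (95, 5),
}

scores = {group: recall(tp, fn) for group, (tp, fn) in per_group.items()}

# Consistency ratio: worst-performing group relative to the best.
consistency = min(scores.values()) / max(scores.values())
print(f"consistency ratio: {consistency:.2f}")
```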
Distinguishing between toxic and non-toxic language is no problem for the Rewire API
We use a proprietary training process to ensure the Rewire API is finely tuned to the nuances of language. For hate speech, this means identifying distinctions which prove too subtle for other AI systems to detect.
The Rewire API is trained to recognise instances of counter-speech, reclaimed slurs, and neutral uses of identity terms. Competitors' tools consistently fail to make these fine-grained distinctions. For instance, many of them classify statements like "black lives matter" as hateful.
Start using the Rewire API for free today. Sign up below to receive your access key.