API Documentation

The Rewire API detects abusive, hateful, profane, violent and sexually explicit language, as well as positive language, in English text. It can be easily and securely integrated into any application or workflow.
Rewire does not store any data you send.

Getting Started

  1. We send you a personal API key.
  2. You specify your API key when you send API requests.
  3. The API returns content assessments in real time.

Request Format

The Rewire API expects POST requests to our endpoint URL https://api.rewire.online/classify. Your API key is sent as a header (x-api-key) and your input text as a request parameter (text).

Response Format

For a given request, the API’s response body is a JSON object containing:

  1. text: the input text you sent
  2. scores: confidence scores for abuse, hate, violent, sexually explicit and positive language, plus a binary flag for profanity
  3. request_time: the time you sent your request to the Rewire API
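
Once decoded, the response body can be inspected like any JSON object. A minimal Python sketch of reading the fields above (the `result` dict here is hard-coded to mirror the documented response shape, rather than fetched from the API):

```python
# Inspect a decoded API response. The dict below is hard-coded to
# mirror the documented fields; a real call would use response.json().
result = {
    "text": "This is an example text",
    "scores": {
        "abuse": 0.0059,
        "hate": 0.0027,
        "profanity": 0,
        "violent": 0.0012,
        "sexually_explicit": 0.0046,
        "positive": 0.0035,
    },
    "request_time": "2022-10-27T13:57:48.046538",
}

print(result["text"])             # the input text you sent
print(result["scores"]["abuse"])  # confidence score for abuse
print(result["request_time"])     # when the request was sent
```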

Sample Implementation

import requests

api_url = "https://api.rewire.online/classify"
api_key = "EXAMPLE_API_KEY"

input_text = "This is an example text"

response = requests.post(api_url,
                         json={"text": input_text},
                         headers={"x-api-key": api_key})

print(response.json())
curl -X 'POST' \
  'https://api.rewire.online/classify' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: EXAMPLE_API_KEY' \
  -d '{
  "text": "This is an example text"
}'
const request = require('request');

const options = {
  method: 'POST',
  url : 'https://api.rewire.online/classify',
  headers: {
    'content-type': 'application/json',
    'x-api-key': 'EXAMPLE_API_KEY'
  },
  body: {text: 'This is an example text'},
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);

  console.log(body);
});
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {

	url := "https://api.rewire.online/classify"

	payload := strings.NewReader(`{"text": "This is an example text"}`)

	req, _ := http.NewRequest("POST", url, payload)

	req.Header.Add("content-type", "application/json")
	req.Header.Add("x-api-key", "EXAMPLE_API_KEY")

	res, _ := http.DefaultClient.Do(req)

	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(string(body))

}

Sample Response

{
  "text": "This is an example text",
  "scores": {
    "abuse": 0.005963563919067383,
    "hate": 0.0026835622265934944,
    "profanity": 0,
    "violent": 0.0012395522375959901,
    "sexually_explicit": 0.0046336892225937917,
    "positive": 0.0034892047596738217
  },
  "request_time": "2022-10-27T13:57:48.046538"
}

Notes

We define abuse as content that is aggressive, insulting or threatening.

We define hate as abuse targeted at a protected group or at its members for being a part of that group. Protected groups are based on characteristics such as gender, race or religion.

We define profanity as a word or expression that is socially or culturally offensive, usually due to being obscene or explicit.

We define violent content as graphic descriptions of violence or injury, as well as content that threatens, condones, glamorises or incites violence.

We define sexually explicit language as expressions that describe, refer to or clearly relate to sexual activity.

We define positive content as clear expressions of positive emotions or positive sentiment about someone or something.
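
In practice, the confidence scores are often turned into a yes/no decision by applying a threshold. A minimal Python sketch of this (the 0.5 cut-off and the choice of labels are illustrative assumptions, not official recommendations):

```python
# Decide whether a response's scores suggest toxic content.
# The 0.5 threshold is an illustrative assumption, not an official value.
TOXIC_LABELS = {"abuse", "hate", "violent", "sexually_explicit"}

def is_toxic(scores, threshold=0.5):
    """Return True if the binary profanity flag is set, or if any
    toxic label's confidence score exceeds the threshold."""
    if scores.get("profanity"):
        return True
    return any(scores.get(label, 0) > threshold for label in TOXIC_LABELS)

print(is_toxic({"abuse": 0.9, "hate": 0.1, "profanity": 0}))   # True
print(is_toxic({"abuse": 0.01, "hate": 0.0, "profanity": 0}))  # False
```

The right threshold depends on your use case: a lower value catches more toxic content at the cost of more false positives.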

Right now, the Rewire API works for English text, but we are working on making it more multilingual. Models for French, Italian, German, Spanish, Arabic and Mandarin are already available separately.

Get started

Sign up for your free trial access to the Rewire API below, which offers up to 10,000 calls per month. Or upgrade to one of our paid plans to find and stop even more toxic content.