Tackling online abuse:
why use AI?

Rewire’s CEO and co-founder, Bertie Vidgen, spoke to the UK Parliament’s Petitions Committee about tackling online abuse, and how AI can help.

Many AI systems for content moderation fail to pick up on the types of expressions we want them to detect, and sometimes they take down legitimate speech. AI is brittle: it has blind spots and biases. Which raises the question: why use AI at all?

The answer is scale. Given the sheer volume of content needing attention, the alternatives are simply not viable. Human-only moderation is inconsistent, impossible to scale, and exposes moderators to toxic content in the process.

The real question, then, is not whether we should use AI, but how we use it, and how we use it effectively.

Watch below to see Bertie explain these issues to Parliament, outlining the need for a hybrid, flexible approach to AI, the challenges that must be overcome to implement it effectively, and how AI could change the face of content moderation.


Get started.

Start working with Rewire’s technology today
by scheduling a demo with our team.
Or, if you’d like to chat, contact us directly.
We look forward to working with you!