Racial bias observed in hate speech detection algorithm from Google
techcrunch.com
Scribd doc of study

Understanding what makes something offensive or hurtful is difficult enough that many people can't figure it out, let alone AI systems. And people of color are frequently left out of AI training sets. So it's little surprise that Alphabet/Google-spawned Jigsaw manages to trip over both of these issues at once, flagging slang used by black Americans as toxic.
To be clear, the study was not specifically about evaluating the company's hate speech detection algorithm, which has faced issues before. Instead, the algorithm is cited as a contemporary attempt to computationally dissect speech and assign a "toxicity score", and it appears to fail in a way indicative of bias against black American speech patterns.
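For context, Jigsaw exposes this scoring through its Perspective API. A minimal sketch of what such a toxicity request looks like, based on Perspective's public docs (no network call is made here; actually sending it needs a real API key from Jigsaw, and the exact response shape should be checked against the current docs):

```python
import json

# Build a Perspective-style analyzeComment request body.
# Field names ("comment", "languages", "requestedAttributes", "TOXICITY")
# follow Perspective's published request format; this sketch only
# constructs the payload, it does not POST it anywhere.
def build_toxicity_request(text: str) -> dict:
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_toxicity_request("some comment text")
print(json.dumps(payload, indent=2))
```

The response, per the docs, carries a summary score between 0 and 1 per requested attribute, which is the "toxicity score" the study probes with black American English samples.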
OK, so it is an Alphabet subsidiary's algorithm, but it's almost exactly the same as the thread title.
This inability to understand languages in context will certainly delay the inevitable robot apocalypse. Or perhaps start it. Who knows? The point is that languages are hard, AI is even worse at reading context than people on the internet, and robots are dumb, so it is fine that we smack them with hockey sticks. Or something like that.
Also, black lingo is hella rude, but I guess it is racist to say that. Still, as Sorry to Bother You clearly showed us, a white voice will make black people more acceptable to everyone, at least until the corporate giants create equisapiens. Wait... Google is a corporate giant. It all begins to make perfect sense: horse people versus the robot uprising means the capitalists benefit from the power struggles they manufactured!
I think I drifted off point somewhere.