When white supremacists plan rallies like the one held a few days ago in Charlottesville, Virginia, they often organize their events on Facebook, pay for supplies with PayPal, book their lodging with Airbnb and ride with Uber. Tech companies, for their part, have been taking pains to distance themselves from these customers.

But sometimes it takes more than automated systems or complaints from other users to identify and block those who promote hate speech or violence, so companies are finding novel ways to spot and shut down content they deem inappropriate or dangerous. People don't tend to share their views on their Airbnb accounts, for example. But after matching user names to posts on social-media profiles, the company canceled dozens of reservations made by self-identified Nazis who were using its app to find rooms in Charlottesville, where they were heading to protest the removal of a Confederate statue.

At Facebook, which relies on community feedback to flag hateful content for removal, private groups of like-minded people can be havens for extremists, falling through gaps in the content-moderation system. The company is working quickly to improve its machine-learning systems so they can automatically identify posts that should be reviewed by human moderators.
[ Advertising Age | 2017-08-17 00:00:00 UTC ]