YouTube: More AI can fix AI-generated ‘bubbles of hate’



Facebook, YouTube and Twitter faced another online hate crime grilling today by UK parliamentarians visibly frustrated at their continued failures to apply their own community guidelines and take down reported hate speech.

The UK government has this year pushed to raise online radicalization and extremist content as a G7 priority — and has been pushing for takedown timeframes for extremist content to shrink radically.

The broader issue of online hate speech remains a hot-button political topic, especially in Europe, with Germany passing a social media hate speech law in October and the European Union’s executive body pushing for social media firms to automate the flagging of illegal content to accelerate takedowns.

In May, the UK’s Home Affairs Committee also urged the government to consider a regime of fines for social media content moderation failures — accusing tech giants of taking a “laissez-faire approach” to moderating hate speech content on their platforms.

The committee revisited their performance in another public evidence session today.

“What it is that we have to do to get you to take it down?”

Addressing Twitter, Home Affairs Committee chair Yvette Cooper said her staff had reported a series of violent, threatening and racist tweets via the platform’s standard reporting systems in August — many of which still had not been removed, months on.

She did not try to hide her exasperation as she went on to question why certain antisemitic tweets previously raised by the committee during an earlier public evidence session had also still not been removed — despite Twitter’s Nick Pickles agreeing at the time that they broke its community standards.

“I’m kind of wondering what it is we have to do,” said Cooper. “We sat in this committee in a public hearing and raised a clearly vile antisemitic tweet with your organization… but it is still there on the platform — what it is that we have to do to get you to take it down?”

Twitter’s EMEA VP for public policy and communications, Sinead McSweeney, who was fielding questions on behalf of the company this time, agreed that the tweets in question violated Twitter’s hate speech rules but said she was unable to provide an explanation for why they had not been taken down.

She noted the company has newly tightened its rules on hate speech — and said specifically that it has raised the priority of bystander reports, whereas previously it would have placed more priority on a report if the person who was the target of the hate was also the one reporting it.

“We haven’t been good enough at this,” she said. “Not only we haven’t been good enough at actioning, but we haven’t been good enough at telling people when we have actioned. And that is something that — particularly over the last six months — we have worked very hard to change… so you will definitely see people getting much, much more transparent communication at the individual level and much, much more action.”

“We are now taking actions against 10 times more accounts than we did in the past,” she added.

Cooper then turned her fire on Facebook, questioning the social media giant’s public policy director, Simon Milner, about Facebook pages containing violent anti-Islamic imagery, including one that appeared to be encouraging the bombing of Mecca, and pages set up to share photos of schoolgirls for the purposes of sexual gratification.

He claimed Facebook has fixed the problem of “lurid” comments being able to be posted on otherwise innocent photographs of children shared on its platform — something YouTube has also recently been called out for — telling the committee: “That was a fundamental problem in our review process that has now been fixed.”

Cooper then asked whether the company is living up to its own community standards — which Milner agreed do not permit people or organizations that promote hate against protected groups to have a presence on its platform. “Do you think that you are strong enough on Islamophobic organizations and groups and individuals?” she asked.

Milner avoided answering Cooper’s general question, instead narrowing his response to the specific individual page the committee had flagged — saying it was “not obviously run by a group” and that Facebook had taken down the specific violent image highlighted by the committee but not the page itself.

“The content is disturbing but it is very much focused on the religion of Islam, not on Muslims,” he added.

This week a decision by Twitter to close the accounts of far right group Britain First has turned a critical spotlight on Facebook — as the company continues to host the same group’s page, apparently preferring to selectively remove individual posts, even though Facebook’s community standards forbid hate groups that target people with protected characteristics (such as religion, race and ethnicity).

Cooper appeared to miss an opportunity to press Milner on the specific point — and earlier today the company declined to respond when we asked why it has not banned Britain First.

Giving an update earlier in the session, Milner told the committee that Facebook now employs over 7,500 people to review content — having announced a 3,000 bump in headcount earlier this year — and said that overall it has “around 10,000 people working in safety and security” — a figure he said it will be doubling by the end of 2018.

Areas where he said Facebook has made the most progress vis-à-vis content moderation are around terrorism, and nudity and pornography (which he noted is not permitted on the platform).

Google’s Nicklas Berild Lundblad, EMEA VP for public policy, was also attending the session to field questions about YouTube — and Cooper initially raised the issue of racist comments not being taken down despite being reported.

He said the company is hoping to be able to use AI to automatically pick up these types of comments. “One of the things that we want to get to is a situation in which we can actively use machines in order to scan comments for attacks like these and remove them,” he said.

Cooper pressed him on why certain comments reported to it by the committee had still not been removed — and he suggested reviewers might still be looking at a minority of the comments in question.

She flagged a comment calling for an individual to be “put down” — asking why that specifically had not been removed. Lundblad agreed it appeared to be in violation of YouTube’s guidelines but appeared unable to provide an explanation for why it was still there.

Cooper then asked why a video made by the neo-Nazi group National Action — which is proscribed as a terrorist organization and banned in the UK — had kept reappearing on YouTube after being reported and taken down, even after the committee raised the issue with senior company executives.

Eventually, after “about eight months” of the video being repeatedly reposted on different accounts, she said it finally appears to have gone.

But she contrasted this sluggish response with the alacrity with which Google removes copyrighted content from YouTube. “Why did it take that much effort, and that long, just to get one video removed?” she asked.

“I can understand that’s disappointing,” responded Lundblad. “They’re sometimes manipulated so you have to figure out how they manipulated them to take the new versions down.

“And we’re now looking at removing them faster and faster. We’ve removed 135 of these videos, some of them within a few hours and with no more than five views, and we’re committed to making sure this improves.”

He also claimed the rollout of machine learning technology has helped YouTube improve its takedown performance, saying: “I think that we will be closing that gap with the help of machines and I’m happy to review this in due time.”

“I really am sorry about the individual example,” he added.

Pressed again on why such a discrepancy existed between the speed of YouTube copyright takedowns and terrorist takedowns, he responded: “I think that we’ve seen a sea change this year” — flagging the committee’s contribution to raising the profile of the problem and saying that as a result of increased political pressure Google has recently expanded its use of machine learning to additional types of content takedowns.

In June, facing rising political pressure, the company announced it would be ramping up AI efforts to try to speed up the process of identifying extremist content on YouTube.

After Lundblad’s remarks, Cooper pointed out that the same video remains online on Facebook and Twitter — querying why all three companies haven’t been sharing data about this type of proscribed content, despite their previously announced counterterrorism data-sharing partnership.

Milner said the hash database they jointly contribute to is currently limited to just two global terrorist organizations — ISIS and Al-Qaeda — and would therefore not pick up content produced by banned neo-Nazi or far right extremist groups.
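In outline, the hash-sharing arrangement Milner describes works by each platform contributing fingerprints of content already identified as terrorist propaganda, so partner platforms can match fresh uploads against the shared list without exchanging the content itself. A minimal sketch of that idea (the function names are illustrative, and SHA-256 is used only for simplicity — real systems use perceptual hashes so that re-encoded or lightly edited copies still match):

```python
import hashlib

# Shared database of fingerprints contributed by partner platforms
# (in reality a jointly maintained industry database, not a local set).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Fingerprint the raw bytes of an image or video file."""
    return hashlib.sha256(content).hexdigest()

def report_known_content(content: bytes) -> None:
    """One platform contributes a hash — not the content — to the database."""
    shared_hash_db.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    """A partner platform checks a new upload against the shared database."""
    return fingerprint(content) in shared_hash_db

# One platform reports a known propaganda video...
report_known_content(b"<bytes of a known propaganda video>")

# ...and any partner platform can then catch byte-identical re-uploads.
assert should_block_upload(b"<bytes of a known propaganda video>")
assert not should_block_upload(b"<bytes of an unrelated video>")
```

The point of the committee’s criticism follows directly from this design: the database can only catch what the partners choose to put into it, so content from groups outside its ISIS/Al-Qaeda scope passes through unmatched.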

Pressed again by Cooper reiterating that National Action is a banned group in the UK, Milner said Facebook has to-date focused its counterterrorism takedown efforts on content produced by ISIS and Al-Qaeda, claiming they are “the most extreme purveyors of this kind of viral approach to distributing their propaganda”.

“That’s why we’ve addressed them first and foremost,” he added. “It doesn’t mean we’re going to stop there but there is a difference between the kind of content they’re producing which is more often clearly illegal.”

“It’s incomprehensible that you wouldn’t be sharing this about other forms of violent extremism and terrorism as well as ISIS and Islamist extremism,” responded Cooper.

“You’re actually actively recommending… racist material”

She then moved on to interrogate the companies on the problem of ‘algorithmic extremism’ — saying that after her searches for the National Action video her YouTube recommendations included a series of far right and racist videos and channels.

“Why am I getting recommendations from YouTube for some pretty horrible organizations?” she asked.

Lundblad agreed YouTube’s recommendation engine “clearly becomes a problem” in certain types of offensive content scenarios — “where you don’t want people to end up in a bubble of hate, for example”. But he said YouTube is working on ways to stop certain videos from being surfaced via its recommendation engine.

“One of the things that we are doing… is we’re trying to find states in which videos will have no recommendations and not impact recommendations at all — so we’re limiting the features,” he said. “Which means that those videos will not have recommendations, they will be behind an interstitial, they will not have any comments etc.

“Our way to then address that is to achieve the scale we need, make sure we use machine learning, identify videos like this, limit their features and make sure that they don’t turn up in the recommendations as well.”

So why hasn’t YouTube already put a channel like Red Ice TV into a limited state, asked Cooper, naming one of the channels the recommendation engine had been pushing her to view? “It’s not simply that you haven’t removed it… You’re actually actively recommending it to me — you are actually actively recommending what is effectively racist material [to] people.”

Lundblad said he would ask that the channel be looked at — and get back to the committee with a “good and solid response”.

“As I said we are looking at how we can scale those new policies we have out across areas like hate speech and racism and we’re six months into this and we’re not quite there yet,” he added.

Cooper then pointed out that the same problem of extremist-promoting recommendation engines exists with Twitter, describing how after she had viewed a tweet by a right wing newspaper columnist she had then been recommended the account of the leader of a UK far right hate group.

“This is the point at which there’s a tension between how much you use technology to find bad content or flag bad content and how much you use it to make the user experience different,” said McSweeney in response to this line of questioning.

“These are the balances and the risks and the decisions we have to take. Increasingly… we are looking at how do we label certain types of content that they are never recommended but the reality is that the vast majority of a user’s experience on Twitter is something that they control themselves. They control it through who they follow and what they search for.”

Noting that the problem affects all three platforms, Cooper then directly accused the companies of operating radicalizing algorithmic information hierarchies — “because your algorithms are doing that grooming and that radicalization”, while the companies in charge of the technology are not stopping it.

Milner said he disagreed with her assessment of what the technology is doing but agreed there’s a shared problem of “how do we address that person who may be going down a channel… leading them to be radicalized”.

He also claimed Facebook sees “lots of examples of the opposite happening” and of people coming online and encountering “lots of positive and encouraging content”.

Lundblad also responded by flagging up a YouTube counterspeech initiative — called Redirect, which is currently only running in the UK — that aims to catch people who are searching for extremist messages and redirect them to other content debunking the radicalizing narratives.

“It’s first being used for anti-radicalization work and the idea now is to catch people who are in the funnel of vulnerability, break that and take them to counterspeech that will debunk the myths of the Caliphate for example,” he said.

Also responding to the accusation, McSweeney argued for “building strength in the audience as much as blocking those messages from coming”.

In a series of tweets after the committee session, Cooper expressed continued discontent at the companies’ performance tackling online hate speech.

“Still not doing enough on extremism & hate crime. Increase in staff & action since we last saw them in Feb is good but still too many serious examples where they haven’t acted,” she wrote.

“Disturbed that if you click on far right extremist @YouTube videos then @YouTube recommends many more — their technology encourages people to get sucked in, they are supporting radicalisation.

“Committee challenged them on whether same is happening for Jihadi extremism. This is all too dangerous to ignore.”

“Social media companies are some of the biggest & richest in the world, they have huge power & reach. They can and must do more,” she added.

None of the companies responded to a request for comment on Cooper’s criticism that they are still failing to do enough to tackle online hate crime.



