Amber Rudd will urge social media companies to do more to remove online terrorist content during a series of meetings with tech giants including Twitter and Facebook, after a sharp increase in the number of plots foiled in the UK.

The home secretary will warn that extremists have exploited web platforms as a way of spreading their “hateful messages” when she attends the Global Internet Forum to Counter Terrorism in Silicon Valley.

Theresa May had previously warned that the fight against Islamic State was shifting from the “battlefield to the internet” when she attended the G7 meeting in Sicily in the wake of the Manchester terror attack. World leaders called on internet service providers to “substantially increase” their efforts to crack down on extremist content.

“The responsibility for tackling this threat at every level lies with both governments and with industry. We have a shared interest: we want to protect our citizens and keep the free and open internet we all love,” Rudd is expected to tell the internet providers.

She will claim that the forum, which was created by Facebook, Microsoft, Twitter and YouTube, marks an opportunity to start “turning the tide” on the issue.

It comes after a 21-year-old from Slough, Taha Hussain, was found guilty of encouraging people to instigate, prepare or commit acts of terror, including through online content.

After he was arrested, disturbing photographs were found on his phone alongside a YouTube channel that he had created, which claimed that no one should “feel sorry” for the deaths of non-Muslims and the “wrong kind” of Muslims.

The channel broadcast images of militants in battle, firing a range of weapons and blowing up vehicles and buildings, and included the black flag associated with Isis. Detectives also discovered that Hussain had sent a number of videos containing extremist propaganda via WhatsApp.

“Extremist posts like the ones Hussain posted and shared have the power to influence other people and particularly those who may be young and impressionable or vulnerable for a variety of reasons,” said Det Ch Sup Kath Barnes, head of CTP South East. “This could lead to those influenced individuals committing acts of terror, which clearly has devastating effects on communities, the individual and their family and friends.”

Rudd met tech companies after Khalid Masood drove a car into tourists gathered on Westminster Bridge and then murdered PC Keith Palmer at the gates of parliament before being shot dead.

After the meeting, she said her starting point was that people who want to do harm should not be able to use the internet or social media to further their cause. “I want to make sure we are doing everything we can to stop this,” she said, warning that terrorist propaganda online was a “very real and evolving threat”.

The UK government was cited by tech companies as they moved towards creating the forum. Monika Bickert, director of global policy management at Facebook, and Brian Fishman, the company’s counterterrorism policy manager, said they had been learning a lot through briefings from agencies in different countries about Isis and al-Qaida propaganda mechanisms.

The companies, which have previously come under intense criticism for not taking the issue seriously enough, said the forum would aim to create new technological solutions that could help remove terrorist content.

They would also commission research to help them reach policy decisions, and work with counter-terrorism experts, as well as governments, civil society groups and academics. The largest companies have promised to work with smaller outfits to support their efforts.

The prime minister chose to focus on the issue of terrorism threats both at the G7 in Sicily, when she spoke about social media companies, and at the G20 in Hamburg, when she spoke about how to disrupt the financing of extremist groups.

That followed a series of terror attacks in the UK in quick succession, including the Westminster attack, the Manchester bombing in which 22 people were killed, and the London Bridge attack that resulted in eight deaths and 48 people being injured.

The Home Office admitted that there had been a sharp increase in the number of terror attacks foiled by the UK security agencies, with five plots disrupted in just two months, compared with 12 in the period from 2013 to March 2017. It has been reported that MI5 is juggling around 500 active investigations at one time, with 3,000 people of interest.

However, Silicon Valley is expected to strongly resist Rudd’s demands. Hany Farid, senior adviser to the Counter Extremism Project, said: “They [tech companies] are under intense pressure from the EU, from the media, from advertisers and the public. But they continue to stall and do a good job on PR, but not a good job on actually implementing these changes.”

Farid, a computer science professor at Dartmouth College, compared the efforts on terrorism to tech firms’ slowness to take action against content promoting child abuse and exploitation in the mid-2000s. “I’m highly sceptical of the PR efforts that we’re seeing from tech,” he said. “It is not real action. It is trying to stave off legislation both at the EU, the UK and here in the States.”

At the forum’s first workshop on Tuesday, the tech giants will discuss strategies to “disrupt terrorists’ ability to use the internet”, the companies said in a joint blogpost. The statement cited goals to share best practices and technology, conduct and fund new research and recruit other firms to join the effort.

Critics have argued that private corporations are not well equipped to tackle such a complex problem on their own. “There’s an immediate conflict of interest, which is these companies want to make money,” said R Karl Rethemeyer, professor of public administration and policy at the University at Albany. “The way that they capture information … is built around a commercial purpose. That’s really very different than trying to discern what one’s political agenda is.”

Michael Smith, a terrorism analyst, said that he hoped Rudd would “call them out for their unwillingness to enable policies which would more effectively deter exploitations of their technologies by terrorists”. The firms, he said, have generally opposed efforts to make it easier to identify and locate users, which could lead regulators in Europe and the US to try to force them to be more proactive in the way they track people.

Although the companies have pledged to share information as part of their joint counter-terrorism initiative, there are also profit motives in limiting such collaboration. “The level of competition between those platforms is intense,” said Rethemeyer. “There’s no particular commercial reason for them to share.”

Pressure from officials like Rudd could encourage the companies to adopt internal policy changes in an effort to pre-empt regulations and other efforts to hold them legally liable. There are, however, growing concerns that in the process, the social media platforms will increasingly censor people and violate users’ free speech rights, said Sophia Cope, staff attorney at the Electronic Frontier Foundation.

“The potential for overbroad takedowns is just incredibly great, particularly when you get into those grey areas of political speech, dissenting speech, religious speech, where there is room for debate.”

In recent months, Facebook has repeatedly come under fire for censoring journalists and activists in the name of combating terrorism, often reversing its decisions in the wake of negative media coverage.

While some regulators have pushed for greater technical solutions to weed out terrorist propaganda, there are also worries that machine learning and artificial intelligence fail to understand the context of content and can block speech that should not be censored.

“It’s not so simple as just throwing automated technology at the problem,” said Cope. “The technology just isn’t there yet to be able to sift out what they’re trying to target from what is considered legitimate speech.”
