tv   Hearing on Violent Extremism and Digital Responsibility   C-SPAN   September 19, 2019 12:05pm-2:18pm EDT

12:05 pm
now the senate commerce committee looks into digital platforms' efforts to identify and remove violent extremist content. representatives from google, facebook, twitter and the anti-defamation league testified. they discussed extremist behavior and how the major companies are trying to combat it through content moderation using reporting, artificial intelligence, law enforcement tools and information sharing between companies. from yesterday, this is just over two hours and five minutes.
12:06 pm
today we welcome representatives from the world's largest social media companies and online platforms. we hear from ms. monica bickert, head of global policy management for facebook.
12:07 pm
and mr. nick pickles, public policy at twitter. mr. derek slater, global director of information policy at google and mr. george selim, senior vice president of programs for the anti-defamation league. platforms have dramatically changed the way we communicate and have been used positively for like-minded groups to come together and shed light on despotic regimes throughout the world. no matter how great the benefit
12:08 pm
to society these platforms provide, it is important to consider how they can be used for evil at home and abroad. on august 3rd, 2019, 20 people were killed and more than two dozen injured in a mass shooting at an el paso shopping center. police have said that they are reasonably confident that the suspect posted a manifesto to a website called 8chan 27 minutes prior to the shooting. 8chan moderators removed the original post, though users continued sharing copies. following the shooting, president trump called on social media companies to work in partnership with local, state and federal agencies to develop tools that can detect mass shooters before they strike. i certainly hope we talk about that challenge today. sadly, the el paso shooting is
12:09 pm
not the only recent example of mass violence with an online dimension. on march 15, 2019, 51 people were killed and 49 were injured in shootings at two mosques in christchurch, new zealand. the perpetrator filmed the attacks using a body camera and live streamed the footage to his facebook followers, who began to reload the footage to facebook and other sites. access to the footage quickly spread and facebook stated that it removed 1.5 million videos of the massacre within 24 hours of the attack. 1.2 million of those videos were blocked before they could be uploaded. like the el paso shooter, the christchurch shooter also uploaded a manifesto to 8chan. the 2016 shooting at the pulse nightclub in orlando, florida, killed 49 and injured 53 more. the orlando shooter was
12:10 pm
reportedly radicalized by isis and other jihadist propaganda through online sources. days after the attack, the fbi director stated that investigators were highly confident that the shooter was self-radicalized through the internet. according to an official involved in the investigation, analysis of the shooter's electronic devices revealed that he had consumed, quote, a hell of a lot of jihadist propaganda, unquote, including isis beheading videos. shooting survivors and family members of victims brought a federal lawsuit against three social media platforms under the anti-terrorism act. the sixth circuit dismissed the lawsuit on the grounds that this was not an act of international terrorism.
12:11 pm
with over 3.2 billion internet users, this committee recognizes the challenge facing social media companies and online platforms in their ability to act and remove content threatening violence from their sites. there are questions about tracking of a user's online activity: does this invade an individual's privacy, thwart due process or violate constitutional rights? automatic removal may also impact an online platform's ability to detect possible warning signs. indeed, the first amendment offers strong protections against restricting certain speech. this undeniably adds to the complexity of our task. i hope these witnesses will speak to these challenges and how their companies are navigating them. in today's internet connected
12:12 pm
society, misinformation, fake news, deep fakes and viral online conspiracy theories have become the norm. this hearing is an opportunity for witnesses to discuss how their platforms go about identifying content and material that threatens violence and poses a real and potentially immediate danger to the public. i hope our witnesses will also discuss how their content moderation processes work. this includes addressing how human review or technological tools are employed to remove or otherwise limit violent content before it is posted, copied and disseminated across the internet. communication with law enforcement officials at the federal, state and local levels is critical to protecting our neighborhoods and communities. we would like to know how companies are coordinating with law enforcement when violent or
12:13 pm
extremist content is identified. and finally, i hope witnesses will discuss how congress can assist in ongoing efforts to remove content promoting violence from online platforms and whether best practices or industry codes of conduct in this area would help increase safety both online and offline. so i look forward to hearing testimony from our witnesses. i hope we engage in a constructive discussion about potential solutions to a pressing issue. and i'm delighted at this point to recognize my friend and ranking member, senator cantwell. >> across the country we are seeing and experiencing a surge of hate. as a result we need to think much harder about the tools and resources we have to combat this problem both online and offline. while the first amendment to the constitution protects free speech, speech that incites
12:14 pm
imminent violence is not protected, and congress should review and strengthen laws that prohibit threats of violence, harassment, stalking and intimidation. in testimony before the senate judiciary committee in july, federal bureau of investigation fbi director chris wray said that white supremacist violence is on the rise. he said the fbi takes this threat, quote, extremely seriously and has made over 100 arrests so far this year. in my community we suffered a shooting at a jewish community center in seattle, among others. and over the last year we've seen a rise in the desecration of both synagogues and mosques. the rise in hate across the country has also led to multiple mass shootings including the tree of life congregation in pittsburgh and the walmart in
12:15 pm
el paso. social media is used to amplify that hate. the shooter at the high school in parkland posted images of himself with guns and knives on instagram and wrote social media posts prior to the attack on his fellow students. in el paso, the killer published a white supremacist anti-immigration manifesto on the 8chan message board, and my colleague just mentioned the streaming of live content related to the christchurch shooting, the horrific incidents that happened there. in myanmar the military promoted violence against muslim rohingya. these human lives were all cut short by deep hatred and extremism that we have seen become more common. this is a particular problem on the dark web, where we see certain websites like 8chan and others.
12:16 pm
adding technology and tools to mainstream websites to stop the spread of these dark websites is a start, but there needs to be an effort to ensure that people are not directed into these cesspools. i believe calling on the department of justice to make sure that we are working across the board on an international basis with companies to fight this issue is an important thing to do. we don't want to push people off of social media platforms only to then be on the dark web, where we are finding fewer of them. we need to do more at the department of justice to shut down these dark websites, and social media companies need to work with us to make sure that we are doing this. i do want to mention, just last week, as there's much discussion here in washington about initiatives, the state of washington has passed three gun initiatives by the vote of the people, closing background
12:17 pm
loopholes, also relating to private sales, and extreme risk protection laws. all voted on by a majority of people in our state and successfully passed. so i do appreciate, just last week, representatives from your companies in the tech industry sending a letter asking for passage of bills requiring extensive background checks. very much appreciate that and your support of extreme risk protection laws to keep guns out of the hands of people who a court has determined are dangerous. so this morning we look forward to asking you about ways in which we can better fight these issues. i do want us to think about ways in which we can all work together to address these issues. i feel that, working together, these are successful tools that we can deploy in trying to fight extremism that exists
12:18 pm
online. thank you, mr. chairman, for the hearing. >> thank you very, very much. now we'll hear oral testimony from our four witnesses. your entire statements will be submitted for the record without objection. we ask you to limit your comments at this point to five minutes. ms. bickert, you're recognized. thank you for being here. >> thank you, chairman wicker, ranking member cantwell. thank you for the opportunity to be here today and answer your questions and explain our efforts in these areas. my name is monica bickert and i am facebook's head of global policy management and counterterrorism. i'm responsible for our rules around content on facebook and our company's response. i'd like to begin by expressing
12:19 pm
my sympathy and solidarity with everybody affected by the recent attacks across the country. in the face of such acts we remain committed to assisting law enforcement and standing with the community against hate and violence. we're thankful to be able to provide a way for those affected by this horrific violence to communicate with loved ones, organize events for people to gather in grief, raise money to help support communities and begin to heal. our mission is to give people the power to connect with one another and to build community. but we know that people need to be safe in order to build that community. that's why we have rules in place against harmful conduct including hate speech and inciting violence. our goal is to ensure that facebook is both a place where people can express themselves, but where they are also safe. while we're not aware of any connection between the recent
12:20 pm
attacks and our platform, we certainly recognize that we all have a role to play in keeping our community safe. that's why we remove content that encourages real world harm. this includes content involving violence or incitement, promoting or publicizing crime, or encouraging suicide or self-injury. we don't allow any individuals or organizations who proclaim a violent mission, advocate for violence or are engaged in violence to have any presence on facebook, even if they are talking about something unrelated. this includes organizations and individuals involved in or advocating for terror activity, organized hate or other violence. we also don't allow content posted by anyone that praises or
12:21 pm
supports these individuals or their actions. when we find content that violates our standards, we remove it promptly. we also disable accounts when we see severe or repeated violations. we work with law enforcement directly when we believe there is a risk of physical harm or a direct threat to public safety. while there is always room for improvement, we already remove a significant amount of this content every year. our efforts to improve the enforcement of these policies are focused on three areas. first, building new technical solutions. second, investing in people who can help us implement these policies. at facebook we now have more than 30,000 people across the company working on safety and security efforts. this includes more than 350 people whose primary focus is
12:22 pm
counter-hate and counterterrorism. and third, building partnerships with other companies, civil society, researchers and governments so that together we can come up with shared solutions. we're proud of the work we've done to make facebook a less hostile place. we know bad actors will continue to attempt to skirt detection. and we're dedicated to continuing to advance our work and share our progress. we look forward to working with the committee, regulators, others in the tech industry and civil society to continue this progress. again, i appreciate the opportunity to be here today and i look forward to your questions. thank you. >> thank you. mr. pickles? >> twitter has publicly
12:23 pm
committed to improving the collective health, openness and civility of public conversation on our platform. our policies are designed to keep people safe on twitter and they continuously evolve to reflect the realities of the world we operate in. we are working faster and investing to remove content that distracts from a healthy conversation before it's reported, including terrorist content. tackling terrorism requires a whole of society response, including from social media companies. let me be clear. twitter is incentivized to keep violent content off our platform. communities have been impacted by instances of mass violence with tragic frequency in recent
12:24 pm
years. these events demand a robust public policy response from every quarter. we recognize that content removal alone cannot solve these issues. firstly, twitter takes a zero tolerance approach to terrorism on our service. since 2015 we have suspended more than 1.5 million accounts for violations of our rules related to terrorism. in the majority of cases we take action at the account creation stage, before an account has even tweeted. the remaining 10% is identified through a combination of user reports and partnerships. secondly, we prohibit the use of twitter by violent extremist
12:25 pm
groups. since the introduction of this policy in 2017, we've taken action on more than 186 groups globally and suspended more than 2,000 unique accounts. thirdly, twitter does not allow hateful conduct on our service. an individual on twitter is not allowed to promote violence. where any of these rules are broken, we will take action to remove the content and will permanently remove them from twitter. fourthly, our rules prohibit the buying or selling of, or transactions in, weapons. we'll take appropriate action on any account found to be engaged in this activity, including permanent suspension where appropriate. additionally, collaboration with
12:26 pm
our industry peers and civil society is critically important to addressing the threats from terrorism globally. this facilitates information sharing, technical coordination and research collaboration. twitter and technology companies have a role to play in addressing mass violence. this cannot be the only public policy response, and removing content alone will not stop those determined to cause harm. when we remove content from our platform, it moves those views into the darker corners of the internet. this content continues to
12:27 pm
migrate to less governed platforms and services. addressing mass violence requires a whole of society response. we continue to work with industry peers, government institutions, law enforcement, academia and civil society to find the right solutions. thank you for your time today. >> thank you very much. mr. slater. >> chairman wicker, ranking member cantwell. my name is derek slater. i would like to take a moment on behalf of everyone at google to express our horror upon learning of the tragic
12:28 pm
attacks. while google services were not involved in these recent incidents, we have engaged on steps we are taking to ensure that our platforms are not used to incite violence or spread hate speech. i will focus on three key areas where we are making progress to help people: first, how we work with governments and law enforcement; second, our efforts to prohibit products that cause damage or injury; and third, our policies and programs on youtube. first, google engages in ongoing dialogue with government and law enforcement agencies to understand the threat landscape. for example, when we have a good faith belief that there is a threat to life or serious bodily harm made on our platform in the united states, the google
12:29 pm
cybercrime group will report it. in turn, that intelligence center quickly gets the report into the hands of police officers to respond. we are also deeply committed to working with government and the tech industry. second, we take the threat posed by gun violence in the united states very seriously, and our advertising policies have long prohibited the selling of weapons, ammunition and similar products that cause damage, harm or injury. we also prohibit the promotion of instructions for making guns, explosives or other harmful products.
12:30 pm
we employ a number of measures to make sure that our policies are enforced. third, on youtube we have rigorous policies and programs to defend against the use of our platform to spread hate or incite violence. we have invested heavily in machines and people to quickly identify and remove content that violates our policies. this includes machine learning technology, hiring over 10,000 people for reviewing and removing content, an intel desk of experts that looks for new trends, and finally creating beneficial counter speech.
12:31 pm
this work has led to tangible results. over 87% of the 9 million videos we removed in the second quarter of 2019 were first flagged by our automated systems. overall, videos that violate our policies generate a fraction of a percent of the views on youtube. our efforts do not end there, as we are constantly evolving our policies. the updated hate speech policy specifically prohibits videos claiming that a group is superior. we have already seen a 5x spike in removals and channel terminations for hate speech. in conclusion, we take the safety of our users seriously and value our close relationships with law
12:32 pm
enforcement and government agencies. we want to be responsible actors and part of the solution. as these issues evolve, google will invest in people and technology to meet the challenge. we look forward to continuing collaboration with the committee as it examines these issues. thank you for your time. >> thank you very much. mr. selim, your group prefers to be known as adl these days. >> the anti-defamation league goes by adl for short. >> we appreciate you being with us today and we're happy to receive your testimony. >> thank you for the opportunity to be here. my name is george selim and i serve as the senior vice president for programs at the adl. for decades the adl has fought against bigotry and anti-semitism by exposing those
12:33 pm
who spread hate to incite violence. today the adl is the foremost nongovernmental authority on domestic extremism. i'd like to share some key data, findings and analysis, and urge this committee to take action to counter a severe national security threat: the threat of online white supremacist extremism that threatens our communities. the el paso shooter expressed support for the accused shooter in new zealand, who also posted on 8chan. before the massacre in california, the alleged shooter
12:34 pm
posted a link to his manifesto on 8chan, citing the terrorists in new zealand and the pittsburgh tree of life attack. three killing sprees, three white supremacist manifestos. one targeted muslims, another targeted jews and the third targeted latinx and other immigrant communities. one thing these three killers had in common was 8chan. unfettered access to online platforms, both fringe and mainstream, has significantly driven the scale, speed and effectiveness of these forms of extremist attacks. our research shows that domestic extremist violence is trending up and that anti-semitic hate is trending up. the fbi and doj data shows similar trends.
12:35 pm
immediate action is paramount to prevent the next tragedy that could take innocent lives. adl has worked with the platforms represented at this table. we appreciate this work greatly, but much more needs to be done. adl has called on these companies at this hearing, as well as many others, to be far more transparent. we need meaningful transparency to give actionable information to policy makers and stakeholders. the problem will not be solved by addressing these issues online alone. we urge this committee to take immediate action. first, our nation's leaders must
12:36 pm
call out bigotry in all its forms at every opportunity. our nation's law enforcement leadership must make enforcing hate crimes laws a top priority. our communities need this congress's immediate action in a range of ways, notably to codify federal offices and create comprehensive reporting. our federal legal system currently lacks the means to prosecute a white supremacist terrorist as a terrorist. congress should explore whether it is possible to craft a rights-protecting domestic terrorism statute. in addition, the state department should examine
12:37 pm
whether certain foreign white supremacist groups meet the criteria for designation as foreign terrorist organizations. for technology and social media companies, we look forward to companies expanding their terms of service, exploring accountability and governance challenges, and aspiring to greater transparency in these efforts. this is an all hands on deck moment to protect all of our communities. i look forward to your questions. thank you. >> thank you. on your platforms, how do you define violent content? how do you define extreme
12:38 pm
content? >> thank you, mr. chairman. we will remove any content that celebrates a violent act that results in serious physical injury or death. we also will remove any organization that has proclaimed a violent mission or is engaged in acts of violence. we also don't allow anybody who has engaged in organized hate to have a presence on the site, and we remove hate speech. hate speech we define as an attack on a person based on his or her characteristics, like race, religion or sexual orientation. >> harder to define extreme than violent, isn't that correct? >> yes. and we see different people use that word in different ways. what we do is, any organization that has proclaimed a violent mission or engaged in documented acts of violence, we remove
12:39 pm
them. >> what is your platform's definition of extreme? >> we agree that the word extremism itself is very subjective and in some contexts can be a positive thing. people can be extremely active on an issue and it isn't a bad thing. we have a three stage test that identifies violent extremist groups. that test is that we identify them through their stated purpose or publications; they promote violence as a means to further their cause; and they target civilians. we've got that three stage test because we believe that framing allows us to protect speech and debates but also remove violent
12:40 pm
extremists from our platform. >> mr. slater, can you add anything? >> thank you, chairman. probably similar, in that we ban designated foreign terrorist organizations from using our platform. broadly, we draw similar lines. >> now, mr. selim has suggested that your three platforms need to be more transparent. what do you say to that, mr. slater? >> thank you, chairman. i think transparency is a bedrock of the work that we do, particularly around online content.
12:41 pm
in the last year on youtube we have provided our community guidelines enforcement report, where you can go and see how many videos we removed in a quarter, and we break that down. i think this is a really key issue and we look forward to continuing to improve. >> perhaps you could help them understand how -- you frankly don't believe they're quite transparent enough at this point? >> thank you for your question. to be clear, the point i'm making on transparency is to make sure there are more clearly delineated categories between the point that mr. slater was making, in terms of what the machines or algorithms flag, and what users on any of these platforms go on to say -- like, we think this is a violation of the terms of service. there are degrees of
12:42 pm
inconsistencies across these platforms at the table and others. to get a holistic picture of what a certain issue may be, while individuals may flag content and some of it gets pulled down, there are inconsistencies in that. we're looking for a much more balanced approach across all the platforms. >> mr. pickles, is he touching on something that has a point? >> absolutely. i think the balance for companies is investing in technology and understanding how content was found, which is very important. we now publish a breakdown of six policy areas and the number of user reports we receive. it's about 11 million reports every year. telling that story in a meaningful way is absolutely a
12:43 pm
challenge. >> what's that percentage in facebook? >> when it comes to violent content and terror content, more than 99% of what we remove is flagged by our technical tools. >> by artificial intelligence? >> some of it's artificial intelligence, some of it is image matching of known content -- we've worked for years on this, and i think transparency is key. we, for the past year and a half, have published not only our detailed implementation guidelines for exactly how we define hate speech and violence, but also reports on exactly how much we're removing in each category and how much of that is actually flagged by our technical tools before we get user reports.
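[editor's note: the image matching described in this exchange is generally done with perceptual hashes, so re-encoded or slightly altered copies of a known violating image still match. the sketch below is a hypothetical illustration of that general idea, an average hash compared by hamming distance; it is not facebook's or the industry hash consortium's actual algorithm or database format, which are not public.]

```python
# Sketch of perceptual "average hash" matching against a shared database of
# known violating content. All names here are illustrative, not a real API.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns an integer hash
    with one bit per pixel, set when that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_match(candidate_hash, known_hashes, threshold=3):
    """Flag an upload when its hash is within `threshold` bits of any hash
    shared across companies (the 'hash-sharing consortium' idea)."""
    return any(hamming(candidate_hash, h) <= threshold for h in known_hashes)

# Toy example: a known 4x4 image and a slightly re-encoded copy of it.
known = [[10, 10, 200, 200], [10, 10, 200, 200],
         [200, 200, 10, 10], [200, 200, 10, 10]]
copy = [[12, 9, 198, 201], [11, 10, 202, 199],
        [199, 201, 9, 12], [201, 198, 11, 10]]  # minor pixel noise
shared_db = {average_hash(known)}
print(is_known_match(average_hash(copy), shared_db))  # True: near-duplicate
```

the design point is that exact file hashes (md5, sha-256) break on any re-encode, while a perceptual hash tolerates small pixel changes, which is why a shared hash list can stop re-uploads at the moment of upload.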
12:44 pm
>> thank you very much. senator cantwell. >> what do you think we need to do to monitor incitement on 8chan and other dark websites? >> so i think you can really approach this issue from two categories. there are a number of increased measures, some of which i noted in my written statement submitted to this committee, that these companies as well as others can take to create a greater degree of transparency and standards so we can have a really accurate measure of the types of hatred and bigotry that exist online. we can make better policies. i think having good data gives us a framework.
12:45 pm
>> you're saying there's more they can do? >> yes, ma'am. >> i look at your statement. you include auditing and third party evaluation for that transparency as well as responsibility. as i mentioned in my opening statement, we don't want to basically drive all this to a dark web that we have less access to. what more do you think we should be doing together to address the hate that is taking place on these darker websites too? >> so, a number of measures. i mean, the first is having our public policy starting from a place where we're victim focused. we know, whether it's pittsburgh or any of the number of cities that other panelists have mentioned in their statements. we need to start taking measures
12:46 pm
that combat extremism or domestic terrorism, for preventing other such tragedies. i think we can make better policy and programs at the federal, state and local government levels and also in private industry as well. >> one of the reasons i'm definitely going to be calling on the department of justice to ask what more we can do in this coordination is that several years ago interpol, microsoft and others worked on trying to address, on an international basis, child pornography, to try to better equip law enforcement at policing crime scenes online. i would assume that the representatives today would be supportive and maybe helpful, maybe even financially helpful, in trying to address these crimes as they exist today as hate crimes on the dark side of
12:47 pm
the web. do i have any responses from our tech companies here? >> this is something that across the industry we've been working on for the past few years, in a manner very similar to how the industry came together against child exploitation online. we launched the global internet forum to counter terrorism as a way of getting industry to create sort of a no-go zone for this terrorist and violent content. as part of that, we train hundreds of smaller companies on best practices and we make technology available to them. the reality is, for the bigger companies, we often are able to build technical tools to stop videos at the time of upload. we now have 14 companies involved in a
12:48 pm
hash sharing consortium so we can stop content at the time of upload. >> i agree there is more you can do on your own sites. setting that aside for a minute, what do you think we should do about 8chan and the dark websites? >> i can tell you what we do on facebook, senator, which is we ban any link that connects to 8chan, where these manifestos have appeared. these were not available through facebook. >> i'm saying, what more do you think, in government and law enforcement working together, besides what you do, to address this? anybody else? >> to follow up, i think certainly if there's criminal activity happening on these platforms, a law enforcement response is primary. if people are promoting violence against individuals and committing criminal offenses, a law enforcement intervention is
12:49 pm
something i think should be looked at. if we can strengthen as an industry our cooperation with law enforcement, we can make sure the information sharing is as strong as it needs to be. >> you think we need more law enforcement resources addressing this issue? >> i think it's a question of resources. there was a paper from george washington university last week looking at some of the statutory framework around these issues. >> i definitely believe you need more law enforcement resources on this issue, and i look at what progress we made with interpol and the tech industry fighting on other issues. i think this is something that needs more resources. thank you all, mr. chairman. >> thank you. senator fischer. >> thank you, mr. chairman. in june senator thune held a
12:50 pm
subcommittee hearing on design, whether it's through predictions of the next video to keep us watching or what content to push to the top of our news feeds. i think we have to realize that when social media platforms fail to block extremist content online, this content doesn't just slip through the cracks. it's amplified. and it's amplified to a wider audience. and we saw those effects during the christchurch shooting. the facebook live broadcast was up for an hour, as confirmed by "the wall street journal," before it was removed. and it gained thousands of views during that time frame. ms. bickert, how do you concentrate on how your
12:51 pm
algorithms boost content while still getting dangerous content off the platform? you touched on that a bit in your response, but how are you targeting solutions to address that specific tension that we see? >> senator, thank you for the question. it's a real area of focus. and there are three things that we're doing. probably the most significant is technological improvements, which i will come back to in a second. second is making sure we are staffed to very quickly review reports that come in. so the christchurch video, once that was reported by law enforcement, we were able to remove it within minutes. that response time is critical to stopping the virality you mentioned. we work with hundreds of safety and civil society organizations. if they are seeing something, they can flag it for us through a special channel. going back to the technology, briefly. with the horrific
12:52 pm
christchurch video, one of the challenges was our artificial intelligence tools did not spot violence in the video. what we are doing going forward is working with law enforcement agencies, including in the u.s. and the uk, to try to gather videos that could be helpful training data for our technical tools. and that's just one of the many efforts we have to try to improve these machine learning technologies so that we can stop the next viral video at the time of upload -- the time of creation. >> when you said law enforcement, is that reciprocal? do you see something show up and then you, in turn, try to get it to law enforcement as soon as possible so individuals can be identified? what's the working relationship there? >> absolutely, senator. we have a team that is our law enforcement outreach team. any time that we identify a credible threat of imminent harm, we will reach out
12:53 pm
proactively to law enforcement agencies. we do that regularly. also, when there is some sort of mass violence incident, we reach out to them even if we have no indication that our services were involved at all. we want to make sure the lines of communication are open, that they know how to submit emergency process to us. we respond around the clock. every minute is critical in this type of situation. i'm a former prosecutor myself. so these things are very personal to me. >> i know that the platforms that are represented today have increased your efforts to take down this harmful content. but as we know, there are still shortfalls that exist in order to get that response made in a -- not just a timely manner but one that is going to truly have an effect. mr. slater, when it comes to liability, do social media platforms -- do you guys need more skin in the
12:54 pm
game? so you can ensure better accountability and be able to incentivize some kind of timely solution? >> thank you, senator, for the question. i think if you look at the practices that we are all investing in, certainly looking from our perspective and the way we are getting better over time, the current legal framework strikes a reasonable balance. in particular, it both provides protection from liability that would otherwise go too far, that would be overbroad, and acts as a sword, not just a shield, empowering us, giving us the legal certainty we need to invest in the technologies and the people to monitor, detect, review, and remove this sort of violative content. the legal framework continues to work well. >> can you comment on this as well? do you think there is enough legal motivation for social media platforms to prioritize
12:55 pm
some kind of solutions out there? i think that's what this hearing is about, to find solutions so we can curb that online hate that i think continues to grow. >> when thinking through the issues of content moderation, the authorities that exist within the current legal frameworks for the companies represented at this table are sufficient for them to take actions on issues of content moderation, transparency, reporting, et cetera. so there certainly is a degree of legal authority that affords these companies, as well as others, the opportunity to take any number of measures. >> ms. bickert, in your testimony, you said facebook live will ban a user for 30 days for a first-time violation of its platform policies. is that enough? can users be banned permanently?
12:56 pm
would that be something to look at? >> senator, thank you for the question. one serious violation will lead to a temporary removal of the ability to use live. however, if we see repeated serious violations, we simply take that person's accounts away. that is something we do across the board, not just with hate and inciting content but other content as well. >> thank you. >> thank you, senator fischer. senator blumenthal. >> thank you, mr. chairman. thank you all for being here today. thank you for outlining the increased attention and intensity of effort that you are providing to this very profoundly significant area. i welcome that you are doing more and trying to do it better. but i would suggest that even more needs to be done. and it needs to be better. and you have the resources and
12:57 pm
technological capability to do better. and just to take the question that senator fischer asked of you, mr. selim, about incentives, your answer was that they have authority that provides them with opportunities. the question is, really, don't they need more incentives to do it more and to do it better to prevent this kind of mass violence that may be spurred by hate speech on the sites or may, in fact, actually be a signal of violence to come? and i just want to highlight that 80% of all perpetrators of mass violence provide clear signals and signs that they are about to kill people. that is the reason the senator and i have a bipartisan measure
12:58 pm
to provide incentives to more states to adopt extreme risk protection order laws that will, in fact, give law enforcement the information they need to take guns away from people who are dangerous to themselves or others. and that is so critically important to prevent mass violence, suicides, domestic violence -- and the keys, the information and signals, often appear on the internet. just this past december in monroe, washington, a clearly troubled young man made a series of anti-semitic rants online. he bragged about planning to, quote, shoot up an expletive school, end quote, in a video while armed with an ar-15-style weapon and on facebook posted that he was, quote, shooting for
12:59 pm
30 jews. fortunately, the adl saw that post. it went to the fbi. and the adl's vigilance prevented another parkland or tree of life attack. fred guttenberg of coral springs, florida, met with me yesterday. he told me about a similar incident involving a young man in coral springs who said he was about to shoot up the high school there, and law enforcement was able to forestall it using the extreme risk protection order statute. so my question is to facebook, twitter, and google: what more can you do to make sure that these kinds of signs and signals involving references to guns are caught? it may not be hate speech, but it is a reference to possible violence with guns or the use of
1:00 pm
guns. >> thank you, senator blumenthal. one of the biggest things we can do is engage with law enforcement to find out what is working in our relationship and what isn't. that dialogue over the past years has led to us establishing a portal through which they can submit requests for content, and we can respond very quickly. >> what are you doing proactively? i apologize for interrupting, but my time is limited. proactively, what are you doing to identify signs and signals that somebody is about to use a gun in a dangerous way? >> senator, we are now using technology to identify any of
1:01 pm
those early signs, including gun violence but also suicide or self-injury. >> you report it to law enforcement? >> we do. in 2018, we referred many cases of suicide or self-injury, where we detected them using artificial intelligence, to law enforcement so they were able to then intervene and, in many cases, save lives. >> we have a similar approach when users are at risk. we work with the fbi to ensure they have the information they need. >> mr. slater? >> thank you, senator. similarly, when we have a good-faith belief of a threat, we will proactively refer it to the northern california regional intelligence center, which will fan it out to the authorities. >> because my time has expired, i'm going to ask each of you if you would, please, to give me more details in writing as a follow-up for how you -- what identification signs you use, what kind of technology, and how
1:02 pm
you think it can be improved, assuming that congress approves, as i hope it will, the emergency risk protection order statute to provide incentives for more than just the 18 states that have them now but others to do the same. thank you. >> thank you, senator blumenthal. senator thune. >> thank you for all being here today. your participation in this hearing is appreciated as we continue our oversight of the tasks each of your companies faces while seeking to responsibly manage and thwart those who use your services to spread extremist and violent content. last congress, when we held a hearing looking at propaganda online, we discussed the cross-sharing of information between facebook, microsoft, twitter, and youtube, which allowed each of those companies to identify potential extremism faster and more efficiently.
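the cross-industry sharing described here works by exchanging digital fingerprints, or hashes, of known violative content, so that a file flagged on one platform can be recognized by another. a minimal sketch of the idea, under the assumption of a simple exact-match design -- the real industry database uses perceptual hashes that survive re-encoding, so this is purely illustrative, and the class and names here are hypothetical:

```python
import hashlib

class SharedHashDatabase:
    """Illustrative industry-shared set of hashes of known violative content."""

    def __init__(self):
        self._hashes = set()

    def add(self, content: bytes) -> str:
        # one platform flags content and contributes its fingerprint
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        # another platform checks an upload against the shared set
        return hashlib.sha256(content).hexdigest() in self._hashes

shared_db = SharedHashDatabase()
shared_db.add(b"<bytes of a known propaganda video>")

upload = b"<bytes of a known propaganda video>"
if shared_db.is_known(upload):
    print("blocked at upload time")
```

in practice the matching is fuzzy rather than exact, and, as the witnesses go on to describe, the hashes are shared alongside urls and situational context.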
1:03 pm
so i would just direct this question to the panel and ask: how effective has that shared database of hashes been? >> senator, thank you for the question. through the shared database, we have 200,000 distinct hashes of terror propaganda. and that has allowed -- i can speak for facebook only -- that has allowed us to remove a lot more than we otherwise would have been able to do. >> i would just add, since that hearing, actually, the reassuring thing is we don't just share hashes now. we have grown that partnership. so we share urls. if we see a link to something like a manifesto, we share it across industry. after christchurch, we recognized we needed to improve. we now have realtime communications in a crisis, so industry can talk to each other operationally in realtime -- not just content-related information but situational awareness. that partnership between
1:04 pm
industry now also involves law enforcement. that wasn't there when we had the last hearing, i think. so i think it is not just about the hash program but broadening out new programs that develop the work further. >> yeah. i think broadly i would say look at how we have been improving over time. certainly, systems are not perfect. we're always going to have to evolve to deal with bad actors. on the whole, we are doing a better and better job, because of information sharing, at removing this sort of content before it has wide exposure or is viewed widely. >> senator, i would only add that the threat environment that we are in today as a country has changed and evolved in the last 24 to 36 months. likewise, the tactics and techniques that these platforms, as well as others, use must evolve -- the response to the evolving terrorist landscape online, foreign or domestic, needs to keep pace with the threat
1:05 pm
environment that we are in today. >> so just as a follow-up, are there similar partnerships among your companies, as well as the smaller platforms, to specifically identify mass violence? >> senator, one of the things we have done over time is expand the mandate of the global internet forum to counter terrorism. so we relatively recently expanded it to include mass violent incidents. and we are now sharing, both through our crisis incident protocol and hash sharing, a broader variety of incidents. >> mr. slater, youtube's automated recommendation system has come under criticism for steering users towards violent content. earlier this year i led a subcommittee hearing on persuasive technologies on internet platforms and algorithmic content selection. i asked the witness that google provided at that time for that hearing several specific questions for
1:06 pm
the record that were not thoroughly answered. and i would just say that providing complete answers to questions members submit for the record is essential as we work together as partners to combat many of the issues discussed here today. so i would like your commitment to provide thorough responses to any questions you might get for the record. do i have that? >> certainly, senator, to the best of our ability. >> i would like to explore the nexus between persuasive technologies and today's topic. specifically, what percentage of youtube video views is a result of youtube automatically suggesting or playing another video after the user finishes watching a video? >> senator, i don't have a specific statistic there, but i can say the purpose of our recommendation system is to show people videos they may like that are similar to what they have watched before. at the same time, we recognize this concern about recommendations for borderline content. that is content that maybe isn't removed but brushes right up
1:07 pm
against those lines. and we have introduced changes this year to reduce recommendations for those sorts of borderline videos. >> okay. if you could get the number -- i assume you have that somewhere. that's got to be available. and furnish it for the record. the question again is to ask you specifically what is youtube doing to address the risk of some of these features, which you note are pointing a user in the direction of increasingly violent content? >> yes. and that change we made in january to reduce recommendations has been key. it's still early days, but it is working well. we have reduced views by 50% just since january. as the systems get better, we hope that will improve, and i am happy to discuss it further. >> thank you. thank you, mr. chairman. >> thank you, senator thune. based on presence at the gavel, we have senator blackburn, followed by senator scott. senator blackburn. >> thank you, mr. chairman. and i want to thank each of you for being here this morning.
1:08 pm
and for talking with us. this committee has looked at this issue of the algorithms and their utilization for some time. and we are going to continue to do this. looking at content, and the extremist content that is online, is certainly important. we know there are a host of solutions that are out there. and we need to come to an agreement and an understanding of how you are going to use these technologies to really protect our citizens. and social media companies are, in a sense, open public forums. and they should be, places where people can interact with one another. and part of your responsibility in this vein is to have an objective cop on the beat and be able to see what is happening. because you're looking at it in
1:09 pm
realtime. but what has unfortunately happened many times is you don't get an objective view. you don't get a consistent view. you get a subjective view. and this is problematic. and it leads to confusion by the public that is using the virtual space for entertainment, for their transactional life, for obtaining their news. so, indeed, as we look at this issue, we are looking for you to approach it in a consistent and objective manner. and we welcome the opportunity to visit with you today. ms. bickert, i have a couple of things that i wanted to talk with you about. we've all heard about these third-party facilities where contractors are working long hours and they're looking at
1:10 pm
grotesque and violent images, and they're doing this day in and day out. so talk a little bit about how you transition from that to using modern technologies -- what facebook is going to do in order to capture this content, to extract it, and to minimize harm. you've talked about how you've got 30,000 employees that are working on safety and security. and then there are third-party entities that are working on this. so let's talk about that impact on the individuals and then talk about the use of technologies to speed up this process and to make it more consistent and accurate. >> thank you for the question, senator. making sure that we are enforcing our policies is a priority for us. making sure that our content
1:11 pm
reviewers are healthy in their jobs is paramount. so one of the things that we do is we make sure that we are using technology to make their jobs easier and to limit the amount of content -- the types of content -- they have to see. and i will give you a couple of examples: with child exploitation videos, with graphic violence, with terror propaganda, we are now able to use technology to review a lot of that content so that people don't have to. and in situations where -- >> now, let me ask you this. sorry to interrupt. but we need to move forward. your 30,000 reviewers, are they all located in palo alto or are they scattered around the country or around the globe? >> no, senator. we have 30,000 people working in safety and security. some of them are engineers or lawyers. the content reviewers, we have more than 15,000. they are based around the world. yes. >> okay. great. thank you.
1:12 pm
>> for any of them, not only are we using technology, but even where we cannot make a decision on the content using technology alone, there are things we can do, like muting the volume or separating a video into still frames, that can make the experience better for the reviewer. >> okay. now, let me ask you about this. mark zuckerberg, in a "washington post" op-ed, had called for us to regulate -- to define lawful but awful speech. so tell me how you think you could define, or we could define, lawful but awful speech but not overreach or infringe on somebody's first amendment free speech rights. >> senator, one of the things that we are looking for in our dialogue with governments is clarity on the actions that governments want us to take. so we have our set of policies that lays out very clearly how
1:13 pm
we define things. but we don't do that in a vacuum. we do it a lot with civil society organizations and academics around the world. but we also like to hear the views from government so we can make sure we are mindful of all the different safety -- >> well, ours are constitutionally based. i am out of time. mr. pickles, i'm going to submit a question to you for the record. mr. selim, i have one that i am going to send to you. mr. slater, i always have questions for google. so you can depend on me to get one to you. and we do hope you all are addressing your prioritization issues also. with that, mr. chairman, i'll yield back. >> thank you very much. senator scott. >> thank you for being here today. i'm glad we're here today to have a meaningful conversation about what's happening in our nation. it's time we face the fact that our culture has produced a class of predominantly young white men
1:14 pm
who live purposeless lives given over to the most evil desires, sometimes with racial hatred. as you all know, while i was governor, we had the horrible shooting at the school in parkland. within three weeks, we passed historic legislation, including the risk protection orders that senator blumenthal was talking about. we did it by sitting down with law enforcement, mental health counselors and educators to come up with the right solution. now, with regard to the shooting at parkland, the killer, nikolas cruz, had a long history of violent behavior. in september 2017, the fbi learned that someone with the user name nikolas cruz posted a comment on a youtube video that said, i am going to be a professional school shooter. in addition, nikolas cruz made other threatening comments on various platforms.
1:15 pm
the individual whose video he posted the comment on reported it to the fbi. unfortunately, the fbi closed the investigation after 16 days without ever contacting nikolas cruz. the fbi claimed they were unable to identify the person who made the comment. unfortunately, we now have 17 innocent lives that were lost because of nikolas cruz. my question is for mr. slater. how is a platform like youtube, which is owned by google, not able to track down the i.p. address and the identity of the person who made that comment? when did youtube remove the comment? did youtube report this comment to law enforcement? if so, to whom and when? if you did report this comment to law enforcement, did you follow up? what was the process? and was there any follow-up to see if there was any corrective action? >> senator, thank you for the question. first, it was a horrendous event. and we strive to be vigilant, to
1:16 pm
invest heavily to proactively report where we see an imminent threat. i don't have the details on this and the specific facts that you are describing. i'd be happy to get back to you. let me say this going forward. looking ahead, parkland was a moment that did spur us to proactively reach out to law enforcement to start talking about how we can do this better. that's part of how we reached out and started working with the northern california regional intelligence center, so we could go to a one-stop shop that could get it to the right law enforcement locally rather than having to cold call people. just in the last month, in fact, there was an incident where pbs was streaming the newshour on youtube. somebody put a threat in the live chat. we referred that to the regional intelligence center. they referred it to orlando police, who then took the person into custody appropriately. and this was reported in the news. so that's not to say things are perfect. we always have to strive to get
1:17 pm
better, and we look forward to working with you and law enforcement on that. but i do think we continue to improve over time. >> so with regard to nikolas cruz, you will give me the information of, you know, whom you contacted, when you contacted them, when it was taken down. to this day i cannot get an answer on what anybody did with regard to this shooter -- what youtube did, what the fbi did. nobody wants to talk about it, which is fascinating to me. so if you will get me that information. second, are you comfortable that if another nikolas cruz puts something up, you have the process now that you will contact somebody and there will be a follow-up process? >> senator, i think our processes are getting better all the time. they are robust. i think this is an area where it is an evolving challenge, both because technology evolves and people's tactics evolve. they might use code words and so on. i will be happy to follow up on
1:18 pm
how we will continue to work together. >> thank you. mr. pickles, how can nicolas maduro, who is committing genocide against his citizens, who is withholding clean water, food, medicine, still have a twitter account with 3.7 million followers? >> you rightly highlight that the behavior there is abhorrent, and the question for us as a public company that provides a public space for dialogue is whether someone is breaking our rules on our service. we recognize there are situations where there are geopolitical circumstances -- world leaders have accounts in countries where twitter is blocked and there is no free speech. we take the view that we hope the dialogue that person's presence on the platform starts helps contribute to solving the challenges that you outline. >> he has been doing it for a
1:19 pm
long time. it is not getting better in venezuela. it's getting worse. >> and i think this is a good illustration of the role of technology companies alongside other parts of the public policy response. if we remove that person's account, it would not change facts on the ground. so we need to bear in mind how the other levers come into play. >> i disagree. maduro sits there and talks about things and continues to act like he is a world leader, and he's a pariah. it sure seems to me what you are doing is allowing him to continue to do that. >> well, as i say, his current account has not broken our rules. were he to break the rules, he would be treated as any other user and we would take action as necessary. >> we have votes already started, and i know you are trying to get to other people. i would be happy to work with the senator from florida on this issue. i think we are not doing enough. and i think the specific case i mentioned in my opening statement about the rohingya and
1:20 pm
what happened on facebook is another example. so i am happy to work with you on this issue. >> well, yes. and thank you, senator cantwell, and senator scott, for raising this. i'm told there is a vote. and i'm shocked, shocked to hear they are going to leave it open until 11:30, which is generally what happens. senator duckworth? >> thank you, mr. chairman. while i do appreciate the attention to the intersection of extremism and social media, many i think would agree that today's hearing is another data point in a long history of congressional hand-wringing on gun violence. since 2019 began, 260 days ago, we have witnessed 318 mass shootings in the u.s. -- more than one per day. mass shootings are those in
1:21 pm
which at least four people are shot, excluding the shooter. after 20 children, six adults and the shooter lost their lives at sandy hook in 2012, many elected officials, including myself, declared an end to congressional inaction. no more, we said. but since that day, our nation has endured 2,226 mass shootings. think about that number for a minute. but here we are, not focused on ways to stop gun violence but rather on the scourge of social media. i'm not going to say that there is no connection. but every other country on the planet has social media, video games, online harassment, hate groups, crime, and mental health issues. but they don't have mass shootings like we do. nothing highlights the absurdity of congress's inability to solve the gun crisis more than seeing 318 mass shootings in 260 days and
1:22 pm
holding a hearing on extremism in social media. this is a chart from the digital marketing institute that, according to their website, highlights the average number of hours that social media users spend on platforms like facebook and twitter. as you will see, in the united states our users are relatively middle of the pack when it comes to time spent online. my question to you both is this. do you agree that americans' use of social media is not unique on a per capita basis? in other words, are you aware of specific trends on your platforms to explain the amount of gun violence in the united states? >> senator duckworth, and this won't come out of your time, do sort of explain to us, because some of us can't see the detail. >> sure. this is how much time, the average number of hours, that social media users spend using social media each day via any device. >> and the arrow points to the united states. >> the highest is the philippines. the lowest is japan.
1:23 pm
the u.s. is right in the middle. so american users -- look, i have a 4 1/2-year-old and an 18-month-old, and when i get home, she says iphone, iphone. she's on it. she knows how to select youtube kids and knows how to go right to what she wants to watch. so i'm just as concerned. the united states, in terms of social media usage, would you both agree is somewhere in the middle of the pack compared to the rest of the world? >> yes, senator. according to the study, which i'm not familiar with, yes. >> in other words, are you aware, either one of you, of specific trends on your platforms to explain the amount of gun violence in the united states? >> no. i think your study reflects our view; about 80% of our users are outside the united states. so i think your image speaks for itself. >> thank you. >> mr. selim, you brought up the role that video games can play in online hate and harassment. i don't disagree that any means of dissemination can
1:24 pm
be used. if a connection between video games and gun violence exists, you would think that the widespread use of video games in japan and south korea would reflect that connection, correct? i think there is something to be said for the availability of guns in the u.s. if you look, the amount of time the folks in japan and south korea spend on video games is far greater than ours. we're third. if you look at gun violence and gun deaths in 2017, here's the u.s. we're not the biggest users of video games. would this be accurate? >> senator, thank you for your question. i have not read this specific study. but i do have one data point, if i may, to share with you for just a moment. according to an adl report looking at extremist-related murders and homicides over the past decade, research shows 73% of extremist-related murders
1:25 pm
and homicides were, in fact, committed with firearms. so to the extent you are making the point that extremists with weapons result in violence and homicide, we have the data that backs that point up. >> thank you. as we are reminded daily, the world is full of people who use social media to disparage others and question facts. some will use the anonymity to spread hate. but our use of social media, video games and other variables does little to explain the 2,226 mass shootings since sandy hook. the internet has emboldened and empowered hate by allowing individuals to develop online communities and share their warped ideas. it is our weak gun laws here in the u.s. that allow the hate to become lethal. there is a clear and undeniable connection between the number of guns in the united states and the number of gun deaths in our communities. look at this chart.
1:26 pm
this is the number of guns per 100 people. this is the number of gun-related deaths. we are up here. here's the rest of the world -- some who use more social media than we do, some of whom engage in more video games than we do. we are saturated in weaponry that was designed for war but made available to nearly anyone who attends a local gun show. the dayton shooter had a 100-round drum. i didn't have one when i served in iraq. we didn't send marines into fallujah with 100-round drums. yet you can buy them in gun shows. many agree congress should expand red flag laws and background checks. banning high-capacity ammunition clips is what we need to do. this is not controversial. it is well past time that leader mcconnell brings the house-passed background checks bill to the senate floor for a vote. i hope leader mcconnell will allow votes on the keep americans safe
1:27 pm
act, the disarm hate act, and the domestic terrorism prevention act. each of these bills will keep our children and our neighbors safer. i hope my republican colleagues will join in these bipartisan efforts. thank you. and i yield back. >> senator duckworth, let's do this so we can have a complete record. if you would reduce those three posters to a size that we can copy, they will be admitted into the record at this point in the hearing without objection. >> thank you very much, mr. chairman. that's generous of you. >> so ordered. senator young. >> thank you, mr. chairman. i want to thank our panelists for being here today. i really do appreciate your testimony and your answering our questions. look, we all need to collaborate in curbing online extremism,
1:28 pm
which i understand to be one of multiple causes that we could cite as we all think about the issue of mass casualty events and extremist events more generally. the nation is wrestling with mass violence, extremism and issues of responsibility -- digital responsibility -- for some of these events. in fact, in my home state of indiana, hoosiers in crown point, indiana, recently experienced firsthand how a person can become radicalized over the internet, something i know that many of your companies have studied and are working on. in 2016, a crown point man was arrested and convicted for planning a terrorist attack after becoming radicalized by isis over the internet. thankfully, the fbi and the indianapolis joint terrorism task force intervened before any violent attack occurred.
1:29 pm
however, that isn't always the case, as we know. we have seen this across the country. and that's why it is critically important we have this hearing, that we continue to work together collaboratively, knowing that your products and platforms provide incredible value to consumers. and they obviously weren't intended for this purpose. so it's our responsibility in congress. it is definitely your responsibility as business people to make sure that we monitor how the great value that you provide can be used in an illicit, improper, dangerous and nefarious manner. in one minute or less, because i have three minutes left, i would request that the representatives from google and facebook and twitter tell us why americans should be confident that each of your companies is taking this issue seriously and why americans should be optimistic
1:30 pm
about your efforts going forward. >> one minute each. >> indeed. google. >> thank you, senator. i would start by pointing to youtube's community guidelines enforcement report, which details every quarter the videos we have removed and the reasons why, and indeed how much is being flagged first by machines. dealing with this issue, removing violative content, takes technology and people. technology can get better and better at identifying patterns. people can deal with the nuances. and we have seen over time that the technology is getting better and better at taking the content down faster and before people have viewed it. of the 9 million videos that we have removed in the second quarter of this year, 87% of those were first flagged by our machines. 80% of those were removed before a single view. when we talk about violent extremism, it is better in terms
1:31 pm
of removal before wide viewing. so, you know, we are already seeing advancements in machine learning, not just in this area but across the industry broadly. and the thing about machine learning is that as it is fed more data, it learns from mistakes. those systems will get better. so why would you be optimistic? those systems ideally will continue to get better. will they be perfect? no. bad actors will continue to evolve. but i think there is reason for optimism, and i think there is reason for optimism based on the collaboration between all of us today. >> thank you. facebook? >> thank you, senator. the first thing i'll say is facebook won't work as a service if it is not a safe place. and this is something that we are keenly aware of every day. if we want people to come together to build this community, they have to know they're safe. so the incentives are there to make sure we do our part. at the core of our team of 350 people
1:32 pm
primarily dedicated to countering terrorism and hate is expertise. so i lead this team. my background is more than a decade as a federal criminal prosecutor, so safety and security are personal to me. but the people that i have hired onto this team have backgrounds in law enforcement and in academia, studying terrorism and radicalization. this is something that people come to work on at facebook because this is what they care about. they're not assigned to work on it while they're at facebook. this is bringing in expertise, and i want to make that very clear. and then finally, similar to my colleagues here, we have taken steps to make what we're doing very transparent. the reports we have published in the past year and a half show a steady increase in our ability to detect terror, violence, and hate much earlier, when it is uploaded to the site and before anybody reports it to us. now more than 99% of the violent videos and the terrorist propaganda we remove from the site we are finding ourselves
1:33 pm
before anybody reports it to us. >> thank you. twitter? >> thank you, senator. i think people can be optimistic. a few years ago, at the peak of the so-called caliphate, people challenged our industry to do more, be better. i now look at a time where 90% of the terrorist content that twitter removes is detected through technology. i look at independent academics like professor conway who talk about the isis community being decimated on twitter. i look at the collaboration between our companies, which didn't exist when i joined twitter five and a half years ago. all of those areas have driven better technology, faster response, and a much more aggressive posture towards bad actors that is now showing benefit in other areas. but i think we can also take confidence that no one is going to tell this committee that our work is done. and every one of us will leave here today knowing we have more to do and we can never sleep. these actors are adversarial and we have to keep adapting. >> thank you so much. i could spend five days, five
1:34 pm
weeks, maybe five months or five years. i only had five minutes. i'm already one minute over. mr. chairman? >> thank you. senator rosen, you're next. i'm going to go vote. i can assure you i will not let them close that vote until you have asked your questions and get over there. senator rosen. >> i appreciate it, senator. thank you for holding this important hearing. i want to thank all the witnesses for being here to talk about this very real and difficult issue. the rise of extremism online is a serious threat, and the internet has unfortunately proven a valuable tool to extremists who are connecting through various forums to spread hate and dangerous ideologies. as we address extremism online, we must not lose sight of the fact that violent individuals who find communities online to fuel their hatred have also acted in the name of hate. we cannot ignore the fact that
1:35 pm
the absence of sensible, common sense gun safety measures like background checks is allowing individuals to access dangerous weapons far too easily. and we know the majority of americans want us to support that. but i represent the great state of nevada, and as we approach, unfortunately, the two-year anniversary of the 1 october shooting in las vegas, the deadliest mass shooting in modern american history, we know that coordination with and between law enforcement is more important than ever. the southern nevada counterterrorism center, also known as our fusion center, is an example of a dynamic partnership between 27 different law enforcement agencies to rapidly and accurately respond to terrorist and other threats. with las vegas hosting nearly 50 million tourists and visitors each year, the fusion center is responsible for preventing
1:36 pm
countless crimes and even acts of terrorism. so to all of you, can you please discuss with us your coordination efforts with law enforcement when violent or threatening content is identified on your platforms, and what do you need from us as a legislative body to promote and enable -- facilitate, whatever word you want to use -- this partnership to keep our communities safe from another shooting like 1 october? please. >> thank you, senator. the attack was incredibly tragic and our hearts are with those who suffered in that attack. our relationship with law enforcement, first, is an ongoing effort. we have a team that does trainings to make sure that law enforcement understands how they can best work with us. and that's something that we do proactively -- we reach out and offer those. any time there is a mass
1:37 pm
violence incident, we reach out to law enforcement immediately, even if we're not aware of any connection between our service and the incident. we want to make sure they know where we are and how to reach us. we also have an online portal through which they can submit legal process, including emergency requests, and we have a team staffing that office 24 hours a day to respond quickly. finally, we proactively refer imminent threats of serious physical harm to law enforcement whenever we find them. >> thank you. >> thank you, senator. and i just want to echo monika's sympathies for the victims of that horrible tragedy. the lessons i think we have learned since that attack have continued to inform our thinking -- for example, not waiting for the ideological intent to be known before acting. one of the challenges we have is that in the traditional terrorist space you might look for an organizational affiliation before we say this is a terrorist attack. we don't wait for that anymore.
1:38 pm
we act first to stop people from using the services. we cooperate with law enforcement to report credible threats. i, along with colleagues from other companies, met with agencies yesterday to discuss how we can further deepen our collaboration. one of the questions we had there is that there is a huge amount of information within the law enforcement community, within the dhs umbrella, that is classified. it might help us understand the threats, the trends, the situational awareness. so understanding how more information can be shared with industry would better inform us about the threats. >> so can you provide us in writing some of the tools you think you might need to help you better cooperate to protect our communities? >> absolutely. that was the subject of the meeting yesterday. we had a very productive conversation. >> thank you. >> senator, we are broadly similar here, both in horror and sympathy at tragedies like that one and in the ways we proactively cooperate with law enforcement, refer credible
1:39 pm
threats, as well as receive valid requests, emergency disclosure requests, and respond to them expeditiously. >> thank you. i see my time is up. i will submit a question about combatting violent anti-semitism online. i know other people are waiting. we have votes. i appreciate your time and your commitment to working on this issue. thank you. >> thank you, senator rosen. your questions will be accepted for the record. i want to start with a simple yes or no question. i don't mean this to be a trick yes or no question. it is either yes or no -- yes or no with a brief one-sentence caveat if you need to. i'd like to hear from each of the three of you -- ms. bickert, mr. pickles, mr. slater. do you provide a platform that you regard as neutral in the political sense? >> yes, senator, our rules are politically neutral. we apply them neutrally.
1:40 pm
>> so you aspire to political neutrality. >> we want to be a service for political ideas across the spectrum. >> okay. >> we enforce our rules, and our rules are crafted without ideology being included. >> mr. slater? >> similarly, we craft our services without regard to political ideology. we're not neutral against terrorism or violent extremism or hate speech. >> i appreciate you pointing that out. that is of course not what i'm talking about. that leads into the next question i wanted to raise with each of you. i think the work each of you is doing in this area is important, and it's important for anyone occupying this space to be conscious of those things. you do a service to those who access your services by removing things like pornography, terrorism advocacy and things like that.
1:41 pm
there is a lot of debate surrounding this issue and the legal framework surrounding it. as you know, section 230 of the communications decency act has received a lot of criticism. it protects a website from being held liable as a publisher of information provided by another information content provider. section 230 gives you the promise that you won't be held liable for taking down the type of objectionable content we are talking about, whether it's constitutionally protected or not. so for each of the same witnesses -- each of you represents a private company, and each of you is accountable to your consumers. this means in some sense you have incentives to provide a
1:42 pm
safe experience on your respective platforms. so i've got a question about section 230. does section 230, particularly the good samaritan provision, help you in your efforts to swiftly take down things like pornography and terrorist content from your platforms? and would it be more difficult without the legal certainty that section 230 provides? >> absolutely, senator. section 230 is critical to our efforts in safety and security. >> mr. pickles? >> absolutely. and i would go further and say section 230 has been critical to the leadership of american industry in the information technology sector. >> mr. slater? >> absolutely, yes. >> on a related point, imagine a world where this is suddenly taken away, where those provisions no longer exist. large companies like yours may
1:43 pm
be able to -- in fact, i strongly suspect still would be able to -- filter out this content, between the artificial intelligence capabilities at your disposal and the human resources that you have. i suspect you could and probably would still do your best to perform the same function. but what about a start-up? what about a company trying to enter the space that each of your companies entered when they were created not very many years ago? what would happen to them? ms. bickert. >> thank you for your question. this reminds me of industry conversations involving smaller companies before we formed the global internet forum to counter terrorism in june 2017. we were having closed-door sessions with companies large and small to talk about the best ways to combat terrorism online. section 230 is very important for them to be able to begin to
1:44 pm
proactively act and assess content. >> i'd say it's a fundamental part of maintaining a competitive online ecosystem. without it, the ecosystem is less competitive. >> mr. slater. >> yes. and i would just add that the u.s. has section 230, and that's part of the reason why we have been a leader in economic growth, innovation and technological development. other countries that don't have something like it suffer; study after study shows that. and i would be happy to discuss it more. >> if it were to be taken away -- all three companies, and yours in particular, mr. slater, are not known for being small businesses or businesses with a modest economic impact -- you can identify with the concern i'm expressing. if we were to take that away, google might be able to keep up with what it has to do. wouldn't it be harder for a new
1:45 pm
tech platform, somebody starting out in the same position your company was in a couple of decades ago? wouldn't that be exponentially more difficult? >> i think it would create problems for innovators of all stripes. certainly small and medium-sized businesses would have a lot of trouble potentially getting their arms around that sort of significant change to the fundamental legal framework of the internet. >> thank you. i see my time has expired. senator baldwin. >> thank you. i wanted to begin by thanking our full committee chairman wicker for holding this hearing. i think it is a vital conversation for us to be having. we need to be taking a hard look at how we address the rising tide of online extremism and its real-world consequences in our country. i do have some questions for you on this important topic. but, first, i wanted to echo some of what my colleagues have
1:46 pm
already said, which is that there is much more the senate must do to address gun violence, whether or not it's connected to hatred espoused on the internet. more than 200 days ago the house of representatives passed a bipartisan universal background check bill. it has an extraordinary level of public support, and it deserves a vote on the senate floor. i feel like we can't simply have hearings; we have to act to reduce gun violence. mr. selim, adl's center on extremism has closely studied hate crimes and extremist violence in this country. is it fair to say that there has been an alarming increase in bias-motivated crimes, including extremist killings, in the last several years? >> yes, senator, that's
1:47 pm
accurate. >> in the case of extremist killings, what role do you believe access to firearms has played in that increase? >> senator, thank you for that question. as i briefly alluded to earlier, just to expand on what i was mentioning, according to our recent adl report, of the extremists across the ideological spectrum who committed murders or homicides in the united states, 73% of those acts were committed with firearms. >> thank you. what impact do you believe this increase in hate crimes, including extremist killings, has on the minority communities who have been the targets of these attacks? and let me just add to that question: one of the unique aspects of a hate crime is that it not only victimizes the targeted victim but strikes fear among those who share the
1:48 pm
same characteristics with the victim or victims. >> senator, thank you for making this point. in the past 24 months, we saw calendar year 2017 bring a 57% increase in anti-semitic incidents across the country, and a rise in bias-motivated crimes in 2017 as well. we continue to see these troubling statistics year after year. so it is imperative -- and part of my testimony today, both my submitted written testimony and my oral testimony, speaks to the need for greater enhancement and enforcement of hate crime laws and protections for victims. >> i am an original co-sponsor of senator bob casey's legislation, the disarm hate act, which would bar those convicted of misdemeanor hate crimes from obtaining firearms. do you agree this could help keep guns out of the hands of individuals who might engage in extremist violence?
1:49 pm
>> yes, senator. thank you for your legislation. adl supports this legislation. >> thank you. i appreciate the efforts that our witnesses from the social media companies have described regarding their companies' efforts to combat online extremism, including providing some transparency to their users and the general public. it's of course critically important to understand how you're addressing problems within your existing services and platforms. i'd actually like to learn more from you about how you are thinking about this issue as you develop and introduce new products. in other words, i think a lot of us feel that the approach of rapidly introducing a new product and then assessing the consequences later is a problem. so i'd like to ask: how do
1:50 pm
you plan to build combatting extremism into the next generation of ways in which individuals engage online? and why don't we start with you, ms. bickert. >> thank you for the question, senator. safety by design is an important part of building new products at our company. one of the things we've built in the past five years or so is a new products policy team, and their responsibility is to make sure they're aware of new products and features being built and to explain to engineers -- who are thinking of all the wonderful ways the service can be used -- all of the abuse scenarios we can envision, making sure we have reporting mechanisms and other safety features in place. >> i think, as i said earlier, we are in a very adversarial space. one of the key processes, as part of that discussion, is how can this be used against us, how can
1:51 pm
this be gamed? how will people change their behavior? and i think you're absolutely right. we need to take that learning and share it with some smaller companies. working with some 200 small companies around the world to share that knowledge with them, to help them understand the challenges, is also invaluable. >> similarly, our trust and safety teams are at the table with product managers and engineers from the conception of an idea all the way to development and possible release. so from the ground up, it's safety by design. >> i want to thank the witnesses. i'm going to be taking over as chair and i will call on myself as the next questioner. i want to actually ask all of you -- you know, your companies, your technology is famous for its algorithms, which seem to have the ability to pinpoint what people want.
1:52 pm
you can put an e-mail out or, some people think, talk about, say, your interest in yellow sweaters, and the next thing you know you have ads popping up on your facebook or other accounts that talk about yellow sweaters. who knows how that happens, but for a lot of us it happens. pretty impressive. but here's my question. if your algorithm technology is so good at pinpointing things like that -- what people are interested in, particularly as it relates to ads -- what are the challenges with regard to directing that kind of technology to help us and help you find what has been talked about on both sides of the aisle, which is that the people committing this kind of violence are typically disaffected young males? and aren't there signs?
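the detection the witnesses describe at several points in this hearing has two ingredients: matching uploads against known terrorist material, and machine scoring of new content the systems haven't seen before. a rough, purely illustrative sketch in python follows -- the hash set, term weights, and threshold below are invented for this example, and the companies' actual systems use perceptual hashing and trained classifiers rather than keyword lists.

```python
import hashlib

# Hypothetical two-stage check, loosely modeled on the testimony:
# stage 1 matches media exactly against a shared database of known
# extremist content; stage 2 is a toy stand-in for a learned classifier.

KNOWN_BAD_HASHES = {
    # sha256 digests of media previously identified and hashed
    hashlib.sha256(b"example propaganda video bytes").hexdigest(),
}

# Invented term weights and threshold, for illustration only.
FLAG_TERMS = {"attack": 0.5, "manifesto": 0.4, "kill": 0.6}
THRESHOLD = 0.8

def matches_known_media(media_bytes: bytes) -> bool:
    """Stage 1: exact match against the shared hash database."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_BAD_HASHES

def risk_score(text: str) -> float:
    """Stage 2: crude keyword score standing in for an ML model."""
    words = text.lower().split()
    return sum(FLAG_TERMS.get(w, 0.0) for w in words)

def should_review(media_bytes: bytes, caption: str) -> bool:
    """Flag an upload for review if either stage trips."""
    return matches_known_media(media_bytes) or risk_score(caption) >= THRESHOLD
```

in real deployments the first stage uses perceptual hashes so that re-encoded copies still match, which is the point of the industry hash-sharing effort mentioned in the testimony.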
1:53 pm
aren't there things you can do with the technology you do so well in other spaces to at least provide more warning signs of this kind of violence from these kinds of individuals, who in some ways already have a profile online? i'll throw that out to any of you. and are you working on that? >> thank you for the question, senator. technology plays a huge role in what we are doing to enforce our safety policies on facebook. in the areas of terrorism, extremism and violence, it's not just the matching software we have to stop organized terror propaganda videos; we're now using artificial intelligence and machine learning to get better at identifying new content that we haven't seen before that might be promoting violence, and we proactively send that out to
1:54 pm
law enforcement. and these systems are getting better every day. different products work in different ways. >> is it a priority of yours like it would be for selling yellow sweaters? >> absolutely. >> can i ask that of all the companies here? >> absolutely. investing in technology to find content that is terrorist content, extremist violent content, is absolutely a top priority. >> it is a priority, yes. >> senator, i'd only add to this part of the conversation, as someone who's studied the research and data around these issues for nearly two decades, that the threat environment we're in today has changed significantly. white supremacist terrorists in the united states don't have training camps in the same way that foreign terrorist groups like al-qaeda and isis and others do. their training camp, where they connect, learn and coordinate with one another, is in the
1:55 pm
online space. so it's imperative that the machine learning, the technology, the artificial intelligence you're asking about continue to advance to disrupt that environment and make it an inhospitable place for individuals who want to promote violent content of any ideological stripe. >> all of your companies kind of have this tension: you want eyeballs on, more clicks, more time on, whether with facebook or google or twitter. and yet i think there are increasing studies showing, for example, that many young men and women -- young girls -- feel kind of a sense of loneliness from their time online. you know, there are indications that among teenagers suicide rates are increasing, particularly for young girls. one of the things i deal with
1:56 pm
is the opioid epidemic. we're looking back saying, my god, how did we do that, how did we get to this position -- the policies of the '90s -- and 72,000 americans died of overdoses last year. so we're kind of looking backwards saying how did this happen. in your kind of policymaking suites, do you ever wonder: are we going to be looking back in 20 years going, how in the hell did we addict a bunch of young americans to looking at their iphones 8 hours a day, and 20 years from now we're going to be seeing the social and physical and psychological ramifications, where we all might be kicking ourselves in the head saying why did we allow that to happen? i think about that and it worries me. but you have this tension, because don't you want more face
1:57 pm
time? don't you want more teenagers spending 7 hours a day staring at their iphones, because that helps your revenues? do you worry that 15, 20 years from now we're going to be in the same spot as with opioids, saying what did we do to our kids, what did we do to our citizens? do you guys worry about your power, about the negative implications in what's happening in society right now? >> senator, thank you for the question. as a mother, i take these questions about wellness very seriously, and our company does as well. this is something we look at, and we talk to wellness groups to make sure we're crafting products that are in people's best interests. we have seen social media be a tremendous place of support for those thinking of harming
1:58 pm
themselves or struggling with eating disorders or opioid addiction or getting exposed to hateful content. and so we're also exploring and developing ways of linking people up with help resources. we already do that now for opioid addiction, for thoughts of self-harm, and for people who are searching for hateful content -- we now provide them with help resources. we do think this can be a really positive thing for overall wellness. >> we have similar programs in place, both for opioid searches and for people who are using terms referencing self-harm or suicide, where we will intervene and provide them with a source of support. and that's something we do around the world. we also recognize that things like digital literacy are certainly something we as an industry and we as twitter need to invest in, to make sure that as people are using our services they have the skills to use them
1:59 pm
discerningly. and finally, our ceo is committed to looking at the health of the conversation -- looking at much broader metrics, the health of the conversation rather than just revenue. >> thank you, mr. chairman. >> and i will say thank you to my friend from alaska for sharing, apparently, the deep voice and longing in your heart. i want to start with you, mr. slater. i want to talk a little bit about project dragonfly. in august of 2018 it was reported that google was developing a censored search engine under the alias of project dragonfly.
2:00 pm
in response to those concerns, alphabet shareholders requested that the company provide an impact report this year. however, during alphabet's shareholder meeting on june 19th, the proposal for the assessment was rejected. in fact, alphabet's board of directors explicitly encouraged shareholders to vote against the proposal, saying, quote, google has been open about its desire to serve users in china and other countries. we've considered a variety of options in a way that is consistent with our mission and have gradually expanded our offerings to consumers in china. so i want to start with just some clarity. has google ceased any and all development and work on this project? >> senator, to my knowledge,
2:01 pm
yes. >> and has google committed to foregoing future projects that may be named differently but would be focused on developing a censored search engine in china? >> senator, we have nothing to announce at this time, and i think whatever we would do, we would look very carefully at things like human rights. in fact, we work with the global network initiative on an ongoing basis to evaluate how our principles, our practices and our products comport with human rights and the law. >> so roughly contemporaneously, google decided that it did not want to work with the u.s. department of defense. how does google justify being willing to work with the chinese government on complex projects including artificial intelligence under project
2:02 pm
maven, and at the same time not being willing to help the department of defense develop ways to minimize civilian casualties? how do you reconcile those two approaches? >> senator, as we've talked about today, we do partner with law enforcement, and we do partner with the military in certain ways, offering some of our services. also, as a business, we draw responsible lines about where we want to be in business, including limitations on getting into the field of building weapons and so on. and, you know, we will continue to evaluate that over time. >> let me shift to a different topic, which is -- this panel has talked about combating extremism and the efforts of social media to do that. many americans, including myself, have a long-standing concern that when big tech says it's
2:03 pm
combating extremism, that is often a shield for advancing political censorship. i want to talk about how recently twitter extended its pattern of censorship to the level that it took down the twitter account of the senate majority leader, mitch mcconnell. that i found a pretty remarkable thing for twitter to do. and it did so because that account, as i understand it, had sent out a video of angry protesters outside of senator mcconnell's house, including an organizer of black lives matter in louisville who's heard in the video saying that the senate majority leader, quote, should have broken his little raggedy
2:04 pm
wrinkled ass neck, and someone else who had a voodoo doll of the majority leader. the senate majority leader sent out those threats of violence and, remarkably, found his own twitter account taken down. how does twitter explain that? >> thank you, senator, for the opportunity to discuss this. something we've been asked about around the world is the climate in many political jurisdictions around the safety of people who hold public office. so when we saw a video posted by numerous users that clearly identified someone's home and clearly contained, as you referenced, threats, out of an abundance of caution we did remove that video. we didn't remove the accounts; we removed that single tweet that contained that video from everybody who had posted it. because the presence of a video of
2:05 pm
someone's personal home, where the senate majority leader may have been residing at the time, with several violent references, was something we felt, out of an abundance of caution, we should remove. we then discussed this further with the office. we understood their intent was to call attention to those threats of violence, and so we did present the video with a label saying this is sensitive media. but it's the balance we're striking between -- i've been in many situations where i've been offered the opposite argument, that similar content should be removed. that balance is something we struggle to get right every day. >> you would agree there's a difference between someone posting a video where they are threatening someone else and the target of that threat posting the video. you would agree those are qualitatively different. >> i believe that's wholly fair.
2:06 pm
i believe there is still a risk there, and we were motivated by preventing the off-line harm that could have occurred because the home was visible. and we appreciate their insight, but our motivation was to prevent harm and not the kind of potential ideological issues you may allude to. >> but mr. pickles, have you rethought your policy since what senator cruz asked about? and i would recall your written testimony on page 2, which says, and i quote, we do not allow
2:07 pm
propaganda symbols to be shared on our platform unless they're being used to condemn or inform. is that language instructive to your platform? and don't you think that it was clearly, readily evident from the beginning that senator mcconnell and his campaign had posted that video to condemn and inform? >> i think this is an absolutely relevant issue. we as a company have taken a more aggressive posture. after the christchurch attack, we did see people posting the manifesto and content to condemn it, and we decided even in those circumstances we would remove it. and with manifestos -- large chunks of
2:08 pm
manifestos, even when people are condemning them, we have taken the decision to remove the material. i think the case you illustrate highlights for us the complexity in getting this right. again, if we're going to err on the side of caution, fewer violent threats and fewer people's homes being visible on our platform is notably a good thing. but this is the first time -- and i've been with the company for 5 1/2 years -- that i've ever been asked why didn't we leave something up, given the content of the threat. >> well, in terms of the context in this instance, it was the owner of the home who chose to inform the world about what was being said against him, and it was the individual
2:09 pm
himself who posted this. and it seems to be a clear-cut case in that instance that differentiates it from the condemnation of the larger incident of the christchurch violence. i would just suggest that it shouldn't have taken very long for twitter to understand that. senator sullivan, you are recognized. >> thank you, mr. chairman. i have a couple of follow-up questions. on senator cruz's question -- i think whether a company wants to work with the pentagon is something the leadership of the individual companies has to decide. i think that's certainly fine. i think what troubles a number of us is that there's a
2:10 pm
declaration that you're not willing to work with the department of defense on certain issues, and yet there's a willingness to work with one of our country's potential adversaries, particularly on sensitive technological issues that are important to the competition between the two nations. do you understand why that has caused bipartisan concern here, and how should we address it? should congress take action on those kinds of situations? not saying everybody has to work for the pentagon, that's your decision. but if you don't want to work with our nation's defense but you're working with the -- the country that poses a significant long-term threat to the united states, do you understand why that causes concern here? >> senator, i do appreciate the
2:11 pm
concern. we are a proudly american company. we are a business that wants to draw responsible lines, and we look forward to continuing to engage with you, the committee and others to make sure we're doing that. >> do you think, if there's a clear-cut example -- hey, we're not going to do anything with the u.s. department of defense but we're going to work with the chinese -- something very clear and obvious, do you think there's something we should do to prevent that or penalize that, we the congress? >> i think it's an important question. i think as a business we try and strike responsible and consistent lines, but the details would certainly have to matter. >> okay, mr. pickles, let me ask one more question. it's a really good follow-up to senator scott's earlier question. you said that the twitter account of maduro in venezuela has, quote, not broken any other rules. what are those rules, and at what point would you like to
2:12 pm
have somebody who's certainly not treating its citizens well -- senator scott's been a leader on this issue -- but what are those rules, and at what point would you look at what they're doing to their own citizens as a way to maybe not provide them the platform that you have? >> thank you. well, firstly, the rules that apply to a user on twitter are the same for everyone. i can give an example. first of all, there's encouragement of violence -- if a twitter account was used in ways we have seen around the world to encourage violence against minorities, to organize violence, we would take action on those accounts for breaking those rules. >> would twitter allow putin to have an account or xi jinping to have an account? >> if they were acting within our rules. but one thing i would note, and this is slightly different but important, some governments have
2:13 pm
sought to manipulate our platform to spread propaganda through breaking our rules. one of those governments is venezuela, and we have made a public disclosure of every account that we remove from twitter for engaging in covert information operations that we believe a government is responsible for. we've made that whole archive available to the public. we've taken the same step with information operations that we believe have been directed from countries including china, iran and russia, because we believe it's not just those single twitter accounts -- some governments do also seek to manipulate our platform. >> so if a government commits violence against its own citizens, is that breaking the twitter rules? >> well, i think that actually is happening off-line, and the key question for us is what's happening on twitter. >> thank you. >> thank you, mr. chairman. >> thank you, senator sullivan
2:14 pm
and thank you to our witnesses. the hearing record will remain open for two weeks. during this time senators are asked to submit any questions for the record. upon receipt the witnesses are requested to submit their complete written answers to the committee as soon as possible but no later than wednesday, october 2, 2019, by close of business. i thank each and every one of you for appearing today. this hearing is now adjourned.
2:15 pm
president trump and first lady melania trump will host the second state dinner of his administration as he welcomes australian prime minister scott morrison and his wife jenny
2:16 pm
morrison. watch guest arrivals and dinner toasts. our live coverage begins friday at 6:30 p.m. eastern on c-span, online, or listen on the free c-span radio app. >> c-span is back in des moines, iowa, this saturday for live campaign 2020 coverage beginning at 2:00 p.m. eastern, where 18 presidential candidates will take the stage for speeches. watch the iowa steak fry live on c-span or using the free c-span radio app. >> car manufacturing in the city is very, very important to us. that industry is one of the backbones in lansing. we have three things in lansing: michigan state university, the state capitol and automobile manufacturing. these three components have kept
2:17 pm
lansing very successful. >> the c-span cities tour is on the road exploring the american story. this weekend we take you to lansing, michigan, with the help of our comcast cable partners. lansing has been michigan's capital city since 1847. >> lansing really ironically was sort of picked as the capital city because no one really wanted to pick lansing. it was offered up as a compromise location. >> and we'll learn about the auto company he founded in lansing. >> he founded the reo motor car company, which was titled as an acronym of his name. it emerged here in 1904 and stayed here pretty much close to this location in a variety of different formats through 1975. >> watch the c-span cities tour of lansing, michigan, as we take in its history and literary scene. this saturday at noon eastern on
2:18 pm
c-span 2's book tv and sunday at 2:00 p.m. on american history tv on c-span 3, working with our cable affiliates as we explore the american story. >> now a joint congressional hearing featuring young climate change activists. witnesses include greta thunberg, who said, i don't want you to listen to me, i want you to listen to the scientists. the witnesses discussed action congress should take to reduce greenhouse gas emissions and why young people should get involved in the issue. this is just under 90 minutes.