
Cato Institute 2018 Surveillance Conference, Part 2 | CSPAN | January 4, 2019, 4:36pm-6:10pm EST

4:36 pm
point solutions. republican russ fulcher was elected to idaho's first district after serving a decade in the state senate. he works in commercial real estate but had a prior career at an electronics company. democrat jared golden is the first member elected to congress through instant runoff voting, in which voters rank candidates by preference, and those rankings are taken into account if no candidate wins a majority. he's held a seat in the statehouse for the last four years. he served tours in iraq and afghanistan as a marine before working for susan collins on a committee staff in washington, d.c. and former congressman ed case returns to congress, this time representing hawaii's first district. he previously represented the second district from 2002 to 2007. before that, he was a member of the state legislature, serving as house majority leader for part of his tenure.
4:37 pm
mr. case is an attorney and a cousin of aol cofounder steve case. new congress, new leaders. watch it all on c-span. also at the cato institute's 2018 surveillance conference, a discussion on the growing number of security cameras in the u.s. and the surveillance of students. >> welcome back. thank you very much. as i mentioned in my introductory remarks, there are so many fascinating issues surrounding surveillance, intelligence, and new technologies that if we were to cover them all with panels of the sort you just saw, this would be a conference that would last approximately three weeks. and because even i have limits to my capacity to focus on
4:38 pm
issues for that long, we have for the last couple of years been inviting scholars and activists to present shorter flash talks that focus very tightly on a single subject and present work or analysis they've been doing, in a way that allows us, i think, to get a sense of the range of hard questions we face as citizens and policymakers. our morning block of flash talks covers issues from facial recognition to social media surveillance to the global war on encryption on various fronts. i will very quickly introduce the speakers. if you want fuller biographies, look to the conference website, where you will find, in addition to the agenda, links on the speakers' names for more extensive biographies. we're going to begin with
4:39 pm
analysis of recently passed legislation in australia that seeks to mandate law enforcement access to encrypted software and encrypted messaging tools. in a sense it's a first of its kind, but it could be a model for regulation elsewhere. for that, i want to invite, from new america, sharon bradford franklin. >> thank you. i'm sharon bradford franklin with new america's open technology institute. if you had told me a year ago that i would be here today talking to you about australia, i would have actually thought you were joking. but i'm really glad to have the opportunity to speak with you today about the law just passed earlier this month in australia and how this could actually
4:40 pm
allow the united states to look down under for an encryption back door. got to get the clicker working. here we go. so for those of you who may not already be familiar with the long-standing encryption debate, this is a battle that pits security against security. for years, the u.s. justice department and the federal bureau of investigation have been arguing that they are, quote, going dark, due to the increasing use of encryption. they've complained that they can no longer access many electronic communications, even when they have a valid court order. many tech companies now have encryption by default in their products and services, and they simply don't have access to their users' encrypted communications. the justice department and fbi want to require that tech companies guarantee that government has exceptional access, or what they have now started calling the use of
4:41 pm
so-called responsible encryption, so that they will always be able to access even encrypted messages. otherwise, they say, they are hampered in their ability to keep americans safe from terrorists and other criminals. but security researchers, tech companies, and privacy advocates have pointed out that this would amount to an encryption back door that could be exploited by others. there is no way to guarantee that only the u.s. government would be able to use any such mechanism. rather, this amounts to deliberately building vulnerabilities into products and services, and undermining device security for all would harm everyone's privacy and cybersecurity. and it would create new risks that we'll all become victims of criminal activity. in addition, as we explored in a half-day forum that oti hosted last month, encryption protects economic security and the personal safety and freedom of
4:42 pm
journalists and individuals in vulnerable communities, including victims of domestic violence. this debate, which has been going on for years in the united states, has now gone global, with a quick flare-up down under in australia. this past august, the australian government released what it called an exposure draft of its telecommunications and other legislation amendment, or assistance and access, bill 2018. unlike the u.s. congress, which takes months and months, or more likely years, before it passes anything, the australian parliament managed to wrap up its consideration of this bill in a mere four months. following a public comment period on the exposure draft, a slightly modified version of the bill was introduced in parliament and referred to the parliamentary joint committee on intelligence and security, or pjcis, which opened a new public comment period. my organization, the open technology
4:43 pm
institute organized an international coalition of civil society organizations, tech companies, and trade associations, and we filed three rounds of public comments on the bill, outlining our concerns, which i will describe in a moment. the committee held a series of hearings, and then just at the beginning of last week, the pjcis issued a report recommending passage of the bill with certain amendments incorporated. early in the morning, just last thursday, december 6th, the parliament released an updated version of the bill, including 173 amendments that no one had ever seen before. but by the end of the day, the australian parliament had passed the bill into law. what does the australian law actually do? as one australian commenter put it, quote, the combined stupidity and cowardice of the coalition and labor now means any it product, hardware or software, made in australia will be
4:44 pm
automatically too risky to use for anyone concerned about cybersecurity. so we're focusing here on schedule one of that australian law, which is the one that's designed to undermine the safeguards of encryption. there are also, folks should be aware, other sections of the law that create additional privacy threats and increase powers of government hacking, but we're focusing on schedule one, which relates to encryption. the law includes what appears to be an encouraging statement that purports to prohibit the government from demanding the creation of encryption back doors. i have it up here on the slide: section 317zg says that the government may not request or require communications providers, quote, to implement or build a systemic weakness or systemic vulnerability, and also that the government must not prevent a communications provider from rectifying a systemic weakness or systemic vulnerability. however, the law grants
4:45 pm
unprecedented new authorities to the australian government that undermine this promise. specifically, the law creates three new and powerful tools for the australian government: technical assistance requests, or tars; technical assistance notices, or tans; and technical capability notices, or tcns. the requests are supposed to be voluntary, whereas the notices are mandatory. and the difference between the tans and the tcns depends on which government official is authorized to issue the notice. all of these authorities authorize the australian government to request or demand any, quote, listed act or thing. now, that's a long list in the bill, and it includes things like removing one or more forms of electronic protection that are or were applied by or on behalf of the provider. and it also includes modifying, or facilitating the modification of, any of the characteristics of a service provided by the
4:46 pm
designated communications provider. in short, these are powers to demand that tech companies weaken the security features of their products. for example, the australian government could now make the same request to apple that the fbi made in the 2015 san bernardino shooter case: that it build a new operating system to circumvent iphone security features. as apple explained in the san bernardino case, building the requested software tool would have made that technique widely available, thereby threatening the cybersecurity of other users. as we know, in the lawsuit here in the u.s., the united states government argued that under the somewhat obscure all writs act, which dates back to 1789, it was permitted to make this demand of apple. but apple, supported by other tech companies and privacy advocates, argued that this demand was unconstitutional. the justice department ultimately withdrew its demand
4:47 pm
before the court could resolve the legal question, because the fbi was able to pay an outside vendor to hack into the phone. but in australia, they now have a specific authority to make these kinds of demands. another worrisome scenario is that australia may seek to use its tcn authority in the same way that the u.k. is looking to use its powers. just last month, levy and robinson of gchq, which is the u.k.'s counterpart to the nsa, put out a proposal on lawfare. under this proposal, tech companies would be asked or required to add gchq as a silent participant in end-to-end encrypted chats, and the tech company would suppress the notification to the user. they argue that, quote, you don't even have to touch the encryption to add gchq as a ghost user inside the encrypted
4:48 pm
chat. so there are several other threats posed by the new australian law's approach to encryption. in our coalition comments, in addition to describing the breadth of the new powers created by the bill, we also addressed three other key concerns. first, the law lacks any requirement for prior independent review or adequate oversight. many features of australia's new law, such as the authorization for technical capability notices, were modelled on the u.k.'s investigatory powers act, which was passed in 2016. the u.k.'s law also raises threats to digital security and human rights, but section 254 of the u.k.'s act does require that judicial commissioners review and approve proposed technical capability notices before they may be issued. although we still have questions about the adequacy and independence of this review under the u.k. law, australia's
4:49 pm
tcn authority poses even greater threats to cybersecurity and individual rights. in addition, australia has no bill of rights. while there are procedures through which tech companies may challenge government requests and orders, these challenges will be more difficult. tech companies won't have the same legal arguments available to them, based on protecting individual rights, as they would in countries like the u.k. and the u.s. second, the law requires undue secrecy. although the law requires statistical transparency reporting by the government and permits statistical transparency reporting by tech companies, it also has strict nondisclosure requirements whenever the government issues a request or notice to a tech company. violation of these secrecy rules is a criminal offense punishable by up to five years in prison. there are no limits to the
4:50 pm
duration of these gag orders, such as we have here in the u.s., where they can end when the reason for the confidentiality no longer exists. third, the law's definition of covered designated communications providers is overbroad. it includes anyone who provides an electronic service that has one or more end users in australia. so this means that any tech company doing business in australia, or anyone providing electronic services in australia, is subject to government demands that they weaken the security features of their products and services. so this is bad for australia, but what does it mean for us here in the united states? well, australia's legislation appears to be part of a coordinated effort by the five eyes alliance, an intelligence alliance comprising australia, canada, new zealand, the u.k., and the u.s. that dates back to world war ii. since 2013, these five nations
4:51 pm
have also formed a five country ministerial, which is an annual convening on strategy and information sharing on law enforcement and national security issues. for the past two years, the five nations have focused on strategies and policies to weaken encryption. just this past august, august 2018, the five countries released a statement of principles on access to evidence and encryption, and that statement warns that if these governments continue to, quote, encounter impediments in their efforts to access encrypted communications, they may pursue legislative mandates for encryption back doors. the very same month that that statement came out, australia released the exposure draft of its encryption bill. so now australia's law can provide the united states and other governments with a back door to an encryption back door. australia now has the authority to compel providers to create
4:52 pm
encryption back doors, and once providers are forced to build weaknesses into their products, other governments can exploit those weaknesses. i've already mentioned the example of apple versus fbi. now, if australia issued a technical capability notice to compel apple to build a new operating system to circumvent iphone security features, which is what the fbi demanded in the san bernardino case, then if apple complied and built that system, it could no longer argue that it lacks the capacity to turn over data to the u.s. government in similar cases. similarly, if australia forced facebook to reengineer encrypted chats to be accessible in response to australian legal demands, those chats would also be vulnerable to other governments' demands. finally, there's of course also a risk that the u.s. government could simply seek to expand its own direct authority by pointing to australia as the new model for, quote, responsible encryption legislation. so whether it's as a pathway or
4:53 pm
as a model, the australian law creates risks to cybersecurity and privacy that extend well beyond australia's borders. thank you. [applause] >> thank you very much, sharon. next up, the surveillance of students. the french philosopher michel foucault is known for his analysis of the tight link between surveillance and training, or discipline. his book, usually translated in english as "discipline and punish," could equally be translated "to surveil and to punish." so very naturally, close monitoring is always a key part
4:54 pm
of training and indoctrination. it's no surprise that children are often closely monitored as we are teaching them. that is perhaps an inevitable part of raising children safely, but it also means we need to worry about whether we are training them for compliance with surveillance, as the technological capability to monitor children ever more closely becomes both a reality and widespread in use. i often wonder whether we are preparing children to accept as normal a world in which everything they do is closely scrutinized. to look at one aspect of that, the social media surveillance of students, i want to invite rachel levinson-waldman. >> great, thank you very much. and that's really the perfect
4:55 pm
introduction. i will be coming back to that point near the end of my presentation. my name is rachel levinson-waldman. i'm senior counsel with the liberty and national security program at the brennan center for justice. i'm going to talk about the social media surveillance of students, especially k through 12 students. i want to talk for a moment about the prevalence, the sort of deep saturation at this point, that kids have on-line. so according to a pew internet study from last month, 97%, 97%, of 13 to 17-year-olds in the u.s. are on at least one major on-line social media platform. 95% of american teens have access to a smart phone. and 45% say that they are on-line almost constantly. so there's clearly a lot of content out there and a lot of time that teens and even younger kids are spending on-line. and with that social media
4:56 pm
presence comes social media monitoring. and these tools are sold for a variety of purposes. they are sold as preventing bullying, preventing school shootings, potential suicides, and other on-line threats. and maybe not surprisingly, they are also big business. so spending by public schools nationwide on nine major social media monitoring companies, you can see here, and you can see there are some spikes, some, you know, mountains and valleys, but overall it is a pretty massive increase in spending starting in 2010, with spikes in 2015, '16, and '17 and a big spike in the summer of 2018, potentially driven by the shooting in parkland, florida. public school districts are spending more and more money on automated social media monitoring tools. a similar trend is reflected in key
4:57 pm
word searches. so searches for social media monitoring in contracts between public schools and private companies again show these spikes over the last several years, really significant increases, and then a major spike in 2018. increasingly, a lot of public money is being spent on these services. now, based on these statistics, you might think that schools are getting more dangerous. but in fact, the opposite is true. schools are actually getting safer. and while it's true that this country has a unique risk of school shootings among developed countries, and while obviously a single shooting or even a single serious bullying incident is one too many, the overall crime decline in this country holds true in schools as well. the odds that a k through 12 student will be shot and killed at a public school are about 1 in 614 million. by way of contrast, the odds of choking are about 1 in 3400.
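to make the scale of that comparison concrete, here is a quick back-of-the-envelope calculation using only the two odds quoted above; the "times more likely" framing is mine, not the speaker's:

```python
# odds quoted in the talk, treated here as simple probabilities.
p_shot_at_school = 1 / 614_000_000  # k-12 student shot and killed at school
p_choking = 1 / 3_400               # odds of choking, as quoted

# how many times more likely is the far more mundane risk?
ratio = p_choking / p_shot_at_school
print(f"choking is roughly {round(ratio):,}x more likely")
# prints: choking is roughly 180,588x more likely
```

even with generous error bars on either figure, the risks differ by five orders of magnitude, which is the speaker's point.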
4:58 pm
in 1995, 10% of students aged 12 through 18 reported being the victim of a crime at school in the previous six months. in the 2015-2016 school year, just 3% of students did. so in that 20-year period, it went from 10% down to 3%, a pretty major decrease. and in general, over the last two decades, less than 3% of youth homicides and less than 1% of youth suicides have occurred at school. now, of course, part of the hope with social media monitoring may be that it will pick up risks off of school grounds as well, but by any measure, school is a pretty safe place to be. now, the one state in the country that has legislated social media monitoring is florida. i'm sure everyone here is familiar with the shooting in parkland, florida, last february, when nicholas cruz, a former student at marjory stoneman douglas high school, shot and killed 17 students and staff members and injured 17
4:59 pm
others. in the wake of that shooting, the florida legislature passed a law that included the creation of an office of safe schools within the state department of education. that office is required to coordinate with the florida department of law enforcement to set up a centralized database to facilitate access to a pretty wide range of information, including social media data. the legislation also established a public safety commission, which recently recommended the development of protocols around social media monitoring. it doesn't look like that collection has begun quite yet, but it is likely to do so in the new year. now, as it turned out, nicholas cruz actually had posted on-line about his intentions before the shooting. and people had taken notice. he was reported to the fbi and local police at least three times for disturbing posts. one call to the fbi warned that he might become a school shooter, and a separate call flagged a youtube post in which the user
5:00 pm
had said that he wanted to become a professional school shooter, although the poster wasn't identified as cruz until after the shooting. so while there certainly were warning signs on social media, it wasn't the case that the district was flying blind. people were seeing those warning signals and were trying to act on them. and what really failed those students wasn't a failure to see those posts. according to a review of the school district's actions that came out in august, it was more that the district itself had failed at nearly every turn to provide cruz with the educational and support services that he needed. nevertheless, florida is embarking on a first-of-its-kind national experiment when it comes to social media data. there's kind of a big question here, which is, okay, but why not? right? if a single school shooting is one too many, if social media monitoring could catch one future nicholas cruz, could catch one future suicidal
5:01 pm
student, why not do it? if the stakes are that high, what's the harm? well, there are a lot of reasons to at least be very cautious about this kind of monitoring. the first is a real concern about the accuracy of social media monitoring tools. and this plays out in a couple of different ways. one way that these tools can be inaccurate is through overreach, the fact that they are likely to pull in much more information than is actually going to be useful. by way of example, police in jacksonville, florida, set up a social media monitoring tool to search for key words that were going to be related to public safety or that might indicate some risk of criminal activity. one of the words that they set up was the word bomb, thinking if there was some kind of bomb threat, it would turn up. well, it turned out that there were no bomb threats that were flagged on-line. instead, the tool was inundated with posts about things like pizza
5:02 pm
that was the bomb and photo bombs, so a lot of stuff coming in of very, very little use. the second issue is underreach, by which i mean that the kinds of risks that social media monitoring tools would like to find often aren't going to appear on-line at all. i mentioned earlier that nicholas cruz had actually posted on-line about his intentions and that people had reported them, so it is not clear what the extra value of monitoring software would have been there. as it turns out, to some extent he was an exception. there was an informal survey of major school shootings (unfortunately, that is a category) since the sandy hook shooting in 2012. according to the public reporting, there was only one other perpetrator who had put up social media postings that strongly indicated an interest in school violence. that was adam lanza, the shooter in newtown, at sandy hook. he had posted in discussion
5:03 pm
forums about the columbine high school shooting, and he'd operated tumblr accounts that were named after school shooters. now, these postings weren't a secret. fellow users were able to see them, and while they may not have known at the time whether to take this seriously, it's hard to imagine that now these wouldn't be reported directly to authorities. in fact, we saw with nicholas cruz that that's exactly what happened: individual concerned users reported this in. the on-line profiles of other shooters in major school shootings, which again usually get quite a lot of reporting after the fact, don't show anything that would flag them for an automated tool. so for instance, the perpetrator of a 2014 shooting in troutdale, oregon, had a facebook page showing that he liked first-person-shooter and military-themed games like call of duty, and he also liked various knife
5:04 pm
and gun pages. in retrospect, sure, these seem like warning signs that something was going on, but in fact the official facebook page for call of duty has nearly 24 million followers, and the remington arms facebook page has more than 1.3 million likes, so sending up a red flag about every single person who enjoys these pastimes would create a huge quantity of noise for very little signal. finally, automated social media monitoring tools just have built-in shortcomings. i will flag a terrific report from the center for democracy and technology called "mixed messages," which covers a lot of research on this. as their research shows, automated monitoring tools generally work best when the posts are in english and when the tool is looking for something very concrete. they can be easily fooled by lingo, by slang, by pop culture references, things like that. maybe the best example comes from the 2015 trial of the
5:05 pm
boston marathon bomber. during the trial, the fbi produced as evidence several quotes from his twitter account, to try to show that he himself was an extremist, that he wasn't just following his brother's orders. so for instance, he had tweeted, and this is one of the things the agent brought up, a quote that said "i shall die young," which maybe was suggesting something about his intent, but it was also a quote from a russian pop song, and he actually linked to the pop song in the tweet. the agent just hadn't bothered clicking on the link to see this was a song lyric. other quotes the fbi relied on were from jay-z songs and south park episodes, among other things. social media is incredibly contextual, and neither automated tools nor, often, human analysts are that great at parsing out that context. the second major concern is the risk of discrimination. this comes in two forms. the first is that the key words
5:06 pm
themselves that the tools will be set to flag on will be discriminatory. so for instance, an aclu report found that when the boston police department set up a social media monitoring tool, the hashtags that it was flagging included black lives matter, ferguson, muslim lives matter, and the arabic word for community. needless to say, these words aren't signs of a public safety threat. so these tools are only as good as the people who are using them, and there are a lot of ways to use them to further a discriminatory mindset. the second is the risk of discriminatory impact. whatever key words are flagged, there's going to be a huge amount of discretion in what's done with the results, including which students are brought in, who is punished by the school, and even who is subjected to criminal justice consequences. we already know that students of
5:07 pm
color at every level of schooling experience harsher discipline than white students, even for the same infractions, and even when they commit infractions at lower rates than white students. there's a real concern that social media monitoring could contribute to the school-to-prison pipeline. consider the muslim teenager who brought a homemade clock to his dallas-area high school and was then arrested on the suspicion that it concealed a bomb. he was well known at his school for bringing in electronics, tinkering, and fixing other people's electronics, and he had told his teachers and the principal repeatedly that it was in fact a clock. it raises suspicions that the scrutiny he was put under, and his ultimate arrest, were essentially grounded in islamophobia. on the social media front, an alabama high school paid a former fbi agent to go through students' social media accounts on the basis of anonymous tips. the district ultimately expelled
5:08 pm
over a dozen students on the basis of what he found on-line. 86% of the students expelled were black, even though black students made up only 40% of the student body. now, not surprisingly, where people are mistakenly identified as posing a threat because of their social media posts, the consequences can be serious. one connecticut teenager posted on snapchat a picture of a toy airsoft gun that resembled a real rifle. in his words, in explaining why he put it up, he thought it was awesome and he knew his friends also thought it was awesome. another student saw it and was worried about it, so he reported it to school officials. this does not necessarily strike me as a crazy thing to do, although, as the student noted, if they had googled the name on the side of the gun, the manufacturer, they would have seen it was a toy gun, even though it did look like a real
5:09 pm
one. instead of discussing it with him and resolving the issue, potentially with some lessons about responsible social media use and thinking before you post, he was not only suspended for the day but arrested for breach of peace, a misdemeanor offense. now, because it is so hard to reliably pinpoint individual social media posts that actually indicate some kind of threat to life, monitoring companies have an incentive to sweep up everything, so they can assure their customers that they will spot that needle in the haystack. at the same time, they have very little reliable way of gauging their effectiveness. a 2015 investigation by the christian science monitor revealed that none of the three major school social media monitoring companies it looked into had firm metrics for measuring effectiveness. at least one said, basically, we know we have succeeded when we get a call from a school saying that, you know, something we
5:10 pm
sent them was interesting. it's really a perfect storm for a mindset of more, more, more. at the same time, parents and students often know very little about these tools. research shows that while social media monitoring companies may assume that students are assenting to be tracked by virtue of posting on public sites, students more often believe that companies are prohibited from sharing personal information with third parties. there's a real lack of information about how these programs operate, or rather, there's asymmetrical information. finally, and this goes to julian's point at the beginning, it's worth thinking about what it means for students to be under constant surveillance on-line. they will stop posting or post
5:11 pm
more privately, or, maybe more concerningly, it teaches students to expect surveillance, to anticipate an authority figure's opinion, and to react accordingly. now, some of this you could say is good digital hygiene. i think we all know that for something we post publicly, we need to think before we post about what it looks like, who might see it now, and who might see it in the future. but it's not clear that it's healthy for students who are learning about citizens' role in a democracy to know they are under that surveillance all the time and to be acting accordingly. so what does this all mean? at the very least, before a school or a school district rolls out a social media monitoring program, it's incumbent on officials to weigh the costs and the benefits and to involve parents and students in a frank discussion of what it means. and if they decide not to set forth on a monitoring program, they should remember that they are most likely not going dark. there are a lot of concerned people out there who will spot posts and flag them.
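the accuracy problem described earlier, especially the jacksonville "bomb" example, is easy to reproduce. below is a minimal, hypothetical sketch of context-free keyword flagging, not any vendor's actual product, showing how a watchlist term drowns the one meaningful post in slang:

```python
# hypothetical posts; only one describes an actual threat.
posts = [
    "that pizza was the bomb",
    "best photo bomb ever lol",
    "this mixtape is the bomb",
    "there is a bomb in my locker",  # the only post that matters
]

# a context-free keyword watchlist, as described in the talk.
WATCHLIST = {"bomb"}

def is_flagged(post: str) -> bool:
    """flag any post containing a watchlist word, with no sense of context."""
    return any(word in WATCHLIST for word in post.lower().split())

flagged = [p for p in posts if is_flagged(p)]
print(f"{len(flagged)} of {len(posts)} posts flagged")
# prints: 4 of 4 posts flagged
```

every post trips the filter, so a human still has to read all of them. that is the "huge quantity of noise for very little signal" problem, and it is why automated tools that cannot parse slang or pop culture references generate so many false alarms.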
5:12 pm
thank you very much. [applause] >> thank you very much, rachel. i have an acquaintance, a science fiction writer, who has a more optimistic view of this, saying this is great because we are training our children to develop habits of fairly sophisticated counterintelligence tradecraft just to be able to have a normal childhood, and so the next generation will be very sophisticated about evading surveillance. i suppose we'll find out. next, two talks that focus on privacy in public, in a sense: the myriad ways that, just walking down an ordinary city street, we are being observed in ways we may not recognize, and also the ways
5:13 pm
existing surveillance networks, like closed-circuit cameras, can be transformed in fairly deep ways by existing infrastructure becoming a platform for new methods of monitoring. so the first of these is going to be an examination of camera networks for facial recognition surveillance from the project on government oversight. >> hi, everyone. thank you very much for having me here. i'm a senior counsel at the constitution project, where i focus on surveillance issues, and i'm really excited to be talking about facial recognition and a specific aspect of facial recognition, how cameras in various settings can empower
5:14 pm
and grow facial recognition surveillance into dragnets. so here's a quick overview of facial recognition surveillance itself. this is no longer a sci-fi technology that we will see in the distant future; it is happening now. the fbi conducts over 4,000 facial recognition searches every month on average. a quarter of all state and local police departments have the ability to conduct facial recognition scans as well. customs and border protection has a biometric exit program that uses facial recognition for outgoing flights. they are planning to spread this to airports in general, as well as seaports and land ports across the country. and i.c.e. is looking to buy facial recognition technology as well. so that is the status of facial recognition. it is a very live and real surveillance threat. facial recognition depends on three key factors to be a powerful force for surveillance. first, you need a database of
5:15 pm
photos that are identified with people. the fbi has that. they have about half of all american adults in their photo database. second, you need powerful software that can scan across hundreds of millions of photos and scan faces rapidly. a lot of companies are developing this technology. the government is as well. third, and this is what i want to focus on, you need a network of cameras that you can tap into and that you can use to see people's faces everywhere all the time. now, there are four areas where you have the potential to build these camera networks: first, government surveillance cameras, cctv; second, police body cameras; third, privately owned security cameras; and last, social media photo databases. so let's start first with government surveillance programs, cctv programs. about a decade ago, then chicago mayor richard daley said that he expected one day we would have a police camera basically on every corner. i want you to keep that quote in mind as we talk more and more
5:16 pm
about cctv in american cities. but first let's go to where we truly have a cctv dragnet and where it seems that big brother status has been achieved, and that is in china. china has by far the most powerful network of government surveillance cameras in the world. the country has an estimated 200 million government-run surveillance cameras. and the effects of this are quite profound. if you look at cities, these networks are incredibly dense and incredibly powerful. for example, beijing maintains over 46,000 cctv cameras that blanket the city. state media and police in beijing boast that this network allows them to have 100% coverage of the city and see everything that is going on all the time. now this can have really powerful impacts for facial recognition. for example, recently a bbc reporter, to test the system, went to a city of 3.5 million people and gave his photo to the government
5:17 pm
to be put into its system, and asked them to find him using their cameras and systems. the software tracked him down and found him in that entire city of 3.5 million people in a mere 7 minutes. that is camera surveillance at its peak. cctv is also in america to a strong degree. it's already been instituted in large cities such as new york, chicago, washington, and los angeles. in new york, there is a cctv network hub. this is called the domain awareness system. the way it works is that you have all cameras networked into a centralized hub that can be subject to real-time viewing, analysis, and other tools. facial recognition could become one of those tools in the future. oakland considered its own domain awareness hub that would have hooked up cameras all across the city used by government,
5:18 pm
involving everything from the port authority to police cars to cameras outside schools. smaller cities such as st. louis and new orleans also have mass cctv networks and centralized hubs that are used to watch them. the city with by far the largest cctv network in the united states is chicago. chicago is the closest to achieving big brother status in america. right now chicago maintains a police surveillance network of over 30,000 total cameras in the city. this in some ways actually surpasses the level of surveillance dragnet that you will see in china. although 30,000 cameras in chicago is less than the total 46,000 in beijing, if you look at area density for cameras, the 128 cameras per square mile on average in chicago is far, far higher than in the beijing dragnet that covered 100% of the population. now, this can have really powerful effects for facial recognition, and it is starting to come to america. we're seeing this primarily
5:19 pm
first in orlando. orlando is currently running a pilot program with amazon's rekognition real-time facial recognition system. the way the system works is that you have cameras scanning throughout the city. they will try to scan faces, find people, identify them, and then flag any persons of interest. whatever persons of interest means, i'm not sure. so that is government cctv. next i want to look at police body cameras. this is probably the area of greatest risk in terms of establishing video surveillance dragnets in the united states. and the simple reason for that is that body cameras are becoming incredibly popular in american police departments. the largest body camera producer in the united states already has systems in over half of america's largest cities. this isn't a huge surprise because they offer their body cameras to police departments for free, so long as you then
5:20 pm
use their video storage system. studies from recent years indicate that 97% of the largest police departments in america either have body camera programs in place, are in pilot and testing stages, or, if they don't have them yet, are planning to build them in the future. this really is going to be a universal phenomenon: police wearing body cameras, and that being a common thing we see on our streets as cops walk by. why is this a really big deal for the proliferation of government surveillance cameras? it is because cities have a lot of police in them. cities typically have 16 to 24 police officers for every 10,000 residents. when you look at big cities, this number gets much higher. plenty of cities have as many as 40 officers for every 10,000 residents. d.c. is over 50. some cities are very densely populated with police officers. for example, ten different cities
5:21 pm
have over 20 police officers per square mile. topping the list is new york city, which has well over 100 police officers for every square mile. now, in terms of facial recognition, we have actually seen a little bit of progress here. axon recently backtracked on a long-term plan to put facial recognition in its body cameras. they acknowledged the fact that this tech really in a lot of ways is very flawed, very prone to misidentification, so they scrapped plans, which might have gone forward as soon as this year, to put facial recognition in their systems. but not all vendors are taking that cautious approach. some are charging ahead with facial recognition in body cameras. it is only a matter of time before companies like axon are satisfied that it is good enough for their work and begin to institute it. after all, an axon vp described their interest in body cameras a couple years ago by saying that by putting facial recognition in body cameras, one day every cop in america would be robocop.
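the matching step that such a camera network would run, comparing a face seen on the street against a photo database, can be sketched in miniature. this is a toy illustration under assumed conditions only: real systems derive embedding vectors of 128 or more dimensions from images with neural networks, while the gallery, names, thresholds, and tiny vectors here are entirely invented.

```python
import numpy as np

# hypothetical gallery: name -> face embedding (real systems use
# cnn-derived vectors; 4 dimensions here just for illustration)
GALLERY = {
    "alice": np.array([0.9, 0.1, 0.0, 0.1]),
    "bob":   np.array([0.1, 0.8, 0.3, 0.0]),
    "carol": np.array([0.0, 0.2, 0.9, 0.2]),
}

def identify(probe, threshold=0.8):
    """Return the gallery name whose embedding is most similar to the
    probe (cosine similarity), or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, ref in GALLERY.items():
        score = ref @ probe / (np.linalg.norm(ref) * np.linalg.norm(probe))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

print(identify(np.array([0.88, 0.15, 0.05, 0.1])))  # a probe close to alice
```

the threshold is what a deployment would tune: set it too low and the system flags innocent look-alikes, which is the misidentification problem axon cited.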
5:22 pm
this is very worrying because while virtually all police departments are charging ahead with police body cameras, very few are setting rules and standards for facial recognition. according to a scorecard on body cameras maintained by upturn and the leadership conference, basically no cities that operate body camera programs have effective rules on facial recognition, and that is many, many cities that are not acting with appropriate standards. so that's police body cameras. next i want to talk about private surveillance cameras and the capacity to build government surveillance networks from them. private surveillance cameras are similar to cctv. they are another way that government can potentially build out video surveillance networks, but do so with very little work, without the infrastructure, and at a fraction of the cost. we may not have the 200 million surveillance cameras that china does, but america does have over 30 million privately-owned
5:23 pm
security cameras throughout the country. so given that, given the potential to simply tap into these instead of building your own cameras, it is no surprise that government may want to do exactly that. by the way, as a quick aside, a couple of those cameras are amazon's ring doorbells. just last night news broke that amazon had patented technology to build facial recognition into those doorbells, connect them to police networks, and notify police whenever anyone suspicious came up. so, another fun innovation from amazon. police departments are not just thinking about this idea. they are proactively soliciting owners of private security cameras, asking them to register their cameras and to engage in formal agreements whereby those cameras can be accessed and readily used by law enforcement in video surveillance networks. so i mentioned new york before, and the domain awareness system that they have there that allows
5:24 pm
real-time streaming of video cameras. of the 6,000 cameras that are connected to new york's network, 2/3 are actually privately owned cameras covered by agreements that allow the new york police department to access and use them. washington, d.c. and a lot of other cities actually offer incentives to try to get people to hook their surveillance cameras into police networks. so here, for example, is the d.c. mayor saying: please purchase security cameras, please connect them to our networks, we will pay you to do this. excellent use of emojis, mayor. so that is privately owned security cameras. in terms of effect, it is very similar to government cctv. it's a network of stable cameras that can provide a video dragnet that could be used for facial recognition, but it is another way to build it out, and a pretty severe risk given that we don't have the option of stopping government in its tracks before it builds these cameras. the cameras are already there. we're just worrying about
5:25 pm
potentially having law enforcement tap into them. so last i want to talk about social media photos. this is a bit different in that we're not talking about cameras taking images, but rather images that are already being stockpiled. social media photos are potentially the greatest risk in terms of a photo dragnet that could be used by government for facial recognition, and that's because of the sheer size of these photo databases. we've already seen facial recognition used on social media to a limited degree by a firm. a few years ago the firm got caught and admitted that during protests in baltimore it had run social media photos through facial recognition technology to find individuals with any outstanding warrant and directly arrest and remove them from the crowd. now, luckily, when this came out as a product of aclu research, companies responded properly. they blocked and shut down the firm's access to their services. it is really important that
5:26 pm
social media companies continue to be vigilant on this front and limit their apis to prevent photo databases from becoming a means of government facial recognition surveillance. but i think it's also really important that companies start to think not just about data scanning and harvesting done openly on their platforms through api access, but also about court orders compelling scanning through those means. we have seen similar things in the recent past. a couple years ago yahoo! received and complied with a court order asking that they scan all e-mail content in their databases for specific bits of content that the government was looking for. it is not hard to imagine the government coming with a similar court order to someone who maintains photo databases and asking them to find very particular face prints. we have google talking about surveillance transparency and facebook talking about it. these companies maintain very
5:27 pm
large photo databases. google has more than 200 million users storing photos in its cloud photo service, including 24 billion selfies. facebook has over 350 million photos uploaded every single day. it would be really great as these companies continue to build out what are already fantastic surveillance transparency reports, which are getting better all the time, to think about possibly including a -- [inaudible] -- for facial recognition, so that if the government does come asking for broad access, or saying we want to start scanning all your photos with facial recognition services, we will be able to start acting. with that i want to conclude by talking about what actions we can take if we start to see these activities, and how we should respond. first of all, there's a lot of potential at the local level. before, i mentioned that oakland had a proposed domain awareness system that would have connected all their cameras to a hub. this was a success story. oakland activists got organized, got very mad, talked a lot to
5:28 pm
the government about it and got it shut down. that's the sort of thing we can see in other cities if we take action, and i want to give a shoutout to a great program that's going on right now. it is an effort to improve transparency and properly limit surveillance in cities all across the country. i'm sure as that campaign goes on, it is going to continue doing a lot of great work to limit video surveillance and to limit advanced surveillance tools like facial recognition being built into cameras. on the federal level, we have a lot of potential in terms of limiting and conditioning funds. so we talked a little bit about government cctv. a lot of funds for local government cctv networks don't come from those localities. they come from the federal government. doj funds cctv in police grants very often. for example, orlando, which is now running a cctv real-time facial recognition network, originally received funds for cctv from the department of
5:29 pm
justice. it would be great if in the future, when doj handed out funds for cctv video surveillance networks, it said you cannot use this for facial recognition, or set really strict guidelines and limits on how it could be used. dhs funds surveillance cameras for cities to a large degree as well. again, this is another opportunity where setting strict rules, guidelines, and limits could be very effective at stopping these video surveillance networks from being turned into mass facial recognition location tracking and scanning networks. finally, the department of justice also issues grants of tens of millions of dollars every year for police body cameras, but again, we see virtually no departments putting in good rules for facial recognition on body cameras. it would be an improvement if, when doj was handing out its grants for body cameras, it said you need to put in effective rules, guidelines, and limits to protect privacy before we give you all this money. so those are some actions we should take. i just think it is very important that we take them now, because we are very quickly
5:30 pm
approaching the point where we're all, on a daily basis, going to be much like that bbc reporter: tracked down through an automated computer system that's monitoring with a million little eyes. thank you very much. you can read more about our work on-line, and i'm looking forward to the rest of the conference. [applause] >> in order to encourage people
5:31 pm
to react to changes happening around them, they need to be aware of them, so the electronic frontier foundation has developed a tool to help people recognize the ways in which surveillance is exploding around us, and for that i want to invite dave maass. >> thank you for having me today. my name is dave maass and i'm with the electronic frontier foundation. we're based in san francisco, we've been around since 1990, and we exist to make sure our rights and liberties continue to exist as our society's and our
5:32 pm
views on technology advance. i work on eff's street-level surveillance project, which aims to ensure there is transparency, regulation, and public awareness of the various technologies that law enforcement is deploying in our communities, and a lot of times that work looks like filing public records requests. so with license plate readers, eff teamed up with the organization muckrock to file public records requests to find out how law enforcement agencies were sharing license plate reader data among themselves. or take drones: we filed a public records request for mission log reports to show how uc berkeley police used drones to surveil protesters in 2014. or we will file a public records request with a district attorney's office to get a spreadsheet with geolocations of every surveillance camera in their database, similar to what
5:33 pm
jake was talking about. this is all a problem because too often our work looks like this: we are chucking public records at people, saying here you go, here's documents on documentcloud, or here's the white paper we wrote, or a 3,000-word blog post, or even worse, it's me standing in front of you doing a powerpoint presentation, and if we're lucky, i have a funny cartoon to go with it. i don't have one, so i had to use this one. really, our work should look like this: contextualized within people's communities. if i could, i would run a walking tour company where i could take people around and show them the various surveillance technologies around them. but i'm a busy person, and i don't know that doing tour groups of six or seven people is the most effective way to get our message across. however, maybe this concept can transfer over to
5:34 pm
something like virtual reality. taking a step back, let's look at virtual reality and law enforcement technology. police are already working on virtual-reality stuff. there is a company in georgia called motion reality that has a warehouse-sized space where police officers put on virtual-reality helmets. they're given real-feeling fake electronic firearms, they're wired up head to toe, and they go and run scenarios that can be replayed back so they can see what they did right and what they did wrong. one of my favorite things is that they are covered in electrodes, so if they're shot, they get shocked and demobilized in that part of their body. there's a company that has taken one of these oculus headsets and modified it to serve as a replacement for field sobriety tests; the whole flashlight thing would happen within a vr visor. and there's a surveillance aspect. there is something called bounce imaging: it's a small ball covered with little cameras that a swat team
5:35 pm
officer might chuck into a hostage situation, and somebody could sit outside in virtual reality looking around before they go in, recording a 360 view of everything going on. what can we do on the other side with vr? a brief history of our organization: this is one of our founders, both a lyricist for the grateful dead as well as a digital pioneer, and he wrote an essay after he had gone and visited some of the early vr companies and came back amazed. he thought it was a psychedelic experience. he thought a lot of things were psychedelic back then, because he was on psychedelics a lot of the time. now we jump 25 years, because not a lot happened in between, but in 2015 we finally saw vr move toward the mass commercial market.
5:36 pm
this is the oculus rift; this is the playstation vr. they came out in early 2016, and for our organization, there were two big questions. first, what are the digital rights implications of virtual-reality technology for our society? and second, what is the potential for virtual reality as an advocacy tool and an education tool? on what i think of as the privacy element, the intercept had a great piece in 2016 hypothesizing that virtual reality might be the most nefarious kind of digital surveillance the internet has produced yet, and i tend to agree. it voiced a lot of the concerns i was having that we hadn't seen aired publicly yet. the reason is biometrics. virtual reality tends to rely on our physical characteristics in order to function on a basic level: how your head is moving, the distance between your
5:37 pm
hand and your head, how long your arms are. but even something as simple as how your head is moving in a virtual-reality environment can be correlated to mental health conditions. more advanced vr technology is starting to involve devices that measure your breath or track your eyes or map out your facial expressions, and that's a whole other world of violations. one of the creepiest things is when you have companies that, in order to gather biometric reaction data, are throwing stimuli at you in a fairly quiet manner without saying why, so they can find something measurable in how you respond. we're not going to get too much into augmented reality, but that's going to present even more problems; a lot of those devices are scanning the world around you in order to produce content. something interesting that came up as well is a research study by the extended mind, which found that, at the current state of play, 90 percent of vr
5:38 pm
users are taking some sort of steps to protect their privacy, whether that is adjusting their facebook settings or using an ad blocker, and while three quarters of users were okay with companies using biometric data for product development, the overwhelming majority were very much opposed to that biometric information being sold, anonymized or not, to other entities. as far as vr as an advocacy tool, we're not the first ones to try this. planned parenthood has an experience called across the line that puts people in the position of a woman trying to seek reproductive health services at a clinic that has a lot of angry protesters. peta has a couple of experiences they take around to college campuses and other locations where they challenge people to step inside a factory farming situation: what is a factory farm like for
5:39 pm
a chicken. and then there are some groups out of brookline, massachusetts, that worked with the united nations environment assembly to do virtual-reality visualizations of data on air pollution, and ran a bunch of un delegates in nairobi through it. so that brings us to spot the surveillance, a virtual-reality experience that uses a very basic simulation to teach people about the various spying technologies that police may deploy in their communities. when we were starting to pursue this in the early stages, we had considerations: we wanted it to be a meaningful advocacy experience, and we wanted to not collect biometric information. as an organization committed to open source and accessibility in technology, we wanted to make sure it worked on multiple platforms and not just the oculus or vive stores. and we wanted it to function on a modest budget, as we are a nonprofit and are not sony.
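the biometric concern raised a moment ago, that even coarse head-movement data can act as a behavioral signature, can be sketched with a toy example. everything here is synthetic: the users, the traces, and the two summary features are invented for illustration, and real research draws on far richer motion data.

```python
import statistics

def signature(yaw_trace):
    """Reduce a head-yaw trace (degrees moved per frame) to two
    crude summary features: average motion and its variability."""
    return (statistics.mean(yaw_trace), statistics.pstdev(yaw_trace))

# enrolled "users": one moves their head in small, steady motions,
# the other in large, jerky ones
known = {
    "user_a": signature([1.0, 1.2, 0.9, 1.1, 1.0]),
    "user_b": signature([5.0, -4.0, 6.0, -5.5, 4.5]),
}

def closest_user(trace):
    """Match a fresh trace to the enrolled user with the nearest signature."""
    m, s = signature(trace)
    return min(known, key=lambda u: (known[u][0] - m) ** 2 + (known[u][1] - s) ** 2)

print(closest_user([0.9, 1.1, 1.0, 1.2, 0.8]))  # resembles user_a's motion
```

even this two-number signature separates the toy users, which is the point: the data a headset must collect just to function can double as an identifier.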
5:40 pm
when i say meaningful advocacy experience, i mean we didn't want to rely on the novelty factor. you can take anything and put it in vr, and if it's somebody's first time they will say this is amazing regardless of what it is, but we wanted to make sure vr was presenting our research in a way that only vr could allow. we didn't want people to be watching a movie; we wanted them to be doing something, interacting with the world, and to be challenged by it. and we wanted people to learn information that they would carry back to the real world. so the concept is, once you put the headset on, and people will have a chance to demo it, you're placed in a street scene in the western addition neighborhood of san francisco where there's a police encounter going on between a young citizen and two officers, and as you look around and find something, you get a pop-up
5:41 pm
and a voiceover explaining what it is. it's not about how quickly you can go through it and score points; it is supposed to be an educational tool. and there were four goals. one was: can we do virtual reality, can we build an experience like this, and if we can do it this first time, maybe we can do other things down the road. number two was just to educate people about the forms of surveillance. then we also wanted to help people figure out where these technologies are in their own communities. and finally, we had this thought that police encounters are stressful situations, and protests are sometimes stressful situations. things move quickly, but it can be useful for people to take note of what surveillance technology they saw in the scene, so we imagined that putting people in a controlled environment where they can gain practice might carry over to those stressful situations. so we decided not to go with computer-generated environments and instead go with a 360 degree photo.
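the pop-up mechanic described above reduces to a simple check of where the user is looking. this is a toy version only: the item labels, angles, and trigger threshold are all made up, and a real implementation would test a 3d gaze ray against scene annotations.

```python
# tagged items in the 360 scene, by horizontal gaze angle (yaw, in degrees);
# labels and positions are invented for illustration
HOTSPOTS = {"surveillance camera": 40.0, "license plate reader": 250.0}
TRIGGER_DEG = 10.0  # how close the gaze must be to fire the pop-up

def check_gaze(yaw_deg):
    """Return the label of a hotspot near this gaze direction, if any."""
    for label, target in HOTSPOTS.items():
        # smallest angular difference on a 360-degree circle
        diff = abs((yaw_deg - target + 180) % 360 - 180)
        if diff <= TRIGGER_DEG:
            return label
    return None

print(check_gaze(43.0))   # near the camera hotspot
print(check_gaze(120.0))  # looking at nothing tagged
```

the wrap-around subtraction matters: a gaze at 359 degrees is only 1 degree from a hotspot at 0, not 359.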
5:42 pm
this is a ricoh theta; it's also on the screen. it has two lenses, one on each side, and each captures just beyond 180 degrees; the camera stitches them together so you're able to take a single photo. if i used it here, you would get all of this. the only thing you might not get is the very base of the tripod underneath the camera. this also helped us get past what people refer to as the uncanny valley: when it comes to video games, the more you try to create a realistic person or a realistic environment, the creepier it strikes people; by using a photo of a real scene with a few things photoshopped in, we bypass that altogether. this is what the photo we took looks like. obviously, once you're in the virtual-reality headset, it wraps all the way around you, but you can see there is a scenario going on, and you can kind of see at the
5:43 pm
bottom here, and this is what it looks like. you don't see this in the game, so this is a behind-the-scenes exclusive: we were hiding behind a longer version of this pole, which went about this high, outside a police station, hoping police would come outside, and eventually they did. and it being san francisco, nobody questioned two people with a piece of technology on the street, which was great because it was kind of the perfect shot for us. for those of you who won't have a chance to try it today, this is what it looks like: you get a pop-up about each technology explaining what it is, along with a voiceover, because it's such a visual medium and we didn't want it to be that you have to be fully sighted to enjoy this experience or learn from it. so if you're only able to see out of one eye, or you have limited visibility, you still have a certain amount of awareness of the environment, and you can learn things through the audio.
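the dual-lens design mentioned above comes down to simple arithmetic. the exact per-lens field of view is an assumption here (it varies by camera model), but anything beyond 180 degrees per side yields full spherical coverage plus an overlap band that the stitching software uses to blend the two images:

```python
# quick arithmetic behind a dual-fisheye 360 camera;
# the per-lens field of view is an assumed figure
lens_fov = 190          # degrees captured by each lens (assumption)
total = 2 * lens_fov    # combined coverage from both lenses
overlap = total - 360   # degrees of scene seen by both lenses

print(f"combined coverage: {total} degrees")
print(f"stitching overlap: {overlap} degrees")
```

with exactly 180 degrees per lens the overlap would be zero, leaving no shared pixels to align, which is why these cameras shoot a little wide on each side.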
5:44 pm
we did our beta launch in november. this is at the internet archive at the aaron swartz international hackathon, with brewster kahle, the founder of the internet archive, testing it out. but i think for the most part, we are looking at doing demos at tables like this. there are not a lot of people who have these devices in their homes at this point, even though this one dropped down to $200 recently. not a lot of people have it, but it's something we can bring to conferences, and we can have our grassroots activists bring it with them when they go on visits, just like they would bring one-pagers or brochures. and we've only run about 500 people through it in the last month, which, if you think about it in terms of the mission of the organization, is incredible: if you're able to spend nine minutes with somebody getting them to focus exclusively on surveillance, that is a lot. and it was available on the internet, so
5:45 pm
one of the things i found gratifying is that portland, maine is about as far from san francisco as you can get in the united states, but we see there are hacker spaces and media labs there trying this out and having people demo it, and we started to see social media respond. my favorite is this one in the middle, and we think "lol" is exactly what we were going for with this. so i feel pretty good about that. as far as next steps, we're still in beta mode. we're going to be doing demos to gather user feedback, and we're going to improve the experience. one of the things with open source technology is that sometimes there might be a tweak in a library and then everything breaks. we have bugs, and we have to fix them, and we need to
5:46 pm
get everything stable for a 2019 launch, and once we have that, we will send it out to communities and come up with an educational curriculum. but after that, we have to ask: what will the next version of this project be? we have a few ideas. one is to do an internet of things version, where you look around a home and see a printer and all these other ways you might be surveilled by your devices. or, since not everyone is interested in san francisco and they may want to know what it's like in iowa or new york city, maybe we build the same thing for various areas. or maybe we abandon vr altogether, go to ar, and have some way for people's phones to project things into the world for them. all of these depend on how the technology develops, what kind of interest we get, whether there's a return on investment, and what kinds of grants there are. it's out in the world and we don't know where it's going to be in a year or five years, but i can tell you where it's going to be after lunchtime, and that is just outside the lunchroom. you can try it out; i've got two of the devices, and i can
5:47 pm
show you how the camera works. that is all i have; if you have a headset or want to play around, it's at eff.org/spot. >> if you've had an opportunity to observe the tetris effect, it's the idea that when you play games that involve recognizing things, very often it spills over into non-game life. it's named after the way people who play a lot of tetris start seeing shapes everywhere and fitting them together. it shows up in the assassin's creed games as the "bleeding effect," where someone reliving a simulation of an ancestor's life takes on their sort of superhuman murder abilities, and that seems unrealistic and undesirable. but it might be desirable to imagine a population trained in that way on spotting
5:48 pm
surveillance technology in the world around them. >> turning back to the question of encryption: as we heard earlier, law enforcement has for years now been complaining that the spread of encryption is causing it to go dark, making it more difficult to conduct electronic surveillance of communications. but there's a fascinating report from the center for strategic and international studies that points out that a lot of the difficulties law enforcement is having with intercepting communications don't really have much to do with the need for backdoors, and that there is a lot of low-hanging fruit left on the table that we ought to examine before we talk about legislating breaches in platforms or in the tools that rely on them.
5:49 pm
>> i'd like to hand it over to jen; the report is on the table outside. >> thank you, and thanks to cato for putting on this conference. the focus of my talk today is the range of challenges that law enforcement faces in accessing digital evidence, separate and apart from the encryption challenges. this talk stems from a report that i worked on with a co-author under the auspices of the center for strategic and international studies, or what many of us know as csis. the encryption debate undoubtedly will continue, but as i would argue, emphatically more so after writing this report and
5:50 pm
working on this project: while encryption has taken up so much of the limelight, there are a range of other challenges that law enforcement faces that need to be dealt with. and they can be dealt with relatively easily, and they need to be dealt with, because these challenges will continue no matter what happens with respect to encryption; even if there were ever a clear decryption mandate, there would still be other ongoing challenges. and so, as our title tends to indicate, these are problems that we think can be relatively easily solved. not completely; nothing in this space ever leads to a complete solution, and we make a mistake if we assume that we are seeking a complete solution or that we're ever trying to totally eliminate friction in the process. some of that friction is in fact
5:51 pm
healthy, but some of the friction is unnecessary and actually collectively harmful to both security and privacy, and minimizing that friction is not only a laudable goal but one that is achievable. so to that end, i'll note that the report that we worked on was endorsed by a number of individuals and also groups: it was endorsed by former cia director john brennan, former fbi general counsel ken morningside, two former deputy attorneys general, former boston police commissioner ed davis, and former assistant attorney general for national security david kris. it's also been praised by a number of different groups, and several providers have already introduced a number of reforms consistent with what we called for in this report. so now that i've given you the hard sell, i'm going to spend the remainder of my time talking a little bit about the methodology that we used in doing this report, a little bit about our findings
5:52 pm
and our ultimate recommendations. this report stems from about a year's worth of research, including a series of qualitative interviews with state, local and federal law enforcement officials, prosecutors, representatives from a range of different companies, and members of the civil society community, and it also involved a quantitative survey of state, local and federal law enforcement officials. the survey results are notable; hopefully you can all read them. according to the survey results, those surveyed reported difficulty accessing, analyzing and utilizing digital evidence in over a third of their cases. we believe that's a problem that's only going to continue to grow as digital information becomes more and more ubiquitous and as digital evidence is needed in just about every criminal investigation. this shows the responses to the question: what is the biggest challenge that your department encounters in
5:53 pm
using digital evidence? accessing data from service providers was ranked as the key challenge amongst our respondents, separate and apart from the questions about encryption. identifying which service provider has the data was reported as the number one challenge, with 30 percent of our respondents ranking this as their biggest problem. obtaining the data once it was identified was reported as the number two challenge, with 29 percent of our respondents ranking it as their biggest challenge. accessing data from a device was ranked by 19 percent as the biggest challenge, and collectively, analyzing data from devices and analyzing data from providers, which are two separate things, combined for 21 percent ranking that as their biggest problem. this is important because
5:54 pm
these are problems that can be fixed, or at least largely reduced, without huge changes in the system, but with more resources and more dedicated, systematic attention to addressing them. to the extent that law enforcement doesn't know where to go to get data of interest, that is a problem that can be solved with better information flows and better training. to the extent that law enforcement faces challenges in obtaining data, that is a bigger challenge, and we heard two different stories from the law enforcement officials we talked to and the provider community. the law enforcement officials talked about what they perceive as long delays in getting information back from service providers, what they perceive as service providers dragging their feet, service providers
5:55 pm
having insufficient resources to respond to their requests, slow-walking responses, and turning down requests in what they perceived to be invalid circumstances. providers, on their side, told us a different story. they complained about what they saw as overbroad requests, about law enforcement asking for things that simply weren't available, about delays being the fault of law enforcement as they were internally debating and deciding whether or not to get nondisclosure orders that would prohibit the provider from telling their customer or subscriber that data had been obtained, and about providers holding off on law enforcement requests, not turning over the data until they learned whether or not they had permission to tell the customer or subscriber. the data interestingly supports both sides of the story. this chart shows the requests that u.s. law enforcement issued to these companies: facebook, microsoft, google,
5:56 pm
twitter and apple, based on the companies' own transparency reporting. there is no other good source of this data. and not surprisingly, you see from this chart a pretty dramatic increase in requests over a short time. these show requests in six-month intervals, so in the six-month period ending in december 2013 there were about 400,000 requests to these u.s.-based providers. by december 2017, requests had increased by a significant amount, to about 650,000, almost 700,000, in the prior six months. now what's interesting about this chart is that the grant rate has hovered more or less at about the same level, about 80 percent, and been consistent over time in terms of the percentage of requests or demands providers complied with.
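the arithmetic behind that observation is simple but easy to miss: a roughly constant grant rate applied to a growing volume of requests still means a growing absolute number of denials. a minimal sketch, using approximate figures from the talk (the 80 percent grant rate and the six-month totals are rough illustrative numbers, not the exact values from the transparency reports):

```python
# Illustrative only: approximate six-month totals of u.s. law enforcement
# requests to the five named providers, and an assumed ~80% grant rate.
GRANT_RATE = 0.80

def denied(requests, grant_rate=GRANT_RATE):
    """Absolute number of requests not complied with in a period."""
    return round(requests * (1 - grant_rate))

# Six-month period ending Dec 2013 vs. the one ending Dec 2017.
for period, total in [("2013H2", 400_000), ("2017H2", 675_000)]:
    print(period, denied(total))
```

so both sides can be right at once: providers comply at the same rate throughout, while law enforcement sees tens of thousands more denials per period.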
5:57 pm
but that also means that the absolute number of requests being turned down, the number of requests or demands not being complied with, is higher, given that there's a bigger volume of actual requests. so to some extent, law enforcement is seeing this growing number of request denials, while providers are saying we're consistent in how we've been treating them. a few caveats: the data only shows where requests were made, not where requests were not made because law enforcement didn't know where to go or was otherwise forgoing requests, and the grant rates say nothing about the legitimacy of either the requests or the grounds for denying them. and there is, and there should be, some ongoing disagreement about the appropriate scope of the requests. this is an area where some friction is not only healthy but actually productive, and it's going to be present inevitably because of differing views about the appropriate scope of these
5:58 pm
requests. but there's also a number of areas, with respect to grant rates and law enforcement issuance of requests to providers, where there is unnecessary friction, and some of that reduction can support privacy and security at the same time. some of the things that can be helpful in this regard are better, up-to-date law enforcement guides, resourcing of law enforcement response teams by the providers, better training and dissemination of that training to state and local law enforcement officers, and better training of the judges that review and approve the range of requests subject to court order or warrant. these have obvious security benefits in providing law enforcement a more streamlined ability to access data of interest, but they also have privacy benefits in leading to better tailored, more privacy
5:59 pm
protective and, as a result, more narrow requests. to the extent that law enforcement cannot interpret the data that is disclosed, this is a problem that stems in part from encryption, but also, as we heard over and over again, from the absence of technical tools to decipher even non-encrypted data that was disclosed. so this is a problem that results partly from an absence of tools, and partly it's a distribution problem: sometimes some of the bigger law enforcement entities would have access to the appropriate tools, but they were not disseminated to the 18,000 state and local law enforcement entities that exist around the country. so despite what appears to be a pretty clear and pretty easy to identify solution
6:00 pm
of resourcing, training and dissemination of tools, the one federal entity with an explicit mission to better support cooperation between law enforcement and providers, the fbi's national domestic communications assistance center, has a budget of just 11.4 million dollars a fiscal year, spread out among several different programs. it's assigned to distribute knowledge about service providers' policies and products, develop and share technical tools, train law enforcement, and maintain a 24/7 call center, among many other initiatives. that's a drop in the bucket relative to the need that's out there. similarly, one of the most highly regarded training centers, the national computer forensics institute, run by the secret service, survives on year-by-year appropriations. this year it received 1.9 million dollars, enough to train about 1,200 students; even if it were fully funded and could train over 2,000, that is just a drop in
6:01 pm
the bucket when you consider that there are 18,000 federal, state and local entities across the country. and that's just the number of entities, not the number of individuals working in them. there are a range of state and local training centers and other federal resources that have arisen to fill some of these gaps, but as you can see, they are not evenly distributed geographically: there's a much higher concentration on the east coast and the west coast, with a big swath in the middle where there's not much in terms of resources and training centers. and there's no single entity responsible for determining what's out there, what works, what doesn't, and how best to allocate these resources. and this brings me to our recommendations, the first of which is the creation of a national digital evidence office that's authorized and resourced by congress and
6:02 pm
would sit in the doj, and that would do the kind of work that is needed to both assess what's out there and ensure a more efficient and reasoned distribution of resources: developing a national digital evidence policy ... verifying that whoever is asking for the data is, in fact, entitled to receive that data, coordinating with some of the interesting international efforts that are ongoing, and reporting to congress and promoting transparency about what is, in fact, going on. we've also called on congress to
6:03 pm
authorize ndcac within the fbi -- it does not have an independent authorization at the moment -- and to resource it, so you have synergy between the technologists, who are actually aware of what's going on with the technology and are aware of the challenges in the field, and some of the policy folks, and, again, to allow it to do what it already is trying to do on a very slim budget: conduct and disseminate trainings, gather and disseminate information about service providers, develop and disseminate technical tools, and provide a hotline system. and then we've also included a series of recommendations to providers to step up some of their training efforts. having a centralized body where they can go can help facilitate
6:04 pm
that. one of the things we heard from providers over and over again is, we do do trainings, but there are 18,000 federal, state and local law enforcement agencies in the country; it's like a cat and mouse game. having some centralized place that can disseminate the training and lead to better, more tailored requests is helpful for the law enforcement folks and for the provider folks. so the recommendations to providers: provide trainings; maintain online portals to facilitate the request process and to help to some extent with authentication; provide explanations for rejections so there can be a dialogue; ensure appropriate staffing to meet the needs; and provide rapid responses. what counts as an adequate response time is going to change based on what is being requested, so we don't include specific time limits. and also, maintain the transparency reporting that providers -- at least the big providers -- already are doing with respect to the law enforcement requests that they get, but break that down
6:05 pm
even more in terms of the categories of requests and, over time, a range of other, smaller categories as well. and finally -- this is not on here, but it's important to think about as well -- some of the bigger providers should work with and help develop best practices for the range of smaller providers that are increasingly coming on the market and that are going to have to deal with this bucket of issues as well. so i'll end by just saying that the challenges are only going to grow over time. we think this is low-hanging fruit, hence the title. these are structures and resources that need to be put in place now, because the needs are only going to expand as we move forward. and in our view, this has benefits for security and privacy and allows us to do something as the debates about encryption continue to rage. so thanks.
6:06 pm
>> that reminds me of a comment i heard from somebody who worked for a tech company, about getting a confused e-mail from a law enforcement officer: these files you sent us that we requested, they're all encrypted, can you help us? and they weren't encrypted -- [inaudible] very often there are automatic calls for greater authority to solve problems that are really more about institutional competence and knowledge and the ability to navigate changing technological structures than about a need for more power. but that is always the easiest demand to make. so i want to thank our flash talkers, and i want to invite you to join us upstairs for lunch. i hope you'll join us for the
6:07 pm
afternoon session as well, and in particular, that you will stick around -- or if you have to leave, return at the end of the day when we'll be leading a group over to the american art museum for a tour of the exhibition. please join me in thanking our speakers one last time. [applause] [inaudible conversations] >> employers added 312,000 jobs in december. even so, the labor department says the unemployment rate rose slightly to 3.9% due to a surge in workers getting back into the job market. average hourly pay improved 3.2% from a year ago.
6:08 pm
>> this weekend we go to santa monica, california. we highlight santa monica's literary life and history. saturday at noon eastern on booktv, a visit with journalist, author and professor saul reuben as he describes santa monica's culture, economy and more. >> santa monica is a progressive southern california beach city, and it's a major tourist destination. it's most well known for being a place where people might come to enjoy the day, be it tourists, and also now it's a popular place for young tech start-up companies. >> and on sunday at 2 p.m. eastern on american history tv, santa monica pier historian jim harris, author of "santa monica pier: a century on the last great pleasure pier," shares the history of this iconic
6:09 pm
landmark. >> -- iconic landmark. >> we see almost nine million people a year come to the pier, and that's people of all walks of life, all income levels, all interests. there's almost as many different reasons to come to the pier as there are people that come to visit it. i think if you were to walk down the pier today, on any given day, and ask what brought them here, you'd get a different reason from each one of them. >> watch c-span's cities tour of santa monica, california, saturday at noon eastern on c-span2's booktv and sunday at 2 p.m. on american history tv on c-span3. working with our cable affiliates as we explore the american story. >> up next, the son of the last shah of iran talks about the role of civil disobedience and the impact foreign governments could have on that approach. the washington center for near east policy is the host of this event. [inaudible conversations]

