
Social Media Companies' Legal Responsibility Regarding Users' Content | C-SPAN | September 7, 2019, 4:10am-5:39am EDT

4:10 am
next, a debate on the legal responsibility social media companies should have, under section 230 of the 1996 communications decency act, for the content that users post on their platforms. this is about an hour and a half and was hosted by the american enterprise institute. >> this fantastic event you have chosen to attend, watch online or view on c-span is about whether we should reform section 230. i want to start with an interpretive reading of section 230. it is not a long section. it is brief but important, and it states as follows: no provider or user of an interactive computer service
4:11 am
shall be treated as the publisher or speaker of any information provided by another information content provider. for more than 20 years, since the birth of the internet age as we know it, section 230 has provided websites immunity from liability for what their users post. this legal and policy framework has allowed youtube users to upload their own videos, amazon and yelp to offer countless user reviews, and facebook and twitter to offer social networking to billions of internet users. given the sheer size of user-generated websites, it would be impossible and unfeasible for intermediaries to prevent objectionable content from cropping up on their sites. in short, section 230 is perhaps the most influential law protecting the kind of innovation that has allowed the internet to
4:12 am
thrive since 1996. yet over the past few years section 230 has faced heightened criticism from the left and the right. many critics on the left argue that social media giants enjoy special protection: they allow speech without worrying about consequences, resulting in harassment and the increased prevalence of hate speech online. other critics on the right have argued that these companies have gone too far in policing these services, or at least have gone too far in policing conservative voices, and have enacted a biased double standard. in light of these criticisms, we have seen calls to reform this section, and it is these calls and this debate that we are going to discuss today. i often use the phrase "an all-star panel." this time i mean it especially. this is an all-star panel. immediately to my right is danielle citron, the author of the 2014
4:13 am
book "hate crimes in cyberspace." next to her is jim harper, and over on the end is jeff kosseff, assistant professor of cybersecurity law in the naval academy's cyber science department and author of the book "the twenty-six words that created the internet." welcome all. let me start with jeff. what do we know for sure, or reasonably for sure, was the intent behind section 230? why did legislators feel the need to bring this into being? jeff: there are two main reasons, when you look at the legislative
4:14 am
history and when you talk to the people involved in drafting and advocating for section 230. the first was there was really a concern at the time -- it seems kind of quaint now -- about legal pornography available to children. that was the biggest concern at the time. there was a series of court rulings under common law and the first amendment which basically stood for the proposition that if you did not moderate content, you could increase your chances of being protected from liability for all user content, but if you started to moderate, then you could expose yourself to liability for defamation and other types of harm. there was really a concern that the system was giving platforms incentives not to moderate. that was one driver of the section. the other was, you have to think about the time, 1995: the modern internet was really in its infancy. congress wanted to get this out
4:15 am
of the way of the development of the new technology. between those two reasons, those were really the driving forces behind section 230: to get regulation and lawsuits out of the way of the development of the new technology. jim: everyone can sort of jump in, you don't need for me to call on you. i should have mentioned that earlier. there seems to be confusion about the original intent. i think that's what you try to dispel with your book. what is the confusion? that seems straightforward. >> let me say i am only speaking on my own behalf. jim: just sort of the narrow issue of what was the point of this to begin with? >> speaking just on my own behalf, not on behalf of dod, there's a lot of confusion --
4:16 am
i would say the biggest bit is the idea that section 230 is conditioned on neutrality, that it only applies to neutral platforms, whatever those might be, rather than publishers. the point was to eliminate that distinction and to allow platforms to exercise discretion, to figure out what their users should or should not be exposed to. that's probably the biggest. >> the issues that were foremost at the time were about pornography and defamation, not so much about politics, were they? >> not at the time. there was a statement in the findings for the bill that said something along the lines of wanting to foster political discourse, a recognition that there would be free speech. to that extent there was, but there was no
4:17 am
conditioning of neutrality for section 230 protections. >> i think we are forgetting the title of section 230(c), which is "protection for 'good samaritan' blocking and screening of offensive material." the idea was it would apply to good samaritans who were protecting, blocking and filtering offensive speech. absolutely, the idea was to incentivize self-monitoring, to provide protection for those blocking and filtering offensive speech, which is often completely legally protected, but we wanted to incentivize -- jim: what did people think they meant by offensive back then? >> the text of (c)(2) allows good faith efforts to block
4:18 am
objectionable content or harassment. but congress wanted to give a very wide range of discretion to these platforms to block content and not be exposed to liability. jim: ok. just a brief legal point: were they not already protected under the first amendment? i hear that a lot. why did they need this extra protection? >> there are a lot of different protections, but one is for content that others create, and the rule is that if you are a distributor you can only be liable for others' content if you either knew or had reason to know about it. the problem became that the courts started saying if you start to moderate, you don't receive that protection and you are strictly liable just like a newspaper. that was the weird system that was created under the first amendment. >> and it was common law development; that is, the courts were looking at this new medium
4:19 am
and coming at it from different places on whether a platform provider was the publisher or was a speaker. in my opinion the law jumped ahead to what i think was the right result. it is unfortunate that we got there through legislation rather than common-law development. it would have taken years, but then again here we are talking about it. if the rule had been generated from the bottom up through experience, i think it might've been stronger, a better understood rule, one based deeply in practice, rather than this over-the-top legislation, which i think is pretty good but not perfect. jim: given that, a little bit on the history: how controversial was this? did it slip through where people kind of did not understand all of the consequences, or was there much real debate? >> there was a little bit of
4:20 am
discussion, no real opposition on the floor. it was attached to the telecom bill, because this was part of the broader overhaul of telecom laws in 1996. section 230 was really seen as an alternative to the communications decency act, and both of those ended up being put in the telecom act. the communications decency act imposed all sorts of prohibitions on speech that was indecent, and the supreme court struck that down. section 230 did not really receive media coverage, and when it did -- for most there was not a notion that it would shield platforms from
4:21 am
liability for all types of user content, so it was really an afterthought at the time. >> so we have this law that shields internet companies. how does it work in western europe? do they have their own version of this? >> they have rules. europe has a number of rules, but it varies a little bit, and it gets into the classification of whether a service is a conduit or some other kind of service. there are some cases where the courts said not only do you have to take down content if you get a notice complaining about it, you also might have a duty to actively patrol for user content that violates defamation laws or hate speech laws and things like that. it is much more restrictive there than it is here, or than in most other jurisdictions. >> you said it protects big platforms,
4:22 am
but it also protects small platforms. my thinking on this: i ran a site which was a government transparency site. don't go look at it now, there's nothing but broken dreams there, but when web 2.0 came along, i changed it, and that's when people actually started to be able to comment and vote and add their own material. we had a comment section. there was a bill to extend welfare benefits, unemployment benefits, and it had something like 200,000 comments. i was the only guy, in my spare time, watching what was going on on the site. there were people who were behaving horribly to undermine the conversation. there were people who were telling me that someone else was using their name, and i was called on to moderate the situation. i wasn't consciously using section 230, just trying to do my best to create a good environment. it is extraordinary the lengths people go to to undermine a site
4:23 am
because they thought i was in favor of this legislation. it was meant to be a neutral platform. unfortunately the internet, for all its blessings, allows people to be truly wicked from behind anonymity. they take every effort to upset each other, to undercut the platform, so it's an area that is fraught. section 230 is a very important protection. >> it is not just to make people upset. people use these tools to destroy lives. they post nude photos, and victims can't get a date, keep their jobs, get a new one. they threaten. they post defamatory information,
4:24 am
information that one can never respond to -- i am not a prostitute -- so the kind of stuff we have seen isn't just things that hurt feelings, not just offensive speech. it is speech that we would say is not even legally protected, that we can criminalize and impose liability for. there are sites that make a business out of hosting this kind of thing. they say, sue me, good luck, i am immune from liability. and people's lives are ruined. i think it is just important to note this mischief. i am all for filtering crazy comments, but it is far more than that: these are wonderful tools used for great things and also for ill, and we see a lot of illegality and destruction of people's lives. >> indeed. we have this
4:25 am
legislation that was not controversial, now very controversial, for different reasons from different perspectives. but just one second before we get to that. you mentioned that perhaps some of these protections could have evolved more organically rather than through legislation. but given how the internet developed, if section 230 had not happened, what might the internet look like today? how might it have evolved without it? you have an argument on that. >> i'm a little skeptical that the common law would have developed to the extent of the protection that 230 provides.
4:26 am
i think some of the early court cases were not very well reasoned and really imposed significant liability, but i don't think the broad protection that this section provides for third-party content would've been the same under common law. there probably would be more of a notice-and-takedown type system, where if you get notification of potentially illegal content, your choice is either to take it down or to stand in the shoes of the person who posted it and defend it, which most rational platforms are not going to want to do because they don't want to litigate all these defamation cases. i think you would not see the same extent of social media, all these services that are not coincidentally based in the united states. >> the giants we have today: do you think 230 is a big reason why we have them and others don't? >> very much so. i think they were able to develop in the way they did because of section 230. we would probably
4:27 am
have different types of services, but they would not have the same rules that they have right now without section 230. >> maybe social media companies like in china, with complete censorship and control. it's not like we wouldn't have any companies -- >> posting dog videos, right, right. >> things that were pretty benign. >> i think it's right that we got the big social media we did because of 230. is that right or wrong? is it a liability-free subsidy? a little bit less liability for social media platforms meant we got a huge industry, but it came at a cost: the victimization of people who are in some cases very sympathetic, as with other industries. >> and some of those
4:28 am
externalities include speech. when you are targeted with harassment online, you are so often silenced, and we forget that those negative externalities are lost jobs and potential physical violence, but also lost speech. >> and the externality seems controversial now, when we are talking about how to regulate big tech. but those externalities and victims were there from the very beginning, and i know in the book there are several interesting cases of people who were actually hurt, with extremely sympathetic stories. but were there any calls back then that this is generating victims, we need to do something, or did it pretty much continue to the present, ignored by policymakers? >> i think there were some tough cases. >> what about senator
4:29 am
lieberman -- wasn't that 2004? -- he wanted to pressure google to take down videos, and that just got ignored. there were some efforts at the time. >> i guess what i am saying is the public attention was not nearly the same, and there were tough cases in the beginning. there are a lot of people arguing there have only been difficult cases recently. the second case ever litigated was a case where a mother sued aol because there was child pornography with her teenage son in it that was being marketed in an aol chat room, and she kept contacting them and they would not prevent it. that is a difficult case. it's not like there were only defamation cases in the early stages. there were tough cases all the way from the early days of section 230.
4:30 am
>> but child pornography -- doesn't that fall outside of the section? platforms do not enjoy that immunity against federal criminal law. >> but she was suing for negligence. >> i got you. >> the problem is the state of changing technology. in the early days, you had manual processes entirely. nowadays, for a thing like child pornography you can kind of automate tracking, and those systems are in place: you can get a hash of known child pornography files and you can review anything that matches. there is a case where aol found child pornography and delivered it; justice gorsuch wrote the opinion, and the opinion is very good. but automated review for things that are self-evidently criminal seems like an area where you might be able to hold platforms liable.
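the hash-based screening the panel describes -- comparing uploads against a database of digests of known illegal files -- can be sketched in a few lines. this is only an illustrative sketch, not how any named platform actually implements it: production systems such as microsoft's photodna use perceptual hashes that survive resizing and re-encoding, whereas the exact SHA-256 match below only catches byte-identical copies. all names here (`KNOWN_BAD_HASHES`, `screen_upload`) are hypothetical.

```python
import hashlib

# Stand-in for a database of digests of known-bad files. A real system
# would use perceptual hashes (e.g. PhotoDNA) so that resized or
# re-encoded copies still match; SHA-256 is shown only for illustration.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-file-bytes").hexdigest()}

def is_known_bad(file_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad digest."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

def screen_upload(file_bytes: bytes) -> str:
    # Block automatically on a match; everything else falls through to
    # the normal (human or heuristic) moderation pipeline.
    return "blocked" if is_known_bad(file_bytes) else "accepted"
```

the point the panelist is making is that this kind of check is cheap and mechanical for byte-identical known material, which is why automated screening is feasible there in a way it is not for context-dependent harms like defamation.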
4:31 am
it's important because if you are talking about defamation or stalking, it is really hard for a person coming to a conversation for the first time to figure out whether someone is defaming another. what is the reality of it? someone contacted me saying someone was using their name. i know even rare names are repeated many times around the world. i could not bring myself to take something down as identity fraud, because the same name could be being used by another person out there in the world. i don't think the people who are faced with these decisions, with myriad different kinds of problems coming to them, are going to be able to make those kinds of determinations readily. you might have a
4:32 am
liability that properly attaches if the content is self-evidently criminal. >> so there have always been these stories, but it sounds like that alone is not the answer anymore. is it the case that you have these big platform companies that are in the news, and so it's sort of, why now? was there a triggering event, or is it just that tech is so intrusive in our lives that we are taking a closer look at these companies and what regulations and laws apply to them? >> a couple of things, coming together all at once. i feel like we could each take a thread and bring it together, right? people don't
4:33 am
like tech companies as much as they used to. >> this is a wonderful opportunity for every senator to hop on the bandwagon: you bear full responsibility for all mischief online. it was sort of an easy potshot. and then, what else can we throw on this? we have russia and disinformation, and blaming the platforms for not doing anything about it when they were caught unaware, right? i don't want to take all the thunder. i feel like we all want to pitch in something. >> i would also say one thing i have observed is that traditional news media perhaps is not very happy with the business models of some of the large platforms. >> it seems like one criticism is that it enables those models,
4:34 am
not just about moderating and shielding companies, but being part of why these companies are worth $500 billion. >> and they are not nascent. in the minds of critics, they are engaged in surveillance for tremendous amounts of money and they are stealing the business of news companies. one problem here is familiar to students of regulation: if you were to deeply undercut these protections, the big ones would be able to navigate the new law, and the little ones trying to challenge them would not be able to navigate it cost-effectively. like regulation typically does, it would lock in the status quo in the marketplace. so the natural decline of facebook would be delayed if not foreclosed by doing away with 230 or deeply undercutting its protections. section 230 is
4:35 am
there as much for the small ones as it is for the big ones. >> i have an approach for that, but we can talk about that later. we don't have to get there yet. >> i would also say the larger platforms have had a lack of transparency. they have started to change, but i think it was often like a black box to figure out what was going on in there. >> that's particularly a complaint among conservatives who think the companies are biased against them: i don't see why that person is not being taken down, and why not that person? do i call the 800 number if it happens to me? what do i do? they don't understand the process. when i went on twitter to discuss this, there was a statement that section 230 is terrible,
4:36 am
that what it does is enable hate and harassment. you have written about this. >> i have, and to some extent that's right: it is a free pass. in 2017 we proposed a way to fix the statute without doing too much violence to all the great things that it does, because section 230 provides this safe harbor which, when it comes to under-filtering, is not conditioned on good behavior. when it comes to over-filtering, we condition it on good faith. but it doesn't condition that immunity on any good behavior. it just says you have this immunity, we are not going to
4:37 am
treat you as a publisher of others' content. when that is interpreted broadly, it means that even websites that encourage nonconsensual pornography, whose business model is abuse, sites that traffic in deepfake sex videos, get to enjoy immunity from liability even though they are not engaged in any kind of moderation. can i talk about my proposal? >> not yet. this is a stay-tuned moment. the proposals are coming. >> i don't want to get rid of it. i think section 230 has been incredibly important. it was important from the early years, from which we do have yelp, facebook and prosocial
4:38 am
activity. >> it seems like all of these positive things are most of the stories that you hear -- >> we have wikipedia. that is pretty prosocial. but whether it is harassment or hate speech or people uploading really violent videos, these companies have been given a lot of power and got very big, and they show no responsibility. with great power comes great responsibility. >> where were all these politicians, now so exercised about hate and harassment, when people were victims of cyberstalking? they don't really care. all of these years i've been writing about it, no one said a word. it's only when it is your own
4:39 am
ox being gored that you think about it. the companies -- actually, the largest platforms have been working on these issues, and i have been working with them, facebook and twitter, for years. they have not done a terrific job, but they have been trying. they have gotten better at it. why did twitter take a stand against stalking and threats and nonconsensual pornography? we had gamergate. advertisers started pulling their money. if it's bad for business, they start making efforts. some people say it marginalizes political speech, and i would disagree.
4:40 am
>> these are not new companies. they have been around for a while. i think a lot of people think they should have better policies at this point, but maybe they just didn't really care that much until it became a big issue in the public conscience. >> i think it is important to note that a lot of the worst wrongdoing that happens online has a wrongdoer doing the wrong thing. you take something like revenge pornography: they are the ones that are the actual wrongdoer, and if they hadn't done what they did, the platform would not have that material on it. there is a case that was dealt with in the second circuit -- the grindr case -- where it is unclear that the platform did much to stop the bad actor. >> and grindr did nothing, making it impossible to stop his perpetrator. >> spend 30 seconds on that
4:41 am
issue. >> grindr is a hookup site, and a jilted ex-lover adopted the identity of another man and posted fake ads inviting, i think, something like a thousand men to his house. he provided the home address, so over 1,000 men came to his house. >> and grindr was prompted and did not do anything, but that individual -- i think there was a protective order against him. >> and it was not enforced. i am not the lawyer, but i know the lawyer well who represents the victim. and law enforcement did nothing. >> i think that's a law enforcement story, not a grindr story.
4:42 am
the defendant is grindr. >> i'm not sure we should feel badly for grindr, because grindr claims it has no capacity to identify anyone on its site and cannot prevent people from reappearing, which is nonsense. as a technical matter, the inability to trace addresses is a design choice. >> but the ability to avoid tracking is pretty readily used. i think if you were required to figure out when and where a person was making fake accounts on your own platform, you would have a hard time doing it. it would take a long time, and you would have to put a full-time employee on any individual who is trying to do that on your site. >> that's where section 230
4:43 am
is not doing the job. >> i think ultimately grindr said, we chose not to design our site in a way that would make it easy to block people who are doing mischief. twitter can block those kinds of people; it's not that technically difficult. grindr's business model would be disserved by changing the architecture of its site, and given how it's been misused, that strikes me as creating something hazardous and then walking away. >> a lot of us in the privacy community want surveillance to be pretty hard, even for the corporations that run the sites. so it is not a bad thing necessarily that there is a platform that makes it hard to figure out who is who. this is a tough balance to strike. i'm not sure grindr was wrong when the actual perpetrator was not pursued as readily as the platform. >> i read the cert petition in the case.
4:44 am
it is going to conference in october at the supreme court. i think this probably has the best chance of getting granted of any section 230 petition, because it makes an argument about product liability rather than about treating the platform as a publisher or speaker, which is what section 230 addresses. i think there is a chance the supreme court will take its first-ever section 230 case, and that is a really big deal, because for section 230 we rely on a fourth circuit case, the first opinion to interpret the section, and everyone has taken that as how to interpret section 230. i think this is the biggest chance i have seen of getting it to the supreme court. >> so one reason this is in the news: we have more high-profile examples of harassment.
4:45 am
we have hate speech online. and there's the issue that companies have been given this immunity and the discretion to moderate. you said you have worked with some of these companies. do they think they are being treated poorly by the media? that for all their good efforts, it's a lot harder than you think? that you just can't come up with a better artificial intelligence? do they feel like people just don't get the difficulty of moderating at scale? >> scale is a huge problem. for any given grievance, it's really hard at scale to get it right all the time, but i think there is some recognition -- twitter knows, speaking from nonconfidential information, that until 2014 it was not doing the right thing. facebook has long banned
4:46 am
harassment and stalking and bullying, and i would like them to be more transparent. they have an incredibly complex set of rules internally. they have guidelines that would take days to read in full, but they are not public-facing. while they say they ban hate speech, they don't define it. they say they ban harassment: define it for us. explain why you ban it, give us examples, tell users why you are banning them and give them a chance to respond. are they doing better since 2014? yes. are they doing a great job? i don't think so.
4:47 am
i have been working with facebook on some of their imagery policies, but i don't get paid, so i can be honest with them and talk about it, not based on confidential information. it is important as an academic to be free to say, i can criticize you. facebook and twitter are virtuous compared to the pornographers. we should think of these things as on a scale. some of these companies are trying, because you have sites whose business is abuse, whose entire ability to make money is encouraging users to engage in nonconsensual pornography.
4:48 am
>> when you're getting this flood of hate speech and you have contract moderators -- i encourage everyone to read the series of stories about what the life of a social media moderator is like. >> i'd second that. >> these people are being exposed to the worst possible aspects of humanity while applying detailed policies. there will be mistakes, even in terms of whether something violates the policy. there are judgment calls. there is no easy solution.
4:49 am
nothing that congress does or does not do will solve the problems we have. >> it's not just the united states. in the european union, there's a lot of pressure brought to bear against the major platforms, pushing them into taking down hate speech within 24 hours when it violates their terms of service. >> something we touched on in the beginning: we have people on the right who are convinced these platforms are censoring political viewpoints they disagree with. as part of the argument, they say the section established the intent that platforms would be neutral, and that there is a difference between platforms and publishers. they have to choose: are they a platform, are they a publisher? do they need to fulfill their section 230 responsibilities?
4:50 am
>> do you think they are suppressing conservative speech, and what about the platform versus publisher distinction? >> no. just to get a little more detailed: there is not such a distinction in section 230. but this is a law that congress passed, so whether there is one and whether there should be one are different questions. congress could say there will be a neutrality requirement, and then we would debate the merits of that. that is a debate we can have, but i think some of the debate has conflated what the section does say and what some people think it should say. >> i will open the debate on whether there should be neutrality: there's no such thing. you cannot administer a neutrality rule. so don't even try.
4:51 am
>> electric wires are neutral. platforms are not. they differ in the way they work and in the way their users use them. that is not neutral in any sense. >> you were referring perhaps to the piece of legislation where commissioners would have to certify that these platforms were politically neutral if they wanted to get the section 230 protection, specifically the very largest platforms. you have sort of already indicated your view, but do you think it was a good idea, and if so, how would that actually work, certifying these platforms as politically neutral? >> well, you would have to have a federal speech agency. that concludes my answer. [laughter]
4:52 am
it's absurd on its face, given our traditions and constitution, to try to have any kind of federal body determining whether a platform is neutral and fair. we had a fairness doctrine at the fcc; rip. i think that debate is over. that ship has sailed. it has come and gone. of course, congress will talk about it like it could happen, but it would die in the courts if it got through the legislature. >> is it unconstitutional? >> i mean, i think it would have to be tested. i think it probably would be. i also think, even beyond the constitutional issue, practically, the ftc, even with native advertising, is not comfortable making judgments on things like this. so there are constitutional issues, and the bill also requires a supermajority on the ftc, which
4:53 am
means that even a minority party could block section 230 protection for some of the large platforms if they wanted to. there are a lot of logistical issues as well as the constitutional issues. >> it might be helpful to explain why it is a constitutional issue. it goes back to your point about the first amendment: whose speech would be controlled and compelled by government? it would be private actors'. platforms are speakers. they are not the government; they are not state actors bound by the first amendment. the platforms would argue that it's their speech and that they should be free from regulation. >> that brings up what i call my woke facebook scenario.
4:54 am
one day mark zuckerberg -- at one point he was touring the country, and people were saying he was running for president -- what if he says, i'm very concerned about the direction of this country; from now on on facebook, feel free to post about your family, funny pictures, and you can post about issues of racism and gender, but no trump posts. we don't want to see those posts. we will be woke facebook. is that not a government issue at that point? they may be a private actor, but they have a tremendous public role. >> they are not a public forum. >> so that would be ok? >> sure. go to gab, go somewhere else. go to twitter.
4:55 am
>> there was an online knitting forum that made that very same choice. i think we had a recent supreme court decision a few months ago that found that a public access station did not have a first amendment requirement. i think that gives you an idea of the direction the supreme court is going. >> i was reading an interview with senator hawley, and he was addressing what the section called for -- that this was not some grand bargain decades ago, and that now times have changed. when this regulation was instituted, we did not have these big platforms. we did not have these business models. now it's time for a change. they are tremendously influential in speech. so i guess we are getting to the reform part of the conversation, but just to be clear, their role in political speech, even though it may not have been the top line issue back
4:56 am
in 1996, that's certainly covered in this idea that they can choose to be neutral or not neutral. it's not just defamation and not just pornography. >> a professor said that facebook is powerful, but i think you quickly understand its limits. it would likely play into the hands of those in power for facebook to try to play presidential politics. they are already out there waiting. congress has these concerns, but i do not think they are real. the capacity to control the
4:57 am
marionette strings is -- >> as i discuss this issue, i find there are a lot of republicans who are convinced that there are 20 twitter accounts of republicans suspended for every one that might be on the other side. have there been studies looking at any of these issues? >> there has been a lot of anecdotal discussion for a variety of concerns -- can we talk about solutions now? so this is probably the most sort of -- >> didn't we just say it's anecdotes? if you dig in, it seems untrue -- i can come up with 10 stories
4:58 am
about black lives matter protesters removed from facebook and twitter. so does that make sense? it's like we all have examples that we can trot out. i think, empirically speaking, it seems to be false. >> the audience may have questions, and we will reserve time, and i will even ask all the way on the other side, if you have questions, so you can start putting them together. again, i am wondering if people grasp the difficulty of content moderation, and at the same time i don't want to let the companies off the hook. it is clear it is a work in progress at best. >> i think it will always be a work in progress given the scale and types of problems and challenges outside the united states.
4:59 am
what i would like to see companies do is constantly think and experiment, whether it is addressing a deepfake problem by coming up with technical solutions, which will be hard to do, or with new types of policies. if there's pressure on companies to do something and to act reasonably, then they will iterate and experiment. >> do you think there is a space for that if they iterate badly? it seems like they can't get it right. i'm not sure -- >> i want to challenge your premise that they are not doing a good job. we happily are not in a position
5:00 am
to see what it would be like if they were not doing the job. these articles you referred to from the verge -- read those articles about what the moderators are seeing and what they are preventing you from seeing. it's one of those things: the things we are not seeing are really horrible and might make you think they are doing a better job. >> indeed, some people criticize these content moderation policies and say, let a million flowers bloom; let's put it up there and people can decide for themselves whom they want to follow or not. >> i would throw my computers and devices out the window if that was the case. it would be terrible. >> let's talk about future scenarios that we might have in mind now.
5:01 am
no technology talk would be complete without blockchain. >> the answer is not blockchain. >> this is not actionable investment advice. >> there are future blockchain-based social networks where you cannot remove content. somebody could post it and it will be there forever. there is no corporation to ask to take it down. it's distributed across the world, and it would be impossible to remove content. is it coming? it might be. you might believe that you can solve this problem right now, sprinkle your magic dust on this problem, and then the bad actors go right over to this other place where things are uncensored. that means challenges to the existence or utility of defamation law. that means that revenge porn, once up, cannot be taken down. there is a whole world that is
5:02 am
about to emerge and can't really be altered. we should think about how all of the expectations and all of the law that has emerged in the past might be deeply challenged. >> 230 emerged in the past, and i think for people who say things are very different now, it is a completely valid point, and i think looking at what is happening now is perfectly valid. but keep in mind the downsides that you mentioned, given that this is a law that has been successful. this is something that government did that seemed to work well, and we don't want to screw it up. so, being careful about reforms, what do they look like? we can go down the line. how do you feel about reforming section 230?
5:03 am
>> i think we should preserve section 230, but it should not be a free pass; we should make it conditional. that is, we are not going to treat interactive online service providers as publishers or speakers if they engage in reasonable moderation practices. that is not looking at any given decision; it is looking at their practices writ large. the premise of 230, the original purpose, was to incentivize self-monitoring. we should go back to that purpose and condition the immunity on engaging in these practices in the face of known illegality. this is a proposal for which i have the language at the ready, and i have offered it.
5:04 am
i've been talking to senate offices. maybe they will be interested. what it would do, it would mean that you get to enjoy section 230 immunity, but it would not let the revenge pornographers off the hook -- the sites that engage in no content moderation knowing that there are sex videos on the site. if you do nothing in the face of known illegality and mischief, then you should not enjoy the immunity from liability. you have not earned it. i would say the facebooks, the twitters, they have extensive speech rules, extensive speech moderation practices. they are engaged in reasonable content moderation practices. it would depend on the kind of speech. now, as to pornography, what is
5:05 am
reasonable is different than what's reasonable for defamation. using ai for threats is impossible, because you've got to look at the whole picture. so what is reasonable depends on the context and the approach, and what that would do is incentivize experimentation and moving with the times and with technologies. >> is there a moderation-at-scale issue with that proposal, though? it seems like it would be hard. >> yeah. we are not looking at any specific decision regarding content. at scale, we look at the speech rules, practices, and design -- the approach of a company to
5:06 am
content moderation, the design of the site, how they respond to certain types of speech and known problems. the idea is to look at how they are doing it at scale and what is reasonable. what is reasonable for facebook is certainly different from what's reasonable at -- can we look at your blog? what was reasonable for concurring opinions 10 years ago: we had something like 300 comments on any specific day, and there were like six people running it. so it would matter, the size of the company, who was participating, what kind of activity they were hosting. the premise of reasonableness is that it is tailored
5:07 am
to the circumstance. >> i think these are reasonable reforms. >> i share the concern about some of the bad actors. there are some bad actors, and we need to figure out how to solve the problem of them. my concern about a reasonableness standard, and this comes from having practiced cybersecurity law, is the lack of certainty as to what would be reasonable when ultimately there could be judges who perhaps don't understand the technology. >> if we ever had specific rules for cybersecurity, they would get outmoded in five seconds. >> but we do. we have some state laws that have incorporated reasonableness -- but they give some specific controls. i think that without any
5:08 am
certainty at all, just having reasonableness, and conditioning speech on it -- that's what i struggle with in having reasonableness for section 230. i wonder how much of the free-speech benefit there would be, and perhaps we don't want that. but i worry about the lack of certainty if you just have reasonableness. i think in other areas of the law, speech has depended on it. so that would be my concern with just having reasonableness. >> reasonableness, that's really hard to administer, but it's probably the only thing to do. the problem is that you've got this legislation in place that has this language. you have a congress you cannot rely on to do the careful content pruning. >> very skeptical, cynical. >> i've been here a couple years. my solution is to go back in
5:09 am
time before section 230. >> you would have reasonableness. in that world, reasonableness would have been so litigated we would have negligence law. >> i think we would know better what's reasonable than we do now, had people mostly been going to court. you cannot go back in time. i don't trust congress to change it. courts are coming back on it in places. the immediate proposal i have is to leave 230 in place and let
5:10 am
judicial opinions continue to work over the law. >> and let companies continue to innovate and iterate. can't we just let people post on these sites, but let people handle the filters? here is the kind of content i would like to see, and that's all i will see. >> i built that. users could block out other users, or particular forums. >> they have a block list. >> any reforms you are interested in? >> one is fairly specific and one is broader. i think a lot of the sites we are concerned about, i don't think they should be entitled to section 230 protections. the problem is, section 230 is often decided at a very early
5:11 am
stage, so there is no discovery to determine whether the platform has contributed to the development of the content, in which case it does not receive section 230 protection. i would like to see some sort of standard where, in cases where the judge believes there is reasonable cause that the platform has contributed to the content, we allow them into discovery on that issue. we do that for personal jurisdiction. i think that would be a way to go after the bad actors that never should be getting protected by section 230, while still preserving protections for the sites that are not contributing to the material. that is one thing. i hear from people on all sides about various issues with 230, ranging from there should not be any moderation to there should not be any protection at all and they should be moderating everything. this is all anecdotal. we do not have good empirical
5:12 am
evidence. it is such an important issue. it affects the speech of essentially everyone in the united states and the ability to receive information. i would like to see a commission, a congressionally created commission, to really gather a good factual basis for this before proposing specific solutions. we do not have that right now. i am really worried when i hear all of these different stories, which are really opposed to one another. i don't know whether any of the solutions will actually pass. i would like to get a much better factual record before we make any decisions on what to do with platforms or 230. >> what about the upside/downside risks? what is the upside of smart legislation and the downside of getting it wrong?
5:13 am
what's the upside of getting these reforms right? what does the internet look like? what are the downsides of, boy, we got it wrong? what does this all look like? so some speculation, whoever wants to take a crack at it. >> the upside is that if we crafted it well, if we conditioned the immunity on reasonable content moderation practices or denied bad actors the ability to enjoy that immunity, we would have a lot more speech online. we would not have stalking, harassment and pornography, which drive people offline. there is an upside. there is actually a speech upside. >> are there other upsides, or are there just downsides of getting it wrong?
5:14 am
how could it be better? >> i see a lot more downsides than upsides to any changes. i don't trust congress. we could craft something brilliant, the folks here. we could put together a perfect legislative history and send it down the stream, and when it comes back, it could be something radically different. >> the decisions are being made by lawyers. i assume there are a number of lawyers in this room. we are all risk-averse people, so decisions about moderation have to at least get cleared by lawyers. you saw what happened after the sex trafficking amendment. craigslist got rid of its personals section days before the law. i think the same would happen with changes to section 230. >> are you surprised at the outcome? >> no. >> we worked on it. different offices.
5:15 am
we tried. what came out of committee was depressing. what was signed was depressing. >> i testified before the house judiciary committee in support of the limited exception for intentional sex trafficking. what ended up passing, it's a standard you really can't comprehend. the lawyers will be risk-averse. i am definitely not opposed to section 230 changes. i just think they need to be really carefully crafted and deliberate and not some legislative compromise that gets attached to an appropriations bill right before the december recess. that is not what we want. >> it is jim's proof of concept. >> i am a pollyanna. i am always engaged and happy to help. i will not give up on the enterprise. that is what you were talking about.
5:16 am
>> i will ask one more question, and then we will take your questions. whether you are a pollyanna or a pessimist, be prepared. we will bring the mics around. if 230 were eliminated, what does that world look like? do we still have these platforms? does it destroy their business models? is it something radically different? what does that world look like if real harm were done in the effort of reforming this provision?
5:17 am
>> i don't think we know for sure, because for the entire modern internet we have had section 230. there are a bunch of different outcomes. platforms could take the approach that, the first amendment will give me protections if i don't do much moderation -- which i think most of us would say is not a great outcome. they could become risk-averse and say, we are not going to allow you to post, because that exposes us to too much risk. or perhaps it will be somewhere in between. i can't say with certainty what would happen, because that would really be new territory for us. >> after you have thrown your computer out the window. >> exactly. >> jennifer huddleston has a good article on the common law development coming along when 230 was passed -- a trend, because publishing has changed, towards treating platforms as conduits that are not responsible for
5:18 am
what appears on their site. that gives me confidence. the long-term outcome is debatable but would probably be good -- economically efficient and free-speech-protective results would obtain -- but in the short term there would be a stampede against the platforms with litigation. if the repeal of 230 were to signal that federal law is opposed to protections from liability, that could undercut the development. i don't know how you get there from here. what happens if 230 goes away? common law and first amendment protections, perhaps with some narrow tweaks so that the worst examples of egregious behavior supported by platforms would be addressed and go away. >> thoughts, or should we go to questions? if you have a question, raise your hand. we will send the mic over to you. that table right there, gentleman with a laptop -- if you care to say where you are from.
5:19 am
>> james, you asked about the value of section 230. there was a good paper that came out: $440 billion over the next decade in value. facebook, google, and twitter removed five billion accounts in six months. there was active content moderation. i want to hit on something you brought up. what if we get rid of section 230? you point out in your book that it goes to encourage content moderation. it does not matter to 8chan if they lose section 230; they don't engage in content moderation. you brought up grindr, which does not engage in content moderation. do you expect to see a rise in hate and offensive and terrorist content as platforms move away from risky content moderation toward achieving total immunity, if we do nothing? >> that is not true.
5:20 am
they don't risk anything by engaging in content moderation. you are saying if we change section 230 -- sorry, i want to make sure i understood the premise of the question -- you wipe out 230 and we live in a world where platforms are afraid of being treated as publishers, so they engage in no content moderation. in a paper we wrote, we worried that we would have either no moderation or too much moderation. i share your concerns that we will end up looking like either the wild west or something family-friendly and overly censored. that is what we thought 25 years ago. >> i am jumping back in. do you think they would love to be out of this business? they would love to be out of the business of figuring out what is hate speech and what is not hate speech?
5:21 am
>> they would like europe off their back. the hate speech conversation, that is because of europe. >> i would also just say they would love to be out of this business, but they need to be in this business. this is a product and a service they are making a lot of money off of. this is part of the job. >> it should be said at least once at this event that the users out there are a little bit responsible, too. >> totally. >> when it comes to fake news -- what happened to, don't believe everything you read? >> you are very quiet over there. that back table right there. >> my name is roger. during the 1990's, i was the ibm executive responsible for
5:22 am
worldwide internet policy. thus, i had a front row seat in the development of section 230. there was an important piece of the puzzle the panel has not discussed, and it leads me to think of what might be a blunderbuss way to deal with the issue. one of the elements that went into the assumptions we all made at that time was that the future would be very much like the past. most of us had in mind, at the time 230 was approved, computer bulletin boards. that is what we thought of as content. the thinking was that someday there would be 100 million people on the internet and there would be 100,000 websites. everyone assumed that there would be a facebook in the future.
5:23 am
and in fact, there would be thousands of facebooks. none would have more than 2% market share. everyone assumed there would be a google -- there would be hundreds of googles, and none of them would have more than 1% market share. it would be a distributed system, like the internet itself is distributed. there was never any expectation of the enormity that we see today. >> and your question? >> the question is, perhaps that assumption went into the thinking, and because of it, it was assumed the internet would be made up of thousands and thousands of small businesses that could never possibly monitor 100 million people posting on the bulletin boards. the question is -- i realize this is a blunderbuss solution, but suppose we went as far as to say any network that has one billion users or a market value of $1 trillion is by definition a publisher, with the same rights and duty of care that the northwest current and the georgetowner of
5:24 am
today have when they post and make a judgment. at that scale, you are a publisher. no gray area. you have the same responsibility, the duty of care that a call-in radio program or neighborhood newspaper has, when you reach that size. >> i might think too abstractly about these things, but i have a hard time thinking justice changes -- that the terms of justice in the world change as the size of your network goes above 999,999,999. i want the rules to be rooted in justice and not administrative stuff that keeps the lawyers working. either you are doing it right or doing it wrong.
5:25 am
there is reasonable and unreasonable, and the law should be as close as possible to that -- not that you have to be reasonable when you reach a certain threshold of users or market capitalization. that type of rule exists all over the place, and you see lots of small businesses stay small so they don't have to provide health insurance to their employees. i don't want that, not in this area. >> the gentleman right here. right in front of me. >> good morning. i am jeff jarvis from the newmark graduate school of journalism at cuny. i have learned a lot. i want to throw some quick ideas out. one thing i learned from the group is maybe we should not concentrate on content
5:26 am
but on behaviors. there is a suggestion that there ought to be national internet courts, where matters of legality should go to a court and where we negotiate norms in public with due process. in terms of legitimating these decisions, we have facebook's oversight board, where they are trying to establish that, and we will see how that goes. i wonder about your views of these efforts to, in essence, come up with new common law through different paths as we negotiate this new landscape. >> you have written about this. >> in my book i argue that companies should engage in technological due process. they should be transparent and accountable for their content
5:27 am
moderation practices. that could be required as a matter of law, but regardless, they should. the line is blurry between content and conduct -- yes, we should be careful about the difference. some speech is tantamount to conduct, as in harassment and sexual harassment. we can say that is truly like a bludgeon of an instrument rather than a message or viewpoint. it is really hard. i feel like i have my free speech friends with me here. conduct and speech are one and the same -- a lot of conduct is expressive, and vice versa. >> i don't want to talk about that. >> you talk about what you want to talk about.
5:28 am
>> the national internet court -- it sounds like a thing you will get if you just wait around for a couple of years as older judges move out and younger judges move in. i want to challenge this, because you mentioned it a second time: the idea that companies should be providing due process and transparency. i'm a big, big fan of transparency, but true transparency in your content moderation would be a huge gift to your adversary. over on 4chan, or 8chan, they would figure out how to game the rules and walk the line between the two. they work hard. smart people, socially maladaptive, but they're working
5:29 am
hard to game published rules. that's a gift to them. >> you are saying we should not have laws? >> no, but it is gamable. people will game them rather than living right. >> i would say there has to be more transparency than there is right now. we are getting better. i think today, compared to three or four years ago, there is much more transparency, but we have a long way to go. this is one thing i really stress to tech companies. section 230 is not a constitutional right. this is something that congress has provided, and there are some people who will say section 230 is not a benefit to the tech companies. of course it is. it is a benefit to the public as well, but tech companies do receive it -- they have been able to structure business models around this. i have no problem with some sort
5:30 am
of transparency being a premise of section 230, because we need to have a better idea of what's going on. you see what is going on now with the debate, which is chaotic. >> so when someone's account gets suspended, it's not a mystery. the person should have that kind of expectation if they engage in that behavior. they will understand that it is a likely result. >> i think that can maybe help improve user behavior, but also more transparency about the broader policies. platforms have policies, but they are fairly broad. getting more specificity as to how decisions are made, that's a benefit overall. >> shorter and clearer policies. don't be reluctant to ask a question. i am really pointing with my red pen. >> elizabeth banker with
5:31 am
the internet association. i have a question about the professor's proposal. as many commentators have noted, one of the advantages of section 230 is that it does allow companies that are sued to get rid of cases quickly through a motion to dismiss asserting the 230 immunity. it sounds like your proposal would require litigation before being able to get to that stage. i am wondering about the impact on smaller companies. we talked about how, without section 230, the bigger companies would have the resources to do more litigation. smaller companies would not necessarily have the same resources. is there a way your idea takes into account the smaller business impact? >> reasonableness itself would. what is reasonable for a small company is different, and that does address your point about the cost of litigation. what is reasonable can be adjudicated and can be very different.
5:32 am
it is true there would be costs. at the same time, we shouldn't forget there are costs in the current system where there is no liability -- what happens is we lose a lot. we lose a lot in terms of victims who go offline. we have a lot of harm externalized. what i would say is, i recognize that would be true. it would be more challenging and expensive for small companies to face litigation than it would be for facebook. there would be the expense. you have more censorship and collateral censorship; they can't litigate. at the same time, some small actors are the folks running revenge porn sites.
5:33 am
we cannot overlook that there are meaningful trade-offs. the speech trade-offs do not all run in one direction, toward just that poor provider. there are also a lot of trade-offs for victims who, without any change, would be driven offline. >> i thought there was another question over here. the gentleman right there. thank you. >> hi. thank you. i want to talk about something jim brought up that i think is an important point, and i was wondering what the panel would think about it. the question posed is, should we reform section 230? there is the academic question: is there a possible way that 230 can be improved?
5:34 am
then there is the other side: should we kick off the process in d.c. and congress? i think there seems to be complete agreement on the panel that 230 has been very beneficial. it's very important to the internet that we enjoy today. my fairly quick question is, with congress not being trusted on a lot of tech policy, or policy more generally, to get things through and give us what we want, would you take the risk that congress could potentially undermine section 230 just by going through the process of reforming it? >> article one says they have to. i think looking at it is one thing. i mean, one of the jobs of congress is to look at the laws they pass. i would agree there are some challenges that congress has with understanding technology, to put it as broadly as possible, but that
5:35 am
is not a reason not to look at laws. there are efforts to revive the ota, which would help inform that and is essential. it was basically congress' in-house technology think tank. i think congress has to look at laws. whether they should do anything is something different. i would like my commission to help inform that. we have no other choice. >> your question, i think, helps show we are kind of talking about two different types of governmental bodies. do we trust the legislature or the judiciary more? the talk of common law is on the judiciary side. it might be handled better there. no question that the world of
5:36 am
litigation could be reformed quite a bit. the earlier questions you referred to are serious and not to be forgotten. maybe familiarity breeds contempt. i have been a congress watcher for a lot of years, and i don't think i would try to run anything through there. we are better off leaving 230 where it is and letting case-by-case adjudication perhaps turn it back where appropriate. >> we have time for only one more question. it needs to be concise, powerful. if you raise your hand, there will be a huge -- this is a biggie. >> these private moderation systems that these platforms manage are private legal systems. in order for them to work, they need to be legitimate in someone's eyes. in whose eyes should they attempt to legitimate themselves? users, states, the population at large? what renders these private governance systems legitimate or not?
5:37 am
>> the market. >> yeah. [laughter] >> that's an amazing response. >> the question allows me to add on to what i said about transparency. transparency, i do think, would lead to people trying to game your system. if transparency will give your users and your governmental overseers confidence that what you are doing is right, go ahead. you will have problems with your adversaries because of that transparency. >> that will have to be it. all-star panel, all-star audience. thank you for coming. [applause]
5:38 am
>> next, a hearing with new members of congress. [taps gavel] >> all right. the committee will come to order. better late than never. sorry, everybody. the constitutional obligation of the job. i will now recognize myself for five minutes to give an opening statement.

