Defamation on the Internet: With Courts Strongly Supporting Website Users' Immunity from Suit, Should Would-Be Plaintiffs Resort to ReputationDefender.com?
By JULIE HILDEN
Monday, Dec. 11, 2006
On November 20, in the case of Barrett v. Rosenthal, the California Supreme Court issued an important opinion relating to Internet defamation. In the opinion, the court invoked the federal statutory immunity created by Section 230 of the Communications Decency Act (CDA) to dismiss a claim of defamation based on the publication of an Internet posting.
In holding in favor of the defendant, the court made crystal clear that someone who is the target of Internet defamation has only one legal recourse: To go after the "originator" of the publication -- meaning, typically, its author. The California Supreme Court didn't applaud this state of affairs from a policy perspective, and it's true that Section 230 could always be amended at some future date, but the court was quite unequivocal as to the broad protection afforded to defamation defendants under current law.
The decision was notable because it came from the state's highest court. However, as the California Supreme Court itself pointed out, it was far from the first to apply the immunity: To the contrary, the immunity has been "widely and consistently interpreted [by courts] to confer broad immunity against defamation for those who use the Internet to publish information that originated from another source."
In light of this reality, what should those who believe they've been defamed on the Internet do? In this column, I'll consider the various options -- including the one provided by the website ReputationDefender.com.
The California Decision: Rejecting a Possible Exception to the Immunity
First, it's useful to note just how broad the immunity, as construed by the California Supreme Court, truly is.
The immunity conferred by the CDA protects both "providers" of "interactive computer services" -- typically, Internet Service Providers (ISPs), but also services hosting online message boards or content -- and "users" of these services. Past cases have been brought against deep-pocketed ISPs such as AOL. But in Barrett v. Rosenthal, a defamation suit was brought against a "user" instead.
The case arose because the plaintiffs -- two doctors who operate a website dedicated to exposing healthcare fraud -- believed they had been defamed by postings on another website. Soon, the courts narrowed the case to focus on a single posting -- which had started out as a private email, but was then posted by the recipient on her newsgroup.
Then, in the Barrett decision, the California Supreme Court held the recipient to be immune from a defamation suit based on her posting of the email. (The Court left open the possibility of a suit against the "originator" of the posting, but it isn't clear if it's fair to deem the original email author -- and sender -- as the "originator" of the posting, since he was not the one who posted the email on the newsgroup. Technically, a suit could be brought against the original email author and sender for defaming the doctors to the email recipient, but the damages would be slim to none.)
The relevant legislative history, as well as the language of the CDA immunity provisions, led the Court to the inescapable conclusion that Congress meant to grant immunity even to "those who intentionally republish defamatory statements on the Internet."
Why would Congress have meant to do that? The answer, in essence, is that the alternative would be worse: Under the law that existed before the enactment of the CDA immunity in Section 230, the safest thing for website hosts or moderators to do was simply to turn a blind eye.
Before the CDA immunity, hosts and moderators knew that if they read postings, they might be held liable as publishers if the postings turned out to be defamatory. For them, the wisest course, from a legal point of view, was simply to abdicate responsibility to the greatest extent possible -- even refusing to look at postings about which there was a complaint, for fear that by selecting content and de-posting only some of it, the site would be deemed the editor and publisher (not just the host) of whatever remained. The result was a free-for-all: Anyone could say anything, and no one familiar with the law would dare to de-post it -- or, if they were a host or moderator, even read it in the first place.
Congress, however, wanted sites to be able to preserve "family-friendly" content by policing their postings, and to have the option to respond intelligently to complaints -- by reading the relevant posting and then deciding whether to de-post it depending on whether it violated the site's policies.
Removing a post may sound a bit like censorship but, since the removal is done by a private party and not the government, it is akin to the editorial function. Congress didn't want every website to be a free-for-all; it wanted sites that were not created as public forums to be able to edit their content, according to their own values, as they chose. Arguably, this is pro-, not anti-, free speech.
ReputationDefender.com: Is It Fair for Would-Be Plaintiffs to Pay for Policing?
In sum, although the application of the statutory immunity may seem unfair when true defamation is at issue, there are reasonable policy objectives behind the immunity. Moreover, as long as Congress remains a strong fan of family-friendly editing, the immunity probably won't be going away anytime soon.
Does that mean, then, that a victim of Internet defamation has no recourse? Not exactly.
First, there is always the power of counter-speech. Fair-minded site hosts allow, at least, a right of reply -- and it would be great to see the provision of such a right become part of an Internet Code of Ethics. Similarly, blog postings typically have comments sections open to those who may feel the posts are inaccurate or defamatory. This is a big improvement on the practice of newspapers -- which publish only some Letters to the Editor, sometimes in truncated form, and rarely provide space for an entire reply piece by the target of a damaging story.
Second, even if the site itself refuses to grant a right of reply or provide a comments section, the person who feels he or she has been defamed can say as much on his or her own site -- and, thanks to Google searches, one can strongly hope that many of the people who read the initial defamatory posting will also read the reply debunking it. Experience shows that people on the Internet generally are very interested in what the truth is: They amass evidence on both sides and consider it. (Remember the example of the bloggers who forced Dan Rather and his producers to make a retraction.)
Third, if these remedies don't work, there is always ReputationDefender.com. The site boasts that "Our trained and expert online reputation advocates use an array of proprietary techniques developed in-house to correct and/or completely remove the selected unwanted content from the web. This is an arduous and labor-intensive task, but we take the job seriously so you can sleep better at night." All this, for only $15.95 a month for a six-month membership. (Discounts may apply!) (Full disclosure: Neither I nor FindLaw has an affiliation with ReputationDefender.)
So, what exactly are these "proprietary techniques"? They are a bit mysterious, but the site's Q&A sheds some light on the subject:
Q: Does ReputationDefender simply send cease-and-desist letters or sue everybody when it seeks to "Destroy" content?
A: Most of our approaches to effecting correction or removal of content are non-legal. We will only pursue legal options with the express consent of our clients, and these techniques are strictly optional and usually the last resort. They may incur additional cost.
What isn't said here is that, due to the CDA immunity, legal options will probably be futile when it comes to defamatory content. And one has to wonder what the "non-legal" approaches are: Hacking is illegal, and many knowledgeable site hosts cannot be intimidated by a threatening letter -- and site hosts can quickly become knowledgeable, since the EFF has published an excellent explanation of the immunity online. (Some site hosts do still find de-posting the path of least resistance in the face of a communication on legal letterhead, however: The only cost is a possible complaint email from the original poster, and the benefit is avoiding any possible trouble.)
Moreover, the CDA immunity may soon be so well-established that simply filing a suit attacking an allegedly defamatory Internet posting may be sanctionable. California, for instance, has an anti-SLAPP (Strategic Lawsuit Against Public Participation) statute, successfully invoked in Barrett v. Rosenthal itself, which awards attorneys' fees to those who defeat meritless complaints attacking speech. A defamation suit may thus boomerang, actually costing the plaintiff a large amount of money -- not only the fees of the plaintiff's own attorneys, but also the defendant's attorneys' fees and costs for a successful anti-SLAPP motion, often to the tune of tens of thousands of dollars.
With risks like that, the simplest route may be the best: Create your own site, tell the truth, provide evidence if you need to, and hope that those evaluating the evidence will make the right choice. Let's hope that the same communities that can create open-source software, authenticate documents better than CBS can, and debunk rumors through sites like Snopes.com, can also ultimately tell the truth from a damaging falsehood.