
Do Search Engines Compete With - or Simply Publicize - Online News Outlets? Calculating Damages If Copyright Infringement Occurs

By JULIE HILDEN


julhil@aol.com
----
Monday, May 30, 2005

Currently, Google News links to online news sources, such as newspapers' websites, and features a snippet of content from them. Could the newspapers sue for copyright infringement?

That's doubtful, for a few reasons: Links probably do not count as "copying" under copyright law. Also, taking only minimal content from an article falls under the "fair use" doctrine. (The "fair use" doctrine allows, among other things, small parts of a copyrighted work to be copied without legal liability, even if, technically, copyright has been violated.)

Moreover, even if a copyright suit could be brought, damages would be paltry: Google News may actually help - not hurt - the news outlets' finances by increasing their readership and offering them free publicity. Media companies pay real-life, human publicists for far less effective promotion.

Put another way, as they currently exist, the search engine and content markets are discrete: Content on Google News (and other search engines) does not really compete with content on a news site - even if it draws from that content.

(This is true even for paid content: If Google News were to take snippets from paid content, it might be violating the Terms of Service agreements the content sources impose on users, but it still might be financially benefiting - not hurting - the content source, by luring more readers to pay for full access.)

Does that mean that Google News is likely lawsuit-proof when it comes to copyright infringement? In a previous column, I argued that the answer is yes - unless and until the copyright statutes are amended to target it.

But if Google News - and the Internet - were to evolve in certain ways, the legal situation could change. In this column, I will explain why.

Competing with Composites: Sloan and Thompson's Scenario

Currently, as noted above, the search engine market and the online news market are more or less discrete: The two sets of entities do not compete in the same market.

And they are not only separate, but symbiotic: Google News publicizes news sites, and news sites provide grist for Google News' mill. But what if Google News got more aggressive - and began to compete with news sources?

Robin Sloan and Matt Thompson have imagined a scenario where that would occur: In the future they describe, Google News would employ fact-stripping robots that would construct composite news stories by combining factual snippets from traditional news sources.

Would these composite stories compete with the various news stories from which the facts are drawn? You bet.

As Sloan and Thompson point out, information on the user (the kind of information to which, say, Amazon has access - in the form of book preferences and the like) could be used to personalize the composite story based on the reader's interests. And readers might well opt for stories specifically designed to cater to their interests, over stories written for a more generic audience.

Consider, for instance, a story about possible U.S. action relating to the crisis in Darfur. An international news buff would get lots of content on that angle; someone interested in the details of government decisionmaking would get lots of content on that angle instead. Each would very probably prefer his or her personalized story over a story that mixed the two angles - the kind of story a traditional newspaper might provide. Not only would the stories compete, then; the composite story might win.
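
To make the mechanics concrete, here is a minimal sketch - in Python, with invented names, tags, and data - of how such a fact-stripping robot might rank snippets against a reader's interest profile. Sloan and Thompson describe the idea, not an implementation, so everything below is an assumption for illustration only.

```python
# A toy sketch (all names and data invented) of a "fact-stripping" pipeline:
# strip facts into tagged snippets, rank them against a reader's interest
# profile, and join the best-scoring ones into one composite story.

from dataclasses import dataclass

@dataclass
class Snippet:
    source: str   # the originating news site (hypothetical URLs below)
    angle: str    # a coarse tag, e.g. "international" or "government"
    text: str     # the factual fragment stripped from the source

def score(snippet: Snippet, interests: dict[str, float]) -> float:
    """Weight a snippet by how strongly its angle matches the reader."""
    return interests.get(snippet.angle, 0.0)

def compose(snippets: list[Snippet], interests: dict[str, float],
            limit: int = 3) -> str:
    """Rank snippets by reader interest and join the top few."""
    ranked = sorted(snippets, key=lambda s: score(s, interests), reverse=True)
    return " ".join(s.text for s in ranked[:limit])

snippets = [
    Snippet("paper-a.example", "international",
            "The UN Security Council debated sanctions over Darfur."),
    Snippet("paper-b.example", "government",
            "A Senate committee postponed its vote on the aid package."),
    Snippet("paper-c.example", "international",
            "Aid groups reported new displacement in western Sudan."),
]

# The international-news buff and the decisionmaking buff get different stories.
print(compose(snippets, {"international": 1.0, "government": 0.1}))
print(compose(snippets, {"international": 0.1, "government": 1.0}))
```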

Thus, if a copyright suit were to be brought based on the construction of the composite story, there would be damages - indeed, potentially massive damages - if readers opted for the composite, personalized stories, as they well might.

As I discussed in my last column, I believe that the Supreme Court probably would hold such composite stories to be "fair use" - as long as the robots were careful to take only tiny snippets from each source. But in the wake of such a decision, Congress might well expand copyright protection to render such fact-stripping robots illegal.

If it did so, a constitutional challenge to its statute would probably follow: This isn't exactly the kind of copyright protection the Framers intended. But it's hard to predict what the Supreme Court would do in the face of such a challenge.

Suppose, however, that this hypothetical new copyright law were upheld.

Google News (and any others who had opted to use fact-stripping robots) might then have to shift ground, and start taking content from the kind of news sources that would not be prone to sue - such as blogs. (Bloggers would probably be delighted to be featured, and could be rewarded with a link from the particular fact - or facts - the blog provided, to the blog itself.)

Even now, not all the search results on Google News come from the Internet outposts of newspapers. Many come from other websites. And if it had to, Google News could operate entirely without reliance on the Internet outposts of newspapers.

Suppose Google News did shift ground, and borrow from bloggers and other sites. Could personalized composite stories from nontraditional news sources (such as blogs) compete with traditional news sources? I believe so.

If so, this kind of content would be liability-free - except in the virtually unimaginable, and probably unconstitutional, scenario where Congress created an antitrust exception in favor of traditional media. (This would violate the First Amendment, in my view.)

How Google Could Compete With Online News Sites Without Using Their Content

But is this liability-free - yet very profitable - solution for Google just pie in the sky? I don't think so. I think if Google did, indeed, rely on nontraditional news sources to create composite stories, those stories could compete with traditional media stories.

One obvious issue is that readers may not judge blogs and other sites to be as reliable as the reportage of long-established journalism institutions' online outposts. But a more sophisticated system of flagging reliable and unreliable content could fix that.

Such a system could start by making a few assumptions: Much-linked-to sites tend to be reliable. Blogs that do well on the "fantasy market" for blog shares tend to be reliable. Sites that report local events tend to be reliable. (Bloggers have the ability to provide the most local reportage possible - doing it literally street by street; so do local news sites and the like.)
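
To see how these proxies might work together, here is a toy scoring function. The features, weights, and normalizations are pure assumptions invented for this column's hypothetical, not a description of anything Google actually does.

```python
# A toy illustration of blending the three proxies named above - inbound
# links, fantasy-market value, and geographic locality - into one
# reliability score. Every weight and constant here is an invented
# assumption; a real system would have to tune and validate them.

import math

def reliability_score(inbound_links: int,
                      market_value: float,
                      local_to_story: bool) -> float:
    """Combine three rough proxies into a score capped at 1.0."""
    link_signal = min(math.log1p(inbound_links) / 10.0, 1.0)  # damp huge counts
    market_signal = min(market_value / 100.0, 1.0)
    locality_bonus = 0.2 if local_to_story else 0.0
    return min(0.5 * link_signal + 0.3 * market_signal + locality_bonus, 1.0)

# A much-linked, well-valued local blog vs. an obscure, distant one:
print(reliability_score(inbound_links=5000, market_value=80.0, local_to_story=True))
print(reliability_score(inbound_links=12, market_value=5.0, local_to_story=False))
```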

Of course, these proxies for reliability - links, market evaluation, geographic proximity - wouldn't be perfect. Far from it. But it's worth noting that traditional media aren't perfect either. Remember, a New York Times reporter is just a blogger who happened to attend college; impress some bosses with his or her talent; get some training through experience - and possibly (though certainly not always) journalism school; and receive a podium for his or her pains.

Indeed, certain institutional features suggest bloggers (or other local reporters) may actually be more reliable than traditional media when it comes to local topics.

Recall, for instance, the New York Times's problem with "touchdown bylines" - where a Mobile, Alabama byline, for example, could merely mean the reporter's plane touched down there briefly - while the reportage came from a local, uncredited freelancer unaffiliated with the Times. Might not credited reportage by a Mobile-based blog be more reliable than the Times's "Mobile" story?

Similarly, consider Newsweek's headline-making - but now-retracted - reportage on alleged desecration of the Koran. Wouldn't an anonymous blog by someone within the military - and vetted by others in the military, who could anonymously comment - have been more likely to get the story (or lack thereof) right?

Moreover, and crucially, Google would not have to rely on proxies for reliability such as links, fantasy markets, and the like. Instead (or in addition), it could limit its sources to blogs (and other sites) willing to incorporate a system to further guarantee reliability.

How would this system work? It could ask readers to rate content for reliability - and to rate other raters as to how accurate their ratings were.
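
A bare-bones version might look like the sketch below: each rater carries a trust weight, a piece of content's reliability is the trust-weighted average of its ratings, and raters gain or lose trust depending on how closely their ratings track the consensus. The update rule and all the constants are invented for illustration.

```python
# A bare-bones sketch of "rating the raters": ratings are averaged with
# each rater's trust as the weight, and trust drifts up for raters who
# track the consensus and down for outliers. All constants are invented.

trust = {"alice": 0.9, "bob": 0.5, "carol": 0.2}  # rater -> trust weight

def content_reliability(ratings: dict[str, float]) -> float:
    """Trust-weighted average of per-rater reliability ratings (0 to 1)."""
    total = sum(trust[r] for r in ratings)
    return sum(trust[r] * v for r, v in ratings.items()) / total

def update_trust(ratings: dict[str, float], rate: float = 0.05) -> None:
    """Raise trust for raters near the consensus, lower it for outliers."""
    consensus = content_reliability(ratings)
    for r, v in ratings.items():
        accuracy = 1.0 - abs(v - consensus)   # 1.0 = agreed with consensus
        trust[r] = min(max(trust[r] + rate * (accuracy - 0.5), 0.0), 1.0)

ratings = {"alice": 0.8, "bob": 0.7, "carol": 0.1}
print(content_reliability(ratings))  # pulled toward the trusted raters
update_trust(ratings)                # carol's outlier rating costs her trust
print(trust)
```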

Systems for rating raters already exist - though they are not yet legion. Transparensee (for which I have worked, and from which I have stock options) has developed a dynamic system by which writers' ratings are adjusted based on readers' evaluation of their postings; top-rated writers' posts appear first for readers. Daily Kos also uses a rating system for those providing comments.

Google could also require disclosure - through which a content writer or producer could, in effect, make an argument for his, her, or its reliability.

For instance, one rater (or source) on economic issues might disclose that he has a Ph.D. in economics from, say, Stanford. Readers - and raters - may infer that he probably knows what he's talking about in economics (but not necessarily when it comes to, say, wine tasting).

Long-established brands like "Stanford" wouldn't be the only ones that counted: The Wired brand, the Wonkette brand, and individuals' names ("Anne Rice" is a brand when it comes to vampire knowledge) would matter too.

Video game scores could be proof of reliability regarding knowledge of video games; "top Amazon book reviewer" status could indicate knowledge of books. People could also vouch for each other's reliability, just as they often do in real life.

Finally, a writer who couldn't resort to any of these brands - a rare occurrence - could just make an argument: "Why you should believe me." In this way, content itself could vouch for reliability: After all, expertise doesn't always come from a degree; it can come from experience or access instead.
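
To see how disclosure could be made machine-readable, here is one possible - entirely hypothetical - schema: writers attach domain-tagged claims, and readers (or ranking systems) consult only the claims relevant to a story's topic, falling back on the writer's own argument where no credential applies.

```python
# A hypothetical schema for machine-readable disclosure: a writer attaches
# domain-tagged claims, and only the claims relevant to a story's topic
# are surfaced. Where none apply, the writer's own argument must do the work.

from dataclasses import dataclass, field

@dataclass
class Writer:
    name: str
    claims: dict[str, list[str]] = field(default_factory=dict)  # topic -> disclosures

    def case_for_reliability(self, topic: str) -> list[str]:
        """Return disclosed claims bearing on this topic, if any."""
        return self.claims.get(
            topic, ["No relevant credential disclosed; judge the argument itself."])

writer = Writer("A. Blogger", {
    "economics": ["Ph.D. in economics, Stanford (disclosed, not verified)"],
    "books": ["Top Amazon book reviewer"],
})
print(writer.case_for_reliability("economics"))
print(writer.case_for_reliability("wine tasting"))  # falls back to the argument
```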

Should Google Risk a Copyright Suit, Or Shift to Non-Traditional Content Sources?

Traditional media, of course, is hardly dead yet. But with tools like these, new media may drive a few stakes in its heart.

Also, since the traditional media have proved far less adaptable than new media entities, setbacks - such as adverse court decisions or new federal laws - are more likely to prove permanent for traditional media than for new media. (New media, in contrast, will probably devise creative workarounds to mitigate the impact of decisions and statutes that hurt their chances.)

Still, traditional media entities - with cadres of lawyers - remain good at filing lawsuits, and continuing costly litigation. With this in mind, what should Google do if it wants to get into the personalized-composite-news-story business?

My suggestion would be for it to work on a beta that would draw news only from nontraditional sources, who would "opt in" and waive any rights to the content. Doing this would have two effects: First, it would give Google a fallback, in case traditional media did win a suit against it in the future.

Second, it might even make a traditional media suit against composite stories useless: Even if traditional media could shut down Google's fact-stripping robots' composite stories, they would still have to contend with the stories created on the beta.

Either way, Google might win.


Julie Hilden, a FindLaw columnist, practiced First Amendment law at the D.C. law firm of Williams & Connolly from 1996-99. Hilden also has experience in criminal motions and appeals. Hilden's first novel, 3, was published recently. In reviewing 3, Kirkus Reviews praised Hilden's "rather uncanny abilities," and Counterpunch called it "a must read.... a work of art." Hilden's website, www.juliehilden.com, includes MP3 and text downloads of the novel's first chapter.
