Stolen photos of stars find 'safe harbor' online

In this Sunday, March 2, 2014, file photo, Jennifer Lawrence arrives at the Oscars at the Dolby Theatre in Los Angeles. Dan Steinberg/Invision/AP, File

SAN FRANCISCO (AP) — Imagine what the Internet would be like if most major websites had imposed controls preventing the naked photos stolen from Oscar-winning actress Jennifer Lawrence and other celebrities from being posted online.

The Internet would be less sleazy, but pre-screening more content might also mute its role as a megaphone for exposing abuses in government, big companies and other powerful institutions.

To preserve the Internet as a free-wheeling forum, the U.S. Congress included a key provision in a 1998 law called the Digital Millennium Copyright Act that governs the online distribution of photos, video and text.

A "safe harbor" clause absolves websites of any legal liability for virtually all content posted on their services. The law, known as the DMCA, requires websites and other Internet service providers to remove a piece of content believed to be infringing on a copyright after being notified of a violation by the copyright owner.

Websites have been busily pulling the naked photos of Lawrence and other victims of the high-tech theft, presumably because they are being notified of copyright violations or because the images violate the sites' terms of service. The copyright infringements are fairly blatant: The photos were likely taken by the celebrities themselves or by someone other than the thieves, who hacked into their online accounts to heist copies stored with online backup services such as Apple Inc.'s iCloud.

But the stolen photos weren't removed quickly enough to prevent an unknown number of people from making their own copies on their smartphones, tablets and personal computers.

Although the intrusion into the privacy of Lawrence and other stars probably would have been less rampant if websites weren't protected by the DMCA, most legal experts question whether requiring Internet companies to review content more vigilantly before it's posted would be worth setting precedents that could stifle free expression.

"If there is anything the American public dislikes more than an invasion of privacy, it's censorship," says Bruce Sunstein, a Boston attorney specializing in intellectual property rights.

HOW DID THE DMCA COME ABOUT?

As more people began to surf the Web in the mid-1990s, it became increasingly apparent that the Internet was making it easier for people to acquire and post all kinds of content. This made copyright violations more widespread, but music labels, movie studios and book publishers had to go to court to obtain orders to remove each piece of illegal content.

The DMCA represented Congress' attempt to address the copyright challenges posed by the Internet. Among other things, the legislation gave copyright holders a way to request that their content be removed simply by sending an email. Lawmakers also included the safe harbor provision to protect websites from lawsuits alleging that they should never have allowed the content to be posted in the first place.

Some of the safe-harbor protections have faced legal challenges, including a high-profile lawsuit that entertainment conglomerate Viacom Inc. filed against YouTube after Google bought the video site for $1.76 billion in 2006. Viacom alleged that YouTube's management allowed copyrighted video to be brazenly uploaded to the site because it knew the material would attract more viewers and drive up the company's value. Google and YouTube ultimately prevailed in the bitter dispute, largely because of the DMCA's safe harbor.

WHY WAS A SAFE HARBOR NEEDED?

If websites could be held liable for copyright violations, they would be thrust into the position of making judgment calls on a piece of content before it's posted online. That would be a daunting task, given the volume of material that Web surfers share on the Internet today. About 144,000 hours of video are uploaded to YouTube alone each day, while Twitter processes more than 500 million tweets per day and Facebook's 1.3 billion users share billions of photos.

"The platforms that host that content can't readily police all of it the way that a newspaper can carefully select what should go in as a letter to the editor," says Harvard University Law School professor Jonathan Zittrain, who is also co-founder of the Berkman Center for Internet & Society.

Some pre-screening of content is still done. YouTube blocks some videos from being posted through a copyright-screening tool created after Google took over.

Not all copyright violations are caught, so Google is still inundated with takedown requests. In the past month alone, Google says it received requests to remove more than 31 million links from its search index that point to content cited as copyright violations. That number doesn't include content posted on YouTube or Google's blogging service. Google says it complies with the overwhelming majority of the takedown requests.

It's probably a good thing that websites aren't asked to decide what's legal and what's not, says Corynne McSherry, intellectual property director for the Electronic Frontier Foundation, a group focused on digital rights. She worries that big companies would err on the side of caution and block more content than necessary because they wouldn't want to risk being held liable for something that could dent their earnings and stock price. Small startups, meanwhile, would likely block even more content because they can't afford anything that could drain their finances.

"The Internet, as we know it, would not exist if it were not for the DMCA's safe harbor," McSherry says. "If we are ever in a position where Internet service providers have to monitor their sites, I think Internet users will lose."

DON'T WEBSITES ALREADY BLOCK OR REMOVE MATERIAL THAT DOESN'T INVOLVE COPYRIGHT VIOLATIONS?

Yes, but those decisions typically involve violations of a website's own rules. For instance, YouTube and Facebook try to block pornographic images from appearing on their services. Both of those sites, along with Twitter, also forbid graphic violence, such as the recent beheadings of U.S. journalists videotaped by the Islamic State militants who killed them. In many instances, though, the websites still rely on their own users to flag posted content that violates the terms of service.

"The lasting test here is of the ethical moment that users face when they choose to seek out or repost photos they know weren't meant to be public," Zittrain says.
