
Google to Newspapers: learn how to use Robots.txt

After being pounded by media owners on three continents for providing links to their content, Google has finally returned fire by suggesting the papers learn how to use Robots.txt.

Media owners and high-ranking officials in Australia, the United States and Europe have been calling for Google to pay for the right to link to their content, arguing that Google steals their work by aggregating links on services such as Google News.

As we’ve reported several times, media outlets that honestly believe Google is stealing from them can take action right now by adding two lines to a Robots.txt file. Those easily implemented disallow lines would see links to their content removed from Google and every other major search engine that follows the Robots.txt protocol. That those complaining about Google haven’t done so yet is a serious case of wanting to have their cake and eat it too.

Josh Cohen, a Google Senior Business Product Manager, writes on the Google European Public Policy Blog that news publishers “like all other content owners, are in complete control when it comes not only to what content they make available on the web, but also who can access it and at what price.”

“For more than a decade, search engines have routinely checked for permissions before fetching pages from a web site. Millions of webmasters around the world, including news publishers, use a technical standard known as the Robots Exclusion Protocol (REP) to tell search engines whether or not their sites, or even just a particular web page, can be crawled. Webmasters who do not wish their sites to be indexed can and do use ... two lines to deny permission.”
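For illustration, the two lines Cohen is referring to are about as simple as the protocol gets. Placed in a robots.txt file at the root of a site, the following tells every crawler that follows the protocol to stay out; swapping the wildcard for a specific crawler name such as Googlebot would shut out Google alone:

  User-agent: *
  Disallow: /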

(via SEL)
