Column from PC Magazine: Pathetic European Attack on Google and the Net

In a headline-grabbing comment last week, Francisco Pinto Balsemão, head of the European Publishers Council, said that the Internet cannot continue to be free, as it has been for the last decade. He wasn't suggesting that publishers make all their sites pay-per-view, but that search engines should not be able to crawl and index publishers' content freely.

There is some concern that Google and other search engines can run rampant through a publisher's library and start providing copyrighted works to the public, for free or for a fee.

If this is a real concern, I can think of several ways to stop it.

  • Get off the internet.
  • Block the bots. I know which bots are visiting my web pages, and I could set filters to refuse them, so the content would never be collected, stored, mined, and indexed by the search engines in the first place (see the first sketch after this list).
  • Get the W3C to add a tag that tells bots a page should not be indexed. The tag could even allow some bots and forbid others, so an internal search engine could still catalog the intranet while public crawlers stay off the internet-facing pages.
  • Whitelist allowed users. Only allow valid, authorized IP addresses to access the site (see the second sketch below). In a closed community, this is far more manageable than it is for a site that wants to be available to anyone, anywhere, anytime.
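
For what it's worth, the bot-blocking and no-index ideas already have rough equivalents in practice: the robots exclusion convention (a robots.txt file, plus a robots meta tag or X-Robots-Tag header) tells well-behaved crawlers what not to index, and a server can simply refuse requests from user agents it recognizes as bots. Here is a minimal Python sketch of both; the bot names, paths, and page content are placeholders for illustration, not recommendations about what to block:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder list of crawler user-agent substrings to refuse; a real site
# would build this from the bots it actually sees in its access logs.
BLOCKED_BOTS = ("Googlebot", "Bingbot", "Slurp")

# Standard robots exclusion file: well-behaved crawlers fetch this first
# and skip anything it disallows.
ROBOTS_TXT = b"User-agent: *\nDisallow: /archive/\n"

class PublisherHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")

        # Serve the exclusion rules to any crawler that asks for them.
        if self.path == "/robots.txt":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(ROBOTS_TXT)
            return

        # Refuse known bots outright, whether or not they honor robots.txt.
        if any(bot in agent for bot in BLOCKED_BOTS):
            self.send_response(403)
            self.end_headers()
            return

        # Ordinary visitors get the page; the noindex header tells any
        # crawler that slips through not to index what it fetched.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("X-Robots-Tag", "noindex")
        self.end_headers()
        self.wfile.write(b"<html><body>Subscriber content here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8000), PublisherHandler).serve_forever()
```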

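The whitelist idea is just as simple to sketch. Below is a minimal example using Python's standard ipaddress module; the networks are documentation ranges standing in for a publisher's approved subscriber addresses, purely for illustration:

```python
import ipaddress

# Placeholder allowlist: documentation ranges standing in for a publisher's
# own offices and approved partner networks.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client address falls inside an approved network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("192.0.2.17"))    # True:  inside an allowlisted range
print(is_allowed("198.51.100.5"))  # False: everyone else is turned away
```

The same check could sit in front of the request handler above, so that only approved addresses ever reach a page at all.
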
There are plenty of controls a site owner can use to limit access to files and pages on the internet. Folks seem to have forgotten them, or never understood how the system worked in the first place; they just followed the lead of what was done before, by a bunch of hackers who didn't really want to restrict access to information, copyrighted or not.

Of course, it will take a lot more work to manage everything: an expenditure of time, money, and effort to limit access to a system that is a free-for-all in its natural state. But it can be done.

Can I patent these ideas?