October 22nd, 2010, 09:40 AM #1
What Content Can Search Engines “See” on a Web Page?
Search engine crawlers and indexing programs are, at bottom, software programs — but extraordinarily powerful ones. They crawl hundreds of billions of web pages, analyze their content, and map how those pages link to one another. They then organize all of this into a series of databases that can answer a user's search query with a highly tuned set of results in a few tenths of a second.
This is an amazing accomplishment, but it has its limitations. Software is very mechanical, and it can understand only portions of most web pages. The search engine crawler analyzes the raw HTML form of a web page, not the rendered page you see in your browser. If you want to see what this looks like, use your browser's View Source feature.
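To make this concrete, here is a minimal sketch (the sample page and URLs are invented for illustration) of what a crawler works from: raw markup, from which it extracts `<a href>` links to discover other pages — the link analysis described above. Python's standard-library `html.parser` is used here only as a stand-in for a real crawler's parser.

```python
# A crawler receives raw HTML like this, not the rendered page.
# Sample markup is hypothetical, purely for illustration.
from html.parser import HTMLParser

RAW_HTML = """
<html>
  <head><title>Example Page</title></head>
  <body>
    <h1>Welcome</h1>
    <p>See our <a href="/products.html">products</a> and
       <a href="https://example.com/about">about page</a>.</p>
  </body>
</html>
"""

class LinkExtractor(HTMLParser):
    """Collects every href attribute, as a crawler does to find new URLs."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
parser.feed(RAW_HTML)
print(parser.links)  # ['/products.html', 'https://example.com/about']
```

Everything outside the markup the parser understands — scripts, images without alt text, content drawn in by JavaScript — is simply invisible to a program working at this level.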
October 22nd, 2010, 09:48 AM #2
Who told you that? Spiders (search bots) do not render HTML the way a browser does, but broken HTML can prevent them from fully crawling a page. If you want to see roughly what a robot sees on your page, use a text-only browser like Lynx.
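For anyone without Lynx installed, here is a rough approximation (an assumption-laden sketch, not Lynx itself) of the text-only view: tags, scripts, and styles stripped away, leaving only the visible text a bot can readily index.

```python
# Approximates a text-only view of a page: markup removed,
# <script>/<style> contents skipped, visible text kept.
# The sample HTML is invented for illustration.
from html.parser import HTMLParser

SAMPLE = """
<html><head><title>Demo</title><script>alert('hidden');</script></head>
<body><h1>Heading</h1><p>Visible paragraph text.</p></body></html>
"""

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # nesting depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

p = TextOnly()
p.feed(SAMPLE)
print(" ".join(p.chunks))  # Demo Heading Visible paragraph text.
```

Note how the `alert('hidden')` script never appears in the output — which is the point: if your important content only exists inside scripts, a text-only view (and likely a bot) won't see it.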