Q: Sudden Loss of Google PR (No Answer, 3 Comments)
Question  
Subject: Sudden Loss of Google PR
Category: Computers
Asked by: mollee-ga
List Price: $15.00
Posted: 24 Apr 2004 01:38 PDT
Expires: 24 May 2004 01:38 PDT
Question ID: 335347
Our website, http://www.travelmasti.com, has a PR of 5. Until very recently,
around 2,200 pages from the site were listed on Google.

Now there are only 900 left.

A section main page, http://www.travelmasti.com/hotel/index.htm, had PR 4.

Now it shows PR 0. Can you please explain why?
Answer  
There is no answer at this time.

Comments  
Subject: Re: Sudden Loss of Google PR
From: fload-ga on 26 Apr 2004 19:52 PDT
 
It could be one of a few things: your robots.txt file could have been
changed, lost, or improperly set up. As Google explains: "In order to save bandwidth
Googlebot only downloads the robots.txt file once a day or whenever we
have fetched many pages from the server. So, it may take a while for
Googlebot to learn of any changes that might have been made to your
robots.txt file. Also, Googlebot is distributed on several machines.
Each of these keeps its own record of your robots.txt file. Finally,
you may want to check that your syntax is correct against the standard
at: http://www.robotstxt.org/wc/norobots.html. If there still seems to
be a problem, please let us know, and we will correct it."
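For reference, a minimal robots.txt that allows all crawlers to index the entire site looks like this (the empty Disallow line means "disallow nothing"):

```text
User-agent: *
Disallow:
```

Anything more restrictive than this - a Disallow rule covering the affected directories, for example - would explain pages dropping from the index.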
Subject: Re: Sudden Loss of Google PR
From: hans51-ga on 27 Apr 2004 17:11 PDT
 
The answer and the solution are very simple - we saw the same situation in
another professional forum:
http://forums.digitalpoint.com/
See the website reviews and other topics of interest there to learn more
and fix all the problems, so your pages come back alive.

You have many substantial web-design mistakes on your start page
(I did not look at your other pages).
One of the most important for crawling (indexing) is the title tag.
YOUR title tag looks like this:

<title>Hotels Resorts: " Hotels" Online Hotel Apartment Reservation. Addresses 
Phone Numbers Pictures Maps Information of Tour Guide & Travel in india Indian 
Hotels, India, Goa Hotels "Taj Hotel" Taj Rajvilas Jaipur</title>

That is the original source above - on 3 lines instead of 1! That makes
the tag malformed, and a few days ago I observed a Googlebot result from
such a page: the bot FAILS on such pages.

A LINEFEED within a tag can NOT be crawled correctly by Googlebot.
The bot stops after the first linefeed within your title,
fails to parse the FULL title,
and may simply continue with the next tag. Maybe it is a BUG
at Google, but that's the way it is:

the title and META tags each belong on ONE line!
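As a sketch, here is the same title collapsed onto a single line; the bare & is also escaped as &amp;amp;, which valid HTML requires (the wording itself is unchanged):

```html
<title>Hotels Resorts: "Hotels" Online Hotel Apartment Reservation. Addresses Phone Numbers Pictures Maps Information of Tour Guide &amp; Travel in india Indian Hotels, India, Goa Hotels "Taj Hotel" Taj Rajvilas Jaipur</title>
```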

The next (minor) problem:

<LINK 
href="rajeev.css" type=text/css rel=stylesheet>

Here again you split a tag with a line feed!
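The same tag on one line, with the attribute values quoted (quoting them is also needed for the page to validate cleanly):

```html
<link href="rajeev.css" type="text/css" rel="stylesheet">
```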

There are also a number of MAJOR problems.

Your site is NOT valid HTML (for MANY reasons!).
When you validate it at W3.org,
http://validator.w3.org/check?uri=http://www.travelmasti.com/
you see the error message:
"I was not able to extract a character encoding labeling from any of
the valid sources for such information. Without encoding information
it is impossible to validate the document."

Among the major errors:

1. The character encoding declaration is MISSING. It looks similar to

 <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1"> 

according to what you really use with your website editor tool.

2. A DOCTYPE declaration is missing at the TOP of your source page.
It looks like this (you have to adapt it to the real character encoding
and HTML version your PC / website editor is actually using):

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">

But even worse:
your code is a salad (!!) of old and new.
It appears to me that you have opened and modified your page with at
least 2 different tools, or different versions of the same tool -
one part with FrontPage 4.0.
You may want to prefer a real editor - there are many on the market.
For example, Macromedia Dreamweaver MX 2004 offers tools powerful
enough to clean up a FrontPage 4.0 salad
and get your pages validated as clean and valid HTML code.

Hence:

CLEAN your entire source code using a professional tool, OR, if you are
skilled, you can do all of that by hand. (I do all coding by hand, using
Quanta Plus, a simple professional open-source tool, under Linux.) It
depends on your knowledge of valid HTML code;
you may have an employee able to do it for you.

Then validate, after all of the above-mentioned problems have been solved,
and wait until Google re-crawls your site;
you may be back a few weeks after the site cleanup has been finished.

You may have lost part of your backlinks because Google counts toward
your PR only links with a PR >= 4:
only links from quality sites with sufficient PR count; no link
farming or private links.

good luck
Subject: Re: Sudden Loss of Google PR
From: hans51-ga on 28 Apr 2004 00:11 PDT
 
Important additional information to the above:

An additional link report file, which I made over the past few hours, is
available to you for download now. The link report shows that you have
a large number of dead links - many due to wrong link syntax. Please
download it, study it, and make corrections.
It took a while (2+ hrs on a fast server) because your site appears
to be on a very slow server, making crawling terribly expensive due to
waiting time.
Think about that:
even Google may run out of money and hardware resources wasting hours crawling
DEAD links.

you find the complete report for free download at
http://www.kriyayoga.com/with_love_from_god/

it will be deleted after a few days.

The link report is compressed in tar.bz2 format; you may need a suitable
tool to uncompress it. If you are missing such a tool, then a Google
search for

tar.bz2 decompression tool download

will give you a choice of tools to select from.
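On a Unix-like system no download is needed, since tar handles bzip2 archives directly. A minimal sketch - the archive name link-report.tar.bz2 and its contents here are made up for the demonstration:

```shell
# Create a throwaway .tar.bz2 to demonstrate (substitute your downloaded report)
echo "dead-link list" > report.txt
tar -cjf link-report.tar.bz2 report.txt   # -c create, -j bzip2, -f archive name
rm report.txt

tar -tjf link-report.tar.bz2              # -t: list the contents without extracting
tar -xjf link-report.tar.bz2              # -x: extract into the current directory
cat report.txt
```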

You have a whole bunch of dead links,
some of them again only because of invalid code - some of the
links may have been made by a wrong tool OR by hand.
I give you an example of one of the first errors mentioned in the report.

Your link in the code is
/domestic/himachal/CHAIL.htm
however, the CORRECT path would be
/domestic/himachal/chail.htm

ALL file names are CASE sensitive on www servers!
If you write a file name in UPPER case instead of lowercase, that
will result in a 404 Not Found error.

Another frequent error in your links is the 3rd link on the link report.
The code on your page is
/hotel/image\holiday\logo.gif   ---> resulting in: 404 Not Found error!
Correct would be
/hotel/image/holiday/logo.gif   ---> found!

A correct WWW path has NO backslash - NEVER! Backslashes exist on your
Windows system, but never on a commercial Unix server or similar
professional www server.

Most likely you entered such links either by hand or with a wrong tool;
you have many such errors.
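Both mistakes - wrong case and backslashes - can be caught mechanically. A minimal sketch, assuming you already have the link paths from your pages and the set of real file paths on the server; the function name check_link and the sample paths are made up for illustration:

```python
def check_link(link: str, real_paths: set[str]) -> str:
    """Diagnose one link path against the real files on the server."""
    # Backslashes are Windows path separators; URLs must use forward slashes.
    fixed = link.replace("\\", "/")
    if fixed in real_paths:
        return "ok" if fixed == link else f"fix backslashes -> {fixed}"
    # Unix web servers are case sensitive: try a case-insensitive match.
    lowered = {p.lower(): p for p in real_paths}
    if fixed.lower() in lowered:
        return f"fix case -> {lowered[fixed.lower()]}"
    return "404: not found"

real = {"/domestic/himachal/chail.htm", "/hotel/image/holiday/logo.gif"}
print(check_link("/domestic/himachal/CHAIL.htm", real))    # case mismatch
print(check_link(r"/hotel/image\holiday\logo.gif", real))  # backslashes
```

A real checker would build `real_paths` by walking the server's document root.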

After a full repair of the site's damage, all should be fine again - for
Googlebot AND any other bot/crawler.

Professional site-builder tools often have built-in link checkers;
otherwise you can use external link-checking tools to run a regular link
verification and ensure that all pages can be found.

w3.org
offers free online tools for link verification
and for code validation.
BOTH should be run after each site/page modification.
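As a sketch of the first step any such link checker performs, Python's standard-library html.parser can collect every href and src on a page; the sample snippet of HTML below is made up:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href/src attribute values from a page for later verification."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

page = '<a href="/hotel/index.htm">Hotels</a><img src="/hotel/image/holiday/logo.gif">'
collector = LinkCollector()
collector.feed(page)
print(collector.links)
```

Each collected path would then be fetched (or checked against the server's file tree) to find the 404s.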

God bless
