Hi there,
There can be many reasons why a website does not appear in the indexes
of search engines. One is that no-one has told the search engines that
it is there - each search engine requires you to submit your URL to
them. The place to do this for Google is at:
http://www.google.com/addurl.html
After submission it can take 1-2 months before the site shows up in
search results.
The other reasons all revolve around the site being rejected by search
engines. Some of these are:
- Cloaking.
- Writing text or creating links that can be seen by search engines
but not by visitors to your site.
- Participating in link exchanges for the sole purpose of increasing
your ranking in search engines.
- Sending automated queries to Google in an attempt to monitor your
site's ranking.
- Using programs that generate lots of generic doorway pages.
Google's guidelines on these practices are at:
http://www.google.com/webmasters/dos.html
For foodphilosophers.com, the culprit appears to be duplicate content.
Many webmasters attempt to get double the listings in search engines
by having the same content appear under two different URLs. In this
case, these two URLs have the same content:
http://www.foodphilosophers.com/
http://www3.sympatico.ca/yuksele/
Search strategy:
Personal knowledge
I trust this answers your question. If any portion of my answer is
unclear, please ask for clarification.
Best wishes,
robertskelton-ga
Request for Answer Clarification by foodphilosophers-ga on 14 Sep 2002 15:01 PDT
Please clarify why the "double content" culprit is not recognized or
rejected by the search engines, and what one can do to overcome that
(if the site's double content cannot be changed by the owner). Thanks
Clarification of Answer by robertskelton-ga on 14 Sep 2002 18:04 PDT
The first priority of search engines is to provide quality results to
searchers. It is a waste of a searcher's time to follow two different
links in the search results, yet see the same content at each. Google
and other search engines compare the contents of all the pages in
their index. If they find duplicate sites, the tendency is to remove
them. Most of the time the offending sites are attempting to trick
the search engines into having multiple listings of the same content.
The solution is to remove or change the duplicated content. If your
content is the same as the content on another site, and you do not
control the content of the other site, then you have two options:
1) Change or remove the content on your site
2) Ask the other site to change or remove their content
After the change/removal has been done, resubmit the site/s to the
search engines, and wait for them to re-index the sites.
Request for Answer Clarification by foodphilosophers-ga on 14 Sep 2002 19:37 PDT
My original question stands, however: why would the search engines
not take or choose at least one of these double-content sites? My
site is built on web space my internet provider supplies
(www3.sympatico.ca/yuksele/); I domain-forward my registered company
name (www.foodphilosophers.com) there; hence the "double content",
but with no intention to trick the search engines. There is no other
way that I can do this. Please clarify further. And thank you.
Clarification of Answer by robertskelton-ga on 14 Sep 2002 20:43 PDT
I have yet to find an official explanation from any of the search
engines of their methods concerning duplicate content. Anecdotal
evidence concerning Google suggests that usually the site Google
ranks the highest stays, and the other one is removed. Sometimes,
instead of removing such sites, Google just assigns them a rank of
zero.
I suspect it has a lot to do with the method of duplication. There
are many large websites that have regional mirrors to help spread the
bandwidth - they do not appear to be penalised.
In your case, rather than having the same words and images, but at two
different URLs and stored on two different servers, you have two URLs
using the same content from the same server.
My educated guess is that when Google finds a web page which only
contains a single frame, which fetches content from a different
website, it doesn't like it. I'm sure the exact answer would be quite
complex.
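To picture the setup: a domain-forwarding page of this kind is
usually nothing more than a frameset. A hypothetical sketch of what
such a page might look like (the actual markup generated by your
forwarding service may differ):

  <html>
  <head>
  <title>Food Philosophers</title>
  </head>
  <!-- The whole "site" is this single frame, which loads its
       content from the Sympatico server -->
  <frameset rows="100%,*">
    <frame src="http://www3.sympatico.ca/yuksele/" frameborder="0">
  </frameset>
  </html>

Two URLs, one set of content - which is exactly what the duplicate
checks notice.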
There was a discussion on this at WebMasterWorld:
http://www.webmasterworld.com/forum3/1766.htm
Another solution to your situation would be to use a robots.txt file
or META tag at one of the sites, so that the search engines ignore it.
More info:
http://www.robotstxt.org/wc/exclusion-admin.html
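For reference, the standard ways to exclude all robots look like
this. A file named robots.txt at the top level of the site,
containing:

  User-agent: *
  Disallow: /

or, on each page you want ignored, a META tag inside the <head>
section:

  <meta name="robots" content="noindex,nofollow">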
If the site that you don't want to be on search engines is:
FoodPhilosophers.com - give it a robots.txt or META tag to exclude
all robots, and resubmit both URLs.
Sympatico.ca - remove the index.html file, and submit the full URL to
the search engines (they won't find it, so they will remove it):
http://www3.sympatico.ca/yuksele/index.html
Save the old index file under a different name, say:
http://www3.sympatico.ca/yuksele/frame_index.html
and change the redirection at FoodPhilosophers.com to point to this new
page. If nobody links to frame_index.html, and no-one tells the search
engines that it is there, then as far as search engines are
concerned... single content.
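With that change made, the frame at FoodPhilosophers.com would simply
point at the renamed file - roughly like this (a hypothetical sketch,
as I haven't seen your actual forwarding markup):

  <frameset rows="100%,*">
    <frame src="http://www3.sympatico.ca/yuksele/frame_index.html">
  </frameset>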
Then resubmit the URL for FoodPhilosophers.com.
----------------
Good news: AltaVista has one of the sites indexed:
http://www.altavista.com/sites/search/web?q=URL%3Asympatico.ca%2Fyuksele%2F+&avkw=tgz&kl=XX