Q: Internet (Answered, 1 Comment)
Question  
Subject: Internet
Category: Computers > Internet
Asked by: lifeafterdeath-ga
List Price: $200.00
Posted: 02 Jun 2003 07:01 PDT
Expires: 02 Jul 2003 07:01 PDT
Question ID: 211931
Q1.) What are Cascading Style Sheets, and how can the concept be used
to produce more manageable web pages? Also explain how they add
manageability to people's websites.

Q2.) What are image maps? Differentiate between client-side and
server-side image maps, and explain the effective use of clickable
image maps on a web page.

Q3.) Explain the different types of internet/intranet connections
available for web technology in the present day. Discuss at least
three types, keeping web performance considerations in mind, and also
include connection devices.

Q4.) One aspect of management is publication of an organization's web
site on internet search engines. Explain how users can register a
website with internet search engines. Also explain spiders, meta tags
and headers.

Q5.) Write a brief critical appraisal of client-side and server-side
scripting technologies. Your report should address the main scripting
languages used for both client-side and server-side scripting.
You might consider modern markup languages (DHTML, XML), UNIX and
Microsoft technologies, platform support, browser support, security,
portability, scalability, functionality, future developments, etc.

[Note: Q4= 500-800 words
       Q5= 2000-3000  words ]

Request for Question Clarification by errol-ga on 02 Jun 2003 08:40 PDT
Hi there!

While I work on your question, can I just ask for clarification
regarding Q3?

Do you mean different types of physical data connection such as T1
leased lines, satellite etc?

Many thanks,
errol-ga.
Answer  
Subject: Re: Internet
Answered By: errol-ga on 02 Jun 2003 18:57 PDT
 
Hi there, Lifeafterdeath!

This answer will be heavily based on my own experience of internet
technologies and some parts may be very "hands-on" with lots of
examples that you can try using a text editor and a browser if you
like.



Q1. Cascading Style Sheets
===========================

Cascading Style Sheets (CSS) is a very powerful and efficient way to
control how a web page looks on all forms of media, through a set of
simple commands embedded within the HTML code.
These commands are very easy to learn even if you have only a small
amount of HTML experience.
CSS is currently at Level 2 of the specification defined by the World
Wide Web Consortium (W3C), although most browsers do not support
anything above Level 1.
Most web developers are still learning Level 1, and many continue to
struggle with the quirks of CSS support in individual browsers.
I will go into more detail on this near the end of the section.

Some things that CSS can do are:

- Control the sizes and colours of text, borders, tables and virtually
every page element.
- Make a site easier to maintain or change.
- Make a site more accessible.
- Reduce page size, which has a positive effect on loading time and
saves bandwidth.
- Provide a set of different styles optimized for different media
types such as a PC monitor screen, television or printing.
- Replace large images by using fonts scaled to any size that you
like.

Some things that CSS cannot do:

- Run on the server; it is strictly a client-side technology.
- Display properly in older browsers.
- Move items on the page dynamically, do maths, alter windows - that's
what Javascript is for.

So where do we put the CSS code?
We can place the CSS in three places:

- Inline, where the code is placed within an HTML element using the
"style" attribute.
- In the Head of the HTML.
- In an external file, linked in a similar way to an external
Javascript file.

Let's take a look at an example of CSS.

	H1 { color: green; }

The above rule tells the browser to render every <h1> element in
green, which means that we no longer need to wrap it in <font> tags.
If we were to place this inline, we would write the following:

	<h1 style="color: green;">A Heading</h1>

If the CSS were placed in the head of the document, or in an external
file we would not need to change the <h1> tag at all, like so:

- In the document head:

	<html>
	<head>

	<style type="text/css">
	H1 { color: green; }
	</style>

	</head>
	<body>

	<h1>A Green Heading</h1>

	</body>
	</html>

- In an external file, "style.css":

	<html>
	<head>

	<link rel="stylesheet" type="text/css" href="style.css">

	</head>
	<body>

	<h1>A Green Heading</h1>

	</body>
	</html>

The last method is by far the most efficient because the stylesheet is
separate from the HTML page and is cached by the browser, which speeds
up page loading and can reduce your bandwidth by ten times or more.
You can control virtually every element on the page with CSS:
paragraphs, headings, text colours, margins, positions and sizes,
which means that we can eliminate many tags including <font> and
<center>.
A typical HTML page that doesn't use CSS would be around 25k in size;
if we strip out the tags which are no longer needed, the HTML page
could be as little as 5k with a stylesheet file of around 2k. But if
the CSS is placed in the head or inline, the browser still has to load
the CSS code with every page, and the page weight would not be much
less than a page using traditional methods.
If you used the traditional method or inline CSS and needed to change
one item throughout the whole website, such as a heading colour, you
would have to go through the code of every single HTML page; if you
have a large site, that means several hours of work.
Using the external file method, though, all you need to do is open the
CSS file and change what you need; the effect applies to the whole
website globally and instantly.

The following diagram shows the difference in file structure:

- Without CSS

index.html page2.html page3.html page4.html page5.html
   25k         30k        27k       59k        12k

- With CSS

                     style.css
                        3k
                         |
      -----<--------<---------->------------>----
     |          |          |          |          |
index.html page2.html page3.html page4.html page5.html
   8k         14k        13k       42k        4k

You begin to see the advantages when your outbound bandwidth massively
decreases due to each page using the same stylesheet file and when you
can alter just one line of code to change the appearance of the entire
site.


CSS can also make a page more accessible by offering a selection of
different style sheets for each type of viewing device.
For a television, you might want to increase the overall font size and
widen the margins to reduce clutter.
This would be very difficult to achieve using traditional methods: you
might have to provide two or more versions of a website for TV
Internet (WebTV) and PC Internet users, and updating one page of the
site would then mean double the work. With CSS we can use the same
page for everybody.
To do this, we use the "media" attribute, as in these examples:

- PC monitor

	<link rel="stylesheet" type="text/css" href="PCstyle.css"
media="screen">

- WebTV, TV Internet browsers

	<link rel="stylesheet" type="text/css" href="TVstyle.css"
media="tv">

- Projector

	<link rel="stylesheet" type="text/css" href="Projectorstyle.css"
media="projection">

- Printers

	<link rel="stylesheet" type="text/css" href="Printstyle.css"
media="print">

The last of these, for printers, is particularly useful.
If the normal page contains a lot of graphics then we can tell the
browser to use the "print" stylesheet when printing, to save ink and
make the page more readable on paper, perhaps by increasing the text
size and changing the font to a simpler one.
This is how many websites offer a basic "view printable version" page.
As well as the above, there are other media types for braille, aural,
handheld, embossed braille print and teletype devices.
For more on these, visit the following pages:

W3C Media Types
http://www.w3.org/TR/REC-CSS2/media.html#media-types

A List Apart - Print Styles
http://www.alistapart.com/stories/goingtoprint/


So why are Cascading Style Sheets "cascading"?
The answer is very simple: if two properties are applied to the same
page element, the last property takes precedence.
In this example, the font color for the body of the page will be
black:

	body { color: red; }
	body { color: black; }

This rule also applies to the method used to import the stylesheet.
If we have three stylesheets (inline, in the head and in an external
file), each specifying a different font colour, the inline style rules
will override the others.
This is how CSS "cascades": the style rules go from top to bottom in
importance.
For more on the Cascade, visit this page:
http://www.w3.org/TR/REC-CSS1#the-cascade
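The "last rule wins" behaviour above can be sketched as a tiny
resolver (purely illustrative; the function name is mine, and this is
not how browsers are actually implemented):

```javascript
// Illustrative sketch of the cascade's "last rule wins" behaviour:
// later declarations for the same property overwrite earlier ones.
function resolveCascade(declarations) {
  const computed = {};
  for (const { property, value } of declarations) {
    computed[property] = value; // a later rule replaces an earlier one
  }
  return computed;
}

// body { color: red; } followed by body { color: black; }
const bodyStyle = resolveCascade([
  { property: "color", value: "red" },
  { property: "color", value: "black" },
]);
// bodyStyle.color is "black": the later rule wins
```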


Although there are very few, CSS does have some disadvantages.
The primary one is the level of support in older browsers, which is a
constant source of frustration for many web developers.
If a browser does not support CSS at all, such as Internet Explorer 3,
the page will display as a very basic page with no colours or other
formatting apart from the usual heading and paragraph tags.
Some people (including myself) would argue that this is actually an
advantage: it keeps things simple for old browsers which may struggle
with elaborate traditional HTML design and, more importantly, it
separates the content from the styling.
There have even been books written on the subject:
http://www.amazon.com/exec/obidos/ASIN/1904151043/002-5524531-8280038

CSS support is very poor in Netscape 4.x and Internet Explorer 4 or 5
but it is very good in Internet Explorer 6 and excellent in browsers
such as Mozilla [ http://www.mozilla.org ] (Netscape 6, 7), Opera 6+
and Internet Explorer 5 for the Mac which support virtually all of CSS
Level 1.

There is a very good table of CSS browser bugs and support here:
http://www.richinstyle.com/bugs/table.html


A few commercial web sites have taken the brave step of moving to a
100% CSS design; the two most famous of these are Wired, a popular
magazine, and All The Web, a search engine.

"Wired News: A Site for Your Eyes"
http://www.wired.com/news/culture/0,1284,55675,00.html

"AlltheWeb celebrates full XHTML/CSS compliance"
http://www.fast.no/article/articleview/265/1/12


Related Links
==============

W3C CSS Specification - Level 1
http://www.w3.org/TR/REC-CSS1

W3C CSS Specification - Level 2
http://www.w3.org/TR/REC-CSS2

Introduction to CSS style
http://www.w3.org/MarkUp/Guide/Style

W3C CSS main page
http://www.w3.org/Style/CSS/

Glish.com - Eric Costello's guide to CSS layout
http://glish.com/css/

A List Apart - Flexible Layouts
http://www.alistapart.com/stories/flexiblelayouts/

Eric Meyer - CSS
http://www.meyerweb.com/eric/css/

CSS/edge - A collection of remarkable demonstrations that show the
real power of CSS
http://www.meyerweb.com/eric/css/edge/

Microsoft - A collection of CSS demos written in the late nineties
http://www.microsoft.com/typography/css/gallery/slide1.htm



Q2. Image Maps
===============

An image map is basically an image, commonly in the GIF format, which
contains clickable areas pointing to different URLs.
A classic example would be a map of the world where you click on a
particular country and are taken to a new page with a more detailed
map of that area.

An area is defined by code, either in a map file processed by a CGI
script on the server or in the HTML for a client-side map, which
describes the shape and size of the area using pixel coordinates.
A rectangular area of the image would use this code, for a client-side
map:

	<area shape="rect" coords="1,50,150,100" href="http://www.google.com"
alt="Google!">

Let's imagine that the following diagram is a 200x200 GIF image map.

 --------------------------
|            |             |
|     A      |     B       |
|            |             |
|            |             |
|------------|-------------|
|            |             |
|     C      |     D       |
|            |             |
|            |             |
 --------------------------

For a server side image map, you would use code like this:

CGI Map Code
=============

	rect http://www.site.com/a.html 0,0 99,99
	rect http://www.site.com/b.html 100,0 199,99
	rect http://www.site.com/c.html 0,100 99,199
	rect http://www.site.com/d.html 100,100 199,199
	default http://www.site.com/index.html

HTML
=====

	<a href="cgi-bin/imagemap.map">

	<img src="imagemap.gif" ismap>

	</a>

In this example, the actual GIF image is used as a "guide" for the
user and is overlaid on top of the invisible clickable map itself.
When you hold the mouse cursor over the map, the browser reads the
position constantly; when you click, it sends these coordinates to the
CGI script, which processes them and sends you to whichever URL the
area corresponds to.


For client side, we must place everything within the HTML:

HTML
=====

	<map name="ourmap">
	<area shape="rect" coords="0,0,99,99"
href="http://www.site.com/a.html">
	<area shape="rect" coords="100,0,199,99"
href="http://www.site.com/b.html">
	<area shape="rect" coords="0,100,99,199"
href="http://www.site.com/c.html">
	<area shape="rect" coords="100,100,199,199"
href="http://www.site.com/d.html">
	<area shape="default" href="http://www.site.com/index.html">
	</map>

	<img src="imagemap.gif" height="200" width="200" usemap="#ourmap">

Now, when you click on the map the browser will jump to the part of
the HTML called "ourmap" which contains the coordinates for each link.
As before, the mouse pointer's position is read, but the difference is
that the browser does the processing of the coordinates itself and
works out which URL to load next.
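This client-side processing can be modelled in a few lines of
Javascript (a simplified sketch, not the browser's actual code; the
rectangles correspond to the quadrants of the 200x200 diagram above):

```javascript
// Simplified model of client-side image map hit-testing.
// Each rectangle is [left, top, right, bottom] in pixel coordinates.
const areas = [
  { coords: [0, 0, 99, 99],       href: "http://www.site.com/a.html" },
  { coords: [100, 0, 199, 99],    href: "http://www.site.com/b.html" },
  { coords: [0, 100, 99, 199],    href: "http://www.site.com/c.html" },
  { coords: [100, 100, 199, 199], href: "http://www.site.com/d.html" },
];
const defaultHref = "http://www.site.com/index.html";

function hitTest(x, y) {
  for (const { coords: [x1, y1, x2, y2], href } of areas) {
    if (x >= x1 && x <= x2 && y >= y1 && y <= y2) return href; // inside this rect
  }
  return defaultHref; // no area matched, use the default URL
}

// A click at (150, 50) falls in quadrant B
```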

To create the actual coordinates and the code, you could work them all
out by yourself or you could use a free tool to do it for you.
I have listed a couple at the end of this section for your
convenience.

So that is a very basic description of how they work, but which is
better: server side or client side?
It is very difficult to choose one over the other, but server side is
probably more reliable.

Server side advantages
=======================

- Very powerful and complex maps can be used.
- Coordinates and URLs are not at risk of being cached by the browser
so any changes will have an immediate effect.
- The actual coordinates cannot be copied by somebody wishing to
duplicate your own work.
- The map will work in any browser that can display images.

Server side disadvantages
==========================

- The server can become heavily loaded on a busy site.
- The map cannot be saved for offline use.
- Requires a server with Perl support.

Client side advantages
=======================

- A user could save the map for reference at a later date.
- All the processing is done by the browser, all the server must do is
output the HTML files and images.

Client side disadvantages
==========================

- Only modern graphical browsers will support image maps (IE 3+,
Netscape 4.x).
- Map can be saved and copied onto another site with ease.
- The coordinates and URLs may be cached so any changes may take hours
to have an effect.
- A computer with low hardware specifications may struggle with a lot
of coordinates.

It is entirely up to you which version you use but you should test
each method to see which works better in your environment.


Common and effective uses of image maps range from actual geographical
maps to elaborate site navigation.
Another common use is for graphical statistics, such as a pie chart.
A good example of a web site which effectively uses image maps in a
commercial environment is Multimap [ http://www.multimap.com ].


Related Links
==============


HTML Goodies - Image maps
http://www.htmlgoodies.com/tutors/im.html

Webmonkey - Server side image maps
http://hotwired.lycos.com/webmonkey/96/39/index2a.html

An example of a pie chart image map
http://www.chestysoft.co.uk/drawgraph/imagemap.asp

MapEdit - A Windows map creation tool (free trial available)
http://www.boutell.com/mapedit/

Mac-ImageMap - An Apple Mac image map tool
http://weyl.zib-berlin.de/imagemap/Mac-ImageMap.html

Yahoo Directory - More image map resources
http://d4.dir.scd.yahoo.com/computers_and_internet/internet/world_wide_web/imagemaps/



Q3. Internet Connections
=========================


*** I have posted this assuming that you wanted information about
physical connections, if that is not the case please ask for a
clarification ***


DSL
====

At the most basic and cheapest level, a web server could be connected
via a person's DSL line.
While it would not offer a high level of reliability, this solution
would be adequate for a personal homepage or a development server.

The main obstacle is the IP address that will be assigned to the
server.
In most cases an ISP will only issue a dynamic IP to the line which
means that while it may not change for months, it is still at risk of
changing without warning in the event of a power cut or a modem
failure.

There is a very effective way around this, though: using a dynamic IP
service such as No-IP [ http://www.no-ip.com ], you can install
software on the server which will contact the No-IP service if the IP
address changes.
You will still be able to access the server because when you sign up
for the service, you choose an easy-to-remember sub-domain such as
http://myserver.no-ip.com.
When the software updates the No-IP DNS entries, the sub-domain will
then point to the new IP address.

The option of hosting on your cable modem or DSL connection with a
dynamic IP service is very popular among people who run game servers,
IRC servers or even using remote control software such as PC Anywhere
[ http://www.symantec.com/pcanywhere/ ].


"A server of your own"
http://hotwired.lycos.com/webmonkey/99/08/index3a.html


VPN
====

Virtual Private Network (VPN) technology is widely used by business
people on the move.
VPN essentially allows you to connect to your office network over a
normal internet connection via a secure tunneling protocol.
This is ideal if you wish to access the corporate intranet which is
blocked from public by a firewall or IP address restriction.

What is a Virtual Private Network?
http://vpn.shmoo.com/vpn/FAQ.html#Q3:

The performance of this network will be limited by your actual
internet connection speed and slowed further by the encryption that
must take place; it is similar to the way an SSL website operates, for
example.

Another drawback is that some ISPs (mostly residential) block VPN
ports and view it as a breach of the Acceptable Use Policy to use this
protocol.


DSL Reports
http://www.dslreports.com/speak/remark,5022467;L2NvbW1lbnQvMTU3Ni8zMTIxOSM1MDIyNDY3

"Cable firms cloud AT&T's VPN vision"
http://www.nwfusion.com/news/2001/0312aup.html

Google Directory - VPN
http://directory.google.com/Top/Computers/Security/Virtual_Private_Networks/?tc=1

VPN Labs
http://www.vpnlabs.com


Leased Line
============

Leased Lines are the most reliable and popular form of connection for
serving web pages.
They offer a large range of speeds and a 1:1 contention ratio which
means that no other organization is sharing your bandwidth.

A good explanation of Leased Lines comes from Onyx Internet:

"All companies that are serious about their Internet access use Leased
Lines. In today's business environment the Internet has become an
essential communication, information, and marketing tool which needs
to be available all of the time. Many companies would lose money if
they could not receive orders or e-mails so they need the guarantee of
a Service Level Agreement (SLA). This means that should their
connection be down longer than the time allowed in the SLA they would
receive service credits as compensation.
A leased line gives the customer a direct connection from their
network straight in to one of Onyx Internet's Points of Presence
(POP), which provides a fast link out onto the Internet. This
connection is for the exclusive use of the customer so unlike ADSL
they have guaranteed bandwidth all of the time. Leased Lines are
suitable for heavy Internet users and they can easily be upgraded as
Internet usage or traffic flow increases."
http://www.onyx-connections.net/leaseline/

Leased Lines are often referred to as "T1" connections.
For more about T1 technology and terminology, I found a useful article
explaining it well which contains the quote:

"T1 is a high speed digital network (1.544 mbps) developed by AT&T in
1957 and implemented in the early 1960's to support long-haul
pulse-code modulation (PCM) voice transmission. The primary innovation
of T1 was to introduce "digitized" voice and to create a network fully
capable of digitally representing what was up until then, a fully
analog telephone system.

Perhaps the way to really begin this discussion is to discuss the AT&T
Digital Carrier System referred to as "ACCUNET T1.5". It is described
as a "two-point, dedicated, high capacity, digital service provided on
terrestrial digital facilities capable of transmitting 1.544 Mb/s. The
interface to the customer can be either a T1 carrier or a higher order
multiplexed facility such as those used to provide access from (fiber
optic) and radio systems."
http://www.dcbnet.com/notes/9611t1.html


Google Directory - Digital Hierarchy
http://directory.google.com/Top/Computers/Data_Communications/Digital_Hierarchy/?tc=1

Point to Point Digital Circuits
http://www.ldcircuit.com/services-ptp.htm

T1 FAQ
http://www.bandwidthsaving.com/T1lines.cfm



Q4. Search Engines
===================


Search engine placement is vital to the success of an online business
these days; the first place many people look when trying to find the
official site of any organization is a search engine.
Getting listed is fairly easy, but staying listed in a good position
in the results is more difficult.


Getting Listed
===============

To become listed, there are several things that you must do.
The first task is to gather a list of the search engines and their
submission forms; I have listed a few of the most popular ones below
for you.

Search Engines

Google
http://www.google.com/addurl.html

All The Web
http://www.alltheweb.com/add_url.php

HotBot & Lycos
http://hotbot.lycos.com/addurl.asp

MSN
http://submit.looksmart.com/info.jhtml?synd=zdd&chan=zddsearch

Looksmart
http://listings.looksmart.com

AltaVista
http://addurl.altavista.com

Directories

Yahoo
http://docs.yahoo.com/info/suggest/

DMOZ (Google Directory)
http://dmoz.org/add.html


The above sites would be the absolute minimum to get the site indexed.
All you need to do is visit each search engine in turn and follow the
instructions for submitting your web site, usually a matter of typing
your URL and pressing the Submit button.
The directories usually need you to navigate to the category related
to your web site then click the "Add URL" link on the page.

Either before or very soon after submitting your URL, you should
optimize the web pages for the spiders: applications run on many
computers at once, checking sites already indexed and updating the
listings.


Spiders
========

A spider (also known as a robot, or "bot" for short) will perform any
of the following actions on its "crawl":

- Check existing URLs to see if they are online.
- If they are, compare the content with the stored data.
- Keep a list of any new sites it finds for inclusion in the index.
- Delete obsolete links/URLs.

A spider can take up to six months to do a complete crawl of the web
due to the sheer size of it, and after submitting your URL it may take
around three months for your site to actually appear in the index.
These spiders are based in huge data centers and consume enormous
amounts of bandwidth while performing their tasks.
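The crawl behaviour described above can be sketched as a simple loop
(an illustration only; `fetch` here is a placeholder function standing
in for the real network request, returning null for an offline URL):

```javascript
// Heavily simplified sketch of a spider's crawl loop: visit each URL,
// store its content, queue any new links, and skip obsolete URLs.
function crawl(seedUrls, fetch) {
  const stored = {};               // url -> stored page content
  const queue = [...seedUrls];     // URLs waiting to be visited
  const seen = new Set(queue);
  while (queue.length > 0) {
    const url = queue.shift();
    const page = fetch(url);       // null means the URL is offline/obsolete
    if (page === null) continue;   // leave it out of the index
    stored[url] = page.content;    // store (or refresh) the content
    for (const link of page.links) {
      if (!seen.has(link)) {       // a new site found: queue it for crawling
        seen.add(link);
        queue.push(link);
      }
    }
  }
  return stored;
}
```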

However, there is a new technology emerging -- distributed crawling.
At the home of Grub [ http://www.grub.org ], they are doing something
new and exciting.
You may have heard of SETI@home [ http://www.seti.org ], a distributed
computing project; "distributed computing" means that many people run
an application on many computers at once.
The main difference between the Grub spider and Google's is that while
the Googlebot is based in a data center on powerful servers, Grub runs
on hundreds of home PCs at the same time.
This means that Grub might be able to get more work done through lots
of people donating spare bandwidth and CPU power; if there are enough
users then Grub will be able to crawl many times more sites than any
other search engine.

Optimizing Content
===================

Contrary to popular opinion, very few search engines pay any attention
to Meta tags anymore.
Google, for example, will ignore the Meta Description and Meta
Keywords tags because the actual text content of the pages is a much
better indication of what a site is about.
This is why you will see results in Google with the search terms you
typed in highlighted in the summary of the page content below the
link.
The Meta tags which the search engines do pay attention to are the
Meta Robots ones, which give instructions to the spider on whether to
follow any links and whether to index anything.

The syntax of the Meta Robots tag is:
	<meta name="robots" content=" INSERT VALUE ">

Possible combinations of values are:

	<meta name="robots" content="index,follow">

- This means that the robot will "index" all the content that it
finds, such as text and images, to store in the database.
When somebody types a search term and it is found in this page, the
page will be included in the results.
The "follow" value tells the robot to follow all links that it finds.

	<meta name="robots" content="index,nofollow">

- As above, but the robot is not allowed to follow any links, so it
will index this page only.

	<meta name="robots" content="noindex,follow">

- The robot is not allowed to index any content but is allowed to
follow any links.

	<meta name="robots" content="noindex,nofollow">

- The robot must not index anything and must not follow any links from
this page.
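A spider's reading of these four combinations can be sketched as
follows (a simplified illustration; the function name is mine):

```javascript
// Simplified sketch of how a spider might interpret the content of a
// Meta Robots tag, e.g. <meta name="robots" content="index,nofollow">.
function parseRobotsMeta(content) {
  const values = content.toLowerCase().split(",").map(v => v.trim());
  return {
    index:  !values.includes("noindex"),  // may the page content be stored?
    follow: !values.includes("nofollow"), // may the links be followed?
  };
}

// parseRobotsMeta("index,nofollow") gives { index: true, follow: false }
```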

Even though this seems like a good way of doing things, using a
"robots.txt" file is far more popular and gives you more control over
which parts of the site are spidered.
A robots.txt file is placed in the root of your webspace, for example:
http://www.yoursite.com/robots.txt, and every time a spider visits it
will check to see if the file is there and then read the contents.
The contents of the file will have two main parameters, User-Agent and
Disallow.

This is a sample of the robots.txt file:

User-agent: *
Disallow: /images/

Here, the file is telling all search engine spiders that they must not
enter the /images/ directory of your webspace, which is useful if you
do not want your pictures to turn up on Google Image Search [
http://images.google.com ].
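How a spider might apply such a file can be sketched like this
(heavily simplified; real crawlers support per-agent record groups and
more directives, and `isAllowed` is my own illustrative name):

```javascript
// Minimal sketch of robots.txt handling: collect Disallow prefixes
// from the "User-agent: *" record, then test a path against them.
function isAllowed(robotsTxt, path) {
  let applies = false;           // does the current record apply to us?
  const disallowed = [];
  for (const rawLine of robotsTxt.split("\n")) {
    const line = rawLine.trim();
    const colon = line.indexOf(":");
    if (colon === -1) continue;  // not a "Field: value" line
    const field = line.slice(0, colon).toLowerCase();
    const value = line.slice(colon + 1).trim();
    if (field === "user-agent") applies = (value === "*");
    else if (field === "disallow" && applies && value) disallowed.push(value);
  }
  return !disallowed.some(prefix => path.startsWith(prefix));
}

const robots = "User-agent: *\nDisallow: /images/";
// isAllowed(robots, "/images/photo.gif") gives false (blocked)
// isAllowed(robots, "/index.html") gives true
```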

For more on robots.txt and Meta tags, visit these sites:

Robot Meta Tags
http://www.searchengineworld.com/metatag/robots.htm

Robots.txt Tutorial
http://www.searchengineworld.com/robots/robots_tutorial.htm

Robots FAQ
http://www.robotstxt.org/wc/faq.html

Some points which should be followed to enable the spiders to
successfully index your content and enjoy a good position in search
results:

- Use plenty of descriptive text in your page titles and main body.
- Make good use of HTML structure: heading tags, paragraphs and image
alt attributes.
- Avoid using frames; if you do, then provide links to each page
within <noframes> tags.
- Flash cannot be indexed, so your site may not be indexed at all if
you do not provide an HTML version.
- Be careful with dynamic pages that use session IDs; these will
usually stop a robot in its tracks or cause a "loop", where a robot
keeps indexing the same page thinking it is different because parts of
the URL have changed.
- Avoid using text which is a similar colour to the background; this
will most likely result in your site being dropped.


Related Links
==============

Google Webmaster Guidelines
http://www.google.co.uk/webmasters/guidelines.html

Search Engine Submission Tips
http://searchenginewatch.com/webmasters/index.php

Indexing Resources on the WWW
http://www.slais.ubc.ca/resources/indexing/www1.htm



Q5. Scripting
==============


Scripting is probably second only to HTML in importance on the world
wide web; it provides interaction and security, and without it many
areas like e-commerce and user communities would be very difficult to
establish and use.
I will describe both forms of scripting below, with a summary beneath.


Client Side Scripting
======================

Client side scripting is versatile and very useful.
With Client Side Javascript, the most popular method, you can run
various small programs within the web browser including calculations,
real time manipulation of page elements and alteration of the browser
windows.
The majority of modern browsers support Javascript to a certain
extent; Internet Explorer 3+, Netscape 4+ and Opera 4+ will be able to
run complex calculations with no major issues.

What is Javascript?
Despite the name, it is not a cut-down version of Java: it is a
separate language (developed by Netscape, originally under the name
LiveScript) with Java-like syntax, giving web designers a way to
perform basic tasks without the memory and CPU overhead that Java
carries.

So where would you find Javascript?
On at least a quarter of the web I would guess, if not much more.
If we visit CNN [ http://www.cnn.com ] and click on one of the links
for a video feed, Javascript introduces itself by asking your browser
to open a new window with certain parameters set to remove the address
bar and the browser buttons (home, refresh, back, etc) to maximize the
visual experience and reduce the clutter.
You can usually recognize a link which opens a window by looking at
the status bar at the bottom of the browser, which will sometimes show
something like this: "javascript:openwindow('url.html', '800', '600',
no, no)".
When you click on the link, a Javascript "function" will be executed:
a single "shortcut" to a sequence of commands written by the coder and
embedded in the HTML, usually in the head of the page.

Javascript can also be used to provide handy calculators and currency
converters; anything which needs mathematics can be performed using
client side scripting.
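For example, a small client-side currency converter might be no more
than the following (the function name and the rounding choice are
mine, and any rate passed in would be a made-up example value):

```javascript
// A small currency converter of the kind often built with client-side
// Javascript: multiply by the rate and round to two decimal places.
function convertCurrency(amount, rate) {
  return Math.round(amount * rate * 100) / 100;
}

// convertCurrency(10, 1.5) gives 15
```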

So what can Javascript do?
Javascript can:

- Open or modify browser windows.
- Modify HTML code or CSS values dynamically (DHTML).
- Perform calculations within the browser, using the user's CPU power.
- Check if certain conditions have been met, such as filling in
required form fields.
- Display dialog boxes to the user.
- Display the system date or time.
- Display the browser version.
- Change the current URL of any window.
- Be triggered by clicking a link, moving the mouse or pressing a key.

Javascript cannot:

- Write to or alter files on the user's computer system.
- Perform any task unrelated to or outside the browser environment.
- Alter files or data on the web server.
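The required-field check from the "can" list above is typically a
small function run before the form is submitted; here is a sketch
with made-up field names:

```javascript
// Sketch of client-side form validation: submission would be blocked
// until every required field is non-empty.
function validateForm(fields) {
  const missing = [];
  for (const [name, value] of Object.entries(fields)) {
    if (value.trim() === "") missing.push(name); // empty required field
  }
  return { ok: missing.length === 0, missing };
}

// validateForm({ name: "Ann", email: "" }) reports "email" as missing
```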

Soon after learning basic HTML, many people move on to Javascript to
create special effects on their personal websites such as scripts
which read the mouse position and place a small image at this point
every few milliseconds to create the illusion of an image following
the cursor.

When client side scripting is used to dynamically alter HTML code in
this way, it is called "DHTML", or Dynamic HTML.
Many spectacular results can be achieved by combining Javascript with
advanced CSS; you could change the colour of a font depending on what
the time is, for example, or make advanced menus which vanish if your
mouse leaves the trigger area.
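The time-based colour idea can be sketched like this (illustrative
only; the function name and the hour boundaries are mine):

```javascript
// Pick a font colour from the current hour, then apply it with DHTML,
// e.g. document.body.style.color = colourForHour(new Date().getHours());
function colourForHour(hour) {
  return (hour >= 6 && hour < 18) ? "black" : "white"; // daytime vs night
}
```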

I found some good examples of DHTML which I have listed here:

Image - from black and white to colour
http://www.w3schools.com/dhtml/tryit.asp?filename=trydhtml_gray

Image - moving light effect
http://www.w3schools.com/dhtml/tryit.asp?filename=trydhtml_filter_lightfx

Moving text
http://www.w3schools.com/dhtml/tryit.asp?filename=trydhtml_examples_onmousemove

These effects are nice to look at but they do not work in all
browsers; they require Javascript to be enabled and a modern CSS-aware
browser such as IE 5.5.
A mobile phone would certainly not be suitable for advanced scripting
due to the small screen size and operating system limitations, and
many old or text-only browsers can become confused by Javascript,
producing strange results or no effect at all.


There is another main scripting language called "VBScript", short for
Visual Basic Script.
This is a Microsoft technology so it will only work in Microsoft
Internet Explorer or programs such as Outlook Express which share much
of the same basic program code.

VBScript is very similar to Javascript in syntax; the following line
would be identical in both technologies:

	document.write("Hello World!")

However, VBScript is regarded by some as inflexible and insecure
compared to Javascript, because a few nasty viruses (notably the Love
Letter virus: http://www.superiornet.net/security/archive/love_bug.htm
) have spread using this code, and it also has some access to the
user's file system.
This was demonstrated recently with a script that would open a user's
CD-ROM drive when a special link was clicked.
VBScript is also less portable than Javascript because it only works
in Internet Explorer; IE has a very large share of the browser market,
but there are at least one hundred other browsers available which can
run the other scripting languages with ease.
As a side note, Microsoft also have their own version of Javascript,
called JScript, which is essentially the same thing.

Of course, there is also Flash with its Javascript-like ActionScript,
in which entire web sites can be programmed.
There are thousands of excellent examples of Flash programming but the
one which I personally enjoy the most is the brilliant AI Movie site
with the chatbot [ http://www.aimovie.com ].
Note that the bot responses and "brain" actually run on the server but
the user interface is all client side.
If this AI technology interests you, visit the homepage for the
technology - The ALICE Foundation [ http://www.alice.org ].


Server Side Scripting
======================

After exploring HTML, CSS and Javascript, a lot of web coders then
move on to server side scripting which is the most powerful form of
interactivity that the web currently has.
They are now in the world of "real" programming.


There are many technologies to choose from when programming a website;
the most popular are PHP, ASP, Coldfusion, Perl and JSP.
You can usually tell which technology is powering a web site by the
file extension.
- PHP ("PHP: Hypertext Preprocessor") is .php, .php3 or .phtml
- ASP (Active Server Pages) is .asp
- Coldfusion is .cfm
- JSP (JavaServer Pages) is .jsp
- Perl is .pl or .cgi (Common Gateway Interface)

You may also come across .shtml which is a "server side include" page
where small parts of PHP, Perl or ASP can be included within standard
HTML code.
This is very rare with PHP though because this language enables you to
jump in and out of PHP whenever you want to, unlike Perl.


The main difference between these and client side scripting is that
server side code runs entirely on the server and has no direct
control over the user's browser.
An ASP script can serve Javascript code but it cannot change window
sizes by itself or dynamically alter pages after the page has been
loaded.
In some ways, dynamic pages are less dynamic than a "static" .html
page can be.

What they can do, though, is run and modify files on the server, alter
databases, connect to other servers to retrieve data and lots of other
useful things.
You could write a simple PHP script to ping another webserver from
yours for example, or maybe create a PDF document on the fly.
It's this power and flexibility which makes server side technologies
so exciting.

Two good examples of server side scripting are e-commerce and user
community forum software.
These can also use Javascript to provide even more interaction.


E-Commerce Example
===================

When using an e-commerce site, you will usually find Perl at the heart
of the operation.

When browsing the catalogue of products, a server side script will be
responsible for fetching the list of products with prices from the
database, generating the correct links to view more information about
a product and storing your session data in a "session ID" or cookie.

A session ID is basically an easy way for the server to identify you
as you move around the site and keep track of the products that you
have viewed.
PHP Session IDs
http://www.php.net/manual/en/ref.session.php
It may be stored either as a cookie on the user's filesystem or as
part of the URL, like this:
http://www.yoursite.com/store/viewproducts.php?sid=5478957298379871972324
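The URL-based variant works because the script can read the sid parameter back out of the query string on every request. The extraction logic is roughly the following; it is sketched here in Javascript purely for illustration, whereas a real store would do this in PHP or Perl on the server:

```javascript
// Pull the "sid" parameter out of a URL's query string, or return
// null if the URL carries no session ID.
function extractSessionId(url) {
    var match = url.match(/[?&]sid=([0-9]+)/);
    return match ? match[1] : null;
}
```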

Also, when a user hits the "add to basket" link, the server will store
the selection in the database along with the user's unique session ID
and place this information in the cookie on the user's system.

Upon checkout, all the data stored in the session is then retrieved to
provide the final list of items.
Then, when the user decides to buy the product, the server will take
the credit card data and run a special script on the credit card
authorization server to check the status of the card.
You can often tell when this is happening because the URL in the
browser points to a .cgi file.
If the transaction is successful, the auth server will debit the card
and tell the web server to proceed.
Then, the web server will be programmed to send the "checkout
completed" HTML page to the browser and will also dispatch an email
containing receipt information.

All this happens in a matter of seconds.


User Community Forums Example
==============================

Forums are a very, very popular application on the web and allow
almost real time communication to take place between users.
They have been around for many years now and although they once used
CGI exclusively, PHP is now the more popular choice due to flexibility
and ease of coding.

When viewing a topic (also called a "thread"), the script will be
working in virtually the same way as an e-commerce website; the data
is selected from a database, altered to suit the particular needs and
then sent to the browser.
However, when creating a topic or post the methods are slightly
different.

When using software such as the popular VBulletin [
http://www.vbulletin.com ], you are presented with a text area to type
your message and several "emoticons" or "smilies" to the left.
This page has several advanced client side features, such as clicking
on a smiley to insert the appropriate text version into the text area.
When you hit the Submit button, several things happen.
First, the data is parsed by the script on the server for items such
as disallowed Javascript and HTML tags, then stored as text in the
database, which is usually the popular MySQL [
http://www.mysql.com ].
An email will also be dispatched to users who have chosen to receive
notifications of new posts in the topic or forum.
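The parsing step for disallowed tags boils down to pattern matching. Here is a much-simplified sketch of the idea, in Javascript only for illustration (the real work happens in PHP on the server, and real forum software applies far more thorough rules than this):

```javascript
// Crude illustration of stripping HTML and script tags from a
// submitted post. Real software keeps a whitelist of allowed tags
// rather than deleting everything.
function stripTags(post) {
    return post.replace(/<[^>]*>/g, "");
}
```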

When the browser is re-directed to the page where the user can view
their new post, the data is retrieved by the PHP script and parsed
again for smiley codes; when the script finds the text string " :) ",
it knows to replace it with the appropriate image code representing a
smiley face.
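That replacement is a simple text substitution, sketched here in Javascript for illustration (a real forum does this in PHP, and the image filename is invented):

```javascript
// Replace every text smiley ":)" in a post with the HTML for a
// smiley face image.
function replaceSmilies(post) {
    return post.replace(/:\)/g, '<img src="smile.gif" alt=":)">');
}
```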

This is a rather simplified version of events; hundreds of actions
happen on the server while the user is moving around the forum area.


So now that we are aware of some basic functions of the languages,
which one is the best?
Well, the answer is none of them; none can claim the title of best or
worst, as it all depends on your situation, server operating system,
and stability and security issues.
It's not really possible to rate the amount of security that any
language can provide, but some are more mature, and therefore more
stable, than others.

- Of the main server side languages, Perl is the most mature and has
been around for many years.
Many web hosts will only offer this language because it is highly
trusted and people have a high degree of experience with it.

- ASP is a closed-source Microsoft technology.
This doesn't mean that you can't run it on a Unix/Linux machine,
though; Sun have brought it to that platform with their own version,
Chilisoft ASP.
http://wwws.sun.com/software/chilisoft/index.html

- PHP is fairly new and is open-source.
It performs very well, is fast and is usually paired with MySQL for
database driven web sites.
http://www.php.net

- Coldfusion is made by Macromedia, the same company who invented
Flash.
"With ColdFusion MX, you can build and deploy powerful web
applications and web services with far less training time and fewer
lines of code than ASP, PHP, and JSP."
Quote from: http://www.macromedia.com/software/coldfusion


With regard to operating systems, most people favour Unix with the
Apache webserver.
Again, this may be due to its maturity and well known stability.

To read several arguments about this, view the following pages:

Windows NT Server 4 vs Unix
http://www.kirch.net/unix-nt/

Microsoft vs. Linux
http://www.oreilly.com/pub/a/oreilly/ask_tim/2001/msvlinux_0201.html

OS Wars
http://www.zdnet.com.au/newstech/os/story/0,2000048630,20263263-3,00.htm


One of the biggest advantages of server side scripting, though, is
that it is compatible with any browser on any platform.
No processing at all is done by the client, so you could easily use
the same script to process pages for a PDA as for a desktop PC.


Related Links
==============

"Server-Side Scripting Shootout"
http://hotwired.lycos.com/webmonkey/99/46/index1a.html

The Scripts.com
http://www.thescripts.com/serversidescripting/

Server Side Scripting Languages
http://www-106.ibm.com/developerworks/web/library/wa-sssl.html

DMOZ Directory - Server Side Scripting
http://dmoz.org/Computers/Programming/Internet/Server_Side_Scripting/

Yahoo Directory
http://dir.yahoo.com/Computers_and_Internet/Software/Internet/World_Wide_Web/Servers/Server_Side_Scripting/

ASP.net
http://www.asp.net

Apache.org - Server Side Includes
http://httpd.apache.org/docs/howto/ssi.html

About Server Side Scripting
http://webservices.web.cern.ch/WebServices/docs/Advanced/SSScripting/



I hope this answers all your questions!
Kind regards,
errol-ga


Related Google Searches
========================

"css"
://www.google.co.uk/search?q=css

"css support"
://www.google.co.uk/search?q=css+cupport

"cascading style sheets"
://www.google.co.uk/search?q=cascading+style+sheets

"image maps"
://www.google.co.uk/search?q=image+maps

"image map uses"
://www.google.co.uk/search?q=image+map+uses

"search engine robots"
://www.google.co.uk/search?q=search+engine+robots

"search engine spiders"
://www.google.co.uk/search?q=search+engine+spiders

"what search engine spiders do"
://www.google.co.uk/search?q=what+do+search+engine+spiders+do

"microsoft vs unix"://www.google.co.uk/search?q=microsoft+vs+unix

"server side scripting"
://www.google.co.uk/search?q=server+side+scripting

"session ids"
://www.google.co.uk/search?q=session+ids

"web server connections"
://www.google.co.uk/search?q=web+server+connections

"leased lines"
://www.google.co.uk/search?q=leased+lines

"dedicated digital circuits"
://www.google.co.uk/search?q=dedicated+digital+circuits
Comments  
Subject: Re: Internet
From: fezzik-ga on 12 Jun 2003 09:35 PDT
 
The link to the Alice Bot should be:
http://www.alicebot.org/

The link mentioned is to a 3D learning tool.
