Hello begentle-ga,
ericynot's question is important - what is the significance of the
Windows XP / Linux bit? If you post the answer to that as a
clarification request to this answer, I will be notified of it and
can respond quickly. However, since I've got some spare time now,
I'm going to tackle the other four parts of your question.
In answering your question, I am going to take a fairly hard-line
view of what an operating system is. Since there is no standard
definition, I will start with what I consider an OS to be.
The operating system is the software responsible for allocating and
deallocating machine resources. It manages CPU time, memory, and
access to disks, networks, keyboards, ports, PCI devices, and other
resources. Even this, however, is not a precise definition (what
about a video card driver? What about the filesystem layer?), but we
will pretend for a moment that it is.
With that definition in hand, the division between OS and
application is a reasonable one. Clearly applications cannot be
written to run on the bare metal (how would you switch between
watching a movie and writing an essay?), so there has to be some
system to switch between applications. In that sense, if in no
other, there needs to be an operating system running on every
computer system that is not single-purpose (such as a VCR control
system). While in practice the division is clear only from a
marketing perspective (consumers think of Windows as an operating
system, but computer scientists realize it is vastly more than that:
it includes numerous client applications, a window manager, a
kernel, etc.), the conceptual divide is important both when
designing software and when using it.
In the future, the division between an operating system and an
application will continue to blur. I expect the OS/app divide will
eventually be abandoned in favor of a tiered approach, since
ultimately this is the most stable and efficient way to implement
things. The bottom layer interacts with hardware and switches tasks.
Above it sit higher layers, such as the filesystem layer, which
provide services to applications. If each layer and component is
well documented, incremental improvements can be made without
worrying about causing the whole house of cards to collapse. A major
cause of application instability today is that software does not
adhere to published specifications, so when another vendor changes
the way something is implemented (without changing the "public"
accessors), software that relies on unpublished functions breaks.
This must stop, and will hopefully coincide with the increased
popularity of "better" programming languages.
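To make the layered idea concrete, here is a minimal sketch in C.
Everything in it is invented for this answer (block_read,
block_write, fs_store, fs_fetch, and the toy in-memory "disk" don't
come from any real OS); the point is only that the filesystem layer
talks to the block layer exclusively through its documented
functions, so the block layer's internals can be swapped out without
touching anything above.

  #include <stdio.h>
  #include <string.h>

  #define BLOCK_SIZE 16
  #define NUM_BLOCKS 8

  /* Bottom layer: a toy block device.  'disk' is private; a real
     driver could keep the data on actual hardware instead. */
  static char disk[NUM_BLOCKS][BLOCK_SIZE];

  /* Documented accessors - the only way higher layers may use it. */
  int block_read(int block, char *buf)
  {
      if (block < 0 || block >= NUM_BLOCKS)
          return -1;
      memcpy(buf, disk[block], BLOCK_SIZE);
      return 0;
  }

  int block_write(int block, const char *buf)
  {
      if (block < 0 || block >= NUM_BLOCKS)
          return -1;
      memcpy(disk[block], buf, BLOCK_SIZE);
      return 0;
  }

  /* Higher layer: a toy "filesystem" built only on block_read()
     and block_write(), never on 'disk' directly. */
  int fs_store(int slot, const char *text)
  {
      char buf[BLOCK_SIZE] = {0};
      strncpy(buf, text, BLOCK_SIZE - 1);
      return block_write(slot, buf);
  }

  int fs_fetch(int slot, char *out)
  {
      return block_read(slot, out);
  }

  int main(void)
  {
      char buf[BLOCK_SIZE];
      fs_store(0, "hello");
      fs_fetch(0, buf);
      printf("%s\n", buf);    /* prints "hello" */
      return 0;
  }

If I later rewrite block_read() and block_write() to talk to a real
disk instead of the in-memory array, fs_store() and fs_fetch() - and
anything built on top of them - need not change at all.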
In theory, there are no advantages to producing the OS and apps
together. If done properly, the OS and applications will be written
separately whether they are produced "together" or not - the problem
I mentioned in the previous paragraph about undocumented interfaces
is not restricted to third-party applications; the OS vendor is just
as likely to suffer as anyone else. In practice, documenting
interfaces is a huge undertaking, so it tends not to be done well.
Therefore, developing the OS and apps together allows the apps to
take advantage of OS features that may be known only to the
developers of the OS itself. The disadvantages stem from the same
situation: if both are developed simultaneously, there is less
incentive to do a good job documenting the OS, which leads to
greater reliance on undocumented features that are subject to
change, since no documentation exists to say that they must not be
changed.
Consider the following example. I write code to allocate memory to
applications, with two publicly accessible methods: one to request
memory and one to free it. Suppose the first is req(size), which
returns the starting address of the memory block, and the second is
free(start, size), which releases a block of memory. In the code I
write, these methods are just wrappers. The request code has to
figure out what memory is available, decide whether the requester
should have it, and then mark it as used by a given app. If an
application peeked at the data structure I use to keep track of
memory, it could give itself memory faster than by going through
req(). However, since I specify that applications should use only
req() and free(), I assume I can change how req() operates; as long
as it takes a size and returns an address, nothing should break. I
discover that using a hash table instead of a list makes req() work
better, so I change the code. Now, when an application peeks at my
hash table and interprets it as a list, it makes a huge mess by
writing a table entry on top of my hash table! Granted, this is a
simplistic example, but it demonstrates real problems that occur in
commercial code.
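Here is roughly what that example might look like in C. All of the
names are made up for this answer, and I've called the second
function free_block() rather than free() so it doesn't collide with
the C library's free(); treat it as a sketch of the idea, not as
code from any real allocator.

  #include <stddef.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* One entry in the allocator's PRIVATE free list.  Nothing
     outside this file is supposed to know this layout exists. */
  struct region {
      size_t start;
      size_t size;
      struct region *next;
  };

  static struct region *free_list;  /* private: may become a hash table */

  /* Public accessor: request 'size' units of memory.  Returns the
     starting address, or (size_t)-1 if nothing fits.  How the
     search works is an internal detail. */
  size_t req(size_t size)
  {
      struct region *r;
      for (r = free_list; r != NULL; r = r->next) {
          if (r->size >= size) {
              size_t start = r->start;
              r->start += size;       /* carve the block off the front */
              r->size  -= size;
              return start;
          }
      }
      return (size_t)-1;
  }

  /* Public accessor: give a block back.  (A real allocator would
     also merge adjacent free regions; skipped to keep this short.) */
  void free_block(size_t start, size_t size)
  {
      struct region *r = malloc(sizeof *r);
      if (r == NULL)
          return;                     /* out of bookkeeping space */
      r->start = start;
      r->size  = size;
      r->next  = free_list;
      free_list = r;
  }

  int main(void)
  {
      free_block(0, 1024);            /* seed one big free region */

      size_t a = req(100);            /* the right way: via req() */
      printf("got a block starting at %zu\n", a);

      /* The wrong way would be to walk free_list directly.  It gives
         the same answer today, but the day free_list becomes a hash
         table it breaks - or worse, scribbles over my bookkeeping. */
      free_block(a, 100);
      return 0;
  }

The point is that req() and free_block() are the published
specification. If I later replace free_list with a hash table, both
keep their signatures and behavior, and only code that cheated by
reading free_list directly is in trouble.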
I hope this is the answer you were looking for; if not, ask for a
clarification. Also, let me know what you meant by the Windows XP /
Linux bit.
-Haversian