Q: Database Management ( No Answer,   1 Comment )
Question  
Subject: Database Management
Category: Computers > Algorithms
Asked by: winner313-ga
List Price: $2.00
Posted: 25 Sep 2005 14:12 PDT
Expires: 09 Oct 2005 09:07 PDT
Question ID: 572480
Why does the multiversion read consistency algorithm ensure serializability?
Answer  
There is no answer at this time.

Comments  
Subject: Re: Database Management
From: cronk-ga on 25 Sep 2005 18:17 PDT
 
http://www.stanford.edu/dept/itss/docs/oracle/9i/rac.920/a96597/psintro.htm
Multiversion read consistency ensures that read operations do not
block write operations and that write operations do not block read
operations. Multiversion read consistency creates snapshots or read
consistent versions of blocks that have been modified by a transaction
that has not committed. This approach has two key benefits:

- Read operations do not have to wait for resources, because users modifying data do not prevent readers from reading.
- A snapshot view of the data is available from a specific point in time.
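
The mechanism described above can be sketched in a few lines. This is an illustrative toy, not Oracle's actual implementation: each committed write appends a new version of a block stamped with a commit time, and a reader sees only the newest version committed at or before its snapshot time, so writers never block readers.

```python
# Toy sketch of multiversion read consistency (illustrative assumption,
# not Oracle's implementation). Writers append versions; readers pick
# the newest version no later than their snapshot timestamp.
class MVStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.clock = 0       # logical commit clock

    def commit_write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock

    def snapshot_read(self, key, snapshot_ts):
        # Newest version committed no later than snapshot_ts.
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None          # key did not exist at snapshot time

store = MVStore()
store.commit_write("R", 100)          # committed at ts=1
ts = store.clock                      # a reader takes its snapshot here
store.commit_write("R", 200)          # a later writer commits ts=2
print(store.snapshot_read("R", ts))   # reader still sees 100
```

Note how the reader holding snapshot `ts` is unaffected by the later write: it reads the old version instead of waiting for, or being overwritten by, the concurrent writer.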


http://courses.cs.vt.edu/~cs5204/fall99/distributedDBMS/serial.html
Serializability Theory

Consider a database D = (x, y, z), on which we will concurrently
perform a series of transactions T1, T2, ..., Tn. We want some way of
knowing whether we executed the transactions "correctly." Formally, we
state that an execution is correct if and only if it is equivalent to
some serial execution of the transactions. In other words, a correct
concurrent execution of the transactions produces the same result we
would get if we executed them one at a time.
It may not be clear how executing transactions concurrently can cause
problems. Date [1] identifies the Lost Update problem as a potential
problem for concurrently executing transactions. A lost update occurs
when two transactions, A and B, both update the same location R in the
database. Suppose A reads R, but before A can update R, B reads R.
Then, A updates R, and B updates R based on R's previous contents. B
has overwritten A's update; A's update, thus, is lost. Note that this
would not be a problem had A and B executed sequentially: A would have
updated R before B read R.
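
The interleaving described above can be replayed directly. In this hypothetical sketch, both transactions read R before either writes, so B's write is computed from a stale value and silently discards A's update:

```python
# Demonstration of the lost-update anomaly (hypothetical values).
db = {"R": 100}

a_read = db["R"]        # A reads R          -> 100
b_read = db["R"]        # B reads R before A writes -> 100
db["R"] = a_read + 10   # A writes 110
db["R"] = b_read + 20   # B writes 120, based on the stale read

print(db["R"])          # 120: A's +10 is lost; serial execution gives 130
```

Either serial order (A then B, or B then A) would have produced 130, so this interleaving is equivalent to no serial execution and is therefore not serializable.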

To determine whether a log is serializable, we construct its
serialization graph. This graph is constructed as follows: The
transactions {T1, T2, ...,Tn} are nodes in the graph. There is a
directed edge from Ti to Tj if and only if, for some item x, one of the
following holds (where p < q means operation p precedes q in the log):
ri[x] < wj[x],
wi[x] < rj[x], or
wi[x] < wj[x].
The Serializability Theorem states that: A log L is serializable if
and only if SG(L) (the serialization graph) is acyclic.
Thus, given a transaction log, we can construct its serialization
graph and determine whether the log is serializable.
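
The procedure above can be sketched as code. This is a simplified illustration (the log format of (transaction, operation, item) triples is an assumption): build an edge Ti -> Tj for every pair of conflicting operations where Ti's comes first, then test the graph for cycles with a depth-first search.

```python
# Sketch of the serializability test: two operations conflict when they
# come from different transactions, touch the same item, and at least
# one is a write. An edge Ti -> Tj means Ti's conflicting op came first.
def serialization_graph(log):
    edges = set()
    for i, (ti, op_i, x) in enumerate(log):
        for tj, op_j, y in log[i + 1:]:
            if ti != tj and x == y and (op_i == "w" or op_j == "w"):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    visited, on_stack = set(), set()

    def dfs(u):
        visited.add(u)
        on_stack.add(u)
        for v in graph.get(u, ()):
            if v in on_stack or (v not in visited and dfs(v)):
                return True
        on_stack.discard(u)
        return False

    return any(dfs(u) for u in graph if u not in visited)

# The lost-update interleaving: r1[R] < r2[R] < w1[R] < w2[R]
log = [("T1", "r", "R"), ("T2", "r", "R"), ("T1", "w", "R"), ("T2", "w", "R")]
g = serialization_graph(log)
print("serializable" if not has_cycle(g) else "not serializable")
# -> not serializable: r1[R] < w2[R] gives T1 -> T2, while
#    r2[R] < w1[R] gives T2 -> T1, forming a cycle.
```

Running the same check on a serial log (all of T1's operations before T2's) yields only T1 -> T2 edges and no cycle, matching the Serializability Theorem.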


---
Putting the two together: because MVRC gives each reader a snapshot (a
read-consistent version of the data as of a point in time), readers
never see uncommitted changes and writers never overwrite data out from
under a reader, so updates made by concurrently writing users are not
lost.
