Q: Math, 3D Rotation around Arbitrary Axis ( Answered 5 out of 5 stars,   1 Comment )
Subject: Math, 3D Rotation around Arbitrary Axis
Category: Science > Math
Asked by: asdf123-ga
List Price: $15.00
Posted: 15 Jun 2004 10:22 PDT
Expires: 15 Jul 2004 10:22 PDT
Question ID: 361441
I've read up a lot on this and whenever I think I have found an
answer it's not quite right. I think I'm just not getting it, so
what I'm looking for is the easiest, most clearly explained answer,
assuming no knowledge of the subject.

This is the problem: I have an arbitrary axis in 3 space. It
passes through the origin. The axis is represented by a unit 
vector ai+bj+ck. I want to rotate another vector (that also passes
through the origin) theta degrees about that arbitrary axis.
I would prefer a solution that could easily be implemented using 
matrices. Do not assume that either vector is normal to the other
or lies on the x, y, or z axis. Either the right- or left-hand rule alone
is sufficient, but covering both right AND left would be preferred - this is
the LEAST important part.

The MOST important aspect of the solution is that one simple
example should be done from start to finish to ensure that it is correct
and that I will understand how to do it.
Foremost, I am looking for an easy mathematical answer, but
I am also looking for a brief and simple explanation of it.
Thank You Very Much!
Subject: Re: Math, 3D Rotation around Arbitrary Axis
Answered By: mathtalk-ga on 16 Jun 2004 20:01 PDT
Rated:5 out of 5 stars
Hi, asdf123-ga:

As you have no doubt already concluded, rotation around the axis
passing through the origin and a point (a,b,c) on the unit sphere in
three dimensions is a linear transformation, and hence can be
represented by matrix multiplication.  We will give a very slick
method for determining this matrix, but to appreciate the compactness
of the formula it will be wise to start with a few remarks.

Rotations in three dimensions are rather special linear
transformations, not least because they preserve the lengths of
vectors and also (when two vectors are rotated) the angles between the
vectors.  Such transformations are called "orthogonal" and they are
represented by orthogonal matrices:

  M M' = I

where we conveniently denote the transpose by '.  In other words the
transpose of an orthogonal matrix is its inverse.

Consider the data which is needed to define the transformation. 
You've already given notation for the axis of rotation, ai + bj + ck,
conveniently assumed to be a unit vector.  The only other datum is the
angle of rotation, which for lack of a more natural character I will
denote by r (for rotation?) and which we will assume to be given in radians.

Now the rotations are actually a bit special even among orthogonal
transformations, and in fact they are also called special orthogonal
transformations (or matrices) in virtue of their property of being
"orientation preserving".  Compare them with reflections, which are
also length and angle preserving, and you will find that the geometric
characteristic of preserving orientation (or "handedness" if you
prefer) has a numerical counterpart in the determinant of the matrix. 
A rotation's matrix has determinant 1, while a reflection's matrix has
determinant -1.  It turns out that the product (or composition) of two
rotations is again a rotation, which agrees with the fact that the
determinant of a product is the product of the determinants (or 1 in
the case of a rotation).

Now we can describe a step by step approach that one might follow to
construct the desired matrix (before we shortcut the whole process and
jump to the Answer!).  Consider first a step in which we rotate the
unit vector:

u = ai + bj + ck

so that it coincides with one of the "standard" unit vectors, perhaps
k (the positive z-axis).  Now we know how to rotate around the z-axis;
it's a matter of doing the usual 2x2 transformation on the x,y
coordinates alone:

       cos(r) -sin(r)   0
M  =   sin(r)  cos(r)   0
         0       0      1

Finally we need to "undo" that initial rotation that took u to k,
which is easy because the inverse of that transformation is (we
recall) represented by the matrix transpose.  In other words, if
matrix R represents a rotation taking u to k, then R' takes k to u,
and we can write out the composition of transformations like this:

  R' M R

It is easily verified that this product of matrices, when multiplied
times u, gives u back again:

  R' M R u = R' M k = R' k = u

Therefore this is indeed rotation about the axis defined by u.

One advantage of this expression is that it cleanly separates out the
dependence of M on the angle r from the dependence of R and R' on the
"axis" vector u.  However if we have to carry out the computations in
detail, we will obviously have a lot of matrix multiplication to do.

So, to the shortcut.  It turns out when all the dust settles that the
multiplication among rotations is isomorphic to multiplication of unit
quaternions.  Quaternions, in case you've not seen them before, are a
kind of four-dimensional generalization of complex numbers.  They were
"invented" by William Hamilton in 1843:

[Sir William Rowan Hamilton]

and today's 3D graphics programmers are greatly in his debt.

Each unit quaternion q = q0 + q1*i + q2*j + q3*k then defines a rotation matrix:

     (q0^2 + q1^2 - q2^2 - q3^2)      2(q1q2 - q0q3)               2(q1q3 + q0q2)

Q  =      2(q2q1 + q0q3)         (q0^2 - q1^2 + q2^2 - q3^2)       2(q2q3 - q0q1)

          2(q3q1 - q0q2)              2(q3q2 + q0q1)          (q0^2 - q1^2 - q2^2 + q3^2)

To verify that Q is an orthogonal matrix, ie. that Q Q' = I, means in
essence that the rows of Q form an orthonormal basis.  So, for
example, the first row should have length 1:

(q0^2 + q1^2 - q2^2 - q3^2)^2 + 4(q1q2 - q0q3)^2 + 4(q1q3 + q0q2)^2

  = (q0^2 + q1^2 - q2^2 - q3^2)^2 + 4(q1q2)^2 + 4(q0q3)^2 + 4(q1q3)^2 + 4(q0q2)^2
  = (q0^2 + q1^2 + q2^2 + q3^2)^2
  =  1

and the first two rows should have dot product zero:

   [ (q0^2 + q1^2 - q2^2 - q3^2), 2(q1q2 - q0q3), 2(q1q3 + q0q2) ]
   * [ 2(q2q1 + q0q3), (q0^2 - q1^2 + q2^2 - q3^2), 2(q2q3 - q0q1) ]
 = 2(q0^2 + q1^2 - q2^2 - q3^2)(q2q1 + q0q3)
 + 2(q1q2 - q0q3)(q0^2 - q1^2 + q2^2 - q3^2)
 + 4(q1q3 + q0q2)(q2q3 - q0q1)

 = 4(q0^2 q1q2 + q1^2 q0q3 - q2^2 q0q3 - q3^2 q2q1)
 + 4(q3^2 q1q2 - q1^2 q0q3 + q2^2 q0q3 - q0^2 q2q1)
 =  0

It can also be shown in general that det(Q) = 1, and thus that Q is
really a rotation.

But around what axis is Q the rotation?  And by what angle?  Well,
given angle r and unit vector:

u = ai + bj + ck

as before, the corresponding quaternion is:

q = cos(r/2) + sin(r/2) * u

  = cos(r/2) + sin(r/2) ai + sin(r/2) bj + sin(r/2) ck

Thus with:

q0 = cos(r/2), q1 = sin(r/2) a, q2 = sin(r/2) b, q3 = sin(r/2) c,

we are able to get the desired property that multiplication by Q "fixes" u:

Q u = u

Rather than chug through the long-winded algebra, let's do a simple example.

Let u = 0i + 0.6j + 0.8k be our unit vector and r = pi be our angle of rotation.

Then the quaternion is:

q = cos(pi/2) + sin(pi/2) * u

  = 0 + 0i + 0.6j + 0.8k

and the rotation matrix:

        -1     0     0

Q =      0  -0.28  0.96

         0   0.96  0.28

In this concrete case it is easy to verify that Q Q' = I and det(Q) = 1.

Also we compute that:

Q u = [ 0, -0.28*0.6 + 0.96*0.8, 0.96*0.6 + 0.28*0.8 ]'

    = [ 0, 0.6, 0.8 ]'
    =  u

ie. the unit vector u defines the axis of rotation because it is "fixed" by Q.

Finally we illustrate that the angle of rotation is pi (or 180
degrees) by considering how Q acts on the unit vector in the direction
of the positive x-axis, which is perpendicular to u:

i + 0j + 0k,  or as a vector, [ 1, 0, 0 ]'

Then Q [ 1, 0, 0 ]' = [-1, 0, 0 ]' which is the rotation of [ 1, 0, 0
]' through angle pi about u.
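
This worked example is easy to reproduce numerically. Here is a short Python sketch (plain lists, no libraries; the helper name matvec is mine) that builds Q from the quaternion components and applies it to both vectors:

```python
import math

r = math.pi
a, b, c = 0.0, 0.6, 0.8          # unit axis from the example above

q0 = math.cos(r / 2)             # ~0 for r = pi
s = math.sin(r / 2)
q1, q2, q3 = s * a, s * b, s * c

# The rotation matrix built from the quaternion (q0, q1, q2, q3):
Q = [
    [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
    [2*(q2*q1 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
    [2*(q3*q1 - q0*q2),             2*(q3*q2 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
]

def matvec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a column vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

print(matvec(Q, [a, b, c]))   # the axis is fixed: ~[0, 0.6, 0.8]
print(matvec(Q, [1, 0, 0]))   # the perpendicular vector flips: ~[-1, 0, 0]
```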

As a reference for this representation of rotations by quaternions and
some additional methods of representation (and what they are good
for), see the details here:

[Representing 3D rotations]


Given angle r in radians and unit vector u = ai + bj + ck or [a,b,c]', define:

q0 = cos(r/2),  q1 = sin(r/2) a,  q2 = sin(r/2) b,  q3 = sin(r/2) c

and construct from these values the rotation matrix:

     (q0^2 + q1^2 - q2^2 - q3^2)      2(q1q2 - q0q3)               2(q1q3 + q0q2)

Q  =      2(q2q1 + q0q3)         (q0^2 - q1^2 + q2^2 - q3^2)       2(q2q3 - q0q1)

          2(q3q1 - q0q2)              2(q3q2 + q0q1)          (q0^2 - q1^2 - q2^2 + q3^2)

Multiplication by Q then effects the desired rotation, and in particular:

Q u = u
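
The whole recipe above fits in a few lines of code. Here is a Python sketch (the function names rotation_matrix and det3 are mine, not from any library) that builds Q and spot-checks the claimed properties, Q Q' = I, det(Q) = 1, and Q u = u, for a non-trivial axis:

```python
import math

def rotation_matrix(r, a, b, c):
    """Rotation by angle r (radians) about the unit axis (a, b, c)."""
    q0 = math.cos(r / 2)
    s = math.sin(r / 2)
    q1, q2, q3 = s * a, s * b, s * c
    return [
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q2*q1 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q3*q1 - q0*q2),             2*(q3*q2 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ]

def det3(M):
    """Determinant of a 3x3 matrix; should be 1 for a rotation."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

# A non-trivial unit axis and a 1-radian angle:
a, b, c = 1/3, 2/3, 2/3
Q = rotation_matrix(1.0, a, b, c)

# Q Q' (rows should be orthonormal) and Q applied to the axis:
QQt = [[sum(Q[i][k] * Q[j][k] for k in range(3)) for j in range(3)]
       for i in range(3)]
Qu = [sum(Q[i][j] * [a, b, c][j] for j in range(3)) for i in range(3)]
```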

regards, mathtalk

Clarification of Answer by mathtalk-ga on 16 Jun 2004 20:05 PDT
I tried to be "fancy" and use superscript 2's in my post, to denote
squares where appropriate, but somehow they look like backward
superscript 2's on my computer.  I hope the meaning is at least as
clear as if I'd done my usual approach, which is "^2" for "to the
second power".

regards, mathtalk-ga

Request for Answer Clarification by asdf123-ga on 16 Jun 2004 20:36 PDT
Mathtalk, thank you so much for your answer!
And just in time!
I am using this for computer graphics, as you had guessed.

Two quick things.
First, to clarify...
So in the Q matrix, a, b, c correspond to the axis I'm
rotating around, correct?
So I multiply Q by the vector I want rotated, correct?
Do you need to use unit vectors?

And secondly, I have found the following matrix
in almost every search for this answer in the past.
Strangely, it works sometimes but doesn't other times
(I think maybe the two vectors need to be orthogonal?).
If you could explain how to use this matrix for these
purposes, I'd gladly give you a $5 tip.

Once again, great job!
And thank you for explaining quaternions (my calculus
professor didn't even know what they were!)

Clarification of Answer by mathtalk-ga on 17 Jun 2004 07:56 PDT
Hi, asdf123-ga:

I'll need to study that Web page you linked carefully to get an idea
about why it might fail.  I understand the idea behind it, but it'll
take more study to know what the pitfalls might be.

But, yes, multiplying by 3x3 matrix Q is what rotates a point [x,y,z]'
around the axis defined by unit vector [a,b,c]' through angle r in
radians.  While there is no requirement on [x,y,z]', we do need
[a,b,c]' to have unit length, and by our construction the quaternion

 q = [ cos(r/2), sin(r/2)a, sin(r/2)b, sin(r/2)c ]

must then also have unit length.  (Here I'm using the transpose ' to
indicate a column vector [x,y,z]' rather than a row vector, so that
multiplication of the vector on the left by Q is defined.  However,
since Q' is the inverse of Q, one can interpret multiplying row
vectors on the right by Q as the rotation in the opposite direction,
e.g. through angle -r.)

The quaternion representation is especially nice when a sequence of
rotations have to be performed.  The non-commutative aspect of
quaternion multiplication corresponds to the fact that order makes a
difference in doing 3D rotations around a sequence of arbitrary axes. 
One gains precision/avoids round-off errors by multiplying the
quaternions together, using quaternion arithmetic, and then applying
the combined rotation as matrix Q from that product.
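
To make that last point concrete, here is a minimal Python sketch of quaternion multiplication (the Hamilton product; the helper names qmul and axis_angle_quat are mine). Composing two rotations of pi/4 about the same axis yields the quaternion for a single rotation of pi/2:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions p = (p0, p1, p2, p3) and q."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def axis_angle_quat(r, a, b, c):
    """Unit quaternion q = cos(r/2) + sin(r/2)(ai + bj + ck)."""
    s = math.sin(r / 2)
    return (math.cos(r / 2), s * a, s * b, s * c)

# Two pi/4 rotations about the z-axis compose into one pi/2 rotation:
p = axis_angle_quat(math.pi / 4, 0, 0, 1)
combined = qmul(p, p)
expected = axis_angle_quat(math.pi / 2, 0, 0, 1)
```

The combined quaternion can then be turned into a single matrix Q as described above, so only one matrix multiplication per point is needed.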

regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 17 Jun 2004 16:47 PDT
Hi, asdf123-ga:

Note that "Confuted" has a later section of that Tutorial on 3D
Rotations which covers the quaternions approach:

[Rotations in Three Dimensions Part Five: Quaternions]

He shows a way to write Q in slightly more compact form, taking
advantage of the fact that q has unit length to simplify the diagonal
entries of Q.

regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 19 Jun 2004 10:47 PDT
Hi, asdf123-ga:

I've reviewed the discussion by Confuted at the site you asked about. 
He/she refers to a problem of "gimbal lock" variously as caused by 1)
programming errors, or 2) an unavoidable loss of orthogonality
(perpendicularity) due to round-off in the numerical computations.

I'm dubious of this.  While no doubt the loss of orthogonality is a
real phenomenon of matrix/vector manipulations, it is essentially
negligible when only a single matrix multiplication (by an orthogonal
matrix) is done.  Yes, a series of such multiplications will tend to
accumulate errors, and under this circumstance for numerical
algorithms one often "reorthogonalizes" or (in the particular case
here) uses the quaternion approach to combine a sequence of rotations
to the point of doing a single matrix multiplication.

Also at a couple of points Confuted promises that more will be said
about (presumably) the programming error part of the story, but as far
as I could see the discussion never got to the point.

So rather than dwell on stuff that might not work, let me propose an
approach of breaking down the calculation to a level that relies only
on the angle r of rotation and the unit vector [a b c] in 3D that
defines the axis of rotation.  For a single rotation this will allow
you to verify the orthogonality of the matrix Q and clearly separate
out the dependence on angle r from the dependence on the axis of
rotation.

The equivalent formula for Q, using r and [a b c] directly, can almost
be written in a single line, though we need to define an antisymmetric
matrix J:

                 0    -c     b
J = J(a,b,c) =   c     0    -a
                -b     a     0

Q = cos(r) I + sin(r) J(a,b,c) + (1 - cos(r)) [a b c]'[a b c]

Here the final term involves a symmetric rank-one matrix obtained by
multiplying the column [a b c]' by the row [a b c].  For the sake of
convenience I'll assign a "variable name" to this:

K = [a b c]'[a b c]

Now the first thing we want to observe is that when r = 0, Q = I. 
This means that whenever the angle of rotation is zero, multiplication
by Q is the identity operation, a good thing.
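
The formula above translates almost line for line into code. A Python sketch (the function name rodrigues is mine, after the classical name for this construction, Rodrigues' rotation formula):

```python
import math

def rodrigues(r, a, b, c):
    """Q = cos(r) I + sin(r) J(a,b,c) + (1 - cos(r)) K, per the formula above."""
    cr, sr = math.cos(r), math.sin(r)
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    J = [[0, -c, b], [c, 0, -a], [-b, a, 0]]                 # antisymmetric
    K = [[a*a, a*b, a*c], [b*a, b*b, b*c], [c*a, c*b, c*c]]  # [a b c]'[a b c]
    return [[cr*I[i][j] + sr*J[i][j] + (1 - cr)*K[i][j] for j in range(3)]
            for i in range(3)]

# Sanity checks: r = 0 gives the identity, and the earlier example
# (r = pi about [0, 3/5, 4/5]) reproduces the matrix -I + 2K.
Q0 = rodrigues(0.0, 1/3, 2/3, 2/3)
Qpi = rodrigues(math.pi, 0.0, 0.6, 0.8)
```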

Now let's compute Q Q', which will show Q is orthogonal if the result
turns out to be I.  Note that because I,K are symmetric and J is
antisymmetric, the only difference in Q and Q' is the sign of the term
involving J:

Q  = cos(r) I + sin(r) J + (1 - cos(r)) K

Q' = cos(r) I - sin(r) J + (1 - cos(r)) K

Because J [a b c]' is zero (as is [a b c] J), J and K commute (with
respect to matrix multiplication), so we can form the product Q Q'
using the "difference of two squares" formulation:

QQ' = (cos(r) I + sin(r) J + (1-cos(r)) K)
    * (cos(r) I - sin(r) J + (1-cos(r)) K)

    = (cos(r)I + (1-cos(r)) K)^2 - sin^2(r) J^2

    = cos^2(r)I + 2cos(r)(1-cos(r)) K + (1-cos(r))^2 K^2 - sin^2(r) J^2

A little work allows us to simplify what the squares of J, K are:

K^2 = [a b c]'[a b c] [a b c]'[a b c] = [a b c]'[a b c] = K

because [a b c] [a b c]' = 1 (unit length).

        a^2-1    ab      ac
J^2 =    ab     b^2-1    bc    = K - I
         ac      bc     c^2-1

And now the dust settles:

Q Q' = cos^2(r) I
     + (2cos(r) - 2cos^2(r) + 1 - 2cos(r) + cos^2(r)) K
     - sin^2(r) (K - I)

     = cos^2(r)I + (1 - cos^2(r)) K - sin^2(r) K + sin^2(r) I
     = (cos^2(r) + sin^2(r)) I
     = I

So Q is orthogonal, and for angle r = 0 we know Q = I.  Then by
continuity the determinant of Q must be 1 for all angles r (and any
axis), because as a continuous function, det(Q) cannot very well jump
from 1 to -1.  Thus Q is a rotation, and we can verify that it is
rotation around the axis [a b c] by examining the effect of
multiplying Q times [a b c]':

Q [a b c]' = (cos(r) I + sin(r) J + (1 - cos(r)) K) [a b c]'

As already noted:

J [a b c]' = [0 0 0]'

so:

Q [a b c]' = (cos(r) I + (1 - cos(r)) K) [a b c]'

and because:

K [a b c]' = [a b c]'[a b c] [a b c]' = [a b c]'

once again, due to unit length of [a b c], we have:

Q [a b c]' = (cos(r) + (1 - cos(r))) [a b c]'

           = [a b c]'

Thus Q is a rotation which "fixes" [a b c] (leaves the vector
unchanged), so this must be the axis of rotation for Q.

Note also that the direction of rotation is controlled by the sign of
r, equivalently the choice of J versus J' (as Q' is also the rotation
around the same axis by an equal angle in the opposing direction).

So there you have a simpler formulation, avoiding any mention of
quaternions.  And the simpler form of Q shows us a few opportunities
for economizing on the matrix arithmetic.  That is, to compute Q [x y
z]' we need these terms:

cos(r) I [x y z]'  (a scalar multiple)

sin(r) J [x y z]'  (because of J's zero diagonal,
                    only six multiplies & three adds)

(1 - cos(r)) K [x y z]'  (needs inner product:
                           [a b c][x y z]'
                          but otherwise becomes simply
                          a scalar multiple)

Now the inner product takes three multiplications, so combining that
with the six multiplications needed for J, we don't save anything over
doing Q [x y z]' except possibly in matrix "assembly".  If you have a
bunch of points to rotate, you'll probably want to assemble Q and
apply it by matrix multiplication to all the points, but if you had a
single rotation to perform on a single point, then the above might
allow for some extra care in precision of operations.
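
Those three terms can be applied directly to one point without assembling Q. A Python sketch (the function name rotate_point is mine):

```python
import math

def rotate_point(r, u, v):
    """Rotate v by angle r about the unit axis u, term by term:
    cos(r) v + sin(r) (u x v) + (1 - cos(r)) (u . v) u."""
    a, b, c = u
    x, y, z = v
    dot = a*x + b*y + c*z                       # [a b c][x y z]'
    cross = (b*z - c*y, c*x - a*z, a*y - b*x)   # J [x y z]', i.e. u x v
    cr, sr = math.cos(r), math.sin(r)
    return tuple(cr*vi + sr*ci + (1 - cr)*dot*ui
                 for vi, ci, ui in zip(v, cross, u))

# The example from before: rotating (1, 0, 0) by pi about (0, 3/5, 4/5)
# gives (-1, 0, 0).
p = rotate_point(math.pi, (0.0, 0.6, 0.8), (1.0, 0.0, 0.0))
```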

Finally let's look back at the example we did before.

Taking angle r = pi  and axis [0 3/5 4/5], let's rotate:

Q [1 0 0]' = (cos(pi) I + sin(pi) J + (1 - cos(pi)) K) [1 0 0]'

           = (-I  + 2K) [1 0 0]'

           = -[1 0 0]' + [0 0 0]'

           = [-1 0 0]'

where K [1 0 0]' = [0 0 0]' because [0 3/5 4/5] [1 0 0]' = 0.

regards, mathtalk-ga
asdf123-ga rated this answer:5 out of 5 stars
Couldn't be more pleased with the way he gave his answer. Great job!

Subject: Re: Math, 3D Rotation around Arbitrary Axis
From: saem_aero-ga on 16 Jun 2004 11:57 PDT
Here is ONE way of doing it.  But there are many that I'm aware of.  I
hope you can decipher my notation.  This is not the elegant matrix
way. :)

Xvec = vector x
xvec1 = i component of xvec
xvec2 = j component of xvec
xvec3 = k component of xvec
magX = magnitude of xvec
theta = angle you want to rotate (don't confuse w/ phi)
phi = dummy angle
case doesn't matter
* = multiply
/ = divide

Note: watch out for signs from cos and sin; this might not work for
theta greater than 90.  :)  I just did this on a post-it note...

Avec = vector which you are rotating about
Bvec = vector to point which you are rotating.
MagA = sqrt(Avec1^2+Avec2^2+Avec3^2)
MagB = sqrt(Bvec1^2+Bvec2^2+Bvec3^2)

Avec dot Bvec = ABCos(phi) = avec1*bvec1+avec2*bvec2+avec3*bvec3
solve for phi

cos(phi) = R/magB 
solve for R
sin(phi) = z/magB
solve for z

Qvec = Svec-Bvec

magq = sqrt(qvec1^2+qvec2^2+qvec3^2)

now 3 equations and 3 unknowns

QQcos(theta) = qvec1*mvec1+qvec2*mvec2+qvec3*mvec3

qqsin(theta) = sqrt((qvec2*mvec3-qvec3*mvec2)^2-(qvec1*mvec3-qvec3*mvec1)^2+

magnitude of (svec + mvec) = magB
