Hi,
Before approaching the problem, let's set up some notational
conveniences:
P=(x_1, x_2, .. x_n), an n-dimensional point on the surface of the
hypercube. Note that for a point to be on the surface of the hypercube,
at least one of the dimensions x_1 .. x_n must have the value .5 or
-.5, and the others must satisfy -.5 <= x_i <= .5.
N={1, 2, .. n}, the set of integers from 1 to n.
V={v_1, v_2, .. v_n}, the set of n "candidate vectors" which we hope
in some combination will move our point from the surface to the
interior.
D={d_1, d_2, .. d_n}, the set of integer coefficients which will
multiply each corresponding vector, when translating the point P.
Given a hypercube of dimension n of side length 1 centered about the
origin, and given a set of "candidate vectors" V, it is our task to
determine whether there exists a set D such that Sum(d_i*v_i), i=1..n
will translate point P to the interior of the hypercube. Specifically
we wish to know for which sets of candidate vectors V such a set D
exists.
Condition 1: It is a necessary condition that the set of vectors V be
linearly independent.
Proof: Assume (for purposes of contradiction) that a linearly
dependent set of vectors can do the task. A counterexample suffices to
show that this is not so: in the two-dimensional case, consider a
point P at the coordinates (0, .5). Given the set of 2 vectors
V={(.5, 0), (.25, 0)}, it is clear that any linear combination of
scalar multiples of these vectors will move point P only horizontally.
As P is on the "top" edge of the square, it can never move into the
interior through horizontal translation alone. Therefore, a linearly
dependent set of vectors is insufficient to translate a point to the
interior.
Computing analysis: To decide whether a set of n vectors is linearly
independent in n dimensions, we create an n by n matrix A with the ith
row of A being the ith vector in V. By either calculating the
determinant of A or by putting A into row echelon form, we can decide
whether A represents a set of linearly independent vectors. (If any
rows of A, once put into echelon form, consist entirely of zeros, the
system of vectors is not linearly independent.) According to
reference (1), putting a matrix into row-echelon form is of order
n^3. Several other sources indicate that echelon-forming is a
polynomial operation but do not give estimates of the degree.
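The elimination step described here can be sketched as follows. This is my own minimal implementation (not code from the cited lecture notes), using partial pivoting; the O(n^3) cost comes from the triple loop.

```python
def linearly_independent(vectors, eps=1e-12):
    """Return True if the n x n list of row vectors has full rank.

    Gaussian elimination with partial pivoting, O(n^3) as stated above.
    """
    a = [row[:] for row in vectors]  # work on a copy
    n = len(a)
    for col in range(n):
        # Choose the largest remaining entry in this column as the pivot.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < eps:
            return False  # no usable pivot: the rows are dependent
        a[col], a[pivot] = a[pivot], a[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return True

print(linearly_independent([[0.5, 0.0], [0.25, 0.0]]))  # False (the example above)
print(linearly_independent([[0.5, 0.0], [0.0, 0.25]]))  # True
```

The first call is exactly the two-vector example from the proof of condition 1: both vectors are horizontal, so the matrix is singular.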
Condition 2: No vector in V may have any dimension greater than .5 or
less than -.5.
I am having a harder time proving this condition.
Proof idea: Let P be at any coordinates such that all but one of its
coordinates is 0. The remaining coordinate is .5 (see the note about P
at the beginning of this message). This is the most "difficult" point
to move to the interior of the hypercube: the points at the center of
each face of the hypercube will be the hardest ones to move to the
interior. While, say, a point at a corner can move nearly 1 unit in
each direction, the points at the center of the faces are limited to
.5 unit movement in all but one dimension (see below).
Without loss of generality assume that in the first dimension the
coordinate's value is .5; i.e., P=(.5, 0, 0, .. 0). Note that while P's
final displacement can be -1 in the x_1 direction, it can move no more
than ±.5 in any other orthogonal direction. Note that this statement
generalizes to any P at the center of any face of the hypercube, with
the appropriate dimension in place of x_1. Therefore the set of
vectors we use must be able to move any of these hypothetical
face-centered P points by less than ±.5 in all directions. (In other
words, the freedom of the vectors to move a particular face-centered
point one full unit in some particular direction is negated by the
necessity that another face-centered point may move no more than .5 in
that direction.) We could say that the vectors need the "finesse" to
move a point (any point) no more than .5 in any direction. It is our
task to decide what combination of vectors can achieve that.
Lemma:
I shall prove below that given n linearly independent n-dimensional
vectors whose components are all ±1 or 0, I can achieve a net
translation of exactly 1 in any direction (or 1 in each of any
combination of directions) I choose.
Arrange the vectors into a matrix so that each row of the matrix
represents a vector. Since the vectors are linearly independent, this
matrix is nonsingular and through Gauss-Jordan elimination it can be
reduced to the identity matrix. Note that since all entries in this
matrix are ±1 or 0, we will not have to multiply any row by a
noninteger (in fact, we will not have to multiply any row by anything
but 1 or -1) in the process of our reduction. By keeping track of the
elementary row operations that lead to reduced echelon form, we can
come up with vector sums of the given vectors that yield the n unit
vectors. By combining the "new" unit vectors (perhaps multiplied by
-1) we can move our point exactly 1 unit in any dimension or
dimensions that we wish.
By "dividing everything" by 2, we see that the given proof works
for vectors whose components are all ±.5 or 0. Therefore, we will be
able to move .5 in any direction or directions we choose. So when the
components of the vectors are each less than .5 in absolute value, we
can move the point in any direction we choose by an amount less than
.5 (although this proof does not provide a mechanism for calculating
the proper vector sum in this case).
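The lemma can be spot-checked numerically on a concrete input: writing the desired unit vector e_j as sum(d_i * v_i) means solving V^T d = e_j, so we can solve exactly over the rationals and test whether the coefficients d come out integral. This is my own illustration (the example matrix is mine, and checking one input is of course not a proof of the lemma):

```python
from fractions import Fraction

def solve_exact(a, b):
    """Solve the square system a*x = b by Gauss-Jordan over the rationals."""
    n = len(a)
    m = [[Fraction(a[r][c]) for c in range(n)] + [Fraction(b[r])]
         for r in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        inv = m[col][col]
        m[col] = [x / inv for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[r][n] for r in range(n)]

V = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]   # rows: candidate vectors, entries in {0, 1}
Vt = [[V[r][c] for r in range(3)] for c in range(3)]
for j in range(3):
    e = [1 if i == j else 0 for i in range(3)]
    d = solve_exact(Vt, e)               # coefficients with sum(d_i * V_i) = e_j
    assert all(x.denominator == 1 for x in d)  # integral, as the lemma claims
    print("e_%d needs d = %s" % (j, [int(x) for x in d]))
```

For this particular V the coefficients are all integers, in line with the lemma; the exact-rational solve makes the integrality test reliable where floating point would not be.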
By the lemma we have shown that vectors whose components are each less
than .5 in absolute value can move any point P to the interior of the
hypercube.
It is my assertion (although I cannot prove it) that if any of the
vectors has any component greater than or equal to .5 in absolute
value, there exists a point P that cannot be moved into the interior
of the hypercube: any sum of integer multiples of the vectors will
"overshoot" the interior of the hypercube.
Computing analysis: Testing an inequality on each entry of n
n-dimensional vectors takes n*n = n^2 steps.
I believe these are the only two conditions on V. Condition 1
guarantees that P can be moved in any direction necessary. Condition 2
guarantees that the vectors in V don't "overshoot" the interior of the
hypercube.
Total computing analysis:
n^3 for checking linear independence of your vectors.
n^2 for checking the entries of your vectors.
n^3 + n^2 < n^4, so the calculation time for checking a candidate set
of vectors is less than n^4.
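Putting the two checks together (condition 2 as an O(n^2) scan of the entries, condition 1 as an O(n^3) elimination) gives the claimed polynomial bound. A minimal self-contained sketch, with function and variable names of my own choosing:

```python
def satisfies_conditions(vectors, eps=1e-12):
    """Check the two proposed conditions on a candidate set V (rows)."""
    n = len(vectors)
    # Condition 2: no component greater than .5 or less than -.5 -- O(n^2).
    if any(x > 0.5 or x < -0.5 for row in vectors for x in row):
        return False
    # Condition 1: linear independence via Gaussian elimination -- O(n^3).
    a = [row[:] for row in vectors]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < eps:
            return False  # dependent rows
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
    return True

print(satisfies_conditions([[0.3, 0.1], [0.1, 0.3]]))  # True: small and independent
print(satisfies_conditions([[0.6, 0.0], [0.0, 0.3]]))  # False: a component exceeds .5
```

The dominant term is the n^3 elimination, consistent with the "less than n^4" total above.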
(1) MAS 223 Complexity and Optimisation in Operational Research, First
Week's Lectures.
http://www.maths.qmw.ac.uk/~wilfrid/coor/mycoorweb3.pdf
PDF page 5.
Also used for this answer:
_Elementary Linear Algebra, Sixth Edition_ by Bernard Kolman. Prentice
Hall (New Jersey), 1996.
Request for Answer Clarification by fouyang-ga on 03 Oct 2002 18:19 PDT
smudgy: I am very impressed by the completeness and thoroughness of
your answer. Unfortunately, I don’t think it is correct.
First, you proved that if the vectors are ±0.5 or 0 in every
dimension, one can move the point by 0.5 in any direction or
directions we choose. This is a beautiful proof. However, I am not
sure that it easily implies that if the vectors are shorter, we have
"finer" movement. In order to move in a desired direction, we need to
balance the vectors so that movements in all other directions are
cancelled out. If one vector is made smaller, such balance would be
upset and we may end up "overshooting". In fact, by reducing a vector
in a particular dimension, it is sometimes possible to make it nearly
linearly dependent with the other vectors, and therefore nearly
violate the first condition. It seems to me that under this condition
it would be impossible to move in fine steps.
Second, you conjectured that when any vector has any dimension that is
greater than or equal to 0.5, the requirement couldn't be met. This
is not true. Consider a 2-D example. Suppose we have one vector
(1,0), and the other (5,1). We can easily move any point one unit
(plus or minus) in the x or y direction. Perhaps I understood you
wrong?
By the way, the statement that the center point is the hardest to move
is not correct, although it probably does not affect the rest of the
proof. Consider a 2-D case, where the two vectors are v_1=(2,0) and
v_2=(1,0.5). Consider the points on the left side (x=-0.5,
-0.5<y<0.5). The center point (-0.5,0) can be moved into the square
by adding v_1-v_2. But any points with y<0 cannot be moved that way.
Again, I appreciate your effort and hope you would try again. Please
let me know through comments how I can help (e.g., contemplate some
conjectures, etc.) Thanks again.

Clarification of Answer by smudgy-ga on 03 Oct 2002 18:33 PDT
Hi,
I think I should mention that I know my answer (as it stands) is not
a complete proof. I feel it is a significant starting point for a
proof, however. As mentioned in the comments, condition 2 could
probably be made looser. In particular, the second half of my idea is
where the weakness in my answer lies. (Though I am not sure whether
the counterexample given in the comment can be generalized to higher
dimensions.)
My original idea (maybe you can do something with this) was that
perhaps the sum of all elements of V had to total less than .5 in
each dimension, and that some combination of multiplying individual
vectors by -1 prior to summing could lead to moving an arbitrary P
into the interior. This relies on the idea that some point on the
surface will need to use each vector in its original form to be moved
into the interior. However, this did not seem intuitively plausible
due to the number of possible permutations of multiplying individual
vectors by -1; instead of this approach I felt it would be better to
give a solution which would, if anything, be overly restrictive,
rather than potentially allowing a violation of the conditions of the
problem.
I do feel quite strongly (from an intuitive point of view) that, once
the conditions on V are nailed down precisely, the elements of D need
only be either 1 or -1 for any P.
I hope my answer, though admittedly incomplete, suffices as a starting
point for a more rigorous and complete proof.
Good luck.

Clarification of Answer by smudgy-ga on 03 Oct 2002 18:54 PDT
Hello again,
With the exception of the unproved assertion, the weakest part of my
current proof is the (shaky!) move from "being able to move a point .5
with vectors of components ±.5" to "being able to move less than .5
if the components are less than ±.5". I will continue to work on
this because I have a peculiar faith in it that hasn't quite been
nailed down. I will work on this part separately and post it if I
arrive at a solution.
I'm having a little trouble following the counterexamples you
provided. In your first counterexample, you seek to defeat the
statement that vectors with at least one dimension greater than or
equal to .5 can't move P to the interior. However, the vectors you
give are integer vectors and would only succeed in (at best) moving P
to another point on the exterior, as far as I can tell. Did you leave
out some decimal points? Am I missing something? Am I misinterpreting
the question? (The interior of the hypercube is to be considered the
"strict" interior, correct?) In your second example, where you
question the assertion that the center point is hardest to move, by
following your suggestion (v_1-v_2) it seems to me that P once again
lands on the edge of the square.
I hope you can clarify your counterexamples for me. Further
counterexamples would be greatly appreciated, especially ones
contradicting my claim that "vectors whose dimensions are each less
than .5 can move any point P to the interior of the hypercube".
Thanks and good luck (again!)

Request for Answer Clarification by fouyang-ga on 04 Oct 2002 06:38 PDT
Hi, smudgy, sorry I cannot talk for long at this moment. I will write
more tonight. However, I'd like to make some points that might help.
First, I am looking for a "sufficient and necessary condition". The
condition that -0.5 <= v_i <= 0.5 for every component is much stronger
than necessary, as it now appears.
Second, I think my counterexample about the above condition (with two
vectors (1,0) and (5,1)) can be extended to higher dimensions. In
fact, if I have a series of vectors
(1,0,0,...), (a,1,0,0,...), (b,c,1,0,0,0,...), etc., where a, b, c ...
can be any value (i.e., the vectors form a lower-triangular matrix
with diagonal elements equal to 1), the condition would be met.
Third, you stated that if v_i with components of ±0.5 and 0 would
work, then reducing a component would also work. I am still not
convinced. However, I do believe that if ALL of v_i's components are
within ±0.5, the condition would be met. But as I said in the second
point above, this is, I believe, too strong a condition (i.e.,
sufficient but not necessary).
About your question on my counterexamples that actually move the
point to another surface instead of the interior: I was indeed
oversimplifying. What I should have done was to replace 1 by 1-a,
etc., where a is a positive number much smaller than 1. Alternatively,
I could change my requirement to allow moving a point to the interior
OR the boundary, except its original position (i.e., excluding the
obvious solution where all d_i are 0). I hope this helps to clarify.
Your statement that in a solution all d_i must be 1, 0, or -1 is
interesting. I will try to give some thought in this direction.
Thanks again.

Clarification of Answer by smudgy-ga on 04 Oct 2002 07:27 PDT
Let me state a couple of things to think about.
1: With n=1, the dimensions (err, dimension) of the vector must be
less than 1, clearly.
2: The problem starts getting interesting at n=2 because there's more
than one vector to deal with. Things start getting weird when we add
the two vectors. Once n=2 is proved it should generalize to any n.
Maybe there will be an easy way to perform the generalization using
induction.
3: If it turns out that each of the dimensions of the vectors should
be less than 1 in absolute value (but not necessarily less than .5), I
am not so sure about my assertion that each element of D is ±1.
I will be thinking about this problem this evening. Incidentally
tonight I will be surrounded by mathematicians. I will ask them for
advice.

Clarification of Answer by smudgy-ga on 04 Oct 2002 08:00 PDT
I believe I have it! At least for the two by two case. I believe it
generalizes nicely.
If we have two linearly independent vectors v_1 and v_2, and we can
only combine them in integer multiples, then a given point can be
moved only to points on a "grid" formed by multiple copies of v_1 and
v_2, kind of like a skewed "graph paper" with one side of each box
being v_1 and the other side being v_2.
If we take our grid and overlay it on our square, placing one of the
junctions of the grid on P, we can see all the points to which P can
be moved.
I believe (though I can't quite formalize it) that one of the points
of the grid must land on the interior of the square as long as the
dimensions of the vectors are less than 1, _no matter where P lies on
the square_.
I will do a few examples in Geometer's Sketchpad to see if I can nail
down the reason why this is (or seems to be) true.
This idea generalizes to n dimensions; we will just have an
n-dimensional grid.
I will try to post a formalized answer later.
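The grid idea can be explored numerically. The sketch below is my own (the coefficient bound K is an arbitrary cutoff, not something derived in the discussion): it enumerates lattice points P + d_1*v_1 + d_2*v_2 for integer d and reports coefficients that land strictly inside the unit square centered at the origin, if any exist within the bound.

```python
from itertools import product

def reaches_interior(P, vectors, K=10):
    """Search integer coefficients d (each in [-K, K], not all zero) such
    that P + sum(d_k * v_k) lies strictly inside the unit square/cube."""
    n = len(P)
    for d in product(range(-K, K + 1), repeat=len(vectors)):
        if all(c == 0 for c in d):
            continue  # the trivial combination leaves P where it is
        q = [P[i] + sum(d[k] * vectors[k][i] for k in range(len(vectors)))
             for i in range(n)]
        if all(-0.5 < x < 0.5 for x in q):
            return d  # the integer coefficients that worked
    return None

P = (0.5, 0.0)  # center of the right face of the square
print(reaches_interior(P, [(0.3, 0.1), (0.1, 0.3)]))   # some working d
print(reaches_interior(P, [(0.5, 0.0), (0.25, 0.0)]))  # None: horizontal vectors only
```

Failure within the bound K is of course not a proof of impossibility in general, but the search makes it easy to probe candidate sets and conjectures on small examples.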

Clarification of Answer by smudgy-ga on 04 Oct 2002 11:25 PDT
Correction: The _sum_ of the vectors in each dimension cannot exceed
absolute value 1.
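As a quick numeric reading of this corrected condition (function name and examples mine):

```python
def sum_condition(vectors):
    """The per-dimension sums of the vectors must not exceed 1 in
    absolute value, per the correction above."""
    n = len(vectors[0])
    return all(abs(sum(v[i] for v in vectors)) <= 1 for i in range(n))

print(sum_condition([(0.3, 0.1), (0.1, 0.3)]))   # True: the sums are 0.4 and 0.4
print(sum_condition([(0.7, 0.0), (0.6, 0.2)]))   # False: the x-sums total 1.3
```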

Clarification of Answer by smudgy-ga on 04 Oct 2002 23:00 PDT
Hello again!
I believe I have the final condition figured out. It is on a somewhat
different tack than my previous attempt. Also, it discards the notion
that D consists only of 1 and -1 (specific examples proved to me that
my earlier hypothesis was false).
Below I will make use of the term "integer linear combination" meaning
a sum of integer multiples (not all zero) of the set of vectors V.
The original condition 1 still holds: V must be linearly independent.
Modified condition 2: There must exist n distinct integer linear
combinations of V that move one of the corner points P (e.g.
(-.5,-.5,..,-.5)) to the interior of the hypercube. This condition is
both sufficient (when taken in combination with condition 1) and
necessary.
Proof of sufficiency:
We will work with a slightly modified version of the original problem:
Consider the question "Under what conditions can a set of vectors move
P to either the interior of the hypercube or to another point on the
edge of the hypercube, under integer linear combination?" This is the
same question as originally posited except that strict inequalities
have been replaced by loose inequalities. I do this simply because it
makes my example easier to follow. I am fairly certain that the change
back to strict inequalities will not damage the validity of the proof.
This is addressed in (*) below. Also, for simplicity, the proof is
rendered in n=2. An explanation of generalization to higher dimensions
is also given (#) below.
Proof concept for visualization: The idea is that we are looking at a
grid of vectors with a unit square (parallel to the axes) overlaid
onto it. If, when the square is placed on the grid with one corner at
an intersection point of the grid, the edge of the square does not
come into contact with at least two other points, there is a position
that we can place the square where it has at most one point of the
grid on its edge. A few examples with neatly-drawn grids that fulfill
the given condition and with grids that don't will show this quite
nicely. Two points are necessary: if we attempt to "slide" the square
around, always keeping one particular point somewhere on the edge of
the square, if the vectors forming the grid fulfill the condition we
will not be able to slide the square into an "invalid" position. But
if the grid does not fulfill the condition we will quite easily find a
position for the square that is invalid.
Theorem:
If point P is placed at a corner of the hypercube (without loss of
generality, at the corner with all negative coordinates) and if (and
only if) there exist n distinct integer linear combinations of V which
move P to either the edge or the interior of the hypercube (i.e.,
there are n distinct integer linear combinations of V where each
dimension of each of the linear combinations is less than or equal to
1), then any point on the perimeter of the hypercube can be moved
either to another point on the edge or to the interior.
Proof in n=2: (Proofs in other dimensions follow similarly but are
harder to visualize. Once again see (#) below)
Sufficiency:
Below please read "integer linear combination" for "combination" or
"linear combination." "P" refers to the specific P with all negative
coordinates mentioned above, and "arbitrary P" will refer to any
arbitrary point at the edge of the square.
(*) We will concentrate on moving arbitrary P to another point on the
edge of the square. To move arbitrary P to the interior of the square,
an appropriate scalar multiple of (1-delta), for delta small, will do
the trick. If in doubt, consider (without loss of generality) that the
square is placed in the first quadrant with two of its edges on the
axes, and such that P is on one of these edges. Assume without loss of
generality that P's image under a linear combination is on one of the
edges that is not coincident with the axes (some combination of
negating vectors will make this possible). Here, if P is moved to
an edge (through linear combinations) by a set of vectors in V, to
find a set of vectors V' that moves P strictly to the interior of the
square, simply take the set of vectors V' = (1-delta)*V. Since P's
image must move towards the origin under this transformation, it must
be in the interior of the square. If P's image was in the interior of
the square, it will stay in the interior.
We consider the two linear combinations that move P to the edge of the
square and show that any arbitrary P is moved to the edge or the
interior.
Case 0: If both combinations move P to the same edge adjacent to P,
the two vectors are not linearly independent. Contradiction.
Case 1: If one of the linear combinations c_1 moves P to an edge
adjacent to P, and the other c_2 moves P to a point on either of the
edges not adjacent to P, then c_1 is parallel to a unit vector and
less than 1 in length. c_2 has one of its dimensions being exactly 1.
Treat as in case 2 below.
Case 2: If both of the combinations move P to the same edge, but
not an edge adjacent to P, then both combinations must have one
dimension of length 1. c_2 - c_1 must be parallel to one of the unit
vectors (and less than 1 in length). In this case, we're done. One of
the combinations will be responsible for moving arbitrary P, say, a
whole unit to the right (and some amount up or down), and c_2 - c_1
will be responsible for undoing any vertical action of the first
combination and moving P onto the edge. Since the second combination
is of less than unit length, this must be possible.
Case 3: If the two combinations move P to distinct edges not
adjacent to P, then one of the combinations has exactly one dimension
equaling 1 and the other combination has its other dimension equaling
exactly 1. The unspecified dimension of each of these two
combinations is less than 1. For any arbitrary P on the edge of the
square, either c_1, c_2, c_1 - c_2, or c_2 - c_1 will move P to
another edge of the square or to the interior (once again we only need
to consider arbitrary P on the left and bottom edges). Assume c_1
moves the original (non-arbitrary) P to the right edge and c_2 moves P
to the top edge.
c_1 will move some points along the left edge to the right edge. Other
arbitrary P on the left edge will be moved to the interior by
c_2 - c_1. c_2 will move some points along the bottom edge to the top
edge. Other points on the bottom edge will be moved to the interior by
c_1 - c_2.
This covers all possibilities of moving points from one edge to
another. To move points strictly to the interior, given P, shift and
rotate the square appropriately so that the edge containing P is
coincident with one of the axes and another edge is coincident with
the other axis. Then perform the scalar multiplication explained in
(*).
(#) Generalization idea: The idea behind generalizing this proof to
higher dimensions is that each of the distinct linear combinations
that moves P from a corner to an edge is responsible for doing so in
one dimension at a time. Therefore, in n dimensions, we will need to
show that for the set of vectors V there are n distinct linear
combinations that move P from corner to edge.
Necessity:
If the condition stated above is not fulfilled, particularly in case
3, it will be impossible to account for all points on either the left
or bottom edge. Notice that at the very instant c_1 ceases moving
points to the right edge, c_2 - c_1 starts moving points to the
interior (with an overlap of a single point). Thus, both c_1 and c_2
are necessary to move points from (for example) the left edge to
either the right edge or the interior. With only one combination, it
is impossible to move some of the points.
Whew! That's it! I am pretty certain it works. The only thing I do not
have is an estimate for the computation time to search for the n
integer linear combinations with dimensions all less than 1. I am not
sure how to approach this search. The search should be relatively
limited: Consider the 2-dimensional case. Our "grid" is composed of
many parallelograms, all with sides that are integer multiples of v_1
and v_2. If our square is placed with one of its corners at a corner
of a parallelogram, there must be a smallest parallelogram that
contains the square. Now perform an analysis of all paths leading away
from that corner and see which ones terminate inside the square. At
each intersection you have 2 choices that do not backtrack. For n
dimensions, at each intersection you have n choices.
If I can come up with a convincing estimate of computing analysis I
will post that in a followup. Meanwhile I hope this new proof answers
the rest of your questions about this problem. As usual if you have
questions, criticism, or potential counterexamples, please post them.
Good luck!

Request for Answer Clarification by fouyang-ga on 05 Oct 2002 05:41 PDT
smudgy, I am sorry but I was, and am, tied up with other things until
this afternoon (Saturday). I will be thinking about your latest
answer, and will respond this afternoon.
Sorry about the delay.

Request for Answer Clarification by fouyang-ga on 05 Oct 2002 09:47 PDT
smudgy,
I thought about the conditions you put forward. Although I need to go
into more detail in your proof, I am currently inclined to believe
that the conditions are correct (at least they are sufficient).
However, it seems to me that the complexity of finding such a set is
still as high as an exhaustive search. It seems you have some idea
about a more efficient search. Meanwhile, I will concentrate on
validating the assertion (I believe a more direct proof is possible).
I will post it as soon as I get anywhere.
Thanks.

Clarification of Answer by smudgy-ga on 05 Oct 2002 13:30 PDT
A quick update to let you know where I'm at (and so you know I haven't
forgotten about you!):
I believe that to search all the possible paths from the corner of the
square along our vector grid to attempt to find points on the interior
of the square takes on the order of k^n operations for some k.
Basically we are looking at a grid with (approximately) k by k by ..
by k intersections and we want to check each intersection to see if it
is less than a unit in each direction from the origin. Clearly this
violates your desire for a search which can be performed in polynomial
time.
However, if somehow we can identify the points that are inside the
square without knowing their precise position on the vector grid, we
can entirely avoid the tedious task of searching for points in the
square by trying to "land" inside the square starting at the origin.
This is the direction I'm trying to work from. Since we don't need
the points' positions on the vector grid to determine whether the
system works, I'm hoping that by avoiding that part of the problem
entirely we can reduce the complexity of the search.
I look forward to hearing any thoughts you have.

Request for Answer Clarification by fouyang-ga on 05 Oct 2002 16:32 PDT
smudgy, perhaps I misunderstood your last post. It seems to me that
one can probably work out a way to limit the search range (e.g., the
integers to try). But that is still an exhaustive search, and you
won't be able to reduce the required steps by orders of magnitude
(probably only by a constant factor). Maybe you meant something
different.
I am hung up on an issue while trying to validate your sufficiency
proof. In the 2-D case, you only consider the cases where the vectors
move the corner to a side, not to an interior point. I guess you are
relying on your previous argument that one can always make a vector
shorter without invalidating the results. I am having trouble
following that. So could you make it more specific in this case?
Specifically, consider the case where the two vectors move the corner
point to the same non-adjacent side. In that case, your proof uses
the fact that the difference of the two vectors is parallel to a
side. Therefore, one can move the point along this side without
worrying about the other dimension. However, if one of the vectors
falls inside the square instead of on the side, the difference won't
be parallel to a side anymore. How is this situation covered?
I spent the whole afternoon trying to fix the proof or find a
counterexample, with no success. Please enlighten me. Thanks.

Request for Answer Clarification by fouyang-ga on 06 Oct 2002 14:05 PDT
smudgy, I spent more time on the proof discussed in the last post,
namely the case where the vectors fall inside the square instead of at
the edge. I still cannot make myself feel comfortable. And I realize
that this assumption (if it is OK at the sides, it will be OK inside)
is also the basis for your generalization into higher dimensions
(because you can only constrain the motion in a chosen dimension if
the vectors are on the surfaces).
Is there a way you can help me to understand this point better? I am
afraid that it might be untrue, and that we will waste effort on
finding the searching algorithm.
Incidentally, I found a proof in the 2-D case that if we have 2
non-collinear vectors within the unit square, the condition is met.
The proof is to form a parallelogram with the two vectors, then draw
unit squares from each vertex of the parallelogram that is inside the
original unit square. One can prove that the union of these unit
squares covers the whole parallelogram.
Unfortunately, it would be difficult to generalize this into higher
dimensions, because this is a purely geometrical proof. This probably
explains why I cannot find a counterexample in the 2-D case, but still
have trouble understanding your proof.
By the way, when you say "distinct vectors", I think you mean vectors
that are linearly independent. Correct?

Clarification of Answer by smudgy-ga on 07 Oct 2002 12:04 PDT
Hi again,
Hmm, your proof seems to work for the 2-D case and I don't see any
reason why it shouldn't work in the n-dimensional case. For instance,
in 3-D, you have three vectors (or linear combinations thereof) that
fit inside a cube and which form a parallelepiped. By appropriately
placing another unit cube at each vertex of the parallelepiped inside
the cube, you should be able to cover the whole thing. While this
geometric proof is hard to visualize in higher dimensions, the
geometry does not actually fail in higher dimensions. Strictly
formalizing the proof might be a bit of a pain, but it is certainly
doable. One would have to come up with conditions for which vertex of
the cube to place at each vertex of the parallelepiped, but that's
really about it.
Anyway, this proof is a lot more elegant than mine, and there's no
need to convince someone of "what happens if you scale," etc. I'm
still convinced that the weak point in my proof can be patched without
a whole lot of work, but your proof is so much "nicer".
The last question is still how to decide whether a candidate set of
vectors works. Any time an exhaustive search is required, the
computation time will be on the order of k^n. If there is a
possibility of deciding whether a set of candidate vectors works, it
has to come through finding the endpoints of the linear combinations
that lie inside the square without actually having to find which
linear combinations produce those endpoints. I feel there is a chance
this is doable, though, and I will continue to work on it.
I will continue to work on this. Sorry for the lack of updates over
the weekend.

Clarification of Answer by smudgy-ga on 08 Oct 2002 07:23 PDT
I have come up with an algorithm which is probably faster than
searching all points within a given range, and which is probably
relatively fast on average, but which still has a worst-case operation
count on the order of 2^n. I will post it later so you can see what
it's like, but I'm pretty convinced (though I don't think I could
prove it) that finding the required points is an exponential-time
problem any way you look at it.

Request for Answer Clarification by fouyang-ga on 08 Oct 2002 16:38 PDT
smudgy, I guess I'll live with it if the algorithm can also find the
vectors inside the hypercube, instead of just proving their existence.
By the way, I am convinced that the condition is correct after
reading your last post. However, I still cannot prove it in the
high-dimensional case. I'll keep thinking about it.
Thanks.

Clarification of Answer by smudgy-ga on 09 Oct 2002 05:58 PDT
fouyang,
I'm glad you're confident about the condition being correct; I believe
that the only problems one should run into with the higher-dimensional
case are visualization ones: there doesn't seem to be anything in
your proof that relies on the inherent two-dimensionality of the
objects being used, so it should generalize perfectly.
Let me give you two algorithms and one potential algorithm for finding
the points. The first two are on the order of factorial time or
greater, but the third one is brand new and I think there may be a
chance that it could work.
The first: this is the same algorithm I originally suggested, in a
little more detail. (Again, we will use the 2-d version of the problem
for visualization.) Find a parallelogram of vectors that contains the
square when the square is placed at an intersection of the grid. The
dimensions of the parallelogram will have to be something like k_1+1
copies of one of the vectors, and k_2+1 copies of the other vector.
These numbers are chosen in such a way that k_1 times the x-dimension
of its vector is greater than 1, and likewise for k_2 and the
y-dimension of its vector. Then there is a point of intersection of
vectors (along one of the sides of the parallelogram) where the square
can be placed so that
it is inside the parallelogram. (Testing this placement, I see that
some regions of the square are not inside the parallelogram, but
there's no possibility of intersection points being inside those
regions). Now since the linear combinations that give rise to each of
the intersection points are known, test all the intersection points of
your parallelogram grid to see whether any of these points lie in the
square. The problem here is that for n dimensions, you have
k_1*k_2*...*k_n points to check, and this is on the order of k^n for
some number k.
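For concreteness, the exhaustive search described above can be sketched as follows (the function name, the example vectors, and the search bound k are my own illustrative choices, not part of the original algorithm):

```python
from itertools import product

def find_interior_combination(vectors, point, k, side=1.0):
    """Brute-force search over integer coefficients d_i in [-k, k].

    vectors: the n candidate vectors v_1..v_n (tuples of length n)
    point:   a point on the surface of the hypercube
    Returns the first coefficient tuple d such that point + Sum(d_i*v_i)
    lies strictly inside the hypercube of the given side length centered
    at the origin, or None. It checks (2k+1)^n combinations, so the
    running time is exponential in n, as noted above.
    """
    n = len(vectors)
    half = side / 2.0
    for d in product(range(-k, k + 1), repeat=n):
        moved = [point[i] + sum(d[j] * vectors[j][i] for j in range(n))
                 for i in range(n)]
        if all(-half < m < half for m in moved):
            return d
    return None

# The pair of horizontal vectors from Condition 1 can never move the
# point (0, .5) off the top edge, so the search correctly returns None:
print(find_interior_combination([(0.5, 0.0), (0.25, 0.0)], (0.0, 0.5), 3))
```

Searching a box of integer coefficients is only a stand-in for the parallelogram-grid bookkeeping described above, but it makes the k^n growth concrete.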
The second algorithm is as follows:
In two dimensions, consider the two vectors of V, call them v and w.
Take note of the dimensions of v and w. If both dimensions of a vector
are less than 1, take note of that vector in a list L. Now consider
the vectors v+w and v-w. Either both of these vectors are longer (in
modulus, i.e., using the Pythagorean theorem) than both v and w, or
one of them is shorter than either v or w. If the former is the case,
we can't make shorter vectors by combining v and w. In the latter case,
call the shorter vector "x" and then discard the longer of the two
vectors v and w, say it's v. Repeat the process above with w and x,
and anytime there is a vector whose length is less than 1 we add it to
the list L. Eventually we will generate the list of all linear
combinations of v and w that are also shorter than v and w, and which
have length in each dimension of less than 1. Do a sign test on all of
these vectors to see if they lie in the first or third quadrant; if
two of them do, then we have our two points. The problems with this
algorithm are as follows: If v and w are shorter than 1 in each
dimension we may potentially miss out on combinations that are longer
than v or w but still shorter than 1 in each dimension. The other
problem is that in higher dimensions, we need to find the ways in
which we can combine three vectors, call them v, w, x. So we need to
consider the lengths of v, w, x, v+w, v-w, v+x, v-x, w+x, w-x, v+w+x,
v+w-x, v-w+x, and v-w-x in order to cover every possibility, up to
multiplication by -1. Since generating the vectors we need to compare
is clearly a combinatorial process, the number of vector combinations
we need to check will be on the order of n! depending on our dimension
n.
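The sign combinations listed above (v+w, v-w, ..., up to multiplication by -1) can be enumerated mechanically; here is a sketch (the helper name and the example vectors are my own):

```python
from itertools import product

def sign_combinations(vectors):
    """Enumerate the combinations s_1*v_1 + ... + s_n*v_n with each
    s_i in {-1, 0, +1}, skipping the all-zero pattern and keeping only
    one representative of each +/- pair (i.e., "up to multiplication
    by -1", as described above). Returns (signs, combination) pairs."""
    n = len(vectors)
    dim = len(vectors[0])
    seen = set()
    out = []
    for signs in product((-1, 0, 1), repeat=n):
        if all(s == 0 for s in signs):
            continue
        if tuple(-s for s in signs) in seen:
            continue  # the negation of this pattern was already emitted
        seen.add(signs)
        combo = tuple(sum(s * v[i] for s, v in zip(signs, vectors))
                      for i in range(dim))
        out.append((signs, combo))
    return out

# For 3 vectors this yields (3^3 - 1)/2 = 13 combinations, matching the
# list v, w, x, v+w, v-w, ..., v-w-x above:
print(len(sign_combinations([(1, 0, 0), (0, 1, 0), (0, 0, 1)])))
```

With unit coefficients the count grows as (3^n - 1)/2; allowing larger integer coefficients makes the blow-up even faster, which is the combinatorial problem noted above.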
This last algorithm is not complete yet, but I have very high hopes
for it.
If our vectors are v_1..v_n, consider the following. We want each
dimension of Sum(d_i*v_i) to be between -1 and 1 for two different
sets D and D*. This is just our second condition stated more precisely. Let
us set up a series of linear inequalities: for example, for the first
dimension of each vector in V (call these first dimensions x_1..x_n)
we want -1 < d_1*x_1 + d_2*x_2 + ... + d_n*x_n < 1. Now we have n of
these linear inequalities. Set these inequalities up into an augmented
matrix of n rows by n+2 columns, with the first and last columns being
-1 and 1 respectively, to represent the two-sided inequality we are working
with. In the center n columns, the coefficients are the dimensions of
the vectors and the variables are the values of d_i that we are
looking for. If we perform Gauss-Jordan elimination on the middle
square submatrix (i.e., the vector matrix) and we can somehow perform
this elimination while simultaneously preserving the inequalities
(that is, not violating any of the rules of combining inequalities),
in the end, we will have a range of possible values for each d_i. If
there exist two distinct combinations of d_i in the integers that do
not violate their respective ranges, then we have our two linear
combinations of V that will translate a corner point to the interior.
The only remaining question is whether, indeed, it is possible to
successfully RREF the vector matrix without somehow breaking the rules
of inequality combination. If this algorithm can be made to work, it
has two distinct advantages: First, it can automatically determine
whether our vectors are linearly independent. If our vectors are not,
we are not able to RREF the vector matrix into the identity matrix.
Secondly, as mentioned originally, Gauss-Jordan elimination takes on
the order of n^3 steps and this satisfies your computability
requirements. However, I have not tested this algorithm and I have
some trepidations when it comes to the matter of linearly combining
inequalities, because I do not have very much experience in this
matter, particularly within the realm of matrix algebra.
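As a concrete sketch, the augmented matrix described above can be assembled like this (using numpy; the function name and the example vectors are illustrative assumptions):

```python
import numpy as np

def inequality_matrix(vectors, lo=-1.0, hi=1.0):
    """Assemble the n x (n+2) augmented matrix described above.

    Row i represents the two-sided inequality for dimension i:
        lo < d_1*(v_1)_i + d_2*(v_2)_i + ... + d_n*(v_n)_i < hi
    so the first and last columns hold the bounds, and the middle n
    columns hold the i-th component of each candidate vector."""
    V = np.array(vectors, dtype=float)   # row j = vector v_j
    n = V.shape[0]
    A = np.empty((n, n + 2))
    A[:, 0] = lo
    A[:, -1] = hi
    A[:, 1:-1] = V.T   # entry (i, j) of the middle block is (v_j)_i
    return A

print(inequality_matrix([(0.3, 0.1), (0.1, 0.4)]))
```

This only sets up the matrix; whether the elimination step can be carried out without breaking the inequality rules is exactly the open question discussed above.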
I really hope that the third algorithm works or can be made to work,
because I think it is exactly what you're looking for. If not, I hope
one of the other algorithms can be made to work in some way for you.
If you have any questions, of course, please reply. Good luck.

Clarification of Answer by smudgy-ga on 09 Oct 2002 11:11 PDT
My apologies. In my last algorithm I have managed to get mixed up
between an n-dimensional case and a 2-dimensional case. So be aware of
this as you're reading through it. Sometimes I talk about "2 vectors"
or "2 linear combinations" when I really mean "n vectors" or "n
combinations", etc.
Sorry for the confusion.

Clarification of Answer by smudgy-ga on 09 Oct 2002 20:21 PDT
One more modification to my third algorithm as given:
Since we don't want vectors that translate to the second or fourth
quadrants, we should limit our linear inequalities to being between 0
and 1, rather than -1 and 1. I overlooked this in my original
write-up.
I believe that in our process of Gauss-Jordan elimination of our
matrix, a negative multiple of a row will not be an issue. This was my
original concern when I wrote up the algorithm. The problem with
inequalities is that when we multiply an inequality by a negative, the
direction of the inequality changes. However, this can be accounted
for in our two-sided inequality matrix as follows:
If a row (in, say, the three dimensional case) of our matrix
represents the inequality
0 < 2x - 3y + 4z < 1
for example, then it will appear in the matrix as the row
0 2 -3 4 1
If we multiply this inequality by, say, -1, we have
0 > -2x + 3y - 4z > -1
which does not match the pattern for our inequalities: the inequality
symbols must be "pointing" the same direction for us to add two of our
inequalities together.
To rectify this, we can simply rewrite the inequality as
-1 < -2x + 3y - 4z < 0
represented in the matrix by the row
-1 -2 3 -4 0
Hence, if we need to multiply a row by a negative in the process of
RREFing our "vector matrix", we simply need to swap the first and last
positions of the row after multiplying by -1 to make our new
inequality fit the pattern of our other inequalities.
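The bound-swapping rule can be captured in a small helper (a sketch assuming the row layout [lo, coefficients..., hi] described above):

```python
def scale_row(row, c):
    """Multiply an inequality row [lo, a_1..a_n, hi] (meaning
    lo < a_1*d_1 + ... + a_n*d_n < hi) by a nonzero constant c.
    When c is negative, the inequality directions flip, so the two
    bounds are swapped after scaling to restore the lo < ... < hi
    pattern."""
    scaled = [c * v for v in row]
    if c < 0:
        scaled[0], scaled[-1] = scaled[-1], scaled[0]
    return scaled

# The example above: multiplying 0 < 2x - 3y + 4z < 1 by -1
print(scale_row([0, 2, -3, 4, 1], -1))  # [-1, -2, 3, -4, 0]
```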
This resolves most of my concerns about the solvability of the matrix
described in the third algorithm. However, a few tests of this
algorithm would certainly be in order to see if it behaves the way we
expect it to.

Request for Answer Clarification by fouyang-ga on 10 Oct 2002 05:32 PDT
smudgy, I don't quite understand the details of your three solutions
yet. But I'll try to make some comments on each one, as my
contribution.
The first solution: I don't understand what "intersection" means. Is
it the corner points of the parallelogram, or is it where the lines of
the unit square meet the lines of the parallelogram? Perhaps a
clarification would help me to understand the rest.
The second solution: I agree that in the higher-dimensional case, we
will need to consider all possible combinations. This is probably not
a good course to take.
The third solution: I'll have to go back to my textbook and think
about the details. But intuitively it sounds too good to be true. I
will be tied up tonight, so probably by Saturday morning I will have
something more to say.
I have two more suggestions for your consideration.
First, is there a relationship between the area of the parallelogram
(or volume, in the higher-dimensional case) and the determinant of the
matrix formed by the vectors? If there is, that might help us.
Second, notice that all linear combinations (with integer
coefficients) of the original vectors are junction points of a
parallelogram grid spanned by the original vectors. All we need to
know is whether there are n linearly independent junction points
inside the unit square. This can be done by actually constructing the
grid and testing each junction point. In the n-dimensional case, the complexity is
still k^n. However, if we find a good way to construct the grids and
to decide when to stop (i.e., all remaining grids won't be inside the
unit square), then k can be a quite small number and we can still
manage the complexity. Is this similar to your solution 1?

Clarification of Answer by smudgy-ga on 10 Oct 2002 08:42 PDT
fouyang,
The algorithm you describe is essentially what I had in mind for
algorithm 1. My use of the word "intersection" is intended to be the
same as your use of the term "junction point".
I cannot come up with a good method to correctly characterize the
appropriately-sized parallelogram grid necessary to contain the
square. But the size of each dimension of the grid (i.e., the number
of copies of each vector that form the sides of the parallelogram)
will be somehow related to the dimensions of the vectors. If we have,
say, n=2, and vector v_1 is the shorter of the two vectors in the
x-direction (say, 4*v_1 is greater than 1 in the x-direction but not
3*v_1), then we will need the v_1 dimension of our parallelogram grid
to be 4 units long, probably longer. Likewise if, say, v_2 is shortest
in the y-direction and 6*v_2 is greater than 1 in the y-direction (but
not 5*v_2), we will need at absolute minimum the v_2 dimension of our
parallelogram grid to be 6 units long. This is further complicated by
the fact that v_1 and v_2 may form a very small angle between them,
and to guarantee that the square will fit in the grid, we will need
significantly more than 4 copies of v_1 and 6 copies of v_2.
A little complication arises if a particular vector is the shortest in
both dimensions, but the basic idea is the same overall, I think.
Clearly, if vector v_1 is shortest in both the x and y directions, and
it takes 6 v_1's to be bigger than 1 in the x direction, and 4 v_1's
to be bigger than 1 in the y direction, we will need a minimum of 6
v_1's. Now the number of v_2's that would be necessary at minimum
would be the number of v_2's it takes to be greater than 1 in both the
x-direction and the y-direction. In practice, though, as mentioned
above, you will need more than these minimum numbers in order to
actually have your parallelogram grid contain the square. How much
more than these minimums are necessary I'm not sure.
In any case, there will certainly be a minimum number k for each
vector that the grid needs to be larger than. Unless the vectors are
all significantly long and almost perpendicular (possibly long enough
and perpendicular enough to make it an impossible system), the size of
the grid will have to be at least 2 for each vector (several examples
have shown this to be the case to me, for sure). Now that means that
for each edge of the parallelogram grid there will be at least 3
junction points, which means that we're talking a minimum of 3^n
points to check, which isn't bad as exponential growth goes, but it's
still exponential growth.
Another complication comes in the fact that our square must be placed
with one of its corners at a junction point of the grid. Not every
parallelogram that contains the square will necessarily be the same as
a parallelogram grid that contains the square with one of the square's
corners at a junction point.
As for your other query:
There is a relationship between the area of a parallelogram and the
determinant of a matrix: From Google's cache of
http://www-math.mit.edu/~djk/18_013a/chapter04/section01.html
(google search terms: determinant area parallelogram; google result
#1)
"Given three vectors in three dimensions we can form a 3 by 3 matrix
of their components, and we will see that the absolute value of the
determinant of that matrix is the volume of the parallelepiped whose
edges are determined by the three vectors."
and later,
"In fact an analogous results holds for k kvectors [including the 2d
case  smudgy]: the absolute value of the determinant of the matrix of
their components is the kvolume of the figure they bound."
I am not sure how we can use the area of the parallelogram to our
benefit. Certainly the area of the parallelogram must be greater than
1. However (similar to the problems listed above) a parallelogram can
be very skewed and still have an area greater than 1. It may still be
impossible to fit the square inside this parallelogram.
I believe the computation time of the determinant should be on the
order of n^3, as it is with Gauss-Jordan elimination. If we reduce our
matrix, we get the identity matrix, which has determinant 1. Working
backwards through our elementary row operations, we should be able to
see how the determinant changes with each step and calculate the
determinant of our original matrix.
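The area/determinant relationship quoted above is easy to check numerically; for example (the pair of vectors is an illustrative choice):

```python
import numpy as np

# The absolute value of the determinant of the matrix whose rows are
# two vectors equals the area of the parallelogram they span.
V = np.array([[3.0, 0.0],
              [1.0, 2.0]])
area = abs(np.linalg.det(V))
# The parallelogram has base 3 along the x-axis and height 2, so the
# area is 6, matching |det V| = |3*2 - 0*1|.
print(area)
```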
I agree that the last algorithm sounds a little too good to be true.
If it works, we've reduced our search to testing inequalities for
integer solutions. But this might serve the original hope I had
intended for an algorithm: that we don't need to find the actual
linear combinations that yield the points to know whether the points
exist, and that in not actually finding these combinations we might
have eliminated the part of the search that was taking exponential
time.
I will continue to work on both of these algorithms. Here's hoping the
last one is successful.
PS: I have a lead on an "algorithms" textbook which should list the
operation time of many of these operations (e.g., calculating
determinant, RREF, etc) to give you a better idea of running time.

Clarification of Answer by smudgy-ga on 16 Oct 2002 18:58 PDT
Fouyang,
Sorry about the lengthy response time. This is the official word from
the higher-ups if you need to reference and attribute this answer in a
publication:
"The appropriate procedure, if the author wants to cite the
question/answer, would be for him/her to email us so that we can clear
this through appropriate channels. We'd need to know the nature of
the publication and how the answer would be cited. Of course, he
would have to include appropriate citation."
Once you have emailed the editors, they would clear it with me, and
then they would respond to you with the go-ahead to use the citation.
The email address for the editors of this forum is
answers-editors@google.com
It's been a pleasure working on this problem with you. Good luck with
your work!

Request for Answer Clarification by fouyang-ga on 17 Oct 2002 18:32 PDT
Hi, smudgy, sorry to bother you again. But I think I found a very
simple solution, and I'd like to share it with you.
Make the original problem more general (actually one can prove that
this is equivalent to the original problem):
Given an N by N matrix V, find the condition for V such that for any
vector x, one can find a vector d(x), with all components being
integers, satisfying the inequalities 0 < (Vd - x)_i < 1 for each component.
Convert the matrix V into a lower triangular matrix L: V=QL. We can
convert the inequality to
u < (Ld - y) < v.
u and v are constructed as you mentioned before, from each row of
Q^(-1).
y = Q^(-1)x is still an arbitrary vector.
Now, let's look at the first row. It becomes u_1 < L_11*d_1 - y_1 < v_1.
For this to have a solution for any value of y_1, the sufficient and
necessary condition is abs(L_11) < (v_1 - u_1). If this condition is
met, we can easily find d_1, given y_1.
Now the next row becomes u_2 < (L_21*d_1 + L_22*d_2 - y_2) < v_2. Note
that L_21*d_1 - y_2 is still an arbitrary number. Therefore, the
sufficient and necessary condition for the existence of d_2 is
abs(L_22) < (v_2 - u_2).
The same argument can be applied to every row. Therefore, we get the
sufficient and necessary condition for the whole. This also serves as
a way to find these integers, given the point x.
I know you are off the task. But I thought you'd be interested to
know. I appreciate very much your comments (especially if I did
something wrong). Is it possible to express it in a neater form
(e.g., using the properties of Q and L, instead of in component form)?

Clarification of Answer by smudgy-ga on 17 Oct 2002 19:23 PDT
I must admit that I have some trouble following your revised proof,
not because it's flawed but because it's much more technical than I've
had to deal with at any time in the recent past! My linear algebra
course was a long time ago... I understand up until you split matrix V
into the product of Q and a lower triangular matrix L. Clearly this is
possible, but I do not recall what the properties of Q would have to
be to ensure that QL=V. Therefore the parts of your proof relying on Q
and Q^(-1) are somewhat lost on me.
However, it is clear from the layout of the proof itself that it is a
very straightforward one. It seems much more algebraic to me than the
proofs we initially worked on, which is fine; but for myself I was
picturing the problem as a geometric one, and using linear algebra to
demonstrate the geometric ideas. Your algebraic proof is certainly
just as valid for approaching the problem (assuming it works, which I
am inclined to believe it does).
I would also be interested, incidentally, on a formalized version of
your "covering the parallelogram with squares" proof. I may work on
that one myself, since the geometry of it appeals to me.
This is a great problem, and I am surprised that it does not appear
somewhere in the literature. "How can I create a grid so that a square
laid on the grid in a fixed orientation will always contain a grid
point?" is a very intriguing question by itself. It's been a lot of
fun to work on it; I'd be happy to entertain any other thoughts you
have on the problem.

Request for Answer Clarification by fouyang-ga on 18 Oct 2002 18:12 PDT
smudgy:
It's my fault. With the text format it is hard to write formulas.
So I will try to outline the proof rather than writing it exactly.
I have further simplified the proof since my last post. It goes like
this.
Again, we want to find the condition for V such that we can have
-0.5 < (Vd + x)_i < 0.5 for each component i, for any x, with an
"integer vector" d.
Let U be the inverse of V: UV=I. If we take the first row of U and
make a linear combination of the inequalities, the center part will be
d_1 + y_1, where y_1 is a linear combination of all components of x.
However, to us it is just an arbitrary number. The left side is -0.5
times the sum of the absolute values of the components in that row.
The right side is 0.5 times that sum. This is the same process as in
your solution using the Gaussian elimination process. The inequality
becomes -0.5s < d_1 + y_1 < 0.5s, where s is the sum of absolute
values, as described before. So the sufficient and necessary condition
for d_1 to exist, for arbitrary y_1 values, is that s>1.
Then we can use the second row of U to do the same, to find d_2, and
so on and so forth. Therefore, the final necessary and sufficient
condition is that for U, the inverse of V, the sum of the absolute
values of the components in each row is larger than 1.
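The condition as stated is straightforward to encode (note that the thread later retracts the argument behind it, so this only tests the condition as written; the function name and example matrices are my own):

```python
import numpy as np

def row_abs_sum_condition(V):
    """Check the proposed condition: every row of U = V^(-1) has an
    absolute-value sum greater than 1. (The argument behind this
    condition is retracted later in the thread; this merely encodes
    the condition as stated.)"""
    U = np.linalg.inv(np.array(V, dtype=float))
    return bool(np.all(np.abs(U).sum(axis=1) > 1.0))

# Small vectors give an inverse with large entries, passing the test;
# a large diagonal matrix has a small inverse and fails it.
print(row_abs_sum_condition([[0.3, 0.1], [0.1, 0.4]]))
print(row_abs_sum_condition([[10.0, 0.0], [0.0, 10.0]]))
```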
This solution is inspired by your handling of the Gaussian
elimination. So my thanks again.
Do you know what other properties a matrix would have, if it has the
above property?

Request for Answer Clarification by fouyang-ga on 19 Oct 2002 06:05 PDT
smudgy:
Please ignore my two previous posts. I was completely wrong. One
cannot linearly combine the inequalities like we did. A simple
argument: if one inequality is strongly met (such as 0 < 100) and the
other is weakly violated (such as 0 < -0.001), we can easily find two
linearly independent linear combinations (such as one copy of the
first plus two copies of the second, and one copy of the first plus
three copies of the second) that result in valid inequalities.
However, this doesn't mean the original inequalities are satisfied.
This also means that part of your solution, the Gaussian elimination,
is invalidated as well. However, I think you've done enough for this
problem. I don't think you are obligated to continue.
By the way, you asked about my formal proof on the squares filling the
parallelogram part. As you can see, I was sidetracked. Besides, I
am still having problems with the "necessary" part of the proof. I
believe there is a counterexample in the two-dimensional case. If we
have two vectors (0.999, -0.5) and (0.01, 0.999), we can move any
point into the unit square. However, we cannot construct two
linearly independent vectors inside the square bounded by (0,0) and
(1,1). In fact, I sort of believe that it is sufficient to have two
vectors in a square of side one. But the square does not have to
have one corner at (0,0).
As I said, you are officially off the case. But if you have a quick
answer to my doubt, I'd appreciate it.

Clarification of Answer by smudgy-ga on 19 Oct 2002 07:28 PDT
fouyang,
First I will deal with your counterexample to our condition: I'm
pretty sure you can't move every point on the edge of the square to
the edge/interior with this pair of vectors. Consider a point on the
square that is .4 units up from the bottom-left corner. As you stated,
there are no linear combinations of the vectors that will move any
surface point to the interior, but your first vector will move your
point .1 unit below the square, and your second vector will move it
.599 units above the square. So your vectors V are not sufficient to
move all points of the surface to the interior. I still have a lot of
faith in our condition as stated.
However, in trying a couple of examples with the inequality method of
solving for D, I agree that it doesn't seem to work. I feel like there
must be some way to make a similar method work. The problem seems to
be that (in the two-dimensional case) choosing a value for d_1 sets
further boundaries on the value of d_2, beyond those determined by the
inequalities. Though maybe this method has another use: perhaps it
can help us determine the size of the parallelogram grid we need to
search, though in what way I am not sure (this is a totally intuitive
leap...).
I am pretty certain that our condition on V is valid both in the
necessary and sufficient directions. I'd love to hear about further
successes/complications.
