Does P=NP? I have a theory and I would like your opinion, Mathtalk.
P=NP
For the definition and details of the problem, see:
http://www.claymath.org/Millennium_Prize_Problems/P_vs_NP
The solution is this: suppose I want to compress an arbitrary amount
of data. For this example I'm going to use 1 Megabyte of random data
(1,048,576 bytes, or 8,388,608 bits). If I can take a complete set of
random data and compress or represent the same data in fewer bits
than the original set for ALL data, then P=NP.
The data has a value that can be converted to a decimal
representation (base 10, or any base for that matter). Although a
VERY large number, millions of digits long, the number is finite. The
EXACT same number can be represented by a "series" of bytes, for
example broken up 8 bytes apiece (like a computer program). These two
numbers, a series and one large "whole" number, although on the face
they have no obvious correlation, are the same data in base 10. So I
can set up a relationship for my original data set: BIG NUMBER =
SERIES OF DATA (a, b, c, d, ...). Now OBVIOUSLY the magnitude (or
size) needed to represent the data values is exactly the same; I'm
reading the same data to create the equation. (More on symmetry later.)
So if I can set up the equation with a third, new dataset that holds less data, then P=NP.
Using the properties of factorials, I will prove P=NP.
More on Factorials:
http://mathworld.wolfram.com/Factorial.html
Utilizing the properties of factorials (X!), we can compress the data.
The largest factorial that satisfies the inequality 2^8,388,608 > X!
is 481,176! (that is, X = 481,176).
Proof provided by research at Google Answers:
http://answers.google.com/answers/threadview?id=292562
The second proof we will need is that (X!)^2 > (X+1)!, the next factorial:
http://answers.google.com/answers/threadview?id=293570
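Both factorial facts are easy to sanity-check numerically. Here is a
minimal Python sketch (mine, for illustration only; it compares
logarithms via the log-gamma function, since the numbers themselves
are far too large to construct directly):

```python
from math import lgamma, log

N_BITS = 8_388_608  # bits in 1 megabyte
LOG2 = log(2)

def log2_factorial(x: int) -> float:
    # lgamma(x + 1) = ln(x!), so dividing by ln(2) gives log2(x!)
    return lgamma(x + 1) / LOG2

# Claim 1: 481,176 is the largest X with X! < 2^8,388,608
print(log2_factorial(481_176) < N_BITS < log2_factorial(481_177))  # True

# Claim 2: (X!)^2 > (X+1)!, which holds whenever X! > X + 1 (all X >= 3)
print(all(2 * log2_factorial(x) > log2_factorial(x + 1)
          for x in range(3, 1_000)))  # True
```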
So, starting from position 0 in a new data file that will represent
the original data, I will use the first bit to tell me whether I can
subtract 481,176! from the ORIGINAL DATA. If I can, I store the first
bit as 1 and subtract the two numbers. Then the remaining data is
tested against (X-1)!, the next smaller factorial. The data will lose
magnitude (in a factorial progression). It should be no concern if a
factorial could be subtracted out more than once; we have plenty of
room to store that if it were possible.
Proof:
http://answers.google.com/answers/threadview?id=293570
So the original data will get smaller and smaller, down to 0 when I
get to the last factorial, 1!. Most of the time the factorial X! will
be < the remaining data, but occasionally a factorial will subtract
off so much that the next factorial will not fit. This creates a
superposition of the values being subtracted off.
So if I store whether or not each factorial is < the remaining data,
I will have stored all 8,388,608 bits in 481,176 bits.
With that, P=NP.
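For concreteness, a minimal Python sketch (illustrative, not part of
the original scheme's text) of the one-bit subtraction pass described
above. It also shows the snag addressed in the clarifications below:
some inputs need the same factorial subtracted more than once, so one
bit per factorial can leave a nonzero remainder:

```python
from math import factorial

def one_bit_encode(z: int, k_max: int) -> tuple[list[int], int]:
    # Walk the factorials from k_max! down to 1!: store 1 and subtract
    # when the factorial fits, store 0 otherwise.  A lossless encoding
    # would have to finish with a leftover of exactly 0.
    bits, rem = [], z
    for k in range(k_max, 0, -1):
        f = factorial(k)
        if f <= rem:
            bits.append(1)
            rem -= f
        else:
            bits.append(0)
    return bits, rem

# z = 4 needs 1! subtracted twice (4 = 2! + 1! + 1!), so the pass
# gets stuck with a leftover of 1:
print(one_bit_encode(4, 3))  # ([0, 1, 1], 1)
```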
(Note: this is a very rough draft, and typos or spelling may be questionable.)
Clarification of Question by mrsneaky-ga on 07 Jan 2004 18:22 PST
I found a small error... with the smaller numbers I may need some
extra bits as counters. Below a certain point, a factorial may go in
more than once. But the extra count is very small and grows only at
the VERY end (with small numbers).
Clarification of Question by mrsneaky-ga on 07 Jan 2004 23:53 PST
One last point that I don't think is very clear upon rereading the
question: how the series of data creates a whole number, and why the
two would be equal. The data could be from a JPEG file, as an
example; for all intents and purposes, random data. As each byte is
stored in sequence up to the first megabyte, the result, read as a
single number, is a very large whole number millions of digits long.
Therefore the Series = Whole Number.
The other thing on the clarification: the last couple of factorials
(< 10, or something very small) may need additional "counters", which
would be the exponent value (versus just 1 and 0 as counters). Even
if every factorial needed 8 bits (a count of up to 255, which is
impossible), the data would still shrink.
There are a couple of typos or mistakes above, like writing (X!-1)
where I meant (X-1)!, but I think you should see through those.
Clarification of Question by mrsneaky-ga on 08 Jan 2004 12:56 PST
The original plan was to leave 8 bits for each factorial (a count of
up to 255), which is WAY overkill. With more effort I can allocate
bits as needed. But even if I allowed each factorial to be subtracted
up to 255 times, the net is 1 MB down to 481,176 bytes (not bits).
The process is also iterative, so I can do it over and over.
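The arithmetic behind that net figure, spelled out (my worked check
of the claim):

```latex
481{,}176 \times 8\ \text{bits} = 3{,}849{,}408\ \text{bits} = 481{,}176\ \text{bytes} \approx 0.46\ \text{MB} < 1\ \text{MB}
```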
Request for Question Clarification by mathtalk-ga on 08 Jan 2004 13:12 PST
Hi, mrsneaky-ga:
I haven't completely parsed out your Question yet, but you raise two issues:
1) Does P=NP?
2) Can one "take a complete set of random data and compress or
represent the same data in fewer bits than the original set for ALL
data"?
You suggest that 2) implies 1), which I can agree with (although I
haven't understood yet how you see the implication) because 2), at
least in the broad sense which you seem to imply by putting "ALL data"
as the qualifier, is false.
Consider for example a datum consisting of a single bit, which may be
0 or 1 with equal probability. Can this be compressed to less than
one bit? While I think it is relatively easy to agree that further
compression is impossible for "small" amounts of data, you seem to
have focused on the possibility of doing something better with larger
amounts of data.
It is understandable, given the many successes of compressing files
to smaller ones, that it may seem some more sophisticated technique
will eventually do what you suggest: compress any and all input
streams. But mathematically this is impossible. The reality of any
"lossless" compression technique is that there is a tradeoff between
the files that can be compressed (often by a great deal) and those
which actually expand when "compressed", even if only by a little bit.
If you were to look at the entire universe of files, you'd see that
the accounts balance exactly, according to the strict laws of
mathematics. What makes compression algorithms such as those used in
PKZIP of practical importance is that the distribution of files that
are important to us (as humans) tends to overrepresent the small
portion of that universe in which compression is somewhat to very
effective.
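To make "the accounts balance exactly" precise, this is the standard
counting (pigeonhole) argument:

```latex
\#\{\text{bit strings of length} < n\} = \sum_{k=0}^{n-1} 2^k = 2^n - 1 < 2^n = \#\{\text{bit strings of length } n\}
```

So any lossless scheme that shortened every n-bit input would map 2^n
distinct inputs onto at most 2^n - 1 outputs, forcing two inputs to
share an output.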
I will be eager to read your post more carefully this evening. In the
meantime I wanted you to have a chance to rethink the price; I believe
you have offered too much for the little insight that I may be able to
add.
regards, mathtalk-ga
Clarification of Question by mrsneaky-ga on 08 Jan 2004 13:51 PST
I need to add an extra term in the equations so it will make more
sense tonight. I want to store the "exponent" value of the factorial
that I will "subtract", not how many times I want to subtract a
specific factorial.
I completely understand it is impossible. I ask for an open mind.
Clarification of Question by mrsneaky-ga on 08 Jan 2004 14:00 PST
So, to start over in a sense: I take the Original Whole Number
created by a data set; that is called Z.
Z - (X!)^Y will give me a new remainder set. I store only Y for each
factorial from 481,176! down to 1!. Y, in my opinion, will never be
larger than 255, and that's what I would like help on: can Y be
larger than 255??? (I can't seem to find where I can make Y larger
than 255.)
Clarification of Question by mrsneaky-ga on 08 Jan 2004 14:04 PST
And to take it a step further: Y can be as large as 2^16 and it still works.
Clarification of Question by mrsneaky-ga on 08 Jan 2004 15:49 PST
Good evening, Mathtalk:
Let me try to explain what I think is possible. I could be wrong, and
you are the man to prove me wrong. :)
I want to create a factorial map back to a very large number, 1 MB
(this size is arbitrary), which has a finite whole-number value.
Utilizing factorials, we know that the largest possible factorial
that "fits" into the data is 481,176!.
Using factorials as the "currency", in exponential terms, I thought I
could lay out a map from a binary number to a new, equivalent
"factorial" number stored as binary, and therefore compressed. That
is what I was trying to do.
I may need to use 4 or 5 significant digits, with predetermined
fractional exponents, to get "closer" to the data. This idea has been
a work in progress, by no means finished. I'm just exploring an idea.
Request for Question Clarification by mathtalk-ga on 08 Jan 2004 16:16 PST
I've been trying to follow how you will define the mapping. Agreed we
start with a number Z greater than 0 and less than 2^8,388,608. I
would like to resist the labelling of Z as a "big" number, as I think
it may be crucial to your argument to define the mapping for all
numbers Z in this range.
Your construction sounds a bit like using "mixed radix" position
notation for numbers, where the highest order position gives the
number of multiples of 481176! contained in Z, the next highest gives
multiples of 481175!, and so on all the way down to the lowest
position = multiples of 1! = 1.
This leads to a unique representation of the form:
Z = d_0*(481176!) + d_1*(481175!) + d_2*(481174!) + ... + d_481175*(1!)
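To fix ideas, here is a minimal Python sketch of this mixed-radix
(factorial-base) representation; the code and function names are
illustrative only:

```python
from math import factorial

def to_factorial_base(z: int, k_max: int) -> list[int]:
    # Digits for k_max!, (k_max - 1)!, ..., 1!.  The digit for k! lies
    # in 0..k, and the representation is unique for 0 <= z < (k_max + 1)!.
    digits = []
    for k in range(k_max, 0, -1):
        f = factorial(k)
        digits.append(z // f)
        z %= f
    return digits

def from_factorial_base(digits: list[int]) -> int:
    k_max = len(digits)
    return sum(d * factorial(k_max - i) for i, d in enumerate(digits))

# Round trip over the full range covered by k_max = 5, i.e. 0 .. 6! - 1:
assert all(from_factorial_base(to_factorial_base(z, 5)) == z
           for z in range(720))
```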
Now I confess that this is merely an idea that popped into my head
from reading some of your Clarifications, esp. the most recent one,
and not necessarily one you meant to put there. For in places you
also refer to exponents and in particular the idea of (largest?) Y
such that Z - (X!)^Y gives "a new remainder set", and of this
construction I am completely uncertain how to get unique
representations. In particular note that when X = 1, the exponent Y
can be arbitrarily large yet give the same result.
I would suggest working from the simple end of the range: What does
your mapping do to represent 1? 2? and so on, until the definition is
clear to you. Then you can perhaps define the mapping for "large" Z
in a succinct fashion.
regards, mathtalk-ga
Clarification of Question by mrsneaky-ga on 11 Jan 2004 23:24 PST
I appreciate the very informative replies. I'll continue to think
about the problem. I'm sure I'll have future questions.
Request for Question Clarification by mathtalk-ga on 12 Jan 2004 07:49 PST
Hi, mrsneaky-ga:
Perhaps you'd be interested in a discussion of P=NP? There was
actually a new announcement of results on this front, and I could try
to catch you up on the story.
regards, mathtalk-ga
Clarification of Question by mrsneaky-ga on 01 Feb 2004 21:03 PST
MathTalk:
Can you review this document with the attached spreadsheets? The link
is http://4thandinchesinc.com/p.htm.
I think you should find it at least a little interesting. Any opinion
would be considered an answer to this question.
Thanks,
MrSneaky
Request for Question Clarification by mathtalk-ga on 01 Feb 2004 22:43 PST
Sure, I'll take a look!
--mathtalk-ga
Clarification of Question by mrsneaky-ga on 03 Feb 2004 15:51 PST
Great! Thanks again!
Clarification of Question by mrsneaky-ga on 05 Feb 2004 08:18 PST
Should I make a new question since this is going to expire tomorrow?
Request for Question Clarification by mathtalk-ga on 05 Feb 2004 08:23 PST
No, I'll post my remarks tonight unless the dreaded Unable to Process
message proves an obstacle. Most times one can get through with a bit
of persistence.
Thanks, Mathtalk-ga
Hi, MrSneaky:
I've tried to sort out your outlined proof that by compressing any 1
Megabyte dataset into a smaller one, the P=NP problem can be solved.
Here are some comments:
1) Although you declare that a goal of your construction is to solve
the P=NP problem, you say little about how you mean this. Besides
providing a link to the Millennium Challenge Prizes page, almost your
only mention of P=NP says simply:
A separate independent proof that states, "if a general solution were
to be found, it would meet the requirements of P=NP and violate the
Counting Theorem".
I would expect considerably more detail about why one should believe that P=NP.
2) From your earlier statements you accept that an arbitrary 1
Megabyte dataset can be identified as a nonnegative integer between 0
and 2^N - 1 where:
N = 8,388,608
and from the above mention of violating "the Counting Theorem" you
seem to be aware that you are in essence proposing that the basic
theory of arithmetic is inconsistent. It is hard for me to understand
how you might believe this, for a sincere belief in this would seem to
make one completely indifferent as to whether the Millennium Prize is
one million dollars or one dollar. After all, if counting gives
whatever answer you wish, then objectively there is no distinction.
3) If you wish to understand in detail where your attempted
construction goes awry, you can compare the formula you give for
representing C:
C = SUM X(k) * (k!)^Y(k) FOR k = 1 to 481,176
with the formula I gave above for a "mixed radix" number representation:
C = SUM X(k) * (k!) FOR k = 1 to 481,176
I noted that in my formula, the integer values X(k) could be found to
yield C in the range you want subject to X(k) between 0 and k
(inclusive). In your construction you introduce exponents Y(k), which
are said to be between 1 and 2, and so in general do not have integer
values.
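The uniqueness of the integer version rests on a standard identity
(added here for completeness):

```latex
\sum_{k=1}^{n} k \cdot k! = (n+1)! - 1
```

With each X(k) ranging over 0, 1, ..., k, the sums therefore pass
exactly once through every integer from 0 to (n+1)! - 1.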
But then it is no longer clear what values you will produce in your
formula. You mention limiting these exponents Y to some particular
values numbering as many as 256, and also you introduce a notion of
"approximation". It then is completely open as to what your formula
can or cannot represent.
Finally I'd like to know your reaction to the following thought
experiment. If your construction were applied to each of the
2^8,388,608 possible inputs, how many different outputs would you get?
If there are two inputs which produce the same output, does this not
prove that your "compression" algorithm would not allow for a
"decompression" algorithm to undo its action?
regards, mathtalk-ga