Could someone please help me out? I'm not a math whiz, but I think I've stumbled onto something important, and it looks like it could have some profound implications for a number of fields.
I call it a Random Filter. The opposite of a random number generator, this algorithm takes a stream of numbers and removes the random ones, thus leaving a collection of purely non-random numbers. Here's the simple version, where the function rand() represents your favorite random number generator:
    loop
        n <- input         # get next n
        r <- rand()        # also get a random number
        n = r ?            # is n itself a random number?
          yes: trash  <- n     # then toss it
          no:  output <- n     # otherwise keep it
    end loop
In other words, the randomness of each n is determined by comparing it against a number known to be random; all the random n's are filtered out. It's just like the Sieve of Eratosthenes, that technique of finding prime numbers by eliminating all the composites. For example, this could be handy for scientific researchers: now they can weed out any random fluctuations in their statistical data. Astronomers can get sharper pictures. Physicists can finally get past that whole Heisenberg thing.
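For the skeptics, here's a minimal Python sketch of the loop above. The 0-255 value range and the name random_filter are my own choices, and of course Python's random module is only a pseudo-random comparison source (see the caveat below):

```python
import random

def random_filter(stream, rng):
    """Keep each n only if it differs from a freshly drawn
    'known-random' number, per the loop above."""
    kept = []
    for n in stream:
        r = rng.randrange(256)   # also get a random number
        if n == r:
            pass                 # n is "random": toss it
        else:
            kept.append(n)       # otherwise keep it
    return kept
```

Fed byte values, it tosses each one with probability 1/256, so nearly all of your data turns out to be non-random. Promising!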
Obviously this technique hinges on having a good, reliable source of randomness to use for comparison. Using a computer-based rand() in the loop above will only produce PSEUDO-non-random output.
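Until a proper Geiger rig is hooked up, the closest thing I've found to a non-PSEUDO source is the operating system's entropy pool, which most systems feed from hardware noise. In Python that's os.urandom (how good it actually is depends on your OS, so call this an assumption):

```python
import os

def true_rand_byte():
    # Draw one byte from the OS entropy pool, which is seeded from
    # hardware events rather than a deterministic formula.
    return os.urandom(1)[0]
```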
Before you tell me what you think of all this (cough cough), let me assure you that I have already used it, with fabulously successful results. Before sending this note, I took its ASCII text and merged it with a file of Geiger readings from some radioactive isotope. Then, using the same Geiger recording as my rand() function (since it was, after all, still random), I pumped the altered text through a Random Filter and, voila, out came the original! Pretty amazing, huh?
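Since the note above doesn't spell out the merge step, here's a hedged Python reconstruction of the experiment: I'm assuming the merge simply interleaves each text byte with one Geiger byte, and I'm using a variant of the Filter that advances its reference stream only when a match is found (the names merge and random_filter, and that advance-on-match rule, are my guesses, not gospel). A text byte that happened to equal a Geiger byte would get tossed by mistake, so the demo keeps the stand-in noise above the ASCII range:

```python
def merge(text, noise):
    # Interleave each text byte with one "random" Geiger byte.
    out = []
    for t, g in zip(text, noise):
        out.extend([t, g])
    return out

def random_filter(stream, noise):
    # Toss any byte that matches the next known-random byte,
    # advancing the reference stream only on a match.
    kept, j = [], 0
    for n in stream:
        if j < len(noise) and n == noise[j]:
            j += 1              # n was one of the random bytes: toss it
        else:
            kept.append(n)      # otherwise keep it
    return kept

text = list(b"hello")
noise = [200, 201, 202, 203, 204]   # stand-in Geiger readings
assert bytes(random_filter(merge(text, noise), noise)) == b"hello"
```

Voila: out comes the original, at least as long as the noise never collides with the text.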
I can only speculate as to what would happen if I'd had some mechanical dice roller feeding its results to the computer in real time. With that degree of randomness at its disposal, the Filter might have corrected my spelling, cleaned up my grammar, or removed some other imperfections I'm not aware of. No telling how powerful this technique could be.
Think of what this could mean for areas like data integrity, quantum physics, radio reception, compiler design, weather forecasting, economics, structural design, even racetrack handicapping... the applications seem virtually limitless.
Well, waddya think? One thing's for sure--I'm planning a trip to Las Vegas. Soon as I can figure out how to use the slot machines to beat the roulette wheel.
Late-breaking news: The Random Filter has continued to demonstrate its value. I applied it to a list of the last six months' winning lottery numbers. I tried every method and variation I could think of, and the results were absolute gobbledygook: the Random Filter concept simply could not be meaningfully applied to this set of data. In other words, I proved conclusively that the lottery is FIXED (i.e., not random). Pretty amazing, huh?