Compression of random binary data
jonas.thornvall at gmail.com
Tue Jul 12 15:42:44 EDT 2016
Den tisdag 12 juli 2016 kl. 21:40:36 UTC+2 skrev jonas.t... at gmail.com:
> Den tisdag 12 juli 2016 kl. 20:20:52 UTC+2 skrev Michael Torrie:
> > On 07/12/2016 11:46 AM, jonas.thornvall at gmail.com wrote:
> > > Well, the algorithm starts by looking up a suitable folding structure
> > > "close enough to the number"; then it works down the folding
> > > structure, finding the fold where the difference or sum between the
> > > folds is closest to zero.
> > >
> > > You apply the same principle to the remainder until zero is reached.
> > >
> > > So our first fold can be either bigger or smaller, and it seeks a
> > > configuration for the fold that closes in on the actual random
> > > number. The second fold, depending on whether our first fold was
> > > bigger or smaller than the number, will either add or subtract
> > > lower layers of the fold.
> > >
> > > This leaves a difference that needs to be folded in turn, and the
> > > process is repeated until there is nothing left to fold.
> > >
> > > It is basically a search algorithm looking for suitable folding
> > > structures.
> >
> > Better patent it quickly then. And you will win a Nobel Prize in math
> > if you can do what you say you can.
>
> I must stress that when I say "number" here I really mean a number of 100,000+ decimal digits. So I basically search in on big numbers that I compress, dividing the dataset into chunks of a suitable size for compression.
And the chunks that come out of the process can themselves be treated as a new data file, so the compression can be iterated down to a limit.
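The description is vague, but one concrete reading of the "folding" idea is a greedy signed decomposition: repeatedly pick the fold closest to the number, take the signed remainder, and fold that in turn until zero. A minimal sketch, assuming each "fold" is the nearest power of two (the post never specifies the fold structure, so this choice is purely illustrative):

```python
def fold(n):
    """Decompose integer n into signed powers of two that sum to n.

    Hypothetical interpretation of the "folding" search: at each step
    take the power of two closest to the remainder, record it with the
    appropriate sign, and recurse on what is left over.
    """
    terms = []
    while n != 0:
        # The two powers of two bracketing |n|.
        k = abs(n).bit_length() - 1
        low, high = 1 << k, 1 << (k + 1)
        # Pick whichever is closer to |n|, matching n's sign.
        p = low if abs(n) - low <= high - abs(n) else high
        term = p if n > 0 else -p
        terms.append(term)
        n -= term  # the remainder is folded on the next pass
    return terms
```

For example, `fold(100)` yields `[128, -32, 4]`, which indeed sums to 100. Note that recording these exponents does not beat the pigeonhole bound: for a random n-bit number the expected number of nonzero terms grows linearly with n, so the representation is not shorter than the input on average.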
More information about the Python-list mailing list