bitwise not - not what I expected

Michael Peuser mpeuser at web.de
Mon Aug 18 03:42:16 EDT 2003


"Bengt Richter" <bokr at oz.net> schrieb im Newsbeitrag
news:bhpjm1$g01$0 at 216.39.172.122...
> On Sun, 17 Aug 2003 22:18:39 GMT, Dennis Lee Bieber <wlfraed at ix.netcom.com> wrote:
>
> >Michael Peuser fed this fish to the penguins on Sunday 17 August 2003
> >02:41 pm:
> >
> >> I have the impression (may be wrong) that you are working under the
> >> misconception that there can be a "natural" binary representation of
> >> negative numbers!?
> >
> >        Apologies if I gave that impression... the +/- 0 technical affair is
> >the main reason I went into the whole thing...
> >
> >> Three conventions have commonly been used so far:
> >> 1's complement
> >> 2's complement
> >> Sign tag plus absolute binary values
> >>
> >> All of them have their pros and cons. For a mixture of very technical
> >> reasons (you mentioned the +0/-0 conflict, I might add the use of
> >> binary adders for subtraction) most modern computers use 2's complement,
> >> and this now leads to those funny speculations in this thread. ;-)
> >>
> >        From a human readable standpoint, your third option is probably the
> >most "natural"; after all, what is -19 in human terms but a "pure" 19
> >prefaced with a negation tag marker... (I believe my college
> >mainframe's BCD hardware unit actually put the sign marker in the
> >nibble representing the decimal point location -- but it has been 25
> >years since I had to know what a Xerox Sigma did for COBOL packed
> >decimal <G>).
> >
> >        ie, 00010011 vs -00010011 <G>
> >
> >        1s complement is electrically easy; just "not" each bit.
> >
> >        2s complement is mathematically cleaner as 0 is 0, but requires an
> >adder to the 1s complement circuit... Though both complement styles
> >lead to the ambiguity of signed vs unsigned values
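
[A quick Python sketch of my own to make the three conventions concrete -
eight bits wide, nothing authoritative:

def sign_magnitude(n, bits=8):
    # sign tag in the top bit, plain magnitude in the rest
    return ((1 << (bits - 1)) | -n) if n < 0 else n

def ones_complement(n, bits=8):
    # negative numbers: simply invert every bit of the magnitude
    mask = (1 << bits) - 1
    return (~(-n)) & mask if n < 0 else n

def twos_complement(n, bits=8):
    # negative numbers: complement with respect to 2**bits
    return (2**bits + n) % 2**bits

for name, conv in [("sign/magnitude", sign_magnitude),
                   ("1's complement", ones_complement),
                   ("2's complement", twos_complement)]:
    print(name, format(conv(-19, 8), "08b"))

which prints 10010011, 11101100 and 11101101 for -19. - M.P.]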
> >
> Everyone says "two's complement" and then usually starts talking about numbers
> that are bigger than two. I'll add another interpretation, which is what I thought
> when I first heard of it w.r.t. a cpu that was designed on the basis that all its
> "integer" numbers were fixed point fractions up to 0.9999.. to whatever precision
> the binary fractional bits provided. There was no units bit. And if you took one
> of these fractional values 0.xxxx and subtracted it from 2.0, you would have a
> complementary number with respect to two. Well, for addition and subtraction, that turns
> out to work just like the "two's complement" integers we are used to. But since the
> value of fractional bits were all in negative powers of two, squaring e.g., .5 had
> to result in a consistent representation of 0.25 -- i.e. in binary squaring 0.1
> resulted in 0.01 -- which is shifted one bit from what you get looking at the numbers
> as integers with the lsb at the bottom of the registers and the result.
>
> I.e., a 32-bit positive integer n in the fractional world was n*2**-31. If you square
> that for 64 bits, you get n**2, but in the fractional world that looks like (n**2)*2**-63,
> where it's supposed to be (n*2**-31)**2 => (n**2)*2**-62 with respect to the binary point.
> The fractional model preserved an extra bit of precision in multiplies.
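
[Checking that in plain Python - purely my own illustration, reading a 32-bit
word n as the fraction n * 2**-31:

n = 0x40000000                # the bit pattern for 0.5
print(n * 2**-31)             # 0.5

prod = n * n                  # what a 32x32 -> 64 bit multiplier delivers
print(prod * 2**-63)          # the 64-bit result read as a fraction: 0.125
print((prod << 1) * 2**-63)   # one left shift of the result: 0.25, as it should be

So the raw integer product is indeed one bit position away from the fractional
reading, as described above. - M.P.]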
>
> So on that machine we used to count bits from the left instead of the right, and place imaginary
> binary points in the representations, so a binary 0.101 could be read as "5 at 3" or "2.5 at 2"
> or "10 at 4" etc. And the multiplying rule was x at xbit times y at ybit => x*y at xbit+ybit.
>
> You can do the same counting the bit positions leftwards from lsb at 0, as we usually do now,
> of course, to play with fixed point fractions. A 5 at 0 is then 1.25 at 2 ;-)
>
> Anyway, my point is that there was a "two's complement" implementation that really meant
> a numeric value complement with respect to the value two ;-)
>
> Regards,
> Bengt Richter


A very good point! I might add that this is by no means an exotic feature.
Mathematically speaking there is great charm in computing just inside the
interval (-1,+1). And if you have no FPU you can do *a lot* of pseudo-real
operations. You have to keep track of the scale of course - it is a little bit
like working with slide rules, if anyone can remember those tools ;-)
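
Just as a toy illustration of that bookkeeping (my own sketch, nothing
official): keep an integer mantissa together with a scale exponent, and let
every multiplication add the exponents - much like adding lengths on a slide
rule:

def to_fix(x, e=15):
    # represent x approximately as m * 2**-e with an integer m
    return int(round(x * 2**e)), e

def from_fix(m, e):
    return m * 2**-e

def fix_mul(a, b, e=15):
    # integer multiply; the scale exponents simply add up,
    # then shift back down to the working precision e
    (m1, e1), (m2, e2) = a, b
    return (m1 * m2) >> (e1 + e2 - e), e

a = to_fix(0.3)
b = to_fix(-0.7)
print(from_fix(*fix_mul(a, b)))   # roughly -0.21, using integer operations only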

Even modern chips have support for this format, e.g. there is the $5 Atmel
Mega AVR, which has two kinds of multiplication instructions: one for plain
integer multiplication and one which automatically adds a left shift after
the multiplication! I leave it as an exercise to find out why this is
necessary when multiplying fractional numbers ;-)
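
For the impatient, a small Python model of the two flavours - my own sketch,
assuming signed 8-bit operands read as 1.7 fractions (value = byte * 2**-7),
not actual AVR code:

def mul_int(a, b):
    # plain signed 8x8 -> 16 bit multiply
    return a * b

def mul_frac(a, b):
    # the same multiply followed by the extra left shift
    return (a * b) << 1

a = 64   # 0.5  in 1.7 format (0.5 * 2**7)
b = 96   # 0.75 in 1.7 format

# read both 16-bit results as 1.15 fractions, i.e. value = word * 2**-15
print(mul_int(a, b) * 2**-15)    # 0.1875 -- the binary point is one place off
print(mul_frac(a, b) * 2**-15)   # 0.375  -- the correct 0.5 * 0.75

Two 1.7 operands give a 2.14 product, so one left shift is exactly what moves
the binary point back to the 1.15 position.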

Negative numbers are formed according to the same rule for fractionals and
integers:
Take the maximum positive number:   2**32-1  or 0.999999
Extend your scope
Add one bit:  2**32  or 1
Double it:    2**33  or 2
Subtract the number in question
Reduce your scope again
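
In Python, with 32 bits, my own check of that recipe:

bits = 32
x = 19

# add one bit and double it (2**33, i.e. the fractional "2"),
# subtract the number, then reduce the scope to 32 bits again
neg = (2**(bits + 1) - x) % 2**bits

print(hex(neg))                 # 0xffffffed
print(neg == (-x) % 2**bits)    # True: the familiar 2's complement pattern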

Kindly Michael P





