bitwise not - not what I expected

Bengt Richter bokr at oz.net
Mon Aug 18 00:11:45 EDT 2003


On Sun, 17 Aug 2003 22:18:39 GMT, Dennis Lee Bieber <wlfraed at ix.netcom.com> wrote:

>Michael Peuser fed this fish to the penguins on Sunday 17 August 2003 
>02:41 pm:
>
>> 
>> I have the impression (may be wrong) that you are working under the
>> misconception that there can be a "natural" binary representation of
>> negative numbers!?
>
>        Apologies if I gave that impression... the +/- 0 technical affair is 
>the main reason I went into the whole thing...
>
>> Three conventions have commonly been used so far:
>> 1's complement
>> 2's complement
>> Sign tag plus absolute binary values
>> 
>> All of them have their pros and cons. For a mixture of very technical
>> reasons (you mentioned the +0/-0 conflict; I might add the use of
>> binary adders for subtraction) most modern computers use 2's complement,
>> and this now leads to those funny speculations in this thread. ;-)
>>
>        From a human readable standpoint, your third option is probably the 
>most "natural"; after all, what is -19 in human terms but a "pure" 19 
>prefaced with a negation tag marker... (I believe my college 
>mainframe's BCD hardware unit actually put the sign marker in the 
>nibble representing the decimal point location -- but it has been 25 
>years since I had to know what a Xerox Sigma did for COBOL packed 
>decimal <G>).
>
>        ie, 00010011 vs -00010011 <G>
>
>        1s complement is electrically easy; just "not" each bit.
>
>        2s complement is mathematically cleaner as 0 is 0, but requires an
>adder on top of the 1s complement circuit... Though both complement styles
>lead to the ambiguity of signed vs unsigned values.
>
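
In Python terms, the difference described above comes out something like this (a quick
sketch; the 8-bit width is chosen purely for illustration):

>>> bits = 8
>>> mask = (1 << bits) - 1
>>> x = 19                        # 00010011
>>> ones = ~x & mask              # 1s complement: just flip every bit -> 11101100
>>> twos = (ones + 1) & mask      # 2s complement: flip and add one    -> 11101101
>>> ones, twos
(236, 237)
>>> (-x) & mask                   # the pattern an 8-bit register holds for -19
237
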
Everyone says "two's complement" and then usually starts talking about numbers that are bigger
than two. I'll add another interpretation, which is what I thought when I first heard the term,
w.r.t. a CPU that was designed on the basis that all its "integer" numbers were fixed-point
fractions up to 0.9999... to whatever precision the binary fraction bits provided. There was no
units bit. If you took one of these fractional values 0.xxxx and subtracted it from 2.0, you got
its complement with respect to two. For addition and subtraction, that turns out to work just
like the "two's complement" integers we are used to. But since the fraction bits all had values
that were negative powers of two, squaring e.g. .5 had to produce a consistent representation of
0.25 -- i.e., in binary, squaring 0.1 gave 0.01 -- which is shifted one bit from what you get if
you read the same bit patterns as integers with the lsb at the bottom of the registers, in both
the operands and the result.
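
To put that "complement with respect to two" reading in Python terms (a rough sketch; the
16-bit width and scaling are made up for illustration):

>>> bits = 16
>>> mask = (1 << bits) - 1
>>> scale = 2 ** (bits - 1)            # one sign bit at the top, fraction bits below, no units bit
>>> x = 0.25
>>> int((2.0 - x) * scale) & mask      # the pattern for x's complement with respect to two
57344
>>> (-int(x * scale)) & mask           # the ordinary integer two's complement of x's pattern
57344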

I.e., a 32-bit positive integer n in the fractional world was n*2**-31. If you square it into a
64-bit register, the register holds n**2, which in the fractional world reads as (n**2)*2**-63,
whereas the true product is (n*2**-31)**2 => (n**2)*2**-62 with respect to the binary point. So
the product has to be shifted up one bit to line up, and the fractional model preserved an extra
bit of precision in multiplies.
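
Spelling that out with a concrete pattern (a sketch of the scaling arithmetic only, not of the
actual hardware; the value of n is arbitrary):

>>> n = 0x40000000                     # some positive 32-bit pattern
>>> n * 2.0 ** -31                     # its value read as a fraction (no units bit)
0.5
>>> (n * 2.0 ** -31) ** 2              # the true product: (n**2) * 2**-62
0.25
>>> (n * n) * 2.0 ** -63               # the raw 64-bit integer product read at 2**-63: one bit low
0.125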

So on that machine we used to count bits from the left instead of the right, and place imaginary
binary points in the representations, so a binary 0.101 could be read as "5 at 3" or "2.5 at 2"
or "10 at 4" etc. And the multiplying rule was x at xbit times y at ybit => x*y at xbit+ybit.

You can do the same counting bit positions leftward from the lsb at 0, as we usually do now,
of course, to play with fixed-point fractions. A 5 at 0 is then a 1.25 at 2 ;-)
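
The same bookkeeping as a tiny Python helper (the function name is mine, just for illustration):

>>> def val(pattern, at):              # value of an integer pattern with the binary point
...     return pattern * 2.0 ** -at    # 'at' positions up from the lsb
...
>>> val(5, 0), val(5, 2), val(5, 3)    # pattern 101 read as 5 at 0, 1.25 at 2, 0.625 at 3
(5.0, 1.25, 0.625)
>>> val(5, 3) * val(2, 1)              # x at xbit times y at ybit ...
0.625
>>> val(5 * 2, 3 + 1)                  # ... comes out as x*y at xbit+ybit
0.625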

Anyway, my point is that there was a "two's complement" implementation that really meant
a numeric value complement with respect to the value two ;-)

Regards,
Bengt Richter



