On Tue, Jan 23, 2007, Ulisses Furquim wrote:
>
> I've read some threads about signals in the archives and I was under
> the impression signals should work reliably on single-threaded
> applications. Am I right? I've thought about a way to fix this, but I
> don't know what is the current plan for signals support in python, so
> what can be done?
This one looks like an oversight in Python code, and so is a bug,
but it is important to note that signals do NOT work reliably under
any Unix or Microsoft system. Inter alia, all of the following are
likely to lead to lost signals:
Two related signals received between two 'checkpoints' (i.e. when
the signal is tested and cleared). You may only get one of them,
and 'related' does not mean 'the same'.

A second signal received while the first is being 'handled' by the
operating system or language run-time system.

A signal sent while the operating system is doing certain things to
the application (including, sometimes, when it is swapped out or
deep in I/O).
And there is more, some of which can cause program misbehaviour or
crashes. You are also right that threading makes the situation a
lot worse.
Obviously, Unix and Microsoft systems depend on signals, so you
can't simply regard them as hopelessly broken, but you can't assume
that they are RELIABLE. All code should be designed to cope with
the case of signals getting lost, if at all possible. Defending
yourself against the other failures is an almost hopeless task,
but luckily they are extremely rare except on specialist systems.
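The first failure mode - two signals coalescing between checkpoints - is
easy to demonstrate from Python on a POSIX system: standard (non-realtime)
signals are not queued, so a second occurrence arriving while the first is
still pending is simply lost. A minimal sketch (POSIX only; SIGUSR1 and
pthread_sigmask do not exist on Windows, and Python's own machinery also
latches at most one pending occurrence per signal number):

```python
import os
import signal

received = []

def handler(signum, frame):
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)

# Block SIGUSR1 so that both kills arrive while it is pending.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)
os.kill(os.getpid(), signal.SIGUSR1)

# Unblocking delivers the pending signal -- but only once: standard
# POSIX signals are flags, not queues, so the second SIGUSR1 was
# coalesced into the first.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})

print(len(received))  # 1, not 2
```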
Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nmm1(a)cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

Giovanni Bajo <rasky(a)develer.com> wrote:
>
> I personally consider *very* important that hash(5.0) == hash(5) (and
> that 5.0 == 5, of course).
It gets a bit problematic with floating-point, when you can have
different values "exactly 5.0" and "approximately 5.0". IEEE 754
has signed zeroes. And so it goes.
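The requirement, and the signed-zero wrinkle, are easy to check from
Python: -0.0 and 0.0 are distinct IEEE 754 values that compare equal, so
they must hash equal too. A quick illustration:

```python
import math

# Equal values must hash equal, across numeric types.
assert 5.0 == 5
assert hash(5.0) == hash(5)

# IEEE 754 signed zeroes: distinct bit patterns, equal values.
assert 0.0 == -0.0
assert hash(0.0) == hash(-0.0)
assert math.copysign(1.0, -0.0) == -1.0  # the sign really is there
```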

I have a fair amount of my binary floating-point model written,
though even of what I have done only some is debugged (and none
has been rigorously tested). But I have hit some things that I
can't work out, and one query reduced comp.lang.python to a
stunned silence :-)
Note that I am not intending to do all the following, at least for
now, but I have had to restructure half a dozen times to match my
implementation requirements to the C interface (as I have learnt
more about Python!) and designing to avoid that is always good.
Any pointers appreciated.
I can't find any detailed description of the methods that I need
to provide. Specifically:
Does Python use classic division (nb_divide) and inversion (nb_invert),
or are they entirely historical? Note that I can very easily provide
the latter.

Is there any documentation on the coercion function (nb_coerce)? It
seems to have unusual properties.

How critical is the 'numeric' property of the nb_hash function? I
can certainly honour it, but is it worth it?

I assume that Python will call nb_richcompare if defined and
nb_compare if not. Is that right?

Are the inplace methods used and, if so, what is their specification?
I assume that I can ignore all of the allocation, deallocation and
attribute handling functions, as the default for a VAR object is
fine. That seems to work.
Except for one thing! My base type is static, but I create some
space for every derivation (and it can ONLY be used in derived form).
The space creation is done in C but the derivation in Python. I
assume that I need a class (not instance) destructor, but what
should it do to free the space? Call C to Py_DECREF it?
I assume that a class structure will never go away until after all
instances have gone away (unless I use Py_DECREF), so a C pointer
from an instance to something owned by the class is OK.
Is there any documentation on how to support marshalling/pickling
and the converse from C types?
I would quite like to provide some attributes. They are 'simple'
but need code executing to return them. I assume that means that
they aren't simple enough, and have to be provided as methods
(like conjugate). That's what I have done, anyway.
Is there any obvious place for a reduction method to be hooked in?
That is a method that takes a sequence, all members of which must
be convertible to a single class, and returns a member of that
class. Note that it specifically does NOT make sense on a single
value of that class.
Sorry about the length of this!

Oops. Something else fairly major that I forgot to ask about: Python long.
I can't find any clean way of converting to or from this, and
would much rather not build a knowledge of long's internals into
my code. Going via text is, of course, possible - but is not very
efficient, even using hex/octal.
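For what it's worth, the text route does at least round-trip exactly (the
C API exposes PyLong_FromString for the parsing direction). A sketch of
the hex path at the Python level, purely illustrative:

```python
# A 600-bit integer, far beyond any native C integer type.
n = (1 << 600) + 123456789

# Hex text round-trips exactly, if not especially quickly.
s = hex(n)
assert int(s, 16) == n

# The same works without the '0x' prefix.
assert int(s[2:], 16) == n
```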

"Guido van Rossum" <guido(a)python.org> wrote:
>
> "(int)float_or_double" truncates in C (even in K&R C) /provided that/
> the true result is representable as an int. Else behavior is
> undefined (may return -1, may cause a HW fault, ...).
Actually, I have used Cs that didn't, but haven't seen any in over
10 years. C90 is unclear about its intent, but C99 is specific that
truncation is towards zero. This is safe, at least for now.
> So Python uses C's modf() for float->int now, which is always defined
> for finite floats, and also truncates.
Yes. And that is clearly documented and not currently likely to
change, as far as I know.
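Python's behaviour here is easy to verify: float-to-int conversion
discards the fractional part, truncating toward zero for both signs, just
as modf() separates the parts:

```python
import math

# modf splits a float into fractional and integral parts,
# both carrying the sign of the argument.
frac, whole = math.modf(-2.75)
assert (frac, whole) == (-0.75, -2.0)

# int() truncates toward zero for both signs.
assert int(2.75) == 2
assert int(-2.75) == -2
```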

"Tim Peters" <tim.peters(a)gmail.com> wrote:
>
> OTOH, I am a fan of analyzing FP operations as if the inputs were in
> fact exactly what they claim to be, which 754 went a long way toward
> popularizing. That largely replaced mountains of idiosyncratic
> "probabilistic arguments" (and where it seemed no two debaters ever
> agreed on the "proper" approach) with a common approach that
> sometimes allows surprisingly sharp analysis. Since I spent a good
> part of my early career as a professional apologist for Seymour Cray's
> "creative" floating point, I'm probably much more grateful to leave
> sloppy arithmetic behind than most.
Well, I spent some of it working with code (and writing code) that
was expected to work, unchanged, on an ICL 1900, CDC 6600/7600,
IBM 370 and others. I have seen the harm caused by the 'exact
arithmetic' mindset and so don't like it, but I agree about your
objections to the "probabilistic arguments" (which were and are
mostly twaddle). But that is seriously off-topic.
> [remquo] It's really off-topic for Python-Dev, so
> I didn't/don't want to belabor it.
Agreed, except in one respect. I stand by my opinion that the C99
specification has no known PRACTICAL use (your example is correct,
but I know of no such use in a real application), and so PLEASE
don't copy it as a model for Python divmod/remainder.
> No, /Python's/ definition of mod is inexact for that example. fmod
> (which is not Python's definition) is always exact: fmod(-1, 1e100) =
> -1, and -1 is trivially exactly congruent to -1 modulo anything
> (including modulo 1e100). The result of fmod(x, y) has the same sign
> as x; Python's x.__mod__(y) has the same sign as y; and that makes all
> the difference in the world as to whether the exact result is always
> exactly representable as a float.
Oops. You're right, of course.
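Concretely, the two conventions behave like this in Python: math.fmod
keeps the sign of x and is exact here, while Python's % keeps the sign of
y, and the exact answer 1e100 - 1 is not representable as a float, so it
rounds:

```python
import math

# fmod: result has the sign of x, and -1 is exactly congruent
# to -1 modulo 1e100 -- an exact result.
assert math.fmod(-1.0, 1e100) == -1.0

# Python's %: result has the sign of y.  The exact value would be
# 1e100 - 1, which is not representable as a float, so the result
# rounds to 1e100 itself.
assert -1 % 1e100 == 1e100
```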

"Tim Peters" <tim.peters(a)gmail.com> wrote:
>
> It could, but who would have a (sane) use for a possibly 2000-bit quotient?
Well, the 'exact rounding' camp in IEEE 754 seem to think that there
is one :-)
As you can gather, I can't think of one. Floating-point is an inherently
inaccurate representation for anything other than small integers.
> This is a bit peculiar to me, because there are ways to compute
> "remainder" using a number of operations proportional to the log of
> the exponent difference. It could be that people who spend their life
> doing floating point forget how to work with integers ;-)
Aargh! That is indeed the key! Given that I claim to know something
about integer arithmetic, too, how can I have been so STUPID? Yes,
you are right, and that is the only plausible way to calculate the
remainder precisely. You don't get the quotient precisely, which is
what my (insane) specification would have provided.
I would nitpick with your example, because you don't want to reduce
modulo 3.14 but modulo pi and therefore the modular arithmetic is
rather more expensive (given Decimal). However, it STILL doesn't
help to make remquo useful!
The reason is that pi is input only to the floating-point precision,
and so the result of remquo for very large arguments will depend
more on the inaccuracy of pi as input than on the mathematical
result. That makes remquo totally useless for the example you quote.
Yes, I have implemented 'precise' range reduction, and there is no
substitute for using an arbitrary precision pi value :-(
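The point about pi's input precision can be made concrete. fmod(x,
math.pi) is exact modulo the *double* nearest pi, but that double is off
from pi by about 1.2e-16, and for x around 1e16 the accumulated error is
a sizeable fraction of a radian. A sketch, using 36 decimal digits of pi
as the higher-precision reference:

```python
import math
from fractions import Fraction

x = 10**16  # exactly representable as a double

# Exact reduction modulo the double 'math.pi' (fmod itself is exact).
r_double_pi = math.fmod(x, math.pi)

# Reduction modulo a much better approximation of pi
# (36 digits, far beyond double precision).
PI = Fraction(314159265358979323846264338327950288, 10**35)
r_true_pi = float(Fraction(x) % PI)

# The two reductions disagree by a large fraction of a radian:
# the error comes from pi's input precision, not from fmod.
assert abs(r_double_pi - r_true_pi) > 0.01
```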
> > But it isn't obviously WRONG.
>
> For floats, fmod(x, y) is exactly congruent to x modulo y -- I don't
> think it's possible to get more right than exactly right ;-)
But, as a previous example of yours pointed out, Python's % is NOT
exactly right there. Its result is also supposed to be in the range
[0,y) and it isn't: -1 % 1e100 returns 1e100, which is mathematically
wrong on two counts.

"Tim Peters" <tim.peters(a)gmail.com> wrote:
>
> [Tim (misattributed to Guido)]
Apologies to both!
> > C90 is unclear about its intent,
>
> But am skeptical of that. I don't have a copy of C90 here, but before
> I wrote that I checked Kernighan & Ritchie's seminal C book, Harbison
> & Steele's generally excellent "C: A Reference Manual" (2nd ed), and a
> web version of Plauger & Brodie's "Standard C":
>
> http://www-ccs.ucsd.edu/c/
>
> They all agree that the Cs they describe (all of which predate C99)
> convert floating to integral types via truncation, when possible.
I do have a copy of C90. Kernighan & Ritchie's seminal C book describes the Unix style
of "K&R" C - one of the reasons that ANSI/ISO had to make incompatible
changes was that many important PC and embedded Cs differed. Harbison
and Steele is generally reliable, but not always; I haven't looked at
the last, but I would regard it suspiciously.
What C90 says is:

    When a value of floating type is converted to integer type, the
    fractional part is discarded.
There is other wording, but none relevant to this issue. Now, given
the history of floating-point remainder, that is seriously ambiguous.
> > but C99 is specific that truncation is towards zero.
>
> As opposed to what? Truncation away from zero? I read "truncation"
> as implying toward 0, although the Plauger & Brodie source is explicit
> about "the integer part of X, truncated toward zero" for the sake of
> logic choppers ;-)
Towards -infinity, of course. That was as common as truncation towards
zero up until the 1980s. It was near-universal on twos complement
floating-point systems, and not rare on signed magnitude ones. During
the standardisation of C90, the BSI tried to explain to ANSI that this
needed spelling out, but were ignored. C99 then added the normative
text "(i.e., the value is truncated toward zero)" - so there had been
no ambiguity, after all!
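The two conventions give different answers for negative operands, and
Python itself is a handy illustration: its integer // floors toward
-infinity, while C-style conversion truncates toward zero:

```python
import math

q = -7 / 2          # -3.5

# C-style truncation toward zero:
assert math.trunc(q) == -3
assert int(q) == -3

# Truncation toward -infinity (flooring), the other historical
# convention -- and the one Python's // uses:
assert math.floor(q) == -4
assert -7 // 2 == -4
```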