int/long unification hides bugs
kartick_vaddadi at yahoo.com
Wed Oct 27 06:17:11 CEST 2004
Rocco Moretti <roccomoretti at hotpop.com> wrote in message news:<clm68p$6r4$1 at news.doit.wisc.edu>...
> Very rarely will the platform limit reflect the
> algorithmic limit. If you want to limit the range of your numbers, you
> need to have knowledge of your particular use case - something that
> can't be done with a predefined language limit.
i'm saying that most of the time, the algorithmic limit will be less
than 2**31 or 2**63 - and that can be checked by the language.
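[as an illustration of the kind of check being argued for, here is a
minimal sketch: a hypothetical checked_add helper (not anything Python
actually provides) that enforces a 32-bit limit the way a fixed-width
int type would.]

```python
# Hypothetical helper: enforce a 32-bit range by hand, the way a
# fixed-width int type (or pre-unification Python int) would.
INT_MAX = 2**31 - 1
INT_MIN = -2**31

def checked_add(a, b):
    """Add two ints, raising OverflowError if the result leaves 32-bit range."""
    result = a + b
    if result > INT_MAX or result < INT_MIN:
        raise OverflowError("result %d exceeds 32-bit range" % result)
    return result

print(checked_add(2, 3))        # prints 5
try:
    checked_add(INT_MAX, 1)
except OverflowError as e:
    print("caught:", e)
```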
> the limit is still arbitrary. Which
> one will it be? How do we decide? If we're platform independent, why
> bother with hardware based sizes anyway? Why not use a base 10 limit
> like 10**10?
it doesn't really matter what the limit is, as long as it's large
enough that it isn't crossed often. (it's just that a limit of 2**31
or 2**63 can be checked efficiently by the hardware.)
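[for context, the unified behavior under discussion (PEP 237) looks
like this: crossing 2**31 or 2**63 silently promotes to an
arbitrary-precision integer instead of raising OverflowError, so no
limit check ever fires.]

```python
# Post-unification behavior: crossing the 64-bit boundary just works,
# silently promoting to an arbitrary-precision integer.
x = 2**63 - 1        # largest signed 64-bit value
y = x + 1            # would overflow in C; here it keeps going
print(y == 2**63)    # prints True
print(type(y))       # still an ordinary int in Python 3
```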
> I think that one of the problems we're having in this conversation is
> that we are talking across each other. Nobody is denying that finding
> bugs is a good thing. It's just that, for the bugs which the overflow
> catches, there are much better ways of discovering them. (I'm surprised
> no one has mentioned unit testing yet.)
> Any decision always involves a cost/benefit analysis. For long/int
> unification, the benefits have been pointed out by others, and your
> proposed costs are minor, and can be ameliorated by other practices,
> which most here would argue are the better way of going about it in the
> first place.
agreed, but what about when you don't use these "better practices"? do
you use them for every variable? overflow catches sometimes help you in
exactly those cases.
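[to make the "unit testing" alternative mentioned above concrete, here
is a minimal sketch, assuming a hypothetical application limit (a
6-digit invoice number): the algorithmic limit is asserted explicitly
in the code and exercised by a test, rather than relying on a hardware
word size.]

```python
import unittest

MAX_INVOICE = 999999  # hypothetical application-specific limit

def next_invoice_number(current):
    # the algorithmic limit is stated explicitly, not inherited
    # from a 32- or 64-bit word size
    n = current + 1
    assert n <= MAX_INVOICE, "invoice number overflowed its 6-digit field"
    return n

class TestInvoiceNumbers(unittest.TestCase):
    def test_in_range(self):
        self.assertEqual(next_invoice_number(41), 42)

    def test_limit_enforced(self):
        self.assertRaises(AssertionError, next_invoice_number, MAX_INVOICE)
```

(run with `python -m unittest <module>`; note that plain `assert` is stripped under `python -O`, so production code would raise an explicit exception instead.)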