Rob Clewley wrote:
> Fair enough, but it does cause a *real* problem when I extract the values from aa and pass them on to other functions, which try to compare their types to the integer types int and int32 that I can import from numpy. Since the values I'm testing could equally have been generated by functions that return the regular int type, I can't guarantee that those values will have a dtype attribute!
You don't have to use the bit-width names (which can be confusing) in such cases. There is a regular name for every C-like type: you can use the names `byte`, `short`, `intc`, `int_`, and `longlong` (and the corresponding unsigned names prefixed with `u`).
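A minimal sketch of these C-named scalar types, assuming `import numpy as np` (the item sizes printed depend on the platform; the names themselves do not):

```python
import numpy as np

# Each C integer type has a fixed numpy name, independent of what bit
# width it happens to have on a given platform.
for name in ("byte", "short", "intc", "int_", "longlong"):
    t = getattr(np, name)
    print(name, "->", np.dtype(t).name, "(", np.dtype(t).itemsize, "bytes )")

# The unsigned counterparts are prefixed with "u":
for name in ("ubyte", "ushort", "uintc", "uint", "ulonglong"):
    assert hasattr(np, name)
```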
> I have some initialization code for a big class that has to set up some state differently depending on the type of the input. So, I was trying to do something like this:
>
> ```python
> if type(x) in [int, int32]:
>     ## do stuff specific to integer x
> ```
>
> but now it seems like I'll need
>
> ```python
> try:
>     isint = x.dtype == dtype('int32')
> except AttributeError:
>     isint = type(x) == int
> if isint:
>     ## do stuff specific to integer x
> ```
Try `isinstance(x, (int, integer))` instead; `integer` is the super-class of all C-like integer types.
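A short sketch of why the `(int, integer)` pair works, assuming `import numpy as np`: `np.integer` is the abstract base of every numpy integer scalar type, while a plain Python `int` is not a `np.integer`, so the tuple covers both families.

```python
import numpy as np

x = np.int32(7)   # a numpy integer scalar
y = 7             # a plain Python int

# np.integer covers every width and signedness of numpy integer:
assert isinstance(x, np.integer)
assert isinstance(np.uint8(7), np.integer)

# A plain Python int is NOT a np.integer, hence the (int, integer) pair:
assert not isinstance(y, np.integer)
assert isinstance(y, (int, np.integer))
assert isinstance(x, (int, np.integer))
```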
> -- which is a mess! Is there a better way to do this test cleanly and robustly? And why couldn't c_long always correspond to a unique numpy name (i.e., not shared with int32), regardless of how it's implemented?
There is a unique numpy name for all of them. The bit-width names just can't be unique.
> Either way it would be helpful to have a name for this "other" int32 that I can test against using the all-purpose type() ... so that I could test something like
>
> ```python
> type(x) in [int, int32_c_long, int32_c_int]
> ```
`isinstance(x, (int, intc, int_))` is what you want.

-Travis
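A sketch of that test inside initialization code like the original poster described (the function name `init_state` is hypothetical; for scalars of every numpy width, the broader `isinstance(x, (int, np.integer))` test works the same way):

```python
import numpy as np

def init_state(x):
    # Branch on "integer-ness" without touching .dtype, so plain Python
    # ints and numpy scalars are handled uniformly, with no try/except.
    if isinstance(x, (int, np.intc, np.int_)):
        return "integer path"
    return "generic path"

print(init_state(5))            # plain Python int  -> integer path
print(init_state(np.intc(5)))   # C int scalar      -> integer path
print(init_state(2.5))          # float             -> generic path
```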