char with native integer signedness
Is there a standard way in numpy of getting a char with C-native integer signedness? I.e., `boost::is_signed<char>::value ? numpy.byte : numpy.ubyte`, but without nonsensical mixing of languages?

Thanks,
Geoffrey
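For concreteness, a minimal sketch of the test being asked about, written once in the C++ layer of an extension rather than in Python: it assumes C++11 `<type_traits>` in place of Boost, and uses `NPY_BYTE`/`NPY_UBYTE`, the NumPy C API type numbers behind `numpy.byte` and `numpy.ubyte`.

```cpp
// Sketch: pick the NumPy type number matching this compiler's plain
// char. std::is_signed<char> stands in for boost::is_signed<char>.
#include <Python.h>
#include <numpy/arrayobject.h>
#include <type_traits>

constexpr int plain_char_typenum =
    std::is_signed<char>::value ? NPY_BYTE : NPY_UBYTE;
```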
On Thu, Oct 31, 2013 at 12:52 AM, Geoffrey Irving <irving@naml.us> wrote:
This is for interop with a C/C++ extension, right? Do this test in that extension's C/C++ code to expose the right dtype. As far as I know, this is not something determined by the hardware but by the compiler used. Since the compiler that built numpy may differ from the one building your extension, only your extension can do that test properly.

-- Robert Kern
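One way to follow this recommendation, sketched below: the test can be done at compile time inside the extension with `CHAR_MIN` from `<climits>`, which is negative exactly when the compiler building that translation unit treats plain `char` as signed. The name `native_char_typenum` is illustrative, not an existing API.

```cpp
// Sketch: compile-time signedness test done in the extension itself,
// so it reflects the compiler building the extension, not numpy's.
#include <climits>
#include <Python.h>
#include <numpy/arrayobject.h>

#if CHAR_MIN < 0
static const int native_char_typenum = NPY_BYTE;   /* numpy.byte  */
#else
static const int native_char_typenum = NPY_UBYTE;  /* numpy.ubyte */
#endif
```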
On Thu, Oct 31, 2013 at 2:08 AM, Robert Kern <robert.kern@gmail.com> wrote:
It's not determined by the hardware, but I believe it is standardized by each platform's ABI even if it can be adjusted by the compiler.
From the gcc man page:
`-funsigned-char`
Let the type "char" be unsigned, like "unsigned char". Each kind of machine has a default for what "char" should be. It is either like "unsigned char" by default or like "signed char" by default. Ideally, a portable program should always use "signed char" or "unsigned char" when it depends on the signedness of an object. But many programs have been written to use plain "char" and expect it to be signed, or expect it to be unsigned, depending on the machines they were written for. This option, and its inverse, let you make such a program work with the opposite default. The type "char" is always a distinct type from each of "signed char" or "unsigned char", even though its behavior is always just like one of those two.

Geoffrey
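A small program illustrating the behavior the man page describes: the same source prints -1 under a signed-`char` default (or `-fsigned-char`) and 255 under `-funsigned-char`, assuming an 8-bit two's-complement `char`.

```cpp
#include <cstdio>

int main() {
    // 0xFF seen through plain char: -1 where char is signed,
    // 255 where it is unsigned (assuming an 8-bit char).
    char c = static_cast<char>(0xFF);
    std::printf("%d\n", static_cast<int>(c));
    return 0;
}
```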
On Thu, Oct 31, 2013 at 4:19 PM, Geoffrey Irving <irving@naml.us> wrote:
> On Thu, Oct 31, 2013 at 2:08 AM, Robert Kern <robert.kern@gmail.com> wrote:
>> As far as I know, this is not something determined by the hardware, but the compiler used.
Fair enough. numpy doesn't distinguish between these cases, as it only uses plain 'char' for 'S' arrays, which don't really care about the numerical value assigned to the bits. It explicitly uses 'signed char' elsewhere, so this platform setting isn't relevant to it. Consequently, numpy also doesn't expose this platform setting. I think I stand by my recommendation.

-- Robert Kern
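Following that recommendation, a hypothetical helper inside the extension could expose an array with the matching dtype. `make_native_char_array` is an illustrative name; `PyArray_SimpleNew` is the regular NumPy C API allocator.

```cpp
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <Python.h>
#include <numpy/arrayobject.h>
#include <climits>

// Hypothetical helper: allocate a 1-D array of length n whose dtype
// matches the signedness of this compiler's plain char. Assumes
// import_array() was called at module initialization.
static PyObject* make_native_char_array(npy_intp n) {
    const int typenum = (CHAR_MIN < 0) ? NPY_BYTE : NPY_UBYTE;
    return PyArray_SimpleNew(1, &n, typenum);
}
```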
participants (3)

- Charles R Harris
- Geoffrey Irving
- Robert Kern