decoding keyboard input when using curses
arnodel at googlemail.com
Sun May 31 20:30:54 CEST 2009
Chris Jones <cjns1989 at gmail.com> writes:
> Try this:
> #include <locale.h>
> #include <ncurses.h>
> #include <stdlib.h>
> #include <stdio.h>
> #include <string.h>
#include <wchar.h> /* Here I need to add this include to get wint_t on macOS X */
> int ct;
> wint_t unichar;
> int main(int argc, char *argv[])
> {
>     setlocale(LC_ALL, "");       /* make sure UTF8 */
>     initscr();
>     keypad(stdscr, TRUE);
>     ct = get_wch(&unichar);      /* read character */
>     mvprintw(24, 0, "Key pressed is = %4x ", unichar);
>     refresh();
>     getch();
>     endwin();
>     return 0;
> }
> gcc -lncursesw uni10.c -o uni10 # different lib..
My machine doesn't know about libncursesw:
marigold:c arno$ ls /usr/lib/libncurses*
So I've compiled it with libncurses as before and it works.
This is what I get:
If I run the program and type 'é', I get a code of 'e9'.
>>> print '\xe9'.decode('latin1')
é
So it has been encoded using isolatin1. I really don't understand why.
I'll have to investigate this further.
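For reference, the round trip can be checked outside curses. A minimal sketch (Python 3 syntax, unlike the session above): in latin-1, 'é' is the single byte 0xe9, whereas UTF-8 encodes it as two bytes, so a terminal speaking UTF-8 would not deliver a lone 0xe9 for that key.

```python
# 0xe9 decodes to 'é' under latin-1 (one byte per character).
ch = b'\xe9'.decode('latin-1')
print(repr(ch))            # 'é'

# Under UTF-8 the same character is the two-byte sequence 0xc3 0xa9.
print(ch.encode('utf-8'))  # b'\xc3\xa9'
```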
If I change the line:
    setlocale(LC_ALL, "");              /* make sure UTF8 */
to:
    setlocale(LC_ALL, "en_GB.UTF-8");   /* make sure UTF8 */
then the behaviour is the same as before (i.e. get_wch() still returns the
latin-1 code 'e9').
I'll do some more investigating (when I can think of *what* to
investigate) and I will tell you my findings.
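One thing worth checking is what encoding the locale machinery actually reports after adopting the environment's settings. A small Python sketch of that check (locale.setlocale(LC_ALL, '') plays the same role as the C program's setlocale(LC_ALL, "")):

```python
import locale

# Adopt the environment's locale, as setlocale(LC_ALL, "") does in C.
locale.setlocale(locale.LC_ALL, '')

# If this prints something other than 'UTF-8', terminal input may well
# arrive in that encoding (e.g. ISO8859-1), which would explain
# get_wch() handing back 0xe9 for 'é'.
print(locale.getpreferredencoding())
```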
More information about the Python-list mailing list