decoding keyboard input when using curses

Chris Jones cjns1989 at gmail.com
Sun May 31 13:07:48 EDT 2009


On Sun, May 31, 2009 at 04:05:20AM EDT, Arnaud Delobelle wrote:

[..]

> Thanks for this.  When I test it on my machine (BTW it is MacOS 10.5.7),
> if I type an ASCII character (e.g. 'A'), I get its ASCII code (0x41),
> but if I type a non-ascii character (e.g. '§') I get back to the prompt
> immediately.  It must be because two values are queued for getch.  I
> should try it on a Linux machine, but I don't have one handy at the
> moment.

Well so much for transparency :-(
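
That would make sense, actually: '§' is U+00A7, which UTF-8 encodes as
the two bytes 0xc2 0xa7, so getch() hands them back one at a time.
Just a sketch (in Python 3 syntax) of what collecting and decoding the
bytes by hand might look like -- the read_utf8_key helper is mine, and
it assumes the terminal is in a UTF-8 locale:

import curses

def read_utf8_key(win):
    c = win.getch()
    if c > 0xFF:
        return c                    # a curses KEY_* code, not a byte
    if c < 0x80:
        return chr(c)               # single-byte (ASCII) character
    # the lead byte's high bits say how many continuation bytes follow
    n = 3 if c >= 0xF0 else 2 if c >= 0xE0 else 1
    buf = bytes([c] + [win.getch() for _ in range(n)])
    return buf.decode("utf-8")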

Try this:

#define _XOPEN_SOURCE_EXTENDED 1       /* expose the wide-char curses API */
#include <locale.h>
#include <ncurses.h>
#include <wchar.h>                     /* wint_t               */

int main(void)
{
  wint_t unichar;

  setlocale(LC_ALL, "");               /* honour the UTF-8 locale */
  initscr();
  raw();
  keypad(stdscr, TRUE);
  get_wch(&unichar);                   /* read one wide character */
  mvprintw(LINES - 1, 0, "Key pressed is = %4x ", (unsigned) unichar);
  refresh();
  get_wch(&unichar);                   /* wait for a key before exiting */
  endwin();
  return 0;
}

gcc uni10.c -o uni10 -lncursesw         # note the trailing 'w': the wide-character lib

Seems there's more to it than my assumption that the Python wrapper was
not wrapping as transparently as it should.. here I'm using wide
characters.. not sure how this translates to Python.
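
For what it's worth, a rough translation of the C program above might
look like this -- just a sketch, and it assumes a Python whose curses
module exposes window.get_wch() (added in Python 3.3); older versions
only have getch(), so you'd have to assemble the bytes yourself, as in
the earlier sketch:

import curses
import locale

locale.setlocale(locale.LC_ALL, "")    # honour the UTF-8 locale

def main(stdscr):
    curses.raw()
    stdscr.keypad(True)
    ch = stdscr.get_wch()              # str for a character, int for a function key
    code = ord(ch) if isinstance(ch, str) else ch
    stdscr.addstr(curses.LINES - 1, 0, "Key pressed is = %4x " % code)
    stdscr.refresh()
    stdscr.get_wch()                   # wait for one more key before leaving

curses.wrapper(main)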

CJ




