[Edu-sig] re: modeling

Kirby Urner pdx4d@teleport.com
Mon, 11 Jun 2001 06:45:43 -0700


>It's also a minor diversion which is both non essential 
>and distracting from the important issues of computing 
>and informatics (notice the UK spelling :-)
>
>Just my 2 cents,
>
>Alan G.
>Definition: 
>GUI: A device for making easy things trivial and hard 
>things impossible.

I well understand the sentiments here.  However, we need to be
clear on what's meant by GUI, as sometimes a graphical interface
will be precisely the kind of LED affair you describe, except
painted on screen.  We've all seen the "front end" to a
music-playing app designed to look just like a stereo system,
complete with stacked modular units.

Basically, if you take the computer away altogether, and just
go to the kitchen to fix some eggs, you might find yourself
thinking in terms of a virtual reality GUI -- easier if you
make the semantic twist of granting that you've "never
experienced anything outside of your own brain" (given all
sensory data is processed there -- according to some models).

In video BIOS terms, you have these various screen modes, and
to get color graphics, you have to switch to VGA or SVGA or
(in the old days) EGA and CGA.  Text mode is something else
-- more primitive.  So originally, any application or OS
interface that ran in a graphical mode (beyond text -- in one
of the "GAs") was considered, well, graphical.  And since a
user used it, and since it was about controlling internals,
it was, by definition, also a user interface.
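
To make that concrete, here's a quick back-of-envelope sketch
in Python (the mode numbers and resolutions are the standard
BIOS ones) comparing how much memory the classic graphics
modes chew up versus text mode:

    # Classic PC BIOS screen modes: name -> (width, height,
    # bits per pixel).  Text mode stores 2 bytes per character
    # cell (char + attribute), not pixels -- much cheaper.

    GRAPHICS_MODES = {
        "CGA (mode 04h)": (320, 200, 2),   # 4 colors
        "EGA (mode 10h)": (640, 350, 4),   # 16 colors
        "VGA (mode 13h)": (320, 200, 8),   # 256 colors
        "SVGA (800x600)": (800, 600, 8),   # 256 colors
    }

    def framebuffer_bytes(width, height, bpp):
        """Bytes of video RAM for one screenful of pixels."""
        return width * height * bpp // 8

    print("80x25 text mode: %6d bytes" % (80 * 25 * 2))
    for name, (w, h, bpp) in GRAPHICS_MODES.items():
        print("%s: %6d bytes" % (name, framebuffer_bytes(w, h, bpp)))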

The thing is, once in VGA mode, we have the same capability to
project letters, i.e. a command line.  You can pull up an
XTerm window or DOS box and work in character mode, at a 
command line.  But technically, you're still in the GUI, as 
this "session" is in the context of a graphical front end.

What I think is happening with new generations of programmers
is a disregard for some of the fine points and distinctions
that might have been important earlier.  For example, a
hardware device like a cell phone is still likely to have an
LCD, a rectangular display unit, and the trend is to pack in
more pixels and color depth while making the physical display
thinner.  This has the advantage of making the interface
programmable.  Instead of wiring knobs and buttons at the
hardware level, you generate them as visuals.  Combine this
with touch sensitivity, and you have a standard appliance of
the future.
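
In Python terms, the front panel becomes a data structure:
edit the list, and the "hardware" changes.  A minimal sketch
using Tkinter (the labels and actions here are made up):

    # "Soft" controls: instead of wiring physical buttons, we
    # generate them from data -- reprogram the list, and the
    # panel changes.

    import tkinter as tk

    PANEL = [   # (label, action) -- this list IS the panel
        ("Play",  lambda: print("playing")),
        ("Stop",  lambda: print("stopped")),
        ("Eject", lambda: print("ejected")),
    ]

    root = tk.Tk()
    root.title("Programmable front panel")
    for label, action in PANEL:
        tk.Button(root, text=label, command=action,
                  width=10).pack(side=tk.LEFT)
    root.mainloop()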

Or, perhaps more typically, you might have a few physical 
buttons, and gain the flexibility by updating the screen 
contents via software -- again, similar to the cell phones 
of today (and televisions -- especially those connected to 
boxes that put menus on screen, making the remote a navigation 
device). But this is also a description of the standard PC:
the monitor and input peripherals (usually mouse and keyboard)
are the fixed constants, with the monitor redrawn in software
to keep the possibilities changing (keyboards haven't been
entirely static though -- this row of purple keys across the
top is new).

When I've brainstormed about what I call a "math center" of
the future, I've taken this programmable display vs. hardware
controls combo to the students:

  In a math center, kids would be sitting at NASA-style consoles 
  -- calculators handy maybe, but a computer screen, recessed and 
  augmented with controls, would be more the focus.  The central 
  server would be stocked with lesson plan software, and if the 
  day's unit were on trig-based oscilloscope functions, kids 
  would net-access Java (or whatever) applications with native 
  ties to twist-knobs or push buttons for changing amplitude, 
  frequency, and whatever other parameters.

  I realize it's perfectly feasible to provide whatever controls 
  on screen and have just a mouse to twiddle the cartoon knobs, 
  click buttons and so on, but I think it would be fun and useful 
  to have these standard math center consoles come pre-equipped 
  with generic bells and whistles that programmers could tie to 
  applications (the kids themselves being programmers in many 
  situations), and also with sockets for receiving input from 
  compatible devices.  This is important because we're not just 
  training kids for bizapp cubicles here, but for workbench 
  engineering, wherein a lot of the instrumentation controls are 
  not just screen based, and wherein computers are very much used 
  for realtime data aggregation via sensors of various 
  description.

  [ http://www.teleport.com/~pdx4d/mathcenter.html 29 Jan 1998 ]
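
For what it's worth, that oscilloscope unit is easy to mock
up in software today -- a rough sketch in Python/Tkinter, with
on-screen sliders standing in for the console's twist-knobs
(everything here is illustrative, not a finished lesson):

    # Sliders for amplitude and frequency redraw a sine wave
    # on a scope-like canvas.

    import math
    import tkinter as tk

    W, H = 400, 200

    def redraw(event=None):
        a = amp.get()    # amplitude, in pixels
        f = freq.get()   # whole cycles across the screen
        points = []
        for x in range(W):
            y = H / 2 - a * math.sin(2 * math.pi * f * x / W)
            points.extend((x, y))
        scope.delete("trace")
        scope.create_line(points, fill="green", tags="trace")

    root = tk.Tk()
    root.title("Trig on the scope")
    scope = tk.Canvas(root, width=W, height=H, bg="black")
    scope.pack()
    amp = tk.Scale(root, from_=0, to=H // 2, label="amplitude",
                   orient=tk.HORIZONTAL)
    freq = tk.Scale(root, from_=1, to=10, label="frequency",
                    orient=tk.HORIZONTAL)
    amp.set(50)
    freq.set(2)
    amp.config(command=redraw)   # wire knobs after both exist
    freq.config(command=redraw)
    amp.pack(fill=tk.X)
    freq.pack(fill=tk.X)
    redraw()
    root.mainloop()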


Alternatively, many appliances are internalizing a way to
serve a web-based interface, as I described in my previous
post.  For example, the 3com wireless unit I have sitting
on my desk here is designed to communicate with its owner
over TCP/IP.  I just go to http://192.168.2.1 and I'm
interacting with this hardware device through a colorful GUI,
via my web browser and ethernet adapter.
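
Under the hood there's no magic: the appliance just runs a
tiny web server.  Here's the pattern boiled down to a Python
sketch (standard library only; the page, port and settings are
made up -- a real unit would be reading and writing hardware
state):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<html><body>
    <h1>Wireless Unit Setup</h1>
    <form method="GET" action="/">
    Channel: <input name="channel" value="6">
    <input type="submit" value="Apply">
    </form></body></html>"""

    class ConfigHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # a real appliance would parse self.path for
            # settings here, then poke the hardware
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    # browse to http://localhost:8080/ -- same idea as
    # http://192.168.2.1 on the 3com unit
    HTTPServer(("", 8080), ConfigHandler).serve_forever()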

All that being said, I agree with you that programmers, now
as then, need to look at applications and OSs sideways, and
see the whole set of layers, from presentation down through
compiled source to OS calls to the hardware APIs (where I'm
defining OS to include graphics modules like DirectX, or the
C libraries for getting at OpenGL on the video card, e.g.
Mesa).  At that point, you're talking hardware engineering,
with video RAM and controller chips for the raster projection
of pixels on phosphor or active/passive matrix illumination.
You've got that layer whether you're talking text mode or VGA
or whatever.  Or, if it's a chip or appliance, you might just
have ports waiting for bytes to come through -- something more
like a CPU chip, with its registers referenced directly in
machine code (or by alphabetic mnemonics in assembly
language).
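
That bottom layer is easy to simulate, too.  Here's a sketch
treating mode 13h video RAM (320x200, one byte per pixel) as
a flat run of bytes -- which is just what it is, at segment
A000.  Everything above, OpenGL or DirectX or a GUI toolkit,
ultimately cooks down to writes like this one:

    WIDTH, HEIGHT = 320, 200
    vram = bytearray(WIDTH * HEIGHT)   # 1 byte = palette index

    def put_pixel(x, y, color):
        # what a driver does, minus the actual hardware
        vram[y * WIDTH + x] = color

    # draw a diagonal line in color 15 (white, default palette)
    for i in range(200):
        put_pixel(i, i, 15)

    print("bytes of 'video RAM':", len(vram))
    print("pixel (10, 10) holds index", vram[10 * WIDTH + 10])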

Kirby