Typing system vs. Java

Bengt Richter bokr at accessone.com
Fri Aug 10 16:29:07 EDT 2001


On Fri, 10 Aug 2001 19:45:27 +0200, Markus Schaber <markus at schabi.de> wrote:
[...]
>
>As long as Python doesn't allow self-modifying code (meaning a block 
>of code is immutable once compiled), it should be no problem (except 
>implementation effort) to replace the byte-code compiler with one that 
>produces machine code.
>
>But this isn't the main slowdown - it is the dynamic lookup on every 
>access. But if we changed this, we wouldn't have Python any more.
>
>>> I think that adding features to allow Python powerful enough to
>>> remove the
>>> need for extensions would be worth a little bit of extra complexity. 
>>> I
>> 
>> I could hardly disagree more deeply, profoundly and totally than in
>> fact
>> I do.  Nothing will remove the need for extensions, although their use
>> may become less frequent.
>
>Fully agree. The only way to get around this is to have _very good_ 
>optimizing compilers, and to include every needed machine-specific 
>detail as a language feature.
>
>In a project at our university (Ulm, Germany), we are implementing an 
>OS using a Java-like language (earlier researchers used an Oberon 
>derivative). We included machine-level things using a "virtual" class 
>called Magic. This class doesn't physically exist; the compiler 
>directly generates the corresponding machine code. This way, we have 
>direct access to memory (Magic.mem8[]), Intel I/O-space port commands 
>(Magic.Out(), Magic.In()) and - for use in the memory management 
>classes - Magic.Cast. We can even declare methods that use an 
>interrupt stack frame instead of the normal one.
>
>But there are still some places where we have to use our nice inline 
>feature - sometimes for speed reasons (memcopy, TCP checksum, etc.), 
>and other times because they are very specific and needed in only one 
>place (MMU programming, TSC register access, etc.).
>
>If you don't include assembly in the language, you will always have 
>to use extensions that don't follow the language's rules.
>
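As an aside: the kind of raw memory access Magic.mem8[] provides can be approximated in CPython through the ctypes extension module (which postdates this thread; this sketch is my analogy, not anything proposed here) - which rather proves the point that such access lives outside the language's normal rules:

```python
import ctypes

# A Magic.mem8[]-style peek, done via an extension module rather than
# a compiler intrinsic: take the raw machine address of a small buffer
# and read one byte back from that address.
data = (ctypes.c_ubyte * 4)(0xDE, 0xAD, 0xBE, 0xEF)
addr = ctypes.addressof(data)          # raw address of the buffer
first = ctypes.c_ubyte.from_address(addr).value  # byte at that address
```

Nothing in Python's own semantics sanctions this; it works only because ctypes is exactly the sort of extension that doesn't follow the language's rules.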
True, but maybe there's a useful happy medium for Python?

Even C/C++ requires some assurances about aliasing, etc., in order to
optimize at full tilt. I would think similar assurances could be given
to the Python compiler without crufting up the language.

E.g., what if there were a 'restricted' keyword (or you pick one ;-)
which allowed the compiler to assume that a function's arguments would
always have the same types as the provided default arguments, that
there would be no global side effects, and that global references
would be stable in type? It would seem you could then allocate local
working storage on the stack and largely avoid reference counting and
lookup.
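For illustration, the "assume the default's type" idea can be sketched as a runtime check in today's Python - a hypothetical `restricted` decorator (the name and behavior are my invention) that merely verifies the promise a real optimizing compiler would exploit silently:

```python
import functools
import inspect

def restricted(func):
    """Hypothetical sketch: require each argument to match the type of
    its default value. An optimizing compiler could use this promise to
    bypass dynamic lookup; here we can only check it at call time."""
    sig = inspect.signature(func)
    expected = {name: type(p.default)
                for name, p in sig.parameters.items()
                if p.default is not inspect.Parameter.empty}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        for name, value in bound.arguments.items():
            if name in expected and not isinstance(value, expected[name]):
                raise TypeError("%s must be %s"
                                % (name, expected[name].__name__))
        return func(*args, **kwargs)
    return wrapper

@restricted
def scale(x=0.0, factor=1.0):
    # Under the promise, x and factor are always floats here.
    return x * factor
```

A compiler honoring the promise would do this check zero times rather than on every call, of course; the sketch only shows where the type information would come from.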

Wrapping an expression in restricted(expr) could be a similar indicator
of stable type references. A lot of accesses from compiled code could
then bypass lookup, ISTM. Of course, global references would still have
to have some checks for robustness, I assume. Also, if things move
during gc, actual (pre-looked-up) global references in code would have
to be updated one way or another (I'm guessing, without yet having
looked at the relevant code).

An alternative to using default arguments as data-type prototypes
might be passing arguments paired with coercion functions, which
could be checked quickly and applied as needed - e.g., foo(int, i),
where
 >>> int
 <built-in function int>
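That coercion-pair calling convention might look like this sketch (hypothetical names, not a proposal in the thread), where the fast path is a cheap exact-type check and the coercion function only runs when the check fails:

```python
def foo(coerce, i):
    # Fast path: the argument already has the expected type, so no
    # coercion (and, in a smarter compiler, no dynamic lookup) is needed.
    if type(i) is not coerce:
        i = coerce(i)  # slow path: coerce, e.g. "3" -> 3
    return i * 2
```

The `type(i) is coerce` test is a single pointer comparison, which is what makes the "checked quickly" part plausible.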

Just some musings...



