<div class="moz-cite-prefix">On 01.02.2016 18:18, Brett Cannon
wrote:<br>
</div>
> On Mon, 1 Feb 2016 at 09:08 Yury Selivanov <yselivanov.ml@gmail.com> wrote:
>>
>> On 2016-01-29 11:28 PM, Steven D'Aprano wrote:
>>> On Wed, Jan 27, 2016 at 01:25:27PM -0500, Yury Selivanov wrote:
>>>> Hi,
>>>>
>>>> tl;dr The summary is that I have a patch that improves CPython
>>>> performance up to 5-10% on macro benchmarks. Benchmarks results on
>>>> Macbook Pro/Mac OS X, desktop CPU/Linux, server CPU/Linux are available
>>>> at [1]. There are no slowdowns that I could reproduce consistently.
>>> Have you looked at Cesare Di Mauro's wpython? As far as I know, it's now
>>> unmaintained, and the project repo on Google Code appears to be dead (I
>>> get a 404), but I understand that it was significantly faster than
>>> CPython back in the 2.6 days.
>>>
>>> https://wpython.googlecode.com/files/Beyond%20Bytecode%20-%20A%20Wordcode-based%20Python.pdf
>>
>> Thanks for bringing this up!
>>
>> IIRC wpython was about using "fat" bytecodes, i.e. using 64 bits per
>> bytecode instead of 8. That allows to minimize the number of bytecodes,
>> thus having some performance increase. TBH, I don't think it was
>> "significantly faster".
>>
>> If I were to do some big refactoring of the ceval loop, I'd probably
>> consider implementing a register VM. While register VMs are a bit
>> faster than stack VMs (up to 20-30%), they would also allow us to apply
>> more optimizations, and even bolt on a simple JIT compiler.
>
> If you did tackle the register VM approach that would also settle a
> long-standing question of whether a certain optimization works for Python.

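Just to check that I understand the "fat" bytecode idea: instead of a
variable-length stream of 1-byte opcodes where large arguments need
EXTENDED_ARG prefixes, every instruction occupies one fixed-width unit,
so the eval loop decodes each instruction with a single fetch. A rough
Python sketch (the 64-bit layout below is made up for illustration, not
wpython's actual encoding):

    import struct

    # Hypothetical fixed-width layout: 8-bit opcode, 8-bit flags,
    # 48-bit argument, all in one 64-bit word.
    def decode_word(word):
        opcode = word & 0xFF
        flags = (word >> 8) & 0xFF
        arg = word >> 16
        return opcode, flags, arg

    # Two fake instructions packed as little-endian 64-bit words.
    code = struct.pack("<QQ", (100 << 16) | 0x01, (5 << 16) | 0x02)

    # Decoding is one fetch plus a few shifts per instruction; no
    # EXTENDED_ARG, no variable-length scanning.
    for (word,) in struct.iter_unpack("<Q", code):
        print(decode_word(word))
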
Are there any resources on why register machines are considered faster
than stack machines?

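My rough understanding of the usual argument, for what it's worth: a
register VM addresses operands directly instead of pushing and popping
them, so the same expression compiles to fewer instructions and therefore
fewer trips through the dispatch loop. A quick comparison (the register
instruction set below is invented for illustration; exact CPython opcode
names vary by version):

    import dis

    def f(a, b, c):
        return a + b * c

    # Stack-based CPython bytecode: roughly LOAD_FAST a, LOAD_FAST b,
    # LOAD_FAST c, BINARY_MULTIPLY, BINARY_ADD, RETURN_VALUE
    # -- six dispatches.
    dis.dis(f)

    # A hypothetical register encoding of the same function:
    #
    #   MUL  r3, r1, r2    ; r3 = b * c
    #   ADD  r4, r0, r3    ; r4 = a + r3
    #   RET  r4
    #
    # Fewer instructions means fewer dispatches, at the cost of wider
    # instructions and a more complex compiler/register allocator.
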
> As for bolting on a JIT, the whole point of Pyjion is to see if that's
> worth it for CPython, so that's already being taken care of (and is
> actually easier with a stack-based VM since the JIT engine we're using
> is stack-based itself).

Interesting. I hadn't noticed those projects yet.

So, it could be that we will see a JIT-enabled CPython if Pyjion turns
out to be successful?

Best,
Sven