Hi all --
it recently occurred to me that the 'spawn' module I wrote for the
Distutils (and which Perry Stoll extended to handle NT), could fit
nicely in the core library. On Unix, it's just a front-end to
fork-and-exec; on NT, it's a front-end to spawnv(). In either case,
it's just enough code (and just tricky enough code) that not everybody
should have to duplicate it for their own uses.
The basic idea is this:
from spawn import spawn
...
spawn (['cmd', 'arg1', 'arg2'])
# or
spawn (['cmd'] + args)
you get the idea: it takes a *list* representing the command to spawn:
no strings to parse, no shells to get in the way, no sneaky
meta-characters ruining your day, draining your efficiency, or
compromising your security. (Conversely, no pipelines, redirection,
etc.)
The 'spawn()' function just calls '_spawn_posix()' or '_spawn_nt()'
depending on os.name. Additionally, it takes a couple of optional
keyword arguments (all booleans): 'search_path', 'verbose', and
'dry_run', which do pretty much what you'd expect.
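For the curious, here is a minimal sketch of that dispatch in modern Python -- not the actual Distutils code, just the shape described above, with only the POSIX branch filled in:

```python
import os

def spawn(cmd, search_path=True, verbose=False, dry_run=False):
    """Run the command given by the *list* 'cmd' -- no shell, no parsing.

    A sketch of the interface described above, not the real Distutils
    module; the helper name _spawn_posix follows the description.
    """
    if verbose:
        print(" ".join(cmd))
    if dry_run:
        return None
    if os.name == "posix":
        return _spawn_posix(cmd, search_path)
    raise OSError("don't know how to spawn programs on %r" % os.name)

def _spawn_posix(cmd, search_path):
    # fork-and-exec: the child replaces itself with 'cmd'
    pid = os.fork()
    if pid == 0:  # child
        exec_fn = os.execvp if search_path else os.execv
        exec_fn(cmd[0], cmd)
    # parent: wait and return the raw exit status
    return os.waitpid(pid, 0)[1]
```

With 'dry_run' set the function returns without spawning anything, which is handy for testing build scripts.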
The module as it's currently in the Distutils code is attached. Let me
know what you think...
Greg
--
Greg Ward - software developer gward(a)cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive voice: +1-703-620-8990
Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913
Hmm, if we're talking a "Python Make" or some such here, the best way
would probably be to use ToolServer. ToolServer is based on Apple's old
MPW programming environment and is still supported by compiler vendors
like MetroWerks.
The nice thing about ToolServer for this type of work is that it _is_
command-line based, so you can probably send it things like
spawn("cc", "-O", "test.c")
But, although I know it is possible to do this with ToolServer, I
haven't a clue on how to do it...
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
Recently, Greg Ward <gward(a)cnri.reston.va.us> said:
> BTW, is there anything like this on the Mac? On what other OSs does it
> even make sense to talk about programs spawning other programs? (Surely
> those GUI user interfaces have to do *something*...)
Yes, but the interface is quite a bit more high-level, so it's pretty
difficult to reconcile with the Unix and Windows "every argument is a
string" paradigm. You start the process and pass along an AppleEvent
(basically an RPC-call) that will be presented to the program upon
startup.
So on the mac there's a serious difference between (inventing the API
interface here, cut down to make it understandable to non-macheads:-)
spawn("netscape", ("Open", "file.html"))
and
spawn("netscape", ("OpenURL", "http://foo.com/file.html"))
The mac interface is (of course:-) infinitely more powerful, allowing
you to talk to running apps, addressing stuff in them as COM/OLE does,
etc. but unfortunately the simple case of spawn("rm", "-rf", "/") is
impossible to represent in a meaningful way.
Add to that the fact that there's no stdin/stdout/stderr and there's
little common ground. The one area of common ground is "run program X
on files Y and Z and wait (or don't wait) for completion", so that is
something that could maybe have a special method that could be
implemented on all three mentioned platforms (and probably everything
else as well). And even then it'll be surprising to Mac users that
they have to _exit_ their editor (if you specify wait), not something
people commonly do.
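On the Unix side, that common-ground call could look something like this (names invented; on the Mac the same call would presumably package an "Open" AppleEvent and send it to the application instead of building an argv list):

```python
import os

def run_program(program, files, wait=True):
    # Sketch of the least-common-denominator operation: run 'program'
    # on 'files' and wait (or don't wait) for completion.
    argv = [program] + list(files)
    pid = os.fork()
    if pid == 0:                      # child: replace ourselves
        os.execvp(program, argv)
    if not wait:
        return pid                    # caller can os.waitpid() later
    return os.waitpid(pid, 0)[1]      # raw exit status
```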
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
[Followup to a discussion on psa-members about iterating over
dictionaries without creating intermediate lists]
Jim Fulton wrote:
>
> "M.-A. Lemburg" wrote:
> >
> > Jim Fulton wrote:
> > >
> > > > The problem with the PyDict_Next() approach is that it will only
> > > > work reliably from within a single C call. You can't return
> > > > to Python between calls to PyDict_Next(), because those could
> > > > modify the dictionary causing the next PyDict_Next() call to
> > > > fail or core dump.
> > >
> > > I do this all the time without problem. Basically, you provide an
> > > index and if the index is out of range, you simply get an end-of-data return.
> > > The only downside of this approach is that you might get "incorrect"
> > > results if the dictionary is modified between calls. This isn't
> > > all that different from iterating over a list with an index.
> >
> > Hmm, that's true... but what if the dictionary gets resized
> > in between iterations? The item layout is then likely to
> > change, so you could potentially get completely bogus results.
>
> I think I said that. :)
Just wanted to verify my understanding ;-)
> > Even iterating over items twice may then occur, I guess.
>
> Yup.
>
> Again, this is not so different from iterating over
> a list using a range:
>
> l = range(10)
> for i in range(len(l)):
>     l.insert(0, 'Bruce')
>     print l[i]
>
> This always outputs 'Bruce'. :)
Ok, so the "risk" is under user control. Fine with me...
> > Or perhaps via a special dictionary iterator, so that the following
> > works:
> >
> > for item in dictrange(d):
> > ...
>
> Yup.
>
> > The iterator could then also take some extra actions to insure
> > that the dictionary hasn't been resized.
>
> I don't think it should do that. It should simply
> stop when it has run out of items.
I think I'll give such an iterator a spin. Would be a nice
extension to mxTools.
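At the Python level, the proposed behaviour amounts to something like this (a model only -- the real iterator would sit on PyDict_Next() in C):

```python
def dictrange(d):
    # Yield (key, value) pairs one at a time instead of materializing
    # d.items() up front. As with the PyDict_Next() approach, the
    # results are only guaranteed sensible if 'd' is not modified
    # during the iteration.
    for key in d:
        yield key, d[key]
```

Usage would then read just like the example above: for item in dictrange(d): ...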
BTW, a generic type slot for iterating over types would probably
be a nice feature too. The type slot could provide hooks of the
form it_first, it_last, it_next, it_prev, which all work on an
integer index, e.g. in pseudo code:
int i;
PyObject *item;

/* set up i and item to point to the first item */
if (obj.it_first(&i, &item) < 0)
    goto onError;

while (1) {
    PyObject_Print(item);
    /* move i and item to the next item; an IndexError is raised
       in case there are no more items */
    if (obj.it_next(&i, &item) < 0) {
        PyErr_Clear();
        break;
    }
}
These slots would cover all problem instances where iteration
over non-sequences or non-uniform sequences (i.e. sequence-like
objects which don't provide convex index sets, e.g. 1,2,3,6,7,8,11,12)
is required, e.g. dictionaries and multi-segment buffers.
--
Marc-Andre Lemburg
______________________________________________________________________
Y2000: 127 days left
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
[Tim, in an earlier msg]
>
> Would be more valuable to rethink the debugger's breakpoint approach so that
> SET_LINENO is never needed (line-triggered callbacks are expensive because
> called so frequently, turning each dynamic SET_LINENO into a full-blown
> Python call;
Ok. In the meantime I think that folding the redundant SET_LINENO doesn't
hurt. I ended up with a patchlet that seems to have no side effects, that
updates the lnotab as it should and that even makes pdb a bit more clever,
IMHO.
Consider an extreme case for the function f (listed below). Currently,
we get the following:
-------------------------------------------
>>> from test import f
>>> import dis, pdb
>>> dis.dis(f)
0 SET_LINENO 1
3 SET_LINENO 2
6 SET_LINENO 3
9 SET_LINENO 4
12 SET_LINENO 5
15 LOAD_CONST 1 (1)
18 STORE_FAST 0 (a)
21 SET_LINENO 6
24 SET_LINENO 7
27 SET_LINENO 8
30 LOAD_CONST 2 (None)
33 RETURN_VALUE
>>> pdb.runcall(f)
> test.py(1)f()
-> def f():
(Pdb) list 1, 20
1 -> def f():
2 """Comment about f"""
3 """Another one"""
4 """A third one"""
5 a = 1
6 """Forth"""
7 "and pdb can set a breakpoint on this one (simple quotes)"
8 """but it's intelligent about triple quotes..."""
[EOF]
(Pdb) step
> test.py(2)f()
-> """Comment about f"""
(Pdb) step
> test.py(3)f()
-> """Another one"""
(Pdb) step
> test.py(4)f()
-> """A third one"""
(Pdb) step
> test.py(5)f()
-> a = 1
(Pdb) step
> test.py(6)f()
-> """Forth"""
(Pdb) step
> test.py(7)f()
-> "and pdb can set a breakpoint on this one (simple quotes)"
(Pdb) step
> test.py(8)f()
-> """but it's intelligent about triple quotes..."""
(Pdb) step
--Return--
> test.py(8)f()->None
-> """but it's intelligent about triple quotes..."""
(Pdb)
>>>
-------------------------------------------
With folded SET_LINENO, we have this:
-------------------------------------------
>>> from test import f
>>> import dis, pdb
>>> dis.dis(f)
0 SET_LINENO 5
3 LOAD_CONST 1 (1)
6 STORE_FAST 0 (a)
9 SET_LINENO 8
12 LOAD_CONST 2 (None)
15 RETURN_VALUE
>>> pdb.runcall(f)
> test.py(5)f()
-> a = 1
(Pdb) list 1, 20
1 def f():
2 """Comment about f"""
3 """Another one"""
4 """A third one"""
5 -> a = 1
6 """Forth"""
7 "and pdb can set a breakpoint on this one (simple quotes)"
8 """but it's intelligent about triple quotes..."""
[EOF]
(Pdb) break 7
Breakpoint 1 at test.py:7
(Pdb) break 8
*** Blank or comment
(Pdb) step
> test.py(8)f()
-> """but it's intelligent about triple quotes..."""
(Pdb) step
--Return--
> test.py(8)f()->None
-> """but it's intelligent about triple quotes..."""
(Pdb)
>>>
-------------------------------------------
i.e., pdb stops at (points to) the first real instruction and doesn't step
through the doc strings.
Or is there something I'm missing here?
--
Vladimir MARANGOZOV | Vladimir.Marangozov(a)inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
-------------------------------[ cut here ]---------------------------
*** compile.c-orig Thu Aug 19 19:27:13 1999
--- compile.c Thu Aug 19 19:00:31 1999
***************
*** 615,620 ****
--- 615,623 ----
int arg;
{
if (op == SET_LINENO) {
+ if (!Py_OptimizeFlag && c->c_last_addr == c->c_nexti - 3)
+ /* Hack for folding several SET_LINENO in a row. */
+ c->c_nexti -= 3;
com_set_lineno(c, arg);
if (Py_OptimizeFlag)
return;
I posted a note to the main list yesterday in response to Dan Connolly's
complaint that the os module isn't very portable. I saw no followups (it's
amazing how fast a thread can die out :-), but I think it's a reasonable
idea, perhaps for Python 2.0, so I'll repeat it here to get some feedback
from people more interested in long-term Python developments.
The basic premise is that for each platform on which Python runs there are
portable and nonportable interfaces to the underlying operating system. The
term POSIX has some portability connotations, so let's assume that the posix
module exposes the portable subset of the OS interface. To keep things
simple, let's also assume there are only three supported general OS
platforms: unix, nt and mac. The proposal then is that importing the
platform's module by name will import both the portable and non-portable
interface elements. Importing the posix module will import just that
portion of the interface that is truly portable across all platforms. To
add new functionality to the posix interface it would have to be added to
all three platforms. The posix module will be able to ferret out the
platform it is running on and import the correct OS-independent posix
implementation:
import sys
_plat = sys.platform
del sys
if _plat == "mac": from posixmac import *
elif _plat == "nt": from posixnt import *
else: from posixunix import * # some unix variant
The platform-dependent module would simply import everything it could, e.g.:
from posixunix import *
from nonposixunix import *
The os module would vanish or be deprecated with its current behavior
intact. The documentation would be modified so that the posix module
documents the portable interface and the OS-dependent module's documentation
documents the rest and just refers users to the posix module docs for the
portable stuff.
In theory, this could be done for 1.6, however as I've proposed it, the
semantics of importing the posix module would change. Dan Connolly probably
isn't going to have a problem with that, though I suppose Guido might... If
this idea is good enough for 1.6, perhaps we leave os and posix module
semantics alone and add a module named "portable", "portableos" or
"portableposix" or something equally arcane.
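The portable subset itself need not be maintained by hand; it is just the intersection of what the three platform modules export. A toy illustration (the per-platform name lists are invented and drastically shortened):

```python
def portable_subset(*interfaces):
    # The posix interface = the names common to every platform's
    # OS module; everything else stays in the platform module.
    common = set(interfaces[0])
    for names in interfaces[1:]:
        common &= set(names)
    return sorted(common)

# Invented name lists standing in for posixunix/posixnt/posixmac:
unix_names = ["chmod", "fork", "getcwd", "link", "listdir"]
nt_names   = ["chmod", "getcwd", "listdir", "spawnv"]
mac_names  = ["chmod", "getcwd", "listdir", "xstat"]
```

Here portable_subset(unix_names, nt_names, mac_names) yields ['chmod', 'getcwd', 'listdir'], while fork, spawnv and friends remain platform-only.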
Skip Montanaro | http://www.mojam.com/
skip(a)mojam.com | http://www.musi-cal.com/~skip/
847-971-7098
But in Python, with its nice high-level datastructures, couldn't we
design the Mother Of All File Attribute Calls, which would optionally
map functionality from one platform to another?
As an example consider the Mac resource fork size. If on unix I did
fattrs = os.getfileattributes(filename)
rfsize = fattrs.get('resourceforksize')
it would raise an exception. If, however, I did
rfsize = fattrs.get('resourceforksize', compat=1)
I would get a "close approximation", 0. Note that you want some sort
of a compat parameter, not a default value, as for some attributes
(the various atime/mtime/ctimes, permission bits, etc) you'd get a
default based on other file attributes that do exist on the current
platform.
Hmm, the file-attribute-object idea has the added advantage that you
can then use setfileattributes(filename, fattrs) to be sure that
you've copied all relevant attributes, independent of the platform
you're on.
Mapping permissions takes a bit more (design-) work, with unix having
user/group/other only and Windows having full-fledged ACLs (or nothing
at all, depending how you look at it:-), but should also be doable.
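A sketch of what the attribute object might look like, following Jack's examples (all names invented):

```python
class FileAttrs:
    # Result object of the hypothetical os.getfileattributes(filename).
    # With compat=1, attributes the platform lacks come back as a close
    # approximation (e.g. resource fork size -> 0 on Unix) instead of
    # raising an exception.
    _compat_defaults = {"resourceforksize": 0}

    def __init__(self, attrs):
        self._attrs = dict(attrs)

    def get(self, name, compat=0):
        if name in self._attrs:
            return self._attrs[name]
        if compat and name in self._compat_defaults:
            return self._compat_defaults[name]
        raise KeyError("no attribute %r on this platform" % name)
```

As noted above, some attributes would want a derived default rather than a constant (e.g. mapping a missing ctime onto mtime), which is why a compat flag beats a plain default value.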
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
Here's another silly idea, not having to do with optimization.
On the Mac, and as far as I know on Windows as well, there are quite a
few OS API structures that have a Python Object representation that is
little more than the PyObject boilerplate plus a pointer to the C API
object. (And, of course, lots of methods to operate on the object).
To convert these from Python to C I always use boilerplate code like
WindowPtr *win;
PyArg_ParseTuple(args, "O&", PyWin_Convert, &win);
where PyWin_Convert is the function that takes a PyObject * and a void
**, does the typecheck and sets the pointer. A similar way is used to
convert C pointers back to Python objects in Py_BuildValue.
What I was thinking is that it would be nice (if you are _very_
careful) if this functionality were available in struct. So, if I would
somehow obtain (in a Python string) a C structure that contained, say,
a WindowPtr and two ints, I would be able to say
win, x, y = struct.unpack("Ohh", Win.WindowType)
and struct would be able, through the WindowType type object, to get
at the PyWin_Convert and PyWin_New functions.
A nice side issue is that you can add an option to PyArg_Parsetuple so
you can say
PyArg_ParseTuple(args, "O+", Win_WinObject, &win)
and you don't have to remember the different names the various types
use for their conversion routines.
Is this worth pursuing, or is it just too dangerous? And, if it is
worth pursuing, I have to stash away the two function pointers
somewhere in the TypeObject; should I grab one of the tp_xxx fields
for this, or is there a better place?
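In Python terms the idea models as: wherever the format string marks an object slot, push the raw pointer-sized word through the type's converter instead of handing back a bare integer. A toy model (all names invented; the real thing would live in structmodule.c next to PyWin_Convert/PyWin_New):

```python
import struct

def unpack_with_types(fmt, data, converters):
    # 'O' marks a slot that goes through a converter; decode it as a
    # native pointer-sized integer ('P') first. One converter per 'O',
    # applied in order; plain codes pass through unchanged.
    raw = struct.unpack(fmt.replace("O", "P"), data)
    conv = iter(converters)
    return tuple(next(conv)(v) if code == "O" else v
                 for code, v in zip(fmt, raw))
```

So win, x, y = unpack_with_types("Ohh", data, [make_window]) mirrors the struct.unpack("Ohh", ...) call sketched above, with make_window standing in for the type's PyWin_New-style constructor.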
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
Just curious:
Is python with vs. without "-O" equivalent today regarding line numbers?
Are SET_LINENO opcodes a plus in some situations or not?
Next, I see quite often several SET_LINENO in a row in the beginning
of code objects due to doc strings, etc. Since I don't think that
folding them into one SET_LINENO would be an optimisation (it would
rather be avoiding the redundancy), is it possible and/or reasonable
to do something in this direction?
A trivial example:
>>> def f():
... "This is a comment about f"
... a = 1
...
>>> import dis
>>> dis.dis(f)
0 SET_LINENO 1
3 SET_LINENO 2
6 SET_LINENO 3
9 LOAD_CONST 1 (1)
12 STORE_FAST 0 (a)
15 LOAD_CONST 2 (None)
18 RETURN_VALUE
>>>
Can the above become something like this instead:
0 SET_LINENO 3
3 LOAD_CONST 1 (1)
6 STORE_FAST 0 (a)
9 LOAD_CONST 2 (None)
12 RETURN_VALUE
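On a symbolic instruction stream the folding itself is a one-pass affair; a toy model, with opcodes as (name, arg) pairs rather than real bytecode:

```python
def fold_set_lineno(instructions):
    # Collapse runs of consecutive SET_LINENO into the last one -- the
    # only one that can be observed before the next real instruction
    # executes, so no information a debugger can use is lost.
    out = []
    for op, arg in instructions:
        if op == "SET_LINENO" and out and out[-1][0] == "SET_LINENO":
            out[-1] = (op, arg)          # supersede the redundant one
        else:
            out.append((op, arg))
    return out
```

Run on the example above, the three leading SET_LINENO 1/2/3 collapse to a single SET_LINENO 3.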
--
Vladimir MARANGOZOV | Vladimir.Marangozov(a)inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
The one thing I'm not thrilled by in mxProxy is that a call to
CheckWeakReferences() is needed before an object is cleaned up. I guess this
boils down to the same problem I had with my weak reference scheme: you
somehow want the Python core to tell the proxy stuff that the object can be
cleaned up (although the details are different: in my scheme this would be
triggered by refcount==0 and in mxProxy by refcount==1). And because objects
are created and destroyed in Python at a tremendous rate you don't want to do
this call for every object, only if you have a hint that the object has a weak
reference (or a proxy).
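For comparison, the push model being asked for here -- the core notifying the weak-reference machinery when the refcount drops, rather than the proxy polling with CheckWeakReferences() -- is how the weakref callback API that Python later grew behaves:

```python
import weakref

class Node:
    pass

obj = Node()
notified = []

# Register a callback; it fires when 'obj' is about to be finalized,
# so no periodic cleanup sweep is needed. Only objects that actually
# have a weak reference ever pay this cost.
ref = weakref.ref(obj, lambda r: notified.append(r))

del obj                    # refcount hits zero (in CPython) ...
assert notified == [ref]   # ... and the callback has already run
assert ref() is None
```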
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm