[Python-checkins] r72064 - in python/branches/release30-maint: Doc/library/multiprocessing.rst
r.david.murray
python-checkins at python.org
Tue Apr 28 20:08:52 CEST 2009
Author: r.david.murray
Date: Tue Apr 28 20:08:51 2009
New Revision: 72064
Log:
Merged revisions 72062 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/branches/py3k
................
r72062 | r.david.murray | 2009-04-28 14:02:00 -0400 (Tue, 28 Apr 2009) | 16 lines
Merged revisions 72060 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk
........
r72060 | r.david.murray | 2009-04-28 12:08:18 -0400 (Tue, 28 Apr 2009) | 10 lines
Various small fixups to the multiprocessing docs, mostly fixing and
enabling doctests that Sphinx can run, and fixing and disabling tests that
Sphinx can't run. I hand checked every test not now marked as a doctest,
and all except the two that have open bug reports against them now work,
at least on Linux/trunk. (I did not look at the last example at all since
there was already an open bug). I did not read the whole document with
an editor's eye, but I did fix a few things I noticed while working on
the tests.
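
(Editorial illustration, not part of the commit: the "doctests that Sphinx can run" mentioned above are checked with the stdlib :mod:`doctest` machinery that Sphinx's doctest builder drives. A minimal sketch of that check, run here against the Pipe snippet from the patched document; the ``snippet`` string and names are assumptions for illustration.)

```python
import doctest

# Parse a ">>>" snippet the way Sphinx's doctest builder would, then run it
# with the stdlib doctest runner and report pass/fail counts.
snippet = """
>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(snippet, globs={}, name='pipe_example',
                          filename='<doc>', lineno=0)
runner = doctest.DocTestRunner(optionflags=doctest.ELLIPSIS)
results = runner.run(test)
print(results.failed, results.attempted)
```

The ``ELLIPSIS`` flag matters for examples like the ``0x...`` proxy repr fixed later in this patch: it lets ``...`` in expected output match any text.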
........
................
Modified:
python/branches/release30-maint/ (props changed)
python/branches/release30-maint/Doc/library/multiprocessing.rst
Modified: python/branches/release30-maint/Doc/library/multiprocessing.rst
==============================================================================
--- python/branches/release30-maint/Doc/library/multiprocessing.rst (original)
+++ python/branches/release30-maint/Doc/library/multiprocessing.rst Tue Apr 28 20:08:51 2009
@@ -40,12 +40,18 @@
>>> p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
+ Process PoolWorker-3:
+ Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
+ (If you try this it will actually output three full tracebacks
+ interleaved in a semi-random fashion, and then you may have to
+ stop the master process somehow.)
+
The :class:`Process` class
~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -416,7 +422,9 @@
:attr:`exit_code` methods should only be called by the process that created
the process object.
- Example usage of some of the methods of :class:`Process`::
+ Example usage of some of the methods of :class:`Process`:
+
+ .. doctest::
>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
@@ -426,6 +434,7 @@
>>> print(p, p.is_alive())
<Process(Process-1, started)> True
>>> p.terminate()
+ >>> time.sleep(0.1)
>>> print(p, p.is_alive())
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
@@ -667,7 +676,7 @@
freeze_support()
Process(target=f).start()
- If the ``freeze_support()`` line is missed out then trying to run the frozen
+ If the ``freeze_support()`` line is omitted then trying to run the frozen
executable will raise :exc:`RuntimeError`.
If the module is being run normally by the Python interpreter then
@@ -681,7 +690,7 @@
setExecutable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
- before they can create child processes. (Windows only)
+ before they can create child processes. (Windows only)
.. note::
@@ -761,8 +770,8 @@
*buffer* must be an object satisfying the writable buffer interface. If
*offset* is given then the message will be written into the buffer from
- *that position. Offset must be a non-negative integer less than the
- *length of *buffer* (in bytes).
+ that position. Offset must be a non-negative integer less than the
+ length of *buffer* (in bytes).
If the buffer is too short then a :exc:`BufferTooShort` exception is
raised and the complete message is available as ``e.args[0]`` where ``e``
@@ -771,6 +780,8 @@
For example:
+.. doctest::
+
>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
@@ -857,8 +868,9 @@
specifies a timeout in seconds. If *block* is ``False`` then *timeout* is
ignored.
- Note that on OS/X ``sem_timedwait`` is unsupported, so timeout arguments
- for these will be ignored.
+.. note::
+ On OS/X ``sem_timedwait`` is unsupported, so timeout arguments for the
+ aforementioned :meth:`acquire` methods will be ignored on OS/X.
.. note::
@@ -1055,7 +1067,7 @@
lock = Lock()
n = Value('i', 7)
- x = Value(ctypes.c_double, 1.0/3.0, lock=False)
+ x = Value(c_double, 1.0/3.0, lock=False)
s = Array('c', 'hello world', lock=lock)
A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
@@ -1136,21 +1148,21 @@
Returns a :class:`Server` object which represents the actual server under
the control of the Manager. The :class:`Server` object supports the
- :meth:`serve_forever` method:
+ :meth:`serve_forever` method::
>>> from multiprocessing.managers import BaseManager
- >>> m = BaseManager(address=('', 50000), authkey='abc'))
- >>> server = m.get_server()
- >>> s.serve_forever()
+ >>> manager = BaseManager(address=('', 50000), authkey='abc')
+ >>> server = manager.get_server()
+ >>> server.serve_forever()
- :class:`Server` additionally have an :attr:`address` attribute.
+ :class:`Server` additionally has an :attr:`address` attribute.
.. method:: connect()
- Connect a local manager object to a remote manager process:
+ Connect a local manager object to a remote manager process::
>>> from multiprocessing.managers import BaseManager
- >>> m = BaseManager(address='127.0.0.1', authkey='abc))
+ >>> m = BaseManager(address=('127.0.0.1', 5000), authkey='abc')
>>> m.connect()
.. method:: shutdown()
@@ -1278,7 +1290,9 @@
Its representation shows the values of its attributes.
However, when using a proxy for a namespace object, an attribute beginning with
-``'_'`` will be an attribute of the proxy and not an attribute of the referent::
+``'_'`` will be an attribute of the proxy and not an attribute of the referent:
+
+.. doctest::
>>> manager = multiprocessing.Manager()
>>> Global = manager.Namespace()
@@ -1330,17 +1344,15 @@
>>> import queue
>>> queue = queue.Queue()
>>> class QueueManager(BaseManager): pass
- ...
>>> QueueManager.register('get_queue', callable=lambda:queue)
>>> m = QueueManager(address=('', 50000), authkey='abracadabra')
>>> s = m.get_server()
- >>> s.serveForever()
+ >>> s.serve_forever()
One client can access the server as follows::
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
- ...
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
>>> m.connect()
@@ -1351,10 +1363,10 @@
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
- ...
- >>> QueueManager.register('getQueue')
- >>> m = QueueManager.from_address(address=('foo.bar.org', 50000), authkey='abracadabra')
- >>> queue = m.getQueue()
+ >>> QueueManager.register('get_queue')
+ >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
+ >>> m.connect()
+ >>> queue = m.get_queue()
>>> queue.get()
'hello'
@@ -1390,7 +1402,9 @@
A proxy object has methods which invoke corresponding methods of its referent
(although not every method of the referent will necessarily be available through
the proxy). A proxy can usually be used in most of the same ways that its
-referent can::
+referent can:
+
+.. doctest::
>>> from multiprocessing import Manager
>>> manager = Manager()
@@ -1398,7 +1412,7 @@
>>> print(l)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> print(repr(l))
- <ListProxy object, typeid 'list' at 0xb799974c>
+ <ListProxy object, typeid 'list' at 0x...>
>>> l[4]
16
>>> l[2:5]
@@ -1411,7 +1425,9 @@
An important feature of proxy objects is that they are picklable so they can be
passed between processes. Note, however, that if a proxy is sent to the
corresponding manager's process then unpickling it will produce the referent
-itself. This means, for example, that one shared object can contain a second::
+itself. This means, for example, that one shared object can contain a second:
+
+.. doctest::
>>> a = manager.list()
>>> b = manager.list()
@@ -1425,12 +1441,14 @@
.. note::
The proxy types in :mod:`multiprocessing` do nothing to support comparisons
- by value. So, for instance, ::
+ by value. So, for instance, we have:
+
+ .. doctest::
- manager.list([1,2,3]) == [1,2,3]
+ >>> manager.list([1,2,3]) == [1,2,3]
+ False
- will return ``False``. One should just use a copy of the referent instead
- when making comparisons.
+ One should just use a copy of the referent instead when making comparisons.
.. class:: BaseProxy
@@ -1462,7 +1480,9 @@
Note in particular that an exception will be raised if *methodname* has
not been *exposed*
- An example of the usage of :meth:`_callmethod`::
+ An example of the usage of :meth:`_callmethod`:
+
+ .. doctest::
>>> l = manager.list(range(10))
>>> l._callmethod('__len__')
@@ -1873,13 +1893,55 @@
>>> logger.warning('doomed')
[WARNING/MainProcess] doomed
>>> m = multiprocessing.Manager()
- [INFO/SyncManager-1] child process calling self.run()
- [INFO/SyncManager-1] manager bound to '\\\\.\\pipe\\pyc-2776-0-lj0tfa'
+ [INFO/SyncManager-...] child process calling self.run()
+ [INFO/SyncManager-...] created temp directory /.../pymp-...
+ [INFO/SyncManager-...] manager serving at '/.../listener-...'
>>> del m
[INFO/MainProcess] sending shutdown message to manager
- [INFO/SyncManager-1] manager exiting with exitcode 0
+ [INFO/SyncManager-...] manager exiting with exitcode 0
++----------------+----------------+
+| Level | Numeric value |
++================+================+
+| ``SUBWARNING`` | 25 |
++----------------+----------------+
+| ``SUBDEBUG`` | 5 |
++----------------+----------------+
+
+For a full table of logging levels, see the :mod:`logging` module.
+
+These additional logging levels are used primarily for certain debug messages
+within the multiprocessing module. Below is the same example as above, except
+with :const:`SUBDEBUG` enabled::
+
+ >>> import multiprocessing, logging
+ >>> logger = multiprocessing.log_to_stderr()
+ >>> logger.setLevel(multiprocessing.SUBDEBUG)
+ >>> logger.warning('doomed')
+ [WARNING/MainProcess] doomed
+ >>> m = multiprocessing.Manager()
+ [INFO/SyncManager-...] child process calling self.run()
+ [INFO/SyncManager-...] created temp directory /.../pymp-...
+ [INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
+ >>> del m
+ [SUBDEBUG/MainProcess] finalizer calling ...
+ [INFO/MainProcess] sending shutdown message to manager
+ [DEBUG/SyncManager-...] manager received shutdown message
+ [SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
+ [SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
+ [SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
+ [SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
+ [INFO/SyncManager-...] manager exiting with exitcode 0
+
The :mod:`multiprocessing.dummy` module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
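
(Editorial footnote on the ``SUBDEBUG``/``SUBWARNING`` level table added by the patch above: a minimal sketch of how such numeric levels slot into the stdlib :mod:`logging` hierarchy. Registering them by hand, and the ``mp-sketch`` logger name, are assumptions for illustration; multiprocessing defines its own constants.)

```python
import logging

# The numeric values from the table: SUBDEBUG sits below DEBUG (10),
# SUBWARNING sits between INFO (20) and WARNING (30).
SUBDEBUG = 5
SUBWARNING = 25
logging.addLevelName(SUBDEBUG, 'SUBDEBUG')
logging.addLevelName(SUBWARNING, 'SUBWARNING')

logger = logging.getLogger('mp-sketch')
logger.setLevel(SUBDEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('[%(levelname)s/%(name)s] %(message)s'))
logger.addHandler(handler)

# Emitted because SUBWARNING (25) clears the logger's SUBDEBUG (5) threshold,
# and printed with the registered name, much like the example output above.
logger.log(SUBWARNING, 'doomed')
```

Because these are ordinary numeric levels, a logger threshold of ``SUBDEBUG`` admits every multiprocessing debug message, which is why the second example in the patch is so much chattier than the first.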