I'm using a web2 server with Basic authentication provided by twisted.web2's
auth wrapper. There are cases when PUT requests with attachments from a client
time out, as follows:
1. When the client issues the first PUT request, without an Authentication
header, the server responds with a 401 Unauthorized before receiving the
entire attachment (I can see this in a TCP trace made with ngrep). Is this
the right behavior? I saw that Apache waits for the entire attachment before
replying with 401.
The client I used sends the next PUT request right after the attachment,
without an \r\n separator, e.g.
...<closing-xml-tag>PUT /resource HTTP/1.1
Is this normal?
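For what it's worth, HTTP's own answer to "don't send the whole body before
authentication is settled" is the Expect: 100-continue handshake (RFC 2616,
section 8.2.3): the client sends only the headers, waits for "100 Continue"
(or an early 401), and only then transmits the attachment. A minimal sketch of
building such a request by hand (a hypothetical helper, not web2 API):

```python
def build_put_request(path, host, body, auth_header=None):
    """Build the header block of an HTTP/1.1 PUT that withholds its body.

    With Expect: 100-continue the client sends only these headers, and
    transmits `body` only if the server answers "100 Continue" rather
    than an early 401.
    """
    headers = [
        "PUT %s HTTP/1.1" % path,
        "Host: %s" % host,
        "Content-Length: %d" % len(body),
        "Expect: 100-continue",  # ask the server to vet the headers first
    ]
    if auth_header:
        headers.append("Authorization: %s" % auth_header)
    head = "\r\n".join(headers) + "\r\n\r\n"
    return head.encode("ascii"), body

head, body = build_put_request("/resource", "example.com", b"<attachment/>")
```

A server that supports the handshake can reply 401 after seeing only the
headers, so the client never wastes the upload; a server that ignores it will
simply wait for the body, so clients need a timeout fallback.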
2. Sometimes, when the attachment is bigger (around 10 KB), the second PUT
sent by the client (this time with the Authentication header present) is not
entirely received by the server, which times out waiting for the full
attachment of size Content-Length. I've printed the data received on the TCP
socket in the doRead() method of twisted.internet.tcp.Connection, and I think
that somewhere in the processing the HTTPChannel or HTTPFactory stops reading
the data, closes the connection for reading, or something like that.
Has anyone seen similar behavior?
I'm doing a lot of Athena stuff at the moment. Calls from client->server and
server->client (which is virtually 100% of the calls) are working, but one
thing is not behaving the way that I would expect, and I think it's something
in the package/module system Athena uses that I haven't yet grokked.
I've attached a minimal test case. Since it actually has a directory
structure, I've tar-gzipped it; sorry for the additional inconvenience.
Directions are in the README file. The only requirement is that you modify
PYTHONPATH before twistd-ing the tac file (actually, it's a .py file, but it
should be a .tac file).
The test case has two LiveElements; in the example these are XCall.A and
XCall.B. XCall.A knows about XCall.B through an import "statement"; that is,
it has a line:
// import XCall.B
It tries to call a method on XCall.B (XCall.B.chain()), but it can't see this
method and throws an exception (which is caught by Nevow).
The imported XCall.B object, however, doesn't seem to have any of the methods
passed to its methods() method in its top-level namespace; instead, these are
all available in the "prototype" attribute.
In other words, I can call XCall.B.prototype.chain(), but not XCall.B.chain().
I understand that this is how JavaScript implements some of its OO-ness, but
the Nevow code itself seems not to do it this way: it just imports and uses
methods directly:
// import Foo.Bar
so I'm confused as to why I can't do the same thing myself.
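For readers more at home in Python than JavaScript, the prototype situation is
loosely analogous to methods living on a class and needing an instance to be
callable (an analogy only, not Athena's actual mechanism):

```python
class Chain(object):
    """Toy class standing in for a JS class whose methods live on its prototype."""
    def chain(self):
        return "chained"

# Calling through the class with no instance fails, much as
# XCall.B.chain() fails while XCall.B.prototype.chain exists:
try:
    Chain.chain()
except TypeError:
    print("chain() needs an instance")

print(Chain().chain())  # calling on an instance works
```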
I'm running Python 2.5, Twisted 2.5.0, and have the same problem on both Nevow
0.9.18 and on trunk head.
Can anyone spot what I'm doing wrong?
I read the following, but I can't find "page.Page":
>>> from nevow.page import Page
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ImportError: cannot import name Page
Where can I find it?
In the xmlrpc-auth-with-auth.wrapper example below, how do I pass the avatar
information to the Example(xmlrpc.XMLRPC) class, so that a method could do
something like:
    return "Welcome " + username
from zope.interface import Interface, implements

from twisted.cred import portal
from twisted.cred import checkers

from twisted.web2 import channel
from twisted.web2 import http
from twisted.web2 import responsecode
from twisted.web2 import server
from twisted.web2 import xmlrpc
from twisted.web2.auth import digest
from twisted.web2.auth import basic
from twisted.web2.auth import wrapper
from twisted.web2.auth.interfaces import IHTTPUser

from twisted.application import service, strports

class HTTPUser(object):
    implements(IHTTPUser)

class HTTPAuthRealm(object):
    implements(portal.IRealm)

    def requestAvatar(self, avatarId, mind, *interfaces):
        if IHTTPUser in interfaces:
            return IHTTPUser, HTTPUser()
        raise NotImplementedError("Only IHTTPUser interface is supported")

class Example(xmlrpc.XMLRPC):
    """An example object to be published."""
    addSlash = True

    def xmlrpc_echo(self, request, x):
        """Return all passed args."""
        return x

    def xmlrpc_add(self, request, a, b):
        """Return sum of arguments."""
        return a + b

portal = portal.Portal(HTTPAuthRealm())
checker = checkers.InMemoryUsernamePasswordDatabaseDontUse(guest='guest123')
portal.registerChecker(checker)

rsrc = Example()
credFactories = (basic.BasicCredentialFactory('My Realm'),
                 digest.DigestCredentialFactory('md5', 'My Realm'))
ifaces = (IHTTPUser,)
root = wrapper.HTTPAuthResource(rsrc, credFactories, portal, ifaces)

site = server.Site(root)
application = service.Application("XML-RPC Auth Demo")
s = strports.service('tcp:8080', channel.HTTPFactory(site))
s.setServiceParent(application)
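As a library-free sketch of the pattern being asked about (toy names, not the
actual twisted.cred/web2 API): the realm can construct a per-user resource
around the avatar, so every handler on that resource can see who logged in:

```python
class Avatar(object):
    """Toy avatar carrying per-user state (hypothetical, not twisted.cred's classes)."""
    def __init__(self, username):
        self.username = username

class ExampleResource(object):
    """Toy resource built around the avatar instead of shared by all users."""
    def __init__(self, avatar):
        self.avatar = avatar

    def xmlrpc_whoami(self):
        # The handler can use the authenticated identity directly.
        return "Welcome " + self.avatar.username

class Realm(object):
    """Toy realm: requestAvatar returns a per-user resource."""
    def requestAvatar(self, avatarId):
        return ExampleResource(Avatar(avatarId))

resource = Realm().requestAvatar("guest")
print(resource.xmlrpc_whoami())  # -> Welcome guest
```

The design choice is that the published object is created per login rather
than once at startup, which is what lets avatar data reach the methods.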
>>>>> "Seb" == Sébastien LELONG <sebastien.lelong(a)dexia-securities.fr> writes:
Seb> FWIW, I have an application running CherryPy over Twisted, using
Seb> PyLucene. Since PyLucene isn't thread-safe, every thread must
Seb> subclass PyLucene.PythonThread, so the underlying Java runtime
Seb> garbage collector is aware of the Python threads upon their creation.
Seb> Without this, the app simply crashes (segfault; see also
Seb> http://chandlerproject.org/PyLucene/ThreadingInPyLucene). Maybe
Seb> there's something about that while twistd is daemonizing the process?
I read those pages at some point, but seeing as my code doesn't use threads
I never paid that much attention.
My app runs through the code where it crashes multiple times before things
die, so I'm not suffering from the kind of segfaulting you saw. From what I
can tell, something is deliberately calling abort(3) when it detects a
problem. It doesn't look like a signal is received or a system call is
failing.
Thanks a lot for the suggestion. I hope I'm understanding you right - that
it's not an issue for me as I'm not using threads directly (but is Twisted?
I'm not sure).
Since I want to write to the same log file from multiple Twisted
processes, I need to know if the log write is atomic.
Reading the Twisted log source, I can see that each log entry is written
using only one write, so the question is whether this operation is always
atomic, even for large buffers.
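For a single write() on a descriptor opened with O_APPEND, POSIX performs the
seek-to-end and the write as one atomic step, so records from different
processes don't interleave on a local filesystem (NFS is a known exception,
and very large buffers may still be split by the kernel). A minimal sketch of
that approach, not Twisted's own log code:

```python
import os
import tempfile

def append_record(path, line):
    """Append one log record using a single write() on an O_APPEND fd."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line.encode("utf-8"))  # one syscall per record
    finally:
        os.close(fd)

# Two writers appending to the same file; each record stays contiguous.
log_path = os.path.join(tempfile.gettempdir(), "append-demo.log")
append_record(log_path, "process A: started\n")
append_record(log_path, "process B: started\n")
```

If a Python-level logger buffers and flushes in several write() calls, or a
record exceeds what the kernel writes in one go, entries from two processes
can still interleave, so keeping records small and using one write each is
the safe pattern.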
Thanks Manlio Perillo
I'm not sure if this belongs in Twisted Python or Twisted Web.
I have a twisted.web2 server that, among other things, makes calls to
PyLucene. Without going into detail, I'd like to ask a general question to
see if there's a high-level explanation for what I'm seeing.
When I run twistd without -n, I can take actions in the web UI that fairly
reliably cause a call into PyLucene that stops the twistd server. I don't
know what's going on, but the server definitely dies (it no longer appears
in ps, and you can't telnet to its port). There's nothing in the log file,
no core file (I have ulimit -c unlimited), nothing. This is on Mac OS X
10.4.9, PyLucene 2.1.0-2, Python 2.5, Twisted trunk revision 20059. When I
stick in debug prints to stderr, I can see that (usually) a call to
PyLucene.IndexWriter.optimize and (sometimes) a call to
PyLucene.IndexReader.open never returns.
The weird thing is that when I run twistd -n using the same .tac file, I
can't make the thing fall over. I can click away to my heart's content in
the application UI, and it all just works. I also tried running twistd
under pydb and the thing wouldn't crash.
While I realize there's probably something up with PyLucene, I'm wondering
if anyone can suggest why running with twistd -n prevents the crashing. What
else does -n change, other than (presumably) skipping the fork that would
otherwise daemonize the server?
| The fork does some other things though, and it's hard to say which one is
| affecting the execution of your program, especially since PyLucene is doing
| a bunch of things at the machine-code level which are highly surprising to
| Python programmers.
I'm too old to be highly surprised any more, and I'm barely a Python
programmer anyway.
| Have you tried running 'strace' on this process yet?
No, I hadn't, thanks. Running ktrace sheds a little light on things. The
final output before it writes the core file is:
7826 Python CALL write(0x2,0xbfff971f,0x18)
7826 Python GIO fd 2 wrote 24 bytes
7826 Python RET write 24/0x18
7826 Python CALL sigprocmask(0x3,0xbfff9b08,0)
7826 Python RET sigprocmask 0
7826 Python CALL kill(0x1e92,0x6)
7826 Python RET kill 0
7826 Python PSIG SIGABRT SIG_DFL
7826 Python NAMI "/cores/core.7826"
Nothing fails before that. The "thread_get_state failed\n" message does
not appear in the twisted server log. The kill is an abort sent to this
process (0x1e92 = 7826, and signal 0x6 is SIGABRT). So something (but not a
system call) has gone wrong and the code has called abort. That at least
explains why the server dies so abruptly.
Running gdb python /cores/core.7826 and using where/bt provides no useful
backtrace:
#0 0x00000000 in _mh_dylib_header ()
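Decoding the two kill() arguments from the trace confirms that reading (a
quick sanity check):

```python
import signal

# From the ktrace line: CALL kill(0x1e92, 0x6)
pid_arg = 0x1e92
sig_arg = 0x6

assert pid_arg == 7826             # the kill target is the process itself
assert sig_arg == signal.SIGABRT   # signal 6 is SIGABRT, what abort(3) raises
```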
From the kdump output, the process is clearly in the middle of doing
PyLucene things (there are a bunch of access calls to files that are in my
PyLucene index directories).
Grepping for 'thread_get_state failed' in Python, Twisted, and PyLucene
gets me just one hit:
$ grep -i 'thread_get_state failed' /usr/local/lib/*
Binary file /usr/local/lib/libgcj.6.dylib matches
Which is a GCJ library file distributed with the Mac OS X binary version of
PyLucene. Googling "thread_get_state failed" gives some leads.
I'll go bug the PyLucene folks now... :-)
Thanks again for the trace suggestion.
>>>>> "JP" == Jean-Paul Calderone <exarkun(a)divmod.com> writes:
JP> A completely wild guess is that forking is confusing PyLucene in a
JP> fatal way. Are you importing PyLucene in the .tac file itself? If so,
JP> it may help to avoid doing this, so that no code from PyLucene even
JP> gets a chance to run until after the process has already daemonized.
Hi JP. Thanks for the suggestion.
Yes, the PyLucene import does happen as a result of an import in the .tac
file. I just made some changes to delay the import until PyLucene is
actually needed. That didn't work, and neither did hiding the import further.
Is it right that all the -n switch to twistd does is prevent the fork?
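Mostly, yes: -n (--nodaemon) keeps twistd in the foreground. But daemonizing
involves a bit more than the fork itself, and each step could plausibly
matter to native code like PyLucene. A hedged sketch of the classic Unix
recipe (not twistd's actual implementation):

```python
import os

def daemonize():
    """Sketch of the classic Unix daemonization steps; twistd's real
    implementation may differ in details."""
    if os.fork() > 0:       # first fork: the original process exits
        os._exit(0)
    os.setsid()             # new session: detach from the controlling tty
    if os.fork() > 0:       # second fork: can never reacquire a tty
        os._exit(0)
    os.chdir("/")           # don't pin the launch directory
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):    # stdio now points at /dev/null, so anything
        os.dup2(devnull, fd)  # printed to stderr after this point vanishes
    os.close(devnull)
```

Note the stdio redirection in particular: once daemonized, a native library's
dying message on stderr silently disappears, which would fit the earlier
observation that "thread_get_state failed" never showed up in the server log.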