hi there, folks:
I'd really like to release 0.7.0 but I would like it to be at least a
little bit tested before I do so. Could those of you with CVS trees check
everything out and see if it performs as advertised? Deeper bugs than
that will have to wait for the next release, but I'd at least like to know
if it works for someone other than me.
On 18 May 2004, the following message was posted to this mailing list:
Jp Calderone exarkun at divmod.com wrote:
>Daniel Newton wrote:
> I have a simple XML-RPC server similar to the example below:
>
>     from twisted.web import xmlrpc, server
>
>     class Example(xmlrpc.XMLRPC):
>         """An example object to be published."""
>
>         def xmlrpc_add(self, a, b):
>             """Return sum of arguments."""
>             return a + b
>
>     if __name__ == '__main__':
>         from twisted.internet import reactor
>         r = Example()
>         reactor.listenTCP(7080, server.Site(r))
>         reactor.run()
> I want to be able to get the address of the client that calls the
> method. Can anyone help me with this?
This solution didn't work because 'transport' isn't a property of the
XMLRPC instance. I'm currently in the process of changing from a
customized SimpleXMLRPCServer to a Twisted XML-RPC server, and I need to
insert the client IP into the arguments passed to the called xmlrpc
method. Does anyone know the answer and is willing to share the info?
I am in charge of writing an application server for a three-tier
architecture system. It will receive requests and data through an XML
protocol agreed upon with the client devs. Basically it will work on a
database and return the results to the client. The way I envision it, a
Twisted server accepts connections from the clients, a coordinator
thread puts the requests in a queue and passes them to free workers, which
in turn, upon completion, place the result in the coordinator thread's
response queue. The weird part is that another system places data in the
database when a specific request comes in, so I have to permanently poll
the database for incoming data. This will be done with a polling thread.
So, my question is: is this kind of architecture good to implement?
(asynchronous server and threaded workers)
I might somehow be missing some important details, but please feel free to ask.
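To make the question concrete, the coordinator/worker part of the design can be sketched with stdlib queues and threads (all names invented). In Twisted itself, twisted.internet.threads.deferToThread or twisted.enterprise.adbapi would replace the hand-rolled pool, and twisted.internet.task.LoopingCall fits the periodic database polling:

```python
import queue
import threading

def worker(requests, responses):
    """Worker thread: take a request, do the (blocking) database work,
    and put the result on the coordinator's response queue."""
    while True:
        req = requests.get()
        if req is None:              # sentinel: shut down
            requests.task_done()
            break
        result = ("done", req)       # stand-in for the real database query
        responses.put(result)
        requests.task_done()

requests = queue.Queue()
responses = queue.Queue()
workers = [threading.Thread(target=worker, args=(requests, responses))
           for _ in range(4)]
for t in workers:
    t.start()

for i in range(10):                  # the coordinator enqueues requests
    requests.put(i)
requests.join()                      # wait until all requests are processed

for _ in workers:
    requests.put(None)               # stop the workers
for t in workers:
    t.join()

results = []
while not responses.empty():
    results.append(responses.get())
```

Whether this is "good" depends mostly on the database driver: if it blocks, threads (or adbapi's thread pool) are the standard answer alongside an asynchronous Twisted front end.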
On Sat, Mar 1, 2008 at 9:02 AM, Thomas Herve wrote:
> @@ -1241,6 +1242,13 @@
> return result
> + def clear(self):
> + """
> + Remove all previously added tests.
> + """
> + self._tests = []
> class TestDecorator(components.proxyForInterface(itrial.ITestCase,
> @@ -1293,8 +1301,8 @@
> # Originally, we recreated the suite by calling test.__class__. The problem
> # was that the old suite kept references to test instances, which turns out
> # to never be free. Now we remove the original references by emptying the
> - # _tests list.
> - test._tests = []
> + # list of tests.
> + test.clear()
What's your plan for making sure this works with stdlib TestSuite?
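One defensive pattern (a sketch, not taken from the patch): treat clear() as optional and fall back to emptying _tests, so code that handles a mix of trial suites and plain stdlib unittest.TestSuite instances keeps working:

```python
import unittest

def clear_suite(suite):
    """Empty a test suite: use clear() when the suite provides it (as
    the proposed trial change does), otherwise fall back to resetting
    the private _tests list of a plain unittest.TestSuite."""
    if hasattr(suite, "clear"):
        suite.clear()
    else:
        suite._tests = []

class Example(unittest.TestCase):
    def test_something(self):
        pass

suite = unittest.TestSuite([Example("test_something")])
clear_suite(suite)
```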
I would also be very grateful for feedback from the twisted point of view.
---------- Forwarded message ----------
From: Marc Byrd <dr.marc.byrd(a)gmail.com>
Date: Fri, Feb 29, 2008 at 10:17 AM
Subject: Seeking Validation - search web service using memcached
I'm looking for some validation for some work I've done for a client, and
I'm open to criticism ("mock me" ? ;^), relevant awareness of similar
projects, and alternatives.
When I looked around in about September 2007 for a good scalable search
solution for Ruby on Rails, I found the choices lacking. Firstly, none of
the solutions seemed to have an option for keeping the reverse indices
in memory across however many machines I might like to store them on.
Secondly, many of the solutions seemed too general-purpose and heavyweight
for my client's needs (which are basically to search for items from the db,
based on tags). But without addressing the first concern, I felt that
anything I implemented would not scale to the customer's needs and
aspirations, and that for such an investment, virtually unlimited scale
would be mandatory.
Therefore I looked at memcached - well-proven on many large-scale sites for
caching, but to my knowledge not used in search. Note that memcached uses
an approach wherein the clients all calculate a server based on a given key,
such that no central (scale-limiting) controller is required. Having chosen
memcached, I next attempted to use various memcached connectors into RoR. I
found them at the time (Oct 2007 or so) to be slow and buggy; it didn't take
more than a couple of instances of totally corrupting the entire cache to turn
my attention away from a Ruby approach to using memcached. Meanwhile, I knew
from prior experience that the python client for memcached was both fast and
reliable. The python memcached client was routinely 3x faster for the tests
I ran. Python also seems to be quite fast at set operations.
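For readers unfamiliar with the scheme: memcached clients map each key to a server using nothing but a hash of the key, so every client agrees on the mapping without any coordination. A toy sketch (server names invented; real clients use consistent hashing so that adding a server doesn't remap most keys):

```python
import hashlib

SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]  # invented names

def server_for(key, servers=SERVERS):
    """Pick a server from the key alone.  Every client computes the
    same mapping, so no central (scale-limiting) controller exists."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

The same key always lands on the same server, which is what makes distributed reverse indices addressable from any client.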
Getting to the punchline, I used Python and memcached, wrapped in Twisted,
to provide a RESTful web service API, which is called from RoR to get ALL of
the information needed to render search results. The API has been extended
to allow the Ruby code to "fire and forget" new indexing info onto a deque
(fifo queue), which is processed by a loosely-coupled daemon - overhead to
Ruby is about 20ms.
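The fire-and-forget half can be sketched with a deque drained by a loosely-coupled daemon thread (a stdlib toy, all names invented; the producer only pays for an append, which is where the ~20ms overhead comes from):

```python
import collections
import threading
import time

index_queue = collections.deque()   # FIFO: append right, pop left
indexed = []                        # stand-in for the real reverse index

def index_daemon(stop):
    """Loosely-coupled consumer: drain the queue, index each item,
    and keep going until asked to stop AND the queue is empty."""
    while not stop.is_set() or index_queue:
        try:
            item = index_queue.popleft()
        except IndexError:
            time.sleep(0.01)        # nothing queued yet; back off briefly
            continue
        indexed.append(item)        # real code would update the index here

stop = threading.Event()
t = threading.Thread(target=index_daemon, args=(stop,), daemon=True)
t.start()

# "fire and forget": producers just append and move on
for doc in ["a", "b", "c"]:
    index_queue.append(doc)

stop.set()
t.join()
```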
Prior to this approach, the client was using MyISAM full text search.
Search results were 10s for smaller search terms (5000 uses), and 20+s for
larger search terms (100k+ uses).
With the web service, the search results are routinely returned in 1-2
seconds, and the web service itself returns results to RoR within
100-200ms. Indexing is a challenge - the rank score needs to be updated
upon each viewing, but I've now gotten that to be almost real-time (5
minutes max). Plus I can re-index the entire database of 1M+ items in about
8 hours. The index is backed up nightly in case of a memcached server
failure (we're using 3). In addition to search, the search web service is
used for relatedness and for something like bookmarks.
So, is there anything out there that can touch these results and provide for
virtually unlimited scale (no central controller)?
Thanks in advance,
PS: Because of leaks in rmagick and its inferior performance compared to
the Python Imaging Library, I'm also considering a similar approach for
generating many different sizes of fairly large (10MB) images. A similar
fire and forget web service approach could be used to minimize the impact on
the RoR side. Early tests show a 10x speed improvement (even without the
fire and forget). Any thoughts there?
My project teammate has a MacBook Pro nearly identical to mine (we're
both running Leopard 10.5.2), but installing Twisted 2.5.0 from
source (sudo python setup.py install) succeeds on my machine, while on
his machine it complains about a missing Python.h file. Does anyone
know what package provides that Python.h file? Is it one of the
packages on the Leopard DVD?
Is the AMP protocol bidirectional (beyond the response you get to each
message)? I'm really new to Twisted, and I followed the ampserver.py
and ampclient.py examples to get an AMP connection working, because
someone suggested that to me as an easy protocol to start with.
Although sending a message from the client to the server works fine, I
can't figure out how to initiate a message from the server to the
client over the connection that the client initially created.
I'm working on a little project where the "server" and the "client"
actually need to talk more like peers where each can initiate a
message to the other. The server is outside the NAT, so it'd be best
if the communication could happen both ways over the connection that
the client initiates.
Could someone point me in the right direction? I can provide my
existing code if that makes any difference.
Hi Steve et al:
> There are signs that there's going (again, hooray!)
> to be a concerted Twisted presence at PyCon 2008. I
> am hoping to demonstrate my ignorance publicly and
> learn something about Twisted by running one (or
> more) Open Space sessions entitled "Teach Me
> Twisted", that reverse the usual flow.
I am happy to share my ignorance....
Steve, right now I don't see a reservation for Twisted
in either the "Birds of a Feather" or "Open Spaces." I
would recommend you book a space soon.
Also, I am not sure how the "open spaces" work. Is it
another talk? Or is it a bunch of one-on-ones, or one
to a few?
As I suggested in private, perhaps it would be best to
survey a few topics that interest folks. In turn, find
volunteers that know those areas and are willing to
talk about them.
I have a small question. I have a service which needs to sometimes send
data (without having received any prior to sending) and sometimes
receive data. Which is better:
1) create a factory that inherits from ServerFactory and ClientFactory,
thus it can listen and send data
2) create a factory that inherits from ServerFactory only and uses a
single-use client (ClientCreator, as shown in the writing clients howto)
when it needs to send data