[Twisted-Python] Components
So I want to merge the changes from Nevow that allow lazy adapter registration via string arguments to registerAdapter:

    registerAdapter("nevow.freeform.StringRenderer",
                    "nevow.formless.String",
                    "nevow.freeform.ITypedRenderer")

However, doing this necessarily means the removal of the implicit super-interface feature, where twisted automatically registers StringRenderer for all super-interfaces of ITypedRenderer. As there is no way to look up the super-interfaces of a string at registerAdapter time (without loading up the module), this feature cannot stay.

I also doubt that's actually used by anyone. I can't think of a reason why I'd want to do that. The test suite doesn't even test it: all tests pass after removing it. There is a test for the behavior of the components.superInterfaces function, but not for actually registering adapters.

Also, the current behavior is pretty much arbitrary and wrong:
from twisted.python.components import *
    class IFoo(Interface): pass
    class IBaz(IFoo): pass
    class MyIFooAdapter(Adapter): pass
    class MyIBazAdapter(Adapter): pass

    registerAdapter(MyIBazAdapter, str, IBaz)
    registerAdapter(MyIFooAdapter, str, IFoo)

Results in:
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "twisted/python/components.py", line 214, in registerAdapter
        raise ValueError(
    ValueError: an adapter (__main__.MyIBazAdapter) was already registered.
However,

    registerAdapter(MyIFooAdapter, str, IFoo)
    registerAdapter(MyIBazAdapter, str, IBaz)

works fine. Depending on the order of adapter registration seems quite fragile to me. The only proper way to handle this would be to have a concept of 'depth' a la PyProtocols.

James
James Y Knight wrote:
So I want to merge the changes from Nevow that allow lazy adapter registration via string arguments to registerAdapter: registerAdapter("nevow.freeform.StringRenderer","nevow.formless.String", "nevow.freeform.ITypedRenderer")
However, doing this necessarily means the removal of the implicit super-interface feature, where twisted automatically registers StringRenderer for all super-interfaces of ITypedRenderer. As there is no way to look up the super-interfaces of a string at registerAdapter time (without loading up the module), this feature cannot stay.
I also doubt that's actually used by anyone. I can't think of a reason why I'd want to do that. The test suite doesn't even test it: all tests pass after removing it. There is a test for the behavior of the components.superInterfaces function, but not for actually registering adapters.
Maybe this was the dubious failure that I've heard rumoured in NewReality when last someone tried to implement this feature. It'd be really nice if glyph, or whoever tried implementing it last, would explain exactly what backwards-incompatibility problem they ran into.

--
 Twisted | Christopher Armstrong: International Man of Twistery
  Radix  | Release Manager, Twisted Project
---------+ http://radix.twistedmatrix.com/
On Thu, 2004-02-26 at 12:09, Christopher Armstrong wrote:
Maybe this was the dubious failure that I've heard rumoured in NewReality when last someone tried to implement this feature. It'd be really nice if glyph or whoever tried implementing it last would explain what exactly the backwards-incompatibility problem they ran into was.
I think James makes a compelling argument: if NewReality was the only thing stopping this fix from going through, forget about it. But still, make sure that the first version that changes this behaviour bumps at least a minor version number - a major reason I've been afraid of changes like this is that folks expect micro-versions to behave _exactly_ the same.
Glyph Lefkowitz wrote:
I think James makes a compelling argument: if NewReality was the only thing stopping this fix from going through, forget about it.
But still, make sure that the first version that changes this behaviour bumps at least a minor version number - a major reason I've been afraid of changes like this is that folks expect micro-versions to behave _exactly_ the same.
Cool! Sounds like a good idea. It just means 1.3 might be sooner than later :-)
On Feb 26, 2004, at 12:19 PM, Christopher Armstrong wrote:
Glyph Lefkowitz wrote:
I think James makes a compelling argument: if NewReality was the only thing stopping this fix from going through, forget about it. But still, make sure that the first version that changes this behaviour bumps at least a minor version number - a major reason I've been afraid of changes like this is that folks expect micro-versions to behave _exactly_ the same.
Cool! Sounds like a good idea. It just means 1.3 might be sooner than later :-)
Another few things:

- Does getRegistry's context.get(AdapterRegistry, theAdapterRegistry) actually have a use? It seems like it might just be slowing down the process without any gain.
- Why hasn't AdapterRegistry.getAdapterClassWithInheritance been deprecated yet? Is it because Componentized.locateAdapterClass calls it? But then, why does Componentized.locateAdapterClass exist?
- What's Adapter.isuper? Yes, it forwards itself, but what *IS* it?
- The persist argument is kind of strange. It's a tristate: persist=None means look up in the persistence table but don't add the newly created adapter to the table if one wasn't found; persist=False means don't look up and don't add; persist=True means look up and add if necessary. Is that complication actually necessary? Also, it doesn't seem to be documented.

James
James Y Knight wrote:
On Feb 26, 2004, at 12:19 PM, Christopher Armstrong wrote:
Glyph Lefkowitz wrote:
I think James makes a compelling argument: if NewReality was the only thing stopping this fix from going through, forget about it. But still, make sure that the first version that changes this behaviour bumps at least a minor version number - a major reason I've been afraid of changes like this is that folks expect micro-versions to behave _exactly_ the same.
Cool! Sounds like a good idea. It just means 1.3 might be sooner than later :-)
Another few things:

- Does getRegistry's context.get(AdapterRegistry, theAdapterRegistry) actually have a use? It seems like it might just be slowing down the process without any gain.
yeeees! I really like this feature. For example, people who want to override the way, say, a String input box is rendered for a formless UI rendered with gtk2form (freeform, too, probably), can implement an alternative IStringWidget implementor, put it in their own AdapterRegistry, and stick it on the context. Then their own widget will be used instead of the default, without touching anything in gtk2form.
On Feb 26, 2004, at 2:22 PM, Christopher Armstrong wrote:
yeeees! I really like this feature. For example, people who want to override the way, say, a String input box is rendered for a formless UI rendered with gtk2form (freeform, too, probably), can implement an alternative IStringWidget implementor, put it in their own AdapterRegistry, and stick it on the context. Then their own widget will be used instead of the default, without touching anything in gtk2form.
Do you like it enough to have ever used it? Because it doesn't work usefully, AFAICT. You've made an entire new AdapterRegistry with entirely new adapterRegistry and adapterPersistance dicts. So you can't just override one adapter... Perhaps if there were an InheritingAdapterRegistry, or a copy method on AdapterRegistry...

But really, what's the point? You can always make a subtype of IStringWidget.

This page <http://peak.telecommunity.com/protocol_ref/protocols-context.html> describes how one would do context-specific protocols with PyProtocols, which *doesn't* seem to allow multiple registries. Basically, it seems to say: use a per-instance subprotocol. It may not be directly applicable to twisted, because of the lack of chained adaptation (A->B and B->C gives you A->C automatically), but it does seem to me like a nice way to go about it.

James
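The InheritingAdapterRegistry James wishes existed is easy to picture. Here is a hypothetical sketch (neither class exists in twisted.python.components in this form): a child registry consults its own table first and falls back to a parent, so the gtk2form use case of overriding a single widget adapter would work without copying the whole default registry.

```python
# Hypothetical sketch -- neither class exists in t.p.components in this
# form. A child registry answers from its own table first and falls back
# to its parent, so one adapter can be overridden without copying the rest.

class AdapterRegistry:
    def __init__(self):
        self.adapterRegistry = {}

    def register(self, adapter, origin, interface):
        self.adapterRegistry[(origin, interface)] = adapter

    def lookup(self, origin, interface):
        return self.adapterRegistry.get((origin, interface))

class InheritingAdapterRegistry(AdapterRegistry):
    def __init__(self, parent):
        AdapterRegistry.__init__(self)
        self.parent = parent

    def lookup(self, origin, interface):
        found = AdapterRegistry.lookup(self, origin, interface)
        if found is None:
            found = self.parent.lookup(origin, interface)
        return found

# The gtk2form scenario: override only the string widget, inherit the rest.
defaults = AdapterRegistry()
defaults.register("DefaultStringWidget", str, "IStringWidget")
defaults.register("DefaultIntWidget", int, "IIntWidget")

mine = InheritingAdapterRegistry(defaults)
mine.register("MyStringWidget", str, "IStringWidget")
```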
James Y Knight wrote:
But really, what's the point? You can always make a subtype of IStringWidget..
This page <http://peak.telecommunity.com/protocol_ref/protocols-context.html> describes how one would do context-specific protocols with PyProtocols, [etc]
I'm fine with using PyProtocols, but what did you mean about creating a subtype of IStringWidget?
James Y Knight wrote:
This page <http://peak.telecommunity.com/protocol_ref/protocols-context.html> describes how one would do context-specific protocols with PyProtocols, which *doesn't* seem to allow multiple registries. Basically, it seems to say: use a per-instance subprotocol. It may not be directly applicable to twisted, because of the lack of chained adaptation (A->B and B->C gives you A->C automatically), but it does seem to me like a nice way to go about it.
Hmm... I'm reading that page, but I can't tell if it requires the framework code to do special things to support context-specific adapters. If it does, is there any way to implement context-specific adapters that _don't_ require framework code to know about them? i.e., so plain old adaptation calls will look in the context for adapters?

But yeah, if PyProtocols does everything we need and there's a good chance it will support our future crazy ideas, I think it's a good idea to switch. We'd probably wanna bump to 2.0 for such a switch, maybe?

Of course, IIRC glyph mentioned it being slow, and I think he doesn't like the implicit adaptation. ?
On Feb 26, 2004, at 3:22 PM, Christopher Armstrong wrote:
James Y Knight wrote:
This page <http://peak.telecommunity.com/protocol_ref/protocols-context.html> describes how one would do context-specific protocols with PyProtocols, which *doesn't* seem to allow multiple registries. Basically, it seems to say: use a per-instance subprotocol. It may not be directly applicable to twisted, because of the lack of chained adaptation (A->B and B->C gives you A->C automatically), but it does seem to me like a nice way to go about it.
Hmm... I'm reading that page, but I can't tell if it requires the framework code to do special things to support context-specific adapters. If it does, is there any way to implement context-specific adapters that _don't_ require framework-code to know about them? i.e., so plain old adaptation calls will look in the context for adapters?
But yeah, if PyProtocols does everything we need and there's a good chance it will support our future crazy ideas, I think it's a good idea to switch. We'd probably wanna bump to 2.0 for such a switch. maybe?
Of course, IIRC glyph mentioned it being slow, and I think he doesn't like the implicit adaptation. ?
Did he benchmark it? Last I checked, t.p.components and January molasses are neck and neck ;) PyProtocols has written-in-C code for acceleration; I bet it wins, even though it does do more work (which I believe happens *up front*, so doesn't really affect runtime performance) with transitive adaptation.

-bob
At 03:41 PM 2/26/04 -0500, Bob Ippolito wrote:
PyProtocols has written-in-C code for acceleration, I bet it wins, even though it does do more work (which I believe happens *up front*, so doesn't really affect runtime performance) with transitive adaptation.
Adapter lookup time in PyProtocols is roughly equivalent to 5 attribute lookups: __conform__, __adapt__, __class__, __mro__ or __bases__, and finally the adapter itself. All of the transitive computation occurs up front, so the actual adapter lookup consists of looking up each class in the adaptee's __mro__ in a single dictionary. (IOW, it's basically the same as a Python attribute lookup, which walks the __mro__ and does a dictionary lookup for each base class until the attribute is found.) Adapter execution time is of course dependent on what the adapter does. :)

The C code achieved about a 2-4x speedup over pure Python, mainly by cutting out function call and loop overhead, not by changing the fundamental algorithm, which can't really be improved upon without making assumptions I didn't want to make. For example, if I were willing to assume that classes would never have their __bases__ or __mro__ changed, I could cache lookup results and thus occasionally save a few dictionary lookups. However, for a general-purpose Python tool, I thought it better to support as much of Python's dynamic nature as practical. (I *do* assume that protocol relationships are not dynamic, however.)
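The lookup Phillip describes can be reduced to a toy model. This sketch uses invented names and omits the __conform__/__adapt__ probes that real PyProtocols performs first; it only shows why adaptation costs about one dictionary probe per base class once all the transitive reasoning has been done at declaration time.

```python
# Toy model of the lookup described above (invented names; real
# PyProtocols also consults __conform__ and __adapt__ before this).
# Declaration time does all the hard work; adapt() is just a walk of the
# adaptee's __mro__ with one dictionary probe per base class.

_adapters = {}  # (class, protocol) -> adapter callable

def declare_adapter(factory, for_type, protocol):
    _adapters[(for_type, protocol)] = factory

def adapt(obj, protocol):
    for cls in type(obj).__mro__:          # same shape as attribute lookup
        factory = _adapters.get((cls, protocol))
        if factory is not None:
            return factory(obj)
    raise TypeError("can't adapt %r to %s" % (obj, protocol))

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

declare_adapter(lambda c: c.degrees * 9 / 5 + 32, Celsius, "IFahrenheit")
```

A subclass of Celsius adapts too, since the loop reaches Celsius through the subclass's __mro__ — the same way attribute lookup finds inherited methods.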
On Feb 26, 2004, at 3:22 PM, Christopher Armstrong wrote:
James Y Knight wrote:
This page <http://peak.telecommunity.com/protocol_ref/protocols-context.html> describes how one would do context-specific protocols with PyProtocols, which *doesn't* seem to allow multiple registries. Basically, it seems to say: use a per-instance subprotocol. It may not be directly applicable to twisted, because of the lack of chained adaptation (A->B and B->C gives you A->C automatically), but it does seem to me like a nice way to go about it.
Hmm... I'm reading that page, but I can't tell if it requires the framework code to do special things to support context-specific adapters. If it does, is there any way to implement context-specific adapters that _don't_ require framework-code to know about them? i.e., so plain old adaptation calls will look in the context for adapters?
I think that all of the cases where Twisted is using contextual adaptation are better solved by peak.binding anyway... in the few, if any, cases where peak.binding is not better, the context-specific adapters in the previously mentioned documentation should be enough to do it.

In any case, because of __conform__, __adapt__, etc., you can really do Whatever You Want with PyProtocols... even if that means implementing the awkward-to-use component registry stack junk :)

-bob
On Feb 26, 2004, at 4:11 PM, Bob Ippolito wrote:
On Feb 26, 2004, at 3:22 PM, Christopher Armstrong wrote:
James Y Knight wrote:
This page <http://peak.telecommunity.com/protocol_ref/protocols-context.html> describes how one would do context-specific protocols with PyProtocols, which *doesn't* seem to allow multiple registries. Basically, it seems to say: use a per-instance subprotocol. It may not be directly applicable to twisted, because of the lack of chained adaptation (A->B and B->C gives you A->C automatically), but it does seem to me like a nice way to go about it.
Hmm... I'm reading that page, but I can't tell if it requires the framework code to do special things to support context-specific adapters. If it does, is there any way to implement context-specific adapters that _don't_ require framework-code to know about them? i.e., so plain old adaptation calls will look in the context for adapters?
I think that all of the cases where Twisted is using contextual adaptation are better solved by peak.binding anyway.. in the few, if any, cases where peak.binding is not better, then the context-specific adapters in the previously mentioned documentation should be enough to do it.
In any case, because of __conform__, __adapt__, etc. you can really do Whatever You Want with PyProtocols.. even if that means implementing the awkward to use component registry stack junk :)
And as James pointed out in another message, the context-aware registry currently implemented in t.p.c does not seem to do chaining, so radix' use case would not be implementable right now... so it would be work to do it the way he wants, whether it's fixing up t.p.c to do it or PyProtocols. dp
On Feb 26, 2004, at 2:16 PM, James Y Knight wrote:
On Feb 26, 2004, at 12:19 PM, Christopher Armstrong wrote:
Glyph Lefkowitz wrote:
I think James makes a compelling argument: if NewReality was the only thing stopping this fix from going through, forget about it. But still, make sure that the first version that changes this behaviour bumps at least a minor version number - a major reason I've been afraid of changes like this is that folks expect micro-versions to behave _exactly_ the same.
Cool! Sounds like a good idea. It just means 1.3 might be sooner than later :-)
Another few things:

- Does getRegistry's context.get(AdapterRegistry, theAdapterRegistry) actually have a use? It seems like it might just be slowing down the process without any gain.
- Why hasn't AdapterRegistry.getAdapterClassWithInheritance been deprecated yet? Is it because Componentized.locateAdapterClass calls it? But then, why does Componentized.locateAdapterClass exist?
- What's Adapter.isuper? Yes, it forwards itself, but what *IS* it?
- The persist argument is kind of strange. It's a tristate: persist=None means look up in the persistence table but don't add the newly created adapter to the table if one wasn't found; persist=False means don't look up and don't add; persist=True means look up and add if necessary. Is that complication actually necessary? Also, it doesn't seem to be documented.
How about just migrating off of t.p.components and switching to PyProtocols? The license is compatible, it has PEP-backing, it is a superset of what t.p.components does, has optional Pyrex acceleration, and is compatible with Interfaces from itself, t.p.components, and zope.components. The default Interface implementation does not support the __call__ adaptation that we all know and love, but it is actually generic enough to allow it. As a side note, PEAK's peak.util.imports.whenImported hooks, importString, and lazyModule might also be pretty useful to Twisted (among so many other things). -bob
At 02:43 PM 2/26/04 -0500, Bob Ippolito wrote:
How about just migrating off of t.p.components and switching to PyProtocols? The license is compatible, it has PEP-backing, it is a superset of what t.p.components does, has optional Pyrex acceleration, and is compatible with Interfaces from itself, t.p.components, and zope.components. The default Interface implementation does not support the __call__ adaptation that we all know and love, but it is actually generic enough to allow it.
Yep, just subclass InterfaceClass and add a __call__ method that calls back to adapt() and you'd be all set. One other benefit to PyProtocols is that you can track what interfaces an *instance* supports, independent of what its class supports. The more controversial aspect, however, is transitive adaptation. IIRC, some of the core Twisted developers consider implicit transitive adaptation to be dangerous and/or evil. However, that could possibly also be dealt with in a protocols.Interface subclass by disabling the parts that support that. There might be some other issues that could come up, but I'm definitely willing to try to "adapt" to Twisted's needs in these areas, especially if it means I could get rid of PyProtocols' wrapper code and wrapping tests for Twisted's existing interface class. :)
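The InterfaceClass-with-__call__ trick Phillip mentions can be shown without PyProtocols at all. In this stand-alone sketch (all names hypothetical; with PyProtocols the real hook point would be a protocols.InterfaceClass subclass whose __call__ delegates to protocols.adapt), the interface's metaclass makes IFoo(obj) spell adapt(obj, IFoo):

```python
# Stand-alone sketch of a callable interface (hypothetical names; with
# PyProtocols the real hook would be an InterfaceClass subclass whose
# __call__ delegates to protocols.adapt). Calling the interface adapts.

_adapters = {}  # (class, interface) -> adapter factory

def adapt(obj, iface):
    for cls in type(obj).__mro__:
        factory = _adapters.get((cls, iface))
        if factory is not None:
            return factory(obj)
    raise TypeError("no adapter from %r to %r" % (obj, iface))

class CallableInterfaceClass(type):
    def __call__(iface, obj):
        # IGreeting(obj) is just sugar for adapt(obj, IGreeting).
        return adapt(obj, iface)

class IGreeting(metaclass=CallableInterfaceClass):
    pass

class Name(str):
    pass

_adapters[(Name, IGreeting)] = lambda n: "hello, %s" % (n,)
```

Since the metaclass intercepts the call, IGreeting(Name("world")) never tries to instantiate the interface; it goes straight to adapter lookup, which is the __call__ adaptation Twisted users expect.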
On Thu, 2004-02-26 at 14:59, Phillip J. Eby wrote:
At 02:43 PM 2/26/04 -0500, Bob Ippolito wrote:
How about just migrating off of t.p.components and switching to PyProtocols? The license is compatible, it has PEP-backing, it is a superset of what t.p.components does, has optional Pyrex acceleration, and is compatible with Interfaces from itself, t.p.components, and zope.components. The default Interface implementation does not support the __call__ adaptation that we all know and love, but it is actually generic enough to allow it.
Yep, just subclass InterfaceClass and add a __call__ method that calls back to adapt() and you'd be all set. One other benefit to PyProtocols is that you can track what interfaces an *instance* supports, independent of what its class supports.
Would you consider adding this to PyProtocols directly? If we're going to maintain a component system as part of Twisted, I would like to at least get the benefit of having full control over the component system within Twisted. I don't want to have some people using PyProtocols and others using PyProtocols+TwistedHacks.

There are a lot of features I'd add to Twisted's component system if I had time, such as:

- implicit context-dependent location of closest running t.a.service services by interface
- interface-based context (moshez's context trick)
- automatic generation of interfaces from any class
- IComponentized
- context / interface based log separations

And of course, integrating foom's string-based components would be great too. There is a lot of friction even to add something like this to Twisted. I imagine that adding something like this to PyProtocols, with potentially more projects out there depending on the exact specifics of all its semantics, would be even worse.

The other alternative is to add a bunch of specific hacks to PyProtocols that get loaded only when Twisted gets loaded, which could potentially introduce compatibility problems with other PyProtocols-using code, which would sort of invalidate the whole point of using a common components system in the first place.

Then we have the issue of the PyProtocols dependency; dependency management can be quite hairy on Windows.
The more controversial aspect, however, is transitive adaptation. IIRC, some of the core Twisted developers consider implicit transitive adaptation to be dangerous and/or evil. However, that could possibly also be dealt with in a protocols.Interface subclass by disabling the parts that support that.
Before veering off into this, I'd like to retract any comments about performance. I tried PyProtocols a while ago, on some simple cases, and it was pretty slow compared to Twisted's dirt-simple code. I don't think I was using the C code path, but it was more than a 2-4x difference at the time. I will test again with more reasonable cases before saying anything more specific. I bet Twisted has gotten slower and PyProtocols has gotten faster since then, though.

Parts of PyProtocols do strike me as dangerous, evil, and overcomplex, though :) In particular: http://peak.telecommunity.com/protocol_ref/proto-implication.html

The idea of passing numeric priorities for different implementations has always seemed deeply wrong to me. I have worked with one or two systems like this in the past (some MUD code in C++) where, inevitably, someone will want to make the 'real' default adapter for interface X; then someone else will want to make the 'really real' default adapter. Different developers will eventually keep trying to write comparison methods that leapfrog each other backwards to get to the correct result for last-most-from-greater-than-everything, which turns into a bug-ridden mess (and it's never really clear who should be "winning" this race to be the final overrider anyway). More importantly, I don't really understand if that's in fact what the 'depth' value is used for, because my eyes glaze over halfway through the above web page :)

PyProtocols feels to me like it's gone out of even the upper levels of abstraction that the Twisted team is used to inhabiting, straight into the Zopeosphere... 4000 lines of code related to components, whereas t.p.components has 300? It worries me.

Maybe I'm alone in these concerns, though. Does anyone else feel that depending on PyProtocols would increase the learning curve for Twisted even more? Or is this common knowledge in the Python community that could be leveraged to actually make the curve shallower? I can certainly learn to wrap my head around the whole thing if nobody else has trouble with it :)
There might be some other issues that could come up, but I'm definitely willing to try to "adapt" to Twisted's needs in these areas, especially if it means I could get rid of PyProtocols' wrapper code and wrapping tests for Twisted's existing interface class. :)
This is clearly something that we need to talk about more. As many silly disagreements about design as I can come up with, a common components system would be beneficial to everyone involved. Are you coming to PyCon? :)
Glyph Lefkowitz wrote:
On Thu, 2004-02-26 at 14:59, Phillip J. Eby wrote:
Yep, just subclass InterfaceClass and add a __call__ method that calls back to adapt() and you'd be all set. One other benefit to PyProtocols is that you can track what interfaces an *instance* supports, independent of what its class supports.
Would you consider adding this to PyProtocols directly? If we're going to maintain a component system as part of Twisted, I would like to at least get the benefit of having full control over the component system within Twisted. I don't want to have some people using PyProtocols and others using PyProtocols+TwistedHacks.
Interface calling isn't that big of a deal. It just means for *parameterized* interfaces, we can't rely on calling (if we want to support non-Twisted interfaces there). But when we're adapting to a particular interface that we know subclasses our CallableInterface, we can call it.
There are a lot of features I'd add to Twisted's component system if I had time, such as:
In the following, "components system" = "PyProtocols".
- implicit context-dependent location of closest running t.a.service services by interface
Doesn't require changes to the components system
- interface-based context (moshez's context trick)
Doesn't require changes to the components system
- automatic generation of interfaces from any class
Doesn't require changes to the components system (afaict)
- IComponentized
Hmm, what exactly is the use case for this anyway? I thought it was related to string registration of adapters. It doesn't sound like it really requires changes to the components system, either; Componentized seems like it could easily be implemented on top of PyProtocols.
- context / interface based log separations
I don't really understand this, but it doesn't sound like it requires changes to the components system.
And of course, integrating foom's string-based components would be great too. There is a lot of friction even to add something like this to Twisted. I imagine that adding something like this to PyProtocols, with potentially more projects out there depending on the exact specifics of all its semantics, would be even worse.
Well, Bob said in another message that he had already done string-registration of adapters successfully. (?)
The other alternative is to add a bunch of specific hacks to PyProtocols that get loaded only when Twisted gets loaded, which could potentially introduce compatibility problems with other PyProtocols-using code, which would sort of invalidate the whole point of using a common components system in the first place.
My previous retorts make this point of yours irrelevant for the moment, so I will not respond: if you want, you may debate my retorts and we can get to this point once we have worked those out.[1] [1] I need to figure out a succinct way to say this. Maybe some Latin crap. It's way too verbose. spiv suggested "ad nauseum" when I asked him... Better look that one up. ;-)
Then we have the issue of the PyProtocols dependency; dependency management can be quite hairy on windows.
Hmm, yeah, I assumed we would include our own (hopefully unhacked) copy of PyProtocols in releases.
Maybe I'm alone in these concerns, though. Does anyone else feel that depending on PyProtocols would increase the learning curve for Twisted even more? Or is this common knowledge in the Python community that could be leveraged to actually make the curve shallower? I can certainly learn to wrap my head around the whole thing if nobody else has trouble with it :)
I kinda hold this reservation. PyProtocols is VERY complex compared to the t.p.c components system. But maintaining our own inadequate version is pretty sucky.
There might be some other issues that could come up, but I'm definitely willing to try to "adapt" to Twisted's needs in these areas, especially if it means I could get rid of PyProtocols' wrapper code and wrapping tests for Twisted's existing interface class. :)
This is clearly something that we need to talk about more. As many silly disagreements about design as I can come up with, a common components system would be beneficial to everyone involved. Are you coming to PyCon? :)
PyCon! PyCon! Everybody come to PyCon!
On Feb 26, 2004, at 6:51 PM, Christopher Armstrong wrote:
Glyph Lefkowitz wrote:
And of course, integrating foom's string-based components would be great too. There is a lot of friction even to add something like this to Twisted. I imagine that adding something like this to PyProtocols, with potentially more projects out there depending on the exact specifics of all its semantics, would be even worse.
Well, Bob said in another message that he had already done string-registration of adapters successfully. (?)
Here's an example:

    from protocols import *
    # does not depend on other PEAK stuff.. can easily be stolen
    import peak.utils.imports as imports

    mxDateTime = imports.lazyModule('mx.DateTime')
    datetime = imports.lazyModule('datetime')

    def _hook_mx_DateTime(mxDateTime):
        declareAdapter(
            NO_ADAPTER_NEEDED,
            provides=[IBasicDate],
            forTypes=[mxDateTime.DateTimeType],
        )

        #datetime = imports.lazyModule('datetime')
        def convertToDateTime(obj, protocol):
            msec, second = math.modf(obj.second)
            # handle tzinfo ?
            return datetime.datetime(
                obj.year, obj.month, obj.day,
                obj.hour, obj.minute, int(obj.second),
                int(msec * 1e6), None
            )

        declareAdapter(
            convertToDateTime,
            provides=[IBasicDateTime],
            forTypes=[mxDateTime.DateTimeType],
        )

    imports.whenImported('mx.DateTime.mxDateTime.mxDateTime', _hook_mx_DateTime)

    def _hook_datetime(datetime):
        declareAdapter(
            NO_ADAPTER_NEEDED,
            provides=[IBasicDate],
            forTypes=[datetime.date],
        )
        declareAdapter(
            NO_ADAPTER_NEEDED,
            provides=[IBasicDateTime],
            forTypes=[datetime.datetime],
        )

    imports.whenImported('datetime', _hook_datetime)

This could of course be automated a bit more, and there is probably even a better way.. but this was the first thing I stumbled across without thoroughly reading documentation or asking anyone :)
The other alternative is to add a bunch of specific hacks to PyProtocols that get loaded only when Twisted gets loaded, which could potentially introduce compatibility problems with other PyProtocols-using code, which would sort of invalidate the whole point of using a common components system in the first place.
My previous retorts make this point of yours irrelevant for the moment, so I will not respond: if you want, you may debate my retorts and we can get to this point once we have worked those out.[1]
[1] I need to figure out a succinct way to say this. Maybe some Latin crap. It's way too verbose. spiv suggested "ad nauseum" when I asked him... Better look that one up. ;-)
PyProtocols really is flexible enough to accommodate whatever.
Then we have the issue of the PyProtocols dependency; dependency management can be quite hairy on windows.
Hmm, yeah, I assumed we would include our own (hopefully unhacked) copy of PyProtocols in releases.
I would suggest including it beside, not inside, Twisted. The setup.py could check to see if a newer version is already installed, too.
Maybe I'm alone in these concerns, though. Does anyone else feel that depending on PyProtocols would increase the learning curve for Twisted even more? Or is this common knowledge in the Python community that could be leveraged to actually make the curve shallower? I can certainly learn to wrap my head around the whole thing if nobody else has trouble with it :)
I kinda hold this reservation. PyProtocols is VERY complex compared to the t.p.c components system. But maintaining our own inadequate version is pretty sucky.
Well, my first and last significant experience with a t.p.components based project:

== start using t.p.c ==
- Why AREN'T adapters transitive?
- How do I expose an interface that the *class* provides?
- Oh, I *can't do that*, I guess I'll have to patch t.p.c.
- Grudgingly write software on top of patched t.p.c.

== t.p.c gone forever ==
- Rewrite the code on PyProtocols. It's a lot shorter and cleaner, even though I do use adapt(..., IFoo) instead of IFoo(...).
- Still haven't found a use case that PyProtocols can't already do.
- Started looking at PEAK, it solves most of the other problems I have (lazy imports, bindings, data modeling, etc.)
- .. hoping Twisted starts assimilating PEAK stuff, because it's better :)
There might be some other issues that could come up, but I'm definitely willing to try to "adapt" to Twisted's needs in these areas, especially if it means I could get rid of PyProtocols' wrapper code and wrapping tests for Twisted's existing interface class. :) This is clearly something that we need to talk about more. As many silly disagreements about design as I can come up with, a common components system would be beneficial to everyone involved. Are you coming to PyCon? :)
PyCon! PyCon! Everybody come to PyCon!
I'm there! -bob
Bob Ippolito wrote:
def _hook_datetime(datetime):
    declareAdapter(
        NO_ADAPTER_NEEDED,
        provides=[IBasicDate],
        forTypes=[datetime.date],
    )
    declareAdapter(
        NO_ADAPTER_NEEDED,
        provides=[IBasicDateTime],
        forTypes=[datetime.datetime],
    )

imports.whenImported('datetime', _hook_datetime)
This could of course be automated a bit more, and there is probably even a better way.. but this was the first thing I stumbled across without thoroughly reading documentation or asking anyone :)
Hmm, yeah, we want something more along the lines of:

registerAdapter(convertToDateTime,
                forStringTypes=['mxDateTime.DateTimeType'],
                provides=[IBasicDateTime])

But your example does show off the crazy crap you can do, which makes me believe that something like this would be quite easy to implement.
On Feb 26, 2004, at 6:51 PM, Christopher Armstrong wrote:
Hmm, yeah, I assumed we would include our own (hopefully unhacked) copy of PyProtocols in releases.
I would suggest including it beside, not inside, Twisted. The setup.py could check to see if a newer version is already installed, too.
I didn't mean "inside the 'twisted' Python package", I meant "in the Twisted package" ;-). So, yeah, beside Twisted, I guess.
- .. hoping Twisted starts assimilating PEAK stuff, because it's better :)
:-D -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+ http://radix.twistedmatrix.com/
Parts of PyProtocols do strike me as dangerous, evil, and overcomplex, though :) In particular,
http://peak.telecommunity.com/protocol_ref/proto-implication.html
On Feb 26, 2004, at 5:56 PM, Glyph Lefkowitz wrote:

The idea of passing numeric priorities for different implementations has always seemed deeply wrong to me.

I believe you misunderstood -- you don't pass numeric priorities; they are used internally to measure the length of a chain of adapters (for the transitive adaptation) so that the shortest chain is always used.
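That internal use of "depth" can be shown with a toy model: adapters form a directed graph between protocols, and adaptation composes the shortest chain of adapters, so a directly declared adapter always beats a transitive one. The protocol-as-string representation and the declareAdapter/adapt signatures here are illustrative assumptions, not PyProtocols' real API.

```python
from collections import deque

# source protocol -> {target protocol: adapter function}
_edges = {}

def declareAdapter(fn, fromProto, toProto):
    _edges.setdefault(fromProto, {})[toProto] = fn

def adapt(obj, fromProto, toProto):
    # breadth-first search: the first chain found is the shortest, which is
    # exactly what an internal "depth" bookkeeping scheme buys you
    queue = deque([(fromProto, obj)])
    seen = {fromProto}
    while queue:
        proto, value = queue.popleft()
        if proto == toProto:
            return value
        for nxt, fn in _edges.get(proto, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, fn(value)))
    raise TypeError("no adapter chain from %s to %s" % (fromProto, toProto))
```

Registering IString->IInt and IInt->IFloat gives a working two-step chain; later declaring a direct IString->IFloat adapter silently takes over, with no priority argument anywhere in the user-facing API.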
4000 lines of code related to components, whereas t.p.components has 300? It worries me.
It's actually only 996 non-blank lines, not including testcases (836 lines), pyrex speedup code (233 lines), or twisted/zope compatibility (159 lines). Compare this to twisted.python.component's 392 lines (+350 lines of tests). Not too bad, really. The PyProtocols authors seem to really like blank lines: there are 1842 of those, compared to Twisted's 182. :) James
James Y Knight wrote:
4000 lines of code related to components, whereas t.p.components has 300? It worries me.
It's actually only 996 non-blank lines, not including testcases (836 lines), pyrex speedup code (233 lines), or twisted/zope compatibility (159 lines).
This makes me feel a lot better.
Compare this to twisted.python.component's 392 lines (+350 lines of tests). Not too bad, really. The PyProtocols authors seem to really like blank lines, there's 1842 of those compared to twisted's 182. :)
I also hate how little vertical whitespace many Twisted hackers use ;-) -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+ http://radix.twistedmatrix.com/
At 05:56 PM 2/26/04 -0500, Glyph Lefkowitz wrote:
On Thu, 2004-02-26 at 14:59, Phillip J. Eby wrote:
Yep, just subclass InterfaceClass and add a __call__ method that calls back to adapt() and you'd be all set. One other benefit to PyProtocols is that you can track what interfaces an *instance* supports, independent of what its class supports.
Would you consider adding this to PyProtocols directly?
I have considered it. Unfortunately, it runs counter to one PyProtocols use case I'd like to support, which is the ability to use an abstract base class as a protocol. That is, somebody should be able to instantiate non-abstract subclasses of a protocol. While I don't personally use this except for some stuff in peak.util.fmtparse, I've had some previous indications that Guido may favor an ABC-style (Abstract Base Class) for interfaces when they eventually land in Python. So, I'd like to avoid making it impossible to instantiate interface subclasses. Again, this would be easily solved by a Twisted-specific subclass of protocols.InterfaceClass, and I don't see that doing it is necessarily a bad thing for either Twisted or PyProtocols, although it may be that it should be considered simply a transitional, backward-compatibility thing.
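The "subclass InterfaceClass and add a __call__ method that calls back to adapt()" suggestion can be modeled in plain Python with a metaclass: calling the interface performs an adapter lookup instead of instantiating the class. This is only a sketch of the shape of the idea under those assumptions; PyProtocols' real InterfaceClass and adapt() are considerably more capable (and, as noted above, this is exactly the design that conflicts with instantiable ABC-style interfaces).

```python
# (concrete type, interface) -> adapter factory
_adapters = {}

class InterfaceClass(type):
    def __call__(iface, obj):
        # IFoo(obj) means "adapt obj to IFoo", not "instantiate IFoo"
        for cls in type(obj).__mro__:   # honour registrations on base types
            factory = _adapters.get((cls, iface))
            if factory is not None:
                return factory(obj)
        raise TypeError("no adapter to %s" % iface.__name__)

class Interface(metaclass=InterfaceClass):
    pass

def registerAdapter(factory, fromType, iface):
    _adapters[(fromType, iface)] = factory

class IText(Interface):
    """Example protocol: something renderable as text."""

registerAdapter(str, int, IText)
```

Note the trade-off Phillip describes: once __call__ is hijacked for adaptation, `IText()` can never create an instance, which is why this belongs in a Twisted-specific subclass rather than in PyProtocols proper.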
If we're going to maintain a component system as part of Twisted, I would like to at least get the benefit of having full control over the component system within Twisted. I don't want to have some people using PyProtocols and others using PyProtocols+TwistedHacks.
I'm not sure I follow you here. Private extensions to PyProtocols' base classes is certainly permitted and encouraged, to provide additional features needed by particular frameworks, as long as the core interfaces are respected (e.g. 'IOpenProtocol'). PyProtocols itself offers several specialized protocol implementations, including Protocol, InterfaceClass, Variation, URIProtocol, SequenceProtocol, and so on.
There are a lot of features I'd add to Twisted's component system if I had time, such as:
- implicit context-dependent location of closest running t.a.service services by interface
- interface-based context (moshez's context trick)
- automatic generation of interfaces from any class
- IComponentized
- context / interface based log separations
Actually, if I understand correctly, these mostly sound like things outside PyProtocols' scope. peak.binding and peak.config implement some of this stuff by defining various interfaces they want, and using PyProtocols to adapt things to those interfaces. But that's entirely independent of PyProtocols itself. In other words, PyProtocols isn't tightly coupled to a component architecture, but is instead a convenient base for building component architectures.
And of course, integrating foom's string-based components would be great too. There is a lot of friction even to add something like this to Twisted. I imagine that adding something like this to PyProtocols, with potentially more projects out there depending on the exact specifics of all its semantics, would be even worse.
The other alternative is to add a bunch of specific hacks to PyProtocols that get loaded only when Twisted gets loaded, which could potentially introduce compatibility problems with other PyProtocols-using code, which would sort of invalidate the whole point of using a common components system in the first place.
Again, if I understand correctly, these are services you would build atop PyProtocols or using PyProtocols, and wouldn't need to extend PyProtocols for. Let's take a specific example: you mentioned locating nearby services by interface. peak.config does this with two interfaces, IConfigSource and IConfigKey:

class IConfigSource(Interface):
    """Something that can be queried for configuration data"""

    def _getConfigData(forObj, configKey):
        """Return a value of 'configKey' for 'forObj' or 'NOT_FOUND'

        Note that 'configKey' is an 'IConfigKey' instance and may therefore
        be a 'PropertyName' or an 'Interface' object."""

class IConfigKey(Interface):
    """Configuration data key, used for 'config.lookup()' et al

    Configuration keys may be polymorphic at registration or lookup time.
    IOW, when looking up a configuration key, you can search multiple values
    that would imply the key being looked for. And, when registering a value
    for a configuration key, the key can supply alternate keys that it
    should be registered under. Thus, an 'IConfigKey' is never itself
    directly used as a key; only the values supplied by its
    'registrationKeys()' and 'lookupKeys()' methods are used. (However,
    those values must themselves be adaptable to 'IConfigKey', and they must
    be usable as dictionary keys.)
    """

    def registrationKeys(depth=0):
        """Iterate over (key,depth) pairs to be used when registering"""

    def lookupKeys():
        """Iterate over keys that should be used for lookup"""

Now, some of the things we want to use as configuration keys are interfaces. So, we use PyProtocols to declare adapters that implement IConfigKey for all the interface types we work with (PyProtocols interfaces, Zope interfaces, and Twisted interfaces). And of course we declare our "placeful" components as implementing IConfigSource. Now, the API to look something up amounts to iterating over parent components, adapting them to IConfigSource, and passing them the needed configuration key, after having first adapted it to IConfigKey.
Now, let's say somebody wants to use a Twisted placeful component with PEAK or vice versa... they just declare adapters to whichever interfaces aren't implemented, and life is good. There's absolutely no reason you'd need to change PyProtocols for this, nor would you need to make Twisted's interfaces for component lookup anything like PEAK's. Heck, if somebody wanted to, they could declare an IConfigKey adapter for Twisted's interface class, and then PEAK would be able to use all its existing lookup and component-binding APIs using Twisted interfaces as keys. And that's *without* Twisted using PyProtocols. :)
Then we have the issue of the PyProtocols dependency; dependency management can be quite hairy on windows.
Indeed. I've begun correspondence with Bob off-list about the possibility of me helping to port PIMP/PackMan to other platforms, though.
Parts of PyProtocols do strike me as dangerous, evil, and overcomplex, though :) In particular,
http://peak.telecommunity.com/protocol_ref/proto-implication.html
The idea of passing numeric priorities for different implementations has always seemed deeply wrong to me.
I understand. However, I have yet to encounter a situation where I've actually used or needed to use it. And, I consider passing explicit depth arguments to PyProtocols a hack: a side-effect of the implementation rather than an intentional design feature.
I have worked with one or two systems like this in the past (some MUD code in C++) where, inevitably, someone wants to make the 'real' default adapter for interface X; then someone else wants to make the 'really real' default adapter. Different developers end up writing comparison methods that leapfrog each other backwards, each trying to come out last-most and greater-than-everything, which turns into a bug-ridden mess (and it's never really clear who should be "winning" this race to be the final overrider anyway).
Right, the proper solution is to:

1) have one protocol per use-case
2) not reuse a protocol for other use cases that aren't an exact match
3) use transitive adaptation so that similar use cases can reuse adapters, while still allowing special cases to declare a direct adapter that overrides the transitive one

So far, this strategy has worked out very well for me, without need for explicit depth declaration. By the way, though, I don't know what you mean by "default adapter". Do you mean the adapter for type 'object', perhaps? I can't imagine why somebody would care about that, though.
More importantly I don't really understand if that's in fact what the 'depth' value is used for, because my eyes glaze over halfway through the above web page :) PyProtocols feels to me like it's gone out of even the upper levels of abstraction that the Twisted team is used to inhabiting, straight into the Zopeosphere... 4000 lines of code related to components, whereas t.p.components has 300? It worries me.
Sigh. I frequently regret having undertaken to document PyProtocols so thoroughly. :( Ironically, I did so thinking it would encourage developers of other frameworks to come on board. ;)

Seriously, though, the only major differences I know of between PyProtocols and Twisted's interfaces are:

1) Transitive adaptation is automatic
2) Instances may implement interfaces, and can participate in their adaptation
3) Interface declarations are inherited from all base classes, not just the first

So, the "upper levels of abstraction" have solely to do with levels that you don't need to know about in order to simply *use* the system. (See also http://peak.telecommunity.com/protocol_ref/module-protocols.twistedsupport.h... for some of the minor details of current PyProtocols/Twisted compatibility.)

But anyway, where the heck are you getting 4000 lines from?

pje@pje ~/PyProtocols $ wc -l src/protocols/*.py
      9 src/protocols/__init__.py
    205 src/protocols/adapters.py         # adapter bases, adapter arithmetic, default adapters
    369 src/protocols/advice.py           # stuff to support interface declarations
    287 src/protocols/api.py              # API methods
    246 src/protocols/classic.py          # support for declaring interfaces on Python built-ins
    328 src/protocols/generate.py         # auto-generated interfaces, like protocolForURI
    410 src/protocols/interfaces.py       # the actual Protocol/Interface implementations, and interfaces for them
    205 src/protocols/twisted_support.py  # This would go away, or at least get shorter... :)
    121 src/protocols/zope_support.py     # But we're probably stuck with this. :)
   2180 total

And those files have lots of whitespace and documentation lines in 'em. Maybe you're including the tests?
Maybe I'm alone in these concerns, though. Does anyone else feel that depending on PyProtocols would increase the learning curve for Twisted even more? Or is this common knowledge in the Python community that could be leveraged to actually make the curve shallower? I can certainly learn to wrap my head around the whole thing if nobody else has trouble with it :)
Stop trying to understand it and just use it. ;) Seriously, though, I think that Twisted's Interface/Adapter How-To is the kind of documentation I *should* have written for PyProtocols. The PyProtocols docs were biased towards proving that its framework is consistent and useful for all sorts of tricky edge cases and advanced interface usages, instead of just saying, "here, this is what you can do". In particular, I wanted to show Jim Fulton that adaptation is more fundamental than interface implementation, because you can represent the latter as a special case of the former. (i.e., the NO_ADAPTER_NEEDED adapter.) So, as you can see right there, writing docs with Jim Fulton in mind as the intended audience is where I made my big mistake. :)
There might be some other issues that could come up, but I'm definitely willing to try to "adapt" to Twisted's needs in these areas, especially if it means I could get rid of PyProtocols' wrapper code and wrapping tests for Twisted's existing interface class. :)
This is clearly something that we need to talk about more. As many silly disagreements about design as I can come up with, a common components system would be beneficial to everyone involved. Are you coming to PyCon? :)
No, but I have an IRC client. :)
Phillip J. Eby wrote:
Stop trying to understand it and just use it. ;) Seriously, though, I think that Twisted's Interface/Adapter How-To is the kind of documentation I *should* have written for PyProtocols. The PyProtocols docs were biased towards proving that its framework is consistent and useful for all sorts of tricky edge cases and advanced interface usages, instead of just saying, "here, this is what you can do". In particular, I wanted to show Jim Fulton that adaptation is more fundamental than interface implementation, because you can represent the latter as a special case of the former. (i.e., the NO_ADAPTER_NEEDED adapter.)
So, as you can see right there, writing docs with Jim Fulton in mind as the intended audience is where I made my big mistake. :)
Hmm, yeah, I think it would be helpful to have a document that mixes 1) explanation of interfaces/adaptors and why you need them and 2) basic introduction to PyProtocols to do what is outlined in #1. This is basically what the Twisted components howto is, but the Twisted components howto is pretty crappy, IMO :-) There's a lot of room for improvement in it even in that limited scope. -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+ http://radix.twistedmatrix.com/
At 07:34 PM 2/26/04 -0500, Christopher Armstrong wrote:
Phillip J. Eby wrote:
Stop trying to understand it and just use it. ;) Seriously, though, I think that Twisted's Interface/Adapter How-To is the kind of documentation I *should* have written for PyProtocols. The PyProtocols docs were biased towards proving that its framework is consistent and useful for all sorts of tricky edge cases and advanced interface usages, instead of just saying, "here, this is what you can do". In particular, I wanted to show Jim Fulton that adaptation is more fundamental than interface implementation, because you can represent the latter as a special case of the former. (i.e., the NO_ADAPTER_NEEDED adapter.) So, as you can see right there, writing docs with Jim Fulton in mind as the intended audience is where I made my big mistake. :)
Hmm, yeah, I think it would be helpful to have a document that mixes 1) explanation of interfaces/adaptors and why you need them and 2) basic introduction to PyProtocols to do what is outlined in #1. This is basically what the Twisted components howto is, but the Twisted components howto is pretty crappy, IMO :-) There's a lot of room for improvement in it even in that limited scope.
Yeah, my problem is that I don't usually write stuff to solve simple problems, because if it was a simple problem, somebody else would probably have already written something to solve it, and I could just download their implementation and be done with it. :) So, when I write something I'm almost invariably thinking about *hard* problems, and that tends to carry over into my documentation.

I'm finding that it's much better to get other people to draft documentation of my stuff, and then edit it for correctness or to show better ways to get the job done, because those other people usually have far more modest goals than I do, and will therefore use simpler examples. For example, R.D. Murray wrote a beautiful "Hello World"-driven tutorial for PEAK (at http://peak.telecommunity.com/DevCenter/IntroToPeak/ ) that I could never have written myself, because it wouldn't have occurred to me to layer the examples according to complexity of the problem, rather than complexity of the solution. :)

Contrast IntroToPeak with the PyProtocols docs: the former builds up solutions to bigger and bigger problems, while completely bypassing any discussion of PEAK fundamentals. Conversely, the PyProtocols doc builds up from tiny little pieces and assembles them into a bigger and bigger framework. That type of documentation is more useful for people developing and extending the framework itself than for people who want to use it. So, PyProtocols really needs another R.D. Murray to come along and explain how to do "Hello World" in PyProtocols. :)
Phillip J. Eby wrote:
So, when I write something I'm almost invariably thinking about *hard* problems, and that tends to carry over into my documentation. I'm finding that it's much better to get other people to draft documentation of my stuff, and then edit it for correctness or to show better ways to get the job done, because those other people usually have far more modest goals than I do, and will therefore use simpler examples. For example, R.D. Murray wrote a beautiful "Hello World"-driven tutorial for PEAK (at http://peak.telecommunity.com/DevCenter/IntroToPeak/ ) that I could never have written myself, because it wouldn't have occurred to me to layer the examples according to complexity of the problem, rather than complexity of the solution. :) Contrast IntroToPeak with the PyProtocols docs: the former builds up solutions to bigger and bigger problems, while completely bypassing any discussion of PEAK fundamentals. Conversely, the PyProtocols doc builds up from tiny little pieces and assembles them into a bigger and bigger framework. That type of documentation is more useful for people developing and extending the framework itself than for people who want to use it. So, PyProtocols really needs another R.D. Murray to come along and explain how to do "Hello World" in PyProtocols. :)
OTOH, I want to say that your documentation does seem very good: when the time comes to go into interstellar abstraction-space, I think it will be quite useful. But, indeed, introductory material is very, very important. :-) -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+ http://radix.twistedmatrix.com/
On Thu, Feb 26, 2004 at 07:53:32PM -0500, Christopher Armstrong quipped:
OTOH, I want to say that your documentation does seem very good: when the time comes to go into interstellar abstraction-space, I think it will be quite useful. But, indeed, introductory material is very, very important. :-)
I agree. The importance of introductory material cannot be overstated without difficulty.
On Thu, 2004-02-26 at 19:47, Phillip J. Eby wrote:
Yeah, my problem is that I don't usually write stuff to solve simple problems, because if it was a simple problem, somebody else would probably have already written something to solve it, and I could just download their implementation and be done with it. :)
So, when I write something I'm almost invariably thinking about *hard* problems, and that tends to carry over into my documentation.
I can understand that inclination myself, having more than once heard Twisted accused of being "an impressive solution for which there are no problems". However, PEAK's documentation has been dense even for me. I certainly could not accuse it of not being thorough :).

In a discussion with another Twisted developer, it struck me that we may be coming at the problem of components from completely different angles. I didn't understand why components were interesting at all until I started to play with using them in the context of separation of concerns within simulations. Since these concerns can become independently quite large, manage independent state, and are largely interrelated only at relatively few boundary points, they're well-suited towards collections of stateful adapters. Compared to the use-cases I've attempted to satisfy there, every usage of adapters has seemed trivial.

Looking at PyProtocols, it doesn't seem to me to have taken simulations or gaming into account as a use-case. An indication of this is the separability of the full component model from the interface / adapters system. Without a full understanding of the execution context of an adapter, I don't know how I could integrate external adapters like TTW login or email delivery into a simulation. (Pardon me for not being terribly specific here. It's difficult for me to come up with examples that take less than 5 or 6 pages of dull prose to explain.)

However, I am not sure, because I can't figure out what PEAK as a whole is really for. The likely source of an explanation, http://peak.telecommunity.com/Articles/WhatisPEAK.html , seems awfully vague. What is the problem that PEAK was designed to solve, exactly?
Clearly it was difficult, because there was a lot of solution :) An aside, for any would-be component system or documentation authors in the audience: much general literature on the use of components - including the Twisted component tutorial, I might add - is maddeningly vague, somewhat like "object oriented programming" tutorials. I don't need to simulate simplistic bicycles or shapes or electrically incapacitated hairdryers, I want to write networked applications and video games, and it's not clear which concerns I should separate, how, and when. Why does this stuff help me?
Glyph Lefkowitz wrote:
In a discussion with another Twisted developer, it struck me that we may be coming at the problem of components from completely different angles. I didn't understand why components were interesting at all until I started to play with using them in the context of separation of concerns within simulations. Since these concerns can become independently quite large, manage independent state, and are largely interrelated only at relatively few boundary points, they're well-suited towards collections of stateful adapters. Compared to the use-cases I've attempted to satisfy there, every usage of adapters has seemed trivial.
Looking at PyProtocols, it doesn't seem to me to have taken simulations or gaming into account as a use-case. An indication of this is the separability of the full component model from the interface / adapters system. Without a full understanding of the execution context of an adapter, I don't know how I could integrate external adapters like TTW login or email delivery into a simulation. (Pardon me for not being terribly specific here. It's difficult for me to come up with examples that take less than 5 or 6 pages of dull prose to explain.)
You're right, the developers probably had vastly different applications in mind. However, the only technical difference I notice in the 'game simulations' arena is Componentized, and that's actually a fairly simple thing (in fact, from what I read in the PyProtocols documentation last night, it seemed like similar things already exist, or could trivially be implemented). You can talk about 'different philosophies', but that's meaningless until you say what technical differences you actually mean.

Maybe you could explain what aspects of twisted.python.components you consider more simulation-oriented? I'm sure it's not that hard for you to squeeze down your '5 or 6 pages', as I've seen you explain various Reality concepts in much less text, and we have an audience here that's very familiar with component systems, interfaces, and adapters. Maybe Jp can jump in, too, as he was the one who originally brought up the 'difference in philosophies' point.

As far as the technical stuff goes, PyProtocols and t.p.c do the same thing, and PyProtocols *can* do everything we have and want, at least from what I've recently heard people want (string adaptation, Componentized, adapter contexts...).
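For readers who haven't used it: the essence of Componentized is that adapting the same object twice hands back the *same* stateful adapter, which is what makes "collections of stateful adapters" workable for simulations. The sketch below is a rough pure-Python model of that caching behavior; the method names are illustrative, not t.p.c's exact API.

```python
class Componentized:
    """An object whose adapters are cached, so they can carry state."""

    def __init__(self):
        self._cache = {}      # interface -> cached adapter instance
        self._factories = {}  # interface -> adapter factory

    def setAdapterFactory(self, interface, factory):
        self._factories[interface] = factory

    def getComponent(self, interface):
        # create the adapter on first request, then keep returning it
        if interface not in self._cache:
            self._cache[interface] = self._factories[interface](self)
        return self._cache[interface]

# hypothetical simulation concern, kept in a stateful adapter
class IHealth:
    pass

class Health:
    def __init__(self, original):
        self.original = original
        self.hp = 100
```

A game actor adapted to IHealth twice sees the same hit-point counter both times, which is the whole point: the concern's state lives in the adapter, not in the actor.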
However, I am not sure, because I can't figure out what PEAK as a whole is really for. The likely source of an explanation, http://peak.telecommunity.com/Articles/WhatisPEAK.html seems awfully vague. What is the problem that PEAK was designed to solve, exactly? Clearly it was difficult, because there was a lot of solution :)
Yeah, reading PEAK's web site is pretty amusing. otoh, our own web copy, while less 'professional', is also pretty lame :-) -- Twisted | Christopher Armstrong: International Man of Twistery Radix | Release Manager, Twisted Project ---------+ http://radix.twistedmatrix.com/
At 08:16 AM 2/27/04 -0500, Christopher Armstrong wrote:
Glyph Lefkowitz wrote:
However, I am not sure, because I can't figure out what PEAK as a whole is really for. The likely source of an explanation, http://peak.telecommunity.com/Articles/WhatisPEAK.html seems awfully vague. What is the problem that PEAK was designed to solve, exactly? Clearly it was difficult, because there was a lot of solution :)
Yeah, reading PEAK's web site is pretty amusing. otoh, our own web copy, while less 'professional', is also pretty lame :-)
Pot, I'd like to introduce you to my friend Kettle. :) Seriously, all of this boils down to the distutils. If we could do CPAN-like dependencies, neither Zope nor Twisted nor PEAK would have these vague handwavy marketing materials. We'd all have a list of things that people could download, each with far-less-handwavy descriptions. And we'd be reusing a lot more of each other's stuff, I imagine.
At 12:24 AM 2/27/04 -0500, Glyph Lefkowitz wrote:
Looking at PyProtocols, it doesn't seem to me to have taken simulations or gaming into account as a use-case. An indication of this is the separability of the full component model from the interface / adapters system. Without a full understanding of the execution context of an adapter, I don't know how I could integrate external adapters like TTW login or email delivery into a simulation. (Pardon me for not being terribly specific here. It's difficult for me to come up with examples that take less than 5 or 6 pages of dull prose to explain.)
There was a recent discussion of writing an interactive fiction game using PyProtocols on the PEAK mailing list; the thread started here: http://www.eby-sarna.com/pipermail/peak/2004-February/001245.html Mostly, I told the guy what sort of component model sounded like it would work for what he was trying to do, and how to do that sort of component model with PyProtocols. To pull out your statement about execution context, though, I should mention that in PEAK, some adapters are "contextful" and others are not. The PEAK component model has the simple concept of things having parent components. Adapters that want to know about context make sure to set their parent to the thing they're adapting, or perhaps its parent. Whenever you put a component in another component, the container adapts the containee to binding.IAttachable if possible, and tells it, "hey, I might be your parent", so the containee can know its context. That's really about it for context. The binding and config packages then offers lots of useful APIs to walk up the parent tree to find or do things.
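The parent-component model described above is small enough to sketch: a component learns its context when its container attaches it, and "placeful" lookup is just a walk up the parent chain. The class and method names (Component, addComponent, findUtility) are illustrative stand-ins, not PEAK's real binding/config API.

```python
class Component:
    """Toy model of a placeful component with a parent pointer."""

    def __init__(self, parent=None):
        self.parent = parent
        self.utilities = {}  # key -> value this component can supply

    def addComponent(self, child):
        # the "hey, I might be your parent" notification
        child.parent = self
        return child

    def findUtility(self, key):
        # walk up the parent chain; the nearest provider wins
        c = self
        while c is not None:
            if key in c.utilities:
                return c.utilities[key]
            c = c.parent
        raise LookupError(key)
```

In PEAK proper the container adapts the child to binding.IAttachable rather than poking an attribute, and keys are adapted to IConfigKey first, but the contextual lookup itself really is this simple.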
However, I am not sure, because I can't figure out what PEAK as a whole is really for. The likely source of an explanation, http://peak.telecommunity.com/Articles/WhatisPEAK.html seems awfully vague. What is the problem that PEAK was designed to solve, exactly? Clearly it was difficult, because there was a lot of solution :)
What is Twisted as a whole really for? ;) At least PEAK's name says exactly what it is: Python Enterprise Application Kit. It's a kit for writing enterprise applications in Python. Specifically, configurable, component-based applications. The PEAK core (aka peak.core) frameworks are a component architecture for building task-specific frameworks. The rest of PEAK is either task-specific frameworks (storage, events, net, commands, etc.), utility modules, or development/deployment tools (supervisor, version, n2, etc.)
On Fri, 2004-02-27 at 09:26, Phillip J. Eby wrote:
What is Twisted as a whole really for? ;)
Networking applications. That is, programs that involve communicating with other programs over a network, typically TCP/IP. Whereas "enterprise" is to me anyway a meaningless word in this context. What is the definition of an "enterprise application"? Anyway - I propose we discuss switching to PyProtocols at PyCon. We can then make a list of necessary changes and problems to make this happen, or better yet try to implement as a test, and then we can come back to PJE with specific rather than theoretical problems.
Itamar Shtull-Trauring wrote:
On Fri, 2004-02-27 at 09:26, Phillip J. Eby wrote:
What is Twisted as a whole really for? ;)
Networking applications. That is, programs that involve communicating with other programs over a network, typically TCP/IP. Whereas "enterprise" is to me anyway a meaningless word in this context. What is the definition of an "enterprise application"?
I submit that the term "enterprise" has been over-buzzed to the point of near-meaninglessness, so it's pretty useless except in marketing literature, where meaninglessness is a useful quality. :) That said, to me an "enterprise application" is any application that is specifically designed to interoperate with, and/or enable interoperability of, other software that is used in the processes of a "business" (in the most general sense). I think that's about as specific as you can get. Like I say, *almost* meaningless, in that it could apply to almost anything, given enough "spin". ;) - Steve
At 06:53 PM 2/27/04 -0500, Stephen Waterbury wrote:
Itamar Shtull-Trauring wrote:
On Fri, 2004-02-27 at 09:26, Phillip J. Eby wrote:
What is Twisted as a whole really for? ;)
Networking applications. That is, programs that involve communicating with other programs over a network, typically TCP/IP. Whereas "enterprise" is to me anyway a meaningless word in this context. What is the definition of an "enterprise application"?
I submit that the term "enterprise" has been over-buzzed to the point of near-meaninglessness, so it's pretty useless except in marketing literature, where meaninglessness is a useful quality. :)
That said, to me an "enterprise application" is any application that is specifically designed to interoperate with, and/or enable interoperability of, other software that is used in the processes of a "business" (in the most general sense).
I think that's about as specific as you can get. Like I say, *almost* meaningless, in that it could apply to almost anything, given enough "spin". ;)
I'm actually using it a bit more specifically than that; I'm specifically targeting applications that have a "shared resource domain" (and whose data integrity or availability has fiduciary consequences), or tools needed to develop, maintain or support such applications. IOW, a desktop email client (for example) wouldn't count, but a group issue tracker for emailed requests would. By "fiduciary consequences", I mean that if it's not running when it's supposed to be, or data integrity isn't maintained, it results in financial losses. By "shared resource domain", I effectively mean a multi-user application, or shared processing resources like a mail server. I think you'll find that these two concepts (shared resources and fiduciary responsibility) are the common themes underlying what most people mean when they talk about "enterprise" applications. Or at least, I think people who *buy* enterprise applications would stress these as defining characteristics, whether the people who *sell* such applications really fulfill them or not. :)
On Fri, 2004-02-27 at 19:45, Phillip J. Eby wrote:
I'm actually using it a bit more specifically than that; I'm specifically targeting applications that have a "shared resource domain" (and whose data integrity or availability has fiduciary consequences), or tools needed to develop, maintain or support such applications. IOW, a desktop email client (for example) wouldn't count, but a group issue tracker for emailed requests would.
Phillip, thank you for this clarification. IMHO the world would be a better place if software developers were legally required to fulfill your requirements specified here before calling their software "enterprise" :).
Glyph Lefkowitz wrote:
On Fri, 2004-02-27 at 19:45, Phillip J. Eby wrote:
I'm actually using it a bit more specifically than that; I'm specifically targeting applications that have a "shared resource domain" (and whose data integrity or availability has fiduciary consequences), or tools needed to develop, maintain or support such applications. IOW, a desktop email client (for example) wouldn't count, but a group issue tracker for emailed requests would.
Phillip, thank you for this clarification. IMHO the world would be a better place if software developers were legally required to fulfill your requirements specified here before calling their software "enterprise" :).
Hmm ... not very realistic in terms of "legally", methinks. But I don't believe you seriously expect that to ever happen. For one thing, legality largely hinges on how expensive a lawyer you can afford, so any legal requirement would simply be a handicap to small vendors. ;) I sure agree that Phillip's definition is useful, but there is also a sense in which any failure of any software used by a company will have fiduciary consequences. After all, as Einstein proved, Time = Money ;), so if software screws up while people are using it on the clock, it costs you. - Steve
At 09:25 PM 2/27/04 -0500, Stephen Waterbury wrote:
I sure agree that Phillip's definition is useful, but there is also a sense in which any failure of any software used by a company will have fiduciary consequences. After all, as Einstein proved, Time = Money ;), so if software screws up while people are using it on the clock, it costs you.
Perhaps I should have said "directly measurable", or maybe "clear and present" fiduciary consequences. But at this point it becomes pointless nitpicking, unless there is in fact a use case for splitting hairs to define whether something is or isn't an "enterprise" application. From a practical perspective, what gets into PEAK is heavily influenced by the needs of software development and system administration at my "day job", dealing with a highly heterogeneous processing environment including numerous internal systems and business partner APIs.
On Thu, 2004-02-26 at 19:17, Phillip J. Eby wrote:
At 05:56 PM 2/26/04 -0500, Glyph Lefkowitz wrote:
On Thu, 2004-02-26 at 14:59, Phillip J. Eby wrote:
Yep, just subclass InterfaceClass and add a __call__ method that calls back to adapt() and you'd be all set. One other benefit to PyProtocols is that you can track what interfaces an *instance* supports, independent of what its class supports.
Would you consider adding this to PyProtocols directly?
I have considered it. Unfortunately, it runs counter to one PyProtocols use case I'd like to support, which is the ability to use an abstract base class as a protocol. That is, somebody should be able to instantiate non-abstract subclasses of a protocol. While I don't personally use this except for some stuff in peak.util.fmtparse, I've had some previous indications that Guido may favor an ABC-style (Abstract Base Class) for interfaces when they eventually land in Python. So, I'd like to avoid making it impossible to instantiate interface subclasses.
Considering that this is a rather uncommon use-case, can it be made into the non-default Interface class? Or an attribute/feature of the default one, such as:

    class MyAbstractSub(IMyAbstract.ABC): pass

    class DontUseABC:
        __implements__ = MyAbstractSub.Interface
Again, this would be easily solved by a Twisted-specific subclass of protocols.InterfaceClass, and I don't see that doing it is necessarily a bad thing for either Twisted or PyProtocols, although it may be that it should be considered simply a transitional, backward-compatibility thing.
The __call__ hack, for me, is more than just syntactic sugar, or a "transitional, backward-compatibility thing". It's fundamental to my personal use of the component system. One-argument callables are used _everywhere_ in Twisted, thanks mostly to Deferreds, but also because "thing which takes one argument" is a very convenient interface for using for a variety of different kinds of processing. An idiom I find tremendously convenient (even, perhaps especially, when debugging) is "return foo.doSomethingDeferred().addCallback(IWhatever)". IWhatever is sometimes a variable, too - and it's quite common to use 'str', 'int', or 'list' there instead of an interface. Of course, I like the syntactic sugar quite a bit, too :). It's self-documenting. When I have run tutorials for others on the use of components, showing them an error on x.what(), and then success on IWhatever(x).what() is enlightening. Previously, our analogue of "adapt", "getAdapter", was difficult to explain. (BTW: a common idiom for using interfaces is to scope method names. An optimization I want to add to Twisted at some point is stateless interface-method-call syntax which can do this efficiently, something like: IWhatever.what(x). I believe you should be able to avoid a bunch of Python object allocation overhead by doing it that way. Does PyProtocols provide any such facility?)
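The idiom reads roughly like this sketch (modern Python, not Twisted's or PyProtocols' actual implementation; all names are hypothetical): a metaclass gives each interface a __call__ that performs adaptation, so IWhatever(x) returns an adapter, and IWhatever itself can be handed to addCallback as a one-argument callable.

```python
class InterfaceMeta(type):
    """Toy metaclass: calling an interface means 'adapt to it'."""

    def __call__(cls, obj, default=None):
        # Pass through objects that already claim the interface.
        if cls in getattr(obj, '__implements__', ()):
            return obj
        # Otherwise look for a registered adapter factory, most
        # specific class first.
        for klass in type(obj).__mro__:
            factory = cls._adapters.get(klass)
            if factory is not None:
                return factory(obj)
        if default is not None:
            return default
        raise TypeError("can't adapt %r to %s" % (obj, cls.__name__))

class IWhatever(metaclass=InterfaceMeta):
    """Something with a what() method."""
    _adapters = {}

class Thing:
    pass

class ThingWhateverAdapter:
    __implements__ = (IWhatever,)
    def __init__(self, original):
        self.original = original
    def what(self):
        return "adapted %s" % type(self.original).__name__

IWhatever._adapters[Thing] = ThingWhateverAdapter

# The one-argument-callable idiom: works inline, or as
# d.addCallback(IWhatever) on a Deferred.
print(IWhatever(Thing()).what())
```

The point of routing through __call__ is exactly that the interface object is interchangeable with str, int, or list wherever a one-argument callable is expected.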
If we're going to maintain a component system as part of Twisted, I would like to at least get the benefit of having full control over the component system within Twisted. I don't want to have some people using PyProtocols and others using PyProtocols+TwistedHacks.
I'm not sure I follow you here. Private extensions to PyProtocols' base classes is certainly permitted and encouraged, to provide additional features needed by particular frameworks, as long as the core interfaces are respected (e.g. 'IOpenProtocol'). PyProtocols itself offers several specialized protocol implementations, including Protocol, InterfaceClass, Variation, URIProtocol, SequenceProtocol, and so on. [...] Actually, if I understand correctly, these mostly sound like things outside PyProtocols' scope. peak.binding and peak.config implement some of this stuff by defining various interfaces they want, and using PyProtocols to adapt things to those interfaces. But that's entirely independent of PyProtocols itself.
In other words, PyProtocols isn't tightly coupled to a component architecture, but is instead a convenient base for building component architectures.
Perhaps we should be discussing Twisted using PEAK, then? I don't want to use half a component system and implement the other half myself. Maybe you can come up with a counterexample, but it seems to me that the benefit of a common protocol system would be lost without the use of a common component model.
Let's take a specific example: you mentioned locating nearby services by interface. peak.config does this with two interfaces: IConfigSource and IConfigKey:
Woah there, sparky! That looks a lot like the earlier documentation I was having trouble with. A brief example, maybe? :)
Then we have the issue of the PyProtocols dependency; dependency management can be quite hairy on windows.
Indeed. I've begun correspondence with Bob off-list about the possibility of me helping to port PIMP/PackMan to other platforms, though.
Thank goodness. It needs to be done; Twisted needs to be broken up into smaller pieces.
By the way, though, I don't know what you mean by "default adapter". Do you mean the adapter for type 'object', perhaps? I can't imagine why somebody would care about that, though.
I mean the adapter for type "Thing", mostly. The general idea being that you want to override what happens before someone picks up a "normal" object when they're under the influence of a particular enchantment. It is difficult to specify rules for which enchantment's hook takes precedence, so authors get into fights by specifying ever-lower numbers.
Stop trying to understand it and just use it. ;)
Really, an interface specification / lookup system is a pretty basic part of a system which uses it, on par with a function calling convention. I want to know _exactly_ what goes on when I use it; I never want to be surprised by a weird component getting looked up when that wasn't what I intended. With so much indirection in place that's too easy already.
So, as you can see right there, writing docs with Jim Fulton in mind as the intended audience is where I made my big mistake. :)
Quite. But, it seems like for that audience, you wrote it well :). As much as I respect his work, I don't think it would be a misnomer to call me bizarro jim fulton. I tend to have near-opposite interest in problems, but to come up with identical-looking solutions more often than not...
No,
Sad to hear that :-(
but I have an IRC client. :)
irc.freenode.net/#twisted is always there (and I am often in it)
Glyph Lefkowitz wrote:
On Thu, 2004-02-26 at 19:17, Phillip J. Eby wrote:
In other words, PyProtocols isn't tightly coupled to a component architecture, but is instead a convenient base for building component architectures.
Perhaps we should be discussing Twisted using PEAK, then? I don't want to use half a component system and implement the other half myself.
Why?
Maybe you can come up with a counterexample, but it seems to me that the benefit of a common protocol system would be lost without the use of a common component model.
This whole *thread* has been about the benefits of using only PyProtocols without PEAK (Not that I'm opposed to using PEAK -- I don't know enough about it yet. I'm just opposed to meaningless arguments :)
Let's take a specific example: you mentioned locating nearby services by interface. peak.config does this with two interfaces: IConfigSource and IConfigKey:
Woah there, sparky! That looks a lot like the earlier documentation I was having trouble with. A brief example, maybe? :)
Yeah, my eyes kinda glazed over on this one, too, since I have no idea what IConfigSource and IConfigKey are.

--
 Twisted | Christopher Armstrong: International Man of Twistery
  Radix  | Release Manager, Twisted Project
---------+ http://radix.twistedmatrix.com/
At 12:40 AM 2/27/04 -0500, Glyph Lefkowitz wrote:
Considering that this is a rather uncommon use-case, can it be made into the non-default Interface class? Or an attribute/feature of the default one, such as
class MyAbstractSub(IMyAbstract.ABC): pass
class DontUseABC:
    __implements__ = MyAbstractSub.Interface
Hmm. I suppose it's possible I could have a 'protocols.Abstract' that could be subclassed in place of Interface, if you wanted to use the ABC style. But then, couldn't you also use an 'adapt()' method, i.e.: foo = IWhatever.adapt(bar) ?
Again, this would be easily solved by a Twisted-specific subclass of protocols.InterfaceClass, and I don't see that doing it is necessarily a bad thing for either Twisted or PyProtocols, although it may be that it should be considered simply a transitional, backward-compatibility thing.
The __call__ hack, for me, is more than just syntactic sugar, or a "transitional, backward-compatibility thing". It's fundamental to my personal use of the component system. One-argument callables are used _everywhere_ in Twisted, thanks mostly to Deferreds, but also because "thing which takes one argument" is a very convenient interface for using for a variety of different kinds of processing. An idiom I find tremendously convenient (even, perhaps especially, when debugging) is "return foo.doSomethingDeferred().addCallback(IWhatever)". IWhatever is sometimes a variable, too - and it's quite common to use 'str', 'int', or 'list' there instead of an interface.
Of course, I like the syntactic sugar quite a bit, too :). It's self-documenting. When I have run tutorials for others on the use of components, showing them an error on x.what(), and then success on IWhatever(x).what() is enlightening. Previously, our analogue of "adapt", "getAdapter", was difficult to explain.
Certainly, the convenience is tempting. I worry more about the lack of commonality between 'IFoo(x)' and 'adapt(x,IFoo,somedefault)', from a presentation standpoint. I think I'd like to hear some opinions from some existing PyProtocols users who aren't Twisted users before I "pronounce" on this.
(BTW: a common idiom for using interfaces is to scope method names. An optimization I want to add to Twisted at some point is stateless interface-method-call syntax which can do this efficiently, something like: IWhatever.what(x). I believe you should be able to avoid a bunch of Python object allocation overhead by doing it that way. Does PyProtocols provide any such facility?)
Not at the moment. What I'd like to do at some point is add single-dispatch generic functions, though. Then:

    class IWhatever:
        def what(self, ...):
            ...

would be the default behavior for IWhatever.what(x), and you would define implementations via something like:

    def what_foo(self, ...):
        ...

    IWhatever.what.add(what_foo, forTypes=[Foo])
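Python's standard library eventually grew this exact facility: functools.singledispatch (added in Python 3.4) dispatches on the type of the first argument, much as described above. A small illustration:

```python
from functools import singledispatch

@singledispatch
def what(ob):
    # Default behavior for types with no registered implementation.
    return "unknown: %r" % (ob,)

@what.register(int)
def _(ob):
    return "an int: %d" % ob

@what.register(list)
def _(ob):
    return "a list of %d items" % len(ob)

print(what(3))        # dispatches on int
print(what([1, 2]))   # dispatches on list
print(what("hi"))     # falls back to the default
```

Dispatch respects the MRO, so registering for a base class covers its subclasses unless a more specific registration exists.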
Actually, if I understand correctly, these mostly sound like things outside PyProtocols' scope. peak.binding and peak.config implement some of this stuff by defining various interfaces they want, and using PyProtocols to adapt things to those interfaces. But that's entirely independent of PyProtocols itself.
In other words, PyProtocols isn't tightly coupled to a component architecture, but is instead a convenient base for building component architectures.
Perhaps we should be discussing Twisted using PEAK, then? I don't want to use half a component system and implement the other half myself.
Okay. The main issue there is going to be that PEAK is definitely still alpha, although the core CA stuff is *beginning* to approach PyProtocols' stability. But it's nowhere near PyProtocols in documentedness, of course.
Maybe you can come up with a counterexample, but it seems to me that the benefit of a common protocol system would be lost without the use of a common component model.
I don't really see it that way. Right now, using PyProtocols, one can write components that play in both Zope and PEAK's component architectures. (Granted, that's to some extent because Zope changed their CA to be more like PEAK, with interfaces for walking to parents and getting config data out of them.) Not that I'm trying to talk you out of using PEAK's CA, mind. I'm just saying that if you have radically different requirements, you might prefer to build your own. If it's built atop PyProtocols, it'll be even easier to integrate your CA with other CAs if people have a need or desire to do so.
Let's take a specific example: you mentioned locating nearby services by interface. peak.config does this with two interfaces: IConfigSource and IConfigKey:
Woah there, sparky! That looks a lot like the earlier documentation I was having trouble with. A brief example, maybe? :)
Okay, let's say I want to get a hold of the event loop I should be a participant of...

    class SomethingThatNeedsEvents(binding.Component):
        eventLoop = binding.Obtain(events.IEventLoop)

Okay, that's it. That's the brief example. :) Seriously, this class is a component that, when used as part of a larger application, will automatically find the IEventLoop it's supposed to be using. It may be defined in an .ini file, like this:

    [Component Factories]
    peak.events.interfaces.IEventLoop = "peak.events.io_events.EventLoop"

This says that when somebody asks for an IEventLoop, we will look at the requester's "service area" (a specially designated parent component), and if the service area doesn't already have one, we'll create a "singleton" instance of the peak.events.io_events.EventLoop class. I say "singleton" in quotes because PEAK avoids true singletons like the plague. Services are homed in a "service area", so there is usually one service instance per service area. Typically, an app will do all its work in a single service area, but sometimes there are reasons to have additional service areas.

The other way you can make a service available (besides configuration for a service area) is to "offer" it from other components. For example:

    class SomethingWithACustomEventLoop(binding.Component):
        eventLoop = binding.Make(MyEventLoopClass, offerAs=[events.IEventLoop])

Each instance of this component will have its own private event loop instance. In addition, any child component of an instance of this class will receive the instance's event loop whenever it requests an IEventLoop service. So, if an instance of the first example class is made a child of an instance of this second example class, it will end up with a MyEventLoopClass instance as its eventLoop attribute. Is this the sort of thing you're asking about?
By the way, though, I don't know what you mean by "default adapter". Do you mean the adapter for type 'object', perhaps? I can't imagine why somebody would care about that, though.
I mean the adapter for type "Thing", mostly. The general idea being that you want to override what happens before someone picks up a "normal" object when they're under the influence of a particular enchantment. It is difficult to specify rules for which enchantment's hook takes precedence, so authors get into fights by specifying ever-lower numbers.
Ah. That sounds like something that calls for a more explicit chain of responsibility that's not adapter driven. That is, a component architecture issue. Adaptation should be something you do to elements in the chain of responsibility in order to see if they want to participate in the current event. But maybe I'm misunderstanding something about your use case.
Really, an interface specification / lookup system is a pretty basic part of a system which uses it, on par with a function calling convention. I want to know _exactly_ what goes on when I use it; I never want to be surprised by a weird component getting looked up when that wasn't what I intended. With so much indirection in place that's too easy already.
There's really an easy solution to that. If you want to be sure there's no confusion, define a new interface when you have a new use case. And if an old interface mostly fits, you can make the new one a Variation of it or declare it to be implied by the old one.
At 09:58 AM 2/27/04 -0500, Phillip J. Eby wrote:
At 12:40 AM 2/27/04 -0500, Glyph Lefkowitz wrote:
(BTW: a common idiom for using interfaces is to scope method names. An optimization I want to add to Twisted at some point is stateless interface-method-call syntax which can do this efficiently, something like: IWhatever.what(x). I believe you should be able to avoid a bunch of Python object allocation overhead by doing it that way. Does PyProtocols provide any such facility?)
Not at the moment. What I'd like to do at some point is add single-dispatch generic functions, though.
Just for the heck of it, I thought I'd try whipping up a single-dispatch generic function using PyProtocols. It turned out to be much easier than I thought; only 19 lines, in fact!

    from protocols import Protocol, adapt, declareAdapter
    from new import instancemethod

    class GenericFunction(Protocol):

        def __init__(self, defaultFunc=None):
            Protocol.__init__(self)
            if defaultFunc is not None:
                self.add(defaultFunc, [object])

        def __call__(__self, self, *__args, **kw):
            return adapt(self, __self)(*__args, **kw)

        def add(self, func, forTypes=(), forProtocols=()):
            declareAdapter(
                lambda ob, proto: instancemethod(func, ob, type(ob)),
                provides=[self],
                forTypes=forTypes, forProtocols=forProtocols
            )

    # Quick test/demo

    def g(ob):
        print ob, "is of unknown type"

    g = GenericFunction(g)

    class A(object): pass
    class B(A): pass
    class C(A): pass

    def gA(ob):
        print ob, "is an A"
    g.add(gA, [A])

    def gB(ob):
        print ob, "is a B"
    g.add(gB, [B])

    # Try it out
    g(A())
    g(B())
    g(C())
    g([])

It could probably do with some kind of 'super()'-like mechanism, but that would make for a rather more complex implementation, alas. Still, it's not bad at all and I can think of a few places I might actually use it. For example, it'd be great for writing "visitor" algorithms that need to do different behaviors based on the type or interfaces of the nodes.
Phillip J. Eby wrote:
At 09:58 AM 2/27/04 -0500, Phillip J. Eby wrote:
At 12:40 AM 2/27/04 -0500, Glyph Lefkowitz wrote: Not at the moment. What I'd like to do at some point is add single-dispatch generic functions, though.
Just for the heck of it, I thought I'd try whipping up a single-dispatch generic function using PyProtocols. It turned out to be much easier than I thought; only 19 lines, in fact!
# Quick test/demo ...
def gA(ob): print ob, "is an A"
g.add(gA,[A])

def gB(ob): print ob, "is a B"
g.add(gB,[B])
Wow. glyph explained to me what he wanted before with the no-instantiation IFoo.meth(x), but I didn't realize that it really _is_ like generic functions. That's awesome. :-)
At 12:21 AM 2/28/04 -0500, Christopher Armstrong wrote:
Wow. glyph explained to me what he wanted before with the no-instantiation IFoo.meth(x), but I didn't realize that it really _is_ like generic functions. That's awesome. :-)
Well, a generic function with a couple of interesting twists, like being able to say that a particular signature is based on a protocol rather than a specific class, with transitive adaptation taking place. So, if the closer adaptation path is that the object gets adapted to an interface you support, then that's what you'll get passed to the function. The other twist is that you can create a new generic function, and then declare that it is "implied by" another generic function. You'll then "inherit" all of the definitions from the old generic function, allowing you to then just override certain specific implementations of the function in your new function.
Glyph Lefkowitz <glyph@twistedmatrix.com> writes:
On Thu, 2004-02-26 at 19:17, Phillip J. Eby wrote: (...)
I have considered it. Unfortunately, it runs counter to one PyProtocols use case I'd like to support, which is the ability to use an abstract base class as a protocol. That is, somebody should be able to instantiate non-abstract subclasses of a protocol. (...)
Considering that this is a rather uncommon use-case, can it be made into the non-default Interface class? Or an attribute/feature of the default one, such as
Since neither Twisted, nor the default interfaces defined in PyProtocols, use the ABC approach, I've been curious about this since I started using these packages. Why aren't more interfaces defined in the ABC style? When I first started defining some interfaces in my project with PyProtocols, and had to make the choice between an ABC interface and a documentation interface, ABC seemed more desirable to me. I'll admit to almost wishing I didn't have the option at the time, because truthfully I didn't really have a good idea of the pros and cons, so it was sort of one of those uncertain-feeling decisions (sort of like picking a Python web or application framework :-)). What finally turned it for me was that the ABC approach gave me a clean failure if the implementation object didn't really fulfill the interface, as in forgetting to override a method or declare an attribute. Perhaps it was because the first set of things I was defining were interfaces for specific components I was building, so there was a natural 1:1 mapping between interface and initial objects. I suppose as my work grows I'll find more cases where the interfaces are just portions of an object, but wouldn't it still be preferable to just use MI from multiple interfaces to get the same failure handling? Obviously, the ABC approach doesn't seem to be adopted by Twisted or PyProtocols (at least for its official interfaces) themselves, but I haven't really seen references to advantages of the non-ABC variant. -- David
At 11:17 AM 2/27/04 -0500, David Bolen wrote:
Obviously, the ABC approach doesn't seem to be adopted by Twisted or PyProtocols (at least for its official interfaces) themselves, but I haven't really seen references to advantages of the non-ABC variant.
The main advantages I have found to the "pure interface" style are:

* Interfaces may be read as documentation, because there is no behavior in the way. All that's there is just signatures and docstrings.

* It's easier to see what is *really* the expected interface, as opposed to an accident of implementation.

ABC's have a tendency to start out clean, but then accrete a bunch of implementation junk over time. With a pure interface, there's no temptation to do it, since there's absolutely no benefit to putting any implementation in the interface.

So, if you are creating a framework of any significance, your users will in all likelihood thank you for using the pure style instead of the ABC style, unless you have exceedingly strong discipline as far as keeping implementation out of the ABC, or are separately specifying/documenting the expected behavior anyway.
"Phillip J. Eby" <pje@telecommunity.com> writes:
* Interfaces may be read as documentation, because there is no behavior in the way. All that's there is just signatures and docstrings.
* It's easier to see what is *really* the expected interface, as opposed to an accident of implementation.
ABC's have a tendency to start out clean, but then accrete a bunch of implementation junk over time. With a pure interface, there's no temptation to do it, since there's absolutely no benefit to putting any implementation in the interface.
Interesting - wouldn't that stop them from being ABCs at that point? The only thing in my ABCs with respect to implementation is the "raise NotImplementedError" for each method. Basically the same as the example in the PyProtocols documentation. I really hadn't envisioned ever adding anything other than that (after all, then it wouldn't be an "Abstract" base class), although I suppose I could see how someone might let something slip in.
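The two styles being contrasted look roughly like this sketch (modern Python, with entirely hypothetical class names): the ABC style, whose forgotten overrides fail loudly via raise NotImplementedError, versus the pure-interface style, where the interface is never instantiated and serves only as documentation plus a declaration token.

```python
# ABC style: subclass and instantiate; unimplemented methods fail loudly.
class IConnector:
    def connect(self, host, port):
        raise NotImplementedError

class TCPConnector(IConnector):
    def connect(self, host, port):
        return "connecting to %s:%d" % (host, port)

# Pure-interface style: only signatures and docstrings; implementations
# declare conformance rather than inherit.
class IConnectorSpec:
    """Something that can open a connection.

    connect(host, port) -> a description of the connection
    """

class UDPConnector:
    __implements__ = (IConnectorSpec,)   # declaration, not inheritance
    def connect(self, host, port):
        return "datagrams to %s:%d" % (host, port)

print(TCPConnector().connect("localhost", 80))
print(UDPConnector().connect("localhost", 53))

try:
    IConnector().connect("x", 1)   # the "clean failure" described above
except NotImplementedError:
    print("abstract method not overridden")
```

The ABC version catches a forgotten override at call time; the pure version keeps the interface free of any behavior at the cost of that check.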
So, if you are creating a framework of any significance, your users will in all likelihood thank you for using the pure style instead of the ABC style, unless you have exceedingly strong discipline as far as being able to keep implementation out of the ABC, or are separately specifying/documenting the expected behavior anyway.
Speaking of documentation and convenience for users, it's nice when inheriting from an ABC interface lets tools such as epydoc automatically inherit the documentation strings for the methods as well for classes that implement them. Otherwise, do you find yourself duplicating docstrings in the classes providing the interfaces, or how do you handle that aspect of things. I notice that a lot of the Twisted documentation makes use of the links (such as the L{} epydoc notation) to cross-reference the appropriate interface for the concrete classes that implement the interfaces, whereas the ABC approach would seem to simplify that aspect of things. -- David
David Bolen wrote:
"Phillip J. Eby" <pje@telecommunity.com> writes:
So, if you are creating a framework of any significance, your users will in all likelihood thank you for using the pure style instead of the ABC style, unless you have exceedingly strong discipline as far as being able to keep implementation out of the ABC, or are separately specifying/documenting the expected behavior anyway.
Speaking of documentation and convenience for users, it's nice when inheriting from an ABC interface lets tools such as epydoc automatically inherit the documentation strings for the methods as well for classes that implement them. Otherwise, do you find yourself duplicating docstrings in the classes providing the interfaces, or how do you handle that aspect of things.
Hmm. I think the real solution to this is to make epydoc understand interfaces, and make it easy to see the interface-documentation when looking at its implementation.
On Feb 26, 2004, at 2:43 PM, Bob Ippolito wrote:
How about just migrating off of t.p.components and switching to PyProtocols? The license is compatible, it has PEP-backing, it is a superset of what t.p.components does, has optional Pyrex acceleration, and is compatible with Interfaces from itself, t.p.components, and zope.components. The default Interface implementation does not support the __call__ adaptation that we all know and love, but it is actually generic enough to allow it.
As a side note, PEAK's peak.util.imports.whenImported hooks, importString, and lazyModule might also be pretty useful to Twisted (among so many other things).
And, while PyProtocols doesn't seem to support any sort of lazy adapter registration, it would probably be doable via a framework built on top of whenImported. James
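[The "`__call__` adaptation that we all know and love" mentioned above is the `IFoo(obj)` idiom: calling an interface to adapt an object to it. A toy registry can illustrate the idea; the names and mechanics here are illustrative only, not the actual t.p.components or PyProtocols code.]

```python
# Toy sketch of "calling an interface to adapt": IFoo(obj) either
# returns obj (if it already provides IFoo) or looks up and applies
# a registered adapter factory.  Illustrative names throughout.
_adapters = {}

class MetaInterface(type):
    def __call__(iface, obj):
        # Object already provides the interface? Hand it back as-is.
        if iface in getattr(obj, "__provides__", ()):
            return obj
        factory = _adapters.get((type(obj), iface))
        if factory is None:
            raise TypeError("no adapter from %r to %s"
                            % (type(obj), iface.__name__))
        return factory(obj)

class Interface(metaclass=MetaInterface):
    pass

def registerAdapter(factory, fromType, iface):
    _adapters[(fromType, iface)] = factory

# Example use:
class IRenderer(Interface):
    pass

class StringRenderer:
    __provides__ = (IRenderer,)
    def __init__(self, original):
        self.original = original

registerAdapter(StringRenderer, str, IRenderer)
```

After this, `IRenderer("hello")` wraps the string in a `StringRenderer`, while `IRenderer(some_renderer)` is a no-op, which is the convenience the thread is referring to.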
On Feb 26, 2004, at 3:17 PM, James Y Knight wrote:
On Feb 26, 2004, at 2:43 PM, Bob Ippolito wrote:
How about just migrating off of t.p.components and switching to PyProtocols? The license is compatible, it has PEP-backing, it is a superset of what t.p.components does, has optional Pyrex acceleration, and is compatible with Interfaces from itself, t.p.components, and zope.components. The default Interface implementation does not support the __call__ adaptation that we all know and love, but it is actually generic enough to allow it.
As a side note, PEAK's peak.util.imports.whenImported hooks, importString, and lazyModule might also be pretty useful to Twisted (among so many other things).
And, while PyProtocols doesn't seem to support any sort of lazy adapter registration, it would probably be doable via a framework built on top of whenImported.
Been there, done that, got the t-shirt. Yes, it's easy. -bob
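[The lazy registration James describes, registering adapters by dotted-name strings and only importing them when first needed, can be sketched as below. This uses `importlib` for brevity; a real framework might hook module import via `peak.util.imports.whenImported` instead, as suggested above. All names here are illustrative.]

```python
# Illustrative sketch of lazy adapter registration: dotted-name strings
# are stored as-is and only resolved (imported) on first lookup.
from importlib import import_module

_lazy = {}      # (fromName, ifaceName) -> dotted factory name
_resolved = {}  # same key -> the imported factory object

def registerAdapter(factoryName, fromName, ifaceName):
    """Record an adapter registration without importing anything yet."""
    _lazy[(fromName, ifaceName)] = factoryName

def _importString(dotted):
    """Import 'package.module.attr' and return attr (cf. PEAK's importString)."""
    modname, _, attr = dotted.rpartition(".")
    return getattr(import_module(modname), attr)

def getAdapterFactory(fromName, ifaceName):
    """Resolve (and cache) the factory the first time it is asked for."""
    key = (fromName, ifaceName)
    if key not in _resolved:
        _resolved[key] = _importString(_lazy[key])
    return _resolved[key]
```

Note that nothing is imported at `registerAdapter` time, which is exactly why the implicit super-interface registration from the start of the thread cannot survive: the interface's bases are unknown until the string is resolved.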
On Feb 26, 2004, at 3:17 PM, James Y Knight wrote:
On Feb 26, 2004, at 2:43 PM, Bob Ippolito wrote:
How about just migrating off of t.p.components and switching to PyProtocols? The license is compatible, it has PEP-backing, it is a superset of what t.p.components does, has optional Pyrex acceleration, and is compatible with Interfaces from itself, t.p.components, and zope.components. The default Interface implementation does not support the __call__ adaptation that we all know and love, but it is actually generic enough to allow it.
As a side note, PEAK's peak.util.imports.whenImported hooks, importString, and lazyModule might also be pretty useful to Twisted (among so many other things).
And, while PyProtocols doesn't seem to support any sort of lazy adapter registration, it would probably be doable via a framework built on top of whenImported.
Switching Nevow to PyProtocols would have been my first choice. Since I didn't have the time to learn it, and didn't feel like introducing the additional dependency at the time, I hacked t.p.c into nevow.compy so that I could prevent t.p.c's "import the world" behavior, which was detrimental to Nevow. Unfortunately this means you now have to use *two* "global" component registries when you use Nevow.

I am +1000 on using PyProtocols for Nevow, since it would reduce my headaches. I think that a global component registry is the one central important feature of a components system, making the current situation less than ideal. twisted.python.components wasn't flexible enough to do what I needed to do in Nevow, but PyProtocols is, and supports more useful things like protocolForURI, which will be necessary once people start designing interfaces that are useful outside of just one project.

I am less excited about introducing a dependency into Nevow. It would be nice if we could distribute PyProtocols in the same tarball as Nevow and only install it if needed.

dp
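[The `protocolForURI` feature Donovan mentions lets independently developed packages agree on a protocol by sharing a URI rather than importing a common module. A toy sketch of the idea (not the PyProtocols implementation):]

```python
# Toy sketch of URI-keyed protocol lookup: two packages that never
# import each other can still refer to the same protocol object by
# agreeing on a URI.  Illustrative only, not the PyProtocols code.
_protocols_by_uri = {}

class Protocol:
    def __init__(self, uri):
        self.uri = uri

def protocolForURI(uri):
    """Return the unique Protocol object for this URI, creating it on demand."""
    if uri not in _protocols_by_uri:
        _protocols_by_uri[uri] = Protocol(uri)
    return _protocols_by_uri[uri]
```

Because lookups for the same URI always yield the same object, adapters registered against it by one project are visible to any other project that names the same URI, which is what makes interfaces useful "outside of just one project".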
At 03:42 PM 2/26/04 -0500, Donovan Preston wrote:
I am less excited about introducing a dependency into Nevow. It would be nice if we could distribute PyProtocols in the same tarball as Nevow and only install it if needed.
The (dual PSF/ZPL) license permits that. And PyProtocols is getting pretty stable, so it's not like you'd be updating it every day. But naturally, working on dependency support for distutils seems to be an Idea Whose Time Is Coming. There was a recent Zope3-Dev thread about such issues, although it was more specific to bundling subsets of the Zope3 system, rather than bundling third-party packages.
On Feb 26, 2004, at 4:44 PM, Phillip J. Eby wrote:
At 03:42 PM 2/26/04 -0500, Donovan Preston wrote:
I am less excited about introducing a dependency into Nevow. It would be nice if we could distribute PyProtocols in the same tarball as Nevow and only install it if needed.
The (dual PSF/ZPL) license permits that. And PyProtocols is getting pretty stable, so it's not like you'd be updating it every day.
But naturally, working on dependency support for distutils seems to be an Idea Whose Time Is Coming. There was a recent Zope3-Dev thread about such issues, although it was more specific to bundling subsets of the Zope3 system, rather than bundling third-party packages.
Yeah, the dependency/package management stuff is also brought up every so often on the distutils-sig and pythonmac-sig... the problem is that nobody wants to actually hack on the nasty distutils source to finish the work... except maybe amk :) -bob
At 05:32 PM 2/26/04 -0500, Bob Ippolito wrote:
On Feb 26, 2004, at 4:44 PM, Phillip J. Eby wrote:
At 03:42 PM 2/26/04 -0500, Donovan Preston wrote:
I am less excited about introducing a dependency into Nevow. It would be nice if we could distribute PyProtocols in the same tarball as Nevow and only install it if needed.
The (dual PSF/ZPL) license permits that. And PyProtocols is getting pretty stable, so it's not like you'd be updating it every day.
But naturally, working on dependency support for distutils seems to be an Idea Whose Time Is Coming. There was a recent Zope3-Dev thread about such issues, although it was more specific to bundling subsets of the Zope3 system, rather than bundling third-party packages.
Yeah, the dependency/package management stuff is also brought up every so often on the distutils-sig and pythonmac-sig .. the problem is that nobody wants to actually hack on the nasty distutils source to finish the work.. except maybe amk :)
The sad thing to me is that it's not that difficult, at least in principle. For example, I frequently create "pkgsrc" (similar to BSD ports) packages wrapping distutils-built Python libraries. Generally speaking, this amounts to writing a short makefile like:

#====
DISTNAME=       mechanize-0.0.7a
PKGNAME=        ${PYPKGPREFIX}-${DISTNAME}
CATEGORIES=     3rdParty
MASTER_SITES=   http://wwwsearch.sourceforge.net/mechanize/src/
PYMODULE_DIRS=  mechanize

MAINTAINER=     pje@eby-sarna.com
HOMEPAGE=       http://wwwsearch.sourceforge.net/mechanize/
COMMENT=        Stateful programmatic web browsing in Python

DEPENDS+=       ${PYPKGPREFIX}-pullparser>=0.0.4b:../../3rdParty/py-pullparser
DEPENDS+=       ${PYPKGPREFIX}-ClientCookie>=0.4.18:../../3rdParty/py-ClientCookie
DEPENDS+=       ${PYPKGPREFIX}-ClientForm>=0.1.14:../../3rdParty/py-ClientForm

PYTHON_VERSION_REQD=    22

.include "../../3rdParty/3p-pkgutils/py-plist.mk"
.include "../../lang/python/extension.mk"
.include "../../mk/bsd.pkg.mk"
#====

This is one of the most *complicated* such files I've written. But, using this I can run 'make' and it will download and install all the dependencies for me! In fact, if I don't have Python installed, it'll download and install Python for me, but that's a bit out of scope for the distutils, methinks. :) Maybe what's really needed is a tool that uses a file like the above to do the downloading, extracting, and running of setup.py, et al. The main drawback I see is dealing with binary installers for e.g. Windows.
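[The core of the tool Phillip imagines, reading a declaration file like the one above and installing dependencies first, boils down to parsing the DEPENDS+= lines and walking the result in dependency order. A sketch, with the file format modeled loosely on the pkgsrc example (all names hypothetical; the download/extract/setup.py steps are omitted):]

```python
# Hypothetical sketch: parse pkgsrc-style DEPENDS+= lines, then emit an
# install order in which every dependency precedes its dependents.

def parse_depends(text):
    """Extract dependency package names from DEPENDS+= lines."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("DEPENDS+="):
            # e.g. DEPENDS+= py-pullparser>=0.0.4b:../../3rdParty/py-pullparser
            spec = line.split("=", 1)[1].strip()
            name = spec.split(">=")[0].split(":")[0]
            deps.append(name)
    return deps

def install_order(graph, target, seen=None, order=None):
    """Depth-first walk: visit dependencies first, then the package itself."""
    if seen is None:
        seen, order = set(), []
    if target in seen:
        return order
    seen.add(target)
    for dep in graph.get(target, []):
        install_order(graph, dep, seen, order)
    order.append(target)
    return order
```

A real tool would then run the download/extract/`setup.py install` steps for each entry in that order; as Phillip notes, binary installers (e.g. on Windows) are the part this simple scheme handles least well.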
participants (11)
- Bob Ippolito
- Christopher Armstrong
- David Bolen
- Donovan Preston
- Glyph Lefkowitz
- Itamar Shtull-Trauring
- James Y Knight
- Phillip J. Eby
- Robert Church
- Stephen C. Waterbury
- Stephen Waterbury