Exporting dict items for direct lookups of specific keys
I believe that it is possible to add a useful feature to dicts that will enable some interesting performance features in the future.

Dict lookups are in general supposed to be O(1) and they are indeed very fast. However, there are several drawbacks:

A. O(1) is not always achieved; collisions may still occur.

B. Some lookups use static "literal" keys, and could benefit from accessing a dict item directly (very fast O(1) _worst_ case, without even a function call when done from the C side).

C. B is especially true when dicts are used to hold attributes - in that case literal attribute names, in conjunction with psyco-like type specialization, can help avoid many dict lookups. I won't delve into this in this mail - but this is actually the use case that I'm after optimizing.

There is a possible implementation that can allow a dict to export items with minimal impact on its other performance:

* Create a new type of PyObject, a PyDictItemObject, that will contain a key/value pair. (This will NOT exist for each hash table entry, but only for specifically exported items.)

* Add a bitmap to dicts that has a single bit for every hash table entry. If the entry is marked in the bitmap, then its PyObject* "value" is not a reference to the value object, but to a PyDictItemObject.

* A new dict function, PyDict_ExportItem, that takes a single argument, the key, will create a PyDictItemObject, assign the dict's key to it, and mark that hash-table entry in the bitmap. If PyDict_ExportItem is called when the item is already exported, it will return another reference to the same PyDictItemObject. The value in the PyDictItemObject will initially be set to NULL (which means "key not mapped"). Both PyDictItemObject and PyDict_ExportItem should probably not be exported to the Python side, but PyDictItemObject should probably be a PyObject for refcounting purposes.

* All the dict methods that get/set values, once they have found the correct entry, check the bitmap to see if the entry is marked, and if it is - access the value in the PyDictItemObject instead of the value itself. In addition, if that value is NULL, it represents the key not actually being in the dict (__getitem__ can raise a KeyError there, for example, and __setitem__ can simply Py_XDECREF the old value and overwrite it with the new one).

Alternatively to the bitmap, the hash table entry can contain another boolean int -- I am not sure which is preferable in terms of cache locality, but the bitmap is definitely cheaper, space-wise.

This would allow dict users to create an exported item for a key once, and then access that key in real O(1) without function calls. As mentioned before, this can also serve in the future as the basis for avoiding dict lookups for attribute searches.
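(A rough pure-Python model of the intended semantics - the names ExportedItem, ExportingDict and export_item are invented for illustration. The real proposal lives at the C level, where the hash-table entry itself points at the item, so no side-table syncing would be needed; using None for "not mapped" also conflates with a stored None, unlike the C-level NULL:)

class ExportedItem(object):
    """A stable cell for one key; value None models the C-level NULL."""
    def __init__(self, key):
        self.key = key
        self.value = None

class ExportingDict(dict):
    """Hands out stable cells for specific keys and keeps them in sync.

    Note: methods like update() bypass __setitem__ here, so this toy
    model is only faithful for plain item assignment and deletion.
    """
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self._exported = {}

    def export_item(self, key):
        item = self._exported.get(key)
        if item is None:
            item = self._exported[key] = ExportedItem(key)
            if key in self:
                item.value = dict.__getitem__(self, key)
        return item

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        item = self._exported.get(key)
        if item is not None:
            item.value = value

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        item = self._exported.get(key)
        if item is not None:
            item.value = None

d = ExportingDict()
cell = d.export_item("x")     # exporting before the key exists is fine
d["x"] = 5
assert cell.value == 5        # the "direct access" read: no dict lookup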
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
I believe that it is possible to add a useful feature to dicts, that will enable some interesting performance features in the future.
Dict lookups are in general supposed to be O(1) and they are indeed very fast. However, there are several drawbacks: A. O(1) is not always achieved; collisions may still occur.
Note that O(1) is constant, not necessarily 1. Assuming that the hash function that Python uses is decent (it seems to work well), then as per the load factor of 2/3 you get an expected number of probes of 1 + 2/3 + (2/3)^2 + (2/3)^3 + ..., which sums to 3. Now, if you have contents that are specifically designed to screw the hash function, then you are going to get poor performance. But this is the case for any particular hash function; there exist inputs that force it to perform poorly. Given the above load factor, if 3 expected probes is too many, you can use d.update(d) to double the size of the dictionary, forcing the load factor to be 1/3 or less, for an expected number of probes of 1.5.
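(For reference, the simple geometric model behind those numbers - not an exact analysis of CPython's probing, just the series above:)

def expected_probes(load_factor):
    # 1 + a + a**2 + a**3 + ... == 1 / (1 - a)
    return 1.0 / (1.0 - load_factor)

print(expected_probes(2.0 / 3))   # ~3.0, at the default 2/3 threshold
print(expected_probes(1.0 / 3))   # ~1.5, after doubling the table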
B. Some lookups use static "literal" keys, and could benefit from accessing a dict item directly (very fast O(1) _worst_ case, without even a function call when done from the C side).
[snip]

What you are proposing is to add a level of indirection from some pointer in the dictionary to some special PyDictItem object. This will slow Python execution when such a thing is used. Why? The extra level of indirection requires another pointer following, as well as a necessary check on the bitmap you are proposing, nevermind the additional memory overhead of having the item (generally doubling the size of dictionary objects that use such 'exported' items).

You don't mention what algorithm/structure will allow for the accessing of your dictionary items in O(1) time, only that after you have this bitmap and dictionary item thingy it will be O(1) time (where dictionaries weren't already fast enough natively). I don't believe you have fully thought through this, but feel free to post C or Python source that describes your algorithm in detail to prove me wrong.

You should note that Python's dictionary implementation has been tuned to work *quite well* for the object attribute/namespace case, and I would be quite surprised if anyone managed to improve upon Raymond's work (without writing platform-specific assembly).

- Josiah
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
I believe that it is possible to add a useful feature to dicts, that will enable some interesting performance features in the future.
Dict lookups are in general supposed to be O(1) and they are indeed very fast. However, there are several drawbacks: A. O(1) is not always achieved; collisions may still occur.
Note that O(1) is constant, not necessarily 1. Assuming that the hash function that Python uses is decent (it seems to work well), then as per the load factor of 2/3 you get an expected number of probes of 1 + 2/3 + (2/3)^2 + (2/3)^3 + ..., which sums to 3.
Now, if you have contents that are specifically designed to screw the hash function, then you are going to get poor performance. But this is the case for any particular hash function; there exist inputs that force it to perform poorly.
Of course, though it is an interesting anecdote, because it won't screw the lookups in the solution I'm describing.
Given the above load factor, if 3 expected probes is too many, you can use d.update(d) to double the size of the dictionary, forcing the load factor to be 1/3 or less, for an expected number of probes of 1.5.
B. Some lookups use static "literal" keys, and could benefit from accessing a dict item directly (very fast O(1) _worst_ case, without even a function call when done from the C side).
[snip]
What you are proposing is to add a level of indirection between some pointer in the dictionary to some special PyDictItem object. This will slow Python execution when such a thing is used. Why? The extra level of indirection requires another pointer following, as well as a necessary check on the bitmap you are proposing, nevermind the additional memory overhead of having the item (generally doubling the size of dictionary objects that use such 'exported' items).
You don't mention what algorithm/structure will allow for the accessing of your dictionary items in O(1) time, only that after you have this bitmap and dictionary item thingy it will be O(1) time (where dictionaries weren't already fast enough natively). I don't believe you have fully thought through this, but feel free to post C or Python source that describes your algorithm in detail to prove me wrong.

Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).
You should note that Python's dictionary implementation has been tuned to work *quite well* for the object attribute/namespace case, and I would be quite surprised if anyone managed to improve upon Raymond's work (without writing platform-specific assembly).
Of course - the idea is not to improve dict's performance with the normal way it is accessed, but to change the way it is accessed for the specific use-case of accessing static values in a static dict - which can be faster than even a fast dict lookup.

The dict lookups in globals and builtins are all looking for literal static keys in a literal static dict. In this specific case, it is better to outdo the existing dict performance by adding a special way to access such static keys in dicts - one which insignificantly slows down access to the dict, but significantly speeds up this very common use pattern.

Attribute lookups in the class dict are all literal/static key lookups in a static dict (though in order for a code object to know that it is a static dict, a psyco-like system is required; if such a system is used, all of those dict lookups can be made faster as well).
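(To see why these are "literal static keys": every global or builtin reference in a function body compiles to a LOAD_GLOBAL whose name is fixed at compile time - for example, on a CPython of this era; exact offsets and line numbers vary by version:)

import dis

def f():
    return len   # resolved through globals, then builtins, at call time

dis.dis(f)
# prints something like:
#   2           0 LOAD_GLOBAL              0 (len)
#               3 RETURN_VALUE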
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
B. Some lookups use static "literal" keys, and could benefit from accessing a dict item directly (very fast O(1) _worst_ case, without even a function call when done from the C side).
[snip]
What you are proposing is to add a level of indirection between some pointer in the dictionary to some special PyDictItem object. This will slow Python execution when such a thing is used. Why? The extra level of indirection requires another pointer following, as well as a necessary check on the bitmap you are proposing, nevermind the additional memory overhead of having the item (generally doubling the size of dictionary objects that use such 'exported' items).
You don't mention what algorithm/structure will allow for the accessing of your dictionary items in O(1) time, only that after you have this bitmap and dictionary item thingy it will be O(1) time (where dictionaries weren't already fast enough natively). I don't believe you have fully thought through this, but feel free to post C or Python source that describes your algorithm in detail to prove me wrong.
Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).
But you still don't explain *how* these exported keys are going to be accessed. Walk me through the steps required to improve access times in the following case:

def foo(obj):
    return obj.foo
You should note that Python's dictionary implementation has been tuned to work *quite well* for the object attribute/namespace case, and I would be quite surprised if anyone managed to improve upon Raymond's work (without writing platform-specific assembly).
Of course - the idea is not to improve dict's performance with the normal way it is accessed, but to change the way it is accessed for the specific use-case of accessing static values in a static dict - which can be faster than even a fast dict lookup.
If I have a dictionary X, and X has exported keys, then whenever I access exported values in the dictionary via X[key], your proposed indirection will necessarily be slower than the current implementation.
The dict lookups in globals, builtins are all looking for literal static keys in a literal static dict. In this specific case, it is better to outdo the existing dict performance, by adding a special way to access such static keys in dicts - which insignificantly slows down access to the dict, but significantly speeds up this very common use pattern.
Please benchmark before you claim "insignificant" performance degradation in the general case. I claim that adding a level of indirection and the accessing of a bit array (which in C is technically a char array with every bit getting a full char, unless you use masks and shifts, which will be slower still) is necessarily slower than the access characteristics of current dictionaries. We can see this as a combination of an increase in the number of operations necessary to do arbitrary dictionary lookups, increased cache overhead of those lookups, as well as the delay and cache overhead of accessing the bit array.
Attribute lookups in the class dict are all literal/static key lookups in a static dict (though in order for a code object to know that it is a static dict, a psyco-like system is required. If such a system is used, all of those dict lookups can be made faster as well).
No, attribute lookups are not all literal/static key lookups. See getattr/setattr/delattr, operations on cls.__dict__, obj.__dict__, etc.

From what I can gather (please describe the algorithm now that I have asked twice), the only place where there exists improvement potential is in the case of global lookups in a module. That is to say, if a function/method in module foo is accessing some global variable bar, the compiler can replace LOAD_GLOBAL/STORE_GLOBAL/DEL_GLOBAL with an opcode to access a special PyDictItem object that sits in the function/method cell variables, rather than having to look in the globals dictionary (that is attached to every function and method).

- Josiah
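(For comparison, the closest trick available today is the well-known default-argument hack, which snapshots a global's *value* at definition time; the proposed PyDictItem cell differs in that the dict would write through the very same cell, so later rebindings of the global would stay visible:)

# Bind the global once, at function-definition time. This skips the
# per-call dict lookup, but goes stale if the global `sum` is rebound.
def total(seq, _sum=sum):
    return _sum(seq)

print(total([1, 2, 3]))   # 6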
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:

"Eyal Lotem" <eyal.lotem@gmail.com> wrote:

On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:

"Eyal Lotem" <eyal.lotem@gmail.com> wrote:

B. Some lookups use static "literal" keys, and could benefit from accessing a dict item directly (very fast O(1) _worst_ case, without even a function call when done from the C side).

[snip]

What you are proposing is to add a level of indirection from some pointer in the dictionary to some special PyDictItem object. This will slow Python execution when such a thing is used. Why? The extra level of indirection requires another pointer following, as well as a necessary check on the bitmap you are proposing, nevermind the additional memory overhead of having the item (generally doubling the size of dictionary objects that use such 'exported' items).

You don't mention what algorithm/structure will allow for the accessing of your dictionary items in O(1) time, only that after you have this bitmap and dictionary item thingy it will be O(1) time (where dictionaries weren't already fast enough natively). I don't believe you have fully thought through this, but feel free to post C or Python source that describes your algorithm in detail to prove me wrong.

Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).

But you still don't explain *how* these exported keys are going to be accessed. Walk me through the steps required to improve access times in the following case:

def foo(obj):
    return obj.foo

I think you missed what I said - I said that the functionality should probably not be exported to Python - as Python has little to gain from it (it would have to getattr a C method just to request the exported item -- which will nullify the speed benefit). It is the C code which can suddenly do direct access to the exported dict items - not Python code.

You should note that Python's dictionary implementation has been tuned to work *quite well* for the object attribute/namespace case, and I would be quite surprised if anyone managed to improve upon Raymond's work (without writing platform-specific assembly).

Of course - the idea is not to improve dict's performance with the normal way it is accessed, but to change the way it is accessed for the specific use-case of accessing static values in a static dict - which can be faster than even a fast dict lookup.

If I have a dictionary X, and X has exported keys, then whenever I access exported values in the dictionary via X[key], your proposed indirection will necessarily be slower than the current implementation.

That is true, I acknowledged that. It is also true that access to X[key] is slower even when key is not exported. When I have a few spare moments, I'll try and benchmark how much slower it is.

The dict lookups in globals and builtins are all looking for literal static keys in a literal static dict. In this specific case, it is better to outdo the existing dict performance by adding a special way to access such static keys in dicts - one which insignificantly slows down access to the dict, but significantly speeds up this very common use pattern.

Please benchmark before you claim "insignificant" performance degradation in the general case. I claim that adding a level of indirection and the accessing of a bit array (which in C is technically a char array with every bit getting a full char, unless you use masks and shifts, which will be slower still) is necessarily slower than the access characteristics of current dictionaries. We can see this as a combination of an increase in the number of operations necessary to do arbitrary dictionary lookups, increased cache overhead of those lookups, as well as the delay and cache overhead of accessing the bit array.

You are right - we disagree there, but until I benchmark all words are moot.

Attribute lookups in the class dict are all literal/static key lookups in a static dict (though in order for a code object to know that it is a static dict, a psyco-like system is required. If such a system is used, all of those dict lookups can be made faster as well).

No, attribute lookups are not all literal/static key lookups. See getattr/setattr/delattr, operations on cls.__dict__, obj.__dict__, etc.

I may have oversimplified a bit for the sake of explaining. I was referring to the operations that are taken by LOAD_ATTR, as an example. Let's analyze the LOAD_ATTR bytecode instruction:

* Calls PyObject_GetAttr for the requested attribute name on the requested object.
* PyObject_GetAttr redirects it to the type's tp_getattr[o].
* When tp_getattr[o] is not overridden, this calls PyObject_GenericGetAttr.
* PyObject_GenericGetAttr first looks for a method descriptor in the dicts of every class in the entire __mro__. If it found a getter/setter descriptor, it uses that. If it didn't, it tries the instance dict, and then uses the class descriptor/attr.

I believe this implementation to be very wasteful (specifically the last step) and I have posted a separate post about this in python-dev. There is work being done to create an mro cache for types which would allow converting the mro lookup to a single lookup in most cases. I believe that this mro cache should become a single dict object inside each type object, which holds a merge (according to mro order) of all the dicts in its mro.

If this modification is done, then PyObject_GenericGetAttr can become a lookup in the instance dict (which, btw, can also disappear when __slots__ is used in the class), and a lookup in the mro cache dict. If this is the case, then LOAD_ATTR, which is most often used with literal names, can (if the type of the object being accessed is known [via a psyco-like system]) become a regular lookup on the instance dict, and a "static lookup" on the class mro cache dict (which would use an exported dict item). If the psyco-like system can even create code objects which are not only specific to one type, but to a specific instance, even the instance lookup of the literal attribute name can be converted to a "static lookup" in the instance's dict. Since virtually all LOAD_ATTR's are using literal strings, virtually all of the class-side "dynamic lookups" can be converted to "static lookups".

Since a "static lookup" costs a dereference and a conditional, and a dynamic lookup entails at least 4 C function calls (including C stack setup/unwinds), a few C assignments and C conditionals, I believe it is likely that this will pay off as a serious improvement in Python's performance, when combined with a psyco-like system (not an architecture-dependent one).

From what I can gather (please describe the algorithm now that I have asked twice), the only place where there exists improvement potential is in the case of global lookups in a module. That is to say, if a function/method in module foo is accessing some global variable bar, the compiler can replace LOAD_GLOBAL/STORE_GLOBAL/DEL_GLOBAL with an opcode to access a special PyDictItem object that sits in the function/method cell variables, rather than having to look in the globals dictionary (that is attached to every function and method).

As I explained above, there is room for improvement in normal attribute lookups; however, that improvement requires a psyco-like system in order to be able to deduce which dicts are going to be accessed by the GetAttr mechanism, and then using static lookups to access them directly.

With access to globals and builtins, this optimization is indeed easier, and your description is correct. I can be a little more specific:

* Each code object can contain "co_globals_dict_items" and "co_builtins_dict_items" attributes that refer to the exported dict items for each literal name in both the globals and builtins dicts.

* Each access to a literal name in the globals/builtins namespace will, at the compilation stage, request the globals dict and builtins dict to create an exported item for that literal name. This exported item will be put into co_globals_dict_items/co_builtins_dict_items in the code object.

* LOAD_GLOBAL will not be used when literal names are accessed. Instead, a new bytecode instruction, "LOAD_LITERAL_GLOBAL", will be used, with an index into the co_globals_dict_items/co_builtins_dict_items tuples in the code object.

* LOAD_LITERAL_GLOBAL will use the index to find the PyExportedDictItem in those tuples, and look like (a bit more verbose naming for clarity):

case LOAD_LITERAL_GLOBAL:
    exported_dict_item = GETITEM(co->co_globals_dict_items, oparg);
    x = exported_dict_item->value;
    if (NULL == x) {
        exported_dict_item = GETITEM(co->co_builtins_dict_items, oparg);
        x = exported_dict_item->value;
        if (NULL == x) {
            format_exc_check_arg(PyExc_NameError, MSG,
                                 GETITEM(co->co_globals_names, oparg));
            break;
        }
    }
    Py_INCREF(x);
    PUSH(x);
    continue;

I hope that with these explanations and some code snippets my intentions are more clear.
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:

Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).
But you still don't explain *how* these exported keys are going to be accessed. Walk me through the steps required to improve access times in the following case:
def foo(obj):
    return obj.foo
I think you missed what I said - I said that the functionality should probably not be exported to Python - as Python has little to gain from it (it would have to getattr a C method just to request the exported item -- which will nullify the speed benefit).
It is the C code which can suddenly do direct access to the exported dict items - not Python code.
Maybe my exposure to C extensions is limited, but I haven't seen a whole lot of C doing the equivalent of obj.attrname outside of the Python standard library. And even then, it's not "I'm going to access attribute Y of object X a million times", it's "I'm going to access some attributes on some objects". The only exception that I've seen happen really at all is when someone converts their pure Python library that interacts with other libraries into Pyrex. But even then, repeated access is generally uncommon except in wxPython uses; wx.<attrname> (which I've never seen converted to Pyrex), and in those cases, repeated access is generally rare. I'm curious as to what kind of C code you are seeing in which these cached lookups will help in a substantial way.
If I have a dictionary X, and X has exported keys, then whenever I access exported values in the dictionary via X[key], your proposed indirection will necessarily be slower than the current implementation.
That is true, I acknowledged that. It is also true that access to X[key] is slower even when key is not exported. When I have a few spare moments, I'll try and benchmark how much slower it is.
I await your benchmarks.
Attribute lookups in the class dict are all literal/static key lookups in a static dict (though in order for a code object to know that it is a static dict, a psyco-like system is required. If such a system is used, all of those dict lookups can be made faster as well).
No, attribute lookups are not all literal/static key lookups. See getattr/setattr/delattr, operations on cls.__dict__, obj.__dict__, etc.
I may have oversimplified a bit for the sake of explaining. I was referring to the operations that are taken by LOAD_ATTR, as an example. Let's analyze the LOAD_ATTR bytecode instruction:

* Calls PyObject_GetAttr for the requested attribute name on the requested object.
* PyObject_GetAttr redirects it to the type's tp_getattr[o].
* When tp_getattr[o] is not overridden, this calls PyObject_GenericGetAttr.
* PyObject_GenericGetAttr first looks for a method descriptor in the dicts of every class in the entire __mro__. If it found a getter/setter descriptor, it uses that. If it didn't, it tries the instance dict, and then uses the class descriptor/attr.
I believe this implementation to be very wasteful (specifically the last step) and I have posted a separate post about this in python-dev.
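(For readers following along, that lookup order can be modeled in Python roughly as follows - a simplified sketch that ignores metaclasses, the __getattr__ fallback, and error details:)

_MISSING = object()

def generic_getattr(obj, name):
    # Simplified model of PyObject_GenericGetAttr's lookup order.
    attr = _MISSING
    for klass in type(obj).__mro__:       # scan every class dict in the MRO
        if name in klass.__dict__:
            attr = klass.__dict__[name]
            break
    # 1. A data descriptor found on the type takes priority.
    if (attr is not _MISSING and hasattr(type(attr), '__set__')
            and hasattr(type(attr), '__get__')):
        return type(attr).__get__(attr, obj, type(obj))
    # 2. Then the instance dict is consulted.
    inst_dict = getattr(obj, '__dict__', None)
    if inst_dict is not None and name in inst_dict:
        return inst_dict[name]
    # 3. Then a non-data descriptor's __get__, or the plain class attribute.
    if attr is not _MISSING:
        if hasattr(type(attr), '__get__'):
            return type(attr).__get__(attr, obj, type(obj))
        return attr
    raise AttributeError(name)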
Due to the lack of support on the issue in python-dev (it would break currently existing code, and the time for Python 3.0 PEPs is past), I doubt you are going to get any changes in this area unless the resulting semantics are unchanged.
Since a "static lookup" costs a dereference and a conditional, and a dynamic lookup entails at least 4 C function calls (including C stack setup/unwinds), a few C assignments and C conditionals, I believe it is likely that this will pay off as a serious improvement in Python's performance, when combined with a psyco-like system (not an architecture-dependent ones).
It's really only useful if you are accessing fixed attributes of a fixed object many times. The only case I can think of where this kind of thing would be useful (sufficient accesses to make a positive difference) is in the case of module globals, but in that case, we can merely change how module globals are implemented (more or less like self.__dict__ = ... in the module's __init__ method).
From what I can gather (please describe the algorithm now that I have asked twice), the only place where there exists improvement potential is in the case of global lookups in a module. That is to say, if a function/method in module foo is accessing some global variable bar, the compiler can replace LOAD_GLOBAL/STORE_GLOBAL/DEL_GLOBAL with an opcode to access a special PyDictItem object that sits in the function/method cell variables, rather than having to look in the globals dictionary (that is attached to every function and method).
As I explained above, there is room for improvement in normal attribute lookups, however that improvement requires a psyco-like system in order to be able to deduce which dicts are going to be accessed by the GetAttr mechanism and then using static lookups to access them directly.
Insights into a practical method of such optimizations are not leaping forth from my brain (aside from using a probabilistic tracking mechanism to minimize overhead), though my experience with JIT compilers is limited. Maybe someone else has a good method (though I suspect that this particular problem is hard enough to make it not practical to make it into Python).
With access to globals and builtins, this optimization is indeed easier, and your description is correct. I can be a little more specific:

* Each code object can contain "co_globals_dict_items" and "co_builtins_dict_items" attributes that refer to the exported dict items for each literal name in both the globals and builtins dicts.
* Each access to a literal name in the globals/builtins namespace, at the compilation stage, will request the globals dict and builtins dict to create an exported item for that literal name. This exported item will be put into the co_globals_dict_items/co_builtins_dict_items in the code object.
* LOAD_GLOBAL will not be used when literal names are accessed. Instead, a new bytecode instruction, "LOAD_LITERAL_GLOBAL", will be used, with an index into the co_globals_dict_items/co_builtins_dict_items tuples in the code object.
You may want to change the name. "Literal" implies a constant, like 1 or "hello", as in 'x = "hello"'. LOAD_GLOBAL_FAST would seem to make more sense to me, considering that is what it intends to do. - Josiah
On 6/11/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:

Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).
But you still don't explain *how* these exported keys are going to be accessed. Walk me through the steps required to improve access times in the following case:
def foo(obj):
    return obj.foo
I think you missed what I said - I said that the functionality should probably not be exported to Python - as Python has little to gain from it (it would have to getattr a C method just to request the exported item -- which will nullify the speed benefit).
It is the C code which can suddenly do direct access to the exported dict items - not Python code.
Maybe my exposure to C extensions is limited, but I haven't seen a whole lot of C doing the equivalent of obj.attrname outside of the Python standard library. And even then, it's not "I'm going to access attribute Y of object X a million times", it's "I'm going to access some attributes on some objects". The only exception that I've seen happen really at all is when someone converts their pure Python library that interacts with other libraries into Pyrex. But even then, repeated access is generally uncommon except in wxPython uses; wx.<attrname> (which I've never seen converted to Pyrex), and in those cases, repeated access is generally rare.
I'm curious as to what kind of C code you are seeing in which these cached lookups will help in a substantial way.
While extensions are an optimization target, the main target is global/builtin/attribute accessing code.
If I have a dictionary X, and X has exported keys, then whenever I access exported values in the dictionary via X[key], your proposed indirection will necessarily be slower than the current implementation.
That is true, I acknowledged that. It is even true also that access to X[key] even when key is not exported is slower. When I have a few spare moments, I'll try and benchmark how much slower it is.
I await your benchmarks.

I have started work on this. I am still struggling to understand some nuances of dict's implementation in order to be able to make such a change.
Attribute lookups in the class dict are all literal/static key lookups in a static dict (though in order for a code object to know that it is a static dict, a psyco-like system is required. If such a system is used, all of those dict lookups can be made faster as well).
No, attribute lookups are not all literal/static key lookups. See getattr/setattr/delattr, operations on cls.__dict__, obj.__dict__, etc.
I may have oversimplified a bit for the sake of explaining. I was referring to the operations that are taken by LOAD_ATTR, as an example. Let's analyze the LOAD_ATTR bytecode instruction:

* Calls PyObject_GetAttr for the requested attribute name on the requested object.
* PyObject_GetAttr redirects it to the type's tp_getattr[o].
* When tp_getattr[o] is not overridden, this calls PyObject_GenericGetAttr.
* PyObject_GenericGetAttr first looks for a method descriptor in the dicts of every class in the entire __mro__. If it found a getter/setter descriptor, it uses that. If it didn't, it tries the instance dict, and then uses the class descriptor/attr.
I believe this implementation to be very wasteful (specifically the last step) and I have posted a separate post about this in python-dev.
Due to the lack of support on the issue in python-dev (it would break currently existing code, and the time for Python 3.0 PEPs is past), I doubt you are going to get any changes in this area unless the resulting semantics are unchanged.
Well, I personally find those semantics (involving the question of whether the class attribute has a __set__ or not) to be "inelegant", at best, but since I realized that the optimization I am proposing is orthogonal to that change, I have lost interest there.
Since a "static lookup" costs a dereference and a conditional, and a dynamic lookup entails at least 4 C function calls (including C stack setup/unwinds), a few C assignments and C conditionals, I believe it is likely that this will pay off as a serious improvement in Python's performance, when combined with a psyco-like system (not an architecture-dependent ones).
It's really only useful if you are accessing fixed attributes of a fixed object many times. The only case I can think of where this kind of thing would be useful (sufficient accesses to make a positive difference) is in the case of module globals, but in that case, we can merely change how module globals are implemented (more or less like self.__dict__ = ... in the module's __init__ method).

That's not true.
As I explained, getattr accesses the type's mro dicts as well. So even if you are accessing a lot of different instances, and those have a shared (fixed) type, you can speed up the type-side dict lookup (even if you still pay for a whole instance-side lookup). Also, "fixed-object" access can occur when you have a small number of objects whose attributes are looked up many times. In such a case, a psyco-like system can create a specialized code object specifically for _instances_ (not just for types), each code object using "static lookups" on the instance's dict as well, and not just on the class's dict.
From what I can gather (please describe the algorithm now that I have asked twice), the only place where there exists improvement potential is in the case of global lookups in a module. That is to say, if a function/method in module foo is accessing some global variable bar, the compiler can replace LOAD_GLOBAL/STORE_GLOBAL/DEL_GLOBAL with an opcode to access a special PyDictItem object that sits in the function/method cell variables, rather than having to look in the globals dictionary (that is attached to every function and method).
As I explained above, there is room for improvement in normal attribute lookups, however that improvement requires a psyco-like system in order to be able to deduce which dicts are going to be accessed by the GetAttr mechanism and then using static lookups to access them directly.
Insights into a practical method of such optimizations are not leaping forth from my brain (aside from using a probabilistic tracking mechanism to minimize overhead), though my experience with JIT compilers is limited. Maybe someone else has a good method (though I suspect that this particular problem is hard enough to make it not practical to make it into Python).
With access to globals and builtins, this optimization is indeed easier, and your description is correct. I can be a little more specific:

* Each code object can contain "co_globals_dict_items" and "co_builtins_dict_items" attributes that refer to the exported dict items for each literal name in both the globals and builtins dicts.
* Each access to a literal name in the globals/builtins namespace, at the compilation stage, will request the globals dict and builtins dict to create an exported item for that literal name. This exported item will be put into the co_globals_dict_items/co_builtins_dict_items in the code object.
* LOAD_GLOBAL will not be used when literal names are accessed. Instead, a new bytecode instruction, "LOAD_LITERAL_GLOBAL", will be used, with an index into the co_globals_dict_items/co_builtins_dict_items tuples in the code object.
You may want to change the name. "Literal" implies a constant, like 1 or "hello", as in 'x = "hello"'. LOAD_GLOBAL_FAST would seem to make more sense to me, considering that is what it intends to do.

Well, LOAD_GLOBAL_FAST can only be used when the string that's being looked up is known at code-object creation time, which means that the attribute name was indeed literal.

Implementing a psyco-like system in CPython is indeed not an easy task, but it is possible. The simple idea is that you create specialized code objects that are specific to an instance or to a type when the code object is first run with that instance or type, and use an exact-type check to invoke the right code object. The specialized code object can use "static lookups" in dicts, and perhaps even avoid using obj->ob_type->slotname (instead use the slot directly, as the code is already specific to a type).

Eyal
Eyal,

Have you taken a look at Andrea Griffini's patch? http://python.org/sf/1616125

-jJ
That seems interesting. My patch should have the same speed-up effect (assuming it has no serious consequences for the performance of dicts in general) for constant read-only globals/builtins, but it should also speed up global writes and reads of globals/builtins that constantly change. Thanks for the reference; it is encouraging as to what speedup I can expect from my patch.

On 6/11/07, Jim Jewett <jimjjewett@gmail.com> wrote:
Eyal,
Have you taken a look at Andrea Griffini's patch, http://python.org/sf/1616125
-jJ
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/11/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:

Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).
But you still don't explain *how* these exported keys are going to be accessed. Walk me through the steps required to improve access times in the following case:
def foo(obj):
    return obj.foo
I think you missed what I said - I said that the functionality should probably not be exported to Python - as Python has little to gain from it (it would have to getattr a C method just to request the exported item -- which will nullify the speed benefit).
It is the C code which can suddenly do direct access to the exported dict items - not Python code.

[snip]

While extensions are an optimization target, the main target is global/builtin/attribute accessing code.
Or really, module globals and __builtin__ accessing. Arbitrary attribute access is one of those "things most commonly done in Python". But just for the sake of future readers of this thread, could you explicitly enumerate *which* things you intend to speed up with this work?
Since a "static lookup" costs a dereference and a conditional, and a dynamic lookup entails at least 4 C function calls (including C stack setup/unwinds), a few C assignments and C conditionals, I believe it is likely that this will pay off as a serious improvement in Python's performance, when combined with a psyco-like system (not an architecture-dependent ones).
It's really only useful if you are accessing fixed attributes of a fixed object many times. The only case I can think of where this kind of thing would be useful (sufficient accesses to make a positive difference) is in the case of module globals, but in that case, we can merely change how module globals are implemented (more or less like self.__dict__ = ... in the module's __init__ method).
That's not true.
As I explained, getattr accesses the type's mro dicts as well. So even if you are accessing a lot of different instances, and those have a shared (fixed) type, you can speed up the type-side dict lookup (even if you still pay for a whole instance-side lookup). Also,
That's MRO caching, which you have already stated is orthogonal to this particular proposal.
"fixed-object" access can occur when you have a small number of objects whose attributes are looked up many times. In such a case, a psyco-like system can create a specialized code object specifically for _instances_ (not just for types), each code object using "static lookups" on the instance's dict as well, and not just on the class's dict.
If you re-read my last posting, which you quoted above and I re-quote, you can easily replace 'fixed attributes of a fixed object' with 'fixed attributes of a small set of fixed objects' and get what you say. Aside from module globals, when is this seen?
You may want to change the name. "Literal" implies a constant, like 1 or "hello", as in 'x = "hello"'. LOAD_GLOBAL_FAST would seem to make more sense to me, considering that is what it intends to do.
Well, LOAD_GLOBAL_FAST can only be used when the string that's being looked up is known at the code-object creation time, which means that the attribute name was indeed literal.
A literal is a value. A name/identifier is a reference. In: a = "hello" ... "hello" is a literal. In: hello = 1 ... hello is a name/identifier. In: b.hello = 1 ... hello is a named attribute of an object named/identified by b. - Josiah
On 6/11/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/11/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:
"Eyal Lotem" <eyal.lotem@gmail.com> wrote:
On 6/10/07, Josiah Carlson <jcarlson@uci.edu> wrote:

Only access of exported items is O(1) time (when accessed via your PyDictItem_obj->value); other items must be accessed normally and they take just as much time (or, as I explained and you reiterated, a tad longer, as it requires a bitmap check and in the case of exported items another dereference).
But you still don't explain *how* these exported keys are going to be accessed. Walk me through the steps required to improve access times in the following case:
def foo(obj):
    return obj.foo
I think you missed what I said - I said that the functionality should probably not be exported to Python - as Python has little to gain from it (it would have to getattr a C method just to request the exported item -- which will nullify the speed benefit).
It is the C code which can suddenly do direct access to the exported dict items - not Python code.

[snip]

While extensions are an optimization target, the main target is global/builtin/attribute accessing code.
Or really, module globals and __builtin__ accessing. Arbitrary attribute access is one of those "things most commonly done in Python". But just for the sake of future readers of this thread, could you explicitly enumerate *which* things you intend to speed up with this work?
As for optimizing globals/builtins, this will be the effect of my patch:

global x
x = 5               # Direct-access write instead of a dict write
x                   # Direct-access read
globals()['x'] = 5  # Same speed as before
globals()['x']      # Same speed as before
min                 # Two direct-access reads, instead of 2 dict reads

As for attribute access in classes, the speedup I can gain depends on a psyco-like system. Let's assume that we have a system that utilizes a new TYPE_FORK opcode that jumps to different code according to a map of types. For example, if we have:

def f(x, y):
    x.hello()

Then we will create a TYPE_FORK opcode before x.hello() that takes 'x' as an argument, and a map of types (initially empty). When the exact type of 'x' isn't in the map, the rest of the code in the code object after TYPE_FORK will have a specialized version created for x's current type [only if that type doesn't override tp_getattr/o], and inserted into the map. The specialized version of the code will contain, instead of a LOAD_ATTR for the string "hello", a FAST_LOAD_ATTR for the string "hello", associated with the direct-access dict item in the mro dict. (If there's no mro cache, I actually have a problem here, because I don't know which dicts I need to export dict items from - and worse, that list may change with time. The simplest solution is to use an exported item from an mro cache dict.)

FAST_LOAD_ATTR will not call PyObject_GetAttr, but will instead use the exported dict items to find the descriptor/classattr using direct access. If it finds a descriptor with __get__/__set__, it will return its get-call. Otherwise, it will do a normal expensive lookup on the instance dict (for "hello") (unless __slots__ is defined, in which case there is no instance dict). If it finds that, it will return that. Otherwise, it will return the descriptor's get-call if it has one, or the descriptor itself as a classattr. In other words, I am reimplementing PyObject_GenericGetAttr here, but using my direct lookup for the mro-side lookups. The result is:

class X(object):
    def method(self, arg):
        self.x = arg         # One direct lookup + dict lookup instead of two dict lookups
        self.other_method()  # One direct lookup + dict lookup instead of two dict lookups

class Y(object):
    __slots__ = ["x"]
    def method(self, arg):
        self.x = arg         # One direct lookup instead of one dict lookup
        self.other_method()  # One direct lookup instead of one dict lookup

A direct lookup is significantly cheaper than a dict lookup (as optimized as dict is, it still involves C call-stack setups/unwinds, more conditionals, assignments, potential collisions, and far more instructions). Therefore, with the combination of a psyco-like system I can eliminate one of two dict-lookup costs, and with the combination of __slots__ as well, I can eliminate one of one dict-lookup costs.
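(A pure-Python model of that dispatch, to pin down the idea - entirely hypothetical, since the real mechanism would live in the bytecode interpreter, and specialize() below merely stands in for generating a specialized code object:)

def specialize(fn, tp):
    # Placeholder: a real system would emit a code object whose attribute
    # lookups go through exported items of tp's mro cache dict; here we
    # just reuse the generic version.
    return fn

def type_fork(generic):
    # One specialized version per exact type of the first argument.
    specialized = {}
    def dispatch(obj, *args):
        fn = specialized.get(type(obj))
        if fn is None:
            fn = specialized[type(obj)] = specialize(generic, type(obj))
        return fn(obj, *args)
    return dispatch

def f(x):
    return x.hello()

f = type_fork(f)   # subsequent calls dispatch on type(x)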
Since a "static lookup" costs a dereference and a conditional, and a dynamic lookup entails at least 4 C function calls (including C stack setup/unwinds), a few C assignments and C conditionals, I believe it is likely that this will pay off as a serious improvement in Python's performance, when combined with a psyco-like system (not an architecture-dependent ones).
It's really only useful if you are accessing fixed attributes of a fixed object many times. The only case I can think of where this kind of thing would be useful (sufficient accesses to make a positive difference) is in the case of module globals, but in that case, we can merely change how module globals are implemented (more or less like self.__dict__ = ... in the module's __init__ method).
That's not true.
As I explained, getattr accesses the type's mro dicts as well. So even if you are accessing a lot of different instances, and those have a shared (fixed) type, you can speed up the type-side dict lookup (even if you still pay for a whole instance-side lookup). Also,
That's MRO caching, which you have already stated is orthogonal to this particular proposal.

I may have made a mistake before - it's not completely orthogonal, as an MRO cache dict which can export items for direct access in psyco'd code is a clean and simple solution, while the lack of an MRO cache means that finding which class-side dicts to take exported items from may be a difficult problem, which may involve a cache of its own.
"fixed-object" access can occur when you have a small number of objects whose attributes are looked up many times. In such a case, a psyco-like system can create a specialized code object specifically for _instances_ (not just for types), each code object using "static lookups" on the instance's dict as well, and not just on the class's dict.
If you re-read my last posting, which you quoted above and I re-quote, you can easily replace 'fixed attributes of a fixed object' with 'fixed attributes of a small set of fixed objects' and get what you say. Aside from module globals, when is this seen?

It's seen when many calls are made on singletons. It's seen when an inner loop is not memory-intensive but is computationally intensive - which would translate to having an instance calling methods on itself or on other instances. You may only have 100 instances relevant in your inner loop, which is running many millions of times. In such a case, you will benefit greatly if, for every code object in the methods of every instance, you create specialized code for every one of the types it is called with (say, 3 types per code object). You take perhaps a factor of 3 of space for code objects (which I believe are not a significant portion of memory consumption in Python), but your performance will involve _no_ dict access at all for attribute lookups, and will instead just compare instance pointers and then use direct access.
You may want to change the name. "Literal" implies a constant, like 1 or "hello", as in 'x = "hello"'. LOAD_GLOBAL_FAST would seem to make more sense to me, considering that is what it intends to do.
Well, LOAD_GLOBAL_FAST can only be used when the string that's being looked up is known at the code-object creation time, which means that the attribute name was indeed literal.
A literal is a value. A name/identifier is a reference.
In: a = "hello" ... "hello" is a literal.
In: hello = 1 ... hello is a name/identifier.
In: b.hello = 1 ... hello is a named attribute of an object named/identified by b. - Josiah

Then I agree: the use of the word "literal" here is inappropriate; constant/static may be more appropriate.
Eyal
I have created a new thread on Python-Ideas to discuss this. I have also written some code and attached a patch. I did not eventually have to use a bitmap in dicts, but could abuse the top hash bit instead: https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1739789&group_id=5470

pystones and other benchmarks seem to accelerate by about 5%. Other specific benchmarks, built to measure the speed increase of globals/builtins accesses, measure a 42% speedup. Regression tests are less than 5% slower, but I assume that once this acceleration is applied to attribute lookups as well, they will be accelerated too.

Eyal
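(In outline, the top-bit trick replaces the proposed bitmap with a spare bit in the hash field that each dict entry already stores: when the bit is set, the entry's value slot holds the exported-item cell rather than the value itself. A hypothetical sketch of the read path, reusing the ExportedItem cell from the model in the first message:)

EXPORTED_BIT = 1 << 62   # hypothetical: a hash bit not used for probing

def entry_get(entry_hash, value_slot):
    # Model of the per-entry read path in the patched dict.
    if entry_hash & EXPORTED_BIT:
        return value_slot.value   # one extra dereference: an ExportedItem
    return value_slot             # the common, unmarked case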