
On Sat, Jan 27, 2018 at 8:35 AM, Pau Freixes <pfreixes@gmail.com> wrote:
> def filter(rule, whatever):
>     if rule.x in whatever.x:
>         return True
>
> rules = get_rules()
> whatevers = get_whatevers()
> for rule in rules:
>     for whatever in whatevers:
>         if filter(rule, whatever):
>             cnt = cnt + 1
>
> return cnt
> The performance of Python was almost 10x slower than that of the other languages. It's true that they didn't optimize the code, but they didn't optimize it for any of the languages, so all of them paid the same cost in terms of iterations.
Did you consider using a set instead of a list for your inclusion checks? I don't have the full details of what the code is doing, but the "in" check on a large set can be incredibly fast compared to the equivalent on a list/array.
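A rough micro-benchmark sketch of that difference (the collection size and probe value here are made up for illustration; absolute timings will vary by machine):

import timeit

items = list(range(100_000))
as_list = items       # "in" scans the list: O(n) per check
as_set = set(items)   # "in" hashes the key: O(1) on average

# Probe a value near the end of the list, the worst case for the scan.
for name in ('as_list', 'as_set'):
    t = timeit.timeit(f'99_999 in {name}', globals=globals(), number=1_000)
    print(f'{name}: {t:.4f}s for 1,000 checks')

On a typical machine the set version comes out orders of magnitude faster.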
> This could be considered an unimportant thing, but it's more relevant than one might expect, at least IMHO. If the default code you can write in a language is slow by default, and an alternative exists to make it faster, that language is doing something wrong.
Are you sure it's the language's fault? Failing to use a better data type simply because some other language doesn't have it is a great way to make a test that's "fair" in the same way that Balance and Armageddon are "fair" in Magic: The Gathering. They reset everyone to the baseline, and the baseline is equal for everyone, right? Except that it's unfair to a language that prefers to work somewhere above the baseline and isn't optimized for naive code.
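For the quoted loop, working above the baseline might look something like this sketch. It assumes whatever.x is an iterable of hashable values; the Rule/Whatever namedtuples are hypothetical stand-ins for the real objects, and count_matches is a name I've made up:

from collections import namedtuple

# Hypothetical stand-ins for the real rule/whatever objects.
Rule = namedtuple('Rule', 'x')
Whatever = namedtuple('Whatever', 'x')

def count_matches(rules, whatevers):
    # Build each membership set once, up front; after that, every
    # "in" check is an O(1) average hash lookup, not a linear scan.
    sets = [set(w.x) for w in whatevers]
    cnt = 0
    for rule in rules:
        for xs in sets:
            if rule.x in xs:
                cnt += 1
    return cnt

rules = [Rule(x=i) for i in range(1000)]
whatevers = [Whatever(x=range(500)) for _ in range(100)]
print(count_matches(rules, whatevers))  # 50000: 500 matching rules x 100 sets

ChrisA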