<br><div><span class="gmail_quote">Larry Hastings wrote:</span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Chetan Pandya wrote:<br>> I don't have a patch build, since I didn't download the revision used
<br>> by the patch.<br>> However, I did look at values in the debugger and it looked like x in<br>> your example above had a reference count of 2 or more within<br>> string_concat even when there were no other assignments that would
<br>> account for it.<br>It could be the optimizer. If you concatenate hard-coded strings, the<br>peephole optimizer does constant folding. It says "hey, look, this<br>binary operator is performed on two constant objects". So it evaluates
<br>the expression itself and substitutes the result, in this case swapping<br>(pseudotokens here) [PUSH "a" PUSH "b" PLUS] for [PUSH "ab"].<br><br>Oddly, it didn't seem to optimize away the whole expression. If you say
<br>"a" + "b" + "c" + "d" + "e", I would have expected the peephole<br>optimizer to turn that whole shebang into [PUSH "abcde"]. But when I<br>gave it a cursory glance it seemed to skip every-other; it
<br>constant-folded "a" + "b", then + "c" and optimized ("a" + "b" + "c") +<br>"d", resulting ultimately I believe in [PUSH "ab" PUSH "cd" PLUS PUSH
<br>"e" PLUS]. But I suspect I missed something; it bears further<br>investigation.</blockquote><div><br>I looked at the optimizer, but couldn't find any place where it does constant folding for strings. However, I am unable to set breakpoints for some mysterious reason, so investigating is somewhat hard. But I am not bothered by it anymore, since it does not behave the way I originally thought it did.
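As an aside, one way to observe the folding from Python itself is to inspect the compiled code object's constants. (Note this reflects recent CPython, where the folding happens in the AST optimizer rather than the peephole optimizer discussed above; the 2.x-era behavior described in the quote may differ.)

```python
# Compile a chain of literal string concatenations and check whether
# the compiler folded it into a single constant.
code = compile('"a" + "b" + "c" + "d" + "e"', '<test>', 'eval')

# On recent CPython the folded result appears in co_consts.
print('abcde' in code.co_consts)
```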
<br></div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">But this is all academic, as real-world performance of my patch is not<br>contingent on what the peephole optimizer does to short runs of
<br>hard-coded strings in simple test cases.<br><br>> The recursion limit seems to be optimistic, given the default stack<br>> limit, but of course, I haven't tried it.<br>I've tried it, on exactly one computer (running Windows XP). The depth
<br>limit was arrived at experimentally. But it is probably too optimistic<br>and should be winched down.</blockquote>
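A toy model may make the depth-limit behavior concrete. This is not the patch's actual C code; the class, the `render` strategy, and the `MAX_DEPTH` value here are all made up for illustration:

```python
class LazyConcat:
    """Toy model of a lazy string concatenation node (illustrative only)."""
    MAX_DEPTH = 16  # hypothetical limit; the real patch used a different value

    def __init__(self, left, right):
        self.left, self.right = left, right
        ld = left.depth if isinstance(left, LazyConcat) else 0
        rd = right.depth if isinstance(right, LazyConcat) else 0
        self.depth = max(ld, rd) + 1

    def render(self):
        # Flatten iteratively, so rendering a degenerate (deeply
        # right- or left-leaning) tree cannot blow the C stack.
        parts, stack = [], [self]
        while stack:
            node = stack.pop()
            if isinstance(node, LazyConcat):
                stack.append(node.right)  # pushed first, popped second
                stack.append(node.left)   # popped first: left-to-right order
            else:
                parts.append(node)
        return "".join(parts)


def concat(a, b):
    """Concatenate lazily, forcing a render once the tree gets too deep."""
    node = LazyConcat(a, b)
    if node.depth > LazyConcat.MAX_DEPTH:
        return node.render()
    return node
```

Repeatedly doing `x = concat("a", x)` builds the degenerate tree described below, one level deeper per iteration, until the depth check forces a render and the cycle starts over.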
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">On the other hand, right now when you do x = "a" + x ten zillion times<br>there are always two references to the concatenation object stored in x:
<br>the interpreter holds one, and x itself holds the other. That means I<br>have to build a new concatenation object each time, so it becomes a<br>degenerate tree (one leaf and one subtree) recursing down the right-hand
<br>side.</blockquote><div><br>This is the case I was thinking of (but not what I wrote).<br></div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
I plan to fix that in my next patch. There's already code that says "if<br>the next instruction is a store, and the location we're storing to holds<br>a reference to the left-hand side of the concatenation, make the
<br>location drop its reference". That was an optimization for the<br>old-style concat code; when the left side only had one reference it<br>would simply resize it and memcpy() in the right side. I plan to add<br>
support for dropping the reference when it's the *right*-hand side of
<br>the concatenation, as that would help prepending immensely. Once that's<br>done, I believe it'll prepend ((depth limit) * (number of items in<br>ob_sstrings - 1)) + 1 strings before needing to render.</blockquote><div>
<br>I am confused as to whether you are referring to the LHS of the concatenation operation or of the assignment operation. I haven't yet looked at how the reference-counting optimizations are done. In general, there are caveats about removing references, but I plan to look into that later.
<br></div></div><br>There is another, possibly complementary, way of reducing the recursion depth. When creating a new concatenation object, instead of storing references to the two operand strings, the strings those operands themselves reference can be copied into the new object. This can be done when the number of strings they contain is small. In the x = "a" + x case, for example, this reduces the depth of the string tree (but not the number of allocations).
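A sketch of that splicing idea, modeling a concatenation object as a plain Python list of child strings or nodes (the `MAX_INLINE` threshold and all names here are hypothetical, standing in for the patch's ob_sstrings array):

```python
MAX_INLINE = 8  # hypothetical threshold for splicing a child's strings


def make_concat(left, right):
    """Build a concatenation node, inlining small children.

    A node is a list whose items are strings or other nodes. Instead of
    always storing [left, right], splice a child's own items in directly
    when it holds only a few, which keeps the tree shallow.
    """
    def items(node):
        if isinstance(node, list) and len(node) <= MAX_INLINE:
            return node   # small child: splice its items in directly
        return [node]     # plain string or large child: keep one reference
    return items(left) + items(right)


def render(node):
    """Flatten a node back into an ordinary string."""
    if isinstance(node, str):
        return node
    return "".join(render(item) for item in node)
```

With this, repeated `x = make_concat("a", x)` grows the top-level list until it exceeds `MAX_INLINE`, and only then adds a tree level, so depth grows roughly once per `MAX_INLINE` prepends instead of once per prepend.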