If I'm understanding you correctly, this would address something I've wanted list comprehensions to do for a little while:

[x for x in range(0,10) until greaterthanN(4,x)]

or, trying to take from the above example,

tokenchecks = [token for regex,token in match_tok until regex.match(s)]
# do whatever from this point forward.
return tokenchecks[-1]
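For comparison, something close to the proposed 'until' can be written today with itertools.takewhile, though with the condition inverted (takewhile keeps items while its predicate holds). Note one ambiguity the sketch below glosses over: takewhile excludes the element that triggers the stop, while the tokenchecks[-1] usage above suggests 'until' would include it. greaterthanN here is just the hypothetical helper from the example.

```python
from itertools import takewhile

# Hypothetical helper from the example above.
def greaterthanN(n, x):
    return x > n

# Rough stand-in for: [x for x in range(0, 10) until greaterthanN(4, x)]
# takewhile keeps items while its predicate is true, so the 'until'
# condition must be negated; the triggering element (5) is excluded.
result = list(takewhile(lambda x: not greaterthanN(4, x), range(0, 10)))
print(result)  # [0, 1, 2, 3, 4]
```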

This ignores the case where you might want to bind a name to the tested expression in both the if and the until clauses, to prevent redundancy, like so:

tokenchecks = [token for regex,token in match_tok until regex.match(s) != None if regex.match(s)]
#"!= None" included to make the point that you can evaluate more than simple presence of a value
if len(tokenchecks) == 0:
    #do stuff
else:
    process(tokenchecks[-1])

Perhaps you could add some variable binding in there (...maaaaybe.):
#Bad example?
tokenchecks = [token for regex,token in match_tok until MatchP != None if MatchP with MatchP as regex.match(s)]

#Better example?
tokenchecks = [token for regex,token in match_tok until MatchP.group(0) == 'somethingspecific' if MatchP with MatchP as regex.match(s)]

There are arguments for or against making list comprehensions more complex to accommodate what I hope I understand this thread is getting at, but this is my two cents on a possible syntax. It seems the pattern (or idiom, or whatever the proper term is) of stopping a loop that operates on data before you're done is common enough to justify it. One could argue that this complicates list comprehensions needlessly, or one could argue that list comprehensions already do what for loops can, but serve as shortcuts for common patterns.

It also occurs to me that this may not net as many benefits in parallel (or OTOH it could make it easier), but as I do not have significant experience in parallel programming I'll shy away from further commenting.

I'm tired and this email has undergone several revisions late at night on an empty stomach. I apologize for the resulting strangeness or disjointedness, if any. Come to think of it, my example might be worse than my fatigued brain is allowing me to believe.

--Andy

Afterthought: what would happen if the 'until' and 'if' conditions conflicted? Presumably, if the until were never fulfilled, all elements passing the if would be present; but if the until fired on an element the if filters out, would it drop that result and still stop?
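One way to pin down the question above is to spell out a candidate semantics as a plain function. The sketch below (names invented) tests the clauses left to right, mirroring how existing comprehension clauses evaluate: 'until' is checked first, the triggering element is still subject to the 'if' filter, and then iteration stops.

```python
def until_comprehension(iterable, until, keep):
    # One possible semantics for: [x for x in iterable until until(x) if keep(x)]
    # Clauses evaluate left to right: the 'until' test fires first, the
    # triggering element still passes through the 'if' filter, then we stop.
    out = []
    for x in iterable:
        stop = until(x)
        if keep(x):
            out.append(x)
        if stop:
            break
    return out

# Stop once x >= 4, but keep only even numbers: the triggering element 4
# passes the filter and is included; 5, 6, ... are never examined.
print(until_comprehension(range(10), lambda x: x >= 4, lambda x: x % 2 == 0))
# [0, 2, 4]
```

Swapping the order of the stop test and the filter inside the loop would give a different answer for a triggering element the filter rejects, which is exactly the ambiguity the afterthought raises.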

On Tue, Jul 8, 2008 at 11:16 AM, Terry Reedy <tjreedy@udel.edu> wrote:


Talin wrote:

I'm actually more interested in the "as" clause being added to "if" rather than "while". The specific use case I am thinking is matching regular expressions, like so:

  def match_token(s):
     # Assume we have regex's precompiled for the various tokens:
     if re_comment.match(s) as m:
        # Do something with m
        process(TOK_COMMENT, m.group(1))
     elif re_ident.match(s) as m:
        # Do something with m
        process(TOK_IDENT, m.group(1))
     elif re_integer.match(s) as m:
        # Do something with m
        process(TOK_INT, m.group(1))
     elif re_float.match(s) as m:
        # Do something with m
        process(TOK_FLOAT, m.group(1))
     elif re_string.match(s) as m:
        # Do something with m
        process(TOK_STRING, m.group(1))
     else:
        raise InvalidTokenError(s)

In other words, the general design pattern is one where you have a series of tests, where each test can return either None, or an object that you want to examine further.

One of the things I do not like above is the repetition of the assignment target, and statement ends, where it is a little hard to see if it is spelled the same each time.  I think it would be more productive to work on adding to the stdlib a standardized store-and-test class -- something like what others have posted on c.l.p.  In the below, I factored out the null test value(s) from the repeated if tests and added it to the pocket instance as a nulls test attribute.

class pocket:
 def __init__(self, nulls):
   if type(nulls) != tuple or nulls == ():
       nulls = (nulls,)
   self.nulls = nulls
 def __call__(self, value):
   self.value = value
   return self
 def __bool__(self):
   return not (self.value in self.nulls)

then


   def match_token(s):
      # Assume we have regex's precompiled for the various tokens:
      p = pocket(None) # None from above, I don't know match return
      if p(re_comment.match(s)):
         # Do something with p.value
         process(TOK_COMMENT, p.value.group(1))
      elif p(re_ident.match(s)):
         # Do something with p.value
         process(TOK_IDENT, p.value.group(1))
      elif p(re_integer.match(s)):
         # etc

Pocket could be used for the while case, but I will not show that because I strongly feel 'for item in iterator' is the proper idiom.

Your specific example is so regular that it also could be written (better in my opinion) as a for loop by separating what varies from what does not:

match_tok = (
 (re_comment, TOK_COMMENT),
 (re_ident, TOK_IDENT),
  ...
 (re_string, TOK_STRING),
)
# if <re_x> is actually re.compile(x), then tokens get bundled with
# matchers as they are compiled.  This strikes me as good.

for matcher,tok in match_tok:
 m = matcher.match(s)
 if m:
   process(tok, m.group(1))
   break
else:
 raise InvalidTokenError(s)

Notice how nicely Python's rather unique for-else clause handles the default case.
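For anyone unfamiliar with the idiom, here is a minimal self-contained illustration (function and data names are invented for the demo): the else block runs only when the loop finishes without hitting break, which is what makes it a natural home for the "nothing matched" default.

```python
def first_multiple(items, n):
    for x in items:
        if x % n == 0:
            found = x
            break
    else:
        # Runs only when the loop completed without 'break',
        # i.e. no element matched.
        raise ValueError("no multiple of %d found" % n)
    return found

print(first_multiple([7, 9, 12, 15], 4))  # 12
```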

Terry Jan Reedy


_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
http://mail.python.org/mailman/listinfo/python-ideas