Slices time complexity

Chris Angelico rosuav at gmail.com
Mon May 18 15:36:44 EDT 2015


On Tue, May 19, 2015 at 5:23 AM, Mario Figueiredo <marfig at gmail.com> wrote:
> From the above link it seems slices work in linear time on all cases.
> And this really has a big impact on certain operations. For instance,
> the code below may surprise some people when they realize it doesn't
> run in linear time on 3.4:
>
>     def minimum(values):
>         if len(values) == 1:
>             return values[0]
>         else:
>             m = minimum(values[1:])
>             return m if m < values[0] else values[0]
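(Worth spelling out: each `values[1:]` builds a brand-new list, so the
recursion copies about n^2/2 elements in total. A quick count, with
instrumentation names of my own invention:)

```python
# Count how many elements the recursive version copies via values[1:].
copied = 0

def minimum_recursive(values):
    global copied
    if len(values) == 1:
        return values[0]
    tail = values[1:]            # copies len(values) - 1 elements
    copied += len(tail)
    m = minimum_recursive(tail)
    return m if m < values[0] else values[0]

n = 100
minimum_recursive(list(range(n)))
print(copied)  # -> 4950, i.e. n*(n-1)//2 element copies for n = 100
```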

https://xkcd.com/1270/

Is there really a reason to code this in such a bizarrely inefficient
way? Linear or not, it's bound to be less efficient than the more
obvious form:

def minimum(values):
    values = iter(values)
    min = next(values)
    for value in values:
        if value < min: min = value
    return min
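A nice side effect of the iter()/next() form (my observation, not part
of the original point) is that it accepts any iterable, not just
sliceable sequences. Repeating it here self-contained:

```python
def minimum(values):
    values = iter(values)
    min = next(values)           # raises StopIteration on an empty iterable
    for value in values:
        if value < min: min = value
    return min

print(minimum([3, 1, 4, 1, 5]))                 # -> 1
print(minimum(x * x for x in range(5, 0, -1)))  # generators work too -> 1
```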

And if you know your value domain (maybe you're working with floats,
or positive integers, or something) and can put a hard-coded base
value in, it becomes even simpler:

def minimum(values):
    min = float("inf") # or any known upper bound for your domain
    for value in values:
        if value < min: min = value
    return min
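One caveat with the base-value form (my note): on an empty input it
silently returns the base rather than raising, which may or may not be
what you want:

```python
def minimum(values):
    min = float("inf")  # base value: an upper bound for the domain
    for value in values:
        if value < min: min = value
    return min

print(minimum([3, 1, 2]))  # -> 1
print(minimum([]))         # -> inf (the base), where builtin min([]) raises
```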

It's obvious that this code will complete in linear time. It's also
pretty obvious how it's working: it steps through the collection,
comparing each value against the current smallest. With your recursive
version, it effectively steps backward through the list, comparing
each value against the current smallest, all while unwinding the
stack. It can't even be tail-call-optimized in its current state.
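For comparison, here's what a tail-call shape would look like -- the
running minimum travels down as an accumulator instead of being
compared while unwinding. (CPython won't actually optimize this, and
the slice copy is still there; just a sketch, names mine.)

```python
def minimum_acc(values, best=None):
    # Tail-recursive shape: the recursive call is the last thing done.
    if not values:
        return best
    head, tail = values[0], values[1:]   # still a copy -- slicing cost remains
    if best is None or head < best:
        best = head
    return minimum_acc(tail, best)

print(minimum_acc([3, 1, 4, 1, 5]))  # -> 1
```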

What's the point of optimizing slicing to allow you to use a poor
algorithm, instead of fixing your algorithm?

ChrisA
