# A faster way of finding historical highs/lows

Eamonn Sullivan eamonn_sullivan at blueyonder.co.uk
Fri Jun 11 13:56:30 CEST 2004

This is a basic algorithm question (from a history major who never
took an algorithms course), but I'm also looking for the fastest way to
do this in Python -- for example, making the most of the existing
library C code. I have two searches in an application that I'm trying
to speed up:

1. Find the most recent date when there was an equal or higher (or
lower) value than X. The data (stock, currency and commodity prices)
looks like:

Col A         Col B     (a few other columns of prices)
Date          Price

Right now, the working code just walks back through the list
chronologically and returns the first entry that meets the condition:

```python
for item in price_list:
    if val <= item['price']:
        return item['date'], item['price']
```

I'm wondering if anyone has any ideas about a faster way that (a)
won't unnecessarily slow down the simple cases (highest since
yesterday, or since last week), and (b) will speed up the edge cases
(highest/lowest since 1947...).
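One possibility for (1) -- a sketch only, which assumes `price_list` is ordered newest-first and that the same list is queried many times -- is to precompute the running maximum of the prices. That array is monotone non-decreasing, so the standard-library `bisect` module can find the first position where a price of at least `val` has appeared, in O(log n) per lookup:

```python
from bisect import bisect_left
from itertools import accumulate


def build_running_max(price_list):
    """Running maximum of prices, scanning from the most recent entry
    backwards in time; the result never decreases."""
    return list(accumulate((item['price'] for item in price_list), max))


def most_recent_at_least(price_list, running_max, val):
    """Most recent entry whose price is >= val, or None.

    bisect_left finds the first index where the running max reaches
    val; at that exact index the maximum has just increased, so the
    price there is itself the match.
    """
    i = bisect_left(running_max, val)
    if i == len(running_max):
        return None
    return price_list[i]['date'], price_list[i]['price']
```

The precomputation is an O(n) pass, the same cost as one linear scan, so this only pays off when you run many lookups against the same data; lows are symmetric (a running minimum is monotone non-increasing, so negate the values to keep `bisect` happy).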

2. Find the last time the price fell/rose X or more times in a row.

Same data format.

Right now, this is implemented inefficiently, something like this:

```python
streak_now = get_streak(price_list[0])
i = 1
while i < len(price_list):
    streak_then = get_streak(price_list[i])
    if streak_then >= streak_now:
        return price_list[i]['date'], streak_then
    i += 1
```

That works OK most of the time because long streaks are rare, so a
match usually turns up within a few months. But edge cases can be
extreme -- many years back -- and take a while (a minute, say) to find.

In either of these, the data can be quite long time-wise, but not
large, so it would be possible to copy and reorder it and find the
answer that way. Any ideas?
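The quadratic cost in (2) comes from re-deriving the streak at every index. A sketch of an O(n) alternative -- assuming, as a guess at `get_streak`'s meaning, a newest-first list where the streak at entry `i` counts consecutive rises ending on that date -- computes all streak lengths in one backward pass and then scans them once:

```python
def rise_streaks(price_list):
    """streaks[i] = number of consecutive rises ending at entry i,
    assuming price_list is ordered newest-first (entry i + 1 is the
    previous trading day)."""
    n = len(price_list)
    streaks = [0] * n
    for i in range(n - 2, -1, -1):  # oldest usable entry first
        if price_list[i]['price'] > price_list[i + 1]['price']:
            streaks[i] = streaks[i + 1] + 1
    return streaks


def last_equal_streak(price_list):
    """Most recent earlier date whose rising streak matched or beat
    today's, or None if there is no such date."""
    streaks = rise_streaks(price_list)
    target = streaks[0]
    for i in range(1, len(price_list)):
        if streaks[i] >= target:
            return price_list[i]['date'], streaks[i]
    return None
```

Falling streaks are the same idea with the comparison reversed. Both passes are linear, so even the many-years edge cases stay proportional to the size of the list rather than its square.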