[Tutor] Searching through large number of string items
Dinesh B Vadhia
dineshbvadhia at hotmail.com
Thu Apr 10 15:13:03 CEST 2008
The 10,000 string items are sorted.
The way the autocomplete works is that when a user enters a char, e.g. 'f', the 'f' is sent to the server, which returns strings containing the char 'f'. You can limit the number of items sent back to the browser (say, to between 15 and 100). The string items containing 'f' are displayed. The user can then enter another char, e.g. 'a', to make 'fa'. The autocomplete plugin will search the cache to find all items containing 'fa' but may need to go back to the server to collect others. And so on. Equally, the user could backspace the 'f' and enter 'k'. The 'k' will be sent to the server to find strings containing 'k', and so on.
One way to solve this is a linear search, which, as you rightly pointed out, has horrible performance (and it does!). I'll try the binary search and let you know. I'll also look at the trie structure.
An alternative is to create an in-memory SQLite database of the string items. Any thoughts on that?
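For what it's worth, a minimal sketch of that SQLite idea might look like the following (table and column names are illustrative, not from any existing code). Note that SQLite's LIKE is case-insensitive for ASCII by default, and it can only use an index for a trailing-wildcard LIKE under certain settings, so a production version would want to check the query plan:

```python
import sqlite3

# Build an in-memory database of the string items, with an index so
# prefix lookups need not scan the whole table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
conn.execute("CREATE INDEX idx_name ON items (name)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [("fab",), ("face",), ("fake",), ("kale",)])

def complete(prefix, limit=100):
    # Trailing-wildcard LIKE: matches rows whose name starts with prefix.
    rows = conn.execute(
        "SELECT name FROM items WHERE name LIKE ? || '%' "
        "ORDER BY name LIMIT ?", (prefix, limit))
    return [r[0] for r in rows]

print(complete("fa"))  # ['fab', 'face', 'fake']
```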
Dinesh
----- Original Message -----
From: Kent Johnson
To: Dinesh B Vadhia
Cc: tutor at python.org
Sent: Thursday, April 10, 2008 5:20 AM
Subject: Re: [Tutor] List comprehensions
Dinesh B Vadhia wrote:
> Kent
>
> I'm using a Javascript autocomplete plugin for an online web
> application/service. Each time a user inputs a character, the character
> is sent to the backend Python program which searches for the character
> in a list of >10,000 string items. Once it finds the character, the
> backend will return that string and N other adjacent string items where
> N can vary from 20 to 150. Each string item is sent back to the JS in
> separate print statements. Hence, the for loop.
Ok, this sounds a little closer to a real spec. What kind of search are
you doing? Do you really just search for individual characters or are
you looking for the entire string entered so far as a prefix? Is the
list of 10,000 items sorted? Can it be?
You need to look at your real problem and find an appropriate data
structure, rather than showing us what you think is the solution and
asking how to make it faster.
For example, if what you have is a sorted list of strings and you want to
find the first string that starts with a given prefix and return the N
adjacent strings, you could use the bisect module to do a binary search
rather than a linear search. Binary search of 10,000 items will take
13-14 comparisons to find the correct location. Your linear search will
take an average of 5,000 comparisons.
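A rough sketch of that bisect approach (the function name and sample data are just for illustration):

```python
import bisect

def prefix_matches(sorted_items, prefix, limit=20):
    # Binary search for the first item >= prefix, then scan forward
    # while items still start with the prefix, up to the limit.
    start = bisect.bisect_left(sorted_items, prefix)
    results = []
    for item in sorted_items[start:start + limit]:
        if not item.startswith(prefix):
            break
        results.append(item)
    return results

items = sorted(["fab", "face", "fact", "fake", "farm", "kale", "keep"])
print(prefix_matches(items, "fa"))  # ['fab', 'face', 'fact', 'fake', 'farm']
```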
You might also want to use a trie structure though I'm not sure if that
will let you find adjacent items.
http://www.cs.mcgill.ca/~cs251/OldCourses/1997/topic7/
http://jtauber.com/blog/2005/02/10/updated_python_trie_implementation/
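A bare-bones trie sketch, assuming what you want is all completions of the prefix entered so far (a depth-first walk below the prefix node yields them in sorted order, which may or may not match the "adjacent items" behaviour you need):

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix, limit=20):
        # Walk down to the node for the prefix, then collect words
        # beneath it depth-first in sorted child order.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        results = []
        def dfs(n, path):
            if len(results) >= limit:
                return
            if n.is_word:
                results.append(path)
            for ch in sorted(n.children):
                dfs(n.children[ch], path + ch)
        dfs(node, prefix)
        return results

t = Trie(["fab", "face", "fact", "fake", "kale"])
print(t.complete("fa"))  # ['fab', 'face', 'fact', 'fake']
```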
> I haven't done any profiling yet as we are still building the system but
> it seemed sensible that replacing the for loop with a built-in would
> help. Maybe not?
Not. An algorithm with poor "big O" performance should be *replaced*,
not optimized.
Kent