Enormous Input and Output Test
Duncan Booth
duncan.booth at invalid.invalid
Sun Oct 4 06:28:24 EDT 2009
n00m <n00m at narod.ru> wrote:
>
> I've given up :-)
Here's my attempt, which is about 30% faster than your original, but I've no
idea whether it would be fast enough for you.
import sys, time, os, itertools
import gc

gc.set_threshold(9999)          # collect generation 0 much less often

D = []                          # global keeps the big lists alive (see below)

def foo():
    ##sys.stdin = open('D:/1583.txt', 'rt')
    count = int(sys.stdin.readline())     # first line: number of pairs
    data = sys.stdin.read().split()       # slurp the rest in one go
    D.append(data)
    data = map(int, data)
    D.append(data)
    nextval = iter(data).next             # bound method: fast item fetch
    # each iteration pulls two values, so this multiplies consecutive pairs
    data = map(str, (nextval()*nextval() for a in xrange(count)))
    D.append(data)
    sys.stdout.write('\n'.join(data))     # one big write, no print overhead
    sys.stdout.write('\n')

start = time.time()
foo()
print >>sys.stderr, time.time() - start
os._exit(0)          # hard exit: skip deallocating everything held in D
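(To test it locally, either uncomment the sys.stdin line or redirect the
input file in from the shell, e.g. python sol.py < 1583.txt; the script
name there is just illustrative.)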
Storing the large lists in a global prevents them from being deallocated
when the function returns, which speeds up the time as I measured it. If
they are timing the whole process, then I think calling _exit() should
avoid the final cleanup time as well.
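To see what _exit() buys, here is a stripped-down sketch (not from my
timing run, purely illustrative); time the whole process externally, once
plainly and once with --fast, since the in-process timer cannot see the
shutdown phase:

import os, sys, time

start = time.time()
big = map(str, xrange(5000000))   # large list that must be torn down on exit
print >>sys.stderr, 'work done in %.2fs' % (time.time() - start)

if '--fast' in sys.argv:
    os._exit(0)    # hard exit: skips freeing `big` and interpreter cleanup
# falling off the end: normal exit deallocates `big` object by object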
Playing with the garbage collector makes it ever so slightly faster,
although disabling it entirely makes it much slower.
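If you want to reproduce that comparison, a minimal harness along these
lines should do (the 9999 threshold is the one from the code above; the
workload is made up, so your numbers will vary):

import gc, sys, time

mode = sys.argv[1] if len(sys.argv) > 1 else 'default'
if mode == 'tuned':
    gc.set_threshold(9999)   # run generation-0 collections far less often
elif mode == 'off':
    gc.disable()             # no cyclic collection at all

start = time.time()
junk = [[] for i in xrange(1000000)]   # lots of gc-tracked containers
print >>sys.stderr, mode, '%.2fs' % (time.time() - start)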
I feel there ought to be a faster alternative to xrange, but if so I
couldn't find one that clearly wins.
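The usual candidate is itertools.repeat (itertools is already imported
above), which yields a dummy value count times without building a fresh
int object each iteration; whether it actually helps in this loop is
workload-dependent and not something I've measured:

import itertools

data = iter([2, 3, 4, 5, 6, 7])
nextval = data.next
count = 3

# same trip count as xrange(count), but no per-iteration int object
result = [nextval() * nextval() for _ in itertools.repeat(None, count)]
print result   # [6, 20, 42]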
As Terry said, it's all hack tricks.