Almost Have Threads Working for HTTP Scan

Bart Nessux bart_nessux at
Thu Jul 1 22:03:45 CEST 2004

Eddie Corns wrote:
> Some observations:
> Since every worker thread is creating another thread for each host you will
> end up with 65,535 threads all active, which seems like overkill to me.
> On closer inspection it's going to be massively skewed towards thread 1, since
> it could simply empty the entire url_queue before the others get started.
> I presume the network section isn't finished since it's only actually scanning
> 255 addresses.

Yes, I'm only testing one subnet as it's quicker. It takes about an hour 
to do all 65,536 URLs.

> Wouldn't it be enough to just try and connect to ports 80,8080, (etc.), using
> just a socket?

I guess it would, but isn't that essentially what urllib is doing when it 
tries to read a URL?
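Not quite, as I understand it: a bare TCP connect only checks whether something is listening on the port, while urllib goes on to send an HTTP request and read the response, which is slower and can fail for other reasons. A minimal sketch of the socket-only probe (the function name and timeout value are my own):

```python
import socket

def port_open(ip, port=80, timeout=2.0):
    """Return True if a TCP connection to ip:port succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((ip, port))
        return True
    except OSError:          # refused, unreachable, or timed out
        return False
    finally:
        s.close()
```

The connect succeeds as soon as the peer accepts the SYN, so nothing is ever sent or read.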

> Why not use separate queues for failures and successes (not sure what the
> failures gives you anyway?)

This is a good point; I could probably just have a single queue for the 
URLs that urllib can actually read.
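Something along these lines, I suppose: workers drain a shared queue of URLs and only the readable ones go into a results queue. This is just a rough sketch with names I've made up (`url_queue`, `good_queue`), written with the modern `queue` and `urllib.request` module names:

```python
import queue
import threading
import urllib.request

url_queue = queue.Queue()    # URLs to try
good_queue = queue.Queue()   # only URLs that could actually be read

def worker():
    """Drain url_queue; keep only the URLs urllib can open."""
    while True:
        try:
            url = url_queue.get_nowait()
        except queue.Empty:
            return           # nothing left to do
        try:
            urllib.request.urlopen(url, timeout=2).close()
            good_queue.put(url)
        except OSError:
            pass             # unreadable URLs are simply dropped
        finally:
            url_queue.task_done()
```

Failures just disappear, so there is no second queue to manage.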

> How about a simpler approach of creating 256 threads (one per subnet) each of
> which scans its own subnet sequentially.
> def test_http (subnet):
>     for i in range(256):
>         ip = '192.168.%d.%d' % (subnet, i)
>         x = probe(ip)    # returns one of 'timeout','ok','fail'
>         Qs[x].put(ip)
> Qs = {'timeout':Queue.Queue(),'ok':Queue.Queue(),'fail':Queue.Queue()}
> for subnet in range(256):
>     workers.append (.. target=test_http, args=(subnet,) )

I like the idea of having one thread per subnet, but I don't really 
understand your example. Sorry, I'm having trouble with threads; can't 
get my head around them just yet. I'll probably stick with my earlier 
script (I can use netstat to watch it make the SYN requests) so I know 
that it's working. I just can't figure out how to make it write out the 
results.
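For the writing-out part, one simple option (once all the worker threads have been joined) is to drain the results queue into a file; a rough sketch, with the function name and one-URL-per-line format my own choice:

```python
import queue

def dump_queue(q, path):
    """Drain a queue.Queue of strings and write one entry per line."""
    with open(path, 'w') as out:
        while True:
            try:
                out.write(q.get_nowait() + '\n')
            except queue.Empty:
                break
```

`get_nowait` is safe here only because the workers are finished, so nothing is still putting items in.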

Thanks for the tips,


More information about the Python-list mailing list