If you can authenticate the submitter, e.g. via password, then isn't that a deterrent against purposely malicious code? After all, you'll have a copy of the source linked to its author (you could archive the source to a secure place before running it, in case the program tries to erase itself).

If it's a programming class, just running code against test data is probably insufficient feedback anyway. You could at least eyeball the code and offer feedback on such things as the presence of coherent comments, at which point you could also look for weird, obfuscatory syntax. But maybe that's not realistic in your context.

Another approach would be to set up a plain-vanilla quarantined box with its own web server and have student code run there, with a way to restore its state completely from backup media in case of a meltdown, i.e. keep potentially toxic code confined to a machine designated for running such stuff.

Kirby
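P.S. Even on a quarantined box, a small wrapper that caps runtime and memory limits the damage a runaway submission can do. This is only a sketch under assumptions (POSIX, student code is a Python script, the name `run_untrusted` and the specific limits are made up for illustration); it is not a substitute for the dedicated machine described above.

```python
import resource
import subprocess

def run_untrusted(path, timeout=5):
    """Run a student script with CPU-time, memory, and wall-clock caps.

    A sketch only: real isolation still needs a quarantined machine,
    since resource limits don't stop file or network access.
    """
    def limit():
        # Cap CPU seconds and address space (here 256 MB) for the child.
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

    try:
        return subprocess.run(
            ["python3", path],
            capture_output=True, text=True,
            timeout=timeout,       # wall-clock limit
            preexec_fn=limit,      # applied in the child before exec
        )
    except subprocess.TimeoutExpired:
        return None  # treat a hang as a failed submission
```

You'd call it on each submitted file and grade from the captured stdout/stderr, restoring the box from backup if anything slips past the limits.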