[SciPy-User] peer review of scientific software
Nathaniel Smith
njs at pobox.com
Tue May 28 16:14:11 EDT 2013
On Tue, May 28, 2013 at 9:00 PM, Matthew Brett <matthew.brett at gmail.com> wrote:
> The question that always comes up is - why? Most scientists trained
> in the ad-hoc get-it-to-work model have a rather deep belief that this
> model is more or less OK, and that doing all that version-control,
> testing stuff is for programming types with lots of time on their
> hands. If we want to persuade those guys and gals, we have to come up
> with something pretty compelling, and I don't think we have that yet.
> I would love to see some really good data to show that we'd proceed
> faster as scientists with more organized coding practice,
I always make newbies read this:
http://boscoh.com/protein/a-sign-a-flipped-structure-and-a-scientific-flameout-of-epic-proportions.html
(Short version: someone's career was basically destroyed by a sign
error in code written by some random person in the next lab.)
Then I show them the magic of 'if __name__ == "__main__": import nose;
nose.runmodule()'.
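(A minimal sketch of that self-testing-module pattern, using the
stdlib's unittest in place of nose since nose is no longer maintained;
the function and tests here are made-up illustrations, not from any
real project:)

```python
import unittest


def rotation_sign(angle_deg):
    """Toy function: +1 for counter-clockwise angles, -1 otherwise."""
    return 1 if angle_deg > 0 else -1


class TestRotationSign(unittest.TestCase):
    def test_positive_angle_is_ccw(self):
        self.assertEqual(rotation_sign(30), 1)

    def test_negative_angle_is_cw(self):
        # Exactly the kind of sign-convention bug a quick test catches
        # before it reaches a published structure.
        self.assertEqual(rotation_sign(-30), -1)


if __name__ == "__main__":
    # Running the file directly runs its own tests -- same idea as
    # 'import nose; nose.runmodule()'. exit=False keeps the interpreter
    # alive after the test run instead of calling sys.exit().
    unittest.main(exit=False)
```

The point is that the test hook lives in the same file as the science
code, so running the module at all exercises the tests.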
Maybe it sinks in sometimes...
I agree that while it's all very noble to talk about the benefit to
science as a whole and the long-term benefits of eating your
vegetables and blah blah blah, the actually compelling reason to use
VCS and test everything is that it always turns out to pay back the
investment in, like, tens of minutes.
Telling people about the benefits of regular exercise never works either.
-n