Question about Source Control
rosuav at gmail.com
Tue Mar 18 07:03:27 CET 2014
On Tue, Mar 18, 2014 at 4:39 PM, Frank Millman <frank at chagford.com> wrote:
> Two quick questions -
> 1. At present the source code is kept on one machine (A), but only accessed
> from the two other machines (B and C).
> Does it make sense to create the central repository on A, but *not* install
> the SCM on A? Install separate copies of the SCM on B and C, and allow them
> both to set up their own clones. I only develop on B, so only B would
> 'push', but both B and C would 'pull' to get the latest version.
I don't know about Mercurial, but with git, installing the software on
A lets it work more efficiently (otherwise it has to do all the work
remotely, ergo unnecessary traffic). An advantage of free software is
that you don't have to check license agreements - just go ahead,
install it everywhere. But if for some reason that would be a problem,
you can look into running it over basic SSH or something.
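Concretely, for question 1, that might look like this with git - a sketch assuming A is reachable over SSH and you keep the repo at /srv/repos/project.git (both names are placeholders):

```shell
# On A (or from B over SSH): create a bare central repository.
# No working copy there; nothing needs to run on A beyond sshd and git.
git init --bare /srv/repos/project.git

# On B (the development machine): clone once, then work and push.
git clone ssh://A/srv/repos/project.git project
cd project
# ... edit, commit ...
git push origin HEAD

# On C (the test machine): clone once, then pull to pick up changes.
git clone ssh://A/srv/repos/project.git project
cd project
git pull
```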
> 2. Being a typical lazy programmer, I frequently go through the following
> cycle. I work on a section of code by editing it on B. Then I test it by
> running it on C. Instead of meticulously checking my code I let python find
> the errors, so I run it on C, it crashes with a traceback, I fix the error
> on B and rerun it on C until it is working to my satisfaction.
> It seems that I will have to change my approach. Edit on B, 'push' on B,
> 'pull' on C, run from C. It sounds more cumbersome, but maybe that is the
> price I have to pay.
> Have I got those two right?
That would be the simplest to set up. But here are two alternatives:
1) My current setup for developing Gypsum involves development on
Sikorsky, on Linux, and everything gets tested there. Then every once
in a while, I pull changes to Traal, and test on Windows. If there's a
problem, that's a separate commit fixing a separate issue ("Implement
pause key handling" / "Fix pause key handling on Windows"). That works
fairly well when you can do >90% of your testing on your development
machine.
2) At work, we had a system for rapid development across two machines,
pretty much how you're describing. To make that work, I wrote a
three-part rapid send/receive system: a daemon that runs on the dev
system, a client that runs on the test system and connects to the
daemon, and a triggering notification that runs on the dev and tells
the daemon to do its work. (That could be done with a process signal,
but I wanted to send it some parameters.) When the daemon gets
notified to send its stuff across, it writes out the full content of
all changed files (mangled somewhat because my boss was paranoid -
well, as far as I know he's still utterly paranoid, but he's not my
boss any more) to the socket connection, and the receiver plops them
onto the disk and SIGHUPs the appropriate processes to tell them to
reload.
The second option takes some setting up, though I'd be happy to help
out with the code. But it's really easy to use. You shoot stuff across
to it and off it all goes. The way I described above, it's quite happy
to have multiple simultaneous clients, and it's happy for those
clients to be behind NAT - so you can run half a dozen VMs with
different configurations, and have them all get the code together. And
you can put the trigger into a makefile to be run at the end of some
other tasks, or have it do some sanity checking, or whatever you like.
Very flexible and powerful.