[pypy-dev] Mercurial workflow
p.giarrusso at gmail.com
Wed Dec 15 02:09:36 CET 2010
As a long-time DVCS (Git/Mercurial) user, here are some comments -
just my two cents.
On Tue, Dec 14, 2010 at 18:04, Jacob Hallén <jacob at openend.se> wrote:
> Hi folks,
> In Mercurial, you do all the important operations in your local repository. If
> you have a central node (which is not strictly necessary), you just push ready
> made change sets to it. Any conflict resolving has already been done on your
> local machine.
> This means that it is very convenient to have at least 2 repositories locally
> (this takes hardly any extra space, since unchanged files are hard linked to be
> the same file in each repository).
If you use hg clone, only the files under .hg are hard linked - and I
guess they stay that way only until a commit touches them, at which
point the _whole_ previous content is duplicated.
If you want that also for the checked-out copy, you need to use cp
-al yourself - it's not done by default - and you need editors that
handle hard links properly, like Vim/Emacs, set up correctly. E.g.
for Vim, you need to set backupcopy=yes,breakhardlink (plus any other
settings you use) or backupcopy=no; otherwise any edit will affect
_both_ repos (if you actually use Vim, read :help 'backupcopy' to
understand your options).
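To make the pitfall concrete, here is a small shell sketch (plain GNU
coreutils, no hg involved; all paths and file names are made up). An
in-place write shows up in both hard-linked trees, while a
write-to-temp-and-rename - which is roughly what backupcopy=no or
breakhardlink gets you - detaches the file first:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

mkdir repo1
echo "v1" > repo1/file.py
cp -al repo1 repo2               # second tree, hard-linked file by file
stat -c %h repo1/file.py         # prints 2: both trees share one inode

# In-place write (what a naive editor does): the edit leaks into repo2
echo "edited in repo1" >> repo1/file.py
cat repo2/file.py                # shows the repo1 edit too

# Rename-style write (backupcopy=no / breakhardlink): link broken first
printf 'v2\n' > repo1/file.py.tmp
mv repo1/file.py.tmp repo1/file.py
stat -c %h repo1/file.py         # prints 1: trees are independent again
cat repo2/file.py                # repo2 keeps the old (leaked) content
```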
patch(1) is also fine, as are many of the tools used by Linux kernel hackers.
I doubt many other editors give you control over such options. If
you used a Java-based editor (say jEdit), it would even be challenging
to implement such a feature.
> You keep one repository as a staging area to which you pull changes from the
> central server (or other peoples repositories) and to which you pull changes
> from your other local repositories. It is here you resolve any merge conflicts
> and it is from here you do pushes of new versions to the central pypy server
> on bitbucket.
One could keep two local heads (or something similar) and switch
between them - though if you need to recompile after merging and
before pushing, things are of course different.
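For illustration, a possible layout with one staging clone and one
cheap clone per idea might look like this (the URL and directory
names are made up):

```
$ hg clone https://bitbucket.org/pypy/pypy staging   # integration copy
$ hg clone staging idea-x                            # cheap, hard-linked clone
$ cd idea-x
  ... hack, hg commit, repeat ...
$ cd ../staging
$ hg pull ../idea-x          # bring the finished work in
$ hg pull && hg merge        # sync with bitbucket, resolve conflicts here
$ hg commit -m 'merge'
$ hg push                    # only the staging clone pushes upstream
```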
And you might want a bitbucket fork for work you're not pushing yet,
so patches can be reviewed before merging.
> For every idea you have, you clone your repository and do the work for the
> idea in the clone. If you need to collaborate with someone else you
> synchronize your repo with their repo without going through the central
If "your repo" means a Bitbucket fork (or any sort of publicly
accessible clone, but Bitbucket has some goodies), that's fine and
good. Otherwise, it might make it harder for people to have a casual
look at what you're working on - they'd need to be motivated enough
to ask you to share it.
> When you work on an idea you should keep in mind that it is an extremely fast
> operation to commit your work. You can do it for a number of various reasons -
> whenever you have made a set of changes that you feel belong together, if your
> machine is flakey and you want to have a backup copy somewhere else or if you
> want to continue the work on a different machine. I sometimes program in bed,
> and it is very simple to do a pull from my workstation to my laptop.
> The new workflow will make the detailed changes less visible to others, since
> you commit work to the central server in fewer and bigger chunks. This means
> that there will be fewer "oops, I misspelled this" and more of a higher level
> view of what is being developed.
Actually, the ease of doing local commits might lead to many smaller
ones. If you use hg mq and push only finished patches, or you
extensively rewrite commits, then what you say is true.
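For the mq variant, a sketch of the cycle (the patch name is made up):

```
$ hg qnew idea-x.patch     # start a new patch on the queue
  ... edit files ...
$ hg qrefresh              # fold the current changes into the patch
  ... repeat edit/qrefresh until the patch is clean ...
$ hg qfinish -a            # turn applied patches into regular changesets
$ hg push                  # only the polished result becomes public
```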
But there are some disadvantages because history is lost, and the use
of git/hg rebase is not uncontroversial. See below for one
high-profile opinion against git rebase (I don't think the git vs hg
difference has any significance here):
The bottom line is that there are several possible workflows; you
will probably need to discuss which one you want to use, and set some
Paolo Giarrusso - Ph.D. Student