tjreedy at udel.edu
Mon Jun 10 18:20:25 CEST 2002
To paraphrase Magnus Lie Hetland, the general constraint-propagation
system constraint-0.2 takes 100 times as long to do the 8-queens
problem as the specific backtracking queens.py found in the standard
Python distribution: 15 versus .1 sec on comparable machines.
This is roughly the same ratio, for some computations, as between
generic/dynamic Python and type-specific C. In both cases, there are
two more questions of practical interest: which platform/method is
faster for the *programmer*? And is the slow method fast enough?
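For reference, the backtracking approach can be sketched in a few
lines. This is only an illustrative sketch in the spirit of the
distribution's queens.py, not that actual file:

```python
# Minimal backtracking N-queens: place one queen per row, trying each
# column, and recurse only on placements that conflict with no earlier
# queen (same column or same diagonal).

def queens(n, row=0, cols=()):
    """Yield solutions as tuples: cols[i] is the column of row i's queen."""
    if row == n:
        yield cols
        return
    for col in range(n):
        # Safe if no earlier queen shares this column or a diagonal.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            yield from queens(n, row + 1, cols + (col,))

solutions = list(queens(8))
print(len(solutions))  # the 8-queens problem has 92 solutions
```

The conflict test prunes whole subtrees as soon as a partial placement
fails, which is what makes the hand-written special case so fast.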
So, what is the relative programming speed of the general constraint
propagation system versus specific backtracking? I suspect that some
people could whip up something like queens.py as fast as or faster
than they could write the (unpublished?) constraint input for the same
problem.
However, I also suspect the reverse would be true for the more
realistic and useful scheduling example that Fayolle used to
illustrate and motivate constraint-0.2.
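Since the constraint input for queens is unpublished, here is a
hypothetical declarative-style formulation to make the comparison
concrete. The toy solver below is plain generate-and-test over
permutations, not constraint-0.2's propagation engine, and all names
are my own:

```python
# Hypothetical declarative 8-queens: state each pairwise diagonal
# restriction as a separate constraint predicate and hand the list to
# a generic (and naive) solver.
from itertools import permutations

def solve(n, constraints):
    """Return the first column assignment satisfying every constraint."""
    for cols in permutations(range(n)):  # distinct columns built in
        if all(check(cols) for check in constraints):
            return cols
    return None

n = 8
# One declarative constraint per pair of rows: queens i and j must not
# share a diagonal.
constraints = [
    (lambda cols, i=i, j=j: abs(cols[i] - cols[j]) != j - i)
    for i in range(n) for j in range(i + 1, n)
]
print(solve(n, constraints))
```

The point of the comparison: the constraint version states *what* must
hold and leaves *how* to the solver, which is where a general system
earns its programmer-speed advantage on messier problems like
scheduling.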
Real world problems may have no solution as given. On the other hand,
constraints may not be absolute. In the scheduling example, two
workshops *can* be scheduled simultaneously, if necessary, even though
many people want to take both. So a stepwise procedure may be needed.
(Given multiple solutions, one might also want to add 'wishes' as
pseudoconstraints to help select just one.) So, is it equally easy,
with the two approaches, to add or drop constraints and rerun? This
would be fairly simple with c-0.2 input but probably harder for
backtracking code unless planned for from the beginning. Is it
equally easy to give constraints priorities and have the system
automatically either add them in order from the highest or drop them
from the lowest? Constraint-0.2 evaluates constraints in the order
listed, but I do not know if it properly backtracks from failure to
report the most satisfying solutions.
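The stepwise idea can be sketched directly. The following is my own
illustration of priority-based relaxation, not a feature of
constraint-0.2: rank the constraints, then drop them from the lowest
priority upward and rerun until some assignment satisfies the rest.

```python
# Hypothetical priority-based constraint relaxation. The problem, the
# solver, and all names are illustrative; the solver is naive
# generate-and-test over the variables' domains.
from itertools import product

def solve(domains, constraints):
    """Return the first assignment (dict) satisfying every constraint."""
    names = list(domains)
    for values in product(*domains.values()):
        assignment = dict(zip(names, values))
        if all(check(assignment) for _, check in constraints):
            return assignment
    return None

def relax(domains, prioritized):
    """Drop constraints from the lowest priority up until satisfiable."""
    active = sorted(prioritized, key=lambda pc: pc[0], reverse=True)
    while active:
        solution = solve(domains, active)
        if solution is not None:
            return solution, [p for p, _ in active]  # priorities kept
        active.pop()  # discard the current lowest-priority constraint
    return None, []

# Toy schedule: two workshops each get one of two time slots.
domains = {"ws1": [1, 2], "ws2": [1, 2]}
prioritized = [
    (2, lambda a: a["ws1"] != a["ws2"]),  # prefer no overlap
    (1, lambda a: a["ws1"] == a["ws2"]),  # a contradictory 'wish'
]
solution, kept = relax(domains, prioritized)
```

Here both constraints together are contradictory, so the
lowest-priority 'wish' is dropped and the remaining constraint is
satisfied. With a backtracking program, this kind of rerun-with-fewer-
constraints loop has to be planned for from the start.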
Terry J. Reedy