Friday, April 8, 2011

Yes we can … optimize!

Recently, I heard from someone (whom I was trying to convince that CP is cool): "yes, but you know, most of the time we have an objective function…". This person has a very strong MIP background, and it's not the first time I've heard this belief. Needless to say, it is wrong: no, CP is not only for satisfaction problems.

CP usually solves optimization problems with branch and bound (B&B), optimizing a single objective variable that has its own domain. In short, this is how it works:

When a solution is found, a constraint is dynamically added (without restarting the search) so that the next solution found is strictly better. The last solution found is, by construction, optimal (correct me if I'm wrong, but I believe this adaptation of B&B to CP was introduced by Pascal Van Hentenryck).
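To make this concrete, here is a minimal sketch (my own toy code on an invented problem, not tied to any actual CP solver): a depth-first search over the domains where each new solution dynamically tightens a "strictly better" bound, without restarting the search.

```python
# Toy CP-style branch & bound (hypothetical sketch, not from the post):
# every time a solution is found, the incumbent bound is tightened so
# that the next accepted solution must be strictly better.

def branch_and_bound(domains, feasible, objective):
    order = list(domains)            # static variable ordering
    best_val, best_sol = None, None

    def dfs(assignment, i):
        nonlocal best_val, best_sol
        if i == len(order):
            if feasible(assignment):
                obj = objective(assignment)
                # the dynamically added "strictly better" constraint
                if best_val is None or obj < best_val:
                    best_val, best_sol = obj, dict(assignment)
            return
        var = order[i]
        for val in domains[var]:
            assignment[var] = val
            dfs(assignment, i + 1)
        del assignment[var]

    dfs({}, 0)
    return best_sol, best_val

# Toy problem: minimize x + y subject to x * y >= 6, domains {1..5}.
sol, val = branch_and_bound(
    {"x": range(1, 6), "y": range(1, 6)},
    feasible=lambda a: a["x"] * a["y"] >= 6,
    objective=lambda a: a["x"] + a["y"],
)
print(sol, val)   # prints {'x': 2, 'y': 3} 5
```

The search never restarts: the incumbent value simply acts as an extra constraint on every solution found later.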

It is also completely false to say that CP doesn't use lower bounds. Usually the objective function/variable is linked to one (or several) global constraints computing very good lower bounds on it (see for instance Jean-Charles Régin's paper "Arc consistency for global cardinality constraints with costs"). With this kind of constraint you can solve TSP problems with CP. Here is a small demo on Google Maps I made some time ago; it is a bit slow due to the time needed to collect the distance matrix.
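As a crude stand-in for what such a cost-based global constraint does (this is my own toy sketch, nothing like the actual cost-GCC propagator), the same search can compute a lower bound on every partial assignment; assuming the objective is the sum of the variables, a valid bound is the assigned sum plus the domain minima of the unassigned variables, and any branch whose bound cannot beat the incumbent is pruned.

```python
# Hypothetical sketch: B&B with lower-bound pruning, assuming the
# objective is the sum of the variables.  A partial assignment is
# bounded below by (assigned sum) + (sum of domain minima of the
# still-unassigned variables).

def bb_with_lower_bound(domains, feasible):
    order = list(domains)
    mins = {v: min(domains[v]) for v in order}
    best_val, best_sol, nodes = None, None, 0

    def dfs(assignment, i, partial_sum):
        nonlocal best_val, best_sol, nodes
        nodes += 1
        # optimistic completion of the objective
        lb = partial_sum + sum(mins[order[j]] for j in range(i, len(order)))
        if best_val is not None and lb >= best_val:
            return                   # bound-based pruning
        if i == len(order):
            if feasible(assignment):
                best_val, best_sol = partial_sum, dict(assignment)
            return
        var = order[i]
        for val in domains[var]:
            assignment[var] = val
            dfs(assignment, i + 1, partial_sum + val)
        del assignment[var]

    dfs({}, 0, 0)
    return best_sol, best_val, nodes

# Same toy problem as before: minimize x + y subject to x * y >= 6.
sol, val, nodes = bb_with_lower_bound(
    {"x": range(1, 6), "y": range(1, 6)},
    feasible=lambda a: a["x"] * a["y"] >= 6,
)
print(sol, val, nodes)
```

A real propagator like the cost-GCC computes much tighter bounds than these naive domain minima, which is exactly what makes CP competitive on problems like the TSP.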

3 comments:

  1. Sure it works, thanks to tight bounds that may be delivered if you are lucky (or smart) enough to have this objective variable involved in such a global constraint delivering tight bounds.

    One might be interested in having a search _directed towards the improvement_ of the solution, as done in local search, where you sense the neighbourhood and move to the next "best" solution among the neighbours. This is also why the simplex method is so good: it follows a path towards improvement.

    Intuitively, CP will follow its own path, described by the distribution procedure, and might be unaware of interesting neighbours that are just one "perturbation" away, since reaching them would require rebuilding the solution from the choice point that would allow the search to implement this perturbation.

    One might be interested, for instance, in a composite search using CP to find a solution, then exploring the neighbourhood for a local optimum, and using this improved optimum as the new bound to continue the search in a B&B fashion. Is there any chance that it would make the best of both worlds?
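A toy sketch of this composite idea (hypothetical example on an invented problem, not real CP or local-search code): take the first feasible solution, improve it by hill climbing over single-variable moves, and use the improved objective value as the initial B&B bound.

```python
from itertools import product

def objective(a):                 # toy objective: minimize the sum
    return a["x"] + a["y"]

def feasible(a):                  # toy constraint: x * y >= 6
    return a["x"] * a["y"] >= 6

# descending domains so the first feasible solution is a poor one
domains = {"x": list(range(5, 0, -1)), "y": list(range(5, 0, -1))}

def first_solution():
    # stand-in for a CP satisfaction search: first feasible assignment
    for x, y in product(domains["x"], domains["y"]):
        if feasible({"x": x, "y": y}):
            return {"x": x, "y": y}

def hill_climb(sol):
    # first-improvement local search over single-variable moves
    improved = True
    while improved:
        improved = False
        for var in sol:
            for val in domains[var]:
                cand = dict(sol, **{var: val})
                if feasible(cand) and objective(cand) < objective(sol):
                    sol, improved = cand, True
    return sol

start = first_solution()
better = hill_climb(start)
print(start, better)   # prints {'x': 5, 'y': 5} {'x': 2, 'y': 3}
```

The value `objective(better)` would then seed the B&B as its initial "strictly better" bound, so the exhaustive search only has to prove optimality rather than rediscover a good solution.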

  2. Hello Renaud,

    Thank you for your comment. When I was still at Dynadec, I built an example in the Comet user manual doing exactly what you suggest, for the quadratic assignment problem. If you have the Comet user manual, just have a look at the section "Speeding Up Branch and Bound with CBLS". I agree with you that this is a very good approach when you need to prove optimality.

    The problem is that it is not always easy to design good incremental moves. That's why I usually prefer LNS (when proof of optimality is not important).
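For illustration, a minimal LNS loop on an invented toy problem (my own sketch, not the Comet example): keep an incumbent, repeatedly relax a small random fragment of it, re-optimize only that fragment by brute force, and accept strict improvements.

```python
import random
from itertools import product

# Hypothetical LNS toy: minimize the sum of 6 variables in {1..6}
# subject to their product being at least 500.

random.seed(0)
n = 6
domains = [list(range(1, 7))] * n

def feasible(sol):
    p = 1
    for v in sol:
        p *= v
    return p >= 500

def objective(sol):
    return sum(sol)

incumbent = [6] * n                          # any feasible start
for _ in range(200):                         # LNS iterations
    relaxed = random.sample(range(n), 2)     # fragment to re-optimize
    best = list(incumbent)
    # brute-force re-optimization of the relaxed fragment only,
    # all other variables stay fixed at their incumbent values
    for vals in product(*(domains[i] for i in relaxed)):
        cand = list(incumbent)
        for i, v in zip(relaxed, vals):
            cand[i] = v
        if feasible(cand) and objective(cand) < objective(best):
            best = cand
    incumbent = best

print(incumbent, objective(incumbent))
```

The appeal over hand-designed incremental moves is that the "move" here is just re-solving a small subproblem, which any complete solver can do; only the fragment-selection heuristic needs design effort.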

  3. I can understand how you can introduce a "strictly better" constraint to integrate an integer-valued objective function. However, I wonder how you can do the same for continuous objective functions, since "strictly better" has no meaning in this context...