lundi 4 mars 2013

A new thesis on global constraints for scheduling

One of the best theses on scheduling with CP is the one of Petr Vilim, available here, but this new one, entitled Cumulative scheduling with overloads: global constraints and decompositions by Alexis De Clercq, is definitely a thesis I have to read soon (unfortunately, theses in France are written in French, but you can still read the associated papers). Here is the abstract of the thesis:

Constraint programming is an interesting approach to solve scheduling problems. In cumulative scheduling, activities are defined by their starting date, their duration and the amount of resource necessary for their execution. The total available resource at each point in time (the capacity) is fixed. In constraint programming, the CUMULATIVE global constraint models this problem.
In several practical cases, the deadline of the project is fixed and cannot be delayed. In this case, it is not always possible to find a schedule that does not lead to an overload of the resource capacity. It can then be acceptable to relax the capacity constraint, within a reasonable limit, to obtain a solution.
We propose a new global constraint: the SOFTCUMULATIVE constraint, which extends the CUMULATIVE constraint to handle these overloads. We illustrate its modeling power on several practical problems, and we present various filtering algorithms. In particular, we adapt the sweep and Edge-Finding algorithms to the SOFTCUMULATIVE constraint. We also show that some practical problems require imposing overloads to satisfy business rules, modelled by additional constraints. We present an original filtering procedure to deal with these imposed overloads. We complete our study with an approach based on decomposition. Finally, we test and validate our different resolution techniques through a series of experiments.
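To make the abstract concrete, here is a minimal plain-Scala sketch (not OscaR code; the Activity class and all function names are mine) of the time-point view of CUMULATIVE, and of the overload cost that a soft version like SOFTCUMULATIVE reasons about: overloads are tolerated but measured, so they can be bounded or minimized.

```scala
// An activity with a fixed start, duration and resource consumption (height)
case class Activity(start: Int, duration: Int, height: Int)

// Total resource consumption at time t (sum of heights of running activities)
def usageAt(acts: Seq[Activity], t: Int): Int =
  acts.filter(a => a.start <= t && t < a.start + a.duration).map(_.height).sum

// Hard CUMULATIVE check: at every time point, usage must not exceed capacity
def cumulativeOk(acts: Seq[Activity], capacity: Int, horizon: Int): Boolean =
  (0 until horizon).forall(t => usageAt(acts, t) <= capacity)

// Soft view: sum over time of how much the usage exceeds the capacity;
// a soft constraint bounds or minimizes this quantity instead of forbidding it
def totalOverload(acts: Seq[Activity], capacity: Int, horizon: Int): Int =
  (0 until horizon).map(t => math.max(0, usageAt(acts, t) - capacity)).sum
```

This is only a solution-level check; the thesis is about filtering algorithms (sweep, Edge-Finding) that prune domains before a full schedule is known.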

jeudi 24 janvier 2013

Worm sudoku: an application for alldiff with precedences

If you want to give your students CP exercises slightly more difficult than basic sudoku, you can try this variant: Worm Sudoku (it comes from the book Taking Sudoku Seriously).

Fill in the grid so that every row, column, and block contains 1-9 exactly once. In addition, each worm must contain entries that increase from tail to head. For blue worms, you must figure out for yourself which end is the head.

Now I have a motivation to implement alldiff with precedences ;-):

The AllDifferent Constraint with Precedences, Christian Bessiere, Nina Narodytska, Claude-Guy Quimper and Toby Walsh (CPAIOR 2011).

Note, though, that the alldiff with precedences can only be used for worms that lie entirely within a row, a column or a block.
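For such worms, the constraint boils down to the classic decomposition that the paper's global propagator improves on: AllDifferent over the line plus a chain of strict inequalities along the worm. A minimal plain-Scala sketch at the checking level (the function name and encoding are mine; a real propagator filters much more than this):

```scala
// Decomposition of "alldiff with precedences" as a solution-level check:
// all values distinct, and xs(i) < xs(j) for every precedence (i, j).
// For a worm, the precedences chain consecutive cells from tail to head.
def allDiffWithPrecedences(xs: IndexedSeq[Int], precedences: Seq[(Int, Int)]): Boolean =
  xs.distinct.size == xs.size &&
    precedences.forall { case (i, j) => xs(i) < xs(j) }
```

In a CP solver, posting AllDifferent and the inequalities separately gives weaker pruning than the global constraint, which is exactly the paper's point.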

dimanche 20 janvier 2013

Heavy tails in CP: The magic of restarts

One of the theoretical works with the greatest practical impact on CP solving is the one on the heavy-tail phenomenon:

Gomes, Carla P., Bart Selman, and Henry Kautz. "Boosting combinatorial search through randomization." Proceedings of AAAI, 1998.

In essence this work says:

“There is a non-negligible probability of hitting a problem that required exponentially more time to solve than any that had been encountered before.”

I ran some experiments with OscaR on the Sport Scheduling problem used in the paper:
  • N (even) teams
  • Every two teams play each other exactly once
  • The season lasts N-1 weeks
  • Every team plays one game in each week of the season
  • There are N/2 periods and, each week, every period is scheduled for one game
  • No team plays more than twice in the same period over the course of the season

Here is a solution for 12 teams (N=12):


To solve this problem I use a first-fail heuristic for the variable choice (the uninstantiated variable with the smallest domain size), with ties broken randomly. The value to assign is also chosen randomly.
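The selection rule can be sketched in plain Scala (this is only an illustration, not OscaR's actual implementation; domains are modeled as bare Set[Int] and all names are mine):

```scala
import scala.util.Random

// First-fail with random tie-breaking: among the uninstantiated variables
// (domain size > 1), pick one with the smallest domain, breaking ties
// uniformly at random. Returns None when every variable is bound.
def selectVar(domains: IndexedSeq[Set[Int]], rand: Random): Option[Int] = {
  val unbound = domains.indices.filter(i => domains(i).size > 1)
  if (unbound.isEmpty) None
  else {
    val minSize = unbound.map(i => domains(i).size).min
    val ties = unbound.filter(i => domains(i).size == minSize)
    Some(ties(rand.nextInt(ties.size)))
  }
}

// Value choice: a value drawn uniformly at random from the domain
def selectVal(dom: Set[Int], rand: Random): Int = {
  val values = dom.toIndexedSeq
  values(rand.nextInt(values.size))
}
```

The random components are what make each run different, which is exactly what the restart strategy below exploits.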
Here is the OscaR model:
    val n = 12
    val nbPeriods = n / 2
    val Periods = 0 until nbPeriods
    val nbTeams = n
    val Teams = 0 until nbTeams
    val nbWeeks = n - 1
    val Weeks = 0 until nbWeeks
    val Homes = 0 to 1 // 0/1 for home/away game

    val cp = CPSolver()

    val team = Array.tabulate(nbPeriods, nbWeeks, 2)((p, w, h) => CPVarInt(cp, 0 until nbTeams))
    val game = Array.tabulate(nbPeriods, nbWeeks)((p, w) => CPVarInt(cp, 0 until (n * n - 1)))
    val tuples = for (i <- Teams; j <- i + 1 until nbTeams) yield (i, j, i * nbWeeks + j - 1)
    var solFound = false // will be set to true when a solution is found
    cp.solve subjectTo {
      // make the link between the team and game variables
      for (p <- Periods; w <- Weeks) {
        cp.add(table(team(p)(w)(0), team(p)(w)(1), game(p)(w), tuples))
      }
      // a team plays exactly one game per week
      for (w <- Weeks) {
        val teamw = for (p <- Periods; h <- Homes) yield team(p)(w)(h)
        cp.add(allDifferent(teamw), Strong)
      }
      // every team plays against every other team
      cp.add(allDifferent(game.flatten), Strong)
      // a team can play at most twice in the same period
      for (p <- Periods)
        cp.add(gcc(team(p).flatten, Teams, 1, 2), Strong)
    } exploration {
      cp.binary(game.flatten, _.size, _.randomValue) // our randomized heuristic
      solFound = true
    } // let's start looking for a solution
I executed 10 runs, stopping each as soon as a solution was found:

As you can see, 2 runs took significantly more time than the others.
To avoid this problem, Gomes et al. suggest cutting off the search (with a carefully chosen limit) and restarting it from scratch (don’t forget that the search is randomized, so each run is different) until a solution is found.
From previous experiments, I concluded that there is a reasonably high chance of solving this problem within a limit of 5000 backtracks. Here is the modified OscaR code that restarts the search until a solution is found (this replaces the previous call to run(1)):
var restart = 0
do {
  cp.run(1, 5000) // search for 1 solution with a 5000-backtrack limit
  restart += 1
} while (!solFound)
println("#restart: " + restart)

Here are the results, also giving the number of restarts before reaching a solution:

It looks like the few very long runs have disappeared. Much more robust, don’t you think? This is the magic of restarts: simple but powerful!
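To see why a cutoff bounds the damage of heavy tails, here is a toy plain-Scala model of the restart policy (the encoding and names are mine): each entry of runCosts is the number of backtracks a fresh randomized run would need; any run exceeding the cutoff is aborted after exactly cutoff backtracks and a new run starts.

```scala
// Total work and number of restarts under a cutoff-and-restart policy.
// Aborted runs each cost exactly `cutoff`; the first run whose cost is
// within the cutoff solves the problem. If every sampled run exceeds the
// cutoff, the pair reflects only the wasted work so far.
def restartWork(runCosts: Seq[Int], cutoff: Int): (Int, Int) = {
  val aborted = runCosts.takeWhile(_ > cutoff)               // runs cut off at the limit
  val solving = runCosts.drop(aborted.size).headOption.getOrElse(0)
  (aborted.size * cutoff + solving, aborted.size)            // (total backtracks, #restarts)
}
```

For example, a heavy-tailed run that would have needed 100000 backtracks costs only 5000 wasted backtracks under a 5000-backtrack cutoff before a luckier run takes over: the tail of the runtime distribution is clipped at the cutoff.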

mercredi 16 janvier 2013

CP2013 in Sweden

CP (the oldest and most famous conference in the field) will take place in Sweden this year.
This doesn’t surprise me, since there are very good research teams working on CP there (with some Belgians as well ;-)): the ASTRA group, KTH and SICS.
Sweden also has a very good CP blogger, a.k.a. Hakank (I hope the conference will invite him, because he does a lot to promote CP on the Web).
Sweden is also one of the countries that visit our blog most frequently (top ten: Australia, Belgium, Canada, France, Germany, Sweden, Netherlands, Russia, UK, US).

Don’t miss the deadline: April 17, 2013 for abstract submission.
As has been the case for a few years now, CP also encourages the submission of papers related to applications.
More info and the call for papers here:

lundi 14 janvier 2013

The risk of inbreeding, some CPAIOR statistics

Following up on the previous post by Jean-Charles: one of our readers suggested having a look at the workshop on organising workshops and conferences in CS.
So I had a look and found a very interesting presentation and article, "Best Practices for the Care and Feeding of a Program Committee, and Other Thoughts on Conference Organization" (Fred Douglis, IBM).

On PC composition he says in his presentation:

Avoid inbreeding, or the appearance of it
- Overlap from year to year
- Institutional overlap

Here is the extract on this topic from his article:

A number of conferences have a tendency to become rather inbred: they have a certain number of effectively permanent PC members, and only rotate a small fraction of their PC members from year to year. This is a bad idea. I believe that the core USENIX conferences, such as the annual conference and OSDI/NSDI are pretty good in this regard, as are some other conferences like SOSP. Some other systems conferences retain a much higher fraction of PC members, which I think results in a bit of tunnel vision, focusing on the same topics each year with much the same perceptions of what are good ideas and what are not.
Another possible aspect of inbreeding is the number of PC members from a particular organization or with a particular background. One USENIX security conference included a few people from one organization, and then the chair joined the same organization as the CFP came out, making it seem like he had selected 1/3 of the PC from his own organization. This looked bad to some, and while no one faults the chair for changing organizations, there would not have been an issue if the other people didn’t overlap so much. I can think of two other USENIX conferences that included over half the PC members with ties to the same department as the chair. I’m sure these PCs contained very talented people and I am not accusing them of bias; I am only suggesting that conferences need to avoid the appearance of being cliquish.
I think that conference organizers (such as USENIX) should establish guidelines for the number of PC members that can overlap in these respects, and then do a sanity check on PC lists prior to publishing the CFP. Some overlap with previous years is important, but too much overlap is terrible; finding that sweet spot would be a good topic for discussion at WOWCS. (I would recommend 20-30%.) Some conferences such as USENIX ATC have an informal policy of ensuring that a program chair serves on the PC the years before and after they chair it, which offers very strong continuity and should be adopted by all conferences.
One way to bring in new blood is to look at authors who have not previously served on the PC. When I chaired ATC’98, I took a USENIX bibliography to identify all authors of ATC or OSDI papers in the previous few years, then count their papers. I found a couple of people in my own department at AT&T who had published pretty much every year but never been on the PC ... and sure enough they both turned me down, despite my pleas for the need for authors to play their part as reviewers.
A corollary to my point about identifying people who have published but not served is that I think it is, in general, a tragedy to appoint someone to a PC who has never published at a conference, if the conference has been around for at least a couple of iterations. Are there people who could serve on a PC for conference X based on their experience at conferences Y and Z? Sure. But if they haven’t published at X, they either haven’t been submitting there (meaning they may not be that interested in the conference and also that they may not be well calibrated to the material normally published there) or they’ve been having submissions rejected. There are generally enough published authors from previous conferences that these authors should be tapped.

I just wanted to check whether one of my favourite conferences, CPAIOR, has a risk of inbreeding according to the indicators highlighted in Fred Douglis's paper.

Over the past 7 conferences (2007→2013):

18% of the PC has remained the same across all these years (counting on average 40 PC members).

2008: 54% of the PC common with 2007
2009: 57% of the PC common with 2008
2010: 79% of the PC common with 2009
2011: 46% of the PC common with 2010
2012: 50% of the PC common with 2011
2013: 32% of the PC common with 2012 (with 22% of the members coming from the same institution! I've already posted a message on this)

Another stat: 
50% of the union of all CPAIOR PC members have served on at least 4 of the last 7 PCs,
42% on at least 5 of 7,
32% on at least 6 of 7.
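For the curious, these figures can be reproduced mechanically. A plain-Scala sketch (names are mine), assuming each PC is a set of member names and that the year-to-year overlap is measured against the current year's PC size:

```scala
// Fraction of the current PC that already served on the previous PC
def overlap(current: Set[String], previous: Set[String]): Double =
  current.intersect(previous).size.toDouble / current.size

// Members who served on at least k of the given PCs
def frequentMembers(pcs: Seq[Set[String]], k: Int): Set[String] =
  pcs.flatten.groupBy(identity).collect { case (m, occ) if occ.size >= k => m }.toSet
```

The choice of denominator matters when PC sizes vary from year to year, which is worth stating explicitly when publishing such statistics.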

I’m afraid Fred Douglis would say CPAIOR has a risk of inbreeding …