- Harold W. Kuhn - A Tale of Three Eras: The Discovery and Rediscovery of the Hungarian Method [Abstract]
- John F. Nash, Jr. - The Agencies Method for Modeling Coalitions and Cooperation in Games [Abstract]
- Fran Ackermann - Problem Structuring Methods ‘in the Dock’!: Arguing the case for Soft OR [Abstract]
- Noga Alon - The combinatorics of Social Choice [Abstract]
- James Cochran - You want them to Remember? Then Make it Memorable! [Abstract]
- Rommert Dekker - Operations Research for Green Logistics [Abstract]
- Elena Fernández - Location problems on networks with routing [Abstract]
- Pierre Hansen - Can the computer make scientific discoveries? [Abstract]
- Martine Labbé - Bilevel programming and price optimization problems [Abstract]
- Nelson Maculan - The Discretizable Molecular Distance Geometry Problem [Abstract]
- Arkadi Nemirovski - Safe tractable approximations of chance constraints [Abstract]
- Alexander Shapiro - Computational Complexity of Stochastic Programming [Abstract]
- Paolo Toth - Exact and Heuristic Algorithms for the Vertex Coloring Problem [Abstract]
- Berthold Vöcking - A Combinatorial Approach to Secondary Spectrum Auctions [Abstract]
Harold W. Kuhn - A Tale of Three Eras: The Discovery and Rediscovery of the Hungarian Method
In the Fall of 1953, a translation of a paper of Jenő Egerváry from Hungarian into English, combined with a result of Dénes Kőnig, provided the basis of a good algorithm for the linear assignment problem. To honor the Hungarian mathematicians whose ideas had been used, it was called the Hungarian Method. In 2005, François Ollivier discovered that the posthumous papers of Carl G. J. Jacobi contain an algorithm that, when examined carefully, is essentially identical to the Hungarian Method. Since Jacobi died in 1851, this work was done over a hundred years prior to the publication of the Hungarian Method in 1955.
This lecture will provide an account of the mathematical, academic, social and political worlds of Jacobi, Kőnig/Egerváry, and Kuhn. As sharply different as they were (the Prussian monarchy, Hungary under the Nazis and the Communists, and the post-war USA), they produced the same mathematical result.
The lecture will be self-contained, assuming little beyond the duality theory of linear programming. The Hungarian Method and Jacobi's algorithm will be explained at an elementary level and will be illustrated by several examples.
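To make concrete the problem the Hungarian Method solves, here is a minimal brute-force sketch (illustrative only, not part of the talk): it enumerates all n! assignments of a small cost matrix, whereas the Hungarian Method reaches the same optimum in polynomial time.

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Solve the linear assignment problem by enumerating all n! permutations.
    The Hungarian Method finds the same optimum in polynomial time."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):  # perm[i] = column assigned to row i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Workers (rows) x jobs (columns): cost[i][j] = cost of worker i doing job j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, value = brute_force_assignment(cost)
print(perm, value)  # optimal assignment and its total cost
```

The brute force is exponential; the point of the Hungarian Method is to exploit linear programming duality to avoid this enumeration.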
John F. Nash, Jr. - The Agencies Method for Modeling Coalitions and Cooperation in Games
Our work in this research project represents the beginning of an effort to study the game-theoretic phenomenon of cooperation in cooperative games (that is, in games where it is understood that the players MAY cooperate whenever they would naturally desire to do that so as to realize mutually advantageous benefits). (This project of work could be described as work along the lines of the (old) "Nash program", seeking to reduce the study of "cooperative games" to the area of the (theoretically simpler) study of "equilibrium" in "non-cooperative games".)
The theoretical key for the inter-linking of these areas is the concept of the "evolution of cooperation" (in Nature) which has been studied both by theoretical biologists and by game theorists.
Our key idea for the reduction of a process realized through cooperation to a process achieved by actions taken independently (and separately) by the Players that are involved in the game context lies in the introduction of moves (or actions) of "acceptance" together with the supposition of an indefinitely repeated game context in which the Players may react (in punishing fashions) against unfavorable actions on the part of other Players. Our model designed for this purpose has the players strategically making "demands" in relation to the behavior of the other players.
And at the same time any Player also chooses, strategically, how he (or she) will allocate the resources available to a coalition if he (or she) has become accepted (through acceptance elections) to have that power. As it happens, in short, the studied model, for a game with three players, involved 39 strategic parameters to be controlled by the players, and we represented it in terms of 42 variables.
There was a substantial challenge of computation to find game-theoretic solutions. These were sought in terms of PURE STRATEGIES. And the game example itself, although being described by a characteristic function giving rise to three numerical parameters setting the payoff benefits accessible by specific coalitions, was an "NTU game" rather than a game with "transferable utility". However, we have compared solution results with corresponding indications for the games deriving from the Shapley value or from the nucleolus.
The general context of computations is found to be quite challenging. We have used Mathematica in connection with the work done up to now. There seem to be possibilities for refinements in the model structure, and games of more than three players can also be studied.
Fran Ackermann - Problem Structuring Methods ‘in the Dock’!: Arguing the case for Soft OR
Problem Structuring Methods (or Soft OR) have been around for nearly 40 years, and yet these methods are still very much overlooked in the OR world. Whilst there are almost certainly a number of explanations for this, two key stumbling blocks are 1) the subjective nature of the modelling, yielding insights rather than testable results, and 2) the demand on users to both manage content (through modelling) and manage process (working WITH rather than on behalf of groups). This keynote presentation aims to put forward a case to support an increase in the use of these methods, either on their own to support clients with messy, complex problems, or in combination with more mathematical methods, thus facilitating models that address a shared, well-understood objective, provide testable results, and are negotiated and thus owned by key stakeholders.
Noga Alon - The combinatorics of Social Choice
The early work of Condorcet in the 18th century, and that of Arrow and others in the 20th century, revealed the complex and interesting mathematical problems that arise in the theory of Social Choice, showing that the simple process of voting leads to strikingly counter-intuitive paradoxes. I will describe some of these, focusing on several recent intriguing examples whose analysis combines combinatorial and probabilistic ideas with techniques from Discrete Optimization and the theory of the VC dimension of range spaces.
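The classical Condorcet paradox can be exhibited in a few lines (an illustrative sketch, not material from the talk): with three voters holding cyclic preferences, pairwise majority voting produces no winner.

```python
# Three voters with cyclic preferences over candidates A, B, C
# (each list is one voter's ranking, best first).
ballots = [["A", "B", "C"],
           ["B", "C", "A"],
           ["C", "A", "B"]]

def majority_prefers(x, y, ballots):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majority voting yields a cycle: A beats B, B beats C, C beats A,
# so no candidate beats every other candidate (no Condorcet winner).
print(majority_prefers("A", "B", ballots),
      majority_prefers("B", "C", ballots),
      majority_prefers("C", "A", ballots))
```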
James Cochran - You want them to Remember? Then Make it Memorable!
Each of us has key concepts we want our students to understand and remember, but lecturing to students on these concepts often fails to engender their deep comprehension or long-term retention. So how can instructors of operations research/management science effectively accomplish these pedagogical goals? In this session Professor Cochran will discuss the use of several interesting and novel active learning exercises, classroom cases, and live projects that can dramatically improve student comprehension and retention of key concepts. Throughout the session Professor Cochran will emphasize his points with live demonstrations of active learning exercises. Card tricks, classroom versions of television game shows, and a teaching case with integrated active learning will be featured. Because many of these exercises are easily transferable across topics, instructor/classroom styles, cultures, national borders, institutions, faculties, programs, levels of technology, and class sizes, it is very likely you will walk away from this session with ideas on how to improve your own teaching (indeed, Professor Cochran will be very disappointed if you don't!).
Rommert Dekker - Operations Research for Green Logistics
After years of talking, green logistics, or environmentally friendly supply chain management, seems to have taken off in many companies. In fact, industry seems to be embarking on hundreds of new initiatives, although they may be more popular in certain countries than in others. On the one hand that is surprising, because at first sight becoming green may cost industry money, and we would expect more initiatives from governments. The attention for green logistics in the OR community, however, seems to lag behind in this respect.
In this overview we will first indicate the importance of green aspects to logistics and supply chain management and the drivers behind it. Next we will give an overview of various aspects of green logistics. We start with extending the traditional cost objective in quantitative logistic studies with environmental criteria, such as emissions in the form of CO2, NOx and PM. In a following step we discuss the physical supply chain drivers, such as transportation, facilities and products, and highlight the fundamental green choices considered in this respect. Thereafter we focus on the soft drivers, by considering logistic concepts, decision support systems, sourcing and pricing in the supply chain. In doing so, we will highlight the contributions of OR to green logistics so far. In fact, OR has already contributed a lot to green logistics by reducing costs in supply chains, although in most cases the environmental aspects were not addressed explicitly. Next we focus on those problems where a high contribution of Operations Research can be expected. These problems deal with congestion, with coordination of transports and with the use of price mechanisms to improve efficiency, and they occur in a great variety of cases. Hence there are many opportunities for Operations Research: methodologically, problem-oriented, and as a supporting discipline in the many logistic studies.
Elena Fernández - Location problems on networks with routing
Various location problems on networks involve routing decisions related to sending flows between pairs of nodes, using the located facilities as final or intermediate nodes. In addition to the location of the facilities, these problems involve essential aspects of network design, as the arcs to be used must also be decided. This extra source of difficulty is reflected, in practice, in formulations with a huge number of variables, which are highly inefficient even for relatively small instances.
In this talk we discuss some problems of this type, including some hub location models, and their relation to network design. Alternative formulations are presented and recent advances leading to successful solution methods are introduced.
Pierre Hansen - Can the computer make scientific discoveries?
Is there a systematic method leading to scientific discovery? To this question, Francis Bacon answers yes, while both Albert Einstein and Karl Popper answer no. As in so many cases, enormous progress in computer power, and in its efficient use, renews the question. Taking examples from geometry, graph and number theory, physics and chemistry, we will illustrate some clear successes (and some failures). This will lead to some speculation on the balance between automation and inspiration in discovery.
Martine Labbé - Bilevel programming and price optimization problems
Consider a general pricing model involving two levels of decision-making. The upper level (leader) imposes prices on a specified set of goods or services while the lower level (follower) optimizes its own objective function, taking into account the pricing scheme of the leader. This model belongs to the class of bilevel optimization problems where both objective functions are bilinear.
In this talk, we review this class of hierarchical problems from both theoretical and algorithmic points of view and then focus on two special cases. In the first one, tolls must be determined on a specified subset of arcs of a multicommodity transportation network. In this context the leader corresponds to the profit-maximizing owner of the network, and the follower to users travelling between nodes of the network. The users are assigned to shortest paths with respect to a generalized cost equal to the sum of the actual cost of travel plus a money equivalent of travel time. The second case consists in determining optimal prices for bundles of products given that each customer will buy the bundle that maximizes her/his own utility function.
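A toy instance of the toll-setting case (numbers hypothetical, not from the talk): one origin-destination pair served by a toll-free route of cost 10 and a tolled route of base cost 2 plus a toll T. The follower takes the cheaper route, and the leader chooses T to maximize toll revenue.

```python
def follower_route(toll, free_cost=10.0, tolled_base=2.0):
    """The follower picks the cheaper route; ties go to the tolled route
    (the usual optimistic assumption in bilevel pricing)."""
    return "tolled" if tolled_base + toll <= free_cost else "free"

def leader_revenue(toll):
    """Leader collects the toll only if the follower uses the tolled route."""
    return toll if follower_route(toll) == "tolled" else 0.0

# Revenue rises with T until the follower switches to the free route,
# so the optimum is T = 10 - 2 = 8: exactly the follower's cost saving.
best_toll = max(range(0, 16), key=leader_revenue)
print(best_toll, leader_revenue(best_toll))
```

Even this tiny example shows the bilinear structure: the leader's revenue is price times a route choice that itself depends on the price.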
Among others, we present complexity results, identify some polynomially solvable cases and propose mixed integer linear formulations for these pricing problems.
Nelson Maculan - The Discretizable Molecular Distance Geometry Problem
The function of a protein is determined by its 3D structure. Nuclear Magnetic Resonance (NMR) experiments provide distances between some pairs of atoms of a protein. The Molecular Distance Geometry Problem (MDGP) consists in finding all the atomic positions of the molecule by exploiting the distances generated by the NMR experiments. In practice, the MDGP is solved by continuous optimization methods.
We show that, under a few realistic assumptions, the MDGP can be formulated as a search in a discrete space; we call this MDGP subclass the Discretizable MDGP (DMDGP). We prove that the DMDGP is also NP-hard and we propose a Branch-and-Prune (BP) algorithm.
The BP algorithm performs remarkably well in practice in terms of speed and solution accuracy. We present computational results on several artificial and real-life instances.
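The discretization idea behind BP can be caricatured in one dimension (a hypothetical toy, far simpler than the 3D molecular setting of the talk): each new point has two candidate positions, giving a binary search tree whose branches are pruned using the extra distances.

```python
def branch_and_prune(consec, extra):
    """Enumerate 1D placements with x[0] = 0 and |x[i] - x[i-1]| = consec[i-1],
    pruning branches that violate the constraints in `extra`, a dict mapping
    index pairs (j, i), j < i, to the required distance |x[i] - x[j]|."""
    solutions = []

    def extend(xs):
        i = len(xs)
        if i == len(consec) + 1:
            solutions.append(tuple(xs))
            return
        for sign in (+1, -1):                      # binary branching
            cand = xs[-1] + sign * consec[i - 1]
            ok = all(abs(cand - xs[j]) == d
                     for (j, k), d in extra.items() if k == i)
            if ok:                                 # prune infeasible branches
                extend(xs + [cand])

    extend([0.0])
    return solutions

# Four points, consecutive distances all 1, plus the constraint |x3 - x0| = 1.
sols = branch_and_prune([1.0, 1.0, 1.0], {(0, 3): 1.0})
print(sols)
```

The real DMDGP works in 3D, where the two candidate positions of each atom come from intersecting spheres around its three predecessors, but the branch-then-prune search tree has the same shape.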
Arkadi Nemirovski - Safe tractable approximations of chance constraints
When optimizing under stochastic uncertainty, the entity of primary importance is a chance constraint

Prob_{ξ∼P} { F(x, ξ) ∈ Q } ≥ 1 − ε   for all distributions P in the given family,

where x is the decision vector, F(x, ξ) is the constraint mapping, ξ is a random perturbation with distribution P known to belong to a given family, Q is a given target set, and ε << 1 is a given tolerance. Aside from a handful of special cases, chance constraints are computationally intractable: first, it is difficult to check efficiently whether the constraint is satisfied at a given x, and second, the feasible set of a chance constraint typically is nonconvex, which makes it problematic to optimize under the constraint. Given these difficulties, a natural way to process a chance constraint is to replace it with its safe tractable approximation: a tractable convex constraint whose feasible set is contained in that of the chance constraint. In the talk, we overview some recent results in this direction, with emphasis on chance versions of well-structured convex constraints (primarily, affinely perturbed scalar linear and linear matrix inequalities) and establish links between this topic and Robust Optimization.
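One of the tractable special cases is worth making concrete: for a single affinely perturbed scalar linear constraint with Gaussian data, the chance constraint Prob{a·x ≤ b} ≥ 1 − ε admits an exact convex reformulation. The numbers and the Gaussian assumption below are illustrative, not from the talk.

```python
import random
from statistics import NormalDist

# Chance constraint Prob{ a.x <= b } >= 1 - eps with a ~ N(mu, diag(sigma^2)).
# For Gaussian data it has an exact convex (second-order cone) reformulation:
#   mu.x + z * ||sigma * x||_2 <= b,   z = Phi^{-1}(1 - eps).
mu, sigma = [1.0, 1.0], [0.1, 0.2]
x, b, eps = [1.0, 2.0], 3.7, 0.05

mean = sum(m * xi for m, xi in zip(mu, x))
std = sum((s * xi) ** 2 for s, xi in zip(sigma, x)) ** 0.5
z = NormalDist().inv_cdf(1 - eps)
feasible = mean + z * std <= b              # deterministic reformulation

# Monte Carlo estimate of the same probability, as a sanity check.
random.seed(0)
N = 100_000
hits = sum(
    sum(random.gauss(m, s) * xi for m, s, xi in zip(mu, sigma, x)) <= b
    for _ in range(N)
)
print(feasible, hits / N)
```

Here the reformulation is exact rather than conservative; for general distribution families one falls back on safe approximations of the kind surveyed in the talk.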
Alexander Shapiro - Computational Complexity of Stochastic Programming
Stochastic programming is a popular approach to optimization under uncertainty. Its origins go back to the pioneering papers of Dantzig (1955) and Beale (1955). The traditional approach to solving stochastic programming problems is to construct scenarios representing what may happen in the future. From a modelling point of view, such scenarios can be considered as a discretization of the underlying (true) stochastic data process. Consequently, the computational complexity of the obtained optimization problem is determined by the number of generated scenarios.
Unfortunately, the number of scenarios needed to approximate the "true" distribution of the data process grows exponentially both with the number of random parameters and with the number of stages. A way of dealing with this explosion in the number of scenarios is to use randomization approaches based on Monte Carlo sampling techniques. In this talk we discuss theoretical and computational aspects of the Monte Carlo sampling approach to solving two-stage and multi-stage stochastic programming problems.
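To make the sampling idea concrete, here is a minimal single-stage illustration of my choosing (the newsvendor model, not an example from the talk), where the sample average approximation (SAA) optimum is simply an empirical quantile of the sampled demands.

```python
import math
import random

# Newsvendor: order q at unit cost c, sell min(q, demand D) at unit price p.
# The expected-profit maximizer is the critical quantile q* = F^{-1}((p-c)/p);
# the SAA replaces the true demand distribution F by the empirical
# distribution of Monte Carlo samples, so its optimum is a sample quantile.
c, p = 3.0, 10.0
ratio = (p - c) / p                            # critical ratio = 0.7

random.seed(1)
demands = sorted(random.expovariate(1 / 100.0) for _ in range(20_000))
q_saa = demands[int(ratio * len(demands))]     # SAA solution: empirical quantile

q_true = -100.0 * math.log(1 - ratio)          # exact quantile of Exp(mean 100)
print(q_saa, q_true)
```

With 20,000 samples the SAA solution lands close to the true optimum; the complexity results discussed in the talk quantify how many samples such accuracy requires, and how this scales with dimension and with the number of stages.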
Paolo Toth - Exact and Heuristic Algorithms for the Vertex Coloring Problem
Given an undirected graph G = (V,E), where V is the vertex set and E the edge set, the Vertex Coloring Problem (VCP) requires assigning a color to each vertex in such a way that colors on adjacent vertices are different and the number of colors used is minimized. The VCP is a well-known NP-hard problem, and has received considerable attention in the literature, not only for its real-world applications in many engineering fields (including scheduling, timetabling, register allocation, frequency assignment and communication networks), but also for its theoretical aspects and for its difficulty from the computational point of view.
In this talk, we review the main Integer Linear Programming formulations, exact algorithms, lower bounding procedures, and heuristic and metaheuristic algorithms proposed for the VCP. Extensive computational experiments on benchmark instances from the literature are reported, comparing the performance of the most effective exact and heuristic algorithms.
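For illustration, here is a first-fit greedy heuristic, one of the simplest members of the heuristic families reviewed (a sketch, not one of the specific algorithms from the talk):

```python
def greedy_coloring(adj):
    """First-fit heuristic: scan vertices in order and give each one the
    smallest color not used by an already-colored neighbor. Uses at most
    Delta + 1 colors (Delta = max degree) but is generally not optimal."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# 5-cycle: an odd cycle, so its chromatic number is 3.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
color = greedy_coloring(adj)
print(color)
```

The quality of first-fit depends heavily on the vertex order, which is why the metaheuristics reviewed in the talk search over orderings and recolorings rather than committing to a single greedy pass.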
Berthold Vöcking* - A Combinatorial Approach to Secondary Spectrum Auctions
On the secondary spectrum market, licenses are valid only for local regions, and spectrum allocations have to take into account interference between users in neighboring regions. We show that problem formulations for secondary spectrum auctions in well-established but rather technical models for wireless communication, e.g., the protocol or the physical model, can be represented in terms of a plain combinatorial model in which interference conditions are described by a conflict graph. The conflict graphs obtained from the wireless models have an interesting combinatorial property: the so-called inductive independence can be bounded by a slowly growing function. We investigate how this property can be exploited for the design of approximation algorithms for efficient spectrum allocations.
*Joint work with Martin Hoefer and Thomas Kesselheim
Sunday July 11th, 2010
16:30 – 18:30
Faculty of Sciences, Building C3
Sunday July 11th, 2010
19:00 – 22:30
Faculty of Sciences, alley between Building C5 and Building C6
Sunday July 11th, 2010
12:00 Cais do Sodré
Monday July 12th, 2010
20:00 Cais do Sodré
Wednesday July 14th, 2010
20:00 Cais do Sodré
Show and Official Banquet
Tuesday July 13th, 2010
19:30 Parque das Nações
Wednesday July 14th, 2010
17:20 Faculty of Sciences, Building C3