The **Hungarian method** is a combinatorial optimization algorithm that solves the assignment problem in polynomial time and which anticipated later primal–dual methods. It was developed and published in 1955 by Harold Kuhn, who gave the name "Hungarian method" because the algorithm was largely based on the earlier works of two Hungarian mathematicians: Dénes Kőnig and Jenő Egerváry.^{[1]}^{[2]}

James Munkres reviewed the algorithm in 1957 and observed that it is (strongly) polynomial.^{[3]} Since then the algorithm has also been known as the **Kuhn–Munkres algorithm** or **Munkres assignment algorithm**. The original algorithm ran in O(*n*⁴) time; however, Edmonds and Karp, and independently Tomizawa, noticed that it can be modified to achieve an O(*n*³) running time.^{[4]}^{[5]} One of the most popular variants is the Jonker–Volgenant algorithm.^{[6]} Ford and Fulkerson extended the method to general maximum flow problems in the form of the Ford–Fulkerson algorithm. In 2006, it was discovered that Carl Gustav Jacobi had solved the assignment problem in the 19th century, and that the solution had been published posthumously in 1890 in Latin.^{[7]}

## The problem

### Example

In this simple example there are three workers: Paul, Dave, and Chris. One of them has to clean the bathroom, another to sweep the floors, and the third to wash the windows, but each demands different pay for the various tasks. The problem is to find the lowest-cost way to assign the jobs. The problem can be represented in a matrix of the costs of the workers doing the jobs. For example:

|  | Clean bathroom | Sweep floors | Wash windows |
|---|---|---|---|
| Paul | $2 | $3 | $3 |
| Dave | $3 | $2 | $3 |
| Chris | $3 | $3 | $2 |

The Hungarian method, when applied to the above table, would give the minimum cost: this is $6, achieved by having Paul clean the bathroom, Dave sweep the floors, and Chris wash the windows.
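Because the example is only 3×3, the claim can be checked by brute force over the 3! = 6 possible assignments; a short Python sketch (illustrative only; the Hungarian method exists precisely to avoid this factorial search):

```python
from itertools import permutations

# Cost table from the example: rows are Paul, Dave, Chris;
# columns are clean bathroom, sweep floors, wash windows.
costs = [
    [2, 3, 3],  # Paul
    [3, 2, 3],  # Dave
    [3, 3, 2],  # Chris
]

# Try every one-to-one assignment of workers to jobs and keep the cheapest.
best_cost, best_perm = min(
    (sum(costs[w][j] for w, j in enumerate(perm)), perm)
    for perm in permutations(range(3))
)
print(best_cost)  # 6: Paul cleans, Dave sweeps, Chris washes
```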

### Matrix formulation

In the matrix formulation, we are given a nonnegative *n*×*n* matrix, where the element in the *i*-th row and *j*-th column represents the cost of assigning the *j*-th job to the *i*-th worker. We have to find an assignment of the jobs to the workers, with each job assigned to one worker and each worker assigned one job, so that the total cost of the assignment is minimized.

This can be expressed as permuting the rows and columns of a cost matrix *C* to minimize the trace of a matrix:

min_{*L*, *R*} Tr(*LCR*),

where *L* and *R* are permutation matrices.

If the goal is to find the assignment that yields the *maximum* cost, the problem can be solved by negating the cost matrix *C*.
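The negation trick can be sketched with SciPy (an assumed dependency; its `linear_sum_assignment` routine, listed under the implementations below, solves the assignment problem and also accepts a `maximize` flag directly):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

C = np.array([[2, 3, 3],
              [3, 2, 3],
              [3, 3, 2]])

# Minimum-cost assignment.
r, c = linear_sum_assignment(C)
min_cost = int(C[r, c].sum())   # 6

# Maximum-cost assignment, obtained by minimizing the negated matrix.
r, c = linear_sum_assignment(-C)
max_cost = int(C[r, c].sum())   # 9
```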

### Bipartite graph formulation

The algorithm is easier to describe if we formulate the problem using a bipartite graph. We have a complete bipartite graph *G* = (*S*, *T*; *E*) with *n* worker vertices (*S*) and *n* job vertices (*T*), and each edge (*u*, *v*) has a nonnegative cost *c*(*u*, *v*). We want to find a perfect matching with a minimum total cost.

## The algorithm in terms of bipartite graphs

Let us call a function *y* : (*S* ∪ *T*) → ℝ a **potential** if *y*(*u*) + *y*(*v*) ≤ *c*(*u*, *v*) for each *u* ∈ *S*, *v* ∈ *T*. The *value* of a potential is the sum of the potential over all vertices: Σ_{*v* ∈ *S* ∪ *T*} *y*(*v*).

The cost of each perfect matching is at least the value of each potential: the total cost of the matching is the sum of costs of all edges; the cost of each edge is at least the sum of potentials of its endpoints; since the matching is perfect, each vertex is an endpoint of exactly one edge; hence the total cost is at least the total potential.
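This weak-duality argument can be checked numerically on the example matrix; the potential below is one hand-picked feasible choice (the names `y_S` and `y_T` are ours):

```python
from itertools import permutations

c = [[2, 3, 3],
     [3, 2, 3],
     [3, 3, 2]]
n = 3

# A feasible potential: y_S on worker vertices, y_T on job vertices,
# chosen so that y_S[u] + y_T[v] <= c[u][v] for every edge (u, v).
y_S = [2, 2, 2]
y_T = [0, 0, 0]
assert all(y_S[u] + y_T[v] <= c[u][v] for u in range(n) for v in range(n))

value = sum(y_S) + sum(y_T)  # the potential's value: 6

# Every perfect matching (permutation) costs at least the value;
# here the bound is attained, so this potential certifies optimality.
for perm in permutations(range(n)):
    assert sum(c[u][perm[u]] for u in range(n)) >= value
print(value)  # 6
```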

The Hungarian method finds a perfect matching and a potential such that the matching cost equals the potential value. This proves that both of them are optimal. In fact, the Hungarian method finds a perfect matching of **tight edges**: an edge (*u*, *v*) is called tight for a potential *y* if *y*(*u*) + *y*(*v*) = *c*(*u*, *v*). Let us denote the subgraph of tight edges by *G_y*. The cost of a perfect matching in *G_y* (if there is one) equals the value of *y*.

During the algorithm we maintain a potential *y* and an orientation of *G_y* (denoted by *G⃗_y*) which has the property that the edges oriented from *T* to *S* form a matching *M*. Initially, *y* is 0 everywhere, and all edges are oriented from *S* to *T* (so *M* is empty). In each step, either we modify *y* so that its value increases, or we modify the orientation to obtain a matching with more edges. We maintain the invariant that all the edges of *M* are tight. We are done if *M* is a perfect matching.

In a general step, let *R_S* ⊆ *S* and *R_T* ⊆ *T* be the vertices not covered by *M* (so *R_S* consists of the vertices in *S* with no incoming edge and *R_T* consists of the vertices in *T* with no outgoing edge). Let *Z* be the set of vertices reachable in *G⃗_y* from *R_S* by a directed path only following edges that are tight. This can be computed by breadth-first search.

If *R_T* ∩ *Z* is nonempty, then reverse the orientation of a directed path in *G⃗_y* from *R_S* to *R_T* ∩ *Z*. Thus the size of the corresponding matching increases by 1.

If *R_T* ∩ *Z* is empty, then let

Δ := min { *c*(*u*, *v*) − *y*(*u*) − *y*(*v*) : *u* ∈ *Z* ∩ *S*, *v* ∈ *T* ∖ *Z* }.

Δ is positive because there are no tight edges between *Z* ∩ *S* and *T* ∖ *Z*. Increase *y* by Δ on the vertices of *Z* ∩ *S* and decrease *y* by Δ on the vertices of *Z* ∩ *T*. The resulting *y* is still a potential, and although the graph *G_y* changes, it still contains *M* (see the next subsections). We orient the new edges from *S* to *T*. By the definition of Δ, the set *Z* of vertices reachable from *R_S* increases (note that the number of tight edges does not necessarily increase).

We repeat these steps until *M* is a perfect matching, in which case it gives a minimum cost assignment. The running time of this version of the method is O(*n*⁴): *M* is augmented *n* times, and in a phase where *M* is unchanged, there are at most *n* potential changes (since *Z* increases every time). The time sufficient for a potential change is O(*n*²).
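For illustration, the scheme above can be condensed into a short array-based sketch; this follows a well-known O(*n*³) potential-based variant from the competitive-programming literature (row potentials `u`, column potentials `v`, 1-indexed internally), not Kuhn's original presentation:

```python
def hungarian(cost):
    """Minimum-cost assignment for a square cost matrix.

    A compact O(n^3) potential-based variant: u and v are potentials
    on rows and columns, p[j] is the row matched to column j
    (0 = unmatched; indices are 1-based internally).
    """
    n = len(cost)
    INF = float("inf")
    u = [0] * (n + 1)
    v = [0] * (n + 1)
    p = [0] * (n + 1)
    way = [0] * (n + 1)          # way[j]: previous column on the alternating path
    for i in range(1, n + 1):    # add row i to the matching
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)   # minv[j]: smallest reduced cost seen for column j
        used = [False] * (n + 1)
        while True:
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):   # potential adjustment (the Delta step above)
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:           # reached an unmatched column: augment
                break
        while j0:                    # flip the edges along the alternating path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = [0] * n                  # match[i] = column assigned to row i
    for j in range(1, n + 1):
        match[p[j] - 1] = j - 1
    total = sum(cost[i][match[i]] for i in range(n))
    return total, match
```

On the example matrix `[[2, 3, 3], [3, 2, 3], [3, 3, 2]]` this returns a total of 6 with the diagonal assignment.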

### Proof that adjusting the potential *y* leaves *M* unchanged

To show that every edge in *M* remains after adjusting *y*, it suffices to show that for an arbitrary edge in *M*, either both of its endpoints, or neither of them, are in *Z*. To this end let *vu* be an edge in *M* from *v* ∈ *T* to *u* ∈ *S*. It is easy to see that if *v* is in *Z* then *u* must be too, since every edge in *M* is tight. Now suppose, toward contradiction, that *u* ∈ *Z* but *v* ∉ *Z*. *u* itself cannot be in *R_S* because it is the endpoint of a matched edge, so there must be some directed path of tight edges from a vertex in *R_S* to *u*. This path must avoid *v*, since that is by assumption not in *Z*, so the vertex immediately preceding *u* in this path is some other vertex *v′* ∈ *T*. *v′u* is a tight edge from *v′* to *u* and is thus in *M*. But then *M* contains two edges that share the vertex *u*, contradicting the fact that *M* is a matching. Thus every edge in *M* has either both endpoints or neither endpoint in *Z*.

### Proof that *y* remains a potential

To show that *y* remains a potential after being adjusted, it suffices to show that no edge has its total potential increased beyond its cost. This is already established for edges in *M* by the preceding paragraph, so consider an arbitrary edge *uv* from *u* ∈ *S* to *v* ∈ *T*. If *y*(*u*) is increased by Δ, then either *v* ∈ *Z* ∩ *T*, in which case *y*(*v*) is decreased by Δ, leaving the total potential of the edge unchanged, or *v* ∈ *T* ∖ *Z*, in which case the definition of Δ guarantees that *y*(*u*) + *y*(*v*) + Δ ≤ *c*(*u*, *v*). Thus *y* remains a potential.

## Matrix interpretation


Given *n* workers and *n* tasks, and an *n*×*n* matrix containing the cost of assigning each worker to a task, find the cost-minimizing assignment.

First the problem is written in the form of a matrix as given below:

    a1  a2  a3  a4
    b1  b2  b3  b4
    c1  c2  c3  c4
    d1  d2  d3  d4

where a, b, c and d are the workers who have to perform tasks 1, 2, 3 and 4. a1, a2, a3, a4 denote the penalties incurred when worker "a" does task 1, 2, 3, 4 respectively. The same holds true for the other symbols as well. The matrix is square, so each worker can perform only one task.

**Step 1**

Then we perform row operations on the matrix. To do this, the lowest of all *a_i* (*i* = 1, 2, 3, 4) is taken and subtracted from each element in that row. This leads to at least one zero in that row (we get multiple zeros when two equal elements happen to be the lowest in that row). This procedure is repeated for all rows. We now have a matrix with at least one zero per row. Now we try to assign tasks to agents such that each agent is doing only one task and the penalty incurred in each case is zero. This is illustrated below.

    0    a2'  a3'  a4'
    b1'  b2'  b3'  0
    c1'  0    c3'  c4'
    d1'  d2'  0    d4'

The zeros shown are the assigned tasks: one in each row and each column.
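Step 1's row reduction can be sketched in a line of Python (the helper name `reduce_rows` is ours):

```python
def reduce_rows(matrix):
    # Subtract each row's minimum from every element of that row;
    # this leaves at least one zero in each row.
    return [[x - min(row) for x in row] for row in matrix]

print(reduce_rows([[2, 3, 3], [3, 2, 3], [3, 3, 2]]))
# [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```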

**Step 2**

Sometimes it may turn out that the matrix at this stage cannot be used for assigning, as is the case for the matrix below.

    0    a2'  a3'  a4'
    b1'  b2'  b3'  0
    0    c2'  c3'  c4'
    d1'  0    d3'  d4'

In the above case, no complete assignment can be made. Note that task 1 is done at zero penalty by both agents a and c, but they cannot both be assigned the same task. Also note that no one does task 3 at zero penalty. To overcome this, we repeat the above procedure for all columns (i.e. the minimum element in each column is subtracted from all the elements in that column) and then check whether an assignment is possible.

In most situations this will give the result, but if it is still not possible then we need to keep going.
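The column pass described above mirrors the row pass; a sketch (the helper name `reduce_cols` is ours):

```python
def reduce_cols(matrix):
    # Subtract each column's minimum from every element of that column;
    # afterwards every column also contains at least one zero.
    mins = [min(col) for col in zip(*matrix)]
    return [[x - m for x, m in zip(row, mins)] for row in matrix]

# A row-reduced matrix whose third column still has no zero:
print(reduce_cols([[0, 1, 2], [3, 0, 4], [0, 5, 6]]))
# [[0, 1, 0], [3, 0, 2], [0, 5, 4]]
```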

**Step 3**

All zeros in the matrix must be covered by marking as few rows and/or columns as possible. The following procedure is one way to accomplish this:

First, assign as many tasks as possible.

- Row 1 has one zero, so it is assigned. The 0 in row 3 is crossed out because it is in the same column.
- Row 2 has one zero, so it is assigned.
- Row 3's only zero has been crossed out, so nothing is assigned.
- Row 4 has two uncrossed zeros. Either one can be assigned, and the other zero is crossed out.

Alternatively, the 0 in row 3 may be assigned, causing the 0 in row 1 to be crossed instead.

    0'   a2'  a3'  a4'
    b1'  b2'  b3'  0'
    0    c2'  c3'  c4'
    d1'  0'   0    d4'

(0' marks an assigned zero; a plain 0 is a crossed-out zero.)

Now to the drawing part.

- Mark all rows having no assignments (row 3).
- Mark all columns having zeros in newly marked row(s) (column 1).
- Mark all rows having assignments in newly marked columns (row 1).
- Repeat the steps outlined in the previous 2 bullets until there are no new rows or columns being marked.

Here × marks the marked rows (1 and 3) and the marked column (1):

         ×
    ×   0'   a2'  a3'  a4'
        b1'  b2'  b3'  0'
    ×   0    c2'  c3'  c4'
        d1'  0'   0    d4'

Now draw lines through all marked columns and **unmarked** rows.

Here × marks the covered lines: marked column 1 and unmarked rows 2 and 4.

         ×
        0'   a2'  a3'  a4'
    ×   b1'  b2'  b3'  0'
        0    c2'  c3'  c4'
    ×   d1'  0'   0    d4'

The aforementioned detailed description is just one way to draw the minimum number of lines to cover all the 0s. Other methods work as well.

**Step 4**

Among the elements not covered by any line, find the lowest value. Subtract it from every uncovered element and add it to every element covered by two lines.

Repeat steps 3–4 until an assignment is possible; this is the case when the minimum number of lines needed to cover all the 0s equals max(number of workers, number of tasks), assuming dummy rows or columns (usually filled with the maximum cost) are used when the number of workers is greater than the number of tasks.

In essence, each application of step 4 brings the next-lowest cost among the remaining choices into play; the procedure is repeated until the workers can be distinguished in terms of least cost.
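Given a cover from step 3, the step 4 adjustment can be sketched as follows (function and parameter names are ours; rows and columns are identified by index):

```python
def adjust(matrix, covered_rows, covered_cols):
    # Step 4: find the smallest uncovered value, subtract it from every
    # uncovered element, and add it to every doubly covered element;
    # singly covered elements are left unchanged.
    n = len(matrix)
    delta = min(matrix[i][j]
                for i in range(n) for j in range(n)
                if i not in covered_rows and j not in covered_cols)
    return [[matrix[i][j]
             + (delta if i in covered_rows and j in covered_cols else 0)
             - (delta if i not in covered_rows and j not in covered_cols else 0)
             for j in range(n)]
            for i in range(n)]

print(adjust([[0, 1], [2, 3]], covered_rows={0}, covered_cols={0}))
# [[3, 1], [2, 0]]
```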

## Bibliography

- R. E. Burkard, M. Dell'Amico, S. Martello: *Assignment Problems* (revised reprint). SIAM, Philadelphia (PA) 2012. ISBN 978-1-61197-222-1
- M. Fischetti, *Lezioni di Ricerca Operativa*, Edizioni Libreria Progetto Padova, Italy, 1995.
- R. Ahuja, T. Magnanti, J. Orlin, *Network Flows*, Prentice Hall, 1993.
- S. Martello, "Jenő Egerváry: from the origins of the Hungarian algorithm to satellite communication". *Central European Journal of Operational Research* 18, 47–58, 2010.

## References

1. Harold W. Kuhn, "The Hungarian Method for the assignment problem", *Naval Research Logistics Quarterly*, **2**: 83–97, 1955. Kuhn's original publication.
2. Harold W. Kuhn, "Variants of the Hungarian method for assignment problems", *Naval Research Logistics Quarterly*, **3**: 253–258, 1956.
3. J. Munkres, "Algorithms for the Assignment and Transportation Problems", *Journal of the Society for Industrial and Applied Mathematics*, **5**(1): 32–38, March 1957.
4. Edmonds, Jack; Karp, Richard M. (1 April 1972). "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems". *Journal of the ACM*. **19**(2): 248–264. doi:10.1145/321694.321699.
5. Tomizawa, N. (1971). "On some techniques useful for solution of transportation network problems". *Networks*. **1**(2): 173–194. doi:10.1002/net.3230010206. ISSN 1097-0037.
6. Jonker, R.; Volgenant, A. (December 1987). "A shortest augmenting path algorithm for dense and sparse linear assignment problems". *Computing*. **38**(4): 325–340. doi:10.1007/BF02278710.
7. https://web.archive.org/web/20151016182619/http://www.lix.polytechnique.fr/~ollivier/JACOBI/presentationlEngl.htm

## External links

- Bruff, Derek, *The Assignment Problem and the Hungarian Method* (matrix formalism).
- Mordecai J. Golin, *Bipartite Matching and the Hungarian Method* (bigraph formalism), Course Notes, Hong Kong University of Science and Technology.
- *Hungarian maximum matching algorithm* (both formalisms), on the Brilliant website.
- R. A. Pilgrim, *Munkres' Assignment Algorithm. Modified for Rectangular Matrices*, Course notes, Murray State University.
- Mike Dawes, *The Optimal Assignment Problem*, Course notes, University of Western Ontario.
- *On Kuhn's Hungarian Method – A tribute from Hungary*, András Frank, Egervary Research Group, Pazmany P. setany 1/C, H1117, Budapest, Hungary.
- Lecture: *Fundamentals of Operations Research - Assignment Problem - Hungarian Algorithm*, Prof. G. Srinivasan, Department of Management Studies, IIT Madras.
- Extension: *Assignment sensitivity analysis* (with O(*n*⁴) time complexity), Liu, Shell.
- *Solve any Assignment Problem online*, provides a step-by-step explanation of the Hungarian Algorithm.

### Implementations

Note that not all of these attain the O(*n*³) time complexity, even if they claim it. Some may contain errors, implement the slower O(*n*⁴) algorithm, or have other inefficiencies. In the worst case, a code example linked from Wikipedia could later be modified to include exploit code. Verification and benchmarking are necessary when using such code examples from unknown authors.

- C implementation claiming O(*n*³) time complexity
- Java implementation claiming O(*n*³) time complexity
- Matlab implementation claiming O(*n*³) time complexity (public domain)
- Python implementation
- Ruby implementation with unit tests
- C# implementation claiming O(*n*³) time complexity
- D implementation with unit tests (port of a Java version claiming O(*n*³))
- Online interactive implementation
- Serial and parallel implementations
- Matlab and C
- Perl implementation
- C++ implementation
- C++ implementation claiming O(*n*³) time complexity (BSD style open source licensed)
- MATLAB implementation
- C implementation
- JavaScript implementation with unit tests (port of a Java version claiming O(*n*³) time complexity)
- Clue R package proposes an implementation, solve_LSAP
- Node.js implementation on GitHub
- Python implementation in scipy package