**Introduction**

In many applications of graph algorithms, including communication networks, VLSI design, graphics, and assembly planning, graphs are subject to discrete changes, such as insertions or deletions of vertices or edges. In the last two decades there has been a growing interest in such dynamically changing graphs, and a whole body of algorithmic techniques and data structures for dynamic graphs has been discovered. This chapter is intended as an overview of this ﬁeld.

An *update on a graph* is an operation that inserts or deletes edges or vertices of the graph or changes attributes associated with edges or vertices, such as cost or color. Throughout this chapter, by *dynamic graph* we denote a graph that is subject to a sequence of updates. In a typical dynamic graph problem one would like to answer queries on a dynamic graph, such as whether the graph is connected or what the shortest path between two given vertices is. The goal of a dynamic graph algorithm is to update the solution of a problem efficiently after each change, rather than recomputing it from scratch every time. Given their powerful versatility, it is not surprising that dynamic algorithms and dynamic data structures are often more difficult to design and analyze than their static counterparts.

We can classify dynamic graph problems according to the types of updates allowed. In particular, a dynamic graph problem is said to be *fully dynamic* if the update operations include unrestricted insertions and deletions of edges or vertices. A dynamic graph problem is said to be *partially dynamic* if only one type of update, either insertions or deletions, is allowed. More specifically, a partially dynamic problem is said to be *incremental* if only insertions are allowed, while it is said to be *decremental* if only deletions are allowed.
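
For concreteness, the incremental case of connectivity can be handled with a classic union-find structure: edge insertions merge components, and queries compare component representatives. The sketch below (the class and method names are ours, not from this chapter) runs in near-constant amortized time per operation.

```python
class IncrementalConnectivity:
    """Incremental (insertions-only) dynamic connectivity via union-find
    with path halving and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def _find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def insert_edge(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru == rv:
            return                      # endpoints already connected
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru             # union by rank: attach smaller tree
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

    def connected(self, u, v):
        return self._find(u) == self._find(v)
```

Deletions are precisely what this structure cannot undo, which is why the fully dynamic case requires the heavier machinery discussed in the rest of the chapter.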

In the first part of this chapter we will present the main algorithmic techniques used to solve dynamic problems on both *undirected* and *directed* graphs. In the second part of the chapter we will deal with specific dynamic problems, investigating as paradigmatic examples the dynamic maintenance of minimum spanning trees, connectivity, transitive closure, and shortest paths. Interestingly enough, dynamic problems on directed graphs seem much harder to solve than their counterparts on undirected graphs, and require completely different techniques and tools.

**Techniques for Undirected Graphs**

Many of the algorithms proposed in the literature use the same general techniques, and hence we begin by describing these techniques. In this section we focus on undirected graphs, while techniques for directed graphs will be discussed in Section 36.3. Typically, most of these techniques use some sort of graph decomposition, and partition either the vertices or the edges of the graph to be maintained. Moreover, data structures that maintain properties of dynamically changing trees, such as the ones described in Chapter 35 (linking and cutting trees, topology trees, and Euler tour trees), are often used as building blocks by many dynamic graph algorithms.

The clustering technique was introduced by Frederickson [13] and is based on partitioning the graph into a suitable collection of connected subgraphs, called *clusters*, such that each update involves only a small number of clusters. Typically, the decomposition defined by the clusters is applied recursively, and the information about the subgraphs is combined with the topology trees described in Section 35.3. A refinement of the clustering technique appears in the idea of *ambivalent data structures* [14], in which edges can belong to multiple groups, only one of which is actually selected, depending on the topology of the given spanning tree.

As an example, we briefly describe the application of clustering to the problem of maintaining a minimum spanning forest [13]. Let *G* = (*V*, *E*) be a graph with a designated spanning tree *S*. Clustering is used to partition the vertex set *V* into subtrees connected in *S*, so that each subtree is adjacent to only a few other subtrees. A topology tree, as described in Section 35.3, is then used to represent a recursive partition of the tree *S*. Finally, a generalization of topology trees, called *2-dimensional topology trees*, is formed from pairs of nodes in the topology tree and allows one to maintain information about the edges in *E* \ *S* [13].

Fully dynamic algorithms based on a single level of clustering typically obtain time bounds of the order of *O*(*m*^{2/3}) (see, for instance, [17, 32]). When the partition can be applied recursively, better *O*(*m*^{1/2}) time bounds can be achieved by using 2-dimensional topology trees (see, for instance, [13, 14]).
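
To give a flavor of the clustering step, the sketch below partitions a spanning tree bottom-up into connected clusters of at least *z* vertices each. This is a hedged simplification of the idea behind Frederickson's partition, not his exact scheme (which imposes additional conditions on cluster degrees); all names are ours.

```python
def cluster_partition(tree_adj, root, z):
    """Partition a rooted spanning tree into connected clusters of size
    >= z (except possibly the cluster containing the root).  With
    bounded vertex degrees, cluster sizes are also O(z).

    tree_adj: dict vertex -> list of tree neighbours.
    Returns:  dict vertex -> cluster id.
    """
    cluster, next_id = {}, [0]

    def seal(vertices):
        for v in vertices:
            cluster[v] = next_id[0]
        next_id[0] += 1

    def dfs(v, parent):
        pending = [v]                      # v plus its unsealed descendants
        for w in tree_adj[v]:
            if w != parent:
                pending.extend(dfs(w, v))  # leftovers stay connected via v
        if len(pending) >= z:              # big enough: seal as one cluster
            seal(pending)
            return []
        return pending                     # pass leftovers up to the parent

    leftover = dfs(root, None)
    if leftover:                           # root's (possibly small) cluster
        seal(leftover)
    return cluster
```

On a path of nine vertices rooted at one end, with *z* = 3, this yields three clusters of three consecutive vertices each; choosing *z* trades the number of clusters against their size.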

**THEOREM 36.1** *(Frederickson [13]) The minimum spanning forest of an undirected graph can be maintained in time O(√m) per update, where m is the current number of edges in the graph.*

We refer the interested reader to [13, 14] for details about Frederickson's algorithm. With the same technique, an *O*(√*m*) time bound can also be obtained for fully dynamic connectivity and 2-edge connectivity [13, 14]. The type of clustering used can be very problem-dependent, however, which makes this technique difficult to use as a black box.

Sparsification is a general technique due to Eppstein *et al.* [10] that can be used as a black box (without having to know the internal details) in order to design and dynamize graph algorithms. It is a divide-and-conquer technique that reduces the dependence on the number of edges in a graph, so that the time bounds for maintaining some property of the graph match the time bounds for computing it on sparse graphs. More precisely, when the technique is applicable, it speeds up a *T*(*n*, *m*) time bound for a graph with *n* vertices and *m* edges to *T*(*n*, *O*(*n*)), i.e., to the time needed if the graph were sparse. For instance, if *T*(*n*, *m*) = *O*(√*m*), we get a better bound of *O*(√*n*). The technique itself is quite simple.

A key concept is the notion of certiﬁcate.

**DEFINITION 36.1** For any graph property *P* and graph *G*, a *certificate* for *G* is a graph *G′* such that *G* has property *P* if and only if *G′* has the property.

Let *G* be a graph with *m* edges and *n* vertices. We partition the edges of *G* into a collection of *O*(*m/n*) sparse subgraphs, i.e., subgraphs with *n* vertices and *O*(*n*) edges. The information relevant for each subgraph can be summarized in a sparse certificate. Certificates are then merged in pairs, producing larger subgraphs which are made sparse by again computing their certificate. The result is a balanced binary tree in which each node is represented by a sparse certificate. Each update involves *O*(log(*m/n*)) graphs with *O*(*n*) edges each, instead of one graph with *m* edges.
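
As an illustrative sketch for the property "connectivity", whose sparse certificate is a spanning forest (at most *n* − 1 edges with the same connected components), the code below splits the edges into groups of at most *n* and merges certificates pairwise up a balanced tree. For simplicity this sketch recomputes certificates from scratch on each query; the actual data structure caches a certificate at every tree node and recomputes only the *O*(log(*m/n*)) nodes on one leaf-to-root path. All names are ours.

```python
def spanning_forest(n, edges):
    """Sparse connectivity certificate: a spanning forest of the edge set."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:               # keep only edges that join components
            parent[ru] = rv
            forest.append((u, v))
    return forest


class SparsificationTree:
    """Hedged sketch of sparsification for connectivity."""

    def __init__(self, n, edges):
        self.n = n
        edges = list(edges)
        self.groups = [edges[i:i + n] for i in range(0, len(edges), n)] or [[]]

    def _root_certificate(self):
        # Merge certificates pairwise up a balanced binary tree.
        level = [spanning_forest(self.n, g) for g in self.groups]
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level), 2):
                merged = level[i] + (level[i + 1] if i + 1 < len(level) else [])
                nxt.append(spanning_forest(self.n, merged))
            level = nxt
        return level[0]

    def insert_edge(self, u, v):
        self.groups[0].append((u, v))   # a real implementation rebalances

    def connected(self, u, v):
        # Answer the query on the sparse root certificate only.
        parent = list(range(self.n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for a, b in self._root_certificate():
            parent[find(a)] = find(b)
        return find(u) == find(v)
```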

There exist two variants of sparsification. The first variant is used in situations where no previous fully dynamic algorithm is known. A static algorithm is used to recompute a sparse certificate in each tree node affected by an edge update. If the certificates can be found in time *O*(*m* + *n*), this variant gives time bounds of *O*(*n*) per update.

In the second variant, certificates are maintained using a dynamic data structure. For this to work, a *stability* property of certificates is needed, to ensure that a small change in the input graph does not lead to a large change in the certificates. We refer the interested reader to [10] for a precise definition of stability. This variant transforms time bounds of the form *O*(*m*^{p}) into *O*(*n*^{p}).

**DEFINITION 36.2** A time bound *T*(*n*) is *well-behaved* if, for some *c* < 1, *T*(*n/*2) < *cT*(*n*). Well-behavedness eliminates strange situations in which a time bound fluctuates wildly with *n*. For instance, all polynomials are well-behaved.

**THEOREM 36.2** *(Eppstein et al. [10]) Let P be a property for which we can find sparse certificates in time f(n, m) for some well-behaved f, and such that we can construct a data structure for testing property P in time g(n, m) which can answer queries in time q(n, m). Then there is a fully dynamic data structure for testing whether a graph has property P, for which edge insertions and deletions can be performed in time O(f(n, O(n))) + g(n, O(n)), and for which the query time is q(n, O(n)).*

**THEOREM 36.3** *(Eppstein et al. [10]) Let P be a property for which stable sparse certificates can be maintained in time f(n, m) per update, where f is well-behaved, and for which there is a data structure for property P with update time g(n, m) and query time q(n, m). Then P can be maintained in time O(f(n, O(n))) + g(n, O(n)) per update, with query time q(n, O(n)).*

Basically, the first version of sparsification (Theorem 36.2) can be used to dynamize static algorithms, in which case we only need to *compute* sparse certificates efficiently, while the second version (Theorem 36.3) can be used to speed up existing fully dynamic algorithms, in which case we need to *maintain* efficiently *stable sparse* certificates.

Sparsification applies to a wide variety of dynamic graph problems, including minimum spanning forests and edge and vertex connectivity. As an example, for the fully dynamic minimum spanning tree problem, it reduces the update time from *O*(√*m*) [13, 14] to *O*(√*n*) [10].

Since sparsification works on top of a given algorithm, we need not know the internal details of this algorithm. Consequently, it can be applied orthogonally to other data structuring techniques: in a large number of situations, clustering and sparsification have been combined to produce efficient dynamic graph algorithms.

**Randomization**

Clustering and sparsiﬁcation allow one to design eﬃcient deterministic algorithms for fully dynamic problems. The last technique we present in this section is due to Henzinger and King [20], and allows one to achieve faster update times for some problems by exploiting the power of randomization.

We now sketch how the randomization technique works, taking the fully dynamic connectivity problem as an example. Let *G* = (*V*, *E*) be a graph to be maintained dynamically, and let *F* be a spanning forest of *G*. We call the edges in *F* *tree edges*, and the edges in *E* \ *F* *non-tree edges*. The algorithm by Henzinger and King is based on the following ingredients.

**Maintaining Spanning Forests**

Trees are maintained using the Euler tour data structure (ET trees) described in Section 35.5: this allows one to perform updates and queries within the forest in logarithmic time.
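
The Euler tour underlying ET trees can be computed with a simple DFS; storing this sequence in a balanced search tree is what turns links and cuts into *O*(log *n*) splits and concatenations of sequences. A minimal sketch (names are ours):

```python
def euler_tour(adj, root):
    """Euler tour of a rooted tree: each vertex is appended on its first
    visit and again after returning from each child, so a tree on n
    vertices yields a sequence of length 2n - 1.  ET trees keep this
    sequence in a balanced search tree rather than a Python list."""
    tour = []

    def dfs(v, parent):
        tour.append(v)                 # first visit to v
        for w in adj[v]:
            if w != parent:
                dfs(w, v)
                tour.append(v)         # back at v after exploring child w
    dfs(root, None)
    return tour
```

For the path 0–1–2 rooted at 0, the tour is [0, 1, 2, 1, 0].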

**Random Sampling**

Another key idea is the following: when *e* is deleted from a tree *T*, use random sampling among the non-tree edges incident to *T* in order to quickly find a replacement edge for *e*, if any.
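
A minimal sketch of this sampling step, assuming we already know the vertex set of one side of the cut left by the deleted edge and a list of non-tree edges (the function name and the flat edge-list representation are our simplifications):

```python
import random

def sample_replacement(side_vertices, nontree_edges, num_samples, rng=random):
    """After a tree edge is deleted, T splits into two sides.  Sample
    non-tree edges incident to one side and return the first edge with
    exactly one endpoint inside that side -- such an edge crosses the
    cut and reconnects the two halves.  Returns None if no sampled edge
    crosses, in which case the algorithm falls back to scanning all
    incident non-tree edges explicitly."""
    side = set(side_vertices)
    candidates = [e for e in nontree_edges
                  if e[0] in side or e[1] in side]   # edges incident to T
    if not candidates:
        return None
    for _ in range(num_samples):
        u, v = rng.choice(candidates)
        if (u in side) != (v in side):               # crosses the cut
            return (u, v)
    return None
```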

**Graph Decomposition**

The last key idea is to combine randomization with a suitable graph decomposition. We maintain an edge decomposition of the current graph *G* into *O*(log *n*) edge-disjoint subgraphs *G_i* = (*V*, *E_i*). These subgraphs are hierarchically ordered. The lower levels contain tightly-connected portions of *G* (i.e., dense edge cuts), while the higher levels contain loosely-connected portions of *G* (i.e., sparse cuts). For each level *i*, a spanning forest of the graph defined by all the edges in levels *i* or below is also maintained.

Note that the hard operation in this problem is the deletion of a tree edge: indeed, a spanning forest is easily maintained with the help of the linking and cutting trees described in Section 35.2 throughout edge insertions, and deleting a non-tree edge does not change the forest.

The goal is an update time of *O*(log³ *n*): after an edge deletion, in the quest for a replacement edge, we can afford to sample *O*(log² *n*) edges. However, if the candidate set of edge *e* is only a small fraction of all non-tree edges adjacent to *T*, a sample of this size is unlikely to contain a replacement edge for *e*. If we find no candidate among the sampled edges, we must check explicitly all the non-tree edges adjacent to *T*.

After random sampling has failed to produce a replacement edge, we need to perform this check explicitly; otherwise we would not be guaranteed to give correct answers to queries. Since there might be many edges adjacent to *T*, this explicit check can be expensive, so it should be a low-probability event for the randomized algorithm. Pathological update sequences can nonetheless arise: deleting all edges in a relatively small candidate set, reinserting them, deleting them again, and so on will almost surely produce many of these unfortunate events.

The graph decomposition is used to prevent the undesirable behavior described above. If a spanning forest edge *e* is deleted from a tree at some level *i*, random sampling is used to quickly find a replacement for *e* at that level. If random sampling succeeds, the tree is reconnected at level *i*. If random sampling fails, the edges that can replace *e* at level *i* form, with high probability, a sparse cut. These edges are moved to level *i* + 1 and the same procedure is applied recursively at level *i* + 1.
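
The recursive structure of a deletion can be outlined schematically as follows; all helper callbacks (`sample_replacement_at`, `scan_level`, `reconnect`, `move_to_next_level`) are hypothetical placeholders for the actual data-structure operations, not part of the published algorithm's interface:

```python
def handle_deletion(e, i, num_levels, sample_replacement_at, scan_level,
                    reconnect, move_to_next_level):
    """Schematic outline of tree-edge deletion in the Henzinger-King
    scheme: try sampling at level i; on failure, scan explicitly, move
    the (with high probability sparse) candidate cut to level i + 1,
    and recurse there."""
    replacement = sample_replacement_at(e, i)
    if replacement is not None:
        reconnect(replacement, i)          # sampling succeeded at level i
        return replacement
    candidates = scan_level(e, i)          # explicit check of all candidates
    if candidates and i + 1 < num_levels:
        move_to_next_level(candidates, i)  # promote the sparse cut
        return handle_deletion(e, i + 1, num_levels, sample_replacement_at,
                               scan_level, reconnect, move_to_next_level)
    if candidates:                         # top level: reconnect directly
        reconnect(candidates[0], i)
        return candidates[0]
    return None                            # e was a bridge: no replacement
```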

**THEOREM 36.4** *(Henzinger and King [20]) Let G be a graph with m₀ edges and n vertices subject to edge deletions only. A spanning forest of G can be maintained in O(log³ n) expected amortized time per deletion, if there are at least Ω(m₀) deletions. The time per query is O(log n).*