Doing the problem Minimum spanning tree for each edge gave me a refresher on minimum spanning trees and forced me to learn a new technique, binary lifting.
Here's the problem.
A connected, undirected, weighted graph without self-loops or multiple edges is given. The graph contains $N$ vertices and $M$ edges.
For each edge $(u, v)$, find the minimal possible weight of a spanning tree that contains the edge $(u, v)$.
The weight of a spanning tree is the sum of the weights of all edges included in the spanning tree.
It's a pretty simple problem statement, and at a high level it's not even that hard to solve. I figured out the general approach rather quickly. First, let $w: V \times V \rightarrow \mathbb{R}$ be the weight of an edge.
- Find any minimum spanning tree, $T = (V,E^\prime)$.
- Given that minimum spanning tree, root the tree.
- Now, given any edge $(u,v)$, if $(u,v) \in E^\prime$, return the weight of the minimum spanning tree. If not, add that edge to the tree. We must necessarily create a cycle since the spanning tree already connects $u$ and $v$ by a unique path.
- Find the cycle by finding the least common ancestor of $u$ and $v$.
- Remove the edge of highest weight along the cycle that is not $(u,v)$. The weight of the minimum spanning tree containing $(u,v)$ is obtained by adding $w(u,v)$ to the original weight and subtracting the weight of the edge that we removed.
Unfortunately, implementing this solution is rather tricky. I already knew how to find the minimum spanning tree. Finding the least common ancestor efficiently was not so easy, however.
Minimum Spanning Tree
Consider an undirected weighted graph $G = (V,E,w)$. A minimum spanning tree is a subgraph $MST = (V, E^\prime, w)$, where $E^\prime \subseteq E$ is chosen so that the subgraph is connected and acyclic, that is, a tree spanning all of $V$, and $\sum_{(u,v) \in E^\prime,~u < v} w(u,v)$ is minimized.
There are two standard algorithms to solve this problem, Prim's algorithm and Kruskal's algorithm. Apparently, Prim's is faster on dense graphs, whereas Kruskal's is faster on sparse graphs. Also, to get the advantages of Prim's algorithm, you'll need a priority queue that supports a decrease-key operation. I always use Prim's because that's what the USACO taught me.
Prim's algorithm is actually pretty easy to memorize, too, because it's so similar to Dijkstra's algorithm. In fact, I use the exact same implementation of the binary heap. Let's go through the steps.
- Initialization: Set the parent of each vertex to be a sentinel, say $-1$ if the vertices are labeled from $0$ to $N - 1$. Initialize $E^\prime = \emptyset$. These are the edges of the minimum spanning tree consisting of just vertex $0$. Set vertex $0$ as visited and push all its neighbors onto the priority queue with the value being $w(0,v)$ for $(0,v) \in E$. Set the parent of each such $v$ to be $0$.
- Maintenance: At the start of each iteration, we have a subgraph $G_k$, which is a minimum spanning tree of some vertices $v_0,v_1,\ldots,v_{k-1}$. Now, we're going to add vertices one by one as long as the priority queue is not empty.
- Pop a vertex $v_k$ from the priority queue. This is the vertex that is closest to the current minimum spanning tree. Mark $v_k$ as visited.
- Let $P(v_k)$ be the parent of $v_k$. Add the edge $\left(P(v_k), v_k\right)$ to our minimum spanning tree.
For each neighbor $u$ of $v_k$, we have several cases.
- If $u$ has never been added to the priority queue, push $u$ with value $w(v_k, u)$. Set $u$'s parent to be $v_k$.
- If $u$ is in the priority queue already, and $w(v_k, u)$ is smaller than the current distance of $u$ from the minimum spanning tree, decrease the value of key $u$ in the priority queue to $w(v_k, u)$. Set $u$'s parent to be $v_k$.
- If $u$ was already visited or $w(v_k,u)$ is bigger than the current distance of $u$ from the minimum spanning tree, do nothing.
We need to prove that this actually creates a minimum spanning tree $G_{k + 1}$. Let $G_{k + 1}^\prime$ be a minimum spanning tree of vertices $v_0, v_1, v_2,\ldots,v_k$. Let $W(G)$ denote the sum of the edge weights of graph $G$.
Suppose for a contradiction that $G_{k + 1}$ is not a minimum spanning tree. $G_{k + 1}$ contains some edge $(u, v_k)$, and $G^\prime_{k+1}$ contains an edge $(u^\prime, v_k)$. Let $G_k^\prime$ denote $G^\prime_{k+1}$ with the vertex $v_k$ and the edge $(u^\prime, v_k)$ removed. Now, we have that $W(G_{k + 1}) = W(G_k) + w(u,v_k)$ and $W(G^\prime_{k + 1}) = W(G_k^\prime) + w(u^\prime,v_k)$. Clearly, $u, u^\prime \in \{v_0,v_1,\ldots,v_{k - 1}\}$. By the induction hypothesis, $G_k$ is a minimum spanning tree of $v_0, v_1,\ldots,v_{k-1}$, so $W(G_k) \leq W(G_k^\prime)$, and by construction $w(u,v_k) \leq w(v,v_k)$ for any $v \in \{v_0,v_1,\ldots,v_{k - 1}\}$ adjacent to $v_k$, so in particular $w(u,v_k) \leq w(u^\prime,v_k)$. Thus, we cannot have that $W(G_{k+1}) > W(G_{k+1}^\prime)$. $G_{k+1}$ is a minimum spanning tree of vertices $v_0,v_1,\ldots,v_k$, and this greedy approach works.
- Termination: There's nothing to be done here. Just return $G_{N}$.
Here's an example of this algorithm in action. Vertices and edges in the minimum spanning tree are colored red. Possible candidates for the next vertex are colored in blue. The most recently added vertex is bolded.
Here's the code. It takes an adjacency list and returns the subset of the adjacency list that is a minimum spanning tree, which is not necessarily unique.
vector<unordered_map<int, int>> findMinimumSpanningTree(const vector<unordered_map<int, int>> &adjacencyList) {
  int N = adjacencyList.size();
  vector<unordered_map<int, int>> minimumSpanningTree(N);
  vector<int> visited(N, false);
  vector<int> parent(N, -1);
  phillypham::priority_queue pq(N); // keep track of closest vertex
  pq.push(0, 0);
  while (!pq.empty()) {
    // find closest vertex to current tree
    int currentVertex = pq.top();
    int minDistance = pq.at(currentVertex);
    pq.pop();
    visited[currentVertex] = true;
    // add edge to tree if it has a parent
    if (parent[currentVertex] != -1) {
      minimumSpanningTree[parent[currentVertex]][currentVertex] = minDistance;
      minimumSpanningTree[currentVertex][parent[currentVertex]] = minDistance;
    }
    // relax neighbors step
    for (pair<int, int> nextVertex : adjacencyList[currentVertex]) { // loop through edges
      if (!visited[nextVertex.first]) { // ignore vertices that were already visited
        if (!pq.count(nextVertex.first)) { // add vertex to priority queue for first time
          parent[nextVertex.first] = currentVertex;
          pq.push(nextVertex.first, nextVertex.second); // distance is edge weight
        } else if (pq.at(nextVertex.first) > nextVertex.second) {
          parent[nextVertex.first] = currentVertex;
          pq.decrease_key(nextVertex.first, nextVertex.second);
        }
      }
    }
  }
  return minimumSpanningTree;
}
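The code relies on phillypham::priority_queue, an indexed priority queue keyed by vertex. As mentioned above, mine is a binary heap, but its implementation isn't shown in this post. Here's a minimal sketch of one way to provide the same interface (push, top, at, count, decrease_key, pop, and empty), backed by an ordered set for simplicity; treat it as an illustration of the interface rather than the class from my actual submission.
namespace phillypham {
  // Interface sketch for an indexed min-priority queue; requires <set>,
  // <unordered_map>, and <utility>. A binary heap with decrease-key is the
  // faster, more typical choice.
  class priority_queue {
    set<pair<int, int>> heap;        // (value, key) pairs ordered by value
    unordered_map<int, int> values;  // key -> current value
  public:
    explicit priority_queue(int) {}  // capacity hint, unused in this sketch
    bool empty() const { return heap.empty(); }
    int count(int key) const { return values.count(key); }
    int at(int key) const { return values.at(key); }
    int top() const { return heap.begin()->second; }  // key with the smallest value
    void push(int key, int value) { values[key] = value; heap.emplace(value, key); }
    void decrease_key(int key, int value) {
      heap.erase(make_pair(values[key], key));
      values[key] = value;
      heap.emplace(value, key);
    }
    void pop() { values.erase(heap.begin()->second); heap.erase(heap.begin()); }
  };
}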
Least Common Ancestor
The trickier part is to find the cycle and the max edge along the cycle whenever we add an edge. Suppose we're trying to add edge $(u,v)$ not in $E^\prime$. We can imagine the cycle starting and stopping at the least common ancestor of $u$ and $v$. The naive way to find this is to use two pointers, one starting at $u$ and the other at $v$. Whichever pointer points to a vertex of greater depth, replace that pointer with one to its parent until both pointers point to the same vertex.
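For concreteness, here's a minimal sketch of that naive walk (a hypothetical helper, not something in my submission), assuming parentDepth[i] stores the pair (parent, depth) as produced by the rootTree function later in this post.
// Naive LCA: repeatedly move whichever pointer is deeper up to its parent.
// Runs in O(depth) per query.
int naiveLeastCommonAncestor(int u, int v, const vector<pair<int, int>> &parentDepth) {
  while (u != v) {
    if (parentDepth[u].second >= parentDepth[v].second) {
      u = parentDepth[u].first;   // u is at least as deep; move it up
    } else {
      v = parentDepth[v].first;   // v is deeper; move it up
    }
  }
  return u;
}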
This is too slow. To fix this, we use a technique called binary lifting, which precomputes ancestors at power-of-two distances. Imagine that we have vertices $0,1,\ldots,N-1$. We denote the $2^j$th ancestor of vertex $i$ by $P_{i,j}$. Storing all $P_{i,j}$ takes $O(N\log N)$ storage since the height of the tree is at most $N$. Then, if we were trying to find the $k$th ancestor, we could use the binary representation and rewrite $k = 2^{k_1} + 2^{k_2} + \cdots + 2^{k_m}$, where $k_1 < k_2 < \cdots < k_m$. In this way, from $v$, we could first find the $2^{k_m}$th ancestor $a_1$. Then, we'd find the $2^{k_{m-1}}$th ancestor of $a_1$. Call it $a_2$. Continuing in this manner, $a_m$ would be the $k$th ancestor of $v$. Since $m \sim O(\log N)$ and each lookup takes constant time, computing the $k$th ancestor is an $O(\log N)$ operation.
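As a quick illustration (again a hypothetical helper, not part of my solution), looking up the $k$th ancestor with such a table could look like the following, assuming ancestor[i][j] stores $P_{i,j}$ and is padded with $-1$ where no such ancestor exists.
// Jump up by powers of two following the set bits of k. O(log N) per query.
int kthAncestor(int v, int k, const vector<vector<int>> &ancestor) {
  for (int j = 0; k > 0 && v != -1; ++j, k >>= 1) {
    if (k & 1) v = ancestor[v][j];  // take a 2^j step up the tree
  }
  return v;  // -1 if the k-th ancestor does not exist
}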
We can also compute all $P_{i,j}$ in $O(N\log N)$ time. We initially know $P_{i,0}$ since that's just the parent of vertex $i$. Then, for each $j = 0,1,\ldots,l - 1$, where $l = \lfloor \log_2(H) \rfloor$ and $H$ is the height of the tree, we have that \begin{equation} P_{i,j+1} = P_{P_{i,j},j} \end{equation} whenever the depth of $i$ is at least $2^{j + 1}$, since $P_{i,j}$ is the $2^j$th ancestor of vertex $i$, and the $2^j$th ancestor of vertex $P_{i,j}$ is then the $2^j + 2^j = 2^{j+1}$th ancestor of vertex $i$.
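A minimal sketch of that double loop, under the same hypothetical fixed-width, $-1$-padded table as above (my actual memoizeAncestorAndMaxEdge below uses jagged arrays instead), might look like this.
// Build ancestor[i][j] = 2^j-th ancestor of i, or -1 if it doesn't exist.
// parent[i] is the parent of i, with parent[root] == -1. O(N log N) time and space.
vector<vector<int>> buildAncestorTable(const vector<int> &parent, int logHeight) {
  int N = parent.size();
  vector<vector<int>> ancestor(N, vector<int>(logHeight + 1, -1));
  for (int i = 0; i < N; ++i) ancestor[i][0] = parent[i];       // 2^0-th ancestor
  for (int j = 0; j < logHeight; ++j) {
    for (int i = 0; i < N; ++i) {
      if (ancestor[i][j] != -1) {
        ancestor[i][j + 1] = ancestor[ancestor[i][j]][j];       // 2^j + 2^j = 2^(j+1)
      }
    }
  }
  return ancestor;
}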
Thus, if we want to find the least common ancestor of $u$ and $v$, we can use the following recursive algorithm, in which we have three cases (a sketch in code follows the list).
- Base case: If $u = v$, the least common ancestor is $u$. If $P_{u,0} = P_{v,0}$, the least common ancestor is $P_{u,0}$.
- Case 1 is that the depths of $u$ and $v$ are different. Assume without loss of generality that the depth of $u$ is greater than the depth of $v$. Replace $u$ with $P_{u,j}$, where $j$ is chosen to be as large as possible such that the depth of $P_{u,j}$ is at least the depth of $v$, and recurse.
- Case 2 is that the depths of $u$ and $v$ are the same. Replace $u$ with $P_{u,j}$ and $v$ with $P_{v,j}$, where $j$ is the largest integer such that $P_{u,j}$ and $P_{v,j}$ are distinct, and recurse.
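Here's a sketch of that recursion as a standalone least-common-ancestor function. It's a hypothetical helper: my submission never needs the LCA by itself, since the findMaxEdge function below folds this recursion together with the max-edge computation. It assumes parentDepth[i] = (parent, depth) and an ancestor table as described above.
int leastCommonAncestor(int u, int v,
                        const vector<pair<int, int>> &parentDepth,
                        const vector<vector<int>> &ancestor) {
  if (u == v) return u;                                            // base case
  if (parentDepth[u].first == parentDepth[v].first) return parentDepth[u].first;
  if (parentDepth[u].second < parentDepth[v].second) swap(u, v);   // depth(u) >= depth(v)
  if (parentDepth[u].second != parentDepth[v].second) {
    // Case 1: lift u by the largest power of two that keeps it at least as deep as v.
    int j = floor(log2(parentDepth[u].second - parentDepth[v].second));
    return leastCommonAncestor(ancestor[u][j], v, parentDepth, ancestor);
  }
  // Case 2: lift both by the largest power of two at which their ancestors still differ.
  int j = ancestor[u].size() - 1;
  while (j > 0 && ancestor[u][j] == ancestor[v][j]) --j;
  return leastCommonAncestor(ancestor[u][j], ancestor[v][j], parentDepth, ancestor);
}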
I'm sure that you're thinking that this is all very interesting, but how about that edge of max weight in the cycle? This paradigm can be used to find this, too.
Suppose we have some function that acts on the edges, $g : E^\prime \rightarrow \mathbb{R}$. We have some combiner function $C$ with which we can extend $g$ to $\tilde{g} : V \times V \rightarrow \mathbb{R}$ in the following manner. Consider any two vertices $u$ and $v$. If $(u,v) \in E^\prime$, then $\tilde{g}(u,v) = g(u,v)$. If there is not an edge between $u$ and $v$, there is an intermediate vertex $t$ on the path between them. Then, we define \begin{equation} \tilde{g}(u,v) = C\left(\tilde{g}(u, t), \tilde{g}(t, v)\right). \end{equation}
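For this problem, the instantiation is the heaviest edge on the tree path between two vertices: \begin{equation} g(u,v) = w(u,v) \text{ for } (u,v) \in E^\prime, \qquad C(a,b) = \max(a,b), \qquad \tilde{g}(u,v) = \max\left\{w(e) : e \text{ on the path from } u \text{ to } v \text{ in } T\right\}. \end{equation}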
When we add an edge $(u,v) \not\in E^\prime$, we want to add $w(u,v)$ and remove an edge of weight $\tilde{g}(u,v)$. Such a function can be computed in parallel with finding the least common ancestor. Let $a$ be the least common ancestor. As we walk up the tree from $u$ to $a$, we'll compute $\tilde{g}(u,a)$. Similarly, we'll compute $\tilde{g}(a,v)$ as we walk up the tree from $v$ to $a$. This will give us $\tilde{g}(u,v)$ after applying $C$.
Let's walk through the steps with code now. Recall that our minimum spanning tree is an adjacency list of type vector<unordered_map<int, int>>.
First, we'll root the tree and compute the parent and depth of each vertex.
vector<pair<int, int>> rootTree(int root, const vector<unordered_map<int, int>> &adjacencyList) {
  int N = adjacencyList.size();
  vector<pair<int, int>> parentDepth(N);
  stack<int, vector<int>> s;
  s.push(root);
  parentDepth[root] = make_pair(-1, 0);
  while (!s.empty()) {
    int currentVertex = s.top(); s.pop();
    for (pair<int, int> nextVertex : adjacencyList[currentVertex]) {
      if (nextVertex.first != parentDepth[currentVertex].first) {
        parentDepth[nextVertex.first] = make_pair(currentVertex, parentDepth[currentVertex].second + 1);
        s.push(nextVertex.first);
      }
    }
  }
  return parentDepth;
}
Next, we compute $P_{i,j}$ and $\tilde{g}(i,P_{i,j})$. $P_{i,j}$ corresponds to ancestor[i][j], and $\tilde{g}(i,P_{i,j})$ corresponds to maxEdge[i][j].
void memoizeAncestorAndMaxEdge(vector<vector<int>> &ancestor,
                               vector<vector<int>> &maxEdge,
                               vector<unordered_map<int, int>> &minimumSpanningTree,
                               const vector<pair<int, int>> &parentDepth) {
  int N = minimumSpanningTree.size();
  int maxLogDepth = 0;
  for (int i = 0; i < N; ++i) {
    int logDepth = parentDepth[i].second == 0 ? 0 : floor(log2(parentDepth[i].second) + 1);
    if (maxLogDepth < logDepth) maxLogDepth = logDepth;
    ancestor.push_back(vector<int>(logDepth));
    maxEdge.push_back(vector<int>(logDepth));
    if (logDepth > 0) {
      ancestor.back().front() = parentDepth[i].first;
      maxEdge.back().front() = minimumSpanningTree[parentDepth[i].first][i];
    }
  }
  for (int j = 1; j < maxLogDepth; ++j) {
    for (int i = 0; i < N; ++i) {
      int logDepth = parentDepth[i].second == 0 ? 0 : floor(log2(parentDepth[i].second) + 1);
      // go up 2^(j-1) + 2^(j-1) = 2^j levels
      if (j < logDepth) {
        ancestor[i][j] = ancestor[ancestor[i][j - 1]][j - 1];
        maxEdge[i][j] = max(maxEdge[i][j - 1], maxEdge[ancestor[i][j - 1]][j - 1]);
      }
    }
  }
}
Now, we can recursively compute $\tilde{g}(u,v)$ whenever we want to add $(u,v) \not\in E^\prime$. Note that one of the base cases is handled in the else part of the if statement.
int findMaxEdge(int u, int v,
                const vector<pair<int, int>> &parentDepth,
                const vector<vector<int>> &ancestor,
                const vector<vector<int>> &maxEdge) {
  if (u == v) return 0;
  if (parentDepth[u].second < parentDepth[v].second) swap(u, v);
  // now depth(u) >= depth(v)
  if (parentDepth[u].second != parentDepth[v].second) {
    int depthDelta = parentDepth[u].second - parentDepth[v].second;
    int logDelta = floor(log2(depthDelta));
    return max(maxEdge[u][logDelta],
               findMaxEdge(ancestor[u][logDelta], v, parentDepth, ancestor, maxEdge));
  } else { // depth(u) == depth(v)
    int L = 0;
    int U = parentDepth[u].second == 0 ? 0 : floor(log2(parentDepth[u].second) + 1);
    int mid = L + (U - L)/2;
    while (L < U) { // find smallest index such that ancestor[u][j] == ancestor[v][j]
      if (ancestor[u][mid] == ancestor[v][mid]) {
        U = mid;
      } else { // ancestor[u][mid] != ancestor[v][mid]
        L = mid + 1;
      }
      mid = L + (U - L)/2;
    }
    if (mid == 0) return max(maxEdge[u][mid], maxEdge[v][mid]);
    --mid; // recursively run on the shallowest distinct ancestors
    int maxEdgeWeight = max(maxEdge[u][mid], maxEdge[v][mid]);
    return max(maxEdgeWeight,
               findMaxEdge(ancestor[u][mid], ancestor[v][mid], parentDepth, ancestor, maxEdge));
  }
}
Code
Here's the main loop, the code that glues all these functions together.
long long computeTreeWeightHelper(int parent, int vertex,
                                  const vector<unordered_map<int, int>> &adjacencyList) {
  long long weight = 0LL;
  for (pair<int, int> nextVertex : adjacencyList[vertex]) {
    if (nextVertex.first != parent) {
      weight += nextVertex.second + computeTreeWeightHelper(vertex, nextVertex.first, adjacencyList);
    }
  }
  return weight;
}

long long computeTreeWeight(const vector<unordered_map<int, int>> &adjacencyList) {
  return computeTreeWeightHelper(-1, 0, adjacencyList);
}
int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int N, M; cin >> N >> M; // vertices and edges
  vector<tuple<int, int, int>> edgeList; edgeList.reserve(M);
  vector<unordered_map<int, int>> adjacencyList(N);
  for (int m = 0; m < M; ++m) {
    int u, v, w; cin >> u >> v >> w;
    --u; --v;
    if (adjacencyList[u].count(v)) {
      adjacencyList[u][v] = min(w, adjacencyList[u][v]);
    } else {
      adjacencyList[u][v] = w;
    }
    if (adjacencyList[v].count(u)) {
      adjacencyList[v][u] = min(w, adjacencyList[v][u]);
    } else {
      adjacencyList[v][u] = w;
    }
    edgeList.emplace_back(u, v, w);
  }
  vector<unordered_map<int, int>> minimumSpanningTree = findMinimumSpanningTree(adjacencyList);
  // do lots of pre-processing
  long long mstWeight = computeTreeWeight(minimumSpanningTree);
  vector<pair<int, int>> parentDepth = rootTree(0, minimumSpanningTree);
  // for special pair of vertices find least common ancestor and heaviest edge between vertices
  vector<vector<int>> ancestor; ancestor.reserve(N); // ancestor[u][j] is 2^j ancestor of u
  vector<vector<int>> maxEdge; maxEdge.reserve(N); // maxEdge[u][j] is heaviest edge between u and ancestor[u][j]
  memoizeAncestorAndMaxEdge(ancestor, maxEdge, minimumSpanningTree, parentDepth);
  // now iterate through given edges and include each and remove heaviest edge in cycle
  for (tuple<int, int, int> edge : edgeList) {
    if (minimumSpanningTree[get<0>(edge)].count(get<1>(edge))) {
      if (minimumSpanningTree[get<0>(edge)][get<1>(edge)] == get<2>(edge)) {
        cout << mstWeight << '\n';
      } else {
        cout << mstWeight + get<2>(edge) - minimumSpanningTree[get<0>(edge)][get<1>(edge)] << '\n';
      }
    } else {
      // now for each edge take diffs with the minimum spanning tree
      int maxEdgeWeight = findMaxEdge(get<0>(edge), get<1>(edge),
                                      parentDepth, ancestor, maxEdge);
      cout << mstWeight + get<2>(edge) - maxEdgeWeight << '\n';
    }
  }
  cout << flush;
  return 0;
}
For what it's worth, here's my complete submission, 18541620.