After solving Kay and Snowflake, I wanted to write this post to serve as notes to my future self. There are many approaches to this problem. The official solution describes three different approaches; mine differs from all of them and has better complexity than all but the last. My solution also contains a lot of code that I think will be reusable in the future.
This problem involves finding the centroid of a tree, which is a node such that when it is removed, each of the new trees produced has at most half the number of nodes of the original tree. We need to be able to do this for any arbitrary subtree of the tree. Since there are as many as 300,000 subtrees, we need to be a bit clever about how we find the centroid.
First, we can prove that the centroid of any tree is the node with the smallest subtree whose size is at least half that of the whole tree. That is, if $T(i)$ is the subtree of node $i$, and $S(i)$ is its size, then the centroid of $T(i)$ is \begin{equation} C(i) = \underset{k \in \{j \in T(i) : 2S(j) - S(i) \geq 0\}}{\arg\min} \left(S(k) - \frac{S(i)}{2}\right). \end{equation}
To see this, first note that the ancestors and descendants of $k = C(i)$ will be in separate trees. Since $S(k) \geq S(i)/2$ implies $S(i) - S(k) \leq S(i)/2$, all the ancestors will be part of a new tree that is sufficiently small. Now, suppose for a contradiction that some subtree of a child of $k$ has more than $S(i)/2$ nodes. That subtree is also a subtree of $T(i)$ with size at least $S(i)/2$, yet it is strictly smaller than $T(k)$, which contradicts the minimality of $S(k)$.
Now, that proof wasn't so hard, but actually finding $C(i)$ for each $i$ is tricky. The naive way would be to do an exhaustive search in $T(i)$, but that would give us an $O(NQ)$ solution.
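To make that baseline concrete, here is a deliberately simple sketch of the naive approach (my own illustration, not code from the problem or its editorial; the names subtreeSizeOf and naiveCentroid are made up). It checks the centroid condition directly for every node of the subtree, which is fine for tiny inputs but far too slow for 300,000 queries.

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Size of the subtree rooted at u, computed recursively.
int subtreeSizeOf(int u, const vector<vector<int>> &children) {
  int size = 1;
  for (int v : children[u]) size += subtreeSizeOf(v, children);
  return size;
}

// For each node c in the subtree rooted at r, removing c leaves the
// child subtrees of c and the remainder of r's subtree; c is a
// centroid if every such component has at most half of the nodes.
int naiveCentroid(int r, const vector<vector<int>> &children) {
  int total = subtreeSizeOf(r, children);
  vector<int> nodes, st = {r};
  while (!st.empty()) { // Collect all nodes in r's subtree.
    int u = st.back(); st.pop_back();
    nodes.push_back(u);
    for (int v : children[u]) st.push_back(v);
  }
  for (int c : nodes) {
    int maxComponent = total - subtreeSizeOf(c, children);
    for (int v : children[c])
      maxComponent = max(maxComponent, subtreeSizeOf(v, children));
    if (2 * maxComponent <= total) return c;
  }
  return -1; // Unreachable: every tree has a centroid.
}
```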
The first observation that we can make to simplify our search is that if we start at $i$ and always follow the child with the largest subtree, the centroid is on that path. Why? At each node, at most one child subtree can have at least $S(i)/2$ nodes; otherwise the tree would have more than $S(i)$ nodes. Since the centroid's subtree has at least $S(i)/2$ nodes, it must lie on this path of largest subtrees.
So to find the centroid, we can start at the leaf of this path and go up until we hit the centroid. That still doesn't sound great, though, since the path might be very long, making this step $O(N)$. The key thing to note is that the subtree size is strictly increasing along the path, so we can do a binary search, reducing this step to $O(\log N)$. With a technique called binary lifting, described in Least Common Ancestor, we can further reduce this to $O\left(\log\left(\log N\right)\right)$, which is probably not actually necessary, but I did it anyway.
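For reference, the textbook binary-lifting table looks like the following (a minimal sketch with made-up names buildUp and kthAncestor; the actual solution later in this post stores a ragged ancestors array instead of a fixed-width table, and binary searches it rather than answering k-th ancestor queries):

```cpp
#include <vector>
using namespace std;

const int LOG = 20; // Enough for N up to 2^20 nodes.

// up[i][j] is the (2^j)-th ancestor of i, or -1 if it doesn't exist.
// Built in O(N log N) from the parent array (parent of the root is -1),
// using the identity: the (2^j)-th ancestor is the (2^(j-1))-th
// ancestor of the (2^(j-1))-th ancestor.
vector<vector<int>> buildUp(const vector<int> &parent) {
  int N = parent.size();
  vector<vector<int>> up(N, vector<int>(LOG, -1));
  for (int i = 0; i < N; ++i) up[i][0] = parent[i];
  for (int j = 1; j < LOG; ++j)
    for (int i = 0; i < N; ++i)
      if (up[i][j - 1] != -1) up[i][j] = up[up[i][j - 1]][j - 1];
  return up;
}

// Answers a k-th ancestor query in O(log N) by decomposing k into
// powers of 2; returns -1 if the ancestor doesn't exist.
int kthAncestor(int i, int k, const vector<vector<int>> &up) {
  for (int j = 0; j < LOG && i != -1; ++j)
    if (k & (1 << j)) i = up[i][j];
  return i;
}
```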
Now, we need to know several statistics for each node: the leaf at the end of its path of largest subtrees, and its subtree size. Subtree size can be calculated recursively with depth-first search, since the subtree size of $i$ is the sum of the sizes of its child subtrees plus 1 for $i$ itself. Thus, we do a post-order traversal to calculate subtree size and the special leaf. We also need the depth of each node to precompute the ancestors for binary lifting; depth is computed with a pre-order traversal. In the title picture, the numbers indicate the order of an in-order traversal, the letters correspond to a pre-order traversal, and the roman numerals show the order of a post-order traversal. Since recursion on large trees leads to stack overflow errors, and you usually can't tell the online judge to increase the stack size, it's always better to use an explicit stack. I quite like my method of doing both the pre-order and post-order traversals with one stack.
/**
 * Does a depth-first search to calculate various statistics.
 * We are given as input a rooted tree in the form of each node's
 * children and parent. The statistics returned:
 *   ancestors: Each node's ancestors that are a power of 2
 *     edges away.
 *   maxPathLeaf: If we continually follow the child that has the
 *     maximum subtree, we'll end up at this leaf.
 *   subtreeSize: The size of the node's subtree including itself.
 */
void calculateStatistics(const vector<vector<int>> &children,
                         const vector<int> &parent,
                         vector<vector<int>> *ancestorsPtr,
                         vector<int> *maxPathLeafPtr,
                         vector<int> *subtreeSizePtr) {
  int N = parent.size();
  if (N == 0) return;
  // depth also serves to keep track of whether we've visited yet;
  // -1 indicates that we have not visited.
  vector<int> depth(N, -1);
  vector<int> height(N, 0);
  vector<int> &maxPathLeaf = *maxPathLeafPtr;
  vector<int> &subtreeSize = *subtreeSizePtr;
  stack<int> s; s.push(0); // DFS
  while (!s.empty()) {
    int i = s.top(); s.pop();
    if (depth[i] == -1) { // pre-order
      s.push(i); // Put it back in the stack so we visit it again.
      depth[i] = parent[i] == -1 ? 0 : depth[parent[i]] + 1;
      for (int j : children[i]) s.push(j);
    } else { // post-order
      int maxSubtreeSize = INT_MIN;
      int maxSubtreeRoot = -1;
      for (int j : children[i]) {
        height[i] = max(height[j] + 1, height[i]);
        subtreeSize[i] += subtreeSize[j];
        if (maxSubtreeSize < subtreeSize[j]) {
          maxSubtreeSize = subtreeSize[j];
          maxSubtreeRoot = j;
        }
      }
      maxPathLeaf[i] = maxSubtreeRoot == -1 ?
        i : maxPathLeaf[maxSubtreeRoot];
    }
  }
  // Use binary lifting to calculate a subset of ancestors.
  vector<vector<int>> &ancestors = *ancestorsPtr;
  for (int i = 0; i < N; ++i)
    if (parent[i] != -1) ancestors[i].push_back(parent[i]);
  for (int k = 1; (1 << k) <= height.front(); ++k) {
    for (int i = 0; i < N; ++i) {
      int j = ancestors[i].size();
      if ((1 << j) <= depth[i]) {
        --j;
        ancestors[i].push_back(ancestors[ancestors[i][j]][j]);
      }
    }
  }
}
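The one-stack pre-order/post-order trick is worth isolating. Here's a stripped-down sketch (the helper dfsSizes is my own name, and it computes only depth and subtree size) that shows the core idea: the first time a node is popped we do the pre-order work and push it back, so its second pop happens only after all of its children have finished, i.e. in post-order.

```cpp
#include <stack>
#include <utility>
#include <vector>
using namespace std;

// Computes depth and subtree size with a single explicit stack.
// A node is pushed once; on its first pop (depth still -1) we do the
// pre-order work and push it back, so its second pop comes after all
// of its children have been fully processed.
void dfsSizes(const vector<vector<int>> &children,
              vector<int> *depthPtr, vector<int> *subtreeSizePtr) {
  int N = children.size();
  vector<int> &depth = *depthPtr;
  vector<int> &subtreeSize = *subtreeSizePtr;
  depth.assign(N, -1); // -1 marks "not yet visited".
  subtreeSize.assign(N, 1);
  stack<pair<int, int>> s; // (node, depth of its parent)
  s.push(make_pair(0, -1));
  while (!s.empty()) {
    int i = s.top().first, parentDepth = s.top().second;
    s.pop();
    if (depth[i] == -1) { // First pop: pre-order.
      depth[i] = parentDepth + 1;
      s.push(make_pair(i, parentDepth)); // Revisit for post-order.
      for (int j : children[i]) s.push(make_pair(j, depth[i]));
    } else { // Second pop: post-order.
      for (int j : children[i]) subtreeSize[i] += subtreeSize[j];
    }
  }
}
```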
With these statistics calculated, the rest of our code to answer queries is quite small if we use C++'s built-in implementation of binary search. With upper_bound:
#include <algorithm>
#include <cassert>
#include <climits>
#include <iostream>
#include <iterator>
#include <stack>
#include <vector>

using namespace std;

/**
 * Prints out a vector for any type T that supports the << operator.
 */
template<typename T>
ostream &operator<<(ostream &out, const vector<T> &v) {
  copy(v.begin(), v.end(), ostream_iterator<T>(out, "\n"));
  return out;
}

int findCentroid(int i,
                 const vector<vector<int>> &ancestors,
                 const vector<int> &maxPathLeaf,
                 const vector<int> &subtreeSize) {
  int centroidCandidate = maxPathLeaf[i];
  int maxComponentSize = (subtreeSize[i] + 1)/2;
  while (subtreeSize[centroidCandidate] < maxComponentSize) {
    // Alias the candidate's ancestors.
    const vector<int> &cAncestors = ancestors[centroidCandidate];
    assert(!cAncestors.empty());
    // Check the immediate parent first. If its subtree is big enough, we're done.
    if (subtreeSize[cAncestors.front()] >= maxComponentSize) {
      centroidCandidate = cAncestors.front();
    } else {
      // Otherwise, we can approximate the next candidate by searching
      // ancestors that are a power of 2 edges away.
      // Find the index of the first ancestor whose subtree size is
      // greater than maxComponentSize.
      int centroidIdx =
        upper_bound(cAncestors.cbegin() + 1, cAncestors.cend(), maxComponentSize,
                    // If this function evaluates to true, j is an upper bound.
                    [&subtreeSize](int maxComponentSize, int j) -> bool {
                      return maxComponentSize < subtreeSize[j];
                    }) - cAncestors.cbegin();
      centroidCandidate = cAncestors[centroidIdx - 1];
    }
  }
  return centroidCandidate;
}

int main(int argc, char *argv[]) {
  ios::sync_with_stdio(false); cin.tie(NULL);
  int N, Q;
  cin >> N >> Q;
  vector<int> parent; parent.reserve(N);
  parent.push_back(-1); // The root has no parent.
  vector<vector<int>> children(N);
  for (int i = 1; i < N; ++i) {
    int p; cin >> p; --p; // Convert to 0-indexing.
    children[p].push_back(i);
    parent.push_back(p);
  }
  // ancestors[i][j] will be the (2^j)th ancestor of node i.
  vector<vector<int>> ancestors(N);
  vector<int> maxPathLeaf(N, -1);
  vector<int> subtreeSize(N, 1);
  calculateStatistics(children, parent,
                      &ancestors, &maxPathLeaf, &subtreeSize);
  vector<int> centroids; centroids.reserve(Q);
  for (int q = 0; q < Q; ++q) {
    int v; cin >> v; --v; // Convert to 0-indexing.
    // Convert back to 1-indexing.
    centroids.push_back(findCentroid(v, ancestors, maxPathLeaf, subtreeSize) + 1);
  }
  cout << centroids;
  cout << flush;
  return 0;
}
Of course, we can use lower_bound in findCentroid, too. It's quite baffling to me that the comparator takes its parameters in a different order in lower_bound, though. I always forget how to use these functions, which is why I've decided to write this post.
int findCentroid(int i,
                 const vector<vector<int>> &ancestors,
                 const vector<int> &maxPathLeaf,
                 const vector<int> &subtreeSize) {
  int centroidCandidate = maxPathLeaf[i];
  int maxComponentSize = (subtreeSize[i] + 1)/2;
  while (subtreeSize[centroidCandidate] < maxComponentSize) {
    // Alias the candidate's ancestors.
    const vector<int> &cAncestors = ancestors[centroidCandidate];
    assert(!cAncestors.empty());
    // Check the immediate parent first. If its subtree is big enough, we're done.
    if (subtreeSize[cAncestors.front()] >= maxComponentSize) {
      centroidCandidate = cAncestors.front();
    } else {
      // Otherwise, we can approximate the next candidate by searching
      // ancestors that are a power of 2 edges away.
      // Find the index of the first ancestor whose subtree size is
      // at least maxComponentSize.
      int centroidIdx =
        lower_bound(cAncestors.cbegin() + 1, cAncestors.cend(), maxComponentSize,
                    // If this function evaluates to true, j + 1 is a lower bound.
                    [&subtreeSize](int j, int maxComponentSize) -> bool {
                      return subtreeSize[j] < maxComponentSize;
                    }) - cAncestors.cbegin();
      if (centroidIdx < static_cast<int>(cAncestors.size()) &&
          subtreeSize[cAncestors[centroidIdx]] == maxComponentSize) {
        // An exact match, so we've found the centroid.
        centroidCandidate = cAncestors[centroidIdx];
      } else { // No exact match; take the closest ancestor below the bound.
        centroidCandidate = cAncestors[centroidIdx - 1];
      }
    }
  }
  return centroidCandidate;
}
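As a standalone reminder of that parameter-order asymmetry, here's a minimal sketch (the helpers firstAtLeast and firstGreater are made-up names for illustration): with a custom comparator, lower_bound calls comp(element, value), so the element is the first argument, while upper_bound calls comp(value, element), so the element is the second. Both assume the range is partitioned by the comparator, just as our ancestor lists are ordered by subtree size.

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Indices ordered by an external key, like the ancestors array above.
// Returns the position of the first index j with key[j] >= value.
// For lower_bound, the comparator receives the ELEMENT first.
int firstAtLeast(const vector<int> &idx, const vector<int> &key, int value) {
  return lower_bound(idx.begin(), idx.end(), value,
                     [&key](int j, int value) { return key[j] < value; })
         - idx.begin();
}

// Returns the position of the first index j with key[j] > value.
// For upper_bound, the comparator receives the VALUE first.
int firstGreater(const vector<int> &idx, const vector<int> &key, int value) {
  return upper_bound(idx.begin(), idx.end(), value,
                     [&key](int value, int j) { return value < key[j]; })
         - idx.begin();
}
```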
All in all, we need $O(N\log N)$ time to compute ancestors, and $O\left(Q\log\left(\log N\right)\right)$ time to answer queries, so total complexity is $O\left(N\log N + Q\log\left(\log N\right)\right)$.