Recently, I came across a programming interview question that I didn't know how to answer. It turns out the solution was to use a suffix array. I had actually been aware of the existence of the data structure for some time, but I never bothered to learn about suffix arrays because they always made my head hurt.
They're actually pretty simple to describe. Consider a string $S$ of length $N.$ Then, $S$ will have $N$ suffixes, $S_0,S_1,\ldots,S_{N-1},$ where, using $0$-based indexing, $S_i$ is just the suffix of $S$ starting at the $i$th character. Thus, we can represent each suffix by its index, which is just an integer. Sort the suffixes alphabetically; a suffix array is the sorted array of suffixes of $S.$ Strings can be very long, for example, a DNA sequence, so we represent our suffix array as the array of integers that corresponds to each suffix.
The concept is very simple, but I always got lost on the construction. Of course, we could do a comparison sort, and so, given $N$ suffixes, we have about $O(N\log N)$ comparisons. The problem is that each comparison requires comparing $O(N)$ characters, so this naive algorithm is actually $O(N^2\log N),$ which is unacceptable for large $N.$
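To make that baseline concrete, here's a minimal sketch of the naive construction (the method name and the "banana" example are my own, just for illustration):
static Integer[] naiveSuffixArray(String s) {
Integer[] suffixes = new Integer[s.length()];
for (int i = 0; i < suffixes.length; ++i) suffixes[i] = i;
// each comparison may scan O(N) characters, so the sort is O(N^2 log N) overall
Arrays.sort(suffixes, (a, b) -> s.substring(a).compareTo(s.substring(b)));
return suffixes; // for "banana", this is [5, 3, 1, 0, 4, 2]
}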
Suffix Array Construction
There exist $O(N)$ algorithms for constructing suffix arrays, but they are too unwieldy to code during a contest. The algorithm that I describe is $O\left(N(\log N)^2\right).$ The idea reminds me of a radix sort, where we would sort a string character by character.
Here, we can do better than character-by-character, though, because a suffix of a suffix is just a further suffix. So, we sort by the first character. Now, say we have sorted all the suffixes by their first $k$ characters. We consider the suffix $i$ as the pair of suffixes $(i,i+k).$ The first $k$ characters of both of those suffixes are sorted, so we can easily sort by $2k$ characters by comparing pairs of integers. Thus, at each step we double the number of characters being compared, so we run a comparison sort $\lceil \log_2 N \rceil + 1$ times, which gives us a running time of $O\left(N(\log N)^2\right).$
In the title image, you can see this process in action. I thought it was a pretty neat idea, so I created this data visualization, where you can construct your own suffix array from a string. Here's an implementation in Java, where I use a custom comparator that keeps track of the rank of each suffix.
static class SuffixArray {
private class SuffixComparator implements Comparator<Integer> {
private String S;
private int N;
private int[] rankA;
private int[] rankB; // dummy array to avoid constantly reallocating memory
private boolean useRankB = false;
private int sortedCharacters = 0;
public SuffixComparator(String S) {
this.S = S;
this.N = S.length();
this.rankA = new int[N];
this.rankB = new int[N];
}
public int compare(Integer a, Integer b) {
int[] rank = useRankB ? rankB : rankA;
if (rank[a] != rank[b]) return rank[a] - rank[b];
if (sortedCharacters == 0) { // just look at first letter
return S.charAt(a) - S.charAt(b);
} else {
// first sortedCharacters are equal
// substring after sortedCharacters is
// S.substring(a + sortedCharacters) and S.substring(b + sortedCharacters)
// these are just further suffixes
if (a + sortedCharacters == N) {
return -1; // suffix a is only sortedCharacters long
} else if (b + sortedCharacters == N) {
return 1; // suffix b is only sortedCharacters long
} else { // look at sub-suffixes
return rank[a + sortedCharacters] - rank[b + sortedCharacters];
}
}
}
public int getRank(int suffixIndex) {
return useRankB ? rankB[suffixIndex] : rankA[suffixIndex]; // return position in suffixArray
}
public void updateRank(Integer[] suffixArray) {
int[] newRank = useRankB ? rankA : rankB;
if (N > 0) newRank[suffixArray[0]] = 0; // guard for the empty string case
for (int i = 1; i < N; ++i) { // loop over position in suffix array
if (compare(suffixArray[i], suffixArray[i - 1]) > 0) {
newRank[suffixArray[i]] = newRank[suffixArray[i - 1]] + 1;
} else {
newRank[suffixArray[i]] = newRank[suffixArray[i - 1]];
}
}
toggleRank();
}
public int incrementSortedCharacters() {
if (sortedCharacters >= N) return sortedCharacters;
if (sortedCharacters == 0) {
++sortedCharacters;
} else {
sortedCharacters *= 2;
}
return sortedCharacters;
}
private boolean toggleRank() {
useRankB = !useRankB;
return useRankB;
}
}
private String S;
private Integer[] suffixArray; // suffixArray[i] is the ith suffix lexicographically
private int N;
private int[] rank; // rank[suffixIndex] is the position in the suffixArray
public SuffixArray(String S) {
this.S = S;
this.N = S.length();
this.suffixArray = new Integer[N];
for (int i = 0; i < N; ++i) suffixArray[i] = i;
SuffixComparator suffixComparator = new SuffixComparator(S);
do { // each iteration sorts 2^i characters
Arrays.sort(suffixArray, suffixComparator);
suffixComparator.updateRank(suffixArray);
} while (suffixComparator.incrementSortedCharacters() < N);
this.rank = new int[N];
for (int i = 0; i < N; ++i) rank[i] = suffixComparator.getRank(i);
}
public int getSuffixIndex(int i) {
return suffixArray[i];
}
public String getSuffix(int i) {
return S.substring(suffixArray[i]);
}
public int getRank(int suffixIndex) {
return rank[suffixIndex];
}
public int size() {
return N;
}
public String getString() {
return this.S;
}
@Override
public String toString() {
StringBuilder stringBuilder = new StringBuilder();
if (N == 0) return "empty suffix array";
stringBuilder.append(getSuffix(0));
for (int i = 1; i < N; ++i) {
stringBuilder.append('\n');
stringBuilder.append(getSuffix(i));
}
return stringBuilder.toString();
}
}
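For example, the class above might be used like this, with the expected output in comments ("banana" is just my example input):
SuffixArray suffixArray = new SuffixArray("banana");
System.out.println(suffixArray.getSuffix(0)); // "a", the lexicographically smallest suffix
System.out.println(suffixArray.getSuffixIndex(0)); // 5, the index where that suffix starts
System.out.println(suffixArray.getRank(0)); // 3, since "banana" itself sorts fourth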
LCP Arrays
For many applications of a suffix array, an auxiliary data structure called an LCP array is needed. LCP stands for longest common prefix. Let $SA_i$ be the $i$th suffix of the suffix array, so it is the suffix of rank $i$ when the suffixes are sorted alphabetically. The LCP array will also have length $N,$ with entries $LCP_i.$ $LCP_0$ is undefined, while for $i > 0,$ $LCP_i$ is the length of the longest common prefix of $SA_i$ and $SA_{i-1}.$
Again, a naive computation would be $O(N^2),$ but we can use the fact that if we remove the first character from a suffix, we get another, shorter suffix. So, start with the longest suffix, that is, $S$ itself, and compute the longest common prefix with the suffix that occurs before it in the suffix array. That is, suppose we know that the longest common prefix of $SA_i$ and $SA_{i-1}$ has length $k > 0.$ Now, remove the first character from $SA_i$ to get $SA_j.$ The suffix $SA_{j-1}$ will share at least $k - 1$ characters of its prefix with $SA_j$ because in the very worst case $SA_{j-1}$ will be $SA_{i-1}$ with the first character removed. We handle the cases where $i = 0$ or $k = 0$ specially. Here is the Java code.
static class LCPArray { // LCP stands for longest common prefix
private int N;
private String S;
private SuffixArray suffixArray;
private int[] lCP;
// lCP[i] is undefined for i = 0, i is position in suffixArray
// for i > 0, it contains the length of the longest common prefix
// of S.substring(suffixArray[i]) and S.substring(suffixArray[i-1])
public LCPArray(String S) {
this(new SuffixArray(S));
}
public LCPArray(SuffixArray suffixArray) {
this.suffixArray = suffixArray;
this.S = suffixArray.getString();
this.N = suffixArray.size();
this.lCP = new int[N];
if (N == 0) return;
lCP[0] = -1; // undefined for 0
if (N == 1) return;
int currentLCP = 0;
for (int suffixIndex = 0; suffixIndex < N; ++suffixIndex) {
int rank = suffixArray.getRank(suffixIndex);
if (rank == 0) continue;
int previousSuffixIndex = suffixArray.getSuffixIndex(suffixArray.getRank(suffixIndex) - 1);
// loop invariant is that the current suffix and previous suffix have longest common prefix
// of at least currentLCP
while (Math.max(suffixIndex, previousSuffixIndex) + currentLCP < N
&& S.charAt(suffixIndex + currentLCP) == S.charAt(previousSuffixIndex + currentLCP)) {
++currentLCP;
}
lCP[rank] = currentLCP;
// we only remove one character off the front each time, so the common prefix shrinks by at most 1
if (currentLCP > 0) --currentLCP;
}
}
public int size() {
return this.N;
}
public int get(int i) {
return this.lCP[i];
}
public SuffixArray getSuffixArray() {
return this.suffixArray;
}
}
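As a quick check of the code above, here's the LCP array for my running example "banana", whose sorted suffixes are a, ana, anana, banana, na, nana (values worked out by hand):
LCPArray lCPArray = new LCPArray("banana");
for (int i = 1; i < lCPArray.size(); ++i) {
System.out.print(lCPArray.get(i) + " "); // prints 1 3 0 0 2
}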
Case Study
The problem that required me to learn these two data structures was String Similarity. Here's the problem statement.
For two strings $A$ and $B$, we define the similarity of the strings to be the length of the longest prefix common to both strings. For example, the similarity of strings "abc" and "abd" is 2, while the similarity of strings "aaa" and "aaab" is 3.
Calculate the sum of similarities of a string $S$ with each of its suffixes.
So, we find $S$ in our suffix array. Say $S = SA_i.$ The two suffixes sharing the longest common prefixes with $S$ will be $SA_{i-1}$ and $SA_{i+1},$ and we've already calculated those prefix lengths in $LCP_{i}$ and $LCP_{i+1},$ respectively.
The next two suffixes sharing the longest common prefixes will be $SA_{i-2}$ and $SA_{i+2}.$ Now, we have to be careful because we could run into a situation like this. Consider $S = aabbabab.$ Here are the first 3 entries of the suffix array.
aabbabab
ab
abab
The longest common prefix of $SA_0$ with $SA_2$ is not $LCP_{2},$ but $\min\left(LCP_1,LCP_2\right).$ In general, the longest common prefix of $SA_{i}$ with $SA_{j},$ where $i < j,$ is $\min\left(LCP_{i+1},LCP_{i+2},\ldots,LCP_j\right).$ Thus, with our two data structures, the solution is quite short.
import java.io.*;
import java.util.*;
public class Solution {
static class SuffixArray {
...
}
static class LCPArray { // LCP stands for longest common prefix
...
}
static long sumSuffixSimilarities(String S) {
int N = S.length();
LCPArray lCPArray = new LCPArray(S);
SuffixArray suffixArray = lCPArray.getSuffixArray();
long similaritySum = N; // initialize with string itself
int pivot = suffixArray.getRank(0); // the longest matching prefixes will be adjacent
int minMatchLength = Integer.MAX_VALUE; // afterwards, the length will be decreasing
for (int i = pivot; i >= 0 && lCPArray.get(i) > 0; --i) {
minMatchLength = Math.min(lCPArray.get(i), minMatchLength);
similaritySum += minMatchLength;
}
minMatchLength = Integer.MAX_VALUE;
for (int i = pivot + 1; i < N && lCPArray.get(i) > 0; ++i) {
minMatchLength = Math.min(lCPArray.get(i), minMatchLength);
similaritySum += minMatchLength;
}
return similaritySum;
}
public static void main(String[] args) throws IOException {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
PrintWriter out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(System.out)));
int T = Integer.parseInt(in.readLine());
for (int t = 0; t < T; ++t) {
String S = in.readLine();
out.println(sumSuffixSimilarities(S));
}
in.close();
out.close();
}
}
I am 27 today. Since my birthday falls at the end of the semester after everyone goes home, I hardly ever celebrate it. But even if I were to celebrate it, just what would I be celebrating?
Ostensibly, my 20s have been an unrelenting string of failures and unmet expectations. Just as the road to success zigs and zags, on the fall down, one catches glimmers of hope and leaves opportunities uncapitalized. It's hard not to let all the rejections from jobs, graduate schools, and girls get to you. People say, "it's not you, it's them," but then, is it everybody? Or some say, just be confident and optimistic, but surely, that confidence and optimism must have some basis in reality. I've mostly put all these rejections behind me, but I would be lying if I said I don't sometimes wake up in the middle of the night in a panic and reflect on these things.
If someone had told my college or high school self that this paradise was awaiting me in my 20s, I'm not sure if I would have kept on living. The promise of things getting better was often what kept me going. Now, I know that I sound like an entitled millennial, dare I say, a Bernie Sanders supporter (I donated \$5 to his campaign), griping, but this post ends on a happy note.
Despite the disappointment and the unfulfilled promise of something better, I actually find myself happier and more full of joy than I've ever been. While I'm not exactly the most social person, I have managed to cultivate a few strong friendships. Through their love and my family's, I've caught a glimpse of God's love, and that has been enough to sustain me. In light of recent tragedies, I've realized that these relationships are so much more meaningful than my desire for an interesting career and a beautiful marriage. And if this is what it has taken to come to this realization, I'm grateful to have suffered. And yes, I understand that calling my experiences suffering is a gross exaggeration compared to real suffering, but I hope that the reader can understand how one can become enveloped in his or her own thoughts and lose perspective.
Perhaps, some might say that this is all just a euphemism for settling for less or an act of post hoc rationalization. Again, I would be lying if I didn't acknowledge that at times, I still lapse into states of utter despair. I expect my 30s will be even more difficult, and that's okay, even if I never achieve that sense of worldly security that I desire. Knowing that I am loved, I can not only manage and scrape by but also find joy.
In any case, in the title picture, you can see some homemade oreos that I made a while back. They don't really have to do with anything that I just wrote. However, Michael Vo made them into my brother's initials. Without his and God's love, I just don't know where I would be now.
I recently added equation numbering to my blog. To see it in action, here's one of my homework exercises. I like this problem because I actually found it useful to diagonalize a matrix in order to exponentiate it.
Let $\{X_n\}$ be the random walk on the space $\{0,1,2,3,4\},$ with $0$ and $4$ absorbing and with $p(i,i) = 1/2,$ $p(i,i+1) = p(i,i-1)= 1/4$ for $1 \leq i \leq 3.$ Let $\tau$ denote the absorption time. The graph can be seen in the title picture.
Part A
What is the limit as $n \rightarrow \infty$ of the law $X_n$ conditioned on non-absorption, that is, $$\lim_{n \rightarrow \infty}\mathcal{L}(X_n \mid \tau > n)?$$
Solution
We have that $\displaystyle \boxed{ \lim_{n \rightarrow \infty}\mathcal{L}\left(X_n \mid \tau > n\right) = \begin{cases} 1 - \frac{1}{\sqrt{2}}, &X_n \in \{1,3\} \\ \sqrt{2} - 1, &X_n = 2. \end{cases}}$
To see this, without loss of generality, let $X_0 = 2.$ We can do this since if $X_0 = 1$ or $X_0 = 3,$ then $\mathbb{P}(X_m = X_0,~\forall m \leq n \mid \tau > n) \rightarrow 0$ as $n \rightarrow \infty.$ Then, we can apply the Markov property.
Now, define $p_n^*(x,y) = \mathbb{P}(X_{n} = y \mid \tau > n,~X_0 = 2).$
We have that
\begin{equation}
p^*_1(2,y) = \begin{cases}
1/4, &y \in \{1,3\} \\
1/2, &y = 2.
\end{cases}
~\text{and, in general,}~
p_n^*(2,y) = \begin{cases}
a_n, &y \in \{1,3\} \\
b_n = 1 - 2a_n, &y = 2
\end{cases}
\label{eqn:3a_a_def}
\end{equation}
by symmetry, where $a_1 = 1/4$ and $b_1 = 1/2 = 1 - 2a_1.$
Now, we have that
\begin{align}
a_{n+1} = \mathbb{P}(X_{n+1} = 1 \mid \tau > n + 1,~X_0 = 2)
&= p_{n+1}^*(2,1) \nonumber\\
&= \frac{p_n^*(2,1)p(1,1) + p_n^*(2,2)p(2,1)}{\mathbb{P}(\tau > n +
1 \mid X_0 = 2, \tau > n)} \nonumber\\
&= \frac{p_n^*(2,1)p(1,1) + p_n^*(2,2)p(2,1)}{1 -
p_n^*(2,1)p(1,0) - p_n^*(2,3)p(3,4)} \nonumber\\
&= \frac{a_n(1/2) + b_n(1/4)}{1 - (2a_n/4)} \nonumber\\
&= \frac{a_n(1/2) + (1-2a_n)(1/4)}{(4 - 2a_n)/4} \nonumber\\
&= \frac{1}{4 - 2a_n}.
\label{eqn:3a_a_recurs}
\end{align}
$a_n$ converges by induction because
$$
|a_{n+1} - a_n| = \left|\frac{1}{4-2a_n} -
\frac{1}{4-2a_{n-1}}\right|
= \left|\frac{4 - 2a_{n-1} - 4 +
2a_n}{(4-2a_{n})(4-2a_{n-1})}\right|
\leq \frac{1}{2}\left|a_n-a_{n-1}\right|
$$
since $0 \leq a_n \leq 1$ for all $n.$ Solving for $a$ in
\begin{equation}
a = \frac{1}{4-2a} \Rightarrow 2a^2 - 4a + 1 = 0 \Rightarrow
a = 1 \pm \frac{1}{\sqrt{2}}.
\label{eqn:3a_a}
\end{equation}
Thus, we must have that $a_n \rightarrow 1 - \frac{1}{\sqrt{2}}$
since $a_n \leq 1$ for all $n.$ Then, we have that $b_n
\rightarrow b = 1 - 2a = \sqrt{2} - 1.$
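As a quick numerical sanity check, iterating the recursion in Equation \ref{eqn:3a_a_recurs} from $a_1 = 1/4$ approaches this fixed point within a few steps:
$$a_2 = \frac{1}{4 - 1/2} = \frac{2}{7} \approx 0.2857, \quad a_3 = \frac{1}{4 - 4/7} = \frac{7}{24} \approx 0.2917, \quad 1 - \frac{1}{\sqrt{2}} \approx 0.2929.$$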
Part B
What is the limiting distribution conditioned on never getting absorbed, that is, $$ \lim_{n \rightarrow \infty}\lim_{M \rightarrow \infty} \mathcal{L}(X_n \mid \tau > M)? $$
Solution
We have that $\displaystyle\boxed{\lim_{n \rightarrow \infty}\lim_{M \rightarrow \infty}\mathcal{L}(X_n \mid \tau > M)= \begin{cases}1/4, &X_n \in \{1,3\}\\ 1/2, &X_n = 2.\end{cases}}$
Again, we can assume without loss of generality that $X_0 = 2.$ Fix $n \in \mathbb{N}.$ From Equation \ref{eqn:3a_a_def} in the previous part, we know $\mathbb{P}(X_n = 2\mid \tau > n) = 1-2a_{n},$ where we define $a_0 = 0.$ Now, suppose we know $\mathbb{P}(X_n = 2 \mid \tau > M)$ for $M \geq n.$ Then, by Bayes' theorem,
\begin{align}
\mathbb{P}(X_n = 2 \mid \tau > M + 1) &= \mathbb{P}(X_n = 2 \mid \tau > M + 1, \tau > M) \nonumber\\
&= \frac{\mathbb{P}(\tau > M + 1 \mid X_n = 2, \tau > M)\mathbb{P}(X_n = 2 \mid \tau > M)}{\mathbb{P}(\tau > M + 1 \mid \tau > M)}. \label{eqn:3b_bayes}
\end{align}
Calculating the two unknown factors, we have that in the denominator,
\begin{align}
\mathbb{P}(\tau > M + 1 \mid \tau > M) &= \sum_{x=1}^3 \mathbb{P}(\tau > M + 1, X_M = x \mid \tau > M) \nonumber\\
&= \frac{3}{4}\left(\mathbb{P}(X_M = 1 \mid \tau > M) + \mathbb{P}(X_M = 3 \mid \tau > M)\right) + \mathbb{P}(X_M = 2 \mid \tau > M)\nonumber\\
&= \frac{3}{4}\left(a_{M} + a_{M}\right) + (1-2a_{M}) \nonumber\\
&= 1 - \frac{1}{2}a_{M}, \label{eqn:3b_tau}
\end{align}
and in the numerator,
\begin{align}
\mathbb{P}(\tau > M + 1 \mid X_n = 2, \tau > M) &= \sum_{x=1}^3\mathbb{P}(\tau > M + 1, X_M = x \mid X_n = 2, \tau > M)\nonumber\\
&= \frac{3}{2}\mathbb{P}(X_M = 1 \mid X_n = 2, \tau > M) + \mathbb{P}(X_M = 2 \mid X_n = 2, \tau > M) \nonumber\\
&= \frac{3}{2}a_{M - n} + 1 - 2a_{M - n} \nonumber\\
&= 1 - \frac{1}{2}a_{M - n}, \label{eqn:3b_bayes_num}
\end{align}
where we use that $\mathbb{P}(\tau > M + 1, X_M = 1 \mid X_n = 2, \tau > M) = \mathbb{P}(\tau > M + 1, X_M = 3 \mid X_n = 2, \tau > M),$ and the fact that $\mathbb{P}(X_M = x \mid X_n = 2, \tau > M)$ is the probability of a chain starting at $2$ that doesn't hit an absorbing state in $M - n$ transitions.
Putting together Equations \ref{eqn:3b_bayes_num}, \ref{eqn:3b_tau}, and \ref{eqn:3b_bayes}, we have that
\begin{equation*}
\mathbb{P}(X_n = 2 \mid \tau > M + 1) = \frac{1 - \frac{1}{2}a_{M-n}}{1 - \frac{1}{2}a_{M}}\mathbb{P}(X_n = 2 \mid \tau > M),
\end{equation*}
and by induction, we'll have that
\begin{equation*}
\mathbb{P}(X_n = 2 \mid \tau > M) = (1 - 2a_{n}) \prod_{k=n}^{M - 1} \frac{1 - \frac{1}{2}a_{k-n}}{1 - \frac{1}{2}a_{k}},
\end{equation*}
where the product is just $1 - 2a_{n}$ if $M = n.$ For large enough $M,$ the factors in the numerator cancel with matching factors in the denominator, so for $M \geq 2n$ we have that
\begin{align}
\mathbb{P}(X_n = 2 \mid \tau > M) &= (1 - 2a_{n}) \prod_{k=0}^{n - 1}\left(1-\frac{1}{2}a_{k}\right) \prod_{k=M - n}^{M-1}\frac{1}{1-\frac{1}{2}a_{k}} \nonumber\\
&= (1 - 2a_{n}) \prod_{k=0}^{n - 1}\left(4-2a_{k}\right) \prod_{k=M - n}^{M-1}\frac{1}{4-2a_{k}} \nonumber\\
&= (1 - 2a_{n}) \prod_{k=1}^{n}\frac{1}{a_{k}} \prod_{k=M - n + 1}^{M}a_k \label{eqn:3b_p2}
\end{align}
by the recursive definition of $a_n$ in Equation \ref{eqn:3a_a_recurs}.
Taking the limit as $M \rightarrow \infty,$ we have that
\begin{equation}
\lim_{M \rightarrow \infty} \mathbb{P}(X_n = 2 \mid \tau > M) = (1 - 2a_{n})a^{n}\prod_{k=1}^{n}\frac{1}{a_{k}} \label{eqn:3b_p_lim}
\end{equation}
by Equation \ref{eqn:3a_a}, where $\displaystyle a = 1 - \frac{1}{\sqrt{2}} = \frac{1}{2 + \sqrt{2}}.$
Define $c_{-1} = 0$ and $c_0 = 1/4,$ so $a_0 = 0 = \frac{c_{-1}}{c_0}.$ In general, we can write $a_n = \frac{c_{n-1}}{c_n},$ where $c_n = -2c_{n-2} + 4c_{n-1}$ for $n \geq 1.$ To see this, by the definition of $a_n$ in Equation \ref{eqn:3a_a_recurs},
\begin{equation}
a_{n + 1} = \frac{1}{4 - 2a_n} = \frac{1}{4 - 2\frac{c_{n-1}}{c_{n}}} = \frac{c_n}{-2c_{n-1} + 4c_n} = \frac{c_n}{c_{n+1}}. \label{eqn:3b_a_c}
\end{equation}
Now, with Equation \ref{eqn:3b_a_c}, Equation \ref{eqn:3b_p_lim} becomes \begin{equation} \lim_{M \rightarrow \infty} \mathbb{P}(X_n = 2 \mid \tau > M) = (1 - 2a_{n})a^{n}\prod_{k=1}^{n}\frac{c_k}{c_{k-1}} = (1 - 2a_{n})a^{n}\frac{c_{n}}{c_0} \label{eqn:3b_p_lim_2} \end{equation}
Note that we can rewrite $c_n$ by exponentiating a matrix:
\begin{align}
\begin{pmatrix} c_{n} \\ c_{n + 1} \end{pmatrix}
&= \begin{pmatrix} 0 & 1 \\ -2 & 4 \end{pmatrix}^n \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} \nonumber\\
&= \begin{pmatrix} 1 & 1 \\ 2 + \sqrt{2} & 2 - \sqrt{2} \end{pmatrix} \begin{pmatrix} 2 + \sqrt{2} & 0 \\ 0 & 2 - \sqrt{2} \end{pmatrix}^n \begin{pmatrix} \frac{1}{2}-\frac{1}{\sqrt{2}} & \frac{1}{2\sqrt{2}} \\ \frac{1}{2}+\frac{1}{\sqrt{2}} & -\frac{1}{2\sqrt{2}} \end{pmatrix} \begin{pmatrix} 1/4 \\ 1 \end{pmatrix} \nonumber\\
&= \begin{pmatrix} \frac{1}{8}\left((2+\sqrt{2})^n(1+\sqrt{2}) + (2-\sqrt{2})^n(1-\sqrt{2})\right) \\ \frac{1}{8}\left((2+\sqrt{2})^{n+1}(1+\sqrt{2}) + (2-\sqrt{2})^{n+1}(1-\sqrt{2})\right) \end{pmatrix} \label{eqn:3b_c_matrix}
\end{align}
by diagonalization. Using Equation \ref{eqn:3b_c_matrix}, Equation \ref{eqn:3b_p_lim_2} becomes
\begin{equation}
\lim_{M \rightarrow \infty} \mathbb{P}(X_n = 2 \mid \tau > M) = \frac{1}{2}(1 - 2a_{n})\left((1+\sqrt{2}) + \left(\frac{2-\sqrt{2}}{2+\sqrt{2}}\right)^n(1-\sqrt{2})\right), \label{eqn:3b_p_lim_3}
\end{equation}
where we recall that $a = \frac{1}{2 + \sqrt{2}}.$ Since $\left|\frac{2-\sqrt{2}}{2+\sqrt{2}}\right| < 1,$ taking the limit as $n \rightarrow \infty$ of Equation \ref{eqn:3b_p_lim_3} sends the second term to $0$:
\begin{equation}
\lim_{n\rightarrow \infty}\lim_{M \rightarrow \infty} \mathbb{P}(X_n = 2 \mid \tau > M) = \frac{1-2a}{2}(1+\sqrt{2}) = \frac{\sqrt{2} - 1}{2}(1+\sqrt{2}) = \frac{1}{2}. \label{eqn:3b_p_lim_lim}
\end{equation}
And finally, by symmetry,
\begin{align}
\lim_{n\rightarrow \infty}\lim_{M \rightarrow \infty} \mathbb{P}(X_n = 1 \mid \tau > M) &= \lim_{n\rightarrow \infty}\lim_{M \rightarrow \infty} \mathbb{P}(X_n = 3 \mid \tau > M) \nonumber\\
&= \frac{1 - \lim_{n\rightarrow \infty}\lim_{M \rightarrow \infty} \mathbb{P}(X_n = 2 \mid \tau > M)}{2} \nonumber\\
&= \frac{1}{4}. \label{eqn:3b_p_1_3}
\end{align}
Equations \ref{eqn:3b_p_lim_lim} and \ref{eqn:3b_p_1_3} combine to give us the limiting distribution.
Graphs in $\LaTeX$
Here's the code for the graph. I used a package called tikz.
\documentclass[10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{tikz}
\usetikzlibrary{arrows}
\usepackage[active, tightpage]{preview}
\PreviewEnvironment{tikzpicture}
\renewcommand\PreviewBbAdjust {-0bp -25bp 0bp 25bp}
\begin{document}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.75cm,
thick,main node/.style={circle,draw}]
\node[main node] (0) {$0$};
\node[main node] (1) [right of=0] {$1$};
\node[main node] (2) [right of=1] {$2$};
\node[main node] (3) [right of=2] {$3$};
\node[main node] (4) [right of=3] {$4$};
\path[every node/.style={font=\sffamily}]
(0) edge [loop above] node {$1$} (0)
(0)
(1) edge [bend right] node [below] {$1/4$} (2)
edge [loop above] node {$1/2$} (1)
edge node [below] {$1/4$} (0)
(1)
(2) edge [bend right] node [above] {$1/4$} (1)
edge [loop below] node {$1/2$} (2)
edge [bend left] node [above] {$1/4$} (3)
(2)
(3) edge [bend left] node [below] {$1/4$} (2)
edge [loop above] node {$1/2$} (3)
edge node [below] {$1/4$} (4)
(3)
(4) edge [loop above] node {$1$} (4);
\end{tikzpicture}
\end{document}
Then, to convert the graph to a png file with a transparent background, I used ImageMagick.
convert -density 300 graph.pdf -format png graph.png
Now that I have a substantial number of posts and some good quality content, I often find the need to look back at prior posts. Unfortunately, this has consisted of paging through my blog or clicking on tags. To resolve this, I'm introducing the search feature. To visit the search page, hover over Blog in the header, and click Search Posts.
Probably, the easiest and most effective way to implement search is to use Google Site Search, but I thought that it would be more fun to use PostgreSQL's full text search capabilities. Plus, I'd rather keep my site ad-free, and I don't want to pay. From the title picture, one can see that PostgreSQL full text search has lots of features like the ability to understand grammar, rank matches, and extract the relevant text.
Under the hood, posts are stored as tsvectors.
phillypham::DATABASE=> SELECT setweight(to_tsvector('Post title'), 'A') || setweight(to_tsvector('The quick brown fox jumps over the lazy dog'), 'B') AS tsvector FROM posts LIMIT 1;
tsvector
-----------------------------------------------------------------------------------
'brown':5B 'dog':11B 'fox':6B 'jump':7B 'lazi':10B 'post':1A 'quick':4B 'titl':2A
(1 row)
As you can see, there's the ability to weight the title more heavily than the body.
Now, to make this work properly, I added another column to the posts table. I indexed it and created a trigger to make sure that it's automatically updated. Then, to integrate properly with Sequelize, I added all these queries to an afterSync hook.
=> ALTER TABLE posts ADD COLUMN title_body_tsvector tsvector;
=> UPDATE posts SET title_body_tsvector=setweight(to_tsvector('english', title), 'A') || setweight(to_tsvector('english', body), 'B');
=> CREATE INDEX IF NOT EXISTS title_body_search_idx ON posts USING gin(title_body_tsvector);
=> CREATE OR REPLACE FUNCTION posts_trigger() RETURNS trigger AS $$ begin new.title_body_tsvector := setweight(to_tsvector('english', new.title), 'A') || setweight(to_tsvector('english', new.body), 'B'); return new; end $$ LANGUAGE plpgsql;
=> CREATE TRIGGER posts_update BEFORE INSERT OR UPDATE ON posts FOR EACH ROW EXECUTE PROCEDURE posts_trigger();
See Post.js for more details.
Then, to do a search, I execute a raw query and return it as a promise. The query looks like this:
SELECT id,
title,
ts_rank_cd(title_body_tsvector,$QUERY,1) AS rank,
ts_headline('english', title || ' ' || body,$QUERY, 'MaxWords=100') AS headline
FROM posts
WHERE published AND title_body_tsvector @@ $QUERY
ORDER BY rank DESC, id DESC;
The JavaScript looks like this:
function(db, tsquery) {
var query = tsquery.indexOf(' ') == -1 ? "to_tsquery('english','" + tsquery + "')" : "plainto_tsquery('english','" + tsquery + "')";
return db.sequelize.query("SELECT id, title, ts_rank_cd(title_body_tsvector," + query + ",1) AS rank, ts_headline('english', title || ' ' || body," + query + ", 'MaxWords=100') AS headline FROM posts WHERE published AND title_body_tsvector @@ " + query + " ORDER BY rank DESC, id DESC",
{ model: db.Post });
}
Note the ternary operator when defining query. PostgreSQL tsqueries allow searches of arbitrary complexity with nesting and various boolean operators. Unfortunately, most people will not have the syntax memorized, so plainto_tsquery lets one search with more natural syntax. If you get the syntax wrong (e.g., forget to close a parenthesis), the error will be handled gracefully thanks to promises.
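For example, plainto_tsquery stems each word and joins the terms with ANDs, so free-form input still produces a valid query. Here's a quick psql session to illustrate (my own toy input):
phillypham::DATABASE=> SELECT plainto_tsquery('english', 'quick brown foxes');
      plainto_tsquery
---------------------------
 'quick' & 'brown' & 'fox'
(1 row)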
Try it and let me know what you think!
Since I've lost my financial independence, I only cook something nice when my brother decides that he wants it. This week, he wanted shrimp. It's been years since I've last cooked shrimp. Luckily, it came out so well that I decided to write the recipe down. I thought the tangy sauce went well with the buttery garlic shrimp.
Ingredients
- 1.5 lb shrimp, peeled and deveined
- 1 teaspoon kosher salt
- 1/2 teaspoon pepper
- 12 tablespoons of salted butter
- 4 cloves garlic
- 1 lemon
- 1/2 cup chicken stock, I usually make my own but store-bought stock is fine
- Parsley, preferably fresh
- 1 lb linguine, cooked according to the directions on the package, with 4 tablespoons of butter added
Directions
- First, pat the shrimp dry and season with the salt and pepper. Set aside.
- Zest the lemon, and set aside. Squeeze the lemon juice into the chicken stock. Set aside chicken stock and lemon juice mixture.
- Mince garlic, and set aside.
- Chop parsley (about a handful), and set aside with the lemon zest.
- Heat skillet to medium, add 4 tablespoons of butter, and spread shrimp in one layer on the skillet. Let sit for 1.5 minutes.
- Add the minced garlic into the skillet, and wait another 1.5 minutes.
- Flip the shrimp, and wait another 3 minutes. If the shrimp is done, set aside. Otherwise, just stir until the shrimp is fully cooked.
- Now, remove the shrimp from the skillet and store in a bowl. Pour in the chicken stock and lemon juice mixture, add 4 tablespoons of butter, and stir vigorously, scraping the bottom with a spoon, until the butter melts.
- To finish the sauce, remove it from skillet, and mix in the lemon zest and parsley. Add salt and pepper to taste.
- Serve over the linguine. You should have 3-4 servings.
Here's my brother's extra butter version:
I just finished Lolita by Vladimir Nabokov. I'm not too sure what to make of it. Given the subject matter, I was expecting something much more pornographic. In the novel, Humbert Humbert (H.H.) marries a widow and seduces her 12-year-old daughter Lo, short for Lolita (well, he claims that she seduced him in that hotel). However, while the novel starts with some graphic details, the relations between Lo and H.H. aren't made too explicit.
The novel is rather artfully constructed with many literary allusions that went over my head. I say constructed because while it's not a fantasy novel, there are way too many coincidences with numbers and names and other impossible happenings for the plot to resemble reality. For instance, throughout the novel, the spectre of Clare Quilty haunts the two at every turn and creates a sort of game between Quilty and H.H., although it may be that the narrator H.H. is just crazy, as he loses touch with reality several times and checks himself into a sanitarium. Perhaps the biggest break from reality is when a character briefly rises from the dead at the end.
The novel can be read in many different ways and resists any clear interpretation or moral. One can see it as satirizing American culture as H.H. and Lolita commit debaucheries across the country on their road trip. What spoke most to me was how H.H. objectifies Lolita, yet he really does come to truly love her despite the large age gap. At first, it's clear that she is his sex toy, for soon after her mother dies, he ignores her crying every night as she falls asleep. While in some sense she did seduce him, he realizes that he's taking advantage of how young girls are bombarded with images of romance, and Lo is acting out some "simulacrum" of romance. During the time in Beardsley, the relationship becomes more prostitute-like as he often pays her in some way for her services.
But he does genuinely love her in the end, as seen in the last meeting between the two. In this love comes the understanding of why she despises him more than Quilty in a way, for Quilty "broke [her] heart," whereas H.H. "merely broke [her] life." I'm not too sure yet why this quote struck me so much. There's some sense that my own ideas of love are somewhat warped, and I may have broken someone else's life given the chance. And so, true love is taking a step back and knowing your "love" isn't the best thing for the other person.
Think of two sets whose elements can only be matched with elements of the other set. For example, for those that believe in traditional marriage, you can think of $n$ females and $n$ males. Suppose that we know how much happiness each marriage would bring, and assume marriage only brings positive happiness. How should we match couples to maximize happiness?
In another scenario, suppose we have $n$ items and $n$ bidders, and we know how much each bidder is willing to pay for each item. Also, assume each bidder can only bid on at most $1$ item. To whom should we sell each item to maximize our profit?
We can model this scenario with a complete bipartite graph $G = (X,Y,E)$, where $X$ and $Y$ are two disjoint sets of vertices such that every vertex in $X$ is connected to every vertex in $Y$, but within $X$ and within $Y$ no vertices are connected. That is, $(x,y) \in E \Leftrightarrow x \in X \wedge y \in Y$. Each edge has a weight $w_{xy}.$ In our two examples, $w_{xy}$ is the happiness from the marriage of female $x$ and male $y$ or how much item $x$ is worth to bidder $y$.
We want to match or assign each element of $X$ to a distinct element of $Y$. Thus, we have the so-called Assignment Problem. Here, I'll detail how the Hungarian method solves this.
First, we need to introduce some definitions for the algorithm to make sense.
Graph Theory
Matching
The technical definition of a matching is a set $M$ of non-adjacent edges, that is, no two edges in the set share a common vertex. However, I always preferred to think of a matching as pairs of vertices connected by an edge such that each vertex appears in at most 1 pair. Here's an example, where I have colored the matching in red:
In this case, $M = \{(x_1,y_1),(x_3,y_4),(x_4,y_2)\}.$
Perfect Matching
A matching is maximal if adding any other edge makes it not a matching. Another way to think about this is in terms of sets: a maximal matching cannot be a proper subset of any other matching.
Now, a matching is perfect if it contains all vertices. For instance,
is a perfect matching, where $M = \{(x_1,y_3),(x_2,y_1),(x_3,y_4),(x_4,y_2)\}.$
Alternating Path
Now, one way of representing matchings is to break them up into paths. An alternating path is composed of edges, where every other edge is part of a matching. For example, this is an alternating path, where I have colored edges in the path red and blue, and red edges are part of the matching.
A path is just an ordered set of edges, where two adjacent edges share a common vertex. Our path is $x_1 \rightarrow y_2 \rightarrow x_4 \rightarrow y_1 \rightarrow x_2,$ and the corresponding matching is $M = \left\{(x_4,y_2), (x_2,y_1)\right\}.$
Augmenting Path
An augmenting path is a particular kind of alternating path, in which the first and last vertices in the path are unmatched. Thus, by flipping the edges in the matching, we will get a larger matching that covers two additional vertices. Formally, consider the augmenting path $P$. Clearly, it must contain an odd number of edges. If $M$ is the corresponding matching, then it contains the even-numbered edges in the path $P$. We can make a new larger matching $P-M$ that contains the odd-numbered edges. Here's an example of this process.
Labeling
Let us suppose the graph looks like this, where I only draw the edges from one vertex in $X$ at a time for clarity.
We assign each vertex a nonnegative integer label. Formally, this labeling is a function, $L: X \cup Y \rightarrow \mathbb{Z}.$ Let $w(x,y)$ be the weight of the edge between vertices $x$ and $y.$ An edge $(x,y)$ is considered feasible if $L(x) + L(y) \geq w(x,y),$ and a labeling is feasible if $L(x) + L(y) \geq w(x,y)$ for all $x$ and $y$.
Moreover, we define an equality subgraph, $G_L = (X,Y,E_L)$, associated with a labeling as follows: $(x,y) \in E_L \Leftrightarrow L(x) + L(y) = w(x,y).$
For example, given a specific labeling, here's the equality subgraph, where I've colored the feasible edges in black and the edges in the equality subgraph in green.
To me, it's not at all obvious why a labeling would be helpful, but this theorem connects everything.
If a perfect matching exists in an equality subgraph of a feasible labeling $L$, then it is a maximum weighted matching.
Here's a proof. Let $M$ be the perfect matching in the equality subgraph of labeling $L$. Let $M^\prime$ be an alternative perfect matching. Then, we have that the value of the matching $M^\prime$ is \begin{align*} \sum_{(x,y) \in M^\prime} w(x,y) &\leq \sum_{(x,y) \in M^\prime} L(x) + L(y)~~~\text{since $L$ is feasible}\\ &= \sum_{(x,y) \in M} L(x) + L(y) \\ &= \sum_{(x,y) \in M} w(x,y), \end{align*} which is the value of the matching $M.$
Thus, our problem reduces to finding a perfect matching in the equality subgraph of a feasible labeling.
Hungarian Method
Now, we know enough terminology to describe the algorithm. The main idea is to iteratively find augmenting paths in the equality subgraph until all vertices in $X$ are matched. If such a path cannot be created, we need to modify the labeling so that the equality subgraph contains additional edges while the labeling remains feasible.
Algorithm
- Initialize with a feasible labeling.
- Initialize two sets to keep track of vertices in alternating path. $S \subseteq X$ and $T \subseteq Y.$ Initialize with $S = T = \emptyset.$
- Pick an unmatched vertex $x \in X$ and put it in $S.$
There are two options here:
- If there is a vertex $y \in (Y - T) \cap N(S),$ where $N(S)$ denotes the neighbors of the vertices in $S$ in the equality subgraph, then put $y$ in $T.$ This means that we add $y$ to our path. For the $x \in S$ such that $L(x) + L(y) = w(x,y)$, set $x$ as the previous vertex in the current alternating path.
- Otherwise, fix the labeling. Let $\Delta_y = \inf\{L(x) + L(y) - w(x, y) : x \in S\}.$ Let $\Delta = \inf\{\Delta_y : y \in Y - T\}.$ For each $x \in S,$ $L^\prime(x) = L(x) - \Delta.$ For each $y \in T,$ $L^\prime(y) = L(y) + \Delta.$ In this way, all edges remain feasible. Consider various $e = (x,y)$:
- If $x \not\in S$ and $y \not\in T$, $e$ is still feasible since we didn't touch the labels of $x$ and $y$. If this edge is part of the equality subgraph, and these vertices are matched, they still are.
- If $x \in S$ and $y \in T$, we decreased $L(x)$ by $\Delta$ and increased $L(y)$ by $\Delta$, so $L(x) + L(y) = w(x,y)$ still.
- If $x \in S$ and $y \not\in T$, $e$ is still feasible since we have decreased $L(x)$ by the minimum of all such $\Delta_y.$
- If $x \not\in S$ and $y \in T$, we only increased $L(y)$, so the edge is more feasible in a sense.
Now, if $\Delta_y = \Delta$ for some $y \in Y - T,$ then $y$ becomes an element of $N(S),$ and we can return to step 4.
- Now, we have just put $y$ in $T.$ We have two cases here:
- $y$ is unmatched. Add $y$ to the end of the alternating path. Our alternating path starts with some unmatched vertex $x \in S$ and ends with the unmatched $y.$ We have an augmenting path, so create a new bigger matching by inverting our path. Count the number of matched vertices. If it is $n$, we are done. Otherwise, go back to step 2 and repeat.
- $y$ is already matched, say, to $x^\prime$. We add two vertices to our path, so our path looks like $$x \longrightarrow \cdots \longrightarrow y \longrightarrow x^\prime,$$ where $x \in S$ is an unmatched vertex. Go to step 4 to find more vertices to add to our path.
Implementation
The algorithm is simple enough, but there are a few complications in implementing it, particularly finding an efficient way to calculate $\Delta$. If you read carefully, you'll note that if $|Y| \geq |X|,$ we will still find the optimal matching since we will use the edges of higher value first. Let $|X| = n$ and $|Y| = m.$ Let us define various data structures:
- vector<bool> S(n, false): keeps track of the $X$ vertices in our current alternating path
- vector<int> xLabel(n, -1): labeling of vertices in $X$
- vector<int> yLabel(m, 0): labeling of vertices in $Y$
- vector<int> slack(m, INT_MAX): slack[y] $= \Delta_y$
- vector<int> prevX(n, -1): prevX[x] is the vertex in $Y$ that comes before $x$ in our alternating path
- vector<int> prevY(m, -1): prevY[y] is the vertex in $X$ that comes before $y$ in our alternating path
- vector<int> xToY(n, -1): $x$ is matched to xToY[x] in $Y$
- vector<int> yToX(m, -1): $y$ is matched to yToX[y] in $X$
- stack<int, vector<int>> sStack: keeps track of the vertices that need to be added to $S$. A queue would work just as well.
- queue<int> tQueue: keeps track of the vertices that are eligible to be added to $T$ (neighbors of $S$ in the equality subgraph that aren't in $T$ already). A stack would work just as well.
Our input will be const vector<vector<int>> &weights, which is an $n \times m$ matrix of edge weights. We will return xToY when all vertices in $X$ are matched.
Let's go over the implementation step-by-step:
We initialize all the data structures. To create an initial feasible labeling, we set $L(y) = 0$ for all $y \in Y$ and set $L(x) = \max_{y \in Y}w(x,y).$ This is an $O(NM)$ operation. We set the number of matches to $0$ and also push an unmatched vertex into sStack. Here's the code:
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
int N = weights.size();
if (N == 0) return vector<int>();
int M = weights.back().size();
if (N > M) throw logic_error("|X| > |Y|, no match is possible");
vector<bool> S(N, false); // set to keep track of vertices in X on the left in alternating path
vector<int> xLabel(N, -1);
vector<int> yLabel(M, 0);
vector<int> slack(M, INT_MAX); // keep track of how far Y is from being matched with a vertex in S
for (int i = 0; i < N; ++i) { // initialize label with max edge to Y
for (int j = 0; j < M; ++j) xLabel[i] = max(xLabel[i], weights[i][j]);
}
// array for memorizing alternating path
vector<int> prevX(N, -1); // prevX[i] is vertex on the right (in Y) that comes before i in X
vector<int> prevY(M, -1); // prevY[j] is vertex on the left (in X) that comes before j in Y if slack[j] == 0; otherwise, closest vertex in S
// maps to keep track of assignment
vector<int> xToY(N, -1); // xToY[i] is vertex on the right (in Y) matched to i in X
vector<int> yToX(M, -1); // yToX[j] is vertex on the left (in X) matched to j in Y
stack<int, vector<int>> sStack; // vertices to add to S
queue<int> tQueue; // neighbors of S to add to T
int matches = 0;
sStack.push(0); // initialize with unmatched vertex
while (matches < N) {
...
}
return xToY;
}
Right now $S$ and $T$ are empty. The first order of business is to add something to $S$. When we add to $S$, we initialize slack, find the closest vertex in $S$ to each $y \in Y$, and add neighbors in the equality subgraph to tQueue. By closest vertex, I mean the vertex that would require changing the labeling the least. Here's the code.
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
...
while (matches < N) {
while (!sStack.empty()) { // add unmatched vertices to S
int x = sStack.top(); sStack.pop();
S[x] = true;
for (int j = 0; j < M; ++j) {
if (xLabel[x] + yLabel[j] - weights[x][j] < slack[j]) { // check for neighboring vertices and initialize slack
slack[j] = xLabel[x] + yLabel[j] - weights[x][j]; // slack >= 0, all feasible initially, and we decrease by min
prevY[j] = x; // tree looks like ... --> x -?-> j depending if slack[j] == 0 or not
if (slack[j] == 0) tQueue.push(j); // edge is in equality subgraph, it is a neighbor of S
}
}
}
...
}
return xToY;
}
Now, we have two options depending on whether tQueue is empty or not. If tQueue is empty, we fix the labeling. We'll restart the while loop with an empty sStack and at least 1 vertex in tQueue after this.
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
...
while (matches < N) {
...
if (tQueue.empty()) { // no neighboring vertices, fix labeling
// loop invariant is that |S| > |T|, since we add to S whenever we pop from tQueue
int delta = INT_MAX;
for (int j = 0; j < M; ++j) {
if (slack[j] > 0) delta = min(delta, slack[j]); // only try to add edges that are feasible and not in T
}
for (int i = 0; i < N; ++i) {
if (S[i]) xLabel[i] -= delta; // decrease label of vertices in S
}
for (int j = 0; j < M; ++j) {
if (slack[j] == 0) { // it's in T
yLabel[j] += delta;
} else if (slack[j] > 0 && prevY[j] != -1) { // check that it's feasible and connected to S
slack[j] -= delta; // decrease the distance from S since labels in S were decreased
if (slack[j] == 0) tQueue.push(j);
}
}
} else {
...
}
}
return xToY;
}
Now, we have ensured tQueue won't be empty. Again, we have two options here depending on whether the vertex at the head of the queue is matched or not. Let us first deal with the case where it is already matched, so we extend our alternating path with $\cdots \rightarrow y \rightarrow x^\prime.$ Then, we restart our while loop with $x^\prime$ in sStack since $x^\prime$ is part of the path now, too.
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
...
while (matches < N) {
...
if (tQueue.empty()) { // no neighboring vertices, fix labeling
...
} else { // either augment path or vertex is already matched so add to S
int y = tQueue.front(); tQueue.pop();
int x = yToX[y];
if (x == -1) {
...
} else { // vertex was already matched, new path is [something unmatched in S] --> ... --> prevY[y] --> y --> x
prevX[x] = y; // update alternating path with edge between x and y, recall prevY[y] is already set
sStack.push(x); // add this already matched vertex to S
}
}
}
return xToY;
}
On the other hand, if the head of the queue $y$ is unmatched, we have found an augmenting path to invert to create a new matching. After establishing this new matching, we discard our alternating path, clear $S$ and $T$, update slack, and count the total matches. If everything in $X$ is matched, we're done. Otherwise, put something in sStack, so we can begin building our new alternating path.
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
...
while (matches < N) {
...
if (tQueue.empty()) { // no neighboring vertices, fix labeling
...
} else { // either augment path or vertex is already matched so add to S
int y = tQueue.front(); tQueue.pop();
int x = yToX[y];
if (x == -1) {
int currentY = y;
while (currentY > -1) { // new path is [something unmatched in S] --> ... --> y
int currentX = prevY[currentY]; // go to left side
xToY[currentX] = currentY; yToX[currentY] = currentX;
currentY = prevX[currentX]; // go back to right side
}
for (int i = 0; i < N; ++i) prevX[i] = -1, S[i] = false; // reset path and remove everything from tree
for (int j = 0; j < M; ++j) prevY[j] = -1, slack[j] = INT_MAX; // reset path and slack
while (!tQueue.empty()) tQueue.pop(); // empty queue
// check for a perfect match
matches = 0;
for (int i = 0; i < N; ++i) {
if (xToY[i] != -1) { // if matched
++matches;
} else if (sStack.empty()) {
sStack.push(i); // put an unmatched left side node back in S to start
}
}
} else { // vertex was already matched, new path is [something unmatched in S] --> ... --> prevY[y] --> y --> x
...
}
}
}
return xToY;
}
All in all, the algorithm is $O(n^2m)$ since every time we build an augmenting path we match a vertex in $X$, of which there are $n$. We enter the main loop as often as $n$ times since our path can be up to length $2n$. There are several steps in the loop where we iterate over $Y$, which has size $m$, such as computing the slack or fixing labels. Here's the whole function together:
/* hungarian method for maximum weighted matching of a bipartite graph
* Consider a weighted bipartite graph G = (X,Y). X is vertices on left side, Y is vertices on the right side
* We must have |X| <= |Y|. Match each vertex on the left to the a distinct vertex on the right with maximum total weight of edges
*/
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
int N = weights.size();
if (N == 0) return vector<int>();
int M = weights.back().size();
if (N > M) throw logic_error("|X| > |Y|, no match is possible");
vector<bool> S(N, false); // set to keep track of vertices in X on the left in alternating path
vector<int> xLabel(N, -1);
vector<int> yLabel(M, 0);
vector<int> slack(M, INT_MAX); // keep track of how far Y is from being matched with a vertex in S
for (int i = 0; i < N; ++i) { // initialize label with max edge to Y
for (int j = 0; j < M; ++j) xLabel[i] = max(xLabel[i], weights[i][j]);
}
// array for memorizing alternating path
vector<int> prevX(N, -1); // prevX[i] is vertex on the right (in Y) that comes before i in X
vector<int> prevY(M, -1); // prevY[j] is vertex on the left (in X) that comes before j in Y if slack[j] == 0; otherwise, closest vertex in S
// maps to keep track of assignment
vector<int> xToY(N, -1); // xToY[i] is vertex on the right (in Y) matched to i in X
vector<int> yToX(M, -1); // yToX[j] is vertex on the left (in X) matched to j in Y
stack<int, vector<int>> sStack; // vertices to add to S
queue<int> tQueue; // neighbors of S to add to T
int matches = 0;
sStack.push(0); // initialize with unmatched vertex
while (matches < N) {
while (!sStack.empty()) { // add unmatched vertices to S
int x = sStack.top(); sStack.pop();
S[x] = true;
for (int j = 0; j < M; ++j) {
if (xLabel[x] + yLabel[j] - weights[x][j] < slack[j]) { // check for neighboring vertices and initialize slack
slack[j] = xLabel[x] + yLabel[j] - weights[x][j]; // slack >= 0, all feasible initially, and we decrease by min
prevY[j] = x; // tree looks like ... --> x -?-> j depending if slack[j] == 0 or not
if (slack[j] == 0) tQueue.push(j); // edge is in equality subgraph, it is a neighbor of S
}
}
}
if (tQueue.empty()) { // no neighboring vertices, fix labeling
// loop invariant is that |S| > |T|, since we add to S whenever we pop from tQueue
int delta = INT_MAX;
for (int j = 0; j < M; ++j) {
if (slack[j] > 0) delta = min(delta, slack[j]); // only try to add edges that are feasible and not in T
}
for (int i = 0; i < N; ++i) {
if (S[i]) xLabel[i] -= delta; // decrease label of vertices in S
}
for (int j = 0; j < M; ++j) {
if (slack[j] == 0) { // it's in T
yLabel[j] += delta;
} else if (slack[j] > 0 && prevY[j] != -1) { // check that it's feasible and connected to S
slack[j] -= delta; // decrease the distance from S since labels in S were decreased
if (slack[j] == 0) tQueue.push(j);
}
}
} else { // either augment path or vertex is already matched so add to S
int y = tQueue.front(); tQueue.pop();
int x = yToX[y];
if (x == -1) {
int currentY = y;
while (currentY > -1) { // new path is [something unmatched in S] --> ... --> y
int currentX = prevY[currentY]; // go to left side
xToY[currentX] = currentY; yToX[currentY] = currentX;
currentY = prevX[currentX]; // go back to right side
}
for (int i = 0; i < N; ++i) prevX[i] = -1, S[i] = false; // reset path and remove everything from tree
for (int j = 0; j < M; ++j) prevY[j] = -1, slack[j] = INT_MAX; // reset path and slack
while (!tQueue.empty()) tQueue.pop(); // empty queue
// check for a perfect match
matches = 0;
for (int i = 0; i < N; ++i) {
if (xToY[i] != -1) { // if matched
++matches;
} else if (sStack.empty()) {
sStack.push(i); // put an unmatched left side node back in S to start
}
}
} else { // vertex was already matched, new path is [something unmatched in S] --> ... --> prevY[y] --> y --> x
prevX[x] = y; // update alternating path with edge between x and y, recall prevY[y] is already set
sStack.push(x); // add this already matched vertex to S
}
}
}
return xToY;
}
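As a quick sanity check, here's a toy main of my own (assuming the includes and using namespace std from the full solution further below); with weights $\begin{pmatrix}3 & 6\\ 5 & 4\end{pmatrix},$ the optimal assignment takes the anti-diagonal for a total weight of $6 + 5 = 11$:
int main() {
vector<vector<int>> weights{{3, 6}, {5, 4}}; // my toy example, not the post's graph
vector<int> xToY = findMaximumAssignment(weights);
for (int i = 0; i < (int) xToY.size(); ++i) cout << 'x' << i << " -> y" << xToY[i] << '\n'; // x0 -> y1, x1 -> y0
return 0;
}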
Sample Run
Using our graph, we will initialize with $L(x) = 10$ for all $x \in X$ and $L(y) = 0$ for all $y \in Y$. In the first run, we greedily match $x_1$ to $y_1$. I'll omit feasible edges since all edges are always feasible. I'll denote edges in the equality subgraph in green, matches in orange, edges in the alternating path and matching in red, and edges in the alternating path not in the matching in blue.
On the second run, we add unmatched $x_2$ to $S$. Only $y_1$ is a neighbor of $S$, so we add $y_1$ to $T$. $y_1$ is already matched to $x_1$, so we add $x_1$ to $S$, too.
Now, $S$ has no neighbors in the equality subgraph not in $T$, so we need to fix the labeling. We find that $(x_1,y_4)$ is the edge that is closest to being in the equality subgraph. We fix our labeling, add $y_4$ to $T$, then invert our augmenting path to get a new matching.
The other vertices will be greedily matched, so we end up with the total weight being 39.
Case Study
My motivation for learning this algorithm was Costly Labels in Round 2 of the 2016 Facebook Hacker Cup. Given a tree (a graph such that there is exactly one path that doesn't cross itself between any two vertices), we want to color the vertices to minimize the cost. Coloring vertex $i$ color $k$ costs us $C_{i,k}$, which is given. Also, if any pair of a vertex's neighbors share the same color, then we have to pay a penalty factor $P$.
The number of colors and vertices is rather small, so we can solve this using some clever brute force with dynamic programming. To my credit, I actually was able to discover the general dynamic programming regime myself.
We take advantage of the tree structure. Suppose for each vertex, we compute $K(i, k_p, k_i)$, where $i$ is the current vertex, $k_p$ is the color of the parent, and $k_i$ is the color of $i$. $K(i, k_p, k_i)$ will be the minimum cost of this vertex plus all its children. Our solution will be $\min_k K(0, 0, k)$ if we root our tree at vertex $0$.
Now, $K(i, k_p, k_i) = \min(\text{cost with penalty},\text{cost without penalty})$. The cost with penalty is to greedily color all the child vertices, then add $P$. The cost without penalty is where the Hungarian method comes in. We need to assign each child a color different from the parent in a way that costs least. Thus, we view $X$ as the children of the current vertex and $Y$ as the colors. We can modify the Hungarian method to do this by setting edge weights to be $M - w(x,y)$, where $M$ is a large number. We give edges going to the parent color weight $0$, so they won't be used.
All in all, we have a solution that is just barely fast enough as it takes around 30 seconds to run on my computer.
#include <algorithm>
#include <ctime>
#include <exception>
#include <iostream>
#include <queue>
#include <set>
#include <stack>
#include <vector>
using namespace std;
/* hungarian method for maximum weighted matching of a bipartite graph
* Consider a weighted bipartite graph G = (X,Y). X is vertices on left side, Y is vertices on the right side
* We must have |X| <= |Y|. Match each vertex on the left to the a distinct vertex on the right with maximum total weight of edges
*/
vector<int> findMaximumAssignment(const vector<vector<int>> &weights) {
...
return xToY;
}
struct Tree {
int root;
vector<int> parent;
vector<set<int>> children;
explicit Tree(int root, int N) : root(root), parent(N, -1), children(N) { }
};
// root the tree, undefined behavior if graph is not a tree
Tree rootTree(int root, const vector<set<int>> &adjacencyList) {
int N = adjacencyList.size();
Tree tree(root, N);
tree.parent[root] = -1;
stack<int> s; s.push(root);
while (!s.empty()) {
int currentVertex = s.top(); s.pop();
for (int nextVertex : adjacencyList[currentVertex]) {
if (nextVertex != tree.parent[currentVertex]) { // don't recurse into parent
if (tree.parent[nextVertex] != -1) throw logic_error("a cycle was found, this graph is not a tree");
tree.children[currentVertex].insert(nextVertex);
tree.parent[nextVertex] = currentVertex;
s.push(nextVertex);
}
}
}
return tree;
}
int computeMinimumCostHelper(const Tree &tree, const vector<vector<int>> &C, int P,
int vertex, int parentColor, int vertexColor,
vector<vector<vector<int>>> &memo) {
int N = C.size(); // number of vertices
int K = C.back().size(); // number of colors
if (memo[vertex][parentColor][vertexColor] != -1) return memo[vertex][parentColor][vertexColor];
int parent = tree.parent[vertex];
int minCost = C[vertex][vertexColor] + P; // first calculate cost if we're willing to take penalty
vector<vector<int>> childCostByColor; childCostByColor.reserve(tree.children[vertex].size()); // prepare for computation without penalty
for (int child : tree.children[vertex]) {
int minChildCost = INT_MAX;
childCostByColor.emplace_back(); childCostByColor.back().reserve(K);
for (int childColor = 0; childColor < K; ++childColor) {
int childCost = computeMinimumCostHelper(tree, C, P, child, vertexColor, childColor, memo);
// weight 0 effectively ensures that the parent color will not be used for the children, invert edges to find max assignment
childCostByColor.back().push_back(parent != -1 && parentColor == childColor ? 0 : INT_MAX - childCost);
minChildCost = min(minChildCost, childCost); // if we're taking penalty just take min cost
}
minCost += minChildCost;
}
// only count parent if it exists, check that we don't have too many children
if (childCostByColor.size() < K || (parent == -1 && childCostByColor.size() == K)) {
int noPenaltyCost = C[vertex][vertexColor];
vector<int> optimalAssignment = findMaximumAssignment(childCostByColor); // assign children to distinct colors
for (int i = 0; i < optimalAssignment.size(); ++i) noPenaltyCost += INT_MAX - childCostByColor[i][optimalAssignment[i]];
minCost = min(minCost, noPenaltyCost);
}
if (parent == -1) {
for (int k = 0; k < K; ++k) memo[vertex][k][vertexColor] = minCost; // doesn't matter what parent color is if no parent
} else {
memo[vertex][parentColor][vertexColor] = minCost;
}
return minCost;
}
int computeMinimumCost(const Tree &tree, const vector<vector<int>> &C, int P) {
int N = C.size(); // number of vertices
int K = C.back().size(); // number of colors
// memo[vertex index][parent color][vertex color] = cost of coloring current vertex and children (excludes parent cost)
vector<vector<vector<int>>> memo(N, vector<vector<int>>(K, vector<int>(K, -1)));
int minimumCost = INT_MAX;
for (int k = 0; k < K; ++k) { // vary color of root since root has no parent
minimumCost = min(minimumCost,
computeMinimumCostHelper(tree, C, P, tree.root, 0, k, memo));
}
return minimumCost;
}
int main(int argc, char *argv[]) {
clock_t startTime = clock();
ios::sync_with_stdio(false); cin.tie(NULL);
int T;
cin >> T;
for (int t = 1; t <= T; ++t) {
int N, K, P; cin >> N >> K >> P; // total vertices, max label, and penalty fee
// penalty occurs when a node has at least one pair of neighbors with the same color
vector<vector<int>> C; C.reserve(N); // C[i][j] is the cost of coloring vertex i with color j
for (int i = 0; i < N; ++i) {
C.emplace_back(); C.back().reserve(K);
for (int j = 0; j < K; ++j) {
int c; cin >> c; C.back().push_back(c);
}
}
vector<set<int>> adjacencyList(N);
for (int i = 0; i < N - 1; ++i) {
int a, b; cin >> a >> b;
--a; --b; // convert to 0-based indexing
adjacencyList[a].insert(b);
adjacencyList[b].insert(a);
}
Tree tree = rootTree(0, adjacencyList);
cout << "Case #" << t << ": "
<< computeMinimumCost(tree, C, P)
<< '\n';
}
// finish
cout << flush;
double duration = (clock() - startTime) / (double) CLOCKS_PER_SEC;
cerr << "Time taken (seconds): " << duration << endl;
return 0;
}
In Snapstream Searcher, a web application I'm developing, I want to enable the user to search for specific lists of programs. This requires a database where users can manage and edit these lists. Rather than creating a nice RESTful API to manage the reading and editing of this database, I naturally decided to be lazy and just have users use Google Sheets instead.
Doing this wasn't actually as straightforward as I thought it would be, so I'm writing some notes to myself on the process. There are several moving parts here:
- the Google sheet itself
- App Script to create an API to get the sheet data
- Google Developer Console to create an OAuth token, so you can use the API
- client-side JavaScript to call the API
First, here's a working example. You can edit this sheet. When you edit it, you will see the changes reflected here, JSFiddle. Here's the API that I created in this app script. You'll probably need to authorize yourself with a Google account and refresh the page for it to work.
Directions
- Create a Google Sheet in your Google Drive and make it editable by users with a link by clicking share in the upper right.
- Go to the Script Editor and make your API to get the sheet data. You can find a detailed reference here, Spreadsheet Service. The API seemed pretty intuitive. You can copy and paste this code in and change the URL to your document's:
function getData() {
  // link to your google doc
  var spreadSheet = SpreadsheetApp.openByUrl('https://docs.google.com/spreadsheets/d/<YOUR DOCUMENT>/edit');
  var sheet = spreadSheet.getActiveSheet();
  var dataRange = sheet.getDataRange();
  var values = dataRange.getValues();
  return values;
}
- Now, follow Step 1 of this guide to make an OAuth token: JavaScript Quickstart
- Then, deploy as an API according to this guide: Using the Execution API
- Now, on the client side, create an HTML file something like this, where you substitute your project key and client ID. If you need other scopes, check Project Properties in the Script Editor.
<!DOCTYPE html>
<html>
  <head>
    <style type="text/css">
      table { border-collapse: collapse; }
      td, th { padding: 10px; border: 1px solid gray; }
    </style>
    <script>
      var CLIENT_ID = '<CLIENT_ID>';
      var SCOPES = ['https://www.googleapis.com/auth/spreadsheets'];

      /**
       * Check if current user has authorized this application.
       */
      function checkAuth() {
        gapi.auth.authorize(
          {
            'client_id': CLIENT_ID,
            'scope': SCOPES.join(' '),
            'immediate': true
          }, handleAuthResult);
      }

      /**
       * Handle response from authorization server.
       *
       * @param {Object} authResult Authorization result.
       */
      function handleAuthResult(authResult) {
        var authorizeDiv = document.getElementById('authorize-div');
        if (authResult && !authResult.error) {
          // Hide auth UI, then load client library.
          authorizeDiv.style.display = 'none';
          callScriptFunction();
        } else {
          // Show auth UI, allowing the user to initiate authorization by
          // clicking authorize button.
          authorizeDiv.style.display = 'inline';
        }
      }

      /**
       * Initiate auth flow in response to user clicking authorize button.
       *
       * @param {Event} event Button click event.
       */
      function handleAuthClick(event) {
        gapi.auth.authorize(
          {client_id: CLIENT_ID, scope: SCOPES, immediate: false},
          handleAuthResult);
        return false;
      }

      function callScriptFunction() {
        var scriptId = "<PROJECT_KEY>"; // aka Project key
        // Create an execution request object.
        var request = { 'function': 'getData' };
        var op = gapi.client.request({
          'root': 'https://script.googleapis.com',
          'path': 'v1/scripts/' + scriptId + ':run',
          'method': 'POST',
          'body': request
        });
        op.execute(function(res) {
          var htmlTable = '<table>';
          res.response.result.forEach(function(row, idx, rows) {
            if (idx == 0) {
              htmlTable += '<tr><th>' + row.join('</th><th>') + '</th></tr>';
            } else {
              htmlTable += '<tr><td>' + row.join('</td><td>') + '</td></tr>';
            }
          });
          htmlTable += '</table>';
          document.getElementById('output').innerHTML = htmlTable;
        });
      }
    </script>
    <script src="https://apis.google.com/js/client.js?onload=checkAuth"></script>
  </head>
  <body>
    <div id="authorize-div" style="display: none">
      <span>Authorize access to Google Apps Script Execution API</span>
      <!--Button for the user to click to initiate auth sequence -->
      <button id="authorize-button" onclick="handleAuthClick(event)">
        Authorize
      </button>
    </div>
    <div id="output"></div>
  </body>
</html>
- To test it out, you can run a local server with python -m http.server 8000. Make sure you've added http://localhost:8000 to your Authorized JavaScript origins when creating the OAuth token.
- You should see your sheet in the form of an HTML table.
My good roommate Masato made donuts and left behind a bunch of dough (recipe). Clearly, I decided to turn the dough into cinnamon rolls!
It's pretty easy. Just roll out the dough and cover it with a cinnamon sugar-butter paste. The recipe for my paste is:
- 1/2 cup granulated sugar
- 1/2 cup brown sugar
- 3 tablespoons ground cinnamon
- 1/2 heaping teaspoon of nutmeg
- 1/2 heaping teaspoon of cloves
- 1/2 cup (1 stick) of unsalted butter
Just cream the butter and sugar together. Roll the dough so it's about 9 inches in height and 18 inches in width. Cover the dough with the paste. Cut into 2 inch vertical strips to give you 9 rolls. Roll them up and dip them in the remaining paste. Bake at 350 degrees Fahrenheit for 20 minutes. Serve with icing. I used this Ermine Icing. Yay!
In other news, life is okay. I just got back from visiting Duke, where I interviewed for their Statistics PhD program. I thought it went well, but I'm often wrong about these things. In any case, it was great catching up with some old Duke friends. It was a particularly nice surprise to run into a friend from Boston.
The fact that I've been rejected at 2 schools so far gives me some anxiety, but I trust that things will turn out all right no matter what happens.
After getting a perfect score in the first round, I crashed and burned in round 2, and only managed to get 1 problem out of 4 correct. For what it's worth, my solution on the 2nd problem got over 80% of the test cases right. For the 3rd problem, I managed to finish a correct solution 10 minutes after the contest was over. As punishment for such a dismal performance, I'm forcing myself to write up the solutions to the problems of this round.
Boomerang Decoration
This problem was the easiest as it was the only one that I got full credit for. Here's a link to the problem statement, Boomerang Decoration.
I employed a pretty common strategy. Basically, there are $N$ spots to paint. Let us index them $0,1,\ldots,N-1.$ Now, consider a pivot point $p.$ We will paint everything $i < p$ from the left using a prefix. We will paint everything $i \geq p$ from the right using a suffix. Let $L(p)$ be the number of prefix paintings we need if we pivot at point $p.$ Let $R(p)$ be the number of suffix paintings we need if we pivot at point $p.$ Then, we have that our solution is $$\min_{p \in \{0,1,\ldots,N\}} \max\left(L(p), R(p)\right),$$ so we just need to compute $L(p)$ and $R(p)$, which we can do with a recurrence relation and dynamic programming.
Let $x_0,x_1,\ldots,x_{N-1}$ be the initial left half of the boomerang. Let $y_0,y_1,\ldots,y_{N-1}$ be the right half of the boomerang that we're trying to transform the left side into. Let $L^*(p)$ be the number of blocks seen so far in the target, where a block is defined as a contiguous sequence of identical letters among $y_0,y_1,\ldots,y_{p-1}.$ Note that we count blocks of the target $y$, not of $x$: each prefix painting applies a single letter, so painting from the longest prefix inward, each block of the target in the painted region costs exactly one painting. Clearly, $L(0) = L^*(0) = 0$ since we're not painting anything in that case. Then, for $p = 1,2,\ldots,N$, \begin{equation} L^*(p) = \begin{cases} 1 &\text{if}~p=1 \\ L^*(p - 1) &\text{if}~y_{p-1} = y_{p-2} \\ L^*(p - 1) + 1 &\text{if}~y_{p-1} \neq y_{p-2}, \end{cases} ~\text{and}~ L(p) = \begin{cases} L(p-1) &\text{if}~x_{p-1} = y_{p-1} \\ L^*(p) &\text{if}~x_{p-1} \neq y_{p-1}, \end{cases} \end{equation} since if the letters match, there is no need to paint, and if they don't, we only need to paint once for each block of the target.
Similarly, we define $R^*(p)$ as the number of blocks of the target seen from the right, that is, among $y_p,y_{p+1},\ldots,y_{N-1}.$ $R(N) = R^*(N) = 0$ since the $N$th index doesn't actually exist. Then, for $p = N-1,N-2,\ldots,0$, \begin{equation} R^*(p) = \begin{cases} 1 &\text{if}~p=N-1 \\ R^*(p + 1) &\text{if}~y_{p} = y_{p+1} \\ R^*(p + 1) + 1 &\text{if}~y_{p} \neq y_{p+1}, \end{cases} ~\text{and}~ R(p) = \begin{cases} R(p+1) &\text{if}~x_{p} = y_{p} \\ R^*(p) &\text{if}~x_{p} \neq y_{p}. \end{cases} \end{equation}
Thus, our run time is $O(N)$. Here's the code that implements this idea.
#include <algorithm>
#include <climits>
#include <iostream>
#include <string>
#include <vector>
using namespace std;
int countSteps(const string &left, const string &right) {
int N = left.length();
vector<int> leftSteps; leftSteps.reserve(N + 1);
leftSteps.push_back(0);
int leftBlocks = 0;
for (int i = 0; i < N; ++i) {
if (i == 0 || right[i] != right[i - 1]) ++leftBlocks;
if (left[i] == right[i]) {
leftSteps.push_back(leftSteps.back());
} else {
leftSteps.push_back(leftBlocks);
}
}
vector<int> rightSteps(N + 1, 0);
int rightBlocks = 0;
for (int i = N - 1; i >= 0; --i) {
if (i == N - 1 || right[i] != right[i + 1]) ++rightBlocks;
if (left[i] == right[i]) {
rightSteps[i] = rightSteps[i + 1];
} else {
rightSteps[i] = rightBlocks;
}
}
int minSteps = INT_MAX;
for (int i = 0; i <= N; ++i) {
// paint everything strictly to the left, paint everything to right including i
minSteps = min(minSteps, max(leftSteps[i], rightSteps[i]));
}
return minSteps;
}
int main(int argc, char *argv[]) {
ios::sync_with_stdio(false); cin.tie(NULL);
int T; cin >> T;
for (int t = 1; t <= T; ++t) {
int N; cin >> N;
string left, right;
cin >> left >> right;
cout << "Case #" << t << ": "
<< countSteps(left, right)
<< '\n';
}
cout << flush;
return 0;
}
Carnival Coins
This problem is a probability problem that also makes use of dynamic programming and a recurrence relation. Here's the problem statement, Carnival Coins. I probably spent too long worrying about precision and trying to find a closed-form solution.
In any case, for this problem, given $N$ coins, we need to calculate the binomial distribution for all $n = 0,1,2,\ldots,N$ with probability $p$. Fix $p \in [0,1]$. Let $X_{n,k}$ be the probability $\mathbb{P}(X_n = k),$ where $X_n \sim \operatorname{Binomial}(n,p)$, that is, $X_n$ is the number of heads if we flip $n$ coins. We use a similar idea to counting an unordered set of $k$ objects from $n$ objects without replacement in Counting Various Things.
Clearly, $X_{n,k} = 0$ if $k > n$. Also, $X_{0,0} = 1$. Now, let $n \geq 1.$ Consider the $n$th coin. It's heads with probability $p$ and tails with probability $1 - p$, so for $k = 0,1,\ldots,n$, we have that \begin{equation} X_{n,k} = \begin{cases} (1-p)X_{n-1,0} &\text{if}~k=0 \\ (1-p)X_{n-1,k} + pX_{n-1,k-1} &\text{if}~k=1,2,\ldots,n-1 \\ pX_{n-1,n-1} &\text{if}~k=n \end{cases} \end{equation} since if we flip tails, we must have $k$ heads in the first $n-1$ coins, and if we flip heads, we must have $k - 1$ heads in the first $n-1$ coins.
Now, the problem states that we also win if we get more than $K$ heads, so we really need the tail distribution. Define $Y_{n,k} = \mathbb{P}(X_n \geq k)$. Then, $Y_{n,n} = X_{n,n}$ since we can't have more than $n$ heads, and for $k = 0,1,\ldots,n-1$, \begin{equation} Y_{n,k} = X_{n,k} + Y_{n,k+1}. \end{equation}
We can compute this all in $O(N^2)$ time. I was hesitant to do this calculation since $p$ is a double, and I was afraid of the loss of precision, but it turns out that a long double table works.
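As a quick sanity check (this snippet is my own illustration, not part of the contest solution), we can verify the recurrence against a hand computation: for $n = 4$ and $p = 1/2$, we should get $\mathbb{P}(X_4 \geq 2) = (6 + 4 + 1)/16 = 0.6875.$
#include <iostream>
using namespace std;
int main() {
  const int N = 4; const double p = 0.5;
  long double P[N + 1][N + 1] = {}; // P[n][k] = P(X_n = k), zero-initialized
  P[0][0] = 1;
  for (int n = 1; n <= N; ++n) {
    P[n][0] = (1 - p)*P[n - 1][0];
    for (int k = 1; k < n; ++k) P[n][k] = p*P[n - 1][k - 1] + (1 - p)*P[n - 1][k];
    P[n][n] = p*P[n - 1][n - 1];
  }
  long double tail = 0;
  for (int k = 2; k <= N; ++k) tail += P[N][k]; // P(X_4 >= 2)
  cout << (double) tail << endl; // prints 0.6875
  return 0;
}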
Now, suppose we want to maximize expected value with $N$ coins. We can play all $N$ coins at once; then, our probability of winning is $Y_{N,K}.$ Our second option is to break up our coins into two groups of sizes, say, $m$ and $N-m$. These two groups may further be broken up into more groups. Suppose we know the optimal strategy for $n = 1,2,\ldots,N-1$ coins. Let $E[n]$ be the maximum expected value when playing with $n$ coins. The maximum expected value of playing with the two groups, $m$ coins and $N-m$ coins, is $E[m] + E[N-m]$ by linearity of expectation.
This strategy only makes sense if both of the groups are of size at least $K$. Clearly, $E[K] = Y_{K,K} = X_{K,K}.$ Then, for all $n = 0,1,2, \ldots, N,$ we have \begin{equation} E[n] = \begin{cases} 0, &\text{if}~n < K \\ Y_{K,K}, &\text{if}~n = K \\ \max\left(Y_{n,K}, \sup\left\{E[m] + E[n-m] : m = K,K+1,\ldots,\lfloor n/2\rfloor\right\}\right), &\text{if}~n = K + 1,\ldots,N. \end{cases} \end{equation}
Our solution is $E[N]$. Both the probability table and the expected values take $O(N^2)$ time to compute, so the running time is $O(N^2)$. Here's the code.
#include <iostream>
#include <iomanip>
#include <vector>
using namespace std;
long double P[3001][3001]; // preallocate memory for the probability table
double computeExpectedPrizes(int N, int K, double p) {
// P[n][k] = P(X_n >= k), where X_n ~ Binomial(n, p)
// first calculate P(X_n = k)
P[0][0] = 1;
P[1][0] = 1 - p; P[1][1] = p;
for (int n = 2; n <= N; ++n) {
P[n][0] = P[n-1][0]*(1-p);
for (int k = 1; k < n; ++k) { // k = n is handled below; not looping to N also avoids stale entries from earlier test cases
// probability of hitting k when nth coin is heads and nth coin is tails
P[n][k] = p*P[n-1][k-1] + (1-p)*P[n-1][k];
}
P[n][n] = P[n-1][n-1]*p;
}
// make cumulative
for (int n = 1; n <= N; ++n) {
for (int k = n - 1; k >= 0; --k) P[n][k] += P[n][k+1];
}
vector<long double> maxExpectedValue(N + 1, 0); // maxExpectedValue[n] is max expected value for n coins
// two cases: all coins in 1 group or coins in more than 1 group
for (int n = 0; n <= N; ++n) maxExpectedValue[n] = n >= K ? P[n][K] : 0; // put all the coins in 1 group; winning needs at least K coins
for (int n = 1; n <= N; ++n) {
// loop invariant is that we know maxExpectedValue for 0,...,n - 1
for (int m = K; m <= n/2; ++m) { // just do half by symmetry
// split coins into two parts, play separately with each part
maxExpectedValue[n] = max(maxExpectedValue[n],
maxExpectedValue[m] + maxExpectedValue[n - m]);
}
}
return maxExpectedValue.back();
}
int main(int argc, char *argv[]) {
ios::sync_with_stdio(false); cin.tie(NULL);
int T; cin >> T;
cout << fixed;
cout << setprecision(9);
for (int t = 1; t <= T; ++t) {
int N, K; // coins and goal
cin >> N >> K;
double p; cin >> p; // probability of coin landing heads
cout << "Case #" << t << ": "
<< computeExpectedPrizes(N, K, p)
<< '\n';
}
cout << flush;
return 0;
}
Snakes and Ladders
This problem was the one I finished 10 minutes after the contest ended. I had everything right, but for some reason, I got stuck on deriving a fairly simple recurrence relation in the last 10 minutes. Here's the problem statement, Snakes and Ladders.
A couple of key insights must be made here.
- A snake occurs between two ladders of the same height, so group the ladders by height.
- Taller ladders obstruct snakes, so process the ladders by descending height, and store obstructions as we go.
- Deal with the ladders in unobstructed blocks, so sort them by position to put the ladders in contiguous blocks.
Now, the cost of feeding a snake is the square of its length, which makes processing each unobstructed block a little bit tricky. This is where I got stuck during the contest. The naive way is to compute all the pairwise distances and square them, but that is quadratic in the number of ladders per block, which isn't fast enough. Here's a better method.
Let $x_1,x_2,\ldots,x_N$ be the positions of our ladders, such that $x_1 \leq x_2 \leq \cdots \leq x_N$. Now, for $n \geq 2,$ let $$ C_n = \sum_{k=1}^{n-1} (x_n - x_k)^2 ~\text{and}~ S_n = \sum_{k=1}^{n-1} (x_n - x_k), $$ so the total cost of feeding this block is $C = \sum_{n=2}^N C_n$. We have that \begin{align*} C_n &= \sum_{k=1}^{n-1} (x_n - x_k)^2 = \sum_{k=1}^{n-1}\left((x_n - x_{n-1}) + (x_{n-1} - x_k)\right)^2 \\ &= \sum_{k=1}^{n-1}\left[(x_n - x_{n-1})^2 + 2(x_n-x_{n-1})(x_{n-1} - x_k) + (x_{n-1}-x_k)^2\right]\\ &= C_{n-1} + (n-1)(x_n - x_{n-1})^2 + 2(x_n-x_{n-1})\sum_{k=1}^{n-1}(x_{n-1} - x_k)\\ &= C_{n-1} + (n-1)(x_n - x_{n-1})^2 + 2(x_n-x_{n-1})S_{n-1} \end{align*} since the $k = n - 1$ term drops out of the summation. Then, we can update $S_n = S_{n-1} + (n-1)(x_n - x_{n-1}).$ We let $C_1 = S_1 = 0.$ Thus, we can compute $C_n$ in $O(1)$ time if we already know $C_{n-1}.$
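To see the trick in isolation from the block bookkeeping in the full solution below, here is a minimal standalone sketch (my own illustration, not the contest code) that computes $\sum_{i<j} (x_j - x_i)^2$ for sorted positions in $O(N)$, ignoring the modular arithmetic:
#include <iostream>
#include <vector>
using namespace std;
// sum of (x[j] - x[i])^2 over all pairs i < j for sorted x, via the running sums above
long long sumSquaredPairwiseDistances(const vector<long long> &x) {
  long long total = 0;
  long long C = 0; // C_n: sum of squared distances from x[n] to all earlier points
  long long S = 0; // S_n: sum of distances from x[n] to all earlier points
  for (size_t n = 1; n < x.size(); ++n) {
    long long d = x[n] - x[n - 1];
    C += (long long) n * d * d + 2 * d * S; // with 0-based indexing, x[n] has n earlier points, so (n-1) becomes n
    S += (long long) n * d;
    total += C;
  }
  return total;
}
int main() {
  vector<long long> x = {0, 1, 3};
  cout << sumSquaredPairwiseDistances(x) << endl; // (1-0)^2 + (3-0)^2 + (3-1)^2 = 14
  return 0;
}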
Since we only look at each ladder once, the biggest cost is sorting, so the running time is $O(N\log N)$, where $N$ is the number of ladders. Here's the code.
#include <algorithm>
#include <iostream>
#include <map>
#include <set>
#include <vector>
using namespace std;
const int MOD = 1000000007;
int computeFeedingCost(const map<int, vector<int>> &ladders) {
set<int> blockEnds; blockEnds.insert(1000000000); // block delimiter
long long cost = 0;
// go through heights in decreasing order
for (map<int, vector<int>>::const_reverse_iterator hIt = ladders.crbegin(); hIt != ladders.crend(); ++hIt) {
int currentLadder = 0;
int N = (hIt -> second).size(); // number of ladders at this height
for (int blockEnd : blockEnds) { // go block by block, where blocks are delimited by the sorted positions in blockEnds
int blockStart = currentLadder; // remember the beginning of the block
long long xSum = 0;
long long xSquaredSum = 0;
while (currentLadder < N && (hIt -> second)[currentLadder] <= blockEnd) {
if (currentLadder > blockStart) {
// difference in position from this ladder to previous ladder
long long xDiff = (hIt -> second)[currentLadder] - (hIt -> second)[currentLadder-1];
long long xDiffSquared = xDiff*xDiff % MOD; // reduce before multiplying by the count to avoid overflow
xSquaredSum += (currentLadder - blockStart)*xDiffSquared + 2*xDiff*xSum; xSquaredSum %= MOD;
xSum += (currentLadder - blockStart)*xDiff; xSum %= MOD;
cost += xSquaredSum; cost %= MOD;
}
if ((hIt -> second)[currentLadder] == blockEnd) {
break; // start next block from this ladder
} else {
++currentLadder;
}
}
}
for (int newBlockEnd : hIt -> second) blockEnds.insert(newBlockEnd);
}
return cost;
}
int main(int argc, char *argv[]) {
ios::sync_with_stdio(false); cin.tie(NULL);
int T; cin >> T;
for (int t = 1; t <= T; ++t) {
int N; // number of ladders
cin >> N;
map<int, vector<int>> ladders; // ladders by height
for (int n = 0; n < N; ++n) {
int x, h; cin >> x >> h; // ladder position and height
ladders[h].push_back(x);
}
for (map<int, vector<int>>::iterator it = ladders.begin(); it != ladders.end(); ++it) {
// within each height sort by position
sort((it -> second).begin(), (it -> second).end());
}
cout << "Case #" << t << ": "
<< computeFeedingCost(ladders)
<< '\n';
}
cout << flush;
return 0;
}
Costly Labels
This problem is much more involved. I'll write about it in a separate post, Assignment Problem and the Hungarian Method.