# approximation algorithms

Algorithms that relax the requirement of finding the optimal solution while preserving reliability and efficiency.

An $\alpha$-approximation algorithm outputs a solution $S$ in polynomial time such that

- $\frac{\text{cost}(S)}{\text{cost}(\text{OPT})} \le \alpha$ for minimization problems
- $\frac{\text{profit}(S)}{\text{profit}(\text{OPT})} \ge \alpha$ for maximization problems

The goal is to get $\alpha$ as close to $1$ as possible.

## min-weight vertex cover

Solving the LP relaxation gives an optimal fractional solution $x^*$. We round by picking the vertex cover $C = \{v \in V : x_v^* \ge \frac{1}{2}\}$.

$C$ is a feasible solution and its weight is at most twice that of the optimal one: every edge constraint $x_u + x_v \ge 1$ forces at least one endpoint to $\frac{1}{2}$ or above, and rounding at most doubles each $x_v^*$.

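A minimal sketch of this rounding using `scipy.optimize.linprog`; the triangle instance and its weights are made up for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Example: weighted graph given as (u, v) edges over vertices 0..n-1.
weights = np.array([1.0, 2.0, 1.0])   # w_v for v = 0, 1, 2
edges = [(0, 1), (1, 2), (0, 2)]      # a triangle

# LP relaxation: minimize sum w_v x_v  s.t.  x_u + x_v >= 1 per edge,
# 0 <= x <= 1. linprog wants A_ub @ x <= b_ub, so negate the constraints.
A_ub = np.zeros((len(edges), len(weights)))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(c=weights, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(weights))

# Round: keep every vertex with x_v^* >= 1/2. Each edge constraint forces
# at least one endpoint above 1/2, so C covers every edge, and rounding
# at most doubles each x_v^*, giving the 2-approximation.
C = {v for v, x in enumerate(res.x) if x >= 0.5}
print(C, sum(weights[v] for v in C))
```
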
## integrality gap

The ratio describing the quality of the LP relaxation. Let $\mathcal I$ be the set of all instances of a problem. Then for a minimization problem the integrality gap is

$$
g = \max_{I \in \mathcal I}\frac{\text{OPT}(I)}{\text{OPT}_{LP}(I)}
$$

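As a standard example: for min vertex cover on the complete graph $K_n$, any cover must take $n-1$ vertices (two excluded vertices would leave their edge uncovered), while $x_v = \frac{1}{2}$ for all $v$ is LP-feasible with cost $\frac{n}{2}$, so

$$
\frac{\text{OPT}(K_n)}{\text{OPT}_{LP}(K_n)} \ge \frac{n-1}{n/2} = \frac{2(n-1)}{n} \xrightarrow{n \to \infty} 2
$$

Hence the vertex cover LP has integrality gap $2$, matching the rounding bound above.
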
## set cover

Given a universe $U = \{e_1, \cdots, e_n\}$, a family of subsets $T = \{S_1, \cdots, S_m\}$, and a cost function $c : T \to \R^+$, find a collection $C \subseteq T$ such that $\bigcup C = U$ that minimizes the total cost.

### deterministic

Suppose each element belongs to at most $f$ sets. Then rounding the LP solution by picking $C = \{S_i : x_i^* \ge \frac{1}{f}\}$ gives an $f$-approximation.

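A sketch of this rounding, reusing the `linprog` setup from the vertex cover example; the instance is made up:

```python
import numpy as np
from scipy.optimize import linprog

# Example instance: universe {0,1,2,3}, sets given as index lists.
universe = range(4)
sets = [[0, 1], [1, 2], [2, 3], [0, 3]]   # each element lies in f = 2 sets
costs = np.array([1.0, 1.0, 1.0, 1.0])
f = max(sum(e in S for S in sets) for e in universe)

# LP relaxation: min c^T x  s.t.  sum_{S_i containing e} x_i >= 1 for each e.
A_ub = np.array([[-float(e in S) for S in sets] for e in universe])
res = linprog(c=costs, A_ub=A_ub, b_ub=-np.ones(len(A_ub)),
              bounds=[(0, 1)] * len(sets))

# Each covering constraint has at most f nonzero terms, so at least one of
# them is >= 1/f: thresholding at 1/f stays feasible and scales each x_i^*
# by at most f, hence an f-approximation.
C = [i for i, x in enumerate(res.x) if x >= 1.0 / f]
print(C)
```
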
### randomized

We repeat the following $d \cdot \ln(n)$ times: for each $i \in [m]$, add set $S_i$ to the solution $C$ with probability $x^*_i$.

The expected cost after $d \cdot \ln(n)$ repetitions is at most $d \cdot \ln(n) \cdot \text{OPT}$.

The output is a feasible solution with probability at least $1 - \frac{1}{n^{d - 1}}$.

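A minimal sketch, assuming `x_star` comes from the LP relaxation and `n` is the universe size (both names are mine):

```python
import numpy as np

def randomized_round(x_star, d, n, rng=np.random.default_rng(0)):
    """Repeat d*ln(n) independent passes; in each pass add set S_i
    to the solution with probability x_i^*."""
    C = set()
    for _ in range(int(np.ceil(d * np.log(n)))):
        for i, xi in enumerate(x_star):
            if rng.random() < xi:
                C.add(i)
    # A fixed element e is missed by one pass with probability
    # prod_{S_i containing e} (1 - x_i^*) <= e^{-1} (its constraint gives
    # sum x_i^* >= 1), so it is missed by all passes with probability
    # <= n^{-d}; a union bound over the n elements gives feasibility
    # with probability >= 1 - 1/n^{d-1}.
    return C
```
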
## prediction with expert advice

Given $N$ experts which individually advise either $0$ or $1$, we predict an answer. Then the adversary, knowing the advice and our prediction, reveals the true answer. The goal is to minimize the number of mistakes relative to the best expert. We consider $T$ trials.

### majority vote

If a perfect expert exists, we can always take the majority vote of those experts that have not yet made a mistake. Every mistake we make at least halves the set of remaining experts, so we make at most $\log N$ mistakes.

Without a perfect expert we can use the same strategy, restarting whenever we run out of experts. If the best expert has made $M$ mistakes by time $T$, we make at most $(M+1) \log N$ mistakes.

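A sketch of the restart strategy, under hypothetical input conventions (`advice_rounds[t][i]` is expert $i$'s 0/1 advice at trial $t$, `outcomes[t]` the adversary's answer):

```python
def majority_with_restarts(advice_rounds, outcomes):
    n = len(advice_rounds[0])
    alive = set(range(n))
    mistakes = 0
    for advice, truth in zip(advice_rounds, outcomes):
        votes = sum(advice[i] for i in alive)
        guess = 1 if 2 * votes >= len(alive) else 0
        if guess != truth:
            mistakes += 1
        # Drop experts that erred; each wrong majority vote at least
        # halves |alive|, so each epoch costs <= log2(N) mistakes.
        alive = {i for i in alive if advice[i] == truth}
        if not alive:          # no mistake-free expert left: restart
            alive = set(range(n))
    return mistakes
```
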
### weighted majority

We take the weighted majority vote of all experts. Weights are initialized to $1$, and each mistake by an expert is penalized by halving that expert's weight. The number of mistakes we make is

$$
M \le \frac{1}{\log{4 \over 3}}(M_i + \log N)
$$

where $M_i$ is the number of mistakes made by expert $i$.

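A sketch with the same hypothetical input conventions as above:

```python
import numpy as np

def weighted_majority(advice_rounds, outcomes):
    w = np.ones(len(advice_rounds[0]))
    mistakes = 0
    for advice, truth in zip(advice_rounds, outcomes):
        advice = np.asarray(advice)
        # Compare total weight voting 1 against total weight voting 0.
        guess = 1 if w @ advice >= w @ (1 - advice) else 0
        mistakes += guess != truth
        w[advice != truth] /= 2   # halve the weights of wrong experts
    return mistakes
```
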
### Hedge

The game is changed: we produce a distribution $p$ over the experts while the adversary produces a cost vector $m \in [-1, 1]^N$.

Let $\Phi^{(t)} = \sum_{i \in [N]} w_i^{(t)}$ be the sum of all weights at time $t$. Then $p_i^{(t)} = \frac{w_i^{(t)}}{\Phi^{(t)}}$. We update the weights according to $w_i^{(t+1)} = w_i^{(t)} \cdot e^{-\epsilon \cdot m_i^{(t)}}$.

For $\epsilon \le 1$, Hedge guarantees

$$
\sum_{t = 1}^T p^{(t)} \cdot m^{(t)} \le \sum_{t=1}^T m_i^{(t)} + \frac{\ln N}{\epsilon} + \epsilon T
$$

for any expert $i$.

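A direct transcription of these update rules as a sketch (the `eps` default is arbitrary):

```python
import numpy as np

def hedge(cost_rounds, eps=0.1):
    """cost_rounds[t] is the adversary's cost vector m^(t) in [-1,1]^N."""
    N = len(cost_rounds[0])
    w = np.ones(N)
    total = 0.0
    for m in cost_rounds:
        p = w / w.sum()                    # p_i^(t) = w_i^(t) / Phi^(t)
        total += p @ np.asarray(m)         # cost p^(t) . m^(t) this round
        w *= np.exp(-eps * np.asarray(m))  # multiplicative weights update
    return total
```
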
## covering LPs

A covering linear program is one with $A \in \R^{m \times n}_+$, $b \in \R^m_+$, $c \in \R^n_+$:

$$
\begin{align*}
  \text{minimize} \quad c^Tx& \\
  \text{subject to} \quad Ax &\ge b \\
  \quad 1 &\ge x \ge 0
\end{align*}
$$

### Hedge

The number of experts equals $m$, one per constraint:

1. Initialize weights to $1$
2. Pick the distribution to be $p_i^{(t)} = \frac{w_i^{(t)}}{\Phi^{(t)}}$
3. Let $x^{(t)}$ be the solution to the reduced LP
4. Let $m_i^{(t)} = A_i x^{(t)} - b_i$
5. Update weights per Hedge
6. Output the solution $\frac{1}{T}\sum_{t=1}^T x^{(t)}$

where the reduced LP is

$$
\begin{align*}
  \text{minimize} \quad c^Tx& \\
  \text{subject to} \quad \left(\sum_{i=1}^m p_i A_i\right) \cdot x &\ge \sum_{i=1}^m p_i b_i \\
  \quad 1 &\ge x \ge 0
\end{align*}
$$

The averaged output is almost feasible while having cost at most that of the optimal solution.

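A sketch of the whole loop, assuming the instance is pre-scaled so that $|A_i x - b_i| \le 1$ for all feasible $x$, keeping the costs in $[-1, 1]$ (the `T` and `eps` defaults are arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

def covering_via_hedge(A, b, c, T=500, eps=0.1):
    m, n = A.shape
    w = np.ones(m)                 # one expert per covering constraint
    xs = []
    for _ in range(T):
        p = w / w.sum()
        # Reduced LP: a single aggregated constraint (p^T A) x >= p^T b,
        # negated for linprog's A_ub @ x <= b_ub convention.
        res = linprog(c=c, A_ub=-(p @ A)[None, :], b_ub=[-(p @ b)],
                      bounds=[(0, 1)] * n)
        x = res.x
        cost = A @ x - b           # m_i^(t) = A_i x^(t) - b_i
        w *= np.exp(-eps * cost)   # satisfied constraints lose weight,
        xs.append(x)               # violated ones gain attention
    return np.mean(xs, axis=0)     # the average is near-feasible
```

Each round's reduced LP only has to respect the constraints on average under $p^{(t)}$, so its cost is at most $\text{OPT}$; Hedge then forces the violations to cancel out across rounds.
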