20. 5. 2022
This post is an expanded translation of my lecture notes from a Randomized and Approximation Algorithms course that I took, and a more detailed explanation of the topics covered in my video about BEST-SAT.
Basic definitions
An example problem could be minimum spanning trees:
- input instances: the set of all weighted graphs
- permissible solutions: the spanning trees of the given weighted graph
- utility function: the spanning tree weight (the sum of its edge weights)
- we're minimizing
Definition (Optimization problem): a tuple $(\mathcal{I}, F, f, g)$:
- the set of all input instances $\mathcal{I}$
- the sets of permissible solutions $\forall I \in \mathcal{I}: F(I)$
- the utility function $\forall I \in \mathcal{I}, A \in F(I): f(I, A)$
- whether we're maximizing or minimizing (a single bit $g$)
Definition (NP-Optimization problem): an optimization problem $(\mathcal{I}, F, f, g)$ for which we additionally require that:
- the length of all permissible solutions is polynomial (in $|I|$)
- the language $\left\{(I, A) \mid I \in \mathcal{I}, A \in F(I)\right\}$ is decidable in polynomial time, i.e. we can check the correctness of a solution in polynomial time
- $f$ is computable in polynomial time
For a minimization problem, we ensure that the solution is always small enough; for a maximization problem, that it is always large enough.

Definition: an algorithm $A$ is an $R$-approximation if:
- it computes a solution in polynomial time (in terms of $|I|$)
- for a minimization problem: $\forall I: f(I, A(I)) \le R \cdot \mathrm{OPT}(I)$
- for a maximization problem: $\forall I: f(I, A(I)) \ge \mathrm{OPT}(I) / R$
MAX-SAT
Input: a formula $C_1 \land \ldots \land C_n$, where each clause $C_j$ is a disjunction of $k_j \ge 1$ literals (a literal being a variable or its negation)
Output: an assignment $a \in \{0, 1\}^n$ of the variables
Goal: maximize the number of satisfied clauses (in the weighted variant, the total weight $\sum w_j$ of the satisfied clauses)
We also assume that:
- no literal repeats in a clause
- at most one of $x_i, \neg x_i$ appears in a clause
RAND-SAT
Algorithm (RAND-SAT):
- set each variable to 1 independently at random (with probability $p = 1/2$)
- profit?
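The algorithm above really is this short. A minimal sketch in Python, assuming a DIMACS-style clause encoding (a clause is a list of non-zero integers, where $i > 0$ means $x_i$ and $i < 0$ means $\neg x_i$; this convention and the helper names are mine, not from the original notes):

```python
import random

def rand_sat(n_vars, rng=random):
    """RAND-SAT: set every variable to 1 independently with probability 1/2."""
    return [rng.random() < 0.5 for _ in range(n_vars)]

def satisfied(assignment, clauses):
    """Count clauses with at least one satisfied literal."""
    def lit_true(lit):
        value = assignment[abs(lit) - 1]
        return value if lit > 0 else not value

    return sum(any(lit_true(l) for l in clause) for clause in clauses)
```

For example, on the formula $(x_1) \land (\neg x_1) \land (x_2) \land (\neg x_2)$, any assignment satisfies exactly 2 of the 4 clauses, which is exactly the $n/2$ the analysis below promises.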
Theorem: RAND-SAT is a 2-approximation algorithm.
Proof: we'll create an indicator variable $Y_j$ for each clause. The chance that $C_j$ is not satisfied is $\frac{1}{2^{k_j}}$ (all of its $k_j$ literals must be unsatisfied). Since the clause size $k_j \ge 1$, we get $\mathbb{E}[Y_j] = \Pr[C_j \text{ is satisfied}] = 1 - \frac{1}{2^{k_j}} \ge \frac{1}{2}$, thus

$$\mathbb{E}\left[\sum_{j=1}^{n} Y_j\right] \overset{\text{linearity of expectation}}{=} \sum_{j=1}^{n} \mathbb{E}[Y_j] \ge \sum_{j=1}^{n} \frac{1}{2} = \frac{n}{2} \ge \frac{1}{2} \mathrm{OPT}$$
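The expectation is easy to evaluate numerically. A quick sanity check of the bound, for a hypothetical formula given only by its clause sizes $k_j$ (the sizes here are made up for illustration):

```python
# Clause sizes k_j for a hypothetical formula (not from the text).
clause_sizes = [1, 2, 3, 3]

# E[Y_j] = 1 - 1/2^{k_j}; linearity of expectation sums these terms.
expected = sum(1 - 2 ** -k for k in clause_sizes)

n = len(clause_sizes)
# Each term is >= 1/2, so the expectation is >= n/2 >= OPT/2.
assert expected >= n / 2
print(expected)  # 0.5 + 0.75 + 0.875 + 0.875 = 3.0
```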
LP-SAT
We're using the optimal solution of the linear program (which generally doesn't solve the formula, since it allows real values for the variables) as a guide for our randomized algorithm.
Algorithm (LP-SAT):
- build an integer linear program:
  - the variables will be:
    - $y_i$ for each variable $x_i$
    - $z_j$ for each clause $C_j$
  - the inequalities will be one for each clause, in the form $z_j \le \sum_{\text{positive}} y_i + \sum_{\text{negative}} (1 - y_i)$
  - we'll maximize the number of satisfied clauses $\sum z_j$
- relax the program (allow real values in $[0, 1]$ instead of integers) and compute the optimum $y^*, z^*$
- set each variable $x_i$ to 1 with probability $y_i^*$
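The per-clause constraint and the rounding step can be sketched as follows. This is not a full LP solver: in practice any off-the-shelf solver would produce $y^*, z^*$; here we only show how the clause inequality is evaluated and how the fractional solution is rounded. The clause encoding (DIMACS-style signed integers) and function names are my own illustration:

```python
import random

def clause_bound(clause, y):
    """Right-hand side of the LP inequality for one clause:
    z_j <= sum of y_i over positive literals + sum of (1 - y_i) over negative ones.
    A literal i > 0 means x_i; i < 0 means its negation."""
    return sum(y[l - 1] if l > 0 else 1 - y[-l - 1] for l in clause)

def lp_round(y_star, rng=random):
    """Randomized rounding: set x_i to 1 with probability y_i^*."""
    return [rng.random() < y for y in y_star]
```

For the clause $(x_1 \lor \neg x_2)$ with fractional values $y = (0.5, 0.25)$, the bound is $0.5 + 0.75 = 1.25$, so the LP would set $z_j = 1$ (since $z_j \le 1$).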
Theorem: LP-SAT is a $\left(1 - \frac{1}{e}\right)$-approximation algorithm.
To prove this, we'll use a few lemmas/theorems that aren't difficult to prove, but aren't really interesting. I left links to (Wikipedia, and I don't feel bad about it) articles with proofs for each, if you're interested.
Fact (A – AM/GM mean inequality): $\left(\prod_{i=1}^{n} a_i\right)^{\frac{1}{n}} \le \frac{1}{n} \sum_{i=1}^{n} a_i$

Fact (B – concave function bound): if $f$ is concave on $[0, 1]$ and $f(0) = 0$, then $f(x) \ge f(1) \cdot x$ for all $x \in [0, 1]$

Fact (C – exponential bound): $\left(1 - \frac{1}{k}\right)^{k} \le \frac{1}{e}$ for all $k \ge 1$
The probability that $C_j$ is not satisfied is $\prod_{\text{positive}} (1 - y_i^*) \cdot \prod_{\text{negative}} y_i^*$, which by fact A and the clause's LP inequality is at most $\left(1 - \frac{z_j^*}{k_j}\right)^{k_j}$. We're interested in the satisfied ones, so

$$\Pr[C_j \text{ is satisfied}] \ge \underbrace{1 - \left(1 - \frac{z_j^*}{k_j}\right)^{k_j}}_{\text{our function } f(z_j^*)} \overset{B}{\ge} \left[1 - \left(1 - \frac{1}{k_j}\right)^{k_j}\right] z_j^* \overset{C}{\ge} \left(1 - \frac{1}{e}\right) z_j^*$$
To use fact B, we observed that $f(0) = 0$ and that the second derivative is non-positive (so the function is concave). Now to formally count how many clauses our algorithm satisfies in expectation:
$$\mathbb{E}\left[\sum_{j=1}^{n} Y_j\right] = \sum_{j=1}^{n} \mathbb{E}[Y_j] = \sum_{j=1}^{n} \Pr[C_j \text{ is satisfied}] \ge \sum_{j=1}^{n} \left(1 - \frac{1}{e}\right) z_j^* = \left(1 - \frac{1}{e}\right) \mathrm{OPT}_{\mathrm{LP}} \ge \left(1 - \frac{1}{e}\right) \mathrm{OPT}$$
BEST-SAT
Algorithm (BEST-SAT):
- assign the value of each variable using RAND-SAT with probability $1/2$, else use LP-SAT
- have an existential crisis about the fact that this works and is asymptotically optimal
Theorem: BEST-SAT is a $\frac{3}{4}$-approximation algorithm.
Proof: we want to prove that $\Pr[C_j \text{ is satisfied}] \ge \frac{3}{4} z_j^*$.
Let's look at the probability that each algorithm satisfies a clause of $k$ literals:
- RAND-SAT: $1 - \frac{1}{2^k}$ (at least one literal must be satisfied)
- LP-SAT: $\left[1 - \left(1 - \frac{1}{k}\right)^k\right] z_j^*$ (the formula right before using fact C)
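The key observation is that the two guarantees are complementary: RAND-SAT gets worse as $k$ shrinks while LP-SAT gets better, and their average never drops below $3/4$. A numeric sanity check of that claim (with $z_j^* = 1$, the worst case for the coefficient; this checks the bound for many values of $k$, it is not a proof):

```python
def combined_guarantee(k):
    """Average of the two per-clause guarantees for a clause of size k."""
    rand_term = 1 - 2 ** -k            # RAND-SAT: Pr >= 1 - 1/2^k
    lp_term = 1 - (1 - 1 / k) ** k     # LP-SAT: coefficient of z_j^*
    return (rand_term + lp_term) / 2

# The minimum is attained at k = 1 and k = 2, where both averages equal 3/4.
assert all(combined_guarantee(k) >= 0.75 - 1e-12 for k in range(1, 100))
print(combined_guarantee(1), combined_guarantee(2))  # 0.75 0.75
```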