## Sampling algorithms and Markov chains

László Lovász
Microsoft Research
One Microsoft Way, Redmond, WA 98052
[email protected]
### Applications

- statistics
- simulation
- counting
- numerical integration
- optimization
- …
### NP languages and certificates

Let $L$ be a language in NP, with presentation
$$L = \{\, x : (\exists y)\, A(x, y) \,\},$$
where $y$ is a *certificate* and $A$ is a polynomial-time algorithm.

Given: $x$. Find:

- a certificate
- an optimal certificate
- the number of certificates
- a random certificate (uniform, or from a given distribution)
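To make the four "Find" tasks concrete, here is a toy sketch in Python (the CNF instance and all names are mine, not from the talk): $x$ encodes a small CNF formula, a certificate $y$ is a satisfying assignment, and $A(x, y)$ is the polynomial-time check. For tiny instances we can enumerate all certificates and answer every variant, including sampling a uniformly random one.

```python
import itertools
import random

# Toy instance (my illustration): each literal +i / -i refers to variable i.
# Clauses: (x1 or x2), (not x1 or x3), (not x2 or not x3).
clauses = [(1, 2), (-1, 3), (-2, -3)]

def A(assignment):
    """Polynomial-time certificate check: does `assignment` satisfy every clause?"""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

# Brute force over all 2^n assignments (feasible only for tiny n):
certs = [y for y in itertools.product([False, True], repeat=3) if A(y)]

count = len(certs)           # "the number of certificates"
one = certs[0]               # "a certificate"
rand = random.choice(certs)  # "a random certificate" (uniform)
```

For large instances this enumeration is hopeless, which is exactly why the talk turns to Markov chain sampling.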
### One general method for sampling: Markov chains

(+ rejection sampling, lifting, …)

Want: a sample from a distribution $p$ on a set $V$.

Construct an ergodic Markov chain with

- states: $V$
- stationary distribution: $p$

Simulate (run) the chain for $T$ steps and output the final state.

How large must $T$ be? This is the *mixing time*.
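The recipe above can be sketched in a few lines of Python (a minimal Metropolis chain; the target $p$, the cycle proposal, and all names are my illustration, not the talk's):

```python
import random

# Sketch: to sample from p on V = {0,...,n-1}, build a Metropolis chain whose
# stationary distribution is p, run it T steps, and output the final state.

def metropolis_sample(p, T, rng):
    n = len(p)
    state = 0
    for _ in range(T):
        # Propose a uniform random neighbor on the n-cycle (symmetric proposal).
        proposal = (state + rng.choice((-1, 1))) % n
        # Accept with probability min(1, p[proposal] / p[state]); else stay put.
        if rng.random() < min(1.0, p[proposal] / p[state]):
            state = proposal
    return state
```

With a symmetric proposal, the Metropolis acceptance rule makes $p$ stationary; rejections act as self-loops, which also keeps the chain aperiodic.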
### Example: linear extensions of a poset

[Figure: a poset and a compatible linear order, elements labeled 1 to 5]

- Given: a poset
- State: a compatible linear order (a linear extension)
- Transition: pick a random label $i < n$; interchange $i$ and $i+1$ if possible
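Here is a runnable sketch of this chain (the poset, a 4-element "diamond", and all helper names are my choice, not the talk's):

```python
import random

# The "diamond" poset on {0,1,2,3}: 0 below 1 and 2, both below 3.
# `less` is the full order relation (including 0 < 3 by transitivity).
less = {(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)}

def is_extension(order):
    """Check that a total order (a tuple of the elements) respects the poset."""
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in less)

def step(order, rng):
    """One transition: pick a random adjacent position i; interchange the two
    elements there if possible, i.e. if they are incomparable in the poset."""
    i = rng.randrange(len(order) - 1)
    a, b = order[i], order[i + 1]
    if (a, b) not in less and (b, a) not in less:
        order = order[:i] + (b, a) + order[i + 2:]
    return order
```

Swapping an incomparable adjacent pair keeps the order a linear extension, so the chain walks on the set of linear extensions, and since the transitions are symmetric its stationary distribution is uniform. The diamond has exactly two extensions, `(0, 1, 2, 3)` and `(0, 2, 1, 3)`.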
### Mixing time

$\sigma^t$: the distribution after $t$ steps.

Roughly: $\min\{\, t : d(\sigma^t, p) \le 1/10 \,\}$.

(Bipartite graph?! On a bipartite graph the walk is periodic and $\sigma^t$ never converges, so one averages: $\bar\sigma^t = \frac{1}{t}\left(\sigma^0 + \sigma^1 + \cdots + \sigma^{t-1}\right)$.)

$$T = \max_{\sigma^0}\, \min\{\, t : d(\sigma^t, p) \le 1/10 \,\}$$

(enough to consider $\sigma^0 = \delta_s$, concentrated on a single state $s$)
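A brute-force sketch of this definition (pure Python; the cycle example and all helper names are mine): evolve $\sigma^t$ exactly and return the first $t$ with $d(\sigma^t, p) \le 1/10$ in total variation. The non-lazy walk on an even cycle shows the bipartite caveat in action: it is periodic and never gets close to uniform.

```python
# Start from sigma^0 = delta_0 on the n-cycle and evolve the distribution exactly.

def tv(a, b):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

def evolve(dist, lazy):
    """One step of the walk on the n-cycle; lazy = stay with probability 1/2."""
    n = len(dist)
    if lazy:
        return [0.5 * dist[v] + 0.25 * dist[(v - 1) % n] + 0.25 * dist[(v + 1) % n]
                for v in range(n)]
    return [0.5 * dist[(v - 1) % n] + 0.5 * dist[(v + 1) % n] for v in range(n)]

def mixing_time(n, lazy, t_max=1000):
    p = [1.0 / n] * n
    sigma = [1.0] + [0.0] * (n - 1)
    for t in range(t_max + 1):
        if tv(sigma, p) <= 0.1:
            return t
        sigma = evolve(sigma, lazy)
    return None  # never got within 1/10 of p

T_lazy = mixing_time(8, lazy=True)
T_periodic = mixing_time(8, lazy=False)  # bipartite/periodic: never mixes
```

The non-lazy walk on the 8-cycle occupies states of only one parity at each step, so $d(\sigma^t, p) \ge 1/2$ for all $t$; the averaged distribution $\bar\sigma^t$ would repair this.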
### Conductance

[Figure: the state set split into $S$ and $V \setminus S$]

Frequency of stepping from $S$ to $V \setminus S$:

- in the Markov chain: $Q(S, V \setminus S)$
- in a sequence of independent samples: $p(S)\, p(V \setminus S)$

Conductance:
$$\Phi(S) = \frac{Q(S, V \setminus S)}{p(S)\, p(V \setminus S)}, \qquad \Phi = \min_S \Phi(S)$$

$$\Phi(x) = \min\{\, \Phi(S) : p(S) \le x \,\}$$
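Conductance is easy to compute by brute force on a small chain. A bottleneck example (my construction, not from the talk): the simple random walk on two 4-cliques joined by one bridge edge, where the worst cut isolates one clique.

```python
from itertools import combinations

# Two 4-cliques {0,1,2,3} and {4,5,6,7} joined by the bridge edge 3-4.
# Random walk: P(i, j) = 1/deg(i); stationary p(v) = deg(v) / (2m).
edges = ([(i, j) for i in range(4) for j in range(i + 1, 4)]
         + [(i, j) for i in range(4, 8) for j in range(i + 1, 8)]
         + [(3, 4)])
adj = {v: set() for v in range(8)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
m = len(edges)
p = [len(adj[v]) / (2 * m) for v in range(8)]

def Q(S):
    """Ergodic flow out of S: sum of p(i) * P(i, j) over i in S, j outside S."""
    return sum(p[i] * (1 / len(adj[i]))
               for i in S for j in adj[i] if j not in S)

best_S, best_phi = None, float("inf")
for size in range(1, 8):
    for S in combinations(range(8), size):
        pS = sum(p[i] for i in S)
        phi = Q(set(S)) / (pS * (1 - pS))
        if phi < best_phi:
            best_S, best_phi = set(S), phi
```

The bridge is the bottleneck: the cut $S = \{0,1,2,3\}$ crosses only one edge while holding half the stationary measure, so the minimum $\Phi = 2/13$ is attained there.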
### Conductance and mixing (Jerrum–Sinclair)

$$\frac{1}{\Phi} \;\lesssim\; T \;\lesssim\; \frac{\log(1/p_0)}{\Phi^2}$$

(Jerrum–Sinclair; here $p_0$ is the smallest stationary probability)

In a typical sampling application, $\log(1/p_0)$ is polynomial, so

polynomiality of $T$ $\;\Longleftrightarrow\;$ polynomiality of $1/\Phi$.

But in a finer analysis?
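Both bounds can be checked numerically on a toy chain (the lazy walk on the 8-cycle; brute force, all helper names mine): compute $T$ by evolving the distribution, $\Phi$ by enumerating all cuts, and compare with $1/\Phi$ and $\log(1/p_0)/\Phi^2$.

```python
import math
from itertools import combinations

# Lazy random walk on the 8-cycle; p is uniform, so p_0 = 1/8.
n = 8
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = 0.5
    P[i][(i - 1) % n] += 0.25
    P[i][(i + 1) % n] += 0.25
p = [1.0 / n] * n

def tv(a, b):
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

def mixing_time():
    sigma = [1.0] + [0.0] * (n - 1)  # point start; worst case by symmetry
    t = 0
    while tv(sigma, p) > 0.1:
        sigma = [sum(sigma[i] * P[i][j] for i in range(n)) for j in range(n)]
        t += 1
    return t

def conductance():
    best = float("inf")
    for size in range(1, n):
        for S in combinations(range(n), size):
            inside = set(S)
            pS = size / n
            Q = sum(p[i] * P[i][j] for i in S for j in range(n) if j not in inside)
            best = min(best, Q / (pS * (1 - pS)))
    return best

T = mixing_time()
Phi = conductance()  # 1/4 for this chain: the worst cut is a half-arc
js_lower = 1 / Phi
js_upper = math.log(1 / min(p)) / Phi ** 2
```

For this chain the raw inequalities hold even without the hidden constants: $T$ lands between $1/\Phi = 4$ and $\log(8)/\Phi^2 \approx 33$.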
### Key lemma

Order the states so that
$$y_1 \ge y_2 \ge \cdots \ge y_k \ge \cdots \ge y_l \ge \cdots \ge y_n.$$
Then for $k < l$,
$$y_k - y_l \;\le\; \frac{1}{Q(\{l,\dots,n\},\, \{1,\dots,k\})}.$$

Proof for $l = k+1$:
$$\mathbb{E}\bigl(\#\text{ steps from } \{l,\dots,n\}\bigr) = \sum_{j \ge l} y_j\, Q(j, <\!l) \;\le\; y_l\, Q(\ge\! l, <\! l),$$
$$\mathbb{E}\bigl(\#\text{ steps to } \{l,\dots,n\}\bigr) = \sum_{j < l} y_j\, Q(j, \ge\! l) \;\ge\; y_{l-1}\, Q(<\! l, \ge\! l).$$
These two expectations differ by at most $1$, and $Q(\ge\! l, <\! l) = Q(<\! l, \ge\! l)$, so
$$(y_{l-1} - y_l)\, Q(\ge\! l, <\! l) \;\le\; 1.$$
### Simple isoperimetric inequality

Here $K$ is a convex body of diameter $D$, and $F$ is a surface separating $S \subseteq K$ from the rest, with $S$ the smaller part:
$$\mathrm{vol}_{n-1}(F) \;\ge\; \frac{2}{D}\, \mathrm{vol}(S)$$

(L–Simonovits; Dyer–Frieze)
### Improved isoperimetric inequality

$$\mathrm{vol}_{n-1}(F) \;\ge\; \frac{1}{D}\, \mathrm{vol}(S)\, \ln\frac{\mathrm{vol}(K)}{\mathrm{vol}(S)}$$

(Kannan–L)
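A quick numeric sanity check of the improved inequality (the example is my choice, not from the talk): cut the unit cube $K = [0,1]^n$ with the hyperplane $x_1 = s$. Then $\mathrm{vol}_{n-1}(F) = 1$, $\mathrm{vol}(S) = s$, $\mathrm{vol}(K) = 1$, and $D = \sqrt{n}$, so the right-hand side is $\frac{1}{\sqrt{n}}\, s \ln\frac{1}{s}$, which never exceeds $\frac{1}{e\sqrt{n}}$.

```python
import math

# K = [0,1]^n cut by the hyperplane x_1 = s (n = 3 here):
# vol_{n-1}(F) = 1, vol(S) = s, vol(K) = 1, diameter D = sqrt(n).
def rhs(n, s):
    """Right-hand side of the improved isoperimetric inequality for this cut."""
    return (1.0 / math.sqrt(n)) * s * math.log(1.0 / s)

# Scan s over (0, 1); the left-hand side is identically 1 for these cuts.
worst = max(rhs(3, s / 1000.0) for s in range(1, 1000))
```

The maximum of $s \ln(1/s)$ is $1/e$ at $s = 1/e$, so the scan peaks near $1/(e\sqrt{3}) \approx 0.21$, comfortably below the left-hand side of $1$.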
These inequalities yield a bound on the conductance function:

$$\Phi(x) \;\ge\; \min\left\{ \frac{1}{Dn}\, \ln\frac{1}{x(1-x)},\; 1 \right\}$$

and hence on the mixing time:

$$T \;\lesssim\; D^2 n^2.$$

After appropriate preprocessing, $T \lesssim n^3$.
### Lifting Markov chains

$$T \approx n^2 \quad\longrightarrow\quad T \approx n$$

(Diaconis–Holmes–Neal: adding a direction variable to the random walk on a cycle reduces the mixing time from order $n^2$ to order $n$.)
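A sketch of the phenomenon (pure Python; the cycle size, the flip probability $1/n$, and all helper names are mine): compare the lazy random walk on an $n$-cycle with a lifted walk that carries a direction bit, moves one step in its current direction, and reverses direction with probability $1/n$. The lifted chain is non-reversible, its stationary distribution is still uniform, and its position coordinate mixes in order $n$ steps rather than order $n^2$.

```python
def tv_from_uniform(dist):
    n = len(dist)
    return 0.5 * sum(abs(x - 1.0 / n) for x in dist)

def mix_simple(n, eps=0.1, t_max=10_000):
    """Mixing time of the lazy walk on the n-cycle (stay 1/2, step +/-1 w.p. 1/4)."""
    dist = [1.0] + [0.0] * (n - 1)
    for t in range(t_max):
        if tv_from_uniform(dist) <= eps:
            return t
        dist = [0.5 * dist[v] + 0.25 * dist[(v - 1) % n] + 0.25 * dist[(v + 1) % n]
                for v in range(n)]
    raise RuntimeError("no mixing within t_max")

def mix_lifted(n, eps=0.1, t_max=10_000):
    """Lifted walk: states (v, direction). Move one step in the current direction,
    then reverse the direction with probability 1/n. Odd n keeps it aperiodic."""
    a = 1.0 / n
    fwd = [0.0] * n  # probability of state (v, +1)
    bwd = [0.0] * n  # probability of state (v, -1)
    fwd[0] = bwd[0] = 0.5
    for t in range(t_max):
        if tv_from_uniform([f + b for f, b in zip(fwd, bwd)]) <= eps:
            return t
        fwd, bwd = (
            [(1 - a) * fwd[(v - 1) % n] + a * bwd[(v + 1) % n] for v in range(n)],
            [(1 - a) * bwd[(v + 1) % n] + a * fwd[(v - 1) % n] for v in range(n)],
        )
    raise RuntimeError("no mixing within t_max")

n = 101
T_simple = mix_simple(n)
T_lifted = mix_lifted(n)
```

The lazy walk diffuses (variance grows linearly, so it needs order $n^2$ steps), while the lifted walk moves ballistically between direction flips and covers the cycle in order $n$ steps.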