<!DOCTYPE html>
<html>
<head>
<title>GTL</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link rel="stylesheet" href="fonts/quadon/quadon.css" />
<link rel="stylesheet" href="fonts/gentona/gentona.css" />
<link rel="stylesheet" href="slides_style.css" />
<script
type="text/javascript"
src="assets/plotly/plotly-latest.min.js"
></script>
</head>
<body>
<textarea id="source">
### Taxonomy of Learning Paradigms
<img style="width: 100%" src="assets/learning-schematics.png" />
<img style="width: 100%" src="assets/learning-table.png" />
---
### New Measures for Learning Efficacy
Learning Efficiency
- $\mathcal{E}_f(\mathbf{S})$ is the error of the hypothesis output by $f$ when trained on
dataset $\mathbf{S}$.
$$ LE_f(\mathbf{S}^A, \mathbf{S}^B) = \frac{\mathcal{E}_f(\mathbf{S}^A)}{\mathcal{E}_f(\mathbf{S}^B)} $$
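As an illustrative numeric example (the numbers are assumed, not from the
source): if $\mathcal{E}_f(\mathbf{S}^A) = 0.2$ and $\mathcal{E}_f(\mathbf{S}^B) = 0.1$, then
$$ LE_f(\mathbf{S}^A, \mathbf{S}^B) = \frac{0.2}{0.1} = 2 > 1, $$
so $f$ learns more efficiently from $\mathbf{S}^B$ than from $\mathbf{S}^A$.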
Weak OOD Learning
- $\mathbf{S}_{m, n}$ is the amalgamated dataset, containing $m$ out-of-task data points and $n$
target-task data points.
- $ f(\mathbf{S}\_{m, n}) = \hat{h}\_{m, n} $ is the hypothesis obtained from the
amalgamated dataset.
- $ f(\mathbf{S}\_{n}) = \hat{h}\_{n} $ is the hypothesis obtained from the
target data alone.
$$ P\_{\mathbf{S}\_{m, n}}[R\_{X, Y}(\hat{h}\_{m, n}) < R\_{X, Y}(\hat{h}\_{n}) - \varepsilon ] \geq
1 - \delta $$
---
### New Measures for Learning Efficacy
We weakly OOD learn if the above holds for all $\delta > 0$, all $m \geq M$ and $n \geq N$,
and all distributions $P_{\mathbf{S}_n, X, Y}$, for some $\varepsilon > 0$ and some
algorithm $f$.
Strong OOD Learning
- $R^*$ is Bayes optimal risk.
$$ P\_{\mathbf{S}\_{m, n}}[R\_{X, Y}(\hat{h}\_{m, n}) < R^* + \varepsilon ] \geq
1 - \delta $$
We strongly OOD learn if the above holds for all $\varepsilon, \delta > 0$, all $m \geq M$ and
$n \geq N$, and all distributions $P_{\mathbf{S}_n, X, Y}$, for some algorithm $f$.
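To contrast the two definitions (a restatement of the above, not an additional
result): weak OOD learning asks for some fixed margin $\varepsilon$ over the
target-only hypothesis, while strong OOD learning asks for every margin over
the Bayes optimal risk,
$$ \exists \varepsilon > 0: R\_{X, Y}(\hat{h}\_{m, n}) < R\_{X, Y}(\hat{h}\_{n}) - \varepsilon
\quad \text{vs.} \quad
\forall \varepsilon > 0: R\_{X, Y}(\hat{h}\_{m, n}) < R^* + \varepsilon, $$
each with probability at least $1 - \delta$ over $\mathbf{S}\_{m, n}$.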
---
### Theoretical Results
- Weak OOD learning does not imply strong OOD learning: improving on the
target-only hypothesis by some fixed margin $\varepsilon$ does not force the
risk to approach the Bayes optimal risk $R^*$.
- Weak OOD learning implies transfer learning (learning efficiency $> 1$,
i.e., the hypothesis trained with out-of-task data achieves lower error than
the hypothesis trained on target data alone).
</textarea
>
<script src="remark-latest.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.5.1/katex.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.5.1/contrib/auto-render.min.js"></script>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.5.1/katex.min.css"
/>
<script type="text/javascript">
const options = {};
const renderMath = function () {
  // Render all math in the slides with KaTeX auto-render, recognizing
  // $...$, $$...$$, \(...\), and \[...\] delimiters.
  renderMathInElement(document.body, {
    delimiters: [
      { left: '$$', right: '$$', display: true },
      { left: '$', right: '$', display: false },
      { left: '\\[', right: '\\]', display: true },
      { left: '\\(', right: '\\)', display: false },
    ],
  });
};
remark.macros.scale = function (percentage) {
const url = this;
return '<img src="' + url + '" style="width: ' + percentage + '" />';
};
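// In the slide markdown this macro would be invoked as, e.g.,
//   ![:scale 50%](assets/learning-schematics.png)
// (standard remark macro syntax; the argument becomes the image width).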
const slideshow = remark.create(options, renderMath);
</script>
</body>
</html>