\section{Threat model and trade-offs}\label{sec:threat}
In this section, we discuss the assumptions we have made with regard to attackers' capabilities,
some aspects which require careful implementation, and some areas where different concerns need to
be traded off against each other.
\subsection{Malware and untrusted software}\label{sec:malware}
Most authentication schemes in a consumer context are based on the principle that once a user has
authenticated, they have unlimited access to their account: using different credentials for
different user actions is considered too inconvenient.\footnote{One exception is online banking
websites that require explicit authorization of each payment action using a hardware token.}
Consequently, once a user has authenticated with a service on a device, the software on that device
can in principle do anything it wants to the user account. That is true no matter which
authentication method is used: if there is malware which can read files, log keystrokes and steal
session cookies, or if an attacker can obtain remote root access to a device, any services in use on
that device can be compromised.
With Octokey, there is the additional risk of the device's private key (for inter-device
communication) and mRSA key fragment being stolen. If the device has a built-in hardware security
module, the actual keys are protected, but malware can nevertheless sign arbitrary mandates, because
it is almost indistinguishable from legitimate user software. Moreover, a hardware security module
may not support the operations required by the pairing algorithm (section~\ref{sec:pairing}). Thus,
we assume that Octokey is normally implemented in software. If malware is detected on a device, the
user must immediately revoke that device using another enrolled device.
In sandboxed environments (e.g. websites in web browsers, apps on some mobile OSes), mutually
untrusting applications are somewhat protected from each other. In this case, there needs to be an
explicit way of passing mandates between the Octokey application (which manages the keys) and the
websites and apps that require authentication. This mechanism must be implemented carefully,
ensuring that a website or application can only obtain a mandate for its own URL, and not some other
one. The implementation can draw from the experience of password managers~\cite{Li14, Silver14}.
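
To make this concrete, the following is a minimal sketch (in Python, with hypothetical names such as
\texttt{sign\_mandate}) of how the Octokey application might check the requesting origin before
releasing a mandate. It assumes that the sandbox itself reports the caller's origin (as browser
extension messaging and signed app packages typically can), so the origin cannot be forged by the
requesting code.
\begin{verbatim}
from urllib.parse import urlsplit

def release_mandate(login_url, requesting_origin, sign_mandate):
    """Release a mandate only if the caller's origin owns the login URL.

    requesting_origin is assumed to be reported by the sandbox itself and
    therefore not forgeable by the caller; sign_mandate is a hypothetical
    helper that produces the actual signature.
    """
    login = urlsplit(login_url)
    origin = urlsplit(requesting_origin)

    if login.scheme != "https" or origin.scheme != "https":
        raise PermissionError("mandates are only released over HTTPS")

    if (login.hostname, login.port or 443) != (origin.hostname, origin.port or 443):
        raise PermissionError("requesting origin does not match the login URL")

    return sign_mandate(login_url)
\end{verbatim}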
\subsection{Physical device theft and loss}\label{sec:theftloss}
We assume that the private key material on a device is protected by a human-to-machine
authentication step, e.g. encrypted with a password. Malware may be able to circumvent this step,
but the authentication protects against an attacker who physically steals a device (or a backup of a
device's filesystem). In section~\ref{sec:ratelimit} we discussed a technique for strengthening this
step against brute force and dictionary attacks.
If the user has enrolled another device, they can use that device to revoke the stolen device. If
the revocation happens before the human-to-machine authentication is broken, no harm is done. If the
device is not revoked in time, the attacker can sign arbitrary mandates, and steal the key by
enrolling another device and pairing it with the stolen device. The risk of key theft can be
mitigated by prompting the user on several devices before allowing a new device to be enrolled
(provided that the user has previously enrolled several devices).
If an attacker steals two devices that are paired with each other, the risk of key theft is much
greater, because the human-to-machine authentication is the only remaining protection: deleting key
fragments from the mediator is not sufficient to prevent the attacker from recovering the key. The
technique from section~\ref{sec:ratelimit} also no longer applies, because the encryption can be
brute-forced offline. This means there is a trade-off between security and reliability: pairing
physical devices with each other provides redundancy in case the mediator is unavailable, but it
increases the risk of the private key being stolen.
Another risk is that the user may lose access to their services: perhaps an attacker who gains
control of a device uses it to revoke the user's legitimate devices (thus preventing the user from
stopping the attacker); perhaps the user has only enrolled one or two physical devices, and they are
stolen simultaneously; perhaps the user's house burns down and all of their devices are lost. To
allow recovery from such situations, we recommend that users print off their private key on paper,
and store it with an organization they trust, such as a bank or attorney.
An open question is whether the recovery key on paper should be the entire key, or a key fragment
paired with the mediator. The advantage of making it a fragment is that the user can revoke it in
case the piece of paper is lost or stolen; the disadvantage is that an attacker who gains control of
a device can also revoke it, and thus lock out the legitimate user. On balance, it is probably
better to make the key irrevocable (see also sections~\ref{sec:rotation} and \ref{sec:recovery}).
\subsection{Recovering from key theft}\label{sec:recovery}
We have proposed various safeguards to prevent theft of private keys, but we should also assume that
some keys will eventually be compromised, for whatever reason. If this happens, the user should have
a reasonable process for regaining control of their accounts.
A service maintains a list of public keys for each user, and key rotation
(section~\ref{sec:rotation}) requires that a user who is authenticated with an existing key can add
a new public key to the same account. However, should a user also be allowed to \emph{remove} a public
key from their account? We say no, because this could be used by an attacker to lock the legitimate
user out of their account, which would only make the situation worse for the user.
Once a private key has been stolen, the attacker and the legitimate user both have access to the
user's accounts. In this situation, we require a mechanism by which the legitimate user can revoke
the attacker's access, but not vice versa. We imagine two possible solutions:
\begin{itemize}
\item We could introduce some kind of master key, whose private part is only ever stored offline (on
paper), and whose public part is given to each service when the user signs up. If a user
authenticates with this key, they have the authority to replace all existing public keys for that
user, and thus grant the user a fresh start with a new key. However, would this master key be
significantly more resilient to theft than the normal private key? Can users be expected to store a
paper copy of a key securely? What about key rotation? What if the master key is lost?
\item Alternatively, we could rely on a human resolution process: the user would have to contact
customer support for each of their services, and convince them that they are the legitimate user and
not the attacker (e.g. by showing identity documents). The customer support agent would have the
authority to replace the user's public keys. This process may be vulnerable to social engineering,
but it also leaves more room for human judgment. Even if a master key is introduced into the
protocol, some cases will require human resolution anyway (e.g. to help users who cannot find their
master key any more).
\end{itemize}
\subsection{Detecting key theft}
How do we know that a key has been compromised in the first place? If the attacker starts performing
actions that attract attention, the user will of course become suspicious, but if the attacker is
using the key to silently steal confidential information, the user might not notice.
In order to detect unauthorized use of a key, the user's devices can log a record in their database
every time they create a mandate, and synchronize it across devices to create a complete login
history. We can then define an API by which a service reports the login history for an
authenticated user, and have the user's devices query it periodically. If the history of logins
reported by the service includes mandates that do not appear in the devices' local history, the
application can sound the alarm.
To minimize the risk of ``crying wolf'', this feature would need to be very carefully implemented.
In particular, a device would need to ensure that the log record for a mandate is replicated to
other devices before the mandate is sent to the service, otherwise a crash of the application could
result in a mandate being used but not logged.
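
As an illustration of both requirements (replicating the log record before the mandate is used, and
reconciling the local log against the service's reported history), here is a rough Python sketch;
the helper names (\texttt{replicate\_to\_devices}, \texttt{fetch\_login\_history},
\texttt{raise\_alarm}) are hypothetical placeholders rather than a defined API.
\begin{verbatim}
def use_mandate(mandate, local_log, replicate_to_devices, send_to_service):
    """Record and replicate a mandate before presenting it to the service,
    so that a crash cannot leave a mandate used but unlogged."""
    record = {"mandate_id": mandate.id,
              "url": mandate.url,
              "timestamp": mandate.timestamp}
    local_log.append(record)
    replicate_to_devices(record)   # must complete before the mandate is used
    send_to_service(mandate)

def check_login_history(local_log, fetch_login_history, raise_alarm):
    """Compare the service-reported login history against the local log,
    and flag any mandates that the user's devices never recorded."""
    known_ids = {record["mandate_id"] for record in local_log}
    unexplained = [entry for entry in fetch_login_history()
                   if entry["mandate_id"] not in known_ids]
    if unexplained:
        raise_alarm(unexplained)   # e.g. notify the user on all devices
\end{verbatim}
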
Moreover, services should provide an API for devices to check the current list of public keys for a
user. The user's devices would also periodically check this, in order to detect an attacker
registering a new key. These techniques may not detect mandates being created by malware on the
user's device, but they would nevertheless be useful additional security measures.
\subsection{Network attacks}\label{sec:netattack}
A mandate is tied to a particular server URL, so it cannot be reused with another service (assuming
that services verify the URL in mandates correctly). This makes Octokey resilient to phishing
attacks.
However, a mandate is not tied to a particular client device, unless channel binding
(section~\ref{sec:channelbinding}) is used. This means that if an attacker obtains a mandate by
eavesdropping, they can use it to authenticate as the user on that particular service.
We rely on regular TLS to provide secrecy of the mandate. The Octokey software should, if possible,
prevent users from accidentally sending a mandate over an unencrypted connection. Server certificate
validation and HTTP Strict Transport Security (HSTS) should be used to avoid simple MITM attacks
with invalid certificates. We expect that Certificate Transparency will help prevent more
sophisticated MITM attacks with fraudulently issued certificates. If the client is a native app, the
service's public key fingerprint should be embedded in the app and checked when connecting to the
service (public key pinning).
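
For a native app, the pinning check might look roughly like the following Python sketch. For
simplicity it pins the SHA-256 hash of the whole server certificate and uses a placeholder
fingerprint; a real implementation would more likely pin the public key (SPKI) hash, so that the
service can reissue certificates without shipping an app update.
\begin{verbatim}
import hashlib
import socket
import ssl

# Fingerprint embedded in the app at build time; the value here is a placeholder.
PINNED_CERT_SHA256 = bytes.fromhex("00" * 32)

def connect_with_pinning(host, port=443):
    """Open a TLS connection and reject it unless the server's certificate
    matches the fingerprint shipped with the app."""
    context = ssl.create_default_context()   # keeps normal CA and hostname checks
    sock = context.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der_cert).digest() != PINNED_CERT_SHA256:
        sock.close()
        raise ssl.SSLError("server certificate does not match pinned fingerprint")
    return sock
\end{verbatim}
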
For inter-device communication between enrolled Octokey devices, TLS with mutual authentication and
public key pinning is used, which makes it resistant to MITM without requiring CA certificates. The
public key fingerprints are distributed through a visual channel (using a 2D barcode), which is much
harder to tamper with than a network connection (see also section~\ref{sec:barcode-intercept}).
\subsection{2D barcode eavesdropping}\label{sec:barcode-intercept}
In the flow for enrolling a new device (section~\ref{sec:newdevice}), and with delegated login
(section~\ref{sec:delegation}), we proposed displaying a 2D barcode on the screen of one device, and
scanning it with the camera of another. We cannot assume confidentiality of these barcodes: an
attacker may intercept them electromagnetically~\cite{Kuhn05}, or simply point a camera at the victim's
screen.
An attacker can thus connect to the URL in the barcode, and enroll the new device to the attacker's
account, instead of the user's own account as intended. With delegated login, the attacker can get
the user to log in with the attacker's key rather than the user's own key. This does not compromise
the user's key, but if the user logs in to the wrong account and does not realize it, they may
inadvertently disclose sensitive information to the attacker while using the account.
It is not clear whether this would be a problem in practice. If it is, a mutual authentication step
could be added to the flows for enrolling a new device and for delegated authentication (for
example, verifying a 2D barcode in the other direction). However, this makes the process more
complicated for users, especially when trying to delegate authentication to a device which has no
camera, so the additional verification step should probably be optional.
\subsection{2D barcode phishing}\label{sec:barcode-phishing}
When enrolling a new device or performing delegated authentication, we assume that an attacker
cannot trick a user into scanning a different barcode from the one they intended --- i.e. we assume
that the visual channel between the barcode-displaying and the barcode-scanning device provides
integrity (but not confidentiality). We assume that malware (section~\ref{sec:malware}) would be the
only way for an attacker to manipulate what is displayed on screen, or manipulate the camera signal.
However, a real risk is that a malicious website or app displays a barcode that originates from an
attacker-controlled device, and tricks the user into scanning that barcode and granting the attacker
unwanted access. In this scenario we must rely on well-written warning messages and user education
to ensure the user really understands what they are doing when they scan a barcode.
For users who have already set up multiple physical devices, as an additional safeguard against
accidentally pairing with an attacker-controlled device, we can require user approval on a quorum
(e.g. a majority) of existing devices before a new device is enrolled. This does not require
additional cryptographic algorithms, but can simply be implemented as a policy in the Octokey
software.
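
Since this is purely a policy decision, it can be expressed directly in the Octokey software. The
following Python sketch (with hypothetical function and parameter names) counts approvals from the
user's existing devices and only starts the pairing process once a majority have confirmed.
\begin{verbatim}
def quorum_approves(existing_devices, approvals):
    """Return True once a majority of the user's existing devices have
    explicitly approved the enrolment of the new device."""
    eligible = set(existing_devices)
    if not eligible:
        return False
    valid = set(approvals) & eligible   # ignore approvals from unknown devices
    return len(valid) > len(eligible) / 2

def enroll_new_device(new_device, existing_devices, approvals, start_pairing):
    """Begin the pairing process only once the quorum policy is satisfied."""
    if not quorum_approves(existing_devices, approvals):
        raise PermissionError("enrolment not yet approved by a quorum of devices")
    start_pairing(new_device)
\end{verbatim}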
\subsection{Online services}\label{sec:mediator-sec}
The mediator and the relay are not strictly required from a cryptographic point of view: in
principle, the user could use Octokey exclusively with physical devices and direct device-to-device
communication. However, they are very useful for providing a good user experience, allowing a user
to authenticate using only one physical device, and making inter-device communication work ``out of
the box''.
From a service owner's point of view, the mediator does not need to be trusted (unlike an OpenID
identity provider, for example), because a mandate is simply an RSA signature in any case. Thus, the
choice of mediator and relay services is entirely up to the user. We propose that a public mediator
and relay should be provided for free, operated by a nonprofit foundation. However, technically
sophisticated users or organizations may choose to operate their own mediators and relays, if they
wish.
Operating a mediator is not an easy task, since it is likely to become a popular attack target: some
attackers may want to steal the key fragments and other user data, whereas others may try to disrupt
the availability of the service by flooding it with traffic. The operators of the mediator will need
expertise in dealing with such adversaries.
In the worst case, if attackers succeed in obtaining a copy of all the key fragments in the
mediator, the operators of the mediator will have to inform all users, and the users will have to
re-pair all of their devices with a new mediator. The inter-device communication protocol should
have a mechanism by which a mediator operator can force all users to re-pair. Such a break-in would
not \emph{per se} expose private keys, but any stolen devices that are not re-paired with a new
mediator are much more likely to result in key theft.
The relay is less critical for security, since it cannot read the contents of messages it delivers,
and it cannot tamper with them. It can only drop messages instead of forwarding them, and thus
interrupt the inter-device communication. To guard against this possibility, physical devices should
also try to connect through local communication channels when possible, and thus exchange any
messages that may have been lost.
\subsection{Multiple keypairs per user}\label{sec:identities}
In the discussion above we have assumed that each user has only one keypair, i.e. that the same
public key is sent to every service where the user wants to authenticate. However, this raises a
privacy problem: services can easily detect whether two different accounts are owned by the same
person (the accounts become \emph{linkable}).
We believe that pseudonymous usage of services should be possible, since it is beneficial for free
speech, separation of work and personal life, avoiding harassment, etc. One approach would be for
Octokey to generate a separate key for every distinct login URL (or at least every domain name).
However, there are several reasons why this is not an ideal solution:
\begin{itemize}
\item Every new key must go through the re-pairing process (section~\ref{sec:pairing}), which
requires a large number of messages to be sent between devices. There would also be a delay between
signing up to a service on one device and being able to use it on another device.
\item If keys are printed on paper as a last-resort backup (section~\ref{sec:theftloss}), the user
would have to print off a key for every service they use. As users may have accounts with hundreds
of services, this would be impractical. Restoring access after key loss would require individually
scanning or re-entering each service's key, which is also impractical.
\item The same service may use several different URLs: for example, one for the desktop website,
another for the mobile site, and perhaps custom URI schemes for invoking native apps. Octokey
cannot know which URLs/URIs belong to the same service, so it would generate a key for each, and
cross-app login would not work as expected.
\item In some cases, users may want to prove that several accounts are in fact owned by the same
person --- for example, Keybase\footnote{\url{https://keybase.io/}} exists for this purpose. In this
situation, it would be natural for services to publish the public key fingerprint of a user, and for
the user to use the same key to log into all the services they wish to associate.
\end{itemize}
Rather than generating a different key for every service, we propose letting the user explicitly
create multiple \emph{identities}. Each identity's key can be printed off when it is first created,
before the key is split into fragments. Each identity may include some metadata (e.g. the user's
name or pseudonym) to identify it. When a user signs up to a service, they choose the identity to
use, and thereafter the username and the identity are associated in the user's database of accounts.
Identities also allow different device pairing arrangements for different keys. For example, an
organization's social media accounts may use a key which is shared between all the employees who
have permission to administer the accounts, while each employee may also have personal accounts
which are not shared.
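
To illustrate how identities might be represented on a device, here is a minimal Python sketch of
the data model; the field names and the account-database structure are illustrative assumptions
rather than part of any Octokey specification.
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Identity:
    label: str           # e.g. the user's name or a pseudonym
    public_key: bytes    # full public key; its fingerprint can be shown in the UI
    key_fragment: bytes  # this device's mRSA fragment for the identity

@dataclass
class Account:
    login_url: str       # service URL that mandates will be tied to
    username: str
    identity: Identity   # chosen by the user when signing up

@dataclass
class AccountDatabase:
    identities: list = field(default_factory=list)
    accounts: list = field(default_factory=list)

    def identity_for(self, login_url, username):
        """Look up which identity the user associated with this account."""
        for account in self.accounts:
            if account.login_url == login_url and account.username == username:
                return account.identity
        raise KeyError("no identity recorded for this account")
\end{verbatim}
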
\subsection{Multi-way key splitting}\label{sec:multiway}
In section~\ref{sec:revocation} we assumed that the private key is split into two mRSA fragments. In
principle, a key could be split three (or more) ways, so that authentication would require two
physical devices plus the mediator. Using two physical devices is a popular enhancement for
password authentication (called two-factor authentication). Would it make sense to support a similar
mode in Octokey?
2FA is mainly a protection against weak passwords and reuse of the same password on different sites.
It does not prevent phishing, and it provides only limited protection against malware (a keylogger
can steal both the password and the authenticator's token, if it is fast enough). Even use of two
devices is not strictly enforced (you can enter your password on the same smartphone on which the
authenticator app is running).
Octokey does not have the same problems of weak passwords and password reuse, so it does not have
2FA's primary use case. A three-way key split would provide marginal additional security, as an
attacker would have to steal two devices and break the human-to-machine authentication step on both
devices, which may give the user more time to revoke them. However, it is not clear whether
this security gain is significant enough to warrant the inconvenience of having to use two devices.
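
For concreteness, assuming the usual additive mRSA construction in which the fragments of the
private exponent $d$ sum to $d$ modulo $\varphi(n)$, a three-way split would work as sketched below:
each fragment holder computes a partial signature over the message $m$, and the partial signatures
multiply to the full RSA signature.
\[ d \equiv d_1 + d_2 + d_3 \pmod{\varphi(n)}, \qquad s_i = m^{d_i} \bmod n, \]
\[ s = s_1 \cdot s_2 \cdot s_3 \bmod n = m^{d_1 + d_2 + d_3} \bmod n = m^d \bmod n. \]
An attacker who obtains any two of the three fragments still cannot produce signatures without the
third; the cost, as noted above, is that every authentication then requires two physical devices in
addition to the mediator.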