<!DOCTYPE html>
<meta charset="UTF-8">
<title>Modeling Machine And Human States To Enable And Evaluate Secure
Usability</title>
<style>
body {
margin: 1in;
line-height: 1.4;
font-family: "Roboto", "San Francisco", "Helvetica Neue", "Helvetica", "Arial", sans-serif;
width: 45em;
}
img.screenshot {
border: 1px solid #666;
}
</style>
<h1>Modeling Machine And Human States To Enable And Evaluate Secure
Usability</h1>
<p>Chris Palmer, <a
href="mailto:[email protected]">[email protected]</a></p>
<h2>Abstract</h2>
<p>TODO</p>
<h2>Introduction</h2>
<blockquote>
The simple definition of ceremony above hides its complexity. The problem comes
with modeling a human node. Like a computer protocol node, the human node has
state and a state machine. It receives and emits messages, sometimes via a
computer’s user interface (UI) and sometimes by human-to-human communication,
whether intermediated by computer or not. However, the human input and output of
messages is often subject to error, probably because of a translation or
filtering mechanism that we have not fully characterized. Similarly, the human
state machine is difficult to model. We don’t program humans the way we do
computers, and when we try to, the attempt usually fails. We must instead learn
the human state machine empirically, by observing actual human behavior.<br/>
— Carl Ellison, “Ceremony Design And Analysis”
</blockquote>
<p>We can describe or model much of a software program’s behavior as transitions
from 1 state to another. The same is true of the behavior of computing hardware,
as well as protocols and ceremonies. In this paper, the generic term
<em>machine</em> refers to any software, hardware, protocol, or ceremony.</p>
<p>Good user experience (UX) engineering should help people to develop
accurate-enough mental models of the machine’s state. (See for example <a
href="http://www.nngroup.com/articles/mental-models/">“Mental Models” by Jakob
Nielsen</a>. TODO: Find better/more citations.) In order to do that, we need to
develop a model of the state transitions the human makes as well as those the
machine makes. We must also estimate the probability that the human’s model
accurately tracks that of the machine, where necessary. (It is rarely necessary
or possible for the human to track the machine’s state exactly.)</p>
<p>When the human cannot track the machine’s state accurately enough, poor
usability results. The human has a hard time understanding what the machine is
doing, and loses confidence that they can control the machine and complete their
task.</p>
<p>Some state tracking mismatches are also security vulnerabilities. For
example, if the human believes that a communications application is using
cryptography to secure the person’s messages to and from their friend, but in
fact the application is <em>not</em> doing that, the person may engage in
unnecessarily risky behavior, may lose control of private information, may lose
control of payment instruments, and so on. For another example, if the human
believes that the computer is showing them the login screen for their bank, but
in fact the login screen is controlled by an impostor, the person is likely to
lose control of their bank account.</p>
<p>It can also be the case that the person underestimates the security level of
the machine’s state, and hence is over-cautious. In this case, the person may
not receive the full utility or value of the application.</p>
<p>Therefore, for good usability as well as good security, UX engineers must be
able to predict what states of their mental model the human will traverse as they use
the system. Engineers must create affordances, select iconography and labels,
and use gestures that help people develop accurate-enough mental models, and
which are more likely than not to guide the person through the states of their
mental model that accurately map to machine states.</p>
<p>Approaching UX engineering from this perspective allows us to dissolve the
false dichotomy between usability (sometimes grossly mischaracterized as mere
‘convenience’) and security. In fact, in this article I hope to show that the
interfaces that are the most learnable, discoverable, accessible, and effective
are also the ones that have the fewest human-machine state mismatches — and
hence the fewest mismatches that lead to vulnerability.</p>
<h2>Prior Work</h2>
<p><a
href="http://projecteuclid.org/download/pdf_1/euclid.aoms/1177699147">Hidden
Markov models</a></p>
<h3>Ceremony Design And Analysis</h3>
<p><a href="https://eprint.iacr.org/2007/399.pdf"><em>Ceremony Design And
Analysis</em></a> by Carl Ellison</p>
<blockquote>
<p>…the design process for a ceremony is the same as the process for a network
protocol. Each node in the ceremony has:</p>
<ol>
<li>state, held in the node’s memory in one or more locations
<li>secrets, protected by tamper resistance and subject to access control
<li>a state machine
<li>input messages that are parsed and sometimes pre-processed, including
<ol>
<li>messages from other nodes
<li>events (like a timer or, within a human node, a “desire”)
</ol>
<li>for each (input message, state) pair:
<ol>
<li>output messages
<li>changes in state
</ol>
<li>service response times and communication bandwidth
<li>probability of processing errors
<li>probability of node death or loss of memory
</ol>
<p>State machines need to be specified for every node in the ceremony. One of
the nodes will, from its home state or from some external event (like a timer or
a desire), initiate a message cascade. As with protocols, we can look for
unrealizable conditions (a reply that is generated before the message that
initiates it), deadlock (nodes waiting for messages from each other), race
conditions, etc. We can do normal performance analyses. We can analyze a
ceremony for fault/disaster tolerance. We can do normal security analyses of the
ceremony and of secrets stored in each node’s memory. We can run simulations and
evaluate them against a set of expectations. We can do formal modeling and
attempt to prove properties of the modeled ceremony. All of the techniques that
we use for network protocol design should be available for ceremony design.</p>
</blockquote>
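<p>To make the enumeration concrete, here is a minimal sketch of the record one
might keep for a ceremony node, written in JavaScript; the field names and
example values are mine, not Ellison’s:</p>
<pre><code>// A hypothetical record for one ceremony node (here, a human node).
// Field names and values are illustrative only.
const humanNode = {
  state: "home",                    // state, held in the node’s memory
  secrets: ["banking password"],    // protected and subject to access control

  // The state machine: for each (input message, state) pair, the output
  // messages and the change in state.
  transitions: {
    "home": {
      "see login prompt": { outputs: ["type password"], nextState: "logged in" },
    },
    "logged in": {
      "desire to leave": { outputs: ["click log out"], nextState: "home" },
    },
  },

  meanResponseSeconds: 2.0,    // service response time
  errorProbability: 0.1,       // probability of processing errors
  deathProbability: 0.001,     // probability of node death or loss of memory
};
</code></pre>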
<p>TODO. Adrienne sent more good ones.</p>
<h2>Hello, World</h2>
<p>Consider the state transition diagram for the simplest possible UX control: A
button that sets a Boolean value in the program’s internal state. Assume the
variable determines whether the program requires (<var>true</var>) or does not
require (<var>false</var>) that it use only encrypted network communications.</p>
<div>
<img src="01-simple-boolean.png"/><br/>
<em>Figure 1: A simple Boolean option.</em>
</div>
<p>To change states, the person performs a <em>gesture</em> (in this case,
clicks the button). With each click, the machine enters the 1 other possible
state; there are no other possible states. The machine changes state with
perfect certainty; there are no bugs in the hypothetical <var>onclick</var>
event handler. We may presume that the person perceives the state change, both
because they had to perform a gesture to make it happen, and because the visual
representation of the button changes from <em>raised</em> to
<em>pressed</em>.</p>
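<p>As a concrete sketch, the control might be wired up as follows. This is a
hypothetical illustration; the variable name <code>requireEncryption</code> and
the element ID are assumptions, not taken from any real program:</p>
<pre><code>// Hypothetical sketch of the Boolean toggle. The machine’s state is a
// single variable, and the click gesture is the only way to change it.
let requireEncryption = false;

const button = document.getElementById("require-encryption");

button.onclick = () => {
  // Enter the 1 other possible state.
  requireEncryption = !requireEncryption;

  // Reflect the new state back to the person: raised vs. pressed.
  button.setAttribute("aria-pressed", String(requireEncryption));
  button.classList.toggle("pressed", requireEncryption);
};
</code></pre>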
<p>UX designers and engineers know from painful experience that people do not,
in fact, always have perfect perception of the changes that happen in the
machine’s state, even with such ‘obvious’ causes as the person’s own click
gesture, and with such ‘obvious’ effects as the change in the visual appearance
of the button. For examples of why this might happen, consider:</p>
<ul>
<li>this might be the person’s first time using a computer, and they might not
understand the significance of the click gesture and the visual representation
of the button</li>
<li>the graphic design of the UX elements may have changed since the last
software update, and the person has not yet learned the new appearance for
<em>raised</em> and <em>pressed</em> buttons</li>
<li>the person may not be sure if the machine recognized the gesture</li>
<li>the machine may not update the visual representation in time (“jank”) or
otherwise handle the event in real time</li>
</ul>
<p>There is a certain probability that the person is confused in 1 (or more) of
these ways. To start, we may assume that it is highly probable that the person’s
mental model can track the machine’s state model; assume that the probability of
correct tracking is 0.9. Further assume that the probability of incorrect
tracking is 0.1. In Figure 2, square vertices indicate incorrect human mental
states, and the arcs are marked with their probabilities. Additionally, the arcs
are shown in light gray for low probability, and in dark gray for high
probability.</p>
<div>
<img src="02-simple-boolean-with-confusion.png"/><br/>
<em>Figure 2: A simple Boolean option, allowing for human confusion.</em>
</div>
<p>Sometimes people can be confused about what states are even possible in the
machine’s state model. In this example, there are 2 different types of
confusion: whether or not the button is pressed and encryption is turned on; and
what the security guarantee of the encryption really is. (People often
mistakenly believe that encryption alone also provides authentication of the
peer or tamper-evidence for the data they transmit over the connection.)</p>
<div>
<img src="03-simple-boolean-with-orthogonal-states.png"/><br/>
<em>Figure 3: A simple Boolean option, allowing for human confusion as to what
states are even possible for the machine.</em>
</div>
<p>TODO: the situation when the human’s model of the machine state model is
inaccurate or incomplete, both in ways that do not matter and in ways that
do</p>
<h2>Basic Constraints</h2>
<p>All arcs out of a vertex have a corresponding probability that the person’s
mental state will change to the indicated state, and the probabilities of all
arcs leading out from a state must add up to exactly <var>p</var> = 1.0.</p>
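<p>As a sketch, the graph and the constraint are easy to express and check
mechanically. The representation below is hypothetical, not part of any
existing tool; the vertex names come from the Boolean-option example above:</p>
<pre><code>// Hypothetical representation: each vertex maps to its outgoing arcs, and
// each arc carries the probability that the person’s mental state follows it.
const mentalModel = {
  "encryption off": [
    { to: "encryption on",          p: 0.9 },  // correct tracking
    { to: "believes encryption on", p: 0.1 },  // confusion (square vertex)
  ],
  "encryption on": [
    { to: "encryption off",          p: 0.9 },
    { to: "believes encryption off", p: 0.1 },
  ],
};

// The probabilities of all arcs leading out from a state must add up to
// exactly 1.0 (allowing for floating-point error).
function checkArcProbabilities(graph) {
  for (const [vertex, arcs] of Object.entries(graph)) {
    const total = arcs.reduce((sum, arc) => sum + arc.p, 0);
    if (Math.abs(total - 1.0) > 1e-9) {
      throw new Error(`arcs out of "${vertex}" sum to ${total}, not 1.0`);
    }
  }
}

checkArcProbabilities(mentalModel);
</code></pre>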
<p>There is 1 exception to this rule: the <em>secure attention
sequence</em>.</p>
<h3>The Secure Attention Sequence</h3>
<p>A secure attention sequence (SAS) is any gesture which</p>
<ul>
<li>the underlying platform interprets without passing the gesture through to
potentially untrustworthy, lower-privilege code</li>
<li>always causes the platform to run only code from its trusted computing
base</li>
<li>causes the trustworthy code to show only trustworthy (unforgeable,
un-eavesdroppable) UI</li>
</ul>
<p>Despite the name ‘sequence’, the gesture may be a simple one like pressing
<em>Escape</em> on the keyboard or the <em>Home</em> button on an iOS device.
The term ‘sequence’ may be a historical quirk: one of the first SASs was the
Windows and OS/2 SAS, the complex keyboard gesture
<em>Control-Alt-Delete</em>.</p>
<p>Possible example SASs:</p>
<ul>
<li>Control-Alt-Delete on Windows (the GINA/modern equivalent TODO runs in its
own desktop and as its own principal)</li>
<li>The Home button on iOS devices</li>
<li>Anything on Chrome OS?</li>
</ul>
<p>Requirements for a SAS:</p>
<ul>
<li>all SAS arcs must point to the same vertex, the <em>SAS target</em></li>
<li>the SAS target must not be an incorrect (square) state</li>
<li>every vertex other than the SAS target must have exactly 1 arc, with <var>p</var> =
1.0, leading to the SAS target</li>
</ul>
<p>In these graphs, we will color the arcs pointing to the SAS target in red.
To keep the graphs clean, we will give them no other marking; you may assume
that the properties of a SAS hold if the graph has a SAS at all, that the
SAS gesture is unique and global to the machine, and that the probability that
the human tracks it is 1.0.</p>
<div>
<img src="04-simple-boolean-with-reset.png"/><br/>
<em>Figure 4: A simple Boolean option, with a secure attention sequence to reset
to the default state.</em>
</div>
<p>However, even that is messy. To keep the graphs even more clean, we may color
the SAS target red, use a double-circle for its node, and omit the arcs pointing
to it.</p>
<div>
<img src="05-simple-boolean-with-reset-clean.png"/><br/>
<em>Figure 5: A simple Boolean option, with a secure attention sequence to reset
to the default state.</em>
</div>
<p>If a graph has no SAS target node, then it is a model of a machine that has
no SAS.</p>
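<p>Before moving on, here is a hypothetical sketch of how the SAS requirements
above could be checked mechanically, continuing the graph representation
sketched earlier; the function and parameter names are assumed:</p>
<pre><code>// Check the SAS requirements against a graph in the representation sketched
// earlier. sasArcs maps each vertex to its (single) SAS arc; incorrectStates
// lists the square (incorrect) vertices.
function checkSAS(graph, sasArcs, sasTarget, incorrectStates) {
  if (incorrectStates.includes(sasTarget)) {
    throw new Error("the SAS target must not be an incorrect state");
  }
  for (const vertex of Object.keys(graph)) {
    if (vertex === sasTarget) continue;
    // Every vertex other than the SAS target must have exactly 1 arc, with
    // p = 1.0, leading to the SAS target.
    const arc = sasArcs[vertex];
    if (!arc || arc.to !== sasTarget || arc.p !== 1.0) {
      throw new Error(`no p = 1.0 SAS arc from "${vertex}" to the SAS target`);
    }
  }
}
</code></pre>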
<h2>Example: Full Screen Mode</h2>
<p>Consider a more complex model of machine and human state: A full-screen mode
for a web browser. Various classes of web application (e.g. games, slideshow and
other document viewers, demos, signage) benefit greatly from the ability to use
the entire screen, obscuring not only the native chrome of the browser but that
of the underlying operating system. Obviously, this is potentially damaging to
the person’s mental model of what the machine is doing, because they lose the
ability to read or interact with the native controls — many of which are
security-critical. (Consider, for example, the browser’s Location Bar, or the
operating system’s privilege escalation dialog.)</p>
<p>Thus, if the browser supports full screen mode, a malicious web application
author has a marginally improved ability to spoof the UX and fool the person
using it. For example, they might create a screen that obscures the native
chrome that lets the person know that they are viewing a particular origin, and
replaces it with impostor chrome that tricks the person into thinking they are
visiting another origin. The person might then be induced to provide the
impostor origin with their credentials for the true origin.</p>
<h3>Spoofing And Phishing With Full Screen Mode</h3>
<p>Feross Aboukhadijeh has developed <a
href="http://feross.org/html5-fullscreen-api-attack/">an example of such a
spoofing attack</a>. The attack goes to some effort to mimic the chrome of the
browser and even the underlying operating system, and seeks to spoof the web
site of a large US bank. Because Aboukhadijeh is not really seeking to trick
people, the demo warns the person that they were tricked if they try to interact
with the impostor page.</p>
<p>Here is how the spoof demo looks in Chrome 46. When the person clicks on the
link on Aboukhadijeh’s page, a new window opens and takes up the full screen,
showing a fake Bank Of America page. Chrome 46 shows an overlay over the window,
near the top, telling the person that “feross.org is now full screen” and that
they can <em>Allow</em> this (dismissing the overlay) or <em>Exit full
screen</em> (closing the full screen window).</p>
<div>
<img src="feross-01.png" class="screenshot"/><br/>
<em>Figure 6a: A spoof of Bank Of America using full-screen mode. The
Allow/Exit overlay is showing.</em>
</div>
<p>Assuming the person chooses to Allow feross.org to take the whole screen
(something that only TODO% of people do; TODO talk about the numbers), Chrome
shows a new overlay, with just a notification and without buttons:</p>
<div>
<img src="feross-02.png" class="screenshot"/><br/>
<em>Figure 6b: A spoof of Bank Of America using full-screen mode. The
notification overlay is showing.</em>
</div>
<p>After a few seconds, this new overlay disappears, and the application is now
running unfettered in full screen mode:</p>
<div>
<img src="feross-03.png" class="screenshot"/><br/>
<em>Figure 6c: A spoof of Bank Of America using full-screen mode. No overlays
are showing; the application has full control of all pixels on the screen.</em>
</div>
<p>However, if the person touches the mouse to the top of the screen, the
button-less overlay reappears, re-asserting to the person that they are seeing
feross.org in full screen mode.</p>
<p>Here is what my screen looks like when I browse to the true Bank Of America
site with my browser window maximized. The true Bank Of America site does not
use the full screen feature, so my true browser chrome and my true operating
system chrome are still visible:</p>
<div>
<img src="feross-04.png" class="screenshot"/><br/>
<em>Figure 6d: The real Bank Of America site, not in full screen mode but
maximized, with the true native browser chrome and Ubuntu window management
chrome.</em>
</div>
<p>Even without going to the effort of using full screen mode to create an
elaborate spoof, such an attacker has a good probability of success, because
people check the browser’s origin indicators with <var>p</var> well below
1.0 — and even when they do check, the probability that the person fully
understands what the indicators mean is also below 1.0.</p>
<p>In fact, due to the difficulty of correctly spoofing the browser and OS
chrome, a simpler spoof — just copying the bank’s HTML and serving it from the
attack domain — is likely to have many fewer points of difference between a real
screen and a spoofed screen:</p>
<div>
<img src="boa-simple-spoof.png" class="screenshot"/><br/>
<em>Figure 6e: A more straightforward example of phishing that relies on people
not checking or understanding the contents of Chrome’s Omnibox.</em>
</div>
<h3>Mitigating Risks Of Full Screen Mode</h3>
<p>TODO: Maybe talk about how this is why we don’t allow full screen +
transparent backgrounds. Maybe also talk about Pointer Lock, which makes it that
much more important to help people know how to get out of it. Also that is why
we cannot allow web applications to override the SAS. Potential for applications
to misuse Pointer Lock or have bugs, since you have to manage the mouse
yourself?</p>
<p>Even if full screen mode provides only marginal new opportunities for attack,
we would like to help people model the full screen state of the machine, while
at the same time not degrading the functionality of a benign full screen
application. TODO: Reasons... frustration with decorations messing with games;
complaints from people who get stuck in FS; etc.</p>
<h3>Improving Full Screen Mode While Maintaining Safety</h3>
<p>The main problem we observed with the full screen behavior in Chrome 46 (as
seen in Figures 6a – 6c, above) was that the notification was too obtrusive. We
heard many complaints from people trying to use games in full screen mode, only
to have gameplay interrupted or even ruined by the re-appearing notification.
(For example, the notification might obscure UX elements in the game
itself.) It seems that a good full screen mode for web applications has at least
the following requirements:</p>
<ul>
<li>Must give the web application control of the entire screen — browser chrome
should not use any pixels on the screen around the border of the web
application</li>
<li>Any notifications or browser chrome must not obscure any UX elements in the
web application</li>
<li>Any notifications or browser chrome must not swallow any gestures intended
for UX elements in the web application</li>
<li>It must be possible for people to know whether they are seeing browser and
OS chrome, or the web application UX</li>
<li>It must be easy for people to get out of full screen mode at any time,
regardless of the machine’s state or the person’s understanding of what state
the machine is in</li>
</ul>
<p>To solve the problem given these seemingly contradictory requirements, we
devised a new UX for full screen mode that behaves as follows:</p>
<ol>
<li>The web application is not in full screen mode, but does have elements in
its DOM whose event handlers invoke <code>requestFullScreen</code>. Call these
<em>full screen elements</em>. Only event handlers for <em>affirmative
gestures</em> can invoke <code>requestFullScreen</code>. Affirmative gestures
include mouse clicks and keystrokes, but not, for example, mere movement of the
mouse. (If an event handler for a non-affirmative gesture invokes
<code>requestFullScreen</code>, the browser throws an exception rather than
putting the element in full screen mode.)</li>
<li>When the person applies an affirmative gesture to a full screen element, the
browser puts the element in full screen mode. The browser also shows a
semi-transparent notification on the screen telling the person that the web
application is in full screen mode, and that the person can exit full screen
mode by invoking the SAS (e.g. the <em>Escape</em> key). The semi-transparent
notification does not swallow gesture events; they pass through to the
application. The browser also starts a timer, <var>T<sub>1</sub></var>.</li>
<li>After <var>N</var> seconds, <var>T<sub>1</sub></var> fires, and the
notification disappears. The browser starts a second timer,
<var>T<sub>2</sub></var>.</li>
<li>After <var>M</var> seconds, <var>T<sub>2</sub></var> fires. In this state,
the browser will transition to state <var>2</var> if (and only if) the person
performs a gesture (any gesture, not just an affirmative gesture).</li>
</ol>
<p>Figure 7 shows the state transitions:</p>
<div>
<img src="07-full-screen.png"/><br/>
<em>Figure 7: A full-screen mode that attempts to protect against UX
spoofing.</em>
</div>
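<p>The sketch below renders the same behavior as an explicit state machine. It
is my own illustration of the design described above, not Chrome’s
implementation; the helper functions, notification text, and timer values are
assumptions. (The standard web API spells the method
<code>requestFullscreen</code>.)</p>
<pre><code>// Illustrative model of the full screen UX described above. Not real browser
// code; state numbers match the numbered steps, and N and M are placeholders.
const N_SECONDS = 3;   // how long the notification stays visible (step 3)
const M_SECONDS = 2;   // quiet period before step 4 begins

let state = 1;         // 1: not in full screen mode
let timer = null;

function showNotificationAndArmTimers() {
  showNotification("This page is now full screen. Press Escape to exit.");
  state = 2;
  timer = setTimeout(() => {                      // T1
    hideNotification();
    state = 3;
    timer = setTimeout(() => { state = 4; },      // T2
                       M_SECONDS * 1000);
  }, N_SECONDS * 1000);
}

// Step 2: only an affirmative gesture on a full screen element gets here.
function onAffirmativeGesture(fullScreenElement) {
  if (state !== 1) return;
  fullScreenElement.requestFullscreen();
  showNotificationAndArmTimers();
}

// Step 4: any gesture, affirmative or not, re-shows the notification.
function onAnyGesture() {
  if (state === 4) showNotificationAndArmTimers();
}

// The SAS (Escape): from any state, return to state 1 with p = 1.0.
function onSecureAttentionSequence() {
  clearTimeout(timer);
  hideNotification();
  document.exitFullscreen();
  state = 1;
}
</code></pre>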
<p>Confusion state <var>A</var> may arise in several ways. For example, perhaps
the person did not realize that the full screen element was focused and
receiving affirmative gestures; the person did not realize that their gesture
(e.g. pressing the Space Bar) counted as an affirmative gesture; or they looked
away during the transition to full screen and for <var>M</var> or more seconds
afterward.</p>
<p>Confusion state <var>A</var> is potentially vulnerable. To the extent that
the browser or OS enables high-quality UX redressing/spoofing attacks, people in
this state may be more vulnerable to such attacks.</p>
<h3>Another Class Of Spoofing Attack</h3>
<p>The Android and iOS platforms have minimal window management, effectively
giving all applications full screen control all the time, except for a small
amount of OS chrome (e.g. <a
href="http://developer.android.com/guide/topics/ui/notifiers/notifications.html">Android’s
Notification Area</a>, or <a
href="https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/TransitionGuide/Bars.html#//apple_ref/doc/uid/TP40013174-CH8-SW2">iOS’
Status Bar</a>). In principle, platforms could take advantage of this simplified
window management mechanism to improve safety: since there is (or, should be) no
situation in which application <var>A</var> would draw its views above (higher
in Z-order than) a view from application <var>B</var>, the platform could have
an easier time ensuring that the person’s interaction with the focused
application <var>B</var> is determined solely by <var>B</var>, without
disruption from <var>A</var>.</p>
<p>However, sometimes it happens that application <var>A</var> can, in fact,
show its views above those of <var>B</var>. For example, on the Android
platform, a <a
href="http://developer.android.com/guide/topics/ui/notifiers/toasts.html"><code>Toast</code>
notification</a> that <var>A</var> creates can persist on the screen while the
person believes that they are interacting with <var>B</var>. This can lead to a
state of vulnerability or error, if <var>A</var> <a
href="http://developer.android.com/guide/topics/ui/notifiers/toasts.html#CustomToastView">crafts
a custom <code>View</code></a> that obscures or subverts the message that
<var>B</var> is trying to communicate to the person.</p>
<p>A concrete example of this vulnerability is known to exist. A piece of ransomware
called <a
href="http://www.welivesecurity.com/2015/09/10/aggressive-android-ransomware-spreading-in-the-usa/">Android/Lockerpin.A</a>
used a carefully crafted <code>Toast</code> to obscure the true Android Device
Administrator permission request dialog. The ransomware’s <code>Toast</code>
looks like an urgent ‘patch your device now by clicking here’ message. When the
person taps a fake button in the <code>Toast</code>, their tap passes through to
the OK button in the Device Administrator permission dialog:</p>
<blockquote>…the Trojan obtains Device Administrator rights much more covertly.
The activation window is overlaid with the Trojan’s malicious window pretending
to be an “Update patch installation”. As the victims click through this
innocuous-looking installation they also unknowingly activate the Device
Administrator privileges in the hidden underlying window.</blockquote>
<p>It might have been possible to prevent this using the <a
href="http://developer.android.com/reference/android/view/View.html"><code>filterTouchesWhenObscured</code>
attribute of Android <code>View</code>s</a>. The true Device Administrator
dialog should use the attribute to ensure that it only receives taps when it is
not obscured by another application’s <code>Toast</code> or any other
<code>View</code>.</p>
<!--
Example of use:
https://android.googlesource.com/platform/development/+/master/samples/ApiDemos/src/com/example/android/apis/view/SecureView.java
-->
<p>TODO: Talk about how it’s patched</p>
<p>It would be better if application developers did not have to remember to
filter gesture events when their application’s views do not have the highest
Z-order. One way to do that is for the platform to enforce that only 1
application can have views on the screen at any 1 time.</p>
<h2>Example: Origin Security Indicators</h2>
<p>TODO</p>
<h2>Acknowledgments</h2>
<p>Thanks to Adrienne Porter Felt for helping me develop the idea and providing
additional references. And possibly for being my co-author. <em>hint,
hint</em></p>
<!--
TODO and writing style guidelines:
* Avoid saying ‘user’
* Avoid the passive voice
* Prefer verb phrases to gerunds
-->