<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>2 General attitudes toward AI | Artificial Intelligence: American Attitudes and Trends</title>
<meta name="description" content="2 General attitudes toward AI | Artificial Intelligence: American Attitudes and Trends" />
<meta name="generator" content="bookdown 0.12 and GitBook 2.6.7" />
<meta property="og:title" content="2 General attitudes toward AI | Artificial Intelligence: American Attitudes and Trends" />
<meta property="og:type" content="book" />
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="2 General attitudes toward AI | Artificial Intelligence: American Attitudes and Trends" />
<meta name="author" content="Baobao Zhang and Allan Dafoe" />
<meta name="author" content="Center for the Governance of AI, Future of Humanity Institute, University of Oxford" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="black" />
<link rel="prev" href="executive-summary.html">
<link rel="next" href="public-opinion-on-ai-governance.html">
<script src="libs/jquery-2.2.3/jquery.min.js"></script>
<link href="libs/gitbook-2.6.7/css/style.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-table.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-bookdown.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-highlight.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-search.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-fontsettings.css" rel="stylesheet" />
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-132060565-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-132060565-1');
</script>
</head>
<body>
<div class="book without-animation with-summary font-size-2 font-family-1" data-basepath=".">
<div class="book-summary">
<nav role="navigation">
<ul class="summary">
<li><img src="images/small_logo.png" alt="Report small logo" width="272px" hspace="12" vspace="12"/></li>
<li><a href="index.html"><b>Table of Contents</b></a></li>
<li class="divider"></li>
<li class="chapter" data-level="1" data-path="executive-summary.html"><a href="executive-summary.html"><i class="fa fa-check"></i><b>1</b> Executive summary</a><ul>
<li class="chapter" data-level="1.1" data-path="executive-summary.html"><a href="executive-summary.html#select-results"><i class="fa fa-check"></i><b>1.1</b> Select results</a></li>
<li class="chapter" data-level="1.2" data-path="executive-summary.html"><a href="executive-summary.html#reading-notes"><i class="fa fa-check"></i><b>1.2</b> Reading notes</a></li>
<li class="chapter" data-level="1.3" data-path="executive-summary.html"><a href="executive-summary.html#press-coverage"><i class="fa fa-check"></i><b>1.3</b> Press coverage</a></li>
</ul></li>
<li class="chapter" data-level="2" data-path="general-attitudes-toward-ai.html"><a href="general-attitudes-toward-ai.html"><i class="fa fa-check"></i><b>2</b> General attitudes toward AI</a><ul>
<li class="chapter" data-level="2.1" data-path="general-attitudes-toward-ai.html"><a href="general-attitudes-toward-ai.html#subsecsupportai"><i class="fa fa-check"></i><b>2.1</b> More Americans support than oppose developing AI</a></li>
<li class="chapter" data-level="2.2" data-path="general-attitudes-toward-ai.html"><a href="general-attitudes-toward-ai.html#subsecdemosupportai"><i class="fa fa-check"></i><b>2.2</b> Support for developing AI is greater among those who are wealthy, educated, male, or have experience with technology</a></li>
<li class="chapter" data-level="2.3" data-path="general-attitudes-toward-ai.html"><a href="general-attitudes-toward-ai.html#subsecsupportmanageai"><i class="fa fa-check"></i><b>2.3</b> An overwhelming majority of Americans think that AI and robots should be carefully managed</a></li>
<li class="chapter" data-level="2.4" data-path="general-attitudes-toward-ai.html"><a href="general-attitudes-toward-ai.html#harmful-consequences-of-ai-in-the-context-of-other-global-risks"><i class="fa fa-check"></i><b>2.4</b> Harmful consequences of AI in the context of other global risks</a></li>
<li class="chapter" data-level="2.5" data-path="general-attitudes-toward-ai.html"><a href="general-attitudes-toward-ai.html#americans-understanding-of-key-technology-terms"><i class="fa fa-check"></i><b>2.5</b> Americans’ understanding of key technology terms</a></li>
</ul></li>
<li class="chapter" data-level="3" data-path="public-opinion-on-ai-governance.html"><a href="public-opinion-on-ai-governance.html"><i class="fa fa-check"></i><b>3</b> Public opinion on AI governance</a><ul>
<li class="chapter" data-level="3.1" data-path="public-opinion-on-ai-governance.html"><a href="public-opinion-on-ai-governance.html#subsecgovchallenges13"><i class="fa fa-check"></i><b>3.1</b> Americans consider many AI governance challenges to be important; prioritize data privacy and preventing AI-enhanced cyber attacks, surveillance, and digital manipulation</a></li>
<li class="chapter" data-level="3.2" data-path="public-opinion-on-ai-governance.html"><a href="public-opinion-on-ai-governance.html#americans-who-are-younger-who-have-cs-or-engineering-degrees-express-less-concern-about-ai-governance-challenges"><i class="fa fa-check"></i><b>3.2</b> Americans who are younger, who have CS or engineering degrees express less concern about AI governance challenges</a></li>
<li class="chapter" data-level="3.3" data-path="public-opinion-on-ai-governance.html"><a href="public-opinion-on-ai-governance.html#americans-place-the-most-trust-in-the-u.s.-military-and-universities-to-build-ai-trust-tech-companies-and-non-governmental-organizations-more-than-the-government-to-manage-the-technology"><i class="fa fa-check"></i><b>3.3</b> Americans place the most trust in the U.S. military and universities to build AI; trust tech companies and non-governmental organizations more than the government to manage the technology</a></li>
</ul></li>
<li class="chapter" data-level="4" data-path="ai-policy-and-u-s-china-relations.html"><a href="ai-policy-and-u-s-china-relations.html"><i class="fa fa-check"></i><b>4</b> AI policy and U.S.-China relations</a><ul>
<li class="chapter" data-level="4.1" data-path="ai-policy-and-u-s-china-relations.html"><a href="ai-policy-and-u-s-china-relations.html#americans-underestimate-the-u.s.-and-chinas-ai-research-and-development"><i class="fa fa-check"></i><b>4.1</b> Americans underestimate the U.S. and China’s AI research and development</a></li>
<li class="chapter" data-level="4.2" data-path="ai-policy-and-u-s-china-relations.html"><a href="ai-policy-and-u-s-china-relations.html#subsecexperimentchina"><i class="fa fa-check"></i><b>4.2</b> Communicating the dangers of a U.S.-China arms race requires explaining policy trade-offs</a></li>
<li class="chapter" data-level="4.3" data-path="ai-policy-and-u-s-china-relations.html"><a href="ai-policy-and-u-s-china-relations.html#americans-see-the-potential-for-u.s.-china-cooperation-on-some-ai-governance-challenges"><i class="fa fa-check"></i><b>4.3</b> Americans see the potential for U.S.-China cooperation on some AI governance challenges</a></li>
</ul></li>
<li class="chapter" data-level="5" data-path="trend-across-time-attitudes-toward-workplace-automation.html"><a href="trend-across-time-attitudes-toward-workplace-automation.html"><i class="fa fa-check"></i><b>5</b> Trend across time: attitudes toward workplace automation</a><ul>
<li class="chapter" data-level="5.1" data-path="trend-across-time-attitudes-toward-workplace-automation.html"><a href="trend-across-time-attitudes-toward-workplace-automation.html#americans-do-not-think-that-labor-market-disruptions-will-increase-with-time"><i class="fa fa-check"></i><b>5.1</b> Americans do not think that labor market disruptions will increase with time</a></li>
<li class="chapter" data-level="5.2" data-path="trend-across-time-attitudes-toward-workplace-automation.html"><a href="trend-across-time-attitudes-toward-workplace-automation.html#extending-the-historical-time-trend"><i class="fa fa-check"></i><b>5.2</b> Extending the historical time trend</a></li>
</ul></li>
<li class="chapter" data-level="6" data-path="high-level-machine-intelligence.html"><a href="high-level-machine-intelligence.html"><i class="fa fa-check"></i><b>6</b> High-level machine intelligence</a><ul>
<li class="chapter" data-level="6.1" data-path="high-level-machine-intelligence.html"><a href="high-level-machine-intelligence.html#arrivesooner"><i class="fa fa-check"></i><b>6.1</b> The public predicts a 54% likelihood of high-level machine intelligence within 10 years</a></li>
<li class="chapter" data-level="6.2" data-path="high-level-machine-intelligence.html"><a href="high-level-machine-intelligence.html#subsecsupporthlmi"><i class="fa fa-check"></i><b>6.2</b> Americans express mixed support for developing high-level machine intelligence</a></li>
<li class="chapter" data-level="6.3" data-path="high-level-machine-intelligence.html"><a href="high-level-machine-intelligence.html#subsecdemohlmi"><i class="fa fa-check"></i><b>6.3</b> High-income Americans, men, and those with tech experience express greater support for high-level machine intelligence</a></li>
<li class="chapter" data-level="6.4" data-path="high-level-machine-intelligence.html"><a href="high-level-machine-intelligence.html#subsecharmgood"><i class="fa fa-check"></i><b>6.4</b> The public expects high-level machine intelligence to be more harmful than good</a></li>
</ul></li>
<li class="appendix"><span><b>Appendices</b></span></li>
<li class="chapter" data-level="A" data-path="appmethod.html"><a href="appmethod.html"><i class="fa fa-check"></i><b>A</b> Appendix A: Methodology</a><ul>
<li class="chapter" data-level="A.1" data-path="appmethod.html"><a href="appmethod.html#yougovsampling"><i class="fa fa-check"></i><b>A.1</b> YouGov sampling and weights</a></li>
<li class="chapter" data-level="A.2" data-path="appmethod.html"><a href="appmethod.html#appdemosubgroups"><i class="fa fa-check"></i><b>A.2</b> Demographic subgroups</a></li>
<li class="chapter" data-level="A.3" data-path="appmethod.html"><a href="appmethod.html#appanalysis"><i class="fa fa-check"></i><b>A.3</b> Analysis</a></li>
<li class="chapter" data-level="A.4" data-path="appmethod.html"><a href="appmethod.html#datasharing"><i class="fa fa-check"></i><b>A.4</b> Data sharing</a></li>
</ul></li>
<li class="chapter" data-level="B" data-path="apptopline.html"><a href="apptopline.html"><i class="fa fa-check"></i><b>B</b> Appendix B: Topline questionnaire</a><ul>
<li class="chapter" data-level="B.1" data-path="apptopline.html"><a href="apptopline.html#global_risks"><i class="fa fa-check"></i><b>B.1</b> Global risks</a></li>
<li class="chapter" data-level="B.2" data-path="apptopline.html"><a href="apptopline.html#considersai"><i class="fa fa-check"></i><b>B.2</b> Survey experiment: what the public considers AI, automation, machine learning, and robotics</a></li>
<li class="chapter" data-level="B.3" data-path="apptopline.html"><a href="apptopline.html#knowledge-of-computer-science-cstechnology"><i class="fa fa-check"></i><b>B.3</b> Knowledge of computer science (CS)/technology</a></li>
<li class="chapter" data-level="B.4" data-path="apptopline.html"><a href="apptopline.html#supportdevai"><i class="fa fa-check"></i><b>B.4</b> Support for developing AI</a></li>
<li class="chapter" data-level="B.5" data-path="apptopline.html"><a href="apptopline.html#manageexp"><i class="fa fa-check"></i><b>B.5</b> Survey experiment: AI and/or robots should be carefully managed</a></li>
<li class="chapter" data-level="B.6" data-path="apptopline.html"><a href="apptopline.html#trustdevai"><i class="fa fa-check"></i><b>B.6</b> Trust of actors to develop AI</a></li>
<li class="chapter" data-level="B.7" data-path="apptopline.html"><a href="apptopline.html#trustmanageai"><i class="fa fa-check"></i><b>B.7</b> Trust of actors to manage AI</a></li>
<li class="chapter" data-level="B.8" data-path="apptopline.html"><a href="apptopline.html#govchallenges"><i class="fa fa-check"></i><b>B.8</b> AI governance challenges</a></li>
<li class="chapter" data-level="B.9" data-path="apptopline.html"><a href="apptopline.html#airesearchcompare"><i class="fa fa-check"></i><b>B.9</b> Survey experiment: comparing perceptions of U.S. vs. China AI research and development</a></li>
<li class="chapter" data-level="B.10" data-path="apptopline.html"><a href="apptopline.html#armsraceexp"><i class="fa fa-check"></i><b>B.10</b> Survey experiment: U.S.-China arms race</a><ul>
<li class="chapter" data-level="B.10.1" data-path="apptopline.html"><a href="apptopline.html#control"><i class="fa fa-check"></i><b>B.10.1</b> Control</a></li>
<li class="chapter" data-level="B.10.2" data-path="apptopline.html"><a href="apptopline.html#nationalism-treatment"><i class="fa fa-check"></i><b>B.10.2</b> Nationalism treatment</a></li>
<li class="chapter" data-level="B.10.3" data-path="apptopline.html"><a href="apptopline.html#war-risks-treatment"><i class="fa fa-check"></i><b>B.10.3</b> War risks treatment</a></li>
<li class="chapter" data-level="B.10.4" data-path="apptopline.html"><a href="apptopline.html#common-humanity-treatment"><i class="fa fa-check"></i><b>B.10.4</b> Common humanity treatment</a></li>
</ul></li>
<li class="chapter" data-level="B.11" data-path="apptopline.html"><a href="apptopline.html#uschinacoop"><i class="fa fa-check"></i><b>B.11</b> Issue areas for possible U.S.-China cooperation</a></li>
<li class="chapter" data-level="B.12" data-path="apptopline.html"><a href="apptopline.html#jobtime"><i class="fa fa-check"></i><b>B.12</b> Trend across time: job creation or job loss</a></li>
<li class="chapter" data-level="B.13" data-path="apptopline.html"><a href="apptopline.html#forecasthlmi"><i class="fa fa-check"></i><b>B.13</b> High-level machine intelligence: forecasting timeline</a></li>
<li class="chapter" data-level="B.14" data-path="apptopline.html"><a href="apptopline.html#supporthlmi"><i class="fa fa-check"></i><b>B.14</b> Support for developing high-level machine intelligence</a></li>
<li class="chapter" data-level="B.15" data-path="apptopline.html"><a href="apptopline.html#expectedoutcome"><i class="fa fa-check"></i><b>B.15</b> Expected outcome of high-level machine intelligence</a></li>
</ul></li>
<li class="chapter" data-level="C" data-path="addresults.html"><a href="addresults.html"><i class="fa fa-check"></i><b>C</b> Appendix C: Additional data analysis results</a><ul>
<li class="chapter" data-level="C.1" data-path="addresults.html"><a href="addresults.html#addsupportdevai"><i class="fa fa-check"></i><b>C.1</b> Support for developing AI</a></li>
<li class="chapter" data-level="C.2" data-path="addresults.html"><a href="addresults.html#addcarefullym"><i class="fa fa-check"></i><b>C.2</b> Survey experiment and cross-national comparison: AI and/or robots should be carefully managed</a></li>
<li class="chapter" data-level="C.3" data-path="addresults.html"><a href="addresults.html#appglobalrisks"><i class="fa fa-check"></i><b>C.3</b> Harmful consequences of AI in the context of other global risks</a></li>
<li class="chapter" data-level="C.4" data-path="addresults.html"><a href="addresults.html#aawhatsai"><i class="fa fa-check"></i><b>C.4</b> Survey experiment: what the public considers AI, automation, machine learning, and robotics</a></li>
<li class="chapter" data-level="C.5" data-path="addresults.html"><a href="addresults.html#appgovchallenges"><i class="fa fa-check"></i><b>C.5</b> AI governance challenges: prioritizing governance challenges</a></li>
<li class="chapter" data-level="C.6" data-path="addresults.html"><a href="addresults.html#trust-in-various-actors-to-develop-and-manage-ai-in-the-interest-of-the-public"><i class="fa fa-check"></i><b>C.6</b> Trust in various actors to develop and manage AI in the interest of the public</a></li>
<li class="chapter" data-level="C.7" data-path="addresults.html"><a href="addresults.html#appuschinacomp"><i class="fa fa-check"></i><b>C.7</b> Survey experiment: comparing perceptions of U.S. vs. China AI research and development</a></li>
<li class="chapter" data-level="C.8" data-path="addresults.html"><a href="addresults.html#appuschinaarmsrace"><i class="fa fa-check"></i><b>C.8</b> Survey experiment: U.S.-China arms race</a></li>
<li class="chapter" data-level="C.9" data-path="addresults.html"><a href="addresults.html#appjobloss"><i class="fa fa-check"></i><b>C.9</b> Trend across time: job creation or job loss</a></li>
<li class="chapter" data-level="C.10" data-path="addresults.html"><a href="addresults.html#apphlmi"><i class="fa fa-check"></i><b>C.10</b> High-level machine intelligence: forecasting timeline</a></li>
<li class="chapter" data-level="C.11" data-path="addresults.html"><a href="addresults.html#appsupporthlmi"><i class="fa fa-check"></i><b>C.11</b> Support for developing high-level machine intelligence</a></li>
<li class="chapter" data-level="C.12" data-path="addresults.html"><a href="addresults.html#appexpectedoutcome"><i class="fa fa-check"></i><b>C.12</b> Expected outcome of high-level machine intelligence</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="acknowledgements.html"><a href="acknowledgements.html"><i class="fa fa-check"></i>Acknowledgements</a><ul>
<li class="chapter" data-level="" data-path="acknowledgements.html"><a href="acknowledgements.html#primary-researchers"><i class="fa fa-check"></i>Primary researchers</a></li>
<li class="chapter" data-level="" data-path="acknowledgements.html"><a href="acknowledgements.html#editing-and-design"><i class="fa fa-check"></i>Editing and design</a></li>
<li class="chapter" data-level="" data-path="acknowledgements.html"><a href="acknowledgements.html#funders"><i class="fa fa-check"></i>Funders</a></li>
<li class="chapter" data-level="" data-path="acknowledgements.html"><a href="acknowledgements.html#for-media-or-other-inquiries"><i class="fa fa-check"></i>For media or other inquiries</a></li>
<li class="chapter" data-level="" data-path="acknowledgements.html"><a href="acknowledgements.html#recommended-citation"><i class="fa fa-check"></i>Recommended citation</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="about-us.html"><a href="about-us.html"><i class="fa fa-check"></i>About us</a><ul>
<li class="chapter" data-level="" data-path="about-us.html"><a href="about-us.html#about-the-center-for-the-governance-of-ai"><i class="fa fa-check"></i>About the Center for the Governance of AI</a></li>
<li class="chapter" data-level="" data-path="about-us.html"><a href="about-us.html#about-the-future-of-humanity-institute"><i class="fa fa-check"></i>About the Future of Humanity Institute</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="references.html"><a href="references.html"><i class="fa fa-check"></i>References</a></li>
<li class="divider"></li>
<li><a href="https://governance.ai">Center for the Governance of AI</a></li>
<li><a href="https://www.fhi.ox.ac.uk/">Future of Humanity Institute</a></li>
<li><a href="http://www.ox.ac.uk/">University of Oxford</a></li>
<li><img src="images/FHI-Logo-Print.png" alt="FHI logo" width="77px" hspace="12"/><img src="images/oxford-university-logo.png" alt="Oxford logo" width="74px" hspace="12"/></li>
</ul>
</nav>
</div>
<div class="book-body">
<div class="body-inner">
<div class="book-header" role="navigation">
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i><a href="./">Artificial Intelligence: American Attitudes and Trends</a>
</h1>
</div>
<div class="page-wrapper" tabindex="-1" role="main">
<div class="page-inner">
<section class="normal" id="section-">
<div id="general-attitudes-toward-ai" class="section level1">
<h1><span class="header-section-number">2</span> General attitudes toward AI</h1>
<div id="subsecsupportai" class="section level2">
<h2><span class="header-section-number">2.1</span> More Americans support than oppose developing AI</h2>
<p>We measured respondents’ support for the further development of AI after providing them with basic information about the technology. Respondents were given the following definition of AI:</p>
<blockquote>
<p>Artificial Intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. AI can perform these tasks or make these decisions without explicit human instructions. Today, AI has been used in the following applications: [five randomly selected applications]</p>
</blockquote>
<p>Each respondent viewed five applications randomly selected from a list of 14 that included translation, image classification, and disease diagnosis. Afterward, respondents were asked how much they support or oppose the development of AI. (See <a href="apptopline.html#supportdevai">Appendix B</a> for the list of the 14 applications and the survey question.)</p>
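<p>As an illustration of this design, the following minimal sketch (in Python) draws five applications without replacement for each respondent. The application names are hypothetical placeholders rather than the survey’s actual wording, and the seed is arbitrary.</p>
<pre><code>import random

# Hypothetical stand-ins for the 14 applications listed in Appendix B.
applications = [f"application_{i}" for i in range(1, 15)]

def draw_examples(rng, k=5):
    """Draw k of the 14 applications, without replacement, for one respondent."""
    return rng.sample(applications, k)

rng = random.Random(2018)   # fixed seed so the draws are reproducible
shown = draw_examples(rng)  # the five applications one respondent would see
</code></pre>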
<div class="figure"><span id="fig:supportdevrisks"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/supportdevrisks-1.png" alt="Support for developing AI" width="2100" />
<p class="caption">
Figure 2.1: Support for developing AI
</p>
</div>
<p>Americans express mixed support for the development of AI, although more support than oppose it, as shown in Figure <a href="general-attitudes-toward-ai.html#fig:supportdevrisks">2.1</a>. A substantial minority (41%) somewhat or strongly supports the development of AI, while a smaller minority (22%) somewhat or strongly opposes it. Many express a neutral attitude: 28% of respondents state that they neither support nor oppose, while 10% indicate that they do not know.</p>
<p>Our survey results reflect the cautious optimism that Americans express in other polls. In a recent survey, 51% of Americans indicated that they supported continuing AI research, while 31% opposed it <span class="citation">(Morning Consult <a href="#ref-morningconsult2017">2017</a>)</span>. Furthermore, 77% of Americans said that AI would have a “very positive” or “mostly positive” impact on how people work and live in the next 10 years, while 23% thought that AI’s impact would be “very negative” or “mostly negative” <span class="citation">(Northeastern University and Gallup <a href="#ref-negallup2018">2018</a>)</span>.</p>
</div>
<div id="subsecdemosupportai" class="section level2">
<h2><span class="header-section-number">2.2</span> Support for developing AI is greater among those who are wealthy, educated, male, or have experience with technology</h2>
<p>We examined support for developing AI by 11 demographic subgroup variables, including age, gender, race, and education. (See <a href="appmethod.html#appdemosubgroups">Appendix A</a> for descriptions of the demographic subgroups.) We performed a multiple linear regression to predict support for developing AI using all these demographic variables.</p>
<div class="figure"><span id="fig:demographicsupportstackbar"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/demographicsupportstackbar-1.png" alt="Support for developing AI across demographic characteristics: distribution of responses" width="2550" />
<p class="caption">
Figure 2.2: Support for developing AI across demographic characteristics: distribution of responses
</p>
</div>
<p>Support for developing AI varies greatly across demographic subgroups, with gender, education, income, and experience being key predictors. As seen in Figure <a href="general-attitudes-toward-ai.html#fig:demographicsupportstackbar">2.2</a>, a majority of respondents in each of the following four subgroups express support for developing AI: those with four-year college degrees (57%), those with an annual household income above $100,000 (59%), those who have completed a computer science or engineering degree (56%), and those with computer science or programming experience (58%). In contrast, women (35%), those with a high school degree or less (29%), and those with an annual household income below $30,000 (33%) are much less enthusiastic about developing AI. One possible explanation for these results is that subgroups that are more vulnerable to workplace automation express less enthusiasm for developing AI. Within developed countries, women, those with low levels of education, and low-income workers have jobs that are at higher risk of automation, according to an analysis by the Organisation for Economic Co-operation and Development <span class="citation">(Nedelkoska and Quintini <a href="#ref-nedelkoska2018automation">2018</a>)</span>.</p>
<p>We used a multiple regression that included all of the demographic variables to predict support for developing AI. The outcome variable, support for developing AI, was standardized so that it has mean 0 and unit variance.</p>
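<p>A minimal sketch of this kind of regression in Python follows, assuming a data frame with one row per respondent; the column names and toy data are hypothetical, and survey weights are ignored for simplicity.</p>
<pre><code>import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the respondent-level data.
df = pd.DataFrame({
    "support":       [2, -1, 1, 0, 2, -2, 1, 0],
    "gender":        ["m", "f", "m", "f", "m", "f", "f", "m"],
    "education":     ["college", "hs", "college", "hs"] * 2,
    "cs_experience": [1, 0, 1, 0, 0, 0, 1, 1],
})

# Standardize the outcome to mean 0 and unit variance.
df["support_z"] = (df["support"] - df["support"].mean()) / df["support"].std()

# OLS with all demographic predictors entered together.
model = smf.ols("support_z ~ C(gender) + C(education) + cs_experience",
                data=df).fit()
print(model.summary())
</code></pre>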
<p>Significant predictors of <em>support</em> for developing AI include:</p>
<ul>
<li>Being a Millennial/post-Millennial (versus being a Gen Xer or Baby Boomer)</li>
<li>Being male (versus being female)</li>
<li>Having graduated from a four-year college (versus having a high school degree or less)</li>
<li>Identifying as a Democrat (versus identifying as a Republican)</li>
<li>Having a family income of more than $100,000 annually (versus having a family income of less than $30,000 annually)</li>
<li>Not having a religious affiliation (versus identifying as a Christian)</li>
<li>Having CS or programming experience (versus not having such experience)</li>
</ul>
<!-- [done] Um, this leads me to wonder whether vs other comparisons is not significant. But actually other comparisons are more significant. I think better to say this, eg versus reporting any other level of income.
[BZ: Can you explain your comment? I am not understanding it. The category that was left out was having a family income of less than $30K annually -- that's the baseline group. Are you suggesting that we use "not reporting one's income" as the baseline group instead? I think it's more work to fix it now.
AD 2811 to BZ: what I was saying: if we drop those who didn't answer, is income a predictor? That's more what matters. People who didn't answer are weird for other reasons. It seems like income might be significant, might not, hard to read off the coef plot]
BZ_2911: We did not drop those who did not answer. For those who did not answer the income question, we classified them as "prefer not to say." So the income categories are: less to 30K, 30-70K, 70-100K, more than 100K, and prefer not to say income. It's not significant -- how about I make a regression table for these coefficient plots with the p-values and put it in the appendix?
AD_3011: Ok, so you are saying the income differences are not sig btw each other? Then yes i don't think we should talk about income, because "not reporting" is some weird measure, I don't know how it relates to income. It confounds with privacy concerned, etc...
Anyhow, I removed it because it is weird.
-->
<div class="figure"><span id="fig:demographicsupportregression"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/demographicsupportregression-1.png" alt="Support for developing AI across demographic characteristics: average support across groups" width="2100" />
<p class="caption">
Figure 2.3: Support for developing AI across demographic characteristics: average support across groups
</p>
</div>
<p>Some of the demographic differences we observe in this survey are in line with existing public opinion research. Below we highlight three salient predictors of support for AI based on the existing literature: gender, education, and income.</p>
<p>Around the world, women have viewed AI more negatively than men. Fifty-four percent of women in EU countries viewed AI positively, compared with 67% of men <span class="citation">(Eurobarometer <a href="#ref-eurobarometer460">2017</a>)</span>. Likewise, in the U.S., 44% of women perceived AI as unsafe, compared with 30% of men <span class="citation">(Morning Consult <a href="#ref-morningconsult2017">2017</a>)</span>. This gender difference could be explained by the fact that women have expressed higher distrust of technology than men. In the U.S., women were more likely than men to view genetically modified foods or foods treated with pesticides as unsafe to eat, to oppose building more nuclear power plants, and to oppose fracking <span class="citation">(Funk and Rainie <a href="#ref-funk2015">2015</a>)</span>.</p>
<p>One’s level of education also predicts one’s enthusiasm toward AI, according to existing research. Reflecting upon their own jobs, 32% of Americans with no college education thought that technology had increased their opportunities to advance, compared with 53% of Americans with a college degree <span class="citation">(Smith and Anderson <a href="#ref-smith2017">2017</a>)</span>. Reflecting on the economy at large, 38% of those with post-graduate education felt that automation had helped American workers, while only 19% of those with less than a college degree thought so <span class="citation">(Graham <a href="#ref-graham2018">2018</a>)</span>. A similar trend holds in the EU: those with more years of education, relative to those with fewer, were more likely to value AI as good for society and less likely to think that AI steals people’s jobs <span class="citation">(Eurobarometer <a href="#ref-eurobarometer460">2017</a>)</span>.</p>
<p>Another significant demographic divide in attitudes toward AI is income: low-income respondents view AI more negatively than high-income respondents do. For instance, 40% of EU residents who had difficulty paying their bills “most of the time” held negative views toward robots and AI, compared with 27% of those who “almost never” or “never” had difficulty paying their bills <span class="citation">(Eurobarometer <a href="#ref-eurobarometer460">2017</a>)</span>. In the U.S., 19% of those who made less than $50,000 annually thought that they were likely to lose their jobs to automation, compared with only 8% of Americans who made more than $100,000 annually <span class="citation">(Graham <a href="#ref-graham2018">2018</a>)</span>. Furthermore, Americans’ belief that AI will help the economy, as well as their support for AI research, is positively correlated with their income <span class="citation">(Morning Consult <a href="#ref-morningconsult2017">2017</a>)</span>.</p>
<div class="figure"><span id="fig:demographicsupport2"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/demographicsupport2-1.png" alt="Predicting support for developing AI using demographic characteristics: results from a multiple linear regression that includes all demographic variables" width="2100" />
<p class="caption">
Figure 2.4: Predicting support for developing AI using demographic characteristics: results from a multiple linear regression that includes all demographic variables
</p>
</div>
</div>
<div id="subsecsupportmanageai" class="section level2">
<h2><span class="header-section-number">2.3</span> An overwhelming majority of Americans think that AI and robots should be carefully managed</h2>
<p>To compare Americans’ attitudes with those of EU residents, we performed a survey experiment that replicated a question from the <a href="https://perma.cc/9FRT-ADST">2017 Special Eurobarometer #460</a>. (Details of the survey experiment are found in <a href="apptopline.html#manageexp">Appendix B</a>.) The original question asked respondents to what extent they agree or disagree with the following statement:</p>
<blockquote>
<p>Robots and artificial intelligence are technologies that require careful management.</p>
</blockquote>
<p>We asked a similar question, except that respondents were randomly assigned to consider one of the following three statements:</p>
<ul>
<li>AI and robots are technologies that require careful management.</li>
<li>AI is a technology that requires careful management.</li>
<li>Robots are technologies that require careful management.</li>
</ul>
<p>Our respondents were given the <a href="apptopline.html#manageexp">same answer choices</a> presented to the Eurobarometer subjects.</p>
<p>The overwhelming majority of Americans – more than eight in 10 – agree that AI and/or robots should be carefully managed, while only 6% disagree, as seen in Figure <a href="general-attitudes-toward-ai.html#fig:aimanaged">2.5</a>.<a href="#fn5" class="footnote-ref" id="fnref5"><sup>5</sup></a> We find that variations in the statement wording produce <a href="addresults.html#addcarefullym">only minor differences in responses, statistically indistinguishable from zero</a>.</p>
<div class="figure"><span id="fig:aimanaged"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/aimanaged-1.png" alt="Agreement with statement that AI and/or robots should be carefully managed" width="672" />
<p class="caption">
Figure 2.5: Agreement with statement that AI and/or robots should be carefully managed
</p>
</div>
<div class="figure"><span id="fig:aimanagedexp2"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/aimanagedexp2-1.png" alt="Agreement with statement that AI and/or robots should be carefully managed by experimental condition" width="2100" />
<p class="caption">
Figure 2.6: Agreement with statement that AI and/or robots should be carefully managed by experimental condition
</p>
</div>
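<p>As a check on the claim above that the wording variations do not detectably shift responses, one could run a chi-square test of independence between experimental condition and response. A sketch with hypothetical cell counts (not the survey’s actual tallies):</p>
<pre><code>from scipy.stats import chi2_contingency

# Rows: the three statement wordings; columns: counts of
# (agree, neither/don't know, disagree). Values are hypothetical.
table = [
    [560, 80, 40],   # "AI and robots ..."
    [548, 92, 44],   # "AI ..."
    [555, 86, 41],   # "Robots ..."
]

stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.2f}, dof = {dof}, p = {p_value:.3f}")
</code></pre>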
<p>Next, we compared our survey results with the responses from the 2017 Special Eurobarometer #460 by country <span class="citation">(Eurobarometer <a href="#ref-eurobarometer460">2017</a>)</span>. For the U.S., we used all the responses to our survey question, unconditional on the experimental condition, because the variations in question-wording do not affect responses.</p>
<p>The percentage of respondents in the U.S. who agree with the statement (82%) is close to the EU average (88%). Likewise, the percentage of Americans who disagree with the statement (6%) is comparable with the EU average (7%). The U.S. ranks among the lowest countries in agreement with the statement, in part because of the relatively high percentage of respondents who selected the “don’t know” option.</p>
<div class="figure"><span id="fig:eu"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/eu-1.png" alt="Agreement with statement that robots and AI require careful management (EU data from 2017 Special Eurobarometer #460)" width="2100" />
<p class="caption">
Figure 2.7: Agreement with statement that robots and AI require careful management (EU data from 2017 Special Eurobarometer #460)
</p>
</div>
</div>
<div id="harmful-consequences-of-ai-in-the-context-of-other-global-risks" class="section level2">
<h2><span class="header-section-number">2.4</span> Harmful consequences of AI in the context of other global risks</h2>
<div class="figure"><span id="fig:globalrisksfig"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/globalrisksfig-1.png" alt="The American public's perceptions of 15 potential global risks" width="2100" />
<p class="caption">
Figure 2.8: The American public’s perceptions of 15 potential global risks
</p>
</div>
<p>At the beginning of the survey, respondents were asked to consider five out of 15 potential global risks (the descriptions are found in <a href="apptopline.html#global_risks">Appendix B</a>). The purpose of this task was to compare respondents’ perception of AI as a global risk with their notions of other potential global risks. The global risks were selected from the <a href="https://perma.cc/8XM8-LKEN">Global Risks Report 2018</a>, published by the World Economic Forum. We edited the description of each risk to be more comprehensible to non-expert respondents while preserving the substantive content. We gave the following definition for a global risk:</p>
<blockquote>
<p>A “global risk” is an uncertain event or condition that, if it happens, could cause significant negative impact for at least 10 percent of the world’s population. That is, at least 1 in 10 people around the world could experience a significant negative impact.<a href="#fn6" class="footnote-ref" id="fnref6"><sup>6</sup></a></p>
</blockquote>
<p>After considering each potential global risk, respondents were asked to evaluate the likelihood of it happening globally within 10 years, as well as its impact on several countries or industries.</p>
<p>We use a scatterplot (Figure <a href="general-attitudes-toward-ai.html#fig:globalrisksfig">2.8</a>) to visualize results from respondents’ evaluations of global risks. The <em>x</em>-axis is the perceived likelihood of the risk happening globally within 10 years. The <em>y</em>-axis is the perceived impact of the risk. The mean perceived likelihood and impact of each risk are represented by a dot; the corresponding ellipse contains the 95% confidence region.</p>
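<p>For readers interested in how such a confidence region is computed, the sketch below derives a 95% confidence ellipse from the sampling covariance of the mean, using simulated ratings in place of the survey data (the distributions and sample size are assumptions).</p>
<pre><code>import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
# Simulated per-respondent ratings for one risk: likelihood on [0, 1],
# impact on a 0-4 scale. These stand in for the survey responses.
likelihood = rng.uniform(0.3, 0.7, size=500)
impact = rng.uniform(2.0, 3.0, size=500)

X = np.column_stack([likelihood, impact])
center = X.mean(axis=0)                         # the plotted dot
cov_of_mean = np.cov(X, rowvar=False) / len(X)  # covariance of the mean

# The 95% ellipse has axes along the eigenvectors of cov_of_mean,
# with half-lengths sqrt(eigenvalue * chi-square quantile, 2 df).
evals, evecs = np.linalg.eigh(cov_of_mean)
half_lengths = np.sqrt(evals * chi2.ppf(0.95, df=2))
print(center, half_lengths)
</code></pre>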
<p>In general, Americans perceive all these risks to be impactful: on average, they rate each as having between a moderate (2) and severe (3) negative impact if it were to occur. Americans perceive the use of weapons of mass destruction to be the most impactful – at the “severe” level (mean score 3.0 out of 4). Although they do not consider this risk to be as likely as the others, they still assign it an average 49% probability of occurring within 10 years. Risks in the upper-right quadrant are perceived to be both the most likely and the most impactful. These include natural disasters, cyber attacks, and extreme weather events.</p>
<p>The American public and the nearly 1,000 experts surveyed by the World Economic Forum share similar views regarding most of the potential global risks we asked about <span class="citation">(World Economic Forum <a href="#ref-wef2018">2018</a>)</span>. Both the public and the experts rank extreme weather events, natural disasters, and cyber attacks as the top three most likely global risks; likewise, both groups consider weapons of mass destruction to be the most impactful. Nevertheless, compared with experts, Americans offer a lower estimate of the likelihood and impact of the failure to address climate change.</p>
<p>The American public appears to overestimate the likelihood of these risks materializing within 10 years. The mean responses imply (assuming independence) that about eight of these 15 global risks, each of which would have a significant negative impact on at least 10% of the world’s population, will take place in the next 10 years. One explanation is the broad misconception that the world is in a much worse state than it is in reality <span class="citation">(Pinker <a href="#ref-pinker2018enlightenment">2018</a>; Rosling, Rönnlund, and Rosling <a href="#ref-rosling2018factfulness">2018</a>)</span>. Another is that respondents interpreted “significant negative impact” in a relatively minimal way, though this reading is hard to sustain given that the mean severity falls between “moderate” and “severe.” Finally, the result may reflect respondents centering their answers within the distribution of our response options, the middle of which was the 40-60% option; if so, the likelihoods should not be interpreted literally in an absolute sense.</p>
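<p>The “about eight” figure is simply the sum of the mean perceived likelihoods across the 15 risks, under the independence assumption. A worked version with illustrative likelihoods (not the survey’s estimates):</p>
<pre><code># Illustrative mean likelihoods for the 15 risks; under independence,
# the expected number of risks that materialize is their sum.
mean_likelihoods = [0.49, 0.62, 0.60, 0.58, 0.55, 0.53, 0.52, 0.50,
                    0.50, 0.48, 0.47, 0.46, 0.45, 0.44, 0.43]
expected_count = sum(mean_likelihoods)
print(round(expected_count, 1))   # roughly 8 of the 15 risks
</code></pre>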
<p>The adverse consequences of AI within the next 10 years appear to be a relatively low priority in respondents’ assessment of global risks. AI – along with the adverse consequences of synthetic biology – occupies the lower-left quadrant, which contains what are perceived to be lower-probability, lower-impact risks.<a href="#fn7" class="footnote-ref" id="fnref7"><sup>7</sup></a> These risks are perceived to be as impactful (within the next 10 years) as the failure to address climate change, though less probable. One interpretation is that the average American simply does not regard AI as posing a substantial global risk. This interpretation, however, would be in tension with some expert assessments of catastrophic risks, which suggest that unsafe AI could pose significant danger <span class="citation">(World Economic Forum <a href="#ref-wef2018">2018</a>; Sandberg and Bostrom <a href="#ref-sandberg2008">2008</a>)</span>. The gap between the experts’ and the public’s assessments suggests that this is a fruitful area for efforts to educate the public.</p>
<p>Another interpretation of our results is that Americans do have substantial concerns about the long-run impacts of advanced AI but do not see these risks as likely in the coming 10 years. As support for this interpretation, we later find that 12% of Americans believe the impact of high-level machine intelligence will be “extremely bad, possibly human extinction,” and 21% believe that it will be “on balance bad.” Still, even though the median respondent expects around a 54% chance of high-level machine intelligence within 10 years, respondents may believe that the risks from high-level machine intelligence will manifest years later. If we assume respondents believe that global catastrophic risks from AI emerge only from high-level AI, we can infer an implied global risk, conditional on high-level AI (within 10 years), of 80%. Future work should try to unpack and understand these beliefs.</p>
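<p>The implied 80% figure follows from the definition of conditional probability. A worked version is below; the 54% comes from the forecasting results in Chapter 6, while the 43% marginal likelihood is our approximate reading of Figure 2.8 and should be treated as an assumption.</p>
<pre><code># If catastrophic AI risk can arise only via high-level machine
# intelligence (HLMI), then
#   P(risk within 10y) = P(HLMI within 10y) * P(risk | HLMI).
p_hlmi = 0.54   # median public forecast of HLMI within 10 years
p_risk = 0.43   # assumed mean perceived likelihood of harmful AI (Fig. 2.8)

p_risk_given_hlmi = p_risk / p_hlmi
print(f"{p_risk_given_hlmi:.0%}")   # roughly 80%
</code></pre>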
<!-- BZ 27-12: I am still not satisified with this sentence:
If we assume respondents believe global catastrophic risks from AI only emerge from high-level AI, we can infer an implied global risk, conditional on high-level AI (within 10 years), of 80%.
I suggest that we delete it.
I do not think we can make these assumptions. We asked about HLMI much later on in the survey; the global risk question was in the beginning of the survey. We cannot condition on the high-level AI stuff because that is definitely post-treatment of our global risk question. I don't think people have very coherent thoughts about AI risks in the beginning of the survey; many are probably not even thinking about HLMI. In future surveys, we could ask the global risk question twice: once in the beginning and once at the end.
-->
</div>
<div id="americans-understanding-of-key-technology-terms" class="section level2">
<h2><span class="header-section-number">2.5</span> Americans’ understanding of key technology terms</h2>
<!-- [done]
BZ: I added information that explains the non-response is correlated with respondent inattention. I also did F-tests to show that responses are different for the terms within each technological application. The results of the analysis are found in Appendix C.
I'm looking for advice regarding where to put this subsection.
Reasons for putting it here: It makes sense chronologically because it's most of the first things that we asked. It's asked before we define AI for respondents.
Reason for not putting it here: It's not important enough to be the first subsection. It might make readers think that our sample is low quality (because of inattention) or that the public is ignorant, so their opinions don't matter.
We could put it at the end of this section and explain that it was asked very early in the survey before we defined AI.
My assessment: It's 95% there -- mainly we need to figure out where to put this subsection.
AD: why is it written in present tense? started changing it to past tense.
Should we have a concise title for the subsection? Eg “Public Understanding of Key Terms”.
[BZ: I shortened the subsection title as you had suggested.]
Or we should make more clear visually that the heading is a summary of the result. But then we need to make discussion beneath it focused on that.
Yes, I think this should go later. It is the least interesting.
[BZ: I moved it to here.]
Should quote terms throughout, or italicize do not use “AI” or “machine learning”.
[BZ: I made everything italics.]
The discussion was a bit confusing, as each paragraph offers an "explanation", but it wasn't completely clear what was being explained. Can you state (or visually call out) more clearly the "finding", which is that AI was under-answered.
[BZ: I rewrote the discussion to be more explicit.]
Also, what about the other finding, eg that there was some nuance in how these terms were understood? I think that is worth noting, to show how these terms aren't all understood the same way, and the variation roughly corresponds to how experts understand it. But also we shouldn't make too much about this, given the results are modest.
-->
<p>We used a survey experiment to understand how the public understands the terms <em>AI</em>, <em>automation</em>, <em>machine learning</em>, and <em>robotics</em>. (Details of the survey experiment are found in <a href="apptopline.html#considersai">Appendix B</a>.) We randomly assigned each respondent one of these terms and asked them:</p>
<blockquote>
<p>In your opinion, which of the following technologies, if any, uses [artificial intelligence (AI)/automation/machine learning/robotics]? Select all that apply.</p>
</blockquote>
<p>Because we wanted to understand respondents’ perceptions of these terms, we did not define any of the terms. Respondents were asked to consider <a href="apptopline.html#considersai">10 technological applications</a>, each of which uses AI or machine learning.</p>
<p>Though respondents show at least a partial understanding of the terms and can correctly identify their use in some of the technological applications considered, they underestimate the prevalence of AI, machine learning, and robotics in everyday technological applications, as reported in Figure <a href="general-attitudes-toward-ai.html#fig:whatai">2.9</a>. (See <a href="addresults.html#aawhatsai">Appendix C</a> for details of our statistical analysis.)</p>
<!-- [done] MA to BZ: Changed the above around. Should be looked over. I also think that maybe a sentence should be added stating that all of the listed technologies _actually_ use AI/ML. E.g. "Among those assigned the term _AI_, ... and autonomous drones use AI (54%), despite all of these applications using the technology." Another solution would be to state what types of technology is used in the applications in the bullet list above in paranthesis or something.
-->
<p>Among those assigned the term <em>AI</em>, a majority think that virtual assistants (63%), smart speakers (55%), driverless cars (56%), social robots (64%), and autonomous drones (54%) use AI. Nevertheless, a majority of respondents assume that Facebook photo tagging, Google Search, Netflix or Amazon recommendations, and Google Translate do not use AI.</p>
<p>Why did so few respondents consider the products and services we listed to be applications of AI, automation, machine learning, or robotics?</p>
<div class="figure"><span id="fig:whatai"></span>
<img src="ai_public_opinion_us_2018_report-190107_web_files/figure-html/whatai-1.png" alt="Applications or products that the public thinks use AI, automation, machine learning, or robotics" width="2100" />
<p class="caption">
Figure 2.9: Applications or products that the public thinks use AI, automation, machine learning, or robotics
</p>
</div>
<p>A straightforward explanation is that inattentive respondents neglect to carefully consider or select the items presented to them (i.e., non-response bias). Even among those assigned the term <em>robotics</em>, only 62% selected social robots and 68% selected industrial robots. Our analysis (found in <a href="addresults.html#aawhatsai">Appendix C</a>) confirms that respondent inattention, defined as spending too little or too much time on the survey, predicts non-response to this question.</p>
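<p>A minimal sketch of this inattention check is below, flagging respondents whose completion times fall in the extreme tails and comparing their non-response rates; the 5%/95% cutoffs and the toy data are assumptions, not the thresholds used in Appendix C.</p>
<pre><code>import pandas as pd

# Toy respondent-level data: survey duration in minutes and whether
# the respondent left this question blank.
df = pd.DataFrame({
    "minutes":     [3, 9, 12, 15, 18, 22, 25, 90],
    "no_response": [1, 0, 0, 0, 0, 0, 1, 1],
})

# Flag respondents who spent too little or too much time on the survey.
lo, hi = df["minutes"].quantile([0.05, 0.95])
df["inattentive"] = (df["minutes"] &lt; lo) | (df["minutes"] &gt; hi)

# Non-response rate among inattentive vs. attentive respondents.
print(df.groupby("inattentive")["no_response"].mean())
</code></pre>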
<p>Another potential explanation for the results is that the American public – like the public elsewhere – lacks awareness of AI and machine learning. As a result, the public does not know that many tech products and services use AI or machine learning. According to a 2017 survey, nearly half of Americans reported that they were unfamiliar with AI <span class="citation">(Morning Consult <a href="#ref-morningconsult2017">2017</a>)</span>. In the same year, only 9% of the British public said they had heard of the term “machine learning” <span class="citation">(Ipsos MORI <a href="#ref-rs2018">2018</a>)</span>. Similarly, less than half of EU residents reported hearing, reading, or seeing something about AI in the previous year <span class="citation">(Eurobarometer <a href="#ref-eurobarometer460">2017</a>)</span>.</p>
<p>Finally, the so-called “AI effect” could also explain the survey result. The AI effect describes the phenomenon whereby the public no longer considers an application to use AI once that application becomes commonplace <span class="citation">(McCorduck <a href="#ref-McCorduck2004">2004</a>)</span>. Because 85% of Americans report using digital products that deploy AI (e.g., navigation apps, video or music streaming apps, digital personal assistants on smartphones, etc.) <span class="citation">(Reinhart <a href="#ref-reinhart2018">2018</a>)</span>, they may not think of these everyday applications as deploying AI.</p>
<!-- [done] AD: above looks good. Optional analysis: see if any of the specific examples shifts approval around at all. I doubt there is a big effect there, and it would be an underpowered analysis. Should we log such analysis ideas somewhere? Perhaps in an appendix for us? Maybe if we get volunteers at some point we could have them do these optional exploratory analyses?
[BZ: I will do it if I have more time...right now it's not a priority.]
-->
</div>
</div>
<h3>References</h3>
<div id="refs" class="references">
<div id="ref-eurobarometer460">
<p>Eurobarometer. 2017. “Special Eurobarometer 460: Attitudes Towards the Impact of Digitisation and Automation on Daily Life.” Eurobarometer. <a href="https://perma.cc/9FRT-ADST">https://perma.cc/9FRT-ADST</a>.</p>
</div>
<div id="ref-funk2015">
<p>Funk, Cary, and Lee Rainie. 2015. “Public and Scientists’ Views on Science and Society.” Survey report. Pew Research Center. <a href="https://perma.cc/9XSJ-8AJA">https://perma.cc/9XSJ-8AJA</a>.</p>
</div>
<div id="ref-graham2018">
<p>Graham, Edward. 2018. “Views on Automation’s U.S. Workforce Impact Highlight Demographic Divide.” Morning Consult. <a href="https://perma.cc/544D-WRUM">https://perma.cc/544D-WRUM</a>.</p>
</div>
<div id="ref-rs2018">
<p>Ipsos MORI. 2018. “Public Views of Machine Learning: Findings from Public Research and Engagement Conducted on Behalf of the Royal Society.” Survey report. The Royal Society. <a href="https://perma.cc/79FE-TEHH">https://perma.cc/79FE-TEHH</a>.</p>
</div>
<div id="ref-McCorduck2004">
<p>McCorduck, Pamela. 2004. <em>Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence</em>. New York: A K Peters/CRC Press.</p>
</div>
<div id="ref-morningconsult2017">
<p>Morning Consult. 2017. “National Tracking Poll 170401.” Survey report. Morning Consult. <a href="https://perma.cc/TBJ9-CB5K">https://perma.cc/TBJ9-CB5K</a>.</p>
</div>
<div id="ref-nedelkoska2018automation">
<p>Nedelkoska, Ljubica, and Glenda Quintini. 2018. “Automation, Skills Use and Training.” Working Papers No. 202. Organisation for Economic Co-operation and Development. <a href="https://doi.org/10.1787/2e2f4eea-en">https://doi.org/10.1787/2e2f4eea-en</a>.</p>
</div>
<div id="ref-negallup2018">
<p>Northeastern University and Gallup. 2018. “Optimism and Anxiety: Views on the Impact of Artificial Intelligence and Higher Education’s Response.” Survey report. Northeastern University and Gallup. <a href="https://perma.cc/57NW-XCQN">https://perma.cc/57NW-XCQN</a>.</p>
</div>
<div id="ref-pinker2018enlightenment">
<p>Pinker, Steven. 2018. <em>Enlightenment Now: The Case for Reason, Science, Humanism, and Progress</em>. New York: Penguin.</p>
</div>
<div id="ref-reinhart2018">
<p>Reinhart, RJ. 2018. “Most Americans Already Using Artificial Intelligence Products.” Survey report. Gallup. <a href="https://perma.cc/RVY5-WP9W">https://perma.cc/RVY5-WP9W</a>.</p>
</div>
<div id="ref-rosling2018factfulness">
<p>Rosling, Hans, Anna Rosling Rönnlund, and Ola Rosling. 2018. <em>Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think</em>. New York: Flatiron Books.</p>
</div>
<div id="ref-sandberg2008">
<p>Sandberg, Anders, and Nick Bostrom. 2008. “Global Catastrophic Risks Survey.” Future of Humanity Institute, Oxford University. <a href="https://perma.cc/TA97-KD3Z">https://perma.cc/TA97-KD3Z</a>.</p>
</div>
<div id="ref-smith2017">
<p>Smith, Aaron, and Monica Anderson. 2017. “Automation in Everyday Life.” Pew Research Center. <a href="https://perma.cc/WU6B-63PZ">https://perma.cc/WU6B-63PZ</a>.</p>
</div>
<div id="ref-wef2018">
<p>World Economic Forum. 2018. “The Global Risks Report 2018: 13th Edition.” World Economic Forum. <a href="https://perma.cc/8XM8-LKEN">https://perma.cc/8XM8-LKEN</a>.</p>
</div>
</div>
<div class="footnotes">
<hr />
<ol start="5">
<li id="fn5"><p>The percentages discussed here reflect the average response across the three statements. See <a href="apptopline.html#manageexp">Appendix B</a> for the topline result for each statement.<a href="general-attitudes-toward-ai.html#fnref5" class="footnote-back">↩</a></p></li>
<li id="fn6"><p>Our definition of global risk borrowed from the Global Challenges Foundation’s definition: “an uncertain event or condition that, if it happens, can cause a significant negative impact on at least 10% of the world’s population within the next 10 years” <span class="citation">(Cotton-Barratt et al. <a href="#ref-cotton2016">2016</a>)</span>.<a href="general-attitudes-toward-ai.html#fnref6" class="footnote-back">↩</a></p></li>
<li id="fn7"><p>The World Economic Forum’s survey asked experts to evaluate the “adverse consequences of technological advances,” defined as “[i]ntended or unintended adverse consequences of technological advances such as artificial intelligence, geo-engineering and synthetic biology causing human, environmental and economic damage.” The experts considered these “adverse consequences of technological advances” to be less likely and lower-impact, compared with other potential risks.<a href="general-attitudes-toward-ai.html#fnref7" class="footnote-back">↩</a></p></li>
</ol>
</div>
</section>
</div>
</div>
</div>
<a href="executive-summary.html" class="navigation navigation-prev " aria-label="Previous page"><i class="fa fa-angle-left"></i></a>
<a href="public-opinion-on-ai-governance.html" class="navigation navigation-next " aria-label="Next page"><i class="fa fa-angle-right"></i></a>
</div>
</div>
<script src="libs/gitbook-2.6.7/js/app.min.js"></script>
<script src="libs/gitbook-2.6.7/js/lunr.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-search.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-sharing.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-fontsettings.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-bookdown.js"></script>
<script src="libs/gitbook-2.6.7/js/jquery.highlight.js"></script>
<script>
gitbook.require(["gitbook"], function(gitbook) {
gitbook.start({
"sharing": {
"github": false,
"facebook": true,
"twitter": true,
"google": false,
"linkedin": false,
"weibo": false,
"instapaper": false,
"vk": false,
"all": ["facebook", "google", "twitter", "linkedin", "weibo", "instapaper"]
},
"fontsettings": {
"theme": "white",
"family": "sans",
"size": 2
},
"edit": null,
"history": {
"link": null,
"text": null
},
"download": [["https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/us_public_opinion_report_jan_2019.pdf", "PDF"], ["https://ssrn.com/abstract=3312874", "SSRN"], ["https://doi.org/10.7910/DVN/SGFRYA", "Replication Data"]],
"toc": {
"collapse": "section",
"scroll_highlight": true
},
"toolbar": {
"position": "fixed"
},
"search": false
});
});
</script>
<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
(function () {
var script = document.createElement("script");
script.type = "text/javascript";
var src = "true";
if (src === "" || src === "true") src = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-MML-AM_CHTML";
if (location.protocol !== "file:" && /^https?:/.test(src))
src = src.replace(/^https?:/, '');
script.src = src;
document.getElementsByTagName("head")[0].appendChild(script);
})();
</script>
</body>
</html>