---
layout: default
title: Data Transparency Lab Grantees
---
<section id="grantees" class="section2">
<h2 style="color: #000000"> DTL Grantees 2015</h2>
<div class="divider-line"></div>
<div class="grantees-intro">
<p>
The DTL Grants support research in tools, data, platforms, and
methodologies for shedding light on the use of personal data by online
services, and for empowering users to control their personal data
online.</p>
<p>The winners of the first DTL research grants are listed below. Each project received a lump sum of €50,000. Click <a href="index.html#grants">here</a> for more information on the call for proposals.</p>
<p>The remaining proposals that ranked in the top third of all
submissions were awarded a platform to present their work at the
DTL2015 Conference, along with a corresponding travel grant.<a href="#mentions"> Click here to view these proposals.</a></p>
</div>
<div class="circle">TOOLS</div>
<br>
<!-- tool 1 -->
<div id="ur" class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Providing Users Data-Driven Privacy Awareness<a name="ur"></a></h2>
<h6>Lorrie Faith Cranor (Carnegie Mellon University), Blase Ur (Carnegie Mellon University)</h6>
</div>
<div class="col-md-5 text-grid-b">
<p>Online behavioral advertising (OBA), the targeting of
advertisements based on a user's web browsing, remains a major source of
privacy invasion. Although a number of privacy tools (e.g., Ghostery,
Lightbeam, and Privacy Badger) can help users control OBA, average users
are left utterly confused about OBA even after using such tools. We
propose moving beyond existing tools, which alert users to tracking
occurring at the current moment, by designing and testing a tool that
takes a data-driven, personalized approach to privacy awareness. We
hypothesize that users can better understand OBA and resultant privacy
threats if equipped with a tool that visualizes instances of them being
tracked over time.</p>
<p>
We will build and test such a data-driven privacy tool that
enables users to explore precisely which webpages different companies
have tracked them, as well as what those companies may have inferred
about their interests. Studies have shown benefits in notifying users
about the collection of data by smartphone apps. Our proposal translates
these insights to the OBA domain, yet makes further intellectual
contributions by exploring the impact of presenting different
abstractions and granularities of the information tracked (e.g., showing
"Doubleclick knows you visited the following 82 pages" versus
"Doubleclick has likely concluded that you like 'European travel' based
on your visits to these 82 pages"). In addition to releasing our privacy
tool as a fully functional, open-source project, we will conduct a
75-participant, 2-week field trial comparing visualizations of
personalized tracking data.</p>
</div>
</div>
<br>
<br>
<!-- tool 2 Updated. url: http://recon.meddle.mobi. -->
<div id="choff" class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Revealing and Controlling Mobile Privacy Leaks<a name="choff"></a></h2>
<h6>David Choffnes (Northeastern University), Christo Wilson (Northeastern University), Alan Mislove (Northeastern University)</h6>
</div>
<div class="col-md-5 text-grid-b"> <p>The combination of rich
sensors and ubiquitous connectivity makes mobile devices perfect vectors
for invading the privacy of end users. We argue that improving privacy
in this environment requires trusted third-party systems that enable
auditing and control over PII leaks. However, previous attempts to
address PII leaks fall short of enabling such auditing and control
because they lack visibility into the network traffic generated by
mobile devices and the ability to control that traffic.</p>
<p>The proposed research will enable auditing and control of PII
leaks from mobile devices by using indirection to improve visibility
into, and control over, mobile network traffic. Specifically, we use
natively supported mobile OS features to redirect all of a device’s
Internet traffic to a trusted server that identifies and controls
privacy leaks in network traffic.</p>
<p>
We will address the key challenges of how to identify and control PII
leaks when users’ PII is not known a priori, nor is the set of apps that
leak this information. First, to enable auditing through improved
transparency, we will investigate how to use machine learning to
reliably identify PII from network flows, and identify algorithms that
incorporate user feedback to adapt to the changing landscape of privacy
leaks. Second, we will build tools that allow users to control how their
information is (or is not) shared with second and third parties. These
tools will be deployed as free, open-source applications that can run in
a number of deployment scenarios, including on a device in a user’s
home network, or in a shared cloud-based VM environment.
</p>
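The machine-learning classifier for PII in network flows is the proposal's contribution; purely as an illustration of the auditing step, here is a minimal rule-based sketch (not the proposed ML approach) that flags known PII values, including common on-the-wire encodings, in an outgoing request URL. All names and values below are invented:

```python
import urllib.parse
import hashlib

def pii_encodings(value):
    """Return common on-the-wire encodings of a PII value used by trackers."""
    v = value.lower()
    return {
        v,
        urllib.parse.quote(v),                # percent-encoded
        hashlib.md5(v.encode()).hexdigest(),  # hashed identifiers
        hashlib.sha1(v.encode()).hexdigest(),
    }

def find_pii_leaks(flow_url, known_pii):
    """Flag which known PII values appear in an outgoing request URL."""
    haystack = flow_url.lower()
    leaks = []
    for label, value in known_pii.items():
        if any(enc in haystack for enc in pii_encodings(value)):
            leaks.append(label)
    return leaks

known = {"email": "alice@example.com", "device_id": "IMEI123456"}
url = ("http://tracker.example/collect?"
       "uid=" + hashlib.md5(b"alice@example.com").hexdigest() + "&loc=paris")
print(find_pii_leaks(url, known))  # the MD5-hashed email is detected: ['email']
```

A real deployment would have to learn PII not known a priori, which is exactly why the proposal turns to machine learning with user feedback.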
</div>
</div>
<br>
<br>
<!-- tool 3 updated -->
<div id="cuevas" class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<a name="cuevas"></a>
<h2>FDVT: Personal Data Valuation Tool for Facebook Users</h2>
<h6>Angel Cuevas (Universidad Carlos III de Madrid, co-PI), Ruben Cuevas
(Universidad Carlos III de Madrid, co-PI), Raquel Aparicio (Universidad
Carlos III de Madrid)</h6>
</div>
<div class="col-md-5 text-grid-b"> <p>
A recent report by the Interactive Advertising Bureau revealed that
online advertising generated $49.5B in revenue in the US alone in 2014,
an increase of 16% over 2013, which in turn exceeded 2012 revenue by
17%. A great advantage of online advertising over traditional print and
TV advertising is its capability to target individuals with specialized
advertisements tailored to their personal information. For instance,
Facebook's (FB) ad campaign planner allows an audience to be defined
using more than 13 different attributes related to the end user's
personal information. An online advertiser can therefore launch a
campaign targeting a well-defined audience based on personal-information
attributes, so an important part of the FB business model is built on
top of the personal information of its subscribers. Although the
legality of the business model implemented by FB and other major
Internet players is not in doubt, some actors are calling for tools
that let end users know the actual value of their personal information;
in other words, how much money FB, Google, and other companies in the
online advertising market make out of it. Providing Internet users with
simple and transparent tools that inform them of the value their
personal data generates is not only a civil-society request but also a
demand from governments.</p>
<p>The goal of this project is to develop a tool that informs
Internet end users in real time of the economic value generated by the
personal information associated with their browsing activity. Given the
complexity of the problem, we narrow the scope of this tool to FB,
i.e., we inform FB users in real time of the value they are generating
for FB. We refer to this tool as the FB Data Valuation Tool (FDVT).
</p>
</div>
</div>
<br>
<br>
<!-- tool 4 updated url: http://www.digitalhalo.org/-->
<div id="halo" class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<a name="halo"></a>
<h2>Digital Halo: Browsing History Awareness</h2>
<h6>Arkadiusz Stopczynski, Mieszko Piotr Manijak, Piotr Sapiezynski,
Sune Lehmann (Department of Applied Mathematics and Computer Science,
Technical University of Denmark)</h6>
</div>
<div class="col-md-5 text-grid-b">
<p>Our online browsing history is intensely personal. Our search
terms and the web pages we visit reveal our fears, interests,
illnesses, and secret ambitions. While many people are familiar with the
concept of behavior-tracking and cookies, there is significantly less
public awareness of just how personal our online behavior is. </p>
<p>A few years ago, the Immersion project originating at the MIT
Media Lab received worldwide press coverage by visualizing the latent
social information contained in our email headers. We aim to
do something similar for web-browsing. Using topic models, we aim to
design a simple dashboard that allows individuals to visualize the
content of their browsing, and observe how these topics change over
time. Crucially, we will combine this visualization with information on
data trackers (how many tracking parties, how much outgoing
information), thus allowing users to directly observe what the data
tracking means for them.
</p>
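The proposal's dashboard presumably relies on a proper topic model; as a stdlib-only stand-in (not the project's actual method, and with invented history data), the idea of "topics over time" can be sketched by surfacing the top terms per week of browsing:

```python
from collections import Counter
from datetime import date

# Toy browsing history: (visit date, page title); real input would come
# from the browser's history store.
HISTORY = [
    (date(2015, 9, 1),  "cheap flights to rome"),
    (date(2015, 9, 2),  "rome hotels city centre"),
    (date(2015, 9, 14), "python pandas tutorial"),
    (date(2015, 9, 15), "pandas dataframe groupby examples"),
]

STOPWORDS = {"to", "the", "a", "of", "in"}

def weekly_topics(history, top_n=2):
    """Crude 'topics over time' view: top terms per ISO week."""
    weeks = {}
    for day, title in history:
        week = day.isocalendar()[1]
        terms = [w for w in title.split() if w not in STOPWORDS]
        weeks.setdefault(week, Counter()).update(terms)
    return {week: [w for w, _ in c.most_common(top_n)]
            for week, c in weeks.items()}

print(weekly_topics(HISTORY))  # week 36 is dominated by 'rome', week 38 by 'pandas'
```

A real implementation would replace term counting with a topic model such as LDA and join each week's topics with the tracker counts observed in that period.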
<p>
Both collected and computed data will be stored in safe,
individualized ‘vaults’ in a storage system following the OpenPDS
framework specification, thereby ensuring users' strict sovereignty
over their data.
</p>
</div>
</div>
<br>
<br>
<!-- tool 5 -->
<div class="row">
<div id="feam" class="col-md-1"><a name="feam"></a></div>
<div class="col-md-5 text-grid-a">
<h2>Estimating & Controlling Personal Information Spread with Appu </h2>
<h6>Yogesh Mundada (Princeton University), Nick Feamster (Princeton University), Sarthak Grover (Princeton University)</h6>
</div>
<div class="col-md-5 text-grid-b">
<p>Personal information loss has been a worrisome issue for
researchers and regular users alike. Even though a great deal of
research has been done in both the security and privacy communities, a
personalized solution that addresses problems in both areas and is
useful to end users is missing. In this work, we present Appu, a
browser extension that automatically detects i) sensitive information
of the user, ii) whether it is sufficiently secured, and iii) whether
it is being leaked to third-party domains.</p>
<p>To automatically detect users' sensitive information, we developed
a scripting language to scrape this information from the user's
existing accounts. Once the personal information store is populated
with this information, Appu passively monitors the user's interaction
with various accounts to detect further information spread. Appu also
monitors whether any personal information is leaked to third parties.
Over time, Appu presents the user with a complete picture of personal
information spread across the web. Appu also nudges the user to secure
important but inadequately protected accounts.</p>
</div>
</div>
<br>
<br>
<div class="circle">PLATFORMS</div>
<!-- Platform 1 updated. url: http://webtap.princeton.edu/-->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Reverse-engineering online tracking: From niche research field to easy-to-use tool<a name="engle"></a></h2>
<h6>Arvind Narayanan (Princeton University), Steven Englehardt (Princeton University)</h6>
</div>
<div class="col-md-5 text-grid-b">
<p>The third-party online tracking ecosystem lacks transparency
about (1) which companies track users, (2) what user data is being
collected, (3) what technologies are being used for tracking, and (4)
data flows between trackers. Automated measurement can enable
transparency and has already resulted in greater privacy awareness,
improved privacy tools, and, at times, regulatory enforcement actions.</p>
<p>At Princeton we have built OpenWPM, a platform for online tracking
transparency. We have used it in several published studies to detect and
reverse-engineer online tracking. We now aim to democratize web privacy
measurement: transform it from a niche research field to a widely
available tool. We will do this in two steps. First, we will use OpenWPM to publish a
"web privacy census" — a monthly web-scale measurement of tracking and
privacy, comprising 1 million sites. The census will detect and measure
many or most of the types of known privacy violations reported by
researchers so far: circumvention of cookie blocking, leakage of PII to
third parties, canvas fingerprinting, and more. Second, we will build an
analysis platform to allow anyone to analyze the census data with
minimal expertise. The platform will have "1-click reproducibility",
which will allow packaging and distributing study data, scripts, and
results in a format that's easy to replicate and extend. </p>
</div>
</div>
<br>
<br>
</section>
<!-- travel grants -->
<section id="mentions" class="section2">
<h2 class="mention-header" style="color: #000000"> Mentions and Travel Grants</h2>
<div class="grantees-intro">
<p>
The following proposals are among the
top third of all proposals and were offered a platform to
share their ideas and work with other members of the DTL community.
They were offered a presentation slot at the forthcoming DTL workshop in
November 2015 (location and details will be announced in due
course) as well as a travel grant to attend it.
</p>
</div>
<!-- <div class="circle">TOOLS</div> -->
<br>
<!-- 1 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>A Deep-Learning Platform for the Reverse-Engineering of
Behavioral Targeting Procedures in Online Ad Networks
(DeepBET)</h2>
<h6>Sotirios Chatzis (Cyprus University of Technology), Aristodemos Paphitis (Cyprus University of Technology)</h6>
</div>
<div class="col-md-5 text-grid-c"> <p>Online ad networks are a
characteristic example of online services that massively leverage user
data for the purposes of behavioral targeting. A significant problem of
these technologies is their lack of transparency. For this reason, the
problem of reverse-engineering the behavioral targeting mechanisms of ad
networks has recently attracted significant research interest. Existing
approaches query ad networks using artificial user profiles, each of
which pertains to a single user category. Nevertheless, well-designed ad
services may not rely on such simple user categorizations: A user
assigned to multiple categories may be presented with a set of ads quite
different from the union of the set of ads pertaining to each one of
their individual interests. Even more importantly, user interests may
change or vary over time. Moreover, none of the existing
reverse-engineering systems are capable of determining whether and how
ad network targeting mechanisms adapt to such temporal dynamics.</p>
<p>The goal of this proposal is to develop a platform that addresses
these inadequacies by leveraging advanced machine learning methods. The
proposed platform is capable of:
(i) intelligently creating a diverse set of (interest-based) user
profiles with which to query ad networks, ensuring that the
(artificial) profiles used to query the analyzed ad networks cover as
diverse a set of combinations of user interests (characteristics) as
possible;
(ii) obviating the need to rely on a publicly available tree of
categories/user interests, which can be restrictive to the analysis or
even misleading; instead, our platform reliably produces a tree-like,
content-based grouping (clustering) of websites into interest groups in
a completely unsupervised manner;
(iii) performing inference of the correlations between user
characteristics and ad network outputs in a way that allows for
large-scale generalization; and
(iv) determining whether and how temporal dynamics affect these
correlations, and over how long a temporal horizon.</p>
</div>
</div>
<br>
<br>
<!-- 2 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Alibi: Turning User Tracking Into a User Benefit</h2>
<h6>Marcel Flores, Andrew Kahn, Marc Warrior, Aleksandar Kuzmanovic (PI) (Northwestern University)</h6>
</div>
<div class="col-md-5 text-grid-c"> <p>We propose Alibi, a system
that enables users to take direct advantage of the work online trackers
do to record and interpret their behavior. The key idea is to use the
readily available personalized content, generated by online trackers in
real-time, as a means to verify an online user in a seamless and
privacy-preserving manner. We propose to utilize such tracker-generated
personalized content, submitted directly by the user, to construct a
multi-tracker user-vector representation and use it in various online
verification scenarios. The main research objectives of this project are
to explore the fundamental properties of such user-vector
representations, i.e., their construction, uniqueness, persistency,
resilience, utility in online verification, etc. The key goal of this
project is to design, implement, and evaluate the Alibi service, and
make it publicly available.</p>
</div>
</div>
<br>
<br>
<!-- 3 Updated -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2> Towards Making Systems Forget </h2>
<h6> Yinzhi Cao (Lehigh University and Columbia University)</h6>
</div>
<div class="col-md-5 text-grid-c"> <p>Today’s systems produce a
rapidly exploding amount of data, and that data in turn derives
further data, forming a complex data-propagation network that we call
the data’s lineage. There are many reasons why users want systems to
forget certain data, including its lineage. From a privacy perspective, users
who become concerned with new privacy risks of a system often want the
system to forget their data and lineage. From a security perspective, if
an attacker pollutes an anomaly detector by injecting manually crafted
data into the training data set, the detector must forget the injected
data to regain security. From a usability perspective, a user can remove
noise and incorrect entries so that a recommendation engine gives
useful recommendations. Therefore, we envision forgetting systems,
capable of forgetting certain data and their lineages, completely and
quickly.
</p>
<p>
In this proposal, we focus on making learning systems forget, the
process of which we call machine unlearning, or simply unlearning. We
present a general, efficient unlearning approach by transforming
learning algorithms used by a system into a summation form. To forget a
training data sample, our approach simply updates a small number of
summations – asymptotically faster than retraining from scratch. Our
approach is general because the summation form derives from statistical
query learning, in which many machine learning algorithms can be
implemented. Our approach also applies to all stages of machine
learning, including feature selection and modeling.
</p>
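The summation-form idea can be illustrated with a toy counting classifier; this is a sketch of the general principle, not the author's actual unlearning algorithm, and the data is invented:

```python
from collections import defaultdict

class UnlearnableNB:
    """A toy classifier kept entirely as summations, so a training sample
    can be forgotten by subtracting its contributions (no retraining)."""
    def __init__(self):
        self.class_counts = defaultdict(int)   # samples seen per class
        self.feat_counts = defaultdict(int)    # (class, feature) occurrences

    def learn(self, features, label, sign=+1):
        self.class_counts[label] += sign
        for f in features:
            self.feat_counts[(label, f)] += sign

    def unlearn(self, features, label):
        # Forgetting = updating the same small set of summations.
        self.learn(features, label, sign=-1)

    def score(self, features, label):
        n = self.class_counts[label]
        if n == 0:
            return 0.0
        # Crude likelihood proxy: average per-feature frequency in the class.
        return sum(self.feat_counts[(label, f)] for f in features) / (n * len(features))

m = UnlearnableNB()
m.learn({"free", "winner"}, "spam")
m.learn({"meeting", "agenda"}, "ham")
m.learn({"free", "pills"}, "spam")
before = m.score({"pills"}, "spam")   # 'pills' in 1 of 2 spam samples -> 0.5
m.unlearn({"free", "pills"}, "spam")  # forget the second spam sample
after = m.score({"pills"}, "spam")    # 'pills' in 0 of 1 spam samples -> 0.0
print(before, after)
```

Each `unlearn` touches only the summations the forgotten sample contributed to, which is why it is asymptotically faster than retraining from scratch.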
</div>
</div>
<br>
<br>
<!-- 4 updated url: http://personalization.ccs.neu.edu/-->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Bringing Fairness and Transparency to Mobile On-Demand
Services</h2>
<h6>Christo Wilson (Northeastern University),
Dave Choffnes (Northeastern University),
Alan Mislove (Northeastern University)</h6>
</div>
<div class="col-md-5 text-grid-c">
<p> In this project, we aim to bring greater transparency to
algorithmic pricing implemented by mobile, on-demand services.
Algorithmic pricing was pioneered in this space by Uber in the form of
"surge pricing". While we applaud mobile, on-demand services for
disrupting incumbents and stimulating moribund sectors of the economy,
we also believe that the data and algorithms leveraged by these services
should be transparent. Fundamentally, consumers and providers cannot
make informed choices when marketplaces are opaque. Furthermore,
black-box services are vulnerable to exploitation once their algorithms
are understood, which creates opportunities for customers and providers
to manipulate these services in ways that are not possible in
transparent markets.</p>
</div>
</div>
<br>
<br>
<!-- 5 updated -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Providing Users With Feedback on Search Personalised Learning</h2>
<h6> Douglas Leith (Trinity College Dublin), Alessandro Checco (Trinity College Dublin) </h6>
</div>
<div class="col-md-5 text-grid-c">
<p>Users are currently given only very limited feedback from search
providers as to what learning and inference of personal preferences is
taking place. When a search engine infers that a particular
advertising category is likely to be of interest to a user, and so more
likely to generate click-throughs and sales, it will tend to use this
information when selecting which adverts to display. This can be used
to detect search engine learning via analysis of changes in the choice
of displayed adverts and to inform the user of this learning. In this
project we will develop a browser plugin that provides such feedback,
essentially by empowering the user via the kind of data analytic
techniques used by the search engines themselves. </p>
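The detection idea sketched above, comparing the adverts shown to a trained profile against a control, could be realized as a simple homogeneity test over ad-category counts. This is an illustrative sketch with invented data, not the project's actual plugin logic:

```python
def chi_square(observed_a, observed_b):
    """Chi-square homogeneity statistic for two ad-category count vectors."""
    categories = set(observed_a) | set(observed_b)
    total_a = sum(observed_a.values())
    total_b = sum(observed_b.values())
    stat = 0.0
    for c in categories:
        a, b = observed_a.get(c, 0), observed_b.get(c, 0)
        # Expected counts if both profiles saw ads from the same distribution.
        expected_a = (a + b) * total_a / (total_a + total_b)
        expected_b = (a + b) * total_b / (total_a + total_b)
        if expected_a:
            stat += (a - expected_a) ** 2 / expected_a
        if expected_b:
            stat += (b - expected_b) ** 2 / expected_b
    return stat

fresh   = {"travel": 10, "finance": 10, "sport": 10}   # control profile
trained = {"travel": 25, "finance": 3,  "sport": 2}    # after travel browsing
# Compare against the 95th-percentile chi-square threshold (~5.99 at 2 d.o.f.).
print(chi_square(fresh, trained) > 5.99)  # True: learning is detectable
```

A statistic above the threshold suggests the search engine has inferred an interest and shifted its ad selection accordingly, which is the signal the plugin would report to the user.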
</div>
</div>
<br>
<br>
<!-- 6 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2> Zero-Knowledge Transparency: Safe Audit Tools for End Users
</h2>
<h6>Maksym Gabielkov (INRIA, Columbia University), Larissa Navarro
Passos de Araujo (Columbia University), Max Tucker Da Silva (Columbia
University), Augustin Chaintreau (Columbia University)</h6>
</div>
<div class="col-md-5 text-grid-c">
<p>In principle, data transparency tools follow strict privacy
guidelines to protect customers' data while revealing how this data is
being used by others. But those objectives are often at odds. To take a
simple example, answering a question like "which of my emails caused
this ad to appear" brings the user to the following dilemma: she can
either (blindly) enjoy the (relative) privacy offered by a service like
Gmail, or, if she decides to voice her concern, offer her data to a
data-transparency experiment with various tools (e.g., XRay, AdFisher,
Sunlight, and other more specific ones). The latter involves either
running the experiment herself entirely or providing the data in clear
form to one of those tools run by a third party. Both increase privacy
risks, because sensitive data are now being manipulated by other pieces
of code, sometimes under someone else's control. This explains why all
the tools mentioned above, and in fact almost all transparency research
so far, are run and validated on synthetic datasets that are by nature
not sensitive.</p>
<p>Here, our goal is to formally define zero-knowledge transparency, to
reconcile the two needs of being informed and being safe when it comes
to our data usage, and to experiment with tools that provide this dual
protection. As in our prior research, we aim at generic tools that
address a broad range of scenarios with the same underlying concepts.
The first architecture we propose leverages differential correlation, as
used in XRay for multiple services, to show that this tool can be made
privacy-preserving with an additional simple architectural layer. The
second architecture we envision is far broader: it leverages data banks
with interactive queries, such as Aircloak, to separately solve privacy
and transparency. We believe that most data transparency tools will
require a similar complement, and we will experiment with the robustness
of this solution in the face of scale and other challenges posed.</p>
</div>
</div>
<br>
<br>
<!-- 7 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Privacy-aware ecosystem for data sharing</h2>
<h6> Anna Monreale (Department of Computer Science, University of
Pisa)</h6>
</div>
<div class="col-md-5 text-grid-c"> <p>Human and social data are an
important source of knowledge useful for understanding human behaviour
and for developing a wide range of user services. Unfortunately, this
kind of data is sensitive, because people's activities described by
these data may allow re-identification of individuals in a de-identified
database and thus can potentially reveal intimate personal traits, such
as religious or sexual preferences. Therefore, Data Providers, before
sharing those data, must apply some form of anonymization to lower the
privacy risks, but they must also be aware of, and able to control, the
data quality, since these two factors are often in tension. This
project proposes a framework to support the Data Provider in the
privacy risk assessment of data to be shared. The framework measures
both the empirical (not theoretical) privacy risk associated with users
represented in the data and the data quality guaranteed when only users
not at risk are retained. It provides a mechanism allowing the exploration of a repertoire
of possible data transformations with the aim of selecting one specific
transformation that yields an adequate trade-off between data quality
and privacy risk. The project will focus on mobility data studying the
practical effectiveness of the framework over forms of mobility data
required by specific knowledge-based services.</p>
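One common empirical (not theoretical) notion of risk for mobility data is uniqueness under partial background knowledge: a user is at risk if a few of their visited locations already single them out among all users in the dataset. The sketch below is only an illustration of that idea, with invented trajectories, not the project's actual assessment framework:

```python
from itertools import combinations

def users_at_risk(trajectories, points=2, k=2):
    """Empirical re-identification risk: a user is at risk if some
    combination of `points` visited locations matches fewer than k users."""
    at_risk = set()
    for user, locs in trajectories.items():
        for combo in combinations(sorted(set(locs)), points):
            matching = sum(1 for other in trajectories.values()
                           if set(combo) <= set(other))
            if matching < k:
                at_risk.add(user)
                break  # one identifying combination is enough
    return at_risk

traj = {
    "u1": ["home", "office", "gym"],
    "u2": ["home", "office"],
    "u3": ["home", "mall"],
}
print(users_at_risk(traj))  # {'u1', 'u3'}: 'gym' / 'mall' pairs single them out
```

This brute-force check is exponential in trajectory length; a practical framework would bound the combinations explored, and would pair the risk measure with the quality loss of each candidate transformation.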
</div>
</div>
<br>
<br>
<!-- 8 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Exposing and Overcoming Privacy Leakage in Mobile Apps using
Dynamic Profiles </h2>
<h6>Z. Morley Mao (University of Michigan)</h6>
</div>
<div class="col-md-5 text-grid-c"> <p>In this proposal, we focus on
designing support for detecting the leakage of personal data in the
mobile app ecosystem through a novel approach: using dynamically
generated user application profiles to track how sensitive data
influence the content presented to users and to discover violations of
user privacy policies. For the former, we analyze how various types of
content personalization, based on information such as behavior,
context, location, or social graph, can lead to potentially unwanted
bias in the content. For the latter, we take a semantics-based approach
to translate the user's privacy preferences into enforceable
syntax-based mechanisms. By leveraging the dynamically generated
profiles that characterize the expected content customization, users
can select a type of profile that satisfies their privacy policy, or
obtain data or access the online service through a collection of
profiles. In summary, our work consists of both offline approaches for
generating knowledge of content customization based on the relevant
profiles and for characterizing the privacy-related behavior of mobile
apps, as well as run-time enforcement support to satisfy user-expressed
privacy policies.</p>
</div>
</div>
<br>
<br>
<!-- 9 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Detecting Accidental and Intentional PII Leakage from Modern
Web Applications</h2>
<h6>Nick Nikiforakis (Stony Brook University)</h6>
</div>
<div class="col-md-5 text-grid-c"> <p>The rise of extremely popular
online services offered at no fiscal cost to users has fostered a
rich online ecosystem of third-party trackers and online advertisers.
While the majority of tracking involves the use of cookies and other
technologies that do not directly expose a user's personally
identifiable information (PII), past research has shown that PII
leakage is all too common. Whether due to poor programming practices
(e.g., PII-carrying, GET-submitting forms) or to intentional
information leakage, a user's PII often finds its way into the hands of
third parties. In cases where a user's PII leaks to third parties that
already use cookies and other tracking technologies, the trackers now
have the potential to identify the user, by name, as she browses the
web.</p>
<p>Despite the magnitude and severity of the PII-leakage problem,
there is currently a dearth of usable privacy-enhancing technologies
that detect and prevent PII leakage. To restore users' control over
their own personally identifiable information, we propose to design,
implement, and evaluate LeakSentry, a browser extension that can
identify leakage as it is happening and give users contextual
information about the leakage, as well as the power to allow or block
it. In addition to LeakSentry's stand-alone mode, users of LeakSentry
will be able to opt in to a crowd-wisdom program where they can learn
from each other's choices. LeakSentry will also be able to report the
location of PII leakage, enabling us to create an observatory of
PII-leaking pages, which can both apply pressure to websites caught
red-handed and steer other users away from them.
</p>
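    <p>As a minimal sketch of the "PII-carrying, GET-submitting form" failure
    mode described above, the check below flags outgoing request URLs whose
    query strings carry known PII values. The function and parameter names are
    illustrative only and are not part of the proposed LeakSentry extension:</p>

```javascript
// Hypothetical sketch: detect PII values leaking through URL query strings,
// either verbatim or lightly obfuscated with base64 encoding.
function findPiiInUrl(url, piiValues) {
  const leaks = [];
  for (const [param, value] of new URL(url).searchParams) {
    for (const pii of piiValues) {
      // Compare against the raw PII value and its base64 encoding,
      // a common light obfuscation in tracking requests.
      if (value === pii || value === Buffer.from(pii).toString("base64")) {
        leaks.push({ param, pii });
      }
    }
  }
  return leaks;
}

// Example: an email address submitted via GET to a third-party endpoint.
const leaks = findPiiInUrl(
  "https://tracker.example/collect?uid=42&email=jane%40example.com",
  ["jane@example.com"]
);
console.log(leaks); // [{ param: "email", pii: "jane@example.com" }]
```

    <p>A real extension would intercept requests (e.g. via the browser's
    webRequest API) rather than receive URLs directly, and would need to handle
    further encodings and hashed identifiers.</p>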
</div>
</div>
<br>
<br>
<!-- 10 updated -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Towards Transparent Privacy Practices: Facilitating
Comparisons of Privacy Policies</h2>
          <h6>Ali Sunyaev (Department of Information Systems, University of
            Cologne),
Tobias Dehling (Department of Information Systems, University
of Cologne)</h6>
</div>
<div class="col-md-5 text-grid-c">
<p>
A central challenge of privacy policy design is the wicked nature
of privacy policies: In essence, privacy policies are past responses of
providers to future information requests of users regarding the privacy
practices of online services. As a result, today’s privacy policies
feature a large variety of contents and designs. This impedes data
transparency, in particular, with respect to comparisons of privacy
practices between providers. The main idea of this research proposal is
to leverage tagging and crowdsourcing to facilitate comparisons of
privacy policies in a provider-independent web application. Our research
is relevant for data transparency research because it aims to improve
            the most prevalent tool for shedding light on the use of personal data
            by online services, that is, privacy policies. Redeeming the benefits
            offered by online environments while avoiding their perils is challenging;
            this research proposal makes that task easier by improving the
            transparency of privacy practices. There have been numerous efforts to improve the
utility of privacy policies that focus on reshaping the privacy policies
offered by providers, for instance, changing the layout or enhancing
visualization. The main innovation pursued in this research proposal is
that we do not focus on getting providers to publish better privacy
policies, but instead focus on enabling users to make the best out of
the privacy policies providers confront them with.
</p>
</div>
</div>
<br>
<br>
<!-- 11 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2> Improving the Comprehension of Browser Privacy Modes</h2>
<h6>Sascha Fahl (DCSec, Leibniz Universität Hannover),
Yasemin Acar (DCSec, Leibniz Universität Hannover),
Matthew Smith (Rheinische Friedrich-Wilhelms-Universität
Bonn) </h6>
</div>
<div class="col-md-5 text-grid-c">
          <p>
            Online privacy is an important and hotly researched topic that has
            gained even more relevance recently. However, existing mechanisms
            that protect users' privacy online, such as Tor and VPN
            connections, are complex, introduce performance overhead and, in the case
            of the latter, add costs. They are therefore unsuitable for
            widespread public use. Browser vendors have recently established
            so-called private browsing modes that are largely misunderstood by
            users: they overrate the level of protection offered by these modes,
            which can lead to insecure behaviour. We aim to study user
            misconceptions, enhance users' comprehension and scientifically evaluate
            the usability and applicability of stronger privacy-enhancing services such
            as Tor. </p>
</div>
</div>
<br>
<br>
<!-- 12 updated -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>PRIVASEE: PRIVacy Aware visual SEnsitivity Evaluator</h2>
<h6>Bruno Lepri (Fondazione Bruno Kessler),
Elisa Ricci (Fondazione Bruno Kessler),
Lorenzo Porzi (Fondazione Bruno Kessler)</h6>
</div>
<div class="col-md-5 text-grid-c">
<p>Digitally sharing our lives with others is a captivating and
often addictive activity. Nowadays 1.8 billion photos are shared daily
on social media. These images hold a wealth of personal information,
ripe for exploitation by tailored advertising business models, but
placed in the wrong hands this data can lead to disaster. In this
            project, we want to see how increasing a person’s awareness of
            potential personal data sensitivity issues influences their decisions
about what and how to share, and moreover, how valuable they perceive
their personal data to be. To achieve this ambitious goal we aim to (i)
develop a novel methodology, applied within a mobile app, to inform
users about the potential sensitivity of their images. Sensitivity will
be modeled by exploiting automatic inferences coming from advanced
computer vision and deep learning algorithms applied to personal photos
and associated metadata; (ii) perform user-centric studies within a
living-lab environment to assess how users’ posting behaviours and
monetary valuation of mobile personal data are influenced by user
awareness about content sharing risks. </p>
</div>
</div>
<br>
<br>
<!-- 13 updated url: http://deMontjoye.com -->
<!-- <div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Engineering Systems for the
Privacy-Conscientious Use of Online Metadata</h2>
<h6>Yves-Alexandre de Montjoye (MIT)
Latanya Sweeney (Harvard)</h6>
</div>
<div class="col-md-5 text-grid-c">
<p> Project description pending.</p>
</div>
</div>
<br>
<br> -->
<!-- 14 -->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Bringing Transparency to Targeted Advertising</h2>
<h6>Patrick Loiseau (EURECOM), Oana Goga (MPI-SWS)
</h6>
</div>
<div class="col-md-5 text-grid-c">
<p>Targeted advertising largely contributes to the support of free
web services. However, it is also increasingly raising concerns from
users, mainly due to its lack of transparency. The objective of this
proposal is to increase the transparency of targeted advertising from
the user’s point of view by providing users with a tool to understand
why they are targeted with a particular ad and to infer what information
the ad engines possibly have about them. Concretely, we propose to
build a browser plugin that collects the ads shown to a user and
provides her with analytics about these ads.</p>
</div>
</div>
<br>
<br>
<!-- 15 updated url:https://twitter.com/DataboxInc-->
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-5 text-grid-a">
<h2>Exploring Personal Data on the Databox</h2>
<h6>Hamed Haddadi (QMUL)</h6>
</div>
<div class="col-md-5 text-grid-c">
<p>We are in a ‘personal data gold rush’ driven by advertising being
the primary revenue source for most online companies. These companies
accumulate extensive personal data about individuals with minimal
concern for us, the subjects of this process. This can cause many harms:
privacy infringement, personal and professional embarrassment,
restricted access to
labour markets, restricted access to best value pricing, and many
others. There is a critical need to provide technologies that enable
alternative practices, so that individuals can participate in the
            collection, management and consumption of their personal data. We are
            developing the Databox, a personal networked device (and associated
services) that collates and mediates access to personal data, allowing
us to recover control of our online lives. We hope the Databox is a
first step to re-balancing power between us, the data subjects, and the
corporations that collect and use our data.</p>
</div>
</div>
<br>
<br>
</section>