
Incremental release of differentially-private check-in data

Riboni, Daniele
2015-01-01

Abstract

Due to the growing popularity of location-based services and geo-social networks, users communicate more and more private location traces to service providers, as well as explicit spatio-temporal data, often called "check-ins", about their presence in specific venues at given times. Further check-in data may be implicitly derived by analyzing location data collected by mobile services. In general, the visibility of explicit check-ins is limited to friends in the social network, while the visibility of implicit check-ins is limited to the service provider. Exposing check-ins to unauthorized users is a privacy threat, since recurring presence in given locations may reveal political opinions, religious beliefs, or sexual orientation, as well as absence from other locations where the user is supposed to be. Hence, on the one hand, mobile app providers host valuable information that they would like to sell to possibly untrusted third parties; on the other hand, releasing that information raises serious privacy issues. In this paper, we solve this dilemma by providing formal privacy guarantees to users while preserving the utility of check-in data. Our technique is based on differential privacy methods integrated with a pre-filtering process, and it protects both against an untrusted third party receiving check-in statistics and against its users, who may try to infer the venues and sensitive locations visited by other users. We show how the technique can be extended to support incremental releases of check-in data. Extensive experiments with a large dataset of real users' check-ins show the effectiveness of our methods.
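The abstract describes releasing check-in statistics under differential privacy after a pre-filtering step. As a rough illustration of that general idea (not the paper's actual algorithm), the sketch below filters out low-count venues and then perturbs per-venue check-in counts with Laplace noise calibrated to a sensitivity of 1; the `min_count` threshold and the one-check-in-per-user-per-venue assumption are illustrative choices, not taken from the paper.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def release_checkin_counts(counts: dict, epsilon: float, min_count: int = 5) -> dict:
    """Pre-filter rare venues, then add Laplace noise to each count.

    Assumes each user contributes at most one check-in per venue, so the
    sensitivity of each count is 1 and the noise scale is 1/epsilon.
    Both `min_count` and this sensitivity model are illustrative assumptions.
    """
    # Pre-filtering: drop venues with too few check-ins before release.
    filtered = {venue: c for venue, c in counts.items() if c >= min_count}
    scale = 1.0 / epsilon
    # Perturb each surviving count with independent Laplace noise.
    return {venue: c + laplace_noise(scale) for venue, c in filtered.items()}
```

For example, `release_checkin_counts({"cafe": 10, "club": 2}, epsilon=1.0)` would suppress the rare venue `"club"` entirely and return a noisy count for `"cafe"`. Incremental releases, as the paper's title suggests, additionally require splitting the privacy budget across releases, which this sketch does not address.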
Files in this record:
File: 14-PMC-percom.pdf
Type: publisher's version (versione editoriale)
Size: 614.38 kB
Format: Adobe PDF
Access: archive administrators only

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/137752
Citations
  • PMC: ND
  • Scopus: 8
  • Web of Science: 6