|dc.description.abstract||Mobile devices (e.g., smartphones) are widely used in people's daily lives. When users rely on location-based services in mobile applications, many location records are exposed to the service providers, which poses a severe location privacy threat. The location privacy problem for location-based services on mobile devices has therefore drawn much attention. In 2011, Shokri et al. proposed a location privacy framework consisting of users' background knowledge, location privacy preserving mechanisms (LPPMs), inference attacks, and metrics. Since then, many works have designed their own location privacy frameworks based on this structure. One problem with existing works is that most of them use cell-based location privacy frameworks to simplify the computation, which may yield performance results that differ from those of more realistic frameworks. Moreover, while many existing works focus on designing new LPPMs, it is unknown how much the location information an adversary can obtain differs when users use different LPPMs. Although some works propose new complementary privacy metrics (e.g., geo-indistinguishability, conditional entropy) to show that their LPPMs are better, it is unclear how these metrics relate to the location information an adversary can obtain. In addition, previous works usually assume either a strong adversary who has complete background knowledge to construct a user's mobility profile, or a weak adversary who has no background knowledge about the user. What the attack results would be when an adversary has a varying amount of background knowledge, and can also take semantic background knowledge into account, remains unknown.
To address these problems, we propose a more realistic location privacy framework that takes location points, rather than cells, as inputs. Our framework contains both geographical and semantic background knowledge, different LPPMs with or without the geo-indistinguishability property, different inference attacks, and both the average distance error and the success rate metrics. We design several experiments using a Twitter check-in dataset to quantify our location privacy framework from an adversary's point of view. Our experiments show that an adversary needs only 6% of the background knowledge to infer around 50% of the users' actual locations that he could infer with full background knowledge; that the prior probability distribution of an LPPM has much less impact than the background knowledge; that an LPPM with the geo-indistinguishability property does not necessarily perform better against different attacks than LPPMs without this property; and that semantic information is not as useful as previous work suggests. We believe our findings will help users and researchers better understand our location privacy framework and choose the appropriate techniques in different cases.||en