Social media advertising has drawn many privacy complaints from both users and policy makers. Despite this, it is still largely unknown who advertises on social media, and users still have little understanding of what data the platforms hold about them and why they are shown particular ads. In response, platforms such as Facebook have introduced transparency mechanisms through which users can receive explanations of why they received an ad and what data Facebook has inferred about them.
The ultimate aim of this thesis is to increase transparency in social media advertising. We build a browser extension, AdAnalyst, that collects the ads users see on Facebook along with the explanations the platform provides for them, and in return offers users aggregated statistics about the ads they receive. By using AdAnalyst, and by conducting experiments in which we target the users we monitor with ads, we find that Facebook's explanations are incomplete, misleading, and vague. Additionally, we examine who is advertising on Facebook and how they target users. We identify a wide range of advertisers, some of which belong to potentially sensitive categories such as politics or health. We also find that advertisers employ targeting strategies that can be invasive or opaque. Finally, we develop a collaborative method that infers why a user has been targeted with an ad on Facebook by examining the characteristics of other users who received the same ad.
Overall, our findings inform the public about the shortcomings of current transparency mechanisms and pave the way towards designing better ones.