The Internet is now omnipresent in our lives. We rely on user-centric services, that is, digital services that leverage our personal data to provide personalized services of high value, for almost everything from social networking to shopping, banking, and entertainment. Alongside their great utility, however, user-centric services have also brought serious security and privacy problems that threaten our well-being, as well as the growth and sustainability of digital services. Our increased dependence on online services has also reinforced the need to improve the network infrastructure that supports them.
I argue that tackling these essential questions requires a combination of methods from game theory and statistical learning. Game theory, because the security, privacy, and performance of user-centric services ultimately depend on the behavior of humans who respond to the incentives created by the system's design, and game theory is the natural tool for modeling such strategic interactions. Statistical learning, because it lies at the core of user-centric services, both to secure the system and to exploit personal data.
This manuscript synthesizes my research efforts on game theory and statistical learning for security, privacy, and network systems. I first focus on the security aspects and describe my work on developing and using game-theoretic models to design classification, resource allocation, and sequential learning methods in adversarial environments. Then I focus on the privacy aspects and describe my work on developing and studying algorithms to learn from personal data, and on analyzing their impact on privacy. Finally, I focus on the network systems aspects and describe my work on analyzing and improving the infrastructure's performance. I conclude by describing the perspectives for my research, summarized as the study of 'humans versus machine learning' and comprising two main directions: (i) developing algorithms to learn from data generated or provided by strategic human agents for security and privacy (using game theory), and (ii) studying how machine learning algorithms inconspicuously affect humans in their daily lives and how to make them more 'human-friendly'.