Graduate School and Research Center in Communication Systems

Clémentine MAURICE

Eurecom - Networking and Security 
CIFRE doctoral student

Thesis

Toward characterization of cloud execution environments


(In)Security of the Cloud

An increasing number of Internet services rely on cloud infrastructure. Storage services such as Dropbox rely on the cloud storage of Amazon S3. Major Internet sites such as Netflix rely on the Infrastructure as a Service (IaaS) of Amazon AWS. Technicolor considers using the cloud for media-related applications. Despite its commercial success, it is commonly accepted that delegating the execution of a task or the storage of data to the cloud comes with security issues.

The abstraction layers provided by the cloud model hide the actual execution environment, such as the hardware or the hypervisor used. It is not always clear which technology and architecture is actually hidden behind cloud products, even for products with specific security features (e.g., Amazon VPC, Amazon dedicated instances, Dome9 SaaS firewalls).

As a consequence, an administrator has little information about the execution environment of their virtual machines, processes, and tasks, and may have difficulty evaluating the security issues that might arise. Questions an administrator might want to ask include: On which hypervisors are my virtual machines running? Is it the latest version of the hypervisor? Where is the host machine located: in the U.S., Europe, India? What type of hardware is it running on? Which drivers are used? Is my virtual network properly isolated? Which other machines are on my virtual network? Are other virtual machines running on the same hypervisor?

A precise characterization of cloud execution environments is therefore necessary. This characterization should be considered from a security perspective, and thus take into account possible attacks, subsequent corrective actions, and countermeasures. During this PhD we propose to investigate this characterization, following these items:

- Evaluate the actual variability in security of existing cloud execution environments.
- Propose and evaluate fingerprinting methods that characterize cloud technologies and architectures from inside and outside the cloud.
- Propose and evaluate adaptive actions or countermeasures once the execution environment has been fingerprinted.

These items and a methodology on how to address them are described in greater detail in the following sections.


Evaluate the variability
On paper, the cloud computing services proposed by different cloud vendors are similar. In practice, however, they rely on different technologies, and the service provided may vary from a performance and security perspective from one cloud provider to another. Recent work [5, 6] shows that important performance and cost variations exist between cloud providers, and also within a single cloud provider. These works only considered performance and cost aspects, not security.

We propose to characterize the variability in security of existing cloud execution environments. This step encompasses a measurement period where we observe architectural aspects of cloud deployments, such as n-tier isolation or the DNS and firewall services provided by the cloud service providers. The observations will be conducted from different points of view: the network, the virtualization layer, the physical machines, etc. A partial map of the cloud infrastructure may be constructed from these observations, relying on traceroutes, port scans, fingerprinting of network elements, etc. This approach differs from related work [1, 2, 3, 4] in that we consider more architectural aspects and we are interested in highlighting the differences in security between different cloud providers.
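As one concrete example of such an observation primitive, a minimal TCP connect() port scan can be sketched as follows. This is an illustrative sketch only; the host and port list are placeholders, and a real measurement campaign would use a mature scanner such as NMAP.

```python
# Minimal sketch of one mapping primitive: a TCP connect() scan of a few
# common service ports. Host and port list are illustrative placeholders.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeded.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan the loopback interface only, to keep the example self-contained.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Repeating such scans from inside and outside the cloud, together with traceroutes, is what would feed the partial infrastructure map described above.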


Propose and evaluate fingerprinting methods

Fingerprinting is the action of gathering information about objects such as machines or drivers in order to identify them. Our objective is to design fingerprinting methods that can identify the technologies and architectures used within a given cloud. A process or machine running in the cloud should be able to determine in which environment it is executed just by gathering some observable characteristics of its execution environment.

Fingerprinting often relies on machine learning algorithms. In this work we also adopt a machine learning approach by (i) clustering a set of observed characteristics (unsupervised learning) and (ii) labeling observations in a learning dataset and classifying an object of the cloud based on its observed characteristics (supervised learning). The datasets used will be based on the previously conducted measurements. We will carefully evaluate the accuracy of the proposed fingerprinting methods, and also consider to what extent the methods can be attacked.
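The two learning steps can be illustrated with a small self-contained sketch on synthetic 2-D feature vectors. The feature values, hypervisor names, and labels below are invented for illustration; a real study would use measured characteristics and an established machine learning library.

```python
# Sketch of the two learning steps: (i) k-means clustering of unlabelled
# observations, (ii) nearest-example classification of a new observation.
# All data here is synthetic; features and labels are illustrative only.
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Unsupervised step: cluster observations into k groups, return centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [
            tuple(sum(coord) / len(g) for coord in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

def classify(point, labelled):
    """Supervised step: label a new observation by its nearest labelled example."""
    nearest = min(labelled, key=lambda lp: math.dist(point, lp[0]))
    return nearest[1]

if __name__ == "__main__":
    # Two synthetic populations with clearly separated feature ranges,
    # standing in for observations made on two hypothetical hypervisors.
    pop_a = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(20)]
    pop_b = [(random.uniform(4, 5), random.uniform(4, 5)) for _ in range(20)]
    print(kmeans(pop_a + pop_b, 2))
    labelled = [(p, "hypervisor-A") for p in pop_a] + [(p, "hypervisor-B") for p in pop_b]
    print(classify((0.5, 0.5), labelled))  # nearest examples are all in pop_a
```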

One of the characteristics that may be used to fingerprint the physical machine and the hypervisor is the behavior of the clock skew, based on the work of Kohno et al. [9]. Chen et al. [13] observed that virtualized hosts have a more perturbed clock skew behavior and could determine the type of hypervisor based on this behavior. While this work is promising, it is not certain that it can be applied in a more complex setting such as the cloud. Similarly, clock skews have been used to reveal nodes within the Tor network [14]. Such methods may reveal other architectural elements of the cloud, and typically help identify the physical machine a virtual machine is running on.
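The core of such a clock-skew estimate can be sketched as follows. Note that this is a simplified least-squares variant on synthetic data; Kohno et al. [9] actually estimate the skew with a linear-programming lower bound on observed timestamp offsets.

```python
# Simplified clock-skew estimate: fit a line to pairs of
# (local receive time, remote timestamp minus local time); the slope is
# the remote clock's skew. Kohno et al. [9] use a linear-programming
# bound instead of this plain least-squares fit. Data below is synthetic.
def clock_skew_ppm(local_times, offsets):
    """Least-squares slope of offset vs. local time, in parts per million."""
    n = len(local_times)
    mean_t = sum(local_times) / n
    mean_o = sum(offsets) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in zip(local_times, offsets))
    den = sum((t - mean_t) ** 2 for t in local_times)
    return num / den * 1e6

if __name__ == "__main__":
    # A remote clock drifting 50 microseconds per second, i.e. 50 ppm skew.
    times = [float(t) for t in range(0, 600, 10)]
    offsets = [t * 50e-6 for t in times]
    print(round(clock_skew_ppm(times, offsets)))  # -> 50
```

Per-host differences in this slope (and in how noisy the fit is under virtualization) are the observable characteristic such fingerprinting methods exploit.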

Related questions that will arise during this study include: What can be fingerprinted in the cloud? The cloud provider? The cloud virtualization technology, down to the version or subversion? The physical device? The location: country, datacenter, container, rack?

Many contributions have been made in the field of fingerprinting. Tools like NMAP, SinFP [7], and CronOS/RING [8] perform OS/TCP stack characterization. Kohno et al. [9] characterize clock skews of remote hosts through a TCP/IP connection, and similarly Jana et al. [10] characterize the clock skew of wireless access points. Zalewski [11] characterizes the random generators used in different flavors of IP stacks. Neumann et al. [12] characterize wireless devices by observing frame inter-arrival times. While these techniques may inspire the design of new fingerprinting methods, they generally do not apply to the specific technologies used by the cloud, such as hypervisors and firewalls.
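As an illustration of the last approach, frame inter-arrival times can be reduced to a small statistical signature that is compared across devices. Neumann et al. [12] use richer histogram-based features; the mean/standard-deviation pair below is a deliberately minimal stand-in, computed on synthetic timestamps.

```python
# Toy version of inter-arrival-time fingerprinting: reduce a sequence of
# frame timestamps to a small signature. Real work [12] uses histogram
# features; this mean/std pair is a minimal illustrative stand-in.
def iat_signature(timestamps):
    """Return (mean, standard deviation) of frame inter-arrival times."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(iats) / len(iats)
    var = sum((x - mean) ** 2 for x in iats) / len(iats)
    return mean, var ** 0.5

if __name__ == "__main__":
    # Synthetic capture: frames arriving every ~10 ms with slight jitter.
    frames = [0.000, 0.010, 0.021, 0.030, 0.041, 0.050]
    print(iat_signature(frames))
```

Two devices (or drivers) with different transmit timing then yield distinguishable signatures, which is the property the cited work exploits.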


Propose and evaluate adaptive actions

Once an execution environment has been accurately fingerprinted, the concerned process or virtual machine should adapt accordingly. This may consist in aborting the current execution, migrating to another cloud environment, or adding additional layers of security, such as requesting trusted computing base (TCB) execution [15, 16], firewalling, obfuscation, etc.
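A hypothetical decision sketch for this adaptive step is shown below: it maps a fingerprinting result to one of the reactions just listed. The field names ("hypervisor", "version", "isolated_network") and the rule set are assumptions made for illustration, not part of the proposal.

```python
# Hypothetical policy mapping a fingerprinting result to a reaction.
# Field names and rules are illustrative assumptions, not a real design.
def adapt(fingerprint, latest_versions):
    """Pick a reaction based on what was learned about the environment."""
    hv = fingerprint.get("hypervisor")
    if hv is None:
        return "abort"        # environment could not be identified: stop
    if fingerprint.get("version") != latest_versions.get(hv):
        return "migrate"      # outdated hypervisor: move to another cloud
    if not fingerprint.get("isolated_network", False):
        return "firewall"     # shared network segment: add a firewall layer
    return "continue"         # environment looks acceptable

if __name__ == "__main__":
    latest = {"xen": "4.1"}
    print(adapt({"hypervisor": "xen", "version": "4.0"}, latest))  # -> migrate
```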

The different adaptive actions that may be applied in the context of a cloud environment will be investigated. It is not clear today how and to what extent the security solutions that exist in a non-cloud environment can be applied to a cloud environment. The adaptive actions will be evaluated from a performance point of view (overhead introduced) and from a security point of view.


References
[1] AmazonIA: When Elasticity Snaps Back, S. Bugiel, S. Nürnberger, T. Pöppelmann, A. Sadeghi, T. Schneider. In CCS'11, 2011.

[2] A Security Analysis of Amazon's Elastic Compute Cloud Service, M. Balduzzi, J. Zaddach, D. Balzarotti, S. Loureiro. In Proceedings of the 27th ACM Symposium on Applied Computing (SAC'12), 2012.

[3] Hey, you, get off of my cloud, T. Ristenpart, E. Tromer, H. Shacham, S. Savage. In CCS'09, 2009.

[4] Dark clouds on the horizon: using cloud storage as attack vector and online slack space, M. Mulazzani, S. Schrittwieser, M. Leithner, M. Huber. In Proceedings of the 20th USENIX Security (SEC'11), 2011.

[5] CloudCmp: Comparing Public Cloud Providers, A. Li, X. Yang, S. Kandula, M. Zhang. In Proceedings of the 10th ACM SIGCOMM Internet Measurement Conference (IMC'10), 2010.

[6] Runtime Measurements in the Cloud: Observing, Analyzing, and Reducing Variance, J. Dittrich, J. Quian. In VLDB Endowment, Vol. 3, Nr. 1, 2010.

[7] SinFP, unification of active and passive operating system fingerprinting, P. Auffret. In Journal of Computer Virology, Vol. 6, Nr. 3, 2010.

[8] New Tool and Technique for Remote Operating System Fingerprinting, F. Veysset, O. Courtay, O. Heen. Technical Report, Intranode Research, April 2002.

[9] Remote Physical Device Fingerprinting, T. Kohno, A. Broido, K. C. Claffy. In Proceedings of IEEE Security And Privacy (S&P 05), 2005.

[10] On Fast and Accurate Detection of Unauthorized Wireless Access Points Using Clock Skews, S. Jana, S. Kasera. In Proceedings of the 14th ACM international conference on Mobile computing and networking (MobiCom'08), 2008.

[11] Strange Attractors and TCP/IP Sequence Number Analysis, M. Zalewski, 2001.

[12] An Empirical Study of Passive 802.11 Device Fingerprinting, C. Neumann, O. Heen, S. Onno. In Proceedings of the IEEE ICDCS Workshop on Network Forensics, Security and Privacy (NFSP'12), 2012.

[13] Towards an Understanding of Anti-virtualization and Anti-debugging Behavior in Modern Malware, X. Chen, J. Andersen, Z. Morley Mao, M. Bailey, J. Nazario. In Proceedings of IEEE International Conference on Dependable Systems and Networks (DSN 2008), 2008.

[14] Hot or not: revealing hidden services by their clock skew, S. J. Murdoch. In CCS'06, 2006.

[15] Flicker: An Execution Infrastructure for TCB Minimization, J. M. McCune, B. Parno, A. Perrig, M. Reiter, H. Isozaki. In Proceedings of the ACM European Conference on Computer Systems (EuroSys'08), 2008.

[16] SMART: Secure and Minimal Architecture for (Establishing a Dynamic) Root of Trust, K. El Defrawy, A. Francillon, D. Perito, G. Tsudik. In Proceedings of the Network and Distributed System Security Symposium (NDSS 2012), 2012.