1st Workshop on Autonomic Management of Large Scale Container-based Systems
Co-located with the 2017 IEEE International Conference on Cloud and Autonomic Computing (ICCAC), part of FAS* – Foundations and Applications of Self* Systems
The University of Arizona, Tucson, AZ, United States, September 18, 2017
Emiliano Casalicchio, Blekinge Institute of Technology, Sweden
Nectarios Koziris, National Technical University of Athens, CSLAB, Greece
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Ioannis Konstantinou, National Technical University of Athens, CSLAB, Greece
Matteo Nardelli, University of Rome Tor Vergata, Italy
9:10 Keynote speaker Justin Cappos – Securing Docker’s Supply Chain with TUF
10:00 SWITCHing from multi-tenant to event-driven videoconferencing services, Jernej Trnkoczy, Uroš Paščinski, Sandi Gec and Vlado Stankovski
10:30 – 11:00 Coffee break
11:00 In Search of the Ideal Storage Configuration for Docker Containers, Vasily Tarasov, Lukas Rupprecht, Dimitrios Skourtis, Amit Warke, Dean Hildebrand, Mohamed Mohamed, Nagapramod Mandagere, Wenji Li, Ming Zhao and Raju Rangaswami
11:30 Auto-scaling of containers: the impact of relative and absolute metrics, Emiliano Casalicchio and Vanessa Perciballi
12:30 – 14:00 Lunch
14:00 Keynote speaker Alan Sill – Emulation of Automated Control of Large Data Centers At Scale Using Containers
14:50 FID: A Faster Image Distribution System For Docker Platform, Kangjin Wang, Yong Yang, Ying Li, Hanmei Luo and Lin Ma
15:15 Quality of Service models for Micro-services and their integration into the SWITCH IDE, Polona Štefanič, Matej Cigale, Andrew Jones and Vlado Stankovski
15:40 – 16:00 Coffee break
16:00 Keynote speaker Sherif Abdelwahed – Distributed Performance Management for Large-Scale Enterprise Systems: A Model-based Approach
16:50 – 17:30 Final discussion on future/hot topics/challenges among the participants, organisers and invited speakers
Securing Docker’s Supply Chain with TUF (at 9:30)
Abstract. If you want to compromise millions of machines and users, software distribution and software updates are an excellent attack vector. Using public-key cryptography to sign your packages is a good starting point, but as we will see, it still leaves you open to a variety of attacks. This is why we designed TUF, a secure software update framework. TUF helps to handle key revocation securely, limits the impact a man-in-the-middle attacker may have, and reduces the impact of a repository compromise. We will discuss TUF’s protections and its integration into Docker’s Notary software. We will also demonstrate ongoing work on in-toto, a project we are integrating into Docker to verify other parts of the software supply chain, including the development, build, and quality assurance processes.
This talk will include a live demonstration of the technology and will provide next steps audience members can use to secure their own software supply chain.
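As a rough illustration of TUF's core idea, the sketch below shows threshold verification: a piece of metadata is trusted only if at least a threshold of a role's authorized keys produced a valid signature over it, so revoking a compromised key can drop trust below the threshold. This is not TUF's actual API; HMAC stands in for the asymmetric signatures (e.g., Ed25519) that TUF really uses, and all keys and metadata here are invented.

```python
import hashlib
import hmac

def sign(key: bytes, metadata: bytes) -> bytes:
    """Stand-in for a real digital signature (HMAC keeps the sketch dependency-free)."""
    return hmac.new(key, metadata, hashlib.sha256).digest()

def verify_threshold(metadata: bytes, signatures: dict, authorized_keys: dict,
                     threshold: int) -> bool:
    """Trust metadata only if >= threshold of the authorized keys signed it."""
    valid = 0
    for keyid, sig in signatures.items():
        key = authorized_keys.get(keyid)
        if key is not None and hmac.compare_digest(sig, sign(key, metadata)):
            valid += 1
    return valid >= threshold

# Invented example: a role with three authorized keys and a 2-of-3 threshold.
keys = {"k1": b"secret-1", "k2": b"secret-2", "k3": b"secret-3"}
meta = b'{"targets": {"app.tar.gz": {"sha256": "..."}}}'
sigs = {"k1": sign(keys["k1"], meta), "k2": sign(keys["k2"], meta)}

print(verify_threshold(meta, sigs, keys, threshold=2))  # True
# Revoking k2 (removing it from the authorized set) invalidates trust:
print(verify_threshold(meta, sigs, {"k1": keys["k1"], "k3": keys["k3"]}, 2))  # False
```

The same mechanism is why a repository compromise that steals a single key is survivable: as long as fewer than the threshold of keys are compromised, an attacker cannot forge trusted metadata.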
Bio. Justin Cappos is an associate professor in the Computer Science and Engineering Department at New York University. Justin’s research philosophy focuses on improving real world systems, often by addressing issues that arise in practical deployments.
His research advances are deployed in widely used software including git, Python, VMware, DigitalOcean, Docker, and most Linux distributions. Due to the practical impact of his research, Justin has received several awards including being named to Popular Science’s Brilliant 10 list in 2013.
More information is available at https://ssl.engineering.nyu.edu/personalpages/jcappos/
Emulation of Automated Control of Large Data Centers At Scale Using Containers (at 14:00)
Abstract. The range of virtualized tools available to computer science researchers as well as to production operators of computational infrastructure at large scales is now very broad. One gap that has not yet been filled, however, is the detailed emulation of data center control infrastructures at large scales, namely detailed work-alike modeling of the control features of equipment used to handle baseboard management command and control of the servers and other equipment used to form the data center fabric that underlies clouds, grids, and other forms of large-scale computing. In this work, we report on a new method based on the use of containers and software-defined networking in conjunction with functionally accurate mockups of data center equipment to scale testing and emulation of data center control systems to very large numbers of systems. This method allows detailed study of the expected scaling behavior and performance of control networks and tools for even very large data centers without having to build the entire data center.
Bio. Dr. Alan Sill is Senior Director of the High Performance Computing Center and Adjunct Professor of Physics at Texas Tech University. He also holds positions as Co-Director for the multi-university US National Science Foundation Cloud and Autonomic Computing Industry/University Cooperative Research Center and Visiting Professor of Distributed Computing at University of Derby, UK. He is an internationally recognized expert on large-scale advanced computing systems and software, and has played a strong role in design and creation of many scientific distributed computing, cloud and grid development projects and associated standards efforts.
More information is available at https://members.educause.edu/alan-sill
Distributed Performance Management for Large-Scale Enterprise Systems: A Model-based Approach (at 16:00)
Abstract. This presentation introduces a scalable distributed performance optimization framework for the autonomic performance management of distributed computing systems operating in a dynamic environment to satisfy desired quality-of-service objectives. To efficiently solve the performance management problems of interest in a distributed setting, we develop a hierarchical structure where a high-level limited-lookahead controller manages interactions between lower-level controllers using forecast operating and environment parameters. We present the overall control structure and, as a case study, show how to efficiently manage the power consumed by a computer cluster. Using real-life workload traces, we show via simulations that the proposed method is scalable, has low run-time overhead, and adapts quickly to time-varying workload patterns.
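The flavor of limited-lookahead control described above can be sketched in a few lines: at each step the controller picks the configuration (here, the number of active servers) that minimizes a cost combining power use and a quality-of-service penalty for forecast demand exceeding capacity. The cost model and every parameter below are purely illustrative assumptions, not the speaker's actual formulation.

```python
# Toy one-step limited-lookahead controller (illustrative parameters only).
CAPACITY_PER_SERVER = 100.0   # requests/s one server can absorb (assumed)
POWER_PER_SERVER = 1.0        # normalized power cost per active server (assumed)
QOS_PENALTY = 0.05            # cost per request/s of unmet forecast demand (assumed)

def step_cost(servers: int, forecast_load: float) -> float:
    """Cost of running `servers` machines against a forecast load."""
    unmet = max(0.0, forecast_load - servers * CAPACITY_PER_SERVER)
    return servers * POWER_PER_SERVER + QOS_PENALTY * unmet

def choose_servers(forecast_load: float, max_servers: int) -> int:
    """Pick the server count minimizing the one-step cost."""
    return min(range(max_servers + 1),
               key=lambda n: step_cost(n, forecast_load))

# With a forecast of 250 requests/s, two servers would leave 50 req/s unmet
# (cost 2 + 2.5 = 4.5), so the controller powers on a third (cost 3.0):
print(choose_servers(250.0, max_servers=10))  # 3
```

A hierarchical version, as in the talk, would have a high-level controller distribute forecast load across clusters, with each cluster running a local controller like this one.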
Bio. Sherif Abdelwahed is Professor of Electrical and Computer Engineering at Virginia Commonwealth University (VCU), where he teaches and conducts research in the area of computer engineering, with specific interests in cyber-physical systems, cyber-security, autonomic computing, real-time systems, modeling and analysis of discrete-event and hybrid systems, model-integrated computing, and formal verification. Prior to joining VCU in 2017, he was Associate Director of the Distributed Analytics and Security Institute (DASI) and an Associate Professor in the Electrical and Computer Engineering Department at Mississippi State University (MSU). He received his Ph.D. in 2002 from the Department of Electrical and Computer Engineering at the University of Toronto. Prior to joining Mississippi State University, he was a research assistant professor in the Department of Electrical Engineering and Computer Science and a senior research scientist at the Institute for Software Integrated Systems, Vanderbilt University, from 2001 to 2007. From 2000 to 2001 he worked as a research scientist with Rockwell Scientific Company. He collaboratively established the first NSF I/UCRC center at Mississippi State University, the Center for Autonomic Computing, and is currently the co-director of this center. He has chaired several international conferences and conference tracks, and has served as a technical committee member at various national and international conferences. He received the MSU StatePride Faculty award for 2010 and 2011, the MSU Bagley College of Engineering Hearin Faculty Excellence award in 2010, and, recently, the MSU 2016 Faculty Research Award from the Bagley College of Engineering. Dr. Abdelwahed has more than 130 publications and is a senior member of the IEEE.
Call for Papers
Aim and Topics
Containers are a lightweight OS-level virtualization abstraction, primarily based on namespace isolation and control groups. A container is a software environment in which an application or application component (a so-called microservice) can be installed together with all of its library dependencies, binaries, and the basic configuration needed to run the application. Containers provide a higher level of abstraction for process lifecycle management, with the possibility not only to start and stop but also to upgrade and release a new version of a containerized service in a seamless way. Container packaging mechanisms like Docker, LXC and rkt, as well as management frameworks like Kubernetes, Cloudify, Mesos, etc., are witnessing widespread adoption in the Cloud, Big Data and Internet industries today. Indeed, containers may solve many issues, e.g., application dependencies, application portability, and performance overhead.
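One consequence of the control-group mechanism is that container metrics come in two flavors: absolute utilization, measured against the whole host, and relative utilization, measured against the container's own cgroup quota. The sketch below illustrates this with made-up numbers; the `cfs_quota_us`/`cfs_period_us` names mirror the Linux CFS bandwidth knobs, but the scenario is purely hypothetical.

```python
def cpu_limit_cores(cfs_quota_us: int, cfs_period_us: int) -> float:
    """Effective CPU limit in cores implied by CFS bandwidth settings."""
    return cfs_quota_us / cfs_period_us

def utilizations(used_cores: float, cfs_quota_us: int, cfs_period_us: int,
                 host_cores: int) -> tuple:
    """Return (absolute, relative) CPU utilization for a container."""
    limit = cpu_limit_cores(cfs_quota_us, cfs_period_us)
    absolute = used_cores / host_cores  # share of the whole host
    relative = used_cores / limit       # share of the container's own quota
    return absolute, relative

# Hypothetical container capped at 50 ms of CPU per 100 ms period
# (i.e., 0.5 cores) on a 16-core host, currently using 0.4 cores:
absolute, relative = utilizations(0.4, 50_000, 100_000, 16)
print(absolute)  # 0.025 -> looks nearly idle from the host's perspective
print(relative)  # 0.8   -> close to its quota; an auto-scaler should react
```

The gap between the two numbers is exactly why monitoring and auto-scaling policies for containers (see the relative-vs-absolute metrics talk in the program) must be explicit about which denominator they use.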
Despite the broad interest in containers, we are still far from the maturity phase and many research problems remain open. This workshop is specifically focused on the challenge of autonomic management of large-scale container-based systems.
The workshop aims to share new findings, exchange ideas, discuss research challenges, and report the latest research efforts on the following subjects:
- Performance modeling of container based systems
- Monitoring of container based systems
- Characterization of containerized workload
- Orchestration models, mechanisms and policies for large-scale deployments
- Resource management at run time
- Autonomic management of large-scale container-based systems
- Management of containers in cloud networking (e.g., NFV)
- Use cases and challenges for the management/orchestration of large-scale container-based systems: Cloud, HPC, Big Data, IoT applications and Internet/Network services
Technical Program Committee
David Bermbach, ISE, TU Berlin, DE
Valeria Cardellini, University of Rome Tor Vergata, IT
Salvatore Distefano, University of Messina, IT / Kazan Federal University, RU
Salvatore Filippone, the Center for Computational Engineering Science, Cranfield University, UK
Roberto Gioiosa, Pacific Northwest National Laboratory, USA
Håkan Grahn, Blekinge Institute of Technology, SE
Parisa Heidari, Western University, ON, Canada
Elisa Heymann Pignolo, University of Wisconsin-Madison, United States
Stefano Iannucci, Mississippi State University, USA
Dharmesh Kakadia, Microsoft Research, IN
Vana Kalogeraki, Athens University of Economics and Business, GR
Ioannis Konstantinou, National Technical University of Athens, GR
Dimosthenis Kyriazis, University of Piraeus, GR
Lars Lundberg, Blekinge Institute of Technology, SE
Wubin Li, Ericsson Research, Montreal, Canada
Matteo Nardelli, University of Rome Tor Vergata, IT
George Pallis, University of Cyprus, CY
Peter Pietzuch, Imperial College London, UK
Antonio Puliafito, University of Messina, IT
Stefano Salsano, University of Rome Tor Vergata, IT
Vlado Stankovski, University of Ljubljana, SI
Luiz Angelo Steffenel, University of Reims Champagne-Ardenne, FR
Ming Zhao, Arizona State University, United States
Submission and publication
We call for original and unpublished papers describing research results, experiences, visions, or new initiatives. Full papers should not exceed 8 pages and position papers should not exceed 4 pages. Papers should be in the standard double-column IEEE format for conference proceedings; the page limit includes figures, tables, references, and appendices.
All manuscripts will be reviewed and judged on merits including originality, significance, interest, correctness, clarity, and relevance. Papers are strongly encouraged to report on experiences, measurements, and user studies, and to provide an appropriate quantitative evaluation.
Presented papers will be included in the proceedings of the IEEE ICCAC conference. A special issue containing a few selected papers will be organized in Cluster Computing (Springer).
Papers should be submitted as PDF files via EasyChair using the following link: https://easychair.org/conferences/?conf=amlcs17
At least one author of each accepted paper is required to attend the workshop and present the paper.
- Paper submission deadline: June 25, 2017 (firm; extended from June 9)
- Author notification: July 9, 2017
- Camera-ready papers: July 21, 2017
The University of Arizona, Tucson, AZ, United States
For information, you can contact Emiliano Casalicchio at firstname.lastname@example.org