On the Future Evolution of Supercomputers in the Cloud

Kemal Ebcioğlu

Since the early days of mainframes, sharing hardware resources has provided benefits, not least in cost efficiency. Today, hardware resource sharing is best accomplished by service provisioning through Cloud Computing, which brings economies of scale and has become one of the most successful IT trends of our age. Emerging future applications such as the Internet of Things will furthermore have repercussions for the design of supercomputers in the cloud, which will be tasked with controlling a massive number of connected devices, and will continue to expand the realm of traditional HPC in the cloud beyond technical applications. Cloud Computing, whose market size has been predicted to exceed 200 billion dollars by 2020, has indeed become a disruptive technology, to use Clayton Christensen's term, virtually replacing traditional in-house computing centers.

However, on one end of the spectrum of hardware resource sharing, there is the Application-Specific Integrated Circuit (ASIC), designed to accelerate a single instance of a single application, which achieves low power and high performance by eliminating all interpretation overhead; on the other end of the same spectrum, there is the general-purpose multi-core processor, burdened with the design complexity, power wall, memory wall, and frequency wall difficulties inherent in the highly parallel, pipelined interpretation of processor instructions. Today's cloud computing industry is focused on the latter, interpretive end of the hardware resource sharing spectrum, which is far from energy efficient; in fact, large cloud data centers have already been reported to require hundreds of megawatts of power. Furthermore, today's software-based hypervisors and operating systems running on such multi-core processors are not scalable enough to support massive parallelism. Between the two endpoints of the hardware resource sharing spectrum, however, lies a vast, unexplored research area, which we believe to be the basis for power-efficient hardware resource sharing in future massively parallel cloud computing data centers. It is on this unexplored area that our research vision is focused. In this talk, we will describe a vision of next-generation, easy-to-create, power-efficient, custom hardware-accelerated cloud computing data centers with exascale computation capabilities.

Kemal Ebcioğlu, President, Global Supercomputing Corporation

Kemal Ebcioğlu conducted research on architectures, compilers, and languages for fine-grain parallelism at the IBM T.J. Watson Research Center, Yorktown Heights, NY, from 1986 to 2005. Dr. Ebcioğlu proposed, launched, and led pioneering IBM Research projects on fine-grain parallel architectures, including VLIW (Very Long Instruction Word) and DAISY (Dynamically Architected Instruction Set from Yorktown), a binary translation project. His last position at IBM was co-leader of Programming Model and Tools, a 40-person group within an IBM supercomputer research project funded by the US Defense Advanced Research Projects Agency, with an emphasis on high programmer productivity for HPC.

Dr. Ebcioğlu received two IBM Outstanding Technical Achievement awards and an IBM Divisional award. In 2006, he retired from IBM and founded Global Supercomputing Corporation, where he is currently president. He received a Ph.D. in computer science from the State University of New York at Buffalo in 1986.

Dr. Ebcioğlu has over 70 technical publications and 12 US patents. He served as chair of the International Federation for Information Processing (IFIP) Working Group 10.3 (Concurrent Systems) from 2001 to 2006, and as chair of the ACM Special Interest Group on Microarchitecture (SIGMICRO) from 1999 to 2005. He has also served as general chair, program chair, and steering committee chair for various conferences related to fine-grain parallelism.

Dr. Ebcioğlu received the IEEE Computer Society B. Ramakrishna Rau Award in 2013, which is presented in recognition of substantial contributions in the field of computer microarchitecture and compiler code generation.

Dr. Ebcioğlu's current research interests include parallel scalable cloud computing and virtualization, high-productivity exascale systems, overcoming the memory wall barrier, and dynamic binary translation and optimization.