
Miron Livny

John P. Morgridge Professor of Computer Science, Computer Sciences Department, University of Wisconsin-Madison

Miron Livny received a B.Sc. degree in Physics and Mathematics from the Hebrew University in 1975 and M.Sc. and Ph.D. degrees in Computer Science from the Weizmann Institute of Science in 1978 and 1984, respectively. Since 1983 he has been on the faculty of the Computer Sciences Department at the University of Wisconsin-Madison, where he is currently the John P. Morgridge Professor of Computer Science and director of the Center for High Throughput Computing (CHTC), leads the HTCondor project, and serves as principal investigator and technical director of the Open Science Grid (OSG). He is a member of the scientific leadership team of the Morgridge Institute for Research, where he leads the Software Assurance Marketplace (SWAMP) project and serves as Chief Technology Officer of the Wisconsin Institutes for Discovery. Dr. Livny's research focuses on distributed processing and data management systems and involves close collaboration with researchers from a wide spectrum of disciplines. He pioneered the area of High Throughput Computing (HTC) and developed frameworks and software tools that have been widely adopted by academic and commercial organizations around the world.


Talk Title:

On-the-fly Capacity Planning for Dynamic Compute and Data Intensive Workloads


Talk Abstract:

Elasticity has been the cornerstone of our approach to supporting High Throughput Computing (HTC) workloads. The HTC technologies we develop serve a wide spectrum of data- and compute-intensive applications, including data mining, machine learning, natural language processing, image processing, simulation, and statistical inference. These applications trigger distributed workloads whose resource consumption profiles can change significantly during and across runs. The elasticity offered by extremely large facilities opens the door to on-the-fly acquisition of resources with a wide range of capabilities and characteristics. Ranging from large-memory machines to GPUs, these facilities allow users with dynamic applications to acquire resources when desired and for as long as they remain cost effective. Given that we already have the automation technologies to leverage dynamically acquired resources, we are left with the challenge of developing the automation technologies needed to trigger and manage such acquisitions. This challenge is multi-faceted, as it requires dependable and secure policy-driven software tools that can be entrusted with the control of scarce funds or allocations. We will report on our work to develop utilities for managing on-the-fly acquisition of compute resources. The focus of this work is on the mechanisms needed to implement acquisition policies reliably and securely.
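
To make the flavor of such an acquisition policy concrete, below is a minimal, self-contained Python sketch of a toy decision rule: grow the pool only when there is sustained unmet demand and the resulting spend stays within a budget cap. Every name, threshold, and cost here is a hypothetical illustration, not part of HTCondor, OSG, or the utilities described in the talk.

# Illustrative sketch only: a toy, policy-driven acquisition decision.
# All names, thresholds, and the cost model are hypothetical.

from dataclasses import dataclass

@dataclass
class PoolState:
    idle_jobs: int            # jobs waiting for a resource
    idle_slots: int           # already-acquired slots sitting unused
    budget_remaining: float   # allocation (e.g., dollars or core-hours) left to spend

@dataclass
class AcquisitionPolicy:
    min_idle_jobs: int = 100        # only grow when demand is sustained
    max_hourly_spend: float = 50.0  # cap on the spend rate
    slot_hourly_cost: float = 0.05  # assumed cost of one slot-hour

    def slots_to_acquire(self, state: PoolState) -> int:
        """Return how many slots to request now, or 0 to hold steady."""
        if state.idle_jobs < self.min_idle_jobs or state.idle_slots > 0:
            return 0  # no unmet demand, or acquired capacity is already waiting
        affordable = int(min(self.max_hourly_spend,
                             state.budget_remaining) / self.slot_hourly_cost)
        return min(state.idle_jobs, affordable)

if __name__ == "__main__":
    policy = AcquisitionPolicy()
    state = PoolState(idle_jobs=500, idle_slots=0, budget_remaining=200.0)
    print(f"Acquire {policy.slots_to_acquire(state)} slots")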

