We are pleased to have secured three outstanding researchers to deliver keynote talks at ICPE 2020:
Gail C. Murphy is a Professor of Computer Science and Vice-President Research and Innovation at the University of British Columbia. She is also a co-founder of Tasktop Technologies Inc. Her research interests are in improving the productivity of software developers and knowledge workers by giving them tools to identify, manage and coordinate the information that really matters for their work. She is a Co-Chair of the Contributed Articles section of CACM and has previously served as program chair for the International Conference on Software Engineering and the Foundations of Software Engineering conferences, as well as an Associate Editor of IEEE Transactions on Software Engineering and ACM Transactions on Software Engineering and Methodology. She is a Fellow of the ACM and a Fellow of the Royal Society of Canada. She is the recipient of the 2018 IEEE Computer Society Harlan D. Mills Award and a previous recipient of an NSERC E.W.R. Steacie Award and the AITO Dahl-Nygaard Junior Prize.
Developing Effective Software Productively
It is not uncommon to hear laments about how long it takes to build software systems and how often, once built, those systems fail to meet the needs and desires of their users. Given that attention has been paid to how we build large software systems for over fifty years, you might wonder why we haven’t figured out how to build the systems people want in a reasonable amount of time. To put the problem into perspective, fifty years is half the lifespan of a Galápagos tortoise, and many software systems may be amongst the most complex systems ever built by humans. In that light, perhaps it is not surprising that we haven’t figured it all out. In this talk, I will explore what productivity means to software developers, how we might track the value delivered in the software developers produce, and how we might begin to think about measuring the productive delivery of effective software.
Sebastian Fischmeister is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Waterloo, and Executive Director of the Waterloo Centre for Automotive Research (WatCAR), which supports the automotive research of 130 faculty members. Sebastian has 20 years of experience in the R&D of safety-critical real-time embedded systems and has delivered innovations in real-time communication, embedded software, timing analysis, instrumentation and debugging technology, and safety and security monitoring.
Sebastian performs systems research at the intersection of software technology, distributed systems, and formal methods. He has published more than 90 peer-reviewed conference presentations and 30 journal articles, and has built demonstrators with his team and colleagues, including the reference demo for the ASTM F29.21 standard, an SFOC-licensed UAV, the APMA Connected Vehicle Technology Demonstrator, the Renesas Autonomous Vehicle Demonstrator (showcased at CES in Las Vegas in 2017 and 2018), and the DENSO Driving AI Demonstrator (CES 2018). His work has received several research and industry awards.
Sebastian is a licensed Canadian Professional Engineer, active in the Standards Council of Canada, and an ACM Distinguished Speaker.
Mining Traces of Embedded Software Systems for Insights
Embedded safety-critical systems are essential to today’s society, as we rely on them in all aspects of our lives. Should safety-critical systems fail to perform their specified function, they have the potential to harm people, destroy capital infrastructure, or significantly damage the environment. Safety-critical systems are becoming increasingly complex, and the more complex they become, the higher the risk of safety hazards for the public. With the increase of automation in driving and other areas, the complexity and criticality of the software will continue to grow drastically. Computer assistance will become essential for humans to gain a deep understanding of the programs underlying modern systems.
Mining specifications and properties from program traces is a promising approach to helping humans understand modern complex programs. Understanding temporal dependencies in relation to performance is one aspect of such an endeavour. A specification mined from a system trace can allow a developer to understand, among other things, task dependencies, activation patterns, and response triggers. The artefacts produced by mining are useful to system designers, developers, and safety managers, and can even serve as input to other tools. This talk introduces the concepts behind mining traces of embedded software programs and discusses the challenges of building practical tools.
Ahmed E. Hassan is an IEEE Fellow, an ACM SIGSOFT Influential Educator, an NSERC Steacie Fellow, and a Canada Research Chair (CRC) in Software Analytics at the School of Computing at Queen’s University, Canada. A 2019 Elsevier bibliometrics analysis ranks Dr. Hassan as the world’s most prolific software engineering researcher of the past decade. His research interests include empirical software engineering, log analytics, AIOps, and large-scale testing and monitoring. Hassan spearheaded the creation of the Mining Software Repositories (MSR) conference and its research community. Early tools and techniques developed by Dr. Hassan’s team are already integrated into products used by millions of users worldwide. Dr. Hassan’s industrial experience includes helping architect the BlackBerry wireless platform at RIM/BlackBerry, and working for IBM Research at the Almaden Research Lab and at the Computer Research Lab at Nortel Networks. Dr. Hassan is the named inventor of patents in several jurisdictions around the world, including the United States, Europe, India, Canada, and Japan. More information at: http://sail.cs.queensu.ca/.
Analytics-Driven Load Testing of Large-Scale Software Systems
Assessing how large-scale software systems behave under load is essential because many problems cannot be uncovered without executing tests that simulate large volumes of concurrent requests. Load-related problems can directly affect the customer-perceived quality of systems and often cost companies millions of dollars. Load testing is the standard approach for assessing how a system behaves under load. However, designing, executing and analyzing a load test can be very difficult due to the scale of the test (e.g., simulating millions of users and analyzing terabytes of data). Over the past decade, we have tackled many load testing challenges in several industrial settings. In this talk, I provide several guidelines for conducting load tests using an analytics-driven approach. I also discuss open research challenges that require attention from the research community.