Publication
Practical Online Debugging of Spark-like Applications
Book contribution - Book chapter / Conference contribution
Apache Spark is a framework widely used for writing Big Data analytics applications. It offers a scalable, fault-tolerant model based on rescheduling failing tasks on other nodes. While this is well-suited for hardware and infrastructure errors, it is not suited for application errors, as they simply reappear in the rescheduled tasks. As a result, the application is killed, losing all progress and forcing developers to restart it from scratch. Despite the popularity of such a failure-recovery model, understanding and debugging Spark-like applications remain challenging. When an error occurs, developers need to analyze huge log files or perform time-consuming replays to find the bug. To address these concerns, we present an online debugging approach tailored to Big Data analytics applications. Our approach supports local debugging of remote parallel exceptions through dynamic local checkpoints, extended with domain-specific debugging operations and live code updating. To deal with data-cleaning errors, we extend our model so that developers can easily and automatically ignore exceptions that occur at runtime. We validate our solution through performance benchmarks showing that our debugging approach performs comparably to or better than state-of-the-art debugging solutions for Big Data. Furthermore, we conduct a user study comparing our approach with another state-of-the-art debugging approach; the results show that participants found the solution to a bug faster with our approach and generally perceived the debugger's features positively.
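For context, the sketch below is not the debugger described in the paper; it is only a plain-Spark approximation of the idea of "ignoring" records whose processing throws at runtime, so that a data-cleaning error does not kill the whole job. All names in it (object, app name, sample data) are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: skip records that raise an exception instead of letting
// the task fail and be rescheduled (where the same error would reappear).
object SkipFailingRecordsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("skip-failing-records-sketch")
      .master("local[*]")
      .getOrCreate()

    val raw = spark.sparkContext.parallelize(Seq("1", "2", "oops", "4"))

    // Wrap the per-record computation; failing records yield None and are
    // silently dropped, mimicking an "ignore this exception" decision.
    val parsed = raw.flatMap { record =>
      try Some(record.toInt * 10)
      catch { case _: NumberFormatException => None }
    }

    parsed.collect().foreach(println) // prints 10, 20, 40
    spark.stop()
  }
}
```

Unlike this static workaround, the approach described in the abstract lets the developer make such ignore-or-fix decisions interactively while the application is running.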