Two Papers of the Programming Language Research Laboratory were Accepted by ICSE'23

ICSE'23, a well-known conference in the field of software engineering, recently announced its list of accepted papers. Two papers by the Programming Language Lab were accepted: "Tare: Type-Aware Neural Program Repair" and "Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects". The details of the papers are listed below.


Title: Tare: Type-Aware Neural Program Repair

Authors: Qihao Zhu, Zeyu Sun, Wenjie Zhang, Yingfei Xiong* and Lu Zhang

Abstract: Automated program repair (APR) aims to reduce the effort of software development. With the development of deep learning, many DL-based APR approaches have been proposed using an encoder-decoder architecture. Despite their promising performance, these models share the same limitation: they generate many untypable patches. The main reason for this phenomenon is that the existing models do not consider the constraints on code captured by a set of typing rules.
In this paper, we propose Tare, a type-aware model for neural program repair that learns the typing rules. To encode the typing rules, we introduce three novel components: (1) a novel type of grammar, T-Grammar, that integrates type information into a standard grammar; (2) a novel representation of code, T-Graph, that integrates the key information needed to type-check an AST; and (3) a novel type-aware neural program repair approach, Tare, that encodes the T-Graph and generates patches guided by the T-Grammar.
Experiments were conducted on three benchmarks: 393 bugs from Defects4J v1.2, 444 additional bugs from Defects4J v2.0, and 40 bugs from QuixBugs. Our results show that Tare repairs 62, 32, and 27 bugs on these benchmarks, respectively, and outperforms the existing APR approaches on all of them. Further analysis also shows that, with the typing rule information, Tare tends to generate more compilable patches than the existing DL-based APR approaches.
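The core idea of type-aware patch generation can be illustrated with a toy sketch: when filling a hole in an AST, candidate productions whose result type does not match the type expected at that hole are masked out, so the model can only emit typable code. This is a minimal illustration under assumed names, not the paper's implementation; the candidate table and `type_mask` function are hypothetical.

```python
# Minimal sketch of type-aware decoding (hypothetical, not Tare itself):
# each candidate production carries a result type, and decoding at an
# AST hole only considers candidates matching the hole's expected type.

# Each candidate production: (code fragment, result type)
CANDIDATES = [
    ("s.length()",       "int"),
    ("s.isEmpty()",      "boolean"),
    ("s.substring(i)",   "String"),
]

def type_mask(candidates, expected_type):
    """Keep only candidates whose result type matches the hole's type."""
    return [code for code, t in candidates if t == expected_type]

# A repair model filling the condition of `if (<hole>)` would only be
# allowed to emit boolean-typed expressions:
print(type_mask(CANDIDATES, "boolean"))  # -> ['s.isEmpty()']
```

In an actual neural decoder, the same mask would be applied to the model's output distribution before sampling, which is how untypable patches are ruled out during generation rather than filtered afterwards.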


Title: Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects

Authors: Linyi Li, Yuhao Zhang, Luyao Ren, Yingfei Xiong, Tao Xie*

Abstract: With the widespread deployment of deep neural networks (DNNs), ensuring the reliability of DNN-based systems is of great importance. Serious reliability issues such as system failures can be caused by numerical defects, one of the most frequent kinds of defects in DNNs. To assure high reliability against numerical defects, in this paper we propose the RANUM approach, which includes novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes. To the best of our knowledge, RANUM is the first approach that confirms potential-defect feasibility with failure-exhibiting tests and suggests fixes automatically. Extensive experiments on benchmarks of 63 real-world DNN architectures show that RANUM outperforms state-of-the-art approaches across the three reliability assurance tasks. In addition, when the RANUM-generated fixes are compared with developers' fixes on open-source projects, in 37 out of 40 cases the RANUM-generated fixes are equivalent to or even better than the human fixes.
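To make the notion of a numerical defect concrete, a common example is taking the logarithm of a probability that can reach zero, which yields an invalid operation and can crash training or produce NaN losses; a typical fix clamps the operand into a safe range. The sketch below is a generic illustration of this defect class, not RANUM's own analysis or fix generation.

```python
# A classic DNN numerical defect (illustrative, not RANUM itself):
# log of a probability that may be exactly 0 is undefined.
import math

def unsafe_log_loss(p):
    # Defect: raises ValueError (math domain error) when p == 0.0.
    return -math.log(p)

def safe_log_loss(p, eps=1e-12):
    # Fix: clip the operand away from 0 so the log stays finite.
    return -math.log(max(p, eps))

# unsafe_log_loss(0.0) would raise; the clipped version stays finite:
print(safe_log_loss(0.0))
```

Framework-level versions of this fix (e.g. clamping a tensor before `log`) follow the same pattern: restrict the operand's value range so the operator's precondition always holds.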


*: corresponding author.
