DeepPseudo: Deep Pseudo-code Generation via Transformer and Code Feature Extraction
Pseudo-code written in natural language helps novice developers comprehend programs. However, writing such pseudo-code is time-consuming and laborious. Motivated by advances in sequence-to-sequence learning and code semantic learning, we propose DeepPseudo, a novel deep pseudo-code generation method based on the Transformer and code feature extraction. In particular, DeepPseudo uses both a Transformer encoder and a code feature extractor to encode the source code. It then uses a pseudo-code generator to perform decoding and produce the corresponding pseudo-code. We use a corpus gathered from the web application framework Django, which contains 18,805 pairs of Python statements and their corresponding pseudo-code. We first compare DeepPseudo with seven baselines from the pseudo-code generation and neural machine translation domains in terms of four performance measures, and the results show the competitiveness of DeepPseudo. We then analyze the rationality of DeepPseudo's component settings (i.e., the use of the code feature extractor, the attention mechanism, and the positional encoding method). Finally, we perform a human study to verify the effectiveness of DeepPseudo.
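To make the encoding side concrete, the sketch below illustrates the *kind* of input a code feature extractor might consume: lexical token-category features of a Python statement. This is a hypothetical toy using Python's standard `tokenize` module, not DeepPseudo's actual extractor, which is a learned component operating alongside the Transformer encoder; the function name `extract_code_features` is our own illustrative choice.

```python
import io
import tokenize
from collections import Counter

def extract_code_features(statement):
    """Toy lexical feature extractor: count token categories in a statement.

    A hand-crafted stand-in for illustration only. DeepPseudo's code
    feature extractor is a learned neural component; here we merely show
    that a source statement can be mapped to code-level features that
    complement a plain token sequence.
    """
    counts = Counter()
    for tok in tokenize.generate_tokens(io.StringIO(statement).readline):
        counts[tokenize.tok_name[tok.type]] += 1
    return dict(counts)

# Example: a single Python statement, as in the Django corpus pairs.
features = extract_code_features("x = len(items) + 1\n")
# Yields counts for categories such as NAME, OP, and NUMBER.
```

Such category counts (identifiers vs. operators vs. literals) hint at why code-specific features can help generation: a statement's structure carries information that a plain word sequence, as used in generic machine translation, does not make explicit.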