Deep learning for missing value imputation of continuous data and the effect of data discretization

Wei Chao Lin, Chih Fong Tsai*, Jia Rong Zhong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

54 Citations (Scopus)

Abstract

Real-world datasets are often incomplete and contain missing attribute values, yet many data mining and machine learning techniques cannot directly handle incomplete data. Missing value imputation is the major solution: a learning model is constructed to estimate specific values to replace the missing ones. Deep learning techniques have been employed for missing value imputation and have demonstrated their superiority over many other well-known imputation methods. However, very few studies have assessed the imputation performance of deep learning techniques on tabular or structured data with continuous values, and the effect on imputation results when the continuous data must be discretized has never been examined. In this paper, two supervised deep neural networks, the multilayer perceptron (MLP) and deep belief network (DBN), are compared for missing value imputation. In addition, the two possible orderings of the data discretization and imputation steps are examined. The results show that MLP and DBN significantly outperform the baseline imputation methods based on the mean, KNN, CART, and SVM, with DBN performing best. When continuous data must be discretized, what matters most is not the order in which the two steps are combined but the imputation algorithm chosen: final performance is much better when DBN is used for imputation, regardless of whether discretization is performed as the first or second step, than with the other imputation methods.
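The two pipeline orderings studied in the paper can be illustrated with a minimal numpy sketch. This is not the paper's method: mean imputation stands in for the MLP/DBN imputers, and equal-width binning stands in for whatever discretization scheme the paper uses; both are assumptions made purely for illustration.

```python
import numpy as np

def mean_impute(X):
    """Baseline imputer: replace each NaN with its column mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def discretize(X, n_bins=3):
    """Equal-width binning per column; NaNs are left in place."""
    X = X.copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        mask = ~np.isnan(col)
        lo, hi = col[mask].min(), col[mask].max()
        edges = np.linspace(lo, hi, n_bins + 1)
        # digitize against interior edges yields bin labels 0..n_bins-1
        col[mask] = np.digitize(col[mask], edges[1:-1])
    return X

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[rng.random(X.shape) < 0.1] = np.nan  # ~10% values missing at random

# Ordering 1: impute the continuous values first, then discretize.
X1 = discretize(mean_impute(X))

# Ordering 2: discretize first (NaNs preserved), then impute the bin
# labels; a real pipeline would round or use a classifier here.
X2 = mean_impute(discretize(X))
```

Both orderings produce a complete, discretized dataset; the paper's finding is that the choice of imputer (e.g., DBN) influences the final result far more than which ordering is used.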

Original language: English
Article number: 108079
Journal: Knowledge-Based Systems
Volume: 239
DOIs
Publication status: Published - 05 03 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 Elsevier B.V.
