College Student Retention Risk Analysis From Educational Database using Multi-Task Multi-Modal Neural Fusion

by   Mohammad Arif Ul Alam, et al.
UMass Lowell

We develop a Multimodal Spatiotemporal Neural Fusion network for Multi-Task Learning (MSNF-MTCL) to predict five important student retention risks: future dropout, next-semester dropout, type of dropout, duration of dropout and cause of dropout. First, we develop a general-purpose multi-modal neural fusion network model, MSNF, for learning a representation of students' academic information by fusing spatial and temporal unstructured advising notes with spatiotemporal structured data. MSNF combines a Bidirectional Encoder Representations from Transformers (BERT)-based document embedding framework to represent each advising note, a Long Short-Term Memory (LSTM) network to model temporal advising note embeddings, an LSTM network to model students' temporal performance variables, and students' static demographics. The final fused representation from MSNF is fed into a Multi-Task Cascade Learning (MTCL) model to build MSNF-MTCL for predicting the five student retention risks. We evaluate MSNF-MTCL on a large educational database of 36,445 college students spanning 18 years, where it delivers promising performance compared with the nearest state-of-the-art models. Additionally, we test the fairness of the model given the existence of biases.




I Introduction

The U.S. National Center for Education Statistics (NCES) reports that in the United States, the average retention rate for higher education institutions is 71% [21]. Moreover, 57% of admitted students do not complete four-year colleges within six years, and 33% of them drop out of college without any degree [21]. For some students, dropping out is the culmination of years of academic hurdles, missteps, and wrong turns. For others, the decision to drop out is a response to conflicting life pressures, the need to help support their family financially, or the demands of caring for siblings or their own child. Dropping out is sometimes about students being bored and seeing no connection between academic life and "real" life. It is about young people feeling disconnected from their peers and from teachers and other adults at school [6]. Although the reasons for dropping out vary, the consequences of the decision are remarkably similar. Low retention rates impact not only the financial well-being of individuals but also the economy as a whole: college dropouts are more likely to head down a path that leads to lower-paying jobs, poorer health, and the possible continuation of a cycle of poverty that creates immense challenges for families, neighborhoods, and communities [21]. Low retention rates also adversely affect the reputation of the educational institution and can lead to loss of funding and an inability to compete for quality students [9]. Thus, improving student retention is of paramount importance at institutions of higher education.

Many researchers have modeled factors impacting student dropout from large-scale educational databases using statistical and machine learning models. Most have focused on static or temporal structured data, such as GPA and SAT scores, that are readily available in institutional databases. Some researchers have proposed unstructured text analysis, mining advising notes, forum posts, social media statuses, online chats and emails with natural language processing techniques to predict student dropout [15, 29]. However, no prior work has combined structured and unstructured data in a spatiotemporal fashion, which holds significant promise in this domain of research. We propose MSNF-MTCL with the following key contributions:

  • We develop a novel multimodal spatiotemporal neural fusion model, MSNF, for educational databases, which fuses BERT embeddings extracted from temporal student advising notes, temporal student performance variables and static student demographic information via a temporal document encoder, a temporal performance encoder and a static demographic encoder, respectively.

  • We develop a cascaded information network-based Multi-Task Cascade Learning (MTCL) layer on top of the fusion layer to build our core MSNF-MTCL model, placing lower-level tasks at earlier layers so that the features learned for these tasks may be used by higher-level tasks in the 5-task MTCL problem.

  • We evaluate MSNF-MTCL on large-scale data collected from a university in a developing country, comparing its performance with the nearest state-of-the-art solutions.

  • Additionally, we test for the existence of biases and apply a bias mitigation technique to confirm the fairness of MSNF-MTCL.

II Related Works

Traditionally, education researchers have run surveys to identify the factors behind student dropout, which include academic difficulty, adjustment problems, lack of clear academic goals, lack of commitment, inability to integrate with the college community, uncertainty, incongruence and isolation [29]. These surveys point to key factors such as past and current academic success, high school GPA and SAT scores [23], major and number of credit hours taken during the first semester [5], and the effect of financial aid [13]. Machine learning on educational databases is relatively new [1, 14, 25, 8, 22]. Perez et al. [1] proposed logistic regression and decision tree based dropout prediction from static students' data. [14] proposed a link-based cluster ensemble for predicting student dropout from mixed-type (categorical and continuous) educational datasets. [25] presented a benchmark student dropout definition and a dropout prediction paradigm by developing machine and deep learning techniques, along with their related privacy concerns, from static and temporal structured data. A logit leaf model (LLM) has been proposed on students' classroom characteristics, cognitive and behavioral engagement variables and other static variables available from online student enrollment databases. A generalized mixed-effects random forest model has been proposed to analyze hierarchical data and predict engineering students' dropout from static data in a large-scale educational dataset [22]. On the other hand, student dropout prediction from advising notes has been explored only once, in work that proposed a sentiment analysis technique to mine advising notes for predicting students' dropout [15]; that paper also proposed an explanation, i.e., a weighted ranking of the sentiments contributing to the prediction.

[32] proposed a fair student dropout prediction system from an educational database. [24] analyzed the challenges of student dropout prediction from static databases, covering definitions, applicable machine learning techniques, evaluation measures and privacy concerns.

Combining structured and unstructured data has been popular in image processing content learning and electronic health record analytics for decades. [30] proposed a deep spatial CNN model to extract features from image-text pairs. [28] presented an LSTM-CNN fusion to combine clinical images and electronic health records for predicting clinical events in derived cohorts. [31] utilized an unstructured-structured text fusion model for predicting cognitive engagement. Similar approaches have been applied in many domains, such as mortality prediction [3], structured visualization from unstructured texts [17] and financial transaction prediction [2].

Multi-task learning (MTL) has been investigated mostly by computer vision researchers, whose approaches are categorized as shared trunk, cross-talk, prediction distillation and task routing. In NLP, MTL falls into several categories. Traditional (non-attention) feed-forward neural networks focused on a shared global feature extractor followed by task-specific output branches, where the features are word representations. Recurrent neural network models in MTL mostly focused on novel recurrent architectures adapted to the multi-task setting with various parameter-sharing schemes, i.e., one-to-one, one-to-many, many-to-many or task-specific LSTMs [20, 11]. Cascaded information techniques place lower-level tasks at earlier layers so that the features learned for these tasks may be used by higher-level tasks [27]. Adversarial feature separation techniques introduce an adversarial learning framework for MTL that distills learned features into task-specific and task-agnostic subspaces; this architecture comprises a single shared LSTM layer and one task-specific LSTM layer per task [26]. BERT-based MTL mostly adds shared BERT embedding layers on top of the traditional, LSTM or cascaded information techniques [19].

To the best of our knowledge, MSNF-MTCL is the first of its kind: a Multimodal Spatiotemporal Neural Fusion MTL model combining structured, unstructured and spatiotemporal contexts on educational data. More specifically, we design a multimodal neural network model to fuse students' static structured demographic information, temporal structured performance information and temporal unstructured advising notes, and develop a novel classification model for predicting student dropout, next-semester dropout and dropout cause.

III Data Description

We obtained an educational database from a private university located in a developing country, consisting of 36,445 undergraduate students, where the female (10,237) to male (26,208) ratio (28% to 72%) is similar to the national literacy statistics of the country. Among the students, 14% dropped out at some point in their study (female and male dropout rates are 11% and 15%, respectively). Whenever a dropout incident occurred, the dropped-out student was contacted by the university counselling office via phone to analyze the incident, which was categorized into two classes: (1) temporary dropout and (2) permanent dropout. The causes of permanent dropout were sub-categorized into 10 classes (financial, family, marriage, sickness and so on) and the causes of temporary dropout into 14 classes (financial, internship, sickness, accident, marriage, COVID-19 related, family member death, struggling with grades and so on). Temporary and permanent dropout causes overlap in 9 categories; in total, 15 unique causes represent all kinds of dropout. It should be noted that location transfers and university transfers were not considered dropouts under the inclusion criteria, and this information has been removed from all statistics. At admission, students provided demographic data related to their personal profile, prior education, family and finances. Since admission, the university administration has recorded students' temporal performance in each course taken, along with some administrative structured information such as payment dues, blocks on registering for the next semester (due to critical incidents or significant past payment dues), scholarships awarded, etc. Each semester, students were required to visit their academic advisor to discuss various topics related to academia, typically in the first month of the semester.
Sometimes, students were blocked from registering for the next semester without consulting an academic advisor, for reasons such as poor grades, excessive absence or payment dues. However, students could also schedule a meeting with their academic advisor at any time of the semester to discuss various topics (from personal to academic). It should be noted that only the primary cause of dropout was recorded during the counselling session. Table I and Table II present the statistics of the dataset and the features derived/extracted from the database, respectively.

        Count          Dropout       Temporary     Permanent
Female  5,498 (24%)    1,103 (11%)   717 (65%)     386 (35%)
Male    17,897 (76%)   3,857 (15%)   2,931 (76%)   926 (15%)
Total   23,395         4,960 (14%)   3,648 (74%)   1,312 (26%)

TABLE I: Description of the obtained educational database


Static and structured (Demographic): birth date, age, gender, religion, starting major, transferred credits, blood group, birth place, permanent address, local address, secondary school grade, higher school grade, marital status, source of finance, part/full time, local guardian, parents' financial income

Temporal and structured (Performance): new credits taken, credits retaken, passing credits, failed credits, overall attendance, average semester starting GPA, average semester GPA, average semester ending GPA, number of exams unattended since admission, number of exams unattended in this semester, number of counselling sessions scheduled, amount of payment due in this semester, number of payment dues since admission, study duration, blocked from registering in next semester, number of blocks since admission, scholarship amount, accommodation status (on/off campus), total scholarship to date, average scholarship per semester

Temporal (Advising Notes): structured: reason of counselling visit, counselling conduct date, counselling result (no result or cause of dropout); unstructured: counselling note

Dropout Causes (* = shared by temporary and permanent dropout): *financial, *family, *marriage, *physically ill, *death of family member, *personal, death, *accident, *struggling with grades, *COVID-19 family death, COVID-19 financial, COVID-19 online class attending hardship, internship, traveling, mentally ill

TABLE II: Description of the features provided/generated from the educational database
Fig. 1: Overall Architecture of Multimodal Spatiotemporal Neural Fusion (MSNF) Network model for predicting student dropout risks i.e. dropout, next semester dropout and cause of dropout

IV Multi-Task Multi-Modal Neural Fusion Model for Predicting Student Retention Risks

In this section, we describe the problem formulation, the multi-modal spatiotemporal neural fusion and the multi-task neural cascade networks used to solve student retention risk prediction. The overall framework is shown in Fig. 1. The lower module, "Multi-Modal Fusion", generates a spatiotemporal fused layer that is shared across all tasks, while the upper module, "Multitask Neural Cascades", represents the task-specific outputs, in our case five.

IV-A Multimodal Spatiotemporal Neural Fusion

This module consists of advising note representation via BERT-based document embedding, a sequential encoder network over the temporal advising-note embeddings, a temporal structured performance information encoder, a static information encoder, and a fusion layer shared by each task of the Multi-Task Cascade network. The input consists of temporal structured performance data, static student demographic data and temporal student advising/counselling notes. The output of this layer is a fused representation of these spatiotemporal inputs.

IV-A1 Static Information Encoder

The static student description data (Table II) is converted into one-hot vectors through the static student description encoder. This encoder consists of a series of 1D convolution (CNN) layers, each followed by batch normalization, max pooling and dropout. The first 1D CNN layer takes the one-hot encoded static feature and structured demographic data (size: 120) as input and applies 8 filters of size 11. The outputs of the first CNN layer are passed to the second CNN layer (16 filters of size 5), whose outputs are passed to the third CNN layer (32 filters of size 3). Finally, the summarized spatial features of a static input are passed to a flatten layer to produce a 1D feature vector of size 50.
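The one-hot encoding step that feeds this encoder can be sketched as follows. This is an illustrative sketch, not the authors' code: the field names and vocabularies below are hypothetical placeholders, and the real input vector has size 120.

```python
import numpy as np

# Hypothetical demographic fields and vocabularies (assumptions for
# illustration; the paper's actual fields are listed in Table II).
FIELDS = {
    "gender": ["female", "male"],
    "marital_status": ["single", "married"],
    "finance_source": ["family", "scholarship", "self"],
}

def encode_static(record):
    """Concatenate per-field one-hot vectors into one 1D feature vector,
    as fed to the first 1D CNN layer of the static encoder."""
    parts = []
    for field, vocab in FIELDS.items():
        vec = np.zeros(len(vocab))
        vec[vocab.index(record[field])] = 1.0
        parts.append(vec)
    return np.concatenate(parts)

x = encode_static({"gender": "female", "marital_status": "single",
                   "finance_source": "scholarship"})
print(x)        # [1. 0. 1. 0. 0. 1. 0.]
print(x.shape)  # (7,)
```

With the full feature set of Table II, the concatenated vector reaches the size-120 input the paper describes.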

IV-A2 Temporal Student Performance Encoder

To capture the longer dynamics in the temporal dimension of the temporal student performance data, we use two consecutive LSTM layers: the first with 75 neurons and the second with 55 neurons. Each LSTM layer is followed by a dropout and batch normalization layer. Next, a dense layer of 50 neurons, followed by dropout and batch normalization, is connected to another dense layer with 40 neurons. Finally, informative features of the input are extracted to generate the final encoded representation.

IV-A3 Sequential Advising Note Encoder

The input here is a temporal document sequence. At first, we fine-tune pre-trained BERT embeddings as proposed in [10]. Each document is considered a sequence of sentences, and each sentence a sequence of words. Following [10], the token sequence is prefixed with the [CLS] token and sentences are separated by the [SEP] token. We then map the tokenized document into a sequence of input embedding vectors, one per token, constructed by summing the corresponding word, segment and positional embeddings; this is called the input representation. Next, the multi-layered bidirectional Transformer encoder (BERT) [10] maps the input representation vectors into a sequence of contextual embedding vectors, which are passed through a Bidirectional LSTM (BiLSTM) [33]. The BiLSTM layer concatenates the outputs of two hidden layers of opposite direction and can capture long-term dependencies in sequential text data. A max-pooling layer takes the hidden states of the BiLSTM as input and outputs the final text representation [33].
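The special-token layout described above can be sketched as follows. The note text is invented for illustration (not from the dataset), and real BERT tokenizers additionally use subword pieces; this only shows where [CLS] and [SEP] go.

```python
def layout_note(sentences):
    """Prefix the document with [CLS] and terminate each sentence
    with [SEP], mirroring the BERT input layout described in [10]."""
    tokens = ["[CLS]"]
    for sent in sentences:
        tokens.extend(sent.lower().split())
        tokens.append("[SEP]")
    return tokens

note = ["Student reported financial stress", "Advised to apply for aid"]
print(layout_note(note))
# ['[CLS]', 'student', 'reported', 'financial', 'stress', '[SEP]',
#  'advised', 'to', 'apply', 'for', 'aid', '[SEP]']
```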

IV-A4 Student Spatiotemporal Information Representation

The final student spatiotemporal information representation is obtained by concatenating the representations of the sequential advising notes, the temporal student performance, and the static student demographic information; the size of each student's fused vector is the sum of the three encoder output sizes.
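The fusion step is a plain concatenation of the three encoder outputs, as in this minimal sketch. The 64-dimensional note-encoder output is an assumed placeholder (the paper does not state that size); 40 and 50 follow the performance and static encoder descriptions above.

```python
import numpy as np

# Minimal sketch of the fusion layer: concatenate the three encoder outputs.
z_notes = np.random.rand(64)  # sequential advising note encoder (assumed size)
z_perf  = np.random.rand(40)  # temporal performance encoder output
z_demo  = np.random.rand(50)  # static demographic encoder output

z = np.concatenate([z_notes, z_perf, z_demo])
print(z.shape)  # (154,)
```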

IV-B Multi-Task Neural Cascade Networks

We formulate the final objective as a hierarchical composition of five tasks, for future dropout, type of dropout, next semester dropout, duration of dropout and cause of dropout respectively, and train our student retention risk predictor by developing a Multimodal Spatiotemporal Neural Fusion network for MTL (MSNF-MTCL). We formulate two types of losses:

  • Categorical cross-entropy loss for classification tasks:

    L_cls = - Σ_i y_i log(ŷ_i),   (1)

    where ŷ denotes the predicted probability of the classification task and y denotes the ground-truth labels.

  • Euclidean loss for regression tasks:

    L_reg = || ŷ - y ||²,   (2)

    where ŷ is the continuous estimated regression value and y is the ground truth.
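The two losses can be sketched in plain NumPy for illustration (the paper implements them in Keras):

```python
import numpy as np

def cross_entropy(y_true, y_prob, eps=1e-12):
    """Categorical cross-entropy (Eqn. 1): -sum(y * log(p)) over classes.
    Probabilities are clipped to avoid log(0)."""
    return -np.sum(y_true * np.log(np.clip(y_prob, eps, 1.0)))

def euclidean_loss(y_true, y_pred):
    """Squared Euclidean distance (Eqn. 2) between prediction and truth."""
    return np.sum((np.asarray(y_pred) - np.asarray(y_true)) ** 2)

print(cross_entropy(np.array([0, 1]), np.array([0.2, 0.8])))  # ≈ 0.223
print(euclidean_loss([2.0], [3.5]))                           # 2.25
```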

We define each task of our multi-task model, along with the final multi-task learning scheme, as follows:

IV-B1 Future Dropout (FD)

This binary task involves predicting whether a student will drop out in the future (true/false), irrespective of semester or duration. The learning objective is formulated as a two-class classification problem. For each sample, we use the cross-entropy loss of Eqn. 1, where ŷ is the predicted probability of future dropout and y denotes the ground-truth label.

IV-B2 Type of Dropout (TD)

This binary task aims to further categorize a dropout as temporary or permanent. Similar to Eqn. 1, ŷ is the predicted probability of the type of dropout (temporary or permanent) and y denotes the ground-truth label.

IV-B3 Next Semester Dropout (ND)

This binary task aims to predict whether a predicted dropped-out student will drop out in the next semester. We use the cross-entropy loss of Eqn. 1, where ŷ is the predicted probability of next-semester dropout and y denotes the ground-truth label.

IV-B4 Duration of Dropout (DD)

This regression task aims to predict how many semesters a student survives if dropout is predicted. We use the Euclidean loss of Eqn. 2, where ŷ is the continuous estimated duration of dropout in semesters and y is the ground truth.

IV-B5 Cause of Dropout (CD)

This task aims to predict the cause of dropout, i.e., one of the 15 causes listed in Table II. We use the cross-entropy loss of Eqn. 1, where ŷ is the predicted probability of each cause of dropout and y denotes the ground-truth label.

IV-B6 Multi-Conditional Training

We employ five different tasks on our encoded student information space, so each training sample carries several types of labels. While training, we follow the hierarchy FD → TD → ND → DD → CD and form the overall learning target as a weighted sum of the task losses:

  L = λ_FD L_FD + λ_TD L_TD + λ_ND L_ND + λ_DD L_DD + λ_CD L_CD.   (3)

While computing L, we mask out the losses of tasks whose labels are undefined for a sample: if no future dropout is observed, only the FD loss is active; if a dropout is observed but a downstream label (e.g., the duration for a permanent dropout) does not apply, the corresponding loss is set to zero. We compute all task losses together as per Eqn. 3 in all other cases.
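The multi-conditional combination can be sketched as below. The unit weights and the exact masking rules here are assumptions for illustration; the paper does not fully specify its λ values, only that undefined downstream labels contribute no loss.

```python
import numpy as np

def multi_task_loss(losses, future_dropout, permanent, weights=None):
    """Weighted sum of the five task losses (Eqn. 3), with downstream
    losses masked to zero when their labels are undefined for a sample.
    losses: dict with keys FD, TD, ND, DD, CD."""
    w = weights or {t: 1.0 for t in losses}       # assumed unit weights
    mask = {t: 1.0 for t in losses}
    if not future_dropout:        # no dropout: only the FD loss is active
        for t in ("TD", "ND", "DD", "CD"):
            mask[t] = 0.0
    elif permanent:               # permanent dropout: duration is undefined
        mask["DD"] = 0.0
    return sum(w[t] * mask[t] * losses[t] for t in losses)

losses = {"FD": 0.3, "TD": 0.2, "ND": 0.1, "DD": 0.5, "CD": 0.4}
print(multi_task_loss(losses, future_dropout=False, permanent=False))  # 0.3
print(multi_task_loss(losses, future_dropout=True, permanent=True))    # 1.0
```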

V Experiments

Task (#Classes)   B1            B2   B3   V1   V2   V3 (Ours)
FD (2)            75.47±6.8     …    …    …    …    98.78±0.01
TD (2)            70.77±8.4     …    …    …    …    89.73±0.01
ND (2)            68.84±8.3     …    …    …    …    93.25±0.01
DD (RMSD)         2.3±0.56      …    …    …    …    0.045±0.002
CD (15)           60.27±11.53   …    …    …    …    85.53±0.02

TABLE III: Comparison of MSNF-MTCL performance on our dataset with different baseline models

V-A Baseline Models

Since multi-task multi-modal neural fusion on educational datasets is a novel problem for student retention risk estimation, we could not find state-of-the-art solutions matching our problem to serve as baselines. We therefore implemented a few of the nearest problems along with their solutions and formulated similar problems using our proposed MSNF-MTCL framework. In addition, to establish the importance of the different modules of our framework, we developed versions of MSNF-MTCL consisting of different combinations of the proposed modules. The baselines and the different versions of the MSNF-MTCL framework are described below:

  • B1 (Jayaraman Model) [15]: This framework uses only advising notes; it extracts features via a lexicon-based sentiment analysis technique and applies an SVM to the features to predict student dropout. It uses the Bing Lexicon model for feature extraction, which consists of 6,800 words: 2,000 positive and 4,800 negative sentiments.

  • B2 (Pellagatti Model) [22]: This framework considers students' static and temporal structured data to build a generalized mixed-effects random forest (GMERF).

  • B3 (Single Task Fusion, Replacing BERT with Doc2Vec) [33]: This framework is the closest to our solution; it was developed to predict patient mortality from electronic health records (EHR). It follows a spatiotemporal neural fusion of patient notes, patients' static demographic data and patients' temporal hospital information into a fused layer, which is used to solve a single task: predicting patient mortality. Instead of lexicon tokenization and a BERT model for encoding patient notes, this framework uses Doc2Vec embeddings [16].

  • V1 (MSNF-MTCL with Structured Data Only): This is a version of our proposed core MSNF-MTCL model where we completely removed Temporal Advising Notes input and considered only Structured data i.e. Temporal Student Performance and Static Student Description inputs along with their encoders.

  • V2 (MSNF-MTCL with Unstructured Advising Notes Only): This is a version of our proposed core MSNF-MTCL model where we included only Temporal Advising Notes input and its corresponding encoder.

  • V3 (MSNF-MTCL): This is a complete MSNF-MTCL model including all modules and inputs.

Method            SPD           EOD           AOD           DI            FD      TD      ND      DD      CD
Fairness target   -0.1 to 0.1   -0.1 to 0.1   -0.1 to 0.1   0.8 to 1.2    98.78   89.73   93.25   0.045   85.53
Initial           0.25          -0.18         -0.19         0.53
RW                0.05          -0.03         -0.15         0.95          91.85   86.73   90.55   0.223   81.47
AB                0.09          -0.07         -0.11         1.0           90.34   87.45   89.65   0.23    81.34
ROBC              0.06          -0.11         0.08          0.91          91.24   86.43   89.38   0.09    80.44
EOPP              0.18          -0.15         -0.07         0.88          90.75   87.47   85.76   0.23    80.43
DIR               0.06          -0.09         -0.11         0.11          88.36   85.83   86.99   0.24    83.05
LFR               0.20          -0.10         0.01          1.0           90.77   83.84   88.87   0.145   82.75
CEOP              0.05          -0.05         -0.11         0.89          89.76   85.4    90.93   0.049   80.34
PR                0.06          -0.09         -0.04         0.91          93.53   88.83   92.54   0.055   83.46

TABLE IV: Bias detection and mitigation experiment results for each task: future dropout (FD), type of dropout (TD), next semester dropout (ND), duration of dropout (DD) and cause of dropout (CD). Columns show the bias detection metrics Statistical Parity Difference (SPD), Equal Opportunity Difference (EOD), Average Odds Difference (AOD) and Disparate Impact (DI), followed by task performance; rows show the bias mitigation techniques Reweighing (RW), Adversarial Debiasing (AB), Reject Option Based Classification (ROBC), Equalized Odds Post-Processing (EOPP), Disparate Impact Remover (DIR), Learning Fair Representation (LFR), Calibrated Equalized Odds Post-processing (CEOP) and Prejudice Remover (PR)

V-B Results

We considered accuracy and standard deviation as evaluation metrics for the classification tasks, and root mean squared deviation (RMSD) for the regression task. We implemented the baseline algorithms and our framework using the Python-based Keras library. We train the model with a learning rate of 0.001 for 16k iterations and 0.0001 for the next 5k until training converges, on 4 GPUs, each GPU holding 1 mini-batch (so the effective mini-batch size is 4×).

While developing the baseline algorithms, we designed 5 single-task models for the 5 retention risks. We used 75% of students' data for training and the remaining 25% for testing, and repeated the experiment 10 times in a 10-fold cross-validation setup to generate the results. We also utilized the Synthetic Minority Oversampling Technique (SMOTE) to correct class imbalance [12]. SMOTE is a popular and robust technique that combines oversampling the minority class with undersampling the majority class, which yields better classifier performance than oversampling or undersampling alone. Table III shows the detailed results of our experiments and comparisons.
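The core interpolation idea behind SMOTE can be sketched as below. This is a simplified illustration, not the paper's implementation: full SMOTE (e.g. imbalanced-learn's `SMOTE`) picks neighbors via k-nearest-neighbors rather than at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_minority(X_min, n_new):
    """Synthesize n_new minority samples by interpolating along the
    segment between two existing minority samples (SMOTE-style)."""
    synth = []
    for _ in range(n_new):
        i, j = rng.choice(len(X_min), size=2, replace=False)
        gap = rng.random()                       # position along the segment
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synth)

X_min = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # toy minority class
X_new = oversample_minority(X_min, n_new=4)
print(X_new.shape)   # (4, 2)
```

Each synthetic point lies between two real minority samples, so the oversampled class stays inside the region the minority data already occupies.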

Fig. 2: Accuracy changes of five different retention risk prediction tasks using our framework over number of available advising notes
Fig. 3: Causes of Dropout prediction results using our overall framework. The causes of dropout are indexed as: 1. financial, 2. family, 3. marriage, 4. physically ill, 5. death of family member, 6. personal, 7. own death, 8. accident, 9. struggling with grades, 10. COVID-19 family death, 11. COVID-19 financial, 12. COVID-19 online class attending hardship, 13. internship, 14. traveling, 15. mentally ill. We removed cause 7 (own death) for ethical reasons.

In Table III, we can clearly see that our proposed method (V3, Ours) performs better than all baseline frameworks (B1, B2 and B3) on every student retention risk classification/estimation task. Taking a closer look, the structured-data-only (V1) and advising-notes-only (V2) versions of our framework not only outperform their related baselines (the advising-notes-only B1 and the structured-data-only B2); they also outperform the state-of-the-art single-task spatiotemporal fusion model with Doc2Vec embeddings (B3), which has been successfully applied to EHR data before.

V-C Bias Detection and Mitigation

Table I shows that the data is biased in terms of gender (the female-male ratio is 28% to 72%), which poses a potential threat to AI fairness in our model. We utilize the IBM AI Fairness 360 (AIF360) tool to detect and mitigate biases in dropout prediction with respect to gender, considering "Male" as the privileged group [4]. Table IV shows AIF360's 4 bias detection metrics, their corresponding fairness target ranges, and the detection metrics produced under 8 bias mitigation techniques. The central notions of this method are: (1) not all bias mitigation techniques are appropriate for every dataset; (2) to select the right mitigation technique, the bias detection metrics should be fair under the maximum number of metrics; (3) the accuracy drop due to bias mitigation should be minimal. Table IV shows the final result of our bias detection and mitigation test for student dropout (only the first task of our multi-task model), where we can see that the Prejudice Remover technique provides maximum fairness (fair under all 4 bias detection metrics) with the least accuracy drop (3.33%). A similar analysis can be carried out for the remaining tasks.
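Two of the detection metrics in Table IV can be computed directly from predicted labels, as in this sketch. The group assignments and predictions below are invented for illustration; AIF360 provides the same metrics (e.g. via `BinaryLabelDatasetMetric`).

```python
import numpy as np

def spd_and_di(y_pred, privileged):
    """Statistical Parity Difference (SPD) and Disparate Impact (DI) for a
    binary favorable outcome between unprivileged and privileged groups:
    SPD = P(fav | unpriv) - P(fav | priv),  DI = P(fav | unpriv) / P(fav | priv)."""
    p_priv = y_pred[privileged].mean()
    p_unpriv = y_pred[~privileged].mean()
    return p_unpriv - p_priv, p_unpriv / p_priv

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])            # toy predictions
privileged = np.array([True, True, True, True,
                       False, False, False, False])    # toy group labels
spd, di = spd_and_di(y_pred, privileged)
print(round(spd, 2), round(di, 2))   # -0.5 0.33
```

The fairness targets in Table IV (SPD within -0.1 to 0.1, DI within 0.8 to 1.2) would flag this toy example as biased.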

V-D Discussion

Fig. 2 illustrates how prediction accuracy changes with the number of available advising notes per student across the five retention risk tasks; the different versions of our method (V1, V2 and V3) outperform the baseline methods significantly at any number of available advising notes. It is also clear that the prediction accuracy of each task increases as the number of available advising notes per student in the testing data increases. Fig. 3 illustrates the prediction accuracy for each individual dropout cause (15 causes) using our proposed model (we removed cause 7, "own death", for ethical reasons). Predicting dropout due to financial condition, family reasons, marriage, struggling with grades, COVID-19-related financial hardship, COVID-19-related difficulty attending online classes, and mental illness is extremely accurate (95%+). However, it has been extremely difficult to predict dropout related to physical illness, death of a family member and personal problems from the educational data.

V-E Limitations and Future Work

We utilized a large-scale, 18-year educational dataset from only one university, which may introduce distribution biases. To address biases, we additionally tested our framework with bias mitigation. Moreover, our reproduction of the baseline models and their evaluation on our dataset provide ample evidence that our model outperforms the baseline frameworks. In our framework, each lower-level cascaded task depends on the classification performance of the upper-level tasks, which we did not align with state-of-the-art models' implementations. The causes of dropout were labeled on a rolling basis: when a faculty advisor thought a current advisee needed to be assigned a new cause, he or she reported it to the system for insertion, and an administration officer reviewed the cause and accepted the inclusion request if it was valid. Our dataset consists of pre- and post-COVID-19-pandemic data; however, due to the extremely small amount of post-COVID-19 data, we could not develop a new model to identify COVID-19 impacts on student dropout. In the current system, a faculty advisor can assign only a single cause per advising note, which made it difficult to predict multiple causes of a dropout incident, a common real-life case. In the future, we aim to apply causal inference and information retrieval techniques for fact finding to describe COVID-19 impacts and extract multiple dropout causes more evidently. We also used a pre-trained BERT embedding model trained on Wikipedia data. In the future, we plan to develop a new embedding, "Educational BERT (EBERT)", trained only on educational advising notes to enhance the efficiency of any student retention risk prediction.

VI Conclusion

Structured-unstructured data fusion in the spatiotemporal domain across educational institutions has not been properly exploited by researchers, due to the unavailability of such data and the challenges of combining multi-modal educational signals. Our approach, which provides the highest student dropout prediction accuracy reported to date, can potentially be adopted by educational policy makers and university management stakeholders in many other domains. Our novel problem formulation, multi-task estimation of 5 different student retention risks, and our solution, an efficient multi-task multi-modal spatiotemporal neural network model, will open the door to many unsolved problems in educational data mining research. Moreover, the framework can be adapted to other databases, including employee, email, electronic health record, or search databases, and can be utilized to solve extremely complex problems.
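The cascaded multi-task structure, where each lower-level retention-risk task also consumes the upper-level task's prediction, can be sketched abstractly as follows. This is an illustrative sketch only: the function name `mtcl_cascade`, the head interface, and the feature representation are assumptions, not the authors' implementation.

```python
def mtcl_cascade(fused, heads):
    """Run task heads in cascade order over the fused MSNF representation.

    fused: list of floats, the fused multi-modal representation.
    heads: ordered list of (task_name, head_fn) pairs, e.g.
           future dropout -> next-semester dropout -> type -> duration -> cause.
    Each head sees the fused vector plus all upper-level predictions,
    so upper-level errors propagate downward (the limitation noted above).
    """
    preds = {}
    context = list(fused)
    for name, head in heads:
        p = head(context)
        preds[name] = p
        context = context + [p]  # append prediction for the next task
    return preds
```

The cascade order matters: a wrong prediction for future dropout feeds into every downstream task, which is why the paper reports lower-level performance as dependent on upper-level accuracy.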

