Aerodynamic Data Predictions Based on Multi-task Learning

10/15/2020
by Liwei Hu, et al.

The quality of a dataset is one of the key factors affecting the accuracy of aerodynamic data models. For example, in the uniformly sampled Burgers' dataset, the scarce high-speed data are overwhelmed by massive low-speed data. Predicting high-speed data is harder than predicting low-speed data because few high-speed samples are available; in this sense, the quality of the Burgers' dataset is unsatisfactory. To improve dataset quality, traditional methods usually apply data resampling to generate enough data for the under-represented parts of the original dataset before modeling, which increases computational cost. Recently, mixtures of experts have been used in natural language processing to handle different parts of sentences, suggesting a way to eliminate data resampling in aerodynamic data modeling. Motivated by this, we propose multi-task learning (MTL), a dataset-quality-adaptive learning scheme that combines task allocation and aerodynamic characteristics learning to distribute the burden of the overall learning task. Task allocation divides the whole learning task into several independent subtasks, while aerodynamic characteristics learning learns these subtasks simultaneously to achieve better precision. Two experiments with poor-quality datasets are conducted to verify the quality-adaptivity of the MTL. The results show that the MTL is more accurate than FCNs and GANs on poor-quality datasets.
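The abstract does not include code, but the described scheme resembles a mixture-of-experts model: a gating network performs the "task allocation" over several expert subnetworks, which then learn their subtasks jointly on the raw, unresampled dataset. Below is a minimal sketch of that idea in PyTorch; all architecture choices (number of experts, layer sizes, the placeholder Burgers'-style inputs) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a mixture-of-experts multi-task model, in the spirit of the
# abstract's task allocation + simultaneous subtask learning. Sizes and
# expert count are assumed for illustration, not taken from the paper.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, in_dim=2, hidden=64, out_dim=1, n_experts=4):
        super().__init__()
        # Each expert is a small fully connected regressor for one subtask.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, out_dim),
            )
            for _ in range(n_experts)
        ])
        # The gate softly allocates each sample to the experts, so that
        # hard-to-fit regions (e.g. sparse high-speed data) can be served
        # by dedicated experts instead of being drowned out.
        self.gate = nn.Sequential(nn.Linear(in_dim, n_experts),
                                  nn.Softmax(dim=-1))

    def forward(self, x):
        weights = self.gate(x)                                   # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], -1)  # (batch, out_dim, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(-1)          # gated combination

# Joint training on the whole dataset, with no resampling step:
model = MixtureOfExperts()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 2)   # placeholder inputs, e.g. (space, time) for Burgers' data
y = torch.rand(256, 1)   # placeholder targets
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the gate and the experts are trained together, the allocation of subtasks adapts to the data distribution itself, which is what lets such a scheme cope with uneven dataset quality without a separate resampling pass.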

Related research

08/24/2023 · Label Budget Allocation in Multi-Task Learning
The cost of labeling data often limits the performance of machine learni...

08/11/2020 · Modeling Prosodic Phrasing with Multi-Task Learning in Tacotron-based TTS
Tacotron-based end-to-end speech synthesis has shown remarkable voice qu...

02/18/2020 · Multi-Task Learning from Videos via Efficient Inter-Frame Attention
Prior work in multi-task learning has mainly focused on predictions on a...

02/14/2021 · Distillation based Multi-task Learning: A Candidate Generation Model for Improving Reading Duration
In feeds recommendation, the first step is candidate generation. Most of...

06/20/2021 · Heterogeneous Multi-task Learning with Expert Diversity
Predicting multiple heterogeneous biological and medical targets is a ch...

10/10/2019 · High-speed Privacy Amplification Scheme using GMP in Quantum Key Distribution
Privacy amplification (PA) is the art of distilling a highly secret key ...

03/23/2022 · A Framework for Fast Polarity Labelling of Massive Data Streams
Many of the existing sentiment analysis techniques are based on supervis...
