Visual Transformer for Soil Classification

09/07/2022
by   Aaryan Jagetia, et al.

Our food security is built on the foundation of soil. Without healthy soils, farmers could not supply the food, fiber, and fuel we depend on. Accurately predicting the type of soil helps in planning its usage and thus increases productivity. This research employs state-of-the-art Visual Transformers and compares their performance with models such as SVM, AlexNet, ResNet, and CNN. Furthermore, this study also differentiates between Visual Transformer architectures. For soil-type classification, the dataset consists of four types of soil samples: alluvial, red, black, and clay. The Visual Transformer model outperforms the other models in both test and train accuracy, attaining 98.13% and exceeding the performance of the other models by at least 2%. Hence, the novel Visual Transformers can be used for computer vision tasks including soil classification.
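To make the approach concrete, here is a minimal numpy sketch of the core Vision Transformer forward pass applied to a 4-class soil image (alluvial, red, black, clay). The patch size, embedding width, single attention head, and random weights are illustrative assumptions, not the paper's actual architecture or trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, patch=16):
    """Split an HxWxC image into flattened non-overlapping patches."""
    h, w, c = img.shape
    p = img.reshape(h // patch, patch, w // patch, patch, c)
    return p.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, wq, wk, wv):
    """Single-head self-attention over the patch sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

d_model, n_classes = 64, 4                     # 4 soil classes
img = rng.random((224, 224, 3))                # stand-in soil image
tokens = patchify(img)                         # (196, 768) patch tokens
w_embed = rng.standard_normal((tokens.shape[1], d_model)) * 0.02
x = tokens @ w_embed                           # linear patch embedding
cls = np.zeros((1, d_model))                   # [CLS] token (zero-init here)
x = np.vstack([cls, x])                        # prepend class token
x = x + rng.standard_normal(x.shape) * 0.02    # stand-in positional embedding

wq, wk, wv = (rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(3))
x = x + attention(x, wq, wk, wv)               # one residual attention block

w_head = rng.standard_normal((d_model, n_classes)) * 0.02
probs = softmax(x[0] @ w_head)                 # classify from the [CLS] token
print(probs.shape)                             # (4,) class probabilities
```

In practice one would fine-tune a pretrained ViT backbone with a 4-way classification head rather than train from random weights; the sketch only shows how image patches become a token sequence that a transformer can classify.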


