Visual Transformer for Soil Classification

by   Aaryan Jagetia, et al.

Our food security is built on the foundation of soil. Without healthy soils, farmers would be unable to supply us with food, fiber, and fuel. Accurately predicting the type of soil helps in planning how the soil is used and thus increases productivity. This research employs state-of-the-art Visual Transformers and compares their performance with models such as SVM, AlexNet, ResNet, and CNN. Furthermore, the study also differentiates between Visual Transformer architectures. For soil-type classification, the dataset consists of four types of soil samples: alluvial, red, black, and clay. The Visual Transformer model outperforms the other models in both test and train accuracy, attaining 98.13%, and exceeds the performance of the other models by at least 2%. Hence, the novel Visual Transformers can be used for computer vision tasks including soil classification.

