SANIP: Shopping Assistant and Navigation for the visually impaired

09/08/2022
by Shubham Deshmukh et al.

The proposed shopping assistant model, SANIP, helps blind people detect hand-held objects and receive feedback on the information retrieved from the detected and recognized objects. The model consists of three Python modules: custom object detection, text detection, and barcode detection. For detection of hand-held objects, we created a custom dataset comprising daily goods such as Parle-G, Tide, and Lays. We also collected images of carts and exit signs, since any shopper needs to use a cart and must be able to locate the exit in an emergency. In the other two modules, the retrieved text and barcode information is converted to speech and relayed to the blind user. The model was evaluated on the objects it was trained on and detected and recognized the desired outputs with good accuracy and precision.
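The abstract does not detail how the barcode module verifies a decoded code, so the following is only a minimal sketch of one plausible step: validating the check digit of an EAN-13 barcode (the format common on retail goods like those in the dataset) before passing the product information to text-to-speech. The function name and the sample barcode value are illustrative, not taken from the paper.

```python
def ean13_is_valid(code: str) -> bool:
    """Return True if `code` is a 13-digit string with a valid EAN-13 check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # EAN-13 weights alternate 1, 3, 1, 3, ... from the leftmost digit;
    # the weighted sum (including the check digit) must be divisible by 10.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0
```

In a full pipeline, a scanner library would decode the barcode from the camera frame; a check like this can filter out misreads before the product lookup and speech output.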

