Browsing by Author "Venkatesh, B.H."
Now showing 1 - 2 of 2
Item
An Effective Diabetic Retinopathy Detection Using Hybrid Convolutional Neural Network Models (Springer Science and Business Media Deutschland GmbH, 2023) Kumar, N.; Ahmed, R.; Venkatesh, B.H.; Anand Kumar, M.
Diabetic retinopathy is a leading cause of vision loss in the developing world and is believed to affect more than 103 million people. An estimated 40 million people in the United States have diabetes, and according to the World Health Organization (WHO), 347 million people live with the disease globally. Diabetic retinopathy (DR) is a long-term, diabetes-related eye condition. Roughly 45–50% of Americans with diabetes progress through distinct stages of DR that can be categorized. When DR is diagnosed in time, its progression toward vision impairment can be delayed or stopped; however, timely diagnosis is a daunting task, because the disease seldom shows symptoms before it escalates to a stage at which effective treatment is no longer possible. This paper uses convolutional neural network models to achieve effective classification of retinal fundus images for diabetic retinopathy detection. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Item
Continuous Sign Language Recognition Using Leap Motion Sensor (Institute of Electrical and Electronics Engineers Inc., 2024) Kumar, N.; Ahmed, R.; Venkatesh, B.H.; Salvi, S.; Panjwani, Y.
Sign language is a vital communication tool that connects persons with hearing and speech impairments worldwide. It consists mostly of hand movements and facial gestures, which are interpreted by recognizing these gestures to form meaningful sentences. In this study, we use two machine learning models, Long Short-Term Memory (LSTM) and Support Vector Machines (SVM), to predict signs. Our models were trained and evaluated on a dataset of 42 distinct sign words and 28 sentences. The LSTM model achieved an accuracy of 90.35% for word prediction and 98.21% for sentence prediction, outperforming the SVM model, which achieved 85.96% and 89.58% for words and sentences, respectively. By using depth sensors such as the Leap Motion device, our approach aims to enhance sign language recognition (SLR). © 2024 IEEE.
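The second abstract compares an LSTM against an SVM baseline for sign-word prediction from depth-sensor hand data. As a minimal sketch of what such an SVM word classifier might look like, the snippet below trains an RBF-kernel SVM on synthetic stand-ins for Leap Motion keypoint trajectories; the class counts, frame counts, joint counts, and data are illustrative assumptions, not the paper's actual dataset or code.

```python
# Sketch of an SVM sign-word classifier in the spirit of the baseline
# described in the abstract. All shapes and data are synthetic placeholders:
# a Leap Motion sensor reports 3-D positions of hand joints per frame, which
# are flattened here into one fixed-length feature vector per sign sample.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

N_SIGNS = 5   # the paper uses 42 sign words; 5 keeps this demo small
FRAMES = 20   # frames sampled per sign (illustrative)
JOINTS = 21   # keypoints tracked per hand (illustrative)

# Synthetic dataset: each sign word gets its own cluster of trajectories.
X, y = [], []
for sign in range(N_SIGNS):
    center = rng.normal(size=FRAMES * JOINTS * 3)  # prototype trajectory
    for _ in range(30):
        X.append(center + 0.1 * rng.normal(size=center.shape))  # noisy copies
        y.append(sign)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = SVC(kernel="rbf")  # RBF-kernel SVM as the word-level classifier
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"word-prediction accuracy on synthetic data: {accuracy:.2%}")
```

On such well-separated synthetic clusters the SVM scores near-perfectly; the paper's reported 85.96% word accuracy reflects real Leap Motion data, where overlapping gestures make the problem far harder. An LSTM improves on this by modeling the frame-by-frame temporal structure instead of one flattened vector.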
