Browsing by Author "Madathil, K.T."
Now showing 1 - 2 of 2
Item Fake News Detection for Hindi Language (CEUR-WS, 2022) Madathil, K.T.; Mirji, N.; Charan, R.; Anand Kumar, A.M.
The understanding of the term "fake news" varies from one individual to another. At its most basic, "fake news" refers to inappropriate, made-up news, in most cases built on baseless sources and facts. Such news misleads readers and is generally published for personal benefit or to defame others. In recent years a large population has become active on various social media platforms, which have consequently become the major medium through which fake news circulates. A lot of fake news is circulated in local languages as well, yet most existing work is based on the English language, and very little fake news identification has been done for resource-scarce languages such as the Indic languages. This paper therefore defines false news and proposes an effective method for detecting fake news in Hindi using standard machine learning algorithms such as Multi-layer Perceptron and Naive Bayes, and deep learning techniques based on transformers, mainly mBERT. © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Item Optimizing Machine Learning Operators and Models for Specific Hardware Using Apache-TVM (Institute of Electrical and Electronics Engineers Inc., 2023) Madathil, K.T.; Dugar, A.; Patil, N.; Unnikrishnan, U.
Diligent utilization of hardware resources for computationally intensive jobs such as machine learning (ML), which offer a huge scope for compiler optimizations, is often neglected because of the complexity of implementation. The main reasons for this complexity are the wide range of architectures and the differences between development and deployment environments, which lead to poor utilization of resources such as memory and hardware, and to increased execution time.
These problems can be tackled with Apache-TVM, a compiler specifically designed to tune and optimize machine-learning models for specific hardware. We have implemented matrix multiplication on two types of hardware, x86 and the Hexagon Digital Signal Processor (DSP), and have optimized it for each. Apache-TVM also supports tuning whole ML models by applying various graph-level and operator-level optimizations, and it can automate the optimization of low-level programs for specific hardware characteristics using autoTVM, a cost-model-based approach to exploring the search space of code optimizations. We obtained a significant reduction in execution time, up to 32.32% for the Emotion FerPlus model and more than 150 times for matrix multiplication on the Hexagon DSP, without reducing accuracy or performance. © 2023 IEEE.
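The operator-level optimizations mentioned in this abstract include classic loop transformations such as tiling. As a rough, hand-written sketch of the kind of transformation a TVM schedule would generate automatically (the tile size of 16 and the pure-Python matrices are assumptions for illustration, not the paper's configuration):

```python
# Illustrative sketch only: a hand-written tiled matrix multiplication,
# showing the style of loop transformation a TVM schedule expresses.
# The tile size of 16 is an arbitrary assumption.

def matmul_naive(A, B, n):
    """Reference triple-loop matrix multiplication."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            for j in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, tile=16):
    """Tiled version: blocking keeps a small working set in cache."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a_ik = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a_ik * B[k][j]
    return C

if __name__ == "__main__":
    n = 32
    A = [[(i + j) % 7 for j in range(n)] for i in range(n)]
    B = [[(i * j) % 5 for j in range(n)] for i in range(n)]
    assert matmul_tiled(A, B, n) == matmul_naive(A, B, n)
    print("tiled result matches naive result")
```

In TVM itself such transformations are expressed as schedule primitives (splitting, reordering, and tiling loop axes) rather than written by hand, and autoTVM searches over candidate configurations such as tile sizes using its cost model.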

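As a toy illustration of the standard-classifier route described in the first item, the sketch below is a minimal multinomial Naive Bayes over bag-of-words features. It is not the paper's implementation: the English example documents and whitespace tokenizer are placeholders for the paper's Hindi data and preprocessing.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs.
    Returns log-priors, per-label word counts, and the vocabulary."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)  # label -> Counter of word occurrences
    vocab = set()
    for text, label in docs:
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    log_priors = {lab: math.log(cnt / len(docs)) for lab, cnt in label_counts.items()}
    return log_priors, word_counts, vocab

def predict(text, log_priors, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    scores = {}
    for label, prior in log_priors.items():
        total = sum(word_counts[label].values())
        score = prior
        for word in text.lower().split():
            # Laplace (add-one) smoothing handles words unseen for this label.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Placeholder English documents; the paper works with Hindi text.
docs = [
    ("shocking miracle cure doctors hate", "fake"),
    ("celebrity secretly arrested shocking", "fake"),
    ("government announces new budget policy", "real"),
    ("court rules on election case today", "real"),
]
model = train(docs)
print(predict("shocking secret cure", *model))  # → fake
```

The transformer route in the paper (mBERT) instead fine-tunes a pretrained multilingual encoder on labeled examples, replacing these hand-built count features with learned contextual representations.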