
Browsing by Author "Kar, A."

Now showing 1 - 5 of 5
  • Item
    An Ultra-low Noise, Highly Compact Implantable 28 nm CMOS Neural Recording Amplifier
    (Institute of Electronics Engineers of Korea, 2024) Akuri, N.G.; Naik, D.N.; Kumar, S.; Song, H.; Kar, A.
    An ultra-low-noise, tera-ohm-input-impedance, two-stage front-end neural amplifier (FENA) in a 28 nm CMOS process is presented in this work. To the best of the authors' knowledge, this is the first FENA implemented in a 28 nm CMOS process. The proposed FENA integrates a low-pass filter (LPF) into its operational transconductance amplifier. This technique shapes the noise current density through the LPF transfer function, allowing the FENA circuit to achieve ultra-low input-referred noise, ultra-high input impedance, and high gain. A mathematical technique is employed to optimize the amplifier dimensions at the 28 nm node, yielding a noise-free biasing current and an ultra-low input-referred noise of 18 fV/√Hz at 10 kHz. The ultra-low input-referred noise is achieved by reducing the gate-distributed resistance. The FENA achieves an ultra-high input impedance of 0.2 TΩ together with a measured gain of 60 dB. It occupies a chip area of 0.0023 mm² and consumes only 1 µW from a 1.2 V supply. With a high-pass corner frequency of 1 mHz, the FENA is found to be robust against PVT variations. These performance figures could benefit deep-exploration neural recording in wireless neural monitoring systems. © 2024, Institute of Electronics Engineers of Korea. All rights reserved.
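The gain and noise figures quoted in the abstract can be related with elementary filter arithmetic. The sketch below is a generic first-order LPF model, not the authors' circuit; the example frequencies and the output-referred noise calculation are illustrative assumptions, while the 60 dB gain and 18 fV/√Hz density come from the abstract.

```python
import math

def lpf_gain(f, fc):
    """Magnitude of a generic first-order low-pass transfer function |H(f)|."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

def db(x):
    """Convert a voltage ratio to decibels."""
    return 20.0 * math.log10(x)

midband_gain_db = 60.0   # reported closed-loop gain
noise_density = 18e-15   # reported input-referred noise, V/sqrt(Hz) at 10 kHz

# 60 dB corresponds to a voltage gain of 1000 V/V.
gain_lin = 10 ** (midband_gain_db / 20.0)

# Output-referred noise density = input-referred density x midband gain.
out_noise = noise_density * gain_lin   # ≈ 1.8e-11 V/sqrt(Hz)

# A first-order LPF rolls off at roughly -20 dB per decade above its corner.
rolloff_db = db(lpf_gain(10.0, 1.0))   # ≈ -20 dB one decade above fc
print(gain_lin, out_noise, round(rolloff_db, 1))
```

This only illustrates the unit conversions implied by the abstract; the actual noise shaping in the paper depends on the specific OTA-integrated LPF topology.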
  • Item
    Simulation of cathode ray tube
    (2017) Maiti, D.; Rajagopal, D.; Kar, A.; Ramteke, P.B.; Koolagudi, S.G.
    The Cathode Ray Tube (CRT) experiment performed by J. J. Thomson is one of the most well-known physics experiments, and it led to the discovery of the electron. The experiment also revealed characteristic properties of the electron, essentially its affinity towards positive charge and its charge-to-mass ratio. This paper describes a simulation of J. J. Thomson's Cathode Ray Tube experiment. The major contribution of this work is a new approach to modelling the experiment with a high degree of accuracy and precision, using the equations of the governing physical laws to describe the motion of the electrons. The motion of the electrons can be manipulated and recorded by the user by assigning different values to the experimental parameters. The simulation can serve as a learning tool for students. © 2017 IEEE.
  • Item
    Simulation of cathode ray tube
    (Institute of Electrical and Electronics Engineers Inc., 2017) Maiti, D.; Rajagopal, D.; Kar, A.; Ramteke, P.B.; Koolagudi, S.G.
    The Cathode Ray Tube (CRT) experiment performed by J. J. Thomson is one of the most well-known physics experiments, and it led to the discovery of the electron. The experiment also revealed characteristic properties of the electron, essentially its affinity towards positive charge and its charge-to-mass ratio. This paper describes a simulation of J. J. Thomson's Cathode Ray Tube experiment. The major contribution of this work is a new approach to modelling the experiment with a high degree of accuracy and precision, using the equations of the governing physical laws to describe the motion of the electrons. The motion of the electrons can be manipulated and recorded by the user by assigning different values to the experimental parameters. The simulation can serve as a learning tool for students. © 2017 IEEE.
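The kind of physics the simulation rests on can be sketched with the textbook uniform-field model of electrostatic deflection. This is a minimal illustration, not the paper's code; the function name and all numeric parameters in the example call are hypothetical.

```python
import math

E_CHARGE = 1.602176634e-19   # electron charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg

def crt_deflection(v_acc, v_plates, gap, plate_len, drift_len):
    """Vertical deflection (m) of an electron beam in an idealized CRT.

    v_acc: accelerating voltage (V); v_plates: deflecting-plate voltage (V);
    gap: plate separation (m); plate_len: plate length (m);
    drift_len: distance from plate exit to screen (m).
    Assumes a uniform field between the plates and no fringing.
    """
    v_axial = math.sqrt(2.0 * E_CHARGE * v_acc / E_MASS)  # beam speed from the gun
    a = E_CHARGE * (v_plates / gap) / E_MASS              # transverse acceleration
    t_plates = plate_len / v_axial                        # transit time between plates
    y_exit = 0.5 * a * t_plates ** 2                      # deflection at plate exit
    v_y = a * t_plates                                    # transverse exit speed
    return y_exit + v_y * (drift_len / v_axial)           # straight-line drift to screen

# Hypothetical setup: 2 kV gun, 100 V plates, 1 cm gap, 4 cm plates, 20 cm drift.
print(crt_deflection(2000.0, 100.0, 0.01, 0.04, 0.20))   # ≈ 0.022 m
```

Note that e/m cancels in this idealized expression, which is why Thomson needed a magnetic field in addition to the electric one to extract the charge-to-mass ratio.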
  • Item
    What makes a video memorable?
    (2017) Kar, A.; Mavin, P.; Ghaturle, Y.; Vani, M.
    Humans are exposed to many pictures and videos on a daily basis, yet they have an exceptional ability to remember the details, even though many of them look very similar. This Video Memorability (VM) is mainly due to the distinguishable, fine-grained representation of the frames that people tend to retain. Videos contain an abundance of data in their frames, which can be used for feature extraction. Each feature from each frame has to be carefully considered to determine the intrinsic property of the video, i.e., its memorability. Using a Convolutional Neural Network (CNN), we propose a solution to the problem of predicting VM by estimating memorability. A model has been developed to predict VM using algorithmically extracted features. Two types of features, (i) semantic features and (ii) visual features, have been considered. The effectiveness of the model has been tested using publicly available image and video data. The results confirm that the CNN model can predict memorability with acceptable performance. © 2017 IEEE.
  • Item
    What makes a video memorable?
    (Institute of Electrical and Electronics Engineers Inc., 2017) Kar, A.; Prashasthi, P.; Ghaturle, Y.; Vani, M.
    Humans are exposed to many pictures and videos on a daily basis, yet they have an exceptional ability to remember the details, even though many of them look very similar. This Video Memorability (VM) is mainly due to the distinguishable, fine-grained representation of the frames that people tend to retain. Videos contain an abundance of data in their frames, which can be used for feature extraction. Each feature from each frame has to be carefully considered to determine the intrinsic property of the video, i.e., its memorability. Using a Convolutional Neural Network (CNN), we propose a solution to the problem of predicting VM by estimating memorability. A model has been developed to predict VM using algorithmically extracted features. Two types of features, (i) semantic features and (ii) visual features, have been considered. The effectiveness of the model has been tested using publicly available image and video data. The results confirm that the CNN model can predict memorability with acceptable performance. © 2017 IEEE.
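The pipeline the abstract describes (per-frame visual features fed to a learned read-out that scores memorability) can be illustrated in miniature. This is a toy sketch only: the 4x4 "frame", the edge kernel, and the logistic read-out weights are all hypothetical stand-ins for the paper's CNN, chosen to show the conv → ReLU → pool → score flow in pure Python.

```python
import math

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def relu(x):
    return [[max(0.0, v) for v in row] for row in x]

def global_avg_pool(x):
    vals = [v for row in x for v in row]
    return sum(vals) / len(vals)

# Toy 4x4 frame with a vertical edge, and a vertical-edge kernel (hypothetical).
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]

feat = global_avg_pool(relu(conv2d(frame, edge_kernel)))
# Hypothetical linear read-out squashed to a memorability score in (0, 1).
score = 1.0 / (1.0 + math.exp(-(2.0 * feat - 1.0)))
print(round(feat, 3), round(score, 3))   # → 0.667 0.583
```

A real VM model would stack many such learned filters and combine the pooled visual features with semantic features before the final regression, but the scoring structure is the same.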

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
