
Browsing by Author "Ashok, A."

Now showing 1 - 2 of 2
    Bioinspired ZnS: Gd nanoparticles synthesized from an endophytic fungi Aspergillus flavus for fluorescence-based metal detection
    (MDPI AG, 2019) Uddandarao, P.; Mohan Balakrishnan, R.M.; Ashok, A.; Swarup, S.; Sinha, P.
    Recently, several nonconventional sources have emerged as strong hotspots for the biosynthesis of chalcogenide quantum dots. However, studies ascertaining the biomimetic methodologies that initiate biosynthesis are rather limited. The present investigation offers perspectives on rare-earth (Gd)-doped ZnS biosynthesis using the endophytic fungus Aspergillus flavus for fluorescence-based metal sensing. The ZnS:Gd nanoparticles were characterized by elemental analysis, energy-dispersive X-ray spectroscopy (EDS), atomic force microscopy (AFM), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), photoluminescence (PL), and transmission electron microscopy (TEM). TEM results showed that the particles were polycrystalline in nature, with sizes of 10-18 nm. The fluorescence amenability of the biogenic ZnS nanoparticles was further used to develop a simple and efficient sensing array. The results showed sensitive and detectable quenching or enhancement of the fluorescence of the biogenic colloidal ZnS nanoparticles in the presence of Pb(II), Cd(II), Hg(II), Cu(II), and Ni(II). The fluorescence intensity of the biogenic ZnS:Gd nanoparticles was found to increase compared to that of the undoped ZnS nanoparticles, which qualifies these systems as a reliable fluorescence sensing platform with selective environmental applications. © 2019 by the authors.
    SUPER-NATURALINSTRUCTIONS: Generalization via Declarative Instructions on 1600+ NLP Tasks
    (Association for Computational Linguistics (ACL), 2022) Wang, Y.; Mishra, S.; Alipoormolabashi, P.; Kordi, Y.; Mirzaei, A.; Arunkumar, A.; Ashok, A.; Dhanasekaran, A.S.; Naik, A.; Stap, D.; Pathak, E.; Karamanolakis, G.; Lai, H.G.; Purohit, I.; Mondal, I.; Anderson, J.; Kuznia, K.; Doshi, K.; Patel, M.; Pal, K.K.; Moradshahi, M.; Parmar, M.; Purohit, M.; Varshney, N.; Kaza, P.R.; Verma, P.; Puri, R.S.; Karia, R.; Sampat, S.K.; Doshi, S.; Mishra, S.; Reddy, S.; Patro, S.; Dixit, T.; Shen, X.; Baral, C.; Choi, Y.; Smith, N.A.; Hajishirzi, H.; Khashabi, D.
    How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce SUPER-NATURALINSTRUCTIONS, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions. Our collection covers 76 distinct task types, including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition. This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions: training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-INSTRUCT, a transformer model trained to follow a variety of in-context instructions (plain-language task definitions or k-shot examples). Our experiments show that Tk-INSTRUCT outperforms existing instruction-following models such as InstructGPT by over 9% on our benchmark, despite being an order of magnitude smaller. We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model size. We hope our dataset and model facilitate future progress towards more general-purpose NLP models. © 2022 Association for Computational Linguistics.

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
