Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results (2 items)
A Framework for Quality Enhancement of Multispectral Remote Sensing Images (Institute of Electrical and Electronics Engineers Inc., 2018)
Suresh, S.; Das, D.; Lal, S.
Research in satellite image enhancement has largely been confined to two major areas: contrast enhancement and denoising of remote sensing images. Processing relatively dark or shadowed images calls for robust remote sensing enhancement techniques. In this paper, a robust framework for quality enhancement of multispectral remote sensing images is proposed. The quantitative results of the proposed algorithm and of other existing remote sensing enhancement algorithms are computed in terms of DE, NIQMC, BIQME, PisDist, and CM on several remote sensing and other image databases. The results reveal that the visual enhancement achieved by the proposed algorithm is better than that of existing remote sensing enhancement algorithms. Finally, the simulation results show that the proposed algorithm is effective and efficient for remote sensing as well as natural images.
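The contrast-enhancement half of such a pipeline can be illustrated with a generic baseline. The sketch below applies global histogram equalization to a single 8-bit band; this is a standard textbook technique used here purely for illustration, not the framework proposed in the paper.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit single-band image.
    A generic contrast-enhancement baseline, not the paper's algorithm."""
    # Count how many pixels take each of the 256 possible intensities.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Build a lookup table that maps the cumulative distribution of
    # intensities onto the full [0, 255] range, spreading out contrast.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

dark_band = np.array([[0, 64], [128, 255]], dtype=np.uint8)
enhanced = equalize_histogram(dark_band)  # intensities spread over [0, 255]
```

A full enhancement framework would typically pair a step like this with a denoising stage and apply both per spectral band.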
© 2017 IEEE.

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics (Association for Computational Linguistics (ACL), 2021)
Gehrmann, S.; Adewumi, T.; Aggarwal, K.; Ammanamanchi, P.S.; Anuoluwapo, A.; Bosselut, A.; Chandu, K.R.; Clinciu, M.; Das, D.; Dhole, K.D.; Du, W.; Durmus, E.; Dušek, O.; Emezue, C.; Gangal, V.; Gârbacea, C.; Hashimoto, T.; Hou, Y.; Jernite, Y.; Jhamtani, H.; Ji, Y.; Jolly, S.; Kale, M.; Kumar, D.; Ladhak, F.; Madaan, A.; Maddela, M.; Mahajan, K.; Mahamood, S.; Majumder, B.P.; Martins, P.H.; McMillan-Major, A.; Mille, S.; van Miltenburg, E.; Nadeem, M.; Narayan, S.; Nikolaev, V.; Niyongabo, R.A.; Osei, S.; Parikh, A.; Perez-Beltrachini, L.; Rao, N.R.; Raunak, V.; Rodriguez, J.D.; Santhanam, S.; Sedoc, J.; Sellam, T.; Shaikh, S.; Shimorina, A.; Sobrevilla Cabezudo, M.A.S.; Strobelt, H.; Subramani, N.; Xu, W.; Yang, D.; Yerukola, A.; Zhou, J.
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Because of this moving target, new models often still evaluate on divergent, Anglo-centric corpora with well-established but flawed metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. To address this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and will evolve the challenge alongside models. This paper serves as a description of the data for the shared task we are organizing at our ACL 2021 Workshop, in which we invite the entire NLG community to participate. © 2021 Association for Computational Linguistics.
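The "automated metrics" that the abstract refers to score generated text against references. As a toy illustration of that idea (this is not GEM's API; GEM relies on established metrics such as BLEU and ROUGE), the sketch below computes a simple unigram-precision score:

```python
def unigram_precision(candidate, reference):
    """Toy lexical-overlap metric: the fraction of candidate tokens
    that also appear in the reference. Illustrative only; real NLG
    metrics (BLEU, ROUGE, etc.) are considerably more involved."""
    cand_tokens = candidate.lower().split()
    ref_tokens = set(reference.lower().split())
    if not cand_tokens:
        return 0.0
    return sum(tok in ref_tokens for tok in cand_tokens) / len(cand_tokens)

score = unigram_precision("the cat sat", "the cat sat down")  # 1.0
```

Lexical-overlap metrics of this kind are exactly the "well-established but flawed" family the abstract mentions: they reward surface word matches rather than meaning, which is one motivation for a benchmark that lets evaluation strategies themselves be tested.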
