Please use this identifier to cite or link to this item: https://idr.nitk.ac.in/jspui/handle/123456789/8274
Title: Improving Convergence in IRGAN with PPO
Authors: Jain, M.
Sowmya, Kamath S.
Issue Date: 2020
Citation: ACM International Conference Proceeding Series, 2020, pp. 328-329
Abstract: Information retrieval modeling aims to optimize generative and discriminative retrieval strategies: generative retrieval focuses on predicting query-specific relevant documents, while discriminative retrieval tries to predict relevance given a query-document pair. IRGAN unifies the generative and discriminative retrieval approaches through a minimax game. However, training IRGAN is unstable and highly sensitive to the random initialization of parameters. In this work, we propose improvements to IRGAN training through a novel optimization objective based on Proximal Policy Optimization (PPO) and Gumbel-Softmax-based sampling for the generator, along with a modified training algorithm that performs the gradient update on both models simultaneously in each training iteration. We benchmark our proposed approach against IRGAN on three different information retrieval tasks and present empirical evidence of improved convergence. © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
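
The abstract names two generator-side components: a PPO-style clipped objective and Gumbel-Softmax sampling over candidate documents. Below is a minimal, self-contained PyTorch sketch of what those two pieces typically look like; the function names, toy tensor shapes, temperature, and clipping threshold are illustrative assumptions, not the paper's actual implementation.

# Sketch of a PPO clipped objective and Gumbel-Softmax sampling for an
# IRGAN-style generator. All names, shapes, and hyperparameters here are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=0.5):
    # Differentiable (soft) one-hot sample over candidate documents.
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / temperature, dim=-1)

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Clipped surrogate objective: bounds how far one update can move
    # the generator's sampling policy from its pre-update snapshot.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage: 4 queries, 10 candidate documents per query.
logits = torch.randn(4, 10, requires_grad=True)      # generator scores
soft_sample = gumbel_softmax_sample(logits)          # differentiable pick
new_lp = torch.log(soft_sample.max(dim=-1).values)   # stand-in log-prob
old_lp = new_lp.detach()                             # frozen snapshot
advantages = torch.randn(4)                          # stand-in for reward
ppo_clipped_loss(new_lp, old_lp, advantages).backward()

The clipped ratio limits how far a single update can move the generator away from its previous policy, which is the stabilizing property the abstract's convergence claim rests on; the Gumbel-Softmax relaxation makes document sampling differentiable, so the generator can receive gradients directly rather than relying solely on high-variance policy-gradient estimates.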
URI: https://idr.nitk.ac.in/jspui/handle/123456789/8274
Appears in Collections: 2. Conference Papers

Files in This Item:
File: 13 Improving Convergence in IRGAN with PPO.pdf (622.69 kB, Adobe PDF)
