Improving convergence in IRGAN with PPO

dc.contributor.author: Jain, M.
dc.contributor.author: Kamath S., S.
dc.date.accessioned: 2026-02-06T06:36:54Z
dc.date.issued: 2020
dc.description.abstract: Information retrieval modeling aims to optimize generative and discriminative retrieval strategies: generative retrieval focuses on predicting query-specific relevant documents, while discriminative retrieval predicts the relevance of a given query-document pair. IRGAN unifies the generative and discriminative retrieval approaches through a minimax game. However, training IRGAN is unstable and highly sensitive to the random initialization of parameters. In this work, we propose improvements to IRGAN training through a novel optimization objective based on proximal policy optimization (PPO) and Gumbel-Softmax-based sampling for the generator, along with a modified training algorithm that performs the gradient update on both models simultaneously in each training iteration. We benchmark our proposed approach against IRGAN on three different information retrieval tasks and present empirical evidence of improved convergence. © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
dc.identifier.citation: ACM International Conference Proceeding Series, 2020, pp. 328-329
dc.identifier.issn: 2153-1633
dc.identifier.uri: https://doi.org/10.1145/3371158.3371209
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/30734
dc.publisher: Association for Computing Machinery
dc.subject: Generative models
dc.subject: Information retrieval
dc.subject: Policy optimization
dc.title: Improving convergence in IRGAN with PPO

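The abstract mentions two generic building blocks: a PPO-style clipped objective for the generator and Gumbel-Softmax sampling over candidate documents. The Python/PyTorch sketch below illustrates those two ideas only; it is not the authors' implementation, and every name and hyperparameter in it (generator_ppo_loss, EPS_CLIP, the temperature tau, the toy reward values) is an illustrative assumption.

# Generic sketch only (not the paper's code): a clipped PPO surrogate for a
# categorical "pick a document for this query" policy, with Gumbel-Softmax
# used to draw (approximately) one-hot document samples. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

EPS_CLIP = 0.2  # assumed PPO clipping range


def generator_ppo_loss(doc_logits, old_doc_logits, rewards):
    """Clipped PPO surrogate for a generator that selects documents.

    doc_logits:     current policy logits over candidate documents, shape (D,)
    old_doc_logits: logits of the policy that produced the samples, shape (D,)
    rewards:        per-sample advantages derived from the discriminator, shape (S,)
    """
    num_samples = rewards.shape[0]

    # Draw S (approximately) one-hot document samples from the old policy
    # via Gumbel-Softmax with the straight-through estimator (hard=True).
    samples = F.gumbel_softmax(
        old_doc_logits.detach().expand(num_samples, -1), tau=0.5, hard=True)

    # Log-probabilities of the sampled documents under new and old policies.
    log_probs = (samples * F.log_softmax(doc_logits, dim=-1)).sum(dim=-1)
    old_log_probs = (samples * F.log_softmax(old_doc_logits, dim=-1)).sum(dim=-1).detach()

    # PPO clipped objective: bound how far one update can move the policy.
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * rewards
    clipped = torch.clamp(ratio, 1.0 - EPS_CLIP, 1.0 + EPS_CLIP) * rewards
    return -torch.min(unclipped, clipped).mean()


if __name__ == "__main__":
    # Toy usage: 10 candidate documents, 4 sampled documents, random rewards
    # standing in for discriminator feedback.
    doc_logits = torch.randn(10, requires_grad=True)
    old_doc_logits = doc_logits.detach() + 0.01 * torch.randn(10)
    rewards = torch.randn(4)
    loss = generator_ppo_loss(doc_logits, old_doc_logits, rewards)
    loss.backward()
    print("generator PPO loss:", loss.item())

The clipping step is what distinguishes this surrogate from the plain policy-gradient update used in the original IRGAN generator: when the probability ratio between the new and old policies leaves the [1 - EPS_CLIP, 1 + EPS_CLIP] interval, the objective stops rewarding further movement, which is the mechanism PPO uses to stabilize updates.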