Full metadata record
|dc.identifier.citation||Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018, 2019, pp. 1861-1867||en_US|
|dc.description.abstract||Allocation of liquid capital to the financial instruments in a portfolio is typically done using a two-step process. In the first step, predictive techniques are used to determine the future risk and reward for each instrument. In the subsequent step, a quadratic optimization problem is solved to obtain the allocation that maximizes a relevant measure of portfolio performance computed from a combination of the risks and the rewards. Deep Reinforcement Learning (DRL) eliminates the need for a two-step process, directly finding the allocation across the instruments that optimizes a measure of portfolio performance obtained from the market. DRL-based portfolio construction autonomously adjusts to changes in the environment, unlike traditional machine learning algorithms used for prediction. Existing DRL methods suffer from stability challenges and do not lend themselves well to the portfolio construction problem, which has a continuous action space. Proposed in 2015, Deep Deterministic Policy Gradients (DDPG) is an actor-critic DRL algorithm that supports the continuous action space encountered in portfolio construction. This paper evaluates the use of DDPG to solve the problem of risk-aware portfolio construction. Simulations are done on a portfolio of twenty stocks, and the use of both Rate of Return and Sortino ratio as measures of portfolio performance is evaluated. Results are presented that demonstrate the effectiveness of DDPG for risk-aware portfolio construction. The simulation results presented in this paper show that a risk-aware measure of portfolio performance such as the Sortino ratio gives a portfolio with superior return and lower variance. © 2018 IEEE.||en_US|
|dc.title||Risk aware portfolio construction using deep deterministic policy gradients||en_US|
|Appears in Collections:||2. Conference Papers|
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
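As context for the abstract above: the Sortino ratio it refers to penalizes only downside volatility, unlike the Sharpe ratio. A minimal sketch of its computation is below; the function name, the target-return parameter, and the sample returns are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sortino_ratio(returns, target=0.0):
    """Mean excess return over the downside deviation.

    Only returns below the target contribute to the risk term,
    so upside volatility is not penalized. `target` is the
    minimum acceptable return (assumed 0 here).
    """
    excess = np.asarray(returns, dtype=float) - target
    downside = excess[excess < 0]          # keep only below-target returns
    if downside.size == 0:
        return np.inf                      # no downside observed
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return excess.mean() / downside_dev

# Hypothetical daily returns of a portfolio
print(sortino_ratio([0.01, -0.02, 0.03, -0.01]))
```

In a DRL setting such as the one the paper describes, a quantity like this would be computed over the agent's recent portfolio returns and used as the reward signal, steering the learned allocation toward lower downside variance.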