Task Scheduling Using Deep Q-Learning
| dc.contributor.author | Velingkar, G. | |
| dc.contributor.author | Kumar, J.K. | |
| dc.contributor.author | Varadarajan, R. | |
| dc.contributor.author | Lanka, S. | |
| dc.contributor.author | Anand Kumar, A.M. | |
| dc.date.accessioned | 2026-02-06T06:35:36Z | |
| dc.date.issued | 2022 | |
| dc.description.abstract | Process scheduling is a crucial function of operating systems. Effective scheduling ensures system efficiency, minimizes resource wastage and overall cost, and enhances productivity. Selecting the most appropriate resources for executing tasks is typically a laborious process. A solution for effective job scheduling and resource management should ideally depend on the nature of the workload and adapt to any given environment, rather than follow a fixed algorithm. To meet this rising demand for an automated, self-assigning system, a deep Q-learning (reinforcement learning) based scheduler has been implemented, which assigns tasks so as to maximize CPU and memory utilization. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. | |
| dc.identifier.citation | Lecture Notes in Electrical Engineering, 2022, Vol. 858, p. 749-759 | |
| dc.identifier.issn | 1876-1100 | |
| dc.identifier.uri | https://doi.org/10.1007/978-981-19-0840-8_58 | |
| dc.identifier.uri | https://idr.nitk.ac.in/handle/123456789/29943 | |
| dc.publisher | Springer Science and Business Media Deutschland GmbH | |
| dc.subject | Deep Q-learning | |
| dc.subject | Process scheduling | |
| dc.subject | Reinforcement learning | |
| dc.title | Task Scheduling Using Deep Q-Learning |
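
The abstract only sketches the approach at a high level. As a rough illustration of the general technique (not the authors' implementation), a deep Q-learning scheduler can be written as a small Q-network that, given the current load of each machine and the demands of an incoming task, picks the machine to assign it to, with a reward tied to CPU and memory utilization. The state encoding, reward shape, network size, and all constants below are assumptions made for this sketch.

```python
# Minimal deep Q-learning sketch for task-to-machine scheduling.
# All environment details (state encoding, reward, sizes) are illustrative
# assumptions, not taken from the paper.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_MACHINES = 4                    # assumed number of machines (actions)
STATE_DIM = 2 * N_MACHINES + 2    # per-machine CPU/mem load + task CPU/mem demand
GAMMA = 0.99
EPSILON = 0.1
BATCH_SIZE = 32


class QNet(nn.Module):
    """Small MLP mapping a scheduling state to Q-values, one per machine."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_MACHINES),
        )

    def forward(self, x):
        return self.net(x)


def make_state(machine_load, task_demand):
    """Concatenate per-machine CPU/memory load with the incoming task's demand."""
    return np.concatenate([machine_load.flatten(), task_demand]).astype(np.float32)


def reward(machine_load):
    """Assumed reward: mean CPU/memory utilization, penalizing overloaded machines."""
    util = machine_load.clip(0.0, 1.0).mean()
    overload = np.maximum(machine_load - 1.0, 0.0).sum()
    return float(util - overload)


qnet = QNet()
optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

machine_load = np.zeros((N_MACHINES, 2))            # columns: CPU load, memory load
task_demand = np.random.uniform(0.05, 0.3, size=2)  # synthetic incoming task

for step in range(1000):
    state = make_state(machine_load, task_demand)

    # Epsilon-greedy action selection over machines.
    if random.random() < EPSILON:
        action = random.randrange(N_MACHINES)
    else:
        with torch.no_grad():
            action = int(qnet(torch.from_numpy(state)).argmax())

    # Apply the assignment, let some load drain as tasks finish, observe next state.
    machine_load[action] += task_demand
    machine_load = np.maximum(machine_load - 0.05, 0.0)
    task_demand = np.random.uniform(0.05, 0.3, size=2)  # next incoming task
    next_state = make_state(machine_load, task_demand)
    replay.append((state, action, reward(machine_load), next_state))

    # One-step temporal-difference update on a sampled mini-batch.
    if len(replay) >= BATCH_SIZE:
        batch = random.sample(replay, BATCH_SIZE)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.from_numpy(s).float(), torch.from_numpy(s2).float()
        q = qnet(s).gather(1, torch.from_numpy(a).long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = torch.from_numpy(r).float() + GAMMA * qnet(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this sketch the agent learns, from repeated assignments and the resulting utilization reward, which machine to give each task; a production scheduler would additionally need a target network, a realistic workload model, and termination handling.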
