ISSN : 2663-2187

Dynamic Resource Allocation in Cloud Data Centers Using Reinforcement Learning for Energy-Efficient Computing

Main Article Content

Amit K. Mogal, Dr. Vaibhav P. Sonaje
doi: 10.48047/AFJBS.6.13.2024.4866-4875

Abstract

This research explores the application of reinforcement learning (RL) techniques for dynamic resource allocation in cloud data centers, aiming to improve energy efficiency and optimize resource utilization. Four RL algorithms, namely Q-learning, Deep Q-Networks (DQN), Policy Gradient Methods, and Proximal Policy Optimization (PPO), were evaluated through extensive experimentation. Results show that DQN achieved the lowest energy consumption across all workload intensities, with values of 1400 kWh, 1700 kWh, and 2000 kWh for low, medium, and high workloads respectively. PPO consistently exhibited the highest resource utilization rates, reaching 75%, 80%, and 85% under the corresponding workload scenarios, and also demonstrated the fewest SLA violations, with 2, 6, and 10 occurrences across the different workload intensities. Meanwhile, DQN achieved the lowest response time and latency among the algorithms, with values of 90 ms and 45 ms respectively, while maintaining high throughput (550 req/s). These findings highlight the effectiveness of RL-based approaches in improving energy efficiency, resource utilization, and service quality in cloud data centers, offering promising solutions for sustainable and high-performance cloud computing infrastructures.
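To make the general idea concrete, the following is a minimal, self-contained Q-learning sketch of energy-aware task placement. It is purely illustrative: the toy environment (three servers, discrete load levels, a convex synthetic energy curve, and an overload penalty standing in for SLA violations) and all hyperparameters are assumptions for this example, not the paper's actual simulation setup, state space, or reward design.

```python
import random

# Illustrative sketch only: a toy cloud with NUM_SERVERS machines, each with a
# discrete load level 0..LOAD_LEVELS-1. The agent places incoming tasks so as
# to minimize total energy. All values here are hypothetical.
NUM_SERVERS = 3
LOAD_LEVELS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def energy_cost(load):
    """Toy convex energy curve: heavily loaded servers cost disproportionately more."""
    return 10 + 5 * load * load

def step(state, action):
    """Place one task on server `action`; reward is the negative marginal energy."""
    loads = list(state)
    if loads[action] >= LOAD_LEVELS - 1:
        return state, -100.0            # overload penalty (proxy for an SLA violation)
    reward = -(energy_cost(loads[action] + 1) - energy_cost(loads[action]))
    loads[action] += 1
    return tuple(loads), reward

def train(episodes=2000, tasks_per_episode=6, seed=0):
    """Tabular Q-learning over (server-load tuple, target-server) pairs."""
    rng = random.Random(seed)
    q = {}                              # Q-table: (state, action) -> value
    for _ in range(episodes):
        state = (0,) * NUM_SERVERS
        for _ in range(tasks_per_episode):
            if rng.random() < EPSILON:  # epsilon-greedy action selection
                action = rng.randrange(NUM_SERVERS)
            else:
                action = max(range(NUM_SERVERS),
                             key=lambda a: q.get((state, a), 0.0))
            next_state, reward = step(state, action)
            best_next = max(q.get((next_state, a), 0.0)
                            for a in range(NUM_SERVERS))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
            state = next_state
    return q

def greedy_rollout(q, tasks=6):
    """Follow the learned greedy policy for one episode; return final loads and reward."""
    state, total = (0,) * NUM_SERVERS, 0.0
    for _ in range(tasks):
        action = max(range(NUM_SERVERS), key=lambda a: q.get((state, a), 0.0))
        state, reward = step(state, action)
        total += reward
    return state, total
```

Because the energy curve is convex, the learned greedy policy spreads tasks evenly across servers, which is the energy-optimal allocation in this toy model; DQN and PPO replace the Q-table with neural function approximators to handle the far larger state spaces of real data centers.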

Article Details