Efficient spectrum and resource allocation remains a fundamental challenge in fifth-generation (5G) wireless networks owing to heterogeneous traffic requirements, spectrum scarcity, and dynamic interference conditions. Conventional scheduling and optimization techniques struggle to meet the stringent quality-of-service (QoS) demands of enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC). This paper presents an improved deep reinforcement learning (DRL) optimization framework for adaptive spectrum and resource allocation in 5G networks. The proposed model integrates the Deep Q-Network (DQN), Double DQN (DDQN), and DDQN with Prioritized Experience Replay (PER) to improve convergence speed, stability, and scalability. A realistic simulation environment combining ray-traced channel data with packet-level traffic modeling was developed to evaluate system performance. Results show that the proposed DRL framework achieves up to a 7% throughput gain over proportional fair scheduling, reduces average latency by 10–12%, and lowers URLLC violation rates by approximately 15% under high-interference, heterogeneous-traffic scenarios. These findings confirm the feasibility of DRL as a practical and scalable solution for real-time 5G resource management.
Keywords: 5G networks, spectrum allocation, resource optimization, deep reinforcement learning, URLLC, eMBB, mMTC.
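The abstract names DDQN with Prioritized Experience Replay as the core learner. The sketch below is purely illustrative, not the paper's implementation: class and function names are our own, tabular Q-value arrays stand in for the neural networks, and the simple list-based proportional buffer stands in for a production sum-tree PER.

```python
import numpy as np

class PrioritizedReplay:
    """Toy proportional prioritized experience replay (PER).

    Transitions with larger TD error get priority |delta|^alpha, so they are
    replayed more often; importance-sampling weights correct the bias.
    """
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []       # stored transitions
        self.priorities = []   # one priority per transition

    def add(self, transition, td_error=1.0):
        p = (abs(td_error) + 1e-6) ** self.alpha  # small epsilon avoids zero priority
        if len(self.buffer) >= self.capacity:     # drop the oldest when full
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size, beta=0.4, rng=None):
        rng = rng or np.random.default_rng(0)
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = rng.choice(len(self.buffer), size=batch_size, p=probs)
        # Importance-sampling weights, normalized so the largest weight is 1.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.buffer[i] for i in idx], weights

def ddqn_target(q_online, q_target, s_next, reward, done, gamma=0.99):
    """Double-DQN target: the online net selects the next action,
    the target net evaluates it, which curbs Q-value overestimation."""
    a_star = int(np.argmax(q_online[s_next]))
    return reward + (0.0 if done else gamma * q_target[s_next][a_star])
```

In a resource-allocation setting, a state would encode channel quality and queue backlogs per user, an action would pick a resource-block assignment, and the reward would trade off throughput against latency and URLLC-violation penalties.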
IRE Journals:
Kuruye, Joel D., Okeke R. O. "Development of an Improved Deep Reinforcement Learning Framework for Spectrum and Resource Allocation in 5G Networks" Iconic Research And Engineering Journals Volume 9 Issue 8 2026 Page 1431-1439 https://doi.org/10.64388/IREV9I8-1714480