D3PG: Dirichlet DDPG for task partitioning and offloading with constrained hybrid action space in mobile edge computing

dc.contributor.author: Ale, Laha
dc.contributor.author: King, Scott
dc.contributor.author: Zhang, Ning
dc.contributor.author: Sattar, Abdul Rahman
dc.contributor.author: Skandaraniyam, Janahan
dc.creator.orcid: https://orcid.org/0000-0002-4070-5289
dc.creator.orcid: https://orcid.org/0000-0002-4022-0388
dc.date.accessioned: 2022-10-07T19:54:21Z
dc.date.available: 2022-10-07T19:54:21Z
dc.date.issued: 2022-03-01
dc.description.abstract: Mobile Edge Computing (MEC) has been regarded as a promising paradigm to reduce service latency for data processing in the Internet of Things by provisioning computing resources at the network edge. In this work, we jointly optimize task partitioning and computational power allocation for computation offloading in a dynamic environment with multiple IoT devices and multiple edge servers. We formulate the problem as a Markov decision process with a constrained hybrid action space, which cannot be well handled by existing deep reinforcement learning (DRL) algorithms. Therefore, we develop a novel DRL algorithm called Dirichlet Deep Deterministic Policy Gradient (D3PG), built on Deep Deterministic Policy Gradient (DDPG), to solve the problem. The developed model can learn to solve multi-objective optimization, including maximizing the number of tasks processed before expiration and minimizing the energy cost and service latency. More importantly, D3PG can effectively deal with a constrained distribution-continuous hybrid action space, where the distribution variables handle task partitioning and offloading and the continuous variables control computational frequency. Moreover, D3PG can address many similar issues in MEC and in general reinforcement learning problems. Extensive simulation results show that the proposed D3PG outperforms state-of-the-art methods.
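The constrained distribution-continuous hybrid action described in the abstract can be sketched in a few lines: a Dirichlet head yields a partition vector on the simplex (fractions of a task kept locally vs. offloaded to each edge server), while a separate continuous output is squashed into a valid CPU-frequency range. The function name `hybrid_action`, the softplus/sigmoid mappings, and the `f_max` bound are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_action(logits, freq_raw, f_max=2.0e9):
    """Map raw network outputs to a constrained hybrid action (illustrative sketch).

    logits   -- unbounded outputs feeding the Dirichlet head
    freq_raw -- unbounded output for computational-frequency control
    """
    # Dirichlet concentration parameters must be positive: softplus keeps them > 0.
    alpha = np.log1p(np.exp(logits)) + 1e-6
    # Sample a partition vector on the probability simplex (entries >= 0, sum = 1):
    # one fraction per processing location (local device plus each edge server).
    partition = rng.dirichlet(alpha)
    # Squash the continuous variable into (0, f_max) with a sigmoid.
    freq = f_max / (1.0 + np.exp(-freq_raw))
    return partition, freq

partition, freq = hybrid_action(np.array([0.5, 1.2, -0.3]), 0.8)
```

The simplex constraint is satisfied by construction, which is the appeal of a Dirichlet parameterization over clipping or projecting unconstrained continuous actions.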
dc.identifier.citation: Ale, L., King, S. A., Zhang, N., Sattar, A. R., & Skandaraniyam, J. (2022, March 1). D3PG: Dirichlet DDPG for task partitioning and offloading with constrained hybrid action space in Mobile Edge Computing. arXiv. https://doi.org/10.48550/arXiv.2112.09328
dc.identifier.doi: https://doi.org/10.48550/arXiv.2112.09328
dc.identifier.uri: https://hdl.handle.net/1969.6/94071
dc.language.iso: en_US
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: mobile edge computing
dc.subject: task partition
dc.subject: deep reinforcement learning
dc.subject: computation offloading
dc.subject: energy efficiency
dc.subject: td3
dc.subject: ddpg
dc.subject: dirichlet
dc.title: D3PG: Dirichlet DDPG for task partitioning and offloading with constrained hybrid action space in mobile edge computing
dc.type: Article

Files

Original bundle

Name: D3PG_Dirichlet DDPG for Task Partitioning and Offloading with Constrained Hybrid Action Space in Mobile Edge Computing.pdf
Size: 1.33 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.72 KB
Description: Item-specific license agreed upon to submission