Dynamic Channel Access for Underwater Sensor Networks: A Deep Reinforcement Learning Approach
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 송유재 | - |
dc.contributor.author | 신희철 | - |
dc.date.accessioned | 2022-04-08T17:42:57Z | - |
dc.date.available | 2022-04-08T17:42:57Z | - |
dc.date.created | 20210311144408 | - |
dc.date.issued | 2021 | - |
dc.identifier.uri | http://repository.kmou.ac.kr/handle/2014.oak/12598 | - |
dc.identifier.uri | http://kmou.dcollection.net/common/orgView/200000375820 | - |
dc.description.abstract | We study the dynamic channel access problem in distributed underwater acoustic sensor networks (UASNs) using a deep reinforcement learning algorithm. First, the channel allocation problem in UASNs is formulated as a multi-agent Markov decision process in which each underwater sensor seeks to maximize total network data throughput without exchanging data or coordinating with the other underwater sensors. We then propose a deep Q-learning-based reinforcement learning algorithm in which each underwater sensor learns not only the channel access behavior of the other sensors but also features of the available underwater acoustic channels, such as their channel error probabilities, so as to maximize total network data throughput. Finally, extensive performance evaluation confirms that the proposed algorithm, although it operates in a distributed manner without data exchange between sensors, performs comparably to or better than the reference algorithms. | - |
dc.description.tableofcontents | 1. Introduction 1 2. Underwater communication 8 2.1 Acoustic communication 8 2.1.1 Noise 9 2.1.2 Transmission Loss 10 2.1.3 Multipath Propagation 12 2.1.4 Doppler spread 12 2.2 RF & Optical Communication 13 3. Reinforcement Learning 15 3.1 Reinforcement Learning 15 3.2 Q-learning 15 4. Dynamic Channel Access Algorithm 19 4.1 System Model 19 4.2 Problem Formulation 21 4.3 Proposed Algorithm 25 5. Performance Evaluation 27 5.1 Network Environment 27 5.2 Learning Environment 28 5.3 Baseline Schemes 28 5.4 Performance Evaluation 29 6. Conclusion 34 REFERENCES 35 | - |
dc.language | eng | - |
dc.publisher | 한국해양대학교 해양과학기술전문대학원 | - |
dc.rights | Theses of Korea Maritime & Ocean University are protected by copyright. | - |
dc.title | Dynamic Channel Access for Underwater Sensor Networks: A Deep Reinforcement Learning Approach | - |
dc.type | Dissertation | - |
dc.date.awarded | 2021. 2 | - |
dc.embargo.liftdate | 2021-03-11 | - |
dc.contributor.alternativeName | Shin Huicheol | - |
dc.contributor.department | 해양과학기술전문대학원 해양과학기술융합학과 | - |
dc.contributor.affiliation | 한국해양대학교 해양과학기술전문대학원 해양과학기술융합학과 | - |
dc.description.degree | Master | - |
dc.identifier.bibliographicCitation | [1]신희철, “Dynamic Channel Access for Underwater Sensor Networks: A Deep Reinforcement Learning Approach,” 한국해양대학교 해양과학기술전문대학원, 2021. | - |
dc.identifier.holdings | 000000001979▲200000001935▲200000375820▲ | - |
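The abstract describes a multi-agent formulation in which each sensor independently learns which acoustic channel to access from its own success/failure feedback, accounting for both collisions with other sensors and per-channel error probabilities. As a rough illustration only, the sketch below uses a simplified tabular Q-learning stand-in for the thesis's deep Q-network; the state encoding (previous channel), reward definition (1 per successful slot), and all names and parameters here are assumptions for this sketch, not the thesis's actual model.

```python
import random
from collections import defaultdict

class ChannelAgent:
    """Tabular Q-learning stand-in for a per-sensor deep Q-network.

    State:  the channel this agent used in the previous slot (simplified).
    Action: which of the N acoustic channels to transmit on next.
    Reward: 1 for a collision-free, error-free transmission, else 0.
    """
    def __init__(self, n_channels, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n = n_channels
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q[state] -> list of action values, lazily initialized to zeros.
        self.q = defaultdict(lambda: [0.0] * n_channels)

    def act(self, state):
        # Epsilon-greedy channel selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n)
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def simulate(n_agents=2, n_channels=2, err_prob=(0.0, 0.5), steps=5000, seed=0):
    """Run a toy slotted simulation; returns mean network throughput per slot.

    A slot succeeds for an agent only if no other agent picked the same
    channel (no collision) and the channel's random error did not occur.
    """
    random.seed(seed)
    agents = [ChannelAgent(n_channels) for _ in range(n_agents)]
    states = [0] * n_agents
    throughput = 0
    for _ in range(steps):
        actions = [a.act(s) for a, s in zip(agents, states)]
        for i, (agent, act) in enumerate(zip(agents, actions)):
            collided = actions.count(act) > 1
            success = (not collided) and random.random() >= err_prob[act]
            reward = 1 if success else 0
            throughput += reward
            agent.update(states[i], act, reward, act)
            states[i] = act
    return throughput / steps
```

In this toy setting the agents never exchange messages, mirroring the distributed operation claimed in the abstract: each one observes only its own reward, yet over time they tend to spread across channels (and away from the error-prone one) because collisions and channel errors depress the corresponding Q-values.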