SwePub

Optimal Radio Frequency Energy Harvesting with Limited Energy Arrival Knowledge

Zou, Zhuo (author)
KTH VinnExcellence Center for Intelligence in Paper and Packaging (iPACK), Royal Institute of Technology (KTH); Chalmers University of Technology; Qamcom Research And Technology AB
Gidmark, A. (author)
Chalmers University of Technology
Charalambous, Themistoklis, 1981 (author)
Chalmers University of Technology
Johansson, M. (author)
Royal Institute of Technology (KTH)
Institute of Electrical and Electronics Engineers (IEEE), 2016
English.
In: IEEE Journal on Selected Areas in Communications. Institute of Electrical and Electronics Engineers (IEEE). ISSN 0733-8716, eISSN 1558-0008. Vol. 34, no. 12, pp. 3528-3539
Journal article (peer-reviewed)

Abstract
We develop optimal sleeping and harvesting policies for radio frequency (RF) energy harvesting devices, formalizing the following intuition: when the ambient RF energy is low, devices consume more energy being awake than what can be harvested and should enter sleep mode; when the ambient RF energy is high, on the other hand, it is essential to wake up and harvest. Toward this end, we consider a scenario with intermittent energy arrivals described by a two-state Gilbert-Elliott Markov chain model. The challenge is that the state of the Markov chain can only be observed during the harvesting action, and not while in sleep mode. Two scenarios are studied under this model. In the first scenario, we assume that the transition probabilities of the Markov chain are known and formulate the problem as a partially observable Markov decision process (POMDP). We prove that the optimal policy has a threshold structure and derive the optimal decision parameters. In the practical scenario where the ratio between the reward and the penalty is neither too large nor too small, the POMDP framework and the threshold-based optimal policies are very useful for finding non-trivial optimal sleeping times. In the second scenario, we assume that the Markov chain parameters are unknown and formulate the problem as a Bayesian adaptive POMDP and propose a heuristic posterior sampling algorithm to reduce the computational complexity. The performance of our approaches is demonstrated via numerical examples.
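The setup in the abstract can be sketched in code. The following is a minimal illustrative simulation, not the paper's exact formulation: a hidden two-state Gilbert-Elliott chain models the ambient RF energy, the device observes the state only while harvesting, and while asleep it propagates a belief (the probability of the "good" state) and wakes when that belief crosses a threshold. All numeric parameters (transition probabilities, reward, awake cost, threshold) are hypothetical.

```python
import random

# Hypothetical parameters (not taken from the paper).
P01 = 0.1      # P(bad -> good)
P10 = 0.2      # P(good -> bad)
REWARD = 1.0   # energy harvested per slot when the state is good
COST = 0.3     # energy spent per slot while awake

def belief_update(b):
    """One-step prediction of P(good) when no observation is made."""
    return b * (1 - P10) + (1 - b) * P01

def threshold_policy(b, threshold=0.5):
    """Wake up to harvest only when the belief in the good state is high."""
    return "harvest" if b >= threshold else "sleep"

def simulate(steps=10000, threshold=0.5, seed=0):
    """Run the sleep/harvest loop and return the net harvested energy."""
    rng = random.Random(seed)
    state = 0                        # hidden state: 0 = bad, 1 = good
    belief = P01 / (P01 + P10)       # start from the stationary belief
    energy = 0.0
    for _ in range(steps):
        if threshold_policy(belief, threshold) == "harvest":
            energy += (REWARD if state == 1 else 0.0) - COST
            belief = 1.0 if state == 1 else 0.0  # state observed while awake
        belief = belief_update(belief)           # predict next slot's belief
        # Hidden Gilbert-Elliott transition.
        if state == 1:
            state = 0 if rng.random() < P10 else 1
        else:
            state = 1 if rng.random() < P01 else 0
    return energy
```

With these example numbers the chain's stationary probability of the good state is P01 / (P01 + P10) ≈ 0.33; the paper's contribution is proving that, for the actual POMDP, a threshold rule of this shape is optimal and deriving its parameters, rather than picking the threshold by hand as done here.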

Subject headings

ENGINEERING AND TECHNOLOGY  -- Electrical Engineering, Electronic Engineering, Information Engineering (hsv//eng)
ENGINEERING AND TECHNOLOGY  -- Electrical Engineering, Electronic Engineering, Information Engineering -- Telecommunications (hsv//eng)

Keyword

ambient radio frequency energy
Bayesian inference
Energy harvesting
learning
partially observable Markov decision process
Bayesian networks
Chains
Inference engines
Optimization
Radio waves
Sleep research
Markov chain models
Radio-frequency energy
Radio-frequency energy harvesting
Sampling algorithm
Transition probabilities
Markov processes

Publication and Content Type

ref (peer-reviewed)
art (journal article)
