TY - GEN
T1 - A learning-on-cloud power management policy for smart devices
AU - Pan, Gung Yu
AU - Lai, Bo Cheng Charles
AU - Chen, Sheng Yen
AU - Jou, Jing Yang
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2015/1/5
Y1 - 2015/1/5
N2 - Energy consumption poses severe limitations for smart devices, urging the development of effective and efficient power management policies. State-of-the-art learning-based policies are autonomous and adaptive to the environment, but they incur costly computational overhead and lengthy convergence time. Since smart devices are connected to the Internet, this paper proposes the Learning-on-Cloud (LoC) policy to exploit cloud computing for power management. Sophisticated learning engines are offloaded from local devices to the cloud with minimal communication data, thus reducing the runtime overhead. The learning data are shared among many devices of the same model, raising the convergence rate. With one thousand devices connected to the cloud, the LoC agent converges within a few iterations; its energy saving exceeds that of both the greedy and the learning-based policies, with a lower latency penalty. When the LoC policy is implemented as an Android app, the measured overhead is only 0.01% of the system time.
AB - Energy consumption poses severe limitations for smart devices, urging the development of effective and efficient power management policies. State-of-the-art learning-based policies are autonomous and adaptive to the environment, but they incur costly computational overhead and lengthy convergence time. Since smart devices are connected to the Internet, this paper proposes the Learning-on-Cloud (LoC) policy to exploit cloud computing for power management. Sophisticated learning engines are offloaded from local devices to the cloud with minimal communication data, thus reducing the runtime overhead. The learning data are shared among many devices of the same model, raising the convergence rate. With one thousand devices connected to the cloud, the LoC agent converges within a few iterations; its energy saving exceeds that of both the greedy and the learning-based policies, with a lower latency penalty. When the LoC policy is implemented as an Android app, the measured overhead is only 0.01% of the system time.
UR - http://www.scopus.com/inward/record.url?scp=84936850111&partnerID=8YFLogxK
U2 - 10.1109/ICCAD.2014.7001379
DO - 10.1109/ICCAD.2014.7001379
M3 - Conference contribution
AN - SCOPUS:84936850111
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
SP - 376
EP - 381
BT - 2014 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2014 - Digest of Technical Papers
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 33rd IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2014
Y2 - 2 November 2014 through 6 November 2014
ER -