Ember: energy management of batteryless event detection sensors with deep reinforcement learning

Research Article · Open Access · DOI: 10.1145/3384419.3430734
Published: 16 November 2020

ABSTRACT

Energy management can extend the lifetime of batteryless, energy-harvesting systems by judiciously utilizing the available energy. Duty cycling of such systems is especially challenging for event detection, as events arrive sporadically and energy availability is uncertain. If the node sleeps too much, it may miss important events; if it depletes energy too quickly, it will stop operating in low-energy conditions and miss events. Thus, accurate event prediction is important in making this tradeoff. We propose Ember, an energy management system based on deep reinforcement learning that duty cycles event-driven sensors in low-energy conditions. We train a policy using historical real-world data traces of motion, temperature, humidity, pressure, and light events. The resulting policy learns to capture up to 95% of the events without depleting the node. For deployments at new locations, where no historical data is available for training, we propose a self-supervised mechanism that collects ground-truth data while simultaneously learning from it. Ember learns to capture the majority of events within a week without any historical data, and matches the performance of policies trained with historical data within a few weeks. We deployed 40 nodes running Ember for indoor sensing and demonstrate that the learned policies generalize to real-world settings and outperform state-of-the-art techniques.
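To make the duty-cycling tradeoff described above concrete, here is a minimal sketch of the kind of simulated environment in which such a policy could be trained. All names, energy costs, and the reward shaping below are illustrative assumptions for exposition, not Ember's actual formulation; a real setup would include richer state (time of day, light level, recent event history) and a learned policy rather than the hand-coded baseline shown.

```python
import random


class BatterylessEventEnv:
    """Toy simulator of a batteryless event-detection node.

    Illustrative only: the state, action, and reward design here are
    assumptions, not the formulation used by Ember.
    """

    def __init__(self, capacity=100.0, harvest_rate=1.0, event_prob=0.1, seed=0):
        self.capacity = capacity          # energy storage size (arbitrary units)
        self.harvest_rate = harvest_rate  # energy harvested per time step
        self.event_prob = event_prob      # chance an event occurs in a step
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.energy = self.capacity / 2
        return self._obs()

    def _obs(self):
        # Observation: normalized stored energy.
        return (self.energy / self.capacity,)

    def step(self, duty_cycle):
        """duty_cycle in [0, 1]: fraction of the step spent sensing."""
        sense_cost = 5.0 * duty_cycle     # sensing drains energy
        self.energy = min(self.capacity,
                          self.energy + self.harvest_rate - sense_cost)
        event = self.rng.random() < self.event_prob
        if self.energy <= 0:
            self.energy = 0.0
            reward = -10.0                # depleted: node dies and misses events
        elif event and self.rng.random() < duty_cycle:
            reward = 1.0                  # event captured while awake
        else:
            reward = 0.0                  # idle or event missed while asleep
        return self._obs(), reward, self.energy <= 0


# Greedy baseline for comparison: sense harder when energy is plentiful.
env = BatterylessEventEnv(seed=42)
obs = env.reset()
total = 0.0
for _ in range(200):
    duty = obs[0]                         # duty cycle proportional to stored energy
    obs, r, done = env.step(duty)
    total += r
    if done:
        obs = env.reset()
```

A deep RL agent trained in such a simulator replaces the hand-coded `duty = obs[0]` rule with a learned mapping from state to duty cycle, which is where the tension between missing events (sleeping too much) and dying (sensing too much) gets resolved by the reward signal.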


Published in

SenSys '20: Proceedings of the 18th Conference on Embedded Networked Sensor Systems
November 2020, 852 pages
ISBN: 9781450375900
DOI: 10.1145/3384419

Copyright © 2020 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 174 of 867 submissions, 20%
