Research Article | Open Access
DOI: 10.1145/3384419.3430783

Pointillism: accurate 3D bounding box estimation with multi-radars

Published: 16 November 2020

ABSTRACT

Autonomous perception requires high-quality environment sensing in the form of 3D bounding boxes of dynamic objects. The primary sensors used in automotive systems are light-based cameras and LiDARs. However, they are known to fail in adverse weather conditions. Radars can potentially solve this problem, as they are barely affected by adverse weather. However, specular reflections of wireless signals make radar point clouds sparse and noisy, degrading detection performance. We introduce Pointillism, a system that combines data from multiple spatially separated radars, placed at an optimal separation, to mitigate these problems. We introduce the novel concept of Cross Potential Point Clouds, which exploits the spatial diversity induced by multiple radars to address the noise and sparsity of radar point clouds. Furthermore, we present the design of RP-net, a novel deep learning architecture designed explicitly for radar's sparse data distribution, to enable accurate 3D bounding box estimation. The spatial techniques designed and proposed in this paper are fundamental to radar point cloud distributions and can benefit other radar sensing applications.
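The abstract does not detail how a Cross Potential Point Cloud is constructed, but the underlying idea, using agreement between spatially separated viewpoints to separate true reflectors from specular ghosts, can be sketched. The following minimal Python sketch is an illustration under stated assumptions: the function names, the extrinsic transforms, and the corroboration-by-proximity heuristic are hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def to_common_frame(points, rotation, translation):
    """Map an (N, 3) radar point cloud into a shared world frame using
    the radar's extrinsic calibration (assumed known in this sketch)."""
    return points @ rotation.T + translation

def cross_potential_cloud(cloud_a, cloud_b, radius=0.5):
    """Fuse two radars' clouds (already in a common frame) by keeping
    points corroborated by the other viewpoint within `radius` meters.
    Specular ghosts tend to be viewpoint-dependent, so points seen by
    only one radar are suppressed. This pairing heuristic is a
    hypothetical stand-in for the paper's Cross Potential Point Cloud
    construction, which the abstract does not specify."""
    # Pairwise distances between the two clouds: shape (len(a), len(b)).
    dists = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    keep_a = cloud_a[(dists < radius).any(axis=1)]
    keep_b = cloud_b[(dists < radius).any(axis=0)]
    return np.vstack([keep_a, keep_b])

# Toy example: two radars observe the same car; each also sees its own
# viewpoint-specific ghost reflections. Extrinsics would be applied via
# to_common_frame in a real setup; skipped here for brevity.
rng = np.random.default_rng(0)
car = rng.normal([10.0, 0.0, 0.5], 0.3, size=(30, 3))          # true reflectors
ghosts_a = rng.uniform(-5, 5, size=(10, 3)) + [12.0, 3.0, 0.0]  # radar-A ghosts
ghosts_b = rng.uniform(-5, 5, size=(10, 3)) + [8.0, -3.0, 0.0]  # radar-B ghosts

radar_a = np.vstack([car + rng.normal(0.0, 0.05, car.shape), ghosts_a])
radar_b = np.vstack([car + rng.normal(0.0, 0.05, car.shape), ghosts_b])

fused = cross_potential_cloud(radar_a, radar_b)
print(fused.shape)  # mostly the corroborated points near the car
```

In this toy setup, points that both viewpoints agree on survive the fusion while single-viewpoint ghosts are filtered out, which is the intuition behind exploiting spatial diversity between radars.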


Published in

SenSys '20: Proceedings of the 18th Conference on Embedded Networked Sensor Systems
November 2020, 852 pages
ISBN: 978-1-4503-7590-0
DOI: 10.1145/3384419
Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 174 of 867 submissions, 20%
