Learning to Simulate Crowds with Crowds

Project information

  • Publication:
    Bilas Talukdar, Yunhao Zhang, and Tomer Weiss. 2023. Learning to Simulate Crowds with Crowds. In ACM SIGGRAPH 2023 Posters (Los Angeles, CA, USA) (SIGGRAPH ’23). Association for Computing Machinery, New York, NY, USA, Article 6, 2 pages.
  • 2-page Extended Abstract, Poster & Video on the Publisher's Site
  • BibTeX:
    
        @inproceedings{10.1145/3588028.3603670,
          author = {Talukdar, Bilas and Zhang, Yunhao and Weiss, Tomer},
          title = {Learning to Simulate Crowds with Crowds},
          year = {2023},
          isbn = {9798400701528},
          publisher = {Association for Computing Machinery},
          address = {New York, NY, USA},
          url = {https://doi.org/10.1145/3588028.3603670},
          doi = {10.1145/3588028.3603670},
          booktitle = {ACM SIGGRAPH 2023 Posters},
          articleno = {6},
          numpages = {2},
          location = {Los Angeles, CA, USA},
          series = {SIGGRAPH '23}
        }

Abstract

Controlling agent behaviors with Reinforcement Learning (RL) is of continuing interest in multiple areas. One major focus is simulating multi-agent crowds that avoid collisions while locomoting toward their goals. Although avoiding collisions is important, it is also necessary to capture realistic anticipatory navigation behaviors. We introduce a novel methodology that includes: 1) an RL method for learning an optimal navigational policy, 2) position-based constraints for correcting the policy's navigational decisions, and 3) a crowd-sourcing framework for selecting policy control parameters. Based on the optimally selected parameters, we train a multi-agent navigation policy, which we demonstrate on crowd benchmarks. We compare our method to existing works and demonstrate that our approach achieves superior multi-agent behaviors.
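To give a flavor of the second component, position-based constraints, the sketch below shows one common way such a correction step can work: after the policy proposes new agent positions, overlapping agent pairs are iteratively projected apart until a minimum separation holds, in the style of position-based dynamics. This is a minimal illustrative sketch; the function name, radius, and iteration count are assumptions for this example, not details from the paper.

```python
import math

def project_collisions(positions, radius=0.5, iterations=4):
    """Position-based correction step (illustrative, not the paper's exact method):
    iteratively push overlapping agent pairs apart until every pair is at
    least 2 * radius apart, as in PBD-style constraint projection."""
    pos = [list(p) for p in positions]
    min_dist = 2.0 * radius
    for _ in range(iterations):
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy)
                if 1e-9 < d < min_dist:
                    # Split the penetration correction equally between the pair.
                    push = 0.5 * (min_dist - d) / d
                    pos[i][0] -= dx * push
                    pos[i][1] -= dy * push
                    pos[j][0] += dx * push
                    pos[j][1] += dy * push
    return pos

# Two agents the policy placed too close together get separated
# to exactly the minimum distance of 2 * radius = 1.0.
corrected = project_collisions([(0.0, 0.0), (0.4, 0.0)], radius=0.5)
```

In a full pipeline, a step like this would run after each policy action, so the learned policy handles anticipatory steering while the projection guarantees collision-free positions.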


Poster