Humans innately build internal cognitive maps of the world. The algorithms underlying cognitive mapping are hypothesized to generalize across sensory modalities, to support higher cognitive functions such as reasoning and planning, and to construct more abstract representations such as memories and semantic knowledge. Despite these hypothesized roles, it remains unclear how cognitive maps are constructed and implemented. In my PhD, I introduced predictive coding as a generalized neural algorithm for building internal cognitive maps from visual data. I demonstrated that a neural network trained on predictive coding, that is, to predict future visual observations from past ones, builds an internal spatial representation of its environment. This work pointed to the potential of predictive coding as a general framework for building cognitive maps across all sensory domains. However, several key questions remain. First, how does cognitive map construction generalize to other sensory modalities? Second, how do cognitive maps extend from sensory modalities to more abstract information such as semantic knowledge? Third, how are cognitive maps implemented? In my proposed work, I will use computation and theory to explore how cognitive maps are constructed, how cognitive mapping can be implemented, and how cognitive maps can be applied to solve cognitive tasks.
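The core idea, learning a map of the environment purely by predicting the next observation, can be illustrated with a toy sketch. This is not the model from my PhD work; it is a minimal, hypothetical example in which an agent walks clockwise around a ring of four positions, each observed as a one-hot vector, and a linear predictor is trained by gradient descent on the prediction error. After training, the weight matrix recovers the ring's transition structure, a toy analogue of a spatial map emerging from prediction alone.

```python
# Toy sketch of predictive coding as next-observation prediction.
# Assumed setup (illustrative only): a ring world with N one-hot states
# and a deterministic clockwise walk.

N = 4  # number of ring positions

def one_hot(i):
    return [1.0 if j == i else 0.0 for j in range(N)]

# W[i][j]: weight from input unit j to output unit i, initialized to zero.
W = [[0.0] * N for _ in range(N)]
lr = 0.5  # learning rate

for step in range(200):
    pos = step % N
    x = one_hot(pos)                      # current observation
    target = one_hot((pos + 1) % N)       # next observation (clockwise)
    pred = [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]
    err = [target[i] - pred[i] for i in range(N)]  # prediction error
    for i in range(N):
        for j in range(N):
            W[i][j] += lr * err[i] * x[j]  # descend the prediction error

# The learned weights encode the ring's adjacency: W[(i+1) % N][i] -> 1,
# i.e. the predictor has internalized the environment's transition map.
print(round(W[1][0], 2), round(W[2][1], 2))  # → 1.0 1.0
```

The point of the sketch is that no position label is ever given to the model; the spatial structure is recovered solely from the temporal statistics of the observations, which is the sense in which prediction can build a map.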
[CV]
GPG: 19F2 FF0F 7092 BF72 83C0 D58A 51B7 7F35 E429 558B [GPG Key]