Planning a path to a destination among multiple options and obstacles is a common task. We propose a two-component cognitive model that combines retrieval of knowledge about the environment with search guided by visual perception. In the first component, subsymbolic information acquired during navigation aids the retrieval of declarative information representing possible paths. In the second component, visual information directs the search, which in turn creates knowledge for the first component. The model is implemented in the ACT-R cognitive architecture and makes realistic assumptions about memory access and shifts of visual attention. We present simulation results for memory-based high-level navigation in grid and tree structures and for visual navigation in mazes, varying relevant cognitive parameters (retrieval noise and the number of visual FINSTs) and environmental parameters (maze and path size). The visual component is evaluated against data from a multi-robot control experiment in which subjects planned paths for robots exploring a building. We describe a method for comparing trajectories that does not rely on aligned points along the itinerary. The evaluation shows that the model provides a good fit, but also that planning strategies may differ across task loads.