The Muddy Map Explorer allows interactive, text-based exploration and navigation of geographic data (e.g., from OpenStreetMap). The goal is to help blind users construct mental models of their neighborhoods, cities, regions, etc.
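To make "text-based exploration" concrete, here is a minimal sketch of one way textual descriptions might be generated from geographic data. Everything here is hypothetical (the place names, the coordinates, the flat-earth distance approximation); it is not Muddy Map Explorer's actual implementation, just an illustration of turning coordinates into compass-and-distance prose.

```python
import math

# Hypothetical sample data: (latitude, longitude) for a few landmarks.
PLACES = {
    "library": (44.9740, -93.2350),
    "cafe": (44.9735, -93.2330),
    "bus stop": (44.9750, -93.2345),
}

def bearing_to_compass(deg):
    """Map a bearing in degrees (0 = north, clockwise) to one of eight compass words."""
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    return names[round(deg / 45) % 8]

def describe_from(lat, lon):
    """Return plain-text descriptions of each landmark relative to (lat, lon).

    Uses a flat-earth approximation (fine at neighborhood scale):
    ~111 km per degree of latitude, longitude scaled by cos(latitude).
    """
    lines = []
    for name, (plat, plon) in PLACES.items():
        dlat = plat - lat
        dlon = (plon - lon) * math.cos(math.radians(lat))
        meters = math.hypot(dlat, dlon) * 111_000
        deg = math.degrees(math.atan2(dlon, dlat)) % 360
        lines.append(f"{name}: about {round(meters, -1):.0f} m "
                     f"to the {bearing_to_compass(deg)}")
    return lines

for line in describe_from(44.9738, -93.2340):
    print(line)
```

One open design question this makes visible: egocentric descriptions like these ("the cafe is 90 m to the southeast of you") may build a different kind of mental model than allocentric ones ("the cafe is east of the library"), which is exactly where the grid-cell literature might have something to say.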
I’m wondering whether any of the work on the biological basis of location awareness (e.g., grid cells) might offer insights into how best to describe map data to support learning and the construction of these mental models. Comments, clues, suggestions?
-r