Allocentric Grid Fields on a Moving Sensory Surface

What I will suggest is based on BitKing’s HTM Columns into Hexagonal Grids!, among other people’s ideas of course (especially oscillatory interference models of grid fields in EC). I’m going to hold off on giving sources until I hopefully write about evidence and motivations after working on this more, but if anyone wants sources I’d be glad to make a list early.

A hexagonal arrangement on the cortical sheet is not biologically realistic, since grid cells are not physically arranged that way, but that might just reflect imprecise circuitry. The cortical sheet in primary cortex is topographic with respect to the sensory input, yet that topography is not precise at the cellular level. Since any grid cells in primary cortex would probably have very high spatial frequencies, even small shifts in receptive fields (relative to perfect topography) would obscure the grid fields that would be arranged on the sheet if the topography were perfect.

To save time, you can skip the next two sections.

Introduction

The cortex must be able to recognize relative locations of static sensory inputs. For example, if you touch an entire object with the palm of your hand or look at an object, no path integration across movements is necessary to recognize the object. This means there must be some way to recognize relative locations between cortical columns, or possibly between smaller portions of the cortical sheet. Perhaps primary sensory cortex is mostly concerned with allocentric representations on the sensory surface rather than all of space.

This system has two purposes. First, represent relative locations of features on the sensory surface. Second, account for behaviorally generated movements of the sensory surface in a way that could plausibly also generate that same behavior. The first of those has been tested in code, although simplistically. The second seems likely to work, and it suggests how L5 TT cells could generate behavior (through more global, oscillation-related activity) and represent things (through single-cell activity) without the two interfering. It is motivated by some general aspects of layer 5 but still needs to be tied back to solid biological mechanisms.

The Simplest Scenario

Each point on the sensory surface either is receiving input or it is not. There are no details on the sensed features.

The sensor is a 1-d surface and there are two features contacting that surface. The goal is to find the relative location of those features.

More precisely, the goal is to determine whether or not those two features are a multiple of a certain distance away from each other. As in the entorhinal cortex, multiple grid field spacings are used to identify the exact relative spacing.
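
To make that concrete, here is a toy calculation (my own, not part of the model code later in this post): each module only reports whether the distance is a multiple of its spacing, but intersecting those binary reports narrows the distance down. The spacings and responses below are made-up values.

// Toy illustration only: hypothetical spacings and responses, not taken
// from the model code later in this post.
public class SpacingCombiner {
	public static void main(String[] args) {
		int[] spacings = {3, 4, 5};                // hypothetical module spacings
		boolean[] responded = {true, false, true}; // hypothetical module responses
		for(int distance = 1; distance <= 30; distance++) {
			boolean consistent = true;
			for(int m = 0; m < spacings.length; m++) {
				boolean wouldRespond = distance % spacings[m] == 0;
				if(wouldRespond != responded[m]) {
					consistent = false;
				}
			}
			if(consistent) {
				System.out.println("distance " + distance + " fits all modules");
			}
		}
	}
}

With these made-up values, only distances 15 and 30 fit all three modules out of the distances checked (1 through 30), so a handful of spacings pins the spacing down over a usefully large range.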

This system only really finds the distance between features. I think L5 TT implements this system and L5 ST causes L5 TT to care about relative locations rather than just distances.

Reasoning

L5 ST could detect directions between features. Distance plus direction gives relative location. I don’t know enough about the other layers to assign roles to them. L5 ST might be responsible for selecting specific directions to enforce on the L5 TT grid fields, because it projects to L5 TT cells and receives much more inhibition from other columns, which would let it select specific directions of incoming propagations. It is also a good source of orientation or direction signals because it projects to the same and nearby columns in L2/3, and an orientation or direction signal should probably be somewhat broad.

There are some things for which this scenario is insufficient (for example, what about more than two features?), but in the interest of time, and because I don’t have ideas about solutions for everything, I’ll keep the scenario limited to this one.

Mechanisms for Identifying Relative Location
There are two conceptual steps.

  1. Scan (as in, to move across) the sensory input.
  2. Form grid fields which would be egocentric without the scanning.

Scanning with Propagations
For each feature’s location on the sensor, there is a corresponding single unit active (unit meaning a neuron, a cortical column, a small patch of the cortical sheet, or whatever else). On the next time step, the activity moves one unit to the right. Over time, the activity moves all the way to the right, loops back around to the leftmost unit, and finally reaches the unit just before the first one to activate. The propagation could continue, but at this point it has completed a single scan of the input.

Why do I call a simple propagation scanning the sensory input? Imagine that each unit shifted its receptive field every time step, literally moving its receptive field across the sensory surface in the same way that the propagations move. That would produce the same effect. The first unit to activate would be the one whose initial, non-shifted receptive field is aligned with the input. The next would be the one to the right. And so on.

Forming Grid Fields

After a cell first activates due to a passing propagation, it starts oscillating, for instance activating for one time step and then inactivating for five time steps. If a cell’s oscillatory active phase falls in the same time step as a propagation (other than the propagation which triggered it), it has identified a relative location. All cells with the same oscillation frequency either activate in this grid cell mode or do not. Grid fields have been formed.

Those mechanisms are pretty straightforward, but it might not be clear how they form grid fields, so let’s use an example.

There are ten locations on the sensory surface and ten corresponding units. The features are at positions 3 and 6.

T1. In response to the sensory input, units 3 and 6 activate and start oscillating.
T2. Units 4 and 7 activate and start oscillating. Then 5 and 8 at T3.
T4. Units 6 and 9 activate and start oscillating. If the oscillation frequency fits, unit 6 is in its oscillatory active state, so it enters grid cell mode. (It signals a grid cell response, whether by firing or by something else, such as activating when it otherwise wouldn’t because of the hyperpolarized oscillatory inactive state, thereby narrowing down the SMI output layer’s representations.)
T5. The propagation reaches unit 7. Since unit 7 started oscillating one time step after unit 6, it is now in its oscillatory active state if unit 6 just had its grid cell response. It therefore responds the same way.

From there, the same thing happens to all ten cells, more or less. When the rightmost propagation reaches oscillating cells, it might not trigger grid cell responses, because the relative distance (if you stick the left and right ends together and go clockwise) is different. That isn’t really a concern, though, because this needs to be changed to work on a 2D sheet, and if the positions are close together, most of the same cells will respond. The propagations could probably travel in both directions to solve this.

To extend this to more than 1 dimension, using a propagation for each dimension (or a radiating ring/spherical shell/higher dimensional shape) would probably work.
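
Below is a rough sketch of the radiating-ring option (my own code, not part of the listing later in this post). It assumes every feature appears at the same time, so a single shared radius is enough; staggered onsets would need a radius per origin. It uses Chebyshev distance as a stand-in for a circular wavefront and does not wrap around the edges the way the 1D version does.

import java.util.ArrayList;
import java.util.List;

public class RingPropagator2D {
	
	int rows, cols;
	int radius = 0;                          // grows by one unit per tick
	List<int[]> origins = new ArrayList<>(); // {row, col} of each feature
	
	public RingPropagator2D(int rows, int cols) {
		this.rows = rows;
		this.cols = cols;
	}
	
	public void newPropagation(int row, int col) {
		origins.add(new int[] {row, col});
	}
	
	public void tick() {
		radius++;
	}
	
	// true wherever some ring currently passes (Chebyshev distance == radius)
	public boolean[][] states() {
		boolean[][] states = new boolean[rows][cols];
		for(int[] o : origins) {
			for(int r = 0; r < rows; r++) {
				for(int c = 0; c < cols; c++) {
					if(Math.max(Math.abs(r - o[0]), Math.abs(c - o[1])) == radius) {
						states[r][c] = true;
					}
				}
			}
		}
		return states;
	}
}

The oscillation and coincidence-detection parts would stay the same as in the 1D version; only the propagation shape changes.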

All of this is pretty flexible because the basic mechanisms and concepts have just a few straightforward pieces.

Mechanisms for Path Integration
Let’s say the sensor shifts to the right. Then cells will have their receptive fields shifted to the right. To compensate for that, the propagations slow down depending on the behavioral movement speed, for as long as the movement continues. If the sensory surface moves to the left, the propagations speed up.
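
Here is a minimal sketch of that compensation (my own, hypothetical code), built on top of the Propagator class in the Code section below. The velocity argument is a hypothetical signed movement signal in sensor units per tick, positive meaning the sensor moves in the same direction as the propagations.

public class MovingPropagator extends Propagator {
	
	public MovingPropagator(int units) {
		super(units, 1); // base speed of one unit per tick
	}
	
	// Hedged sketch only: positive velocity slows or pauses the propagations,
	// negative velocity speeds them up, as described above.
	public void tick(int velocity) {
		int steps = Math.max(0, 1 - velocity);
		for(int s = 0; s < steps; s++) {
			tick(); // the inherited tick() advances every propagation by one unit
		}
	}
}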

The actual receptive fields also have to shift, not just the scanning receptive fields (propagations).

More about that below.

I tried to implement the scanning receptive fields using broad receptive fields and oscillations which cause the most active cells to shift somewhat over time for the same input, but I couldn’t get it to work. There almost certainly is a way to do it, though, because L5 TT cells in FEF shift their receptive fields during presaccadic predictive remapping. There have been contradictory results on whether the receptive fields jump or shift rapidly, but saccades are quick enough that a rapid shift could easily appear as a jump.

Note that this system does not need to use propagating activity. L5 TT cells in barrel cortex have longer-latency responses to whiskers more distant from their thalamus-driven RF center, which I assume is caused by propagating signals. But it could be something else, such as weak lateral excitation producing wide subthreshold RFs, coupled with subcortically generated oscillations whose peaks travel across the sheet because of a gradient in oscillatory phase across the sheet. Barrel cortex LFP oscillations synchronize with the whisking cycle, and there is a map of the space scanned by the whiskers, at least in L2/3, so something like this is probably happening.

I couldn’t get it to work, but I still see no reason why that couldn’t produce small shifts in receptive fields. The cell with its RF center slightly to the left of the feature’s location on the sensor gets weaker input than the cell centered on the feature, but it also gets more oscillatory depolarization, so it fires first. The centered cell then fires during its subsequent oscillatory peak, and so on as the peak moves. The response sequence shifts with the input location because the input location shifts the excitatory inputs.
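
Here is a bare-bones caricature of that idea (my own code, with arbitrary parameters; I’m not claiming it reproduces the real mechanism). Each unit gets a broad, graded input centered on the feature plus an oscillatory depolarization whose phase varies across the sheet, and the program prints which unit is most depolarized on each tick so you can see whether and how the peak of the response shifts.

public class PhaseGradientScan {
	public static void main(String[] args) {
		int units = 20;
		double feature = 10.0;     // location of the input on the sensor
		double inputWidth = 5.0;   // breadth of the subthreshold RF
		double oscAmplitude = 1.5; // strength of the oscillatory depolarization
		int period = 10;           // oscillation period in ticks
		
		for(int t = 0; t < period; t++) {
			int winner = 0;
			double best = Double.NEGATIVE_INFINITY;
			for(int i = 0; i < units; i++) {
				// broad, graded input centered on the feature
				double input = Math.max(0.0, 1.0 - Math.abs(i - feature) / inputWidth);
				// phase increases across the sheet, so the oscillatory peak travels
				double oscillation = oscAmplitude * Math.cos(2.0 * Math.PI * (t - i) / period);
				double drive = input + oscillation;
				if(drive > best) {
					best = drive;
					winner = i;
				}
			}
			System.out.println("tick " + t + ": most depolarized unit = " + winner);
		}
	}
}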

The same changes in propagation rate could hypothetically generate the movement that produces the displacement those changes correct for. By doing so, the system would be guaranteed to account for movement properly. Also, the exact cells which fire would not matter, allowing it to represent sensory information and generate behavior with the same cells.

Code

Sorry, I’m an amateur at programming.

This code is only for recognizing relative locations on a 1d sensor with two features. It would have to be copied a bunch to determine exact location, each copy having single cell oscillations with different frequencies.

Without changes (probably just using sufficiently low frequency grid fields and allowing the same unit to have multiple oscillations at the same time), it cannot produce grid fields for more than two features. It also does not include behavioral movement.

Sadly, it is in Java.

Compile these and run GridCellsTest. When prompted “Testing?” enter “g” (grid responses), “r” (receptive fields), “o” (single cell oscillations), “p” (propagations), or “a” (all). For osc steps on, I’m not sure it will produce the correct responses unless you enter 1. When entering positions, enter values between 1 and the number of units you entered minus 1.

// Non-overlapping RFs. Each unit claims an equal portion of the positions, to which it responds.

import java.util.*;

public class RFs {
	
	double positionsLength; // positions are continuous in this but effectively discrete in GridCellsTest
	boolean[] states;
	
	public RFs(int units, double positionsLength) {
		this.positionsLength = positionsLength; // length of 1d sensor
		states = new boolean[units];
	}
	
	public boolean[] states(double[] positions) {
		states = new boolean[states.length];
		// for each unit, check each position and respond if any is in the unit's RF.
		for(int i = 0; i < states.length; i++) {
			for(int j = 0; j < positions.length; j++) {
				if(inRF(i, positions[j])) {
					states[i] = true;
				}
			}
		}
		return states;
	}
	
	public boolean inRF(int unitIndex, double position) {
		double RFSize = positionsLength / states.length;
		double minPos = unitIndex * RFSize;
		double maxPos = (unitIndex + 1) * RFSize;
		// borders are owned by the lower index unit, except the first unit which also owns its lower border
		if(position == minPos) {
			return unitIndex == 0;
		}
		if(position > minPos && position <= maxPos) {
			return true;
		}
		return false;
	}
}

----------------------------------------------------------------------------

/* 
Input: propagation initiation points
Output: each tick, every propagation moves one unit (toward higher indices),
looping around to index 0, and ends at the index just before where it began
*/

import java.util.*;

public class Propagator {
	
	// an array holding -2 where there isn't a propagation
	// (I forget if there's still a reason to use -2 instead of -1)
	// and, where there is one, the index where that propagation began.
	// Even if a newer propagation overwrites an older one, that doesn't
	// matter, because the older one would have ended before the newer one anyway.
	int[] propagations;
	// the main point of propagationStep is to allow broad propagations by initiating adjacent ones
	// it is just 1 in GridCellsTest
	int propagationStep;
	
	public Propagator(int units, int propagationStep) {
		propagations = new int[units];
		this.propagationStep = propagationStep;
		for(int i = 0; i < propagations.length; i++) {
			propagations[i] = -2;
		}
	}
	
	public void newPropagation(int origin) {
		propagations[origin] = origin;
	}
	
	// advance propagations, looping to index 0 as needed,
	// and end those that have returned to their origins
	public void tick() {
		int lastIndex;
		for(int travelled = 0; travelled < propagationStep; travelled++) {
			// save the last index
			lastIndex = propagations[propagations.length - 1];
			// shift everything 1 index, overwriting the last index
			for(int i = propagations.length - 1; i >= 1; i--) {
				propagations[i] = propagations[i - 1];
			}
			// put the last index into index 0
			propagations[0] = lastIndex;
			// remove propagations that have returned to their origins
			for(int i = 0; i < propagations.length; i++) {
				if(propagations[i] == i) {
					propagations[i] = -2;
				}
			}
		}
	}
	
	public boolean[] states() {
		boolean[] result = new boolean[propagations.length];
		for(int i = 0; i < propagations.length; i++) {
			if(propagations[i] > -2) {
				result[i] = true;
			}
		}
		return result;
	}
}

----------------------------------------------------------------------------

/*
Units which repeatedly cycle some ticks on then off, until they stop oscillating
*/

import java.util.*;

public class Oscillator {
	
	int onSteps; // number of active steps per cycle (steps 0 through onSteps - 1)
	int offSteps; // number of inactive steps per cycle (the remaining, higher step numbers)
	int oscStepStart; // allows an oscillation to start in its off phase
	int[] oscSteps; // -1 for not oscillating; set to oscStepStart to start
	
	public Oscillator(int units, int onSteps, int offSteps, int oscStepStart) {
		this.onSteps = onSteps;
		this.offSteps = offSteps;
		this.oscStepStart = oscStepStart;
		this.oscSteps = new int[units];
		for(int i = 0; i < oscSteps.length; i++) {
			oscSteps[i] = -1;
		}
	}
	
	public void startOscillating(int unitIndex) {
		if(oscSteps[unitIndex] == -1) {
			oscSteps[unitIndex] = oscStepStart;
		}
	}
	
	public void stopOscillating(int unitIndex) {
		oscSteps[unitIndex] = -1;
	}
	
	public void endAllOscillations() {
		for(int i = 0; i < oscSteps.length; i++) {
			oscSteps[i] = -1;
		}
	}
	
	public void tick() {
		// increment each oscStep, looping back to 0
		for(int i = 0; i < oscSteps.length; i++) {
			if(oscSteps[i] != -1) {
				oscSteps[i] = (oscSteps[i] + 1) % (onSteps + offSteps);
			}
		}
	}
	
	public boolean[] states() {
		boolean[] states = new boolean[oscSteps.length];
		for(int i = 0; i < oscSteps.length; i++) {
			if(oscSteps[i] < onSteps && oscSteps[i] != -1) {
				states[i] = true;
			}
		}
		return states;
	}
}

----------------------------------------------------------------------------

/*
Purpose: to test propagation grid cells before incorporating 
means of compensating for movement of the sensor and before
incorporating a gradient of grid field spatial frequencies.
*/

import java.util.*;

public class GridCells {
	
	// states
	boolean[] directInputStates;
	boolean[] oscilStates;
	boolean[] propagationStates;
	
	// objects
	RFs directInputs;
	Propagator propagations;
	Oscillator unitOscils;
	
	// constructor for testing
	public GridCells() {
		// yes this is bad programming
		
		// not determined by user:
		int propagationStep = 1;
		int oscStepStart = 1; //0 might've caused immediate grid response
		
		// units & positionsLength
		Scanner console = new Scanner(System.in);
		System.out.print("Units: ");
		int units = console.nextInt();
		double positionsLength = (double)units;

		// oscOnSteps
		System.out.print("Cell osc steps on: ");
		int onSteps = console.nextInt();
		
		// oscOffSteps
		System.out.print("Cell osc steps off: ");
		int offSteps = console.nextInt();
		
		// stimuli positions
		System.out.print("Num positions: ");
		int numPositions = console.nextInt();		
		double[] positions = new double[numPositions];
		for(int i = 0; i < numPositions; i++) {
			System.out.print("Position: ");
			positions[i] = (double)console.nextInt();
		}
		
		// print back the inputs
		String positionsFeedback = "";
		for(int i = 0; i < numPositions - 1; i++) {
			positionsFeedback += positions[i] + " ";
		}
		positionsFeedback += positions[numPositions - 1];
		System.out.println("Unit: " + units + ", osc steps on: " + onSteps 
				+ ", osc steps off: " + offSteps + ", positions: " 
				+ positionsFeedback);
		
		directInputs = new RFs(units, positionsLength);
		propagations = new Propagator(units, propagationStep);
		unitOscils = new Oscillator(units, onSteps, offSteps, oscStepStart);

		directInputStates = new boolean[units];
		oscilStates = new boolean[units];
		propagationStates = new boolean[units];
		
		// initial inputs
		receiveInput(positions);
	}
	
	public void receiveInput(double[] positions) {
		directInputStates = directInputs.states(positions);
		for(int i = 0; i < directInputStates.length; i++) {
			if(directInputStates[i]) {
				propagations.newPropagation(i);
			}
		}
		propagationStates = propagations.states();
	}
	
	// after testing, this would have positions as an argument and oscillations/propagations would end after scanning
	public void tick() {
		// tick
		propagations.tick();
		unitOscils.tick();
		propagationStates = propagations.states();
		for(int i = 0; i < propagationStates.length; i++) {
			if(propagationStates[i]) {
				unitOscils.startOscillating(i); // should this reset the oscillation or not?
			}
		}
		
		oscilStates = unitOscils.states();
	}
	
	private void printAStates(boolean[] states) {
		for(int i = 0; i < states.length; i++) {
			if(states[i]) {
				System.out.print("1 ");
			}
			else {
				System.out.print("- ");
			}
		}
		System.out.println();
	}
	
	public void printDirectInput() {
		printAStates(directInputStates);
	}

	public void printOscil() {
		printAStates(oscilStates);
	}
	
	public void printPropagations() {
		printAStates(propagationStates);
	}

	public void printGrids() {
		printAStates(getGridStates());
	}
	
	// compute grid states on demand, because the cached states may not have been updated at some point
	// (seems to work, but this should be cleaned up by either always updating a cached copy or only using this)
	public boolean[] getGridStates() {
		boolean[] result = new boolean[propagationStates.length];
		for(int i = 0; i < propagationStates.length; i++) {
			result[i] = propagationStates[i] && oscilStates[i];
		}
		return result;
	}
}

----------------------------------------------------------------------------

import java.util.*;

public class GridCellsTest {
	public static void main(String[] args) {
		
		Scanner console = new Scanner(System.in);
		while(true) {
			System.out.print("Quit (y/n)? ");
			if(!console.next().equals("n")) {
				break;
			}
			gridCellsTest();
		}
	}
	public static void gridCellsTest() {
		Scanner console = new Scanner(System.in);
		System.out.print("Testing? "); // RFs r, oscils o, propagations p, grid states g, all a
		String testType = console.next();
		GridCells gridCells = new GridCells(); // prompts tester for parameters
		// test each tick
		int ticks = gridCells.directInputStates.length;
		for(int i = 0; i < ticks; i++) {
			System.out.print("T" + i + " ");
			if(testType.equals("r")) {
				gridCells.printDirectInput();
			}
			if(testType.equals("o")) {
				gridCells.printOscil();
			}
			if(testType.equals("p")) {
				gridCells.printPropagations();
			}
			if(testType.equals("g")) {
				gridCells.printGrids();
			}
			if(testType.equals("a")) {
				System.out.print("r: ");
				gridCells.printDirectInput();
				System.out.print("T" + i + " ");
				System.out.print("o: ");
				gridCells.printOscil();
				System.out.print("T" + i + " ");
				System.out.print("p: ");
				gridCells.printPropagations();
				System.out.print("T" + i + " ");
				System.out.print("g: ");
				gridCells.printGrids();
			}
			gridCells.tick();
		}
	}
}
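
----------------------------------------------------------------------------

For convenience, here is a minimal non-interactive harness (my addition, not part of the original code). It wires RFs, Propagator, and Oscillator together in the same order as GridCells.tick(), but with hard-coded parameters and two modules whose oscillations have different periods, in the spirit of running several copies with different frequencies. It only prints the grid states per tick; I make no claims about which ticks should show responses.

public class TwoModuleDemo {
	
	static class Module {
		RFs rfs;
		Propagator props;
		Oscillator oscils;
		boolean[] propagationStates;
		boolean[] oscilStates;
		
		Module(int units, int onSteps, int offSteps, double[] positions) {
			rfs = new RFs(units, (double)units);
			props = new Propagator(units, 1);
			oscils = new Oscillator(units, onSteps, offSteps, 1);
			boolean[] direct = rfs.states(positions);
			for(int i = 0; i < direct.length; i++) {
				if(direct[i]) {
					props.newPropagation(i);
				}
			}
			propagationStates = props.states();
			oscilStates = new boolean[units];
		}
		
		// same update order as GridCells.tick()
		void tick() {
			props.tick();
			oscils.tick();
			propagationStates = props.states();
			for(int i = 0; i < propagationStates.length; i++) {
				if(propagationStates[i]) {
					oscils.startOscillating(i);
				}
			}
			oscilStates = oscils.states();
		}
		
		// grid response = propagation and oscillatory active phase coinciding
		String gridRow() {
			String row = "";
			for(int i = 0; i < propagationStates.length; i++) {
				row += (propagationStates[i] && oscilStates[i]) ? "1 " : "- ";
			}
			return row;
		}
	}
	
	public static void main(String[] args) {
		int units = 10;
		double[] positions = {3.0, 6.0};
		Module small = new Module(units, 1, 2, positions); // oscillation period 3
		Module large = new Module(units, 1, 4, positions); // oscillation period 5
		for(int t = 0; t < units; t++) {
			System.out.println("T" + t + " period 3: " + small.gridRow()
					+ " period 5: " + large.gridRow());
			small.tick();
			large.tick();
		}
	}
}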
3 Likes

At one point I thought your propagation idea was based on a thalamus-driven wave… then there were instead pointers to a local spread-out mechanism. I’m not sure which, in the end ^^’
I should read your post again anyway.

1 Like

I haven’t done enough research to say.

It could be based on thalamus, but as far as I know, primary thalamus doesn’t project to more than one cortical column and higher order thalamus doesn’t project to primary cortex layer 5. Although, higher order thalamus does project to slender tufted layer 5 cells [proximally].

1 Like

I mean, an externally-driven wave could interact with your activated units but would move independently of the activation itself. At each point in time, one single unit gets a shot at spreading to the next.

Activity-based spreading, on the contrary, would be all parallel.

2 Likes

That’s probably true, and it would be better to come from the thalamus for precise timing mechanisms anyway. Activity could still be independent from the propagation because of inhibitory cells projecting densely.

2 Likes

This idea is similar to the ideas presented in Kropff and Treves (2008). They use different mechanisms to cause oscillations in grid cells. Their model causes oscillations by forcing grid cells to activate (using a spatial pooler with a sparsity of 30%) and then forcing active cells to deactivate after a short time. The result is grid cells which oscillate as the agent moves throughout its world.

A key similarity between these two proposals is that motion is required in order to learn.

Kropff & Treves 2008: https://onlinelibrary.wiley.com/doi/abs/10.1002/hipo.20520

3 Likes

I didn’t fully understand that article, so maybe I should read it more closely, but I’m not sure it’s meant to do path integration. It seems more concerned with the sensory-anchoring side of the brain’s GPS system and the input to their system might only be place cells. It’s still relevant for things besides path integration.

No learning occurs in what I am proposing. It also doesn’t require motion to determine relative locations of sensory inputs which are present at the same period of time.

This article describes a propagation which seems to support the system. It travels at around the same rate as signals in unmyelinated axons, which they take to mean it is caused by lateral connections:
A Sensorimotor Role for Traveling Waves in Primate Visual Cortex (Theodoros P. Zanos, Patrick J. Mineault, Konstantinos T. Nasiotis, Daniel Guitton, and Christopher C. Pack, 2015)
https://www.cell.com/neuron/pdf/S0896-6273(14)01152-0.pdf

2 Likes