Network model with internal complexity bridges artificial intelligence and neuroscience

https://www.nature.com/articles/s43588-024-00674-9.epdf?sharing_token=PDLs9wc2VUGXIKvqN06zfdRgN0jAjWel9jnR3ZoTv0Pq1mNu95lhgDdNLRemgORPlbUML5QJh76FSIWzN1bZzztDfA77dZtN6Xq-bS7iRskEyi70hTTox4PALMs14Cb2ZO8Lk3x3WVQzYTznt8VqEfjYeLOVMHALfoLEFUHgpfo5Ioknuo6HGC22rnfZA3bIFsGpPRC7qxo6O808AQsz7ScmDfyMn9wVExByke33sTxClKeWrkrD9hU_EPhmI6KOpD9u4ZATX2cpYlHxyz0ini5Yvcuyvl68Go76qouoB0YKhtzD5Rxs9FNs3gTw9uNlD8kWHXtrDQJYDQhq5SoHrnSn3vZ5XBjWsrzqIac1fUjnb-BpOh1IA0SpXER5g4wPQaZ_4VL2asRn3pDXplC9YBEOhCMSoWinQhCpwn5ujg5CDVU0-mQXGRAsYLwhcKhS1hNtCn56vCeHb8lA8KSMZSllu-yqGjW8Zd6Kl4uf6nzmO9kdBWjfRmslo5OSLycfGlIabcsKfIcWkwFlxI7Gv0SwRMplAxQZjlEw5m83B6A%3D&tracking_referrer=www.livescience.com

“In this work we argue that there is another approach called small model with internal complexity, which can be used to find a suitable path of incorporating rich properties into neurons to construct larger and more efficient AI models. We uncover that one has to increase the scale of the network externally to simulate the same dynamical properties. To illustrate this, we build a Hodgkin–Huxley (HH) network with rich internal complexity, where each neuron is an HH model, and prove that the dynamical properties and performance of the HH network can be equivalent to those of a bigger leaky integrate-and-fire (LIF) network, where each neuron is a LIF neuron with simple internal complexity.”
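For anyone unfamiliar with the two neuron models being contrasted, here is a minimal single-neuron sketch in Python/NumPy. It is not the paper's code and uses textbook parameter values, but it shows the difference in internal complexity: the LIF unit has one linear state variable and a hard reset, while the HH unit carries three extra gating variables for its voltage-gated Na+ and K+ channels.

```python
# Minimal sketch (not the paper's code): single-neuron dynamics for the two
# neuron types the quoted abstract contrasts. Parameter values are textbook
# defaults, not the ones used in the paper.
import numpy as np

DT = 0.01          # integration step (ms)
T = 100.0          # simulated time (ms)
STEPS = int(T / DT)

def simulate_lif(i_ext=2.0, tau=10.0, v_rest=-65.0, v_th=-50.0,
                 v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire: one state variable plus a hard reset rule."""
    v = v_rest
    spikes = 0
    for _ in range(STEPS):
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += DT * dv
        if v >= v_th:          # threshold crossing = spike, then reset
            spikes += 1
            v = v_reset
    return spikes

def simulate_hh(i_ext=10.0):
    """Hodgkin-Huxley: membrane voltage plus three gating variables (m, h, n)."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(STEPS):
        # Standard HH rate functions (voltage in mV, rates in 1/ms).
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        # Ionic currents through the gated conductances.
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += DT * (i_ext - i_na - i_k - i_l) / c_m
        m += DT * (a_m * (1.0 - m) - b_m * m)
        h += DT * (a_h * (1.0 - h) - b_h * h)
        n += DT * (a_n * (1.0 - n) - b_n * n)
        # Count upward crossings of 0 mV as spikes.
        if v > 0.0 and not above:
            spikes += 1
        above = v > 0.0
    return spikes

print("LIF spikes in 100 ms:", simulate_lif())
print("HH spikes in 100 ms:", simulate_hh())
```

The paper's claim, as quoted, is that a network built from the richer HH units can match the dynamics and task performance of a larger network built from the simpler LIF units; the sketch above only illustrates where that extra per-neuron complexity lives, not the network-level equivalence itself.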
