This is something I’ve always wondered about, even before I ever got into coding or developed an interest in neuroscience. I used to be a much more active musician, and I noticed a phenomenon when playing with digital sounds. Say you have a square wave with short crests and longer troughs humming along at 3 Hz. What is perceived is a fairly fast-paced clicking sound. However, as you begin to raise the rate, the capacity to count and hear the individual clicks begins to dissipate, and the perception of individual clicking morphs into the perception of an actual tone. Based on my limited understanding of what is going on in the brain, invariant representations would be ticking off with the 3 Hz clicks; once you get into tonal perception, though, the invariant reps begin firing off with the actual tones instead of the clicks, which means the clicks have been moved a layer down (or two?). My questions are: any thoughts as to why this happens? At what point does it happen? What would be the mechanism that causes it?
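To make the signal being described concrete, here is a minimal sketch (plain Python, no audio libraries) of an asymmetric pulse wave like the one above: a short crest followed by a longer trough, repeating at a given rate. The function name, sample rate, and 10% duty cycle are my own illustrative assumptions, not anything specified in the original post.

```python
def pulse_train(rate_hz, duty, sample_rate=8000, seconds=1.0):
    """Asymmetric pulse wave: +1 for the short crest, -1 for the longer trough.

    At rate_hz = 3 each crest is heard as an isolated click; raising rate_hz
    past roughly 10-30 Hz is the region where, as described above, the
    percept fuses from clicks into a tone.
    """
    n = int(sample_rate * seconds)
    period = sample_rate / rate_hz          # samples per cycle
    return [1.0 if (i % period) < duty * period else -1.0 for i in range(n)]

# One second of the 3 Hz "clicking" signal from the question:
clicks = pulse_train(3.0, duty=0.1)
```

Feeding these samples to any audio output at the stated sample rate reproduces the click-versus-tone experiment; only `rate_hz` needs to change.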
I think you are being too restrictive.
As you probably know, the perception of timbre has some definite groupings. I think that is clearly the same kind of transition as the one from clicks to a growl to a tone, and it continues up the audible spectrum, particularly with complex tones. On the other hand, a square wave has a LOT of overtones.
I had a thought on this the other day. I think the point at which perception shifts from the clicking to the actual tone is not determined in the brain; I would suspect it has to do with the hair cells within the cochlea. The moment the clicking converts is the moment the sound begins to vibrate any one specific hair cell as a sustained signal. It may also have to do with how the cochlear fluid behaves under certain frequencies and vibrations. Of course, this is all an educated guess, but I’d think the clicking happens when any of the hair cells pulses and sends a coordinated signal; the moment a hair cell begins to be stimulated continuously - and therefore sends a continuous signal to the brain - is the moment the brain begins to hear the tone instead of the click.
I have been thinking about your original posted question.
Note that the cochlea has several vibrational modes: lengthwise, vertically across the cone, and multiple harmonics.
It is far more complex than ‘this fiber is that frequency’; you can’t take the ear in isolation.
The phase difference between the ears is a significant part of the process, similar to the left/right processing in vision. Also prominent is self-sound canceling, which has a definite time window (~100 ms) as part of the downstream processing. That last bit is why a long echo totally messes you up when speaking - and yes - this bumps up against the alpha processing rate in the brain-wave window.
Note that this same ten Hz (100 ms) is just about the transition from clicks to a tone. For me there is a discernible ‘growly’ stage before it registers as a tone.
There is kind of a long, winding path to get there, but I think it suggests that the tone/timbre window corresponds to an integration over a single alpha-wave processing window. I can explain in more depth, but I think that the activation pattern distribution over the processing map ends up being related to both the pitch and the timbre, updated and communicated each alpha cycle.
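The arithmetic behind that correspondence is easy to sketch. Treating the ~100 ms window as the operative constant (this is the hypothesis above, not an established figure), the crossover pulse rate is just its reciprocal, and a toy classifier follows directly; everything here, including the function name, is an illustrative assumption.

```python
# Hypothesis from the posts above: a pulse stream fuses toward a tone once
# successive pulses fall inside one ~100 ms (alpha-band) integration window.
WINDOW_S = 0.100  # assumed integration window; its reciprocal is 10 Hz

def percept(rate_hz, window_s=WINDOW_S):
    """Toy classifier: discrete clicks below the crossover, tone-like above."""
    return "tone-like" if (1.0 / rate_hz) < window_s else "discrete clicks"

for rate in (3, 8, 20, 40):
    print(f"{rate:>2} Hz -> {percept(rate)}")
# 3 and 8 Hz come out as discrete clicks; 20 and 40 Hz as tone-like
```

Of course the real transition is gradual (the ‘growly’ stage), so a hard threshold is only a first approximation.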
Does this address your original question?
Does this even make any sense to you or have I explained it really badly?
I have been playing with digital sound since the late 70’s, and I never stopped to think about this bit before!
No - in fact, you’re bringing up a lot of thought-provoking and interesting points. I hadn’t even thought about the phasing between the two ears. I used to experiment pretty heavily with making my own binaural beats; I went through a fairly heavy occult/psychonaut phase. (Tiny digression, if you’re interested - this is one of the more psychedelic binaurals I’ve made: https://www.youtube.com/watch?v=9tErha_wqao ) It might even be interesting to toy around with designing a binaural beat based on and around that 10 Hz threshold.
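A binaural beat at that 10 Hz threshold can be sketched with nothing but the Python standard library: one sine in the left ear, a second sine offset by the beat frequency in the right. The 200 Hz carrier, amplitude, and file name are arbitrary choices of mine, not anything from the thread.

```python
import math
import struct
import wave

def binaural_beat(path, carrier_hz=200.0, beat_hz=10.0,
                  sample_rate=44100, seconds=2.0, amp=0.3):
    """Write a stereo 16-bit WAV: left ear at carrier_hz, right ear at
    carrier_hz + beat_hz. Over headphones the listener perceives the
    difference frequency (here 10 Hz) rather than either tone alone."""
    n = int(sample_rate * seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)              # 16-bit samples
        w.setframerate(sample_rate)
        frames = bytearray()
        for i in range(n):
            t = i / sample_rate
            left = amp * math.sin(2 * math.pi * carrier_hz * t)
            right = amp * math.sin(2 * math.pi * (carrier_hz + beat_hz) * t)
            frames += struct.pack("<hh", int(left * 32767), int(right * 32767))
        w.writeframes(frames)

binaural_beat("beat_10hz.wav")
```

Sweeping `beat_hz` through and past 10 Hz would be one concrete way to probe the threshold subjectively, as suggested above.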
You’ve made plenty of sense. I think I’m going to play around with this on a subjective level this week. You’ve given me a lot to think about.
Please post whatever you find out or think to ask.
You sound like you do interesting things. Do you keep a journal? Some of this may be worth the work of documenting as a paper.
Paper discussing time-locking of the brain to the low-frequency component:
Rhythm and Beat Perception in Motor Areas of the Brain
A slightly different audio perception line - rhythm and speech perception: