Learning about learning by learning to pick a lock

A few friends and I have a meeting place with a garden, and the gate is locked with a combination lock. Somehow yesterday we scrambled the code and could not lock it again - we had to leave the gate unlocked.

I took it home, searched YouTube for how to pick this kind of lock, and tried to do it myself.

In short: while slightly pulling the shackle away from the lock body, one has to gently turn each code wheel one by one and notice an unusual “click” at a certain position of each wheel.

And it didn’t work! What I experienced wasn’t like what the videos described. All wheels and positions felt and “clicked” the same. Well, not quite the same: somehow different at every turn, but I could NOT make sense of these … random differences.

I kept trying until I started to feel a certain difference at one position of the first wheel, which became more consistent in both the “sensed pattern” and the position of that wheel.

“Aha!” I had discovered a new thing! The next wheels were not quite the same, but the pattern repeated; somehow I became more sensitive to that felt pattern (it is more touch than sound), which became more and more obvious with each repetition.

I’ll try a summary.
The above might rightly be considered “supervised learning”, but it’s nothing like what we use that term for in machine learning.

  • I began with some descriptive hints in normal (formal?) language.
  • I tried to replicate the described process and failed.
  • I repeated again and again, until
  • “magically” the perceived experience began to match the formal description.
  • Knowing what to look for is of tremendous importance.

How would I describe the above in HTM-ish terms? Run an anomaly detector until an anomaly is found, then use a pattern recognizer to single out that particular detected anomaly.

A normal anomaly detector has only two parts: the first time an anomaly is detected, it is signaled (“Hey, here’s an anomaly”); then, if that signal is repeated, it is eventually “learned”, no longer seen as an anomaly, and becomes a non-anomalous signal.

What is missed in that process is the “singling out” phase, in which the particular anomaly, once repeated, is assigned an identifier and so becomes a newly learned thing. Following appearances of the same pattern are then occurrences of that particular known thing.
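The two-phase-plus-singling-out idea above can be sketched in a few lines. This is only an illustrative toy (the `promote_after` threshold and the `thing-N` naming are my assumptions, not anything from HTM itself): a never-seen pattern is an anomaly, a repeated one gets promoted to a named, known thing.

```python
from collections import Counter

class SingleOutDetector:
    """Toy sketch of anomaly detection plus a 'singling out' phase.

    First sighting of a pattern -> flagged as an anomaly.
    Once it repeats (promote_after times) -> assigned an identifier.
    After that -> recognized as a known thing, not an anomaly.
    """

    def __init__(self, promote_after=2):
        self.counts = Counter()   # how often each pattern has been seen
        self.labels = {}          # pattern -> assigned identifier
        self.promote_after = promote_after

    def observe(self, pattern):
        self.counts[pattern] += 1
        if pattern in self.labels:
            return ("known", self.labels[pattern])
        if self.counts[pattern] >= self.promote_after:
            label = f"thing-{len(self.labels)}"   # hypothetical naming scheme
            self.labels[pattern] = label
            return ("learned", label)
        return ("anomaly", None)
```

For example, the odd click on the first wheel would go anomaly → learned → known:

```python
d = SingleOutDetector()
d.observe("odd-click")   # ("anomaly", None)
d.observe("odd-click")   # ("learned", "thing-0")
d.observe("odd-click")   # ("known", "thing-0")
```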

I hope it makes sense. I found it revealing.


A familiar experience, nicely described. I would add two things.

One is that you call upon a set of prior algorithms that might work in this situation, and eventually you come up with a new-ish algorithm to add to the set. This is an important feature of AGI: an algorithm to create, refine, or adapt algorithms.

The second is that this has nothing to do with human introspection: animals do it too. All the ‘real’ work is being done in your subconscious. Your conscious experience and your verbal description come after the event.


A good example.

Another really important factor is that it worked (I assume).
How would the reward/memory work if you had failed?

Also, the problem is iterative: you know when you are getting closer to the result (they are poor locks :wink:).


Yes, sure. In a sense it did fail, and our first attempts at learning to do almost anything fail.

Failing in itself is not the issue; other parameters influence the outcome:

  • the will to continue trying. Here it’s important to know that success is possible.
  • collecting more data to gauge relevant cues
  • and, very useful (I already mentioned it), knowing what to look for

I guess that’s the whole point of learning - finding and exploiting the shortcuts.

And it seems all code locks are “poor”; the flaw is in the design itself. As long as there is inherently a tiny mechanical play needed to allow each wheel to move, one can sense variation in wheel friction.

What is interesting is that manufacturers try to alleviate this by engineering imperfections into the wheels themselves, so that an attacker would be confused. But they can’t add more than 3-4 imperfections on a wheel with 10 digits, so this strategy only reduces the options from 10 digits per wheel to 2-4. Applied to the 4 wheels, that drops the searchable combinations from 10**4 = 10000 to at most a couple hundred.
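The arithmetic above checks out in a couple of lines (the 2-4 plausible positions per wheel is the estimate from the paragraph, not a measured figure):

```python
wheels = 4
digits = 10

# Brute-force search space with no information leakage:
full_space = digits ** wheels        # 10**4 = 10000 combinations

# If friction cues narrow each wheel to 2-4 plausible positions:
best_case = 2 ** wheels              # 16 combinations
worst_case = 4 ** wheels             # 256 -> "at most a couple hundred"
```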

The points above are interestingly relevant to all problems we attempt to solve, “AI” included.

When we get to general AI, I bet we’ll be quite disappointed by how lousy the “generality” of humans actually is.

Average human performance relies heavily on imitation, all humans know how to learn a previously walked path. Researchers, inventors, creators - those who really manage to push boundaries - are a minority.

In the end the basic algorithm of progress remains evolution. Lots of trials to avoid errors and repeat successes.

As in the code lock lesson, the intelligence’s main role is to (learn how to) reduce the search space.

A well designed combination lock has no “tell.”
It’s only sloppy cheap locks that allow you to feel the gates. Tight precision wheels and interlocks to the actuation lever (if present) prevent any feel from getting back to the knob.

Some wheels have teeth that prevent turning the knob if you are pulling on the hasp.

Considering the hourly labor rate and the cost of most locks, we cut them off, as this was the lowest cost to the customer. We knew how to manipulate the lock, but a new lock was cheaper.

Mark Browne
Bonded locksmith from very long ago.


Well, that’s a cost-oriented assessment. A reward-oriented assessment considers not the value of the lock but that of the bike it secures, and how (im)practical power tools are in that context.

late edit:
In our case the padlock’s code was changed accidentally, so even bothering a locksmith would have been more expensive than the lock. All I had to do was take it home (it wasn’t attached), then curiously ask “the magic wizard” whether it could be opened or not.

And the bonus was recovering an older padlock that had met the same fate (and saving recurring costs on future ones).

Not an expensive one but also not the cheapest.
