While I haven’t thought about it much myself, it reminded me of this thread, which seems to touch on a similar possible driving force.
From the perspective of someone just introduced to the theory, it seems fairly self-evident on the surface. I could see it being useful when weighing new theories or ideas – how well do they align with the goal of reducing entropy (which ultimately means better predictions)? On the other hand, that already tends to be the goal of new AI theories anyway…