Well, actually not all of us, really just me (I?). I was listening to a vaguely interesting podcast this morning about David Bessis' book Mathematica, which (apparently?) emphasises the importance of intuition in mathematical thinking rather than logical proofs. In particular, the idea was that being a good mathematician means learning to train your intuition to agree with reality, battering it into the right shape when you make a mistake instead of just giving up. It can then be a useful guide and inspiration when exploring new mathematical concepts. What most struck me about this idea of 'training your intuition' was how analogous it is to machine learning (LLMs/AI/whatever). You don't really care about the steps or about formally proving something, you're just trying to make the outputs of the system agree with reality. Our big advantage over LLMs in this context is that for us reality is more than 'all the text on the internet'. But then I got to thi...