As part of my PhD research I have had to interact with far more statistical evaluation than ever before. Although I went through the occasional ANOVA and similar tests during my undergraduate and postgraduate degrees, I never really needed to internalise how and why statistical testing was actually done. It is one of those topics where, when you are just starting out, you accept a few things as fact.
Just to be clear, I am far from understanding it all. I feel like the more I learn about statistics, the more it dawns on me how much further there is to go. That being said, I have picked up a few things that have helped me align my own understanding with some of the basic rules.
My biggest takeaway so far has been that there is no catch-all test that fits every problem. Often the right move is to scan related literature for the metrics it reports, or to find work that addresses your core comparisons even if it does not match your exact topic.
I have also learned to question my results. More often than not, going further than simply checking that p < 0.05 has served me well. Understanding and exploring how and why those relationships occur, and what they mean, has been a big help in comprehending my own research.
My discovery of effect sizes in particular has been eye-opening. It is fascinating to watch my own results transform completely once I pay attention to the magnitude of each difference, rather than treating results as equivalent just because they clear the same significance threshold.
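To illustrate the point with a toy example (entirely made-up numbers, not from my research): a minimal Python sketch using numpy and scipy, comparing two simulated experiments that are both "significant" by a t-test but have wildly different pooled-SD Cohen's d values.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    # Pooled-standard-deviation Cohen's d: standardised mean difference.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)

# Experiment 1: tiny true effect, but a huge sample drives significance.
a1 = rng.normal(0.00, 1.0, 100_000)
b1 = rng.normal(0.05, 1.0, 100_000)

# Experiment 2: large true effect with a modest sample.
a2 = rng.normal(0.0, 1.0, 50)
b2 = rng.normal(1.0, 1.0, 50)

for name, a, b in [("tiny effect, n=100k", a1, b1),
                   ("large effect, n=50 ", a2, b2)]:
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: p = {p:.2g}, Cohen's d = {cohens_d(a, b):.2f}")
```

Both comparisons comfortably pass p < 0.05, yet the first effect is negligible in magnitude while the second is large; reporting only the p-value would make them look equally important.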