Compilers mediate between the high-level language in which we want to think about programming and the low-level details of registers, memory and control flow. In high-performance computing, the challenges come from the complexity of the hardware and the need to manage parallelism, locality and data movement. This talk will explore some of the things compilers can do, and some of the good reasons why optimisations and parallelisations that “should” be applicable do not happen automatically. This often drives programmers to write explicit, low-level code that implements the right thing by hand, and we have become used to the idea that we have to program at a low level for performance. I will show some examples from our research on domain-specific optimisations that suggest the opposite can be true: by providing the compiler with a higher-level, more abstract representation, we enable it to generate better code – perhaps even better code than could reasonably be written by hand. This is joint work with colleagues on the OP2, PyOP2 and Firedrake projects, which focus on domain-specific optimisations for unstructured-mesh and finite-element computations, and with other collaborators.