Why is writing down mathematical proofs more fault-proof than writing computer code?

Stack Exchange:

I have noticed that I find it far easier to write down mathematical proofs without making any mistakes than to write a computer program without bugs.

This seems to be more widespread than just my own experience. Most people introduce bugs constantly while programming, even though the compiler is there to point out their mistakes. I have never heard of anyone writing a large program with no mistakes in one go, with full confidence that it would be bug-free. (In fact, hardly any programs are bug-free, even heavily debugged ones.)

Yet people can write entire papers or books of mathematical proofs without any compiler ever telling them they have made a mistake, and sometimes without getting any feedback from others at all.