Some random thoughts on this:
1. Programming, like all automation, is an amplifier for decisions. That amplification makes it hard to foresee the consequences of those decisions, which is the root cause of all the trouble humanity has had with industrialization and now with computing.
2. In the interest of safety, decision amplification requires at least one of:
a. a limited scope of automation (sandboxing, ...; see the sketch at the end of this comment)
b. regular validation of execution by an agent that has an incentive not to do harm (liability, ...)
c. provable predictability of consequences (mathematical proof, ...)
So... which amplifiable decisions can we safely delegate to today's AIs? I'd say none. So the question becomes: how do AIs need to evolve so that we can safely delegate amplifiable decisions to them?
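To make option (a) a bit more concrete, here is a minimal sketch of what "limited scope of automation" could look like when an agent proposes actions for execution. Everything in it (SANDBOX_ROOT, ALLOWED_ACTIONS, execute, read_file) is hypothetical and for illustration only, not a real framework: the executor only runs actions from a fixed allowlist, and even those are confined to a single directory.

```python
from pathlib import Path

# The only directory the agent is allowed to touch (hypothetical location).
SANDBOX_ROOT = Path("/tmp/agent-sandbox").resolve()

def read_file(relative_path: str) -> str:
    """Read a file, refusing any path that escapes the sandbox root."""
    target = (SANDBOX_ROOT / relative_path).resolve()
    if not target.is_relative_to(SANDBOX_ROOT):
        raise PermissionError(f"refusing to touch path outside sandbox: {target}")
    return target.read_text()

# Allowlist of actions the amplifier will execute: no writes, no network, no shell.
ALLOWED_ACTIONS = {"read_file": read_file}

def execute(action: str, **kwargs):
    """Run an agent-proposed action only if it is on the allowlist."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        raise PermissionError(f"action not permitted: {action}")
    return handler(**kwargs)

# execute("read_file", relative_path="notes.txt")        -> allowed
# execute("run_shell", command="rm -rf /")                -> PermissionError
# execute("read_file", relative_path="../../etc/passwd")  -> PermissionError
```

The point is that the limits are enforced by the amplifier itself rather than by the decision-maker, so a bad decision can only do bounded damage.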