I agree with everything you said; I only want to add that there are one or two ways for the sci-fi style AGI problem to actually happen.
By far the most straightforward way is if the military believes it can be used as a fail-safe in MAD scenarios, i.e. if they give the AI the power to launch nuclear ICBMs a la War Games. Not very likely, but still not something we want to dismiss entirely. This is also a problem with regular AI and LLMs.
The second and, in my opinion, more likely scenario is if the AI gets a large number of people to trust it implicitly and then uses seemingly unrelated, benign actions from each of them to do something catastrophic.
Something you may notice about these two scenarios is that neither can be "safeguarded" against in the code; they can only be mitigated by educating people on the proper usage of, and posture to take toward, AI.