Yesterday around noon, the internet at my company started acting up. No matter, slowdowns happen and there's roadwork going on outside: maybe they hit the fiber or something. So we waited.
Then our Samba servers started getting flaky. And the database too. Uh oh... That's different.
We started investigating. Some machines were dropping ICMP packets like crazy, then recovered, then other machines became unpingable too. I fired up Wireshark and discovered an absolute flood of IGMP packets on all the trunks, mostly broadcast from Windows machines. It was so bad that two Linux machines on the same switch couldn't ping each other reliably while the switch was connected to the intranet.
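(Side note for anyone who wants to check their own network for this: you don't strictly need Wireshark. A rough scapy sketch like the one below, with a placeholder interface name rather than anything we actually ran, will tally broadcast/multicast frames per source MAC; in a storm the top talkers show absurd rates.)

    from collections import Counter
    from scapy.all import sniff, Ether

    def is_flooded(pkt):
        # Broadcast or multicast destination MAC: the low bit of the first
        # octet is set for both (ff:ff:ff:ff:ff:ff included). IGMP reports
        # ride on multicast MACs, so they get counted too.
        if not pkt.haslayer(Ether):
            return False
        first_octet = int(pkt[Ether].dst.split(":")[0], 16)
        return bool(first_octet & 1)

    def storm_report(iface="eth0", seconds=10):
        counts = Counter()

        def tally(pkt):
            if is_flooded(pkt):
                counts[pkt[Ether].src] += 1

        # Needs root / CAP_NET_RAW to capture.
        sniff(iface=iface, store=False, timeout=seconds, prn=tally)
        for src, n in counts.most_common(10):
            print(f"{src}  ~{n / seconds:.0f} broadcast/multicast frames per second")

    storm_report()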
So we suspected a DDoS attack initiated from within the intranet by an outside attacker. We cut off the internet, but the storm of packets kept coming. Physically disconnecting machines from the intranet one by one didn't do a thing either.
Eventually, we started disconnecting the trunks from the main router one by one, until we pulled one and the activity lights immediately stopped on every port. We reconnected it and the crazy traffic resumed.
So we went to that trunk's subrouter and did the same thing. When we found the cable that stopped all the traffic, we followed it and finally found one lonely $10 ethernet switch with... a cable with both ends plugged into the switch. We disconnected the cable and everything instantly returned to normal.
One measly cable brought the entire company to a standstill for hours! Because half of the software we have to use is cloud crap or needs to call its particular mothership to activate its license, many people couldn't work anymore, for no good technical reason at all, while we investigated the networking issue.
Anyway, I thought switches had protections against that sort of loopback connection, and routers prevented circular routes. But there's theory and there's reality. Crazy!
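To make the theory-vs-reality point concrete, here's a toy back-of-the-envelope model, purely illustrative and nothing to do with our actual hardware, of why one looped cable snowballs into a storm:

    # Toy model of an unmanaged switch (no spanning tree / loop protection)
    # with a cable plugged into two of its own ports. A switch floods each
    # broadcast frame out of every port except the one it arrived on, and
    # Ethernet frames have no TTL, so anything that enters the loop
    # circulates forever and is re-flooded on every pass.

    def broadcast_storm(host_ports=6, new_broadcasts_per_pass=1, passes=8):
        trapped = 0  # broadcast frames currently circulating in the loop
        for n in range(1, passes + 1):
            # Hosts keep talking (ARP, IGMP, NetBIOS...). Each fresh broadcast
            # is flooded out of BOTH looped ports, so two counter-rotating
            # copies get trapped and never leave.
            trapped += 2 * new_broadcasts_per_pass
            # Per pass, each trapped frame is flooded to every host port
            # (one of which is the uplink trunk to the rest of the intranet)
            # plus the copy that goes back around the loop.
            on_the_wire = trapped * (host_ports + 1)
            print(f"pass {n}: {trapped} frames trapped, "
                  f"{on_the_wire} frames flooded this pass")

    broadcast_storm()

The passes happen at wire speed, thousands per second, so even that boringly linear growth saturates the uplink trunk within seconds, and every switch upstream dutifully floods the copies further. A managed switch running spanning tree (or even the basic loop detection some cheap switches have) would normally shut one of the looped ports down; our $10 one evidently didn't.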
Lol imagine the poor dude in his office who was just bored and thought "what if I plug this cable back into the hub, probably won't do anything"
Actually this happened in the lab. I know exactly who did it because he told me: we were discussing what had happened and he said, "Oh yeah, Daniel and I needed to connect this Windows machine to the intranet quickly because we had something urgent to do, so we connected the ends of the nest of ethernet cables at random until the machine got a connection. And then we left everything as it was." Bad luck for us: their machine was connected, but so was that fatal cable, at both ends. It just happened that their machine kept working well enough for them to finish what they were doing without noticing the problems right away.
And in case you're wondering, there's no penalty in our company for owning up to honest mistakes, which is why he readily admitted to it. Only people who never do anything never do anything wrong.
That's a healthy attitude! The blame game is useless in most cases.
I do hope you taught him the many better ways of doing this. I absolutely agree with creating an environment where mistakes are easily owned up to (I made a mistake in the last year that ended up costing my employer over $10k), but if that isn't coupled with turning them into learning experiences (here's why you don't do that, here's why this is a better solution), then you just have the same mistakes happening over and over again.
In my experience it's either someone doing it on purpose, or someone accidentally pulling the wrong cable out of a rat's nest.