I'm in Australia and got the call at 19:11 on the 25th (Australian Central Daylight Time). Apparently the worm had been loose for about 2.5 hours by then, but it had taken work that long to get me involved.
I attempted to VPN in and found the authentication server wasn't there (or at least wasn't reachable from the VPN gateway), so I drove the hour into work.
Upon arriving I found our LAN completely isolated from the WAN; the Cisco 4000s were cactus.
There wasn't much I could do from my perspective since I couldn't reach any of my firewalls, so I fired up snoop on our Solaris machine to see what was happening. I soon discovered a single machine blasting UDP port 1434 out to networks that don't exist on our WAN (traffic that obviously gets sent through the distribution router towards the internet feed).
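For anyone curious how that triage step looks in practice, here's a minimal after-the-fact sketch of flagging hosts blasting UDP 1434 from parsed capture records. The record format, helper name, and threshold are my own assumptions for illustration, not the actual snoop output.

```python
from collections import Counter

# Each record is (source_ip, dest_ip, dest_port), as you might parse
# from `snoop udp port 1434` or `tcpdump -n udp dst port 1434` output.
def find_blasters(records, port=1434, threshold=100):
    """Return source IPs that sent more than `threshold` UDP packets
    to `port` -- likely Slammer-infected hosts scanning at random."""
    counts = Counter(src for src, dst, dport in records if dport == port)
    return sorted(ip for ip, n in counts.items() if n > threshold)

# Simulated capture: one noisy host spraying random destinations,
# one quiet host doing ordinary DNS lookups.
packets = [("10.0.0.5", f"203.0.113.{i % 250}", 1434) for i in range(500)]
packets += [("10.0.0.9", "10.0.0.1", 53)] * 20
print(find_blasters(packets))  # the infected host stands out
```

A real capture filter on the wire does the same job far more cheaply, since Slammer's signature (a 376-byte UDP payload to port 1434) is trivial to match.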
I dialed into the ongoing conference call to report what I'd found and ask someone to track down the admin of that machine (it was running Win2k Terminal Services, so with a login I could have shut it down remotely). This proved difficult, so eventually I scoured the server room for the machine and unplugged its network cables.
By now I had tcpdump running on my iBook, and when I returned to look at it there was a second host blasting away. This one, however, was not labelled in the server room, and despite three sweeps of the room I couldn't find it. Nor could the admin of the machine be found. We eventually located the hub port it was connected to by watching the activity lights. After unplugging the cable on the suspect port, the network returned to normal and the Cisco routers recovered. (Un)fortunately we don't have a switched LAN at this location.
However, this was only half the evening for me: we still had customer sites to check, firewalls to verify, and so on. (I'm glad to say none of my firewalls let it in, but we're connected to a global WAN and it looks like the worm entered the network from the US.)
By 7am we were all pretty tired and ready to hand over to the next shift. Most of the worm had been contained internally, but some client sites were still badly affected. Cleaning those up became the work of the router jockeys for the remainder of the long weekend.
Lessons to learn:
- make sure the admins of Microsoft servers are contactable and that they patch their servers!
- try to get the facts quickly and avoid teleconferences with headless chickens at all costs.
- label your servers in the server room.
At least the overtime will be good.