Long story short, an entire street of data centers in Atlanta lost power, and every host (and thereby every hosting company and every hosted site) went out. The outage was long enough that UPSes couldn't have sustained them, though they could have bought enough time for graceful shutdowns, had there been any. And of course any major host normally has actual backup generators to run indefinitely through an outage, but apparently our latest provider had nothing whatsoever. So we went down hard for a good six hours, and when we came back up the database was understandably upset and had to be talked back down off a ledge before it would run.
Once the database was restored we were still dead, though, because the IPv6 bridge between our web server and DB server stopped functioning. Somewhere in the provider's network is a piece of equipment someone forgot to turn back on, I think. That had to be hunted down and traced through to determine whether something had died in our configs or it was an actual network issue (it was the latter), and then we had to reroute everything over IPv4 until it can be corrected.
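In our case the reroute was just a config change on the provider's side, but if you ever want your app to survive this kind of thing on its own, a fallback along these lines is one way to do it. This is only a sketch, not what we actually run, and the hostnames are hypothetical: resolve every address for the DB host, try IPv4 first, and take whatever connects.

```python
import socket

def connect_prefer_ipv4(host, port, timeout=5):
    """Try every resolved address for host, IPv4 first.

    Returns the first socket that connects; raises OSError if all fail.
    """
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort AF_INET (IPv4) entries ahead of AF_INET6, so a dead IPv6
    # path doesn't take the whole connection down with it.
    infos.sort(key=lambda info: info[0] != socket.AF_INET)
    last_err = None
    for family, socktype, proto, _, addr in infos:
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s
        except OSError as e:
            last_err = e
            s.close()
    raise last_err or OSError(f"could not connect to {host}:{port}")

# Hypothetical usage: sock = connect_prefer_ipv4("db.internal", 5432)
```

The tradeoff is that you'd be quietly masking a broken IPv6 path instead of noticing it, so you'd want to log which family actually connected.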
Now we're back up and semi-running while I look for any gaping holes in our data beyond whatever was in flight at the exact minute we died.