ExchangeDefender Network Operations

August 10, 2011

Power Outage Event Recovery

Filed under: Uncategorized — admin @ 5:24 pm

Final Update:

All services have been restored. Spooled mail is still being delivered, but new mail is flowing in real time. With that said, our CEO Vlad Mazek held a GoToWebinar outlining all the important facts about this event. We invite you to listen to the points covered and to use the information provided in the PPTX. Once again, we'd like to apologize for the inconvenience caused to you and your customers.

Webinar Powerpoint

Webinar Video (Requires GoToWebinar codec)

Previous:

We're executing an emergency recovery plan: essential services will be coming online, but they're not running on their full infrastructure. Please expect services to come back online slowly, as they are running on our emergency failover (and the datacenter has NOT fully restored power). Latency on the servers that are online is expected as customers try to reconnect. We'll provide more details as they become available, and we will keep folks available on the phone to give the information below live. Again, this is a continuation of the information found on our Twitter feed at twitter.com/XDNOC.

We're currently pulling all the data from the Twitter feed to present it in one view.
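For partners who want to mirror these updates on their own screens, here is a minimal sketch of that kind of polling, assuming the feed is exposed as RSS/Atom. The FEED_URL below is a hypothetical placeholder (not a real endpoint), the field access follows the feedparser library's conventions, and this is an illustration rather than our actual tooling:

# Minimal sketch: poll a status feed and print updates oldest-first,
# so the timeline reads top to bottom in one view.
# FEED_URL is a hypothetical placeholder, not a real endpoint.
import time
import feedparser  # pip install feedparser

FEED_URL = "https://example.com/xdnoc-status.rss"

def poll_feed(url, interval_seconds=60):
    seen = set()
    while True:
        feed = feedparser.parse(url)
        # feedparser returns entries newest-first; reverse for chronology.
        for entry in reversed(feed.entries):
            key = entry.get("id") or entry.get("link")
            if key not in seen:
                seen.add(key)
                print(entry.get("published", "?"), "-", entry.get("title", ""))
        time.sleep(interval_seconds)

if __name__ == "__main__":
    poll_feed(FEED_URL)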

Below you will find a redacted version of the Twitter feed, so updates can be read without the cross-conversations with partners during the outage. Please note that, as on Twitter, these read chronologically from the bottom up:

Services have been restored to our Dallas DC. We are going through checks now to resume normal operation.

Our failover systems are kicking in and service is restored to support portals, web sites, outbound ExchangeDefender, louie, & rockerduck.

Please note: THESE ARE EMERGENCY FAILOVER systems, not the real thing. Services will be restored by the utility/power/electricians/etc.

ExchangeDefender outbound service has been re-established, as well as Exchange 2010 LOUIE and ROCKERDUCK.

DC/Electrical teams have established a provisional return of services for 6:30 PM EST. We will update this advisory at that time.

We are working with the DC to move around equipment for a temporary solution. OWN sites and ED outbound will be up soon.

DC Update: “There has been an issue affecting one of our 6 service entrances. The actual ATS is having an issue and all vendors are on site.”

The datacenter has confirmed an outage with the power plant and has staff attempting to redirect power around the core.

Service is still affected and the latest from the DC reports that the backup EPO overloaded and tripped. The issue is still being addressed

The issue has been identified as power related in the DC. Services are slowly coming online. We will update when service is fully restored.

Routing issues in Dallas at the moment. If you're having access issues and have Level3 in your path, it's going to take some patience today.

In addition, we're pulling updates from our DC's status page to ensure we provide as much detail as possible on the outage itself (times are CST):

Current Update

Our team and electricians are working diligently to get the temporary ATS installed, wired and tested to allow power to be restored. As the ATS involves high-voltage power, we are following the necessary steps to ensure the safety of our personnel and your equipment housed in our facility.

Based on current progress, the electricians expect to start powering on the equipment between 6:15 and 7:00pm Central. This is our best estimate at the moment. We have tested thoroughly and don't anticipate any issues in powering up, but there is always the potential for unforeseen issues that could affect the ETA, so we will keep you posted as we get progress reports. Our UPS vendor has checked every UPS, our HVAC vendor has checked every unit, and neither found any issues. Our electrical contractor has also checked everything.

We realize how challenging and frustrating it has been not to have an ETA for you or your customers, but we wanted to ensure we shared accurate and realistic information. We are working as fast as possible to get our customers back online while ensuring it is done safely and correctly. We will provide another update within the hour.

While the team is working on the fix, I’ve answered some of the questions or comments that have been raised:

1. ATSs are pieces of equipment and, like any equipment, can sometimes fail, which is why we run 2N power in the facility in case the worst-case scenario happens.

2. There is no problem with the electrical grid in Dallas or the heat in Dallas that caused the issue.

3. Our website and one switch were connected to two PDUs, but ultimately to the same service entrance. This was a mistake that has been corrected.

4. Bypassing an ATS is not a simple fix, like putting on jumper cables. It is detailed and hard work. Given the size and power of the ATS, the safety of our people and our contractors must remain the highest priority.

5. Our guys are working hard. While we all prepare for emergencies, it is still quite difficult when one is in effect. We could have done a better job keeping you informed. We know our customers are also stressed.

6. The ATS could be repaired, but we have already made the decision to order a replacement. This is certainly not the cheapest route to take, but it is the best solution for long-term stability.

7. While the solution we have implemented is technically a temporary fix, we are taking great care and wiring it as if it were permanent.

8. Colo4 does have A/B power for our routing gear. We identified one switch that was connected to the A side only, which was a mistake. It was quickly corrected earlier today but did affect service for a few customers.

9. Some customers with A/B power had overloaded their circuits, which is a separate, customer-specific issue rather than a network issue. (For example, if we offer A/B 20-amp feeds and the customer draws 12 amps on each, then if one feed trips, the other cannot carry the combined 24-amp load; see the sketch after this list.)
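To make the arithmetic in point 9 concrete, here is a minimal sketch of the capacity check, assuming loads simply add; real breakers are typically derated for continuous loads, so actual margins are tighter. The function name is ours, for illustration only:

# Sketch of the A/B feed failover check from point 9 above.
# Assumes loads simply add; real-world breaker derating (e.g., 80%
# of rating for continuous loads) makes the safe margin tighter.

def survives_feed_failure(feed_rating_amps, load_a_amps, load_b_amps):
    # If one feed trips, the surviving feed must carry both loads.
    combined_load = load_a_amps + load_b_amps
    return combined_load <= feed_rating_amps

# The example from point 9: 20 A feeds with 12 A drawn on each side.
# 12 + 12 = 24 A, which exceeds 20 A, so the surviving feed trips too.
print(survives_feed_failure(20, 12, 12))  # False

# Keeping each side at or below half the rating survives a failure.
print(survives_feed_failure(20, 9, 9))    # True: 18 A <= 20 A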

As you can imagine, this is the top priority for everyone in our facility. We will provide an update as quickly as possible.

Previous:

14:53

Thank you for your patience as we work to address the ATS issue with our #2 service entrance. We apologize for the situation and are working as quickly as possible to restore service.

We have determined that the repairs for the ATS will take more time than anticipated, so we are putting into service a backup ATS that we have on-site as part of our emergency recovery plan. We are working with our power team to safely bring the replacement ATS into operation. We will update you as soon as we have an estimated time that the replacement ATS will be online.

Later, once we have repaired the main ATS, we will schedule a maintenance window to transition off the temporary power solution. We will provide advance notice and timelines to minimize any disruption to your business.

Again, we apologize for the loss of connectivity and the impact to your business. We are working diligently to get things back online for our customers. Please expect another update within the hour.

13:34

It has been determined that the ATS will need repairs that will take time to perform. Fortunately, Colo4 has another ATS on-site that can be used as a spare. Contractors are working right now on a plan that will allow us to safely bring that spare ATS into service while the repair is happening.

That plan is being developed now, and we should have an update soon on the time frame to restore temporary power. We will need to schedule another window later, when the temporary ATS is taken offline and replaced by the repaired ATS.

13:05

There has been an issue affecting one of our 6 service entrances. The actual ATS (Automatic Transfer Switch) is having an issue and all vendors are on site. Unfortunately, this is affecting service entrance 2 in the 3000 Irving facility so it is affecting a lot of the customers that have been here the longest.

The other entrance in 3000 is still up and working fine, as are the 4 entrances in 3004. Customers utilizing A/B power should have access to their secondary link. It does appear that some customers were affected by a switch failure in 3000. That has been addressed and should be up now.
This is not related to the PDU maintenance we had in 3004 last night. Separate building, service entrance, UPS, PDU, etc.

We will be updating customers as we get information from our vendors so that they know the estimated duration of the outage. Once this has been resolved, we will also distribute a detailed RFO (Reason for Outage) to those affected.

Our electrical contractors, UPS maintenance team and generator contractor are all on-site and working to determine what the best course of action is to get this back up.
