The outages in the northeast are slowing deliveries to some customers, but conditions have been improving throughout the morning as power is restored to more customers. We are adding capacity as we speak to handle the increased rejection/reattempt rates.
August 29, 2011
Our engineers have informed us that they will be performing maintenance on our Backup73 server. They have decided to perform this maintenance during business hours, as most scheduled jobs are set to begin before or after regular business hours. We thank you for your patience while the maintenance is performed and apologize for any inconvenience this may cause.
August 26, 2011
Our monitoring software, which tracks average mail delivery time across ExchangeDefender, is alerting us that nodes in Dallas are experiencing larger-than-normal queues.
We’ve identified an issue with the logging server that receives the SMTP transaction logs for each message as it traverses the network. We are currently working to alleviate the pressure and rebalance the message load across all nodes.
Users should expect delays of up to 15 minutes on messages until 1:50 PM Eastern, when delivery services will be back up to speed.
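The kind of threshold check behind these alerts can be sketched as follows; the field names and the 15-minute threshold here are illustrative assumptions, not our production monitoring configuration:

```python
# Minimal sketch of an average-delivery-delay alert. Field names and the
# threshold are illustrative assumptions, not the production setup.
DELAY_THRESHOLD_SECONDS = 15 * 60  # alert when average delay tops 15 minutes

def average_delivery_delay(transactions):
    """Average seconds between when each message was received and delivered."""
    if not transactions:
        return 0.0
    delays = [t["delivered_at"] - t["received_at"] for t in transactions]
    return sum(delays) / len(delays)

def should_alert(transactions, threshold=DELAY_THRESHOLD_SECONDS):
    """True when the average delivery delay exceeds the alert threshold."""
    return average_delivery_delay(transactions) > threshold
```

A queue of messages averaging a 10-minute delay stays quiet; once the average crosses the threshold, the check fires.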
Update 1:44 PM: The log server has been reactivated and we are watching traffic pour in. Note that the admin.exchangedefender.com site has been taken offline while the log server finishes its startup checks.
Update 2:00 PM: Mail delivery is catching back up; however, the per-domain mail log tables are being checked for consistency before being brought back online. We expect the admin.exchangedefender.com site to be up within 30 minutes.
Update 2:20 PM: Access to admin.exchangedefender.com has been restored. The inbound servers are still actively processing the queue loads.
Update 4:35 PM: All inbound/outbound nodes have flushed their queues and are operating at normal processing times. We will continue to work on resolving the log server’s pressure issues throughout the weekend.
We’ve stopped the OBS service on backup73 as we rebalance users across the drives. Maintenance is expected to last for 45 minutes and service is expected to resume at 1:30 PM Eastern.
August 25, 2011
We will be updating MBOX1 and CAS2 on the ROCKERDUCK cluster tonight, starting at 9:00 PM Eastern, to Exchange 2010 SP1 RU4-v2.
During the upgrade, users connected to CAS2 will automatically switch over to CAS1 as we will begin to drain the server connections around 8:45 PM Eastern.
Users with mailboxes on MBOX1 will have the passive copy of their database on MBOX2 activated around 8:50 PM Eastern.
The maintenance should be transparent to users and should not interrupt or disrupt any service.
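The switchover described above can be modeled in miniature. The server and database names below are hypothetical, and in practice DAG activation is performed through the Exchange management tools rather than code like this; the sketch only illustrates the promote-the-passive-copy logic:

```python
# Toy model of activating passive database copies when one mailbox server
# enters maintenance. Names are hypothetical, not the actual cluster layout.
DATABASE_COPIES = {
    "DB1": {"active": "MBOX1", "passive": "MBOX2"},
    "DB2": {"active": "MBOX2", "passive": "MBOX1"},
}

def activate_passive_copies(copies, server_in_maintenance):
    """Return a new copy map in which every database whose active copy lives
    on the maintenance server has its passive copy promoted to active."""
    switched = {}
    for db, copy in copies.items():
        if copy["active"] == server_in_maintenance:
            switched[db] = {"active": copy["passive"], "passive": copy["active"]}
        else:
            switched[db] = dict(copy)  # untouched databases keep their layout
    return switched
```

Taking MBOX1 down for maintenance promotes DB1’s passive copy on MBOX2, leaving DB2 where it already was.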
August 19, 2011
We’re performing maintenance on Backup73. We expect it to last 3 hours. We’re performing the maintenance during business hours as our usage records show most backup jobs are scheduled for before and after business hours.
August 16, 2011
Between 4:30 PM and 5:30 PM, a few clients may have received bounce-back messages referencing delivery to xd286#### from LiveArchive.
These notices were sent in error; they were caused by tests we were conducting to set up LiveArchive redundancy in Los Angeles.
We’ve scheduled maintenance tonight for the ROCKERDUCK cluster, specifically on the DAG. During the maintenance users should not experience any interruption in service; however, a brief interruption is possible. We will monitor connection statuses during the maintenance and will update this post if any interruption is detected.
Update 1:45 AM Eastern: We will have to stop the cluster service on all nodes which will cause a momentary interruption in service as we repair communication.
Update 2:23 AM Eastern: The DAG on Rockerduck has been repaired and is back online.
August 11, 2011
Tonight we will be performing maintenance on DEWEYMBOX2 to address the performance issues reported by partners throughout the day.
Maintenance is expected to last between 10PM and 2AM.
1:46 AM EST – All maintenance on DEWEYMBOX2 has been completed successfully. All services have been restored and queued messages are flushing into users’ mailboxes.
Again, we’d like to take this opportunity to apologize for this issue, and we appreciate your patience with us throughout the process.
Update 8:05 AM: We are going to reboot the CAS servers for DEWEY to clear up any issues before the start of business. Service may be interrupted for 15 minutes.
Update 8:15 AM: The domain controllers for DEWEY have come back online from the restart. We are still waiting for the primary CAS and MBOX server to come back online.
Update 8:45 AM: The primary CAS and MBOX servers came up at 8:30 AM, and we’ve confirmed service access for RPC, MAPI, ActiveSync, and POP/IMAP.
99% of our services have been restored. Unfortunately, three servers are still affected by the outage. Because this was a power outage, and Windows Server’s ability to handle such outages is not on par with Linux, some servers have taken longer to come back online. Services currently affected:
Dewey MBOX2 – A secondary mailbox server on the DEWEY cluster; it holds a small user count, but if your clients are on it, their mailboxes are inaccessible.
Daisy – Legacy Exchange 2003 server
VS4 – One of our Virtual servers
These servers affect a small but equally important portion of the client base, so we are all hands on deck restoring service on them ASAP. Hosted Exchange users who are still affected should remember that LiveArchive is online; if you’re having any issues authenticating to it or otherwise, please open a support request and we’ll get you back onto LiveArchive as quickly as possible.
August 10, 2011
All services have been restored; there is still spooled mail being delivered, but new mail is being delivered in real time. With that said, our CEO Vlad Mazek held a GoToWebinar outlining all the important facts about this event. We invite you to listen to the points covered and to use the information provided in the PPTX. Once again, we’d like to apologize for the inconvenience caused to you and your customers.
We’re executing an emergency recovery plan: essential services will be coming online, but they are not running on their full infrastructure. Please expect services to come back online slowly, as they are running on our emergency failover (and the datacenter has NOT fully restored power). Latency is expected on the servers that are online as customers try to reconnect. We’ll provide more details as they become available, and we will keep folks available on the phone to relay the information below live. Again, this is a continuation of the information found on our Twitter feed at twitter.com/XDNOC.
We’re currently pulling all the data from the Twitter feed to present it in one view.
Below you will find a redacted version of the Twitter feed so updates can be read without cross-conversation from partners during the outage. Please note that, like email headers, these read chronologically from the bottom up:
Services have been restored to our Dallas DC. We are going through checks now to resume normal operation.
Our failover systems are kicking in and service is restored to support portals, web sites, outbound ExchangeDefender, louie, & rockerduck.
Please note: THESE ARE EMERGENCY FAILOVER systems, not the real thing. Services will be restored by the utility/power/electricians/etc.
ExchangeDefender outbound service has been re-established as well as Exchange 2010 LOUIE and ROCKERDUCK
DC/Electrical teams have established a provisional return of services for 6:30 PM EST. We will update this advisory at that time.
We are working with the DC to move around equipment for a temporary solution. OWN sites and ED outbound will be up soon.
DC Update “There has been an issue affecting one of our 6 service entrances. The actual ATS is having an issue and all vendors are on site.”
The datacenter staff has confirmed an outage with the power plant and has individuals on staff attempting to redirect power around the core
Service is still affected and the latest from the DC reports that the backup EPO overloaded and tripped. The issue is still being addressed
The issue has been identified as power related in the DC. Services are slowly coming online. We will update when service is fully restored.
Routing issues in Dallas at the moment. If you’re having issues accessing us and have Level3 in your path, it’s going to take some patience today.
In addition, we’re relaying updates from our DC’s status feed to ensure that we provide as much detail as possible on the outage itself (times are CST):
Our team and electricians are working diligently to get the temporary ATS installed, wired and tested to allow power to be restored. As the ATS involves high-voltage power, we are following the necessary steps to ensure the safety of our personnel and your equipment housed in our facility.
Based on current progress the electricians expect to start powering the equipment on between 6:15 – 7:00pm Central. This is our best estimated time currently. We have thoroughly tested and don’t anticipate any issues in powering up, but there is always the potential for unforeseen issues that could affect the ETA so we will keep you posted as we get progress reports. Our UPS vendor has checked every UPS, and the HVAC has checked every unit and found no issues. Our electrical contractor has also checked everything.
We realize how challenging and frustrating that it has been to not have an ETA for you or your customers, but we wanted to ensure we shared accurate and realistic information. We are working as fast as possible to get our customers back online and to ensure it is done safely and accurately. We will provide an update again within the hour.
While the team is working on the fix, I’ve answered some of the questions or comments that have been raised:
1. ATSs are pieces of equipment and can fail as equipment sometimes does, which is why we do 2N power in the facility in case the worst case scenario happens.
2. There is no problem with the electrical grid in Dallas or the heat in Dallas that caused the issue.
3. Our website and one switch were connected to two PDUs, but ultimately the same service entrance. This was a mistake that has been corrected.
4. Bypassing an ATS is not a simple fix, like putting on jumper cables. It is detailed and hard work. Given the size and power of the ATS, the safety of our people and our contractors must remain the highest priority.
5. Our guys are working hard. While we all prepare for emergencies, it is still quite difficult when one is in effect. We could have done a better job keeping you informed. We know our customers are also stressed.
6. The ATS could be repaired, but we have already made the decision to order a replacement. This is certainly not the cheapest route to take, but it is the best solution for long-term stability.
7. While the solution we have implemented is technically a temporary fix, we are taking great care and wiring as if it were permanent.
8. Colo4 does have A/B power for our routing gear. We identified one switch that was connected to A only which was a mistake. It was quickly corrected earlier today but did affect service for a few customers.
9. Some customers with A/B power had overloaded their circuits, which is a separate, customer-specific issue rather than a network issue. (For example, if we offer A/B 20-amp feeds and a customer draws 12 amps on each, then if one feed trips, the other cannot handle the combined load.)
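The arithmetic in item 9 can be made explicit. A quick worked example, using the numbers from the update:

```python
# Worked example of the A/B feed overload in item 9: two 20 A feeds each
# carrying 12 A. If one feed trips, the survivor must carry 12 + 12 = 24 A,
# which exceeds its 20 A rating.
def surviving_feed_overloads(feed_rating_amps, load_a_amps, load_b_amps):
    """True when the combined load exceeds one feed's rating, i.e. the
    surviving feed cannot carry everything if its twin trips."""
    return (load_a_amps + load_b_amps) > feed_rating_amps

surviving_feed_overloads(20, 12, 12)  # 24 A on a 20 A feed: overload
```

Keeping each side of an A/B pair under half the feed rating is what lets either feed absorb the full load alone.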
As you could imagine, this is the top priority for everyone in our facility. We will provide an update as quickly as possible.
Thank you for your patience as we work to address the ATS issue with our #2 service entrance. We apologize for the situation and are working as quickly as possible to restore service.
We have determined that the repairs for the ATS will take more time than anticipated, so we are putting into service a backup ATS that we have on-site as part of our emergency recovery plan. We are working with our power team to safely bring the replacement ATS into operation. We will update you as soon as we have an estimated time that the replacement ATS will be online.
Later, once we have repaired the main ATS, we will schedule an update window to transition from the temporary power solution. We will provide advance notice and timelines to minimize any disruption to your business.
Again, we apologize for the loss of connectivity and impact to your business. We are working diligently to get things back online for our customers. Please expect another update within the hour.
It has been determined that the ATS will need repairs that will take time to perform. Fortunately, Colo4 has another ATS on-site that can be used as a spare. Contractors are working on a solution right now that will allow us to safely bring that spare ATS into service while the repair is happening.
That plan is being developed now and we should have an update soon as to the time frame to restore temporary power. We will need to schedule another window when the temp ATS is brought offline and replaced by the repaired ATS.
There has been an issue affecting one of our 6 service entrances. The actual ATS (Automatic Transfer Switch) is having an issue and all vendors are on site. Unfortunately, this is affecting service entrance 2 in the 3000 Irving facility so it is affecting a lot of the customers that have been here the longest.
The other entrance in 3000 is still up and working fine as well as the 4 entrances in 3004. Customers utilizing A/B should have access to their secondary link. It does appear that some customers were affected by a switch that had a failure in 3000. That has been addressed and should be up now.
We will be updating customers as we get information from our vendors so that they know the estimated duration of the outage. Once this has been resolved, we will also distribute a detailed RFO to those affected.
Our electrical contractors, UPS maintenance team and generator contractor are all on-site and working to determine what the best course of action is to get this back up.
August 1, 2011
One of our POP/IMAP servers, used primarily for free WebHosting mailboxes, appears to have been attacked. We’re resolving the issue as we speak; there may be some delays in mail flow from today. Measures have been taken to ensure the root cause does not recur.
We have received reports of a five-minute window of packet loss to our datacenter in Dallas. It appears this issue was at the ISP level of the network and has been resolved in its entirety.
Additional Info from our DC:
The network issues experienced today began at approximately 12:22 PM CST and were caused by an issue within Level3’s network. This issue affected Level3 customers nationwide and is not isolated to Colo4.