
May 4, 2017

ExchangeDefender Incident Response

Filed under: Exchange Hosting — admin @ 6:02 am

Dear ExchangeDefender Partners,

Thank you for your patience and for the time you spent working with us during the attacks we've been fighting for the past week. While I am happy to report that the issues that caused performance problems for our clients are largely contained, I cannot even begin to express how sorry I am for the business impact this has caused. I am writing this letter to explain what happened and how we responded.

On Monday, April 24th, the ExchangeDefender network received an unprecedented DDoS (distributed denial of service) attack, followed by a SPAM storm that we continue to mitigate as part of the core function of ExchangeDefender: protecting your mailbox.

Late on Monday, April 24th, we were targeted by a 0-day Exchange exploit which attempted to load virus and ransomware content across our Windows network. Neither Microsoft nor our multiple antivirus vendors had an answer or a response, leaving us to fight the potential infection and outage on our own. This particular hack/exploit would attempt to change name servers (so that antivirus updates could not be downloaded) and compile the virus locally in order to exploit the network. Thanks to ExchangeDefender security and a quick response, it was not able to access data or compromise security, but out of an abundance of caution we started quarantining individual cluster nodes, mapping the exploit and responding to it. This was a manual process that required physical access to affected systems, emergency patching of other infrastructure involved in delivering Exchange services, additional resources for inspection and tracking of hacking activity, and so on.
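To illustrate the kind of name-server tampering described above, here is a minimal sketch of a check that compares a Windows host's configured DNS resolvers against an approved list. This is not our tooling; the approved addresses and the `ipconfig` parsing are illustrative assumptions only.

```python
# Hypothetical sketch: detect tampering with a Windows host's DNS resolver
# settings, the kind of change that would block antivirus updates.
# The APPROVED_RESOLVERS addresses are placeholders, not real infrastructure.
import re
import subprocess

APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}  # assumed: the network's legitimate DNS servers

def configured_resolvers():
    """Parse DNS server addresses out of `ipconfig /all` output (Windows only)."""
    output = subprocess.run(
        ["ipconfig", "/all"], capture_output=True, text=True, check=True
    ).stdout
    # Matches lines like "DNS Servers . . . . . . . . : 10.0.0.53"
    return set(re.findall(r"DNS Servers[ .:]*([\d.]+)", output))

def check():
    unexpected = configured_resolvers() - APPROVED_RESOLVERS
    if unexpected:
        print(f"ALERT: unapproved DNS resolvers found: {sorted(unexpected)}")
    else:
        print("DNS resolver configuration matches the approved list.")

if __name__ == "__main__":
    check()
```

A real deployment would run a check like this on a schedule and alert centrally; the sketch only shows the comparison itself.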

By Thursday, April 27th, we were in full incident containment mode, racing ahead of the attack. The load balancing solution had to be redone because Microsoft CAS servers (the systems that handle Outlook Web App, Outlook connectivity and ActiveSync) were not distributing users properly across the cluster. Throughout the weekend and from Monday, May 1st through Wednesday, May 3rd, we continued to add servers and resources across the network to mitigate the combination of the attack and clients reconnecting to the network at new CAS access points.
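For readers unfamiliar with CAS load balancing, the sketch below shows the general shape of an availability probe across a set of CAS nodes, the kind of signal that informs redistributing users when balancing misbehaves. The hostnames are hypothetical, and the `/owa/healthcheck.htm` path is an assumption (it exists on Exchange 2013 CAS; older versions may differ).

```python
# Illustrative sketch only: probe a set of CAS nodes and report which ones
# are answering HTTP at all. Hostnames below are placeholders.
import urllib.error
import urllib.request

CAS_NODES = ["cas1.example.com", "cas2.example.com", "cas3.example.com"]

def probe(host, timeout=5):
    """Return True if the node's OWA endpoint answers with any HTTP status."""
    url = f"https://{host}/owa/healthcheck.htm"
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # got an HTTP response (e.g. 401): the node is serving
    except (urllib.error.URLError, OSError):
        return False  # timeout, DNS failure, refused connection: node is down

for node in CAS_NODES:
    print(f"{node}: {'up' if probe(node) else 'DOWN'}")
```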

During this time, access to ExchangeDefender CAS servers was below our standards, and for that we apologize. We have expanded our capacity seven-fold (7x) across the Rockerduck, Louie and Gladstone clusters while addressing this issue. ExchangeDefender LiveArchive performed unaffected throughout the incident window, and clients were able to continue communicating through it as a failover.

We sincerely apologize for the impact this had on your business and on your clients. While there are certainly enough vendors to point at and blame for the combination of issues we faced, it is our responsibility to deliver the service we promise. The attack, and its severity, was truly out of our control, and we have done everything possible to contain it, preserve mail security and continue delivering service. Attacks, hack attempts, viruses, DDoS and SPAM storms are nothing new, and they are why you outsource this problem to us. The same issue has happened, and continues to happen, at other providers, since we all use the same technology; the difference in this case was scale. Everyone on my staff has put in extraordinary effort and hours to combat this issue, and we are truly sorry that, even with everything we have done, some clients experienced excessive SPAM, Outlook disconnects and repeated password prompts, and had to resort to using Outlook Web App and ExchangeDefender LiveArchive to continue working.

Our biggest regret in this entire episode concerns communication. While we did everything we could to communicate our response and mitigation through @xdnoc on Twitter, our NOC site, and the Service Advisory section of our Support Portal tickets, we did not effectively communicate our strategy and response as the attack unfolded. In 20+ years of providing email and security services, we have never seen anything of this scale, and our typical response changed as we continued to fight the various aspects of this attack. While our communication strategy was sufficient for isolated incidents, it was not good enough for a week of coordinated DDoS, SPAM storms, viruses and persistent hack attempts. In everything we do, the security and privacy of your data is our first and primary concern, and the entire staff focused on that. As a result, we have changed how we communicate with and advise clients about the extensive work that goes on behind the scenes.

The extended performance problems and Outlook/ActiveSync/OWA issues also exposed what an ineffective job we have done promoting LiveArchive, our business continuity service designed to let clients work unaffected during outages and maintenance windows. The amount of resources and service redundancy that goes into delivering our Exchange services is staggering, but at the core it is still Exchange, and when Exchange is having issues we point clients to LiveArchive. We will prioritize extensive promotion of this service to our partners and clients, as many of those we talked to over the past week were simply unaware of it.

The extent and severity of this attack were unprecedented, and the amount of resources we devoted to solving the issues it caused was extensive. While these attacks and hack attempts were truly out of our control, handling them is why you outsource your Exchange to us, and we are deeply sorry that we did not better communicate our incident response and mitigation strategies as we fought them. We gave every effort and every resource we possibly could to mitigate the outbreak, but we failed to communicate as extensively as necessary to assure our partners and our clients of every complex aspect we were addressing at the time. We apologize that this left our partners looking uninformed and many of you unaware of everything that was going on behind the scenes.

We have already made changes to our process and will follow up shortly with additional ways we will be handling communications going forward.

Sincerely,

Vlad Mazek

CEO

Own Web Now Corp
