First of all, thank you all for your business and for trusting us with your data. You’ve enabled us to grow at an incredible pace, and that kind of growth requires constant expansion. It also takes its toll on existing hardware and from time to time requires new designs. Over the next week we will be conducting massive updates to the core of our network: the infrastructure, wiring, power layout, switches, and everything down to the position of the perforated tiles on the floor is getting touched.
We have been planning these events for a while to minimize impact, but it’s technology, and something will happen. We are doing our best to perform these tasks during off-peak hours, so any interruption to services should be minimal, if noticeable at all. Everything will continue to operate as usual.
Our support, however, will be slower: it’s all hands on deck for this maintenance. As you may have noticed, a lot of new support and networking staff have joined us this year, and we’re using this opportunity to bring them up to speed.
As you may have noticed, from time to time the amount of SPAM flowing in randomly explodes. Typically it’s the obvious stuff, such as: “My Belly Fat Just Disappeared..”
As much as I am personally interested in the miracle that led to this, our clients pay us to keep their business from being interrupted by junk mail with such headings. But how is it getting around filters that should clearly catch it by subject alone? Simple: spammers exploited a limitation in the amount of data we would scan in each message to determine if it was SPAM. Because spammers have limited resources on compromised systems, it used to be difficult for them to send out messages with large attachments. Fast forward to 2017: such limits are largely trivial to hit, and they bypass the majority of SPAM filtering solutions that cap scan size, because filtering very large attachments becomes extremely expensive. Without getting specific (and giving spammers a way to circumvent it), we’ve made some adjustments to compensate for this new SPAM botnet vector, and you will start to see much better results. As with everything SPAM related, it takes a day or so for the adjustments to show impact in the wild, but things should start getting much better. In addition, we have some new stuff in the mix that should make responding to these broadcast storms much easier to address.
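To illustrate the general mechanism (a minimal sketch, not our actual pipeline; MAX_SCAN_BYTES and is_spam_content are hypothetical names):

```python
# Minimal sketch of a scan-size cap in a content filter (all names hypothetical).
MAX_SCAN_BYTES = 512 * 1024  # many filters cap scanning to contain CPU cost

def is_spam_content(message_bytes: bytes) -> bool:
    # Stand-in for real signature/heuristic analysis.
    return b"My Belly Fat Just Disappeared" in message_bytes

def classify(message_bytes: bytes) -> str:
    # The exploit: pad the message with a large attachment so it exceeds the
    # cap, and the expensive content analysis never runs at all.
    if len(message_bytes) > MAX_SCAN_BYTES:
        return "pass"  # too big to scan economically, so spam slips through
    return "spam" if is_spam_content(message_bytes) else "pass"
```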
I know you’ve heard about the WannaCry/WCry ransomware, but the same SMB 1.0 exploit that was used to deliver the ransomware has been used to create a new set of SPAM nodes. A compromised system can earn far more than $300 sending SPAM broadcasts than it can in ransomware payments, so this should complicate matters for some time to come. Nevertheless, we will stay vigilant and keep making SPAM a fruitless venture.
If you have clients complaining about SPAM, make sure they: 1) Have their settings set to store/store quarantine/quarantine and never deliver anything (you’d be surprised: 90% of the SPAM reports we receive were properly filtered out by ExchangeDefender as SPAM); 2) Have the Outlook addin installed to report SPAM; and 3) Forward anything you get, no matter how trivial, with headers, to spam@ExchangeDefender.com. Let your users know that we regularly communicate back as analytics@ExchangeDefender.com.
As you may have noticed if you’re following our blogs, Twitter, Facebook, etc., we are undergoing a massive update to our ExchangeDefender backbone.
Unfortunately, some of the new nodes experienced a bug when releasing SPAM over the past few days: some nodes did not release the SPAM message. Rather than deal with the support cases one by one, we automatically reprocessed all SPAM release requests from the past 5 days as of 3:30 PM EST (8:30 PM GMT). Your users will automatically receive the SPAM they attempted to release, delivered with the original timestamps, which means it will likely be buried under mail they have received since then. This only applies to messages users requested to release through email/portal; whitelisted or trusted domains and senders were not affected.
First off, we’d like to apologize for the issues on LOUIE earlier today, and we appreciate your patience with us while we worked on the problem.
Earlier today we experienced an issue with one of the components of the Database Availability Group cluster for LOUIE. The intermittent connectivity was tied to the Windows Cluster service randomly stopping in a round-robin fashion across the LOUIE DAG network.
Generally this is tied to things like network IP conflicts, network latency, or simultaneous failure across multiple members of the cluster. In our deployment, none of these was a possibility: no new hardware was added recently, the DAG network is handled via two VLANs connected by a dedicated VPN, and all members of the DAG network were online during the problems.
It took some time to parse the logs for the cluster, but it appears that the NIC used by the Replication network (we segregate Replication and Client Access) for LOUIEMBOX7 was causing other interfaces on that range to unregister and then re-register. Since we have multiple live copies of the Mailbox Databases, we decided to remove the node from the cluster outright to stabilize the replication network. This took care of all the connectivity issues.
To conclude the maintenance window tonight, we replaced the motherboard and network components on the server. Replication across the entire network has been re-established and is stable. In addition, to prevent this from happening in the future, we’ve added an exception in monitoring for this type of error.
We would like to take this opportunity to remind everyone about LiveArchive. It’s for issues like these that we make it available in the Outlook banner via our addin, available here.
One of the features that differentiates our platform is that on top of the uptime promise, we include Business Continuity. The ability to continue working is literally one click away inside Outlook.
As noted in real time in our NOC above and on Twitter, we had an issue with one of the scanning components, which limited scanning capabilities. If you saw a spike in SPAM getting through the filter, it should improve greatly as the filter rebuilds its libraries and adapts to new signatures. Please accept our apologies for this inconvenience.
This weekend (on 4/2/16) our network for Matilda will be getting an upgrade that requires a physical migration. Unfortunately, this means an unavoidable maintenance interval. The window will last 2-4 hours, falling between 11 PM and 5 AM AEST on Saturday. During this time frame, please remind your customers that LiveArchive will be available.
LiveArchive provides a simple and easy-to-use web interface at https://admin.exchangedefender.com/livearchive.php; users log in with their ExchangeDefender credentials.
You can also install the ExchangeDefender app on your iOS and Android devices for LiveArchive mobile access.
We have confirmed reports that our ISP in Los Angeles is having issues with their rDNS delegation. We have escalated the issue with them to get it resolved; in the meantime, we have removed those nodes from the rotation. Flushing the DNS should take those nodes out of the resolution list.
[Update 11:49 EST] The ISP resolved the rDNS delegation issue.
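If you’d like to verify rDNS on your end, here’s a minimal check (a sketch; the IP below is a documentation placeholder):

```python
import socket

def reverse_dns(ip):
    """Return the PTR (rDNS) name for an IP, or None if the lookup fails."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None  # no PTR record; many receiving MTAs penalize that

# 203.0.113.25 is a documentation placeholder; substitute a real outbound IP.
print(reverse_dns("203.0.113.25"))
```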
As some of you are aware, the ‘acceptable’ minimum key length for TLS keys has changed, a change made to resolve an issue with a type of man-in-the-middle attack. Generally the issue is resolved by reissuing a new Diffie-Hellman key of the appropriate length. For our platform, however, this set off a chain of software incompatibilities that forced us to rewrite large chunks of the platform from scratch, which is why it has taken us a couple of weeks to ‘catch up’. We understand that this is generating support issues for your staff, and we apologize for that. We currently have every development resource assigned to resolving the issue as quickly as we can.
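For anyone curious about the underlying remedy: the usual fix is to generate fresh Diffie-Hellman parameters of at least 2048 bits and load them into the TLS stack. A minimal sketch using Python’s ssl module, assuming example file paths (the attack in question is of the kind publicized as Logjam; this shows the generic remedy, not our exact change):

```python
import ssl

# Assumes 2048-bit DH parameters were generated ahead of time, e.g.:
#   openssl dhparam -out dhparams.pem 2048
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")  # example paths
ctx.load_dh_params("dhparams.pem")  # longer DH params block the downgrade
```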
[7/14/15] We published a small patch addressing this on the live servers yesterday and waited for client confirmation this morning before considering the issue resolved.
Guys, in the past few days we’ve seen a 25%-35% increase in mail flow coming through our system. An increase of this kind across the platform is consistent with some sort of exploit being active. The impact you’re likely to see is more SPAM activity on the client end, based on the assumption that new patterns are in use and more messages are in the system during the signature definition generation process.
As an example of what this may mean: let’s say a batch of 10,000 spam messages with the same signature goes out. Generally 10 or so get through before the servers adjust to the signature, and all subsequent occurrences are flagged as expected. With 20,000 similar messages, those numbers will be different. This blog post is to let you know what’s going on out there this past week.
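For the curious, here’s a back-of-the-envelope model of that arithmetic (every number in it is illustrative, not a measured constant):

```python
# Back-of-the-envelope model: the filter needs a roughly fixed window to
# learn a new signature, so the leak scales with the send rate. All numbers
# here are illustrative.
def leaked(batch_size, seconds_to_send, adaptation_seconds=30):
    rate = batch_size / seconds_to_send
    return round(rate * min(adaptation_seconds, seconds_to_send))

print(leaked(10_000, seconds_to_send=30_000))  # ~10 leak at typical volume
print(leaked(20_000, seconds_to_send=30_000))  # double the volume, double the leak
```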
On Friday, January 30, 2015, we experienced a network issue that affected all of our services intermittently. The problem was initially detected around 11:35 AM by our monitoring, then reported and confirmed around 11:40 AM. At that juncture we continued our network tests to find the culprit. The issue was tough to track down: the majority of mtr tests through and into our network came back clean, and only a single customer was being affected. Eventually, we were able to track it down to a specific Level 3 hop outside our network and datacenter, and we continued to work alongside our ISP and their providers to locate and resolve the issue.
We completed internal tests of all our network hardware and routing with no issues detected. Connectivity stabilized for about an hour and a half while the ISP investigation continued; then some of the latency and intermittence returned, and this time it intensified. We were notified that other customers of the ISP were being affected as well. Eventually, a DDoS attack vector revealed itself, amplified enough to be detected and nailed down. We began null-routing some necessary but redundant items; once that was done, the attack abated.
Remember that any issues are always listed in the NOC, and if they are not (because the NOC website itself is affected), they are tweeted directly at twitter.com/xdnoc. We want the information to be accessible rather than trying to update a site that may itself be unreachable.
Also, for folks who want notifications about email being down, both the NOC and Twitter support RSS/notifications that you can set up in your clients.
Today we had an issue where a Mailbox server did not fail over to our secondary or tertiary domain controllers. This stops that server from providing the cluster with the status of its databases, which prevented users on those databases from connecting. We have made changes to the domain controller logic within Exchange to reduce the possibility of a recurrence. In addition, since certain databases were dismounted, transport would not accept mail, causing a bottleneck in mail delivery for folks on LOUIE.
As always, please remember that during these times you have https://admin.exchangedefender.com/livearchive.php at your fingertips to ensure that your clients continue to transact business. Remember: with ExchangeDefender, they are never down.
Here’s our original post on Facebook regarding this:
Note from our CEO
Earlier today our DNS infrastructure specifically was hit by a DDoS attack. This caused our message processing to grind to a halt. We have made changes in our design to account for the attack pattern we encountered, and this type of issue should not repeat itself. We would like to apologize, especially during a week when we finalized big changes that had shown EXCELLENT metrics and alleviated the issues we were experiencing with whitelists and processing delays.
Please rest assured these are not the same issues, just a bout of terrible timing. As always, we will continue to make changes for the better to ensure we keep improving the services we deliver to you and your clients. We have resolved the issues; however, we are now working through a massive backlog.
mail2.exchangedefender.com is scheduled for a migration this weekend to new hardware that has been fully provisioned.
Interruption to the mail service for new mail is not expected to last beyond 5 minutes; the migration of existing email, however, will take a few hours. This blog post will be updated throughout the process. Again, we would like to reiterate that real-time usage will not be affected: your clients will see new email arrive and send immediately.
We have had some questions regarding a slight increase in SPAM emails and, to a certain extent, latency. We have been making multiple hardware and software changes to mitigate the worldwide increase that has come into the system. This increase in mail/junk flow is largely due to the cPanel hack that infected thousands of servers with malware.
We have made it our top priority to monitor these large waves and ensure they’re mitigated in a timely fashion. We believe that the majority of these issues have been addressed by the changes we have made across the system. If issues continue to arise, please rest assured that we are monitoring them as a top priority.
If you’d like to read further on the root of the problem, here are a couple of links.
We’re currently in the process of doing an infrastructure and design upgrade on this service. These upgrades will improve performance and stability, as well as squash some bugs that currently exist in the system. Please be aware that certain aspects of the service will be intermittently unavailable.
3/14/2014: Just a clarification, this is extended maintenance of an intermittent nature. We will provide updates as the work progresses.
We’ve experienced explosive growth this quarter, and some of you have experienced a few of those growing pains with us. This weekend we will be working on a massive build-out of our secondary data center on the mail processing side of the services. Los Angeles will not be available for mail processing this weekend. This should NOT have any impact on standard MTAs that can handle multiple DNS results.
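For context, a standards-compliant MTA retrieves all MX records and works through them in preference order, which is why pulling one site out of rotation is transparent. A quick sketch of the lookup side, using the third-party dnspython package and an example domain:

```python
import dns.resolver  # third-party: pip install dnspython

# A standards-compliant MTA tries each MX host in preference order, so
# pulling one site out of rotation simply shifts delivery to the rest.
for mx in sorted(dns.resolver.resolve("example.com", "MX"),
                 key=lambda record: record.preference):
    print(mx.preference, mx.exchange)
```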
The only true impact expected is on mail releases. Releases that reside in LA (generally far fewer, due to the lack of parity between the sites) will not go out until the expansion is complete. Once the expansion enters its last stage and the servers come back online, all queued releases will go out.
Please follow the NOC above in case anything unexpected is affected.
-Update 4:55 EST: This work has started.
-Update 5:40 EST: XD Data Replication is online, nodes to follow.
-Update 6:55 EST: 80% of capacity moved for ExchangeDefender.
-Update 11:09 EST: All work completed across all services.
Sunday Post Report
We located an issue with SPAM actions not being correct for all users, and we are working to resolve it this morning. If you receive a report of this dated after noon today, please open a ticket; if the timestamp of the message is before noon today, please consider the issue resolved for your end users.
We received additional reports that some SPAM actions still weren’t functioning correctly. This has now been resolved: if you see any affected emails time-stamped after 7 PM EST, please let us know; if before then, please consider the issue resolved.
This weekend we will be performing maintenance on the DEWEY cluster. This will be done to improve the performance and overall experience for users who opted to remain on the legacy Exchange 2007 servers. Throughout the weekend, users may notice Outlook disconnect while we move their mailboxes to new databases. Note that this will not affect all users. When the move has completed, Outlook will prompt the user with the following message:
Once the user clicks “OK” and restarts Outlook, they will be automatically connected to their mailbox on the new database.
We have successfully launched phase 1 of LiveArchive4. This means that real-time email is available through https://admin.exchangedefender.com/livearchive.php.
Phase 2, which is in progress and will take a few weeks, is the migration of old items. To avoid any confusion, we’ll cut to the bottom line: if your mail is not at the new URL, then it’s still at the old URL. So your email for up to a year is still with us.
Current gremlins we have noticed:
Dead: There WAS (it was resolved yesterday) an issue with the RBL zone reloads that caused some DNS timeouts and rejections to LiveArchive.
Current: This one is more of an annoyance than anything else, but be aware: because we’re moving platforms, there is no “database => database” move; we are moving content. Unfortunately, this is triggering read receipts.
Phase 3. Vlad will blog about it once it’s ready.
We are currently doing massive redesigns to Compliance Archiving, so some aspects are not available. Your clients’ data is still safe; a lot of the front end is getting re-balanced from the back end. By that I mean it is inaccessible for viewing at times, but your data is fine. We’re working on balancing the rendering as well.