August 8, 2017
During August our teams will be conducting audits of nearly all ExchangeDefender and Own Web Now management subsystems. We're working toward some new certifications, and we're also cleaning up years of custom implementations that were only ever meant to be temporary.
Starting Wednesday and Thursday, August 9th and 10th, our outbound systems will officially support only the configuration as it's programmed in the ExchangeDefender and Own Web Now control panels. This change really shouldn't affect anyone, but if it does and you receive a "Relaying Denied" error, please open a ticket.
Comments Off on Outbound audits
June 22, 2017
Over the next few days we will be conducting massive upgrades to the core of our ExchangeDefender network to add capacity and modernize some of the infrastructure that has served us really well. Here is a brief summary of the systems that will be affected:
support.ownwebnow.com – We will be upgrading our support portal and all of its associated systems from Saturday, June 24th through July 4th. The only time user access will be affected is in the early hours of Sunday, June 25th. The systems are getting hardware and software upgrades, and any interruption will be minimal and announced.
www.exchangedefender.com – We are upgrading our web site as well. Some portions of it may be unavailable from time to time on June 24th and 25th.
outbound-jr, outbound-xd, outbound-enterprise, outbound-gov – Our entire outbound network is getting a massive overhaul to both address the growth and deliver the new features our clients have been asking for.
name services – We will be performing a minor upgrade to our DNS infrastructure. The current ns1.ownwebnow.com system has an uptime of 1,612 days (well over 4 years) and frankly, we're just pushing our luck. Its hot spare (which would take over immediately in the event of a failure) has already gone through two different Dell servers over the same window, and both are being replaced by something more robust.
These are not small changes. The work behind the scenes to make these upgrades possible has been going on for several months (nearly a year in support.ownwebnow.com's case), and all the new replacement systems have been burned in and online for months. We will do everything we can to minimize impact, and we will post announcements on the @xdnoc handle on Twitter as well as site banners while the work is being done so that any interruptions can be minimized. As a global company we really don't have a "downtime" window when nobody is using the network, and our systems are massively scalable wherever technology allows it. That said, this is an unprecedented time for hackers, both in the scale of attacks and in the income potential, so the threats our clients face are far more significant than interruptions during maintenance windows. As always, we appreciate the trust you put in us to manage your IT, and we'll keep you in the loop the whole way.
Comments Off on Core maintenance June 23-26
June 9, 2017
First of all, thank you all for your business and for trusting us with your data. You've enabled us to grow at an incredible pace, and that kind of growth requires constant expansion. It also takes its toll on existing hardware and, from time to time, requires new designs. Over the next week we will be conducting massive updates to the core of our network – the infrastructure, wiring, power layout, switches, and everything down to the position of the perforated tiles on the floor will be touched.
While we have been planning these events for a while to minimize impact, it's technology, and something can always go wrong. We are doing our best to perform these tasks during off-peak hours, and any interruption to services should be minimal, if noticeable at all. Everything will continue to operate as usual.
Our support, however, will be slower: it's all hands on deck for this maintenance. As you may have noticed, a lot of new support and networking staff have joined us this year, and we're using this opportunity to bring them up to speed.
Comments Off on Scheduled Maintenance June 9 – 16
May 18, 2017
As you may have noticed, from time to time the amount of SPAM flowing in randomly explodes. Typically it's the obvious stuff, with subjects such as: "My Belly Fat Just Disappeared.."
As much as I am personally interested in the miracle that led to this, our clients pay us to keep their business from being interrupted by junk mail with such subject lines. But how is it getting around filters that should clearly catch it on the subject alone? Simple: spammers exploited a limit on the amount of data we would scan in each message to determine whether it was SPAM. Because spammers have limited resources on compromised systems, it used to be difficult for them to send out messages with large attachments. Fast forward to 2017: such limits are trivial to exceed, and exceeding them bypasses the majority of SPAM filtering solutions that have this limit in place, because filtering very large attachments becomes extremely expensive. Without getting specific (and giving spammers a way to circumvent it), we've made some adjustments to compensate for this new SPAM botnet vector, and you will start to see much better results. As with everything SPAM related, it takes a day or so for the adjustments to show impact in the wild, but things should start getting much better. In addition, we have some new tools in the mix that should make responding to these broadcast storms much easier.
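To illustrate the limitation in general terms (this is a hypothetical sketch, not our actual filter – the function names, blocked phrase and 64 KB window are all illustrative), a filter that only inspects the first part of a message can be defeated simply by padding the message so the payload lands past the window:

```python
# Hypothetical scan-window filter: only the first SCAN_LIMIT bytes of each
# message are inspected. Names and limit are illustrative, not our real code.
SCAN_LIMIT = 64 * 1024  # inspect only the first 64 KB

BLOCKED_PHRASES = ["my belly fat just disappeared"]

def is_flagged(raw_message: bytes) -> bool:
    """Flag a message if a known phrase appears within the scan window."""
    window = raw_message[:SCAN_LIMIT].decode("utf-8", errors="ignore").lower()
    return any(phrase in window for phrase in BLOCKED_PHRASES)

# A plain spam message is caught because its subject sits inside the window...
plain = b"Subject: My Belly Fat Just Disappeared..\r\n\r\nbuy now"

# ...but padding the message with a large attachment before the payload
# pushes the spam content past the scan window, so nothing is matched.
padded = (b"Content-Type: application/octet-stream\r\n\r\n"
          + b"A" * SCAN_LIMIT + plain)
```

The fix is conceptually simple (scan more, or scan smarter), but as noted above it gets expensive at scale, which is why spammers target this trade-off.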
I know you've heard about the WannaCry/WCry ransomware variant, but the same SMB 1.0 exploit that was used to deliver the ransomware has also been used to create a new set of SPAM nodes. A compromised system can earn far more through SPAM broadcasts than the roughly $300 it might yield in ransomware payments, so this should complicate matters for some time to come. Nevertheless, we will stay vigilant and keep making SPAM a fruitless venture.
If you have clients complaining about SPAM, make sure they: 1) have their settings set to store/store quarantine/quarantine and never deliver anything (you'd be surprised: 90% of the SPAM reports we receive were properly filtered out by ExchangeDefender as SPAM); 2) have the Outlook add-in installed to report SPAM; and 3) forward anything you get, no matter how trivial, with headers, to spam@ExchangeDefender.com. Let your users know that we regularly communicate back as analytics@ExchangeDefender.com.
Comments Off on Adjusting to new broadcast storms
May 4, 2017
Dear ExchangeDefender Partners,
Thank you for your patience and time spent working with us during the attacks we’ve been fighting for the past week. While I am happy to report that the issues that caused performance problems for our clients are largely contained, I cannot even begin to express how sorry I am for the business impact this has caused our clients. I am writing you this letter to explain what happened and how we responded.
On Monday, April 24th, the ExchangeDefender network received an unprecedented DDoS (distributed denial of service) attack, followed by a SPAM storm that we continue to mitigate as part of the core function of ExchangeDefender: protecting your mailbox.
Late on Monday, April 24th, we were targeted by a 0-day Exchange exploit that attempted to load virus and ransomware content across our Windows network. Neither Microsoft nor our multiple antivirus vendors had an answer, leaving us to fight the potential infection and outage on our own. This particular exploit would attempt to change name servers (so that antivirus updates could not be downloaded) and compile the virus on the host to spread across the network. Thanks to ExchangeDefender security and a quick response, it was not able to access data or compromise security, but out of an abundance of caution we started quarantining individual cluster nodes, mapping the exploit, and responding to it. This was a manual process that required physical access to affected systems, emergency patching of other infrastructure involved in delivering Exchange services, additional resources for inspection and tracking of hacking activity, and so on.
By Thursday, April 27th, we were in full incident containment mode, racing ahead of the attack. The load balancing solution had to be redone because the Microsoft CAS servers (the systems that handle Outlook Web App, Outlook connectivity, and ActiveSync) were not distributing users properly across them. Throughout the weekend and from Monday, May 1st to Wednesday, May 3rd, we continued to add servers and resources across the network to mitigate the combination of the attack and clients reconnecting to the network at new CAS access points.
During this time, access to ExchangeDefender CAS servers was below our standards, and for that we apologize. While addressing this issue we expanded our capacity seven-fold (7x) across the Rockerduck, Louie and Gladstone clusters. ExchangeDefender LiveArchive continued to perform unaffected throughout the incident window, and clients were able to continue communicating through it as a failover.
We sincerely apologize for the impact this had on your business and on your clients. While there are certainly enough vendors to point at and blame for the combination of issues we faced, it is our responsibility to deliver the service we promise. The attack, and its severity, was truly out of our control, and we did everything possible to contain it, preserve mail security, and continue delivering service. Attacks, hack attempts, viruses, DDoS and SPAM storms are nothing new; they are why you outsource this problem to us. The same issue has happened, and continues to happen, at other providers, as we all use the same technology; the difference here was scale. Everyone on my staff has put in extraordinary effort and hours to combat this issue, and we're truly sorry that even with everything we've done, some clients experienced excessive SPAM, Outlook disconnects, and repeated password prompts, and had to resort to using Outlook Web App and ExchangeDefender LiveArchive to continue working.
Our biggest regret in this entire episode concerns communication. While we did everything we could to communicate our response and mitigation through @xdnoc on Twitter, our NOC site, and the Service Advisory section of the Support Portal tickets, we did not effectively communicate our strategy and response as the attack unfolded. In 20+ years of providing email and security services, we have never seen anything of this scale, and our typical response changed as we continued to fight the various aspects of this attack. While our communication strategy was sufficient for isolated incidents, it was not good enough for a week of coordinated DDoS, SPAM storms, viruses and persistent hack attempts. In everything we do, your security and the privacy of your data are our first and primary concern, and the entire staff focused on that. As a result, we have changed how we communicate and advise clients on extensive work that is going on behind the scenes.
The extended performance problems and Outlook/ActiveSync/OWA issues also exposed what a poor job we have done at promoting LiveArchive, our business continuity service designed to let clients work unaffected during outages and maintenance windows. The amount of resources and service redundancy that goes into delivering our Exchange services is staggering, but at the core it's still Exchange, and when Exchange is having issues we point clients to LiveArchive. We will prioritize extensive promotion of it to our partners and clients, as many of those we talked to over the past week were simply unaware of it.
The extent and severity of this attack were unprecedented, and the resources we threw at solving the issues it caused were extensive. While these attacks and hack attempts were truly out of our control, handling them is why you outsource your Exchange to us, and we are deeply sorry that we didn't better communicate our incident response and mitigation strategies as we fought it all. We gave every effort and every resource we possibly could to mitigate the outbreak, but we failed to communicate extensively enough to assure our partners and clients of every complex aspect we were addressing at the time. We apologize that this left our partners looking uninformed and many of you unaware of everything that was going on behind the scenes.
We have already made changes to our process and will communicate shortly on additional ways we will be handling communications going forward.
Own Web Now Corp
Comments Off on ExchangeDefender Incident Response
April 26, 2017
As you may have noticed if you're following our blogs, Twitter, Facebook, etc., we are undergoing a massive update to our ExchangeDefender backbone.
Unfortunately, some of the new nodes hit a bug when releasing SPAM over the past few days: some nodes did not release the SPAM message. Rather than deal with the support cases one by one, as of 3:30 PM EST (8:30 PM GMT) we have automatically reprocessed all SPAM release requests from the past 5 days. Your users will automatically receive the SPAM they attempted to release, delivered with the original timestamp, which means it will likely be buried under newer mail they have received since then. This only applies to messages users requested to release through email/portal; whitelisted or trusted domains and senders were not affected.
Comments Off on ExchangeDefender SPAM release problem
December 8, 2016
First off, we'd like to apologize for the issues on LOUIE earlier today, and we appreciate your patience while we worked on the problem.
Earlier today we experienced an issue with one of the components of the Database Availability Group (DAG) cluster for LOUIE. The intermittent connectivity was tied to the Windows Cluster service randomly stopping, in a round-robin fashion, across the LOUIE DAG network.
Generally this is tied to things like network IP conflicts, network latency, or the simultaneous failure of multiple members of the cluster. In our deployment, none of these was a possibility: no new hardware had been added recently, the DAG network is handled via two VLANs connected with a dedicated VPN, and all members of the DAG network were online during the problems.
It took some time to parse the logs for the cluster, but it appears that the NIC used by the Replication network (we segregate Replication and Client Access) on LOUIEMBOX7 was causing other interfaces on that range to unregister and then re-register. Since we have multiple live copies of the Mailbox Databases, we decided to remove the node from the cluster outright to stabilize the replication network. This took care of all the connectivity issues.
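The triage above boils down to a simple idea: tally the unregister events per node and the flapping interface stands out. A rough sketch (the log format and function name are hypothetical, not our actual cluster tooling):

```python
from collections import Counter

def flapping_node(log_lines):
    """Return the node with the most 'unregistered' events, or None.

    Hypothetical log format, e.g.:
        "LOUIEMBOX7 replication NIC unregistered"
    The node name is assumed to be the first whitespace-separated token.
    """
    events = Counter()
    for line in log_lines:
        if "unregistered" in line:
            events[line.split()[0]] += 1
    if not events:
        return None
    # most_common(1) returns [(node, count)]; take the node
    return events.most_common(1)[0][0]
```

In practice, of course, the cluster logs are noisier and the correlation (one NIC forcing *other* interfaces to re-register) is what made this one tricky.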
To conclude the maintenance window tonight, we replaced the motherboard and network components on the server. Replication across the entire network has been re-established and is stable. In addition, to keep this from happening in the future, we've added an exception in monitoring for this type of error.
We would like to take this opportunity to remind everyone about LiveArchive. It's for issues like these that we make it available in the Outlook banner via our add-in, available here.
One of the features that differentiates our platform is that on top of the uptime promise, we include Business Continuity. The ability to continue working is literally inside Outlook, one click away.
Comments Off on LOUIE Intermittent Connectivity
October 24, 2016
As noted in real time in our NOC above and on Twitter, we had an issue with one of the scanning components. Due to this issue, scanning capabilities were limited. If you saw a spike in SPAM activity getting through the filter, those issues should improve greatly as the filter rebuilds its libraries and adapts to new signatures. Please accept our apologies for the inconvenience.
Comments Off on Antivirus Problem
March 31, 2016
This weekend (on 4/2/16) our network for Matilda will be getting an upgrade that requires a physical migration. Unfortunately, this means an unavoidable maintenance interval of 2-4 hours, falling between 11 PM and 5 AM AEST on Saturday. During this time frame, please remind your customers that LiveArchive will be available.
LiveArchive provides a simple, easy-to-use web interface at https://admin.exchangedefender.com/livearchive.php, which users can log into with their ExchangeDefender credentials.
You can also install the ExchangeDefender app on your iOS and Android devices for LiveArchive mobile access.
Comments Off on Matilda Upgrade Migration
August 31, 2015
We have confirmed reports that our ISP in Los Angeles is having issues with their rDNS delegation. We have escalated the issue with them to get it resolved; in the meantime, we have removed those nodes from the rotation. Flushing your DNS cache should take those nodes out of the resolution list.
[update 11:49 EST] The ISP resolved the rDNS delegation issue.
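For background on why delegation matters here: a reverse DNS (PTR) lookup queries a special name derived from the reversed IP address, and the ISP must delegate that zone correctly for receiving mail servers to verify the sending host. A small sketch using Python's standard library (the IP below is a documentation address, not one of our nodes):

```python
import ipaddress

def reverse_pointer(ip: str) -> str:
    """Return the in-addr.arpa (or ip6.arpa) name queried for a PTR lookup.

    If the ISP's delegation for this zone is broken, PTR queries for the
    name fail, and receiving servers may treat the sender as suspicious.
    """
    return ipaddress.ip_address(ip).reverse_pointer
```

This is why removing the affected nodes from rotation was the right interim fix: traffic simply stops originating from IPs whose PTR names cannot be resolved.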
Comments Off on Reverse DNS on Los Angeles Clusters
July 10, 2015
As some of you are aware, the 'acceptable' minimum key length for TLS keys has changed, a change made to resolve a type of man-in-the-middle attack. Generally the issue is resolved by reissuing a new Diffie–Hellman key of the appropriate length. For our platform, however, this set off a chain of software incompatibilities that forced us to rewrite large chunks of the platform from scratch, which is why it has taken us a couple of weeks to 'catch up'. We understand that this is generating support issues for your staff, and we apologize for that. We currently have every development resource assigned to resolving the issue as quickly as we can.
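On the client side, the same class of attack is typically mitigated by refusing legacy protocol versions and weak key-exchange suites. A minimal sketch using Python's standard `ssl` module (the settings are illustrative; our actual fix was server-side, reissuing longer Diffie–Hellman keys):

```python
import ssl

def hardened_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses legacy parameters.

    Illustrative settings only: restricts the protocol floor to TLS 1.2
    and limits key exchange to ECDHE/DHE AES-GCM suites, which excludes
    the weak export-grade Diffie-Hellman exchanges this attack abused.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # drop SSLv3/TLS 1.0/1.1
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")    # forward-secret suites only
    return ctx
```

A context like this would fail the handshake against a server still offering only short DH keys, which is exactly the incompatibility chain described above.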
[7/14/15] We published a small patch addressing this on the live servers yesterday and awaited client confirmation this morning before considering the issue resolved.
Comments Off on TLS Issues
February 12, 2015
Guys, in the past few days we've experienced a 25%-35% gain in mail flow coming through our system. This kind of gain throughout the platform is consistent with some sort of exploit being active. The impact you're likely to see is more SPAM activity on the client end, based on the assumption that new patterns are being used and more messages are in the system during the signature definition generation process.
As an example of what this may mean: say a batch of 10,000 SPAM messages with the same signature goes out. Generally 10 or so get through, the servers adjust to the signature, and all subsequent occurrences are flagged as expected. With 20,000 similar messages in flight, those numbers will be different. This post is to let you know what's been going on out there this past week.
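The adaptation described above can be modeled with a toy function (the learn threshold of 10 is illustrative, not our actual tuning): for any one signature, leakage is capped at the handful of messages seen before the signature is generated, so doubled volume mostly hurts when it means more *distinct* signatures are in flight at once.

```python
def leaked(messages, learn_threshold=10):
    """Toy model: messages that slip through before the filter adapts.

    The first `learn_threshold` copies of a new campaign pass; once the
    signature is generated, everything after is flagged.
    """
    return min(messages, learn_threshold)

def leaked_across_campaigns(campaign_sizes):
    """Total leakage when several distinct campaigns run concurrently."""
    return sum(leaked(size) for size in campaign_sizes)
```

So a single 20,000-message campaign leaks about the same as a 10,000-message one, but two concurrent campaigns with different signatures leak roughly twice as much, which matches what clients see during volume spikes like this.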
Comments Off on Volume
January 31, 2015
On Friday, January 30, 2015, we experienced a network issue that affected all of our services intermittently. The problem was initially detected around 11:35 AM by our monitoring, then reported and confirmed around 11:40. At that juncture we continued our network tests to find the culprit. The issue was tough to track down: the majority of mtr tests through and into our network came back clean, and only a single customer was affected. Eventually, we were able to track it down to a specific Level 3 hop outside our network and datacenter, and we continued to work alongside our ISP and their providers to locate and resolve the issue.
We completed internal tests of all our network hardware and routing with no issues detected. Connectivity stabilized for about an hour and a half while the ISP investigation continued, then some of the latency and intermittence returned, this time intensified. We were notified that other customers of the ISP were being affected as well. At that point a DDoS attack vector revealed itself, amplified enough to be detected and nailed down. We started null-routing some targets that were redundant rather than necessary, and once that was done the attack abated.
Always remember that any issues are listed in the NOC, and if the NOC website itself is affected, they are tweeted directly at twitter.com/xdnoc. We want to make sure the information stays accessible rather than trying to update a site that may not be reachable.
Also, for folks who want notifications about email being down, both the NOC and Twitter support RSS/notifications that you can set up in your clients.
Comments Off on Network Intermittence
May 28, 2014
Today we had an issue where a Mailbox server did not fail over to our secondary or tertiary domain controllers. This stops that server from reporting the status of its databases to the cluster, which in turn prevented users on those databases from connecting. We have made changes to the domain controller logic within Exchange to reduce the possibility of a recurrence. In addition, since certain databases were dismounted, the transport would not accept mail, causing a bottleneck in mail delivery for folks on LOUIE.
As always please remember that during these times you have https://admin.exchangedefender.com/livearchive.php at your fingertips to ensure that your clients continue to transact business. Remember with ExchangeDefender they are never down.
Comments Off on LOUIE partial issues
April 10, 2014
Here’s our original post on Facebook regarding this:
Note from our CEO
Comments Off on Heartbleed
March 27, 2014
Earlier today we received a DDoS attack specifically targeting our DNS infrastructure. This caused our message processing to grind to a halt. We have made changes in our design to account for the attack pattern we encountered, and this type of issue should not repeat itself. We would like to apologize profusely, especially during a week when we finalized big changes that had shown EXCELLENT metrics and alleviated the issues we were experiencing with whitelists and processing delays.
Please rest assured these are not the same issues, just a bout of terrible timing. As always, we will continue to make changes for the better to ensure that we keep improving the services we deliver to you and your clients. We have resolved the issues; however, we are now working through a massive backlog.
Comments Off on Inbound Mail Processing
March 25, 2014
mail2.exchangedefender.com is scheduled for a migration this weekend to new hardware that has been fully provisioned.
The connectivity interruption to the mail service for new mail is not expected to last beyond 5 minutes; however, the migration of existing email will take a few hours. This blog post will contain updates throughout the process. Again, we would like to reiterate that real-time usage will not be affected: your clients will see new email arrive immediately and will be able to send immediately.
Comments Off on mail2 upgrade
We have had some questions regarding a slight increase in SPAM emails and, to a certain extent, latency. We have been making multiple hardware and software changes to mitigate the increase in traffic coming into the system worldwide. This increase in mail/junk flow is largely due to the cPanel hack that infected thousands of servers with malware.
We have made it our top priority to monitor these large waves and ensure they're mitigated in a timely fashion. We believe the majority of these issues have been addressed by the changes we have made across the system, and we are continuing to monitor closely in case more arise.
If you'd like to read further on the root of the problem, here are a couple of links.
Comments Off on SPAM and Latency
March 12, 2014
We're currently in the process of doing an infrastructure and design upgrade on this service. These upgrades will improve performance and stability, as well as squash some bugs that currently exist in the system. Please be aware that certain aspects of the service will be unavailable intermittently.
3/14/2014: Just a clarification – this is extended maintenance of an intermittent nature. We will provide updates as progress continues.
Comments Off on Compliance Archive
March 6, 2014
We've experienced explosive growth this quarter, and some of you have experienced a few of those growing pains with us. This weekend we will be working on a massive build-out of our secondary data center on the mail processing side of the services. Los Angeles will not be available for mail processing this weekend. This should NOT have any impact on standard MTAs, which can handle multiple DNS results.
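The reason standard MTAs are unaffected can be sketched in a few lines (hostnames below are hypothetical, not our actual records): a sending MTA tries the returned mail hosts in priority order and simply skips the site that is out of rotation.

```python
def pick_delivery_host(mx_records, reachable):
    """Pick a delivery host the way a standard MTA does.

    `mx_records` is a list of (priority, hostname) tuples as returned by an
    MX lookup; `reachable` is the set of hosts currently accepting
    connections. Hosts are tried lowest priority value first, and dead
    hosts are skipped, so taking one site offline doesn't lose mail.
    """
    for _priority, host in sorted(mx_records):
        if host in reachable:
            return host
    return None  # all hosts down: the MTA queues the mail and retries later
```

With both sites up, mail flows to the preferred host; with Los Angeles out of rotation, everything simply lands on the remaining site.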
The only true impact expected is on mail releases. Releases that reside in LA (generally far fewer, due to the lack of parity between the sites) will not go out until the expansion is complete. Once the expansion enters its last stage and the servers come back online, all queued releases should go out.
Please follow the NOC above in case anything unexpected becomes affected.
-Update 4:55 EST this work has started
-Update 5:40 EST XD Data Replication is online, nodes to follow
-Update 6:55 EST 80% capacity moved for exchangedefender
-Update 11:09 EST All work completed across all services.
Sunday Post Report
We located an issue with SPAM actions not being applied correctly for all users, and we are working to resolve it this morning. If you receive a report of this for a message timestamped after noon today, please open a ticket; if the timestamp is before noon today, please consider the issue resolved for your end users.
We received additional reports that some SPAM actions still weren't functioning correctly. We have resolved this issue; if you see any affected emails timestamped after 7 PM EST, please let us know. If before then, please consider the issue resolved.
Comments Off on Weekend Upgrade