Earlier today one of the RBLs we use began listing non-spam URLs. As a result, some clients' outgoing messages were blocked with a message from ExchangeDefender saying the message contained a spam URL.
That listing was resolved, but a problem then developed in the RBL's own lookup process: it timed out and stopped functioning entirely, causing sporadic blocks on inbound messaging. This fault was not on our servers, nor on the recipient or sender MTAs.
The issue is now completely resolved and mail flow should be back to normal. You may still get complaints for the next few hours as the issues get ironed out completely. We would like to extend our deepest apologies to you and your clients for the inconvenience this caused. We're also in the process of rewriting some of our code so that our dependency on outside providers is not so absolute.
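To give a sense of the kind of change we mean, here is a minimal sketch (not our production code) of a fail-open RBL lookup in Python using the dnspython library; the RBL zone name is a placeholder. With this pattern, a slow or broken provider causes the check to be skipped rather than mail to be blocked:

```python
# Minimal fail-open RBL (DNSBL/URIBL) lookup sketch.
# Requires dnspython; "rbl.example.net" is a placeholder zone,
# not the actual provider we use.
import dns.exception
import dns.resolver

RBL_ZONE = "rbl.example.net"

def url_is_listed(domain: str, timeout: float = 2.0) -> bool:
    """Return True only if the RBL positively lists the domain."""
    resolver = dns.resolver.Resolver()
    resolver.timeout = timeout   # per-nameserver timeout
    resolver.lifetime = timeout  # total time allowed for the query
    try:
        # A listed domain resolves to an A record (e.g. 127.0.0.2);
        # an unlisted domain returns NXDOMAIN.
        resolver.resolve(f"{domain}.{RBL_ZONE}", "A")
        return True
    except dns.resolver.NXDOMAIN:
        return False  # positively not listed
    except (dns.exception.Timeout, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers):
        return False  # provider trouble: fail open, never block on it
```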
As an edit to address follow-up questions:
This DNS issue affected mail delivery and processing speeds for some messages, but we're flushing the backlog through as quickly as possible.
We're working on UI enhancements for the Service Manager within our Support Portal. If you're continuing to see any issues with it, please open a ticket with screenshots so we can make sure the development team is aware of any bugs they may have to address.
Our SIP provider is currently having issues. Remember, our portal is SLA-backed 24/7. If you need help, open a ticket or a LiveChat within the portal for support issues.
Update: Our provider has resolved their issues.
We're currently working on an issue with DB4 which is similar to the issues faced previously with DB3_1. The root cause is a storage controller fault, and we are currently running an integrity check/repair on the database.
We sincerely apologize for the duration of this issue. We understand this is extremely frustrating; it is extremely frustrating for us to be providing this level of service as well. This is currently not only our top priority but our only priority, and our top Exchange engineers are on this task and this task alone until it is resolved.
Update 6:00 PM Eastern: We are still performing work on DB4 for the affected mailboxes. We would like to extend the option for users to utilize "recovery mode" / dial tone. Recovery mode is a temporary new "blank" mailbox where users can work with their live mail (anything arriving after the outage occurred) as they normally would. Since the recovery mailbox is a "new" mailbox, it won't contain contacts, calendars, etc. If you have ActiveSync or iPhone users, you should have them set their Email, Calendar, and Contact sync to manual, as the phone will otherwise sync with the empty mailbox.
When a mailbox is manually switched to a temporary database, Outlook will prompt the user to either use their temporary data (new data) or open their previously cached mailbox data.
Outlook's "Recovery Mode" prompt is a feature introduced in Outlook 2007 to safely handle mailbox or server-side issues that lead to mailbox configuration changes. After the server goes through a dial tone migration, Outlook notices the content change in the mailbox and locks the old OST cache file so it stops downloading new mail. Once Outlook goes into Recovery Mode, the user receives a prompt upon launching the Outlook profile asking whether to use "Old data" or "Temp data". In simple terms, Old data refers to the OST cache file on the client machine, while Temp data provides online access to the mailbox to view newly arriving items. While using Temp data, Outlook runs in Online mode rather than Cached mode, so the overall experience will be slower for users accustomed to cached-mode speeds.
If a partner would like to utilize a recovery mailbox for mailboxes in their domain, we need a new support request opened with the following subject:
Dialtone Request: domain.com
Where domain.com is the client domain. Opening this as a new request allows us to ensure all requests are properly completed and documented.
January 2, 2013
Update 4:30 PM Eastern: The repair / integrity check is near 55% complete on the last step. We anticipate being able to mount the database by 8 PM Eastern tonight, after which we will deliver all queued mail to the mailboxes. Once all queues are flushed we will swap users who are in dial tone mode back to their home database and subsequently merge the new and existing data. We want to thank everyone again for their patience; we've had a positive response to the dial tone mailbox setups.
Update 5:45 PM Eastern: We’ve successfully mounted DB4 on LOUIE. We’ll be resuming mail delivery shortly.
Users on LOUIE DB3_1 are currently unable to access their mailboxes due to a hardware-level issue with the storage device. The database was a temporary holding spot for mailboxes that were moving off DB3, which was being phased out on LOUIE. Unfortunately, since this was a temporary database, there was no anticipated need to maintain multiple database copies, as the extra overhead could cause production performance issues. To rectify the issue we've been running a hardware RAID-level resync to ensure all data is in the best state possible. We anticipate the issue being resolved before the start of business on Monday.
As a reminder, partners are able to use LiveArchive (https://livearchive.exchangedefender.com) to access their live running mail during the outage.
Update 9:07 AM: We are currently running an eseutil integrity check on the log files for DB3_1. This check will ensure that all the data present in the database is correct and true, and that any uncommitted data is finally committed. This process is expected to take up to two hours to complete. If the replay and check complete without error, we will then be able to safely mount the database. All mail that hasn't been delivered is still queued and waiting for delivery.
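For partners curious about the mechanics, the sequence is roughly the one sketched below: verify the transaction logs, replay them into the database, then confirm the header reports a clean shutdown before mounting. The paths and the E00 log prefix here are illustrative examples, not our actual layout:

```python
# Illustrative wrapper around the eseutil steps described above.
# Paths and the "E00" log prefix are hypothetical examples.
import subprocess

DB_FILE = r"D:\DB3_1\DB3_1.edb"
DB_DIR = r"D:\DB3_1"
LOG_DIR = r"D:\DB3_1\Logs"

def run(args):
    print(">", " ".join(args))
    subprocess.run(args, check=True)  # stop if eseutil reports an error

# 1. Verify that the transaction log files are intact.
run(["eseutil", "/ml", rf"{LOG_DIR}\E00"])

# 2. Soft recovery: replay the logs into the database so any
#    uncommitted data is finally committed.
run(["eseutil", "/r", "E00", "/l", LOG_DIR, "/d", DB_DIR])

# 3. Dump the database header; it should now report "Clean Shutdown",
#    at which point the database is safe to mount.
run(["eseutil", "/mh", DB_FILE])
```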
Update 5:30 PM: The database was successfully mounted after the integrity check completed. To prevent this from recurring, we are adding a database copy for this database. Please keep in mind that this was a transitional database, used temporarily to move users off DB3, which was in the process of being phased out. We're extremely sorry for the inconvenience this has caused, and we will include transitional databases in DAG setups going forward.
The servers in the Australia network (Exchange 2007) have suffered a major failure, which has ultimately led to the decision to decommission them, as a restoration would not bring service back online seamlessly (it would require recovery and re-setup by clients).
Unfortunately, we do not believe the recovery of the Australia data can be completed within the next 48 hours at a minimum. To accommodate this and provide users with an immediate way to continue working, we've proactively recreated all users from Australia (2007) on Matilda (2010). Matilda has redundancy, and we've already shipped an additional server to add to the Matilda cluster.
The new server address is cas.matilda.exchangedefender.com and the autodiscover record should be set up as autodiscover.matilda.exchangedefender.com.
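Partners who want to confirm a client domain's DNS is pointing at the new cluster can do a quick lookup along these lines (a sketch using the dnspython library; yourclientdomain.com is a placeholder):

```python
# Quick check that a client domain's autodiscover CNAME points at
# Matilda. "yourclientdomain.com" is a placeholder; needs dnspython.
import dns.resolver

EXPECTED = "autodiscover.matilda.exchangedefender.com."

answer = dns.resolver.resolve("autodiscover.yourclientdomain.com", "CNAME")
target = str(answer[0].target)

if target == EXPECTED:
    print("OK: autodiscover points at Matilda")
else:
    print(f"Check DNS: autodiscover currently points at {target}")
```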
We've changed the target delivery location in ExchangeDefender to Matilda. You can also log in to LiveArchive to work with mail while the changeover to Matilda occurs.
Partners who have clients with cached Outlook profiles can open the current profile (on Australia) and export the cached data to .pst files. For any users who do not have cached data, and for any public folder data, there will be no ETA on restoration for at least 48 hours (until we can process the drive's data).
Update 10/25/12 9:45 PM Eastern: We are in the process of copying the past three days of mail from LiveArchive into the Matilda mailboxes. We anticipate the process being completed in under 4 hours.
This morning around 3:30 AM Eastern we received an alert that the hosted Exchange 2010 network in Australia (MATILDA) went offline. Upon logging in via KVM we noticed the server was repeatedly rebooting after faulting to a blue screen, with no diagnostic information provided. It was soon determined that the operating system needed to be repaired; the repair completed around 5:20 AM Eastern. Once we were able to successfully boot into Windows, we performed a database integrity check to ensure no actual data was corrupted or lost, which completed around 7:45 AM Eastern, and service was restored.
Once service was restored we looked into the server logs, which provided no information or logged entries regarding the server fault; however, a memory dump was created. Unfortunately the memory dump wasn't of much help, and the issue appeared to be hardware related. We temporarily switched service for Matilda over to the backup node while we tested the hardware components. We were able to determine that the issue was caused by a power supply that randomly dropped output voltage under high load. After replacing the power supply, we concluded that the server faulted while running a backup. As for the OS repair, we can only deduce that a system file was corrupted by the fault and needed to be replaced.
Over the weekend our Linux web hosting infrastructure was upgraded to PHP 5.3. The previous release, PHP 5.1.6, was getting out of date for a lot of the social applications that dominate today's deployment base. You will now be able to run the latest WordPress, Joomla, and other CMS and shopping cart sites without a problem.
If you have any legacy PHP applications that were written in the PHP 4 and early PHP 5 days, you may notice that some functionality does not work. Consult your software vendor for an update that functions with PHP 5.3 and above.
If your application is not supported on the new PHP platform and you do not have the resources to make an immediate switch to an alternative, we do have a www2legacy web infrastructure in place that will support older applications. We will make the accommodations and the switch to the legacy platform for a small one-time fee, but do keep in mind that the legacy platform will be discontinued in March of 2013. We provided more than a year's notice of our intention to upgrade our network infrastructure to the latest PHP, so if this caught you by surprise we will still do what we can to make sure you have a smooth transition.
For a list of features in PHP 5.3.x and old deprecated functions:
This is probably an excellent place to start your troubleshooting of outdated code. If you decide you'd like to stay on the legacy platform, or would like further information, please open a support request at https://support.ownwebnow.com.
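As a rough starting point for auditing your own code, a quick scan like the sketch below will flag some of the most common functions deprecated in PHP 5.3 (the function list is a partial sample and the script is a lead generator, not a complete audit tool):

```python
# Rough scan of a PHP codebase for functions deprecated in PHP 5.3.
# The function list is a partial sample; treat hits as leads, not proof.
import re
from pathlib import Path

DEPRECATED = ["ereg", "eregi", "ereg_replace", "eregi_replace",
              "split", "spliti", "session_register",
              "session_unregister", "set_magic_quotes_runtime"]
PATTERN = re.compile(r"\b(" + "|".join(DEPRECATED) + r")\s*\(")

for php_file in Path(".").rglob("*.php"):
    for lineno, line in enumerate(
            php_file.read_text(errors="ignore").splitlines(), start=1):
        if PATTERN.search(line):
            print(f"{php_file}:{lineno}: {line.strip()}")
```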
This Friday night we will be performing maintenance on the ROCKERDUCK cluster that may interrupt client connections for up to 10 minutes. The purpose of this upgrade is to provide more intelligent network routing for our mailbox database replication service. We will work our hardest to make the required changes without interrupting client connections.
We anticipate starting the work around 9 PM on Friday, August 24th.
Users on RDDB1 are unable to access their mailbox data as the mailbox database is currently offline. We are performing a surface check on the database, as database health indicators are showing minor corruption. We cannot activate the passive copies of the database until the surface check completes, because we are unsure when the corruption occurred and it could have been replicated to the passive copies. We anticipate the database will be offline until at least 3 PM Eastern. Only 9% of the user base is affected, and we are working diligently to restore access to customer mailboxes on RDDB1.
Update 3:06 PM Eastern: We are running two processes in parallel to restore access as soon as possible: repairing the live database, and restoring from the last backup (7/16/12) and then replaying the committed data into the restored database to bring it up to date. We will go with whichever process completes first, as both will bring the database back online in a healthy state. The restored backup database is copying log files and will complete around 5 PM. The live database repair is at 50% on step 1 of 5 and its ETA is unknown.
Update 4:13 PM Eastern: We’ve successfully mounted RDDB1 on ROCKERDUCK and confirmed user access.
Unfortunately this was not something that could have been avoided by any available redundancy, as this was software-level corruption, which a DAG does not protect against. Essentially, DAGs protect against unavailability between mailbox nodes and networking issues, but when a corrupted log file gets committed, it is then distributed to the passive nodes. Since we couldn't confirm whether the corrupted file had been distributed to specific nodes by the time it was caught, we felt it wasn't safe to remount the databases with the lingering possibility of encountering a more serious issue in the near future.
We are planning to introduce a lagged mailbox database copy, which will 'lag' before it commits log files to the passive copy. With a lagged copy in place, if we ever experience corruption again we can restore instantly to the previous day, as the lagged copy would not yet have copied/committed the corrupted data.
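Conceptually, a lagged copy behaves like the toy model below: shipped log files are queued as they arrive but only replayed into the lagged copy once they are older than the lag window, so a corrupted log caught within that window can be discarded before it is ever committed. This is a simplified illustration, not how Exchange implements it internally:

```python
# Toy model of a lagged database copy: replay a shipped log file only
# after it has aged past the lag window. Simplified illustration only.
import time
from collections import deque

LAG_SECONDS = 24 * 60 * 60  # one-day lag window

pending = deque()  # (received_at, log_file), in arrival order

def receive_log(log_file: str) -> None:
    """Ship a log file into the lag queue immediately."""
    pending.append((time.time(), log_file))

def replay_eligible_logs() -> None:
    """Replay only logs older than the lag window; anything younger
    can still be discarded if corruption is discovered in time."""
    now = time.time()
    while pending and now - pending[0][0] >= LAG_SECONDS:
        _, log_file = pending.popleft()
        print(f"replaying {log_file} into the lagged copy")
```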
Today (5/14/12) there was an outage with ROCKERDUCK between 11:50 AM – 12:20 PM (Eastern) that affected client access to mailboxes. In short, the outage was caused by our Los Angeles site experiencing Active Directory communication issues with Dallas, eventually taking LA offline. Normally this event wouldn't cause any issue, as Dallas holds the majority vote in the DAG quorum and would be able to maintain quorum on its own. Unfortunately the tie-breaking voter (MBOX3) was offline for maintenance (it does not actively host DBs; it's the internal failover server for Dallas), which, even before the communication issue between sites, left the vote at 4/5 online. Once LA experienced connectivity issues the vote dropped to 2/5 online and the cluster was forced offline. Our first alerts from monitoring started to come in at 11:52 AM and we were able to respond to the issue and post an advisory by 11:55 AM. Around 12:10 PM Eastern we were able to reestablish quorum in Dallas and bring the cluster online. Around 12:15 PM MBOX1 mounted all active databases, and by 12:20 PM MBOX2 mounted all databases. Since this event was a communication issue between sites, the passive site was marked as Blocked (unable to automatically mount databases for failover) because network interruptions were detected.
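For those unfamiliar with cluster quorum math: a DAG stays up only while a strict majority of voters is reachable. The worked example below (assuming three Dallas voters including MBOX3 plus two in LA, which matches the counts above) shows why losing LA on top of the MBOX3 maintenance took the cluster down:

```python
# Worked example of the DAG quorum arithmetic described above.
def has_quorum(votes_online: int, total_voters: int) -> bool:
    """A cluster keeps quorum only with a strict majority of voters."""
    return votes_online > total_voters // 2

TOTAL = 5  # assumed: MBOX1, MBOX2, MBOX3 in Dallas plus two LA voters

print(has_quorum(5, TOTAL))  # True  - everything healthy
print(has_quorum(4, TOTAL))  # True  - MBOX3 down for maintenance
print(has_quorum(2, TOTAL))  # False - LA lost too: cluster forced offline
```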
This weekend our international SIP trunk provider will be performing weekend-long maintenance on the infrastructure that hosts our accounts. During the period of 4/27/12 – 4/30/12 our international partners will have to dial our main number, 877-546-0316.
Update: The work below has been partially completed. We've changed the migration strategy to use public folder replication instead of an in-place reload. This decision will extend the maintenance interval into next weekend but will mean no downtime for mailboxes or public folders.
Over the weekend of March 30th – April 1st we will be performing upgrades to the LOUIE DAG (mailboxes) and Public Folder databases. The changes that begin during this maintenance cycle will improve the native automatic redundancy for LOUIE mailboxes and bring an overall better experience for users. Due to the number of users and the size of the data on LOUIE, the only way to complete the migration with the least amount of service interruption is to dismount the public folder database on Friday evening and leave it dismounted until Saturday afternoon.
Taking the public folder database offline will prevent ALL access to public folder data housed on LOUIEMBOX1, including mail-enabled public folders, from Friday evening until Saturday afternoon.
All public folders are replicated between multiple databases; however, unlike DAG protection, Public Folder databases work off a referral-based system for locating copies, which is highly unreliable for automated continuity.
Any users/partners who are concerned about the availability of their public folder data can open a support ticket and request that our team ensures their public folder data has an active, local replica.
We do not anticipate any interruption in the service of actual mail delivery or mailbox access on LOUIE.
Update 11:01 AM Eastern: We are in the process of moving the ROCKERDUCK CAS & HUB servers.
Update 10:13 AM Eastern: We are beginning the process of moving ROCKERDUCK servers to the new cabinet in our cage. Service is not expected to be affected at the moment.
During the weekend of March 23rd – March 24th we will be performing the following maintenance and upgrades to servers across multiple platforms:
Friday March 23rd 2012 22:00 – Saturday March 24th 06:00 Eastern:
NETWORK/ALL SERVICES: All services in our main cage in Dallas will undergo a quick network reconfiguration which will momentarily impact service availability for the following services:
• Sharepoint 2007 and 2010 (Excluding Rockerduck)
• Exchange 2007 (Scrooge, Huey, Dewey)
• Offsite backup
• ExchangeDefender inbound nodes
• ExchangeDefender apps (Encryption, Webfile Share)
• OwnWebNow Support Portal
The network reconfiguration consists of replacing network switches and rerouting network uplink distribution.
ROCKERDUCK: All servers to be relocated to new cabinet in Dallas. Service will not be interrupted during server moves.
Saturday March 24th 13:00 Eastern
LOUIE: LOUIEMBOX3 will be placed into production and pre-process steps to phase out LOUIEMBOX1 will begin. Service will not be interrupted during the turn-up of LOUIEMBOX3.
During the week of Feb 4 – Feb 11th 2012 the ExchangeDefender staff performed various upgrades to our ExchangeDefender service and to our Hosted Exchange network for ROCKERDUCK. The upgrades included additional nodes for ExchangeDefender, the installation of additional power capacity, and a replacement VPN router for ROCKERDUCK internal communication (Active Directory, Exchange services, etc.).
Unfortunately, during the upgrade process we encountered various issues, some of which affected about half the clients on ROCKERDUCK.
On February 9th RDMBOX2 lost heartbeat communication with the DAG, which prompted the passive mailbox database copies on ROCKERDUCK:LA to actively mount the databases. Failover to Los Angeles kept client mailbox access available but caused issues for ActiveSync users with redirection requests, and with mail delivery. The issue was completely resolved by 2 PM Eastern and only affected users on RDDB8 and RDDB13.
On February 10th and 11th the ROCKERDUCK EDGE servers experienced issues with inbound and outbound mail flow. The problem was quickly traced to a corrupted Exchange 2010 SP2 installation on the EDGE servers. After reapplying the update, mail flow resumed and service was fully restored.
Although we made a lot of headway on our maintenance schedule this weekend, a few tasks could not be completed and have been rescheduled for tonight and next weekend.
Tonight we will replace the VPN device for ROCKERDUCK internal communication. The VPN device upgrade will bring a more responsive experience for users accessing their mailboxes.
Next weekend we will be testing the ROCKERDUCK failover procedure. Currently we have this scheduled for Saturday at 1 AM Eastern.
Neither tonight's maintenance nor next weekend's (2/17 – 2/19) will interrupt clients' access to their mailboxes.
We're currently testing the Business Continuity component of ExchangeDefender Essentials. We've scoped the test to a couple of nodes to avoid any major inconveniences. All of the backend tests with test accounts completed successfully, so we're moving to the next step, which is testing against live mail flow. What does this mean to you and your clients?
A. You may see a limited number of NDRs (most likely none); please review them carefully and look for emergency.exchangedefender.com within the content (see the sketch below). If your clients receive any, please see B.
B. We're giving the Essentials product something to separate it from other products in its price range: business continuity, where previously there was none!
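For partners who would rather triage bounces programmatically than by eye, a few lines like these will flag NDRs that mention the continuity host (a sketch using only the Python standard library; bounce.eml is a placeholder filename):

```python
# Flag NDRs that mention the ExchangeDefender continuity host.
# "bounce.eml" is a placeholder path; standard library only.
from email import policy
from email.parser import BytesParser

MARKER = "emergency.exchangedefender.com"

with open("bounce.eml", "rb") as fh:
    msg = BytesParser(policy=policy.default).parse(fh)

body = msg.get_body(preferencelist=("plain", "html"))
text = body.get_content() if body is not None else ""

if MARKER in text:
    print("Business continuity NDR: see item B / open a ticket")
else:
    print("Ordinary NDR: investigate as usual")
```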
Tomorrow at 4:30 AM we will be relocating the sites on WWW2 (Windows web hosting) to a new physical host.
This change will allow us to resolve the outstanding disk space issues with the server. The move is expected to take up to an hour to complete.
The following work will be performed in our primary datacenter in Dallas, Texas this weekend.
Hardware upgrade for ROCKERDUCK mailbox servers
- Replacing the RAID controller battery in RDMBOX2
- Upgrading RAM in RDMBOX1 & RDMBOX2
- Adding an additional mailbox server to ROCKERDUCK
During the ROCKERDUCK upgrades, service is not anticipated to be interrupted; we will switch the active role between servers during maintenance.
APC Power Unit Upgrades – 7:30 AM Saturday 12/10/11
We will be upgrading two APC units in our Dallas DC. The following services will be interrupted between 7:30 AM – 8:00 AM Eastern
- DEWEY 2007 & 2010 Exchange
- HUEY 2007 Exchange
- LUDWIG 2007 Sharepoint
- All 2007 BES servers and LOUIE BES
- support.ownwebnow.com – Web
- ownwebnow.com – Web
- exchangedefender.com – Web
Over the weekend we performed massive updates to our network and infrastructure, and even provisioned new products that will be announced in October. We wanted to give you a heads-up about the work being done so that if you do get complaints from end users you have some background.
ExchangeDefender Inbound Network
ExchangeDefender received a huge upgrade to its load balancers and log infrastructure, and the number of inbound servers that process mail has been increased as we continue to grow. In the new infrastructure each data center operates independently from the other, giving us the ability to fully fail over all services without problems in one network causing problems in the other. This was the concern that came out of our issues during August, when a power failure in Dallas slowed down mail processing in Los Angeles, and I am happy to report that the new infrastructure will not only improve availability when we face issues but also improve performance when everything is working as it should.
Please note: during the rebalancing process, as the new nodes are introduced to the environment, the load on the network increases, which increases processing times (i.e., email delays). While this is counter-intuitive, consider what is happening on the backend: all nodes need to become aware of the new nodes. Configuration load times increase as the new nodes come online and receive the initial ExchangeDefender image and definitions. During the replication process all configurations are updated across all systems, and nodes are pulled online/offline as tests are run to make sure all nodes perform the same way. So even though there are more processing systems, the load on the existing nodes initially increases.
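A back-of-the-envelope model makes the counter-intuitive part concrete. If we assume (purely for illustration) that every existing node must complete a sync session with every new node before the new capacity helps, the one-time sync work grows with both counts, which is exactly the temporary extra load described above:

```python
# Back-of-the-envelope model: adding nodes creates sync work before it
# adds capacity. The one-session-per-pair assumption is illustrative.
def sync_sessions(existing_nodes: int, new_nodes: int) -> int:
    return existing_nodes * new_nodes  # each existing node syncs each new one

for new in (2, 4, 8):
    print(f"adding {new} nodes to 10 existing -> "
          f"{sync_sessions(10, new)} sync sessions before full capacity")
```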
ExchangeDefender Outbound Network
We have added outbound servers in Los Angeles, providing full geographic redundancy for the outbound services as well. These will be online shortly.
ExchangeDefender LiveArchive LA (LA3) is online and processing mail at this time. We will be making it available for access shortly, as soon as we are happy with the performance figures on both the primary and backup systems that power ExchangeDefender LiveArchive.
We are quite pleased with the performance and the updates that were made over the weekend. While we regret the temporary issues and inconveniences, such as email delays, this work will improve our clients' ability to communicate as the network becomes more reliable and more redundant. Keep in mind that our clients pay us to keep them and their users secure, and that is the primary mission here. If we could account for both email and client growth without massive infrastructure upgrades, we would love nothing more than to do more with less, but the threats are getting more serious and more numerous, and protecting our client base continues to require additional resources.
Thank you for your business and your patience, and please assure your clients we are working hard to keep them safe and keep their business continuity in place.
The outages in the northeast are slowing down deliveries to some customers, but delivery has been improving throughout the morning as more customers restore power. We're increasing capacity as we speak to adjust for the increased rejection/reattempt rates.