Service Alert: Stats Update

We’ll be doing an update to the stats database this afternoon. This process will take around 30 minutes, during which time some stats might look a bit wonky. After the work is completed, stats should be quicker to display, particularly at the VISP level.

Under the hood: Upgrading MySQL 5.1 -> 5.6 with zero downtime

Downtime is not an option for us. We might get away with a minute or two in the middle of the night, but running an email service means millions of emails a day at all times of day.

So, after years of resisting, I was finally lured by the charms of the latest and greatest MySQL – well, almost. After wasting a day of my life trying to import my data into MySQL 5.7, I switched to 5.6 and all worked like a dream. So, having run with MyISAM tables all this time, why switch to InnoDB? Here are the reasons for us:

  1. Performance. In our testing with a particularly complex query that examines lots of rows, v 5.1 with MyISAM took 1 minute, 2 seconds. v 5.6 with InnoDB took 39 seconds the first time, and 28 seconds all subsequent times. This is probably related to point 2.
  2. InnoDB caches all data in memory, not just indexes. Our dataset is around 20GB in size, so this all fits nicely in memory, giving a speed boost, and reducing disk accesses (always a good thing).
  3. Row-level locking. This is a biggie. MyISAM has always been a bit of a problem in this regard, and in unexpected ways. Perform the query in point 1, which is just reading from the database, and you get an unexpected problem. Because this query is looking at logging tables that are written to frequently, as soon as a write comes through (which would be within 0.1 seconds in our situation), the write blocks, and, more importantly, all subsequent reads block waiting for that write. Before you know it, rather than 100 connections to the database, you suddenly have 1,000, and emails start queueing on our system. With InnoDB, this problem completely goes away.
  4. Crash recovery. Although we have 100% power uptime, dual feeds etc. in our data centre, it’s nice to know that if the worst does happen, the databases won’t get corrupted, so no lengthy repair times or risk of data loss.
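Point 2’s “all fits in memory” claim is easy to check up front. A minimal sketch for finding which tables still use MyISAM and how big the working set is (the schema name ‘ourdb’ is a placeholder for your own):

```sql
-- List tables with their storage engine and on-disk size, so you can
-- see what still needs converting and whether the working set will
-- fit in the InnoDB buffer pool. 'ourdb' is an illustrative name.
SELECT table_name,
       engine,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'ourdb'
ORDER BY (data_length + index_length) DESC;
```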

So, how do we get from here to there without a lot of pain in the middle? The answer is authoritative DNS servers and a spare MySQL server. All servers that use this database must access it via a name (we use ‘sql’ for the main read-write database and ‘sql-ro’ for the replication slave). In our situation, they query our caching DNS servers that are also authoritative for our domain, so if I make a change, it happens instantly for all our servers.
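The DNS side is just a pair of short-TTL records in the authoritative zone. A sketch (names and domain are illustrative; ours live under our own zone):

```
; Illustrative zone fragment: 'sql' and 'sql-ro' are the names clients use.
; A short TTL means a repoint is picked up almost immediately.
sql     60  IN  CNAME  sql001.example.net.
sql-ro  60  IN  CNAME  sql002.example.net.
```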

The process then goes like this. Existing live servers are called sql001 (master) and sql002 (slave). Our level of traffic is such that one server can cope with the full load, particularly at off-peak times, so we will use this to help us.

  1. Point sql-ro at sql001. This removes sql002 from being live.
  2. Use mysqldump to take a copy of the database on sql002.
  3. Set up new server, sql003, with MySQL 5.6 and restore the mysqldump data, with sql003 now also being a slave of sql001.
  4. Alter all tables to be InnoDB.
  5. Have a good look through my.cnf and set it up so that you have plenty of memory for InnoDB caching instead of MyISAM caching.
  6. Test, test, test. And once happy, point sql-ro at sql003 to make sure all runs well on that server.
  7. Upgrade sql002 to MySQL 5.6 and set it to slave off sql003. To do this, you’ll need to use mysqldump on sql003 with --single-transaction (which only works for InnoDB but allows you to do it without locking the tables).
  8. Now it’s time to do the dangerous stuff. Switch the DNS for sql to point to sql003. As soon as this is done, shut down the MySQL server on sql001. In our case, we quickly got conflicts on sql003 due to logging table entries. We’re not too worried about this, but best to switch off sql001 so all clients are forced to reconnect – to the correct server.
  9. Now everything is pointing at sql003, time to upgrade sql001 to 5.6.
  10. Shut down the server on sql002 and copy the database contents to sql001. Also, copy my.cnf and don’t forget to alter the server-id. Remove auto.cnf from /var/db/mysql on sql001 before starting, or the MySQL server will think it has two slaves with the same ID.
  11. Once sql001 is running OK, use ‘mysqldump --single-transaction --master-data’ from sql001 to set up sql002 as a slave of sql001. So now we have sql003 as master, with sql001 as a slave of sql003 and sql002 as a slave of sql001.
  12. Change sql and sql-ro to point at sql001 and switch off sql003 server.
  13. Double-check sql002 and then point sql-ro at it. Job done.
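Steps 4, 7 and 11 boil down to a handful of statements. A sketch, assuming a replication user called ‘repl’ and illustrative binlog coordinates (a dump taken with --master-data actually embeds the correct CHANGE MASTER TO line for you):

```sql
-- Step 4: convert each table to InnoDB (repeat per table, or generate
-- the statements from information_schema.tables).
ALTER TABLE maillog ENGINE=InnoDB;

-- Steps 7/11: after restoring a dump taken on the master with
--   mysqldump --single-transaction --master-data --all-databases
-- point the new slave at its master. All values here are made up.
CHANGE MASTER TO
  MASTER_HOST='sql001',
  MASTER_USER='repl',
  MASTER_PASSWORD='...',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=4;
START SLAVE;
```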
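For step 5, the my.cnf work is mostly a matter of moving memory from the old MyISAM key cache to the InnoDB buffer pool. A sketch with illustrative sizes for a roughly 20GB dataset on a dedicated box – tune to your own hardware:

```ini
# Illustrative my.cnf fragment for the MyISAM -> InnoDB shift.
[mysqld]
server-id               = 3        # must differ on every server in the chain
innodb_buffer_pool_size = 24G      # big enough to hold the whole working set
innodb_log_file_size    = 512M
key_buffer_size         = 64M      # shrink the now mostly-unused MyISAM cache
```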

This is very much an overview, and you should google for the exact commands to use, config file setups for my.cnf etc to suit your own situation. There will undoubtedly be some gotchas along the way that you have to look for (for example, after upgrading to 5.6, we had to recompile any code on that server that used MySQL as the v5.1 libraries had been removed).


And now for the gotchas, which I will list as I discover them:

  1. InnoDB tables do not support the ‘INSERT DELAYED’ command. This is probably because there’s no need for it, as there’s no table-level locking. However, rather than just ignoring the ‘DELAYED’ keyword, MySQL rejects the command. We used this for some of our logging, so have lost some log info following the change-over.
  2. Weird one, this. For storing some passwords, we use one-way encryption via the ‘encrypt’ function. This can optionally be supplied with a ‘salt’ of 2 characters. We supply 2 or more characters based on the account number of the password being encrypted. It would seem that with MySQL 5.1, if you supply more than 2 characters, it switches from simple MD5 encryption to SHA512. With MySQL 5.6, it ignores anything but the first two characters and sticks with MD5. To work around this problem, we have to call the function twice, once with the 2-character account number, and once with the full account number preceded by “$6$”. Told you it was weird!
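Both gotchas can be shown in a couple of lines (the table name and account values are made up):

```sql
-- Gotcha 1: INSERT DELAYED is rejected outright on InnoDB tables in 5.6,
-- so the keyword simply has to go:
-- INSERT DELAYED INTO maillog (account, event) VALUES ('ab123456', 'queued');  -- fails
INSERT INTO maillog (account, event) VALUES ('ab123456', 'queued');

-- Gotcha 2: ENCRYPT() hands the salt to the OS crypt() function. A salt
-- prefixed with '$6$' explicitly selects SHA-512, giving consistent
-- results across versions; a bare 2-character salt does not.
SELECT ENCRYPT('secret', 'ab');                        -- 2-character salt
SELECT ENCRYPT('secret', CONCAT('$6$', 'ab123456'));   -- explicit SHA-512 salt
```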

Service Alert: Issue with Control Panel Quarantined Email Viewing

We have an issue with this, which we are currently investigating. While we fix this, there are other ways to view the Quarantined email – simplest is to use Webmail to view the Spam folder.

EDIT: Now resolved. Quarantine folders can now be viewed as normal in the Control Panel. This was a minor code issue due to a service update yesterday. Apologies for any inconvenience.

Email Anti-Virus Filtering Upgrade

Today we have introduced a further level of protection to our anti-virus service. Using our control panel, a third type of protection can be selected that blocks any incoming or outgoing emails containing Visual Basic for Applications (VBA) macros. These are typically found in Microsoft Office files such as .doc, .docx, .xls and .xlsx. Macros are small blocks of code embedded in the document to help with automation. However, they can also be used as ‘payloads’ for transmitting malware and viruses.

To recap, we now offer three levels of settings: Standard Anti-Virus, Advanced (which includes phishing attack detection) and now Advanced with VBA Macro blocking. To use this new setting, users can select it from the ‘Protection’ section of their mailbox Control Panel.

This new feature is part of our ongoing commitment to improving our services and is provided to all customers at no additional charge.

Service Alert: Mail SSD Backend Announcement

We’re pleased to announce that, following months of behind-the-scenes work, all Mailcore and wholesale email mailboxes are now stored on SSD, using the most advanced filesystem in the world, ZFS. This gives us numerous advantages, including faster performance, improved reliability and lower energy consumption. ZFS gives us instant snapshots and guarantees against filesystem corruption, as well as adding even more performance due to its advanced memory caching scheme.

Service Alert: Notification of SQL Works

On Thursday 28th July, starting at approx 10pm, we will begin the final stage of a central database server upgrade. We will be upgrading our MySQL servers from v 5.1 to v 5.6, and switching from the MyISAM database format to InnoDB. This will give us some speed and reliability advantages. We’ve found a way of doing this with minimal disruption (expecting two sub-10-second outages, if that), but this is a core part of our system, so there is a risk of more significant problems if things don’t go according to plan, along with the possibility of some minor issues elsewhere due to the change in database format.