Service Alert: Server upgrade – zimbra004 – tomorrow (Wed 17th)

Reminder

We will be upgrading our zimbra004 server to Zimbra 8.7 tomorrow night, with work commencing at 10pm.

We will endeavour to minimise any disruption to service, and any inbound email received whilst the server is offline will be queued until it is live again.

Service Alert: Zimbra Upgrades

Important – Zimbra Upgrades

As part of our ongoing programme of service enhancements, we will be performing upgrade work on our Zimbra servers during the month of January.

This work will be undertaken outside of normal working hours (in the evening) and will involve a period of downtime for each server.

We will endeavour to minimise any disruption to service and any inbound email received whilst each server is offline will be queued until the relevant server is live again.

The dates will be as follows – we will be sending reminders beforehand.

Wednesday 17th Jan: Zimbra004 – upgrade to Zimbra 8.7

Wednesday 24th Jan: Zimbra003 – upgrade to Zimbra 8.7

Wednesday 31st Jan: Zimbra002 – upgrade to Zimbra 8.7

Whilst Zimbra 8.8 has just been released, we want to monitor it over the coming months to ensure stability. Once satisfied that it is reliable, we will upgrade to that version.

We haven't forgotten Zimbra001 and will contact customers on this server about moving users in the New Year.

If you have any questions please let us know.

Service Alert: CEO Spam

We’ve seen an influx of what is known as ‘CEO Spam’, where an email is supposedly sent from the boss’ iPhone asking the recipient to urgently send some money to a bank account.

Please be on the alert for this, and help your users not to fall for what is quite a clever scam.

Meanwhile, we are just in the process of adding a rule to our anti-spam software to hopefully block all of these – this will only work if your users are using our anti-spam system, of course!

Service Alert: Password Security

Password Security

We are seeing a significant rise in customers' email accounts being used to distribute spam after their credentials have been compromised.

In many cases this has been purely down to simple, fairly obvious, passwords being used. Would you believe that we have 276 mailboxes that use ‘password’ as their password? Or 73 with the password 123456?

It's important for all our customers that we minimise the potential for our mail servers to become blacklisted. So, where patterns of outbound sending indicate a compromised mailbox distributing spam, we will block the account from sending any further email until the password has been changed and, if required, a virus scan has been performed on the end user's equipment.

This helps mitigate the risk of blacklisting, but isn't perfect by any means as it's reactive in nature.

Whilst we can’t improve on the way we identify compromised mailboxes, we can improve the tools we give Partners to re-enable outbound SMTP immediately following a block.

Currently we send out an alert when an account is locked and rely on you contacting us to re-enable it. On top of that, any blocking we've done has been at account level rather than individual mailbox level, so one compromised mailbox can lead to outbound email being blocked for the whole account.

As of Tuesday 27th June we are implementing a new process for dealing with compromised accounts:

  • Blocks can now be applied at individual mailbox level, rather than account level. This means that only the affected mailbox will be restricted from sending, rather than all users on that account.
  • Our systems will now automatically unlock any affected mailboxes once the user, or administrator, has changed the password.
  • Next Tuesday all mailboxes with passwords we deem to be easily compromised (for example, using part of the email address) will be blocked from sending outbound email until their password has been changed. This will only affect a relatively small proportion of our overall customer base, but needs to be implemented as the issue of compromised email boxes is on the rise.

We very much hope you'll welcome the changes we've made which, as well as giving customers more control in the event their email credentials are compromised, should encourage everyone to think a little more seriously about the security of their email!

Service Alert: Zimbra Server Upgrades

Starting on Mon 13th Feb, we will begin a series of operating system upgrades on our Zimbra servers, moving them onto the latest version of Linux in preparation for upgrading Zimbra itself to the latest version (8.7.1); those upgrades will follow a couple of weeks later.

This will, unfortunately, involve some downtime for each server and is not without its risks.

We anticipate that each server will be down for a period of up to 2 hours, but we are hoping to keep it to 30 minutes.

The planned schedule is as follows:

  • Mon 13th, 22:00 – zimbra004
  • Wed 15th, 22:00 – zimbra003
  • Thu 16th, 22:00 – zimbra002

As part of this work, we will be removing support for some weak or obsolete SSL ciphers to improve security levels.

This could affect some users still running old web browsers.
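
For the curious, the practical effect of dropping weak ciphers can be previewed with the openssl command-line tool. The cipher string below is purely illustrative of the kind of policy involved, not the exact list our servers will use:

  # List the ciphers that remain once weak/obsolete ones (export, anonymous, RC4, 3DES, MD5) are excluded
  openssl ciphers -v 'HIGH:!aNULL:!eNULL:!EXPORT:!RC4:!3DES:!MD5'

Browsers that can only negotiate the excluded ciphers will no longer be able to connect.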

Please accept our apologies for any inconvenience caused by this.

Using MessageBunker with Gmail

Gmail archiving

A requirement of MessageBunker is that we store the user's credentials, encrypted of course, so that we can access their mailbox for archiving. Wherever possible, we look for alternative solutions that allow for improved security and privacy. Gmail offers one such solution.

With the end user's permission, MessageBunker can be granted a token by Gmail, which needs refreshing hourly and can be used to log in. The process is known as 'OAuth' and is fairly common on the web, but this is the only implementation for IMAP that we are aware of.

When a user creates a new archive on MessageBunker, we send them to Google to give us permission to access their Gmail. We then get a confirmation back and a token we can use to log in.

The user does not need to reveal their password and can revoke our access to their Gmail account at any time, without the need to change their password, or alter any other systems that are accessing the account.
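
For anyone curious about the mechanics, the sketch below shows roughly how such a token is kept fresh and then presented to Gmail's IMAP server. It is a simplified illustration using Google's standard OAuth 2.0 token endpoint, not MessageBunker's actual code, and the client ID, secret and refresh token are placeholders:

  # Exchange the long-lived refresh token for a short-lived access token (roughly hourly)
  curl -s https://oauth2.googleapis.com/token \
    -d client_id=YOUR_CLIENT_ID \
    -d client_secret=YOUR_CLIENT_SECRET \
    -d refresh_token=YOUR_REFRESH_TOKEN \
    -d grant_type=refresh_token
  # The JSON response contains an "access_token", which is used to build the SASL XOAUTH2
  # string for the IMAP AUTHENTICATE command:
  #   base64("user=" + email + "\x01auth=Bearer " + access_token + "\x01\x01")

If the user revokes access from their Google account settings, the refresh token simply stops working; at no point does MessageBunker hold their password.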

Old MessageBunker is dead, long live the new one!

MessageBunker

Following on from our announcement at the end of September regarding the new MessageBunker interface and initial running of old and new versions in parallel, we can announce that, as of today, the old site has now gone – to be completely replaced by the new one.

Take a look at messagebunker.com

MessageBunker gets a major upgrade!

The development team have been beavering away over some hot code again and we’re delighted to announce the launch of a new and improved MessageBunker email archiving, compliance and discovery service.

The updated service provides a brand new web interface, with statistics/graphs, lots of new help files, enhanced search facilities and a host of other improvements. In addition, there is an all-new mobile site for smartphones that can be set up as a web app too.

To really experience the improvements we’ve made please take a look at new.messagebunker.com using your existing MessageBunker login credentials.

Not using MessageBunker already? Take a look here for peace-of-mind!

MailCore Pro Price Reduction!

For customers using MailCore Pro we are delighted to announce that, with effect from your first billing date on or after 1st October 2016, we are reducing the price of the full MailCore Pro feature set to that of the 'Basic' version – only £1 per mailbox, per month. Any 'Basic' mailboxes will automatically be upgraded to the full MailCore Pro specification, giving the option to use the collaboration tools if required.

In addition – and we know this is a ‘biggie’ for many of you – with recent upgrades to our CalDAV integration we now have a seamless Calendar integration with Windows 10!

Please note that it will take a while to sort out and merge the back-end systems, so, for a short period, there will still be a separate 'Basic' version; this will then go away and all mailboxes will be upgraded automatically.

Under the hood: Upgrading MySQL 5.1 -> 5.6 with zero downtime

Downtime is not an option for us. We might get away with a minute or two in the middle of the night, but running an email service means millions of emails a day at all times of day.

So, after years of resisting, I was finally lured by the charms of the latest and greatest MySQL – well, almost. After wasting a day of my life trying to import my data into MySQL 5.7, I switched to 5.6 and all worked like a dream. So, having run with MyISAM tables all this time, why switch to InnoDB? Here are the reasons for us:

  1. Performance. In our testing with a particularly complex query that examines lots of rows, v 5.1 with MyISAM took 1 minute, 2 seconds. v 5.6 with InnoDB took 39 seconds the first time, and 28 seconds all subsequent times. This is probably related to point 2.
  2. InnoDB caches all data in memory, not just indexes. Our dataset is around 20GB in size, so this all fits nicely in memory, giving a speed boost, and reducing disk accesses (always a good thing).
  3. Row-level locking. This is a biggie. MyISAM has always been a bit of a problem in this regard, and in unexpected ways. Perform the query in point 1, which is just reading from the databases, and you get a bit of an unexpected problem. Because this query is looking at logging tables that are written to frequently, as soon as a write comes through (which would be within 0.1 seconds in our situation), the write blocks and, more importantly, any subsequent reads block waiting for that write. Before you know it, rather than 100 connections to the database, you suddenly have 1,000 and emails start queueing on our system. With InnoDB this problem completely goes away.
  4. Crash recovery. Although we have 100% power uptime, dual feeds etc in our data centre, it’s nice to know that if the worst does happen, the databases won’t get corrupted, so no lengthy repair times or risk of data loss.

So, how do we get from here to there without a lot of pain in the middle? The answer is authoritative DNS servers and a spare MySQL server. All servers that use this database must access it via a name (we use 'sql' for the main read-write database and 'sql-ro' for the replication slave). In our situation, they query our caching DNS servers, which are also authoritative for our domain, so if I make a change it happens instantly for all our servers.
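
As a rough illustration (the zone syntax, names and addresses here are made up for the example), the records behind this look something like the following; because our caching resolvers are themselves authoritative for the zone, a change to either record takes effect on the next lookup, whatever the TTL:

  ; BIND-style zone fragment: 'sql' is the read-write master, 'sql-ro' the replication slave
  sql      60  IN  A  192.0.2.11   ; currently sql001
  sql-ro   60  IN  A  192.0.2.12   ; currently sql002

Repointing a name is then a one-line change on the DNS server, although long-lived client connections still need to be forced to reconnect (which is why sql001 gets shut down later in the process).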

The process then goes like this. Existing live servers are called sql001 (master) and sql002 (slave). Our level of traffic is such that one server can cope with the full load, particularly at off-peak times, so we will use this to help us.

  1. Point sql-ro at sql001. This removes sql002 from being live.
  2. Use mysqldump to take a copy of the database on sql002.
  3. Set up new server, sql003, with MySQL 5.6 and restore the mysqldump data, with sql003 now also being a slave of sql001.
  4. Alter all tables to be InnoDB.
  5. Have a good look through my.cnf and set it up so that you have plenty of memory for InnoDB caching instead of MyISAM caching (a rough my.cnf sketch appears below).
  6. Test, test, test. And once happy, point sql-ro at sql003 to make sure all runs well on that server.
  7. Upgrade sql002 to MySQL 5.6 and set it to slave off sql003. To do this, you'll need to use mysqldump on sql003 with --single-transaction (which only works for InnoDB but allows you to do it without locking the tables); see the command sketch after this list.
  8. Now it’s time to do the dangerous stuff. Switch the DNS for sql to point to sql003. As soon as this is done, shut down the MySQL server on sql001. In our case, we quickly got conflicts on sql003 due to logging table entries. We’re not too worried about this, but best to switch off sql001 so all clients are forced to reconnect – to the correct server.
  9. Now that everything is pointing at sql003, it's time to upgrade sql001 to 5.6.
  10. Shut down the server on sql002 and copy the database contents to sql001. Also copy my.cnf, and don't forget to alter the server-id. Remove auto.cnf from /var/db/mysql on sql001 before starting, or MySQL will think it has two slaves with the same ID.
  11. Once sql001 is running OK, use 'mysqldump --single-transaction --master-data' from sql001 to set up sql002 as a slave of sql001. So now we have sql003 as master, with sql001 as a slave of sql003 and sql002 as a slave of sql001.
  12. Change sql and sql-ro to point at sql001 and switch off sql003 server.
  13. Double-check sql002 and then point sql-ro at it. Job done.
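
To make the steps above a little more concrete, the key commands were roughly of the following shape. Treat this as a sketch rather than a recipe: the host names, user names, file names and binlog co-ordinates are placeholders, and the exact options will depend on your own setup.

  # Steps 2-3: dump on sql002 and load into sql003
  mysqldump --all-databases --master-data=2 > dump.sql
  mysql < dump.sql                                # run on sql003

  # Step 3: make sql003 a slave of sql001 (use the binlog co-ordinates sql002 had
  # reached at dump time, e.g. from SHOW SLAVE STATUS)
  mysql -e "CHANGE MASTER TO MASTER_HOST='sql001', MASTER_USER='repl',
            MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000123',
            MASTER_LOG_POS=4; START SLAVE;"

  # Step 4: convert each table to InnoDB (repeat per table, or generate the statements)
  mysql -e "ALTER TABLE mydb.some_table ENGINE=InnoDB;"

  # Steps 7 and 11: once the source is InnoDB, take a consistent dump without locking the tables
  mysqldump --all-databases --single-transaction --master-data > dump.sql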

This is very much an overview, and you should google for the exact commands to use, config file setups for my.cnf and so on to suit your own situation. There will undoubtedly be some gotchas along the way that you have to look out for (for example, after upgrading to 5.6, we had to recompile any code on that server that used MySQL, as the v5.1 libraries had been removed).
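
On the my.cnf side, the main change for us was shifting memory from MyISAM's key buffer to the InnoDB buffer pool. The settings and figures below are illustrative only (sized for a server where a roughly 20GB dataset should fit in memory), not our production config:

  [mysqld]
  server-id               = 3        # must be unique for each server in the replication chain
  innodb_buffer_pool_size = 24G      # large enough to hold the working set in memory
  innodb_log_file_size    = 512M     # larger redo logs help write-heavy workloads
  innodb_flush_method     = O_DIRECT # avoid double-buffering data in the OS page cache
  key_buffer_size         = 32M      # shrink the now largely redundant MyISAM key cache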

 

And now for the gotchas, which I will list as I discover them:

  1. InnoDB tables do not support the 'INSERT DELAYED' command. This is probably because there's no need for it, as there's no table-level locking. However, rather than just ignoring the 'DELAYED' keyword, MySQL rejects the command. We used this for some of our logging, so we have lost some log info following the change-over.
  2. Weird one, this. For storing some passwords, we use some one-way encryption via the 'encrypt' function. This can optionally be supplied with a 'salt' of 2 characters. We supply 2 or more characters based on the account number of the password being encrypted. It would seem that with MySQL 5.1, if you supply more than 2 characters, it switches from a simple MD5 encryption to SHA512. With MySQL 5.6, it ignores anything but the first two characters and sticks with MD5. To work around this problem, we have to call the function twice: once with the 2-character account number, and once with the full account number preceded by '$6$' (sketched below). Told you it was weird!
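
To illustrate that second gotcha, here is roughly what the two calls look like; the password and account numbers shown are placeholders rather than real data:

  -- First call: just the 2-character account number as the salt
  SELECT ENCRYPT('entered-password', '42') AS short_salt_hash;

  -- Second call: the full account number as the salt, preceded by '$6$' to get the SHA-512 behaviour back
  SELECT ENCRYPT('entered-password', CONCAT('$6$', '421734')) AS long_salt_hash;

A stored hash can then be checked against whichever of the two forms it was originally created with.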