Springtime is all about getting rid of the old and welcoming the new. We at Vip Consult are fond of following the natural cycles, so we decided to update our server infrastructure. We completed a full overhaul, and here are the most interesting insights from this challenging project.
What we wanted to achieve:
- Ease the upgrade process
- Test any changes we perform locally and apply them to the production server
- Update the software while allowing legacy apps to run on the server
- Utilize the existing backup system
- Make greater use of the hardware
- Speed up disaster recovery
Setting up a new server usually takes days, sometimes even weeks, which makes keeping track of all the changes we perform a time-consuming and, frankly, troublesome job. That is why rebuilding a failed server is something you really want to avoid. It is also common for a change to the server’s setup, or an upgrade of a specific application, to interfere with software already installed on the server. We were well aware of this and very cautious about it, to make sure the process went as smoothly as possible.
To protect the system, any responsible server administrator tests changes on a development server first and only then applies them to the production server. The problem with this approach is that the development server is never quite the same as the production server, so something that works on the former does not necessarily work on the latter.
Our team of dedicated professionals is always eager to find alternative ways to solve a problem, and after a few days of thorough research we found the perfect solution: “Docker”, which lets you run applications in isolated containers that can be easily transferred between machines. Docker can run many containers simultaneously, but the general practice is to run no more than one or two applications in each container. This is as far as we will go in explaining how “Docker” works; if you want to read more about it, follow the link and dive into the future of server deployment and administration: www.docker.com
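For those who haven’t used “Docker” before, here is a minimal sketch of what running an application in an isolated container looks like; the image name and port mapping are illustrative, not part of our actual setup:

```bash
# Pull an image and run it as a named, detached container.
docker pull nginx
docker run -d --name web -p 80:80 nginx

# The container has its own filesystem and process space, so it can
# be stopped, removed, or recreated without touching the host.
docker ps          # list running containers
docker stop web    # stop the container
docker rm web      # remove it completely
```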
The first big challenge we faced was making all the containers work together as one complete server by linking them properly. An application running in one container often depends on a service running in another container, so all containers must be linked correctly. Manually starting and stopping each container is obviously not the smart way to go, so we tried a tool called “Fig”, but soon enough we realized that it did not work the way we expected. After more research, we came up with a rather simple idea that worked perfectly for our case: a shell script. It restarts all containers, or only a chosen one when you pass the container’s name as an additional parameter.
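Our real start scripts are on “GitHub” (linked at the end of this post), but a simplified sketch of the idea looks like this; the container names and “docker run” options below are placeholders, not our production configuration:

```bash
#!/bin/bash
# Restart every container in dependency order, or only the one whose
# name is passed as the first argument. Names and options are placeholders.

start_container() {
    case "$1" in
        mysql) docker run -d --name mysql -v /srv/mysql:/var/lib/mysql mysql ;;
        php)   docker run -d --name php --link mysql:mysql php:5.6-fpm ;;
        nginx) docker run -d --name nginx --link php:php -p 80:80 nginx ;;
    esac
}

restart_container() {
    docker stop "$1" >/dev/null 2>&1
    docker rm   "$1" >/dev/null 2>&1
    start_container "$1"
}

if [ -n "$1" ]; then
    restart_container "$1"            # restart only the chosen container
else
    for c in mysql php nginx; do      # restart all, dependencies first
        restart_container "$c"
    done
fi
```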
The “Docker” team has now started developing a new tool called “Compose”, which solves exactly this problem, and we are looking forward to using it.
The most important part of the hosting, though, was the actual HTTP server. We use “nginx” because of its high performance, and this time we compiled it with an additional performance-boosting module developed by Google, which offers advanced caching and improves page loads by up to 40%. See here: https://developers.google.com/speed/pagespeed/module.
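Because the module has to be compiled into nginx, we built it from source. A rough sketch of the build steps, with version numbers and download URLs as placeholders (check the module’s page for current releases):

```bash
# Versions are placeholders; newer releases will differ.
NPS_VERSION=1.9.32.3
NGINX_VERSION=1.8.0

# Fetch ngx_pagespeed and its precompiled PSOL library
wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
unzip release-${NPS_VERSION}-beta.zip
cd ngx_pagespeed-release-${NPS_VERSION}-beta/
wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
tar -xzf ${NPS_VERSION}.tar.gz    # extracts to psol/
cd ..

# Build nginx with the module compiled in
wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar -xzf nginx-${NGINX_VERSION}.tar.gz
cd nginx-${NGINX_VERSION}/
./configure --add-module=../ngx_pagespeed-release-${NPS_VERSION}-beta
make && make install
```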
The next step was to install PHP, and this is where the beauty of “Docker’s” design is revealed. Since you can run any application in an isolated container, you can run different versions of PHP on the same server. With any other setup this would have been a nightmare to achieve, but with “Docker” it was an easy task.
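For example, a legacy site can keep running on an older PHP-FPM container while newer sites use a current version, each container listening on its own port. The image tags, ports, and paths below are illustrative, not our production values:

```bash
# Two PHP-FPM versions side by side; nginx then proxies each site to
# the matching FastCGI port. Tags, ports, and paths are illustrative.
docker run -d --name php54 -p 9054:9000 -v /srv/legacy:/var/www php:5.4-fpm
docker run -d --name php56 -p 9056:9000 -v /srv/sites:/var/www php:5.6-fpm
```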
The last step in this venture was to adjust our backup system to accommodate the new “Docker” setup. We run a dual backup setup – hardware and software. The hardware backup didn’t need any changes; the software setup did, but only minor ones.
We use “BackupPC” for our incremental backups. Handling the database server backups required some additional preparation, as it is not possible to back up the database files directly. Instead, you need to use the database clients to run a full dump of all databases on each server. Even though we already had a shell script for this, it took some extra effort to adjust it so that it runs properly on the new “Docker” setup.
You can see the live setup in our “GitHub” repository here: https://github.com/vipconsult/backuppc_scripts
The script runs a full dump of both database servers – MySQL and PostgreSQL – and sends a warning message by email if it detects a big change compared to the day before. This allows us to be warned of any database corruption and react before it is too late. Additionally, the script deletes older backups based on a threshold set in the script.
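The real scripts live in the repository above, but a simplified sketch of the idea – with paths, the threshold, and the alert address as placeholders – could look like this (the PostgreSQL dump via pg_dumpall follows the same pattern):

```bash
#!/bin/bash
# Nightly MySQL dump with a size-change warning and retention cleanup.
# Paths, threshold, and addresses are placeholders.
BACKUP_DIR=/srv/backups
TODAY=$BACKUP_DIR/mysql-$(date +%F).sql.gz
YESTERDAY=$BACKUP_DIR/mysql-$(date -d yesterday +%F).sql.gz

# Run the dump through the client inside the database container
docker exec mysql sh -c 'mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
    | gzip > "$TODAY"

# Email a warning if the dump size changed noticeably since yesterday
if [ -f "$YESTERDAY" ]; then
    old=$(stat -c %s "$YESTERDAY")
    new=$(stat -c %s "$TODAY")
    change=$(( new > old ? new - old : old - new ))
    if [ "$old" -gt 0 ] && [ $(( change * 100 / old )) -gt 20 ]; then
        echo "MySQL dump size changed by more than 20%" \
            | mail -s "Database backup warning" admin@example.com
    fi
fi

# Remove dumps older than the retention threshold (days)
find "$BACKUP_DIR" -name 'mysql-*.sql.gz' -mtime +14 -delete
```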
Along the way we solved quite a few bugs in containers maintained by third-party developers, and we were very happy to contribute those fixes on “GitHub”. Check this link if you want to know more about our final setup – it is our “GitHub” account, where you can see all the containers and start scripts: https://github.com/vipconsult/dockerfiles
Side note: since “Docker” allows us to run apps without conflicts, we could reduce the overall number of virtual servers and utilize the raw hardware power more efficiently.
This was not the end of the overhaul, but to put things in a nutshell, here is a brief summary of the full setup:
PHP – multiple versions running side by side to offer legacy support for some outdated websites
ProFTPD – with SSL-encrypted connections
Nginx – compiled with Google’s PageSpeed module for better performance
MySQL and PostgreSQL – the database servers
SimpleHelp – our remote support server
Exim – sends email from the PHP container without using an external SMTP server
logrotate – automated log management for the containers
fs – our main SIP phone server system
There is still a lot of work to be done, but the hardest part is already behind us. The next step is to make the container images universal and portable, so that we can spin them up wherever needed and handle the load during traffic peaks. Overall this was a great project that taught us a lot, and we are ready to apply that knowledge to our future projects.
Stay tuned for more insights from Vip Consult!