Node.js Lambda and MongoDB Connections Analysis

The problem

For those who might not be familiar with the problem of database connection management in a serverless environment, let’s explain it briefly.

Code used in this article can be found at:  

With Lambda functions (or any FaaS in general), unfortunately, things can get a little tricky when you have a function that works with a “traditional” database, like for example MySQL or MongoDB.

Basically, every time a Lambda function is invoked for the first time (or a new function instance is created because of a concurrent invocation), we need to establish a new connection to our database. While there is nothing wrong with that, the problem arises when we have to close it.

As some of you may know, after a period of inactivity, Lambda functions get destroyed and when that happens, the database connection, unfortunately, doesn’t get closed, and basically enters the “zombie” mode. In other words, the connection is still there, but no one is using it.

And over time, as this scenario repeats, the number of these “zombie” connections may significantly rise, to the point that you can reach the limit for maximum established connections defined on your database server. Because the database can basically become inaccessible at that point, this can obviously create serious problems for your app.

The above diagram shows multiple instances of a single Lambda function. But in real life, there will be more than one Lambda function in the mix, and that’s when it gets even more worrying.

Any known solutions?

Although there are a few solutions for some databases today (e.g. AWS Serverless Aurora’s Data API, or the recently announced RDS Proxy for AWS’s relational databases), MongoDB Atlas, the managed MongoDB hosting service we currently rely on, doesn’t offer a similar solution. They do list some best practices in this article, but it’s an old article, and it doesn’t really solve the problem.

More recently, there is also Amazon DocumentDB (with MongoDB compatibility).

Introducing DB Proxy Lambda function

Enter the “DB Proxy” Lambda function — a Lambda function that serves as a database connection proxy (as you can probably tell by its name). In other words, functions that need to talk to the database won’t establish their own database connections anymore; instead, they will invoke the DB Proxy Lambda function whenever a database query needs to be made, with all of the query params sent via the invocation payload. Once invoked, the DB Proxy Lambda function will run the query using the standard MongoDB Node.js driver and, finally, respond with the query results.
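To make the contract concrete, here is a minimal sketch of the payload an app function might send to the DB Proxy function, and how the proxy might unpack it before running the query with the MongoDB Node.js driver. The field names (collection, operation, args) are illustrative assumptions, not the actual wire format the article’s code uses.

```javascript
// Build the invocation payload an app function would send to the
// DB Proxy Lambda function. The shape here is an assumption.
function buildProxyPayload(collection, operation, ...args) {
  return JSON.stringify({ collection, operation, args });
}

// Unpack it on the DB Proxy side before running the query, e.g.:
//   const { collection, operation, args } = parseProxyPayload(event);
//   const result = await db.collection(collection)[operation](...args);
function parseProxyPayload(raw) {
  const { collection, operation, args } = JSON.parse(raw);
  return { collection, operation, args };
}
```

Keeping the payload a plain JSON contract means the app functions stay driver-free; only the proxy function depends on the mongodb package.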

The following diagram basically shows how it all works together:

One thing to note here is that, as you might have noticed, zombie connections will still exist, because, at some point in time, DB Proxy Lambda function instances will still get destroyed, so the same problem repeats. We are aware of that, but our goal is not to bring the number of these zombie connections to zero, but to make it as small as possible.

First, we decided to see how all of this performs in terms of speed, because if it turns out that invoking another function to do a simple database query is slow, then there’s no point in further exploration of this idea, wouldn’t you agree?


When this idea came up, an immediate concern was that by invoking another Lambda function to execute database queries (instead of doing it like we always did, using the MongoDB Node.js driver), we would introduce significant latencies and thus negatively impact the overall performance.

But as it turned out, it’s not bad at all! Yes, there is additional latency if we hit a function cold start, in which case we pay for function initialization and establishing a new database connection, but other than that, for our needs, we considered the performance to be in the acceptable range.

Let’s check it out!

Database connections

So, after some testing, by checking the MongoDB Atlas dashboard, we’ve noticed that the total number of established database connections really did decrease.

More benefits

So, with the performance and database connection management results shown above, we considered this whole experiment successful, which is why we decided to keep this solution, so our users can utilize it and make their apps more reliable.

But that’s not all, actually. This approach added even more benefits beyond the database connection management problem we initially set out to solve. Let’s check it out!

Maximum number of connections can be defined

Since connections are now established only from the DB Proxy Lambda function, by utilizing the reserved concurrency (which lets us define the maximum number of concurrent function instances), we now have the ability to control the maximum number of connections that can be established.

For example, if we were to set the reserved concurrency to 100, this means we can have up to 100 concurrent DB Proxy Lambda function instances, which in other words means we can have up to 100 active connections at the same time. Pretty cool right?
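As a sketch, the reserved concurrency cap from the example above could be applied with the AWS CLI (the function name db-proxy is an assumption for illustration):

```shell
# Cap the DB Proxy function at 100 concurrent instances,
# which in turn caps active database connections at ~100.
aws lambda put-function-concurrency \
  --function-name db-proxy \
  --reserved-concurrent-executions 100
```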

But still, as mentioned, do note that the DB Proxy Lambda function instances will be destroyed at some point in time, which will again leave zombie connections behind. So the actual number of total open connections to the database server may be a bit higher than the one set as the function’s reserved concurrency.

Besides being destroyed due to inactivity, DB Proxy Lambda function instances are also destroyed upon deployments. For example, if you had 100 active DB Proxy Lambda function instances, redeploying the function would destroy all of these instances and, depending on the actual traffic, create 100 new ones. This means 100 new connection establishment requests! So keep that in mind if redeploying the DB Proxy Lambda function is necessary.

This also brings me to my next point…

Redeployments don’t create new connections

Occasionally, we might need to deploy several services, which might be comprised of several functions. If we were to deploy 20 new functions, and if all of them were talking to the database, that would basically mean we’ll get 20 zombie connections after the deployment has finished (assuming we have only one instance of each function), and establish 20 new ones. Things get even worse if you had to repeat the deployment once or twice.

With this approach, this problem basically disappears, because, unless we have to make some changes to the DB Proxy Lambda function, we don’t usually need to deploy it again, which means the same already established connections will be reused by the redeployed functions.

Smaller functions

Previously, all functions that needed to talk to the database had to include the mongodb package, which is actually 1.18MB in size. If you had a function that’s 5MB in size, that would actually represent 23.6% of the function’s total size! Massive percentage if you ask me.

Since functions don’t need to include the mongodb package anymore (it’s only included in the DB Proxy Lambda function, which is basically its only dependency), all functions are now lighter in total bundle size, which is also a cool benefit.

Increase of Lambda function invocations?

Before I wrap this up, I just wanted to quickly cover one more thing.

You might be asking yourself:

Wait, isn’t this approach going to increase the total amount of Lambda function invocations? Can this impact my monthly costs?


Yes, that’s true, and unfortunately, there is no way around it. Every database query is a new Lambda function invocation. That’s why, if you will be implementing something like this, try to estimate how many invocations you might have, and how it might affect your monthly cost.

The cool thing about the DB Proxy Lambda function is that it doesn’t require a lot of system resources. In fact, you should be just fine with a minimum of 128MB of RAM. And since every invocation should last less than 100ms, the first one million invocations are going to be free (if we’re not including the invocations of other Lambda functions that you might have).

The AWS Lambda free usage tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.

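As a rough sanity check of the free-tier claim, here is a small helper that estimates compute usage for a 128MB proxy function billed at 100ms per invocation, against the free-tier limits quoted above. The rounding to 100ms reflects Lambda billing at the time of writing; actual billing granularity may differ.

```javascript
// Estimate whether a given monthly invocation count of a 128MB,
// sub-100ms DB Proxy function stays inside the Lambda free tier
// (1M requests and 400,000 GB-seconds per month, as quoted above).
function freeTierHeadroom(invocationsPerMonth) {
  const gbSeconds = invocationsPerMonth * (128 / 1024) * 0.1;
  return {
    withinFreeRequests: invocationsPerMonth <= 1000000,
    withinFreeCompute: gbSeconds <= 400000,
    gbSeconds,
  };
}
```

At one million invocations this comes out at only 12,500 GB-seconds, well under the compute limit, which is why the request count is the figure to watch.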

Also, if you have functions that make several database queries, try, if possible, to fetch all of the needed data in a single invocation. Not only will this generate fewer function invocations, but it will also be faster.
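One way to sketch this batching idea is to pack several operations into a single proxy invocation and return the results in order. The payload shape and the runOne callback below are illustrative assumptions, not the article’s actual implementation:

```javascript
// Pack several operations into one DB Proxy invocation payload.
function buildBatchPayload(operations) {
  return JSON.stringify({ batch: operations });
}

// On the proxy side, run each operation in order and collect the
// results. runOne stands in for the driver call that executes a
// single operation.
async function runBatch(raw, runOne) {
  const { batch } = JSON.parse(raw);
  const results = [];
  for (const op of batch) {
    results.push(await runOne(op));
  }
  return results;
}
```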

All in all, this is definitely something to be mindful of, but for now, we didn’t find it to be a deal-breaker.


As seen, once we’ve implemented the shown DB Proxy Lambda function solution, we’ve noticed a significant reduction in the number of zombie connections. And not only that, but we’ve also gained some really nice features and optimizations along the way.

If you ask me, I think it would be super cool to see a more “official” solution to the database connection management problem from the MongoDB Atlas team, like for example something similar to the Data API that the AWS Serverless Aurora offers. I feel it’s kind of a shame that something like that still doesn’t exist, but I do hope they will come up with something in the near future. 🤞

We are aware that there are other serverless-first databases out there that don’t suffer from this problem. For example, the awesome DynamoDB (in fact, there’s an open issue already) or maybe even FaunaDB (not tried it yet, but heard good things). But for now, we’ve decided to rely on MongoDB as the go-to database, due to its popularity, and the fact that it can be used with every major cloud provider (it’s not a cloud-native database).

We will definitely keep a close eye on this issue, and keep monitoring the serverless space for new solutions that eventually may come up. And because listening to our community is one of our top priorities, if there’ll be more interest/demand, I can definitely see some changes happening in this segment.

Load testing experiment

We will run the same load test against two sets of Lambdas and see how they perform:

The first set of Lambdas, getNote and getNotes, will both connect to the database within each Lambda. The second set, getProxyNote and getProxyNotes, will both use a mongoDbProxy Lambda for DB access.

The results show a significant drop in DB connections when using the MongoDB proxy Lambda. The improvement would be even greater if additional Lambdas accessed data via this proxy too.

JMeter Profile


Results from not using a proxy MongoDB Lambda

Jmeter stats

MongoDB connections

AWS dashboard view

Results from using a proxy MongoDB Lambda

Jmeter stats

MongoDB connections


AWS dashboard view


GitHub – Travis CI – Heroku: CI/CD


  1. Create a GitHub repository;
  2. Setup a Rails application;
  3. Create an account on Travis CI and link it with your repository;
  4. Create an account on Heroku and link it with the repository;
  5. Start the heroku console from the terminal;
  6. Setup a travis yml file;
  7. Push it.


If you just want to know how to connect Travis CI and Heroku, jump to step 3, Create an account on Travis CI and link it with your repository.

1. Create a GitHub repository

Create a GitHub repository with your account and clone it using HTTPS or SSH; to keep this as simple as possible I’ll pick HTTPS:

$ git clone

After cloning, enter the directory using:

$ cd rails-test-app

2. Setup a Rails application

To initiate the Rails application you need Ruby and Rails already installed on your computer; for this, I highly recommend using RVM.

You’ll also need a database to run your app, so install PostgreSQL. Follow the instructions on the official website, picking the option for your OS.

Now you can run the following commands in your terminal:

First install bundler:

$ gem install bundler

Install Rails:

$ gem install rails

Finally, create a Rails app with PostgreSQL using the name of the current folder:

$ rails new --database=postgresql .



Ok, it’s done, after all that code running down by your screen, your terminal is available again!

Just to check that everything is working, type the following command to start the Rails server and check localhost:3000 in your browser.

$ rails server

If it’s the same screen as below, great! Keep going; if it isn’t, go back over the steps you did until now.

Send the created app to GitHub, commit and push to master:

$ git add .
$ git commit -m "Add initial structure"
$ git push origin master

3. Create an account on Travis CI and link it with your repository

To do that, go to the Travis CI website; to make it easier, create your account using your GitHub account.

You will see the Authorize screen asking you to accept and link Travis CI with your GitHub account.

Inside your account, on the Accounts option, click the switch beside the repository name, as in the gif below.

Check the repo again going to Settings and Integrations & services. Oh, can you see Travis CI over there?

4. Create an account on Heroku and link with the repository

Create a new account on Heroku; you’ll need to confirm your email and all that stuff. When you have finished, click on “Create New App”.

Inside your application, go to “Deploy”, search for “Deployment method” and then select the GitHub option.

A wild window appears, “Authorize Heroku”, accept it, and to complete the connection just search by the name of the repository and connect!

On the Deploy tab, in the “Automatic deploys” section, don’t forget to check the “Wait for CI to pass before deploy” option and enable Automatic Deploys:

Back to your Settings page on GitHub Repository, on the Webhooks, guess who is there?

5. Start the Heroku console from the terminal

Now that you have Heroku linked with the repo, install the Heroku CLI and log in with your new account.

To install the Heroku CLI just follow the official documentation, if you are using a Mac OSX like me, just use:

$ brew install heroku/brew/heroku

Now with heroku installed, enter with your account:

$ heroku login 

Enter your email and password and you should see this message:

“Logged in as”

Create a remote reference to your repo:

$ heroku git:remote -a rails-test-app-article

Done! Now you can push directly to Heroku and deploy your app from the terminal; however, in the last steps we will learn how to put all these things together!

6. Setup a travis yml file

Let’s start by creating the file:

$ touch .travis.yml

Now open the created file and paste this code:

language: ruby
cache: bundler
before_script:
  - bundle exec rake db:create
  - bundle exec rake db:migrate
  - bundle exec rake assets:precompile
deploy:
  provider: heroku
  api_key:
    secure: KEY
  app: rails-test-app-article
  repo: felipeluizsoares/rails-test-app

In this yml file I’m defining:

  • The language, so Travis CI knows how to run my code;
  • What I want to cache — in this case the bundler (in a Node.js example it would be node_modules);
  • The scripts to run before the build script itself: creating the DB, running the migrations and pre-compiling the assets;
  • A deploy task to run on Heroku; for that we need the API key (we don’t have it yet), the name of the app on Heroku (rails-test-app-article) and the name of the repo on GitHub (felipeluizsoares/rails-test-app)

To get the API key from Heroku, you just run a command in the terminal; however, this key needs to be kept secret, so we should encrypt it before putting it in the file.

Install the Travis CI gem to be able to use the encrypt from Travis:

$ gem install travis

Now, run this command, which will invoke the encryption and pass in the Heroku API key:

$ travis encrypt $(heroku auth:token)

You will probably see this message: “Detected repository as yourname/reponame, is this correct? |yes|”

Answer with yes and 🎉 🎉 you have your API KEY!

So replace KEY on the secure: line of the yml file with your key.

7. Push it

In this last step, commit the yml file and push it to a new branch of the repo.

$ git checkout -b add-travis-yml-file
$ git add .
$ git commit -m "Add travis yml file"
$ git push -u origin add-travis-yml-file

Inside your repository on GitHub, open a new PR from your new branch targeting master, and the CI will run there.

When you merge the PR, go to your Heroku dashboard and check the latest activity.

Now every time you open a PR, Travis will run the tests, and when the PR is merged, Heroku will deploy it automatically!


Use this power!

Now you can apply this knowledge to your stack, and every time someone opens a PR on the project you are working on, check that the tests pass before merging. You can block merges when the CI check hasn’t passed, and with these additional steps you are protecting your codebase.

Make it easier!

Forget about deploying to a development environment every time just to check if something is working or to show a feature to another developer; let Heroku do that for you! When your stack gets more consistent, you can apply the same automatic deploys to staging and production; read more about it in the Heroku Pipelines documentation!

I hope you have learned something from this; let me know if you have any questions in the comments 🙂

Free SSL certs

No more excuses for HTTP traffic websites.

Big corps (including PayPal, Google and, most recently, WordPress) have announced that they will require hosts to have SSL (or HTTPS) available for certain services, APIs, webhooks and OAuth.
First of all, I assume your site loads fine over plain HTTP and that you are on a private network (meaning you are the only owner of the IP you are using).

Install the certbot client

Go to this website and simply select your operating system and your web server. Follow the steps to install certbot-auto. In my case, I used the following couple of lines.

chmod a+x certbot-auto

Next you need a very simple config.ini file; I put mine under /etc/letsencrypt/config.ini, and it includes the following. Don’t forget to change “” to your email address.
rsa-key-size = 4096
email =
Our certificate client is now ready; this will allow us to install and renew certificates.

Create the SSL certificate

Go to the directory where you installed your certbot-auto client and simply run the following commands. Don’t forget to change the domain names to your own (and, of course, the directories of the files).

certbot-auto certonly --webroot -w /var/www/html/domain1 -d -d -w /var/www/html/domain1/sub -d --config /etc/letsencrypt/config.ini --agree-tos --keep
certbot-auto certonly --webroot -w /var/www/html/domain2 -d -d -w /var/www/html/domain2/sub -d --config /etc/letsencrypt/config.ini --agree-tos --keep

You can run the commands above for your other domains/subdomains similarly.

If everything goes smoothly (hopefully it will), it will generate the certificate files under /etc/letsencrypt/live/ and /etc/letsencrypt/live/; we will use them in the next step.

Attach them to your domain

Now edit your ssl configuration file at /etc/httpd/conf.d/ssl.conf. And copy the below code for each domain/subdomain.

<VirtualHost *:443>
    DocumentRoot "/var/www/html/domain1"
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/
    SSLCertificateKeyFile /etc/letsencrypt/live/
    SSLCertificateChainFile /etc/letsencrypt/live/
    SSLProtocol All -SSLv2 -SSLv3
    SSLHonorCipherOrder on
</VirtualHost>

Let’s automate it to renew after 90 days

Certificates generated by Let’s Encrypt are valid for only 90 days. You need to renew the certificate before it expires so there is no downtime in your HTTPS traffic. I use crontab for this, with the code below.

0 0 1 * * /var/www/
And my renewal script looks like this…
# Renew Let's Encrypt SSL cert
/opt/letsencrypt/letsencrypt-auto renew --config /etc/letsencrypt/config.ini --agree-tos

if [ $? -ne 0 ]; then
        ERRORLOG=`tail /var/log/letsencrypt/letsencrypt.log`
        echo -e "The Lets Encrypt Cert has not been renewed! \n \n" $ERRORLOG | mail -s "Lets Encrypt Cert Alert" "FIX IT! :)"
else
        service httpd reload
fi
exit 0

Please note that we piped the result to the well-known mail command, so you get a notification if the renewal fails. Feel free to change the script however you want, and don’t forget to comment below if you found this post useful!

Migrating git repos from different providers

I recently had to migrate over 70 repositories from GitLab to Bitbucket. There are import tools provided by the providers, but they require access to each other in order to work. Because the providers we were using could not see each other over the web without being connected to the same VPN, this was not an option.

A nice little workaround to get all of the information from a git repository can be found on my public GitHub here:

Using scala traits in a java code base

I am currently working on a code base with a mix of both Java and Scala. All of the APIs have used Dropwizard with Scala, but Scala support stopped at version 0.7.1. This is preventing us from upgrading to the latest version of Dropwizard and from adopting Java 1.8. We have started rewriting all of the Scala Dropwizard APIs back to Java.

One of the problems we have encountered is that these Dropwizard projects depend on other projects written in Scala. To minimize the amount of code to rewrite and get code released as early as possible, we only want to refactor what we need to get on the latest version of Dropwizard and compile using Java 8.

As Scala uses traits quite heavily, this has caused us some problems. Traits are very similar to interfaces in Java, but they can also provide implementations of their methods. If you try to implement a Scala trait in Java code you will get a compile error, because the trait has implemented behaviour and is not a regular Java interface.

Step-by-step guide

A quick and easy way around this is to create a wrapper abstract class in the Scala library which extends the trait. You can then extend this abstract class in your Java code and get the implemented method behaviour through the wrapper class. The example below allows you to call the concatStrings method from the Scala trait in your Java code, e.g.

// scala library

package mathew

// the original trait

trait MathewTrait {

    def concatStrings(stringOne: String, stringTwo: String): String = stringOne + stringTwo

}

// the wrapper class

abstract class MathewTraitWrapper extends MathewTrait

// java code

package mathew;

public class JavaMathew extends MathewTraitWrapper {

    public String javaConcatStrings() {
        return concatStrings("Test message 1", "Test message 2");
    }

}

Service worker and offline content

service worker

I have recently been working on a project to share driving licence information with users via an offline method. I created an Apple Wallet offline pass first. I have now started looking at alternatives to Apple Wallet that would work across other platforms.

Introducing service workers

A service worker is a script that stands between your website and the network, giving you, among other things, the ability to intercept network requests and respond to them in different ways. The idea is that we create a simple HTML representation of the Apple Wallet driving licence share pass and make it available offline using service worker technology. We will also use other progressive web application methods, like a manifest.json, to create an app-like user experience, so a user has a shortcut icon on their phone to the driving licence share pass.

Registering a service worker

You make a service worker take effect by registering it. This registration is done from outside the service worker, by another page or script on your website. On my website, a global site.js script is included on every HTML page. I register my service worker from there.

When you register a service worker, you (optionally) also tell it what scope it should apply itself to. You can instruct a service worker only to handle stuff for part of your website (for example, ‘/blog/’) or you can register it for your whole website (‘/’) like I do.
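A registration sketch along these lines is below. The sw.js path and the "/" scope are assumptions; passing the global object in keeps the helper easy to exercise outside a browser. In site.js you would call registerServiceWorker(window).

```javascript
// Register a service worker for the whole site, if the browser
// supports it. Returns the registration promise, or null when
// service workers are unavailable.
function registerServiceWorker(globalObj) {
  if (!globalObj.navigator || !("serviceWorker" in globalObj.navigator)) {
    return Promise.resolve(null); // unsupported browser: do nothing
  }
  return globalObj.navigator.serviceWorker.register("/sw.js", { scope: "/" });
}
```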

Service worker life-cycle

A service worker does the bulk of its work by listening for relevant events and responding to them in useful ways. Different events are triggered at different points in a service worker’s life-cycle.

Once the service worker has been registered and downloaded, it gets installed in the background. Your service worker can listen for the install event and perform tasks appropriate for this stage.

In our case, we want to take advantage of the install state to pre-cache a bunch of assets that we know we will want available offline later.

After the install stage is finished, the service worker is then activated. That means the service worker is now in control of things within its scope and can do its thing. The activate event isn’t too exciting for a new service worker, but we’ll see how it’s useful when updating a service worker with a new version.

Exactly when activation occurs depends on whether this is a brand-new service worker or an updated version of a pre-existing service worker. If the browser does not have a previous version of a given service worker already registered, activation will happen immediately after installation is complete.

Once installation and activation are complete, they won’t occur again until an updated version of the service worker is downloaded and registered.
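The version-cleanup work an updated service worker does on activate can be kept as a small pure helper. The CACHE_NAME value and the wiring shown in the comment are assumptions for illustration:

```javascript
// Every cache except the current version is a leftover from a
// previous service worker version and should be removed.
function cachesToDelete(allKeys, currentCache) {
  return allKeys.filter((key) => key !== currentCache);
}

// Inside sw.js (CACHE_NAME being an assumed versioned cache name):
//   self.addEventListener("activate", (event) => {
//     event.waitUntil(
//       caches.keys().then((keys) =>
//         Promise.all(
//           cachesToDelete(keys, CACHE_NAME).map((key) => caches.delete(key))
//         )
//       )
//     );
//   });
```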

Beyond installation and activation, we’ll be looking primarily at the fetch event today to make our service worker useful. But there are several useful events beyond that: sync events and notification events, for example.

The fetch event lets the service worker intercept requests made by pages under its scope. We can then use this event to serve responses from the cache instead of the network.

I adopted a cache-first, falling-back-to-network strategy for all of the assets and the page responsible for rendering the HTML version of the Apple Wallet driving licence share pass.
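The cache-first decision itself is tiny; here it is factored into a helper, with the sw.js wiring sketched in a comment. The handler shape is an assumption, not the project’s actual code:

```javascript
// Cache-first strategy: serve from the cache when a match exists,
// otherwise fall back to the network.
function cacheFirst(cacheStorage, fetchFn, request) {
  return cacheStorage.match(request).then((cached) => cached || fetchFn(request));
}

// In sw.js:
//   self.addEventListener("fetch", (event) => {
//     event.respondWith(cacheFirst(caches, fetch, event.request));
//   });
```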

Exposing localhost to outside world 

I came across something really useful while working on my latest project. If you want to show something you are working on to anyone in the world without deploying your code, you can use localtunnel.

A very useful tool to expose a port on your local machine to the outside world. This means you can show your local development of a website/service to anyone with an internet connection.

Github repo:

  • Install NPM: sudo apt install npm
  • Install localtunnel: sudo npm install -g localtunnel
  • Install NODE JS: sudo apt-get install nodejs-legacy
  • Run the following with the port you want to expose: sudo lt --port 9000 -s subdomainnamehere

Accessibility fixes

While building the Apply for Design service, the team ensured the service can be used by all citizens of the United Kingdom. No user should be excluded on the basis of disability; to do so would breach the Equality Act 2010. The service must also comply with any other legal requirements. As a starting point, your service should aim to meet Level AA of the Web Content Accessibility Guidelines (WCAG) 2.0.

When we tested the service, a number of issues were raised, which we have now reviewed and fixed at source. These fixes have been applied to our common library for generating HTML markup, ensuring it maintains a high level of accessibility.

During testing on JAWS we found that, although we were using the correct accessibility tags of a fieldset and legend, we were including too much hint text inside the legend. The additional text from the page had been pulled into the label, making the label extremely long, as it included additions such as how to answer the question. This is confusing for screen reader users, as it becomes difficult to know which field is selected. A shorter fieldset legend — for example, the question and answer only — avoids the confusion.

We chose to fix this issue by changing our markup to place the hint text outside the legend, e.g.



<fieldset>
  <legend>
    <span class="form-label-bold text">Do you wish to defer publication of your design?</span>
  </legend>

  <span class="form-hint text">Deferring publication limits your protection to the date of filing only.</span>

  <span class="form-hint text">The majority of designs applications are submitted without being deferred.</span>
</fieldset>

A very small change, but without testing with real users this would not have been picked up as an issue for users who rely on JAWS software in this way.

The video below gives a great demonstration of how this small change gives the users a much better experience:

Backup WordPress site

This article explains how to manually back up your WordPress site. I used this method to transfer this site from FatCow hosting to the Microsoft Azure platform, where I now host this site and others.

Backup WordPress Database using phpMyAdmin

The database is the most valuable part of your website. This contains all information that will change most often. Luckily, backing up your WordPress database is pretty straight forward and can be done using a handy tool called phpMyAdmin, which is usually available through your cPanel.

Let’s dive into the steps:

Log in to your cPanel and click the phpMyAdmin icon in the Databases section.

In phpMyAdmin you will see a list of database names in the left column of the home page. Simply click on the database that you wish to back up and select the Export tab at the top of the screen.

Make sure the export method is “Quick” and the format is “SQL”.

Click the Go button. This will download a .sql file to your computer.


The download process can usually take from a few seconds to a few minutes, depending on how large your database is. The downloaded SQL file can be used to import at anytime when you need to restore or migrate your site.

Alternatively, if you’re not comfortable with these steps, or unfamiliar with phpMyAdmin, you can also back up your database from within your WordPress admin panel. To do this,

Head to WordPress dashboard » Tools » Export » All content and click Download Export File. This will download an XML file to your computer. This file contains your posts, pages, comments, custom post types, categories, tags, and users.

However, phpMyAdmin remains the most efficient tool for backing up your WordPress database.

Backup WordPress Files (wp-content)

To access wp-content you’ll need either an FTP client or cPanel file manager. Let’s start with File manager tool:
Option 1: Backup your wp-content folder using File manager

Log in to your cPanel account.

Navigate to the File Manager icon under the “File Management” section.

Click on it and a pop-up will appear. In the pop-up, select Web Root (public_html/www) and click Go.

The File Manager will now load in a new window and show your files. Ensure you are in the public_html folder.

Once there, navigate to the “wp-content” folder, right-click on it and select “Compress”.

Select Zip Archive as the compression type and then click Compress File(s). This will create a file called wp-content.zip and place it within your root folder.

Wait for the archiving to finish. When it’s ready, refresh the file manager and look for the wp-content.zip file. (By downloading only the files you need, the time taken to complete the backup is significantly reduced.)

Simply double-click on it to begin the download.

File Manager

This might take a long time, maybe an hour or more depending on your connection speed and the size of your website. Once done, don’t forget to delete the zip file in your root folder to save disk space.

Option 2: Backup your wp-content folder using FTP (FileZilla)

This part of the article assumes you already have an FTP account and the FileZilla software installed on your computer. If you don’t have cPanel access on your host, you will have to get yourself an FTP client such as FileZilla. FTP clients let you move your website’s files from your hosting account to your computer, and vice versa.

Open Filezilla and connect to your host with your FTP information.

After you have connected, select the public_html directory from the right pane.

Create a folder on your desktop and download the wp-content folder to it by simply dragging the folder over from the right pane to the left pane.


Congratulations, you’ve successfully backed up your WordPress site.

Set up a WordPress site in the Cloud with Azure

If you run a WordPress site on a shared host such as GoDaddy or 123-reg, you really should move off the shared hosting platform and transfer your site to your own VM in the cloud. You can set this up for as little as £10 a month with the A0 size, but I would recommend the A1 price tier at roughly £20 a month.

If you have an MSDN subscription you can also use its credits for the charges. It is a good time to grab a VM or two and set up your own servers, where you can host your own blog and showcase your awesome open source projects. A VM is your own mini server, where you get full remote desktop access and can do whatever you like. You don’t have to be limited to a web-based control panel. It’s your own baby server in the cloud, fully scalable and redundant.

Creating your VM

Search for Bitnami images in the Azure dashboard and click the create button. It is that simple to set up a LAMP environment VM with WordPress pre-installed on Apache, with MySQL as the database. Bitnami images also have phpMyAdmin pre-installed to allow you to manage your database.

Bitnami VM

Once the VM is created, it will look like this:

VM View

Go to the “Endpoints” tab and note the public port that has been opened for SSH. SSH is the Linux equivalent of the Remote Desktop Protocol (RDP).


So, the server’s DNS is the one shown above, and the SSH public port is 54423.

Connect to the VM

Let’s get PuTTY and configure it to connect to this VM:


Put in the DNS and the port. Then enter a name under “Saved Sessions” and click Save. Next, go to “Appearance” and change the font to Consolas, 15. Come back, and click “Save” again.

Now click “Open” and you will be taken to the remote session on the server. You will be asked to accept the server’s key; just click Yes. Use your Azure account details as the login and password.

You will see a screen like below:

SSH Screen

The so-called endpoints are the open ports on your machine. In this case, port 22 (SSH) and 80 (HTTP) are both open.

If your server has a web interface, you should open HTTP and HTTPS endpoints when you are ready to go into production. You can do this by clicking VIRTUAL MACHINES in the vertical menu bar on the left, choosing VIRTUAL MACHINE INSTANCES in the central resource area and then clicking on the name of the virtual machine you want to add an endpoint to.

You can now access your virtual machine via the web, where DNSname is the name shown in the DNS NAME column of the VIRTUAL MACHINE INSTANCES tab.

You should receive something like:

Wordpress screen

To see the WordPress site itself, visit your site’s address. To log in to your WordPress site for the first time, use user as your username and bitnami as your password. You are advised to change these credentials as soon as possible after deploying your site.