How to set up a firewall on Ubuntu Server

The information in this post is based on Ubuntu Server 14.04 x64. It may or may not be valid for other versions.

When I first started out with Linux (Ubuntu) servers, setting up a firewall involved manually creating and maintaining a potentially complex configuration file for iptables. However, I have recently discovered ufw, which is short for Uncomplicated Firewall – and it really is 🙂

My installation of Ubuntu Server 14.04 already had ufw installed, but if yours doesn’t, simply install it from the repositories:

sudo apt-get install ufw

UFW is actually just a tool that simplifies the iptables configuration – behind the scenes, it is still iptables and the Linux kernel firewall that does the filtering, so ufw is neither less nor more secure than these. However, because ufw makes it a lot easier to configure a firewall correctly, it may reduce the risk of human error and is therefore possibly more secure for inexperienced admins.

If your server is configured with IPv6 as well as IPv4, make sure that this is enabled for UFW as well. Edit the file /etc/default/ufw and look for a line saying IPV6=yes. On my installation it was already there, but if it’s missing or set to no, change it so it reads yes.
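
For reference, the relevant line in /etc/default/ufw should end up looking like this:

IPV6=yes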

Then simply use the command line to open the ports you want. If you are connected to your server via ssh, make sure to allow that port as well, or you may disrupt your connection and possibly lock yourself out of your server when you activate the firewall – depending on whether you have physical access to the server or not, this may be kinda inconvenient 😉

For example, if you use ssh on the standard port 22 and you are configuring a web server that supports both unencrypted (HTTP on port 80) and encrypted (HTTPS on port 443) connections, you would issue the following commands to configure ufw:

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

If you need more rules, simply add them as above.
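
For example, rules for a mail server on port 25 and a DNS resolver on port 53 (which mostly uses UDP) might look like this; adjust the ports and protocols to whatever services you actually run:

sudo ufw allow 25/tcp
sudo ufw allow 53/udp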

If you have a static IP address and only need to be able to connect via ssh from that one location, you can also restrict ssh connections to a single origin address like this:

sudo ufw allow from 192.168.0.1 to any port 22

Of course, enter your own IP address instead.
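
If you want to be even more specific, you can also limit the rule to TCP only:

sudo ufw allow proto tcp from 192.168.0.1 to any port 22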

When done, enable ufw by entering:

sudo ufw enable

And you’re done! The firewall is running and will automatically start up when you reboot your server 🙂

If you make changes to the ufw configuration, you may need to disable and enable it again to put them into effect, like this:

sudo ufw disable
sudo ufw enable
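
On my version of ufw there is also a reload command, which may be enough depending on what you changed:

sudo ufw reload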

To look at the current configuration, simply enter:

sudo ufw status

If ufw is not enabled, this will simply show an “inactive” message, otherwise it will list the currently defined rules.
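
If you want a bit more detail, ufw can also show the default policies and number the rules, which is handy if you later need to delete one:

sudo ufw status verbose
sudo ufw status numbered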

Match location based on file extension with NGINX

The information in this post is based on NGINX 1.4.6 running on Ubuntu Server 14.04 x64. It may or may not be valid for other versions.

I’m not all that good at regular expressions (something I should probably work on, I know), so I often need to read up on them when I have to do more than the very simplest pattern matching in, for example, NGINX’s location context.

One thing that is very useful if you need to handle specific file types differently is the ability to match a location based on the extension of the requested file. And it’s very easy too; your location directive could simply look like this:

location ~* \.(js|css|html|txt)$
{
    # do something here
}

Of course, you can just change the extensions to whatever you need.

The above example is case-insensitive (for example, it will match both .js and .JS). If you want it to be case-sensitive, just remove the * after the ~.

What you do with the match is up to you; typically, you’d rewrite it to a back-end that does some sort of preprocessing, or you may just want to serve the files from different folders than the public URLs suggest. The possibilities are endless.
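
As a simple illustration (the directory and cache time here are just placeholder values), you could serve the matching files from a dedicated folder and let clients cache them for a week:

location ~* \.(js|css|html|txt)$
{
    # example only: adjust the path and expiry to your own setup
    root /var/www/static;
    expires 7d;
}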

Deleting NGINX cache puts critical unlink errors in error log

The information in this post is based on FastCGI caching on NGINX 1.4.6 running on Ubuntu Server 14.04 x64. It may or may not be valid for other versions.

After migrating several sites from Apache to NGINX, I have grown very fond of its built-in caching capabilities, which work extremely well under most circumstances without much meddling from me.

However, one thing I really can’t do without is the ability to clear the cache myself. The free community edition of NGINX only supports time-based cache expiry (i.e. you can set it up to check if something has changed after an hour, a day, etc.). But what if there is no reliable way of determining ahead of time when a certain resource will change? For example, I have no idea if it will be an hour, a day or a year before I come back and edit something in this post – and why only cache for an hour if caching for a day would have been fine?

This is where the ability to clear the cache manually (or by having your web application notify NGINX that something should be purged) is needed. The people behind NGINX are clearly aware of the need for this as the feature is supported in the paid version of their product – but while they are certainly entitled to set up their licensing any way they want, the price is a bit steep for me when this function is the only paid feature I really need.

Fortunately, it turns out you can just delete files from the cache directory yourself and NGINX will pick up on this and fetch a new copy from your back-end without a hitch. However, if you do this without tweaking your configuration you are likely to see a whole bunch of messages similar to this one in your error log after a while:

2015/03/04 17:35:24 [crit] 16665#0: unlink() "/path/to/nginx/cache/9/a0/53eb903773998c16dcc570e6daebda09" failed (2: No such file or directory)

It appears that these errors occur when NGINX itself tries to delete cache entries after the time specified by the inactive parameter of the fastcgi_cache_path directive. The default for this is only 10 minutes, but you can set it to whatever value you want. I’ve set it to 7 days myself, which seems to work well as I haven’t seen this error at all after changing it.
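
For reference, the inactive parameter is set on the fastcgi_cache_path directive itself. A configuration along these lines (the zone name, levels and sizes are just example values) keeps entries around for 7 days:

fastcgi_cache_path /path/to/nginx/cache levels=1:2 keys_zone=mycache:10m inactive=7d max_size=1g;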

I find it really strange that it is considered a critical error that a cache entry cannot be deleted because it doesn’t exist. The fact that its severity classification is so high means that it’s impossible to get rid of just by ignoring log entries below a certain threshold. As soon as a new copy is fetched from the back-end the entry will exist again, so this should be a warning at most, in my opinion.

Now, if the cache entry couldn’t be deleted because of permission problems or some other reason, that would indeed be a critical error, because it might make NGINX continue serving cached content long after its expiry time, but the clean-up process doesn’t seem to make this distinction.

FastCGI PHP_VALUE settings leaking to other sites on NGINX with PHP-FPM

The information in this post is based on NGINX 1.4.6 and PHP-FPM 5.5.9 running on Ubuntu Server 14.04 x64. It may or may not be valid for other versions.

Background

I was recently setting up a new website on a server that had a couple of other sites on it already. The server and all of the sites on it are owned and controlled by me, so even though it has multiple virtual hosts, I don’t really consider it a “shared hosting environment” as such.

More or less by chance, I suddenly noticed that the new site had some PHP settings that were certainly not default. On one of the other sites, I have a virtual host defined as an admin area where I’m allowed to bulk upload large files, so the PHP settings regarding file uploads and max POST size had been increased for the relevant location in that virtual host using the fastcgi_param PHP_VALUE directive. Or at least that was the idea; it was now apparent that these settings were leaking out and polluting other sites. I assume the same leak occurs when using fastcgi_param PHP_ADMIN_VALUE instead, but I haven’t actually tested it.

Specifically, the following PHP settings were increased for this location: post_max_size, upload_max_filesize, max_file_uploads, and max_input_time. Also, NGINX’s own client_max_body_size was increased to the same as PHP’s post_max_size.
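
To give an idea of what that kind of configuration looks like (the location name and values here are made up for illustration), the relevant part of such an admin location could look along these lines; note that multiple settings go into a single PHP_VALUE parameter, separated by newlines:

location /admin
{
    # illustrative values only
    client_max_body_size 512M;
    fastcgi_param PHP_VALUE "post_max_size=512M
upload_max_filesize=512M
max_file_uploads=100
max_input_time=300";
    # ... plus the usual fastcgi_pass and related directives
}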

While a way too high max POST size and upload limit for publicly available areas is certainly not optimal, it’s not a huge security issue either (attackers could potentially exploit it to deplete resources on the server by posting massive requests). Other settings I might have passed this way could have been a real problem, though, so I’m glad I found out about this with something fairly innocent.

The Problem

It seems that the issue stems from the passed options being applied to whichever PHP-FPM child process happens to handle the request and then remaining in effect for as long as that process lives, which can be quite some time: one of the major performance improvements of PHP-FPM over traditional PHP execution is that each child process handles many requests, so the entire engine doesn’t have to be started for every single request.

After doing some research, it is still unclear to me whether this is a bug, normal behaviour, or just an error in my setup; I have seen people claiming all three. The problem was completely reproducible for me, though: restarting PHP-FPM would return the settings to their defaults, but as soon as I’d visited the virtual host with the special settings, those settings would start appearing on other sites on the same server. Regardless, the important part is how to fix it.

The Solution

After some further research and experimenting, it appeared to me that the only reliable way of fixing it was to divide the virtual hosts into separate PHP-FPM pools so that they would not share child processes. Many people will tell you that you should be doing that anyway for security reasons; while I can see both pros and cons here, I’m inclined to think they’re right based on my experiences with this problem.

I have decided to split the description of how to configure separate PHP-FPM pools into a separate post, which you can read here.

How to set up separate PHP-FPM pools in NGINX

The information in this post is based on NGINX 1.4.6 and PHP-FPM 5.5.9 running on Ubuntu Server 14.04 x64. It may or may not be valid for other versions.

There are a number of advantages to setting up multiple PHP-FPM child process pools rather than running everything in the same pool. Security, separation/isolation and resource management spring to mind as a few major ones.

Regardless of what your motivation is, this post will help you do it 🙂

Part 1 – Set up a new PHP-FPM pool

First, you need to locate the directory where PHP-FPM stores its pool configurations. On Ubuntu 14.04, this is /etc/php5/fpm/pool.d by default. There is probably already a file there called www.conf, which holds the configuration for the default pool. If you haven’t looked at that file before, chances are you should go through it and tweak its settings for your setup, as the defaults are aimed at a fairly underpowered server. For now, though, just make a copy of it so we don’t have to start from scratch:

cd /etc/php5/fpm/pool.d
sudo cp www.conf mypool.conf

Of course, replace “mypool” with whatever you want your pool to be called.

Now open up the new file using nano or whichever text editor you prefer and adjust it to fit your purpose. You will probably want to tweak the child process numbers and possibly which user and group the pool runs under, but the two settings that you absolutely must change are the pool’s name and the socket it’s listening to, otherwise it will conflict with the existing pool and things will stop working.

The name of the pool is near the top of the file, enclosed in square brackets. By default it’s [www]. Change this to whatever you want; I suggest the same as you named the configuration file, so for the sake of this example change it to [mypool]. If you don’t change it, it seems that PHP-FPM will only load the first configuration file with that name, which is likely to break things.

You then need to change the socket or address you are listening to, which is defined by the listen directive. By default, PHP-FPM uses Unix sockets so your listen directive will probably look like this:

listen = /var/run/php5-fpm.sock

You can change it to whatever valid name you want, but again, I suggest sticking with something similar to the configuration filename, so you could for example set it to:

listen = /var/run/php5-fpm-mypool.sock

Alrighty then, save the file and exit the text editor.
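
To sum up Part 1, the relevant lines in the new pool file could end up looking roughly like this (apart from the pool name and the listen socket, these are just the Ubuntu defaults; keep whatever else you have tuned in your own copy):

; /etc/php5/fpm/pool.d/mypool.conf
[mypool]
user = www-data
group = www-data
listen = /var/run/php5-fpm-mypool.sock
pm = dynamic
pm.max_children = 5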

Part 2 – Update NGINX virtual host configuration

Now you need to open up the NGINX virtual host file with the FastCGI configuration that you want to point at the new pool – or rather, at the new socket.

By default on Ubuntu 14.04, these are stored under /etc/nginx/sites-available, but they can also be defined elsewhere. You probably know best where your virtual host configurations are located 😉

Open up the relevant configuration file in your favorite text editor and look for the fastcgi_pass directive (which must be in a location context) defining the PHP-FPM socket. You need to change this value so that it matches the new PHP-FPM pool configuration you made in Part 1, so continuing our example you would change it to:

fastcgi_pass unix:/var/run/php5-fpm-mypool.sock;
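
For context, the surrounding PHP location block might look something along these lines (your other fastcgi settings will vary; only the fastcgi_pass line actually needs to change):

location ~ \.php$
{
    fastcgi_pass unix:/var/run/php5-fpm-mypool.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}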

Then save and close that file as well. You’re almost done now.

Part 3 – Restart PHP-FPM and NGINX

To apply the configuration changes you’ve made, restart both PHP-FPM and NGINX. It may be enough to reload instead of restart, but I find that to be a bit hit and miss, depending on which settings were changed. In this particular case, I wanted the old PHP-FPM child processes to die right away, so restarting PHP-FPM was needed, but for NGINX a reload may be sufficient. Try it out for yourself.

sudo service php5-fpm restart
sudo service nginx restart
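
If you want to verify that the new pool actually came up, a quick check is to list the sockets (assuming you stuck with the names from this example); both the default socket and the new one should be there:

ls -l /var/run/php5-fpm*.sock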

And voila, you’re done. If you did everything correctly, the virtual host you modified should now be using the new PHP-FPM pool and not share child processes with any other virtual hosts.