How To Configure NGINX as CDN

Written by: Bagus Facsi Aginsa
Published at: 02 Dec 2023

Did you know that you can build your own CDN using NGINX? NGINX has a built-in caching capability that can cache static files, including video files. So, if you have a streaming server, you can use NGINX as a CDN to reach customers that are far from your streaming server.

Sudo Privileges

Before starting, let's make sure we will have no permission issues during the installation and configuration. Go to your VM and type this command:

sudo su

Install NGINX from Source

To install NGINX as a CDN, I recommend you follow my previous article “Build an Adaptive Bitrate Streaming Server Using Nginx on Ubuntu”, because NGINX installed that way can also provide CDN functionality (especially for live streaming video).

After you finish the installation, you can come back to this tutorial to configure NGINX as a CDN.

NGINX Configuration

First, let’s back up the NGINX configuration:

mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup

Create a new NGINX configuration file

nano /etc/nginx/nginx.conf

Copy & paste this configuration

user www-data;
worker_processes auto;
worker_rlimit_nofile 4096;
pid /run/nginx.pid;

events {
    worker_connections  4096;
    # multi_accept on;
}

http {
    # Basic Settings
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    keepalive_timeout 65;

    # Log Settings
    log_format video_log '$remote_addr - $remote_user [$time_local] "$request" '
                         '$status $body_bytes_sent "$http_referer" '
                         '"$http_user_agent" "$http_x_forwarded_for" '
                         'rt=$request_time '
                         'ua="$upstream_addr" us="$upstream_status" '
                         'ut="$upstream_response_time" ul="$upstream_response_length"';
    access_log /var/log/nginx/access.log video_log;

    # SSL Settings if needed
    # ssl_protocols TLSv1.2 TLSv1.3;
    # ssl_prefer_server_ciphers on;
    # add_header Strict-Transport-Security "max-age=31536000; preload" always;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 20m;
    # ssl_certificate /etc/ssl/cert_file.crt;
    # ssl_certificate_key /etc/ssl/key_file.key;
    # ssl_dhparam /etc/ssl/dhparam_file.pem;

    # Core Cache Setting
    proxy_cache_key $uri;
    proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=video_cache:10m max_size=10g
                     inactive=30d use_temp_path=off;

    # Server For VoD Caching
    server {
        listen 80; # or use 443 if you use SSL
        server_name cdn.example.com; # change this to your domain

        location / {
            # cache path
            proxy_cache video_cache;

            # server resends file only if the file changed
            proxy_cache_revalidate on;

            # use stale cache when updating on background
            proxy_cache_background_update on;
            proxy_cache_use_stale updating;

            # Only 1 request gets to the origin at a time
            proxy_cache_lock on;

            # Set caching time to any HTTP response
            proxy_cache_valid 200 7d;

            # ignore request header
            proxy_ignore_headers Cache-Control;
            proxy_ignore_headers Set-Cookie;
            proxy_ignore_headers Expires;

            # addcache status header
            add_header X-Cache-Status $upstream_cache_status;

            # Using HTTP 1.1 protocol
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            proxy_set_header Host $host;
            proxy_pass http://upstreamserver;
        }
    }
}
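After saving the file, you can check the configuration syntax and apply it. This assumes NGINX is managed by systemd, as set up in the installation article; the hostname and file path in the curl commands are just examples, so change them to match your server:

```shell
# Verify the configuration syntax first; only reload if the test passes
nginx -t && systemctl reload nginx

# Optional: confirm caching works by requesting the same file twice.
# The X-Cache-Status header (added in the config above) should show
# MISS on the first request and HIT on the second.
curl -sI http://localhost/index.m3u8 | grep -i x-cache-status
curl -sI http://localhost/index.m3u8 | grep -i x-cache-status
```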

Notice that I marked several directives in the configuration; they directly affect your CDN functionality. Let’s discuss them one by one.

  1. The first one is, of course, server_name. It is used to set your domain. Change it to match your domain.

  2. proxy_cache_key, this directive defines the cache file key, or simply the name of the cache file. The cache file name is the result of applying the MD5 hash function to the cache key. For example, if we set $uri as the key and a request comes from the user with URI /images/my-sweet-cat.jpg, the cache file name will be c223cd4fbcf6ae9c78ab4f6dabd0beab.

  3. proxy_cache_path, this directive sets the path where your cache files are stored, along with several important cache parameters. levels defines the folder structure. For example, if we set the directive like this
    proxy_cache_path /etc/nginx/cache levels=1:2

    the cache folder structure will look like this:

    /etc/nginx/cache/b/ea/c223cd4fbcf6ae9c78ab4f6dabd0beab

    notice that the first-level folder name is 1 character (b) and the second-level folder name is 2 characters (ea), taken from the end of the cache file name. keys_zone sets the size of the shared memory zone used to store the cache keys and their metadata; 1m of keys_zone is enough to store about 8k keys. max_size controls the maximum amount of cache data that can be stored on the server, so always set max_size to less than your VM storage. The inactive parameter sets how long a cache item can stay unaccessed before it is deleted. For example, with inactive=30d, a cache file will be deleted automatically from the server if it is not accessed for 30 days.

  4. proxy_cache is used to give an identifier/name and activate the caching system in a specific location in the NGINX configuration, matching the keys_zone in the proxy_cache_path configuration. Notice that proxy_cache video_cache matches keys_zone=video_cache:10m.

  5. proxy_cache_revalidate will enable revalidation of expired cache items using conditional requests with the “If-Modified-Since” and “If-None-Match” header fields. If the expired cache is still the same as the upstream version (based on the header), the CDN will not download the file from the upstream server, hence improving CDN performance.

  6. proxy_cache_background_update and proxy_cache_use_stale make NGINX update an expired cache item in a background job while a stale cached response is returned to the client during the update. So the client doesn’t have to wait for the upstream server.

  7. proxy_cache_lock will minimize upstream calls: only one request at a time will be allowed to populate a new cache element identified according to the proxy_cache_key directive.

  8. proxy_cache_valid sets how long we treat the cache as valid after it is fetched from the upstream server. proxy_cache_valid 200 7d; means we will cache a successful upstream response (identified by HTTP status code 200) for 7 days.

  9. Last but not least, proxy_pass determines the location of the upstream server, i.e., the server we want to cache. Change the location to match the domain of your server.
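As a side note, you can reproduce the cache file naming and folder layout from points 2 and 3 in the shell. This is a sketch assuming the MD5-based naming described above, using the example cache file name from point 2:

```shell
# Derive the on-disk cache path NGINX uses with levels=1:2.
# The hash below is the example cache file name from point 2.
hash=c223cd4fbcf6ae9c78ab4f6dabd0beab

level1=${hash: -1}     # last 1 character  -> first-level folder
level2=${hash: -3:2}   # previous 2 chars  -> second-level folder

echo "/etc/nginx/cache/$level1/$level2/$hash"
# → /etc/nginx/cache/b/ea/c223cd4fbcf6ae9c78ab4f6dabd0beab
```

This is handy when you want to check whether a specific file is already cached on disk.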

That’s it! If you want to customize the configuration further, you can always visit NGINX ngx_http_proxy_module documentation.