Intro

I was looking for a simple, clean status page for monitoring various Azure services, as well as an external status page I could share with another team. While researching, I came across Uptime Kuma, an open-source, self-hosted monitoring tool developed by Louis Lam. You can find more details on his GitHub repo. I am absolutely delighted with this simple but truly powerful tool and its straightforward implementation.

Techno Tim gives a nice walkthrough of Uptime Kuma, so if you have some time to spare and want to see all of the features, check out the video below.

There is also an image on Docker Hub: pull the latest tag, mount the needed volume, and voilà, you have it up and running. As simple as that.

docker volume create uptime-kuma
docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1 

If you’d like to deploy it on Azure Container Instances, but you don’t want to use another service such as Azure Application Gateway to provide a TLS endpoint, then this could be a nice solution.
Keep in mind that exposing an endpoint like this without a security boundary in front might not be smart, but since we are using a container we can also destroy it with two clicks if needed, or even automate self-destruction with some conditional logic and a function.

Deployment to Azure

We will deploy two containers as a group on a single ACI instance, using a dummy self-signed certificate just for test purposes; you can replace it with your own later.

Create a self-signed certificate

Create a CSR file and fill in the required info:

openssl req -new -newkey rsa:2048 -nodes -keyout ssl.key -out ssl.csr
Then create a certificate from the CSR:
openssl x509 -req -days 365 -in ssl.csr -signkey ssl.key -out ssl.crt
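Before moving on, it can be worth sanity-checking the result (an optional step; the modulus comparison below assumes an RSA key, which is what the `rsa:2048` option above generates):

```shell
# Inspect the subject and validity window of the new certificate
openssl x509 -in ssl.crt -noout -subject -dates

# The certificate and key must belong together: their RSA moduli
# (hashed here only for readability) should print identical digests.
openssl x509 -in ssl.crt -noout -modulus | openssl md5
openssl rsa  -in ssl.key -noout -modulus | openssl md5
```

If the two digests differ, the certificate was not signed with this key and nginx will refuse to start.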

Create Nginx config file

In this part, we will configure the nginx.conf file to use the TLS certificate created in the steps above and set nginx up as a reverse proxy. It will listen for incoming traffic on the HTTPS port (443) and forward everything to the uptime-kuma container on port 3001.

# nginx Configuration File
# https://wiki.nginx.org/Configuration

# Run as a less privileged user for security reasons.
user nginx;

worker_processes auto;

events {
    worker_connections 1024;
}

pid        /var/run/nginx.pid;

http {

    # HTTPS server: terminate TLS here and proxy to uptime-kuma

    server {
        listen 443 ssl http2;

        server_name localhost;

        # Allow only TLS 1.2. SSLv3 is disabled entirely to protect against the BEAST attack;
        # add older protocols below only if you must support legacy clients.
        ssl_protocols              TLSv1.2;

        # Ciphers set to best allow protection from Beast, while providing forwarding secrecy, as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
        ssl_ciphers                ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers  on;

        # Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive TLS/SSL handshakes.
        # The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
        # By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
        # Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
        ssl_session_cache    shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
        ssl_session_timeout  24h;


        # Use a higher keepalive timeout to reduce the need for repeated handshakes
        keepalive_timeout 300; # up from 75 secs default

        # remember the certificate for a year and automatically connect to HTTPS
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

        ssl_certificate      /etc/nginx/ssl.crt;
        ssl_certificate_key  /etc/nginx/ssl.key;

        location / {
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass         http://localhost:3001/;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection "upgrade";
        }
    }
}

Base64 encoding

We need to base64-encode nginx.conf, ssl.crt, and ssl.key, since that is the form the YAML deployment file expects later.

cat nginx.conf | base64 > base64-nginx.conf
cat ssl.crt | base64 > base64-ssl.crt
cat ssl.key | base64 > base64-ssl.key
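One thing to watch out for: on Linux, `base64` wraps its output at 76 characters, and a wrapped value will not paste cleanly into a single-line YAML field. You can round-trip the files to confirm nothing was mangled (a quick sketch; `base64 -d` is the GNU coreutils decode flag, macOS uses `-D`):

```shell
# Decode each encoded file back and diff against the original;
# an empty diff means the encoding survived intact.
for f in nginx.conf ssl.crt ssl.key; do
    base64 -d "base64-$f" | diff - "$f" && echo "$f OK"
done
```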

Create YAML deployment file

In the deployment.yaml file we need to paste the full output of the base64-encoded files into the secret volume of the container group. We also need to create a volume named uptime-kuma backed by an empty directory and mount it in the uptime-kuma-app container.
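Pasting three long base64 strings by hand is error-prone, so a small helper can fill them in instead (a sketch: the `deployment.template.yaml` file and the `__NGINX_CONF__`/`__SSL_CRT__`/`__SSL_KEY__` placeholder names are my own convention, not part of any official tooling; `base64 -w 0` is the GNU coreutils flag for unwrapped output):

```shell
# Substitute the base64 payloads into a copy of the deployment file that
# contains literal placeholders where the encoded values should go.
# The base64 alphabet (A-Z, a-z, 0-9, +, /, =) never clashes with the
# "|" delimiter used for sed below.
sed -e "s|__NGINX_CONF__|$(base64 -w 0 nginx.conf)|" \
    -e "s|__SSL_CRT__|$(base64 -w 0 ssl.crt)|" \
    -e "s|__SSL_KEY__|$(base64 -w 0 ssl.key)|" \
    deployment.template.yaml > deployment.yaml
```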

api-version: 2021-12-25
location: westeurope
name: uptime-kuma
properties:
  containers:
  - name: nginx-with-ssl
    properties:
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      ports:
      - port: 443
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - name: nginx-config
        mountPath: /etc/nginx
  - name: uptime-kuma-app
    properties:
      image: louislam/uptime-kuma:1.11.1
      ports:
      - port: 3001
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - name: uptime-kuma
        mountPath: /app/data

  volumes:
  - secret:
      ssl.crt:    <Base64 encoded output goes here>
      ssl.key:    <Base64 encoded output goes here>
      nginx.conf: <Base64 encoded output goes here>
    name: nginx-config
  - name: uptime-kuma
    emptyDir: {}

  ipAddress:
    dnsNameLabel: dns-label-you-want-to-have
    ports:
    - port: 443
      protocol: TCP
    type: Public
  osType: Linux
tags: null
type: Microsoft.ContainerInstance/containerGroups

Deploy the container

We first create a resource group:

az group create --name rg-test-dev-001 --location westeurope
Then deploy the container group:
az container create --resource-group rg-test-dev-001 --file deployment.yaml

Deployment can take a few minutes to finish, plus a few more for the services to start; the end goal is to have both containers in the running state.
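While waiting, the Azure CLI can report the container group state and tail the application logs (assuming the resource group and container names used above, and that you are logged in with `az login`):

```shell
# Show the overall state and public FQDN of the container group
az container show --resource-group rg-test-dev-001 --name uptime-kuma \
    --query "{state: instanceView.state, fqdn: ipAddress.fqdn}" --output table

# Tail the uptime-kuma application logs if something looks off
az container logs --resource-group rg-test-dev-001 --name uptime-kuma \
    --container-name uptime-kuma-app
```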

Verify

As a last step, try to open the FQDN you configured earlier: https://dns-label-you-want-to-have.westeurope.azurecontainer.io (your browser will warn about the self-signed certificate until you replace it with your own).