19 November 2015 / Last updated: 29 Mar 2016

How to create a custom logging system for longer log retention

Sometimes an application produces a large amount of logging, which can make sifting through logs on the dashboard tedious. There may also be a need for longer log retention and, in that case, better search functionality.
To begin with, as part of resin.io's testing suite, I created an automated testing rig to perform tests against all our services using real hardware.
Based on the frequency and output of these tests, I found it necessary to implement a custom logging solution for my CoffeeScript application.
The two Node.js logging libraries I found were Winston and Bunyan. Bunyan's concept of child loggers and its CLI tool for easy log searching made it far better suited to my application.

Bunyan

Since I had multiple scripts logging in parallel to the same file, I needed to have a common script for bunyan initialization.
bunyan = require 'bunyan'

exports.log = (name) ->
  log = bunyan.createLogger {
    name: name,
    streams: [
      {
        level: 'info',
        stream: process.stdout
      },
      {
        level: 'trace',
        path: '/data/testing.log'
      }
    ]
  }
A “stream” is Bunyan’s name for an output destination for log messages. The first stream sends logs at the info level and above to the console (process.stdout), and from there to the resin dashboard logs.
To keep the dashboard from being overwhelmed with extra logs, anything below the info level is not sent there. The second stream writes everything down to the trace level to the testing.log file. If you want to know more about log levels, check Bunyan's documentation.
I saved the log file in /data/ because this is a special directory in the resin.io file system whose contents persist through updates and reboots.
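Child loggers, one of the reasons I picked Bunyan in the first place, also work nicely with this helper: any logger it returns can spawn a child that automatically stamps extra fields onto every record while inheriting both streams. A minimal sketch (the names and the component field here are only illustrative, not part of my actual test rig):
log = require('./log').log 'hardware-tests'
# Derive a child logger that tags every record with component: "wifi-test".
wifiLog = log.child component: 'wifi-test'
wifiLog.info 'scanning for networks'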
The initialization script is called from the main app:
logger = require './log'

# Creates a Bunyan logger named "main".
log = logger.log "main"

# Logged at the info level:
# this message appears in the resin web log console and in the log file.
log.info "main app has started"

# This one only appears in /data/testing.log.
log.debug "Some debugging info"
The above script will then output the following logs:
{"name":"main","hostname":"raspberrypi2","pid":186,"level":30,"msg":"main app has started","time":"2015-11-03T15:04:21.752Z","v":0}
{"name":"main","hostname":"raspberrypi2","pid":186,"level":20,"msg":"Some debugging info","time":"2015-11-03T15:04:27.752Z","v":0}
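In these records the numeric level field maps to Bunyan's named levels: 30 is info and 20 is debug. The bunyan CLI tool, the other reason for picking the library, can pretty-print and filter them, for example:
bunyan /data/testing.log
bunyan /data/testing.log -l warn
The first command pretty-prints the whole file with level names and colours; the second only shows records at the warn level and above.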

Logrotate

Having a log that grows indefinitely can cause serious issues, so the next step was a scheme for deleting old logs.
Bunyan has a rotating-file stream type that rotates and removes old log files, but unfortunately it doesn't work well when multiple processes write to the same log.
For that reason, I used logrotate, a handy Linux program that, as its name implies, rotates logs according to its configuration.
When logrotate decides a log should be rotated, a new empty file is created with the same name (testing.log) and the old testing.log gets a .1 suffix.
If older rotated logs exist, their numbers increase accordingly, and once the number of files exceeds the configured limit, the oldest one is deleted.
/data/testing.log {
    daily
    rotate 7
    copytruncate
}
[Screenshot: terminal output]
With this configuration, logs are rotated daily, seven rotated files are kept (one week's worth of logs), and everything stays in the /data directory.
copytruncate is used so that Node keeps writing to the original file (testing.log rather than testing.log.1): instead of renaming the file, logrotate copies it and then truncates it in place, so the application's open file handle stays valid.
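After a week of daily rotations with this configuration, the /data directory would contain something like the following (an illustrative listing, not captured output):
testing.log
testing.log.1
testing.log.2
...
testing.log.7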
logrotate -v logrotate.conf
With the -v flag, logrotate prints verbose output about what it is doing, which is very helpful when troubleshooting.
logrotate -vf logrotate.conf
The -f flag forces a rotation regardless of whether the logs are due for one.
logrotate -v logrotate.conf -s /data/status
By default, logrotate stores the status of the logs in /var/lib/logrotate/status.
However, in a resin application file retention is only guaranteed inside the /data directory, so we change the status file location with the -s flag.

Systemd timers

To schedule the rotations, logrotate is combined with either cron or systemd. In this case we use systemd, and the logs should be rotated daily.
[Unit]
Description=Timer for logrotate

[Timer]
OnCalendar=*-*-* *:00:00

[Install]
WantedBy=timers.target
With this OnCalendar expression, the timer fires every hour on the hour and runs logrotate, which then decides whether the logs actually need rotating.
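systemd also accepts shorthand calendar expressions, so, if I'm not mistaken, the line below is equivalent to the explicit form used above:
OnCalendar=hourly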
[Unit]
Description=Start logrotate

[Service]
Type=oneshot
ExecStart=/usr/sbin/logrotate -v /etc/logrotate.conf -s /data/state
When the timer is triggered, it starts the service with the same name (logs.service), which in turn runs logrotate.
The systemctl list-timers --all, systemctl status logs.timer and systemctl status logs.service commands let us check the state of the timer and the output of logrotate.
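Assuming the unit files are installed as logs.timer and logs.service, the timer still needs to be enabled and started once with the standard systemd commands:
systemctl daemon-reload
systemctl enable logs.timer
systemctl start logs.timer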

Web server

Ordinarily, to view the logs you would have to SSH into the device or use the resin web terminal.
Instead, I created a web server on the device that uses the resin public URL feature to make the logs easily available through the browser.
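My server isn't reproduced here, but a minimal sketch of the idea, assuming Express is available and that the public URL forwards HTTP traffic to port 80 on the device, could look like this:
express = require 'express'
app = express()

# Serve the current log file at the root path.
app.get '/', (req, res) ->
  res.sendFile '/data/testing.log'

# Listen on the port that the public URL forwards to.
app.listen 80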
[Screenshot: enabling public device URLs on the dashboard]
To access the server, simply enable the public URL from the Actions page of your device on the resin.io dashboard and visit the generated URL.
If you have any questions, just ping me @nchronas on our community forum.
Happy logging!