Node.js Restful API template with TypeScript, Fastify and MongoDB


Have you recently started a new Node.js API project? Did you use a template or start the project from scratch?
I was asking myself the same questions, and I had been looking for a minimal boilerplate for a while. There were so many options that it was hard to pick one.
Most of them use Express.js; others use ES5 or lack a test setup.
So I decided to spin up one of my own and reuse it in the future. Here is the repo on GitHub.


My setup has the following characteristics:


  • Node version 10 or later
  • TypeScript for obvious reasons
  • Fastify for its asynchronous nature and for being faster than Express or Restify
  • Nodemon in development for watching for changes and restarting the server


  • MongoDB with Mongoose
  • Docker for the MongoDB service instead of installing it locally


  • Jest for being the de facto standard in Node testing
  • In-memory Mongod server for easily mocking the DB
  • Coveralls as a coverage collector after the Jest report is generated

Code formatting and static analysis

  • ESLint config
  • Prettier config attached to the linter
  • Editor config


  • Swagger UI for API documentation
  • Postman collections attached for testing the endpoints


  • Continuous integration in Travis CI:
  1. Install dependencies
  2. Run tests
  3. Collect coverage and pass it to Coveralls
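For reference, the dev/test wiring of such a template might look roughly like this in package.json (the script contents are my sketch, not necessarily what the repo uses):

```json
{
  "scripts": {
    "dev": "nodemon --watch src --ext ts --exec ts-node src/server.ts",
    "build": "tsc",
    "test": "jest --coverage"
  }
}
```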

And that's it! I hope it's minimal enough.
Please share some ideas for improvement if you have any. I thought of API versioning, but Fastify seems to support that out of the box.
API key authentication was also something I was considering, but I wasn't sure how exactly to implement it. If you have something in mind, I would love to discuss it in the comments.
Happy coding!

Configure your dev Windows machine with Ansible

Ansible is well known in the IT operations field for its fantastic automation abilities.
You can do whatever you want with Windows too, whether through PowerShell, a batch script, or one of the more than one hundred Windows modules.
I will use it to configure my personal machine and save the hassle every time I move to a new one.
It's not a big deal to install a few programs, but I'm sure this will pay off in the long term. It can be pretty useful for configuring multiple machines too.
Using Ansible to target localhost on Linux is click-click-go, but it's different when it comes to Windows.
We need to install WSL on Windows, Ansible on WSL, enable WinRM on Windows, and finally control it from WSL.
And all this happens on your localhost.

Install Ansible on WSL

Enable WSL:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Install the Ubuntu distribution (you may choose whatever distro you want):
Invoke-WebRequest -Uri -OutFile Ubuntu.appx -UseBasicParsing

Start your WSL and update packages:
sudo apt-get update

Install Ansible:

sudo apt install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible

Enable WinRM

By default WinRM works only on Private or Domain networks. You can skip that check by passing a parameter, Enable-PSRemoting -SkipNetworkProfileCheck, but I don't suggest doing that. Instead, make your trusted network private.
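On Windows 8/Server 2012 and later this can be done from PowerShell (the interface index below is only an example; check yours with Get-NetConnectionProfile first):

```powershell
# List networks and their current categories
Get-NetConnectionProfile

# Mark the trusted network as Private (replace 12 with your interface index)
Set-NetConnectionProfile -InterfaceIndex 12 -NetworkCategory Private
```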
Then enable WinRM by running Enable-PSRemoting in PowerShell.
Enable basic auth: Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
Enable unencrypted connections: Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true

Run the playbook

Using ansible-pull, run the playbook, which installs Chocolatey and a few packages from it. Full details can be found in the repo. Provide your username and password for Windows.

ansible-pull -U -e "ansible_user=your_win_user ansible_password=your_win_user_password"


I'm sure that, as a developer, you make a lot of tweaks to your OS; me too. I have a few more tweaks to add, and it would be great to share some ideas and extend it even further.

Getting started with Hangfire on ASP.NET Core and PostgreSQL on Docker

Hangfire is an incredibly easy way to perform fire-and-forget, delayed and recurring jobs inside ASP.NET applications. No Windows Service or separate process required. Backed by persistent storage. Open and free for commercial use.

There are a number of use cases when you need to perform background processing in a web application:

  • mass notifications/newsletter
  • batch import from xml, csv, json
  • creation of archives
  • firing off web hooks
  • deleting users
  • building different graphs
  • image/video processing
  • purge temporary files
  • recurring automated reports
  • database maintenance

and counting..

We will get started by installing and configuring the database, then creating a new ASP.NET Core MVC project, after which we will get to Hangfire and run a few background tasks with it.

Setup PostgreSQL database

There is more than one way to set up a PostgreSQL database. I'm going to use Docker for the purpose, but you can install it directly from the PostgreSQL official website.

If you choose to download and install PostgreSQL, skip the following Docker commands. Instead, configure your db instance with the parameters from the Docker example.

Otherwise, we need Docker installed and running. Let's proceed with pulling the image for PostgreSQL. Open a terminal and run:
$ docker pull postgres

We have the image; let's create a container from it and provide a username and password for the database:
$ docker run -d -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres postgres

Create ASP.NET Core MVC project

So far we have the db up and running; let's continue with creating the MVC project and configuring it to use our database.

Create new folder and enter it:
$ mkdir aspnet-psql-hangfire && cd aspnet-psql-hangfire

When creating the new project, you can pick whatever you want from the list of available dotnet project templates. I'll stick to mvc.
$ dotnet new mvc

Next, install the NuGet package for the Entity Framework driver for PostgreSQL:
$ dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL

Add an empty DbContext:

using Microsoft.EntityFrameworkCore;

namespace aspnet_psql_hangfire.Models
{
    public class DefaultDbContext : DbContext
    {
        public DefaultDbContext(DbContextOptions<DefaultDbContext> options)
            : base(options) { }
    }
}

Restore the packages by running:
$ dotnet restore

Edit appsettings.json and enter the connection string (adjust the database name and credentials to match your setup):

{
    "connectionStrings": {
        "defaultConnection": "Host=localhost;Port=5432;Database=hangfire-db;Username=postgres;Password=postgres"
    },
    "Logging": {
        "LogLevel": {
            "Default": "Warning"
        }
    },
    "AllowedHosts": "*"
}

The framework must know that we want to use a PostgreSQL database, so register the driver in your Startup.cs file within the ConfigureServices method:

services.AddEntityFrameworkNpgsql().AddDbContext<DefaultDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("defaultConnection")));

We are ready for an initial migration:
$ dotnet ef migrations add InitContext && dotnet ef database update

Install Hangfire

Let’s continue with final steps — install packages for Hangfire:
$ dotnet add package Hangfire.AspNetCore && dotnet add package Hangfire.Postgresql

Add the following using statement to the Startup.cs.

using Hangfire;
using Hangfire.PostgreSql;

Again in the ConfigureServices method in Startup.cs, point Hangfire to our default connection string:

services.AddHangfire(x =>
    x.UsePostgreSqlStorage(Configuration.GetConnectionString("defaultConnection")));

Again in Startup.cs, but now in the Configure method, enter:

app.UseHangfireServer();
app.UseHangfireDashboard(); // Will be available at http://localhost:5000/hangfire

Then restore again the packages by typing:
$ dotnet restore

Create tasks

In the Configure method, below the app.UseHangfireServer() call, add the following tasks:

BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget"));

BackgroundJob.Schedule(() => Console.WriteLine("Delayed"), TimeSpan.FromDays(1));

RecurringJob.AddOrUpdate(() => Console.WriteLine("Minutely Job"), Cron.Minutely);

var id = BackgroundJob.Enqueue(() => Console.WriteLine("Hello, "));
BackgroundJob.ContinueWith(id, () => Console.WriteLine("world!"));

And finally run the app:
$ dotnet run

Hangfire task being executed

Observe the console. Now go to the dashboard provided by Hangfire at http://localhost:5000/hangfire for more task info.

Hangfire dashboard


Keep in mind that the dashboard is only available for localhost connections. If you would like to use it in production, you have to apply an authorization method. There are plenty of tutorials describing how to do that.
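As a sketch of one common approach (the class and names below are my own, not from this post), Hangfire's dashboard accepts authorization filters. A filter that lets only authenticated users in could look like this:

```csharp
using Hangfire.Dashboard;

// Hypothetical example: allow only authenticated users to view the dashboard
public class AuthenticatedUserDashboardFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity.IsAuthenticated;
    }
}
```

You would then wire it up in the Configure method, e.g. app.UseHangfireDashboard("/hangfire", new DashboardOptions { Authorization = new[] { new AuthenticatedUserDashboardFilter() } });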

Here is the repo for the project. I hope you liked it. Happy coding!

My favorite Visual Studio Code productivity extensions

As a developer you've probably already met Visual Studio Code, Microsoft's lightweight code editor. If you haven't, you definitely should try it. Today I'm going to share my VS Code extensions, including one that covers a long-missing feature I recently found: syncing extensions, themes and configuration. What happens when you step onto a new machine? You have to customize all your key bindings, download all your plugins and set up your theme preferences. And you probably want to sync all that across your devices. But first, I will share all the productivity plugins that make my day, separated by category.

C# extensions

  1. C# – Full IntelliSense for C#. A MUST for .NET developers.
  2. C# XML Documentation Comments – Most Visual Studio users are very familiar with this. Type “///” + TAB and you get nice documentation for your classes and members.
  3. C# Extensions – A pretty nice addition to the C# extension, providing quick scaffolding of classes, interfaces etc.
  4. .NET Core Test Explorer – Browse, run and debug tests directly in the editor.
  5. NuGet Package Manager – Needs no explanation.

Git extensions

  1. Git History – This extension gives you a full-featured Git client in the editor. Search commits, merge and compare branches and more.
  2. gitignore – Remove files from source control tracking from the file context menu.
  3. GitLens – Track authors and dates directly in the file.

JavaScript extensions

  1. TSLint – Analysis tool that checks TypeScript code for readability, maintainability, and functionality errors.
  2. Babel JavaScript – Syntax highlighting for today’s JavaScript.
  3. Npm IntelliSense – You get npm modules autocomplete in import statements.
  4. ESLint – Linting utility for JavaScript and JSX.
  5. Debugger for Chrome – Debug your JS app directly in the browser.

Utilities extensions

  1. REST Client – Allows you to send HTTP requests and review responses.
  2. Docker – Adds syntax highlighting, commands, hover tips, and linting for Dockerfile and docker-compose files.
  3. Path IntelliSense – Plugin that auto completes filenames.
  4. Auto Close Tag – Automatically add HTML/XML close tag, same as Visual Studio IDE or Sublime Text does.
  5. VS Live Share – Real-time collaborative development.
  6. Auto Rename Tag – Auto rename paired HTML/XML tag.
  7. VSCode great icons – File specific icons for improved visual grepping.
  8. SQLTools – Execute queries, auto complete, bookmarks etc.
  9. PHP IntelliSense – Advanced PHP IntelliSense.
  10. IntelliCode for VS — A set of capabilities that provide AI-assisted development. Still in preview, but worth trying.


Cobalt2 theme – Using the Cobalt2 theme feels so good. In addition, it's considered the best theme for your eyeballs.

Settings Sync – This is the missing piece of the puzzle. VS Code has no built-in support for this type of synchronization. This extension saves all your custom settings, themes and extensions. Whether you switch PCs or just start with a new one, setup is as quick as 5 minutes and you have your favorite extensions and settings synced. Really cool.

I hope you find the list useful. I will try to keep it up to date. If you find any interesting extensions worth mentioning, don't hesitate to drop me a comment.

Getting started with Ansible and configuring Windows hosts

Ansible is a configuration management, provisioning and deployment tool which is quickly gaining popularity in the DevOps area. It manages and works on various platforms, including Microsoft Windows.
What makes Ansible stand out from other configuration management tools is that it's agentless, which means no software is required on the target host. Ansible uses SSH for communication with Unix-based hosts and WinRM for Windows hosts.
A recent announcement from Microsoft's team is an upcoming fork of OpenSSH for Windows, which would make things even smoother for DevOps teams managing Windows infrastructure.

In this post we will get started with Ansible by:

  1. Setting up the control machine
  2. Configuring a Windows server to receive commands from Ansible
  3. Installing Chocolatey and SQL Server

Note that Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer on the hosts; on older operating systems like Server 2008 and Windows 7 you may need to upgrade these first.

If you covered the requirements, let’s get started with the first step.

Setup Ansible control machine

As previously mentioned, Ansible is agentless, but we need a control machine — the machine which talks to all of our hosts.

Ansible can’t run on Windows but there’s a trick

Currently Ansible can only be installed on Unix-based machines, but if you are using Windows as your primary OS, you can install the Ubuntu subsystem. Read this for further installation details. If you are not a Windows user, please continue reading.

Install Ansible

After the installation of the Ubuntu subsystem on Windows (if you needed it), let's proceed with the installation of Ansible by opening a terminal.

Install Ubuntu repository management:
$ sudo apt-get install software-properties-common

Let's update our system:
$ sudo apt-get update

Add Ansible repository:
$ sudo apt-add-repository ppa:ansible/ansible

Then install Ansible:
$ sudo apt-get install ansible

Add the Python package manager:
$ sudo apt install python-pip

Add Python WinRM client:
$ pip install pywinrm

Install XML parser:
$ pip install xmltodict

If everything went OK, you should be able to get the current version:
$ ansible --version

So far, so good. Let's continue with the configuration of the tool.

Configure Ansible

Inventory — the list of hosts

inventory.yml is the main configuration file listing your hosts' addresses, separated into groups with descriptive names.

Let's create that file, following the example below:
$ vim inventory.yml

Enter the IP/DNS addresses for your group:
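A minimal YAML inventory with a dbservers group might look like this (the addresses are placeholders; replace them with your own):

```yaml
all:
  children:
    dbservers:
      hosts:
        192.168.1.5:
        192.168.1.6:
```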



Configure the connection

We are a few steps away from establishing a connection to the remote servers. Let's configure the connection itself — credentials, ports, type of connection. The convention is to name the config file after your group of hosts.

If you want all of your inventory to use the same configuration file, you can name it all.yml. We will use all.yml, as all servers will have the same credentials and connection type.

Let's begin by creating a folder:
$ mkdir group_vars

Create the file and edit it:
$ vim group_vars/all.yml

Add the configuration details:

ansible_user: ansible_user
ansible_password: your_password_here
ansible_port: 5985
ansible_winrm_transport: basic
ansible_winrm_operation_timeout_sec: 60
ansible_winrm_read_timeout_sec: 70

These credentials will be used to access the remote hosts, with the connection set to WinRM basic authentication. We will create the user in the next section.
We use basic authentication here, but for your production environment you probably want a more secure scheme. See this article for more info.

Configure Windows hosts

Our Windows hosts need to be configured before we can execute any commands on them. The following PowerShell script will:

  1. Create the Ansible user we defined in all.yml
  2. Add the user to the Administrators group
  3. Set WinRM authentication to basic and allow unencrypted connections
  4. Add Firewall rule for WinRM with your control machine IP address

Open PowerShell on the host and execute the script:

NET USER ansible_user "your_password_here" /ADD
NET LOCALGROUP "Administrators" "ansible_user" /ADD
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
netsh advfirewall firewall add rule name="WinRM" dir=in action=allow protocol=TCP localport=5985 remoteip=

After the execution is completed, we can try to ping our host from the control machine to check that the connection is OK. We ping only the DB servers:
$ ansible dbservers -i inventory.yml -m win_ping
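If everything is configured correctly, you should see a success response similar to this (the host address will be yours):

```
192.168.1.5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```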

Write our first playbook

Getting back to our Ansible control machine, let's add a playbook — a set of tasks, or plays, executed against our hosts.

The goal is to install Chocolatey, the community-driven package manager for Windows. After that we will install SQL Server and reboot the server.

Ansible comes with many modules for Windows, offering a lot of functionality out of the box. They are prefixed with “win_”, for example win_feature. You can check here for more to cover your specific needs.

Let’s continue with the creation of the playbook file:
$ vim configure-win-server-playbook.yml

In the file describe the playbook as follows:

- hosts: dbservers
  tasks:
    - name: Install Chocolatey
      raw: Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString(''))

    - name: Install SQL Server
      win_chocolatey:
        name: sql-server-2017
        state: present

    - name: Reboot to apply changes
      win_reboot:
        reboot_timeout: 3600

Execute the playbook by typing:
$ ansible-playbook -i inventory.yml configure-win-server-playbook.yml

You will see each task running and returning its execution status, and after the reboot we are all done!


Ansible is a really powerful tool. Microsoft and the community are doing fantastic work porting Ansible modules, written in PowerShell, to Windows. The plan to have SSH on Windows is great too. No matter whether your inventory consists of physical or virtual servers, you should definitely try Ansible on your infrastructure to save time and money and, of course, to avoid the human mistakes that come with manually configuring, deploying or provisioning those environments.

Station is now on Linux!

Today we are more dependent than ever on web apps for our daily tasks. How do you usually organize them in order to manage them quicker and better? Do you keep bookmarks or type URLs by hand? What if I told you there is an app to rule them all? It's called Station – it's been on the market for a while, but recently they released the app on Linux. The main idea behind the app is to group all your web apps into one place and have easy access to them with one click. It has more than 100 apps already, and they continue to add more. A few months ago I requested Mega and ManageWP as apps, and they were added in a very short time. Still, there is one thing I miss: remembering the apps for your account so you won't have to add them each time you log on to a new computer. There is probably a reason for not having that feature yet, but you should definitely try the app.

Semantic UI React – Front end made easy

If Bootstrap is great for user interfaces, well, Semantic UI is brilliant. For React developers, there is a library available offering ready-made Semantic UI components. I decided to give it a try and spin it on the template Brady created, which uses Bootstrap. The result? Less, and more readable, code. In addition, I wrote only 1 line of CSS. Curious already? Get the fork from here. The library is still under version 1 (0.79 at the moment of writing), but has 100% component coverage of the original Semantic UI. You can check out the library here. If you have any issues or questions, drop me a line.

WordPress with WP-CLI on Bash on Ubuntu on Windows 10

At first sight this sounds ridiculous. In fact, it sounds absurd. But it's not, if you've heard of the new Microsoft feature — Bash on Ubuntu on Windows.

Windows 10’s Anniversary Update offered a big new feature for developers: A full, Ubuntu-based Bash shell that can run Linux software directly on Windows. This is made possible by the new “Windows Subsystem for Linux” Microsoft is adding to Windows 10.

In this post, I will setup fully functional WordPress installation with WP-CLI on top of LAMP server which will be installed on my Linux Subsystem through Windows 10.

Let’s do this!

First we need to install Bash on Ubuntu on Windows

Tutorial how to do it, can be found here. After the installation is completed, run the app as administrator. If you would like to go with different user than root, you can read this.

LAMP stack is the next task

WordPress requires PHP, MySQL and an HTTP server. We can go with Apache, so the LAMP toolset is all we need. This command will install PHP, MySQL and Apache in a moment: $ sudo apt-get install lamp-server^

When we have our server ready, we can start with the fun part — installing the command-line interface for WordPress.

Note that you must have PHP 5.3.29 or later and the WordPress version 3.7 or later.

Download the wp-cli.phar file via curl: $ curl -O

Now let's check if it is running: $ php wp-cli.phar --info

To use WP-CLI from the command line by typing wp, make the file executable and move it somewhere in your PATH. For example:

$ chmod +x wp-cli.phar
$ mv wp-cli.phar /usr/local/bin/wp

Check if it is working: $ wp --info

If everything went okay, you should see something like this:

$ wp --info
PHP binary:             /usr/bin/php5
PHP version:            5.5.9-1ubuntu4.14
php.ini used:           /etc/php5/cli/php.ini
WP-CLI root dir:        /home/wp-cli/.wp-cli
WP-CLI packages dir:    /home/wp-cli/.wp-cli/packages/
WP-CLI global config:   /home/wp-cli/.wp-cli/config.yml
WP-CLI project config:
WP-CLI version:         0.23.0

Run our local WordPress installation

After we have our environment and tools ready, playing with WP-CLI makes a new local WP installation just a few commands away.

Navigate to the web root: $ cd /var/www/html

This is where our WP will live. Remove the default index.html file. Also, now is a good time to start our Apache and MySQL services:

$ rm index.html
$ service apache2 start
$ service mysql start

Let’s download our WP core files: $ wp core download

This will download the latest version of WordPress in English (en_US). If you want to download another version or language, use the --version and --locale parameters. For example, to use the Bulgarian localization and version 4.2.2, you would type: $ wp core download --version=4.2.2 --locale=bg_BG

Once the download is completed, you can create the wp-config.php file using the core config command and passing your arguments for the database access here: $ wp core config --dbname=databasename --dbuser=databaseuser --dbpass=databasepassword --dbhost=localhost --dbprefix=prfx_

This command will use the arguments and create a wp-config.php file. Using the db command, we can now create our database: $ wp db create

This command will use the arguments from wp-config.php and do the job for us. Finally, to install WordPress, use the core install command: $ wp core install --url=http://localhost --title="WordPress Website Title" --admin_user=admin_user --admin_password=admin_password --admin_email=[email protected]

If you’ve got your success message, restart Apache: $ service apache2 restart

After the restart is completed, open your browser and type http://localhost/ and enjoy your brand new WP installation.

Note that every time you start the application you must first start up MySQL and Apache services:

$ service mysql start|restart|stop
$ service apache2 start|restart|stop

Final words

I like WP-CLI a lot for saving me time on boring stuff, but this may seem like a workaround, and it probably is. Still, nothing compares to a real Linux environment.
But let’s face the fact we are running native Linux software on Windows. That by itself is fantastic. The benefits are unlimited.

Keep in mind that the Bash on Ubuntu on Windows feature is still in beta, so let’s hope that Microsoft will keep up with the good news.

Please, share and comment in the section below for your opinion.

Further readings

How to access your Ubuntu Bash files in Windows and your Windows system drive in Bash

How to create and run Bash shell scripts on Windows 10