Analyze ASP.NET Core with React SPA in SonarCloud

SonarCloud is a well-known cloud-based tool for Static Code Analysis which supports most of the popular programming languages – JavaScript, TypeScript, Python, C#, Java and counting. Its self-hosted counterpart is SonarQube. SonarCloud is completely free for public repositories, and SonarQube is even open source. These characteristics make it my go-to tool for static code analysis for this project – setting up SonarCloud for an ASP.NET Core with React single page application.

This post is the second part of the series on Static Code Analysis for .NET Core projects. In the previous post we learned what Static Code Analysis is and introduced well-known tools for the job. If you missed that post, you can check it out here.

The agenda for today is:

  • Overview of the different source control management platforms in SonarCloud
  • Available options for analyzing your ASP.NET Core SPA app
  • Build pipeline in GitLab

I will use React for the demo, but you can use whatever framework you need for the job. React/Angular/Vue or any other – it doesn't really matter, the flow stays the same, only the build or test running commands may differ.

Shall we begin? Let's dive in!

Different source control management platforms

SonarCloud works with the most popular SCM platforms – GitHub, GitLab, BitBucket and Azure DevOps. The platforms differ, but declarative YAML pipeline execution is what they all have in common.

It's good to know that SonarCloud provides two scanners – one for .NET projects and one for everything else. The good news is that the dedicated .NET scanner can also analyze files from your frontend app – JavaScript, TypeScript, CSS and HTML files.

Let's quickly go over the platforms and then focus on GitLab with a full-blown setup from scratch.

GitHub

If you are using GitHub, there is a huge chance that you are already using GitHub Actions.

This is the easiest setup because SonarCloud generates the pipeline configuration for you. Of course you can use other CI tools such as CircleCI, Travis CI or any other, but then you have to set up dotnet-sonarscanner yourself. Check the Build pipeline in GitLab section, as it covers a very similar scenario.
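For reference, here is a minimal sketch of what such a workflow can look like for a .NET project (the project key, organization and branch names are placeholders, and the workflow SonarCloud generates for you may differ in the details):

name: SonarCloud
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0  # full history gives more accurate analysis of new code
      - name: Install the scanner
        run: dotnet tool install --global dotnet-sonarscanner
      - name: Build and analyze
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: |
          dotnet sonarscanner begin /k:"your_project_key" /o:"your_organization" /d:sonar.login="$SONAR_TOKEN" /d:sonar.host.url="https://sonarcloud.io"
          dotnet build
          dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN"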

BitBucket

Before going into BitBucket, beware that the platform doesn't (yet?) support building apps that target the full .NET Framework directly, but of course you can always use containers for the purpose.

SonarCloud doesn't provide any ready-to-go templates for .NET Core projects and BitBucket's pipelines, so you need to install and configure everything yourself.

Azure DevOps

I read somewhere that dotnet-sonarscanner was developed in partnership with Microsoft, so it's no wonder that the best SonarCloud integration is with the Azure DevOps platform.

To enable SonarCloud in your pipelines, first install the SonarCloud extension from the Visual Studio Marketplace and then follow the very descriptive guide, which mostly involves clicking and can easily be completed with the GUI builder.

GitLab

Nothing differs from the BitBucket setup. A full setup in GitLab comes later in the post.

Local (Manually)

  • Using the VSCode extension Sonar Dotnet gives you the ability to analyze directly from the editor. All the setup is through the GUI and reports are pushed to SonarCloud.
  • Using the CLI – to use the CLI you must have the .NET SDK, Java and the scanner installed, and you run the commands from the CI setup directly in the terminal (a minimal example follows below). Check the requirements in the official docs.
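As an illustration, a local run might look roughly like this (project key, organization and token are placeholders; the parameters mirror the CI setup shown later in the post):

dotnet tool install --global dotnet-sonarscanner
dotnet sonarscanner begin /k:"your_project_key" /o:"your_organization" /d:sonar.login="your_sonar_token" /d:sonar.host.url="https://sonarcloud.io"
dotnet build YourSolution.sln
dotnet sonarscanner end /d:sonar.login="your_sonar_token"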

Available options for analysis

On the road to analyzing the combined single page application, there are two paths we can take.

Option 1: Analyze frontend and backend at once

The dedicated scanner for .NET projects possesses the power to also scan JS, TS, HTML, CSS, etc. files. We only need to include the frontend's files with a wildcard in the .csproj as follows:

<ItemGroup>
    <!-- Don't publish the SPA source files, but do show them in the project files list -->
    <Content Remove="Frontend\**" />
    <None Remove="Frontend\**" />
    <None Include="Frontend\**" Exclude="Frontend\node_modules\**" />
</ItemGroup>

Or, if you are using .NET Core 3.1 or above, the default template already includes the frontend in your ASP.NET Core project in a similar way.

Option 2: Analyze frontend and backend separately

This option is useful when you have a monorepo with your backend and frontend in it, but they have separate startup processes or even different teams working on them. It requires creating two separate projects in SonarCloud, and it also requires using the default SonarCloud scanner for your frontend.

Build pipeline in GitLab

Let's recap everything we discussed so far and put it to work. To cover most of the cases for setting up SonarCloud analysis, I will walk you through the whole setup with an example project based on the ASP.NET Core with React SPA sample, using separate scan tasks for the frontend and backend.

Before we start, let's create an empty .gitlab-ci.yml file in the root directory.

For GitLab CI file reference checkout official docs: https://docs.gitlab.com/ee/ci/yaml/gitlab_ci_yaml.html

Frontend

We start by creating our frontend Sonar project, which needs to be done manually. Just enter a descriptive name and a project key and you are ready to go. Once done, Sonar will provide SONAR_TOKEN and SONAR_HOST_URL values. Make sure to add them as environment variables.

Next step is to define the variables for the CI job:

variables:
  SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
  GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task

After that come the stage definitions of the job. In this case we will have two – one for the frontend and one for the backend:

stages:
  - frontend
  - backend

Create the frontend's actual stage definition with the following task. You can have as many tasks in a stage as you like, but we will stick to just one:

frontend.build.test.analyze: 
  stage: frontend 
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - cd Frontend
    - npm install
    - npm run build
    - npm test
    - sonar-scanner
        -Dsonar.projectKey=sonar.example.frontend
        -Dsonar.organization=gmarokov-1
        -Dsonar.sources=src 
        -Dsonar.exclusions="/node_modules/**,/build/**,**/__tests__/**"
        -Dsonar.tests=src
        -Dsonar.test.inclusions=**/__tests__/**
        -Dsonar.javascript.lcov.reportPaths="coverage/lcov.info"
        -Dsonar.testExecutionReportPaths="reports/test-report.xml"
  only:
    - merge_requests
    - master
    - tags

A lot is happening in this task, so let's walk through it:

frontend.build.test.analyze

The name of the job; it's up to you to give it a descriptive name.

stage: frontend

The name of the stage this task belongs to. It must be predefined, which we did above.

image: # We can use existing docker images 
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]

Here we specify a Docker image which comes with sonar-scanner-cli preinstalled. This Scanner CLI is used for all languages except for Dotnet as I mentioned above.

cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache

We specify a cache for the scanner's .sonar directory so it doesn't have to be downloaded from scratch every time we run the job.

script:
    - cd Frontend
    - npm install
    - npm run build
    - npm test

Nothing fancy here, regular npm stuff, but note that the tests are run with a coverage report and with the jest-sonar-reporter package configured in package.json, which converts the test results to Generic Test Data – one of the formats supported by SonarCloud.
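For context, the relevant part of package.json might look roughly like this (a sketch assuming jest-sonar-reporter's standard configuration keys; adjust the scripts and paths to your project):

{
  "scripts": {
    "test": "jest --coverage"
  },
  "jest": {
    "testResultsProcessor": "jest-sonar-reporter"
  },
  "jestSonar": {
    "reportPath": "reports",
    "reportFile": "test-report.xml"
  }
}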

 - sonar-scanner
    -Dsonar.projectKey=sonar.example.frontend
    -Dsonar.organization=gmarokov-1
    -Dsonar.sources=src 
    -Dsonar.exclusions="/node_modules/**,/build/**,**/__tests__/**"
    -Dsonar.tests=src
    -Dsonar.test.inclusions=**/__tests__/**
    -Dsonar.javascript.lcov.reportPaths="coverage/lcov.info"
    -Dsonar.testExecutionReportPaths="reports/test-report.xml"

Here comes the actual scan. The required parameters are projectKey, organization, and the previously added SONAR_TOKEN and SONAR_HOST_URL, which are taken from the environment variables.

Then comes the configuration of the source directories, directories to exclude, test directories and the paths to the generated reports for coverage and test execution.

More about the parameters can be found here: https://docs.sonarqube.org/latest/analysis/analysis-parameters/

And our frontend is good to go. Coming next is the backend.

Backend

For the backend, another project needs to be created manually. Since we already have an environment variable named SONAR_TOKEN, you can save the token for this project as SONAR_TOKEN_BACKEND, for example. We will provide it explicitly anyway.

When it comes to the backend scan, it will be a little different since we will use the dedicated scanner for Dotnet.

backend.build.test.analyze:
  stage: backend
  image: gmarokov/sonar.dotnet:5.0
  script:
   - dotnet sonarscanner begin
        /k:"sonar.example.backend" /o:"gmarokov-1"
        /d:sonar.login="$SONAR_TOKEN_BACKEND"
        /d:sonar.host.url="$SONAR_HOST_URL"
        /d:sonar.exclusions="**/Migrations/**, /Frontend"
        /d:sonar.cs.opencover.reportsPaths="**/coverage.opencover.xml"
        /d:sonar.sources="/Backend/Backend.Api"
        /d:sonar.tests="/Backend/Backend.Api.Tests"
        /d:sonar.testExecutionReportPaths="SonarTestResults.xml"
   - dotnet build Backend/Backend.sln
   - dotnet test Backend/Backend.sln --logger trx /p:CollectCoverage=true /p:CoverletOutputFormat=opencover /p:ExcludeByFile="**/Migrations/*.cs%2CTemplates/**/*.cshtml%2Ccwwwroot/%2C**/*.resx"
   - dotnet-trx2sonar -d ./ -o ./Backend/SonarTestResults.xml
   - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN_BACKEND"
  only:
    - branches
    - master
    - tags

Let's walk through the whole task:

image: gmarokov/sonar.dotnet:5.0

Again, a Docker image which will be used to spin up a container in which we will execute our task. This image has the Dotnet SDK, a Java runtime, and the SonarDotnet and Dotnet-Trx2Sonar global tools. The image can be found on Docker Hub, and its Dockerfile looks like this:

# Image with Dotnet SDK, Java runtime, SonarDotnet and Dotnet-Trx2Sonar dotnet tools
FROM mcr.microsoft.com/dotnet/sdk:5.0-focal
ENV PATH="$PATH:/root/.dotnet/tools"

# Install Java runtime
RUN apt-get update
RUN apt install default-jre -y

# Install SonarCloud dotnet tool
RUN dotnet tool install --global dotnet-sonarscanner

# Install Trx2Sonar converter dotnet tool
RUN dotnet tool install --global dotnet-trx2sonar

You might spot the following suspicious parameter:

/p:ExcludeByFile="**/Migrations/*.cs%2CTemplates/**/*.cshtml%2Ccwwwroot/%2C**/*.resx"

That's because the underlying PowerShell parser fails to parse the comma as a separator, so we need to use the encoded value instead.

dotnet-trx2sonar -d ./ -o ./Backend/SonarTestResults.xml

The dotnet-trx2sonar tool will help us convert the .trx files (Visual Studio Test Results files) generated by xUnit to Generic Test Data, which is the format expected by SonarCloud. The converted file lets us browse the tests in the SonarCloud UI.

And that's it! The pipeline is ready to go and will provide analysis on every CI run. I also added some nice badges to show the SonarCloud analysis status directly in the repo.

The full demo project can be found on GitLab here.

Conclusion

The benefits of this type of analysis are enormous, and the setup can be dead simple. Yes, delivery is important, but static code analysis complements it perfectly, making delivery more predictable, secure and stable by catching common pitfalls and violations as early as when the developer writes or commits code.

If you haven't used any static code analysis tools before, now you don't have any excuse not to!

Resources

https://codeburst.io/code-coverage-in-net-core-projects-c3d6536fd7d7

https://community.sonarsource.com/t/coverage-test-data-generate-reports-for-c-vb-net/9871

https://dotnetthoughts.net/static-code-analysis-of-netcore-projects/

https://sonarcloud.io/documentation/analysis/scan/sonarscanner-for-msbuild/

https://sonarcloud.io/documentation/analysis/scan/sonarscanner/

Static Code Analysis for your .NET projects

What is Static Code Analysis

Every developer wants to write predictable, maintainable and high quality software. Unfortunately that's not always the case because of our human nature – we do make mistakes. That's why we try to automate everything related to the software development lifecycle: testing, deploying, running applications.

But what about the codebase? What do we do to enforce minimally complex and maintainable code, ensure proper code styles standards, prevent common pitfalls and violations, and predict what the code would do at runtime?

By applying RULES defined by your team, the platform or the programming language. And that’s what Static Code Analysis is all about.

Static Code Analysis can be simple manual inspection such as code review or automated via some of the tools we will overview in this blog post.
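To make this concrete, here is a small illustrative example: in a .NET project that has Roslyn analyzers enabled, rule severities can be declared in an .editorconfig file that both the compiler and the IDE respect (the rule IDs and values below are just examples):

root = true

[*.cs]
# Require validation of arguments of externally visible methods (CA1062)
dotnet_diagnostic.CA1062.severity = warning

# Style preferences can be enforced as suggestions
dotnet_style_qualification_for_field = false:suggestion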

Keep digging

Eric Dietrich wrote a very explanatory article about what exactly Static Analysis is, here:

https://blog.ndepend.com/static-analysis-explanation-everyone/

If you are curious about Dynamic Analysis you can also check out these articles:

https://securityboulevard.com/2021/02/dynamic-code-analysis-a-primer/

https://github.com/analysis-tools-dev/dynamic-analysis

https://www.overops.com/blog/static-vs-dynamic-code-analysis-how-to-choose-between-them/

This post is Part 1 of the Static Analysis series. In the next post we will set up SonarCloud for an ASP.NET Core + React SPA project in a CI pipeline.

Where to use Static Code Analysis

I found plenty of NuGet packages, IDE extensions and external services available on the market. That was hard to digest, and I have probably missed some very helpful tools. It would be great if you share your opinions or favorite tools for the job.

In development

Using build-time code analysis in Visual Studio / VS Code (or another preferred tool), we enable developers to quickly understand which rules are being broken. This lets them fix code earlier in the development lifecycle, and we avoid builds that fail later.

Extensions for Visual Studio Code

Extensions for Visual Studio

Other tools

In build pipelines

NuGet packaged analyzers are the easiest, and they will automatically run as your project builds on the build agents. When a build encounters a code quality error, you can immediately fail the build, send alerts, or apply any other actions you and your team need.
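For illustration, adding an analyzer is just a package reference in the project file; combined with treating warnings as errors, rule violations will then fail the build (the package choice and the floating version are only an example – pick and pin the analyzers your team prefers):

<ItemGroup>
  <!-- Example Roslyn analyzer package -->
  <PackageReference Include="SonarAnalyzer.CSharp" Version="*" PrivateAssets="all" />
</ItemGroup>

<PropertyGroup>
  <!-- Fail the build when analyzer warnings are raised -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>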

.NET Core SDK 3.0 or later comes with built-in analyzers for web APIs that expose OpenAPI (previously known as Swagger) documents. To enable the analyzer in your project, include the IncludeOpenAPIAnalyzers property in the project file:

<PropertyGroup>
    <IncludeOpenAPIAnalyzers>true</IncludeOpenAPIAnalyzers>
</PropertyGroup>

NuGet packages

Security analyzers

Different CI tools may provide their own tool for security analysis:

NuGet packages for the Test projects

External services

And many more. These are the ones I found easy to get started with, without installing and configuring additional software.

Conclusion

At first, issues raised by static code analysis might be considered overhead, but static code analysis brings huge benefits in the long term, which include (but are not limited to):

  • You have the confidence to release more frequently.
  • This results in having a quicker TTM (Time to Market).
  • Reduced business risks (data loss, vulnerabilities, application failures, ...)

Rules may sometimes get in your way and slow down your development, but you and your team are in charge of which rules to establish and which to ignore or disable completely.

In the next post I will configure SonarCloud for an ASP.NET Core + React SPA, so stay tuned.

Which are your favorite static code analysis tools? Please share your thoughts in the comments or create a PR in GitHub.

Happy analyzing 🙂

Resources

https://blog.tdwright.co.uk/2018/12/10/seven-reasons-that-roslyn-based-code-analysers-are-awesome/?preview=true

https://docs.microsoft.com/en-us/visualstudio/code-quality/?view=vs-2019

https://github.com/analysis-tools-dev/static-analysis

Node.js Restful API template with TypeScript, Fastify and MongoDB

Why

Have you recently started a new Node.js API project? Did you use a template or start the project from scratch?
I was asking myself the same questions, and I was looking for a minimal boilerplate for a while. There were so many options that it was hard to pick one.
Most of them use Express.js; others use ES5 or lack a test setup.
So I decided to spin up my own and reuse it in the future. Here is the repo on GitHub.

How

My setup has the following characteristics:

API

  • Node version 10 or later
  • TypeScript for obvious reasons
  • Fastify for its asynchronous nature and being faster than Express or Restify
  • Nodemon in development for watching for changes and restarting the server

Data

  • MongoDB with Mongoose
  • Docker for MongoDB service instead of installing it

Tests

  • Jest for being the de-facto in Node testing
  • In-memory MongoDB server to easily mock the DB (see the sketch below)
  • Coveralls as the coverage collector after the Jest report is generated
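As a rough sketch of how the in-memory database is wired into the tests (assuming mongodb-memory-server v7+ and Mongoose; the hook placement is illustrative):

import { MongoMemoryServer } from 'mongodb-memory-server';
import mongoose from 'mongoose';

let mongod: MongoMemoryServer;

// Spin up an in-memory MongoDB instance before the test suite runs
beforeAll(async () => {
  mongod = await MongoMemoryServer.create();
  await mongoose.connect(mongod.getUri());
});

// Tear everything down so Jest can exit cleanly
afterAll(async () => {
  await mongoose.disconnect();
  await mongod.stop();
});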

Code formatting and static analysis

  • ESLint config
  • Prettier config attached to the linter
  • Editor config

Documentation

  • Swagger UI for API documentation
  • Postman collections attached from testing the endpoints

CI

  • Continuous integration in Travis CI.
    Steps:
  1. Install dependencies
  2. Run tests
  3. Collect coverage and pass it to Coverall

And that's it! I hope it's minimal enough.
Please share some ideas for improvement if you have any. I thought of API versioning but Fastify seems to support that out of the box.
API key authentication was also something I was considering, but I wasn’t sure how exactly to implement it. If you have something in mind would love to discuss it in the comments.
Happy coding!

Configure your dev Windows machine with Ansible

Ansible is well known in the IT operations field for its fantastic automation abilities.
You can do whatever you want with Windows too, whether it's a PowerShell script, a bat script or one of the more than one hundred Windows modules.
I will use it to configure my personal machine and save the hassle every time I step onto a new one.
It's not a big deal to install a few programs, but I'm sure this will pay off in the long term. It can be pretty useful for configuring multiple machines too.
Using Ansible to target localhost on Linux is like click-click-go, but it's different when it comes to Windows.
We need to install WSL on Windows, install Ansible on WSL, enable WinRM on Windows and finally control it from WSL.
And all this happens on your localhost.

Install Ansible on WSL

Enable WSL:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Install the Ubuntu distribution, but you may choose to install whatever distro you want:
Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1604 -OutFile Ubuntu.appx -UseBasicParsing

Start your WSL and update packages:
sudo apt-get update

Install Ansible:

sudo apt install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible

Enable WinRM

By default WinRM works only on Private or Domain networks. You can skip that check by passing the -SkipNetworkProfileCheck parameter to Enable-PSRemoting, but I don't suggest doing that. Instead, make your trusted network private.
Then enable WinRM by running Enable-PSRemoting in PowerShell.
Enable Basic Auth: Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
Enable Unencrypted connection: Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
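For example, switching the current network profile to Private (as suggested above) can be done from an elevated PowerShell session; the interface alias below is just an example, so check yours first:

# List current network profiles and their categories
Get-NetConnectionProfile

# Mark the trusted network as Private (adjust the interface alias to match your machine)
Set-NetConnectionProfile -InterfaceAlias "Wi-Fi" -NetworkCategory Private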

Run the playbook

Using ansible-pull, run the playbook, which installs Chocolatey and a few packages from it. Full details can be found in the repo. Provide your Windows username and password.

ansible-pull -U https://github.com/gmarokov/ansible-win-postinstall.git -e ansible_user=your_win_user -e ansible_password=your_win_user_password

Conclusion

I'm sure you as a developer make a lot of tweaks to your OS – me too. I have a few more tweaks to add, and it would be great to share some ideas and extend the setup even further.

Getting started with Hangfire on ASP.NET Core and PostgreSQL on Docker

Hangfire is an incredibly easy way to perform fire-and-forget, delayed and recurring jobs inside ASP.NET applications. No Windows Service or separate process required. Backed by persistent storage. Open and free for commercial use.

There are a number of use cases when you need to perform background processing in a web application:

  • mass notifications/newsletter
  • batch import from xml, csv, json
  • creation of archives
  • firing off web hooks
  • deleting users
  • building different graphs
  • image/video processing
  • purge temporary files
  • recurring automated reports
  • database maintenance

and counting..

We will get started by installing and configuring the database, then create a new ASP.NET Core MVC project, after which we will get to Hangfire and run a few background tasks with it.

Setup PostgreSQL database

There is more than one way to set up a PostgreSQL database. I'm about to use Docker for the purpose, but you can also install it directly from the official PostgreSQL website.

If you choose to download and install PostgreSQL, skip the following Docker commands. Instead, configure your DB instance with the parameters from the Docker example.

Otherwise, we need Docker installed and running. Let's proceed with pulling the image for PostgreSQL. Open a terminal and run:
$ docker pull postgres

We have the image, let’s create a container from it and provide username and password for the database:
$ docker run -d -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres postgres

Create ASP.NET Core MVC project

So far we have the DB up and running; let's continue with the creation of the MVC project and configure it to use our database.

Create a new folder and enter it:
$ mkdir aspnet-psql-hangfire && cd aspnet-psql-hangfire

When creating the new project, you can go with whatever you want from the list of available dotnet project templates. I'll stick to mvc.
$ dotnet new mvc

Next, install the NuGet package for the Entity Framework Core provider for PostgreSQL:
$ dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL

Add an empty DbContext:

using Microsoft.EntityFrameworkCore;

namespace aspnet_psql_hangfire.Models
{
    public class DefaultDbContext : DbContext
    {
        public DefaultDbContext(DbContextOptions<DefaultDbContext> options)
            : base(options) { }
    }
}

Restore the packages by running:
$ dotnet restore

Edit appsettings.json and enter the connection string:

{
    "connectionStrings": {
        "defaultConnection": "Host=localhost;Port=5432;Username=postgres;Password=postgres;Database=aspnet-psql-hangfire-db"
    },
    "Logging": {
        "LogLevel": {
            "Default": "Warning"
        }
    },
    "AllowedHosts": "*"
}

The framework must know that we want to use a PostgreSQL database, so register the provider in your Startup.cs file within the ConfigureServices method:

services.AddEntityFrameworkNpgsql().AddDbContext<DefaultDbContext>(options => {
    options.UseNpgsql(Configuration.GetConnectionString("defaultConnection"));
});

We are ready for an initial migration:
$ dotnet ef migrations add InitContext && dotnet ef database update

Install Hangfire

Let's continue with the final steps — installing the packages for Hangfire:
$ dotnet add package Hangfire.AspNetCore && dotnet add package Hangfire.Postgresql

Add the following using statements to Startup.cs:

using Hangfire;
using Hangfire.PostgreSql;

Again in the ConfigureServices method in Startup.cs, tell the Hangfire server to use our default connection string:

services.AddHangfire(x =>
    x.UsePostgreSqlStorage(Configuration.GetConnectionString("defaultConnection")));

Again in Startup.cs, but now in the Configure method, enter:

app.UseHangfireDashboard(); //Will be available under http://localhost:5000/hangfire"
app.UseHangfireServer();

Then restore the packages again by typing:
$ dotnet restore

Create tasks

In the Configure method, below app.UseHangfireServer(), add the following tasks:

//Fire-and-Forget
BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget"));

//Delayed
BackgroundJob.Schedule(() => Console.WriteLine("Delayed"), TimeSpan.FromDays(1));

//Recurring
RecurringJob.AddOrUpdate(() => Console.WriteLine("Minutely Job"), Cron.Minutely);

//Continuation
var id = BackgroundJob.Enqueue(() => Console.WriteLine("Hello, "));
BackgroundJob.ContinueWith(id, () => Console.WriteLine("world!"));

And finally run the app:
$ dotnet run

Hangfire task being executed

Observe the console. Now go to the dashboard provided by Hangfire at http://localhost:5000/hangfire for more task info.

Hangfire dashboard

Summary

Keep in mind that the dashboard is only available for localhost connections. If you would like to use it in production, you have to apply authentication methods. There are plenty of tutorials describing how to do that.
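As a rough sketch of how that can be done (the filter name and policy are illustrative), Hangfire exposes an IDashboardAuthorizationFilter you can implement and pass to UseHangfireDashboard:

using Hangfire.Dashboard;

public class AllowAuthenticatedUsersFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // Only let authenticated users open the dashboard
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity?.IsAuthenticated == true;
    }
}

// In Startup.Configure, instead of the plain app.UseHangfireDashboard():
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    Authorization = new[] { new AllowAuthenticatedUsersFilter() }
});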

Here is the repo for the project. I hope you liked it. Happy coding!

My favorite Visual Studio Code productivity extensions

As a developer you've probably already met Visual Studio Code, Microsoft's lightweight code editor. If you haven't, you definitely should try it. Today I'm going to share my VS Code extensions, together with a missing feature for which I recently found a fix – syncing extensions, themes and configuration. What happens when you step onto a new machine? You have to customize all your key bindings, download all your plugins and set up your theme preferences. And you probably want to sync that across all your devices. Before getting to that, I will share all the productivity plugins which make my day, separated by category.

C# extensions

  1. C# – Full IntelliSense for C#. A MUST for .NET developers
  2. C# XML Documentation Comments – most Visual Studio users are very familiar with this. Type “/// + TAB” and you get nice documentation stubs for your classes and members.
  3. C# Extensions – Pretty nice addition to the C# extension. Providing quick scaffolding of classes, interfaces etc.
  4. NET Core Test explorer – Browse, run and debug tests directly in the editor.
  5. Nuget package manager – No need of explanation.

Git extensions

  1. Git History – This extension gives you full feature Git client in the IDE. Search commits, merge and compare branches and more.
  2. gitignore – Remove files from source code tracking from the file context menu.
  3. GitLens – Track authors, dates directly in the file.

JavaScript extensions

  1. TSLint – Analysis tool that checks TypeScript code for readability, maintainability, and functionality errors.
  2. Babel JavaScript – Syntax highlighting for today’s JavaScript.
  3. Npm IntelliSense – You get npm modules autocomplete in import statements.
  4. ESLint – Linting utility for JavaScript and JSX.
  5. Debugger for Chrome – Debug your JS app directly in the browser.

Utilities extensions

  1. REST client – Allows you to send HTTP request and review responses.
  2. Docker – Adds syntax highlighting, commands, hover tips, and linting for Dockerfile and docker-compose files.
  3. Path IntelliSense – Plugin that auto completes filenames.
  4. Auto Close Tag – Automatically add HTML/XML close tag, same as Visual Studio IDE or Sublime Text does.
  5. VS Live Share – Real-time collaborative development.
  6. Auto Rename Tag – Auto rename paired HTML/XML tag.
  7. VSCode great icons – File specific icons for improved visual grepping.
  8. SQLTools – Execute queries, auto complete, bookmarks etc.
  9. PHP IntelliSense – Advanced PHP IntelliSense.
  10. IntelliCode for VS — A set of capabilities that provide AI-assisted development. Still in preview, but worth trying.

More?

Cobalt2 theme – Using the Cobalt2 theme feels so good. In addition, it's considered the best theme for your eyeballs.

Settings Sync – This is the missing piece of the puzzle. VS Code doesn't support this type of synchronization by default. This extension will save all your custom settings, themes and extensions. Whenever you switch PCs or just start with a new one, setup takes as little as 5 minutes and you have your favorite extensions and settings synced. Really cool.

I hope you find the list useful. I will try to keep it up to date. If you find any interesting extensions worth mentioning, don't hesitate to drop me a comment.

Getting started with Ansible and configuring Windows hosts

Ansible is a configuration management, provisioning and deployment tool which is quickly gaining popularity in the DevOps area, managing and working on various platforms including Microsoft Windows.
What makes Ansible stand out from other configuration management tools is that it's agentless, which means no software is required on the target host. Ansible uses SSH for communication with Unix-based hosts and WinRM for Windows hosts.
A recent announcement from Microsoft's team is an upcoming fork of OpenSSH for Windows, which would make things even smoother for DevOps teams managing Windows infrastructure.

In this post we will get started with Ansible by:

  1. Setup of the control machine
  2. Configure Windows server in order to receive commands from Ansible
  3. Install Chocolatey and SQL Server

Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function on older operating systems like Server 2008 and Windows 7.
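You can quickly check the installed PowerShell version on a host before you start (a simple illustrative check, run in PowerShell):

# Shows the installed PowerShell version; it should be 3.0 or higher
$PSVersionTable.PSVersion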

If you have covered the requirements, let's get started with the first step.

Setup Ansible control machine

As previously mentioned, Ansible is agentless, but we need a control machine — the machine which talks to all of our hosts.

Ansible can’t run on Windows but there’s a trick

Currently Ansible can only be installed on Unix-based machines, but if you are using Windows as your primary OS, you can install the Ubuntu subsystem. Read this for further installation details. If you are not a Windows user, please continue reading.

Install Ansible

After the installation of the Ubuntu subsystem on Windows (if you needed it), let's proceed with the installation of Ansible by opening a terminal.

Install Ubuntu repository management:
$ sudo apt-get install software-properties-common

Let's update our system:
$ sudo apt-get update

Add Ansible repository:
$ sudo apt-add-repository ppa:ansible/ansible

Then install Ansible:
$ sudo apt-get install ansible

Add the Python package manager:
$ sudo apt install python-pip

Add Python WinRM client:
$ pip install pywinrm

Install XML parser:
$ pip install xmltodict

If everything went OK, you should be able to get the current version:
$ ansible --version

So far, so good. Let's continue with the configuration of the tool.

Configure Ansible

Inventory — list of the hosts

inventory.yml is the main configuration file listing your hosts' addresses, separated into groups with descriptive names.

Let’s create that file and set the example below:
$ vim inventory.yml

Enter the IP/DNS addresses for your group:

[dbservers]
mydbserver1.dns.example ansible_host=80.69.0.160

[webservers]
mywebserver1.dns.example ansible_host=80.69.0.162

Configure the connection

We are a few steps away from establishing a connection to the remote servers. Let's configure the connection itself — credentials, ports, type of connection. The convention is to name the config file after your group of hosts.

If you want all of your inventory to use the same configuration, you can name the file all.yml. We will use all.yml, as all servers will have the same credentials and connection type.

Let's begin by creating a folder:
$ mkdir group_vars

Create the file and edit it:
$ vim group_vars/all.yml

Add the configuration details:

ansible_user: ansible_user
ansible_password: your_password_here
ansible_port: 5985
ansible_connection: winrm
ansible_winrm_transport: basic
ansible_winrm_operation_timeout_sec: 60
ansible_winrm_read_timeout_sec: 70

These credentials will be used to access the remote hosts, with the connection set to WinRM basic authentication. We will create the user in the next section.
We use basic authentication, but for your production environment you probably want to use a more secure scheme. See this article for more info.

Configure Windows hosts

Our Windows hosts need to be configured before we can execute any commands on them. The following PowerShell script will:

  1. Create the Ansible user we defined in all.yml
  2. Add the user to the Administrators group
  3. Set WinRM authentication to basic and allow unencrypted connections
  4. Add Firewall rule for WinRM with your control machine IP address

Open PowerShell on the host and execute the script:

NET USER ansible_user "your_password_here" /ADD
NET LOCALGROUP "Administrators" "ansible_user" /ADD
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
netsh advfirewall firewall add rule name="WinRM" dir=in action=allow protocol=TCP localport=5985 remoteip=10.10.1.2

After the execution is completed, we can try to ping our hosts from the control machine to check that the connection is OK. We ping only the DB servers:
$ ansible dbservers -i inventory.yml -m win_ping

Write our first playbook

Getting back to our Ansible control machine, let's add a playbook — a set of tasks or plays which together form the playbook.

The goal is to install Chocolatey, which is the community-driven package manager for Windows. After that we will install SQL Server and reboot the server.

Ansible comes with many modules for Windows, offering a lot of functionality out of the box. They are prefixed with “win_”, for example win_feature. You can check here for more, depending on your specific needs.

Let’s continue with the creation of the playbook file:
$ vim configure-win-server-playbook.yml

In the file describe the playbook as follows:

---
- hosts: dbservers
  tasks:
    - name: Install Chocolatey
      raw: Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

    - name: Install SQL Server
      win_chocolatey:
        name: sql-server-2017
        state: present

    - name: Reboot to apply changes
      win_reboot:
        reboot_timeout: 3600

Execute the playbook by typing:
$ ansible-playbook -i inventory.yml configure-win-server-playbook.yml

You will see each task running and returning its execution status, and after the reboot we are all ready!

Conclusion

Ansible is a really powerful tool. Microsoft and the community are doing fantastic work porting Ansible modules, written in PowerShell, to Windows. The plan to have SSH on Windows is great too. No matter whether your inventory consists of physical or virtual servers, you should definitely try out Ansible on your infrastructure to save time and money, and of course to avoid the human mistakes that come with manually configuring, deploying or provisioning those environments.

Station is now on Linux!

Today we are more dependent than ever on many web apps in our daily tasks. How do you usually organize them so you can manage them quicker and better? Do you keep bookmarks, or do you type URLs? What if I tell you that there is an app to rule them all? It's called Station – it's been on the market for a while, but recently they released the app on Linux. The main idea behind the app is to group all your web apps into one place and have easy access to them with one click. It supports more than 100 apps already and they continue to add more. A few months ago I requested Mega and ManageWP as apps, and they added them in a very short time. Still, there is one thing I miss: remembering your apps for your account, so you won't have to add them each time you log on to a new computer. There is probably a reason that isn't a feature yet, but you should definitely try the app.

Semantic UI React – Front end made easy

If Bootstrap is great for user interfaces, well, Semantic UI is brilliant. For React developers, there is a library available offering ready-made Semantic UI components. I decided to give it a try and spin it up on the template Brady created, which uses Bootstrap. The result? A smaller and more readable codebase. In addition, I wrote only 1 line of CSS. Curious already? Get the fork from here. The library is still below version 1 (0.79 at the moment of writing), but has 100% component coverage of the original Semantic UI. You can check out the library here. If you have any issues or questions, drop me a line.
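For a taste of what the components look like (a minimal sketch; it assumes semantic-ui-react and the semantic-ui-css stylesheet are installed):

import React from 'react';
import { Button, Container, Header } from 'semantic-ui-react';
import 'semantic-ui-css/semantic.min.css';

// A tiny page built entirely from Semantic UI React components – no custom CSS needed
const Example = () => (
  <Container text>
    <Header as="h1">Hello Semantic UI React</Header>
    <Button primary>Click me</Button>
  </Container>
);

export default Example;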

WordPress with WP-CLI on Bash on Ubuntu on Windows 10

At first sight this sounds ridiculous. In fact it sounds absurd. But it's not, if you have heard of the new Microsoft feature — Bash on Ubuntu on Windows.

Windows 10’s Anniversary Update offered a big new feature for developers: A full, Ubuntu-based Bash shell that can run Linux software directly on Windows. This is made possible by the new “Windows Subsystem for Linux” Microsoft is adding to Windows 10.

In this post, I will set up a fully functional WordPress installation with WP-CLI on top of a LAMP server, which will be installed on my Linux subsystem through Windows 10.

Let’s do this!

First we need to install Bash on Ubuntu on Windows

A tutorial on how to do it can be found here. After the installation is completed, run the app as administrator. If you would like to go with a different user than root, you can read this.

LAMP stack is the next task

WordPress requires PHP, MySQL and an HTTP server. We can go with Apache, so the LAMP toolset is all we need. This command will install PHP, MySQL and Apache in a moment: $ sudo apt-get install lamp-server^

When we have our server ready we can start with the fun part — installing our command-line interface for WordPress

Note that you must have PHP 5.3.29 or later and the WordPress version 3.7 or later.

Download the wp-cli.phar file via curl: $ curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Now let's check if it is running: $ php wp-cli.phar --info

To use WP-CLI from the command line by typing wp, make the file executable and move it somewhere in your PATH. For example: $ chmod +x wp-cli.phar $ mv wp-cli.phar /usr/local/bin/wp

Check, if it is working: $ wp --info

If everything went okay, you should see something like this:

$ wp --info
PHP binary:            /usr/bin/php5
PHP version:           5.5.9-1ubuntu4.14
php.ini used:          /etc/php5/cli/php.ini
WP-CLI root dir:       /home/wp-cli/.wp-cli
WP-CLI packages dir:   /home/wp-cli/.wp-cli/packages/
WP-CLI global config:  /home/wp-cli/.wp-cli/config.yml
WP-CLI project config:
WP-CLI version:        0.23.0

Run our local WordPress installation

After we have our environment and tools ready, WP-CLI makes a new local WP installation just a few commands away.

Navigate to: $ cd ../var/www/html

This is where our WP will live. Remove the default index.html file. Also, now is a good time to start our Apache and MySQL services: $ rm index.html $ service apache2 start $ service mysql start

Let’s download our WP core files: $ wp core download

This will download the latest version of WordPress in English (en_US). If you want to download another version or language, use the --version and --locale parameters. For example, to use the Bulgarian localization and 4.2.2 version, you would type: $ wp core download --version=4.2.2 --locale=bg_BG

Once the download is completed, you can create the wp-config.php file using the core config command and passing your arguments for the database access here: $ wp core config --dbname=databasename --dbuser=databaseuser --dbpass=databasepassword --dbhost=localhost --dbprefix=prfx_

This command will use the arguments and create a wp-config.php file. Using the db command, we can now create our database: $ wp db create

This command will use the arguments from wp-config.php and do the job for us. Finally, to install WordPress, use the core install command: $ wp core install --url=example.com --title="WordPress Website Title" --admin_user=admin_user --admin_password=admin_password --admin_email=your_email@example.com

If you’ve got your success message, restart Apache: $ service apache2 restart

After the restart is completed, open your browser and type http://localhost/ and enjoy your brand new WP installation.

Note that every time you start the application you must first start up MySQL and Apache services:

$ service mysql start|restart|stop $ service apache2 start|restart|stop

Final words

I like WP-CLI a lot for saving me time on boring stuff, but this may seem like a workaround, and it probably is. Still, nothing compares to a real Linux environment.
But let's face it: we are running native Linux software on Windows. That by itself is fantastic. The benefits are unlimited.

Keep in mind that the Bash on Ubuntu on Windows feature is still in beta, so let’s hope that Microsoft will keep up with the good news.

Please share your opinion in the comments section below.

Further readings

How to access your Ubuntu Bash files in windows and your Windows system drive in Bash

How to create and run Bash shell scripts on Windows 10

http://wp-cli.org/docs/tools/