Kubernetes: Build, Deploy, and Scale a NodeJS Application

Kubernetes has become the de facto standard for container orchestration. If you’re looking to scale a NodeJS application efficiently while ensuring high availability, Kubernetes is an excellent fit. In this guide, we’ll walk through building, deploying, and scaling a NodeJS application on Kubernetes, step by step. Let’s jump right in.


Step 1: Build Your NodeJS Application

Create a Simple NodeJS App

Here’s a basic Express.js app:

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello, Kubernetes!');
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

Save this as app.js.


Create a Dockerfile

Containerize your application with Docker. Add the following to a file named Dockerfile:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
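To keep the image small and avoid copying local artifacts into it, it also helps to add a .dockerignore file next to the Dockerfile. A minimal sketch (adjust the entries to your project):

```
node_modules
npm-debug.log
.git
```

Excluding node_modules matters in particular: the RUN npm install step rebuilds dependencies inside the image, so host-installed modules would only bloat the build context.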

Build and Push the Docker Image

Run the following commands to build and push your Docker image:

docker build -t <your-dockerhub-username>/nodejs-app:v1 .
docker login
docker push <your-dockerhub-username>/nodejs-app:v1

Replace <your-dockerhub-username> with your Docker Hub username.


Step 2: Deploy to Kubernetes

Create a Deployment YAML

Write a deployment file deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
      - name: nodejs-app
        image: <your-dockerhub-username>/nodejs-app:v1
        ports:
        - containerPort: 3000

Apply the Deployment

Use kubectl to apply the deployment:

kubectl apply -f deployment.yaml

Expose the Application

Create a service to expose the deployment:

kubectl expose deployment nodejs-app --type=LoadBalancer --port=80 --target-port=3000
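kubectl expose is a shortcut; the same Service can be written declaratively, which is easier to version-control alongside deployment.yaml. A sketch of roughly what the command above generates (field values mirror the command; verify against your cluster with kubectl get svc nodejs-app -o yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app
spec:
  type: LoadBalancer
  selector:
    app: nodejs-app
  ports:
  - port: 80
    targetPort: 3000
```

Note that type LoadBalancer only provisions an external IP on clusters with a load-balancer integration (typically cloud providers); on a local cluster such as minikube, use --type=NodePort or `minikube service nodejs-app` instead.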

Step 3: Scale the Application

Scale Manually

Increase the number of replicas to handle more traffic:

kubectl scale deployment nodejs-app --replicas=5

Set Up Horizontal Pod Autoscaler (HPA)

Enable automatic scaling based on CPU usage:

kubectl autoscale deployment nodejs-app --cpu-percent=50 --min=2 --max=10
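For the HPA to compute a CPU percentage it needs two prerequisites: the metrics-server add-on must be running in the cluster, and the container must declare a CPU request to measure against. A sketch of the resources block to add under the container in deployment.yaml (the request and limit values here are illustrative, not from the original):

```yaml
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
```

If metrics-server is not installed, `kubectl get hpa` will show `<unknown>` for CPU; it can typically be installed with `kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml` (check your cluster’s documentation for the supported method).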

Step 4: Test Your Setup

Access the Application

Get the external IP of the service:

kubectl get svc

Open the IP in your browser. You should see “Hello, Kubernetes!”.


Simulate Traffic

Use a load testing tool like Apache Bench or wrk to simulate traffic and observe scaling:

ab -n 1000 -c 50 http://<external-ip>/

Step 5: Monitor and Manage

Leverage tools like Prometheus and Grafana for monitoring. Use Kubernetes Dashboard for real-time management.

Centralized vs. Decentralized Configuration Management: Using Ansible for Both

When managing IT infrastructure, configuration management is key to automating system setup and maintenance. Two common approaches are centralized and decentralized configuration management. This article explores both, showing how Ansible can support each strategy, with practical playbook examples and inventory definitions for the target nodes.

What is Centralized Configuration Management?

In centralized configuration management, all configuration tasks are handled from a single central point. Configurations are stored in one place and pushed to target nodes across the network. This method ensures consistency across systems, but it also introduces the risk of a single point of failure.

Ansible Playbook Example for Centralized Configuration

To demonstrate centralized configuration, let’s manage the setup of web servers (e.g., Nginx) from a central control server.

1. Create the Playbook (nginx_configure.yml)

---
- name: Configure Nginx on Web Servers
  hosts: web_servers
  become: yes
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present

    - name: Start Nginx service
      service:
        name: nginx
        state: started
        enabled: yes

    - name: Copy Nginx configuration file
      copy:
        src: /path/to/local/nginx.conf
        dest: /etc/nginx/nginx.conf
        owner: root
        group: root
        mode: '0644'

2. Define Target Nodes (Inventory File)

In the inventory file (hosts.ini), list all web servers that will receive the configurations:

[web_servers]
web01.example.com
web02.example.com
web03.example.com

3. Run the Playbook

Run the following command to apply the configuration to all target nodes:

ansible-playbook -i hosts.ini nginx_configure.yml

Benefits of Centralized Configuration Management

  • Consistency: Ensures that all systems are configured in the same way.
  • Simplicity: Easier to manage because all configurations are controlled from one place.
  • Security: Centralized control helps enforce security standards.

Challenges

  • Single Point of Failure: If the central server goes down, configuration management may be disrupted.
  • Scalability: Managing a large number of nodes from a single point can become complex.

What is Decentralized Configuration Management?

In decentralized configuration management, configuration tasks are spread out across multiple teams or nodes, each managing its own configurations. This provides more flexibility and reduces reliance on a single control point.

Ansible Playbook Example for Decentralized Configuration

In a decentralized setup, each department may manage its own infrastructure, such as the finance and marketing departments configuring their own servers independently.

1. Finance Department Playbook

---
- name: Configure Database Server for Finance
  hosts: finance_db_servers
  become: yes
  tasks:
    - name: Install MySQL
      apt:
        name: mysql-server
        state: present

    - name: Configure MySQL for secure access
      lineinfile:
        path: /etc/mysql/my.cnf
        regexp: '^bind-address'
        line: 'bind-address = 0.0.0.0'
      notify:
        - restart mysql

  handlers:
    - name: restart mysql
      service:
        name: mysql
        state: restarted

2. Marketing Department Playbook

---
- name: Configure Web Analytics Server for Marketing
  hosts: marketing_web_servers
  become: yes
  tasks:
    - name: Install Apache and PHP
      apt:
        name: "{{ item }}"
        state: present
      loop:
        - apache2
        - php
        - libapache2-mod-php

    - name: Copy web analytics configuration
      copy:
        src: /path/to/local/analytics.conf
        dest: /etc/apache2/sites-available/analytics.conf
        owner: root
        group: root
        mode: '0644'

3. Define Target Nodes (Inventory File)

Each department defines its own target nodes:

[finance_db_servers]
finance-db01.example.com
finance-db02.example.com

[marketing_web_servers]
marketing-web01.example.com 
marketing-web02.example.com

4. Run the Playbooks

Each team runs its playbook independently:

ansible-playbook -i finance_hosts.ini finance_db_configure.yml
ansible-playbook -i marketing_hosts.ini marketing_web_configure.yml

Benefits of Decentralized Configuration Management

  • Flexibility: Teams have the autonomy to configure servers based on their own needs.
  • Reduced Risk of Failure: Failure in one department’s configuration won’t affect others.
  • Scalability: Each team can scale its infrastructure independently.

Challenges

  • Inconsistency: Different teams may configure systems differently, leading to potential conflicts.
  • Lack of Central Oversight: Without a central system, it’s harder to ensure all systems meet corporate standards.
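One common way to soften both challenges is for a central platform team to publish a shared role that every department applies before its own tasks, keeping autonomy while enforcing a common baseline. A sketch for the finance playbook (the role name common_baseline is an assumption for illustration, not from the original):

```yaml
---
- name: Apply corporate baseline before department-specific config
  hosts: finance_db_servers
  become: yes
  roles:
    - common_baseline   # shared role published by a central team
  tasks:
    - name: Department-specific tasks follow here
      debug:
        msg: "Baseline applied"
```

Each department keeps its own inventory and playbooks, but the shared role gives the organization a single place to update security and compliance settings.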

Conclusion

Whether using centralized or decentralized configuration management, Ansible provides a flexible and powerful solution to automate system configurations. In a centralized setup, Ansible simplifies the process by managing all configurations from one control point, ensuring consistency. In decentralized environments, Ansible allows teams to maintain their own configurations independently, offering flexibility and scalability. Both approaches benefit from Ansible’s automation capabilities, making it an essential tool for efficient infrastructure management.

Progressive Web Apps: Offline Web Applications Using Service Workers

Google’s documentation describes service workers as a way to bring features typically reserved for native apps to the web, such as offline experiences, background sync, and push notifications. These capabilities all depend on the technical backbone that service workers provide.

Think of a service worker as an assistant running in the background, capable of performing tasks that web pages normally can’t. Unlike traditional browser scripts, service workers run asynchronously and operate independently of the webpage. They can intercept scope-wide events such as network fetches, cache resources, and respond to network requests. This combination allows web apps to function offline, which has long been one of the advantages native apps held over web-based ones. With service workers, the web can catch up, making web apps nearly as versatile as native apps. In fact, service workers have effectively replaced the now-deprecated HTML5 Application Cache. But there’s a catch: service workers are HTTPS-only.

One of the significant advantages of service workers is that they let web apps serve cached assets first, before ever making a network request. This offline functionality, common in native apps, is a major reason people still prefer them; service workers close that gap for web applications.

But are service workers universally supported?

Service workers were first drafted in May 2014, and browser support has steadily increased since. Chrome, Firefox, Edge, and Opera support all of the main APIs, and Safari has shipped support as well (since Safari 11.1). You can check the full compatibility status for each browser on caniuse.com.

Let’s walk through how to set up a service worker. The following code registers the service worker on your webpage. Once registered, you can begin running tasks using the worker. Note that the serviceWorker.js file should reside in the root directory of your application.

(function () {
    if ("serviceWorker" in navigator) {
        navigator.serviceWorker.register('serviceWorker.js', { scope: "./" }) // setting scope of sw
        .then(function(registration) {
          console.info('Service worker is registered!');
        })
        .catch(function(error) {
          console.error('Service worker failed ', error);
        });
    }
})();

Inside serviceWorker.js, you’ll find the install event, which fires when the service worker is installing itself. Here, we attach a callback to cache the array of files. The promise will resolve once the installation is complete.

let cacheName = 'res-1.0';

let files = [
    'js/bootstrap.min.js',
    'js/libraries.js',
    'css/bootstrap.min.css',
    'img/ajax-loader.gif',
    'img/dot.gif'
];

self.addEventListener('install', (event) => {
    event.waitUntil(
        caches.open(cacheName)
        .then((cache) => {
            return cache.addAll(files)
            .then(() => {
                console.info('All files are cached');
                return self.skipWaiting(); // Forces the waiting service worker to become active
            })
            .catch((error) =>  {
                console.error('Failed to cache', error);
            });
        })
    );
});

We also have a fetch event that triggers when resources are fetched over the network. The following code first checks if the file is available in the cache. If it is, it will return the cached version; otherwise, it fetches the file from the network.

self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.match(event.request).then((response) => {
            if (response) {
                return response; // return from cache
            }

            // fetch from network
            return fetch(event.request).catch((error) => {
                console.error('Fetching failed:', error);
                throw error;
            });
        })
    );
});
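One thing the install handler leaves open is cleanup: when cacheName changes in a new release (say, to 'res-1.1'), the old 'res-1.0' cache lingers on disk. A common companion is an activate handler that deletes stale caches. A sketch, assuming the same cacheName value as serviceWorker.js (the staleCaches helper is a name introduced here for illustration):

```javascript
// Same cache name as in serviceWorker.js
let cacheName = 'res-1.0';

// Pure helper: every cache name except the current one is stale
function staleCaches(allNames, currentName) {
    return allNames.filter((name) => name !== currentName);
}

// Register the handler only when running inside a service worker context
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
    self.addEventListener('activate', (event) => {
        event.waitUntil(
            caches.keys().then((names) =>
                Promise.all(
                    staleCaches(names, cacheName).map((name) => caches.delete(name))
                )
            )
        );
    });
}
```

Pairing this with the skipWaiting() call in the install handler means each new service worker version takes over promptly and removes its predecessor’s cache.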

To debug your service worker, open your webpage in Chrome. Chrome’s Developer Tools are perfect for troubleshooting. Go to the “Application” tab, select “Service Worker,” and you’ll see your service worker running.

Transfer Google Drive Data Between Accounts Using Google Apps Script

I had a number of important files on a Google Drive account administered by another organization. Now that I’ve completed the job, I expect that account to be deleted in the near future, so I want to move my files to a Google Drive account that I administer myself.

If your operating system has a Google Drive desktop application (Windows and Mac do), you can share the files with the other account (where you want to transfer them). You still won’t own those shared files, but with the Google Drive application you can copy the shared folder into another folder and you’re done. This process is time-consuming, though, and doesn’t work on operating systems without a Google Drive application. In my case, on Ubuntu, there was no way to do it; fortunately, I found a great solution to this problem.

Here are the steps to make this happen.

Using Google Drive web interface:

  1. Create a new folder and name it whatever you want.
  2. Go into the pre-existing folders/files, select all the files, and move them into the newly created folder.
  3. Share that folder with the destination account (where the files need to be transferred).
  4. Log in to the destination account; in the Google Drive interface you will see the shared folder under the “Shared with me” link.
  5. In Google Drive, there’s no easy way to duplicate a folder. It is possible to copy individual files, but there is no command for creating duplicate folders that mirror another (or shared) folder. Fortunately, we can solve this with Google Apps Script. Below is a piece of JavaScript code that duplicates a nested folder in Drive.
  6. Visit the Google Apps Script page, click “Start Scripting”, and paste the following code.
function duplicate() {
  
  var sourceFolder = "Folder";
  var targetFolder = "FolderCopy";
  
  var source = DriveApp.getFoldersByName(sourceFolder);
  var target = DriveApp.createFolder(targetFolder);
 
  if (source.hasNext()) {
    copyFolder(source.next(), target);
  }
  
}
 
function copyFolder(source, target) {
 
  var folders = source.getFolders();
  var files   = source.getFiles();
  
  while(files.hasNext()) {
    var file = files.next();
    file.makeCopy(file.getName(), target);
  }
  
  while(folders.hasNext()) {
    var subFolder = folders.next();
    var folderName = subFolder.getName();
    var targetFolder = target.createFolder(folderName);
    copyFolder(subFolder, targetFolder);
  }  
  
}

To run this code successfully, you will need to grant the script permission to access Google Drive. From the list of functions, select duplicate and click Run. This copies the source folder’s files and subfolders into the destination folder. Keep in mind that Apps Script executions are time-limited (six minutes for consumer accounts), so very large folder trees may need to be copied in batches.