Progressive web apps – offline web applications using service workers

The Google documentation says this about service workers:
“Rich offline experiences, periodic background syncs, push notifications – functionality that would normally require a native application – are coming to the web. Service workers provide the technical foundation that all these features rely on.”

A service worker is like any other worker: it can run in the background and do tasks you cannot normally expect from a browser. Service workers are built to be fully async. They run independently of web pages, they have access to domain-wide events such as network fetches, and they have the ability to cache resources and respond to network requests; combined, all of this provides a way for applications to “go offline”. Service workers may also replace the HTML5 Application Cache in the future. Note that service workers are HTTPS-only.

With a service worker it is easy to set an app up to use cached assets first, even before getting data from the network. This offline functionality is already available in native apps, and it is one of the main reasons native apps are often chosen over web apps. This is where service workers come to the rescue and fix the issue nicely for web applications – that’s why Native Apps are Doomed :)

Sounds good. So the question is: are service workers supported across all browsers?

The first draft of the service worker spec was published on 08 May 2014, and since then support across browsers has increased. At the moment major browsers such as Chrome, Firefox and Opera support almost all the major APIs, and it is also under discussion as part of Safari’s five-year plan. You can see the complete compatibility status for different browsers here.

Let’s start with installing a service worker. The following code registers the service worker on a web page; once it is registered we can run tasks using the worker. serviceWorker.js should be in the root of the application.

(function () {
    if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('serviceWorker.js', { scope: './' }) // setting the scope of the service worker
        .then(function (registration) {
            console.log('Service worker is registered!');
        })
        .catch(function (error) {
            console.error('Service worker failed ', error);
        });
    }
})();

Inside serviceWorker.js we have the install event, which is fired while the service worker is installing itself. We attach a callback to install that caches the array of files: opens the named cache and returns a promise, which resolves when installation has completed successfully.

let cacheName = 'res-1.0';

let files = [
    // resources to cache, e.g. '/', '/index.html'
];

self.addEventListener('install', (event) => {
    event.waitUntil(
        .then((cache) => {
            return cache.addAll(files)
            .then(() => {
                console.log('All files are cached');
                return self.skipWaiting(); // forces the waiting service worker to become the active one
            })
            .catch((error) => {
                console.error('Failed to cache', error);
            });
        })
    );
});

We also have the fetch event, fired when a resource is fetched over the network. The following code first checks whether the file is present in the cache and returns it from there; otherwise it fetches it from the network.

self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.match(event.request).then((response) => {
            if (response) {
                return response; // return from cache
            }
            // fetch from network
            return fetch(event.request).then((response) => {
                return response;
            }).catch((error) => {
                console.error('Fetching failed:', error);
                throw error;
            });
        })
    );
});

Now open your web page in Chrome; at the moment Chrome Developer Tools is ideal for debugging service workers. Under the Application tab select Service Workers and you’ll see the service worker running.

Under Cache > Cache Storage you can see your cached resources. You can change the cache version in serviceWorker.js to forcefully expire the old cached resources.
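Bumping the cache version only creates a new cache; actually deleting the old one is usually done in an activate handler. Below is a minimal sketch of such a handler, assuming the same cacheName as above (the staleCaches helper and the environment guard are my additions, not part of the original serviceWorker.js):

```javascript
var cacheName = 'res-1.0';

// Pick out every cache key that is not the current version.
function staleCaches(keys, current) {
    return keys.filter(function (key) { return key !== current; });
}

// Register the handler only when actually running inside a service worker.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
    self.addEventListener('activate', function (event) {
        event.waitUntil(
            caches.keys().then(function (keys) {
                return Promise.all(
                    staleCaches(keys, cacheName).map(function (key) {
                        return caches.delete(key); // drop each outdated cache
                    })
                );
            })
        );
    });
}
```

With this in place, publishing `res-1.1` and reloading removes the `res-1.0` cache once the new worker activates.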

serviceWorker.js file

So this is the basic example of registering a service worker, caching resources and serving from the cache. In my next post I’ll go through the implementation of web push notifications in the browser using Firebase Cloud Messaging (FCM).

Transfer Google Drive data between accounts using Google scripts

I had a number of important files on a Google Drive account administered by another organization. Now that I’ve completed the job, I believe my account will be deleted in the near future, and I’m hoping to move my files to my new Google Drive account administered by me, or to my other Google account.

If you have an operating system that supports the Google Drive application (Windows and Mac do), you can do it just by sharing the files with the other account (where you want to transfer the files). You are still not the owner of those shared files, but using the Google Drive application you can copy the shared folder to another folder and you are done. This process is time consuming, though, and it will not work on operating systems that don’t have a Google Drive application. In my case, using Ubuntu, I had no possible way to do it, but fortunately I found a great solution to this problem.

I’ll describe the following steps to make this happen.

Using Google Drive web interface:

  1. Create a new folder and name it whatever you want.
  2. Go into the pre-existing folders/files, select all the files, and move them to the newly created folder.
  3. Share that folder with the destination account (where the files need to be transferred).
  4. Log in to the destination account; in the Google Drive interface you will see the shared folder under the “Shared with me” link.
  5. In Google Drive, there’s no easy way to duplicate a folder. It is possible to make a copy of individual files, but there’s no command for creating duplicate folders that mirror another folder or a shared one. Fortunately we can solve this problem with Google Apps Script. Below is a piece of JavaScript code which can help you duplicate a nested folder in Drive.
  6. Visit the Google Apps Script page, click “Start Scripting” and paste the following code.
function duplicate() {
  var sourceFolder = "Folder";
  var targetFolder = "FolderCopy";
  var source = DriveApp.getFoldersByName(sourceFolder);
  var target = DriveApp.createFolder(targetFolder);
  if (source.hasNext()) {
    copyFolder(, target);
  }
}

function copyFolder(source, target) {
  var folders = source.getFolders();
  var files   = source.getFiles();
  while (files.hasNext()) {
    var file =;
    file.makeCopy(file.getName(), target);
  }
  while (folders.hasNext()) {
    var subFolder =;
    var folderName = subFolder.getName();
    var targetFolder = target.createFolder(folderName);
    copyFolder(subFolder, targetFolder);
  }
}

To run this code successfully you will need to grant Google Drive permission, and then you can run the code above: from the list of functions select the duplicate function and click Run. This will copy the source folder and its files to the destination folder.
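If you want to sanity-check the recursion outside of Apps Script, here is a plain-JavaScript sketch of the same walk, using nested objects in place of DriveApp folders (copyTree and the object shape are illustrative, not Drive APIs):

```javascript
function copyTree(source) {
    // Copy this folder's files, then recurse into each subfolder,
    // mirroring what copyFolder() does with DriveApp objects.
    var copy = { files: source.files.slice(), children: {} };
    for (var name in source.children) {
        copy.children[name] = copyTree(source.children[name]);
    }
    return copy;
}

var drive = { files: ['a.txt'], children: { sub: { files: ['b.txt'], children: {} } } };
var mirror = copyTree(drive);
console.log(mirror.children.sub.files); // [ 'b.txt' ]
```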

ECMAScript Harmony – Compile ES6 code to ES5 using Traceur/Babel

ECMAScript Edition 6 (ES6) comes with a lot of features, such as classes, generators, destructuring, modules and much more. ES6 is not yet implemented/supported in all browsers and JS engines; you might want to look here. Using Firefox and Chrome (latest) developer tools you can test features like arrow functions; for Chrome you will also need to enable the Chrome harmony flag: chrome://flags/#enable-javascript-harmony.

It might be a bit early, but if you want to use the latest ES6 features you will need a compiler that compiles your ES6 code down to regular JavaScript that can run in browsers. Currently the two most popular compilers for this purpose are Traceur and Babel.
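As a quick taste of what these compilers target, here is a small snippet using a few of the ES6 features mentioned above – classes, destructuring and arrow functions (the Point class is just an illustration):

```javascript
class Point {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
    length() { // ES6 method shorthand
        return Math.sqrt(this.x * this.x + this.y * this.y);
    }
}

const { x, y } = new Point(3, 4);                         // destructuring
const lengths = [new Point(3, 4)].map((p) => p.length()); // arrow function
console.log(x, y, lengths[0]); // 3 4 5
```

Everything here compiles down to plain ES5 constructor functions and function expressions.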


You can try Traceur in several ways:

  • Include Traceur in a Web page and it will compile ES6 code on the fly.
  • Or use Node to compile ES6 to ES5 offline and include the result in Web pages, or just run the result in Node.

For the browser you can include the following scripts and Traceur will convert your code automatically.

<!DOCTYPE html>
<script src=""></script> <!-- the Traceur compiler -->
<script src=""></script> <!-- small bootstrap that applies the compiler to the page -->
<script type="module">
var arr = [{a: 1, b: 1}, {a: 1, b: 4}];
var bs = => item.b); // compiled to ES5 on the fly
console.log(bs);
</script>
In the above example the first file included is the Traceur compiler; next comes a small file of JS that applies the compiler to the Web page (Traceur itself is written in ES6). My simple example manipulates the array using the higher-order function .map, which takes an arrow function as its callback.

You can also import ES6 modules:

<script type="module">
 import './module.js';
</script>

The other way is offline compilation; for that you need Node installed. Traceur includes a shell script, traceur, to compile ES6 code to ES5/regular JavaScript.

./traceur --out scripts/module.js --script module.js

We can test the module.js we just compiled like:

<script src="bin/traceur-runtime.js"></script>
<script src="scripts/module.js"></script>

This runtime file contains polyfills as well as helper functions which reduce the generated code size. You can find out more about Traceur here.


Babel is another compiler for ECMAScript 6. You can install it, along with the ES6 module loader, via npm:

npm install babel-core es6-module-loader

You will need to include the following files in your web page:

  • node_modules/babel-core/browser.js
  • node_modules/es6-module-loader/dist/es6-module-loader.js

The following is an example of importing an ES6 module:

<!DOCTYPE html>
<script src="js/vendors/browser.js"></script>
<script src="js/vendors/es6-module-loader.js"></script>
<script>
 System.transpiler = 'babel';
</script>
<script type="module">
 import Module from 'js/module';
 var instance = new Module();
</script>

Let’s set up a development environment and start using ES6 with Babel in your applications. You’ll need to install the following presets and plugins:

npm install --save-dev babel-cli babel-preset-es2015 babel-preset-stage-0

babel-preset-es2015 is for ES6 and babel-preset-stage-0 is for experimental ES7 features; the latter is not required in our case, but it may be a good idea to have it.
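With these packages installed, Babel is typically told to use them via a .babelrc file in the project root (this assumes the Babel 6 configuration format):

```json
{
  "presets": ["es2015", "stage-0"]
}
```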

A simple Express application written in ES6 could be:

let express = require('express'),
    app = express(),
    PORT = 3636;

app.get('/', (req, res) => {
    res.write(JSON.stringify({'data': "visiting home url."}));
    res.end();
});

app.listen(PORT, () => {
    console.log('listening at port: ' + PORT);
});

Save this as express.js and transpile it:

babel express.js --out-file out/express.js

The above command transpiles the ES6 code from express.js into out/express.js – regular JavaScript that can run in environments supporting only ES5 or less.

Following is the output:

'use strict';

var express = require('express'),
    app = express(),
    PORT = 3636;

app.get('/', function (req, res) {
    res.write(JSON.stringify({ 'data': "visiting home url." }));
    res.end();
});

app.listen(PORT, function () {
    console.log('listening at port: ' + PORT);
});

It is also possible to compile a whole directory of ES6 files in the following way:

babel src --out-dir out


Node.js Streams, Pipe and chaining

A stream is an abstract interface implemented by various objects in Node.js. Due to its asynchronous and event-driven nature, Node.js is very good at handling I/O-bound tasks; Node streams are analogous to Unix pipes.

For example, a request to an HTTP server is a stream, as is stdout. A request is a readable stream and a response is a writable stream; streams can be Readable, Writable, or Duplex (both readable and writable). Readable streams let you read data from a source, while writable streams let you write data to a destination. A “duplex” stream is one that is both readable and writable, such as a TCP socket connection.

All streams are instances of EventEmitter. Take a look at the following example.

var fs = require('fs');
var readStream = fs.createReadStream('file.txt');
var text = '';

readStream.on('data', function(chunk) {
    text += chunk; // append each chunk as it is read
});

readStream.on('end', function() {
    console.log(text);
});
The function fs.createReadStream() gives you a readable stream object, on which we listen for the data event with an attached callback. As chunks of data are read and passed to the callback, we append them to our text string. When there is no more data to read, the stream emits an end event; in the code above we listen for this event with another callback and log the text string.

You can set the encoding on the stream by calling readStream.setEncoding('utf8'). As a result, the data is interpreted as UTF-8 and passed to your callback as a string.

At this point we also have an interesting function called pipe(), which we can use to transfer a stream’s flow from source to destination. For example, we can pipe a readable stream into a writable stream like below.

var fs = require('fs');
var readStream = fs.createReadStream('file1.txt');
var writeStream = fs.createWriteStream('file2.txt');

readStream.pipe(writeStream);
So now we have piped the readable stream of file1.txt into the writable stream created for file2.txt. pipe() manages the data flow; you don’t have to handle it yourself with callbacks.

pipe() also supports chaining, so multiple destinations can be piped safely.

var fs = require('fs');
var zlib = require('zlib');

var readStream = fs.createReadStream('file.txt');
var gzip = zlib.createGzip();
var writeStream = fs.createWriteStream('file.txt.gz');

readStream.pipe(gzip).pipe(writeStream);

In the example above we have piped readStream through gzip into writeStream. The purpose of the pipe() method is to limit the buffering of data to acceptable levels, so that sources and destinations of varying speed do not overwhelm the available memory.