VGTech is a blog where the developers and devops of Norway's most visited website share code and tricks of the trade.


Writing, testing and publishing JavaScript modules


So you want to write reusable, maintainable and modular JavaScript, huh? Good.

Here’s a rather extensive “getting started”-guide by yours truly – which means it’s my own preferred way of doing things. It’s written with open-source in mind, but most points can be applied to “private” modules as well.

This guide assumes the following:

  • You have a basic understanding of Git, JavaScript and Node.js
  • You have Node.js (and npm) installed, and you are able to use them from the command line
  • You have a GitHub account (or a different Git host)
  • You want to publish your module on npm

Step 1: Figure out your module boundaries

Modules should be small. They should do one thing, and do it well. Small modules (when written in a consistent style) are easy to compose. Put together, they enable you to focus on what you’re actually trying to make, instead of every nitty gritty detail.

For this guide, I’ve chosen to implement a simple module that calculates an expected duration for a task, using the PERT-technique. It’s a simple formula that can help when trying to deliver realistic estimates for tasks. This might seem a little too simple for a module, but for the purpose of this guide, it’ll do just fine!
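In concrete terms, PERT combines three estimates into a weighted mean and a spread. Here's a quick sketch of the arithmetic (the hour values are made up for illustration):

```javascript
// PERT three-point estimation:
//   expected duration µ = (optimistic + 4 × nominal + pessimistic) / 6
//   standard deviation σ = (pessimistic − optimistic) / 6
var optimistic = 2;   // best case: everything goes smoothly
var nominal = 4;      // most likely case
var pessimistic = 12; // worst case: everything goes wrong

var expected = (optimistic + 4 * nominal + pessimistic) / 6; // 5
var deviation = (pessimistic - optimistic) / 6;              // ~1.67

console.log('Estimate: ' + expected + 'h ± ' + deviation.toFixed(2) + 'h');
```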

Step 2: Search for it!

There’s no need to create a new module if you can find an existing one that does just what you need. Head on over to the npm website and do a couple of searches, or use npm search from the command line. In my case, I searched for pert and estimate. While pert did not yield any meaningful results, estimate found a module called estimate-tasks which does something similar to what we’re trying to achieve.

In the real world, you would probably want to talk to the author(s) and hear if they would be interested in a contribution which would add the PERT-algorithm. It wouldn’t be much of a guide if we did that, however, so let’s move on.

Step 3: Make yourself a repository!

You can either have GitHub prepare a repository for you, or you can start from scratch. For this guide, we’re going to use GitHub – the benefit is that you get the .gitignore, README and LICENSE files from the start, which is nice.

Once you’re signed in to GitHub, click New repository and fill out a name. I like to use descriptive but fairly short names that match the name of the module I’m going to publish. If you want to publish your module to npm, make sure that the module name is not already taken. In my case, I chose pert-estimate. Fill in a meaningful description. You’ll need this for the GitHub repository, the README and the package description, so make sure you put something which helps users understand the scope of the module.

For pert-estimate, I used the following:

Calculates estimate(s) based on the PERT method by using “most likely time”, “optimistic time”, and “pessimistic time”

When that is out of the way, check the Initialize this repository with a README checkbox and select Node in the .gitignore dropdown. Now, selecting a LICENSE is something you really, really should do. Many users are discouraged when they find a module with no clear indication of the license. Read up on the different open-source licenses that are available and pick one that fits your needs. I usually go with something permissive, like MIT or BSD.

Step 4: Clone the repository

You’ll find the SSH clone URL on your repository page – it should be in the form of git@github.com:<username>/<repository>.git. Copy it and, from the command line, run git clone <url>

If you list the directory you cloned, you should see a LICENSE, a README.md and a .gitignore. Great!

Step 5: Set up your package manifest

In the world of Node.js, your module metadata is stored in a file called package.json at the root of your project. It contains the name, description, dependencies, author name, license and similar. Generating one is easy with the npm init command. Make sure you’re in the project folder and run that now. It will prompt you with a set of questions; here are a few pointers:

  • Module name: By default, it’ll try to guess this based on the folder name. In my case, it’ll be pert-estimate.
  • Version number: This is a bit of a hard topic to discuss, because there have been quite a few discussions on what the “appropriate” starting point of your module actually is. Mostly, the discussions boil down to whether you should use 0.x.y, or start at 1.0.0. If you’re fairly sure that your API is not going to change much, I personally feel it’s OK to start at 1.0.0. Either way, make sure you read up on Semantic Versioning and take special note that with npm, 0.x ranges have special meaning.
  • Description: Should generally be the same as the one you put on Github.
  • Entry point: The file that Node.js will load when a user calls require('your-module'). Quite often, this is left at the default (index.js), but if you want, you can specify something else here (lib/pert-estimate.js, for instance).
  • Test command: Leave this blank for now – we’ll get back to it later.
  • Keywords: Fill in a list of keywords that describe your module’s intent, separated by commas. In my case, I put pert,estimate,tasks.
  • Author: Your name, and preferably a contact email, in the form of Your Name <you@example.com>
  • License: The short name of the license you chose (MIT, ISC, BSD etc).

Once you’re done with that, you should see a package.json file in your project directory. Great! You’ll probably want to add it and commit to your version control now (git add package.json && git commit -m "Added package.json").
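For reference, the resulting package.json might look roughly like this (the exact fields depend on your answers; the version and email are illustrative):

```json
{
  "name": "pert-estimate",
  "version": "1.0.0",
  "description": "Calculates estimate(s) based on the PERT method by using \"most likely time\", \"optimistic time\", and \"pessimistic time\"",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "pert",
    "estimate",
    "tasks"
  ],
  "author": "Your Name <you@example.com>",
  "license": "MIT"
}
```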

Step 6: Figure out the API for your module

Having a clear vision of what your API is going to look like before you start coding is usually a good idea. Imagine you are creating an application where you want to use the module, and think about what the best API for the job would look like.

In my case, I picture making a simple CLI-script where I can enter estimates for various tasks, and it’ll spit back the PERT-estimates. Making a web application that does the same should be fairly trivial.

For each estimate, I need to feed my estimator module three numbers: optimistic, nominal and pessimistic estimates. In return, we’ll get two values back: expected task duration and standard deviation.

With this in mind, there are several things we need to consider for the API:

  • Do we take an object as input, or separate arguments? calc({ optimistic: 1, nominal: 2, pessimistic: 5 }) vs calc(1, 2, 5)
  • Do we want to have a convenience method that takes an array and calculates estimates for all the tasks in it? Should we make the original function polymorphic and handle both cases?
  • Do we have separate methods for calculating the expected task duration and standard deviation, or do we simply have one method that returns an object?

I chose to go with two methods: expectedDuration() and standardDeviation(), which both take either an object or three numbers. Handling an array of tasks can easily be done by the user, using either a simple loop or perhaps Array.prototype.map or similar.
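To make that choice concrete, here is a sketch of the two call styles and the array case – with a hypothetical inline stub standing in for the module we haven't written yet:

```javascript
// Hypothetical stub – the real module comes later
function expectedDuration(optimistic, nominal, pessimistic) {
    if (typeof optimistic === 'object') {
        var task = optimistic;
        optimistic = task.optimistic;
        nominal = task.nominal;
        pessimistic = task.pessimistic;
    }
    return (optimistic + (4 * nominal) + pessimistic) / 6;
}

// Style 1: separate arguments
var a = expectedDuration(1, 2, 5);

// Style 2: a single object
var b = expectedDuration({ optimistic: 1, nominal: 2, pessimistic: 5 });

// Arrays are left to the user – a simple map does the job
var estimates = [
    { optimistic: 1, nominal: 2, pessimistic: 5 },
    { optimistic: 2, nominal: 4, pessimistic: 12 }
].map(function(task) {
    return expectedDuration(task);
});
```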

I’m going to use the CommonJS module format. I find it’s easier to understand, and if someone wants to use my module in an AMD-environment, there are ways of doing that – but I’ll leave the details of that for another blog post.

Step 7: Provide the bare minimum

Let’s start writing some code. But instead of implementing the calculations right away, what we are going to do is start with stubs – meaning we are going to expose the methods for the API, but they are not going to actually do anything.

Open up your favourite editor and create an empty index.js in the root directory of your module.

Next, let’s expose the two methods:

function expectedDuration(optimistic, nominal, pessimistic) {}
function standardDeviation(optimistic, nominal, pessimistic) {}

exports.expectedDuration = expectedDuration;
exports.standardDeviation = standardDeviation;

If you are unfamiliar with the CommonJS module format, the exports object is basically what you are exposing to other modules when they require() it. We’ll see this in a bit.

Step 8: Picking a testing framework

Now that we’ve got the API figured out, you might be tempted to just start coding it. Sure – you could do that. But this is a nice opportunity to make sure we’ve made the right choices in regards to the API. One way of doing that is to start with the tests. Basically, we’re doing test-driven development here.

We’ll start by deciding what we want to write the tests in. My personal preference for small modules (such as pert-estimate) is tape. It’s a really simple test runner/testing library that produces output in the “Test Anything Protocol” (TAP) format. For larger applications or modules, mocha is usually my choice, where you get things like beforeEach/afterEach functions and a whole range of different reporters, assertion libraries and whatnot.

How do we install tape? Simple – just run npm install --save-dev tape. This will install the tape module locally and save the dependency to package.json as a development dependency. Note the distinction between regular dependencies and development dependencies – in production environments, npm can skip all the testing frameworks and quality assurance tools we’re going to add, which speeds up the installation process and keeps disk space usage down.
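After the install, your package.json should contain a devDependencies section along these lines (the version number is illustrative – it will reflect whatever tape release is current):

```json
"devDependencies": {
  "tape": "^2.13.1"
}
```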

Now that tape is installed, we can start writing tests. First, we’ll make a folder to hold the test files. Let’s just call it test. Next we’ll add a file inside the test folder that will contain the tests. I usually add a .test.js extension to the tests, so the purpose is nice and clear. We could choose to split the tests into multiple files, but for this simple little module, I don’t see the need.

At this point, your directory structure should look something like this:

├── node_modules
│   └── tape
├── test
│   └── pert-estimate.test.js
├── index.js
└── package.json

Step 9: Writing the tests

Writing the tests is fairly simple. Let’s get the basics set up, and I’ll walk you through it:

var test = require('tape');
var pert = require('../');

test('expectedDuration with individual args', function(t) {
    var estimate = pert.expectedDuration(1, 5, 9);

    t.equal(estimate, 5, 'µ=(1+(4*5)+9)/6 should be 5');
    t.end();
});

test('standardDeviation with individual args', function(t) {
    var deviation = pert.standardDeviation(1, 5, 10);

    t.equal(deviation, 1.5, 'σ=(10-1)/6 should be 1.5');
    t.end();
});

test('expectedDuration with object argument', function(t) {
    var estimate = pert.expectedDuration({
        optimistic: 2,
        nominal: 10,
        pessimistic: 18
    });

    t.equal(estimate, 10, 'µ=(2+(4*10)+18)/6 should be 10');
    t.end();
});

test('standardDeviation with object argument', function(t) {
    var deviation = pert.standardDeviation({
        optimistic: 2,
        nominal: 8,
        pessimistic: 20
    });

    t.equal(deviation, 3, 'σ=(20-2)/6 should be 3');
    t.end();
});

We’ll start out by loading tape. It exposes a function which takes a string describing the test and a function which actually runs the test. The t that you get as the first argument of your test function is what you will be using to actually perform assertions and tell tape when your test is done.

To test the module, we have to load it. We’ll do that by doing require('../'). This is telling node.js to load the module found in the parent directory. It will look at the main key of package.json found in the directory and determine which file it should load. This is useful if you want to put your code in a separate folder, say lib/.

You’ll find that the assertions provided by tape are not as complex as those in many other testing frameworks, but for simple modules like this one, they’ll usually do just fine. Here, we are testing that the expected duration and standard deviation of some basic inputs give the correct and expected output, using t.equal. All assertions in tape follow the pattern assertion(actual, expected, msg). Note the order of actual and expected, as many other testing libraries do it the other way around. For human-readable output, it’s important to get this order right.

After we’ve done the assertions, we call t.end(), which tells tape that all the assertions are done for this test. There is one other way of doing this; t.plan(). With t.plan(numAssertions), you tell it how many assertions you will be doing, and once tape has reached that number of assertions, it will automatically end the test. This is really useful for asynchronous testing, but not necessary in this case.
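To illustrate the auto-ending behaviour, here is a tiny stand-in for tape's t object – a hypothetical sketch, not tape's actual internals:

```javascript
// A minimal sketch of how t.plan(n) behaves: the test "ends"
// automatically once n assertions have run.
function makeT(planned) {
    var t = { count: 0, ended: false };
    t.equal = function(actual, expected) {
        if (actual !== expected) {
            throw new Error(actual + ' !== ' + expected);
        }
        t.count += 1;
        if (t.count === planned) {
            t.ended = true; // the plan is met, so the test ends
        }
    };
    return t;
}

var t = makeT(2);
t.equal(1 + 1, 2); // assertion 1 of 2 – test still running
t.equal(2 + 2, 4); // assertion 2 of 2 – test ends automatically

console.log(t.ended); // true
```

With the real tape, this is what lets asynchronous callbacks finish the test without an explicit t.end() call.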

We can now run the tests:

$ node test/pert-estimate.test.js 
TAP version 13
# expectedDuration with individual args
not ok 1 µ=(1+(4*5)+9)/6 should be 5
    operator: equal
    expected: 5
    actual:   undefined


The tests are being run, but not passing, since we have not written any code. Before actually starting to write the code, let’s get some quality assurance up and running. This will actually save us time in the end.

Step 10: Quality Assurance

Shipping a module without doing even the most basic of QA, whether as open-source or as a proprietary product, is irresponsible. Testing takes you half way there, but you could still be overlooking things. Let’s try to improve the situation a bit, with two tools: JSHint and Istanbul.

JSHint is a tool that helps to detect errors and potential problems in your JavaScript code – sort of like a linter, but with more bells and whistles. Istanbul is a tool for reporting on the code coverage of your tests. It’s easy to think you’ve covered all the possible branches in your code, but with Istanbul, you’ll know.

Installation is done in the same way as we did with tape – we’ll set them up as development dependencies:

$ npm install --save-dev jshint istanbul

Next, we’ll want to configure JSHint. It supports a lot of different options for linting your code, so you will have to figure out a set of options that works for your particular coding style. The settings go in a file called .jshintrc, which JSHint will look for in your module folder. If it doesn’t find it there, it will recursively traverse the parent folders looking for one. I personally prefer to keep it within my module. This way, contributors to your module will get warnings when they break your chosen style of coding.

So, create a .jshintrc in the root of the module and fill it with the options you want. Here’s my particular flavor (refer to the JSHint documentation to find out what all these options do):

{
    "node": true,
    "browser": true,
    "bitwise": false,
    "curly": true,
    "eqeqeq": true,
    "immed": true,
    "indent": 4,
    "latedef": "nofunc",
    "newcap": true,
    "noarg": true,
    "quotmark": "single",
    "undef": true,
    "unused": true,
    "strict": true,
    "sub": true
}

In addition to this configuration file, there is also a file called .jshintignore. This will allow you to prevent running the linter for the directories or files listed. In this case, we don’t want to lint the node_modules directory (as that contains third-party code) and the coverage directory (since that will contain code coverage reports).

$ echo -e 'node_modules\ncoverage' > .jshintignore

Next, we’ll set up some shortcuts for running these tools. Open up the package.json file of your module and find the scripts section.  Change it to the following:

"scripts": {
    "coverage": "istanbul cover tape -- test/**/*.test.js",
    "lint": "jshint .",
    "test": "tape test/**/*.test.js",
    "pretest": "npm run lint"
}

Let me walk you through this:

  • The scripts section is simply a list of scripts that you can run through npm run <scriptName>. Some scripts have a special meaning, in that they can be run as simply npm <command>. It also allows you to run scripts before and after each other, such as the pretest script above.
  • The coverage script is an arbitrary script that simply runs the unit tests, but while doing that also provides a code coverage report using Istanbul. Run it as: npm run coverage.
  • The lint script is another arbitrary script that runs JSHint.
  • The test script is one of the predefined scripts in npm, that can be run as simply npm test. This is also what many continuous integration tools will run by default when testing your module. We will get back to that later.
  • The pretest script will execute automatically before npm test is run. This makes for a really strict setup, in that it does not even run the unit tests if JSHint finds any code-style issues.

Note: You might notice that we’re running the tests in a slightly different way than before – we’re now using the tape binary instead of running it through node. One of the few differences between the approaches is that it allows richer pattern-matching. The pattern that we’ve set up will basically run any file ending with “.test.js”, regardless of the depth of the folder structure. This is not strictly necessary for the current module, but might make sense if you have a larger application you want to test.

With that in place, when we run npm test, we’ll see that JSHint warns us about some code problems and will not run the tests. Pretty cool, huh? But what are the actual errors?

First, it warns us about some unused variables. This is not necessarily a problem, but it might also mean that you actually intended to use these variables and have either used the wrong variable somewhere or you simply have some cleaning up to do. In this particular case, we just have not implemented any functionality, so obviously the arguments are unused.

The other warning lets us know that we’ve forgotten to add the 'use strict' pragma. The solution is simply to add it. Since any Node.js module is implicitly wrapped in a function, we can specify the pragma at file level instead of within each function. So let’s put 'use strict'; at the top of both index.js and the test file (test/pert-estimate.test.js), and we can move on to actually writing some code!
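If you are curious what the pragma actually buys you, here is one small example – in sloppy mode, the assignment below would silently create a global variable instead of throwing:

```javascript
'use strict';

var caught = null;
try {
    // Assigning to an undeclared variable throws in strict mode
    undeclaredVariable = 1;
} catch (err) {
    caught = err.name;
}

console.log(caught); // "ReferenceError"
```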

Step 11: Write your module!

It’s about time we start writing some code!

That fun part I will leave to you. The code for this particular module is not really relevant to this guide, so I won’t go through it. If you’re curious, you can always have a look at the GitHub repository.
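That said, for the curious, here is a minimal sketch of one way index.js could be implemented – the author's actual code may differ, so treat this as an illustration rather than the canonical source:

```javascript
'use strict';

// Accept either three numbers or a single
// { optimistic, nominal, pessimistic } object
function normalize(optimistic, nominal, pessimistic) {
    if (typeof optimistic === 'object') {
        var task = optimistic;
        return [task.optimistic, task.nominal, task.pessimistic];
    }
    return [optimistic, nominal, pessimistic];
}

// PERT expected duration: µ = (o + 4n + p) / 6
function expectedDuration(optimistic, nominal, pessimistic) {
    var est = normalize(optimistic, nominal, pessimistic);
    return (est[0] + (4 * est[1]) + est[2]) / 6;
}

// PERT standard deviation: σ = (p − o) / 6
function standardDeviation(optimistic, nominal, pessimistic) {
    var est = normalize(optimistic, nominal, pessimistic);
    return (est[2] - est[0]) / 6;
}

exports.expectedDuration = expectedDuration;
exports.standardDeviation = standardDeviation;
```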

What’s more important is that while developing, you can keep running npm test to see if your code passes all the tests you’ve written, and that it’s still allowed through JSHint. You might even be using an editor that supports running JSHint while you’re editing, which is really useful.

Another tip is to run npm run coverage and check the numbers for your code coverage. You should always aim for above 90% code coverage. If you’re not at 100% and want to see which paths you have not covered, you can check out the coverage report generated by Istanbul. It will be written to <your-module>/coverage/lcov-report/index.html

Step 12: Continuous Integration

This is one of my favorite parts. Having a proper continuous integration setup will allow you to quickly discover inconsistencies between your environment and others’. Common mistakes are things like depending on modules that you have not added to your package.json file.

For this guide, I’ll be using Travis CI, but a lot of the concepts and techniques used here can be applied to other solutions with small adjustments.

First, sign in to Travis CI using your GitHub account, then go to your profile page. Under the Repositories tab, you should find the repository of your module. If not, click the Sync now-button. Find the repository in the list, and click the toggle-button to switch Travis on for that repo.

Back in your editor, create a file called .travis.yml and let it have the following content:

language: node_js
node_js:
  - '0.11'
  - '0.10'
script:
  - npm run coverage

We’re telling Travis to test our module on Node.js 0.11 and 0.10. Normally, we wouldn’t have to specify which script it should run – by default it runs npm test – but in this case we want it to run the tests and generate code coverage reports. We’ll get back to why later.

Add .travis.yml to version control and push it to GitHub. Once you’ve done this, Travis should automatically get notified of a new build and start testing your code. It might take a little while before it gets to it – this depends on the number of other projects currently being tested.

Hopefully, you’ll soon be notified of a successful build. If you don’t, check the logs at Travis – they should reveal what went wrong. Every time you push to GitHub, Travis will rebuild. Pretty awesome, eh?

Step 13: Write a proper README

So you’ve written this cool module, and you probably want people to find and use it, right? You can’t expect people to dig through your code and/or tests to find out how to use it, though. Writing a README shouldn’t be too hard – nor should it take very long.

We’re going to use Markdown for writing the readme. It has quickly become the de facto standard, and if you’re hosting your code on GitHub, you’ll get a nicely rendered repository page showing it off.

You should already have a README.md in your root directory from when GitHub set up your repository. Now we’ll make sure it’s actually useful to other people. Any good readme should (in my opinion) include:

  • Module name
  • Description
  • Installation instructions
  • Code examples
  • Documentation
  • License (name – refer to LICENSE file for details)

GitHub uses a slightly extended Markdown syntax compared to the original “specification”. One of the additions is support for syntax-highlighted code. Be sure to use this, as the code will be much easier to read! Write your code blocks like this (note the js at the start of the code block, signifying JavaScript):

```js
var myVar = 'myValue';
```

For larger modules or applications you might want to write the documentation in a different format and host it somewhere else. This is fine – but make sure to include a link to them. For smaller modules, it might be that your code examples cover the whole API. In this case, I’d argue it’s OK to drop a separate documentation section.

Step 14: Prepare to publish

OK, we now have a module in a working state, with a proper README and continuous integration making sure our code works as expected… We should be ready to publish, right?

Almost. Before we publish, let’s make sure the users of our module only get the parts they need. We do that by adding an .npmignore file, which works the same as .gitignore – the files and folders listed will not be included in the published package. In my (humble) opinion, we don’t need to include unit tests, JSHint and Travis configurations and whatnot. Those files are important to people who want to contribute to your module, but contributors will most likely be cloning the git repository, not editing a local installation of your package.

Here’s the content of my .npmignore:

test
coverage
.travis.yml
.jshintrc
.jshintignore
.editorconfig
You might have noticed a file in there that we have not touched on yet, .editorconfig. I’ve written about this earlier – it’s just a little meta file telling smart editors which coding style your code should be written in.

One last step before publishing: set up a prepublish script. Let’s open up package.json and add the following to the scripts section:

"prepublish": "npm prune && npm test"

This will make sure that we remove any modules not mentioned in package.json and run the tests, before publishing. If the tests fail, it will stop the publishing process. Handy!

Step 15: Publish!

We should be ready to publish now, but we’ll need an npm user first.

$ npm adduser
Username: foobar
Email: (this IS public)

Once you’ve got your npm user, we’re ready to go!

$ npm publish
+ pert-estimate@1.0.0

We’ve published! Hooray! You should be able to see your module on the npm website right away.

There are a couple of small touches I like to add once the module is published.

Step 16: Post-publish QA – Code Climate

First, we will run our code through Code Climate’s code review process, and also submit the code coverage reports to them. Code Climate is a pretty cool service that helps you find problems with your code. This can be anything from functions being too long to variables being clumsily named. It’s free for open-source!

Log in using GitHub, click “Add Open Source Repo” and type in the name of the repo (rexxars/pert-estimate, in my case). After a few moments, you’ll get an email that your metrics are ready. Check the Issues-tab for some pointers on things that could be improved.

Now, head over to the Settings tab and click “Test coverage” on the left. You’ll find some instructions for how to set up reporting, but we’re going to do it slightly differently. The important thing to note here is the repo token environment variable it wants you to set. Copy it – it should be something like: CODECLIMATE_REPO_TOKEN=857e7a3f1569da48e5f93cfd139f47dd16e

Back in your terminal, use npm to install travis-encrypt globally: npm install -g travis-encrypt. Once installed, run travis-encrypt --add -r rexxars/pert-estimate CODECLIMATE_REPO_TOKEN=857e7a3f1569da48e5f93cfd139f47dd16e – obviously, replace the repository name and repo token with the ones representing your module.

This should add a secure blob to your Travis configuration. We will also need to add a new section – after_script. Here’s what your .travis.yml should look like (roughly):

language: node_js
node_js:
  - '0.11'
  - '0.10'
script:
  - npm run coverage
after_script:
  - npm install codeclimate-test-reporter && cat coverage/lcov.info | codeclimate-test-reporter
env:
  global:
    - secure: "UomXCF2ZpVMKG/77bOx2ixp8/tCD1oes1vG02oc...="

The after_script is (as the name suggests) run by Travis after the regular script is run. Since we’re generating code coverage reports while testing our code, this data is now available on the filesystem – both as an HTML report and in lcov-format. So what we’re doing here is installing a module that will report our code coverage to code climate, using the repo token we encrypted in a secure variable. We’re piping the lcov-data into the codeclimate binary, which will do this for us.

Now every time you push, Travis will build, test and report your code coverage to Code Climate. Automation is cool, eh?

Step 17: Post-publish QA – Readme badges

Having tests that pass and a high code coverage is great, but people won’t automatically know about it. This is why people tend to put badges in their README’s, showing off the current status. I personally like to add 5 different badges:

  • NPM badge, showing latest published version
  • Travis build status badge, showing if the last build passed or failed
  • David-DM badge, showing whether the dependencies for the module are up to date
  • Code coverage badge, showing the percentage of code that is covered by tests
  • Code climate badge, showing the GPA (grade point average) of the code

Each of these services provide these badges individually, but sometimes they differ in appearance. This creates a slightly cluttered look, which is why I prefer to use shields.io – a service that basically provides consistent styling for these badges (and many more).

Each badge is inserted into the readme as an image wrapped in a link to the related service. By removing any whitespace between the badges and using the flat style from shields.io, we get a very nice row of badges. Though very unreadable, here’s the chunk of badges for my module:

[![npm version](https://img.shields.io/npm/v/pert-estimate.svg?style=flat)](https://www.npmjs.org/package/pert-estimate)[![Build Status](https://img.shields.io/travis/rexxars/pert-estimate/master.svg?style=flat)](https://travis-ci.org/rexxars/pert-estimate)[![Coverage Status](https://img.shields.io/codeclimate/coverage/github/rexxars/pert-estimate.svg?style=flat)](https://codeclimate.com/github/rexxars/pert-estimate)[![Code Climate](https://img.shields.io/codeclimate/github/rexxars/pert-estimate.svg?style=flat)](https://codeclimate.com/github/rexxars/pert-estimate)

I usually place these badges right below the module name. You might notice I’ve left out the david-dm badge that I mentioned earlier. This is because this module does not actually have any dependencies, so it’s redundant. The markdown should render as a tight row of badge images.

Can you believe it? We’re done!

There are no doubt easier and faster ways of getting your modules online and published. This approach tackles the process with quality assurance in mind. In the future, I will try to keep this post updated with any developments in QA, publishing and module authoring.

Much of the content from this post will also be split into individual and more focused posts in the future.

Happy module authoring!

Developer at VG with a passion for Node.js, React, PHP and the web platform as a whole. - @rexxars

