Node.js v6.0.0-rc.3 Documentation



About this Documentation#

The goal of this documentation is to comprehensively explain the Node.js API, from both a reference and a conceptual point of view. Each section describes a built-in module or high-level concept.

Where appropriate, property types, method arguments, and the arguments provided to event handlers are detailed in a list underneath the topic heading.

Every .html document has a corresponding .json document presenting the same information in a structured manner. This feature is experimental, and added for the benefit of IDEs and other utilities that wish to do programmatic things with the documentation.

Every .html and .json file is generated based on the corresponding .markdown file in the doc/api/ folder in Node.js's source tree. The documentation is generated using the tools/doc/generate.js program. The HTML template is located at doc/template.html.

If you find an error in this documentation, please submit an issue or see the contributing guide for directions on how to submit a patch.

Stability Index#

Throughout the documentation, you will see indications of a section's stability. The Node.js API is still evolving, and as it matures, certain parts are more reliable than others. Some are so proven, and so relied upon, that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous and in the process of being redesigned.

The stability indices are as follows:

Stability: 0 - Deprecated
This feature is known to be problematic, and changes are
planned.  Do not rely on it.  Use of the feature may cause warnings.  Backwards
compatibility should not be expected.
Stability: 1 - Experimental
This feature is subject to change, and is gated by a command line flag.
It may change or be removed in future versions.
Stability: 2 - Stable
The API has proven satisfactory. Compatibility with the npm ecosystem
is a high priority, and will not be broken unless absolutely necessary.
Stability: 3 - Locked
Only fixes related to security, performance, or bug fixes will be accepted.
Please do not suggest API changes in this area; they will be refused.

JSON Output#

Stability: 1 - Experimental

Every HTML file generated from the markdown has a corresponding JSON file with the same data.

This feature was added in Node.js v0.6.12. It is experimental.

Syscalls and man pages#

System calls like open(2) and read(2) define the interface between user programs and the underlying operating system. Node functions which simply wrap a syscall, like fs.open(), will document that. The docs link to the corresponding man pages (short for manual pages) which describe how the syscalls work.

Caveat: some syscalls, like lchown(2), are BSD-specific. That means, for example, that fs.lchown() only works on Mac OS X and other BSD-derived systems, and is not available on Linux.

Most Unix syscalls have Windows equivalents, but behavior may differ on Windows relative to Linux and OS X. For an example of the subtle ways in which it's sometimes impossible to replace Unix syscall semantics on Windows, see Node issue 4760.

Synopsis#

An example of a web server written with Node.js which responds with 'Hello World':

const http = require('http');

http.createServer((request, response) => {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
}).listen(8124);

console.log('Server running at http://127.0.0.1:8124/');

To run the server, put the code into a file called example.js and execute it with the node program:

$ node example.js
Server running at http://127.0.0.1:8124/

All of the examples in the documentation can be run similarly.

Addons#

Node.js Addons are dynamically-linked shared objects, written in C or C++, that can be loaded into Node.js using the require() function, and used just as if they were an ordinary Node.js module. They are used primarily to provide an interface between JavaScript running in Node.js and C/C++ libraries.

At the moment, the method for implementing Addons is rather complicated, involving knowledge of several components and APIs:

  • V8: the C++ library Node.js currently uses to provide the JavaScript implementation. V8 provides the mechanisms for creating objects, calling functions, etc. V8's API is documented mostly in the v8.h header file (deps/v8/include/v8.h in the Node.js source tree), which is also available online.

  • libuv: The C library that implements the Node.js event loop, its worker threads and all of the asynchronous behaviors of the platform. It also serves as a cross-platform abstraction library, giving easy, POSIX-like access across all major operating systems to many common system tasks, such as interacting with the filesystem, sockets, timers and system events. libuv also provides a pthreads-like threading abstraction that may be used to power more sophisticated asynchronous Addons that need to move beyond the standard event loop. Addon authors are encouraged to think about how to avoid blocking the event loop with I/O or other time-intensive tasks by off-loading work via libuv to non-blocking system operations, worker threads or a custom use of libuv's threads.

  • Internal Node.js libraries. Node.js itself exports a number of C/C++ APIs that Addons can use — the most important of which is the node::ObjectWrap class.

  • Node.js includes a number of other statically linked libraries including OpenSSL. These other libraries are located in the deps/ directory in the Node.js source tree. Only the V8 and OpenSSL symbols are purposefully re-exported by Node.js and may be used to various extents by Addons. See Linking to Node.js' own dependencies for additional information.

All of the following examples are available for download and may be used as a starting-point for your own Addon.

Hello world#

This "Hello world" example is a simple Addon, written in C++, that is the equivalent of the following JavaScript code:

module.exports.hello = () => 'world';

First, create the file hello.cc:

// hello.cc
#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "world"));
}

void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method);
}

NODE_MODULE(addon, init)

}  // namespace demo

Note that all Node.js Addons must export an initialization function following the pattern:

void Initialize(Local<Object> exports);
NODE_MODULE(module_name, Initialize)

There is no semi-colon after NODE_MODULE as it's not a function (see node.h).

The module_name must match the filename of the final binary (excluding the .node suffix).

In the hello.cc example, then, the initialization function is init and the Addon module name is addon.

Building#

Once the source code has been written, it must be compiled into the binary addon.node file. To do so, create a file called binding.gyp in the top-level of the project describing the build configuration of your module using a JSON-like format. This file is used by node-gyp -- a tool written specifically to compile Node.js Addons.

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "hello.cc" ]
    }
  ]
}

Note: A version of the node-gyp utility is bundled and distributed with Node.js as part of npm. This version is not made directly available for developers to use and is intended only to support the ability to use the npm install command to compile and install Addons. Developers who wish to use node-gyp directly can install it using the command npm install -g node-gyp. See the node-gyp installation instructions for more information, including platform-specific requirements.

Once the binding.gyp file has been created, use node-gyp configure to generate the appropriate project build files for the current platform. This will generate either a Makefile (on Unix platforms) or a vcxproj file (on Windows) in the build/ directory.

Next, invoke the node-gyp build command to generate the compiled addon.node file. This will be put into the build/Release/ directory.

When using npm install to install a Node.js Addon, npm uses its own bundled version of node-gyp to perform this same set of actions, generating a compiled version of the Addon for the user's platform on demand.

Once built, the binary Addon can be used from within Node.js by pointing require() to the built addon.node module:

// hello.js
const addon = require('./build/Release/addon');

console.log(addon.hello()); // 'world'

Please see the examples below for further information, or https://github.com/arturadib/node-qt for an example in production.

Because the exact path to the compiled Addon binary can vary depending on how it is compiled (i.e. sometimes it may be in ./build/Debug/), Addons can use the bindings package to load the compiled module.

Note that while the bindings package implementation is more sophisticated in how it locates Addon modules, it is essentially using a try-catch pattern similar to:

try {
  return require('./build/Release/addon.node');
} catch (err) {
  return require('./build/Debug/addon.node');
}
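
For illustration, here is a minimal, hedged sketch of loading the compiled Addon through the bindings package (this assumes the bindings dependency has been installed from npm and that the compiled binary is named addon.node):

// hello.js
const bindings = require('bindings');
const addon = bindings('addon');

console.log(addon.hello()); // 'world'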

Linking to Node.js' own dependencies#

Node.js uses a number of statically linked libraries such as V8, libuv and OpenSSL. All Addons are required to link to V8 and may link to any of the other dependencies as well. Typically, this is as simple as including the appropriate #include <...> statements (e.g. #include <v8.h>) and node-gyp will locate the appropriate headers automatically. However, there are a few caveats to be aware of:

  • When node-gyp runs, it will detect the specific release version of Node.js and download either the full source tarball or just the headers. If the full source is downloaded, Addons will have complete access to the full set of Node.js dependencies. However, if only the Node.js headers are downloaded, then only the symbols exported by Node.js will be available.

  • node-gyp can be run using the --nodedir flag pointing at a local Node.js source image. Using this option, the Addon will have access to the full set of dependencies.

Loading Addons using require()#

The filename extension of the compiled Addon binary is .node (as opposed to .dll or .so). The require() function is written to look for files with the .node file extension and initialize those as dynamically-linked libraries.

When calling require(), the .node extension can usually be omitted and Node.js will still find and initialize the Addon. One caveat, however, is that Node.js will first attempt to locate and load modules or JavaScript files that happen to share the same base name. For instance, if there is a file addon.js in the same directory as the binary addon.node, then require('addon') will give precedence to the addon.js file and load it instead.
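
For example, given a hypothetical directory containing both addon.js and addon.node, the extension can be written out explicitly to load the compiled binary rather than the JavaScript file:

const js = require('./addon');           // resolves to addon.js first
const native = require('./addon.node');  // the explicit extension loads the binary Addon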

Native Abstractions for Node.js#

Each of the examples illustrated in this document make direct use of the Node.js and V8 APIs for implementing Addons. It is important to understand that the V8 API can, and has, changed dramatically from one V8 release to the next (and one major Node.js release to the next). With each change, Addons may need to be updated and recompiled in order to continue functioning. The Node.js release schedule is designed to minimize the frequency and impact of such changes but there is little that Node.js can do currently to ensure stability of the V8 APIs.

The Native Abstractions for Node.js (or nan) provide a set of tools that Addon developers are recommended to use to keep compatibility between past and future releases of V8 and Node.js. See the nan examples for an illustration of how it can be used.

Addon examples#

Following are some example Addons intended to help developers get started. The examples make use of the V8 APIs. Refer to the online V8 reference for help with the various V8 calls, and V8's Embedder's Guide for an explanation of several concepts used such as handles, scopes, function templates, etc.

Each of these examples uses the following binding.gyp file:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc" ]
    }
  ]
}

In cases where there is more than one .cc file, simply add the additional filename to the sources array. For example:

"sources": ["addon.cc", "myexample.cc"]

Once the binding.gyp file is ready, the example Addons can be configured and built using node-gyp:

$ node-gyp configure build

Function arguments#

Addons will typically expose objects and functions that can be accessed from JavaScript running within Node.js. When functions are invoked from JavaScript, the input arguments and return value must be mapped to and from the C/C++ code.

The following example illustrates how to read function arguments passed from JavaScript and how to return a result:

// addon.cc
#include <node.h>

namespace demo {

using v8::Exception;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

// This is the implementation of the "add" method
// Input arguments are passed using the
// const FunctionCallbackInfo<Value>& args struct
void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  // Check the number of arguments passed.
  if (args.Length() < 2) {
    // Throw an Error that is passed back to JavaScript
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Wrong number of arguments")));
    return;
  }

  // Check the argument types
  if (!args[0]->IsNumber() || !args[1]->IsNumber()) {
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Wrong arguments")));
    return;
  }

  // Perform the operation
  double value = args[0]->NumberValue() + args[1]->NumberValue();
  Local<Number> num = Number::New(isolate, value);

  // Set the return value (using the passed in
  // FunctionCallbackInfo<Value>&)
  args.GetReturnValue().Set(num);
}

void Init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "add", Add);
}

NODE_MODULE(addon, Init)

}  // namespace demo

Once compiled, the example Addon can be required and used from within Node.js:

// test.js
const addon = require('./build/Release/addon');

console.log('This should be eight:', addon.add(3, 5));

Callbacks#

It is common practice within Addons to pass JavaScript functions to a C++ function and execute them from there. The following example illustrates how to invoke such callbacks:

// addon.cc
#include <node.h>

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Null;
using v8::Object;
using v8::String;
using v8::Value;

void RunCallback(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Function> cb = Local<Function>::Cast(args[0]);
  const unsigned argc = 1;
  Local<Value> argv[argc] = { String::NewFromUtf8(isolate, "hello world") };
  cb->Call(Null(isolate), argc, argv);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", RunCallback);
}

NODE_MODULE(addon, Init)

}  // namespace demo

Note that this example uses a two-argument form of Init() that receives the full module object as the second argument. This allows the Addon to completely overwrite exports with a single function instead of adding the function as a property of exports.

To test it, run the following JavaScript:

// test.js
const addon = require('./build/Release/addon');

addon((msg) => {
  console.log(msg); // 'hello world'
});

Note that, in this example, the callback function is invoked synchronously.

Object factory#

Addons can create and return new objects from within a C++ function as illustrated in the following example. An object is created and returned with a property msg that echoes the string passed to createObject():

// addon.cc
#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  Local<Object> obj = Object::New(isolate);
  obj->Set(String::NewFromUtf8(isolate, "msg"), args[0]->ToString());

  args.GetReturnValue().Set(obj);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateObject);
}

NODE_MODULE(addon, Init)

}  // namespace demo

To test it in JavaScript:

// test.js
const addon = require('./build/Release/addon');

var obj1 = addon('hello');
var obj2 = addon('world');
console.log(obj1.msg + ' ' + obj2.msg); // 'hello world'

Function factory#

Another common scenario is creating JavaScript functions that wrap C++ functions and returning those back to JavaScript:

// addon.cc
#include <node.h>

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void MyFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "hello world"));
}

void CreateFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, MyFunction);
  Local<Function> fn = tpl->GetFunction();

  // omit this to make it anonymous
  fn->SetName(String::NewFromUtf8(isolate, "theFunction"));

  args.GetReturnValue().Set(fn);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateFunction);
}

NODE_MODULE(addon, Init)

}  // namespace demo

To test:

// test.js
const addon = require('./build/Release/addon');

var fn = addon();
console.log(fn()); // 'hello world'

Wrapping C++ objects#

It is also possible to wrap C++ objects/classes in a way that allows new instances to be created using the JavaScript new operator:

// addon.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Local;
using v8::Object;

void InitAll(Local<Object> exports) {
  MyObject::Init(exports);
}

NODE_MODULE(addon, InitAll)

}  // namespace demo

Then, in myobject.h, the wrapper class inherits from node::ObjectWrap:

// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Local<v8::Object> exports);

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif

In myobject.cc, implement the various methods that are to be exposed. Below, the method plusOne() is exposed by adding it to the constructor's prototype:

// myobject.cc
#include "myobject.h"

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;

Persistent<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Local<Object> exports) {
  Isolate* isolate = exports->GetIsolate();

  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);

  constructor.Reset(isolate, tpl->GetFunction());
  exports->Set(String::NewFromUtf8(isolate, "MyObject"),
               tpl->GetFunction());
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    args.GetReturnValue().Set(cons->NewInstance(argc, argv));
  }
}

void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
  obj->value_ += 1;

  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}

}  // namespace demo

To build this example, the myobject.cc file must be added to the binding.gyp:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [
        "addon.cc",
        "myobject.cc"
      ]
    }
  ]
}

Test it with:

// test.js
const addon = require('./build/Release/addon');

var obj = new addon.MyObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13

Factory of wrapped objects#

Alternatively, it is possible to use a factory pattern to avoid explicitly creating object instances using the JavaScript new operator:

var obj = addon.createObject();
// instead of:
// var obj = new addon.Object();

First, the createObject() method is implemented in addon.cc:

// addon.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}

void InitAll(Local<Object> exports, Local<Object> module) {
  MyObject::Init(exports->GetIsolate());

  NODE_SET_METHOD(module, "exports", CreateObject);
}

NODE_MODULE(addon, InitAll)

}  // namespace demo

In myobject.h, the static method NewInstance() is added to handle instantiating the object. This method takes the place of using new in JavaScript:

// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Isolate* isolate);
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif

The implementation in myobject.cc is similar to the previous example:

// myobject.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;

Persistent<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Isolate* isolate) {
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);

  constructor.Reset(isolate, tpl->GetFunction());
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    args.GetReturnValue().Set(cons->NewInstance(argc, argv));
  }
}

void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Object> instance = cons->NewInstance(argc, argv);

  args.GetReturnValue().Set(instance);
}

void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
  obj->value_ += 1;

  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}

}  // namespace demo

Once again, to build this example, the myobject.cc file must be added to the binding.gyp:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [
        "addon.cc",
        "myobject.cc"
      ]
    }
  ]
}

Test it with:

// test.js
const createObject = require('./build/Release/addon');

var obj = createObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13

var obj2 = createObject(20);
console.log(obj2.plusOne()); // 21
console.log(obj2.plusOne()); // 22
console.log(obj2.plusOne()); // 23

Passing wrapped objects around#

In addition to wrapping and returning C++ objects, it is possible to pass wrapped objects around by unwrapping them with the Node.js helper function node::ObjectWrap::Unwrap. The following example shows a function add() that can take two MyObject objects as input arguments:

// addon.cc
#include <node.h>
#include <node_object_wrap.h>
#include "myobject.h"

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}

void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj1 = node::ObjectWrap::Unwrap<MyObject>(
      args[0]->ToObject());
  MyObject* obj2 = node::ObjectWrap::Unwrap<MyObject>(
      args[1]->ToObject());

  double sum = obj1->value() + obj2->value();
  args.GetReturnValue().Set(Number::New(isolate, sum));
}

void InitAll(Local<Object> exports) {
  MyObject::Init(exports->GetIsolate());

  NODE_SET_METHOD(exports, "createObject", CreateObject);
  NODE_SET_METHOD(exports, "add", Add);
}

NODE_MODULE(addon, InitAll)

}  // namespace demo

In myobject.h, a new public method is added to allow access to private values after unwrapping the object.

// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Isolate* isolate);
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
  inline double value() const { return value_; }

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif

The implementation of myobject.cc is similar to before:

// myobject.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;

Persistent<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Isolate* isolate) {
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  constructor.Reset(isolate, tpl->GetFunction());
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    args.GetReturnValue().Set(cons->NewInstance(argc, argv));
  }
}

void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Object> instance = cons->NewInstance(argc, argv);

  args.GetReturnValue().Set(instance);
}

}  // namespace demo

Test it with:

// test.js
const addon = require('./build/Release/addon');

var obj1 = addon.createObject(10);
var obj2 = addon.createObject(20);
var result = addon.add(obj1, obj2);

console.log(result); // 30

AtExit hooks#

An "AtExit" hook is a function that is invoked after the Node.js event loop has ended by before the JavaScript VM is terminated and Node.js shuts down. "AtExit" hooks are registered using the node::AtExit API.

void AtExit(callback, args)#

  • callback: void (*)(void*) - A pointer to the function to call at exit.
  • args: void* - A pointer to pass to the callback at exit.

Registers exit hooks that run after the event loop has ended but before the VM is killed.

AtExit takes two parameters: a pointer to a callback function to run at exit, and a pointer to untyped context data to be passed to that callback.

Callbacks are run in last-in first-out order.

The following addon.cc implements AtExit:

// addon.cc
#undef NDEBUG
#include <assert.h>
#include <stdlib.h>
#include <node.h>

namespace demo {

using node::AtExit;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;

static char cookie[] = "yum yum";
static int at_exit_cb1_called = 0;
static int at_exit_cb2_called = 0;

static void at_exit_cb1(void* arg) {
  Isolate* isolate = static_cast<Isolate*>(arg);
  HandleScope scope(isolate);
  Local<Object> obj = Object::New(isolate);
  assert(!obj.IsEmpty()); // assert VM is still alive
  assert(obj->IsObject());
  at_exit_cb1_called++;
}

static void at_exit_cb2(void* arg) {
  assert(arg == static_cast<void*>(cookie));
  at_exit_cb2_called++;
}

static void sanity_check(void*) {
  assert(at_exit_cb1_called == 1);
  assert(at_exit_cb2_called == 2);
}

void init(Local<Object> exports) {
  AtExit(sanity_check);
  AtExit(at_exit_cb2, cookie);
  AtExit(at_exit_cb2, cookie);
  AtExit(at_exit_cb1, exports->GetIsolate());
}

NODE_MODULE(addon, init);

}  // namespace demo

Test in JavaScript by running:

// test.js
const addon = require('./build/Release/addon');

Assert#

Stability: 3 - Locked

The assert module provides a simple set of assertion tests that can be used to test invariants. The module is intended for internal use by Node.js, but can be used in application code via require('assert'). However, assert is not a testing framework, and is not intended to be used as a general purpose assertion library.

The API for the assert module is Locked. This means that there will be no additions or changes to any of the methods implemented and exposed by the module.

assert(value[, message])#

An alias of assert.ok().

const assert = require('assert');

assert(true);  // OK
assert(1);     // OK
assert(false);
  // throws "AssertionError: false == true"
assert(0);
  // throws "AssertionError: 0 == true"
assert(false, 'it\'s false');
  // throws "AssertionError: it's false"

assert.deepEqual(actual, expected[, message])#

Tests for deep equality between the actual and expected parameters. Primitive values are compared with the equal comparison operator ( == ).

Only enumerable "own" properties are considered. The deepEqual() implementation does not test object prototypes, attached symbols, or non-enumerable properties. This can lead to some potentially surprising results. For example, the following example does not throw an AssertionError because the properties on the Error object are non-enumerable:

// WARNING: This does not throw an AssertionError!
assert.deepEqual(Error('a'), Error('b'));

"Deep" equality means that the enumerable "own" properties of child objects are evaluated also:

const assert = require('assert');

const obj1 = {
  a : {
    b : 1
  }
};
const obj2 = {
  a : {
    b : 2
  }
};
const obj3 = {
  a : {
    b : 1
  }
}
const obj4 = Object.create(obj1);

assert.deepEqual(obj1, obj1);
  // OK, object is equal to itself

assert.deepEqual(obj1, obj2);
  // AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }
  // values of b are different

assert.deepEqual(obj1, obj3);
  // OK, objects are equal

assert.deepEqual(obj1, obj4);
  // AssertionError: { a: { b: 1 } } deepEqual {}
  // Prototypes are ignored

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.
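
For example, a short sketch showing how a supplied message surfaces on the thrown AssertionError:

const assert = require('assert');

try {
  assert.deepEqual({a: 1}, {a: 2}, 'objects are not deepEqual');
} catch (err) {
  console.log(err.message);
    // prints: objects are not deepEqual
}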

assert.deepStrictEqual(actual, expected[, message])#

Generally identical to assert.deepEqual() with two exceptions. First, primitive values are compared using the strict equality operator ( === ). Second, object comparisons include a strict equality check of their prototypes.

const assert = require('assert');

assert.deepEqual({a:1}, {a:'1'});
  // OK, because 1 == '1'

assert.deepStrictEqual({a:1}, {a:'1'});
  // AssertionError: { a: 1 } deepStrictEqual { a: '1' }
  // because 1 !== '1' using strict equality

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.doesNotThrow(block[, error][, message])#

Asserts that the function block does not throw an error. See assert.throws() for more details.

When assert.doesNotThrow() is called, it will immediately call the block function.

If an error is thrown and it is the same type as that specified by the error parameter, then an AssertionError is thrown. If the error is of a different type, or if the error parameter is undefined, the error is propagated back to the caller.

The following, for instance, will throw the TypeError because there is no matching error type in the assertion:

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  SyntaxError
);

However, the following will result in an AssertionError with the message 'Got unwanted exception (TypeError).':

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError
);

If an AssertionError is thrown and a value is provided for the message parameter, the value of message will be appended to the AssertionError message:

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError,
  'Whoops'
);
// Throws: AssertionError: Got unwanted exception (TypeError). Whoops

assert.equal(actual, expected[, message])#

Tests shallow, coercive equality between the actual and expected parameters using the equal comparison operator ( == ).

const assert = require('assert');

assert.equal(1, 1);
  // OK, 1 == 1
assert.equal(1, '1');
  // OK, 1 == '1'

assert.equal(1, 2);
  // AssertionError: 1 == 2
assert.equal({a: {b: 1}}, {a: {b: 1}});
  // AssertionError: { a: { b: 1 } } == { a: { b: 1 } }

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.fail(actual, expected, message, operator)#

Throws an AssertionError. If message is falsy, the error message is set as the values of actual and expected separated by the provided operator. Otherwise, the error message is the value of message.

const assert = require('assert');

assert.fail(1, 2, undefined, '>');
  // AssertionError: 1 > 2

assert.fail(1, 2, 'whoops', '>');
  // AssertionError: whoops

assert.ifError(value)#

Throws value if value is truthy. This is useful when testing the error argument in callbacks.

const assert = require('assert');

assert.ifError(0); // OK
assert.ifError(1); // Throws 1
assert.ifError('error') // Throws 'error'
assert.ifError(new Error()); // Throws Error
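
As a further illustration, a small sketch of the callback pattern mentioned above (the done() function and its arguments are hypothetical):

function done(err, result) {
  assert.ifError(err);   // throws err when it is truthy
  console.log(result);
}

done(null, 42);          // prints: 42
done(new Error('boom')); // throws the Error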

assert.notDeepEqual(actual, expected[, message])#

Tests for any deep inequality. Opposite of assert.deepEqual().

const assert = require('assert');

const obj1 = {
  a : {
    b : 1
  }
};
const obj2 = {
  a : {
    b : 2
  }
};
const obj3 = {
  a : {
    b : 1
  }
}
const obj4 = Object.create(obj1);

assert.notDeepEqual(obj1, obj1);
  // AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }

assert.notDeepEqual(obj1, obj2);
  // OK, obj1 and obj2 are not deeply equal

assert.notDeepEqual(obj1, obj3);
  // AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }

assert.notDeepEqual(obj1, obj4);
  // OK, obj1 and obj4 are not deeply equal

If the values are deeply equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.notDeepStrictEqual(actual, expected[, message])#

Tests for deep strict inequality. Opposite of assert.deepStrictEqual().

const assert = require('assert');

assert.notDeepEqual({a:1}, {a:'1'});
  // AssertionError: { a: 1 } notDeepEqual { a: '1' }

assert.notDeepStrictEqual({a:1}, {a:'1'});
  // OK

If the values are deeply and strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.notEqual(actual, expected[, message])#

Tests shallow, coercive inequality with the not equal comparison operator ( != ).

const assert = require('assert');

assert.notEqual(1, 2);
  // OK

assert.notEqual(1, 1);
  // AssertionError: 1 != 1

assert.notEqual(1, '1');
  // AssertionError: 1 != '1'

If the values are equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.notStrictEqual(actual, expected[, message])#

Tests strict inequality as determined by the strict not equal operator ( !== ).

const assert = require('assert');

assert.notStrictEqual(1, 2);
  // OK

assert.notStrictEqual(1, 1);
  // AssertionError: 1 !== 1

assert.notStrictEqual(1, '1');
  // OK

If the values are strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.ok(value[, message])#

Tests if value is truthy. It is equivalent to assert.equal(!!value, true, message).

If value is not truthy, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

const assert = require('assert');

assert.ok(true);  // OK
assert.ok(1);     // OK
assert.ok(false);
  // throws "AssertionError: false == true"
assert.ok(0);
  // throws "AssertionError: 0 == true"
assert.ok(false, 'it\'s false');
  // throws "AssertionError: it's false"

assert.strictEqual(actual, expected[, message])#

Tests strict equality as determined by the strict equality operator ( === ).

const assert = require('assert');

assert.strictEqual(1, 2);
  // AssertionError: 1 === 2

assert.strictEqual(1, 1);
  // OK

assert.strictEqual(1, '1');
  // AssertionError: 1 === '1'

If the values are not strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.throws(block[, error][, message])#

Expects the function block to throw an error.

If specified, error can be a constructor, RegExp, or validation function.

If specified, message will be the message provided by the AssertionError if the block fails to throw.

Validate instanceof using constructor:

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  Error
);

Validate error message using RegExp:

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  /value/
);

Custom error validation:

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  function(err) {
    if ( (err instanceof Error) && /value/.test(err) ) {
      return true;
    }
  },
  'unexpected error'
);

Note that error can not be a string. If a string is provided as the second argument, then error is assumed to be omitted and the string will be used for message instead. This can lead to easy-to-miss mistakes:

// THIS IS A MISTAKE! DO NOT DO THIS!
assert.throws(myFunction, 'missing foo', 'did not throw with expected message');

// Do this instead.
assert.throws(myFunction, /missing foo/, 'did not throw with expected message');

Buffer#

Stability: 2 - Stable

Prior to the introduction of TypedArray in ECMAScript 2015 (ES6), the JavaScript language had no mechanism for reading or manipulating streams of binary data. The Buffer class was introduced as part of the Node.js API to make it possible to interact with octet streams in the context of things like TCP streams and file system operations.

Now that TypedArray has been added in ES6, the Buffer class implements the Uint8Array API in a manner that is more optimized and suitable for Node.js' use cases.

Instances of the Buffer class are similar to arrays of integers but correspond to fixed-sized, raw memory allocations outside the V8 heap. The size of the Buffer is established when it is created and cannot be resized.

The Buffer class is a global within Node.js, making it unlikely that one would need to ever use require('buffer').

const buf1 = Buffer.alloc(10);
  // Creates a zero-filled Buffer of length 10.

const buf2 = Buffer.alloc(10, 1);
  // Creates a Buffer of length 10, filled with 0x01.

const buf3 = Buffer.allocUnsafe(10);
  // Creates an uninitialized buffer of length 10.
  // This is faster than calling Buffer.alloc() but the returned
  // Buffer instance might contain old data that needs to be
  // overwritten using either fill() or write().

const buf4 = Buffer.from([1,2,3]);
  // Creates a Buffer containing [01, 02, 03].

const buf5 = Buffer.from('test');
  // Creates a Buffer containing ASCII bytes [74, 65, 73, 74].

const buf6 = Buffer.from('tést', 'utf8');
  // Creates a Buffer containing UTF8 bytes [74, c3, a9, 73, 74].

Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe()#

In versions of Node.js prior to v6, Buffer instances were created using the Buffer constructor function, which allocates the returned Buffer differently based on what arguments are provided:

  • Passing a number as the first argument to Buffer() (e.g. new Buffer(10)), allocates a new Buffer object of the specified size. The memory allocated for such Buffer instances is not initialized and can contain sensitive data. Such Buffer objects must be initialized manually by using either buf.fill(0) or by writing to the Buffer completely. While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required between creating a fast-but-uninitialized Buffer versus creating a slower-but-safer Buffer.
  • Passing a string, array, or Buffer as the first argument copies the passed object's data into the Buffer.
  • Passing an ArrayBuffer returns a Buffer that shares allocated memory with the given ArrayBuffer.

Because the behavior of new Buffer() changes significantly based on the type of value passed as the first argument, applications that do not properly validate the input arguments passed to new Buffer(), or that fail to appropriately initialize newly allocated Buffer content, can inadvertently introduce security and reliability issues into their code.

To make the creation of Buffer objects more reliable and less error prone, the various forms of the new Buffer() constructor have been deprecated and replaced by separate Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe() methods.

Developers should migrate all existing uses of the new Buffer() constructors to one of these new APIs.
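
As a rough, illustrative migration sketch (the values shown are examples only):

const a = Buffer.from([1, 2, 3]);       // instead of new Buffer([1, 2, 3])
const b = Buffer.from('hello', 'utf8'); // instead of new Buffer('hello', 'utf8')
const c = Buffer.alloc(10);             // zero-filled; instead of new Buffer(10) plus buf.fill(0)
const d = Buffer.allocUnsafe(10);       // uninitialized; instead of new Buffer(10)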

Buffer instances returned by Buffer.allocUnsafe(size) may be allocated off a shared internal memory pool if size is less than or equal to half Buffer.poolSize. Instances returned by Buffer.allocUnsafeSlow(size) never use the shared internal memory pool.

The --zero-fill-buffers command line option#

Node.js can be started using the --zero-fill-buffers command line option to force all newly allocated Buffer instances created using either new Buffer(size), Buffer.allocUnsafe(size), Buffer.allocUnsafeSlow(size) or new SlowBuffer(size) to be automatically zero-filled upon creation. Use of this flag changes the default behavior of these methods and can have a significant impact on performance. Use of the --zero-fill-buffers option is recommended only when absolutely necessary to enforce that newly allocated Buffer instances cannot contain potentially sensitive data.

$ node --zero-fill-buffers
> Buffer.allocUnsafe(5);
<Buffer 00 00 00 00 00>

What makes Buffer.allocUnsafe(size) and Buffer.allocUnsafeSlow(size) "unsafe"?#

When calling Buffer.allocUnsafe() (and Buffer.allocUnsafeSlow()), the segment of allocated memory is uninitialized (it is not zeroed-out). While this design makes the allocation of memory quite fast, the allocated segment of memory might contain old data that is potentially sensitive. Using a Buffer created by Buffer.allocUnsafe() without completely overwriting the memory can allow this old data to be leaked when the Buffer memory is read.

While there are clear performance advantages to using Buffer.allocUnsafe(), extra care must be taken in order to avoid introducing security vulnerabilities into an application.
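
A minimal sketch of that precaution:

const buf = Buffer.allocUnsafe(16);
  // may contain old, potentially sensitive data
buf.fill(0);
  // overwrite the memory before the Buffer is read or exposed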

Buffers and Character Encodings#

Buffers are commonly used to represent sequences of encoded characters such as UTF8, UCS2, Base64 or even Hex-encoded data. It is possible to convert back and forth between Buffers and ordinary JavaScript string objects by using an explicit encoding method.

const buf = Buffer.from('hello world', 'ascii');
console.log(buf.toString('hex'));
  // prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
  // prints: aGVsbG8gd29ybGQ=

The character encodings currently supported by Node.js include:

  • 'ascii' - for 7-bit ASCII data only. This encoding method is very fast and will strip the high bit if set.

  • 'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.

  • 'utf16le' - 2 or 4 bytes, little-endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.

  • 'ucs2' - Alias of 'utf16le'.

  • 'base64' - Base64 string encoding. When creating a buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5.

  • 'binary' - A way of encoding the buffer into a one-byte (latin-1) encoded string. The string 'latin-1' is not supported. Instead, pass 'binary' to use 'latin-1' encoding.

  • 'hex' - Encode each byte as two hexadecimal characters.
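
For example, a short sketch that decodes and re-encodes data using two of the encodings listed above:

const buf = Buffer.from('aGVsbG8gd29ybGQ=', 'base64');

console.log(buf.toString('ascii'));
  // prints: hello world
console.log(buf.toString('hex'));
  // prints: 68656c6c6f20776f726c64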

Buffers and TypedArray#

Buffers are also Uint8Array TypedArray instances. However, there are subtle incompatibilities with the TypedArray specification in ECMAScript 2015. For instance, while ArrayBuffer#slice() creates a copy of the slice, the implementation of Buffer#slice() creates a view over the existing Buffer without copying, making Buffer#slice() far more efficient.

It is also possible to create new TypedArray instances from a Buffer with the following caveats:

  1. The Buffer object's memory is copied to the TypedArray, not shared.

  2. The Buffer object's memory is interpreted as an array of distinct elements, and not as a byte array of the target type. That is, new Uint32Array(Buffer.from([1,2,3,4])) creates a 4-element Uint32Array with elements [1,2,3,4], not a Uint32Array with a single element [0x1020304] or [0x4030201].
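
For instance, a small sketch of the second caveat:

const buf = Buffer.from([1, 2, 3, 4]);
const arr = new Uint32Array(buf);

console.log(arr);
  // Prints a 4-element Uint32Array: [ 1, 2, 3, 4 ]
  // The memory is copied, not shared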

It is possible to create a new Buffer that shares the same allocated memory as a TypedArray instance by using the TypedArray object's .buffer property:

const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;

const buf1 = Buffer.from(arr); // copies the buffer
const buf2 = Buffer.from(arr.buffer); // shares the memory with arr;

console.log(buf1);
  // Prints: <Buffer 88 a0>, copied buffer has only two elements
console.log(buf2);
  // Prints: <Buffer 88 13 a0 0f>

arr[1] = 6000;
console.log(buf1);
  // Prints: <Buffer 88 a0>
console.log(buf2);
  // Prints: <Buffer 88 13 70 17>

Note that when creating a Buffer using the TypedArray's .buffer, it is possible to use only a portion of the underlying ArrayBuffer by passing in byteOffset and length parameters:

const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);
console.log(buf.length);
  // Prints: 16

Buffer.from() and TypedArray.from() (e.g. Uint8Array.from()) have different signatures and implementations. Specifically, the TypedArray variants accept a second argument that is a mapping function invoked on every element of the typed array:

  • TypedArray.from(source[, mapFn[, thisArg]])

The Buffer.from() method, however, does not support the use of a mapping function.
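
A brief sketch of the difference (map the elements first, then create the Buffer):

const typed = Uint8Array.from([1, 2, 3], (x) => x * 2);
  // Uint8Array [ 2, 4, 6 ]

const buf = Buffer.from([1, 2, 3].map((x) => x * 2));
  // <Buffer 02 04 06>; Buffer.from() itself takes no mapping function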

Buffers and ES6 iteration#

Buffers can be iterated over using the ECMAScript 2015 (ES6) for..of syntax:

const buf = Buffer.from([1, 2, 3]);

for (var b of buf)
  console.log(b)

// Prints:
//   1
//   2
//   3

Additionally, the buf.values(), buf.keys(), and buf.entries() methods can be used to create iterators.

Class: Buffer#

The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.

new Buffer(array)#

Stability: 0 - Deprecated: Use Buffer.from(array)
instead.

Allocates a new Buffer using an array of octets.

const buf = new Buffer([0x62,0x75,0x66,0x66,0x65,0x72]);
  // creates a new Buffer containing ASCII bytes
  // ['b','u','f','f','e','r']

new Buffer(buffer)#

Stability: 0 - Deprecated: Use Buffer.from(buffer)
instead.

Copies the passed buffer data onto a new Buffer instance.

const buf1 = new Buffer('buffer');
const buf2 = new Buffer(buf1);

buf1[0] = 0x61;
console.log(buf1.toString());
  // 'auffer'
console.log(buf2.toString());
  // 'buffer' (copy is not changed)

new Buffer(arrayBuffer[, byteOffset [, length]])#

Stability: 0 - Deprecated: Use
Buffer.from(arrayBuffer[, byteOffset [, length]])
instead.
  • arrayBuffer <ArrayBuffer> The .buffer property of a TypedArray or a new ArrayBuffer()
  • byteOffset <Number> Default: 0
  • length <Number> Default: arrayBuffer.length - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;

const buf = new Buffer(arr.buffer); // shares the memory with arr;

console.log(buf);
  // Prints: <Buffer 88 13 a0 0f>

// changing the TypedArray changes the Buffer also
arr[1] = 6000;

console.log(buf);
  // Prints: <Buffer 88 13 70 17>

new Buffer(size)#

Stability: 0 - Deprecated: Use
Buffer.alloc(size[, fill[, encoding]]) instead (also
see Buffer.allocUnsafe(size)).

Allocates a new Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

Unlike ArrayBuffers, the underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

const buf = new Buffer(5);
console.log(buf);
  // <Buffer 78 e0 82 02 01>
  // (octets will be different, every time)
buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00>

new Buffer(str[, encoding])#

Stability: 0 - Deprecated:
Use Buffer.from(str[, encoding]) instead.

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the string's character encoding.

const buf1 = new Buffer('this is a tést');
console.log(buf1.toString());
  // prints: this is a tést
console.log(buf1.toString('ascii'));
  // prints: this is a tC)st

const buf2 = new Buffer('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
  // prints: this is a tést

Class Method: Buffer.alloc(size[, fill[, encoding]])#

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

const buf = Buffer.alloc(5);
console.log(buf);
  // <Buffer 00 00 00 00 00>

The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See [buf.fill()][] for more information.

const buf = Buffer.alloc(5, 'a');
console.log(buf);
  // <Buffer 61 61 61 61 61>

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:

const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
  // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.allocUnsafe(size)#

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

const buf = Buffer.allocUnsafe(5);
console.log(buf);
  // <Buffer 78 e0 82 02 01>
  // (octets will be different, every time)
buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00>

A TypeError will be thrown if size is not a number.

Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
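
A brief sketch of that distinction (assuming the default Buffer.poolSize of 8192):

const pooled = Buffer.allocUnsafe(4096).fill('x');
  // 4096 <= Buffer.poolSize >> 1, so this allocation may come from the pool

const unpooled = Buffer.alloc(4096, 'x');
  // Buffer.alloc() never uses the internal pool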

Class Method: Buffer.allocUnsafeSlow(size)#

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and clean up as many Persistent objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.

// need to keep around a few small chunks of memory
const store = [];

socket.on('readable', () => {
  const data = socket.read();
  // allocate for retained data
  const sb = Buffer.allocUnsafeSlow(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

Buffer.allocUnsafeSlow() should be used only as a last resort after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.byteLength(string[, encoding])#

Returns the actual byte length of a string. This is not the same as String.prototype.length since that returns the number of characters in a string.

Example:

const str = '\u00bd + \u00bc = \u00be';

console.log(`${str}: ${str.length} characters, ` +
            `${Buffer.byteLength(str, 'utf8')} bytes`);

// ½ + ¼ = ¾: 9 characters, 12 bytes

When string is a Buffer/DataView/TypedArray/ArrayBuffer, returns the actual byte length.

Otherwise, converts to String and returns the byte length of string.
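For instance, passing a TypedArray returns its underlying byte length (a brief illustrative snippet):

const arr = new Uint16Array(3);
console.log(Buffer.byteLength(arr));
  // Prints: 6 (three 16-bit elements, two bytes each)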

Class Method: Buffer.compare(buf1, buf2)#

Compares buf1 to buf2, typically for the purpose of sorting arrays of Buffers. This is equivalent to calling buf1.compare(buf2).

const arr = [Buffer.from('1234'), Buffer.from('0123')];
arr.sort(Buffer.compare);

Class Method: Buffer.concat(list[, totalLength])#

  • list <Array> List of Buffer objects to concat
  • totalLength <Number> Total length of the Buffers in the list when concatenated
  • Return: <Buffer>

Returns a new Buffer which is the result of concatenating all the Buffers in the list together.

If the list has no items, or if the totalLength is 0, then a new zero-length Buffer is returned.

If totalLength is not provided, it is calculated from the Buffers in the list. This, however, adds an additional loop to the function, so it is faster to provide the length explicitly.

Example: build a single Buffer from a list of three Buffers:

const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;

console.log(totalLength);
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
console.log(bufA.length);

// 42
// <Buffer 00 00 00 00 ...>
// 42

Class Method: Buffer.from(array)#

Allocates a new Buffer using an array of octets.

const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
  // creates a new Buffer containing ASCII bytes
  // ['b','u','f','f','e','r']

A TypeError will be thrown if array is not an Array.

Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])#

  • arrayBuffer <ArrayBuffer> The .buffer property of a TypedArray or a new ArrayBuffer()
  • byteOffset <Number> Default: 0
  • length <Number> Default: arrayBuffer.byteLength - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;

const buf = Buffer.from(arr.buffer); // shares the memory with arr;

console.log(buf);
  // Prints: <Buffer 88 13 a0 0f>

// changing the TypedArray changes the Buffer also
arr[1] = 6000;

console.log(buf);
  // Prints: <Buffer 88 13 70 17>

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
  // Prints: 2

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.

Class Method: Buffer.from(buffer)#

Copies the passed buffer data onto a new Buffer instance.

const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);

buf1[0] = 0x61;
console.log(buf1.toString());
  // 'auffer'
console.log(buf2.toString());
  // 'buffer' (copy is not changed)

A TypeError will be thrown if buffer is not a Buffer.

Class Method: Buffer.from(str[, encoding])#

  • str <String> String to encode.
  • encoding <String> Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.

const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
  // prints: this is a tést
console.log(buf1.toString('ascii'));
  // prints: this is a tC)st

const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
  // prints: this is a tést

A TypeError will be thrown if str is not a string.

Class Method: Buffer.isBuffer(obj)#

Returns true if obj is a Buffer, or false otherwise.
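
For example:

Buffer.isBuffer(Buffer.alloc(10));
  // Returns: true
Buffer.isBuffer('not a buffer');
  // Returns: false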

Class Method: Buffer.isEncoding(encoding)#

Returns true if the encoding is a valid encoding argument, or false otherwise.
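
For example:

Buffer.isEncoding('utf8');
  // Returns: true
Buffer.isEncoding('hex');
  // Returns: true
Buffer.isEncoding('utf/8');
  // Returns: false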

buf[index]#

The index operator [index] can be used to get and set the octet at position index in the Buffer. The values refer to individual bytes, so the legal value range is between 0x00 and 0xFF (hex) or 0 and 255 (decimal).

Example: copy an ASCII string into a Buffer, one byte at a time:

const str = "Node.js";
const buf = Buffer.allocUnsafe(str.length);

for (let i = 0; i < str.length ; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf.toString('ascii'));
  // Prints: Node.js

buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])#

  • target <Buffer>
  • targetStart <Integer> The offset within target at which to begin comparison. default = 0.
  • targetEnd <Integer> The offset within target at which to end comparison. Ignored when targetStart is undefined. default = target.byteLength.
  • sourceStart <Integer> The offset within buf at which to begin comparison. Ignored when targetStart is undefined. default = 0
  • sourceEnd <Integer> The offset within buf at which to end comparison. Ignored when targetStart is undefined. default = buf.byteLength.
  • Return: <Number>

Compares two Buffer instances and returns a number indicating whether buf comes before, after, or is the same as the target in sort order. Comparison is based on the actual sequence of bytes in each Buffer.

  • 0 is returned if target is the same as buf
  • 1 is returned if target should come before buf when sorted.
  • -1 is returned if target should come after buf when sorted.
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');

console.log(buf1.compare(buf1));
  // Prints: 0
console.log(buf1.compare(buf2));
  // Prints: -1
console.log(buf1.compare(buf3));
  // Prints: 1
console.log(buf2.compare(buf1));
  // Prints: 1
console.log(buf2.compare(buf3));
  // Prints: 1

[buf1, buf2, buf3].sort(Buffer.compare);
  // produces sort order [buf1, buf3, buf2]

The optional targetStart, targetEnd, sourceStart, and sourceEnd arguments can be used to limit the comparison to specific ranges within the two Buffer objects.

const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);

console.log(buf1.compare(buf2, 5, 9, 0, 4));
  // Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
  // Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
  // Prints: 1

A RangeError will be thrown if: targetStart < 0, sourceStart < 0, targetEnd > target.byteLength or sourceEnd > source.byteLength.

buf.copy(targetBuffer[, targetStart[, sourceStart[, sourceEnd]]])#

Copies data from a region of this Buffer to a region in the target Buffer even if the target memory region overlaps with the source.

Example: build two Buffers, then copy buf1 from byte 16 through byte 19 into buf2, starting at the 8th byte in buf2.

const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');

for (let i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 is ASCII a
}

buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));
  // Prints: !!!!!!!!qrst!!!!!!!!!!!!!

Example: build a single Buffer, then copy data from one region to an overlapping region in the same Buffer.

const buf = Buffer.allocUnsafe(26);

for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97; // 97 is ASCII a
}

buf.copy(buf, 0, 4, 10);
console.log(buf.toString());

// efghijghijklmnopqrstuvwxyz

buf.entries()#

  • Return: <Iterator>

Creates and returns an iterator of [index, byte] pairs from the Buffer contents.

const buf = Buffer.from('buffer');
for (var pair of buf.entries()) {
  console.log(pair);
}
// prints:
//   [0, 98]
//   [1, 117]
//   [2, 102]
//   [3, 102]
//   [4, 101]
//   [5, 114]

buf.equals(otherBuffer)#

Returns a boolean indicating whether this and otherBuffer have exactly the same bytes.

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');

console.log(buf1.equals(buf2));
  // Prints: true
console.log(buf1.equals(buf3));
  // Prints: false

buf.fill(value[, offset[, end]][, encoding])#

Fills the Buffer with the specified value. If the offset (defaults to 0) and end (defaults to buf.length) are not given, the entire Buffer will be filled. The method returns a reference to the Buffer, so calls can be chained. This is meant as a small simplification, allowing the creation and filling of a Buffer to be done on a single line:

const b = Buffer.allocUnsafe(50).fill('h');
console.log(b.toString());
  // Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh

encoding is only relevant if value is a string. Otherwise it is ignored. value is coerced to a uint32 value if it is not a String or Number.
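
For example, a value given as a base64 string can be decoded before filling (an illustrative sketch):

const buf = Buffer.allocUnsafe(5).fill('aGVsbG8=', 'base64');
console.log(buf);
  // Prints: <Buffer 68 65 6c 6c 6f>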

The fill() operation writes bytes into the Buffer dumbly. If the final write falls in between a multi-byte character then whatever bytes fit into the buffer are written.

console.log(Buffer.allocUnsafe(3).fill('\u0222'));
  // Prints: <Buffer c8 a2 c8>

buf.indexOf(value[, byteOffset][, encoding])#

Operates similar to Array#indexOf() in that it returns either the starting index position of value in Buffer or -1 if the Buffer does not contain value. The value can be a String, Buffer or Number. Strings are by default interpreted as UTF8. Buffers will use the entire Buffer (to compare a partial Buffer use buf.slice()). Numbers can range from 0 to 255.

const buf = Buffer.from('this is a buffer');

buf.indexOf('this');
  // returns 0
buf.indexOf('is');
  // returns 2
buf.indexOf(Buffer.from('a buffer'));
  // returns 8
buf.indexOf(97); // ascii for 'a'
  // returns 8
buf.indexOf(Buffer.from('a buffer example'));
  // returns -1
buf.indexOf(Buffer.from('a buffer example').slice(0,8));
  // returns 8

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'ucs2');

utf16Buffer.indexOf('\u03a3',  0, 'ucs2');
  // returns 4
utf16Buffer.indexOf('\u03a3', -4, 'ucs2');
  // returns 6

buf.includes(value[, byteOffset][, encoding])#

Operates similar to Array#includes(). The value can be a String, Buffer or Number. Strings are interpreted as UTF8 unless overridden with the encoding argument. Buffers will use the entire Buffer (to compare a partial Buffer use buf.slice()). Numbers can range from 0 to 255.

The byteOffset indicates the index in buf where searching begins.

const buf = Buffer.from('this is a buffer');

buf.includes('this');
  // returns true
buf.includes('is');
  // returns true
buf.includes(Buffer.from('a buffer'));
  // returns true
buf.includes(97); // ascii for 'a'
  // returns true
buf.includes(Buffer.from('a buffer example'));
  // returns false
buf.includes(Buffer.from('a buffer example').slice(0,8));
  // returns true
buf.includes('this', 4);
  // returns false

buf.keys()#

  • Return: <Iterator>

Creates and returns an iterator of Buffer keys (indices).

const buf = Buffer.from('buffer');
for (var key of buf.keys()) {
  console.log(key);
}
// prints:
//   0
//   1
//   2
//   3
//   4
//   5

buf.length#

Returns the amount of memory allocated for the Buffer in bytes. Note that this does not necessarily reflect the amount of usable data within the Buffer. For instance, in the example below, a Buffer with 1234 bytes is allocated, but only 11 ASCII bytes are written.

const buf = Buffer.alloc(1234);

console.log(buf.length);
  // Prints: 1234

buf.write('some string', 0, 'ascii');
console.log(buf.length);
  // Prints: 1234

While the length property is not immutable, changing the value of length can result in undefined and inconsistent behavior. Applications that wish to modify the length of a Buffer should therefore treat length as read-only and use buf.slice() to create a new Buffer.

var buf = Buffer.allocUnsafe(10);
buf.write('abcdefghj', 0, 'ascii');
console.log(buf.length);
  // Prints: 10
buf = buf.slice(0,5);
console.log(buf.length);
  // Prints: 5

buf.readDoubleBE(offset[, noAssert])#

buf.readDoubleLE(offset[, noAssert])#

Reads a 64-bit double from the Buffer at the specified offset with specified endian format (readDoubleBE() returns big endian, readDoubleLE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

const buf = Buffer.from([1,2,3,4,5,6,7,8]);

buf.readDoubleBE();
  // Returns: 8.20788039913184e-304
buf.readDoubleLE();
  // Returns: 5.447603722011605e-270
buf.readDoubleLE(1);
  // throws RangeError: Index out of range

buf.readDoubleLE(1, true); // Warning: reads past end of buffer!
  // Segmentation fault! don't do this!

buf.readFloatBE(offset[, noAssert])#

buf.readFloatLE(offset[, noAssert])#

Reads a 32-bit float from the Buffer at the specified offset with specified endian format (readFloatBE() returns big endian, readFloatLE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

const buf = Buffer.from([1,2,3,4]);

buf.readFloatBE();
  // Returns: 2.387939260590663e-38
buf.readFloatLE();
  // Returns: 1.539989614439558e-36
buf.readFloatLE(1);
  // throws RangeError: Index out of range

buf.readFloatLE(1, true); // Warning: reads past end of buffer!
  // Segmentation fault! don't do this!

buf.readInt8(offset[, noAssert])#

Reads a signed 8-bit integer from the Buffer at the specified offset.

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Integers read from the Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([1,-2,3,4]);

buf.readInt8(0);
  // returns 1
buf.readInt8(1);
  // returns -2

buf.readInt16BE(offset[, noAssert])#

buf.readInt16LE(offset[, noAssert])#

Reads a signed 16-bit integer from the Buffer at the specified offset with the specified endian format (readInt16BE() returns big endian, readInt16LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Integers read from the Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([1,-2,3,4]);

buf.readInt16BE();
  // returns 510
buf.readInt16LE(1);
  // returns 1022

buf.readInt32BE(offset[, noAssert])#

buf.readInt32LE(offset[, noAssert])#

Reads a signed 32-bit integer from the Buffer at the specified offset with the specified endian format (readInt32BE() returns big endian, readInt32LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Integers read from the Buffer are interpreted as two's complement signed values.

const buf = Buffer.from([1,-2,3,4]);

buf.readInt32BE();
  // returns 33424132
buf.readInt32LE();
  // returns 67370497
buf.readInt32LE(1);
  // throws RangeError: Index out of range

buf.readIntBE(offset, byteLength[, noAssert])#

buf.readIntLE(offset, byteLength[, noAssert])#

Reads byteLength number of bytes from the Buffer at the specified offset and interprets the result as a two's complement signed value. Supports up to 48 bits of accuracy. For example:

const buf = Buffer.allocUnsafe(6);
buf.writeUInt16LE(0x90ab, 0);
buf.writeUInt32LE(0x12345678, 2);
buf.readIntLE(0, 6).toString(16);  // Specify 6 bytes (48 bits)
// Returns: '1234567890ab'

buf.readIntBE(0, 6).toString(16);
// Returns: -546f87a9cbee

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

buf.readUInt8(offset[, noAssert])#

Reads an unsigned 8-bit integer from the Buffer at the specified offset.

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

const buf = Buffer.from([1,-2,3,4]);

buf.readUInt8(0);
  // returns 1
buf.readUInt8(1);
  // returns 254

buf.readUInt16BE(offset[, noAssert])#

buf.readUInt16LE(offset[, noAssert])#

Reads an unsigned 16-bit integer from the Buffer at the specified offset with specified endian format (readUInt16BE() returns big endian, readUInt16LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Example:

const buf = Buffer.from([0x3, 0x4, 0x23, 0x42]);

buf.readUInt16BE(0);
  // Returns: 0x0304
buf.readUInt16LE(0);
  // Returns: 0x0403
buf.readUInt16BE(1);
  // Returns: 0x0423
buf.readUInt16LE(1);
  // Returns: 0x2304
buf.readUInt16BE(2);
  // Returns: 0x2342
buf.readUInt16LE(2);
  // Returns: 0x4223

buf.readUInt32BE(offset[, noAssert])#

buf.readUInt32LE(offset[, noAssert])#

Reads an unsigned 32-bit integer from the Buffer at the specified offset with specified endian format (readUInt32BE() returns big endian, readUInt32LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Example:

const buf = Buffer.from([0x3, 0x4, 0x23, 0x42]);

buf.readUInt32BE(0);
  // Returns: 0x03042342
console.log(buf.readUInt32LE(0));
  // Returns: 0x42230403

buf.readUIntBE(offset, byteLength[, noAssert])#

buf.readUIntLE(offset, byteLength[, noAssert])#

Reads byteLength number of bytes from the Buffer at the specified offset and interprets the result as an unsigned integer. Supports up to 48 bits of accuracy. For example:

const buf = Buffer.allocUnsafe(6);
buf.writeUInt16LE(0x90ab, 0);
buf.writeUInt32LE(0x12345678, 2);
buf.readUIntLE(0, 6).toString(16);  // Specify 6 bytes (48 bits)
// Returns: '1234567890ab'

buf.readUIntBE(0, 6).toString(16);
// Returns: ab9078563412

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

buf.slice([start[, end]])#

Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indices.

Note that modifying the new Buffer slice will modify the memory in the original Buffer because the allocated memory of the two objects overlap.

Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original Buffer.

const buf1 = Buffer.allocUnsafe(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 is ASCII a
}

const buf2 = buf1.slice(0, 3);
buf2.toString('ascii', 0, buf2.length);
  // Returns: 'abc'
buf1[0] = 33;
buf2.toString('ascii', 0, buf2.length);
  // Returns : '!bc'

Specifying negative indexes causes the slice to be generated relative to the end of the Buffer rather than the beginning.

const buf = Buffer.from('buffer');

buf.slice(-6, -1).toString();
  // Returns 'buffe', equivalent to buf.slice(0, 5)
buf.slice(-6, -2).toString();
  // Returns 'buff', equivalent to buf.slice(0, 4)
buf.slice(-5, -2).toString();
  // Returns 'uff', equivalent to buf.slice(1, 4)

buf.swap16()#

Interprets the Buffer as an array of unsigned 16-bit integers and swaps the byte-order in-place. Throws a RangeError if the Buffer length is not a multiple of 2 bytes (16 bits). The method returns a reference to the Buffer, so calls can be chained.

const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf);
  // Prints: <Buffer 01 02 03 04 05 06 07 08>
buf.swap16();
console.log(buf);
  // Prints: <Buffer 02 01 04 03 06 05 08 07>

buf.swap32()#

Interprets the Buffer as an array of unsigned 32-bit integers and swaps the byte-order in-place. Throws a RangeError if the Buffer length is not a multiple of 4 bytes (32 bits). The method returns a reference to the Buffer, so calls can be chained.

const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf);
  // Prints: <Buffer 01 02 03 04 05 06 07 08>
buf.swap32();
console.log(buf);
  // Prints: <Buffer 04 03 02 01 08 07 06 05>

buf.toString([encoding[, start[, end]]])#

Decodes and returns a string from the Buffer data using the specified character set encoding.

const buf = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97; // 97 is ASCII a
}
buf.toString('ascii');
  // Returns: 'abcdefghijklmnopqrstuvwxyz'
buf.toString('ascii',0,5);
  // Returns: 'abcde'
buf.toString('utf8',0,5);
  // Returns: 'abcde'
buf.toString(undefined,0,5);
  // Returns: 'abcde', encoding defaults to 'utf8'

buf.toJSON()#

Returns a JSON representation of the Buffer instance. JSON.stringify() implicitly calls this function when stringifying a Buffer instance.

Example:

const buf = Buffer.from('test');
const json = JSON.stringify(buf);

console.log(json);
// Prints: '{"type":"Buffer","data":[116,101,115,116]}'

const copy = JSON.parse(json, (key, value) => {
    return value && value.type === 'Buffer'
      ? Buffer.from(value.data)
      : value;
  });

console.log(copy.toString());
// Prints: 'test'

buf.values()#

  • Return: <Iterator>

Creates and returns an iterator for Buffer values (bytes). This function is called automatically when the Buffer is used in a for..of statement.

const buf = Buffer.from('buffer');
for (var value of buf.values()) {
  console.log(value);
}
// prints:
//   98
//   117
//   102
//   102
//   101
//   114

for (var value of buf) {
  console.log(value);
}
// prints:
//   98
//   117
//   102
//   102
//   101
//   114

buf.write(string[, offset[, length]][, encoding])#

Writes string to the Buffer at offset using the given encoding. The length parameter is the number of bytes to write. If the Buffer does not contain enough space to fit the entire string, only part of the string will be written; however, partially encoded characters will not be written.

const buf = Buffer.allocUnsafe(256);
const len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
  // Prints: 12 bytes: ½ + ¼ = ¾

buf.writeDoubleBE(value, offset[, noAssert])#

buf.writeDoubleLE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 8
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeDoubleBE() writes big endian, writeDoubleLE() writes little endian). The value argument should be a valid 64-bit double. Behavior is not defined when value is anything other than a 64-bit double.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = Buffer.allocUnsafe(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);

console.log(buf);
  // Prints: <Buffer 43 eb d5 b7 dd f9 5f d7>

buf.writeDoubleLE(0xdeadbeefcafebabe, 0);

console.log(buf);
  // Prints: <Buffer d7 5f f9 dd b7 d5 eb 43>

buf.writeFloatBE(value, offset[, noAssert])#

buf.writeFloatLE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 4
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeFloatBE() writes big endian, writeFloatLE() writes little endian). Behavior is not defined when value is anything other than a 32-bit float.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = Buffer.allocUnsafe(4);
buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);
  // Prints: <Buffer 4f 4a fe bb>

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);
  // Prints: <Buffer bb fe 4a 4f>

buf.writeInt8(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 1
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset. The value should be a valid signed 8-bit integer. Behavior is not defined when value is anything other than a signed 8-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

The value is interpreted and written as a two's complement signed integer.

const buf = Buffer.allocUnsafe(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
  // Prints: <Buffer 02 fe>

buf.writeInt16BE(value, offset[, noAssert])#

buf.writeInt16LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 2
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeInt16BE() writes big endian, writeInt16LE() writes little endian). The value should be a valid signed 16-bit integer. Behavior is not defined when value is anything other than a signed 16-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

The value is interpreted and written as a two's complement signed integer.

const buf = Buffer.allocUnsafe(4);
buf.writeInt16BE(0x0102,0);
buf.writeInt16LE(0x0304,2);
console.log(buf);
  // Prints: <Buffer 01 02 04 03>

buf.writeInt32BE(value, offset[, noAssert])#

buf.writeInt32LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 4
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeInt32BE() writes big endian, writeInt32LE() writes little endian). The value should be a valid signed 32-bit integer. Behavior is not defined when value is anything other than a signed 32-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

The value is interpreted and written as a two's complement signed integer.

const buf = Buffer.allocUnsafe(8);
buf.writeInt32BE(0x01020304,0);
buf.writeInt32LE(0x05060708,4);
console.log(buf);
  // Prints: <Buffer 01 02 03 04 08 07 06 05>

buf.writeIntBE(value, offset, byteLength[, noAssert])#

buf.writeIntLE(value, offset, byteLength[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - byteLength
  • byteLength <Number> 0 < byteLength <= 6
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset and byteLength. Supports up to 48 bits of accuracy. For example:

const buf1 = Buffer.allocUnsafe(6);
buf1.writeIntBE(0x1234567890ab, 0, 6);
console.log(buf1);
  // Prints: <Buffer 12 34 56 78 90 ab>

const buf2 = Buffer.allocUnsafe(6);
buf2.writeIntLE(0x1234567890ab, 0, 6);
console.log(buf2);
  // Prints: <Buffer ab 90 78 56 34 12>

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Behavior is not defined when value is anything other than an integer.

buf.writeUInt8(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 1
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset. The value should be a valid unsigned 8-bit integer. Behavior is not defined when value is anything other than an unsigned 8-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = Buffer.allocUnsafe(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);
  // Prints: <Buffer 03 04 23 42>

buf.writeUInt16BE(value, offset[, noAssert])#

buf.writeUInt16LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 2
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeUInt16BE() writes big endian, writeUInt16LE() writes little endian). The value should be a valid unsigned 16-bit integer. Behavior is not defined when value is anything other than an unsigned 16-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = Buffer.allocUnsafe(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);
  // Prints: <Buffer de ad be ef>

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);
  // Prints: <Buffer ad de ef be>

buf.writeUInt32BE(value, offset[, noAssert])#

buf.writeUInt32LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 4
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeUInt32BE() writes big endian, writeUInt32LE() writes little endian). The value should be a valid unsigned 32-bit integer. Behavior is not defined when value is anything other than an unsigned 32-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = Buffer.allocUnsafe(4);
buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);
  // Prints: <Buffer fe ed fa ce>

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);
  // Prints: <Buffer ce fa ed fe>

buf.writeUIntBE(value, offset, byteLength[, noAssert])#

buf.writeUIntLE(value, offset, byteLength[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - byteLength
  • byteLength <Number> 0 < byteLength <= 6
  • noAssert <Boolean> Default: false
  • Return: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset and byteLength. Supports up to 48 bits of accuracy. For example:

const buf = Buffer.allocUnsafe(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
  // Prints: <Buffer 12 34 56 78 90 ab>

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Behavior is not defined when value is anything other than an unsigned integer.

buffer.INSPECT_MAX_BYTES#

Returns the maximum number of bytes that will be returned when buffer.inspect() is called. This can be overridden by user modules. See util.inspect() for more details on buffer.inspect() behavior.

Note that this is a property on the buffer module as returned by require('buffer'), not on the Buffer global or a Buffer instance.
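
For illustration (a minimal sketch; the exact truncated output may vary):

const buffer = require('buffer');

console.log(buffer.INSPECT_MAX_BYTES);
  // Prints: 50 (the default)

buffer.INSPECT_MAX_BYTES = 10;
console.log(Buffer.alloc(20));
  // Output is truncated after 10 bytes, e.g.
  // <Buffer 00 00 00 00 00 00 00 00 00 00 ... >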

Class: SlowBuffer#

Stability: 0 - Deprecated: Use
Buffer.allocUnsafeSlow(size) instead.

Returns an un-pooled Buffer.

In order to avoid the garbage collection overhead of creating many individually allocated Buffers, by default allocations under 4KB are sliced from a single larger allocated object. This approach improves both performance and memory usage since V8 does not need to track and clean up as many Persistent objects.

In the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using SlowBuffer then copy out the relevant bits.

// need to keep around a few small chunks of memory
const SlowBuffer = require('buffer').SlowBuffer;
const store = [];

socket.on('readable', () => {
  const data = socket.read();
  // allocate for retained data
  const sb = new SlowBuffer(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

SlowBuffer should be used only as a last resort after a developer has observed undue memory retention in their applications.

new SlowBuffer(size)#

Stability: 0 - Deprecated: Use
Buffer.allocUnsafeSlow(size) instead.
  • size Number

Allocates a new SlowBuffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for SlowBuffer instances is not initialized. The contents of a newly created SlowBuffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a SlowBuffer to zeroes.

const SlowBuffer = require('buffer').SlowBuffer;
const buf = new SlowBuffer(5);
console.log(buf);
  // <Buffer 78 e0 82 02 01>
  // (octets will be different, every time)
buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00>

Child Process#

Stability: 2 - Stable

The child_process module provides the ability to spawn child processes in a manner that is similar, but not identical, to popen(3). This capability is primarily provided by the child_process.spawn() function:

const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.log(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});

By default, pipes for stdin, stdout and stderr are established between the parent Node.js process and the spawned child. It is possible to stream data through these pipes in a non-blocking way. Note, however, that some programs use line-buffered I/O internally. While that does not affect Node.js, it can mean that data sent to the child process may not be immediately consumed.

The child_process.spawn() method spawns the child process asynchronously, without blocking the Node.js event loop. The child_process.spawnSync() function provides equivalent functionality in a synchronous manner that blocks the event loop until the spawned process either exits or is terminated.

For convenience, the child_process module provides a handful of synchronous and asynchronous alternatives to child_process.spawn() and child_process.spawnSync(). Note that each of these alternatives are implemented on top of child_process.spawn() or child_process.spawnSync().

  • child_process.exec(): spawns a shell and runs a command within that shell, passing the stdout and stderr to a callback function when complete.
  • child_process.execFile(): similar to child_process.exec() except that it spawns the command directly without first spawning a shell.
  • child_process.fork(): spawns a new Node.js process and invokes a specified module with an IPC communication channel established that allows sending messages between parent and child.
  • child_process.execSync(): a synchronous version of child_process.exec() that will block the Node.js event loop.
  • child_process.execFileSync(): a synchronous version of child_process.execFile() that will block the Node.js event loop.

For certain use cases, such as automating shell scripts, the synchronous counterparts may be more convenient. In many cases, however, the synchronous methods can have significant impact on performance due to stalling the event loop while spawned processes complete.

Asynchronous Process Creation#

The child_process.spawn(), child_process.fork(), child_process.exec(), and child_process.execFile() methods all follow the idiomatic asynchronous programming pattern typical of other Node.js APIs.

Each of the methods returns a ChildProcess instance. These objects implement the Node.js EventEmitter API, allowing the parent process to register listener functions that are called when certain events occur during the life cycle of the child process.

The child_process.exec() and child_process.execFile() methods additionally allow for an optional callback function to be specified that is invoked when the child process terminates.

Spawning .bat and .cmd files on Windows#

The importance of the distinction between child_process.exec() and child_process.execFile() can vary based on platform. On Unix-type operating systems (Unix, Linux, OSX) child_process.execFile() can be more efficient because it does not spawn a shell. On Windows, however, .bat and .cmd files are not executable on their own without a terminal, and therefore cannot be launched using child_process.execFile(). When running on Windows, .bat and .cmd files can be invoked using child_process.spawn() with the shell option set, with child_process.exec(), or by spawning cmd.exe and passing the .bat or .cmd file as an argument (which is what the shell option and child_process.exec() do).

// On Windows Only ...
const spawn = require('child_process').spawn;
const bat = spawn('cmd.exe', ['/c', 'my.bat']);

bat.stdout.on('data', (data) => {
  console.log(data);
});

bat.stderr.on('data', (data) => {
  console.log(data);
});

bat.on('exit', (code) => {
  console.log(`Child exited with code ${code}`);
});

// OR...
const exec = require('child_process').exec;
exec('my.bat', (err, stdout, stderr) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
});

child_process.exec(command[, options][, callback])#

  • command <String> The command to run, with space-separated arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • env <Object> Environment key-value pairs
    • encoding <String> (Default: 'utf8')
    • shell <String> Shell to execute the command with (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows. The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
    • timeout <Number> (Default: 0)
    • maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed (Default: 200*1024)
    • killSignal <String> (Default: 'SIGTERM')
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
  • callback <Function> called with the output when process terminates
  • Return: <ChildProcess>

Spawns a shell then executes the command within that shell, buffering any generated output.

const exec = require('child_process').exec;
const child = exec('cat *.js bad_file | wc -l',
  (error, stdout, stderr) => {
    console.log(`stdout: ${stdout}`);
    console.log(`stderr: ${stderr}`);
    if (error !== null) {
      console.log(`exec error: ${error}`);
    }
});

If a callback function is provided, it is called with the arguments (error, stdout, stderr). On success, error will be null. On error, error will be an instance of Error. The error.code property will be the exit code of the child process while error.signal will be set to the signal that terminated the process. Any exit code other than 0 is considered to be an error.

The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer', Buffer objects will be passed to the callback instead.

The options argument may be passed as the second argument to customize how the process is spawned. The default options are:

{
  encoding: 'utf8',
  timeout: 0,
  maxBuffer: 200*1024,
  killSignal: 'SIGTERM',
  cwd: null,
  env: null
}

If timeout is greater than 0, the parent will send the signal identified by the killSignal property (the default is 'SIGTERM') if the child runs longer than timeout milliseconds.
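
For example, the following sketch (assuming a Unix-like system where the sleep command is available) terminates a long-running child after one second:

const exec = require('child_process').exec;

exec('sleep 10', { timeout: 1000, killSignal: 'SIGKILL' }, (error, stdout, stderr) => {
  if (error) {
    // If the timeout was hit, error.signal should reflect the killSignal used
    console.log(`exec error: ${error}`);
  }
});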

Note: Unlike the exec() POSIX system call, child_process.exec() does not replace the existing process and uses a shell to execute the command.

child_process.execFile(file[, args][, options][, callback])#

The child_process.execFile() function is similar to child_process.exec() except that it does not spawn a shell. Rather, the specified executable file is spawned directly as a new process making it slightly more efficient than child_process.exec().

The same options as child_process.exec() are supported. Since a shell is not spawned, behaviors such as I/O redirection and file globbing are not supported.

const execFile = require('child_process').execFile;
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});

The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer', Buffer objects will be passed to the callback instead.

child_process.fork(modulePath[, args][, options])#

  • modulePath <String> The module to run in the child
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • env <Object> Environment key-value pairs
    • execPath <String> Executable used to create the child process
    • execArgv <Array> List of string arguments passed to the executable (Default: process.execArgv)
    • silent <Boolean> If true, stdin, stdout, and stderr of the child will be piped to the parent, otherwise they will be inherited from the parent, see the 'pipe' and 'inherit' options for child_process.spawn()'s stdio for more details (default is false)
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
  • Return: <ChildProcess>

The child_process.fork() method is a special case of child_process.spawn() used specifically to spawn new Node.js processes. Like child_process.spawn(), a ChildProcess object is returned. The returned ChildProcess will have an additional communication channel built-in that allows messages to be passed back and forth between the parent and child. See ChildProcess#send() for details.

It is important to keep in mind that spawned Node.js child processes are independent of the parent with the exception of the IPC communication channel that is established between the two. Each process has its own memory, with its own V8 instance. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.

By default, child_process.fork() will spawn new Node.js instances using the process.execPath of the parent process. The execPath property in the options object allows for an alternative execution path to be used.

Node.js processes launched with a custom execPath will communicate with the parent process using the file descriptor (fd) identified using the environment variable NODE_CHANNEL_FD on the child process. The input and output on this fd is expected to be line delimited JSON objects.

Note: Unlike the fork() POSIX system call, child_process.fork() does not clone the current process.
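
As an illustrative sketch of the IPC channel (assuming a companion module saved as child.js), the parent and child can exchange messages using send() and the 'message' event:

// parent.js
const fork = require('child_process').fork;
const child = fork('./child.js');

child.on('message', (message) => {
  console.log('message from child:', message);
});
child.send({ hello: 'world' });

// child.js
process.on('message', (message) => {
  console.log('message from parent:', message);
  process.send({ received: true });
});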

child_process.spawn(command[, args][, options])#

  • command <String> The command to run
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • env <Object> Environment key-value pairs
    • stdio <Array> | <String> Child's stdio configuration. (See options.stdio)
    • detached <Boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform (see options.detached).
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • shell <Boolean> | <String> If true, runs command inside of a shell. Uses '/bin/sh' on UNIX, and 'cmd.exe' on Windows. A different shell can be specified as a string. The shell should understand the -c switch on UNIX, or /s /c on Windows. Defaults to false (no shell).
  • return: <ChildProcess>

The child_process.spawn() method spawns a new process using the given command, with command line arguments in args. If omitted, args defaults to an empty array.

A third argument may be used to specify additional options, with these defaults:

{
  cwd: undefined,
  env: process.env
}

Use cwd to specify the working directory from which the process is spawned. If not given, the default is to inherit the current working directory.

Use env to specify environment variables that will be visible to the new process, the default is process.env.

Example of running ls -lh /usr, capturing stdout, stderr, and the exit code:

const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.log(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});

Example: A very elaborate way to run 'ps ax | grep ssh'

const spawn = require('child_process').spawn;
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);

ps.stdout.on('data', (data) => {
  grep.stdin.write(data);
});

ps.stderr.on('data', (data) => {
  console.log(`ps stderr: ${data}`);
});

ps.on('close', (code) => {
  if (code !== 0) {
    console.log(`ps process exited with code ${code}`);
  }
  grep.stdin.end();
});

grep.stdout.on('data', (data) => {
  console.log(`${data}`);
});

grep.stderr.on('data', (data) => {
  console.log(`grep stderr: ${data}`);
});

grep.on('close', (code) => {
  if (code !== 0) {
    console.log(`grep process exited with code ${code}`);
  }
});

Example of checking for failed exec:

const spawn = require('child_process').spawn;
const child = spawn('bad_command');

child.on('error', (err) => {
  console.log('Failed to start child process.');
});

options.detached#

On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled for a child process, it cannot be disabled.

On non-Windows platforms, if options.detached is set to true, the child process will be made the leader of a new process group and session. Note that child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.

By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given child, use the child.unref() method. Doing so will cause the parent's event loop to not include the child in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and parent.

When using the detached option to start a long-running process, the process will not stay running in the background after the parent exits unless it is provided with a stdio configuration that is not connected to the parent. If the parent's stdio is inherited, the child will remain attached to the controlling terminal.

Example of a long-running process, by detaching and also ignoring its parent stdio file descriptors, in order to ignore the parent's termination:

const spawn = require('child_process').spawn;

const child = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: ['ignore']
});

child.unref();

Alternatively one can redirect the child process' output into files:

const fs = require('fs');
const spawn = require('child_process').spawn;
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');

const child = spawn('prg', [], {
 detached: true,
 stdio: [ 'ignore', out, err ]
});

child.unref();

options.stdio#

The options.stdio option is used to configure the pipes that are established between the parent and child process. By default, the child's stdin, stdout, and stderr are redirected to corresponding child.stdin, child.stdout, and child.stderr streams on the ChildProcess object. This is equivalent to setting the options.stdio equal to ['pipe', 'pipe', 'pipe'].

For convenience, options.stdio may be one of the following strings:

  • 'pipe' - equivalent to ['pipe', 'pipe', 'pipe'] (the default)
  • 'ignore' - equivalent to ['ignore', 'ignore', 'ignore']
  • 'inherit' - equivalent to [process.stdin, process.stdout, process.stderr] or [0,1,2]

Otherwise, the value of option.stdio is an array where each index corresponds to an fd in the child. The fds 0, 1, and 2 correspond to stdin, stdout, and stderr, respectively. Additional fds can be specified to create additional pipes between the parent and child. The value is one of the following:

  1. 'pipe' - Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as ChildProcess.stdio[fd]. Pipes created for fds 0 - 2 are also available as ChildProcess.stdin, ChildProcess.stdout and ChildProcess.stderr, respectively.
  2. 'ipc' - Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option enables the ChildProcess.send() method. If the child writes JSON messages to this file descriptor, the ChildProcess.on('message') event handler will be triggered in the parent. If the child is a Node.js process, the presence of an IPC channel will enable process.send(), process.disconnect(), process.on('disconnect'), and process.on('message') within the child.
  3. 'ignore' - Instructs Node.js to ignore the fd in the child. While Node.js will always open fds 0 - 2 for the processes it spawns, setting the fd to 'ignore' will cause Node.js to open /dev/null and attach it to the child's fd.
  4. Stream object - Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child process to the fd that corresponds to the index in the stdio array. Note that the stream must have an underlying descriptor (file streams do not until the 'open' event has occurred).
  5. Positive integer - The integer value is interpreted as a file descriptor that is currently open in the parent process. It is shared with the child process, similar to how Stream objects can be shared.
  6. null, undefined - Use default value. For stdio fds 0, 1 and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore'.

Example:

const spawn = require('child_process').spawn;

// Child will use parent's stdios
spawn('prg', [], { stdio: 'inherit' });

// Spawn child sharing only stderr
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });

// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });

It is worth noting that when an IPC channel is established between the parent and child processes, and the child is a Node.js process, the child is launched with the IPC channel unreferenced (using unref()) until the child registers an event handler for the process.on('disconnect') event. This allows the child to exit normally without the process being held open by the open IPC channel.

See also: child_process.exec() and child_process.fork()

Synchronous Process Creation#

The child_process.spawnSync(), child_process.execSync(), and child_process.execFileSync() methods are synchronous and WILL block the Node.js event loop, pausing execution of any additional code until the spawned process exits.

Blocking calls like these are mostly useful for simplifying general purpose scripting tasks and for simplifying the loading/processing of application configuration at startup.

child_process.execFileSync(file[, args][, options])#

  • file <String> The name or path of the executable file to run
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • input <String> | <Buffer> The value which will be passed as stdin to the spawned process
      • supplying this value will override stdio[0]
    • stdio <Array> Child's stdio configuration. (Default: 'pipe')
      • stderr by default will be output to the parent process' stderr unless stdio is specified
    • env <Object> Environment key-value pairs
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • timeout <Number> In milliseconds the maximum amount of time the process is allowed to run. (Default: undefined)
    • killSignal <String> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
    • maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed
    • encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
  • return: <Buffer> | <String> The stdout from the command

The child_process.execFileSync() method is generally identical to child_process.execFile() with the exception that the method will not return until the child process has fully closed. When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. Note that if the child process intercepts and handles the SIGTERM signal and does not exit, the parent process will still wait until the child process has exited.

If the process times out, or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync().
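
For example, the following sketch captures a program's stdout as a UTF-8 string (the executable and arguments here are illustrative):

const execFileSync = require('child_process').execFileSync;

const version = execFileSync('node', ['--version'], { encoding: 'utf8' });
console.log(`node version: ${version.trim()}`);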

child_process.execSync(command[, options])#

  • command <String> The command to run
  • options <Object>
    • cwd <String> Current working directory of the child process
    • input <String> | <Buffer> The value which will be passed as stdin to the spawned process
      • supplying this value will override stdio[0]
    • stdio <Array> Child's stdio configuration. (Default: 'pipe')
      • stderr by default will be output to the parent process' stderr unless stdio is specified
    • env <Object> Environment key-value pairs
    • shell <String> Shell to execute the command with. (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows.) The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • timeout <Number> The maximum amount of time, in milliseconds, the process is allowed to run. (Default: undefined)
    • killSignal <String> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
    • maxBuffer <Number> The largest amount of data (in bytes) allowed on stdout or stderr; if exceeded, the child process is killed.
    • encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
  • return: <Buffer> | <String> The stdout from the command

The child_process.execSync() method is generally identical to child_process.exec() with the exception that the method will not return until the child process has fully closed. When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. Note that if the child process intercepts and handles the SIGTERM signal and doesn't exit, the parent process will wait until the child process has exited.

If the process times out, or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync().
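
For example, a minimal sketch (the shell command is illustrative; without an encoding option the return value would be a Buffer):

const execSync = require('child_process').execSync;

const listing = execSync('ls -lh', { encoding: 'utf8' });
console.log(listing);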

child_process.spawnSync(command[, args][, options])#

  • command <String> The command to run
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • input <String> | <Buffer> The value which will be passed as stdin to the spawned process
      • supplying this value will override stdio[0]
    • stdio <Array> Child's stdio configuration.
    • env <Object> Environment key-value pairs
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • timeout <Number> The maximum amount of time, in milliseconds, the process is allowed to run. (Default: undefined)
    • killSignal <String> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
    • maxBuffer <Number> The largest amount of data (in bytes) allowed on stdout or stderr; if exceeded, the child process is killed.
    • encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
    • shell <Boolean> | <String> If true, runs command inside of a shell. Uses '/bin/sh' on UNIX, and 'cmd.exe' on Windows. A different shell can be specified as a string. The shell should understand the -c switch on UNIX, or /s /c on Windows. Defaults to false (no shell).
  • return: <Object>
    • pid <Number> Pid of the child process
    • output <Array> Array of results from stdio output
    • stdout <Buffer> | <String> The contents of output[1]
    • stderr <Buffer> | <String> The contents of output[2]
    • status <Number> The exit code of the child process
    • signal <String> The signal used to kill the child process
    • error <Error> The error object if the child process failed or timed out

The child_process.spawnSync() method is generally identical to child_process.spawn() with the exception that the function will not return until the child process has fully closed. When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. Note that if the process intercepts and handles the SIGTERM signal and doesn't exit, the parent process will wait until the child process has exited.
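
For example, a minimal sketch inspecting the result object (the command and arguments are illustrative):

const spawnSync = require('child_process').spawnSync;

const result = spawnSync('node', ['--version'], { encoding: 'utf8' });
if (result.error) {
  // Spawning itself failed, e.g. the executable was not found
  console.error(result.error);
} else {
  console.log(`status: ${result.status}`);
  console.log(`stdout: ${result.stdout.trim()}`);
}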

Class: ChildProcess#

Instances of the ChildProcess class are EventEmitters that represent spawned child processes.

Instances of ChildProcess are not intended to be created directly. Rather, use the child_process.spawn(), child_process.exec(), child_process.execFile(), or child_process.fork() methods to create instances of ChildProcess.

Event: 'close'#

  • code <Number> the exit code if the child exited on its own.
  • signal <String> the signal by which the child process was terminated.

The 'close' event is emitted when the stdio streams of a child process have been closed. This is distinct from the 'exit' event, since multiple processes might share the same stdio streams.

Event: 'disconnect'#

The 'disconnect' event is emitted after calling the ChildProcess.disconnect() method in the parent or child process. After disconnecting it is no longer possible to send or receive messages, and the ChildProcess.connected property is false.

Event: 'error'#

The 'error' event is emitted whenever:

  1. The process could not be spawned, or
  2. The process could not be killed, or
  3. Sending a message to the child process failed.

Note that the 'exit' event may or may not fire after an error has occurred. If you are listening to both the 'exit' and 'error' events, it is important to guard against accidentally invoking handler functions multiple times.

See also ChildProcess#kill() and ChildProcess#send().
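
For example, spawning an executable that does not exist emits an 'error' event rather than throwing (the command name below is deliberately bogus):

const spawn = require('child_process').spawn;
const child = spawn('does-not-exist');

child.on('error', (err) => {
  console.error('Failed to start child process:', err);
});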

Event: 'exit'#

  • code <Number> the exit code if the child exited on its own.
  • signal <String> the signal by which the child process was terminated.

The 'exit' event is emitted after the child process ends. If the process exited, code is the final exit code of the process, otherwise null. If the process terminated due to receipt of a signal, signal is the string name of the signal, otherwise null. One of the two will always be non-null.

Note that when the 'exit' event is triggered, child process stdio streams might still be open.

Also, note that Node.js establishes signal handlers for SIGINT and SIGTERM and Node.js processes will not terminate immediately due to receipt of those signals. Rather, Node.js will perform a sequence of cleanup actions and then will re-raise the handled signal.

See waitpid(2).
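
For example, a minimal sketch distinguishing a normal exit from termination by a signal:

const spawn = require('child_process').spawn;
const child = spawn('ls', ['-lh']);

child.on('exit', (code, signal) => {
  if (signal !== null) {
    console.log(`child was terminated by signal ${signal}`);
  } else {
    console.log(`child exited with code ${code}`);
  }
});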

Event: 'message'#

The 'message' event is triggered when a child process uses process.send() to send messages.

child.connected#

  • <Boolean> Set to false after .disconnect() is called

The child.connected property indicates whether it is still possible to send and receive messages from a child process. When child.connected is false, it is no longer possible to send or receive messages.

child.disconnect()#

Closes the IPC channel between parent and child, allowing the child to exit gracefully once there are no other connections keeping it alive. After calling this method the child.connected and process.connected properties in both the parent and child (respectively) will be set to false, and it will be no longer possible to pass messages between the processes.

The 'disconnect' event will be emitted when there are no messages in the process of being received. This will most often be triggered immediately after calling child.disconnect().

Note that when the child process is a Node.js instance (e.g. spawned using child_process.fork()), the process.disconnect() method can be invoked within the child process to close the IPC channel as well.
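
For example, a minimal sketch (worker.js is a placeholder for any Node.js script):

const fork = require('child_process').fork;
const child = fork('worker.js');

console.log(child.connected);
  // Prints: true, while the IPC channel is open

child.on('disconnect', () => {
  console.log(child.connected);
    // Prints: false, once the channel has closed
});

child.disconnect();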

child.kill([signal])#

The child.kill() method sends a signal to the child process. If no argument is given, the process will be sent the 'SIGTERM' signal. See signal(7) for a list of available signals.

const spawn = require('child_process').spawn;
const grep = spawn('grep', ['ssh']);

grep.on('close', (code, signal) => {
  console.log(
    `child process terminated due to receipt of signal ${signal}`);
});

// Send SIGHUP to process
grep.kill('SIGHUP');

The ChildProcess object may emit an 'error' event if the signal cannot be delivered. Sending a signal to a child process that has already exited is not an error but may have unforeseen consequences. Specifically, if the process identifier (PID) has been reassigned to another process, the signal will be delivered to that process instead which can have unexpected results.

Note that while the function is called kill, the signal delivered to the child process may not actually terminate the process.

See kill(2) for reference.

Also note: on Linux, child processes of child processes will not be terminated when attempting to kill their parent. This is likely to happen when running a new process in a shell or with use of the shell option of ChildProcess, such as in this example:

'use strict';
const spawn = require('child_process').spawn;

let child = spawn('sh', ['-c',
  `node -e "setInterval(() => {
      console.log(process.pid + ' is alive')
    }, 500);"`
  ], {
    stdio: ['inherit', 'inherit', 'inherit']
  });

setTimeout(() => {
  child.kill(); // does not terminate the node process in the shell
}, 2000);

child.pid#

Returns the process identifier (PID) of the child process.

Example:

const spawn = require('child_process').spawn;
const grep = spawn('grep', ['ssh']);

console.log(`Spawned child pid: ${grep.pid}`);
grep.stdin.end();

child.send(message[, sendHandle[, options]][, callback])#

When an IPC channel has been established between the parent and child (i.e. when using child_process.fork()), the child.send() method can be used to send messages to the child process. When the child process is a Node.js instance, these messages can be received via the process.on('message') event.

For example, in the parent script:

const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);

n.on('message', (m) => {
  console.log('PARENT got message:', m);
});

n.send({ hello: 'world' });

And then the child script, 'sub.js' might look like this:

process.on('message', (m) => {
  console.log('CHILD got message:', m);
});

process.send({ foo: 'bar' });

Child Node.js processes will have a process.send() method of their own that allows the child to send messages back to the parent.

There is a special case when sending a {cmd: 'NODE_foo'} message. All messages containing a NODE_ prefix in their cmd property are considered to be reserved for use within Node.js core and will not be emitted in the child's process.on('message') event. Rather, such messages are emitted using the process.on('internalMessage') event and are consumed internally by Node.js. Applications should avoid using such messages or listening for 'internalMessage' events as they are subject to change without notice.

The optional sendHandle argument that may be passed to child.send() is for passing a TCP server or socket object to the child process. The child will receive the object as the second argument passed to the callback function registered on the process.on('message') event.

The options argument, if present, is an object used to parameterize the sending of certain types of handles. options supports the following properties:

  • keepOpen - A Boolean value that can be used when passing instances of net.Socket. When true, the socket is kept open in the sending process. Defaults to false.

The optional callback is a function that is invoked after the message is sent but before the child may have received it. The function is called with a single argument: null on success, or an Error object on failure.

If no callback function is provided and the message cannot be sent, an 'error' event will be emitted by the ChildProcess object. This can happen, for instance, when the child process has already exited.

child.send() will return false if the channel has closed or when the backlog of unsent messages exceeds a threshold that makes it unwise to send more. Otherwise, the method returns true. The callback function can be used to implement flow control.
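
For example, a minimal sketch using the optional callback to detect delivery failures (sub.js refers to the child script shown above):

const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);

n.send({ hello: 'world' }, (err) => {
  if (err) {
    // The message could not be sent, e.g. the child has already exited
    console.error('send failed:', err);
  }
});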

Example: sending a server object#

The sendHandle argument can be used, for instance, to pass the handle of a TCP server object to the child process as illustrated in the example below:

const child = require('child_process').fork('child.js');

// Open up the server object and send the handle.
const server = require('net').createServer();
server.on('connection', (socket) => {
  socket.end('handled by parent');
});
server.listen(1337, () => {
  child.send('server', server);
});

The child would then receive the server object as:

process.on('message', (m, server) => {
  if (m === 'server') {
    server.on('connection', (socket) => {
      socket.end('handled by child');
    });
  }
});

Once the server is shared between the parent and child, some connections can be handled by the parent and some by the child.

While the example above uses a server created using the net module, dgram module servers use exactly the same workflow with the exceptions of listening on a 'message' event instead of 'connection' and using server.bind instead of server.listen. This is, however, currently only supported on UNIX platforms.

Example: sending a socket object#

Similarly, the sendHandle argument can be used to pass the handle of a socket to the child process. The example below spawns two children that each handle connections with "normal" or "special" priority:

const normal = require('child_process').fork('child.js', ['normal']);
const special = require('child_process').fork('child.js', ['special']);

// Open up the server and send sockets to child
const server = require('net').createServer();
server.on('connection', (socket) => {

  // If this is special priority
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // This is normal priority
  normal.send('socket', socket);
});
server.listen(1337);

The child.js would receive the socket handle as the second argument passed to the event callback function:

process.on('message', (m, socket) => {
  if (m === 'socket') {
    socket.end(`Request handled with ${process.argv[2]} priority`);
  }
});

Once a socket has been passed to a child, the parent is no longer capable of tracking when the socket is destroyed. To indicate this, the .connections property becomes null. It is recommended not to use .maxConnections when this occurs.

Note: this function uses JSON.stringify() internally to serialize the message.

child.stderr#

A Readable Stream that represents the child process's stderr.

If the child was spawned with stdio[2] set to anything other than 'pipe', then this will be undefined.

child.stderr is an alias for child.stdio[2]. Both properties will refer to the same value.

child.stdin#

A Writable Stream that represents the child process's stdin.

Note that if a child process waits to read all of its input, the child will not continue until this stream has been closed via end().

If the child was spawned with stdio[0] set to anything other than 'pipe', then this will be undefined.

child.stdin is an alias for child.stdio[0]. Both properties will refer to the same value.

child.stdio#

A sparse array of pipes to the child process, corresponding with positions in the stdio option passed to child_process.spawn() that have been set to the value 'pipe'. Note that child.stdio[0], child.stdio[1], and child.stdio[2] are also available as child.stdin, child.stdout, and child.stderr, respectively.

In the following example, only the child's fd 1 (stdout) is configured as a pipe, so only the parent's child.stdio[1] is a stream; all other values in the array are null.

const assert = require('assert');
const fs = require('fs');
const child_process = require('child_process');

const child = child_process.spawn('ls', {
    stdio: [
      0, // Use parents stdin for child
      'pipe', // Pipe child's stdout to parent
      fs.openSync('err.out', 'w') // Direct child's stderr to a file
    ]
});

assert.equal(child.stdio[0], null);
assert.equal(child.stdio[0], child.stdin);

assert(child.stdout);
assert.equal(child.stdio[1], child.stdout);

assert.equal(child.stdio[2], null);
assert.equal(child.stdio[2], child.stderr);

child.stdout#

A Readable Stream that represents the child process's stdout.

If the child was spawned with stdio[1] set to anything other than 'pipe', then this will be undefined.

child.stdout is an alias for child.stdio[1]. Both properties will refer to the same value.

maxBuffer and Unicode#

It is important to keep in mind that the maxBuffer option specifies the largest number of octets allowed on stdout or stderr - if this value is exceeded then the child process is terminated. This particularly impacts output that includes multi-byte character encodings such as UTF-8 or UTF-16. For instance, the following will output 13 UTF-8 encoded octets to stdout although there are only 4 characters:

console.log('中文测试');
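
Buffer.byteLength() can be used to check how many octets a string occupies in a given encoding; the thirteenth octet in the example above is the newline appended by console.log():

console.log(Buffer.byteLength('中文测试', 'utf8'));
  // Prints: 12
console.log(Buffer.byteLength('中文测试\n', 'utf8'));
  // Prints: 13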

Cluster#

Stability: 2 - Stable

A single instance of Node.js runs in a single thread. To take advantage of multi-core systems the user will sometimes want to launch a cluster of Node.js processes to handle the load.

The cluster module allows you to easily create child processes that all share server ports.

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);
}

Running Node.js will now share port 8000 between the workers:

$ NODE_DEBUG=cluster node server.js
23521,Master Worker 23524 online
23521,Master Worker 23526 online
23521,Master Worker 23523 online
23521,Master Worker 23528 online

Please note that, on Windows, it is not yet possible to set up a named pipe server in a worker.

How It Works#

The worker processes are spawned using the child_process.fork() method, so that they can communicate with the parent via IPC and pass server handles back and forth.

The cluster module supports two methods of distributing incoming connections.

The first one (and the default one on all platforms except Windows), is the round-robin approach, where the master process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.

The second approach is where the master process creates the listen socket and sends it to interested workers. The workers then accept incoming connections directly.

The second approach should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight.

Because server.listen() hands off most of the work to the master process, there are three cases where the behavior between a normal Node.js process and a cluster worker differs:

  1. server.listen({fd: 7}) Because the message is passed to the master, file descriptor 7 in the parent will be listened on, and the handle passed to the worker, rather than listening to the worker's idea of what the number 7 file descriptor references.
  2. server.listen(handle) Listening on handles explicitly will cause the worker to use the supplied handle, rather than talk to the master process. If the worker already has the handle, then it's presumed that you know what you are doing.
  3. server.listen(0) Normally, this will cause servers to listen on a random port. However, in a cluster, each worker will receive the same "random" port each time they do listen(0). In essence, the port is random the first time, but predictable thereafter. If you want to listen on a unique port, generate a port number based on the cluster worker ID.

There is no routing logic in Node.js, or in your program, and no shared state between the workers. Therefore, it is important to design your program such that it does not rely too heavily on in-memory data objects for things like sessions and login.

Because workers are all separate processes, they can be killed or re-spawned depending on your program's needs, without affecting other workers. As long as there are some workers still alive, the server will continue to accept connections. If no workers are alive, existing connections will be dropped and new connections will be refused. Node.js does not automatically manage the number of workers for you, however. It is your responsibility to manage the worker pool for your application's needs.

Class: Worker#

A Worker object contains all public information and methods about a worker. In the master it can be obtained using cluster.workers. In a worker it can be obtained using cluster.worker.

Event: 'disconnect'#

Similar to the cluster.on('disconnect') event, but specific to this worker.

cluster.fork().on('disconnect', () => {
  // Worker has disconnected
});

Event: 'error'#

This event is the same as the one provided by child_process.fork().

In a worker you can also use process.on('error').

Event: 'exit'#

  • code <Number> the exit code, if it exited normally.
  • signal <String> the name of the signal (e.g. 'SIGHUP') that caused the process to be killed.

Similar to the cluster.on('exit') event, but specific to this worker.

const worker = cluster.fork();
worker.on('exit', (code, signal) => {
  if( signal ) {
    console.log(`worker was killed by signal: ${signal}`);
  } else if( code !== 0 ) {
    console.log(`worker exited with error code: ${code}`);
  } else {
    console.log('worker success!');
  }
});

Event: 'listening'#

Similar to the cluster.on('listening') event, but specific to this worker.

cluster.fork().on('listening', (address) => {
  // Worker is listening
});

It is not emitted in the worker.

Event: 'message'#

Similar to the cluster.on('message') event, but specific to this worker.

This event is the same as the one provided by child_process.fork().

In a worker you can also use process.on('message').

As an example, here is a cluster that keeps count of the number of requests in the master process using the message system:

const cluster = require('cluster');
const http = require('http');

if (cluster.isMaster) {

  // Keep track of http requests
  var numReqs = 0;
  setInterval(() => {
    console.log('numReqs =', numReqs);
  }, 1000);

  // Count requests
  function messageHandler(msg) {
    if (msg.cmd && msg.cmd == 'notifyRequest') {
      numReqs += 1;
    }
  }

  // Start workers and listen for messages containing notifyRequest
  const numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  Object.keys(cluster.workers).forEach((id) => {
    cluster.workers[id].on('message', messageHandler);
  });

} else {

  // Worker processes have a http server.
  http.Server((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');

    // notify master about the request
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}

Event: 'online'#

Similar to the cluster.on('online') event, but specific to this worker.

cluster.fork().on('online', () => {
  // Worker is online
});

It is not emitted in the worker.

worker.disconnect()#

In a worker, this function will close all servers, wait for the 'close' event on those servers, and then disconnect the IPC channel.

In the master, an internal message is sent to the worker causing it to call .disconnect() on itself.

Causes .suicide to be set.

Note that after a server is closed, it will no longer accept new connections, but connections may be accepted by any other listening worker. Existing connections will be allowed to close as usual. When no more connections exist (see server.close()), the IPC channel to the worker will close, allowing it to die gracefully.

The above applies only to server connections; client connections are not automatically closed by workers, and disconnect does not wait for them to close before exiting.

Note that in a worker, process.disconnect exists, but it is not this function, it is disconnect.

Because long-living server connections may block workers from disconnecting, it may be useful to send a message, so application-specific actions may be taken to close them. It also may be useful to implement a timeout, killing a worker if the 'disconnect' event has not been emitted after some time.

if (cluster.isMaster) {
  var worker = cluster.fork();
  var timeout;

  worker.on('listening', (address) => {
    worker.send('shutdown');
    worker.disconnect();
    timeout = setTimeout(() => {
      worker.kill();
    }, 2000);
  });

  worker.on('disconnect', () => {
    clearTimeout(timeout);
  });

} else if (cluster.isWorker) {
  const net = require('net');
  var server = net.createServer((socket) => {
    // connections never end
  });

  server.listen(8000);

  process.on('message', (msg) => {
    if(msg === 'shutdown') {
      // initiate graceful close of any connections to server
    }
  });
}

worker.id#

Each new worker is given its own unique id; this id is stored in the id property.

While a worker is alive, this is the key that indexes it in cluster.workers.

worker.isConnected()#

This function returns true if the worker is connected to its master via its IPC channel, false otherwise. A worker is connected to its master after it's been created. It is disconnected after the 'disconnect' event is emitted.

worker.isDead()#

This function returns true if the worker's process has terminated (either because of exiting or being signaled). Otherwise, it returns false.
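
For example, a minimal sketch checking both methods from the master:

const cluster = require('cluster');

if (cluster.isMaster) {
  const worker = cluster.fork();

  worker.on('disconnect', () => {
    console.log(worker.isConnected());
      // Prints: false
  });
  worker.on('exit', () => {
    console.log(worker.isDead());
      // Prints: true
  });

  worker.kill();
}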

worker.kill([signal='SIGTERM'])#

  • signal <String> Name of the kill signal to send to the worker process.

This function will kill the worker. In the master, it does this by disconnecting the worker.process, and once disconnected, killing with signal. In the worker, it does it by disconnecting the channel, and then exiting with code 0.

Causes .suicide to be set.

This method is aliased as worker.destroy() for backwards compatibility.

Note that in a worker, process.kill() exists, but it is not this function, it is kill.

worker.process#

All workers are created using child_process.fork(); the returned object from this function is stored as .process. In a worker, the global process is stored.

See: Child Process module

Note that workers will call process.exit(0) if the 'disconnect' event occurs on process and .suicide is not true. This protects against accidental disconnection.

worker.send(message[, sendHandle][, callback])#

Send a message to a worker or master, optionally with a handle.

In the master this sends a message to a specific worker. It is identical to ChildProcess.send().

In a worker this sends a message to the master. It is identical to process.send().

This example will echo back all messages from the master:

if (cluster.isMaster) {
  var worker = cluster.fork();
  worker.send('hi there');

} else if (cluster.isWorker) {
  process.on('message', (msg) => {
    process.send(msg);
  });
}

worker.suicide#

Set by calling .kill() or .disconnect(). Until then, it is undefined.

The boolean worker.suicide lets you distinguish between voluntary and accidental exit; the master may choose not to respawn a worker based on this value.

cluster.on('exit', (worker, code, signal) => {
  if (worker.suicide === true) {
    console.log('Oh, it was just suicide – no need to worry');
  }
});

// kill worker
worker.kill();

Event: 'disconnect'#

Emitted after the worker IPC channel has disconnected. This can occur when a worker exits gracefully, is killed, or is disconnected manually (such as with worker.disconnect()).

There may be a delay between the 'disconnect' and 'exit' events. These events can be used to detect if the process is stuck in a cleanup or if there are long-living connections.

cluster.on('disconnect', (worker) => {
  console.log(`The worker #${worker.id} has disconnected`);
});

Event: 'exit'#

  • worker <cluster.Worker>
  • code <Number> the exit code, if it exited normally.
  • signal <String> the name of the signal (e.g. 'SIGHUP') that caused the process to be killed.

When any of the workers die the cluster module will emit the 'exit' event.

This can be used to restart the worker by calling .fork() again.

cluster.on('exit', (worker, code, signal) => {
  console.log('worker %d died (%s). restarting...',
    worker.process.pid, signal || code);
  cluster.fork();
});

See child_process event: 'exit'.

Event: 'fork'#

When a new worker is forked the cluster module will emit a 'fork' event. This can be used to log worker activity, and create your own timeout.

var timeouts = [];
function errorMsg() {
  console.error('Something must be wrong with the connection ...');
}

cluster.on('fork', (worker) => {
  timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', (worker, address) => {
  clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', (worker, code, signal) => {
  clearTimeout(timeouts[worker.id]);
  errorMsg();
});

Event: 'listening'#

After calling listen() from a worker, when the 'listening' event is emitted on the server, a 'listening' event will also be emitted on cluster in the master.

The event handler is executed with two arguments, the worker contains the worker object and the address object contains the following connection properties: address, port and addressType. This is very useful if the worker is listening on more than one address.

cluster.on('listening', (worker, address) => {
  console.log(
    `A worker is now connected to ${address.address}:${address.port}`);
});

The addressType is one of:

  • 4 (TCPv4)
  • 6 (TCPv6)
  • -1 (unix domain socket)
  • "udp4" or "udp6" (UDP v4 or v6)

Event: 'message'#

Emitted when any worker receives a message.

See child_process event: 'message'.

Before Node.js v6.0, this event emitted only the message and the handle, but not the worker object, contrary to what the documentation stated.

If you need to support older versions and don't need the worker object, you can work around the discrepancy by checking the number of arguments:

cluster.on('message', function(worker, message, handle) {
  if (arguments.length === 2) {
    handle = message;
    message = worker;
    worker = undefined;
  }
  // ...
});

Event: 'online'#

After forking a new worker, the worker should respond with an online message. When the master receives an online message it will emit this event. The difference between 'fork' and 'online' is that fork is emitted when the master forks a worker, and 'online' is emitted when the worker is running.

cluster.on('online', (worker) => {
  console.log('Yay, the worker responded after it was forked');
});

Event: 'setup'#

Emitted every time .setupMaster() is called.

The settings object is the cluster.settings object at the time .setupMaster() was called and is advisory only, since multiple calls to .setupMaster() can be made in a single tick.

If accuracy is important, use cluster.settings.

cluster.disconnect([callback])#

  • callback <Function> called when all workers are disconnected and handles are closed

Calls .disconnect() on each worker in cluster.workers.

When they are disconnected all internal handles will be closed, allowing the master process to die gracefully if no other event is waiting.

The method takes an optional callback argument which will be called when finished.

This can only be called from the master process.
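
For example, a minimal sketch of a graceful shutdown (this assumes workers have already been forked by the master process):

const cluster = require('cluster');

cluster.disconnect(() => {
  console.log('All workers have disconnected and all handles are closed.');
});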

cluster.fork([env])#

Spawn a new worker process.

This can only be called from the master process.
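
For example, a minimal sketch passing an extra environment variable to a worker (the variable name WORKER_ROLE is illustrative):

const cluster = require('cluster');

if (cluster.isMaster) {
  cluster.fork({ WORKER_ROLE: 'cache' });
} else {
  console.log(`This worker's role is ${process.env.WORKER_ROLE}`);
}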

cluster.isMaster#

True if the process is a master. This is determined by process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is undefined, then isMaster is true.

cluster.isWorker#

True if the process is not a master (it is the negation of cluster.isMaster).

cluster.schedulingPolicy#

The scheduling policy, either cluster.SCHED_RR for round-robin or cluster.SCHED_NONE to leave it to the operating system. This is a global setting and effectively frozen once you spawn the first worker or call cluster.setupMaster(), whichever comes first.

SCHED_RR is the default on all operating systems except Windows. Windows will change to SCHED_RR once libuv is able to effectively distribute IOCP handles without incurring a large performance hit.

cluster.schedulingPolicy can also be set through the NODE_CLUSTER_SCHED_POLICY environment variable. Valid values are "rr" and "none".
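
For example, a minimal sketch that leaves scheduling to the operating system; the assignment must happen before the first worker is spawned:

const cluster = require('cluster');

cluster.schedulingPolicy = cluster.SCHED_NONE;

if (cluster.isMaster) {
  cluster.fork();
}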

cluster.settings#

  • <Object>
    • execArgv <Array> list of string arguments passed to the Node.js executable. (Default=process.execArgv)
    • exec <String> file path to worker file. (Default=process.argv[1])
    • args <Array> string arguments passed to worker. (Default=process.argv.slice(2))
    • silent <Boolean> whether or not to send output to parent's stdio. (Default=false)
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)

After calling .setupMaster() (or .fork()) this settings object will contain the settings, including the default values.

It is effectively frozen after being set, because .setupMaster() can only be called once.

This object is not intended to be changed or set manually.

cluster.setupMaster([settings])#

  • settings <Object>
    • exec <String> file path to worker file. (Default=process.argv[1])
    • args <Array> string arguments passed to worker. (Default=process.argv.slice(2))
    • silent <Boolean> whether or not to send output to parent's stdio. (Default=false)

setupMaster is used to change the default 'fork' behavior. Once called, the settings will be present in cluster.settings.

Note that:

  • any settings changes only affect future calls to .fork() and have no effect on workers that are already running
  • The only attribute of a worker that cannot be set via .setupMaster() is the env passed to .fork()
  • the defaults above apply to the first call only; the defaults for later calls are the current values at the time cluster.setupMaster() is called

Example:

const cluster = require('cluster');
cluster.setupMaster({
  exec: 'worker.js',
  args: ['--use', 'https'],
  silent: true
});
cluster.fork(); // https worker
cluster.setupMaster({
  exec: 'worker.js',
  args: ['--use', 'http']
});
cluster.fork(); // http worker

This can only be called from the master process.

cluster.worker#

A reference to the current worker object. Not available in the master process.

const cluster = require('cluster');

if (cluster.isMaster) {
  console.log('I am master');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log(`I am worker #${cluster.worker.id}`);
}

cluster.workers#

A hash that stores the active worker objects, keyed by id field. Makes it easy to loop through all the workers. It is only available in the master process.

A worker is removed from cluster.workers after the worker has disconnected and exited. The order between these two events cannot be determined in advance. However, it is guaranteed that the removal from the cluster.workers list happens before the last 'disconnect' or 'exit' event is emitted.

// Go through all workers
function eachWorker(callback) {
  for (var id in cluster.workers) {
    callback(cluster.workers[id]);
  }
}
eachWorker((worker) => {
  worker.send('big announcement to all workers');
});

Should you wish to reference a worker over a communication channel, using the worker's unique id is the easiest way to find the worker.

socket.on('data', (id) => {
  var worker = cluster.workers[id];
});

Command Line Options#

Node.js comes with a wide variety of CLI options. These options expose built-in debugging, multiple ways to execute scripts, and other helpful runtime options.

To view this documentation as a manual page in your terminal, run man node.

Synopsis#

node [options] [v8 options] [script.js | -e "script"] [arguments]

node debug [script.js | -e "script" | <host>:<port>] …

node --v8-options

Execute without arguments to start the REPL.

For more info about node debug, please see the debugger documentation.

Options#

-v, --version#

Print node's version.

-h, --help#

Print node command line options. The output of this option is less detailed than this document.

-e, --eval "script"#

Evaluate the following argument as JavaScript. The modules which are predefined in the REPL can also be used in script.

-p, --print "script"#

Identical to -e but prints the result.
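
For example (the expression is arbitrary), -e evaluates the script without printing, while -p also prints the result:

$ node -e "4 + 2"
$ node -p "4 + 2"
6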

-c, --check#

Syntax check the script without executing.

-i, --interactive#

Opens the REPL even if stdin does not appear to be a terminal.

-r, --require module#

Preload the specified module at startup.

Follows require()'s module resolution rules. module may be either a path to a file, or a node module name.
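
For example (both file names here are placeholders), ./register.js is loaded and run before server.js:

$ node --require ./register.js server.js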

--no-deprecation#

Silence deprecation warnings.

--trace-deprecation#

Print stack traces for deprecations.

--throw-deprecation#

Throw errors for deprecations.

--no-warnings#

Silence all process warnings (including deprecations).

--trace-warnings#

Print stack traces for process warnings (including deprecations).

--trace-sync-io#

Prints a stack trace whenever synchronous I/O is detected after the first turn of the event loop.

--zero-fill-buffers#

Automatically zero-fills all newly allocated Buffer and SlowBuffer instances.

--track-heap-objects#

Track heap object allocations for heap snapshots.

--prof-process#

Process v8 profiler output generated using the v8 option --prof.

--v8-options#

Print v8 command line options.

--tls-cipher-list=list#

Specify an alternative default TLS cipher list. (Requires Node.js to be built with crypto support, which is the default.)

--enable-fips#

Enable FIPS-compliant crypto at startup. (Requires Node.js to be built with ./configure --openssl-fips)

--force-fips#

Force FIPS-compliant crypto on startup. (Cannot be disabled from script code.) (Same requirements as --enable-fips)

--icu-data-dir=file#

Specify ICU data load path. (overrides NODE_ICU_DATA)

Environment Variables#

NODE_DEBUG=module[,…]#

','-separated list of core modules that should print debug information.

NODE_PATH=path[:…]#

':'-separated list of directories prefixed to the module search path.

Note: on Windows, this is a ';'-separated list instead.

NODE_DISABLE_COLORS=1#

When set to 1 colors will not be used in the REPL.

NODE_ICU_DATA=file#

Data path for ICU (Intl object) data. Will extend linked-in data when compiled with small-icu support.

NODE_REPL_HISTORY=file#

Path to the file used to store the persistent REPL history. The default path is ~/.node_repl_history, which is overridden by this variable. Setting the value to an empty string ("" or " ") disables persistent REPL history.

Console#

Stability: 2 - Stable

The console module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers.

The module exports two specific components:

  • A Console class with methods such as console.log(), console.error() and console.warn() that can be used to write to any Node.js stream.
  • A global console instance configured to write to stdout and stderr. Because this object is global, it can be used without calling require('console').

Example using the global console:

console.log('hello world');
  // Prints: hello world, to stdout
console.log('hello %s', 'world');
  // Prints: hello world, to stdout
console.error(new Error('Whoops, something bad happened'));
  // Prints: [Error: Whoops, something bad happened], to stderr

const name = 'Will Robinson';
console.warn(`Danger ${name}! Danger!`);
  // Prints: Danger Will Robinson! Danger!, to stderr

Example using the Console class:

const out = getStreamSomehow();
const err = getStreamSomehow();
const myConsole = new console.Console(out, err);

myConsole.log('hello world');
  // Prints: hello world, to out
myConsole.log('hello %s', 'world');
  // Prints: hello world, to out
myConsole.error(new Error('Whoops, something bad happened'));
  // Prints: [Error: Whoops, something bad happened], to err

const name = 'Will Robinson';
myConsole.warn(`Danger ${name}! Danger!`);
  // Prints: Danger Will Robinson! Danger!, to err

While the API for the Console class is designed fundamentally around the browser console object, the Console in Node.js is not intended to duplicate the browser's functionality exactly.

Asynchronous vs Synchronous Consoles#

The console functions are asynchronous unless the destination is a file. Disks are fast and operating systems normally employ write-back caching; it should be a very rare occurrence indeed that a write blocks, but it is possible.

Class: Console#

The Console class can be used to create a simple logger with configurable output streams and can be accessed using either require('console').Console or console.Console:

const Console = require('console').Console;
const Console = console.Console;

new Console(stdout[, stderr])#

Creates a new Console by passing one or two writable stream instances. stdout is a writable stream to print log or info output. stderr is used for warning or error output. If stderr isn't passed, warning and error output will be sent to stdout.

const fs = require('fs');

const output = fs.createWriteStream('./stdout.log');
const errorOutput = fs.createWriteStream('./stderr.log');
// custom simple logger
const logger = new Console(output, errorOutput);
// use it like console
var count = 5;
logger.log('count: %d', count);
// in stdout.log: count: 5

The global console is a special Console whose output is sent to process.stdout and process.stderr. It is equivalent to calling:

new Console(process.stdout, process.stderr);

console.assert(value[, message][, ...])#

A simple assertion test that verifies whether value is truthy. If it is not, an AssertionError is thrown. If provided, message is formatted using util.format() and used as the error message.

console.assert(true, 'does nothing');
  // OK
console.assert(false, 'Whoops %s', 'didn\'t work');
  // AssertionError: Whoops didn't work

Note: the console.assert() method is implemented differently in Node.js than the console.assert() method available in browsers.

Specifically, in browsers, calling console.assert() with a falsy assertion will cause the message to be printed to the console without interrupting execution of subsequent code. In Node.js, however, a falsy assertion will cause an AssertionError to be thrown.

Functionality approximating that implemented by browsers can be implemented by extending Node.js' console and overriding the console.assert() method.

In the following example, a simple module is created that extends and overrides the default behavior of console in Node.js.

'use strict';

// Creates a simple extension of console with a
// new impl for assert without monkey-patching.
const myConsole = Object.setPrototypeOf({
  assert(assertion, message, ...args) {
    try {
      console.assert(assertion, message, ...args);
    } catch (err) {
      console.error(err.stack);
    }
  }
}, console);

module.exports = myConsole;

This can then be used as a direct replacement for the built in console:

const console = require('./myConsole');
console.assert(false, 'this message will print, but no error thrown');
console.log('this will also print');

console.dir(obj[, options])#

Uses util.inspect() on obj and prints the resulting string to stdout. This function bypasses any custom inspect() function defined on obj. An optional options object may be passed to alter certain aspects of the formatted string:

  • showHidden - if true then the object's non-enumerable and symbol properties will be shown too. Defaults to false.

  • depth - tells util.inspect() how many times to recurse while formatting the object. This is useful for inspecting large complicated objects. Defaults to 2. To make it recurse indefinitely, pass null.

  • colors - if true, then the output will be styled with ANSI color codes. Defaults to false. Colors are customizable; see customizing util.inspect() colors.
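
For example, a minimal sketch using these options (the object is arbitrary):

const obj = { outer: { middle: { inner: { value: 42 } } } };

// With the default depth of 2 the innermost object would be shown as
// [Object]; passing null recurses indefinitely.
console.dir(obj, { depth: null, colors: true });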

console.error([data][, ...])#

Prints to stderr with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).

const code = 5;
console.error('error #%d', code);
  // Prints: error #5, to stderr
console.error('error', code);
  // Prints: error 5, to stderr

If formatting elements (e.g. %d) are not found in the first string then util.inspect() is called on each argument and the resulting string values are concatenated. See util.format() for more information.

console.info([data][, ...])#

The console.info() function is an alias for console.log().

console.log([data][, ...])#

Prints to stdout with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).

var count = 5;
console.log('count: %d', count);
  // Prints: count: 5, to stdout
console.log('count: ', count);
  // Prints: count: 5, to stdout

If formatting elements (e.g. %d) are not found in the first string then util.inspect() is called on each argument and the resulting string values are concatenated. See util.format() for more information.

console.time(label)#

Starts a timer that can be used to compute the duration of an operation. Timers are identified by a unique label. Use the same label when you call console.timeEnd() to stop the timer and output the elapsed time in milliseconds to stdout. Timer durations are accurate to the sub-millisecond.

console.timeEnd(label)#

Stops a timer that was previously started by calling console.time() and prints the result to stdout:

console.time('100-elements');
for (var i = 0; i < 100; i++) {
  ;
}
console.timeEnd('100-elements');
// prints 100-elements: 225.438ms

console.trace(message[, ...])#

Prints to stderr the string 'Trace: ', followed by the util.format() formatted message and stack trace to the current position in the code.

console.trace('Show me');
  // Prints: (stack trace will vary based on where trace is called)
  //  Trace: Show me
  //    at repl:2:9
  //    at REPLServer.defaultEval (repl.js:248:27)
  //    at bound (domain.js:287:14)
  //    at REPLServer.runBound [as eval] (domain.js:300:12)
  //    at REPLServer.<anonymous> (repl.js:412:12)
  //    at emitOne (events.js:82:20)
  //    at REPLServer.emit (events.js:169:7)
  //    at REPLServer.Interface._onLine (readline.js:210:10)
  //    at REPLServer.Interface._line (readline.js:549:8)
  //    at REPLServer.Interface._ttyWrite (readline.js:826:14)

console.warn([data][, ...])#

The console.warn() function is an alias for console.error().

Crypto#

Stability: 2 - Stable

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign and verify functions.

Use require('crypto') to access this module.

const crypto = require('crypto');

const secret = 'abcdefg';
const hash = crypto.createHmac('sha256', secret)
                   .update('I love cupcakes')
                   .digest('hex');
console.log(hash);
  // Prints:
  //   c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e

Determining if crypto support is unavailable#

It is possible for Node.js to be built without including support for the crypto module. In such cases, calling require('crypto') will result in an error being thrown.

var crypto;
try {
  crypto = require('crypto');
} catch (err) {
  console.log('crypto support is disabled!');
}

Class: Certificate#

SPKAC is a Certificate Signing Request mechanism originally implemented by Netscape and now specified formally as part of HTML5's keygen element.

The crypto module provides the Certificate class for working with SPKAC data. The most common usage is handling output generated by the HTML5 <keygen> element. Node.js uses OpenSSL's SPKAC implementation internally.

new crypto.Certificate()#

Instances of the Certificate class can be created using the new keyword or by calling crypto.Certificate() as a function:

const crypto = require('crypto');

const cert1 = new crypto.Certificate();
const cert2 = crypto.Certificate();

certificate.exportChallenge(spkac)#

The spkac data structure includes a public key and a challenge. The certificate.exportChallenge() returns the challenge component in the form of a Node.js Buffer. The spkac argument can be either a string or a Buffer.

const cert = require('crypto').Certificate();
const spkac = getSpkacSomehow();
const challenge = cert.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
  // Prints the challenge as a UTF8 string

certificate.exportPublicKey(spkac)#

The spkac data structure includes a public key and a challenge. The certificate.exportPublicKey() returns the public key component in the form of a Node.js Buffer. The spkac argument can be either a string or a Buffer.

const cert = require('crypto').Certificate();
const spkac = getSpkacSomehow();
const publicKey = cert.exportPublicKey(spkac);
console.log(publicKey);
  // Prints the public key as <Buffer ...>

certificate.verifySpkac(spkac)#

Returns true if the given spkac data structure is valid, false otherwise. The spkac argument must be a Node.js Buffer.

const cert = require('crypto').Certificate();
const spkac = getSpkacSomehow();
console.log(cert.verifySpkac(new Buffer(spkac)));
  // Prints true or false

Class: Cipher#

Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:

  • As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
  • Using the cipher.update() and cipher.final() methods to produce the encrypted data.

The crypto.createCipher() or crypto.createCipheriv() methods are used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.

Example: Using Cipher objects as streams:

const crypto = require('crypto');
const cipher = crypto.createCipher('aes192', 'a password');

var encrypted = '';
cipher.on('readable', () => {
  var data = cipher.read();
  if (data)
    encrypted += data.toString('hex');
});
cipher.on('end', () => {
  console.log(encrypted);
  // Prints: ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504
});

cipher.write('some clear text data');
cipher.end();

Example: Using Cipher and piped streams:

const crypto = require('crypto');
const fs = require('fs');
const cipher = crypto.createCipher('aes192', 'a password');

const input = fs.createReadStream('test.js');
const output = fs.createWriteStream('test.enc');

input.pipe(cipher).pipe(output);

Example: Using the cipher.update() and cipher.final() methods:

const crypto = require('crypto');
const cipher = crypto.createCipher('aes192', 'a password');

var encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
console.log(encrypted);
  // Prints: ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504

cipher.final([output_encoding])#

Returns any remaining enciphered contents. If the output_encoding parameter is one of 'binary', 'base64' or 'hex', a string is returned. If an output_encoding is not provided, a Buffer is returned.

Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

cipher.setAAD(buffer)#

When using an authenticated encryption mode (only GCM is currently supported), the cipher.setAAD() method sets the value used for the additional authenticated data (AAD) input parameter.

cipher.getAuthTag()#

When using an authenticated encryption mode (only GCM is currently supported), the cipher.getAuthTag() method returns a Buffer containing the authentication tag that has been computed from the given data.

The cipher.getAuthTag() method should only be called after encryption has been completed using the cipher.final() method.
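
For example, a minimal sketch using cipher.setAAD() and cipher.getAuthTag() with aes-256-gcm (the key, IV, and data are illustrative; in practice the key must be stored and exchanged securely):

const crypto = require('crypto');

const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(12);

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
cipher.setAAD(new Buffer('additional authenticated data'));

var encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
const tag = cipher.getAuthTag();
  // tag must be transmitted along with the encrypted data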

cipher.setAutoPadding(auto_padding=true)#

When using block encryption algorithms, the Cipher class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).

When auto_padding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an Error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

The cipher.setAutoPadding() method must be called before cipher.final().

cipher.update(data[, input_encoding][, output_encoding])#

Updates the cipher with data. If the input_encoding argument is given, its value must be one of 'utf8', 'ascii', or 'binary' and the data argument is a string using the specified encoding. If the input_encoding argument is not given, data must be a Buffer. If data is a Buffer then input_encoding is ignored.

The output_encoding specifies the output format of the enciphered data, and can be 'binary', 'base64' or 'hex'. If the output_encoding is specified, a string using the specified encoding is returned. If no output_encoding is provided, a Buffer is returned.

The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

Class: Decipher#

Instances of the Decipher class are used to decrypt data. The class can be used in one of two ways:

  • As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
  • Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

The crypto.createDecipher() or crypto.createDecipheriv() methods are used to create Decipher instances. Decipher objects are not to be created directly using the new keyword.

Example: Using Decipher objects as streams:

const crypto = require('crypto');
const decipher = crypto.createDecipher('aes192', 'a password');

var decrypted = '';
decipher.on('readable', () => {
  var data = decipher.read();
  if (data)
    decrypted += data.toString('utf8');
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

var encrypted = 'ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504';
decipher.write(encrypted, 'hex');
decipher.end();

Example: Using Decipher and piped streams:

const crypto = require('crypto');
const fs = require('fs');
const decipher = crypto.createDecipher('aes192', 'a password');

const input = fs.createReadStream('test.enc');
const output = fs.createWriteStream('test.js');

input.pipe(decipher).pipe(output);

Example: Using the decipher.update() and decipher.final() methods:

const crypto = require('crypto');
const decipher = crypto.createDecipher('aes192', 'a password');

var encrypted = 'ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504';
var decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
  // Prints: some clear text data

decipher.final([output_encoding])#

Returns any remaining deciphered contents. If the output_encoding parameter is one of 'binary', 'base64' or 'hex', a string is returned. If an output_encoding is not provided, a Buffer is returned.

Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

decipher.setAAD(buffer)#

When using an authenticated encryption mode (only GCM is currently supported), the decipher.setAAD() method sets the value used for the additional authenticated data (AAD) input parameter.

decipher.setAuthTag(buffer)#

When using an authenticated encryption mode (only GCM is currently supported), the decipher.setAuthTag() method is used to pass in the received authentication tag. If no tag is provided, or if the cipher text has been tampered with, decipher.final() will throw, indicating that the cipher text should be discarded due to failed authentication.
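
For example, a minimal round-trip sketch using aes-256-gcm (the key and IV are generated randomly for illustration):

const crypto = require('crypto');

const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(12);

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
var encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
const tag = cipher.getAuthTag();

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
var decrypted = decipher.update(encrypted, 'hex', 'utf8');
// decipher.final() throws if the tag does not match the cipher text
decrypted += decipher.final('utf8');
console.log(decrypted);
  // Prints: some clear text data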

decipher.setAutoPadding(auto_padding=true)#

When data has been encrypted without standard block padding, calling decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding.

Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

The decipher.setAutoPadding() method must be called before decipher.update().

decipher.update(data[, input_encoding][, output_encoding])#

Updates the decipher with data. If the input_encoding argument is given, its value must be one of 'binary', 'base64', or 'hex' and the data argument is a string using the specified encoding. If the input_encoding argument is not given, data must be a Buffer. If data is a Buffer then input_encoding is ignored.

The output_encoding specifies the output format of the deciphered data, and can be 'binary', 'ascii' or 'utf8'. If the output_encoding is specified, a string using the specified encoding is returned. If no output_encoding is provided, a Buffer is returned.

The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

Class: DiffieHellman#

The DiffieHellman class is a utility for creating Diffie-Hellman key exchanges.

Instances of the DiffieHellman class can be created using the crypto.createDiffieHellman() function.

const crypto = require('crypto');
const assert = require('assert');

// Generate Alice's keys...
const alice = crypto.createDiffieHellman(2048);
const alice_key = alice.generateKeys();

// Generate Bob's keys...
const bob = crypto.createDiffieHellman(alice.getPrime(), alice.getGenerator());
const bob_key = bob.generateKeys();

// Exchange and generate the secret...
const alice_secret = alice.computeSecret(bob_key);
const bob_secret = bob.computeSecret(alice_key);

// OK
assert.equal(alice_secret.toString('hex'), bob_secret.toString('hex'));

diffieHellman.computeSecret(other_public_key[, input_encoding][, output_encoding])#

Computes the shared secret using other_public_key as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified input_encoding, and the returned secret is encoded using the specified output_encoding. Encodings can be 'binary', 'hex', or 'base64'. If the input_encoding is not provided, other_public_key is expected to be a Buffer.

If output_encoding is given a string is returned; otherwise, a Buffer is returned.

diffieHellman.generateKeys([encoding])#

Generates private and public Diffie-Hellman key values, and returns the public key in the specified encoding. This key should be transferred to the other party. Encoding can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

diffieHellman.getGenerator([encoding])#

Returns the Diffie-Hellman generator in the specified encoding, which can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

diffieHellman.getPrime([encoding])#

Returns the Diffie-Hellman prime in the specified encoding, which can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

diffieHellman.getPrivateKey([encoding])#

Returns the Diffie-Hellman private key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

diffieHellman.getPublicKey([encoding])#

Returns the Diffie-Hellman public key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

diffieHellman.setPrivateKey(private_key[, encoding])#

Sets the Diffie-Hellman private key. If the encoding argument is provided and is either 'binary', 'hex', or 'base64', private_key is expected to be a string. If no encoding is provided, private_key is expected to be a Buffer.

diffieHellman.setPublicKey(public_key[, encoding])#

Sets the Diffie-Hellman public key. If the encoding argument is provided and is either 'binary', 'hex' or 'base64', public_key is expected to be a string. If no encoding is provided, public_key is expected to be a Buffer.

diffieHellman.verifyError#

A bit field containing any warnings and/or errors resulting from a check performed during initialization of the DiffieHellman object.

The following values are valid for this property (as defined in constants module):

  • DH_CHECK_P_NOT_SAFE_PRIME
  • DH_CHECK_P_NOT_PRIME
  • DH_UNABLE_TO_CHECK_GENERATOR
  • DH_NOT_SUITABLE_GENERATOR
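Example (a minimal sketch of testing the bit field; parameters generated by Node.js itself will normally report no problems, so this check is mainly useful for parameters supplied by another party):

const crypto = require('crypto');
const constants = require('constants');

const dh = crypto.createDiffieHellman(512);  // small prime, for illustration only
if (dh.verifyError & constants.DH_NOT_SUITABLE_GENERATOR) {
  console.log('The generator is not suitable for the supplied prime');
}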

Class: ECDH#

The ECDH class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH) key exchanges.

Instances of the ECDH class can be created using the crypto.createECDH() function.

const crypto = require('crypto');
const assert = require('assert');

// Generate Alice's keys...
const alice = crypto.createECDH('secp521r1');
const alice_key = alice.generateKeys();

// Generate Bob's keys...
const bob = crypto.createECDH('secp521r1');
const bob_key = bob.generateKeys();

// Exchange and generate the secret...
const alice_secret = alice.computeSecret(bob_key);
const bob_secret = bob.computeSecret(alice_key);

assert.equal(alice_secret.toString('hex'), bob_secret.toString('hex'));
  // OK

ecdh.computeSecret(other_public_key[, input_encoding][, output_encoding])#

Computes the shared secret using other_public_key as the other party's public key and returns the computed shared secret. The supplied key is interpreted using specified input_encoding, and the returned secret is encoded using the specified output_encoding. Encodings can be 'binary', 'hex', or 'base64'. If the input_encoding is not provided, other_public_key is expected to be a Buffer.

If output_encoding is given a string will be returned; otherwise a Buffer is returned.

ecdh.generateKeys([encoding[, format]])#

Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified format and encoding. This key should be transferred to the other party.

The format argument specifies point encoding and can be 'compressed', 'uncompressed', or 'hybrid'. If format is not specified, the point will be returned in 'uncompressed' format.

The encoding argument can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

ecdh.getPrivateKey([encoding])#

Returns the EC Diffie-Hellman private key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

ecdh.getPublicKey([encoding[, format]])#

Returns the EC Diffie-Hellman public key in the specified encoding and format.

The format argument specifies point encoding and can be 'compressed', 'uncompressed', or 'hybrid'. If format is not specified the point will be returned in 'uncompressed' format.

The encoding argument can be 'binary', 'hex', or 'base64'. If encoding is specified, a string is returned; otherwise a Buffer is returned.
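Example (a minimal sketch comparing the point encodings; the byte lengths noted in the comments apply to the secp256k1 curve used here):

const crypto = require('crypto');
const ecdh = crypto.createECDH('secp256k1');
ecdh.generateKeys();

console.log(ecdh.getPublicKey('hex', 'compressed'));
  // 33-byte point (prefix byte plus the x coordinate), hex encoded
console.log(ecdh.getPublicKey('hex', 'uncompressed'));
  // 65-byte point (prefix byte plus the x and y coordinates), hex encoded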

ecdh.setPrivateKey(private_key[, encoding])#

Sets the EC Diffie-Hellman private key. The encoding can be 'binary', 'hex' or 'base64'. If encoding is provided, private_key is expected to be a string; otherwise private_key is expected to be a Buffer. If private_key is not valid for the curve specified when the ECDH object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in the ECDH object.

ecdh.setPublicKey(public_key[, encoding])#

Stability: 0 - Deprecated

Sets the EC Diffie-Hellman public key. Key encoding can be 'binary', 'hex' or 'base64'. If encoding is provided public_key is expected to be a string; otherwise a Buffer is expected.

Note that there is not normally a reason to call this method because ECDH only requires a private key and the other party's public key to compute the shared secret. Typically either ecdh.generateKeys() or ecdh.setPrivateKey() will be called. The ecdh.setPrivateKey() method attempts to generate the public point/key associated with the private key being set.

Example (obtaining a shared secret):

const crypto = require('crypto');
const alice = crypto.createECDH('secp256k1');
const bob = crypto.createECDH('secp256k1');

// Note: This is a shortcut way to specify one of Alice's previous private
// keys. It would be unwise to use such a predictable private key in a real
// application.
alice.setPrivateKey(
  crypto.createHash('sha256').update('alice', 'utf8').digest()
);

// Bob uses a newly generated cryptographically strong
// pseudorandom key pair
bob.generateKeys();

const alice_secret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bob_secret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

// alice_secret and bob_secret should be the same shared secret value
console.log(alice_secret === bob_secret);

Class: Hash#

The Hash class is a utility for creating hash digests of data. It can be used in one of two ways:

  • As a stream that is both readable and writable, where data is written to produce a computed hash digest on the readable side, or
  • Using the hash.update() and hash.digest() methods to produce the computed hash.

The crypto.createHash() method is used to create Hash instances. Hash objects are not to be created directly using the new keyword.

Example: Using Hash objects as streams:

const crypto = require('crypto');
const hash = crypto.createHash('sha256');

hash.on('readable', () => {
  var data = hash.read();
  if (data)
    console.log(data.toString('hex'));
    // Prints:
    //   6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
});

hash.write('some data to hash');
hash.end();

Example: Using Hash and piped streams:

const crypto = require('crypto');
const fs = require('fs');
const hash = crypto.createHash('sha256');

const input = fs.createReadStream('test.js');
input.pipe(hash).pipe(process.stdout);

Example: Using the hash.update() and hash.digest() methods:

const crypto = require('crypto');
const hash = crypto.createHash('sha256');

hash.update('some data to hash');
console.log(hash.digest('hex'));
  // Prints:
  //   6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50

hash.digest([encoding])#

Calculates the digest of all of the data passed to be hashed (using the hash.update() method). The encoding can be 'hex', 'binary' or 'base64'. If encoding is provided a string will be returned; otherwise a Buffer is returned.

The Hash object can not be used again after the hash.digest() method has been called. Multiple calls will cause an error to be thrown.

hash.update(data[, input_encoding])#

Updates the hash content with the given data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer then input_encoding is ignored.

This can be called many times with new data as it is streamed.

Class: Hmac#

The Hmac Class is a utility for creating cryptographic HMAC digests. It can be used in one of two ways:

  • As a stream that is both readable and writable, where data is written to produce a computed HMAC digest on the readable side, or
  • Using the hmac.update() and hmac.digest() methods to produce the computed HMAC digest.

The crypto.createHmac() method is used to create Hmac instances. Hmac objects are not to be created directly using the new keyword.

Example: Using Hmac objects as streams:

const crypto = require('crypto');
const hmac = crypto.createHmac('sha256', 'a secret');

hmac.on('readable', () => {
  var data = hmac.read();
  if (data)
    console.log(data.toString('hex'));
    // Prints:
    //   7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e
});

hmac.write('some data to hash');
hmac.end();

Example: Using Hmac and piped streams:

const crypto = require('crypto');
const fs = require('fs');
const hmac = crypto.createHmac('sha256', 'a secret');

const input = fs.createReadStream('test.js');
input.pipe(hmac).pipe(process.stdout);

Example: Using the hmac.update() and hmac.digest() methods:

const crypto = require('crypto');
const hmac = crypto.createHmac('sha256', 'a secret');

hmac.update('some data to hash');
console.log(hmac.digest('hex'));
  // Prints:
  //   7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e

hmac.digest([encoding])#

Calculates the HMAC digest of all of the data passed using hmac.update(). The encoding can be 'hex', 'binary' or 'base64'. If encoding is provided a string is returned; otherwise a Buffer is returned.

The Hmac object can not be used again after hmac.digest() has been called. Multiple calls to hmac.digest() will result in an error being thrown.

hmac.update(data[, input_encoding])#

Updates the Hmac content with the given data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer then input_encoding is ignored.

This can be called many times with new data as it is streamed.

Class: Sign#

The Sign Class is a utility for generating signatures. It can be used in one of two ways:

  • As a writable stream, where data to be signed is written and the sign.sign() method is used to generate and return the signature, or
  • Using the sign.update() and sign.sign() methods to produce the signature.

The crypto.createSign() method is used to create Sign instances. Sign objects are not to be created directly using the new keyword.

Example: Using Sign objects as streams:

const crypto = require('crypto');
const sign = crypto.createSign('RSA-SHA256');

sign.write('some data to sign');
sign.end();

const private_key = getPrivateKeySomehow();
console.log(sign.sign(private_key, 'hex'));
  // Prints the calculated signature

Example: Using the sign.update() and sign.sign() methods:

const crypto = require('crypto');
const sign = crypto.createSign('RSA-SHA256');

sign.update('some data to sign');

const private_key = getPrivateKeySomehow();
console.log(sign.sign(private_key, 'hex'));
  // Prints the calculated signature

A Sign instance can also be created by just passing in the digest algorithm name, in which case OpenSSL will infer the full signature algorithm from the type of the PEM-formatted private key, including algorithms that do not have directly exposed name constants, e.g. 'ecdsa-with-SHA256'.

Example: signing using ECDSA with SHA256

const crypto = require('crypto');
const sign = crypto.createSign('sha256');

sign.update('some data to sign');

const private_key = '-----BEGIN EC PRIVATE KEY-----\n' +
        'MHcCAQEEIF+jnWY1D5kbVYDNvxxo/Y+ku2uJPDwS0r/VuPZQrjjVoAoGCCqGSM49\n' +
        'AwEHoUQDQgAEurOxfSxmqIRYzJVagdZfMMSjRNNhB8i3mXyIMq704m2m52FdfKZ2\n' +
        'pQhByd5eyj3lgZ7m7jbchtdgyOF8Io/1ng==\n' +
        '-----END EC PRIVATE KEY-----\n';

console.log(sign.sign(private_key).toString('hex'));

sign.sign(private_key[, output_format])#

Calculates the signature on all the data passed through using either sign.update() or sign.write().

The private_key argument can be an object or a string. If private_key is a string, it is treated as a raw key with no passphrase. If private_key is an object, it is interpreted as a hash containing two properties:

  • key : <String> - PEM encoded private key
  • passphrase : <String> - passphrase for the private key

The output_format can specify one of 'binary', 'hex' or 'base64'. If output_format is provided a string is returned; otherwise a Buffer is returned.

The Sign object can not be used again after the sign.sign() method has been called. Multiple calls to sign.sign() will result in an error being thrown.
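Example (a minimal sketch using the object form; key.pem is a hypothetical file containing a passphrase protected PEM encoded private key):

const crypto = require('crypto');
const fs = require('fs');

const sign = crypto.createSign('RSA-SHA256');
sign.update('some data to sign');

const private_key = {
  key: fs.readFileSync('key.pem', 'utf8'),  // PEM encoded private key
  passphrase: 'top secret'                  // passphrase for the private key
};
console.log(sign.sign(private_key, 'hex'));
  // Prints the calculated signature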

sign.update(data[, input_encoding])#

Updates the Sign content with the given data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer then input_encoding is ignored.

This can be called many times with new data as it is streamed.

Class: Verify#

The Verify class is a utility for verifying signatures. It can be used in one of two ways:

  • As a writable stream where written data is used to validate against the supplied signature, or
  • Using the verify.update() and verify.verify() methods to verify the signature.

The crypto.createVerify() method is used to create Verify instances. Verify objects are not to be created directly using the new keyword.

Example: Using Verify objects as streams:

const crypto = require('crypto');
const verify = crypto.createVerify('RSA-SHA256');

verify.write('some data to sign');
verify.end();

const public_key = getPublicKeySomehow();
const signature = getSignatureToVerify();
console.log(verify.verify(public_key, signature));
  // Prints true or false

Example: Using the verify.update() and verify.verify() methods:

const crypto = require('crypto');
const verify = crypto.createVerify('RSA-SHA256');

verify.update('some data to sign');

const public_key = getPublicKeySomehow();
const signature = getSignatureToVerify();
console.log(verify.verify(public_key, signature));
  // Prints true or false

verifier.update(data[, input_encoding])#

Updates the Verify content with the given data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer then input_encoding is ignored.

This can be called many times with new data as it is streamed.

verifier.verify(object, signature[, signature_format])#

Verifies the provided data using the given object and signature. The object argument is a string containing a PEM encoded object, which can be an RSA public key, a DSA public key, or an X.509 certificate. The signature argument is the previously calculated signature for the data, in the signature_format which can be 'binary', 'hex' or 'base64'. If a signature_format is specified, the signature is expected to be a string; otherwise signature is expected to be a Buffer.

Returns true or false depending on the validity of the signature for the data and public key.

The verifier object can not be used again after verify.verify() has been called. Multiple calls to verify.verify() will result in an error being thrown.

crypto module methods and properties#

crypto.DEFAULT_ENCODING#

The default encoding to use for functions that can take either strings or buffers. The default value is 'buffer', which makes methods default to Buffer objects.

The crypto.DEFAULT_ENCODING mechanism is provided for backwards compatibility with legacy programs that expect 'binary' to be the default encoding.

New applications should expect the default to be 'buffer'. This property may become deprecated in a future Node.js release.

crypto.fips#

Property for checking and controlling whether a FIPS compliant crypto provider is currently in use. Setting to true requires a FIPS build of Node.js.

crypto.createCipher(algorithm, password)#

Creates and returns a Cipher object that uses the given algorithm and password.

The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list-cipher-algorithms will display the available cipher algorithms.

The password is used to derive the cipher key and initialization vector (IV). The value must be either a 'binary' encoded string or a Buffer.

The implementation of crypto.createCipher() derives keys using the OpenSSL function EVP_BytesToKey with the digest algorithm set to MD5, one iteration, and no salt. The lack of salt allows dictionary attacks as the same password always creates the same key. The low iteration count and non-cryptographically secure hash algorithm allow passwords to be tested very rapidly.

In line with OpenSSL's recommendation to use pbkdf2 instead of EVP_BytesToKey it is recommended that developers derive a key and IV on their own using crypto.pbkdf2() and to use crypto.createCipheriv() to create the Cipher object.
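Example (a minimal sketch of that recommendation; the iteration count, the random salt and IV, and the 24-byte key length used for 'aes192' are illustrative assumptions):

const crypto = require('crypto');

const password = 'a password';
const salt = crypto.randomBytes(16);
const key = crypto.pbkdf2Sync(password, salt, 100000, 24, 'sha256');
const iv = crypto.randomBytes(16);

const cipher = crypto.createCipheriv('aes192', key, iv);
var encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
// The salt and iv are not secret and must be stored with the ciphertext
// so that the key can be re-derived for decryption.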

crypto.createCipheriv(algorithm, key, iv)#

Creates and returns a Cipher object, with the given algorithm, key and initialization vector (iv).

The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list-cipher-algorithms will display the available cipher algorithms.

The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'binary' encoded strings or buffers.

crypto.createCredentials(details)#

Stability: 0 - Deprecated: Use tls.createSecureContext() instead.

The crypto.createCredentials() method is a deprecated alias for creating and returning a tls.SecureContext object. The crypto.createCredentials() method should not be used.

The optional details argument is a hash object with keys:

  • pfx : <String> | <Buffer> - PFX or PKCS12 encoded private key, certificate and CA certificates
  • key : <String> - PEM encoded private key
  • passphrase : <String> - passphrase for the private key or PFX
  • cert : <String> - PEM encoded certificate
  • ca : <String> | <Array> - Either a string or array of strings of PEM encoded CA certificates to trust.
  • crl : <String> | <Array> - Either a string or array of strings of PEM encoded CRLs (Certificate Revocation List)
  • ciphers: <String> using the OpenSSL cipher list format describing the cipher algorithms to use or exclude.

If no 'ca' details are given, Node.js will use Mozilla's default publicly trusted list of CAs.

crypto.createDecipher(algorithm, password)#

Creates and returns a Decipher object that uses the given algorithm and password (key).

The implementation of crypto.createDecipher() derives keys using the OpenSSL function EVP_BytesToKey with the digest algorithm set to MD5, one iteration, and no salt. The lack of salt allows dictionary attacks as the same password always creates the same key. The low iteration count and non-cryptographically secure hash algorithm allow passwords to be tested very rapidly.

In line with OpenSSL's recommendation to use pbkdf2 instead of EVP_BytesToKey it is recommended that developers derive a key and IV on their own using crypto.pbkdf2() and to use crypto.createDecipheriv() to create the Decipher object.

crypto.createDecipheriv(algorithm, key, iv)#

Creates and returns a Decipher object that uses the given algorithm, key and initialization vector (iv).

The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list-cipher-algorithms will display the available cipher algorithms.

The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'binary' encoded strings or buffers.

crypto.createDiffieHellman(prime[, prime_encoding][, generator][, generator_encoding])#

Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

The prime_encoding and generator_encoding arguments can be 'binary', 'hex', or 'base64'.

If prime_encoding is specified, prime is expected to be a string; otherwise a Buffer is expected.

If generator_encoding is specified, generator is expected to be a string; otherwise either a number or Buffer is expected.
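Example (a minimal sketch in which one party exports its prime and generator as hex strings so that the other party can construct a matching DiffieHellman object; the 512-bit prime is used only to keep the example fast):

const crypto = require('crypto');

const alice = crypto.createDiffieHellman(512);
alice.generateKeys();

// Bob re-creates the same group from hex encoded parameters.
const bob = crypto.createDiffieHellman(
  alice.getPrime('hex'), 'hex',
  alice.getGenerator('hex'), 'hex'
);
bob.generateKeys();

const bob_secret = bob.computeSecret(alice.getPublicKey(), null, 'hex');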

crypto.createDiffieHellman(prime_length[, generator])#

Creates a DiffieHellman key exchange object and generates a prime of prime_length bits using an optional specific numeric generator. If generator is not specified, the value 2 is used.

crypto.createECDH(curve_name)#

Creates an Elliptic Curve Diffie-Hellman (ECDH) key exchange object using a predefined curve specified by the curve_name string. Use crypto.getCurves() to obtain a list of available curve names. On recent OpenSSL releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve.

crypto.createHash(algorithm)#

Creates and returns a Hash object that can be used to generate hash digests using the given algorithm.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list-message-digest-algorithms will display the available digest algorithms.

Example: generating the sha256 sum of a file

const filename = process.argv[2];
const crypto = require('crypto');
const fs = require('fs');

const hash = crypto.createHash('sha256');

const input = fs.createReadStream(filename);
input.on('readable', () => {
  var data = input.read();
  if (data)
    hash.update(data);
  else {
    console.log(`${hash.digest('hex')} ${filename}`);
  }
});

crypto.createHmac(algorithm, key)#

Creates and returns an Hmac object that uses the given algorithm and key.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list-message-digest-algorithms will display the available digest algorithms.

The key is the HMAC key used to generate the cryptographic HMAC hash.

Example: generating the sha256 HMAC of a file

const filename = process.argv[2];
const crypto = require('crypto');
const fs = require('fs');

const hmac = crypto.createHmac('sha256', 'a secret');

const input = fs.createReadStream(filename);
input.on('readable', () => {
  var data = input.read();
  if (data)
    hmac.update(data);
  else {
    console.log(`${hmac.digest('hex')} ${filename}`);
  }
});

crypto.createSign(algorithm)#

Creates and returns a Sign object that uses the given algorithm. On recent OpenSSL releases, openssl list-public-key-algorithms will display the available signing algorithms. One example is 'RSA-SHA256'.

crypto.createVerify(algorithm)#

Creates and returns a Verify object that uses the given algorithm. On recent OpenSSL releases, openssl list-public-key-algorithms will display the available signing algorithms. One example is 'RSA-SHA256'.

crypto.getCiphers()#

Returns an array with the names of the supported cipher algorithms.

Example:

const ciphers = crypto.getCiphers();
console.log(ciphers); // ['aes-128-cbc', 'aes-128-ccm', ...]

crypto.getCurves()#

Returns an array with the names of the supported elliptic curves.

Example:

const curves = crypto.getCurves();
console.log(curves); // ['secp256k1', 'secp384r1', ...]

crypto.getDiffieHellman(group_name)#

Creates a predefined DiffieHellman key exchange object. The supported groups are: 'modp1', 'modp2', 'modp5' (defined in RFC 2412, but see Caveats) and 'modp14', 'modp15', 'modp16', 'modp17', 'modp18' (defined in RFC 3526). The returned object mimics the interface of objects created by crypto.createDiffieHellman(), but will not allow changing the keys (with diffieHellman.setPublicKey() for example). The advantage of using this method is that the parties do not have to generate nor exchange a group modulus beforehand, saving both processor and communication time.

Example (obtaining a shared secret):

const crypto = require('crypto');
const alice = crypto.getDiffieHellman('modp14');
const bob = crypto.getDiffieHellman('modp14');

alice.generateKeys();
bob.generateKeys();

const alice_secret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bob_secret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

/* alice_secret and bob_secret should be the same */
console.log(alice_secret == bob_secret);

crypto.getHashes()#

Returns an array with the names of the supported hash algorithms.

Example:

const hashes = crypto.getHashes();
console.log(hashes); // ['sha', 'sha1', 'sha1WithRSAEncryption', ...]

crypto.pbkdf2(password, salt, iterations, keylen, digest, callback)#

Provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

The supplied callback function is called with two arguments: err and derivedKey. If an error occurs, err will be set; otherwise err will be null. The successfully generated derivedKey will be passed as a Buffer.

The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the longer the derivation will take to complete.

The salt should also be as unique as possible. It is recommended that the salts are random and their lengths are greater than 16 bytes. See NIST SP 800-132 for details.

Example:

const crypto = require('crypto');
crypto.pbkdf2('secret', 'salt', 100000, 512, 'sha512', (err, key) => {
  if (err) throw err;
  console.log(key.toString('hex'));  // 'c5e478d...1469e50'
});

An array of supported digest functions can be retrieved using crypto.getHashes().

crypto.pbkdf2Sync(password, salt, iterations, keylen, digest)#

Provides a synchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

If an error occurs an Error will be thrown, otherwise the derived key will be returned as a Buffer.

The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the longer the derivation will take to complete.

The salt should also be as unique as possible. It is recommended that the salts are random and their lengths are greater than 16 bytes. See NIST SP 800-132 for details.

Example:

const crypto = require('crypto');
const key = crypto.pbkdf2Sync('secret', 'salt', 100000, 512, 'sha512');
console.log(key.toString('hex'));  // 'c5e478d...1469e50'

An array of supported digest functions can be retrieved using crypto.getHashes().

crypto.privateDecrypt(private_key, buffer)#

Decrypts buffer with private_key.

private_key can be an object or a string. If private_key is a string, it is treated as the key with no passphrase and will use RSA_PKCS1_OAEP_PADDING. If private_key is an object, it is interpreted as a hash object with the keys:

  • key : <String> - PEM encoded private key
  • passphrase : <String> - Optional passphrase for the private key
  • padding : An optional padding value, one of the following:
    • constants.RSA_NO_PADDING
    • constants.RSA_PKCS1_PADDING
    • constants.RSA_PKCS1_OAEP_PADDING

All paddings are defined in the constants module.

crypto.privateEncrypt(private_key, buffer)#

Encrypts buffer with private_key.

private_key can be an object or a string. If private_key is a string, it is treated as the key with no passphrase and will use RSA_PKCS1_PADDING. If private_key is an object, it is interpreted as a hash object with the keys:

  • key : <String> - PEM encoded private key
  • passphrase : <String> - Optional passphrase for the private key
  • padding : An optional padding value, one of the following:
    • constants.RSA_NO_PADDING
    • constants.RSA_PKCS1_PADDING
    • constants.RSA_PKCS1_OAEP_PADDING

All paddings are defined in the constants module.

crypto.publicDecrypt(public_key, buffer)#

Decrypts buffer with public_key.

public_key can be an object or a string. If public_key is a string, it is treated as the key with no passphrase and will use RSA_PKCS1_PADDING. If public_key is an object, it is interpreted as a hash object with the keys:

  • key : <String> - PEM encoded public key
  • passphrase : <String> - Optional passphrase for the private key
  • padding : An optional padding value, one of the following:
    • constants.RSA_NO_PADDING
    • constants.RSA_PKCS1_PADDING
    • constants.RSA_PKCS1_OAEP_PADDING

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.

All paddings are defined in the constants module.

crypto.publicEncrypt(public_key, buffer)#

Encrypts buffer with public_key.

public_key can be an object or a string. If public_key is a string, it is treated as the key with no passphrase and will use RSA_PKCS1_OAEP_PADDING. If public_key is an object, it is interpreted as a hash object with the keys:

  • key : <String> - PEM encoded public key
  • passphrase : <String> - Optional passphrase for the private key
  • padding : An optional padding value, one of the following:
    • constants.RSA_NO_PADDING
    • constants.RSA_PKCS1_PADDING
    • constants.RSA_PKCS1_OAEP_PADDING

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.

All paddings are defined in the constants module.
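Example (a minimal sketch of an encrypt and decrypt round trip; rsa_pub.pem and rsa_priv.pem are hypothetical files containing a matching PEM encoded key pair):

const crypto = require('crypto');
const fs = require('fs');

const public_key = fs.readFileSync('rsa_pub.pem', 'utf8');
const private_key = fs.readFileSync('rsa_priv.pem', 'utf8');

// Both calls default to RSA_PKCS1_OAEP_PADDING.
const encrypted = crypto.publicEncrypt(public_key, new Buffer('top secret'));
const decrypted = crypto.privateDecrypt(private_key, encrypted);
console.log(decrypted.toString('utf8'));
  // Prints: top secret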

crypto.randomBytes(size[, callback])#

Generates cryptographically strong pseudo-random data. The size argument is a number indicating the number of bytes to generate.

If a callback function is provided, the bytes are generated asynchronously and the callback function is invoked with two arguments: err and buf. If an error occurs, err will be an Error object; otherwise it is null. The buf argument is a Buffer containing the generated bytes.

// Asynchronous
const crypto = require('crypto');
crypto.randomBytes(256, (err, buf) => {
  if (err) throw err;
  console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});

If the callback function is not provided, the random bytes are generated synchronously and returned as a Buffer. An error will be thrown if there is a problem generating the bytes.

// Synchronous
const buf = crypto.randomBytes(256);
console.log(
  `${buf.length} bytes of random data: ${buf.toString('hex')}`);

The crypto.randomBytes() method will block until there is sufficient entropy. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.

crypto.setEngine(engine[, flags])#

Load and set the engine for some or all OpenSSL functions (selected by flags).

engine could be either an id or a path to the engine's shared library.

The optional flags argument uses ENGINE_METHOD_ALL by default. The flags argument is a bit field taking one of, or a mix of, the following flags (defined in the constants module):

  • ENGINE_METHOD_RSA
  • ENGINE_METHOD_DSA
  • ENGINE_METHOD_DH
  • ENGINE_METHOD_RAND
  • ENGINE_METHOD_ECDH
  • ENGINE_METHOD_ECDSA
  • ENGINE_METHOD_CIPHERS
  • ENGINE_METHOD_DIGESTS
  • ENGINE_METHOD_STORE
  • ENGINE_METHOD_PKEY_METHS
  • ENGINE_METHOD_PKEY_ASN1_METHS
  • ENGINE_METHOD_ALL
  • ENGINE_METHOD_NONE

Notes#

Legacy Streams API (pre Node.js v0.10)#

The Crypto module was added to Node.js before there was the concept of a unified Stream API, and before there were Buffer objects for handling binary data. As such, many of the classes defined by the crypto module have methods not typically found on other Node.js classes that implement the streams API (e.g. update(), final(), or digest()). Also, many methods accepted and returned 'binary' encoded strings by default rather than Buffers. This default was changed after Node.js v0.8 to use Buffer objects by default instead.

Recent ECDH Changes#

Usage of ECDH with non-dynamically generated key pairs has been simplified. Now, ecdh.setPrivateKey() can be called with a preselected private key and the associated public point (key) will be computed and stored in the object. This allows code to only store and provide the private part of the EC key pair. ecdh.setPrivateKey() now also validates that the private key is valid for the selected curve.

The ecdh.setPublicKey() method is now deprecated as its inclusion in the API is not useful. Either a previously stored private key should be set, which automatically generates the associated public key, or ecdh.generateKeys() should be called. The main drawback of using ecdh.setPublicKey() is that it can be used to put the ECDH key pair into an inconsistent state.

Support for weak or compromised algorithms#

The crypto module still supports some algorithms which are already compromised and are not currently recommended for use. The API also allows the use of ciphers and hashes with a small key size that are considered to be too weak for safe use.

Users should take full responsibility for selecting the crypto algorithm and key size according to their security requirements.

Based on the recommendations of NIST SP 800-131A:

  • MD5 and SHA-1 are no longer acceptable where collision resistance is required such as digital signatures.
  • The key used with RSA, DSA and DH algorithms is recommended to have at least 2048 bits and that of the curve of ECDSA and ECDH at least 224 bits, to be safe to use for several years.
  • The DH groups of modp1, modp2 and modp5 have a key size smaller than 2048 bits and are not recommended.

See the reference for other recommendations and details.

Debugger#

Stability: 2 - Stable

Node.js includes a full-featured out-of-process debugging utility accessible via a simple TCP-based protocol and built-in debugging client. To use it, start Node.js with the debug argument followed by the path to the script to debug; a prompt will be displayed indicating successful launch of the debugger:

$ node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
  1 x = 5;
  2 setTimeout(() => {
  3   debugger;
debug>

Node.js's debugger client does not yet support the full range of commands, but simple step and inspection are possible.

Inserting the statement debugger; into the source code of a script will enable a breakpoint at that position in the code.

For example, suppose myscript.js is written as:

// myscript.js
x = 5;
setTimeout(() => {
  debugger;
  console.log('world');
}, 1000);
console.log('hello');

Once the debugger is run, a breakpoint will occur at line 4:

$ node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
  1 x = 5;
  2 setTimeout(() => {
  3   debugger;
debug> cont
< hello
break in /home/indutny/Code/git/indutny/myscript.js:3
  1 x = 5;
  2 setTimeout(() => {
  3   debugger;
  4   console.log('world');
  5 }, 1000);
debug> next
break in /home/indutny/Code/git/indutny/myscript.js:4
  2 setTimeout(() => {
  3   debugger;
  4   console.log('world');
  5 }, 1000);
  6 console.log('hello');
debug> repl
Press Ctrl + C to leave debug repl
> x
5
> 2+2
4
debug> next
< world
break in /home/indutny/Code/git/indutny/myscript.js:5
  3   debugger;
  4   console.log('world');
  5 }, 1000);
  6 console.log('hello');
  7
debug> quit

The repl command allows code to be evaluated remotely. The next command steps over to the next line. Type help to see what other commands are available.

Pressing enter without typing a command will repeat the previous debugger command.

Watchers#

It is possible to watch expression and variable values while debugging. On every breakpoint, each expression from the watchers list will be evaluated in the current context and displayed immediately before the breakpoint's source code listing.

To begin watching an expression, type watch('my_expression'). The command watchers will print the active watchers. To remove a watcher, type unwatch('my_expression').

Commands reference#

Stepping#

  • cont, c - Continue execution
  • next, n - Step next
  • step, s - Step in
  • out, o - Step out
  • pause - Pause running code (like pause button in Developer Tools)

Breakpoints#

  • setBreakpoint(), sb() - Set breakpoint on current line
  • setBreakpoint(line), sb(line) - Set breakpoint on specific line
  • setBreakpoint('fn()'), sb(...) - Set breakpoint on the first statement in the function's body
  • setBreakpoint('script.js', 1), sb(...) - Set breakpoint on first line of script.js
  • clearBreakpoint('script.js', 1), cb(...) - Clear breakpoint in script.js on line 1

It is also possible to set a breakpoint in a file (module) that isn't loaded yet:

$ ./node debug test/fixtures/break-in-module/main.js
< debugger listening on port 5858
connecting to port 5858... ok
break in test/fixtures/break-in-module/main.js:1
  1 var mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> setBreakpoint('mod.js', 23)
Warning: script 'mod.js' was not loaded yet.
  1 var mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> c
break in test/fixtures/break-in-module/mod.js:23
 21
 22 exports.hello = () => {
 23   return 'hello from module';
 24 };
 25
debug>

Info#

  • backtrace, bt - Print backtrace of current execution frame
  • list(5) - List scripts source code with 5 line context (5 lines before and after)
  • watch(expr) - Add expression to watch list
  • unwatch(expr) - Remove expression from watch list
  • watchers - List all watchers and their values (automatically listed on each breakpoint)
  • repl - Open debugger's repl for evaluation in debugging script's context
  • exec expr - Execute an expression in debugging script's context

Execution control#

  • run - Run script (automatically runs on debugger's start)
  • restart - Restart script
  • kill - Kill script

Various#

  • scripts - List all loaded scripts
  • version - Display V8's version

Advanced Usage#

An alternative way of enabling and accessing the debugger is to start Node.js with the --debug command-line flag or by signaling an existing Node.js process with SIGUSR1.

Once a process has been set in debug mode this way, it can be connected to using the Node.js debugger by either connecting to the pid of the running process or via URI reference to the listening debugger:

  • node debug -p <pid> - Connects to the process via the pid
  • node debug <URI> - Connects to the process via the URI such as localhost:5858

UDP / Datagram Sockets#

Stability: 2 - Stable

The dgram module provides an implementation of UDP Datagram sockets.

const dgram = require('dgram');
const server = dgram.createSocket('udp4');

server.on('error', (err) => {
  console.log(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});

server.on('listening', () => {
  var address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(41234);
// server listening 0.0.0.0:41234

Class: dgram.Socket#

The dgram.Socket object is an EventEmitter that encapsulates the datagram functionality.

New instances of dgram.Socket are created using dgram.createSocket(). The new keyword is not to be used to create dgram.Socket instances.

Event: 'close'#

The 'close' event is emitted after a socket is closed with close(). Once triggered, no new 'message' events will be emitted on this socket.

Event: 'error'#

The 'error' event is emitted whenever any error occurs. The event handler function is passed a single Error object.

Event: 'listening'#

The 'listening' event is emitted whenever a socket begins listening for datagram messages. This occurs as soon as UDP sockets are created.

Event: 'message'#

The 'message' event is emitted when a new datagram is available on a socket. The event handler function is passed two arguments: msg and rinfo. The msg argument is a Buffer and rinfo is an object with the sender's address information provided by the address, family and port properties:

socket.on('message', (msg, rinfo) => {
  console.log('Received %d bytes from %s:%d\n',
              msg.length, rinfo.address, rinfo.port);
});

socket.addMembership(multicastAddress[, multicastInterface])#

Tells the kernel to join a multicast group at the given multicastAddress using the IP_ADD_MEMBERSHIP socket option. If the multicastInterface argument is not specified, the operating system will try to add membership to all valid networking interfaces.

socket.address()#

Returns an object containing the address information for a socket. For UDP sockets, this object will contain address, family and port properties.

socket.bind([port][, address][, callback])#

  • port <Number> - Integer, Optional
  • address <String>, Optional
  • callback <Function> with no parameters, Optional. Called when binding is complete.

For UDP sockets, causes the dgram.Socket to listen for datagram messages on a named port and optional address. If port is not specified, the operating system will attempt to bind to a random port. If address is not specified, the operating system will attempt to listen on all addresses. Once binding is complete, a 'listening' event is emitted and the optional callback function is called.

Note that specifying both a 'listening' event listener and passing a callback to the socket.bind() method is not harmful but not very useful.

A bound datagram socket keeps the Node.js process running to receive datagram messages.

If binding fails, an 'error' event is generated. In rare cases (e.g. attempting to bind with a closed socket), an Error may be thrown.

Example of a UDP server listening on port 41234:

const dgram = require('dgram');
const server = dgram.createSocket('udp4');

server.on('error', (err) => {
  console.log(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});

server.on('listening', () => {
  var address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(41234);
// server listening 0.0.0.0:41234

socket.bind(options[, callback])#

For UDP sockets, causes the dgram.Socket to listen for datagram messages on a named port and optional address that are passed as properties of an options object passed as the first argument. If port is not specified, the operating system will attempt to bind to a random port. If address is not specified, the operating system will attempt to listen on all addresses. Once binding is complete, a 'listening' event is emitted and the optional callback function is called.

The options object may contain an additional exclusive property that is used when using dgram.Socket objects with the cluster module. When exclusive is set to false (the default), cluster workers will use the same underlying socket handle allowing connection handling duties to be shared. When exclusive is true, however, the handle is not shared and attempted port sharing results in an error.

An example socket listening on an exclusive port is shown below.

socket.bind({
  address: 'localhost',
  port: 8000,
  exclusive: true
});

socket.close([callback])#

Close the underlying socket and stop listening for data on it. If a callback is provided, it is added as a listener for the 'close' event.

socket.dropMembership(multicastAddress[, multicastInterface])#

Instructs the kernel to leave a multicast group at multicastAddress using the IP_DROP_MEMBERSHIP socket option. This method is automatically called by the kernel when the socket is closed or the process terminates, so most apps will never have reason to call this.

If multicastInterface is not specified, the operating system will attempt to drop membership on all valid interfaces.

socket.send(msg, [offset, length,] port, address[, callback])#

  • msg <Buffer> | <String> | <Array> Message to be sent
  • offset <Number> Integer. Optional. Offset in the buffer where the message starts.
  • length <Number> Integer. Optional. Number of bytes in the message.
  • port <Number> Integer. Destination port.
  • address <String> Destination hostname or IP address.
  • callback <Function> Called when the message has been sent. Optional.

Broadcasts a datagram on the socket. The destination port and address must be specified.

The msg argument contains the message to be sent. Depending on its type, different behavior can apply. If msg is a Buffer, the offset and length specify the offset within the Buffer where the message begins and the number of bytes in the message, respectively. If msg is a String, then it is automatically converted to a Buffer with 'utf8' encoding. With messages that contain multi-byte characters, offset and length will be calculated with respect to byte length and not the character position. If msg is an array, offset and length must not be specified.

The address argument is a string. If the value of address is a host name, DNS will be used to resolve the address of the host. If the address is not specified or is an empty string, '127.0.0.1' or '::1' will be used instead.

If the socket has not been previously bound with a call to bind, the socket is assigned a random port number and is bound to the "all interfaces" address ('0.0.0.0' for udp4 sockets, '::0' for udp6 sockets.)

An optional callback function may be specified as a way of reporting DNS errors or for determining when it is safe to reuse the msg object. Note that DNS lookups delay the time to send for at least one tick of the Node.js event loop.

The only way to know for sure that the datagram has been sent is by using a callback. If an error occurs and a callback is given, the error will be passed as the first argument to the callback. If a callback is not given, the error is emitted as an 'error' event on the socket object.

Offset and length are optional, but both must be set if either is used. They are supported only when the first argument is a Buffer.

Example of sending a UDP packet to a random port on localhost:

const dgram = require('dgram');
const message = new Buffer('Some bytes');
const client = dgram.createSocket('udp4');
client.send(message, 41234, 'localhost', (err) => {
  client.close();
});

Example of sending a UDP packet composed of multiple buffers to a random port on localhost:

const dgram = require('dgram');
const buf1 = new Buffer('Some ');
const buf2 = new Buffer('bytes');
const client = dgram.createSocket('udp4');
client.send([buf1, buf2], 41234, 'localhost', (err) => {
  client.close();
});

Sending multiple buffers might be faster or slower depending on your application and operating system: benchmark it. Usually it is faster.

A Note about UDP datagram size

The maximum size of an IPv4/v6 datagram depends on the MTU (Maximum Transmission Unit) and on the Payload Length field size.

  • The Payload Length field is 16 bits wide, which means that a normal payload cannot exceed 64K octets including the internet header and data (65,507 bytes = 65,535 − 8 bytes UDP header − 20 bytes IP header); this is generally true for loopback interfaces, but such long datagram messages are impractical for most hosts and networks.

  • The MTU is the largest size a given link layer technology can support for datagram messages. For any link, IPv4 mandates a minimum MTU of 68 octets, while the recommended MTU for IPv4 is 576 (typically recommended as the MTU for dial-up type applications), whether they arrive whole or in fragments.

    For IPv6, the minimum MTU is 1280 octets, however, the mandatory minimum fragment reassembly buffer size is 1500 octets. The value of 68 octets is very small, since most current link layer technologies, like Ethernet, have a minimum MTU of 1500.

It is impossible to know in advance the MTU of each link through which a packet might travel. Sending a datagram greater than the receiver MTU will not work because the packet will get silently dropped without informing the source that the data did not reach its intended recipient.

socket.setBroadcast(flag)#

Sets or clears the SO_BROADCAST socket option. When set to true, UDP packets may be sent to a local interface's broadcast address.

socket.setMulticastLoopback(flag)#

Sets or clears the IP_MULTICAST_LOOP socket option. When set to true, multicast packets will also be received on the local interface.

socket.setMulticastTTL(ttl)#

Sets the IP_MULTICAST_TTL socket option. While TTL generally stands for "Time to Live", in this context it specifies the number of IP hops that a packet is allowed to travel through, specifically for multicast traffic. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded.

The argument passed to socket.setMulticastTTL() is a number of hops between 0 and 255. The default on most systems is 1 but can vary.

socket.setTTL(ttl)#

Sets the IP_TTL socket option. While TTL generally stands for "Time to Live", in this context it specifies the number of IP hops that a packet is allowed to travel through. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded. Changing TTL values is typically done for network probes or when multicasting.

The argument to socket.setTTL() is a number of hops between 1 and 255. The default on most systems is 64 but can vary.

socket.ref()#

By default, binding a socket will cause it to block the Node.js process from exiting as long as the socket is open. The socket.unref() method can be used to exclude the socket from the reference counting that keeps the Node.js process active. The socket.ref() method adds the socket back to the reference counting and restores the default behavior.

Calling socket.ref() multiple times will have no additional effect.

The socket.ref() method returns a reference to the socket so calls can be chained.

socket.unref()#

By default, binding a socket will cause it to block the Node.js process from exiting as long as the socket is open. The socket.unref() method can be used to exclude the socket from the reference counting that keeps the Node.js process active, allowing the process to exit even if the socket is still listening.

Calling socket.unref() multiple times will have no additional effect.

The socket.unref() method returns a reference to the socket so calls can be chained.

Change to asynchronous socket.bind() behavior#

As of Node.js v0.10, dgram.Socket#bind() changed to an asynchronous execution model. Legacy code that assumes synchronous behavior, as in the following example:

const s = dgram.createSocket('udp4');
s.bind(1234);
s.addMembership('224.0.0.114');

Must be changed to pass a callback function to the dgram.Socket#bind() function:

const s = dgram.createSocket('udp4');
s.bind(1234, () => {
  s.addMembership('224.0.0.114');
});

dgram module functions#

dgram.createSocket(options[, callback])#

Creates a dgram.Socket object. The options argument is an object that should contain a type field of either udp4 or udp6 and an optional boolean reuseAddr field.

When reuseAddr is true socket.bind() will reuse the address, even if another process has already bound a socket on it. reuseAddr defaults to false. An optional callback function can be passed which is added as a listener for 'message' events.

Once the socket is created, calling socket.bind() will instruct the socket to begin listening for datagram messages. When address and port are not passed to socket.bind() the method will bind the socket to the "all interfaces" address on a random port (it does the right thing for both udp4 and udp6 sockets). The bound address and port can be retrieved using socket.address().address and socket.address().port.
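Example (a minimal sketch of the options form; the reuseAddr setting and the port number are illustrative):

const dgram = require('dgram');
const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

socket.bind(41234, () => {
  console.log('listening on port', socket.address().port);
});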

dgram.createSocket(type[, callback])#

Creates a dgram.Socket object of the specified type. The type argument can be either udp4 or udp6. An optional callback function can be passed which is added as a listener for 'message' events.

Once the socket is created, calling socket.bind() will instruct the socket to begin listening for datagram messages. When address and port are not passed to socket.bind() the method will bind the socket to the "all interfaces" address on a random port (it does the right thing for both udp4 and udp6 sockets). The bound address and port can be retrieved using socket.address().address and socket.address().port.

DNS#

Stability: 2 - Stable

The dns module contains functions belonging to two different categories:

1) Functions that use the underlying operating system facilities to perform name resolution, and that do not necessarily perform any network communication. This category contains only one function: dns.lookup(). Developers looking to perform name resolution in the same way that other applications on the same operating system behave should use dns.lookup().

For example, looking up nodejs.org.

const dns = require('dns');

dns.lookup('nodejs.org', (err, addresses, family) => {
  console.log('addresses:', addresses);
});

2) Functions that connect to an actual DNS server to perform name resolution, and that always use the network to perform DNS queries. This category contains all functions in the dns module except dns.lookup(). These functions do not use the same set of configuration files used by dns.lookup() (e.g. /etc/hosts). These functions should be used by developers who do not want to use the underlying operating system's facilities for name resolution, and instead want to always perform DNS queries.

Below is an example that resolves 'nodejs.org' then reverse resolves the IP addresses that are returned.

const dns = require('dns');

dns.resolve4('nodejs.org', (err, addresses) => {
  if (err) throw err;

  console.log(`addresses: ${JSON.stringify(addresses)}`);

  addresses.forEach((a) => {
    dns.reverse(a, (err, hostnames) => {
      if (err) {
        throw err;
      }
      console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
    });
  });
});

There are subtle consequences in choosing one over the other, please consult the Implementation considerations section for more information.

dns.getServers()#

Returns an array of IP address strings that are being used for name resolution.

dns.lookup(hostname[, options], callback)#

Resolves a hostname (e.g. 'nodejs.org') into the first found A (IPv4) or AAAA (IPv6) record. options can be an object or integer. If options is not provided, then IPv4 and IPv6 addresses are both valid. If options is an integer, then it must be 4 or 6.

Alternatively, options can be an object containing these properties:

  • family <Number> - The record family. If present, must be the integer 4 or 6. If not provided, both IPv4 and IPv6 addresses are accepted.
  • hints: <Number> - If present, it should be one or more of the supported getaddrinfo flags. If hints is not provided, then no flags are passed to getaddrinfo. Multiple flags can be passed through hints by logically ORing their values. See supported getaddrinfo flags for more information on supported flags.
  • all: <Boolean> - When true, the callback returns all resolved addresses in an array, otherwise returns a single address. Defaults to false.

All properties are optional. An example usage of options is shown below.

{
  family: 4,
  hints: dns.ADDRCONFIG | dns.V4MAPPED,
  all: false
}

The callback function has arguments (err, address, family). address is a string representation of an IPv4 or IPv6 address. family is either the integer 4 or 6 and denotes the family of address (not necessarily the value initially passed to lookup).

With the all option set to true, the arguments change to (err, addresses), with addresses being an array of objects with the properties address and family.
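
For example, a sketch using the all option (the addresses shown are illustrative only):

const dns = require('dns');

dns.lookup('nodejs.org', { all: true }, (err, addresses) => {
  if (err) throw err;
  console.log(addresses);
  // e.g. [ { address: '104.20.22.46', family: 4 },
  //        { address: '2400:cb00:2048:1::6814:162e', family: 6 } ]
});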

On error, err is an Error object, where err.code is the error code. Keep in mind that err.code will be set to 'ENOENT' not only when the hostname does not exist but also when the lookup fails in other ways such as no available file descriptors.

dns.lookup() does not necessarily have anything to do with the DNS protocol. The implementation uses an operating system facility that can associate names with addresses, and vice versa. This implementation can have subtle but important consequences on the behavior of any Node.js program. Please take some time to consult the Implementation considerations section before using dns.lookup().

Supported getaddrinfo flags#

The following flags can be passed as hints to dns.lookup().

  • dns.ADDRCONFIG: Returned address types are determined by the types of addresses supported by the current system. For example, IPv4 addresses are only returned if the current system has at least one IPv4 address configured. Loopback addresses are not considered.
  • dns.V4MAPPED: If the IPv6 family was specified, but no IPv6 addresses were found, then return IPv4 mapped IPv6 addresses. Note that it is not supported on some operating systems (e.g. FreeBSD 10.1).

dns.lookupService(address, port, callback)#

Resolves the given address and port into a hostname and service using the operating system's underlying getnameinfo implementation.

If address is not a valid IP address, a TypeError will be thrown. The port will be coerced to a number. If it is not a legal port, a TypeError will be thrown.

The callback has arguments (err, hostname, service). The hostname and service arguments are strings (e.g. 'localhost' and 'http' respectively).

On error, err is an Error object, where err.code is the error code.

const dns = require('dns');
dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
  console.log(hostname, service);
    // Prints: localhost ssh
});

dns.resolve(hostname[, rrtype], callback)#

Uses the DNS protocol to resolve a hostname (e.g. 'nodejs.org') into an array of the record types specified by rrtype.

Valid values for rrtype are:

  • 'A' - IPv4 addresses, default
  • 'AAAA' - IPv6 addresses
  • 'MX' - mail exchange records
  • 'TXT' - text records
  • 'SRV' - SRV records
  • 'PTR' - PTR records
  • 'NS' - name server records
  • 'CNAME' - canonical name records
  • 'SOA' - start of authority record

The callback function has arguments (err, addresses). When successful, addresses will be an array. The type of each item in addresses is determined by the record type, and described in the documentation for the corresponding lookup methods.

On error, err is an Error object, where err.code is one of the error codes listed here.

dns.resolve4(hostname, callback)#

Uses the DNS protocol to resolve IPv4 addresses (A records) for the hostname. The addresses argument passed to the callback function will contain an array of IPv4 addresses (e.g. ['74.125.79.104', '74.125.79.105', '74.125.79.106']).

dns.resolve6(hostname, callback)#

Uses the DNS protocol to resolve IPv6 addresses (AAAA records) for the hostname. The addresses argument passed to the callback function will contain an array of IPv6 addresses.

dns.resolveCname(hostname, callback)#

Uses the DNS protocol to resolve CNAME records for the hostname. The addresses argument passed to the callback function will contain an array of canonical name records available for the hostname (e.g. ['bar.example.com']).

dns.resolveMx(hostname, callback)#

Uses the DNS protocol to resolve mail exchange records (MX records) for the hostname. The addresses argument passed to the callback function will contain an array of objects containing both a priority and exchange property (e.g. [{priority: 10, exchange: 'mx.example.com'}, ...]).
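
For example, a sketch using a hypothetical domain:

const dns = require('dns');

dns.resolveMx('example.com', (err, addresses) => {
  if (err) throw err;
  // Each entry has a priority and an exchange property.
  addresses.forEach((mx) => {
    console.log(`${mx.priority} ${mx.exchange}`);
  });
});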

dns.resolveNs(hostname, callback)#

Uses the DNS protocol to resolve name server records (NS records) for the hostname. The addresses argument passed to the callback function will contain an array of name server records available for hostname (e.g., ['ns1.example.com', 'ns2.example.com']).

dns.resolveSoa(hostname, callback)#

Uses the DNS protocol to resolve a start of authority record (SOA record) for the hostname. The addresses argument passed to the callback function will be an object with the following properties:

  • nsname
  • hostmaster
  • serial
  • refresh
  • retry
  • expire
  • minttl

{
  nsname: 'ns.example.com',
  hostmaster: 'root.example.com',
  serial: 2013101809,
  refresh: 10000,
  retry: 2400,
  expire: 604800,
  minttl: 3600
}

dns.resolveSrv(hostname, callback)#

Uses the DNS protocol to resolve service records (SRV records) for the hostname. The addresses argument passed to the callback function will be an array of objects with the following properties:

  • priority
  • weight
  • port
  • name

{
  priority: 10,
  weight: 5,
  port: 21223,
  name: 'service.example.com'
}

dns.resolvePtr(hostname, callback)#

Uses the DNS protocol to resolve pointer records (PTR records) for the hostname. The addresses argument passed to the callback function will be an array of strings containing the reply records.

dns.resolveTxt(hostname, callback)#

Uses the DNS protocol to resolve text queries (TXT records) for the hostname. The addresses argument passed to the callback function is a two-dimensional array of the text records available for hostname (e.g., [ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]). Each sub-array contains TXT chunks of one record. Depending on the use case, these could be either joined together or treated separately.
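
For example, a sketch (using a hypothetical domain) that joins the chunks of each TXT record into a single string:

const dns = require('dns');

dns.resolveTxt('example.com', (err, records) => {
  if (err) throw err;
  // Each record is an array of chunks; join them when one string
  // per record is needed.
  records.forEach((chunks) => {
    console.log(chunks.join(''));
  });
});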

dns.reverse(ip, callback)#

Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an array of hostnames.

The callback function has arguments (err, hostnames), where hostnames is an array of resolved hostnames for the given ip.

On error, err is an Error object, where err.code is one of the DNS error codes.

dns.setServers(servers)#

Sets the IP addresses of the servers to be used when resolving. The servers argument is an array of IPv4 or IPv6 addresses.

If a port is specified on the address, it will be removed.

An error will be thrown if an invalid address is provided.

The dns.setServers() method must not be called while a DNS query is in progress.
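
For example, a sketch using hypothetical resolver addresses:

const dns = require('dns');

dns.setServers(['8.8.8.8', '8.8.4.4']);

console.log(dns.getServers());
  // Prints: [ '8.8.8.8', '8.8.4.4' ]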

Error codes#

Each DNS query can return one of the following error codes:

  • dns.NODATA: DNS server returned answer with no data.
  • dns.FORMERR: DNS server claims query was misformatted.
  • dns.SERVFAIL: DNS server returned general failure.
  • dns.NOTFOUND: Domain name not found.
  • dns.NOTIMP: DNS server does not implement requested operation.
  • dns.REFUSED: DNS server refused query.
  • dns.BADQUERY: Misformatted DNS query.
  • dns.BADNAME: Misformatted hostname.
  • dns.BADFAMILY: Unsupported address family.
  • dns.BADRESP: Misformatted DNS reply.
  • dns.CONNREFUSED: Could not contact DNS servers.
  • dns.TIMEOUT: Timeout while contacting DNS servers.
  • dns.EOF: End of file.
  • dns.FILE: Error reading file.
  • dns.NOMEM: Out of memory.
  • dns.DESTRUCTION: Channel is being destroyed.
  • dns.BADSTR: Misformatted string.
  • dns.BADFLAGS: Illegal flags specified.
  • dns.NONAME: Given hostname is not numeric.
  • dns.BADHINTS: Illegal hints flags specified.
  • dns.NOTINITIALIZED: c-ares library initialization not yet performed.
  • dns.LOADIPHLPAPI: Error loading iphlpapi.dll.
  • dns.ADDRGETNETWORKPARAMS: Could not find GetNetworkParams function.
  • dns.CANCELLED: DNS query cancelled.

Implementation considerations#

Although dns.lookup() and the various dns.resolve*()/dns.reverse() functions have the same goal of associating a network name with a network address (or vice versa), their behavior is quite different. These differences can have subtle but significant consequences on the behavior of Node.js programs.

dns.lookup()#

Under the hood, dns.lookup() uses the same operating system facilities as most other programs. For instance, dns.lookup() will almost always resolve a given name the same way as the ping command. On most POSIX-like operating systems, the behavior of the dns.lookup() function can be modified by changing settings in nsswitch.conf(5) and/or resolv.conf(5), but note that changing these files will change the behavior of all other programs running on the same operating system.

Though the call to dns.lookup() will be asynchronous from JavaScript's perspective, it is implemented as a synchronous call to getaddrinfo(3) that runs on libuv's threadpool. Because libuv's threadpool has a fixed size, it means that if for whatever reason the call to getaddrinfo(3) takes a long time, other operations that could run on libuv's threadpool (such as filesystem operations) will experience degraded performance. In order to mitigate this issue, one potential solution is to increase the size of libuv's threadpool by setting the 'UV_THREADPOOL_SIZE' environment variable to a value greater than 4 (its current default value). For more information on libuv's threadpool, see the official libuv documentation.

dns.resolve(), dns.resolve*() and dns.reverse()#

These functions are implemented quite differently than dns.lookup(). They do not use getaddrinfo(3) and they always perform a DNS query on the network. This network communication is always done asynchronously, and does not use libuv's threadpool.

As a result, these functions cannot have the same negative impact on other processing that happens on libuv's threadpool that dns.lookup() can have.

They do not use the same set of configuration files that dns.lookup() uses. For instance, they do not use the configuration from /etc/hosts.

Domain#

Stability: 0 - Deprecated

This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Most end users should not have cause to use this module. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.

Domains provide a way to handle multiple different IO operations as a single group. If any of the event emitters or callbacks registered to a domain emit an 'error' event, or throw an error, then the domain object will be notified, rather than losing the context of the error in the process.on('uncaughtException') handler, or causing the program to exit immediately with an error code.

Warning: Don't Ignore Errors!#

Domain error handlers are not a substitute for closing down your process when an error occurs.

By the very nature of how throw works in JavaScript, there is almost never any way to safely "pick up where you left off", without leaking references, or creating some other sort of undefined brittle state.

The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else.

The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.

In this way, domain usage goes hand-in-hand with the cluster module, since the master process can fork a new worker when a worker encounters an error. For Node.js programs that scale to multiple machines, the terminating proxy or service registry can take note of the failure, and react accordingly.

For example, this is not a good idea:

// XXX WARNING!  BAD IDEA!

var d = require('domain').create();
d.on('error', (er) => {
  // The error won't crash the process, but what it does is worse!
  // Though we've prevented abrupt process restarting, we are leaking
  // resources like crazy if this ever happens.
  // This is no better than process.on('uncaughtException')!
  console.log('error, but oh well', er.message);
});
d.run(() => {
  require('http').createServer((req, res) => {
    handleRequest(req, res);
  }).listen(PORT);
});

By using the context of a domain, and the resilience of separating our program into multiple worker processes, we can react more appropriately, and handle errors with much greater safety.

// Much better!

const cluster = require('cluster');
const PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
  // In real life, you'd probably use more than just 2 workers,
  // and perhaps not put the master and worker in the same file.
  //
  // You can also of course get a bit fancier about logging, and
  // implement whatever custom logic you need to prevent DoS
  // attacks and other bad behavior.
  //
  // See the options in the cluster documentation.
  //
  // The important thing is that the master does very little,
  // increasing our resilience to unexpected errors.

  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', (worker) => {
    console.error('disconnect!');
    cluster.fork();
  });

} else {
  // the worker
  //
  // This is where we put our bugs!

  const domain = require('domain');

  // See the cluster documentation for more details about using
  // worker processes to serve requests.  How it works, caveats, etc.

  const server = require('http').createServer((req, res) => {
    var d = domain.create();
    d.on('error', (er) => {
      console.error('error', er.stack);

      // Note: we're in dangerous territory!
      // By definition, something unexpected occurred,
      // which we probably didn't want.
      // Anything can happen now!  Be very careful!

      try {
        // make sure we close down within 30 seconds
        var killtimer = setTimeout(() => {
          process.exit(1);
        }, 30000);
        // But don't keep the process open just for that!
        killtimer.unref();

        // stop taking new requests.
        server.close();

        // Let the master know we're dead.  This will trigger a
        // 'disconnect' in the cluster master, and then it will fork
        // a new worker.
        cluster.worker.disconnect();

        // try to send an error to the request that triggered the problem
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      } catch (er2) {
        // oh well, not much we can do at this point.
        console.error('Error sending 500!', er2.stack);
      }
    });

    // Because req and res were created before this domain existed,
    // we need to explicitly add them.
    // See the explanation of implicit vs explicit binding below.
    d.add(req);
    d.add(res);

    // Now run the handler function in the domain.
    d.run(() => {
      handleRequest(req, res);
    });
  });
  server.listen(PORT);
}

// This part isn't important.  Just an example routing thing.
// You'd put your fancy application logic here.
function handleRequest(req, res) {
  switch(req.url) {
    case '/error':
      // We do some async stuff, and then...
      setTimeout(() => {
        // Whoops!
        flerb.bark();
      });
      break;
    default:
      res.end('ok');
  }
}

Additions to Error objects#

Any time an Error object is routed through a domain, a few extra fields are added to it.

  • error.domain The domain that first handled the error.
  • error.domainEmitter The event emitter that emitted an 'error' event with the error object.
  • error.domainBound The callback function which was bound to the domain, and passed an error as its first argument.
  • error.domainThrown A boolean indicating whether the error was thrown, emitted, or passed to a bound callback function.
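
A minimal sketch that inspects two of these fields when an error thrown in a timer is routed to the domain:

const domain = require('domain');
const d = domain.create();

d.on('error', (er) => {
  console.log(er.domain === d);  // true, this domain handled the error
  console.log(er.domainThrown);  // true, the error was thrown (not emitted)
});

d.run(() => {
  setTimeout(() => {
    throw new Error('boom');
  }, 10);
});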

Implicit Binding#

If domains are in use, then all new EventEmitter objects (including Stream objects, requests, responses, etc.) will be implicitly bound to the active domain at the time of their creation.

Additionally, callbacks passed to lowlevel event loop requests (such as to fs.open, or other callback-taking methods) will automatically be bound to the active domain. If they throw, then the domain will catch the error.

In order to prevent excessive memory usage, Domain objects themselves are not implicitly added as children of the active domain. If they were, then it would be too easy to prevent request and response objects from being properly garbage collected.

If you want to nest Domain objects as children of a parent Domain, then you must explicitly add them.

Implicit binding routes thrown errors and 'error' events to the Domain's 'error' event, but does not register the EventEmitter on the Domain, so domain.dispose() will not shut down the EventEmitter. Implicit binding only takes care of thrown errors and 'error' events.

Explicit Binding#

Sometimes, the domain in use is not the one that ought to be used for a specific event emitter. Or, the event emitter could have been created in the context of one domain, but ought to instead be bound to some other domain.

For example, there could be one domain in use for an HTTP server, but perhaps we would like to have a separate domain to use for each request.

That is possible via explicit binding.

For example:

// create a top-level domain for the server
const domain = require('domain');
const http = require('http');
const serverDomain = domain.create();

serverDomain.run(() => {
  // server is created in the scope of serverDomain
  http.createServer((req, res) => {
    // req and res are also created in the scope of serverDomain
    // however, we'd prefer to have a separate domain for each request.
    // create it first thing, and add req and res to it.
    var reqd = domain.create();
    reqd.add(req);
    reqd.add(res);
    reqd.on('error', (er) => {
      console.error('Error', er, req.url);
      try {
        res.writeHead(500);
        res.end('Error occurred, sorry.');
      } catch (er) {
        console.error('Error sending 500', er, req.url);
      }
    });
  }).listen(1337);
});

domain.create()#

  • return: <Domain>

Returns a new Domain object.

Class: Domain#

The Domain class encapsulates the functionality of routing errors and uncaught exceptions to the active Domain object.

Domain is a child class of EventEmitter. To handle the errors that it catches, listen to its 'error' event.

domain.run(fn[, arg][, ...])#

Run the supplied function in the context of the domain, implicitly binding all event emitters, timers, and lowlevel requests that are created in that context. Optionally, arguments can be passed to the function.

This is the most basic way to use a domain.

Example:

const domain = require('domain');
const fs = require('fs');
const d = domain.create();
d.on('error', (er) => {
  console.error('Caught error!', er);
});
d.run(() => {
  process.nextTick(() => {
    setTimeout(() => { // simulating some various async stuff
      fs.open('non-existent file', 'r', (er, fd) => {
        if (er) throw er;
        // proceed...
      });
    }, 100);
  });
});

In this example, the d.on('error') handler will be triggered, rather than crashing the program.

domain.members#

An array of timers and event emitters that have been explicitly added to the domain.

domain.add(emitter)#

Explicitly adds an emitter to the domain. If any event handlers called by the emitter throw an error, or if the emitter emits an 'error' event, it will be routed to the domain's 'error' event, just like with implicit binding.

This also works with timers that are returned from setInterval() and setTimeout(). If their callback function throws, it will be caught by the domain 'error' handler.

If the Timer or EventEmitter was already bound to a domain, it is removed from that one, and bound to this one instead.

domain.remove(emitter)#

The opposite of domain.add(emitter). Removes domain handling from the specified emitter.

domain.bind(callback)#

The returned function will be a wrapper around the supplied callback function. When the returned function is called, any errors that are thrown will be routed to the domain's 'error' event.

Example#

const d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.bind((er, data) => {
    // if this throws, it will also be passed to the domain
    return cb(er, data ? JSON.parse(data) : null);
  }));
}

d.on('error', (er) => {
  // an error occurred somewhere.
  // if we throw it now, it will crash the program
  // with the normal line number and stack message.
});

domain.intercept(callback)#

This method is almost identical to domain.bind(callback). However, in addition to catching thrown errors, it will also intercept Error objects sent as the first argument to the function.

In this way, the common if (err) return callback(err); pattern can be replaced with a single error handler in a single place.

Example#

const d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.intercept((data) => {
    // note, the first argument is never passed to the
    // callback since it is assumed to be the 'Error' argument
    // and thus intercepted by the domain.

    // if this throws, it will also be passed to the domain
    // so the error-handling logic can be moved to the 'error'
    // event on the domain instead of being repeated throughout
    // the program.
    return cb(null, JSON.parse(data));
  }));
}

d.on('error', (er) => {
  // an error occurred somewhere.
  // if we throw it now, it will crash the program
  // with the normal line number and stack message.
});

domain.enter()#

The enter method is plumbing used by the run, bind, and intercept methods to set the active domain. It sets domain.active and process.domain to the domain, and implicitly pushes the domain onto the domain stack managed by the domain module (see domain.exit() for details on the domain stack). The call to enter delimits the beginning of a chain of asynchronous calls and I/O operations bound to a domain.

Calling enter changes only the active domain, and does not alter the domain itself. enter and exit can be called an arbitrary number of times on a single domain.

If the domain on which enter is called has been disposed, enter will return without setting the domain.

domain.exit()#

The exit method exits the current domain, popping it off the domain stack. Any time execution is going to switch to the context of a different chain of asynchronous calls, it's important to ensure that the current domain is exited. The call to exit delimits either the end of or an interruption to the chain of asynchronous calls and I/O operations bound to a domain.

If there are multiple, nested domains bound to the current execution context, exit will exit any domains nested within this domain.

Calling exit changes only the active domain, and does not alter the domain itself. enter and exit can be called an arbitrary number of times on a single domain.

If the domain on which exit is called has been disposed, exit will return without exiting the domain.

domain.dispose()#

Stability: 0 - Deprecated.  Please recover from failed IO actions
explicitly via error event handlers set on the domain.

Once dispose has been called, the domain will no longer be used by callbacks bound into the domain via run, bind, or intercept, and a 'dispose' event is emitted.

Errors#

Applications running in Node.js will generally experience four categories of errors:

  • Standard JavaScript errors such as:
    • <EvalError> : thrown when a call to eval() fails.
    • <SyntaxError> : thrown in response to improper JavaScript language syntax.
    • <RangeError> : thrown when a value is not within an expected range
    • <ReferenceError> : thrown when using undefined variables
    • <TypeError> : thrown when passing arguments of the wrong type
    • <URIError> : thrown when a global URI handling function is misused.
  • System errors triggered by underlying operating system constraints such as attempting to open a file that does not exist, attempting to send data over a closed socket, etc;
  • User-specified errors triggered by application code; and
  • Assertion Errors, a special class of error that can be triggered whenever Node.js detects an exceptional logic violation that should never occur. These are typically raised by the assert module.

All JavaScript and System errors raised by Node.js inherit from, or are instances of, the standard JavaScript <Error> class and are guaranteed to provide at least the properties available on that class.

Error Propagation and Interception#

Node.js supports several mechanisms for propagating and handling errors that occur while an application is running. How these errors are reported and handled depends entirely on the type of Error and the style of the API that is called.

All JavaScript errors are handled as exceptions that immediately generate and throw an error using the standard JavaScript throw mechanism. These are handled using the try / catch construct provided by the JavaScript language.

// Throws with a ReferenceError because z is undefined
try {
  const m = 1;
  const n = m + z;
} catch (err) {
  // Handle the error here.
}

Any use of the JavaScript throw mechanism will raise an exception that must be handled using try / catch or the Node.js process will exit immediately.

With few exceptions, synchronous APIs (any blocking method that does not accept a callback function, such as fs.readFileSync) will use throw to report errors.

Errors that occur within Asynchronous APIs may be reported in multiple ways:

  • Most asynchronous methods that accept a callback function will accept an Error object passed as the first argument to that function. If that first argument is not null and is an instance of Error, then an error occurred that should be handled.

    const fs = require('fs');
    fs.readFile('a file that does not exist', (err, data) => {
      if (err) {
        console.error('There was an error reading the file!', err);
        return;
      }
      // Otherwise handle the data
    });
  • When an asynchronous method is called on an object that is an EventEmitter, errors can be routed to that object's 'error' event.

    const net = require('net');
    const connection = net.connect('localhost');
    
    // Adding an 'error' event handler to a stream:
    connection.on('error', (err) => {
      // If the connection is reset by the server, or if it can't
      // connect at all, or on any sort of error encountered by
      // the connection, the error will be sent here.
      console.error(err);
    });
    
    connection.pipe(process.stdout);
  • A handful of typically asynchronous methods in the Node.js API may still use the throw mechanism to raise exceptions that must be handled using try / catch. There is no comprehensive list of such methods; please refer to the documentation of each method to determine the appropriate error handling mechanism required.

The use of the 'error' event mechanism is most common for stream-based and event emitter-based APIs, which themselves represent a series of asynchronous operations over time (as opposed to a single operation that may pass or fail).

For all EventEmitter objects, if an 'error' event handler is not provided, the error will be thrown, causing the Node.js process to report an unhandled exception and crash, unless either the domain module is used appropriately or a handler has been registered for the process.on('uncaughtException') event.

const EventEmitter = require('events');
const ee = new EventEmitter();

setImmediate(() => {
  // This will crash the process because no 'error' event
  // handler has been added.
  ee.emit('error', new Error('This will crash'));
});

Errors generated in this way cannot be intercepted using try / catch as they are thrown after the calling code has already exited.

Developers must refer to the documentation for each method to determine exactly how errors raised by those methods are propagated.

Node.js style callbacks#

Most asynchronous methods exposed by the Node.js core API follow an idiomatic pattern referred to as a "Node.js style callback". With this pattern, a callback function is passed to the method as an argument. When the operation either completes or an error is raised, the callback function is called with the Error object (if any) passed as the first argument. If no error was raised, the first argument will be passed as null.

const fs = require('fs');

function nodeStyleCallback(err, data) {
  if (err) {
    console.error('There was an error', err);
    return;
  }
  console.log(data);
}

fs.readFile('/some/file/that/does-not-exist', nodeStyleCallback);
fs.readFile('/some/file/that/does-exist', nodeStyleCallback);

The JavaScript try / catch mechanism cannot be used to intercept errors generated by asynchronous APIs. A common mistake for beginners is to try to use throw inside a Node.js style callback:

// THIS WILL NOT WORK:
const fs = require('fs');

try {
  fs.readFile('/some/file/that/does-not-exist', (err, data) => {
    // mistaken assumption: throwing here...
    if (err) {
      throw err;
    }
  });
} catch(err) {
  // This will not catch the throw!
  console.log(err);
}

This will not work because the callback function passed to fs.readFile() is called asynchronously. By the time the callback has been called, the surrounding code (including the try { } catch(err) { } block) will have already exited. Throwing an error inside the callback can crash the Node.js process in most cases. If domains are enabled, or a handler has been registered with process.on('uncaughtException'), such errors can be intercepted.

Class: Error#

A generic JavaScript Error object that does not denote any specific circumstance of why the error occurred. Error objects capture a "stack trace" detailing the point in the code at which the Error was instantiated, and may provide a text description of the error.

All errors generated by Node.js, including all System and JavaScript errors, will either be instances of, or inherit from, the Error class.

new Error(message)#

Creates a new Error object and sets the error.message property to the provided text message. If an object is passed as message, the text message is generated by calling message.toString(). The error.stack property will represent the point in the code at which new Error() was called. Stack traces are dependent on V8's stack trace API. Stack traces extend only to either (a) the beginning of synchronous code execution, or (b) the number of frames given by the property Error.stackTraceLimit, whichever is smaller.

Error.captureStackTrace(targetObject[, constructorOpt])#

Creates a .stack property on targetObject, which when accessed returns a string representing the location in the code at which Error.captureStackTrace() was called.

const myObject = {};
Error.captureStackTrace(myObject);
myObject.stack  // similar to `new Error().stack`

The first line of the trace, instead of being prefixed with ErrorType: message, will be the result of calling targetObject.toString().

The optional constructorOpt argument accepts a function. If given, all frames above constructorOpt, including constructorOpt, will be omitted from the generated stack trace.

The constructorOpt argument is useful for hiding implementation details of error generation from an end user. For instance:

function MyError() {
  Error.captureStackTrace(this, MyError);
}

// Without passing MyError to captureStackTrace, the MyError
// frame would show up in the .stack property. By passing
// the constructor, we omit that frame and all frames above it.
new MyError().stack

Error.stackTraceLimit#

The Error.stackTraceLimit property specifies the number of stack frames collected by a stack trace (whether generated by new Error().stack or Error.captureStackTrace(obj)).

The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.

If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
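
For example, a sketch limiting subsequent stack traces to two frames:

Error.stackTraceLimit = 2;

const err = new Error('example');
// The stack contains the message line plus at most two "at ..." frames.
console.log(err.stack);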

error.message#

Returns the string description of error as set by calling new Error(message). The message passed to the constructor will also appear in the first line of the stack trace of the Error, however changing this property after the Error object is created may not change the first line of the stack trace.

const err = new Error('The message');
console.log(err.message);
  // Prints: The message

error.stack#

Returns a string describing the point in the code at which the Error was instantiated.

For example:

Error: Things keep happening!
   at /home/gbusey/file.js:525:2
   at Frobnicator.refrobulate (/home/gbusey/business-logic.js:424:21)
   at Actor.<anonymous> (/home/gbusey/actors.js:400:8)
   at increaseSynergy (/home/gbusey/actors.js:701:6)

The first line is formatted as <error class name>: <error message>, and is followed by a series of stack frames (each line beginning with "at "). Each frame describes a call site within the code that led to the error being generated. V8 attempts to display a name for each function (by variable name, function name, or object method name), but occasionally it will not be able to find a suitable name. If V8 cannot determine a name for the function, only location information will be displayed for that frame. Otherwise, the determined function name will be displayed with location information appended in parentheses.

It is important to note that frames are only generated for JavaScript functions. If, for example, execution synchronously passes through a C++ addon function called cheetahify, which itself calls a JavaScript function, the frame representing the cheetahify call will not be present in the stack traces:

const cheetahify = require('./native-binding.node');

function makeFaster() {
  // cheetahify *synchronously* calls speedy.
  cheetahify(function speedy() {
    throw new Error('oh no!');
  });
}

makeFaster(); // will throw:
  // /home/gbusey/file.js:6
  //     throw new Error('oh no!');
  //           ^
  // Error: oh no!
  //     at speedy (/home/gbusey/file.js:6:11)
  //     at makeFaster (/home/gbusey/file.js:5:3)
  //     at Object.<anonymous> (/home/gbusey/file.js:10:1)
  //     at Module._compile (module.js:456:26)
  //     at Object.Module._extensions..js (module.js:474:10)
  //     at Module.load (module.js:356:32)
  //     at Function.Module._load (module.js:312:12)
  //     at Function.Module.runMain (module.js:497:10)
  //     at startup (node.js:119:16)
  //     at node.js:906:3

The location information will be one of:

  • native, if the frame represents a call internal to V8 (as in [].forEach).
  • plain-filename.js:line:column, if the frame represents a call internal to Node.js.
  • /absolute/path/to/file.js:line:column, if the frame represents a call in a user program, or its dependencies.

The string representing the stack trace is lazily generated when the error.stack property is accessed.

The number of frames captured by the stack trace is bounded by the smaller of Error.stackTraceLimit or the number of available frames on the current event loop tick.

System-level errors are generated as augmented Error instances, which are detailed here.

Class: RangeError#

A subclass of Error that indicates that a provided argument was not within the set or range of acceptable values for a function; whether that is a numeric range, or outside the set of options for a given function parameter.

For example:

require('net').connect(-1);
  // throws RangeError, port should be > 0 && < 65536

Node.js will generate and throw RangeError instances immediately as a form of argument validation.

Class: ReferenceError#

A subclass of Error that indicates that an attempt is being made to access a variable that is not defined. Such errors commonly indicate typos in code, or an otherwise broken program.

While client code may generate and propagate these errors, in practice, only V8 will do so.

doesNotExist;
  // throws ReferenceError, doesNotExist is not a variable in this program.

ReferenceError instances will have an error.arguments property whose value is an array containing a single element: a string representing the variable that was not defined.

const assert = require('assert');
try {
  doesNotExist;
} catch(err) {
  assert.strictEqual(err.arguments[0], 'doesNotExist');
}

Unless an application is dynamically generating and running code, ReferenceError instances should always be considered a bug in the code or its dependencies.

Class: SyntaxError#

A subclass of Error that indicates that a program is not valid JavaScript. These errors may only be generated and propagated as a result of code evaluation. Code evaluation may happen as a result of eval, Function, require, or vm. These errors are almost always indicative of a broken program.

try {
  require('vm').runInThisContext('binary ! isNotOk');
} catch(err) {
  // err will be a SyntaxError
}

SyntaxError instances are unrecoverable in the context that created them – they may only be caught by other contexts.

Class: TypeError#

A subclass of Error that indicates that a provided argument is not an allowable type. For example, passing a function to a parameter which expects a string would be considered a TypeError.

require('url').parse(() => { });
  // throws TypeError, since it expected a string

Node.js will generate and throw TypeError instances immediately as a form of argument validation.

Exceptions vs. Errors#

A JavaScript exception is a value that is thrown as a result of an invalid operation or as the target of a throw statement. While it is not required that these values are instances of Error or classes which inherit from Error, all exceptions thrown by Node.js or the JavaScript runtime will be instances of Error.

Some exceptions are unrecoverable at the JavaScript layer. Such exceptions will always cause the Node.js process to crash. Examples include assert() checks or abort() calls in the C++ layer.

System Errors#

System errors are generated when exceptions occur within the program's runtime environment. Typically, these are operational errors that occur when an application violates an operating system constraint such as attempting to read a file that does not exist or when the user does not have sufficient permissions.

System errors are typically generated at the syscall level: an exhaustive list of error codes and their meanings is available by running man 2 intro or man 3 errno on most Unices; or online.

In Node.js, system errors are represented as augmented Error objects with added properties.

Class: System Error#

error.code#

error.errno#

Returns a string representing the error code, which is always E followed by a sequence of capital letters, and may be referenced in man 2 intro.

The properties error.code and error.errno are aliases of one another and return the same value.

error.syscall#

Returns a string describing the syscall that failed.
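
For example, a sketch showing how a failed fs operation exposes these properties (the path is hypothetical):

const fs = require('fs');

fs.stat('/path/that/does/not/exist', (err, stats) => {
  if (err) {
    console.log(err.code);     // e.g. 'ENOENT'
    console.log(err.syscall);  // e.g. 'stat'
  }
});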

Common System Errors#

This list is not exhaustive, but enumerates many of the common system errors encountered when writing a Node.js program. An exhaustive list may be found here.

  • EACCES (Permission denied): An attempt was made to access a file in a way forbidden by its file access permissions.

  • EADDRINUSE (Address already in use): An attempt to bind a server (net, http, or https) to a local address failed due to another server on the local system already occupying that address.

  • ECONNREFUSED (Connection refused): No connection could be made because the target machine actively refused it. This usually results from trying to connect to a service that is inactive on the foreign host.

  • ECONNRESET (Connection reset by peer): A connection was forcibly closed by a peer. This normally results from a loss of the connection on the remote socket due to a timeout or reboot. Commonly encountered via the http and net modules.

  • EEXIST (File exists): An existing file was the target of an operation that required that the target not exist.

  • EISDIR (Is a directory): An operation expected a file, but the given pathname was a directory.

  • EMFILE (Too many open files in system): Maximum number of file descriptors allowable on the system has been reached, and requests for another descriptor cannot be fulfilled until at least one has been closed. This is encountered when opening many files at once in parallel, especially on systems (in particular, OS X) where there is a low file descriptor limit for processes. To remedy a low limit, run ulimit -n 2048 in the same shell that will run the Node.js process.

  • ENOENT (No such file or directory): Commonly raised by fs operations to indicate that a component of the specified pathname does not exist -- no entity (file or directory) could be found by the given path.

  • ENOTDIR (Not a directory): A component of the given pathname existed, but was not a directory as expected. Commonly raised by fs.readdir.

  • ENOTEMPTY (Directory not empty): A directory with entries was the target of an operation that requires an empty directory -- usually fs.unlink.

  • EPERM (Operation not permitted): An attempt was made to perform an operation that requires elevated privileges.

  • EPIPE (Broken pipe): A write on a pipe, socket, or FIFO for which there is no process to read the data. Commonly encountered at the net and http layers, indicative that the remote side of the stream being written to has been closed.

  • ETIMEDOUT (Operation timed out): A connect or send request failed because the connected party did not properly respond after a period of time. Usually encountered by http or net -- often a sign that a socket.end() was not properly called.

Events#

Stability: 2 - Stable

Much of the Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") periodically emit named events that cause Function objects ("listeners") to be called.

For instance: a net.Server object emits an event each time a peer connects to it; a fs.ReadStream emits an event when the file is opened; a stream emits an event whenever data is available to be read.

All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on() function that allows one or more Functions to be attached to named events emitted by the object. Typically, event names are camel-cased strings but any valid JavaScript property key can be used.

When the EventEmitter object emits an event, all of the Functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and will be discarded.

The following example shows a simple EventEmitter instance with a single listener. The eventEmitter.on() method is used to register listeners, while the eventEmitter.emit() method is used to trigger the event.

const EventEmitter = require('events');
const util = require('util');

function MyEmitter() {
  EventEmitter.call(this);
}
util.inherits(MyEmitter, EventEmitter);

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
  console.log('an event occurred!');
});
myEmitter.emit('event');

Any object can become an EventEmitter through inheritance. The example above uses the traditional Node.js style prototypical inheritance using the util.inherits() method. It is, however, possible to use ES6 classes as well:

const EventEmitter = require('events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
  console.log('an event occurred!');
});
myEmitter.emit('event');

Passing arguments and this to listeners#

The eventEmitter.emit() method allows an arbitrary set of arguments to be passed to the listener functions. It is important to keep in mind that when an ordinary listener function is called by the EventEmitter, the standard this keyword is intentionally set to reference the EventEmitter to which the listener is attached.

const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
  console.log(a, b, this);
    // Prints:
    //   a b MyEmitter {
    //     domain: null,
    //     _events: { event: [Function] },
    //     _eventsCount: 1,
    //     _maxListeners: undefined }
});
myEmitter.emit('event', 'a', 'b');

It is possible to use ES6 Arrow Functions as listeners, however, when doing so, the this keyword will no longer reference the EventEmitter instance:

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  console.log(a, b, this);
    // Prints: a b {}
});
myEmitter.emit('event', 'a', 'b');

Asynchronous vs. Synchronous#

The EventEmitter calls all listeners synchronously in the order in which they were registered. This is important to ensure the proper sequencing of events and to avoid race conditions or logic errors. When appropriate, listener functions can switch to an asynchronous mode of operation using the setImmediate() or process.nextTick() methods:

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  setImmediate(() => {
    console.log('this happens asynchronously');
  });
});
myEmitter.emit('event', 'a', 'b');

Handling events only once#

When a listener is registered using the eventEmitter.on() method, that listener will be invoked every time the named event is emitted.

const myEmitter = new MyEmitter();
var m = 0;
myEmitter.on('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
  // Prints: 1
myEmitter.emit('event');
  // Prints: 2

Using the eventEmitter.once() method, it is possible to register a listener that is immediately unregistered after it is called.

const myEmitter = new MyEmitter();
var m = 0;
myEmitter.once('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
  // Prints: 1
myEmitter.emit('event');
  // Ignored

Error events#

When an error occurs within an EventEmitter instance, the typical action is for an 'error' event to be emitted. These are treated as a special case within Node.js.

If an EventEmitter does not have at least one listener registered for the 'error' event, and an 'error' event is emitted, the error is thrown, a stack trace is printed, and the Node.js process exits.

const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
  // Throws and crashes Node.js

To guard against crashing the Node.js process, developers can either register a listener for the process.on('uncaughtException') event or use the domain module (Note, however, that the domain module has been deprecated).

const myEmitter = new MyEmitter();

process.on('uncaughtException', (err) => {
  console.log('whoops! there was an error');
});

myEmitter.emit('error', new Error('whoops!'));
  // Prints: whoops! there was an error

As a best practice, developers should always register listeners for the 'error' event:

const myEmitter = new MyEmitter();
myEmitter.on('error', (err) => {
  console.log('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
  // Prints: whoops! there was an error

Class: EventEmitter#

The EventEmitter class is defined and exposed by the events module:

const EventEmitter = require('events');

All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when a listener is removed.

Event: 'newListener'#

The EventEmitter instance will emit its own 'newListener' event before a listener is added to its internal array of listeners.

Listeners registered for the 'newListener' event will be passed the event name and a reference to the listener being added.

The fact that the event is triggered before adding the listener has a subtle but important side effect: any additional listeners registered to the same name within the 'newListener' callback will be inserted before the listener that is in the process of being added.

const myEmitter = new MyEmitter();
// Only do this once so we don't loop forever
myEmitter.once('newListener', (event, listener) => {
  if (event === 'event') {
    // Insert a new listener in front
    myEmitter.on('event', () => {
      console.log('B');
    });
  }
});
myEmitter.on('event', () => {
  console.log('A');
});
myEmitter.emit('event');
  // Prints:
  //   B
  //   A

Event: 'removeListener'#

The 'removeListener' event is emitted after a listener is removed.

EventEmitter.listenerCount(emitter, eventName)#

Stability: 0 - Deprecated: Use emitter.listenerCount() instead.

A class method that returns the number of listeners for the given eventName registered on the given emitter.

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {});
myEmitter.on('event', () => {});
console.log(EventEmitter.listenerCount(myEmitter, 'event'));
  // Prints: 2

EventEmitter.defaultMaxListeners#

By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the EventEmitter.defaultMaxListeners property can be used.

Take caution when setting EventEmitter.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over EventEmitter.defaultMaxListeners.
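
For example, a sketch that raises the default limit for every emitter in the process:

const EventEmitter = require('events');

// Affects all EventEmitter instances, including ones created earlier,
// unless they have called setMaxListeners() themselves.
EventEmitter.defaultMaxListeners = 20;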

Note that this is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a possible EventEmitter memory leak has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

emitter.addListener(eventName, listener)#

Alias for emitter.on(eventName, listener).

emitter.emit(eventName[, arg1][, arg2][, ...])#

Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

Returns true if the event had listeners, false otherwise.
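
For example, a minimal sketch:

const EventEmitter = require('events');
const myEE = new EventEmitter();

myEE.on('ping', () => {
  console.log('pong');
});

console.log(myEE.emit('ping'));
  // Prints: pong
  //         true
console.log(myEE.emit('silence'));
  // Prints: false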

emitter.eventNames()#

Returns an array listing the events for which the emitter has registered listeners. The values in the array will be strings or Symbols.

const EventEmitter = require('events');
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
  // Prints ['foo', 'bar', Symbol('symbol')]

emitter.getMaxListeners()#

Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

emitter.listenerCount(eventName)#

  • eventName <Value> The name of the event being listened for

Returns the number of listeners listening to the event named eventName.
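
For example:

const EventEmitter = require('events');
const myEE = new EventEmitter();

myEE.on('event', () => {});
myEE.on('event', () => {});

console.log(myEE.listenerCount('event'));
  // Prints: 2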

emitter.listeners(eventName)#

Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
  // Prints: [ [Function] ]

emitter.on(eventName, listener)#

Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter so calls can be chained.

emitter.once(eventName, listener)#

Adds a one time listener function for the event named eventName. This listener is invoked only the next time eventName is triggered, after which it is removed.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter so calls can be chained.

emitter.removeAllListeners([eventName])#

Removes all listeners, or those of the specified eventName.

Note that it is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter so calls can be chained.

emitter.removeListener(eventName, listener)#

Removes the specified listener from the listener array for the event named eventName.

var callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener must be called multiple times to remove each instance.

Note that once an event has been emitted, all listeners attached to it at the time of emitting will be called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events will behave as expected.

const myEmitter = new MyEmitter();

var callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

var callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);

myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
  // Prints:
  //   A
  //   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
  // Prints:
  //   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it does mean that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

Returns a reference to the EventEmitter so calls can be chained.

emitter.setMaxListeners(n)#

By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. Obviously, not all events should be limited to just 10 listeners. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

Returns a reference to the EventEmitter so calls can be chained.

File System#

Stability: 2 - Stable

File I/O is provided by simple wrappers around standard POSIX functions. To use this module do require('fs'). All the methods have asynchronous and synchronous forms.

The asynchronous form always takes a completion callback as its last argument. The arguments passed to the completion callback depend on the method, but the first argument is always reserved for an exception. If the operation was completed successfully, then the first argument will be null or undefined.

When using the synchronous form any exceptions are immediately thrown. You can use try/catch to handle exceptions or allow them to bubble up.

Here is an example of the asynchronous version:

const fs = require('fs');

fs.unlink('/tmp/hello', (err) => {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});

Here is the synchronous version:

const fs = require('fs');

fs.unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');

With the asynchronous methods there is no guaranteed ordering. So the following is prone to error:

fs.rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  console.log('renamed complete');
});
fs.stat('/tmp/world', (err, stats) => {
  if (err) throw err;
  console.log(`stats: ${JSON.stringify(stats)}`);
});

It could be that fs.stat is executed before fs.rename. The correct way to do this is to chain the callbacks.

fs.rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  fs.stat('/tmp/world', (err, stats) => {
    if (err) throw err;
    console.log(`stats: ${JSON.stringify(stats)}`);
  });
});

In busy processes, the programmer is strongly encouraged to use the asynchronous versions of these calls. The synchronous versions will block the entire process until they complete--halting all connections.

The relative path to a filename can be used. Remember, however, that this path will be relative to process.cwd().

Most fs functions let you omit the callback argument. If you do, a default callback is used that rethrows errors. To get a trace to the original call site, set the NODE_DEBUG environment variable:

$ cat script.js
function bad() {
  require('fs').readFile('/');
}
bad();

$ env NODE_DEBUG=fs node script.js
fs.js:66
        throw err;
              ^
Error: EISDIR, read
    at rethrow (fs.js:61:21)
    at maybeCallback (fs.js:79:42)
    at Object.fs.readFile (fs.js:153:18)
    at bad (/path/to/script.js:2:17)
    at Object.<anonymous> (/path/to/script.js:5:1)
    <etc.>

Buffer API#

fs functions support passing and receiving paths as both strings and Buffers. The latter is intended to make it possible to work with filesystems that allow for non-UTF-8 filenames. For most typical uses, working with paths as Buffers will be unnecessary, as the string API converts to and from UTF-8 automatically.

Note that on certain file systems (such as NTFS and HFS+) filenames will always be encoded as UTF-8. On such file systems, passing non-UTF-8 encoded Buffers to fs functions will not work as expected.
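
For example, a minimal sketch of passing a path as a Buffer (the path itself is illustrative):

const fs = require('fs');

// The same path expressed as a Buffer; for plain UTF-8 names this is
// equivalent to passing the string directly.
fs.stat(Buffer.from('/tmp/hello'), (err, stats) => {
  if (err) throw err;
  console.log(stats.isFile());
});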

Class: fs.FSWatcher#

Objects returned from fs.watch() are of this type.

Event: 'change'#

Emitted when something changes in a watched directory or file. See more details in fs.watch().

The filename argument may not be provided depending on operating system support. If filename is provided, it will be passed as a Buffer if fs.watch() is called with its encoding option set to 'buffer'; otherwise, filename will be a string.

fs.watch('./tmp', {encoding: 'buffer'}, (event, filename) => {
  if (filename)
    console.log(filename);
    // Prints: <Buffer ...>
});

Event: 'error'#

Emitted when an error occurs.

watcher.close()#

Stop watching for changes on the given fs.FSWatcher.

Class: fs.ReadStream#

ReadStream is a Readable Stream.

Event: 'open'#

  • fd <Integer> Integer file descriptor used by the ReadStream.

Emitted when the ReadStream's file is opened.
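
A short sketch of listening for 'open' (the file name is illustrative):

const fs = require('fs');

const readStream = fs.createReadStream('/tmp/hello');
readStream.on('open', (fd) => {
  // fd is the numeric file descriptor backing the stream.
  console.log(`file opened with descriptor ${fd}`);
});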

readStream.path#

The path to the file the stream is reading from as specified in the first argument to fs.createReadStream(). If path is passed as a string, then readStream.path will be a string. If path is passed as a Buffer, then readStream.path will be a Buffer.

Class: fs.Stats#

Objects returned from fs.stat(), fs.lstat() and fs.fstat() and their synchronous counterparts are of this type.

  • stats.isFile()
  • stats.isDirectory()
  • stats.isBlockDevice()
  • stats.isCharacterDevice()
  • stats.isSymbolicLink() (only valid with fs.lstat())
  • stats.isFIFO()
  • stats.isSocket()

For a regular file util.inspect(stats) would return a string very similar to this:

{
  dev: 2114,
  ino: 48064969,
  mode: 33188,
  nlink: 1,
  uid: 85,
  gid: 100,
  rdev: 0,
  size: 527,
  blksize: 4096,
  blocks: 8,
  atime: Mon, 10 Oct 2011 23:24:11 GMT,
  mtime: Mon, 10 Oct 2011 23:24:11 GMT,
  ctime: Mon, 10 Oct 2011 23:24:11 GMT,
  birthtime: Mon, 10 Oct 2011 23:24:11 GMT
}

Please note that atime, mtime, birthtime, and ctime are instances of the Date object, and appropriate methods should be used to compare their values. For most general uses, getTime() will return the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC, and this integer should be sufficient for any comparison; however, there are additional methods which can be used for displaying fuzzy information. More details can be found in the MDN JavaScript Reference page.
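
For instance, a sketch of comparing the time values via getTime() (the path is illustrative):

const fs = require('fs');

fs.stat('/tmp/hello', (err, stats) => {
  if (err) throw err;
  // Compare Date instances via their millisecond timestamps.
  if (stats.mtime.getTime() > stats.atime.getTime()) {
    console.log('modified after it was last accessed');
  }
});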

Stat Time Values#

The times in the stat object have the following semantics:

  • atime "Access Time" - Time when file data was last accessed. Changed by the mknod(2), utimes(2), and read(2) system calls.
  • mtime "Modified Time" - Time when file data was last modified. Changed by the mknod(2), utimes(2), and write(2) system calls.
  • ctime "Change Time" - Time when file status was last changed (inode data modification). Changed by the chmod(2), chown(2), link(2), mknod(2), rename(2), unlink(2), utimes(2), read(2), and write(2) system calls.
  • birthtime "Birth Time" - Time of file creation. Set once when the file is created. On filesystems where birthtime is not available, this field may instead hold either the ctime or 1970-01-01T00:00Z (i.e., Unix epoch timestamp 0). Note that this value may be greater than atime or mtime in this case. On Darwin and other FreeBSD variants, it is also set if the atime is explicitly set to an earlier value than the current birthtime using the utimes(2) system call.

Prior to Node v0.12, the ctime held the birthtime on Windows systems. Note that as of v0.12, ctime is not "creation time", and on Unix systems, it never was.

Class: fs.WriteStream#

WriteStream is a Writable Stream.

Event: 'open'#

  • fd <Integer> Integer file descriptor used by the WriteStream.

Emitted when the WriteStream's file is opened.

writeStream.bytesWritten#

The number of bytes written so far. Does not include data that is still queued for writing.

writeStream.path#

The path to the file the stream is writing to as specified in the first argument to fs.createWriteStream(). If path is passed as a string, then writeStream.path will be a string. If path is passed as a Buffer, then writeStream.path will be a Buffer.

fs.access(path[, mode], callback)#

Tests a user's permissions for the file specified by path. mode is an optional integer that specifies the accessibility checks to be performed. The following constants define the possible values of mode. It is possible to create a mask consisting of the bitwise OR of two or more values.

  • fs.F_OK - File is visible to the calling process. This is useful for determining if a file exists, but says nothing about rwx permissions. Default if no mode is specified.
  • fs.R_OK - File can be read by the calling process.
  • fs.W_OK - File can be written by the calling process.
  • fs.X_OK - File can be executed by the calling process. This has no effect on Windows (will behave like fs.F_OK).

The final argument, callback, is a callback function that is invoked with a possible error argument. If any of the accessibility checks fail, the error argument will be populated. The following example checks if the file /etc/passwd can be read and written by the current process.

fs.access('/etc/passwd', fs.R_OK | fs.W_OK, (err) => {
  console.log(err ? 'no access!' : 'can read/write');
});

fs.accessSync(path[, mode])#

Synchronous version of fs.access(). This throws if any accessibility checks fail, and does nothing otherwise.
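
For example, a sketch of checking read access synchronously (the path is illustrative):

const fs = require('fs');

try {
  fs.accessSync('/etc/passwd', fs.R_OK);
  console.log('can read');
} catch (err) {
  console.log('no access!');
}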

fs.appendFile(file, data[, options], callback)#

Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a buffer.

Example:

fs.appendFile('message.txt', 'data to append', (err) => {
  if (err) throw err;
  console.log('The "data to append" was appended to file!');
});

If options is a string, then it specifies the encoding. Example:

fs.appendFile('message.txt', 'data to append', 'utf8', callback);

Any specified file descriptor has to have been opened for appending.

Note: Specified file descriptors will not be closed automatically.

fs.appendFileSync(file, data[, options])#

The synchronous version of fs.appendFile(). Returns undefined.

fs.chmod(path, mode, callback)#

Asynchronous chmod(2). No arguments other than a possible exception are given to the completion callback.

fs.chmodSync(path, mode)#

Synchronous chmod(2). Returns undefined.

fs.chown(path, uid, gid, callback)#

Asynchronous chown(2). No arguments other than a possible exception are given to the completion callback.

fs.chownSync(path, uid, gid)#

Synchronous chown(2). Returns undefined.

fs.close(fd, callback)#

Asynchronous close(2). No arguments other than a possible exception are given to the completion callback.

fs.closeSync(fd)#

  • fd <Integer>

Synchronous close(2). Returns undefined.

fs.createReadStream(path[, options])#

Returns a new ReadStream object. (See Readable Stream).

Be aware that, unlike the default value set for highWaterMark on a readable stream (16 KB), the stream returned by this method has a default value of 64 KB for the same parameter.

options is an object or string with the following defaults:

{
  flags: 'r',
  encoding: null,
  fd: null,
  mode: 0o666,
  autoClose: true
}

options can include start and end values to read a range of bytes from the file instead of the entire file. Both start and end are inclusive and start at 0. The encoding can be any one of those accepted by Buffer.

If fd is specified, ReadStream will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. Note that fd should be blocking; non-blocking fds should be passed to net.Socket.

If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is your responsibility to close it and make sure there's no file descriptor leak. If autoClose is set to true (default behavior), on error or end the file descriptor will be closed automatically.

mode sets the file mode (permission and sticky bits), but only if the file was created.

An example to read the last 10 bytes of a file which is 100 bytes long:

fs.createReadStream('sample.txt', {start: 90, end: 99});

If options is a string, then it specifies the encoding.

fs.createWriteStream(path[, options])#

Returns a new WriteStream object. (See Writable Stream).

options is an object or string with the following defaults:

{
  flags: 'w',
  defaultEncoding: 'utf8',
  fd: null,
  mode: 0o666,
  autoClose: true
}

options may also include a start option to allow writing data at some position past the beginning of the file. Modifying a file rather than replacing it may require a flags mode of r+ rather than the default mode w. The defaultEncoding can be any one of those accepted by Buffer.
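
For example, a sketch of overwriting bytes starting at position 10 of an existing file, rather than replacing it (path, offset, and contents are illustrative):

const fs = require('fs');

// flags: 'r+' keeps the existing contents; start positions the writes.
const stream = fs.createWriteStream('data.bin', { flags: 'r+', start: 10 });
stream.end(Buffer.from('patched'));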

If autoClose is set to true (default behavior) on error or end the file descriptor will be closed automatically. If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is your responsibility to close it and make sure there's no file descriptor leak.

Like ReadStream, if fd is specified, WriteStream will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. Note that fd should be blocking; non-blocking fds should be passed to net.Socket.

If options is a string, then it specifies the encoding.

fs.exists(path, callback)#

Stability: 0 - Deprecated: Use fs.stat() or fs.access() instead.

Test whether or not the given path exists by checking with the file system. Then call the callback argument with either true or false. Example:

fs.exists('/etc/passwd', (exists) => {
  console.log(exists ? 'it\'s there' : 'no passwd!');
});

fs.exists() should not be used to check if a file exists before calling fs.open(). Doing so introduces a race condition since other processes may change the file's state between the two calls. Instead, user code should call fs.open() directly and handle the error raised if the file is non-existent.
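
For example, a sketch of opening the file directly and handling a missing-file error (path and flag are illustrative):

const fs = require('fs');

fs.open('/etc/passwd', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.log('file does not exist');
      return;
    }
    throw err;
  }
  // Use fd, then close it when finished.
  fs.close(fd, (err) => {
    if (err) throw err;
  });
});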

fs.existsSync(path)#

Stability: 0 - Deprecated: Use fs.statSync() or fs.accessSync() instead.

Synchronous version of fs.exists(). Returns true if the file exists, false otherwise.

fs.fchmod(fd, mode, callback)#

Asynchronous fchmod(2). No arguments other than a possible exception are given to the completion callback.

fs.fchmodSync(fd, mode)#

  • fd <Integer>
  • mode <Integer>

Synchronous fchmod(2). Returns undefined.

fs.fchown(fd, uid, gid, callback)#

  • fd <Integer>
  • uid <Integer>
  • gid <Integer>
  • callback <Function>

Asynchronous fchown(2). No arguments other than a possible exception are given to the completion callback.

fs.fchownSync(fd, uid, gid)#

  • fd <Integer>
  • uid <Integer>
  • gid <Integer>

Synchronous fchown(2). Returns undefined.

fs.fdatasync(fd, callback)#

Asynchronous fdatasync(2). No arguments other than a possible exception are given to the completion callback.

fs.fdatasyncSync(fd)#

  • fd <Integer>

Synchronous fdatasync(2). Returns undefined.

fs.fstat(fd, callback)#

Asynchronous fstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. fstat() is identical to stat(), except that the file to be stat-ed is specified by the file descriptor fd.

fs.fstatSync(fd)#

  • fd <Integer>

Synchronous fstat(2). Returns an instance of fs.Stats.

fs.fsync(fd, callback)#

Asynchronous fsync(2). No arguments other than a possible exception are given to the completion callback.

fs.fsyncSync(fd)#

  • fd <Integer>

Synchronous fsync(2). Returns undefined.

fs.ftruncate(fd, len, callback)#

Asynchronous ftruncate(2). No arguments other than a possible exception are given to the completion callback.

fs.ftruncateSync(fd, len)#

  • fd <Integer>
  • len <Integer>

Synchronous ftruncate(2). Returns undefined.

fs.futimes(fd, atime, mtime, callback)#

  • fd <Integer>
  • atime <Integer>
  • mtime <Integer>
  • callback <Function>

Change the file timestamps of a file referenced by the supplied file descriptor.

fs.futimesSync(fd, atime, mtime)#

  • fd <Integer>
  • atime <Integer>
  • mtime <Integer>

Synchronous version of fs.futimes(). Returns undefined.

fs.lchmod(path, mode, callback)#

Asynchronous lchmod(2). No arguments other than a possible exception are given to the completion callback.

Only available on Mac OS X.

fs.lchmodSync(path, mode)#

Synchronous lchmod(2). Returns undefined.

fs.lchown(path, uid, gid, callback)#

Asynchronous lchown(2). No arguments other than a possible exception are given to the completion callback.

fs.lchownSync(path, uid, gid)#

Synchronous lchown(2). Returns undefined.

fs.link(srcpath, dstpath, callback)#

Asynchronous link(2). No arguments other than a possible exception are given to the completion callback.

fs.linkSync(srcpath, dstpath)#

Synchronous link(2). Returns undefined.

fs.lstat(path, callback)#

Asynchronous lstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. lstat() is identical to stat(), except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.

fs.lstatSync(path)#

Synchronous lstat(2). Returns an instance of fs.Stats.

fs.mkdir(path[, mode], callback)#

Asynchronous mkdir(2). No arguments other than a possible exception are given to the completion callback. mode defaults to 0o777.

fs.mkdirSync(path[, mode])#

Synchronous mkdir(2). Returns undefined.

fs.mkdtemp(prefix, callback)#

Creates a unique temporary directory.

Appends six random characters to a required prefix to create a unique temporary directory.

The created folder path is passed as a string to the callback's second parameter.

Example:

fs.mkdtemp('/tmp/foo-', (err, folder) => {
  console.log(folder);
    // Prints: /tmp/foo-itXde2
});

fs.mkdtempSync(template)#

The synchronous version of fs.mkdtemp(). Returns the created folder path.

fs.open(path, flags[, mode], callback)#

Asynchronous file open. See open(2). flags can be:

  • 'r' - Open file for reading. An exception occurs if the file does not exist.

  • 'r+' - Open file for reading and writing. An exception occurs if the file does not exist.

  • 'rs' - Open file for reading in synchronous mode. Instructs the operating system to bypass the local file system cache.

    This is primarily useful for opening files on NFS mounts as it allows you to skip the potentially stale local cache. It has a very real impact on I/O performance so don't use this flag unless you need it.

    Note that this doesn't turn fs.open() into a synchronous blocking call. If that's what you want, then you should be using fs.openSync().

  • 'rs+' - Open file for reading and writing, telling the OS to open it synchronously. See notes for 'rs' about using this with caution.

  • 'w' - Open file for writing. The file is created (if it does not exist) or truncated (if it exists).

  • 'wx' - Like 'w' but fails if path exists.

  • 'w+' - Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).

  • 'wx+' - Like 'w+' but fails if path exists.

  • 'a' - Open file for appending. The file is created if it does not exist.

  • 'ax' - Like 'a' but fails if path exists.

  • 'a+' - Open file for reading and appending. The file is created if it does not exist.

  • 'ax+' - Like 'a+' but fails if path exists.

mode sets the file mode (permission and sticky bits), but only if the file was created. It defaults to 0666, readable and writable.

The callback gets two arguments (err, fd).

The exclusive flag 'x' (O_EXCL flag in open(2)) ensures that path is newly created. On POSIX systems, path is considered to exist even if it is a symlink to a non-existent file. The exclusive flag may or may not work with network file systems.

flags can also be a number as documented by open(2); commonly used constants are available from require('constants'). On Windows, flags are translated to their equivalent ones where applicable, e.g. O_WRONLY to FILE_GENERIC_WRITE, or O_EXCL|O_CREAT to CREATE_NEW, as accepted by CreateFileW.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
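
For example, a sketch of creating a file exclusively with the 'wx' flag (the path is illustrative):

const fs = require('fs');

fs.open('/tmp/lockfile', 'wx', (err, fd) => {
  if (err) {
    // EEXIST indicates that the path already existed.
    if (err.code === 'EEXIST') {
      console.log('file already exists');
      return;
    }
    throw err;
  }
  fs.close(fd, (err) => {
    if (err) throw err;
  });
});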

fs.openSync(path, flags[, mode])#

Synchronous version of fs.open(). Returns an integer representing the file descriptor.

fs.read(fd, buffer, offset, length, position, callback)#

Read data from the file specified by fd.

buffer is the buffer that the data will be written to.

offset is the offset in the buffer to start writing at.

length is an integer specifying the number of bytes to read.

position is an integer specifying where to begin reading from in the file. If position is null, data will be read from the current file position.

The callback is given the three arguments, (err, bytesRead, buffer).
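
For example, a sketch of reading the first 1024 bytes of a file into a pre-allocated buffer (path and sizes are illustrative):

const fs = require('fs');

fs.open('/tmp/hello', 'r', (err, fd) => {
  if (err) throw err;
  const buffer = Buffer.alloc(1024);
  // Read up to 1024 bytes from the start of the file into buffer at offset 0.
  fs.read(fd, buffer, 0, buffer.length, 0, (err, bytesRead, buf) => {
    if (err) throw err;
    console.log(`read ${bytesRead} bytes`);
    fs.close(fd, (err) => {
      if (err) throw err;
    });
  });
});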

fs.readdir(path[, options], callback)#

Asynchronous readdir(3). Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..'.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames passed to the callback. If the encoding is set to 'buffer', the filenames returned will be passed as Buffer objects.
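
For example, a sketch of listing a directory (the path is illustrative):

const fs = require('fs');

fs.readdir('/tmp', (err, files) => {
  if (err) throw err;
  // files is an array of names, e.g. ['hello', 'world']
  files.forEach((name) => console.log(name));
});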

fs.readdirSync(path[, options])#

Synchronous readdir(3). Returns an array of filenames excluding '.' and '..'.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the returned filenames. If the encoding is set to 'buffer', the filenames returned will be passed as Buffer objects.

fs.readFile(file[, options], callback)#

Asynchronously reads the entire contents of a file. Example:

fs.readFile('/etc/passwd', (err, data) => {
  if (err) throw err;
  console.log(data);
});

The callback is passed two arguments (err, data), where data is the contents of the file.

If no encoding is specified, then the raw buffer is returned.

If options is a string, then it specifies the encoding. Example:

fs.readFile('/etc/passwd', 'utf8', callback);

Any specified file descriptor has to support reading.

Note: Specified file descriptors will not be closed automatically.

fs.readFileSync(file[, options])#

Synchronous version of fs.readFile. Returns the contents of the file.

If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.

fs.readlink(path[, options], callback)#

Asynchronous readlink(2). The callback gets two arguments (err, linkString).

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path passed to the callback. If the encoding is set to 'buffer', the link path returned will be passed as a Buffer object.

fs.readlinkSync(path[, options])#

Synchronous readlink(2). Returns the symbolic link's string value.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the returned link path. If the encoding is set to 'buffer', the link path returned will be passed as a Buffer object.

fs.realpath(path[, options], callback)#

Asynchronous realpath(2). The callback gets two arguments (err, resolvedPath). May use process.cwd to resolve relative paths.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback. If the encoding is set to 'buffer', the path returned will be passed as a Buffer object.

fs.readSync(fd, buffer, offset, length, position)#

  • fd <Integer>
  • buffer <String> | <Buffer>
  • offset <Integer>
  • length <Integer>
  • position <Integer>

Synchronous version of fs.read(). Returns the number of bytes read.

fs.realpathSync(path[, options])#

Synchronous realpath(2). Returns the resolved path.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the returned path. If the encoding is set to 'buffer', the path returned will be passed as a Buffer object.

fs.rename(oldPath, newPath, callback)#

Asynchronous rename(2). No arguments other than a possible exception are given to the completion callback.

fs.renameSync(oldPath, newPath)#

Synchronous rename(2). Returns undefined.

fs.rmdir(path, callback)#

Asynchronous rmdir(2). No arguments other than a possible exception are given to the completion callback.

fs.rmdirSync(path)#

Synchronous rmdir(2). Returns undefined.

fs.stat(path, callback)#

Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. See the fs.Stats section for more information.

fs.statSync(path)#

Synchronous stat(2). Returns an instance of fs.Stats.

fs.symlink(target, path[, type], callback)#

Asynchronous symlink(2). No arguments other than a possible exception are given to the completion callback. The type argument can be set to 'dir', 'file', or 'junction' (default is 'file') and is only available on Windows (ignored on other platforms). Note that Windows junction points require the destination path to be absolute. When using 'junction', the target argument will automatically be normalized to absolute path.

Here is an example:

fs.symlink('./foo', './new-port');

It creates a symbolic link named "new-port" that points to "foo".

fs.symlinkSync(target, path[, type])#

Synchronous symlink(2). Returns undefined.

fs.truncate(path, len, callback)#

Asynchronous truncate(2). No arguments other than a possible exception are given to the completion callback. A file descriptor can also be passed as the first argument. In this case, fs.ftruncate() is called.

fs.truncateSync(path, len)#

Synchronous truncate(2). Returns undefined.

fs.unlink(path, callback)#

Asynchronous unlink(2). No arguments other than a possible exception are given to the completion callback.

fs.unlinkSync(path)#

Synchronous unlink(2). Returns undefined.

fs.unwatchFile(filename[, listener])#

Stop watching for changes on filename. If listener is specified, only that particular listener is removed. Otherwise, all listeners are removed and you have effectively stopped watching filename.

Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.

Note: fs.watch() is more efficient than fs.watchFile() and fs.unwatchFile(). fs.watch() should be used instead of fs.watchFile() and fs.unwatchFile() when possible.

fs.utimes(path, atime, mtime, callback)#

Change file timestamps of the file referenced by the supplied path.

Note: the atime and mtime arguments of this function and its related functions follow these rules:

  • If the value is a numeric string like '123456789', it is converted to the corresponding number.
  • If the value is NaN or Infinity, it is converted to Date.now().
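
For example, a sketch that sets both timestamps to the current time, assuming numeric values are interpreted as seconds since the epoch (the path is illustrative):

const fs = require('fs');

const now = Date.now() / 1000;  // seconds since the epoch
fs.utimes('/tmp/hello', now, now, (err) => {
  if (err) throw err;
  console.log('timestamps updated');
});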

fs.utimesSync(path, atime, mtime)#

Synchronous version of fs.utimes(). Returns undefined.

fs.watch(filename[, options][, listener])#

  • filename <String> | <Buffer>
  • options <String> | <Object>
    • persistent <Boolean> Indicates whether the process should continue to run as long as files are being watched. default = true
    • recursive <Boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported platforms (see Caveats). default = false
    • encoding <String> Specifies the character encoding to be used for the filename passed to the listener. default = 'utf8'
  • listener <Function>

Watch for changes on filename, where filename is either a file or a directory. The returned object is a fs.FSWatcher.

The second argument is optional. If options is provided as a string, it specifies the encoding. Otherwise options should be passed as an object.

The listener callback gets two arguments (event, filename). event is either 'rename' or 'change', and filename is the name of the file which triggered the event.

Caveats#

The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.

The recursive option is only supported on OS X and Windows.

Availability#

This feature depends on the underlying operating system providing a way to be notified of filesystem changes.

  • On Linux systems, this uses inotify.
  • On BSD systems, this uses kqueue.
  • On OS X, this uses kqueue for files and 'FSEvents' for directories.
  • On SunOS systems (including Solaris and SmartOS), this uses event ports.
  • On Windows systems, this feature depends on ReadDirectoryChangesW.

If the underlying functionality is not available for some reason, then fs.watch will not be able to function. For example, watching files or directories on network file systems (NFS, SMB, etc.) often doesn't work reliably or at all.

You can still use fs.watchFile, which uses stat polling, but it is slower and less reliable.

Inodes#

On Linux and OS X systems, fs.watch() resolves the path to an inode and watches the inode. If the watched path is deleted and recreated, it is assigned a new inode. The watch will emit an event for the delete but will continue watching the original inode. Events for the new inode will not be emitted. This is expected behavior.

Filename Argument#

Providing the filename argument in the callback is only supported on Linux and Windows. Even on supported platforms, filename is not always guaranteed to be provided. Therefore, don't assume that the filename argument is always provided in the callback, and have some fallback logic if it is null.

fs.watch('somedir', (event, filename) => {
  console.log(`event is: ${event}`);
  if (filename) {
    console.log(`filename provided: ${filename}`);
  } else {
    console.log('filename not provided');
  }
});

fs.watchFile(filename[, options], listener)#

Watch for changes on filename. The callback listener will be called each time the file is accessed.

The options argument may be omitted. If provided, it should be an object. The options object may contain a boolean named persistent that indicates whether the process should continue to run as long as files are being watched. The options object may specify an interval property indicating how often the target should be polled in milliseconds. The default is { persistent: true, interval: 5007 }.

The listener gets two arguments: the current stat object and the previous stat object:

fs.watchFile('message.text', (curr, prev) => {
  console.log(`the current mtime is: ${curr.mtime}`);
  console.log(`the previous mtime was: ${prev.mtime}`);
});

These stat objects are instances of fs.Stats.

If you want to be notified when the file was modified, not just accessed, you need to compare curr.mtime and prev.mtime.

Note: when an fs.watchFile operation results in an ENOENT error, it will invoke the listener once, with all the fields zeroed (or, for dates, the Unix Epoch). In Windows, blksize and blocks fields will be undefined, instead of zero. If the file is created later on, the listener will be called again, with the latest stat objects. This is a change in functionality since v0.10.

Note: fs.watch() is more efficient than fs.watchFile and fs.unwatchFile. fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.

fs.write(fd, buffer, offset, length[, position], callback)#

Write buffer to the file specified by fd.

offset and length determine the part of the buffer to be written.

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number', the data will be written at the current position. See pwrite(2).

The callback will be given three arguments (err, written, buffer) where written specifies how many bytes were written from buffer.

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
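
For example, a sketch of writing a buffer at the beginning of a file (path and contents are illustrative):

const fs = require('fs');

fs.open('/tmp/hello', 'w', (err, fd) => {
  if (err) throw err;
  const buffer = Buffer.from('some data');
  // Write the whole buffer starting at position 0 of the file.
  fs.write(fd, buffer, 0, buffer.length, 0, (err, written, buf) => {
    if (err) throw err;
    console.log(`wrote ${written} bytes`);
    fs.close(fd, (err) => {
      if (err) throw err;
    });
  });
});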

fs.write(fd, data[, position[, encoding]], callback)#

Write data to the file specified by fd. If data is not a Buffer instance then the value will be coerced to a string.

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' the data will be written at the current position. See pwrite(2).

encoding is the expected string encoding.

The callback will receive the arguments (err, written, string) where written specifies how many bytes the passed string required to be written. Note that bytes written is not the same as string characters. See Buffer.byteLength.

Unlike when writing buffer, the entire string must be written. No substring may be specified. This is because the byte offset of the resulting data may not be the same as the string offset.

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

fs.writeFile(file, data[, options], callback)#

Asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.

The encoding option is ignored if data is a buffer. It defaults to 'utf8'.

Example:

fs.writeFile('message.txt', 'Hello Node.js', (err) => {
  if (err) throw err;
  console.log('It\'s saved!');
});

If options is a string, then it specifies the encoding. Example:

fs.writeFile('message.txt', 'Hello Node.js', 'utf8', callback);

Any specified file descriptor has to support writing.

Note that it is unsafe to use fs.writeFile multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

Note: Specified file descriptors will not be closed automatically.

fs.writeFileSync(file, data[, options])#

The synchronous version of fs.writeFile(). Returns undefined.

fs.writeSync(fd, buffer, offset, length[, position])#

  • fd <Integer>
  • buffer <String> | <Buffer>
  • offset <Integer>
  • length <Integer>
  • position <Integer>

fs.writeSync(fd, data[, position[, encoding]])#

Synchronous versions of fs.write(). Returns the number of bytes written.

Global Objects#

These objects are available in all modules. Some of these objects aren't actually in the global scope but in the module scope - this will be noted.

Class: Buffer#

Used to handle binary data. See the buffer section.

__dirname#

The name of the directory that the currently executing script resides in.

Example: running node example.js from /Users/mjr

console.log(__dirname);
// /Users/mjr

__dirname isn't actually a global but rather local to each module.

For instance, given two modules: a and b, where b is a dependency of a and there is a directory structure of:

  • /Users/mjr/app/a.js
  • /Users/mjr/app/node_modules/b/b.js

References to __dirname within b.js will return /Users/mjr/app/node_modules/b, while references to __dirname within a.js will return /Users/mjr/app.

__filename#

The filename of the code being executed. This is the resolved absolute path of this code file. For a main program this is not necessarily the same filename used in the command line. The value inside a module is the path to that module file.

Example: running node example.js from /Users/mjr

console.log(__filename);
// /Users/mjr/example.js

__filename isn't actually a global but rather local to each module.

clearImmediate(immediateObject)#

clearImmediate is described in the timers section.

clearInterval(intervalObject)#

clearInterval is described in the timers section.

clearTimeout(timeoutObject)#

clearTimeout is described in the timers section.

console#

Used to print to stdout and stderr. See the console section.

exports#

A reference to the module.exports that is shorter to type. See module system documentation for details on when to use exports and when to use module.exports.

exports isn't actually a global but rather local to each module.

See the module system documentation for more information.

global#

In browsers, the top-level scope is the global scope. That means that in browsers if you're in the global scope var something will define a global variable. In Node.js this is different. The top-level scope is not the global scope; var something inside a Node.js module will be local to that module.

module#

A reference to the current module. In particular module.exports is used for defining what a module exports and makes available through require().

module isn't actually a global but rather local to each module.

See the module system documentation for more information.

process#

The process object. See the process object section.

require()#

To require modules. See the Modules section. require isn't actually a global but rather local to each module.

require.cache#

Modules are cached in this object when they are required. By deleting a key value from this object, the next require will reload the module. Note that this does not apply to native addons, for which reloading will result in an Error.
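
For example, a sketch of forcing a module to be reloaded on the next require (the module name is illustrative):

// Delete the cached entry so the next require() re-evaluates the module.
delete require.cache[require.resolve('./config')];
const freshConfig = require('./config');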

require.extensions#

Stability: 0 - Deprecated

Instruct require on how to handle certain file extensions.

Process files with the extension .sjs as .js:

require.extensions['.sjs'] = require.extensions['.js'];

Deprecated: In the past, this list has been used to load non-JavaScript modules into Node.js by compiling them on-demand. However, in practice, there are much better ways to do this, such as loading modules via some other Node.js program, or compiling them to JavaScript ahead of time.

Since the Module system is locked, this feature will probably never go away. However, it may have subtle bugs and complexities that are best left untouched.

require.resolve()#

Use the internal require() machinery to look up the location of a module, but rather than loading the module, just return the resolved filename.
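
For example (the module name is illustrative):

// Returns the full path that require('./some-module') would load,
// without executing the module.
const resolved = require.resolve('./some-module');
console.log(resolved);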

setImmediate(callback[, arg][, ...])#

setImmediate is described in the timers section.

setInterval(callback, delay[, arg][, ...])#

setInterval is described in the timers section.

setTimeout(callback, delay[, arg][, ...])#

setTimeout is described in the timers section.

HTTP#

Stability: 2 - Stable

To use the HTTP server and client one must require('http').

The HTTP interfaces in Node.js are designed to support many features of the protocol which have been traditionally difficult to use. In particular, large, possibly chunk-encoded, messages. The interface is careful to never buffer entire requests or responses--the user is able to stream data.

HTTP message headers are represented by an object like this:

{ 'content-length': '123',
  'content-type': 'text/plain',
  'connection': 'keep-alive',
  'host': 'mysite.com',
  'accept': '*/*' }

Keys are lowercased. Values are not modified.

In order to support the full spectrum of possible HTTP applications, Node.js's HTTP API is very low-level. It deals with stream handling and message parsing only. It parses a message into headers and body but it does not parse the actual headers or the body.

See message.headers for details on how duplicate headers are handled.

The raw headers as they were received are retained in the rawHeaders property, which is an array of [key, value, key2, value2, ...]. For example, the previous message header object might have a rawHeaders list like the following:

[ 'ConTent-Length', '123456',
  'content-LENGTH', '123',
  'content-type', 'text/plain',
  'CONNECTION', 'keep-alive',
  'Host', 'mysite.com',
  'accepT', '*/*' ]

Class: http.Agent#

The HTTP Agent is used for pooling sockets used in HTTP client requests.

The HTTP Agent also defaults client requests to using Connection:keep-alive. If no pending HTTP requests are waiting on a socket to become free the socket is closed. This means that Node.js's pool has the benefit of keep-alive when under load but still does not require developers to manually close the HTTP clients using KeepAlive.

If you opt into using HTTP KeepAlive, you can create an Agent object with that flag set to true. (See the constructor options.) Then, the Agent will keep unused sockets in a pool for later use. They will be explicitly marked so as to not keep the Node.js process running. However, it is still a good idea to explicitly destroy() KeepAlive agents when they are no longer in use, so that the Sockets will be shut down.

Sockets are removed from the agent's pool when the socket emits either a 'close' event or a special 'agentRemove' event. This means that if you intend to keep one HTTP request open for a long time and don't want it to stay in the pool you can do something along the lines of:

http.get(options, (res) => {
  // Do stuff
}).on('socket', (socket) => {
  socket.emit('agentRemove');
});

Alternatively, you could just opt out of pooling entirely using agent:false:

http.get({
  hostname: 'localhost',
  port: 80,
  path: '/',
  agent: false  // create a new agent just for this one request
}, (res) => {
  // Do stuff with response
})

new Agent(options)#

  • options <Object> Set of configurable options to set on the agent. Can have the following fields:
    • keepAlive <Boolean> Keep sockets around in a pool to be used by other requests in the future. Default = false
    • keepAliveMsecs <Integer> When using HTTP KeepAlive, how often to send TCP KeepAlive packets over sockets being kept alive. Default = 1000. Only relevant if keepAlive is set to true.
    • maxSockets <Number> Maximum number of sockets to allow per host. Default = Infinity.
    • maxFreeSockets <Number> Maximum number of sockets to leave open in a free state. Only relevant if keepAlive is set to true. Default = 256.

The default http.globalAgent that is used by http.request() has all of these values set to their respective defaults.

To configure any of them, you must create your own http.Agent object.

const http = require('http');
var keepAliveAgent = new http.Agent({ keepAlive: true });
options.agent = keepAliveAgent;  // 'options' and 'onResponseCallback' are defined elsewhere
http.request(options, onResponseCallback);

agent.createConnection(options[, callback])#

Produces a socket/stream to be used for HTTP requests.

By default, this function is the same as net.createConnection(). However, custom Agents may override this method in case greater flexibility is desired.

A socket/stream can be supplied in one of two ways: by returning the socket/stream from this function, or by passing the socket/stream to callback.

callback has a signature of (err, stream).

agent.destroy()#

Destroy any sockets that are currently in use by the agent.

It is usually not necessary to do this. However, if you are using an agent with KeepAlive enabled, then it is best to explicitly shut down the agent when you know that it will no longer be used. Otherwise, sockets may hang open for quite a long time before the server terminates them.

agent.freeSockets#

An object which contains arrays of sockets currently awaiting use by the Agent when HTTP KeepAlive is used. Do not modify.

agent.getName(options)#

Get a unique name for a set of request options, to determine whether a connection can be reused. In the http agent, this returns host:port:localAddress. In the https agent, the name includes the CA, cert, ciphers, and other HTTPS/TLS-specific options that determine socket reusability.

Options:

  • host: A domain name or IP address of the server to issue the request to.
  • port: Port of remote server.
  • localAddress: Local interface to bind for network connections when issuing the request.
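
For example, a sketch using the HTTP agent (the exact string format is an implementation detail):

const http = require('http');

const name = http.globalAgent.getName({ host: 'localhost', port: 80 });
console.log(name);
  // Prints a key along the lines of: localhost:80: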

agent.maxFreeSockets#

By default set to 256. For Agents supporting HTTP KeepAlive, this sets the maximum number of sockets that will be left open in the free state.

agent.maxSockets#

By default set to Infinity. Determines how many concurrent sockets the agent can have open per origin. Origin is either a 'host:port' or 'host:port:localAddress' combination.

agent.requests#

An object which contains queues of requests that have not yet been assigned to sockets. Do not modify.

agent.sockets#

An object which contains arrays of sockets currently in use by the Agent. Do not modify.

Class: http.ClientRequest#

This object is created internally and returned from http.request(). It represents an in-progress request whose header has already been queued. The header is still mutable using the setHeader(name, value), getHeader(name), removeHeader(name) API. The actual header will be sent along with the first data chunk or when closing the connection.

To get the response, add a listener for 'response' to the request object. 'response' will be emitted from the request object when the response headers have been received. The 'response' event is executed with one argument which is an instance of http.IncomingMessage.

During the 'response' event, one can add listeners to the response object; particularly to listen for the 'data' event.

If no 'response' handler is added, then the response will be entirely discarded. However, if you add a 'response' event handler, then you must consume the data from the response object, either by calling response.read() whenever there is a 'readable' event, or by adding a 'data' handler, or by calling the .resume() method. Until the data is consumed, the 'end' event will not fire. Also, until the data is read it will consume memory that can eventually lead to a 'process out of memory' error.

Note: Node.js does not check whether Content-Length and the length of the body which has been transmitted are equal or not.

The request implements the Writable Stream interface. This is an EventEmitter with the following events:

Event: 'abort'#

function () { }

Emitted when the request has been aborted by the client. This event is only emitted on the first call to abort().

Event: 'checkExpectation'#

function (request, response) { }

Emitted each time a request with an HTTP Expect header is received, where the value is not 100-continue. If this event isn't listened for, the server will automatically respond with a 417 Expectation Failed as appropriate.

Note that when this event is emitted and handled, the request event will not be emitted.

Event: 'connect'#

function (response, socket, head) { }

Emitted each time a server responds to a request with a CONNECT method. If this event isn't being listened for, clients receiving a CONNECT method will have their connections closed.

A client and server pair demonstrating how to listen for the 'connect' event:

const http = require('http');
const net = require('net');
const url = require('url');

// Create an HTTP tunneling proxy
var proxy = http.createServer( (req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('okay');
});
proxy.on('connect', (req, cltSocket, head) => {
  // connect to an origin server
  var srvUrl = url.parse(`http://${req.url}`);
  var srvSocket = net.connect(srvUrl.port, srvUrl.hostname, () => {
    cltSocket.write('HTTP/1.1 200 Connection Established\r\n' +
                    'Proxy-agent: Node.js-Proxy\r\n' +
                    '\r\n');
    srvSocket.write(head);
    srvSocket.pipe(cltSocket);
    cltSocket.pipe(srvSocket);
  });
});

// now that proxy is running
proxy.listen(1337, '127.0.0.1', () => {

  // make a request to a tunneling proxy
  var options = {
    port: 1337,
    hostname: '127.0.0.1',
    method: 'CONNECT',
    path: 'www.google.com:80'
  };

  var req = http.request(options);
  req.end();

  req.on('connect', (res, socket, head) => {
    console.log('got connected!');

    // make a request over an HTTP tunnel
    socket.write('GET / HTTP/1.1\r\n' +
                 'Host: www.google.com:80\r\n' +
                 'Connection: close\r\n' +
                 '\r\n');
    socket.on('data', (chunk) => {
      console.log(chunk.toString());
    });
    socket.on('end', () => {
      proxy.close();
    });
  });
});

Event: 'continue'#

function () { }

Emitted when the server sends a '100 Continue' HTTP response, usually because the request contained 'Expect: 100-continue'. This is an instruction that the client should send the request body.

Event: 'response'#

function (response) { }

Emitted when a response is received to this request. This event is emitted only once. The response argument will be an instance of http.IncomingMessage.

Event: 'socket'#

function (socket) { }

Emitted after a socket is assigned to this request.

Event: 'upgrade'#

function (response, socket, head) { }

Emitted each time a server responds to a request with an upgrade. If this event isn't being listened for, clients receiving an upgrade header will have their connections closed.

A client and server pair demonstrating how to listen for the 'upgrade' event:

const http = require('http');

// Create an HTTP server
var srv = http.createServer( (req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('okay');
});
srv.on('upgrade', (req, socket, head) => {
  socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
               'Upgrade: WebSocket\r\n' +
               'Connection: Upgrade\r\n' +
               '\r\n');

  socket.pipe(socket); // echo back
});

// now that server is running
srv.listen(1337, '127.0.0.1', () => {

  // make a request
  var options = {
    port: 1337,
    hostname: '127.0.0.1',
    headers: {
      'Connection': 'Upgrade',
      'Upgrade': 'websocket'
    }
  };

  var req = http.request(options);
  req.end();

  req.on('upgrade', (res, socket, upgradeHead) => {
    console.log('got upgraded!');
    socket.end();
    process.exit(0);
  });
});

request.abort()#

Marks the request as aborting. Calling this will cause remaining data in the response to be dropped and the socket to be destroyed.

request.end([data][, encoding][, callback])#

Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream. If the request is chunked, this will send the terminating '0\r\n\r\n'.

If data is specified, it is equivalent to calling request.write(data, encoding) followed by request.end(callback).

If callback is specified, it will be called when the request stream is finished.

request.flushHeaders()#

Flush the request headers.

For efficiency reasons, Node.js normally buffers the request headers until you call request.end() or write the first chunk of request data. It then tries hard to pack the request headers and data into a single TCP packet.

That's usually what you want (it saves a TCP round-trip) but not when the first data isn't sent until possibly much later. request.flushHeaders() lets you bypass the optimization and kickstart the request.
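
For example, a sketch of sending the headers immediately while the body is written much later (host, port, and timing are illustrative):

const http = require('http');

const req = http.request({ host: 'localhost', port: 80, method: 'POST', path: '/' });
// Send the headers now instead of waiting for the first write.
req.flushHeaders();

setTimeout(() => {
  req.end('body arrives much later');
}, 5000);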

request.setNoDelay([noDelay])#

Once a socket is assigned to this request and is connected, socket.setNoDelay() will be called.

request.setSocketKeepAlive([enable][, initialDelay])#

Once a socket is assigned to this request and is connected, socket.setKeepAlive() will be called.

request.setTimeout(timeout[, callback])#

Once a socket is assigned to this request and is connected, socket.setTimeout() will be called.

  • timeout <Number> Milliseconds before a request is considered to be timed out.
  • callback <Function> Optional function to be called when a timeout occurs. Same as binding to the timeout event.

request.write(chunk[, encoding][, callback])#

Sends a chunk of the body. By calling this method many times, the user can stream a request body to a server--in that case it is suggested to use the ['Transfer-Encoding', 'chunked'] header line when creating the request.

The chunk argument should be a Buffer or a string.

The encoding argument is optional and only applies when chunk is a string. Defaults to 'utf8'.

The callback argument is optional and will be called when this chunk of data is flushed.

Returns request.

Class: http.Server#

This class inherits from net.Server and has the following additional events:

Event: 'checkContinue'#

function (request, response) { }

Emitted each time a request with an HTTP Expect: 100-continue header is received. If this event isn't listened for, the server will automatically respond with a 100 Continue as appropriate.

Handling this event involves calling response.writeContinue() if the client should continue to send the request body, or generating an appropriate HTTP response (e.g., 400 Bad Request) if the client should not continue to send the request body.

Note that when this event is emitted and handled, the 'request' event will not be emitted.
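
For example, a sketch of a handler that accepts the body only for a particular path (the path check is illustrative):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});
server.on('checkContinue', (req, res) => {
  if (req.url === '/upload') {
    // Tell the client to send the body; since the 'request' event will
    // not be emitted for this request, finish the response here.
    res.writeContinue();
    req.on('end', () => res.end('received'));
    req.resume();
  } else {
    // Generate an appropriate rejection instead.
    res.writeHead(400);
    res.end();
  }
});
server.listen(8000);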

Event: 'clientError'#

function (exception, socket) { }

If a client connection emits an 'error' event, it will be forwarded here. The listener of this event is responsible for closing or destroying the underlying socket. For example, one may wish to more gracefully close the socket with an HTTP '400 Bad Request' response instead of abruptly severing the connection.

Default behavior is to destroy the socket immediately on malformed request.

socket is the net.Socket object that the error originated from.

const http = require('http');

const server = http.createServer((req, res) => {
  res.end();
});
server.on('clientError', (err, socket) => {
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});
server.listen(8000);

When the 'clientError' event occurs, there is no request or response object, so any HTTP response sent, including response headers and payload, must be written directly to the socket object. Care must be taken to ensure the response is a properly formatted HTTP response message.

Event: 'close'#

function () { }

Emitted when the server closes.

Event: 'connect'#

function (request, socket, head) { }

Emitted each time a client requests an HTTP CONNECT method. If this event isn't listened for, then clients requesting a CONNECT method will have their connections closed.

  • request is the arguments for the http request, as it is in the request event.
  • socket is the network socket between the server and client.
  • head is an instance of Buffer, the first packet of the tunneling stream, this may be empty.

After this event is emitted, the request's socket will not have a 'data' event listener, meaning you will need to bind to it in order to handle data sent to the server on that socket.

Event: 'connection'#

function (socket) { }

When a new TCP stream is established. socket is an object of type net.Socket. Usually users will not want to access this event. In particular, the socket will not emit 'readable' events because of how the protocol parser attaches to the socket. The socket can also be accessed at request.connection.

Event: 'request'#

function (request, response) { }

Emitted each time there is a request. Note that there may be multiple requests per connection (in the case of keep-alive connections). request is an instance of http.IncomingMessage and response is an instance of http.ServerResponse.

Event: 'upgrade'#

function (request, socket, head) { }

Emitted each time a client requests an HTTP upgrade. If this event isn't listened for, then clients requesting an upgrade will have their connections closed.

  • request is the arguments for the http request, as it is in the request event.
  • socket is the network socket between the server and client.
  • head is an instance of Buffer, the first packet of the upgraded stream, this may be empty.

After this event is emitted, the request's socket will not have a 'data' event listener, meaning you will need to bind to it in order to handle data sent to the server on that socket.

server.close([callback])#

Stops the server from accepting new connections. See net.Server.close().

server.listen(handle[, callback])#

The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.

This will cause the server to accept connections on the specified handle, but it is presumed that the file descriptor or handle has already been bound to a port or domain socket.

Listening on a file descriptor is not supported on Windows.

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen().

Returns server.

server.listen(path[, callback])#

Start a UNIX socket server listening for connections on the given path.

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen(path).

server.listen(port[, hostname][, backlog][, callback])#

Begin accepting connections on the specified port and hostname. If the hostname is omitted, the server will accept connections on any IPv6 address (::) when IPv6 is available, or any IPv4 address (0.0.0.0) otherwise. A port value of zero will assign a random port.

To listen to a unix socket, supply a filename instead of port and hostname.

Backlog is the maximum length of the queue of pending connections. The actual length will be determined by your OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen(port).

server.listening#

A Boolean indicating whether or not the server is listening for connections.

server.maxHeadersCount#

Limits the maximum incoming headers count. Equal to 1000 by default. If set to 0, no limit will be applied.

server.setTimeout(msecs, callback)#

Sets the timeout value for sockets, and emits a 'timeout' event on the Server object, passing the socket as an argument, if a timeout occurs.

If there is a 'timeout' event listener on the Server object, then it will be called with the timed-out socket as an argument.

By default, the Server's timeout value is 2 minutes, and sockets are destroyed automatically if they time out. However, if you assign a callback to the Server's 'timeout' event, then you are responsible for handling socket timeouts.

Returns server.
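
For example, a minimal sketch (the 30-second value and the decision to destroy the timed-out socket are arbitrary choices):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// Once a 'timeout' handler is supplied, cleaning up the socket is this
// code's responsibility.
server.setTimeout(30000, (socket) => {
  socket.destroy();
});

server.listen(8080);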

server.timeout#

The number of milliseconds of inactivity before a socket is presumed to have timed out.

Note that the socket timeout logic is set up on connection, so changing this value only affects new connections to the server, not any existing connections.

Set to 0 to disable any kind of automatic timeout behavior on incoming connections.

Class: http.ServerResponse#

This object is created internally by an HTTP server--not by the user. It is passed as the second parameter to the 'request' event.

The response implements the Writable Stream interface. This is an EventEmitter with the following events:

Event: 'close'#

function () { }

Indicates that the underlying connection was terminated before response.end() was called or able to flush.

Event: 'finish'#

function () { }

Emitted when the response has been sent. More specifically, this event is emitted when the last segment of the response headers and body have been handed off to the operating system for transmission over the network. It does not imply that the client has received anything yet.

After this event, no more events will be emitted on the response object.

response.addTrailers(headers)#

This method adds HTTP trailing headers (headers sent at the end of the message) to the response.

Trailers will only be emitted if chunked encoding is used for the response; if it is not (e.g., if the request was HTTP/1.0), they will be silently discarded.

Note that HTTP requires the Trailer header to be sent if you intend to emit trailers, with a list of the header fields in its value. E.g.,

response.writeHead(200, { 'Content-Type': 'text/plain',
                          'Trailer': 'Content-MD5' });
response.write(fileData);
response.addTrailers({'Content-MD5': '7895bf4b8828b55ceaf47747b4bca667'});
response.end();

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

response.end([data][, encoding][, callback])#

This method signals to the server that all of the response headers and body have been sent; the server should consider this message complete. The method, response.end(), MUST be called on each response.

If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end(callback).

If callback is specified, it will be called when the response stream is finished.

response.finished#

Boolean value that indicates whether the response has completed. Starts as false. After response.end() executes, the value will be true.

response.getHeader(name)#

Reads out a header that's already been queued but not sent to the client. Note that the name is case insensitive. This can only be called before headers get implicitly flushed.

Example:

var contentType = response.getHeader('content-type');

response.headersSent#

Boolean (read-only). True if headers were sent, false otherwise.

response.removeHeader(name)#

Removes a header that's queued for implicit sending.

Example:

response.removeHeader('Content-Encoding');

response.sendDate#

When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.

This should only be disabled for testing; HTTP requires the Date header in responses.

response.setHeader(name, value)#

Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here if you need to send multiple headers with the same name.

Example:

response.setHeader('Content-Type', 'text/html');

or

response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

// returns content-type = text/plain
const server = http.createServer((req,res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('ok');
});

response.setTimeout(msecs, callback)#

Sets the Socket's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

If no 'timeout' listener is added to the request, the response, or the server, then sockets are destroyed when they time out. If you assign a handler on the request, the response, or the server's 'timeout' events, then it is your responsibility to handle timed out sockets.

Returns response.

response.statusCode#

When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

Example:

response.statusCode = 404;

After the response header has been sent to the client, this property indicates the status code that was sent out.

response.statusMessage#

When using implicit headers (not calling response.writeHead() explicitly), this property controls the status message that will be sent to the client when the headers get flushed. If this is left as undefined then the standard message for the status code will be used.

Example:

response.statusMessage = 'Not found';

After the response header has been sent to the client, this property indicates the status message that was sent out.

response.write(chunk[, encoding][, callback])#

If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. By default the encoding is 'utf8'. The last parameter callback will be called when this chunk of data is flushed.

Note: This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes you're going to be streaming data, and sends that separately. That is, the response is buffered up to the first chunk of the body.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again.
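
A minimal sketch of honoring that return value (the chunk size and chunk count below are arbitrary):

const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});

  const chunk = 'x'.repeat(1024); // arbitrary payload
  let remaining = 10000;          // arbitrary number of chunks

  function writeMore() {
    while (remaining > 0) {
      remaining--;
      // write() returning false means the data was queued in user memory;
      // wait for 'drain' before writing more.
      if (!res.write(chunk)) {
        res.once('drain', writeMore);
        return;
      }
    }
    res.end();
  }

  writeMore();
});

server.listen(8080);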

response.writeContinue()#

Sends an HTTP/1.1 100 Continue message to the client, indicating that the request body should be sent. See the 'checkContinue' event on Server.

response.writeHead(statusCode[, statusMessage][, headers])#

Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, is an object containing the response headers. Optionally one can give a human-readable statusMessage as the second argument.

Example:

var body = 'hello world';
response.writeHead(200, {
  'Content-Length': body.length,
  'Content-Type': 'text/plain' });

This method must only be called once on a message and it must be called before response.end() is called.

If response.write() or response.end() is called before this method, the implicit/mutable headers will be calculated and this function will be called for you.

When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

// returns content-type = text/plain
const server = http.createServer((req,res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('ok');
});

Note that Content-Length is given in bytes, not characters. The above example works because the string 'hello world' contains only single-byte characters. If the body contains multi-byte characters, Buffer.byteLength() should be used to determine the number of bytes in a given encoding. Node.js does not check whether Content-Length and the length of the body that has been transmitted are equal or not.
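
For instance, a hedged sketch (the body text is arbitrary) that computes Content-Length with Buffer.byteLength():

const body = 'héllo wörld'; // contains multi-byte UTF-8 characters
response.writeHead(200, {
  'Content-Length': Buffer.byteLength(body, 'utf8'),
  'Content-Type': 'text/plain; charset=utf-8'
});
response.end(body);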

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

Class: http.IncomingMessage#

An IncomingMessage object is created by http.Server or http.ClientRequest and passed as the first argument to the 'request' and 'response' events, respectively. It may be used to access response status, headers and data.

It implements the Readable Stream interface, as well as the following additional events, methods, and properties.

Event: 'close'#

function () { }

Indicates that the underlying connection was closed. Just like 'end', this event occurs only once per response.

message.headers#

The request/response headers object.

Key-value pairs of header names and values. Header names are lower-cased. Example:

// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*/*' }
console.log(request.headers);

Duplicates in raw headers are handled in the following ways, depending on the header name:

  • Duplicates of age, authorization, content-length, content-type, etag, expires, from, host, if-modified-since, if-unmodified-since, last-modified, location, max-forwards, proxy-authorization, referer, retry-after, or user-agent are discarded.
  • set-cookie is always an array. Duplicates are added to the array.
  • For all other headers, the values are joined together with ', '.

message.httpVersion#

In the case of a server request, the HTTP version sent by the client. In the case of a client response, the HTTP version of the connected-to server. Probably either '1.1' or '1.0'.

Also message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.

message.method#

Only valid for requests obtained from http.Server.

The request method as a string. Read only. Example: 'GET', 'DELETE'.

message.rawHeaders#

The raw request/response headers list exactly as they were received.

Note that the keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are keys, and the odd-numbered offsets are the associated values.

Header names are not lowercased, and duplicates are not merged.

// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*/*' ]
console.log(request.rawHeaders);

message.rawTrailers#

The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.

message.setTimeout(msecs, callback)#

Calls message.connection.setTimeout(msecs, callback).

Returns message.

message.statusCode#

Only valid for responses obtained from http.ClientRequest.

The 3-digit HTTP response status code. E.g. 404.

message.statusMessage#

Only valid for responses obtained from http.ClientRequest.

The HTTP response status message (reason phrase). E.g. 'OK' or 'Internal Server Error'.

message.socket#

The net.Socket object associated with the connection.

With HTTPS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.

message.trailers#

The request/response trailers object. Only populated at the 'end' event.

message.url#

Only valid for requests obtained from http.Server.

Request URL string. This contains only the URL that is present in the actual HTTP request. If the request is:

GET /status?name=ryan HTTP/1.1\r\n
Accept: text/plain\r\n
\r\n

Then request.url will be:

'/status?name=ryan'

If you would like to parse the URL into its parts, you can use require('url').parse(request.url). Example:

$ node
> require('url').parse('/status?name=ryan')
{
  href: '/status?name=ryan',
  search: '?name=ryan',
  query: 'name=ryan',
  pathname: '/status'
}

If you would like to extract the params from the query string, you can use the require('querystring').parse function, or pass true as the second argument to require('url').parse. Example:

$ node
> require('url').parse('/status?name=ryan', true)
{
  href: '/status?name=ryan',
  search: '?name=ryan',
  query: {name: 'ryan'},
  pathname: '/status'
}

http.METHODS#

A list of the HTTP methods that are supported by the parser.

http.STATUS_CODES#

A collection of all the standard HTTP response status codes, and the short description of each. For example, http.STATUS_CODES[404] === 'Not Found'.

http.createClient([port][, host])#

Stability: 0 - Deprecated: Use http.request() instead.

Constructs a new HTTP client. port and host refer to the server to be connected to.

http.createServer([requestListener])#

Returns a new instance of http.Server.

The requestListener is a function which is automatically added to the 'request' event.

http.get(options[, callback])#

Since most requests are GET requests without bodies, Node.js provides this convenience method. The only difference between this method and http.request() is that it sets the method to GET and calls req.end() automatically.

Example:

http.get('http://www.google.com/index.html', (res) => {
  console.log(`Got response: ${res.statusCode}`);
  // consume response body
  res.resume();
}).on('error', (e) => {
  console.log(`Got error: ${e.message}`);
});

http.globalAgent#

Global instance of Agent which is used as the default for all http client requests.

http.request(options[, callback])#

Node.js maintains several connections per server to make HTTP requests. This function allows one to transparently issue requests.

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

Options:

  • protocol: Protocol to use. Defaults to 'http:'.
  • host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.
  • hostname: Alias for host. To support url.parse() hostname is preferred over host.
  • family: IP address family to use when resolving host and hostname. Valid values are 4 or 6. When unspecified, both IP v4 and v6 will be used.
  • port: Port of remote server. Defaults to 80.
  • localAddress: Local interface to bind for network connections.
  • socketPath: Unix Domain Socket (use one of host:port or socketPath).
  • method: A string specifying the HTTP request method. Defaults to 'GET'.
  • path: Request path. Defaults to '/'. Should include query string if any. E.g. '/index.html?page=12'. An exception is thrown when the request path contains illegal characters. Currently, only spaces are rejected but that may change in the future.
  • headers: An object containing request headers.
  • auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
  • agent: Controls Agent behavior. When an Agent is used request will default to Connection: keep-alive. Possible values:
    • undefined (default): use http.globalAgent for this host and port.
    • Agent object: explicitly use the passed in Agent.
    • false: opts out of connection pooling with an Agent, defaults request to Connection: close.
  • createConnection: A function that produces a socket/stream to use for the request when the agent option is not used. This can be used to avoid creating a custom Agent class just to override the default createConnection function. See agent.createConnection() for more details.

The optional callback parameter will be added as a one time listener for the 'response' event.

http.request() returns an instance of the http.ClientRequest class. The ClientRequest instance is a writable stream. If one needs to upload a file with a POST request, then write to the ClientRequest object.

Example:

var postData = querystring.stringify({
  'msg' : 'Hello World!'
});

var options = {
  hostname: 'www.google.com',
  port: 80,
  path: '/upload',
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': postData.length
  }
};

var req = http.request(options, (res) => {
  console.log(`STATUS: ${res.statusCode}`);
  console.log(`HEADERS: ${JSON.stringify(res.headers)}`);
  res.setEncoding('utf8');
  res.on('data', (chunk) => {
    console.log(`BODY: ${chunk}`);
  });
  res.on('end', () => {
    console.log('No more data in response.')
  })
});

req.on('error', (e) => {
  console.log(`problem with request: ${e.message}`);
});

// write data to request body
req.write(postData);
req.end();

Note that in the example req.end() was called. With http.request() one must always call req.end() to signify that you're done with the request - even if there is no data being written to the request body.

If any error is encountered during the request (be that with DNS resolution, TCP level errors, or actual HTTP parse errors) an 'error' event is emitted on the returned request object. As with all 'error' events, if no listeners are registered the error will be thrown.

There are a few special headers that should be noted.

  • Sending a 'Connection: keep-alive' will notify Node.js that the connection to the server should be persisted until the next request.

  • Sending a 'Content-length' header will disable the default chunked encoding.

  • Sending an 'Expect' header will immediately send the request headers. Usually, when sending 'Expect: 100-continue', you should both set a timeout and listen for the 'continue' event. See RFC2616 Section 8.2.3 for more information.

  • Sending an Authorization header will override using the auth option to compute basic authentication.

HTTPS#

Stability: 2 - Stable

HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a separate module.

Class: https.Agent#

An Agent object for HTTPS similar to http.Agent. See https.request() for more information.

Class: https.Server#

This class is a subclass of tls.Server and emits events same as http.Server. See http.Server for more information.

server.setTimeout(msecs, callback)#

See http.Server#setTimeout().

server.timeout#

See http.Server#timeout.

https.createServer(options[, requestListener])#

Returns a new HTTPS web server object. The options is similar to tls.createServer(). The requestListener is a function which is automatically added to the 'request' event.

Example:

// curl -k https://localhost:8000/
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
  cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
};

https.createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);

Or

const https = require('https');
const fs = require('fs');

const options = {
  pfx: fs.readFileSync('server.pfx')
};

https.createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);

server.close([callback])#

See http.Server#close() for details.

server.listen(handle[, callback])#

server.listen(path[, callback])#

server.listen(port[, host][, backlog][, callback])#

See http.Server#listen() for details.

https.get(options, callback)#

Like http.get() but for HTTPS.

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

Example:

const https = require('https');

https.get('https://encrypted.google.com/', (res) => {
  console.log('statusCode: ', res.statusCode);
  console.log('headers: ', res.headers);

  res.on('data', (d) => {
    process.stdout.write(d);
  });

}).on('error', (e) => {
  console.error(e);
});

https.globalAgent#

Global instance of https.Agent for all HTTPS client requests.

https.request(options, callback)#

Makes a request to a secure web server.

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

All options from http.request() are valid.

Example:

const https = require('https');

var options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET'
};

var req = https.request(options, (res) => {
  console.log('statusCode: ', res.statusCode);
  console.log('headers: ', res.headers);

  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
req.end();

req.on('error', (e) => {
  console.error(e);
});

The options argument accepts the following options:

  • host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.
  • hostname: Alias for host. To support url.parse() hostname is preferred over host.
  • family: IP address family to use when resolving host and hostname. Valid values are 4 or 6. When unspecified, both IP v4 and v6 will be used.
  • port: Port of remote server. Defaults to 443.
  • localAddress: Local interface to bind for network connections.
  • socketPath: Unix Domain Socket (use one of host:port or socketPath).
  • method: A string specifying the HTTP request method. Defaults to 'GET'.
  • path: Request path. Defaults to '/'. Should include query string if any. E.g. '/index.html?page=12'. An exception is thrown when the request path contains illegal characters. Currently, only spaces are rejected but that may change in the future.
  • headers: An object containing request headers.
  • auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
  • agent: Controls Agent behavior. When an Agent is used request will default to Connection: keep-alive. Possible values:
    • undefined (default): use globalAgent for this host and port.
    • Agent object: explicitly use the passed in Agent.
    • false: opts out of connection pooling with an Agent, defaults request to Connection: close.

The following options from tls.connect() can also be specified. However, a globalAgent silently ignores these.

  • pfx: Certificate, Private key and CA certificates to use for SSL. Default null.
  • key: Private key to use for SSL. Default null.
  • passphrase: A string of passphrase for the private key or pfx. Default null.
  • cert: Public x509 certificate to use. Default null.
  • ca: A string, Buffer or array of strings or Buffers of trusted certificates in PEM format. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.
  • ciphers: A string describing the ciphers to use or exclude. Consult https://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT for details on the format.
  • rejectUnauthorized: If true, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails. Verification happens at the connection level, before the HTTP request is sent. Default true.
  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.
  • servername: Servername for SNI (Server Name Indication) TLS extension.

In order to specify these options, use a custom Agent.

Example:

var options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET',
  key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
  cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
};
options.agent = new https.Agent(options);

var req = https.request(options, (res) => {
  // ...
});

Alternatively, opt out of connection pooling by not using an Agent.

Example:

var options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET',
  key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
  cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem'),
  agent: false
};

var req = https.request(options, (res) => {
  // ...
});

Modules#

Stability: 3 - Locked

Node.js has a simple module loading system. In Node.js, files and modules are in one-to-one correspondence. As an example, foo.js loads the module circle.js in the same directory.

The contents of foo.js:

const circle = require('./circle.js');
console.log(`The area of a circle of radius 4 is ${circle.area(4)}`);

The contents of circle.js:

const PI = Math.PI;

exports.area = (r) => PI * r * r;

exports.circumference = (r) => 2 * PI * r;

The module circle.js has exported the functions area() and circumference(). To add functions and objects to the root of your module, you can add them to the special exports object.

Variables local to the module will be private, as though the module was wrapped in a function. In this example the variable PI is private to circle.js.

If you want the root of your module's export to be a function (such as a constructor) or if you want to export a complete object in one assignment instead of building it one property at a time, assign it to module.exports instead of exports.

Below, bar.js makes use of the square module, which exports a constructor:

const square = require('./square.js');
var mySquare = square(2);
console.log(`The area of my square is ${mySquare.area()}`);

The square module is defined in square.js:

// assigning to exports will not modify module, must use module.exports
module.exports = (width) => {
  return {
    area: () => width * width
  };
}

The module system is implemented in the require("module") module.

Accessing the main module#

When a file is run directly from Node.js, require.main is set to its module. That means that you can determine whether a file has been run directly by testing

require.main === module

For a file foo.js, this will be true if run via node foo.js, but false if run by require('./foo').

Because module provides a filename property (normally equivalent to __filename), the entry point of the current application can be obtained by checking require.main.filename.
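
For example, a minimal sketch (the file name entry.js is hypothetical):

// entry.js
if (require.main === module) {
  console.log(`Run directly; entry point: ${require.main.filename}`);
} else {
  console.log('Loaded via require()');
}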

Addenda: Package Manager Tips#

The semantics of Node.js's require() function were designed to be general enough to support a number of reasonable directory structures. Package manager programs such as dpkg, rpm, and npm will hopefully find it possible to build native packages from Node.js modules without modification.

Below we give a suggested directory structure that could work:

Let's say that we wanted to have the folder at /usr/lib/node/<some-package>/<some-version> hold the contents of a specific version of a package.

Packages can depend on one another. In order to install package foo, you may have to install a specific version of package bar. The bar package may itself have dependencies, and in some cases, these dependencies may even collide or form cycles.

Since Node.js looks up the realpath of any modules it loads (that is, resolves symlinks), and then looks for their dependencies in the node_modules folders as described here, this situation is very simple to resolve with the following architecture:

  • /usr/lib/node/foo/1.2.3/ - Contents of the foo package, version 1.2.3.
  • /usr/lib/node/bar/4.3.2/ - Contents of the bar package that foo depends on.
  • /usr/lib/node/foo/1.2.3/node_modules/bar - Symbolic link to /usr/lib/node/bar/4.3.2/.
  • /usr/lib/node/bar/4.3.2/node_modules/* - Symbolic links to the packages that bar depends on.

Thus, even if a cycle is encountered, or if there are dependency conflicts, every module will be able to get a version of its dependency that it can use.

When the code in the foo package does require('bar'), it will get the version that is symlinked into /usr/lib/node/foo/1.2.3/node_modules/bar. Then, when the code in the bar package calls require('quux'), it'll get the version that is symlinked into /usr/lib/node/bar/4.3.2/node_modules/quux.

Furthermore, to make the module lookup process even more optimal, rather than putting packages directly in /usr/lib/node, we could put them in /usr/lib/node_modules/<name>/<version>. Then Node.js will not bother looking for missing dependencies in /usr/node_modules or /node_modules.

In order to make modules available to the Node.js REPL, it might be useful to also add the /usr/lib/node_modules folder to the $NODE_PATH environment variable. Since the module lookups using node_modules folders are all relative, and based on the real path of the files making the calls to require(), the packages themselves can be anywhere.

All Together...#

To get the exact filename that will be loaded when require() is called, use the require.resolve() function.

Putting together all of the above, here is the high-level algorithm in pseudocode of what require.resolve does:

require(X) from module at path Y
1. If X is a core module,
   a. return the core module
   b. STOP
2. If X begins with './' or '/' or '../'
   a. LOAD_AS_FILE(Y + X)
   b. LOAD_AS_DIRECTORY(Y + X)
3. LOAD_NODE_MODULES(X, dirname(Y))
4. THROW "not found"

LOAD_AS_FILE(X)
1. If X is a file, load X as JavaScript text.  STOP
2. If X.js is a file, load X.js as JavaScript text.  STOP
3. If X.json is a file, parse X.json to a JavaScript Object.  STOP
4. If X.node is a file, load X.node as binary addon.  STOP

LOAD_AS_DIRECTORY(X)
1. If X/package.json is a file,
   a. Parse X/package.json, and look for "main" field.
   b. let M = X + (json main field)
   c. LOAD_AS_FILE(M)
2. If X/index.js is a file, load X/index.js as JavaScript text.  STOP
3. If X/index.json is a file, parse X/index.json to a JavaScript object. STOP
4. If X/index.node is a file, load X/index.node as binary addon.  STOP

LOAD_NODE_MODULES(X, START)
1. let DIRS=NODE_MODULES_PATHS(START)
2. for each DIR in DIRS:
   a. LOAD_AS_FILE(DIR/X)
   b. LOAD_AS_DIRECTORY(DIR/X)

NODE_MODULES_PATHS(START)
1. let PARTS = path split(START)
2. let I = count of PARTS - 1
3. let DIRS = []
4. while I >= 0,
   a. if PARTS[I] = "node_modules" CONTINUE
   b. DIR = path join(PARTS[0 .. I] + "node_modules")
   c. DIRS = DIRS + DIR
   d. let I = I - 1
5. return DIRS

Caching#

Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.

Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.

If you want to have a module execute code multiple times, then export a function, and call that function.
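
A hedged sketch of that pattern (the file name counter.js is hypothetical):

// counter.js
let calls = 0; // the module body runs only once, so this is initialized once

// Export a function; callers can execute it as many times as they like.
module.exports = () => {
  calls += 1;
  return calls;
};

// In another file:
const count = require('./counter');
console.log(count()); // 1
console.log(count()); // 2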

Module Caching Caveats#

Modules are cached based on their resolved filename. Since modules may resolve to a different filename based on the location of the calling module (loading from node_modules folders), it is not a guarantee that require('foo') will always return the exact same object, if it would resolve to different files.

Additionally, on case-insensitive file systems or operating systems, different resolved filenames can point to the same file, but the cache will still treat them as different modules and will reload the file multiple times. For example, require('./foo') and require('./FOO') return two different objects, irrespective of whether or not ./foo and ./FOO are the same file.

Core Modules#

Node.js has several modules compiled into the binary. These modules are described in greater detail elsewhere in this documentation.

The core modules are defined within Node.js's source and are located in the lib/ folder.

Core modules are always preferentially loaded if their identifier is passed to require(). For instance, require('http') will always return the built in HTTP module, even if there is a file by that name.

Cycles#

When there are circular require() calls, a module might not have finished executing when it is returned.

Consider this situation:

a.js:

console.log('a starting');
exports.done = false;
const b = require('./b.js');
console.log('in a, b.done = %j', b.done);
exports.done = true;
console.log('a done');

b.js:

console.log('b starting');
exports.done = false;
const a = require('./a.js');
console.log('in b, a.done = %j', a.done);
exports.done = true;
console.log('b done');

main.js:

console.log('main starting');
const a = require('./a.js');
const b = require('./b.js');
console.log('in main, a.done=%j, b.done=%j', a.done, b.done);

When main.js loads a.js, then a.js in turn loads b.js. At that point, b.js tries to load a.js. In order to prevent an infinite loop, an unfinished copy of the a.js exports object is returned to the b.js module. b.js then finishes loading, and its exports object is provided to the a.js module.

By the time main.js has loaded both modules, they're both finished. The output of this program would thus be:

$ node main.js
main starting
a starting
b starting
in b, a.done = false
b done
in a, b.done = true
a done
in main, a.done=true, b.done=true

If you have cyclic module dependencies in your program, make sure to plan accordingly.

File Modules#

If the exact filename is not found, then Node.js will attempt to load the required filename with the added extensions: .js, .json, and finally .node.

.js files are interpreted as JavaScript text files, and .json files are parsed as JSON text files. .node files are interpreted as compiled addon modules loaded with dlopen.

A required module prefixed with '/' is an absolute path to the file. For example, require('/home/marco/foo.js') will load the file at /home/marco/foo.js.

A required module prefixed with './' is relative to the file calling require(). That is, circle.js must be in the same directory as foo.js for require('./circle') to find it.

Without a leading '/', './', or '../' to indicate a file, the module must either be a core module or is loaded from a node_modules folder.

If the given path does not exist, require() will throw an Error with its code property set to 'MODULE_NOT_FOUND'.
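
A minimal sketch of handling that error for an optional dependency (the module name optional-dep is hypothetical):

let optionalDep = null;
try {
  optionalDep = require('optional-dep');
} catch (err) {
  // Re-throw anything other than a missing-module error.
  if (err.code !== 'MODULE_NOT_FOUND') throw err;
}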

Folders as Modules#

It is convenient to organize programs and libraries into self-contained directories, and then provide a single entry point to that library. There are three ways in which a folder may be passed to require() as an argument.

The first is to create a package.json file in the root of the folder, which specifies a main module. An example package.json file might look like this:

{ "name" : "some-library",
  "main" : "./lib/some-library.js" }

If this were in a folder at ./some-library, then require('./some-library') would attempt to load ./some-library/lib/some-library.js.

This is the extent of Node.js's awareness of package.json files.

Note: If the file specified by the "main" entry of package.json is missing and can not be resolved, Node.js will report the entire module as missing with the default error:

Error: Cannot find module 'some-library'

If there is no package.json file present in the directory, then Node.js will attempt to load an index.js or index.node file out of that directory. For example, if there was no package.json file in the above example, then require('./some-library') would attempt to load:

  • ./some-library/index.js
  • ./some-library/index.node

Loading from node_modules Folders#

If the module identifier passed to require() is not a native module, and does not begin with '/', '../', or './', then Node.js starts at the parent directory of the current module, adds /node_modules, and attempts to load the module from that location. Node.js will not append node_modules to a path already ending in node_modules.

If it is not found there, then it moves to the parent directory, and so on, until the root of the file system is reached.

For example, if the file at '/home/ry/projects/foo.js' called require('bar.js'), then Node.js would look in the following locations, in this order:

  • /home/ry/projects/node_modules/bar.js
  • /home/ry/node_modules/bar.js
  • /home/node_modules/bar.js
  • /node_modules/bar.js

This allows programs to localize their dependencies, so that they do not clash.

You can require specific files or sub modules distributed with a module by including a path suffix after the module name. For instance require('example-module/path/to/file') would resolve path/to/file relative to where example-module is located. The suffixed path follows the same module resolution semantics.

Loading from the global folders#

If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then Node.js will search those paths for modules if they are not found elsewhere. (Note: On Windows, NODE_PATH is delimited by semicolons instead of colons.)

NODE_PATH was originally created to support loading modules from varying paths before the current module resolution algorithm was frozen.

NODE_PATH is still supported, but is less necessary now that the Node.js ecosystem has settled on a convention for locating dependent modules. Sometimes deployments that rely on NODE_PATH show surprising behavior when people are unaware that NODE_PATH must be set. Sometimes a module's dependencies change, causing a different version (or even a different module) to be loaded as the NODE_PATH is searched.

Additionally, Node.js will search in the following locations:

  • 1: $HOME/.node_modules
  • 2: $HOME/.node_libraries
  • 3: $PREFIX/lib/node

Where $HOME is the user's home directory, and $PREFIX is Node.js's configured node_prefix.

These are mostly for historic reasons. You are highly encouraged to place your dependencies locally in node_modules folders. They will be loaded faster, and more reliably.

The module Object#

In each module, the module free variable is a reference to the object representing the current module. For convenience, module.exports is also accessible via the exports module-global. module isn't actually a global but rather local to each module.

module.children#

The module objects required by this one.

module.exports#

The module.exports object is created by the Module system. Sometimes this is not acceptable; many want their module to be an instance of some class. To do this, assign the desired export object to module.exports. Note that assigning the desired object to exports will simply rebind the local exports variable, which is probably not what you want to do.

For example suppose we were making a module called a.js

const EventEmitter = require('events');

module.exports = new EventEmitter();

// Do some work, and after some time emit
// the 'ready' event from the module itself.
setTimeout(() => {
  module.exports.emit('ready');
}, 1000);

Then in another file we could do

const a = require('./a');
a.on('ready', () => {
  console.log('module a is ready');
});

Note that assignment to module.exports must be done immediately. It cannot be done in any callbacks. This does not work:

x.js:

setTimeout(() => {
  module.exports = { a: 'hello' };
}, 0);

y.js:

const x = require('./x');
console.log(x.a);

exports alias#

The exports variable that is available within a module starts as a reference to module.exports. As with any variable, if you assign a new value to it, it is no longer bound to the previous value.

To illustrate the behavior, imagine this hypothetical implementation of require():

function require(...) {
  // ...
  ((module, exports) => {
    // Your module code here
    exports = some_func;        // re-assigns exports, exports is no longer
                                // a shortcut, and nothing is exported.
    module.exports = some_func; // makes your module export some_func
  })(module, module.exports);
  return module;
}

As a guideline, if the relationship between exports and module.exports seems like magic to you, ignore exports and only use module.exports.

module.filename#

The fully resolved filename to the module.

module.id#

The identifier for the module. Typically this is the fully resolved filename.

module.loaded#

Whether or not the module is done loading, or is in the process of loading.

module.parent#

The module that first required this one.

module.require(id)#

The module.require method provides a way to load a module as if require() was called from the original module.

Note that in order to do this, you must get a reference to the module object. Since require() returns the module.exports, and the module is typically only available within a specific module's code, it must be explicitly exported in order to be used.
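
A hedged sketch of exporting the module object for this purpose (the file names and the dependency name are hypothetical):

// loader.js: expose this module's own `module` object.
module.exports = module;

// In another file:
const loaderModule = require('./loader');
// Resolves 'some-dependency' relative to loader.js, not to this file.
const dep = loaderModule.require('some-dependency');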

net#

Stability: 2 - Stable

The net module provides you with an asynchronous network wrapper. It contains functions for creating both servers and clients (called streams). You can include this module with require('net');.

Class: net.Server#

This class is used to create a TCP or local server.

net.Server is an EventEmitter with the following events:

Event: 'close'#

Emitted when the server closes. Note that if connections exist, this event is not emitted until all connections are ended.

Event: 'connection'#

Emitted when a new connection is made. socket is an instance of net.Socket.

Event: 'error'#

Emitted when an error occurs. The 'close' event will be called directly following this event. See example in discussion of server.listen.

Event: 'listening'#

Emitted when the server has been bound after calling server.listen.

server.address()#

Returns the bound address, the address family name and port of the server as reported by the operating system. Useful to find which port was assigned when getting an OS-assigned address. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

Example:

var server = net.createServer((socket) => {
  socket.end('goodbye\n');
}).on('error', (err) => {
  // handle errors here
  throw err;
});

// grab a random port.
server.listen(() => {
  const address = server.address();
  console.log('opened server on %j', address);
});

Don't call server.address() until the 'listening' event has been emitted.

server.close([callback])#

Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a 'close' event. The optional callback will be called once the 'close' event occurs. Unlike that event, it will be called with an Error as its only argument if the server was not open when it was closed.

server.connections#

Stability: 0 - Deprecated: Use server.getConnections() instead.

The number of concurrent connections on the server.

This becomes null when sending a socket to a child with child_process.fork(). To poll forks and get the current number of active connections, use the asynchronous server.getConnections() instead.

server.getConnections(callback)#

Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.

Callback should take two arguments err and count.
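
For example (a minimal sketch):

server.getConnections((err, count) => {
  if (err) throw err;
  console.log('open connections: %d', count);
});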

server.listen(handle[, backlog][, callback])#

The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.

This will cause the server to accept connections on the specified handle, but it is presumed that the file descriptor or handle has already been bound to a port or domain socket.

Listening on a file descriptor is not supported on Windows.

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

The parameter backlog behaves the same as in server.listen(port[, hostname][, backlog][, callback]).

server.listen(options[, callback])#

The port, host, and backlog properties of options, as well as the optional callback function, behave as they do on a call to server.listen(port[, hostname][, backlog][, callback]). Alternatively, the path option can be used to specify a UNIX socket.

If exclusive is false (default), then cluster workers will use the same underlying handle, allowing connection handling duties to be shared. When exclusive is true, the handle is not shared, and attempted port sharing results in an error. An example which listens on an exclusive port is shown below.

server.listen({
  host: 'localhost',
  port: 80,
  exclusive: true
});

server.listen(path[, backlog][, callback])#

Start a local socket server listening for connections on the given path.

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

On UNIX, the local domain is usually known as the UNIX domain. The path is a filesystem path name. It is subject to the same naming conventions and permissions checks as would be done on file creation, will be visible in the filesystem, and will persist until unlinked.

On Windows, the local domain is implemented using a named pipe. The path must refer to an entry in \\?\pipe\ or \\.\pipe\. Any characters are permitted, but the latter may do some processing of pipe names, such as resolving .. sequences. Despite appearances, the pipe name space is flat. Pipes will not persist, they are removed when the last reference to them is closed. Do not forget JavaScript string escaping requires paths to be specified with double-backslashes, such as:

net.createServer().listen(
    path.join('\\\\?\\pipe', process.cwd(), 'myctl'))

The parameter backlog behaves the same as in server.listen(port[, hostname][, backlog][, callback]).

server.listen(port[, hostname][, backlog][, callback])#

Begin accepting connections on the specified port and hostname. If the hostname is omitted, the server will accept connections on any IPv6 address (::) when IPv6 is available, or any IPv4 address (0.0.0.0) otherwise. A port value of zero will assign a random port.

Backlog is the maximum length of the queue of pending connections. The actual length will be determined by your OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on linux. The default value of this parameter is 511 (not 512).

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

One issue some users run into is getting EADDRINUSE errors. This means that another server is already running on the requested port. One way of handling this would be to wait a second and then try again. This can be done with

server.on('error', (e) => {
  if (e.code == 'EADDRINUSE') {
    console.log('Address in use, retrying...');
    setTimeout(() => {
      server.close();
      server.listen(PORT, HOST);
    }, 1000);
  }
});

(Note: All sockets in Node.js set SO_REUSEADDR already)

server.listening#

A Boolean indicating whether or not the server is listening for connections.

server.maxConnections#

Set this property to reject connections when the server's connection count gets high.

It is not recommended to use this option once a socket has been sent to a child with child_process.fork().

server.ref()#

Opposite of unref, calling ref on a previously unrefd server will not let the program exit if it's the only server left (the default behavior). If the server is refd calling ref again will have no effect.

Returns server.

server.unref()#

Calling unref on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefd calling unref again will have no effect.

Returns server.

Class: net.Socket#

This object is an abstraction of a TCP or local socket. net.Socket instances implement a duplex Stream interface. They can be created by the user and used as a client (with connect()) or they can be created by Node.js and passed to the user through the 'connection' event of a server.

new net.Socket(options)#

Construct a new socket object.

options is an object with the following defaults:

{
  fd: null,
  allowHalfOpen: false,
  readable: false,
  writable: false
}

fd allows you to specify the existing file descriptor of socket. Set readable and/or writable to true to allow reads and/or writes on this socket (NOTE: Works only when fd is passed). About allowHalfOpen, refer to createServer() and 'end' event.

net.Socket instances are EventEmitters with the following events:

Event: 'close'#

  • had_error <Boolean> true if the socket had a transmission error.

Emitted once the socket is fully closed. The argument had_error is a boolean which says if the socket was closed due to a transmission error.

Event: 'connect'#

Emitted when a socket connection is successfully established. See connect().

Event: 'data'#

Emitted when data is received. The argument data will be a Buffer or String. Encoding of data is set by socket.setEncoding(). (See the Readable Stream section for more information.)

Note that the data will be lost if there is no listener when a Socket emits a 'data' event.

Event: 'drain'#

Emitted when the write buffer becomes empty. Can be used to throttle uploads.

See also: the return values of socket.write()

Event: 'end'#

Emitted when the other end of the socket sends a FIN packet.

By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.

Event: 'error'#

Emitted when an error occurs. The 'close' event will be called directly following this event.

Event: 'lookup'#

Emitted after resolving the hostname but before connecting. Not applicable to UNIX sockets.

Event: 'timeout'#

Emitted if the socket times out from inactivity. This is only to notify that the socket has been idle. The user must manually close the connection.

See also: socket.setTimeout()

socket.address()#

Returns the bound address, the address family name and port of the socket as reported by the operating system. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

socket.bufferSize#

net.Socket has the property that socket.write() always works. This is to help users get up and running quickly. The computer cannot always keep up with the amount of data that is written to a socket - the network connection simply might be too slow. Node.js will internally queue up the data written to a socket and send it out over the wire when it is possible. (Internally it is polling on the socket's file descriptor for being writable).

The consequence of this internal buffering is that memory may grow. This property shows the number of characters currently buffered to be written. (Number of characters is approximately equal to the number of bytes to be written, but the buffer may contain strings, and the strings are lazily encoded, so the exact number of bytes is not known.)

Users who experience large or growing bufferSize should attempt to "throttle" the data flows in their program with pause() and resume().
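
As a hedged sketch (sourceStream is a hypothetical readable stream feeding the socket, and the threshold is arbitrary):

sourceStream.on('data', (chunk) => {
  socket.write(chunk);
  // If too much data has accumulated in the socket's write buffer,
  // pause the source and resume it once the buffer has drained.
  if (socket.bufferSize > 1024 * 1024) {
    sourceStream.pause();
    socket.once('drain', () => sourceStream.resume());
  }
});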

socket.bytesRead#

The number of bytes received.

socket.bytesWritten#

The number of bytes sent.

socket.connect(options[, connectListener])#

Opens the connection for a given socket.

For TCP sockets, options argument should be an object which specifies:

  • port: Port the client should connect to (Required).

  • host: Host the client should connect to. Defaults to 'localhost'.

  • localAddress: Local interface to bind to for network connections.

  • localPort: Local port to bind to for network connections.

  • family: Version of IP stack. Defaults to 4.

  • hints: dns.lookup() hints. Defaults to 0.

  • lookup: Custom lookup function. Defaults to dns.lookup.

For local domain sockets, options argument should be an object which specifies:

  • path: Path the client should connect to (Required).

Normally this method is not needed, as net.createConnection opens the socket. Use this only if you are implementing a custom Socket.

This function is asynchronous. When the 'connect' event is emitted the socket is established. If there is a problem connecting, the 'connect' event will not be emitted, the 'error' event will be emitted with the exception.

The connectListener parameter will be added as a listener for the 'connect' event.

socket.connect(path[, connectListener])#

socket.connect(port[, host][, connectListener])#

As socket.connect(options[, connectListener]), with options as either {port: port, host: host} or {path: path}.

socket.destroy()#

Ensures that no more I/O activity happens on this socket. Only necessary in case of errors (parse error or so).

socket.end([data][, encoding])#

Half-closes the socket, i.e., it sends a FIN packet. It is possible the server will still send some data.

If data is specified, it is equivalent to calling socket.write(data, encoding) followed by socket.end().

socket.localAddress#

The string representation of the local IP address the remote client is connecting on. For example, if you are listening on '0.0.0.0' and the client connects on '192.168.1.1', the value would be '192.168.1.1'.

socket.localPort#

The numeric representation of the local port. For example, 80 or 21.

socket.pause()#

Pauses the reading of data. That is, 'data' events will not be emitted. Useful to throttle back an upload.

socket.ref()#

Opposite of unref, calling ref on a previously unrefd socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refd calling ref again will have no effect.

Returns socket.

socket.remoteAddress#

The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'. Value may be undefined if the socket is destroyed (for example, if the client disconnected).

socket.remoteFamily#

The string representation of the remote IP family. 'IPv4' or 'IPv6'.

socket.remotePort#

The numeric representation of the remote port. For example, 80 or 21.

socket.resume()#

Resumes reading after a call to pause().

socket.setEncoding([encoding])#

Set the encoding for the socket as a Readable Stream. See stream.setEncoding() for more information.

socket.setKeepAlive([enable][, initialDelay])#

Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. enable defaults to false.

Set initialDelay (in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Setting 0 for initialDelay will leave the value unchanged from the default (or previous) setting. Defaults to 0.

Returns socket.

socket.setNoDelay([noDelay])#

Disables the Nagle algorithm. By default TCP connections use the Nagle algorithm; they buffer data before sending it off. Setting true for noDelay will immediately fire off data each time socket.write() is called. noDelay defaults to true.

Returns socket.

socket.setTimeout(timeout[, callback])#

Sets the socket to timeout after timeout milliseconds of inactivity on the socket. By default net.Socket instances do not have a timeout.

When an idle timeout is triggered the socket will receive a 'timeout' event but the connection will not be severed. The user must manually end() or destroy() the socket.

If timeout is 0, then the existing idle timeout is disabled.

The optional callback parameter will be added as a one time listener for the 'timeout' event.

Returns socket.
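
For example, a minimal sketch (the 3-second value is arbitrary):

socket.setTimeout(3000);
socket.on('timeout', () => {
  console.log('socket timeout');
  socket.end();
});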

socket.unref()#

Calling unref on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefd calling unref again will have no effect.

Returns socket.

socket.write(data[, encoding][, callback])#

Sends data on the socket. The second parameter specifies the encoding in the case of a string--it defaults to UTF8 encoding.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

The optional callback parameter will be executed when the data is finally written out - this may not be immediately.

net.connect(options[, connectListener])#

A factory function, which returns a new net.Socket and automatically connects with the supplied options.

The options are passed to both the net.Socket constructor and the socket.connect method.

The connectListener parameter will be added as a listener for the 'connect' event once.

Here is an example of a client of the previously described echo server:

const net = require('net');
const client = net.connect({port: 8124}, () => {
  // 'connect' listener
  console.log('connected to server!');
  client.write('world!\r\n');
});
client.on('data', (data) => {
  console.log(data.toString());
  client.end();
});
client.on('end', () => {
  console.log('disconnected from server');
});

To connect to the socket /tmp/echo.sock, the second line would just be changed to

const client = net.connect({path: '/tmp/echo.sock'});

net.connect(path[, connectListener])#

A factory function, which returns a new unix net.Socket and automatically connects to the supplied path.

The connectListener parameter will be added as a listener for the 'connect' event once.

net.connect(port[, host][, connectListener])#

A factory function, which returns a new net.Socket and automatically connects to the supplied port and host.

If host is omitted, 'localhost' will be assumed.

The connectListener parameter will be added as a listener for the 'connect' event once.

net.createConnection(options[, connectListener])#

A factory function, which returns a new net.Socket and automatically connects with the supplied options.

The options are passed to both the net.Socket constructor and the socket.connect method.

The connectListener parameter will be added as a listener for the 'connect' event once.

Here is an example of a client of the previously described echo server:

const net = require('net');
const client = net.createConnection({port: 8124}, () => {
  //'connect' listener
  console.log('connected to server!');
  client.write('world!\r\n');
});
client.on('data', (data) => {
  console.log(data.toString());
  client.end();
});
client.on('end', () => {
  console.log('disconnected from server');
});

To connect to the socket /tmp/echo.sock, the second line would just be changed to

const client = net.connect({path: '/tmp/echo.sock'});

net.createConnection(path[, connectListener])#

A factory function, which returns a new unix net.Socket and automatically connects to the supplied path.

The connectListener parameter will be added as a listener for the 'connect' event once.

net.createConnection(port[, host][, connectListener])#

A factory function, which returns a new net.Socket and automatically connects to the supplied port and host.

If host is omitted, 'localhost' will be assumed.

The connectListener parameter will be added as a listener for the 'connect' event once.

net.createServer([options][, connectionListener])#

Creates a new server. The connectionListener argument is automatically set as a listener for the 'connection' event.

options is an object with the following defaults:

{
  allowHalfOpen: false,
  pauseOnConnect: false
}

If allowHalfOpen is true, then the socket won't automatically send a FIN packet when the other end of the socket sends a FIN packet. The socket becomes non-readable, but still writable. You should call the end() method explicitly. See 'end' event for more information.

If pauseOnConnect is true, then the socket associated with each incoming connection will be paused, and no data will be read from its handle. This allows connections to be passed between processes without any data being read by the original process. To begin reading data from a paused socket, call resume().

Here is an example of an echo server which listens for connections on port 8124:

const net = require('net');
const server = net.createServer((c) => {
  // 'connection' listener
  console.log('client connected');
  c.on('end', () => {
    console.log('client disconnected');
  });
  c.write('hello\r\n');
  c.pipe(c);
});
server.on('error', (err) => {
  throw err;
});
server.listen(8124, () => {
  console.log('server bound');
});

Test this by using telnet:

telnet localhost 8124

To listen on the socket /tmp/echo.sock, the third line from the last would just be changed to

server.listen('/tmp/echo.sock', () => {
  console.log('server bound');
});

Use nc to connect to a UNIX domain socket server:

nc -U /tmp/echo.sock

net.isIP(input)#

Tests if input is an IP address. Returns 0 for invalid strings, returns 4 for IP version 4 addresses, and returns 6 for IP version 6 addresses.

net.isIPv4(input)#

Returns true if input is a version 4 IP address, otherwise returns false.

net.isIPv6(input)#

Returns true if input is a version 6 IP address, otherwise returns false.
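
For illustration, the expected return values of these helpers for a few literal inputs:

const net = require('net');

net.isIP('10.0.0.1');    // returns 4
net.isIP('::1');         // returns 6
net.isIP('not an ip');   // returns 0

net.isIPv4('10.0.0.1');  // returns true
net.isIPv6('::1');       // returns true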

OS#

Stability: 2 - Stable

Provides a few basic operating-system related utility functions.

Use require('os') to access this module.

os.EOL#

A constant defining the appropriate End-of-line marker for the operating system.
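
For example, os.EOL can be used to join lines in a platform-appropriate way before writing them to a file (the file name is illustrative):

const os = require('os');
const fs = require('fs');

const lines = ['first line', 'second line', 'third line'];
// os.EOL is '\n' on POSIX systems and '\r\n' on Windows.
fs.writeFileSync('example.txt', lines.join(os.EOL));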

os.arch()#

Returns the operating system CPU architecture. Possible values are 'x64', 'arm' and 'ia32'. Returns the value of process.arch.

os.cpus()#

Returns an array of objects containing information about each CPU/core installed: model, speed (in MHz), and times (an object containing the number of milliseconds the CPU/core spent in: user, nice, sys, idle, and irq).

Example inspection of os.cpus:

[ { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 252020,
       nice: 0,
       sys: 30340,
       idle: 1070356870,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 306960,
       nice: 0,
       sys: 26980,
       idle: 1071569080,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 248450,
       nice: 0,
       sys: 21750,
       idle: 1070919370,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 256880,
       nice: 0,
       sys: 19430,
       idle: 1070905480,
       irq: 20 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 511580,
       nice: 20,
       sys: 40900,
       idle: 1070842510,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 291660,
       nice: 0,
       sys: 34360,
       idle: 1070888000,
       irq: 10 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 308260,
       nice: 0,
       sys: 55410,
       idle: 1071129970,
       irq: 880 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 266450,
       nice: 1480,
       sys: 34920,
       idle: 1072572010,
       irq: 30 } } ]

Note that because nice values are UNIX-centric, on Windows the nice values of all processors are always 0.

os.endianness()#

Returns the endianness of the CPU. Possible values are 'BE' for big endian or 'LE' for little endian.

os.freemem()#

Returns the amount of free system memory in bytes.

os.homedir()#

Returns the home directory of the current user.

os.hostname()#

Returns the hostname of the operating system.

os.loadavg()#

Returns an array containing the 1, 5, and 15 minute load averages.

The load average is a measure of system activity, calculated by the operating system and expressed as a fractional number. As a rule of thumb, the load average should ideally be less than the number of logical CPUs in the system.

The load average is a very UNIX-y concept; there is no real equivalent on Windows platforms. That is why this function always returns [0, 0, 0] on Windows.
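
A simple sketch comparing the 1-minute load average against the number of logical CPUs reported by os.cpus():

const os = require('os');

const [oneMinute] = os.loadavg();
const cpuCount = os.cpus().length;
if (oneMinute > cpuCount) {
  console.log(`Load average ${oneMinute} exceeds the ${cpuCount} available CPUs`);
}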

os.networkInterfaces()#

Get a list of network interfaces:

{ lo:
   [ { address: '127.0.0.1',
       netmask: '255.0.0.0',
       family: 'IPv4',
       mac: '00:00:00:00:00:00',
       internal: true },
     { address: '::1',
       netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
       family: 'IPv6',
       mac: '00:00:00:00:00:00',
       internal: true } ],
  eth0:
   [ { address: '192.168.1.108',
       netmask: '255.255.255.0',
       family: 'IPv4',
       mac: '01:02:03:0a:0b:0c',
       internal: false },
     { address: 'fe80::a00:27ff:fe4e:66a1',
       netmask: 'ffff:ffff:ffff:ffff::',
       family: 'IPv6',
       mac: '01:02:03:0a:0b:0c',
       internal: false } ] }

Note that due to the underlying implementation this will only return network interfaces that have been assigned an address.
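
A common use is collecting the externally visible IPv4 addresses. One possible sketch, based on the structure shown above:

const os = require('os');

const interfaces = os.networkInterfaces();
const externalIPv4 = [];
Object.keys(interfaces).forEach((name) => {
  interfaces[name].forEach((iface) => {
    // Skip loopback/internal addresses and anything that is not IPv4.
    if (!iface.internal && iface.family === 'IPv4') {
      externalIPv4.push(`${name}: ${iface.address}`);
    }
  });
});
console.log(externalIPv4);
// e.g. [ 'eth0: 192.168.1.108' ]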

os.platform()#

Returns the operating system platform. Possible values are 'darwin', 'freebsd', 'linux', 'sunos' or 'win32'. Returns the value of process.platform.

os.release()#

Returns the operating system release.

os.tmpdir()#

Returns the operating system's default directory for temporary files.

os.totalmem()#

Returns the total amount of system memory in bytes.

os.type()#

Returns the operating system name. For example 'Linux' on Linux, 'Darwin' on OS X and 'Windows_NT' on Windows.

os.uptime()#

Returns the system uptime in seconds.

os.userInfo(options)#

  • options <Object>
    • encoding <String> Character encoding used to interpret resulting strings. If encoding is set to 'buffer', the username, shell, and homedir values will be Buffer instances. (Default: 'utf8')

Returns a subset of the password file entry for the current effective user. The returned object includes the username, uid, gid, shell, and homedir. On Windows, the uid and gid fields are -1, and shell is null.

The value of homedir returned by userInfo() comes directly from the operating system. This differs from the result of os.homedir(), which queries several environment variables for the home directory before falling back to the operating system response.
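
A brief illustration of the difference (field values depend on the current user and environment):

const os = require('os');

const user = os.userInfo();
console.log(user.username);
console.log(user.homedir);  // taken directly from the operating system
console.log(os.homedir());  // may differ if $HOME (or equivalent) is overridden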

Path#

Stability: 2 - Stable

This module contains utilities for handling and transforming file paths. The file system is not consulted to check whether paths are valid.

Use require('path') to use this module. The following methods are provided:

path.basename(path[, ext])#

Return the last portion of a path, similar to the Unix basename command. path must be a string. ext, if given, must also be a string.

Examples:

path.basename('/foo/bar/baz/asdf/quux.html')
// returns 'quux.html'

path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns 'quux'

path.delimiter#

The platform-specific path delimiter, ';' or ':'.

An example on *nix:

console.log(process.env.PATH)
// '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'

process.env.PATH.split(path.delimiter)
// returns ['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

An example on Windows:

console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\node\'

process.env.PATH.split(path.delimiter)
// returns ['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\node\\']

path.dirname(path)#

Return the directory name of a path, similar to the Unix dirname command. path must be a string.

Example:

path.dirname('/foo/bar/baz/asdf/quux')
// returns '/foo/bar/baz/asdf'

path.extname(path)#

Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string. path must be a string.

Examples:

path.extname('index.html')
// returns '.html'

path.extname('index.coffee.md')
// returns '.md'

path.extname('index.')
// returns '.'

path.extname('index')
// returns ''

path.extname('.index')
// returns ''

path.format(pathObject)#

Returns a path string from an object. This is the opposite of path.parse.

If pathObject has dir and base properties, the returned string will be a concatenation of the dir property, the platform-dependent path separator, and the base property.

If the dir property is not supplied, the root property will be used as the dir property. However, it will be assumed that the root property already ends with the platform-dependent path separator. In this case, the returned string will be the concatenation of the root property and the base property.

If both the dir and the root properties are not supplied, then the returned string will be the contents of the base property.

If the base property is not supplied, a concatenation of the name property and the ext property will be used as the base property.

Examples:

Some Posix system examples:

// If `dir` and `base` are provided, `dir` + platform separator + `base`
// will be returned.
path.format({
    dir: '/home/user/dir',
    base: 'file.txt'
});
// returns '/home/user/dir/file.txt'

// `root` will be used if `dir` is not specified.
// `name` + `ext` will be used if `base` is not specified.
// If only `root` is provided or `dir` is equal to `root` then the
// platform separator will not be included.
path.format({
    root: '/',
    base: 'file.txt'
});
// returns '/file.txt'

path.format({
    dir: '/',
    root: '/',
    name: 'file',
    ext: '.txt'
});
// returns '/file.txt'

// `base` will be returned if `dir` or `root` are not provided.
path.format({
    base: 'file.txt'
});
// returns 'file.txt'

An example on Windows:

path.format({
    root : "C:\\",
    dir : "C:\\path\\dir",
    base : "file.txt",
    ext : ".txt",
    name : "file"
})
// returns 'C:\\path\\dir\\file.txt'

path.isAbsolute(path)#

Determines whether path is an absolute path. An absolute path will always resolve to the same location, regardless of the working directory. path must be a string.

Examples on *nix:

path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..')  // true
path.isAbsolute('qux/')     // false
path.isAbsolute('.')        // false

Examples on Windows:

path.isAbsolute('//server')  // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz')  // false
path.isAbsolute('.')         // false

Note: If the path string passed as parameter is a zero-length string, unlike other path module functions, it will be used as-is and false will be returned.

path.join([path1][, path2][, ...])#

Join all arguments together and normalize the resulting path.

All arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an exception is thrown.

Examples:

path.join('/foo', 'bar', 'baz/asdf', 'quux', '..')
// returns '/foo/bar/baz/asdf'

path.join('foo', {}, 'bar')
// throws exception
TypeError: Arguments to path.join must be strings

Note: If the arguments to join have zero-length strings, unlike other path module functions, they will be ignored. If the joined path string is a zero-length string then '.' will be returned, which represents the current working directory.

path.normalize(path)#

Normalize a path, taking care of '..' and '.' parts. path must be a string.

When multiple slashes are found, they're replaced by a single one; when the path contains a trailing slash, it is preserved. On Windows backslashes are used.

Example:

path.normalize('/foo/bar//baz/asdf/quux/..')
// returns '/foo/bar/baz/asdf'

Note: If the path string passed as argument is a zero-length string then '.' will be returned, which represents the current working directory.

path.parse(path)#

Returns an object from a path. path must be a string.

An example on *nix:

path.parse('/home/user/dir/file.txt')
// returns
// {
//    root : "/",
//    dir : "/home/user/dir",
//    base : "file.txt",
//    ext : ".txt",
//    name : "file"
// }

An example on Windows:

path.parse('C:\\path\\dir\\index.html')
// returns
// {
//    root : "C:\\",
//    dir : "C:\\path\\dir",
//    base : "index.html",
//    ext : ".html",
//    name : "index"
// }

path.posix#

Provides access to the aforementioned path methods, but always interacts in a POSIX-compatible way.

path.relative(from, to)#

Solve the relative path from from to to. from and to must be strings.

At times we have two absolute paths, and we need to derive the relative path from one to the other. This is actually the reverse transform of path.resolve, which means we see that:

path.resolve(from, path.relative(from, to)) == path.resolve(to)

Examples:

path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb')
// returns '..\\..\\impl\\bbb'

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// returns '../../impl/bbb'

Note: If the arguments to relative have zero-length strings then the current working directory will be used instead of the zero-length strings. If both the paths are the same then a zero-length string will be returned.

path.resolve([from ...], to)#

Resolves to to an absolute path. All arguments must be strings.

If to isn't already absolute, from arguments are prepended in right-to-left order, until an absolute path is found. If after using all from paths still no absolute path is found, the current working directory is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets resolved to the root directory. Empty string from arguments are ignored.

Another way to think of it is as a sequence of cd commands in a shell.

path.resolve('foo/bar', '/tmp/file/', '..', 'a/../subfile')

Is similar to:

cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd

The difference is that the different paths don't need to exist and may also be files.

Examples:

path.resolve('/foo/bar', './baz')
// returns '/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/')
// returns '/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif')
// if currently in /home/myself/node, it returns
// '/home/myself/node/wwwroot/static_files/gif/image.gif'

path.sep#

The platform-specific file separator. '\\' or '/'.

An example on *nix:

'foo/bar/baz'.split(path.sep)
// returns ['foo', 'bar', 'baz']

An example on Windows:

'foo\\bar\\baz'.split(path.sep)
// returns ['foo', 'bar', 'baz']

path.win32#

Provides access to the aforementioned path methods, but always interacts in a win32-compatible way.

process#

The process object is a global object and can be accessed from anywhere. It is an instance of EventEmitter.

Event: 'beforeExit'#

This event is emitted when Node.js empties its event loop and has nothing else to schedule. Normally, Node.js exits when there is no work scheduled, but a listener for 'beforeExit' can make asynchronous calls, and cause Node.js to continue.

'beforeExit' is not emitted for conditions causing explicit termination, such as process.exit() or uncaught exceptions, and should not be used as an alternative to the 'exit' event unless the intention is to schedule more work.

Event: 'exit'#

Emitted when the process is about to exit. There is no way to prevent the exiting of the event loop at this point, and once all 'exit' listeners have finished running the process will exit. Therefore you must only perform synchronous operations in this handler. This is a good hook to perform checks on the module's state (like for unit tests). The callback takes one argument, the code the process is exiting with.

This event is only emitted when Node.js exits explicitly by process.exit() or implicitly by the event loop draining.

Example of listening for 'exit':

process.on('exit', (code) => {
  // do *NOT* do this
  setTimeout(() => {
    console.log('This will not run');
  }, 0);
  console.log('About to exit with code:', code);
});

Event: 'message'#

Messages sent by ChildProcess.send() are obtained using the 'message' event on the child's process object.

Event: 'rejectionHandled'#

Emitted whenever a Promise was rejected and an error handler was attached to it (for example with .catch()) later than after an event loop turn. This event is emitted with the following arguments:

  • p the promise that was previously emitted in an 'unhandledRejection' event, but which has now gained a rejection handler.

There is no notion of a top level for a promise chain at which rejections can always be handled. Being inherently asynchronous in nature, a promise rejection can be handled at a future point in time — possibly much later than the event loop turn it takes for the 'unhandledRejection' event to be emitted.

Another way of stating this is that, unlike in synchronous code where there is an ever-growing list of unhandled exceptions, with promises there is a growing-and-shrinking list of unhandled rejections. In synchronous code, the 'uncaughtException' event tells you when the list of unhandled exceptions grows. And in asynchronous code, the 'unhandledRejection' event tells you when the list of unhandled rejections grows, while the 'rejectionHandled' event tells you when the list of unhandled rejections shrinks.

For example using the rejection detection hooks in order to keep a map of all the rejected promise reasons at a given time:

const unhandledRejections = new Map();
process.on('unhandledRejection', (reason, p) => {
  unhandledRejections.set(p, reason);
});
process.on('rejectionHandled', (p) => {
  unhandledRejections.delete(p);
});

This map will grow and shrink over time, reflecting rejections that start unhandled and then become handled. You could record the errors in some error log, either periodically (probably best for long-running programs, allowing you to clear the map, which in the case of a very buggy program could grow indefinitely) or upon process exit (more convenient for scripts).

Event: 'uncaughtException'#

The 'uncaughtException' event is emitted when an exception bubbles all the way back to the event loop. By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting. Adding a handler for the 'uncaughtException' event overrides this default behavior.

For example:

process.on('uncaughtException', (err) => {
  console.log(`Caught exception: ${err}`);
});

setTimeout(() => {
  console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');

Warning: Using 'uncaughtException' correctly#

Note that 'uncaughtException' is a crude mechanism for exception handling intended to be used only as a last resort. The event should not be used as an equivalent to On Error Resume Next. Unhandled exceptions inherently mean that an application is in an undefined state. Attempting to resume application code without properly recovering from the exception can cause additional unforeseen and unpredictable issues.

Exceptions thrown from within the event handler will not be caught. Instead the process will exit with a non-zero exit code and the stack trace will be printed. This is to avoid infinite recursion.

Attempting to resume normally after an uncaught exception can be similar to pulling out the power cord when upgrading a computer -- nine out of ten times nothing happens, but the tenth time the system becomes corrupted.

The correct use of 'uncaughtException' is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc) before shutting down the process. It is not safe to resume normal operation after 'uncaughtException'.

Event: 'unhandledRejection'#

Emitted whenever a Promise is rejected and no error handler is attached to the promise within a turn of the event loop. When programming with promises, exceptions are encapsulated as rejected promises. Such promises can be caught and handled using promise.catch(...) and rejections are propagated through a promise chain. This event is useful for detecting and keeping track of promises that were rejected and whose rejections have not yet been handled. This event is emitted with the following arguments:

  • reason the object with which the promise was rejected (usually an Error instance).
  • p the promise that was rejected.

Here is an example that logs every unhandled rejection to the console:

process.on('unhandledRejection', (reason, p) => {
    console.log("Unhandled Rejection at: Promise ", p, " reason: ", reason);
    // application specific logging, throwing an error, or other logic here
});

For example, here is a rejection that will trigger the 'unhandledRejection' event:

somePromise.then((res) => {
  return reportToUser(JSON.pasre(res)); // note the typo (`pasre`)
}); // no `.catch` or `.then`

Here is an example of a coding pattern that will also trigger 'unhandledRejection':

function SomeResource() {
  // Initially set the loaded status to a rejected promise
  this.loaded = Promise.reject(new Error('Resource not yet loaded!'));
}

var resource = new SomeResource();
// no .catch or .then on resource.loaded for at least a turn

In cases like this, you may not want to track the rejection as a developer error like you would for other 'unhandledRejection' events. To address this, you can either attach a dummy .catch(() => { }) handler to resource.loaded, preventing the 'unhandledRejection' event from being emitted, or you can use the 'rejectionHandled' event.

Event: 'warning'#

Emitted whenever Node.js emits a process warning.

A process warning is similar to an error in that it describes exceptional conditions that are being brought to the user's attention. However, warnings are not part of the normal Node.js and JavaScript error handling flow. Node.js can emit warnings whenever it detects bad coding practices that could lead to sub-optimal application performance, bugs or security vulnerabilities.

The event handler for 'warning' events is called with a single warning argument whose value is an Error object. There are three key properties that describe the warning:

  • name - The name of the warning (currently Warning by default).
  • message - A system-provided description of the warning.
  • stack - A stack trace to the location in the code where the warning was issued.

process.on('warning', (warning) => {
  console.warn(warning.name);    // Print the warning name
  console.warn(warning.message); // Print the warning message
  console.warn(warning.stack);   // Print the stack trace
});

By default, Node.js will print process warnings to stderr. The --no-warnings command-line option can be used to suppress the default console output but the 'warning' event will still be emitted by the process object.

The following example illustrates the warning that is printed to stderr when too many listeners have been added to an event:

$ node
> events.defaultMaxListeners = 1;
> process.on('foo', () => {});
> process.on('foo', () => {});
> (node:38638) Warning: Possible EventEmitter memory leak detected. 2 foo
... listeners added. Use emitter.setMaxListeners() to increase limit

In contrast, the following example turns off the default warning output and adds a custom handler to the 'warning' event:

$ node --no-warnings
> var p = process.on('warning', (warning) => console.warn('Do not do that!'));
> events.defaultMaxListeners = 1;
> process.on('foo', () => {});
> process.on('foo', () => {});
> Do not do that!

The --trace-warnings command-line option can be used to have the default console output for warnings include the full stack trace of the warning.

Emitting custom warnings#

The process.emitWarning() method can be used to issue custom or application specific warnings.

// Emit a warning using a string...
process.emitWarning('Something happened!');
  // Prints: (node:12345) Warning: Something happened!

// Emit a warning using an object...
process.emitWarning('Something Happened!', 'CustomWarning');
  // Prints: (node:12345) CustomWarning: Something Happened!

// Emit a warning using a custom Error object...
class CustomWarning extends Error {
  constructor(message) {
    super(message);
    this.name = 'CustomWarning';
    Error.captureStackTrace(this, CustomWarning);
  }
}
const myWarning = new CustomWarning('Something happened!');
process.emitWarning(myWarning);
  // Prints: (node:12345) CustomWarning: Something happened!

Emitting custom deprecation warnings#

Custom deprecation warnings can be emitted by setting the name of a custom warning to DeprecationWarning. For instance:

process.emitWarning('This API is deprecated', 'DeprecationWarning');

Or,

const err = new Error('This API is deprecated');
err.name = 'DeprecationWarning';
process.emitWarning(err);

Launching Node.js using the --throw-deprecation command line flag will cause custom deprecation warnings to be thrown as exceptions.

Using the --trace-deprecation command line flag will cause the custom deprecation to be printed to stderr along with the stack trace.

Using the --no-deprecation command line flag will suppress all reporting of the custom deprecation.

The *-deprecation command line flags only affect warnings that use the name DeprecationWarning.

Exit Codes#

Node.js will normally exit with a 0 status code when no more async operations are pending. The following status codes are used in other cases:

  • 1 Uncaught Fatal Exception - There was an uncaught exception, and it was not handled by a domain or an 'uncaughtException' event handler.
  • 2 - Unused (reserved by Bash for builtin misuse)
  • 3 Internal JavaScript Parse Error - The JavaScript source code internal to Node.js's bootstrapping process caused a parse error. This is extremely rare, and generally can only happen during development of Node.js itself.
  • 4 Internal JavaScript Evaluation Failure - The JavaScript source code internal to Node.js's bootstrapping process failed to return a function value when evaluated. This is extremely rare, and generally can only happen during development of Node.js itself.
  • 5 Fatal Error - There was a fatal unrecoverable error in V8. Typically a message will be printed to stderr with the prefix FATAL ERROR.
  • 6 Non-function Internal Exception Handler - There was an uncaught exception, but the internal fatal exception handler function was somehow set to a non-function, and could not be called.
  • 7 Internal Exception Handler Run-Time Failure - There was an uncaught exception, and the internal fatal exception handler function itself threw an error while attempting to handle it. This can happen, for example, if a process.on('uncaughtException') or domain.on('error') handler throws an error.
  • 8 - Unused. In previous versions of Node.js, exit code 8 sometimes indicated an uncaught exception.
  • 9 - Invalid Argument - Either an unknown option was specified, or an option requiring a value was provided without a value.
  • 10 Internal JavaScript Run-Time Failure - The JavaScript source code internal to Node.js's bootstrapping process threw an error when the bootstrapping function was called. This is extremely rare, and generally can only happen during development of Node.js itself.
  • 12 Invalid Debug Argument - The --debug and/or --debug-brk options were set, but an invalid port number was chosen.
  • >128 Signal Exits - If Node.js receives a fatal signal such as SIGKILL or SIGHUP, then its exit code will be 128 plus the value of the signal code. This is a standard Unix practice, since exit codes are defined to be 7-bit integers, and signal exits set the high-order bit, and then contain the value of the signal code.

Signal Events#

Emitted when the process receives a signal. See sigaction(2) for a list of standard POSIX signal names such as SIGINT, SIGHUP, etc.

Example of listening for SIGINT:

// Start reading from stdin so we don't exit.
process.stdin.resume();

process.on('SIGINT', () => {
  console.log('Got SIGINT.  Press Control-D to exit.');
});

An easy way to send the SIGINT signal is with Control-C in most terminal programs.

Note:

  • SIGUSR1 is reserved by Node.js to start the debugger. It's possible to install a listener but that won't stop the debugger from starting.
  • SIGTERM and SIGINT have default handlers on non-Windows platforms that reset the terminal mode before exiting with code 128 + signal number. If one of these signals has a listener installed, its default behavior will be removed (Node.js will no longer exit).
  • SIGPIPE is ignored by default. It can have a listener installed.
  • SIGHUP is generated on Windows when the console window is closed, and on other platforms under various similar conditions; see signal(7). It can have a listener installed; however, Node.js will be unconditionally terminated by Windows about 10 seconds later. On non-Windows platforms, the default behavior of SIGHUP is to terminate Node.js, but once a listener has been installed its default behavior will be removed.
  • SIGTERM is not supported on Windows, but it can be listened on.
  • SIGINT from the terminal is supported on all platforms, and can usually be generated with CTRL+C (though this may be configurable). It is not generated when terminal raw mode is enabled.
  • SIGBREAK is delivered on Windows when CTRL+BREAK is pressed, on non-Windows platforms it can be listened on, but there is no way to send or generate it.
  • SIGWINCH is delivered when the console has been resized. On Windows, this will only happen on write to the console when the cursor is being moved, or when a readable tty is used in raw mode.
  • SIGKILL cannot have a listener installed, it will unconditionally terminate Node.js on all platforms.
  • SIGSTOP cannot have a listener installed.

Note that Windows does not support sending Signals, but Node.js offers some emulation with process.kill(), and child_process.kill(). Sending signal 0 can be used to test for the existence of a process. Sending SIGINT, SIGTERM, and SIGKILL cause the unconditional termination of the target process.
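
For example, signal 0 can be used to probe whether a given pid exists without affecting it; process.kill() throws if the target does not exist. A minimal sketch (the pid is a hypothetical value):

const pid = 12345;  // hypothetical target process id
try {
  process.kill(pid, 0);
  console.log(`process ${pid} exists`);
} catch (err) {
  console.log(`process ${pid} does not exist (or is not accessible)`);
}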

process.abort()#

This causes Node.js to emit an abort. This will cause Node.js to exit and generate a core file.

process.arch#

What processor architecture you're running on: 'arm', 'ia32', or 'x64'.

console.log('This processor architecture is ' + process.arch);

process.argv#

An array containing the command line arguments. The first element will be 'node', the second element will be the name of the JavaScript file. The next elements will be any additional command line arguments.

// print process.argv
process.argv.forEach((val, index, array) => {
  console.log(`${index}: ${val}`);
});

This will generate:

$ node process-2.js one two=three four
0: node
1: /Users/mjr/work/node/process-2.js
2: one
3: two=three
4: four

process.chdir(directory)#

Changes the current working directory of the process or throws an exception if that fails.

console.log(`Starting directory: ${process.cwd()}`);
try {
  process.chdir('/tmp');
  console.log(`New directory: ${process.cwd()}`);
}
catch (err) {
  console.log(`chdir: ${err}`);
}

process.config#

An Object containing the JavaScript representation of the configure options that were used to compile the current Node.js executable. This is the same as the config.gypi file that was produced when running the ./configure script.

An example of the possible output looks like:

{
  target_defaults:
   { cflags: [],
     default_configuration: 'Release',
     defines: [],
     include_dirs: [],
     libraries: [] },
  variables:
   {
     host_arch: 'x64',
     node_install_npm: 'true',
     node_prefix: '',
     node_shared_cares: 'false',
     node_shared_http_parser: 'false',
     node_shared_libuv: 'false',
     node_shared_zlib: 'false',
     node_use_dtrace: 'false',
     node_use_openssl: 'true',
     node_shared_openssl: 'false',
     strict_aliasing: 'true',
     target_arch: 'x64',
     v8_use_snapshot: 'true'
   }
}

process.connected#

  • <Boolean> Set to false after process.disconnect() is called

If process.connected is false, it is no longer possible to send messages.

process.cwd()#

Returns the current working directory of the process.

console.log(`Current directory: ${process.cwd()}`);

process.disconnect()#

Close the IPC channel to the parent process, allowing this child to exit gracefully once there are no other connections keeping it alive.

Identical to the parent process's ChildProcess.disconnect().

If Node.js was not spawned with an IPC channel, process.disconnect() will be undefined.

process.env#

An object containing the user environment. See environ(7).

An example of this object looks like:

{ TERM: 'xterm-256color',
  SHELL: '/usr/local/bin/bash',
  USER: 'maciej',
  PATH: '~/.bin/:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin',
  PWD: '/Users/maciej',
  EDITOR: 'vim',
  SHLVL: '1',
  HOME: '/Users/maciej',
  LOGNAME: 'maciej',
  _: '/usr/local/bin/node' }

You can write to this object, but changes won't be reflected outside of your process. That means that the following won't work:

$ node -e 'process.env.foo = "bar"' && echo $foo

But this will:

process.env.foo = 'bar';
console.log(process.env.foo);

Assigning a property on process.env will implicitly convert the value to a string.

Example:

process.env.test = null;
console.log(process.env.test);
// => 'null'
process.env.test = undefined;
console.log(process.env.test);
// => 'undefined'

Use delete to delete a property from process.env.

Example:

process.env.TEST = 1;
delete process.env.TEST;
console.log(process.env.TEST);
// => undefined

process.emitWarning(warning[, name][, ctor])#

  • warning <String> | <Error> The warning to emit.
  • name <String> When warning is a String, name is the name to use for the warning. Default: Warning.
  • ctor <Function> When warning is a String, ctor is an optional function used to limit the generated stack trace. Default: process.emitWarning.

The process.emitWarning() method can be used to emit custom or application specific process warnings. These can be listened for by adding a handler to the process.on('warning') event.

// Emit a warning using a string...
process.emitWarning('Something happened!');
  // Emits: (node:56338) Warning: Something happened!
// Emit a warning using a string and a name...
process.emitWarning('Something Happened!', 'CustomWarning');
  // Emits: (node:56338) CustomWarning: Something Happened!

In each of the previous examples, an Error object is generated internally by process.emitWarning() and passed through to the process.on('warning') event.

process.on('warning', (warning) => {
  console.warn(warning.name);
  console.warn(warning.message);
  console.warn(warning.stack);
});

If warning is passed as an Error object, it will be passed through to the process.on('warning') event handler unmodified (and the optional name and ctor arguments will be ignored):

// Emit a warning using an Error object...
const myWarning = new Error('Warning! Something happened!');
myWarning.name = 'CustomWarning';

process.emitWarning(myWarning);
  // Emits: (node:56338) CustomWarning: Warning! Something happened!

A TypeError is thrown if warning is anything other than a string or Error object.

Note that while process warnings use Error objects, the process warning mechanism is not a replacement for normal error handling mechanisms.

The following additional handling is implemented if the warning name is DeprecationWarning:

  • If the --throw-deprecation command-line flag is used, the deprecation warning is thrown as an exception rather than being emitted as an event.
  • If the --no-deprecation command-line flag is used, the deprecation warning is suppressed.
  • If the --trace-deprecation command-line flag is used, the deprecation warning is printed to stderr along with the full stack trace.

Avoiding duplicate warnings#

As a best practice, warnings should be emitted only once per process. To do so, it is recommended to place the emitWarning() call behind a simple boolean flag as illustrated in the example below:

var warned = false;
function emitMyWarning() {
  if (!warned) {
    process.emitWarning('Only warn once!');
    warned = true;
  }
}
emitMyWarning();
  // Emits: (node:56339) Warning: Only warn once!
emitMyWarning();
  // Emits nothing

process.execArgv#

This is the set of Node.js-specific command line options from the executable that started the process. These options do not show up in process.argv, and do not include the Node.js executable, the name of the script, or any options following the script name. These options are useful in order to spawn child processes with the same execution environment as the parent.

Example:

$ node --harmony script.js --version

results in process.execArgv:

['--harmony']

and process.argv:

['/usr/local/bin/node', 'script.js', '--version']

process.execPath#

This is the absolute pathname of the executable that started the process.

Example:

/usr/local/bin/node

process.exit([code])#

Ends the process with the specified code. If omitted, exit uses the 'success' code 0.

To exit with a 'failure' code:

process.exit(1);

The shell that executed Node.js should see the exit code as 1.

process.exitCode#

A number which will be the process exit code, when the process either exits gracefully, or is exited via process.exit() without specifying a code.

Specifying a code to process.exit(code) will override any previous setting of process.exitCode.
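
Setting process.exitCode marks the run as failed while still letting pending I/O and 'exit' handlers finish, instead of terminating immediately with process.exit():

// Exit with a failure status once the event loop has drained.
process.exitCode = 1;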

process.getegid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Gets the effective group identity of the process. (See getegid(2).) This is the numerical group id, not the group name.

if (process.getegid) {
  console.log(`Current gid: ${process.getegid()}`);
}

process.geteuid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Gets the effective user identity of the process. (See geteuid(2).) This is the numerical userid, not the username.

if (process.geteuid) {
  console.log(`Current uid: ${process.geteuid()}`);
}

process.getgid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Gets the group identity of the process. (See getgid(2).) This is the numerical group id, not the group name.

if (process.getgid) {
  console.log(`Current gid: ${process.getgid()}`);
}

process.getgroups()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Returns an array with the supplementary group IDs. POSIX leaves it unspecified if the effective group ID is included but Node.js ensures it always is.

process.getuid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Gets the user identity of the process. (See getuid(2).) This is the numerical userid, not the username.

if (process.getuid) {
  console.log(`Current uid: ${process.getuid()}`);
}

process.hrtime()#

Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array. It is relative to an arbitrary time in the past. It is not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals.

You may pass in the result of a previous call to process.hrtime() to get a diff reading, useful for benchmarks and measuring intervals:

var time = process.hrtime();
// [ 1800216, 25 ]

setTimeout(() => {
  var diff = process.hrtime(time);
  // [ 1, 552 ]

  console.log('benchmark took %d nanoseconds', diff[0] * 1e9 + diff[1]);
  // benchmark took 1000000527 nanoseconds
}, 1000);

process.initgroups(user, extra_group)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Reads /etc/group and initializes the group access list, using all groups of which the user is a member. This is a privileged operation, meaning you need to be root or have the CAP_SETGID capability.

user is a user name or user ID. extra_group is a group name or group ID.

Some care needs to be taken when dropping privileges. Example:

console.log(process.getgroups());         // [ 0 ]
process.initgroups('bnoordhuis', 1000);   // switch user
console.log(process.getgroups());         // [ 27, 30, 46, 1000, 0 ]
process.setgid(1000);                     // drop root gid
console.log(process.getgroups());         // [ 27, 30, 46, 1000 ]

process.kill(pid[, signal])#

Send a signal to a process. pid is the process id and signal is the string describing the signal to send. Signal names are strings like SIGINT or SIGHUP. If omitted, the signal will be SIGTERM. See Signal Events and kill(2) for more information.

Will throw an error if target does not exist, and as a special case, a signal of 0 can be used to test for the existence of a process. Windows platforms will throw an error if the pid is used to kill a process group.

Note that even though the name of this function is process.kill, it is really just a signal sender, like the kill system call. The signal sent may do something other than kill the target process.

Example of sending a signal to yourself:

process.on('SIGHUP', () => {
  console.log('Got SIGHUP signal.');
});

setTimeout(() => {
  console.log('Exiting.');
  process.exit(0);
}, 100);

process.kill(process.pid, 'SIGHUP');

Note: When SIGUSR1 is received by Node.js it starts the debugger, see Signal Events.

process.mainModule#

Alternate way to retrieve require.main. The difference is that if the main module changes at runtime, require.main might still refer to the original main module in modules that were required before the change occurred. Generally it's safe to assume that the two refer to the same module.

As with require.main, it will be undefined if there was no entry script.
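
For example, a module can check whether it was run directly as the entry script (the same check is commonly written with require.main):

if (process.mainModule === module) {
  console.log('run directly as the entry script');
} else {
  console.log('loaded via require()');
}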

process.memoryUsage()#

Returns an object describing the memory usage of the Node.js process measured in bytes.

const util = require('util');

console.log(util.inspect(process.memoryUsage()));

This will generate:

{ rss: 4935680,
  heapTotal: 1826816,
  heapUsed: 650472 }

heapTotal and heapUsed refer to V8's memory usage.

process.nextTick(callback[, arg][, ...])#

Once the current event loop turn runs to completion, call the callback function.

This is not a simple alias to setTimeout(fn, 0); it's much more efficient. It runs before any additional I/O events (including timers) fire in subsequent ticks of the event loop.

console.log('start');
process.nextTick(() => {
  console.log('nextTick callback');
});
console.log('scheduled');
// Output:
// start
// scheduled
// nextTick callback

This is important in developing APIs where you want to give the user the chance to assign event handlers after an object has been constructed, but before any I/O has occurred.

function MyThing(options) {
  this.setupOptions(options);

  process.nextTick(() => {
    this.startDoingStuff();
  });
}

var thing = new MyThing();
thing.getReadyForStuff();

// thing.startDoingStuff() gets called now, not before.

It is very important for APIs to be either 100% synchronous or 100% asynchronous. Consider this example:

// WARNING!  DO NOT USE!  BAD UNSAFE HAZARD!
function maybeSync(arg, cb) {
  if (arg) {
    cb();
    return;
  }

  fs.stat('file', cb);
}

This API is hazardous. If you do this:

maybeSync(true, () => {
  foo();
});
bar();

then it's not clear whether foo() or bar() will be called first.

This approach is much better:

function definitelyAsync(arg, cb) {
  if (arg) {
    process.nextTick(cb);
    return;
  }

  fs.stat('file', cb);
}

Note: the nextTick queue is completely drained on each pass of the event loop before additional I/O is processed. As a result, recursively setting nextTick callbacks will block any I/O from happening, just like a while(true); loop.

process.pid#

The PID of the process.

console.log(`This process is pid ${process.pid}`);

process.platform#

What platform you're running on: 'darwin', 'freebsd', 'linux', 'sunos' or 'win32'

console.log(`This platform is ${process.platform}`);

process.release#

An Object containing metadata related to the current release, including URLs for the source tarball and headers-only tarball.

process.release contains the following properties:

  • name: a string with a value that will always be 'node' for Node.js. For legacy io.js releases, this will be 'io.js'.
  • sourceUrl: a complete URL pointing to a .tar.gz file containing the source of the current release.
  • headersUrl: a complete URL pointing to a .tar.gz file containing only the header files for the current release. This file is significantly smaller than the full source file and can be used for compiling add-ons against Node.js.
  • libUrl: a complete URL pointing to a node.lib file matching the architecture and version of the current release. This file is used for compiling add-ons against Node.js. This property is only present on Windows builds of Node.js and will be missing on all other platforms.

e.g.

{ name: 'node',
  sourceUrl: 'https://nodejs.org/download/release/v4.0.0/node-v4.0.0.tar.gz',
  headersUrl: 'https://nodejs.org/download/release/v4.0.0/node-v4.0.0-headers.tar.gz',
  libUrl: 'https://nodejs.org/download/release/v4.0.0/win-x64/node.lib' }

In custom builds from non-release versions of the source tree, only the name property may be present. The additional properties should not be relied upon to exist.

process.send(message[, sendHandle[, options]][, callback])#

When Node.js is spawned with an IPC channel attached, it can send messages to its parent process using process.send(). Each will be received as a 'message' event on the parent's ChildProcess object.

Note: this function uses JSON.stringify() internally to serialize the message.

If Node.js was not spawned with an IPC channel, process.send() will be undefined.
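
A minimal sketch of a child script sending a message to its parent; the guard is needed because process.send only exists when an IPC channel is present (for example, when the child was started with child_process.fork()):

// child.js (hypothetical file name, run via child_process.fork('child.js'))
if (process.send) {
  process.send({ status: 'ready', pid: process.pid });
} else {
  console.log('not spawned with an IPC channel');
}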

process.setegid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Sets the effective group identity of the process. (See setegid(2).) This accepts either a numerical ID or a groupname string. If a groupname is specified, this method blocks while resolving it to a numerical ID.

if (process.getegid && process.setegid) {
  console.log(`Current gid: ${process.getegid()}`);
  try {
    process.setegid(501);
    console.log(`New gid: ${process.getegid()}`);
  }
  catch (err) {
    console.log(`Failed to set gid: ${err}`);
  }
}

process.seteuid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Sets the effective user identity of the process. (See seteuid(2).) This accepts either a numerical ID or a username string. If a username is specified, this method blocks while resolving it to a numerical ID.

if (process.geteuid && process.seteuid) {
  console.log(`Current uid: ${process.geteuid()}`);
  try {
    process.seteuid(501);
    console.log(`New uid: ${process.geteuid()}`);
  }
  catch (err) {
    console.log(`Failed to set uid: ${err}`);
  }
}

process.setgid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Sets the group identity of the process. (See setgid(2).) This accepts either a numerical ID or a groupname string. If a groupname is specified, this method blocks while resolving it to a numerical ID.

if (process.getgid && process.setgid) {
  console.log(`Current gid: ${process.getgid()}`);
  try {
    process.setgid(501);
    console.log(`New gid: ${process.getgid()}`);
  }
  catch (err) {
    console.log(`Failed to set gid: ${err}`);
  }
}

process.setgroups(groups)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Sets the supplementary group IDs. This is a privileged operation, meaning you need to be root or have the CAP_SETGID capability.

The list can contain group IDs, group names or both.

process.setuid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

Sets the user identity of the process. (See setuid(2).) This accepts either a numerical ID or a username string. If a username is specified, this method blocks while resolving it to a numerical ID.

if (process.getuid && process.setuid) {
  console.log(`Current uid: ${process.getuid()}`);
  try {
    process.setuid(501);
    console.log(`New uid: ${process.getuid()}`);
  }
  catch (err) {
    console.log(`Failed to set uid: ${err}`);
  }
}

process.stderr#

A writable stream to stderr (on fd 2).

process.stderr and process.stdout are unlike other streams in Node.js in that they cannot be closed (end() will throw), they never emit the finish event and that writes can block when output is redirected to a file (although disks are fast and operating systems normally employ write-back caching so it should be a very rare occurrence indeed.)

process.stdin#

A Readable Stream for stdin (on fd 0).

Example of opening standard input and listening for both events:

process.stdin.setEncoding('utf8');

process.stdin.on('readable', () => {
  var chunk = process.stdin.read();
  if (chunk !== null) {
    process.stdout.write(`data: ${chunk}`);
  }
});

process.stdin.on('end', () => {
  process.stdout.write('end');
});

As a Stream, process.stdin can also be used in "old" mode that is compatible with scripts written for node.js prior to v0.10. For more information see Stream compatibility.

In "old" Streams mode the stdin stream is paused by default, so one must call process.stdin.resume() to read from it. Note also that calling process.stdin.resume() itself would switch stream to "old" mode.

If you are starting a new project you should prefer the more recent "new" Streams mode over the "old" one.

process.stdout#

A Writable Stream to stdout (on fd 1).

For example, a console.log equivalent could look like this:

console.log = (msg) => {
  process.stdout.write(`${msg}\n`);
};

process.stderr and process.stdout are unlike other streams in Node.js in that they cannot be closed (end() will throw), they never emit the 'finish' event and that writes can block when output is redirected to a file (although disks are fast and operating systems normally employ write-back caching so it should be a very rare occurrence indeed.)

To check if Node.js is being run in a TTY context, read the isTTY property on process.stderr, process.stdout, or process.stdin:

$ node -p "Boolean(process.stdin.isTTY)"
true
$ echo "foo" | node -p "Boolean(process.stdin.isTTY)"
false

$ node -p "Boolean(process.stdout.isTTY)"
true
$ node -p "Boolean(process.stdout.isTTY)" | cat
false

See the tty docs for more information.

process.title#

Getter/setter to set what is displayed in ps.

When used as a setter, the maximum length is platform-specific and probably short.

On Linux and OS X, it's limited to the size of the binary name plus the length of the command line arguments because it overwrites the argv memory.

v0.8 allowed for longer process title strings by also overwriting the environ memory but that was potentially insecure/confusing in some (rather obscure) cases.
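
For instance, the title can be shortened so the process is easier to spot in ps output (the name used here is illustrative and subject to the platform length limit mentioned above):

process.title = 'my-worker';
console.log(process.title);
  // Prints: my-worker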

process.umask([mask])#

Sets or reads the process's file mode creation mask. Child processes inherit the mask from the parent process. Returns the old mask if mask argument is given, otherwise returns the current mask.

const newmask = 0o022;
const oldmask = process.umask(newmask);
console.log(
  `Changed umask from ${oldmask.toString(8)} to ${newmask.toString(8)}`
);

process.uptime()#

Number of seconds Node.js has been running.

process.version#

A compiled-in property that exposes NODE_VERSION.

console.log(`Version: ${process.version}`);

process.versions#

A property exposing version strings of Node.js and its dependencies.

console.log(process.versions);

Will print something like:

{ http_parser: '2.3.0',
  node: '1.1.1',
  v8: '4.1.0.14',
  uv: '1.3.0',
  zlib: '1.2.8',
  ares: '1.10.0-DEV',
  modules: '43',
  icu: '55.1',
  openssl: '1.0.1k' }

punycode#

Stability: 2 - Stable

Punycode.js is bundled with Node.js v0.6.2+. Use require('punycode') to access it. (To use it with other Node.js versions, use npm to install the punycode module first.)

punycode.decode(string)#

Converts a Punycode string of ASCII-only symbols to a string of Unicode symbols.

// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'

punycode.encode(string)#

Converts a string of Unicode symbols to a Punycode string of ASCII-only symbols.

// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'

punycode.toASCII(domain)#

Converts a Unicode string representing a domain name to Punycode. Only the non-ASCII parts of the domain name will be converted, i.e. it doesn't matter if you call it with a domain that's already in ASCII.

// encode domain names
punycode.toASCII('mañana.com'); // 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com'); // 'xn----dqo34k.com'

punycode.toUnicode(domain)#

Converts a Punycode string representing a domain name to Unicode. Only the Punycoded parts of the domain name will be converted, i.e. it doesn't matter if you call it on a string that has already been converted to Unicode.

// decode domain names
punycode.toUnicode('xn--maana-pta.com'); // 'mañana.com'
punycode.toUnicode('xn----dqo34k.com'); // '☃-⌘.com'

punycode.ucs2#

punycode.ucs2.decode(string)#

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

punycode.ucs2.decode('abc'); // [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 tetragram for centre:
punycode.ucs2.decode('\uD834\uDF06'); // [0x1D306]

punycode.ucs2.encode(codePoints)#

Creates a string based on an array of numeric code point values.

punycode.ucs2.encode([0x61, 0x62, 0x63]); // 'abc'
punycode.ucs2.encode([0x1D306]); // '\uD834\uDF06'

punycode.version#

A string representing the current Punycode.js version number.

Query String#

Stability: 2 - Stable

This module provides utilities for dealing with query strings. It provides the following methods:

querystring.escape#

The escape function used by querystring.stringify, provided so that it could be overridden if necessary.
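
For example, a minimal sketch of overriding it so that querystring.stringify emits lower-cased percent-escapes (a purely hypothetical requirement):

const querystring = require('querystring');

const originalEscape = querystring.escape;
querystring.escape = (str) => originalEscape(str).toLowerCase();

querystring.stringify({ q: 'café' });
// returns 'q=caf%c3%a9' instead of the default 'q=caf%C3%A9'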

querystring.parse(str[, sep][, eq][, options])#

Deserialize a query string to an object. Optionally override the default separator ('&') and assignment ('=') characters.

The options object may contain a maxKeys property (1000 by default); it limits the number of keys processed. Set it to 0 to remove the key count limitation.

The options object may contain a decodeURIComponent property (querystring.unescape by default); it can be used to decode strings in a non-UTF-8 encoding if necessary.

Example:

querystring.parse('foo=bar&baz=qux&baz=quux&corge')
// returns { foo: 'bar', baz: ['qux', 'quux'], corge: '' }

// Suppose gbkDecodeURIComponent function already exists,
// it can decode `gbk` encoding string
querystring.parse('w=%D6%D0%CE%C4&foo=bar', null, null,
  { decodeURIComponent: gbkDecodeURIComponent })
// returns { w: '中文', foo: 'bar' }

querystring.stringify(obj[, sep][, eq][, options])#

Serialize an object to a query string. Optionally override the default separator ('&') and assignment ('=') characters.

The options object may contain an encodeURIComponent property (querystring.escape by default); it can be used to encode strings in a non-UTF-8 encoding if necessary.

Example:

querystring.stringify({ foo: 'bar', baz: ['qux', 'quux'], corge: '' })
// returns 'foo=bar&baz=qux&baz=quux&corge='

querystring.stringify({foo: 'bar', baz: 'qux'}, ';', ':')
// returns 'foo:bar;baz:qux'

// Suppose gbkEncodeURIComponent function already exists,
// it can encode string with `gbk` encoding
querystring.stringify({ w: '中文', foo: 'bar' }, null, null,
  { encodeURIComponent: gbkEncodeURIComponent })
// returns 'w=%D6%D0%CE%C4&foo=bar'

querystring.unescape#

The unescape function used by querystring.parse, provided so that it could be overridden if necessary.

It will first try to use decodeURIComponent, but if that fails it falls back to a safer equivalent that doesn't throw on malformed URLs.

Readline#

Stability: 2 - Stable

To use this module, do require('readline'). Readline allows reading of a stream (such as process.stdin) on a line-by-line basis.

Note that once you've invoked this module, your Node.js program will not terminate until you've closed the interface. Here's how to allow your program to gracefully exit:

const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What do you think of Node.js? ', (answer) => {
  // TODO: Log the answer in a database
  console.log('Thank you for your valuable feedback:', answer);

  rl.close();
});

Class: Interface#

The class that represents a readline interface with an input and output stream.

rl.close()#

Closes the Interface instance, relinquishing control on the input and output streams. The 'close' event will also be emitted.

rl.pause()#

Pauses the readline input stream, allowing it to be resumed later if needed.

Note that this doesn't immediately pause the stream of events. Several events may be emitted after calling pause, including line.

rl.prompt([preserveCursor])#

Readies readline for input from the user, putting the current prompt (as set via setPrompt) on a new line, giving the user a new spot to write. Set preserveCursor to true to prevent the cursor placement from being reset to 0.

This will also resume the input stream used with createInterface if it has been paused.

If output is set to null or undefined when calling createInterface, the prompt is not written.

rl.question(query, callback)#

Prepends the prompt with query and displays it to the user, then invokes callback with the user's response after it has been typed.

This will also resume the input stream used with createInterface if it has been paused.

If output is set to null or undefined when calling createInterface, nothing is displayed.

Example usage:

rl.question('What is your favorite food?', (answer) => {
  console.log(`Oh, so your favorite food is ${answer}`);
});

rl.resume()#

Resumes the readline input stream.

rl.setPrompt(prompt)#

Sets the prompt. For example, when you run node on the command line, you see >, which is Node.js's prompt.

rl.write(data[, key])#

Writes data to output stream, unless output is set to null or undefined when calling createInterface. key is an object literal to represent a key sequence; available if the terminal is a TTY.

This will also resume the input stream if it has been paused.

Example:

rl.write('Delete me!');
// Simulate ctrl+u to delete the line written previously
rl.write(null, {ctrl: true, name: 'u'});

Events#

Event: 'close'#

function () {}

Emitted when close() is called.

Also emitted when the input stream receives its 'end' event. The Interface instance should be considered "finished" once this is emitted. For example, this occurs when the input stream receives ^D, also known as EOT.

This event is also emitted if there is no SIGINT event listener present when the input stream receives a ^C, also known as SIGINT.

Event: 'line'#

function (line) {}

Emitted whenever the input stream receives an end of line (\n, \r, or \r\n), usually received when the user hits enter, or return. This is a good hook to listen for user input.

Example of listening for 'line':

rl.on('line', (cmd) => {
  console.log(`You just typed: ${cmd}`);
});

Event: 'pause'#

function () {}

Emitted whenever the input stream is paused.

Also emitted whenever the input stream is not paused and receives the SIGCONT event. (See events SIGTSTP and SIGCONT)

Example of listening for 'pause':

rl.on('pause', () => {
  console.log('Readline paused.');
});

Event: 'resume'#

function () {}

Emitted whenever the input stream is resumed.

Example of listening for 'resume':

rl.on('resume', () => {
  console.log('Readline resumed.');
});

Event: 'SIGCONT'#

function () {}

This does not work on Windows.

Emitted whenever the input stream is sent to the background with ^Z (also known as SIGTSTP) and then continued with fg(1). This event only emits if the stream was not paused before sending the program to the background.

Example of listening for SIGCONT:

rl.on('SIGCONT', () => {
  // `prompt` will automatically resume the stream
  rl.prompt();
});

Event: 'SIGINT'#

function () {}

Emitted whenever the input stream receives a ^C, also known as SIGINT. If there is no SIGINT event listener present when the input stream receives a SIGINT, pause will be triggered.

Example of listening for SIGINT:

rl.on('SIGINT', () => {
  rl.question('Are you sure you want to exit?', (answer) => {
    if (answer.match(/^y(es)?$/i)) rl.pause();
  });
});

Event: 'SIGTSTP'#

function () {}

This does not work on Windows.

Emitted whenever the input stream receives a ^Z, also known as SIGTSTP. If there is no SIGTSTP event listener present when the input stream receives a SIGTSTP, the program will be sent to the background.

When the program is resumed with fg, the 'pause' and SIGCONT events will be emitted. You can use either to resume the stream.

The 'pause' and SIGCONT events will not be triggered if the stream was paused before the program was sent to the background.

Example of listening for SIGTSTP:

rl.on('SIGTSTP', () => {
  // This will override SIGTSTP and prevent the program from going to the
  // background.
  console.log('Caught SIGTSTP.');
});

Example: Tiny CLI#

Here's an example of how to use all these together to craft a tiny command line interface:

const readline = require('readline');
const rl = readline.createInterface(process.stdin, process.stdout);

rl.setPrompt('OHAI> ');
rl.prompt();

rl.on('line', (line) => {
  switch(line.trim()) {
    case 'hello':
      console.log('world!');
      break;
    default:
      console.log('Say what? I might have heard `' + line.trim() + '`');
      break;
  }
  rl.prompt();
}).on('close', () => {
  console.log('Have a great day!');
  process.exit(0);
});

Example: Read File Stream Line-by-Line#

A common case for readline's input option is to pass a filesystem readable stream to it. This is how one could craft line-by-line parsing of a file:

const readline = require('readline');
const fs = require('fs');

const rl = readline.createInterface({
  input: fs.createReadStream('sample.txt')
});

rl.on('line', (line) => {
  console.log('Line from file:', line);
});

readline.clearLine(stream, dir)#

Clears current line of given TTY stream in a specified direction. dir should have one of following values:

  • -1 - to the left from cursor
  • 1 - to the right from cursor
  • 0 - the entire line

readline.clearScreenDown(stream)#

Clears the screen from the current position of the cursor down.

readline.createInterface(options)#

Creates a readline Interface instance. Accepts an options Object that takes the following values:

  • input - the readable stream to listen to (Required).

  • output - the writable stream to write readline data to (Optional).

  • completer - an optional function that is used for Tab autocompletion. See below for an example of using this.

  • terminal - pass true if the input and output streams should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Defaults to checking isTTY on the output stream upon instantiation.

  • historySize - maximum number of history lines retained. Defaults to 30.

The completer function is given the current line entered by the user, and is supposed to return an Array with 2 entries:

  1. An Array with matching entries for the completion.

  2. The substring that was used for the matching.

Which ends up looking something like: [[substr1, substr2, ...], originalsubstring].

Example:

function completer(line) {
  var completions = '.help .error .exit .quit .q'.split(' ');
  var hits = completions.filter((c) => c.indexOf(line) === 0);
  // show all completions if none found
  return [hits.length ? hits : completions, line];
}

The completer can also be run in async mode if it accepts two arguments:

function completer(linePartial, callback) {
  callback(null, [['123'], linePartial]);
}

createInterface is commonly used with process.stdin and process.stdout in order to accept user input:

const readline = require('readline');
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

Once you have a readline instance, you most commonly listen for the 'line' event.

If terminal is true for this instance then the output stream will get the best compatibility if it defines an output.columns property, and fires a 'resize' event on the output if/when the columns ever change (process.stdout does this automatically when it is a TTY).

readline.cursorTo(stream, x, y)#

Move cursor to the specified position in a given TTY stream.

readline.emitKeypressEvents(stream)#

Causes stream to begin emitting 'keypress' events corresponding to its input.
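
A minimal sketch, assuming process.stdin is a TTY; enabling raw mode is the usual companion step so that keys are delivered one at a time rather than line-by-line:

const readline = require('readline');

readline.emitKeypressEvents(process.stdin);
if (process.stdin.isTTY)
  process.stdin.setRawMode(true);

process.stdin.on('keypress', (str, key) => {
  // in raw mode, ctrl+c must be handled manually
  if (key && key.ctrl && key.name === 'c')
    process.exit();
  console.log('keypress:', key ? key.name : str);
});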

readline.moveCursor(stream, dx, dy)#

Move cursor relative to its current position in a given TTY stream.
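
As a sketch of how these cursor helpers combine, the following redraws a single status line in place by moving the cursor back to column 0 and clearing the line before each write (assumes process.stdout is a TTY):

const readline = require('readline');

var count = 5;
const timer = setInterval(() => {
  readline.cursorTo(process.stdout, 0);
  readline.clearLine(process.stdout, 0);
  process.stdout.write(`${count} seconds remaining`);
  if (count-- === 0) {
    clearInterval(timer);
    process.stdout.write('\n');
  }
}, 1000);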

REPL#

Stability: 2 - Stable

A Read-Eval-Print-Loop (REPL) is available both as a standalone program and easily includable in other programs. The REPL provides a way to interactively run JavaScript and see the results. It can be used for debugging, testing, or just trying things out.

By executing node without any arguments from the command-line you will be dropped into the REPL. It has simplistic emacs line-editing.

$ node
Type '.help' for options.
> a = [ 1, 2, 3];
[ 1, 2, 3 ]
> a.forEach((v) => {
...   console.log(v);
...   });
1
2
3

For advanced line-editors, start Node.js with the environment variable NODE_NO_READLINE=1. This will start the main and debugger REPLs in canonical terminal settings, which allows you to use them with rlwrap.

For example, you could add this to your bashrc file:

alias node="env NODE_NO_READLINE=1 rlwrap node"

Environment Variable Options#

The built-in repl (invoked by running node or node -i) may be controlled via the following environment variables:

  • NODE_REPL_HISTORY - When a valid path is given, persistent REPL history will be saved to the specified file rather than .node_repl_history in the user's home directory. Setting this value to "" will disable persistent REPL history. Whitespace will be trimmed from the value.
  • NODE_REPL_HISTORY_SIZE - Defaults to 1000. Controls how many lines of history will be persisted if history is available. Must be a positive number.
  • NODE_REPL_MODE - May be any of sloppy, strict, or magic. Defaults to magic, which will automatically run "strict mode only" statements in strict mode.

Persistent History#

By default, the REPL will persist history between node REPL sessions by saving to a .node_repl_history file in the user's home directory. This can be disabled by setting the environment variable NODE_REPL_HISTORY="".

NODE_REPL_HISTORY_FILE#

Stability: 0 - Deprecated: Use NODE_REPL_HISTORY instead.

Previously in Node.js/io.js v2.x, REPL history was controlled by using a NODE_REPL_HISTORY_FILE environment variable, and the history was saved in JSON format. This variable has now been deprecated, and your REPL history will automatically be converted to using plain text. The new file will be saved to either your home directory, or a directory defined by the NODE_REPL_HISTORY variable, as documented here.

REPL Features#

Inside the REPL, Control+D will exit. Multi-line expressions can be input. Tab completion is supported for both global and local variables.

Core modules will be loaded on-demand into the environment. For example, accessing fs will require() the fs module as global.fs.

The special variable _ (underscore) contains the result of the last expression.

> [ 'a', 'b', 'c' ]
[ 'a', 'b', 'c' ]
> _.length
3
> _ += 1
4

Explicitly setting _ will disable this behavior until the context is reset.

The REPL provides access to any variables in the global scope. You can expose a variable to the REPL explicitly by assigning it to the context object associated with each REPLServer. For example:

// repl_test.js
const repl = require('repl');
var msg = 'message';

repl.start('> ').context.m = msg;

Things in the context object appear as local within the REPL:

$ node repl_test.js
> m
'message'

There are a few special REPL commands:

  • .break - While inputting a multi-line expression, sometimes you get lost or just don't care about completing it. .break will start over.
  • .clear - Resets the context object to an empty object and clears any multi-line expression.
  • .exit - Close the I/O stream, which will cause the REPL to exit.
  • .help - Show this list of special commands.
  • .save - Save the current REPL session to a file

    .save ./file/to/save.js

  • .load - Load a file into the current REPL session.

    .load ./file/to/load.js

The following key combinations in the REPL have these special effects:

  • <ctrl>C - Similar to the .break keyword. Terminates the current command. Press twice on a blank line to forcibly exit.
  • <ctrl>D - Similar to the .exit keyword.
  • <tab> - Show both global and local(scope) variables

Customizing Object displays in the REPL#

The REPL module internally uses util.inspect(), when printing values. However, util.inspect delegates the call to the object's inspect() function, if it has one. You can read more about this delegation here.

For example, if you have defined an inspect() function on an object, like this:

> var obj = {foo: 'this will not show up in the inspect() output'};
undefined
> obj.inspect = () => {
...   return {bar: 'baz'};
... };
[Function]

and try to print obj in REPL, it will invoke the custom inspect() function:

> obj
{bar: 'baz'}

Class: REPLServer#

This inherits from Readline Interface with the following events:

Event: 'exit'#

function () {}

Emitted when the user exits the REPL in any of the defined ways. Namely, typing .exit at the repl, pressing Ctrl+C twice to signal SIGINT, or pressing Ctrl+D to signal 'end' on the input stream.

Example of listening for exit:

replServer.on('exit', () => {
  console.log('Got "exit" event from repl!');
  process.exit();
});

Event: 'reset'#

function (context) {}

Emitted when the REPL's context is reset. This happens when you type .clear. If you start the repl with { useGlobal: true } then this event will never be emitted.

Example of listening for reset:

// Extend the initial repl context.
var replServer = repl.start({ /* options */ });
someExtension.extend(replServer.context);

// When a new context is created extend it as well.
replServer.on('reset', (context) => {
  console.log('repl has a new context');
  someExtension.extend(context);
});

replServer.defineCommand(keyword, cmd)#

Makes a command available in the REPL. The command is invoked by typing a . followed by the keyword. The cmd is an object with the following values:

  • help - help text to be displayed when .help is entered (Optional).
  • action - a function to execute, potentially taking in a string argument, when the command is invoked, bound to the REPLServer instance (Required).

If a function is provided instead of an object for cmd, it is treated as the action.

Example of defining a command:

// repl_test.js
const repl = require('repl');

var replServer = repl.start();
replServer.defineCommand('sayhello', {
  help: 'Say hello',
  action: function(name) {
    this.write(`Hello, ${name}!\n`);
    this.displayPrompt();
  }
});

Example of invoking that command from the REPL:

> .sayhello Node.js User
Hello, Node.js User!

replServer.displayPrompt([preserveCursor])#

Like readline.prompt except also adding indents with ellipses when inside blocks. The preserveCursor argument is passed to readline.prompt. This is used primarily with defineCommand. It's also used internally to render each prompt line.

repl.start(options)#

Returns and starts a REPLServer instance, that inherits from Readline Interface. Accepts an "options" Object that takes the following values:

  • prompt - the prompt to display for all I/O. Defaults to > .

  • input - the readable stream to listen to. Defaults to process.stdin.

  • output - the writable stream to write readline data to. Defaults to process.stdout.

  • terminal - pass true if the stream should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Defaults to checking isTTY on the output stream upon instantiation.

  • eval - function that will be used to eval each given line. Defaults to an async wrapper for eval(). See below for an example of a custom eval.

  • useColors - a boolean which specifies whether or not the writer function should output colors. If a different writer function is set then this does nothing. Defaults to the repl's terminal value.

  • useGlobal - if set to true, then the repl will use the global object, instead of running scripts in a separate context. Defaults to false.

  • ignoreUndefined - if set to true, then the repl will not output the return value of command if it's undefined. Defaults to false.

  • writer - the function to invoke for each command that gets evaluated which returns the formatting (including coloring) to display. Defaults to util.inspect.

  • replMode - controls whether the repl runs all commands in strict mode, default mode, or a hybrid mode ("magic" mode.) Acceptable values are:

    • repl.REPL_MODE_SLOPPY - run commands in sloppy mode.
    • repl.REPL_MODE_STRICT - run commands in strict mode. This is equivalent to prefacing every repl statement with 'use strict'.
    • repl.REPL_MODE_MAGIC - attempt to run commands in default mode. If they fail to parse, re-try in strict mode.

You can use your own eval function if it has the following signature:

function eval(cmd, context, filename, callback) {
  callback(null, result);
}

On tab completion, eval will be called with .scope as an input string. It is expected to return an array of scope names to be used for the auto-completion.

Multiple REPLs may be started against the same running instance of Node.js. Each will share the same global object but will have unique I/O.

Here is an example that starts a REPL on stdin, a Unix socket, and a TCP socket:

const net = require('net');
const repl = require('repl');
var connections = 0;

repl.start({
  prompt: 'Node.js via stdin> ',
  input: process.stdin,
  output: process.stdout
});

net.createServer((socket) => {
  connections += 1;
  repl.start({
    prompt: 'Node.js via Unix socket> ',
    input: socket,
    output: socket
  }).on('exit', () => {
    socket.end();
  })
}).listen('/tmp/node-repl-sock');

net.createServer((socket) => {
  connections += 1;
  repl.start({
    prompt: 'Node.js via TCP socket> ',
    input: socket,
    output: socket
  }).on('exit', () => {
    socket.end();
  });
}).listen(5001);

Running this program from the command line will start a REPL on stdin. Other REPL clients may connect through the Unix socket or TCP socket. telnet is useful for connecting to TCP sockets, and socat can be used to connect to both Unix and TCP sockets.

By starting a REPL from a Unix socket-based server instead of stdin, you can connect to a long-running Node.js process without restarting it.

For an example of running a "full-featured" (terminal) REPL over a net.Server and net.Socket instance, see: https://gist.github.com/2209310

For an example of running a REPL instance over curl(1), see: https://gist.github.com/2053342

Stream#

Stability: 2 - Stable

A stream is an abstract interface implemented by various objects in Node.js. For example a request to an HTTP server is a stream, as is process.stdout. Streams are readable, writable, or both. All streams are instances of EventEmitter.

You can load the Stream base classes by doing require('stream'). There are base classes provided for Readable streams, Writable streams, Duplex streams, and Transform streams.

This document is split up into 3 sections:

  1. The first section explains the parts of the API that you need to be aware of to use streams in your programs.
  2. The second section explains the parts of the API that you need to use if you implement your own custom streams yourself. The API is designed to make this easy for you to do.
  3. The third section goes into more depth about how streams work, including some of the internal mechanisms and functions that you should probably not modify unless you definitely know what you are doing.

API for Stream Consumers#

Streams can be either Readable, Writable, or both (Duplex).

All streams are EventEmitters, but they also have other custom methods and properties depending on whether they are Readable, Writable, or Duplex.

If a stream is both Readable and Writable, then it implements all of the methods and events. So, a Duplex or Transform stream is fully described by this API, though their implementation may be somewhat different.

It is not necessary to implement Stream interfaces in order to consume streams in your programs. If you are implementing streaming interfaces in your own program, please also refer to API for Stream Implementors.

Almost all Node.js programs, no matter how simple, use Streams in some way. Here is an example of using Streams in a Node.js program:

const http = require('http');

var server = http.createServer( (req, res) => {
  // req is an http.IncomingMessage, which is a Readable Stream
  // res is an http.ServerResponse, which is a Writable Stream

  var body = '';
  // we want to get the data as utf8 strings
  // If you don't set an encoding, then you'll get Buffer objects
  req.setEncoding('utf8');

  // Readable streams emit 'data' events once a listener is added
  req.on('data', (chunk) => {
    body += chunk;
  });

  // the end event tells you that you have the entire body
  req.on('end', () => {
    try {
      var data = JSON.parse(body);
    } catch (er) {
      // uh oh!  bad json!
      res.statusCode = 400;
      return res.end(`error: ${er.message}`);
    }

    // write back something interesting to the user:
    res.write(typeof data);
    res.end();
  });
});

server.listen(1337);

// $ curl localhost:1337 -d '{}'
// object
// $ curl localhost:1337 -d '"foo"'
// string
// $ curl localhost:1337 -d 'not json'
// error: Unexpected token o

Class: stream.Duplex#

Duplex streams are streams that implement both the Readable and Writable interfaces.

Examples of Duplex streams include:

Class: stream.Readable#

The Readable stream interface is the abstraction for a source of data that you are reading from. In other words, data comes out of a Readable stream.

A Readable stream will not start emitting data until you indicate that you are ready to receive it.

Readable streams have two "modes": a flowing mode and a paused mode. When in flowing mode, data is read from the underlying system and provided to your program as fast as possible. In paused mode, you must explicitly call stream.read() to get chunks of data out. Streams start out in paused mode.

Note: If no data event handlers are attached, and there are no stream.pipe() destinations, and the stream is switched into flowing mode, then data will be lost.

You can switch to flowing mode by doing any of the following:

You can switch back to paused mode by doing either of the following:

  • If there are no pipe destinations, by calling the stream.pause() method.
  • If there are pipe destinations, by removing any 'data' event handlers, and removing all pipe destinations by calling the stream.unpipe() method.

Note that, for backwards compatibility reasons, removing 'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling stream.pause() will not guarantee that the stream will remain paused once those destinations drain and ask for more data.

Examples of readable streams include:

Event: 'close'#

Emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.

Not all streams will emit the 'close' event.

Event: 'data'#

Attaching a 'data' event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.

If you just want to get all the data out of the stream as fast as possible, this is the best way to do so.

var readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log('got %d bytes of data', chunk.length);
});

Event: 'end'#

This event fires when there will be no more data to read.

Note that the 'end' event will not fire unless the data is completely consumed. This can be done by switching into flowing mode, or by calling stream.read() repeatedly until you get to the end.

var readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log('got %d bytes of data', chunk.length);
});
readable.on('end', () => {
  console.log('there will be no more data.');
});

Event: 'error'#

Emitted if there was an error receiving data.

Event: 'readable'#

When a chunk of data can be read from the stream, it will emit a 'readable' event.

In some cases, listening for a 'readable' event will cause some data to be read into the internal buffer from the underlying system, if it hadn't already.

var readable = getReadableStreamSomehow();
readable.on('readable', () => {
  // there is some data to read now
});

Once the internal buffer is drained, a 'readable' event will fire again when more data is available.

The 'readable' event is not emitted while the stream is in flowing mode, with the sole exception of the last one, at end-of-stream.

The 'readable' event indicates that the stream has new information: either new data is available or the end of the stream has been reached. In the former case, stream.read() will return that data. In the latter case, stream.read() will return null. For instance, in the following example, foo.txt is an empty file:

const fs = require('fs');
var rr = fs.createReadStream('foo.txt');
rr.on('readable', () => {
  console.log('readable:', rr.read());
});
rr.on('end', () => {
  console.log('end');
});

The output of running this script is:

$ node test.js
readable: null
end

readable.isPaused()#

This method returns whether or not the readable has been explicitly paused by client code (using stream.pause() without a corresponding stream.resume()).

var readable = new stream.Readable

readable.isPaused() // === false
readable.pause()
readable.isPaused() // === true
readable.resume()
readable.isPaused() // === false

readable.pause()#

  • Return: this

This method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

var readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log('got %d bytes of data', chunk.length);
  readable.pause();
  console.log('there will be no more data for 1 second');
  setTimeout(() => {
    console.log('now data will start flowing again');
    readable.resume();
  }, 1000);
});

readable.pipe(destination[, options])#

This method pulls all the data out of a readable stream, and writes it to the supplied destination, automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.

Multiple destinations can be piped to safely.

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'
readable.pipe(writable);

This function returns the destination stream, so you can set up pipe chains like so:

var r = fs.createReadStream('file.txt');
var z = zlib.createGzip();
var w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);

For example, emulating the Unix cat command:

process.stdin.pipe(process.stdout);

By default stream.end() is called on the destination when the source stream emits 'end', so that destination is no longer writable. Pass { end: false } as options to keep the destination stream open.

This keeps writer open so that "Goodbye" can be written at the end.

reader.pipe(writer, { end: false });
reader.on('end', () => {
  writer.end('Goodbye\n');
});

Note that process.stderr and process.stdout are never closed until the process exits, regardless of the specified options.

readable.read([size])#

The read() method pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.

If you pass in a size argument, then it will return that many bytes. If size bytes are not available, then it will return null, unless we've ended, in which case it will return the data remaining in the buffer.

If you do not specify a size argument, then it will return all the data in the internal buffer.

This method should only be called in paused mode. In flowing mode, this method is called automatically until the internal buffer is drained.

var readable = getReadableStreamSomehow();
readable.on('readable', () => {
  var chunk;
  while (null !== (chunk = readable.read())) {
    console.log('got %d bytes of data', chunk.length);
  }
});

If this method returns a data chunk, then it will also trigger the emission of a 'data' event.

Note that calling stream.read([size]) after the 'end' event has been triggered will return null. No runtime error will be raised.

readable.resume()#

  • Return: this

This method will cause the readable stream to resume emitting 'data' events.

This method will switch the stream into flowing mode. If you do not want to consume the data from a stream, but you do want to get to its 'end' event, you can call stream.resume() to open the flow of data.

var readable = getReadableStreamSomehow();
readable.resume();
readable.on('end', () => {
  console.log('got to the end, but did not read anything');
});

readable.setEncoding(encoding)#

  • encoding <String> The encoding to use.
  • Return: this

Call this function to cause the stream to return strings of the specified encoding instead of Buffer objects. For example, if you do readable.setEncoding('utf8'), then the output data will be interpreted as UTF-8 data, and returned as strings. If you do readable.setEncoding('hex'), then the data will be encoded in hexadecimal string format.

This properly handles multi-byte characters that would otherwise be potentially mangled if you simply pulled the Buffers directly and called buf.toString(encoding) on them. If you want to read the data as strings, always use this method.

You can also disable any encoding with readable.setEncoding(null). This is useful when dealing with binary data or with large multi-byte strings spread out over multiple chunks.

var readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('got %d characters of string data', chunk.length);
});

readable.unpipe([destination])#

This method will remove the hooks set up for a previous stream.pipe() call.

If the destination is not specified, then all pipes are removed.

If the destination is specified, but no pipe is set up for it, then this is a no-op.

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second
readable.pipe(writable);
setTimeout(() => {
  console.log('stop writing to file.txt');
  readable.unpipe(writable);
  console.log('manually close the file stream');
  writable.end();
}, 1000);

readable.unshift(chunk)#

This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source, so that the stream can be passed on to some other party.

Note that stream.unshift(chunk) cannot be called after the 'end' event has been triggered; a runtime error will be raised.

If you find that you must often call stream.unshift(chunk) in your programs, consider implementing a Transform stream instead. (See API for Stream Implementors.)

// Pull off a header delimited by \n\n
// use unshift() if we get too much
// Call the callback with (error, header, stream)
const StringDecoder = require('string_decoder').StringDecoder;
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  var decoder = new StringDecoder('utf8');
  var header = '';
  function onReadable() {
    var chunk;
    while (null !== (chunk = stream.read())) {
      var str = decoder.write(chunk);
      if (str.match(/\n\n/)) {
        // found the header boundary
        var split = str.split(/\n\n/);
        header += split.shift();
        var remaining = split.join('\n\n');
        var buf = new Buffer(remaining, 'utf8');
        if (buf.length)
          stream.unshift(buf);
        stream.removeListener('error', callback);
        stream.removeListener('readable', onReadable);
        // now the body of the message can be read from the stream.
        callback(null, header, stream);
      } else {
        // still reading the header.
        header += str;
      }
    }
  }
}

Note that, unlike stream.push(chunk), stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if unshift() is called during a read (i.e. from within a stream._read() implementation on a custom stream). Following the call to unshift() with an immediate stream.push('') will reset the reading state appropriately, however it is best to simply avoid calling unshift() while in the process of performing a read.

readable.wrap(stream)#

  • stream <Stream> An "old style" readable stream

Versions of Node.js prior to v0.10 had streams that did not implement the entire Streams API as it is today. (See Compatibility for more information.)

If you are using an older Node.js library that emits 'data' events and has a stream.pause() method that is advisory only, then you can use the wrap() method to create a Readable stream that uses the old stream as its data source.

You will very rarely ever need to call this function, but it exists as a convenience for interacting with old Node.js programs and libraries.

For example:

const OldReader = require('./old-api-module.js').OldReader;
const Readable = require('stream').Readable;
const oreader = new OldReader;
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});

Class: stream.Transform#

Transform streams are Duplex streams where the output is in some way computed from the input. They implement both the Readable and Writable interfaces.

Examples of Transform streams include:

Class: stream.Writable#

The Writable stream interface is an abstraction for a destination that you are writing data to.

Examples of writable streams include:

Event: 'drain'#

If a stream.write(chunk) call returns false, then the 'drain' event will indicate when it is appropriate to begin writing more data to the stream.

// Write the data to the supplied writable stream one million times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
  var i = 1000000;
  write();
  function write() {
    var ok = true;
    do {
      i -= 1;
      if (i === 0) {
        // last time!
        writer.write(data, encoding, callback);
      } else {
        // see if we should continue, or wait
        // don't pass the callback, because we're not done yet.
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // had to stop early!
      // write some more once it drains
      writer.once('drain', write);
    }
  }
}

Event: 'error'#

Emitted if there was an error when writing or piping data.

Event: 'finish'#

When the stream.end() method has been called, and all data has been flushed to the underlying system, this event is emitted.

var writer = getWritableStreamSomehow();
for (var i = 0; i < 100; i++) {
  writer.write(`hello, #${i}!\n`);
}
writer.end('this is the end\n');
writer.on('finish', () => {
  console.error('all writes are now complete.');
});

Event: 'pipe'#

This is emitted whenever the stream.pipe() method is called on a readable stream, adding this writable to its set of destinations.

var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();
writer.on('pipe', (src) => {
  console.error('something is piping into the writer');
  assert.equal(src, reader);
});
reader.pipe(writer);

Event: 'unpipe'#

This is emitted whenever the stream.unpipe() method is called on a readable stream, removing this writable from its set of destinations.

var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();
writer.on('unpipe', (src) => {
  console.error('something has stopped piping into the writer');
  assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);

writable.cork()#

Forces buffering of all writes.

Buffered data will be flushed either when stream.uncork() or stream.end() is called.
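
A minimal sketch of batching several small writes, where stream is assumed to be any writable stream (a socket, for instance); deferring uncork() to the next tick lets other writes issued in the same tick join the batch:

stream.cork();
stream.write('field1=value1&');
stream.write('field2=value2&');
stream.write('field3=value3');
// The buffered chunks are flushed together once uncork() (or end()) is called.
process.nextTick(() => stream.uncork());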

writable.end([chunk][, encoding][, callback])#

  • chunk <String> | <Buffer> Optional data to write
  • encoding <String> The encoding, if chunk is a String
  • callback <Function> Optional callback for when the stream is finished

Call this method when no more data will be written to the stream. If supplied, the callback is attached as a listener on the 'finish' event.

Calling stream.write() after calling stream.end() will raise an error.

// write 'hello, ' and then end with 'world!'
var file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// writing more now is not allowed!

writable.setDefaultEncoding(encoding)#

  • encoding <String> The new default encoding

Sets the default encoding for a writable stream.

writable.uncork()#

Flushes all data buffered since the stream.cork() call.

writable.write(chunk[, encoding][, callback])#

  • chunk <String> | <Buffer> The data to write
  • encoding <String> The encoding, if chunk is a String
  • callback <Function> Callback for when this chunk of data is flushed
  • Returns: <Boolean> true if the data was handled completely.

This method writes some data to the underlying system, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback may or may not be called with the error as its first argument. To detect write errors, listen for the 'error' event.

The return value indicates if you should continue writing right now. If the data had to be buffered internally, then it will return false. Otherwise, it will return true.

This return value is strictly advisory. You MAY continue to write, even if it returns false. However, writes will be buffered in memory, so it is best not to do this excessively. Instead, wait for the 'drain' event before writing more data.

API for Stream Implementors#

To implement any sort of stream, the pattern is the same:

  1. Extend the appropriate parent class in your own subclass. (The util.inherits() method is particularly helpful for this.)
  2. Call the appropriate parent class constructor in your constructor, to be sure that the internal mechanisms are set up properly.
  3. Implement one or more specific methods, as detailed below.

The class to extend and the method(s) to implement depend on the sort of stream class you are writing:

Use-case                                        Class       Method(s) to implement
Reading only                                    Readable    _read
Writing only                                    Writable    _write, _writev
Reading and writing                             Duplex      _read, _write, _writev
Operate on written data, then read the result   Transform   _transform, _flush

In your implementation code, it is very important to never call the methods described in API for Stream Consumers. Otherwise, you can potentially cause adverse side effects in programs that consume your streaming interfaces.

Class: stream.Duplex#

A "duplex" stream is one that is both Readable and Writable, such as a TCP socket connection.

Note that stream.Duplex is an abstract class designed to be extended with an underlying implementation of the stream._read(size) and stream._write(chunk, encoding, callback) methods as you would with a Readable or Writable stream class.

Since JavaScript doesn't have multiple prototypal inheritance, this class prototypally inherits from Readable, and then parasitically from Writable. It is thus up to the user to implement both the low-level stream._read(n) method and the low-level stream._write(chunk, encoding, callback) method on extension duplex classes.

new stream.Duplex(options)#

  • options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
    • allowHalfOpen <Boolean> Default = true. If set to false, then the stream will automatically end the readable side when the writable side ends and vice versa.
    • readableObjectMode <Boolean> Default = false. Sets objectMode for readable side of the stream. Has no effect if objectMode is true.
    • writableObjectMode <Boolean> Default = false. Sets objectMode for writable side of the stream. Has no effect if objectMode is true.

In classes that extend the Duplex class, make sure to call the constructor so that the buffering settings can be properly initialized.
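
A minimal sketch of such a subclass, essentially a hand-rolled pass-through: whatever is written to the writable side is pushed straight to the readable side (Loopback is a hypothetical name used only for illustration):

const Duplex = require('stream').Duplex;
const util = require('util');

util.inherits(Loopback, Duplex);

function Loopback(options) {
  // call the parent constructor so buffering is initialized
  Duplex.call(this, options);
}

Loopback.prototype._write = function(chunk, encoding, callback) {
  // hand the written chunk to the readable side
  this.push(chunk);
  callback();
};

Loopback.prototype._read = function(size) {
  // data is pushed from _write(); nothing to fetch here
};

// Usage:
// process.stdin.pipe(new Loopback()).pipe(process.stdout);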

Class: stream.PassThrough#

This is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is mainly for examples and testing, but there are occasionally use cases where it can come in handy as a building block for novel sorts of streams.
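
For example, a sketch of using PassThrough as an observation point in a pipe chain, counting the bytes that flow from stdin to stdout:

const PassThrough = require('stream').PassThrough;

const tap = new PassThrough();
var total = 0;

tap.on('data', (chunk) => {
  total += chunk.length;
});
tap.on('end', () => {
  console.error('%d bytes passed through', total);
});

process.stdin.pipe(tap).pipe(process.stdout);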

Class: stream.Readable#

stream.Readable is an abstract class designed to be extended with an underlying implementation of the stream._read(size) method.

Please see API for Stream Consumers for how to consume streams in your programs. What follows is an explanation of how to implement Readable streams in your programs.

new stream.Readable(options)#

  • options <Object>
    • highWaterMark <Number> The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. Default = 16384 (16kb), or 16 for objectMode streams
    • encoding <String> If specified, then buffers will be decoded to strings using the specified encoding. Default = null
    • objectMode <Boolean> Whether this stream should behave as a stream of objects. Meaning that stream.read(n) returns a single value instead of a Buffer of size n. Default = false
    • read <Function> Implementation for the stream._read() method.

In classes that extend the Readable class, make sure to call the Readable constructor so that the buffering settings can be properly initialized.

readable._read(size)#

  • size <Number> Number of bytes to read asynchronously

Note: Implement this method, but do NOT call it directly.

This method is prefixed with an underscore because it is internal to the class that defines it and should only be called by the internal Readable class methods. All Readable stream implementations must provide a _read method to fetch data from the underlying resource.

When _read() is called, if data is available from the resource, the _read() implementation should start pushing that data into the read queue by calling this.push(dataChunk). _read() should continue reading from the resource and pushing data until push returns false, at which point it should stop reading from the resource. Only when _read() is called again after it has stopped should it start reading more data from the resource and pushing that data onto the queue.

Note: once the _read() method is called, it will not be called again until the stream.push() method is called.

The size argument is advisory. Implementations where a "read" is a single call that returns data can use this to know how much data to fetch. Implementations where that is not relevant, such as TCP or TLS, may ignore this argument and simply provide data whenever it becomes available. There is no need, for example, to "wait" until size bytes are available before calling stream.push(chunk).

readable.push(chunk[, encoding])#

  • chunk <Buffer> | <Null> | <String> Chunk of data to push into the read queue
  • encoding <String> Encoding of String chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'
  • return <Boolean> Whether or not more pushes should be performed

Note: This method should be called by Readable implementors, NOT by consumers of Readable streams.

If a value other than null is passed, the push() method adds a chunk of data into the queue for subsequent stream processors to consume. If null is passed, it signals the end of the stream (EOF), after which no more data can be written.

The data added with push() can be pulled out by calling the stream.read() method when the 'readable' event fires.

This API is designed to be as flexible as possible. For example, you may be wrapping a lower-level source which has some sort of pause/resume mechanism, and a data callback. In those cases, you could wrap the low-level source object by doing something like this:

// source is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.

util.inherits(SourceWrapper, Readable);

function SourceWrapper(options) {
  Readable.call(this, options);

  this._source = getLowlevelSourceObject();

  // Every time there's data, we push it into the internal buffer.
  this._source.ondata = (chunk) => {
    // if push() returns false, then we need to stop reading from source
    if (!this.push(chunk))
      this._source.readStop();
  };

  // When the source ends, we push the EOF-signaling `null` chunk
  this._source.onend = () => {
    this.push(null);
  };
}

// _read will be called when the stream wants to pull more data in
// the advisory size argument is ignored in this case.
SourceWrapper.prototype._read = function(size) {
  this._source.readStart();
};

Example: A Counting Stream#

This is a basic example of a Readable stream. It emits the numerals from 1 to 1,000,000 in ascending order, and then ends.

const Readable = require('stream').Readable;
const util = require('util');
util.inherits(Counter, Readable);

function Counter(opt) {
  Readable.call(this, opt);
  this._max = 1000000;
  this._index = 1;
}

Counter.prototype._read = function() {
  var i = this._index++;
  if (i > this._max)
    this.push(null);
  else {
    var str = '' + i;
    var buf = new Buffer(str, 'ascii');
    this.push(buf);
  }
};

Example: SimpleProtocol v1 (Sub-optimal)#

This is similar to the parseHeader function described here, but implemented as a custom stream. Also, note that this implementation does not convert the incoming data to a string.

However, this would be better implemented as a Transform stream. See SimpleProtocol v2 for a better implementation.

// A parser for a simple data protocol.
// The "header" is a JSON object, followed by 2 \n characters, and
// then a message body.
//
// NOTE: This can be done more simply as a Transform stream!
// Using Readable directly for this is sub-optimal. See the
// alternative example below under the Transform section.

const Readable = require('stream').Readable;
const util = require('util');

util.inherits(SimpleProtocol, Readable);

function SimpleProtocol(source, options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(source, options);

  Readable.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;

  // source is a readable stream, such as a socket or file
  this._source = source;

  source.on('end', () => {
    this.push(null);
  });

  // give it a kick whenever the source is readable
  // read(0) will not consume any bytes
  source.on('readable', () => {
    this.read(0);
  });

  this._rawHeader = [];
  this.header = null;
}

SimpleProtocol.prototype._read = function(n) {
  if (!this._inBody) {
    var chunk = this._source.read();

    // if the source doesn't have data, we don't have data yet.
    if (chunk === null)
      return this.push('');

    // check if the chunk has a \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

    if (split === -1) {
      // still waiting for the \n\n
      // stash the chunk, and try again.
      this._rawHeader.push(chunk);
      this.push('');
    } else {
      this._inBody = true;
      var h = chunk.slice(0, split);
      this._rawHeader.push(h);
      var header = Buffer.concat(this._rawHeader).toString();
      try {
        this.header = JSON.parse(header);
      } catch (er) {
        this.emit('error', new Error('invalid simple protocol data'));
        return;
      }
      // now, because we got some extra data, unshift the rest
      // back into the read queue so that our consumer will see it.
      var b = chunk.slice(split);
      this.unshift(b);
      // calling unshift by itself does not reset the reading state
      // of the stream; since we're inside _read, doing an additional
      // push('') will reset the state appropriately.
      this.push('');

      // and let them know that we are done parsing the header.
      this.emit('header', this.header);
    }
  } else {
    // from there on, just provide the data to our consumer.
    // careful not to push(null), since that would indicate EOF.
    var chunk = this._source.read();
    if (chunk) this.push(chunk);
  }
};

// Usage:
// var parser = new SimpleProtocol(source);
// Now parser is a readable stream that will emit 'header'
// with the parsed header data.

Class: stream.Transform#

A "transform" stream is a duplex stream where the output is causally connected in some way to the input, such as a zlib stream or a crypto stream.

There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.

Rather than implement the stream._read() and stream._write() methods, Transform classes must implement the stream._transform() method, and may optionally also implement the stream._flush() method. (See below.)

new stream.Transform(options)#

In classes that extend the Transform class, make sure to call the constructor so that the buffering settings can be properly initialized.

Events: 'finish' and 'end'#

The 'finish' and 'end' events are from the parent Writable and Readable classes respectively. The 'finish' event is fired after stream.end() is called and all chunks have been processed by stream._transform(). The 'end' event is fired after all data has been output, which occurs after the callback in stream._flush() has been called.

transform._flush(callback)#

  • callback <Function> Call this function (optionally with an error argument) when you are done flushing any remaining data.

Note: This function MUST NOT be called directly. It MAY be implemented by child classes, and if so, will be called by the internal Transform class methods only.

In some cases, your transform operation may need to emit a bit more data at the end of the stream. For example, a Zlib compression stream will store up some internal state so that it can optimally compress the output. At the end, however, it needs to do the best it can with what is left, so that the data will be complete.

In those cases, you can implement a _flush() method, which will be called at the very end, after all the written data is consumed, but before emitting 'end' to signal the end of the readable side. Just like with stream._transform(), call transform.push(chunk) zero or more times, as appropriate, and call callback when the flush operation is complete.

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.
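
A minimal sketch of a Transform that uses _flush(): it counts newline characters as chunks are written and emits the total as one final chunk before 'end' (LineCounter is a hypothetical name used only for illustration):

const Transform = require('stream').Transform;
const util = require('util');

util.inherits(LineCounter, Transform);

function LineCounter(options) {
  Transform.call(this, options);
  this._lines = 0;
}

LineCounter.prototype._transform = function(chunk, encoding, callback) {
  for (var i = 0; i < chunk.length; i++) {
    if (chunk[i] === 10) // '\n'
      this._lines++;
  }
  callback();
};

LineCounter.prototype._flush = function(callback) {
  // emit one final chunk before 'end' is signaled
  this.push(`${this._lines}\n`);
  callback();
};

// Usage:
// process.stdin.pipe(new LineCounter()).pipe(process.stdout);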

transform._transform(chunk, encoding, callback)#

  • chunk <Buffer> | <String> The chunk to be transformed. Will always be a buffer unless the decodeStrings option was set to false.
  • encoding <String> If the chunk is a string, then this is the encoding type. If chunk is a buffer, then this is the special value 'buffer'; ignore it in this case.
  • callback <Function> Call this function (optionally with an error argument and data) when you are done processing the supplied chunk.

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Transform class methods only.

All Transform stream implementations must provide a _transform() method to accept input and produce output.

_transform() should do whatever has to be done in this specific Transform class, to handle the bytes being written, and pass them off to the readable portion of the interface. Do asynchronous I/O, process things, and so on.

Call transform.push(outputChunk) 0 or more times to generate output from this input chunk, depending on how much data you want to output as a result of this chunk.

Call the callback function only when the current chunk is completely consumed. Note that there may or may not be output as a result of any particular input chunk. If you supply a second argument to the callback it will be passed to the push method. In other words the following are equivalent:

transform.prototype._transform = function (data, encoding, callback) {
  this.push(data);
  callback();
};

transform.prototype._transform = function (data, encoding, callback) {
  callback(null, data);
};

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

Example: SimpleProtocol parser v2#

The example here of a simple protocol parser can be implemented simply by using the higher level Transform stream class, similar to the parseHeader and SimpleProtocol v1 examples.

In this example, rather than providing the input as an argument, it would be piped into the parser, which is a more idiomatic Node.js stream approach.

const util = require('util');
const Transform = require('stream').Transform;
util.inherits(SimpleProtocol, Transform);

function SimpleProtocol(options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(options);

  Transform.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;
  this._rawHeader = [];
  this.header = null;
}

SimpleProtocol.prototype._transform = function(chunk, encoding, done) {
  if (!this._inBody) {
    // check if the chunk has a \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

    if (split === -1) {
      // still waiting for the \n\n
      // stash the chunk, and try again.
      this._rawHeader.push(chunk);
    } else {
      this._inBody = true;
      var h = chunk.slice(0, split);
      this._rawHeader.push(h);
      var header = Buffer.concat(this._rawHeader).toString();
      try {
        this.header = JSON.parse(header);
      } catch (er) {
        this.emit('error', new Error('invalid simple protocol data'));
        return;
      }
      // and let them know that we are done parsing the header.
      this.emit('header', this.header);

      // now, because we got some extra data, emit this first.
      this.push(chunk.slice(split));
    }
  } else {
    // from there on, just provide the data to our consumer as-is.
    this.push(chunk);
  }
  done();
};

// Usage:
// var parser = new SimpleProtocol();
// source.pipe(parser)
// Now parser is a readable stream that will emit 'header'
// with the parsed header data.

Class: stream.Writable#

stream.Writable is an abstract class designed to be extended with an underlying implementation of the stream._write(chunk, encoding, callback) method.

Please see API for Stream Consumers for how to consume writable streams in your programs. What follows is an explanation of how to implement such streams.

new stream.Writable(options)#

In classes that extend the Writable class, make sure to call the constructor so that the buffering settings can be properly initialized.

writable._write(chunk, encoding, callback)#

  • chunk <Buffer> | <String> The chunk to be written. Will always be a buffer unless the decodeStrings option was set to false.
  • encoding <String> If the chunk is a string, then this is the encoding type. If the chunk is a buffer, then this is the special value 'buffer'; ignore it in that case.
  • callback <Function> Call this function (optionally with an error argument) when you are done processing the supplied chunk.

All Writable stream implementations must provide a stream._write() method to send data to the underlying resource.

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Writable class methods only.

Call the callback using the standard callback(error) pattern to signal that the write completed successfully or with an error.

If the decodeStrings flag is set in the constructor options, then chunk may be a string rather than a Buffer, and encoding will indicate the sort of string that it is. This is to support implementations that have an optimized handling for certain string data encodings. If you do not explicitly set the decodeStrings option to false, then you can safely ignore the encoding argument, and assume that chunk will always be a Buffer.

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.
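As an illustration, here is a minimal sketch of a Writable implementation; the CounterWriter name is hypothetical and not part of Node.js. It calls the Writable constructor and implements _write():

const Writable = require('stream').Writable;
const util = require('util');

// Hypothetical example class: counts the bytes written to it.
function CounterWriter(options) {
  if (!(this instanceof CounterWriter))
    return new CounterWriter(options);

  Writable.call(this, options); // initializes the buffering settings
  this.bytesSeen = 0;
}
util.inherits(CounterWriter, Writable);

CounterWriter.prototype._write = function(chunk, encoding, callback) {
  // chunk is a Buffer unless the decodeStrings option was set to false
  this.bytesSeen += chunk.length;
  callback(); // signal that this chunk has been handled
};

// Usage:
// var counter = new CounterWriter();
// process.stdin.pipe(counter);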

writable._writev(chunks, callback)#

  • chunks <Array> The chunks to be written. Each chunk has the following format: { chunk: ..., encoding: ... }.
  • callback <Function> Call this function (optionally with an error argument) when you are done processing the supplied chunks.

Note: This function MUST NOT be called directly. It may be implemented by child classes, and called by the internal Writable class methods only.

This function is completely optional to implement. In most cases it is unnecessary. If implemented, it will be called with all the chunks that are buffered in the write queue.

Simplified Constructor API#

In simple cases, a stream can now be constructed without inheritance.

This can be done by passing the appropriate methods as constructor options:

Examples:

Duplex#

var duplex = new stream.Duplex({
  read: function(n) {
    // sets this._read under the hood

    // push data onto the read queue, passing null
    // will signal the end of the stream (EOF)
    this.push(chunk);
  },
  write: function(chunk, encoding, next) {
    // sets this._write under the hood

    // An optional error can be passed as the first argument
    next()
  }
});

// or

var duplex = new stream.Duplex({
  read: function(n) {
    // sets this._read under the hood

    // push data onto the read queue, passing null
    // will signal the end of the stream (EOF)
    this.push(chunk);
  },
  writev: function(chunks, next) {
    // sets this._writev under the hood

    // An optional error can be passed as the first argument
    next()
  }
});

Readable#

var readable = new stream.Readable({
  read: function(n) {
    // sets this._read under the hood

    // push data onto the read queue, passing null
    // will signal the end of the stream (EOF)
    this.push(chunk);
  }
});

Transform#

var transform = new stream.Transform({
  transform: function(chunk, encoding, next) {
    // sets this._transform under the hood

    // generate output as many times as needed
    // this.push(chunk);

    // call when the current chunk is consumed
    next();
  },
  flush: function(done) {
    // sets this._flush under the hood

    // generate output as many times as needed
    // this.push(chunk);

    done();
  }
});

Writable#

var writable = new stream.Writable({
  write: function(chunk, encoding, next) {
    // sets this._write under the hood

    // An optional error can be passed as the first argument
    next()
  }
});

// or

var writable = new stream.Writable({
  writev: function(chunks, next) {
    // sets this._writev under the hood

    // An optional error can be passed as the first argument
    next()
  }
});

Streams: Under the Hood#

Buffering#

Both Writable and Readable streams will buffer data on an internal object which can be retrieved from _writableState.getBuffer() or _readableState.buffer, respectively.

The amount of data that will potentially be buffered depends on the highWaterMark option which is passed into the constructor.

Buffering in Readable streams happens when the implementation calls stream.push(chunk). If the consumer of the Stream does not call stream.read(), then the data will sit in the internal queue until it is consumed.

Buffering in Writable streams happens when the user calls stream.write(chunk) repeatedly, even when it returns false.

The purpose of streams, especially with the stream.pipe() method, is to limit the buffering of data to acceptable levels, so that sources and destinations of varying speed will not overwhelm the available memory.
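For instance, here is a minimal sketch of respecting that limit when writing a large amount of data (the output file name is hypothetical): stop writing when stream.write() returns false, and resume on 'drain'.

const fs = require('fs');

const out = fs.createWriteStream('output.txt');

function writeMany(count) {
  var i = 0;
  (function writeSome() {
    var ok = true;
    while (i < count && ok) {
      // write() returns false once the internal buffer exceeds highWaterMark
      ok = out.write('line ' + i + '\n');
      i++;
    }
    if (i < count) {
      // wait for the buffer to drain before writing more
      out.once('drain', writeSome);
    } else {
      out.end();
    }
  })();
}

writeMany(100000);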

Compatibility with Older Node.js Versions#

In versions of Node.js prior to v0.10, the Readable stream interface was simpler, but also less powerful and less useful.

  • Rather than waiting for you to call the stream.read() method, 'data' events would start emitting immediately. If you needed to do some I/O to decide how to handle data, then you had to store the chunks in some kind of buffer so that they would not be lost.
  • The stream.pause() method was advisory, rather than guaranteed. This meant that you still had to be prepared to receive 'data' events even when the stream was in a paused state.

In Node.js v0.10, the Readable class was added. For backwards compatibility with older Node.js programs, Readable streams switch into "flowing mode" when a 'data' event handler is added, or when the stream.resume() method is called. The effect is that, even if you are not using the new stream.read() method and 'readable' event, you no longer have to worry about losing 'data' chunks.

Most programs will continue to function normally. However, this introduces an edge case in the following conditions:

  • No 'data' event handler is added.
  • The stream.resume() method is never called.
  • The stream is not piped to any writable destination.

For example, consider the following code:

// WARNING!  BROKEN!
net.createServer((socket) => {

  // we add an 'end' listener, but never consume the data
  socket.on('end', () => {
    // It will never get here.
    socket.end('I got your message (but didnt read it)\n');
  });

}).listen(1337);

In versions of Node.js prior to v0.10, the incoming message data would be simply discarded. However, in Node.js v0.10 and beyond, the socket will remain paused forever.

The workaround in this situation is to call the stream.resume() method to start the flow of data:

// Workaround
net.createServer((socket) => {

  socket.on('end', () => {
    socket.end('I got your message (but didnt read it)\n');
  });

  // start the flow of data, discarding it.
  socket.resume();

}).listen(1337);

In addition to new Readable streams switching into flowing mode, pre-v0.10 style streams can be wrapped in a Readable class using the stream.wrap() method.

Object Mode#

Normally, Streams operate on Strings and Buffers exclusively.

Streams that are in object mode can emit generic JavaScript values other than Buffers and Strings.

A Readable stream in object mode will always return a single item from a call to stream.read(size), regardless of what the size argument is.

A Writable stream in object mode will always ignore the encoding argument to stream.write(data, encoding).

The special value null still retains its special value for object mode streams. That is, for object mode readable streams, null as a return value from stream.read() indicates that there is no more data, and stream.push(null) will signal the end of stream data (EOF).

No streams in Node.js core are object mode streams. This pattern is only used by userland streaming libraries.

You should set objectMode on the options object passed to your stream child class constructor. Setting objectMode mid-stream is not safe.

For Duplex streams objectMode can be set exclusively for readable or writable side with readableObjectMode and writableObjectMode respectively. These options can be used to implement parsers and serializers with Transform streams.

const util = require('util');
const StringDecoder = require('string_decoder').StringDecoder;
const Transform = require('stream').Transform;
util.inherits(JSONParseStream, Transform);

// Gets \n-delimited JSON string data, and emits the parsed objects
function JSONParseStream() {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream();

  Transform.call(this, { readableObjectMode : true });

  this._buffer = '';
  this._decoder = new StringDecoder('utf8');
}

JSONParseStream.prototype._transform = function(chunk, encoding, cb) {
  this._buffer += this._decoder.write(chunk);
  // split on newlines
  var lines = this._buffer.split(/\r?\n/);
  // keep the last partial line buffered
  this._buffer = lines.pop();
  for (var l = 0; l < lines.length; l++) {
    var line = lines[l];
    try {
      var obj = JSON.parse(line);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // push the parsed object out to the readable consumer
    this.push(obj);
  }
  cb();
};

JSONParseStream.prototype._flush = function(cb) {
  // Just handle any leftover
  var rem = this._buffer.trim();
  if (rem) {
    try {
      var obj = JSON.parse(rem);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // push the parsed object out to the readable consumer
    this.push(obj);
  }
  cb();
};

stream.read(0)#

There are some cases where you want to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In that case, you can call stream.read(0), which will always return null.

If the internal read buffer is below the highWaterMark, and the stream is not currently reading, then calling stream.read(0) will trigger a low-level stream._read() call.

There is almost never a need to do this. However, you will see some cases in Node.js's internals where this is done, particularly in the Readable stream class internals.

stream.push('')#

Pushing a zero-byte string or Buffer (when not in Object mode) has an interesting side effect. Because it is a call to stream.push(), it will end the reading process. However, it does not add any data to the readable buffer, so there's nothing for a user to consume.

Very rarely, there are cases where you have no data to provide now, but the consumer of your stream (or, perhaps, another bit of your own code) will know when to check again, by calling stream.read(0). In those cases, you may call stream.push('').

So far, the only use case for this functionality is in the tls.CryptoStream class, which is deprecated in Node.js/io.js v1.0. If you find that you have to use stream.push(''), please consider another approach, because it almost certainly indicates that something is horribly wrong.

StringDecoder#

Stability: 2 - Stable

To use this module, do require('string_decoder'). StringDecoder decodes a buffer to a string. It is a simple interface to buffer.toString() but provides additional support for utf8.

const StringDecoder = require('string_decoder').StringDecoder;
const decoder = new StringDecoder('utf8');

const cent = new Buffer([0xC2, 0xA2]);
console.log(decoder.write(cent));

const euro = new Buffer([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro));

Class: StringDecoder#

Accepts a single argument, encoding, which defaults to 'utf8'.

decoder.end()#

Returns any trailing bytes that were left in the buffer.

decoder.write(buffer)#

Returns a decoded string.
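For example, a minimal sketch showing that a multi-byte character split across two writes is held back until the remaining bytes arrive, and that end() flushes whatever is left in the buffer:

const StringDecoder = require('string_decoder').StringDecoder;
const decoder = new StringDecoder('utf8');

const euro = new Buffer([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro.slice(0, 1))); // '' (incomplete character is buffered)
console.log(decoder.write(euro.slice(1)));    // '€'
console.log(decoder.end());                   // '' (nothing was left in the buffer)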

Timers#

Stability: 3 - Locked

All of the timer functions are globals. You do not need to require() this module in order to use them.

clearImmediate(immediateObject)#

Stops an immediateObject, as created by setImmediate, from triggering.

clearInterval(intervalObject)#

Stops an intervalObject, as created by setInterval, from triggering.

clearTimeout(timeoutObject)#

Prevents a timeoutObject, as created by setTimeout, from triggering.

ref()#

If a timer was previously unref()d, then ref() can be called to explicitly request that the timer hold the program open. If the timer is already ref()d, calling ref() again will have no effect.

Returns the timer.

setImmediate(callback[, arg][, ...])#

Schedules "immediate" execution of callback after I/O events' callbacks and before timers set by setTimeout and setInterval are triggered. Returns an immediateObject for possible use with clearImmediate. Additional optional arguments may be passed to the callback.

Callbacks for immediates are queued in the order in which they were created. The entire callback queue is processed every event loop iteration. If an immediate is queued from inside an executing callback, that immediate won't fire until the next event loop iteration.

If callback is not a function, setImmediate() will throw immediately.

setInterval(callback, delay[, arg][, ...])#

Schedules repeated execution of callback every delay milliseconds. Returns an intervalObject for possible use with clearInterval. Additional optional arguments may be passed to the callback.

To follow browser behavior, when using delays larger than 2147483647 milliseconds (approximately 25 days) or less than 1, Node.js will use 1 as the delay.

If callback is not a function, setInterval() will throw immediately.
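For example, a minimal sketch that runs a callback once a second and cancels it after five runs:

var runs = 0;
const intervalObject = setInterval(() => {
  runs++;
  console.log('run ' + runs);
  if (runs === 5)
    clearInterval(intervalObject);
}, 1000);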

setTimeout(callback, delay[, arg][, ...])#

Schedules execution of a one-time callback after delay milliseconds. Returns a timeoutObject for possible use with clearTimeout. Additional optional arguments may be passed to the callback.

The callback will likely not be invoked in precisely delay milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering. The callback will be called as close as possible to the time specified.

To follow browser behavior, when using delays larger than 2147483647 milliseconds (approximately 25 days) or less than 1, the timeout is executed immediately, as if the delay was set to 1.

If callback is not a function, setTimeout() will throw immediately.
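For example, a minimal sketch showing an extra argument being passed through to the callback, and a second timer being cancelled with clearTimeout() before it fires:

const greeter = setTimeout((who) => {
  console.log('hello ' + who);
}, 1000, 'world');

const cancelled = setTimeout(() => {
  console.log('this never runs');
}, 1000);
clearTimeout(cancelled);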

unref()#

The opaque value returned by setTimeout and setInterval also has the method timer.unref(), which allows the creation of a timer that is active but, if it is the only item left in the event loop, will not keep the program running. If the timer is already unref()d, calling unref() again will have no effect.

In the case of setTimeout, unref() creates a separate timer that will wake up the event loop. Creating too many of these may adversely affect event loop performance, so use it wisely.

Returns the timer.
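For example, a minimal sketch: the interval below would normally keep the process alive, but after unref() the program can exit as soon as nothing else is pending; calling ref() would restore the default behavior.

const timer = setInterval(() => {
  console.log('tick'); // never printed if nothing else keeps the event loop alive
}, 1000);

timer.unref();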

TLS (SSL)#

Stability: 2 - Stable

Use require('tls') to access this module.

The tls module uses OpenSSL to provide Transport Layer Security and/or Secure Sockets Layer: encrypted stream communication.

TLS/SSL is a public/private key infrastructure. Each client and each server must have a private key. A private key is created like this:

openssl genrsa -out ryans-key.pem 2048

All servers and some clients need to have a certificate. Certificates are public keys signed by a Certificate Authority or self-signed. The first step to getting a certificate is to create a "Certificate Signing Request" (CSR) file. This is done with:

openssl req -new -sha256 -key ryans-key.pem -out ryans-csr.pem

To create a self-signed certificate with the CSR, do this:

openssl x509 -req -in ryans-csr.pem -signkey ryans-key.pem -out ryans-cert.pem

Alternatively you can send the CSR to a Certificate Authority for signing.

For Perfect Forward Secrecy, it is required to generate Diffie-Hellman parameters:

openssl dhparam -outform PEM -out dhparam.pem 2048

To create a .pfx or .p12, do this:

openssl pkcs12 -export -in agent5-cert.pem -inkey agent5-key.pem \
      -certfile ca-cert.pem -out agent5.pfx
  • in: certificate
  • inkey: private key
  • certfile: all CA certs concatenated in one file like cat ca1-cert.pem ca2-cert.pem > ca-cert.pem

ALPN, NPN and SNI#

ALPN (Application-Layer Protocol Negotiation Extension), NPN (Next Protocol Negotiation), and SNI (Server Name Indication) are TLS handshake extensions:

  • ALPN/NPN - Allows the use of one TLS server for multiple protocols (HTTP, SPDY, HTTP/2)
  • SNI - Allows the use of one TLS server for multiple hostnames with different SSL certificates.

Client-initiated renegotiation attack mitigation#

The TLS protocol lets the client renegotiate certain aspects of the TLS session. Unfortunately, session renegotiation requires a disproportionate amount of server-side resources, which makes it a potential vector for denial-of-service attacks.

To mitigate this, renegotiation is limited to three times every 10 minutes. An error is emitted on the tls.TLSSocket instance when the threshold is exceeded. These limits are configurable:

  • tls.CLIENT_RENEG_LIMIT: renegotiation limit, default is 3.

  • tls.CLIENT_RENEG_WINDOW: renegotiation window in seconds, default is 10 minutes.

Do not change the defaults without a full understanding of the implications.

To test the server, connect to it with openssl s_client -connect address:port and tap R<CR> (i.e., the letter R followed by a carriage return) a few times.

Modifying the Default TLS Cipher suite#

Node.js is built with a default suite of enabled and disabled TLS ciphers. Currently, the default cipher suite is:

ECDHE-RSA-AES128-GCM-SHA256:
ECDHE-ECDSA-AES128-GCM-SHA256:
ECDHE-RSA-AES256-GCM-SHA384:
ECDHE-ECDSA-AES256-GCM-SHA384:
DHE-RSA-AES128-GCM-SHA256:
ECDHE-RSA-AES128-SHA256:
DHE-RSA-AES128-SHA256:
ECDHE-RSA-AES256-SHA384:
DHE-RSA-AES256-SHA384:
ECDHE-RSA-AES256-SHA256:
DHE-RSA-AES256-SHA256:
HIGH:
!aNULL:
!eNULL:
!EXPORT:
!DES:
!RC4:
!MD5:
!PSK:
!SRP:
!CAMELLIA

This default can be overridden entirely using the --tls-cipher-list command line switch. For instance, the following makes ECDHE-RSA-AES128-GCM-SHA256:!RC4 the default TLS cipher suite:

node --tls-cipher-list="ECDHE-RSA-AES128-GCM-SHA256:!RC4"

Note that the default cipher suite included within Node.js has been carefully selected to reflect current security best practices and risk mitigation. Changing the default cipher suite can have a significant impact on the security of an application. The --tls-cipher-list switch should be used only if absolutely necessary.

Perfect Forward Secrecy#

The term "Forward Secrecy" or "Perfect Forward Secrecy" describes a feature of key-agreement (i.e., key-exchange) methods. Practically it means that even if the private key of a server is compromised, communication can only be decrypted by eavesdroppers if they manage to obtain the key-pair specifically generated for each session.

This is achieved by randomly generating a key pair for key-agreement on every handshake (in contrast to using the same key for all sessions). Methods implementing this technique, thus offering Perfect Forward Secrecy, are called "ephemeral".

Currently two methods are commonly used to achieve Perfect Forward Secrecy (note the character "E" appended to the traditional abbreviations):

  • DHE - An ephemeral version of the Diffie Hellman key-agreement protocol.
  • ECDHE - An ephemeral version of the Elliptic Curve Diffie Hellman key-agreement protocol.

Ephemeral methods may have some performance drawbacks, because key generation is expensive.

Class: CryptoStream#

Stability: 0 - Deprecated: Use tls.TLSSocket instead.

This is an encrypted stream.

cryptoStream.bytesWritten#

A proxy to the underlying socket's bytesWritten accessor, this will return the total bytes written to the socket, including the TLS overhead.

Class: SecurePair#

Returned by tls.createSecurePair.

Event: 'secure'#

This event is emitted from the SecurePair once the pair has successfully established a secure connection.

As with checking for the server secureConnection event, pair.cleartext.authorized should be inspected to confirm whether the certificate used is properly authorized.

Class: tls.Server#

This class is a subclass of net.Server and has the same methods on it. Instead of accepting only raw TCP connections, this accepts encrypted connections using TLS or SSL.

Event: 'tlsClientError'#

function (exception, tlsSocket) { }

When a client connection emits an 'error' event before a secure connection is established it will be forwarded here.

tlsSocket is the tls.TLSSocket that the error originated from.

Event: 'newSession'#

function (sessionId, sessionData, callback) { }

Emitted on creation of a TLS session. May be used to store sessions in external storage. callback must be invoked eventually; otherwise, no data will be sent or received from the secure connection.

NOTE: adding this event listener will only have an effect on connections established after the addition of the event listener.

Event: 'OCSPRequest'#

function (certificate, issuer, callback) { }

Emitted when the client sends a certificate status request. The server's current certificate can be parsed to obtain the OCSP URL and certificate ID; after obtaining an OCSP response callback(null, resp) is then invoked, where resp is a Buffer instance. Both certificate and issuer are Buffer DER-representations of the primary and issuer's certificates. They can be used to obtain the OCSP certificate ID and OCSP endpoint URL.

Alternatively, callback(null, null) may be called, meaning that there was no OCSP response.

Calling callback(err) will result in a socket.destroy(err) call.

Typical flow:

  1. Client connects to the server and sends an 'OCSPRequest' to it (via status info extension in ClientHello).
  2. Server receives the request and invokes the 'OCSPRequest' event listener if present.
  3. Server extracts the OCSP URL from either the certificate or issuer and performs an OCSP request to the CA.
  4. Server receives OCSPResponse from the CA and sends it back to the client via the callback argument.
  5. Client validates the response and either destroys the socket or performs a handshake.

NOTE: issuer could be null if the certificate is self-signed or if the issuer is not in the root certificates list. (An issuer may be provided via the ca option.)

NOTE: adding this event listener will only have an effect on connections established after the addition of the event listener.

NOTE: An npm module like asn1.js may be used to parse the certificates.
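As a minimal sketch (assuming a server variable, as in the session resumption example below), here is a listener that always reports that no OCSP response is available; a real implementation would parse certificate and issuer and query the CA's OCSP responder:

server.on('OCSPRequest', (certificate, issuer, callback) => {
  callback(null, null); // no OCSP response to staple
});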

Event: 'resumeSession'#

function (sessionId, callback) { }

Emitted when the client wants to resume the previous TLS session. The event listener may perform a lookup in external storage using the given sessionId and invoke callback(null, sessionData) once finished. If the session can't be resumed (i.e., doesn't exist in storage), one may call callback(null, null). Calling callback(err) will terminate the incoming connection and destroy the socket.

NOTE: adding this event listener will only have an effect on connections established after the addition of the event listener.

Here's an example for using TLS session resumption:

var tlsSessionStore = {};
server.on('newSession', (id, data, cb) => {
  tlsSessionStore[id.toString('hex')] = data;
  cb();
});
server.on('resumeSession', (id, cb) => {
  cb(null, tlsSessionStore[id.toString('hex')] || null);
});

Event: 'secureConnection'#

function (tlsSocket) {}

This event is emitted after the handshaking process for a new connection has successfully completed. The argument is an instance of tls.TLSSocket and has all the common stream methods and events.

socket.authorized is a boolean value which indicates if the client has been verified by one of the supplied certificate authorities for the server. If socket.authorized is false, then socket.authorizationError is set to describe how authorization failed. Implied but worth mentioning: depending on the settings of the TLS server, unauthorized connections may be accepted.

socket.npnProtocol is a string containing the selected NPN protocol and socket.alpnProtocol is a string containing the selected ALPN protocol. When both NPN and ALPN extensions are received, ALPN takes precedence over NPN and the next protocol is selected by ALPN. When ALPN has no selected protocol, this returns false.

socket.servername is a string containing the server name requested with SNI.
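For example, here is a minimal sketch of a server that inspects these properties once the handshake completes (the key and certificate file names are hypothetical, created as shown at the top of this section). The connection listener passed to tls.createServer() is added as a listener for the 'secureConnection' event:

const tls = require('tls');
const fs = require('fs');

const options = {
  key: fs.readFileSync('ryans-key.pem'),
  cert: fs.readFileSync('ryans-cert.pem')
};

const server = tls.createServer(options, (tlsSocket) => {
  console.log('authorized:', tlsSocket.authorized);
  if (!tlsSocket.authorized)
    console.log('authorization error:', tlsSocket.authorizationError);
  console.log('SNI servername:', tlsSocket.servername);
  tlsSocket.end('hello\n');
});

server.listen(8000);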

server.addContext(hostname, context)#

Adds a secure context that will be used if the client request's SNI hostname matches the supplied hostname (wildcards can be used). context can contain key, cert, ca or any other properties from the tls.createSecureContext() options argument.
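Continuing the hypothetical server above, a minimal sketch of adding a second certificate for clients that request 'example.org' via SNI (the file names are hypothetical):

server.addContext('example.org', {
  key: fs.readFileSync('example-org-key.pem'),
  cert: fs.readFileSync('example-org-cert.pem')
});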

server.address()#

Returns the bound address, the address family name, and port of the server as reported by the operating system. See net.Server.address() for more information.

server.close([callback])#

Stops the server from accepting new connections. This function is asynchronous; the server is finally closed when it emits a 'close' event. Optionally, you can pass a callback to listen for the 'close' event.

server.connections#

The number of concurrent connections on the server.

server.getTicketKeys()#

Returns a Buffer instance holding the keys currently used for encryption/decryption of the TLS Session Tickets.

server.listen(port[, hostname][, callback])#

Begin accepting connections on the specified port and hostname. If the hostname is omitted, the server will accept connections on any IPv6 address (::) when IPv6 is available, or any IPv4 address (0.0.0.0) otherwise. A port value of zero will assign a random port.

This function is asynchronous. The last parameter callback will be called when the server has been bound.

See net.Server for more information.

server.setTicketKeys(keys)#

Updates the keys for encryption/decryption of the TLS Session Tickets.

NOTE: the buffer should be 48 bytes long. See ticketKeys option in tls.createServer for more information on how it is used.

NOTE: the change is effective only for future server connections. Existing or currently pending server connections will use the previous keys.

server.maxConnections#

Set this property to reject connections when the server's connection count exceeds the specified threshold.

Class: tls.TLSSocket#

This is a wrapped version of net.Socket that does transparent encryption of written data and all required TLS negotiation.

This instance implements the duplex Stream interface. It has all the common stream methods and events.

Methods that return TLS connection metadata (e.g. tls.TLSSocket.getPeerCertificate()) will only return data while the connection is open.

new tls.TLSSocket(socket[, options])#

Construct a new TLSSocket object from an existing TCP socket.

socket is an instance of net.Socket.

options is an optional object that might contain the following properties: