sharp/docs/api-constructor.md

Sharp

Constructor factory to create an instance of sharp, to which further methods are chained.

JPEG, PNG, WebP, AVIF or TIFF format image data can be streamed out from this object. When using Stream based output, derived attributes are available from the info event.

Non-critical problems encountered during processing are emitted as warning events.

Implements the stream.Duplex class.
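
A minimal sketch of listening for these warning events; the shape of the emitted payload is not documented here, so the listener simply logs whatever it receives, and 'input.jpg'/'output.jpg' are placeholder paths:

sharp('input.jpg')
  .on('warning', function (warning) {
    // Non-critical problems surface here instead of stopping processing
    console.warn('sharp warning:', warning);
  })
  .resize(300, 200)
  .toFile('output.jpg', function (err) {
    // err is only set for critical failures
  });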

Parameters

  • input (Buffer | string)? if present, can be a Buffer containing JPEG, PNG, WebP, AVIF, GIF, SVG, TIFF or raw pixel image data, or a String containing the filesystem path to a JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image file. JPEG, PNG, WebP, AVIF, GIF, SVG, TIFF or raw pixel image data can be streamed into the object when not present.
  • options Object? if present, is an Object with optional attributes.
    • options.failOnError boolean by default halt processing and raise an error when loading invalid images. Set this flag to false if you'd rather apply a "best effort" to decode images, even if the data is corrupt or invalid. (optional, default true)
    • options.limitInputPixels (number | boolean) Do not process input images where the number of pixels (width x height) exceeds this limit. Assumes image dimensions contained in the input metadata can be trusted. An integral Number of pixels, zero or false to remove limit, true to use default limit of 268402689 (0x3FFF x 0x3FFF). (optional, default 268402689)
    • options.sequentialRead boolean Set this to true to use sequential rather than random access where possible. This can reduce memory usage and might improve performance on some systems. (optional, default false)
    • options.density number number representing the DPI for vector images in the range 1 to 100000. (optional, default 72)
    • options.pages number number of pages to extract for multi-page input (GIF, WebP, AVIF, TIFF, PDF), use -1 for all pages. (optional, default 1)
    • options.page number page number to start extracting from for multi-page input (GIF, WebP, AVIF, TIFF, PDF), zero based. (optional, default 0)
    • options.level number level to extract from a multi-level input (OpenSlide), zero based. (optional, default 0)
    • options.animated boolean Set to true to read all frames/pages of an animated image (equivalent of setting pages to -1). (optional, default false)
    • options.raw Object? describes raw pixel input image data. See raw() for pixel ordering.
    • options.create Object? describes a new image to be created.
      • options.create.width number?
      • options.create.height number?
      • options.create.channels number? 3-4
      • options.create.background (string | Object)? parsed by the color module to extract values for red, green, blue and alpha.

Examples

sharp('input.jpg')
  .resize(300, 200)
  .toFile('output.jpg', function(err) {
    // output.jpg is a 300 pixels wide and 200 pixels high image
    // containing a scaled and cropped version of input.jpg
  });

// Read image data from readableStream,
// resize to 300 pixels wide,
// emit an 'info' event with calculated dimensions
// and finally write image data to writableStream
var transformer = sharp()
  .resize(300)
  .on('info', function(info) {
    console.log('Image height is ' + info.height);
  });
readableStream.pipe(transformer).pipe(writableStream);

// Create a blank 300x200 PNG image of semi-translucent red pixels
sharp({
  create: {
    width: 300,
    height: 200,
    channels: 4,
    background: { r: 255, g: 0, b: 0, alpha: 0.5 }
  }
})
.png()
.toBuffer()
  .then( ... );

// Convert an animated GIF to an animated WebP
await sharp('in.gif', { animated: true }).toFile('out.webp');
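
A few additional minimal sketches illustrating the input options above. The file names are placeholders, rawPixelBuffer is assumed to be a Buffer of interleaved RGB pixel data, and the dimensions shown are purely illustrative:

// Rasterise an SVG at a higher DPI via the density option
await sharp('input.svg', { density: 300 })
  .png()
  .toFile('output.png');

// Extract the second page of a multi-page TIFF (page numbers are zero based)
await sharp('multi-page.tiff', { page: 1 })
  .toFile('page-two.png');

// Remove the default pixel-count limit for a trusted, very large input
await sharp('huge.tiff', { limitInputPixels: false })
  .resize(1000)
  .toFile('huge-resized.png');

// Wrap raw pixel data; width, height and channels must be supplied
await sharp(rawPixelBuffer, {
  raw: { width: 640, height: 480, channels: 3 }
})
  .jpeg()
  .toFile('from-raw.jpg');
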
  • Throws Error Invalid parameters

Returns Sharp

clone

Take a "snapshot" of the Sharp instance, returning a new instance. Cloned instances inherit the input of their parent instance. This allows multiple output Streams and therefore multiple processing pipelines to share a single input Stream.

Examples

const pipeline = sharp().rotate();
pipeline.clone().resize(800, 600).pipe(firstWritableStream);
pipeline.clone().extract({ left: 20, top: 20, width: 100, height: 100 }).pipe(secondWritableStream);
readableStream.pipe(pipeline);
// firstWritableStream receives auto-rotated, resized readableStream
// secondWritableStream receives auto-rotated, extracted region of readableStream

// Create a pipeline that will download an image, resize it and format it to different files
// Using Promises to know when the pipeline is complete
const fs = require("fs");
const got = require("got");
const sharpStream = sharp({
  failOnError: false
});

const promises = [];

promises.push(
  sharpStream
    .clone()
    .jpeg({ quality: 100 })
    .toFile("originalFile.jpg")
);

promises.push(
  sharpStream
    .clone()
    .resize({ width: 500 })
    .jpeg({ quality: 80 })
    .toFile("optimized-500.jpg")
);

promises.push(
  sharpStream
    .clone()
    .resize({ width: 500 })
    .webp({ quality: 80 })
    .toFile("optimized-500.webp")
);

// https://github.com/sindresorhus/got#gotstreamurl-options
got.stream("https://www.example.com/some-file.jpg").pipe(sharpStream);

Promise.all(promises)
  .then(res => { console.log("Done!", res); })
  .catch(err => {
    console.error("Error processing files, let's clean it up", err);
    try {
      fs.unlinkSync("originalFile.jpg");
      fs.unlinkSync("optimized-500.jpg");
      fs.unlinkSync("optimized-500.webp");
    } catch (e) {}
  });

Returns Sharp