Deprecate background, add op-specific prop to resize/extend/flatten #1392

This commit is contained in:
Lovell Fuller 2018-10-01 20:58:55 +01:00
parent 6007e13a22
commit a64844689e
17 changed files with 378 additions and 296 deletions

View File

@@ -1,23 +1,5 @@
 <!-- Generated by documentation.js. Update this documentation by updating the source code. -->
-## background
-Set the background for the `embed`, `flatten` and `extend` operations.
-The default background is `{r: 0, g: 0, b: 0, alpha: 1}`, black without transparency.
-Delegates to the _color_ module, which can throw an Error
-but is liberal in what it accepts, clipping values to sensible min/max.
-The alpha value is a float between `0` (transparent) and `1` (opaque).
-### Parameters
-- `rgba` **([String][1] \| [Object][2])** parsed by the [color][3] module to extract values for red, green, blue and alpha.
-- Throws **[Error][4]** Invalid parameter
-Returns **Sharp**
 ## tint
 Tint the image using the provided chroma while preserving the image luminance.

View File

@@ -114,11 +114,11 @@ Returns **Sharp**
 ## flatten
-Merge alpha transparency channel, if any, with `background`.
+Merge alpha transparency channel, if any, with a background.
 ### Parameters
-- `flatten` **[Boolean][6]** (optional, default `true`)
+- `options`
 Returns **Sharp**
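Conceptually, flattening composites each pixel over an opaque background using standard alpha blending. A minimal sketch of the per-channel arithmetic (illustrative only, not sharp's internal code):

```javascript
// Composite a single RGBA pixel over an opaque background colour.
// Channels are 0-255; alpha is a float between 0 (transparent) and 1 (opaque).
function flattenPixel (pixel, background) {
  const blend = (fg, bg) => Math.round(fg * pixel.alpha + bg * (1 - pixel.alpha));
  return {
    r: blend(pixel.r, background.r),
    g: blend(pixel.g, background.g),
    b: blend(pixel.b, background.b)
  };
}

// A half-transparent white pixel flattened over black becomes mid-grey.
console.log(flattenPixel({ r: 255, g: 255, b: 255, alpha: 0.5 }, { r: 0, g: 0, b: 0 }));
// { r: 128, g: 128, b: 128 }
```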

View File

@@ -2,179 +2,126 @@
 ## resize
-Resize image to `width` x `height`.
-By default, the resized image is centre cropped to the exact size specified.
-Possible kernels are:
-- `nearest`: Use [nearest neighbour interpolation][1].
-- `cubic`: Use a [Catmull-Rom spline][2].
-- `lanczos2`: Use a [Lanczos kernel][3] with `a=2`.
-- `lanczos3`: Use a Lanczos kernel with `a=3` (the default).
-### Parameters
-- `width` **[Number][4]?** pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height.
-- `height` **[Number][4]?** pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width.
-- `options` **[Object][5]?**
-  - `options.kernel` **[String][6]** the kernel to use for image reduction. (optional, default `'lanczos3'`)
-  - `options.fastShrinkOnLoad` **[Boolean][7]** take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern on some images. (optional, default `true`)
+Resize image to `width`, `height` or `width x height`.
+When both a `width` and `height` are provided, the possible methods by which the image should **fit** these are:
+- `cover`: Crop to cover both provided dimensions (the default).
+- `contain`: Embed within both provided dimensions.
+- `fill`: Ignore the aspect ratio of the input and stretch to both provided dimensions.
+- `inside`: Preserving aspect ratio, resize the image to be as large as possible while ensuring its dimensions are less than or equal to both those specified.
+- `outside`: Preserving aspect ratio, resize the image to be as small as possible while ensuring its dimensions are greater than or equal to both those specified.
+Some of these values are based on the [object-fit][1] CSS property.
+When using a `fit` of `cover` or `contain`, the default **position** is `centre`. Other options are:
+- `sharp.position`: `top`, `right top`, `right`, `right bottom`, `bottom`, `left bottom`, `left`, `left top`.
+- `sharp.gravity`: `north`, `northeast`, `east`, `southeast`, `south`, `southwest`, `west`, `northwest`, `center` or `centre`.
+- `sharp.strategy`: `cover` only, dynamically crop using either the `entropy` or `attention` strategy.
+Some of these values are based on the [object-position][2] CSS property.
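The five `fit` modes determine the output canvas from the source and target dimensions. A rough illustration of the geometry, assuming both a width and height are supplied (a hypothetical helper, not part of sharp's API):

```javascript
// Compute output dimensions for each fit mode.
// cover, contain and fill always produce exactly the target size;
// inside and outside scale the source while preserving its aspect ratio.
function fitDimensions (srcWidth, srcHeight, targetWidth, targetHeight, fit) {
  const scaleX = targetWidth / srcWidth;
  const scaleY = targetHeight / srcHeight;
  switch (fit) {
    case 'cover':   // overflow is cropped away after scaling to cover
    case 'contain': // shortfall is filled with the background colour
    case 'fill':    // stretched, ignoring aspect ratio
      return { width: targetWidth, height: targetHeight };
    case 'inside': { // as large as possible within both dimensions
      const scale = Math.min(scaleX, scaleY);
      return { width: Math.round(srcWidth * scale), height: Math.round(srcHeight * scale) };
    }
    case 'outside': { // as small as possible while covering both dimensions
      const scale = Math.max(scaleX, scaleY);
      return { width: Math.round(srcWidth * scale), height: Math.round(srcHeight * scale) };
    }
  }
}

// A 400x200 source fitted to a 100x100 target:
console.log(fitDimensions(400, 200, 100, 100, 'inside'));  // { width: 100, height: 50 }
console.log(fitDimensions(400, 200, 100, 100, 'outside')); // { width: 200, height: 100 }
```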
-### Examples
-```javascript
-sharp(inputBuffer)
-  .resize(200, 300, {
-    kernel: sharp.kernel.nearest
-  })
-  .background('white')
-  .embed()
-  .toFile('output.tiff')
-  .then(function() {
-    // output.tiff is a 200 pixels wide and 300 pixels high image
-    // containing a nearest-neighbour scaled version, embedded on a white canvas,
-    // of the image data in inputBuffer
-  });
-```
-- Throws **[Error][8]** Invalid parameters
-Returns **Sharp**
-## crop
-Crop the resized image to the exact size specified, the default behaviour.
-Possible attributes of the optional `sharp.gravity` are `north`, `northeast`, `east`, `southeast`, `south`,
-`southwest`, `west`, `northwest`, `center` and `centre`.
 The experimental strategy-based approach resizes so one dimension is at its target length
 then repeatedly ranks edge regions, discarding the edge with the lowest score based on the selected strategy.
-- `entropy`: focus on the region with the highest [Shannon entropy][9].
+- `entropy`: focus on the region with the highest [Shannon entropy][3].
 - `attention`: focus on the region with the highest luminance frequency, colour saturation and presence of skin tones.
+Possible interpolation kernels are:
+- `nearest`: Use [nearest neighbour interpolation][4].
+- `cubic`: Use a [Catmull-Rom spline][5].
+- `lanczos2`: Use a [Lanczos kernel][6] with `a=2`.
+- `lanczos3`: Use a Lanczos kernel with `a=3` (the default).
 ### Parameters
-- `crop` **[String][6]** A member of `sharp.gravity` to crop to an edge/corner or `sharp.strategy` to crop dynamically. (optional, default `'centre'`)
+- `width` **[Number][7]?** pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height.
+- `height` **[Number][7]?** pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width.
+- `options`
 ### Examples
+```javascript
+sharp(input)
+  .resize({ width: 100 })
+  .toBuffer()
+  .then(data => {
+    // 100 pixels wide, auto-scaled height
+  });
+```
+```javascript
+sharp(input)
+  .resize({ height: 100 })
+  .toBuffer()
+  .then(data => {
+    // 100 pixels high, auto-scaled width
+  });
+```
+```javascript
+sharp(input)
+  .resize(200, 300, {
+    kernel: sharp.kernel.nearest,
+    fit: 'contain',
+    position: 'right top',
+    background: { r: 255, g: 255, b: 255, alpha: 0.5 }
+  })
+  .toFile('output.png')
+  .then(() => {
+    // output.png is a 200 pixels wide and 300 pixels high image
+    // containing a nearest-neighbour scaled version
+    // contained within the north-east corner of a semi-transparent white canvas
+  });
+```
 ```javascript
 const transformer = sharp()
-  .resize(200, 200)
-  .crop(sharp.strategy.entropy)
-  .on('error', function(err) {
-    console.log(err);
-  });
+  .resize({
+    width: 200,
+    height: 200,
+    fit: sharp.fit.cover,
+    position: sharp.strategy.entropy
+  });
 // Read image data from readableStream
 // Write 200px square auto-cropped image data to writableStream
-readableStream.pipe(transformer).pipe(writableStream);
+readableStream
+  .pipe(transformer)
+  .pipe(writableStream);
 ```
-- Throws **[Error][8]** Invalid parameters
-Returns **Sharp**
+```javascript
+sharp(input)
+  .resize(200, 200, {
+    fit: sharp.fit.inside,
+    withoutEnlargement: true
+  })
+  .toFormat('jpeg')
+  .toBuffer()
+  .then(function(outputBuffer) {
+    // outputBuffer contains JPEG image data
+    // no wider and no higher than 200 pixels
+    // and no larger than the input image
+  });
+```
-## embed
-Preserving aspect ratio, resize the image to the maximum `width` or `height` specified
-then embed on a background of the exact `width` and `height` specified.
-If the background contains an alpha value then WebP and PNG format output images will
-contain an alpha channel, even when the input image does not.
-### Parameters
-- `embed` **[String][6]** A member of `sharp.gravity` to embed to an edge/corner. (optional, default `'centre'`)
-### Examples
-```javascript
-sharp('input.gif')
-  .resize(200, 300)
-  .background({r: 0, g: 0, b: 0, alpha: 0})
-  .embed()
-  .toFormat(sharp.format.webp)
-  .toBuffer(function(err, outputBuffer) {
-    if (err) {
-      throw err;
-    }
-    // outputBuffer contains WebP image data of a 200 pixels wide and 300 pixels high
-    // containing a scaled version, embedded on a transparent canvas, of input.gif
-  });
-```
-- Throws **[Error][8]** Invalid parameters
-Returns **Sharp**
-## max
-Preserving aspect ratio, resize the image to be as large as possible
-while ensuring its dimensions are less than or equal to the `width` and `height` specified.
-Both `width` and `height` must be provided via `resize` otherwise the behaviour will default to `crop`.
-### Examples
-```javascript
-sharp(inputBuffer)
-  .resize(200, 200)
-  .max()
-  .toFormat('jpeg')
-  .toBuffer()
-  .then(function(outputBuffer) {
-    // outputBuffer contains JPEG image data no wider than 200 pixels and no higher
-    // than 200 pixels regardless of the inputBuffer image dimensions
-  });
-```
-Returns **Sharp**
-## min
-Preserving aspect ratio, resize the image to be as small as possible
-while ensuring its dimensions are greater than or equal to the `width` and `height` specified.
-Both `width` and `height` must be provided via `resize` otherwise the behaviour will default to `crop`.
-Returns **Sharp**
-## ignoreAspectRatio
-Ignoring the aspect ratio of the input, stretch the image to
-the exact `width` and/or `height` provided via `resize`.
-Returns **Sharp**
-## withoutEnlargement
-Do not enlarge the output image if the input image width _or_ height are already less than the required dimensions.
-This is equivalent to GraphicsMagick's `>` geometry option:
-"_change the dimensions of the image only if its width or height exceeds the geometry specification_".
-Use with `max()` to preserve the image's aspect ratio.
-The default behaviour _before_ function call is `false`, meaning the image will be enlarged.
-### Parameters
-- `withoutEnlargement` **[Boolean][7]** (optional, default `true`)
+- Throws **[Error][8]** Invalid parameters
 Returns **Sharp**
 ## extend
-Extends/pads the edges of the image with the colour provided to the `background` method.
+Extends/pads the edges of the image with the provided background colour.
 This operation will always occur after resizing and extraction, if any.
 ### Parameters
-- `extend` **([Number][4] \| [Object][5])** single pixel count to add to all edges or an Object with per-edge counts
-  - `extend.top` **[Number][4]?**
-  - `extend.left` **[Number][4]?**
-  - `extend.bottom` **[Number][4]?**
-  - `extend.right` **[Number][4]?**
+- `extend` **([Number][7] \| [Object][9])** single pixel count to add to all edges or an Object with per-edge counts
+  - `extend.top` **[Number][7]?**
+  - `extend.left` **[Number][7]?**
+  - `extend.bottom` **[Number][7]?**
+  - `extend.right` **[Number][7]?**
+  - `extend.background` **([String][10] \| [Object][9])** background colour, parsed by the [color][11] module, defaults to black without transparency. (optional, default `{r:0,g:0,b:0,alpha:1}`)
 ### Examples
@@ -183,8 +130,14 @@ This operation will always occur after resizing and extraction, if any.
 // to the top, left and right edges and 20 to the bottom edge
 sharp(input)
   .resize(140)
-  .background({r: 0, g: 0, b: 0, alpha: 0})
-  .extend({top: 10, bottom: 20, left: 10, right: 10})
+  .extend({
+    top: 10,
+    bottom: 20,
+    left: 10,
+    right: 10,
+    background: { r: 0, g: 0, b: 0, alpha: 0 }
+  })
 ...
 ```
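As the pipeline code later in this commit shows (`baton->width = image.width() + baton->extendLeft + baton->extendRight`), extending simply grows the canvas by the per-edge counts. A small sketch of the resulting dimensions, assuming a hypothetical input that is 140 pixels wide and 90 pixels high after the resize:

```javascript
// New canvas size after extending each edge of a width x height image.
function extendedSize (width, height, extend) {
  return {
    width: width + (extend.left || 0) + (extend.right || 0),
    height: height + (extend.top || 0) + (extend.bottom || 0)
  };
}

// The example above, extended by 10/20/10/10:
console.log(extendedSize(140, 90, { top: 10, bottom: 20, left: 10, right: 10 }));
// { width: 160, height: 120 }
```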
@@ -202,11 +155,11 @@ Extract a region of the image.
 ### Parameters
-- `options` **[Object][5]**
-  - `options.left` **[Number][4]** zero-indexed offset from left edge
-  - `options.top` **[Number][4]** zero-indexed offset from top edge
-  - `options.width` **[Number][4]** dimension of extracted image
-  - `options.height` **[Number][4]** dimension of extracted image
+- `options` **[Object][9]**
+  - `options.left` **[Number][7]** zero-indexed offset from left edge
+  - `options.top` **[Number][7]** zero-indexed offset from top edge
+  - `options.width` **[Number][7]** dimension of extracted image
+  - `options.height` **[Number][7]** dimension of extracted image
 ### Examples
@@ -238,27 +191,31 @@ Trim "boring" pixels from all edges that contain values within a percentage similarity to the top-left pixel.
 ### Parameters
-- `tolerance` **[Number][4]** value between 1 and 99 representing the percentage similarity. (optional, default `10`)
+- `tolerance` **[Number][7]** value between 1 and 99 representing the percentage similarity. (optional, default `10`)
 - Throws **[Error][8]** Invalid parameters
 Returns **Sharp**
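One way to picture the `tolerance` parameter: a pixel counts as "boring" when every channel is within that percentage of a reference colour. This is an illustrative approximation only, not libvips' exact trim algorithm:

```javascript
// Is a pixel within `tolerance` percent similarity of a reference colour?
// Channels are 0-255; tolerance is 1-99.
function isSimilar (pixel, reference, tolerance) {
  const threshold = (tolerance / 100) * 255;
  return ['r', 'g', 'b'].every(
    channel => Math.abs(pixel[channel] - reference[channel]) <= threshold
  );
}

// With the default tolerance of 10, near-white is similar to white:
console.log(isSimilar({ r: 250, g: 248, b: 252 }, { r: 255, g: 255, b: 255 }, 10)); // true
console.log(isSimilar({ r: 200, g: 200, b: 200 }, { r: 255, g: 255, b: 255 }, 10)); // false
```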
-[1]: http://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
-[2]: https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
-[3]: https://en.wikipedia.org/wiki/Lanczos_resampling#Lanczos_kernel
-[4]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
-[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
-[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
-[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
+[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit
+[2]: https://developer.mozilla.org/en-US/docs/Web/CSS/object-position
+[3]: https://en.wikipedia.org/wiki/Entropy_%28information_theory%29
+[4]: http://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
+[5]: https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
+[6]: https://en.wikipedia.org/wiki/Lanczos_resampling#Lanczos_kernel
+[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
 [8]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
-[9]: https://en.wikipedia.org/wiki/Entropy_%28information_theory%29
+[9]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
+[10]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
+[11]: https://www.npmjs.org/package/color

View File

@@ -15,6 +15,10 @@ Requires libvips v8.7.0.
   `max().withoutEnlargement()` is now `resize(width, height, { fit: 'inside', withoutEnlargement: true })`.
   [#1135](https://github.com/lovell/sharp/issues/1135)
+* Deprecate the `background` function.
+  Per-operation `background` options added to `resize`, `extend` and `flatten` operations.
+  [#1392](https://github.com/lovell/sharp/issues/1392)
 * Drop Node 4 support.
   [#1212](https://github.com/lovell/sharp/issues/1212)

View File

@@ -1,5 +1,7 @@
 'use strict';
+const deprecate = require('util').deprecate;
 const color = require('color');
 const is = require('./is');
@@ -16,25 +18,20 @@ const colourspace = {
 };
 /**
- * Set the background for the `embed`, `flatten` and `extend` operations.
- * The default background is `{r: 0, g: 0, b: 0, alpha: 1}`, black without transparency.
- *
- * Delegates to the _color_ module, which can throw an Error
- * but is liberal in what it accepts, clipping values to sensible min/max.
- * The alpha value is a float between `0` (transparent) and `1` (opaque).
- *
- * @param {String|Object} rgba - parsed by the [color](https://www.npmjs.org/package/color) module to extract values for red, green, blue and alpha.
- * @returns {Sharp}
- * @throws {Error} Invalid parameter
+ * @deprecated
+ * @private
  */
 function background (rgba) {
   const colour = color(rgba);
-  this.options.background = [
+  const background = [
     colour.red(),
     colour.green(),
     colour.blue(),
     Math.round(colour.alpha() * 255)
   ];
+  this.options.resizeBackground = background;
+  this.options.extendBackground = background;
+  this.options.flattenBackground = background.slice(0, 3);
   return this;
 }
@@ -102,23 +99,45 @@ function toColorspace (colorspace) {
   return this.toColourspace(colorspace);
 }
+/**
+ * Update a colour attribute of the this.options Object.
+ * @private
+ * @param {String} key
+ * @param {String|Object} val
+ * @throws {Error} Invalid key
+ */
+function _setColourOption (key, val) {
+  if (is.object(val) || is.string(val)) {
+    const colour = color(val);
+    this.options[key] = [
+      colour.red(),
+      colour.green(),
+      colour.blue(),
+      Math.round(colour.alpha() * 255)
+    ];
+  }
+}
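`_setColourOption` stores colours as a `[r, g, b, alpha]` array with alpha scaled from a 0–1 float to 0–255. A dependency-free sketch of that conversion, assuming the input is already an `{r, g, b, alpha}` object (the real code delegates parsing and clipping to the `color` package):

```javascript
// Convert an {r, g, b, alpha} colour into sharp's internal RGBA array,
// clamping channels to 0-255 and scaling the float alpha to 0-255.
function toRgbaArray ({ r, g, b, alpha = 1 }) {
  const clamp = v => Math.min(255, Math.max(0, Math.round(v)));
  return [clamp(r), clamp(g), clamp(b), Math.round(alpha * 255)];
}

console.log(toRgbaArray({ r: 255, g: 0, b: 0, alpha: 0.5 })); // [ 255, 0, 0, 128 ]
console.log(toRgbaArray({ r: 300, g: -5, b: 0 }));            // [ 255, 0, 0, 255 ]
```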
 /**
  * Decorate the Sharp prototype with colour-related functions.
  * @private
  */
 module.exports = function (Sharp) {
-  // Public instance functions
   [
-    background,
+    // Public
     tint,
     greyscale,
     grayscale,
     toColourspace,
-    toColorspace
+    toColorspace,
+    // Private
+    _setColourOption
   ].forEach(function (f) {
     Sharp.prototype[f.name] = f;
   });
   // Class attributes
   Sharp.colourspace = colourspace;
   Sharp.colorspace = colourspace;
+  // Deprecated
+  Sharp.prototype.background = deprecate(background, 'background(background) is deprecated, use resize({ background }), extend({ background }) or flatten({ background }) instead');
 };

View File

@@ -105,6 +105,7 @@ const Sharp = function (input, options) {
     height: -1,
     canvas: 'crop',
     position: 0,
+    resizeBackground: [0, 0, 0, 255],
     useExifOrientation: false,
     angle: 0,
     rotationAngle: 0,
@@ -116,14 +117,15 @@ const Sharp = function (input, options) {
     extendBottom: 0,
     extendLeft: 0,
     extendRight: 0,
+    extendBackground: [0, 0, 0, 255],
     withoutEnlargement: false,
     kernel: 'lanczos3',
     fastShrinkOnLoad: true,
     // operations
-    background: [0, 0, 0, 255],
     tintA: 128,
     tintB: 128,
     flatten: false,
+    flattenBackground: [0, 0, 0],
     negate: false,
     medianSize: 0,
     blurSigma: 0,

View File

@@ -171,12 +171,15 @@ function blur (sigma) {
 }
 /**
- * Merge alpha transparency channel, if any, with `background`.
- * @param {Boolean} [flatten=true]
+ * Merge alpha transparency channel, if any, with a background.
+ * @param {String|Object} [options.background={r: 0, g: 0, b: 0}] - background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black.
  * @returns {Sharp}
  */
-function flatten (flatten) {
-  this.options.flatten = is.bool(flatten) ? flatten : true;
+function flatten (options) {
+  this.options.flatten = is.bool(options) ? options : true;
+  if (is.object(options)) {
+    this._setColourOption('flattenBackground', options.background);
+  }
   return this;
 }

View File

@@ -130,18 +130,18 @@ const mapFitToCanvas = {
  *   });
  *
  * @example
- * sharp(inputBuffer)
+ * sharp(input)
  *   .resize(200, 300, {
  *     kernel: sharp.kernel.nearest,
  *     fit: 'contain',
- *     position: 'right top'
+ *     position: 'right top',
+ *     background: { r: 255, g: 255, b: 255, alpha: 0.5 }
  *   })
- *   .background('white')
- *   .toFile('output.tiff')
+ *   .toFile('output.png')
  *   .then(() => {
- *     // output.tiff is a 200 pixels wide and 300 pixels high image
+ *     // output.png is a 200 pixels wide and 300 pixels high image
  *     // containing a nearest-neighbour scaled version
- *     // embedded in the north-east corner of a white canvas
+ *     // contained within the north-east corner of a semi-transparent white canvas
  *   });
  *
  * @example
@@ -159,7 +159,7 @@ const mapFitToCanvas = {
  *   .pipe(writableStream);
  *
  * @example
- * sharp(inputBuffer)
+ * sharp(input)
  *   .resize(200, 200, {
  *     fit: sharp.fit.inside,
  *     withoutEnlargement: true
@@ -178,6 +178,7 @@ const mapFitToCanvas = {
  * @param {String} [options.height] - alternative means of specifying `height`. If both are present this takes priority.
  * @param {String} [options.fit='cover'] - how the image should be resized to fit both provided dimensions, one of `cover`, `contain`, `fill`, `inside` or `outside`.
  * @param {String} [options.position='centre'] - position, gravity or strategy to use when `fit` is `cover` or `contain`.
+ * @param {String|Object} [options.background={r: 0, g: 0, b: 0, alpha: 1}] - background colour when using a `fit` of `contain`, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency.
  * @param {String} [options.kernel='lanczos3'] - the kernel to use for image reduction.
  * @param {Boolean} [options.withoutEnlargement=false] - do not enlarge if the width *or* height are already less than the specified dimensions, equivalent to GraphicsMagick's `>` geometry option.
  * @param {Boolean} [options.fastShrinkOnLoad=true] - take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern on some images.
@@ -234,6 +235,10 @@ function resize (width, height, options) {
       throw is.invalidParameterError('position', 'valid position/gravity/strategy', options.position);
     }
   }
+  // Background
+  if (is.defined(options.background)) {
+    this._setColourOption('resizeBackground', options.background);
+  }
   // Kernel
   if (is.defined(options.kernel)) {
     if (is.string(kernel[options.kernel])) {
@@ -255,7 +260,7 @@ function resize (width, height, options) {
 }
 /**
- * Extends/pads the edges of the image with the colour provided to the `background` method.
+ * Extends/pads the edges of the image with the provided background colour.
  * This operation will always occur after resizing and extraction, if any.
  *
  * @example
@@ -263,8 +268,14 @@ function resize (width, height, options) {
  * // to the top, left and right edges and 20 to the bottom edge
  * sharp(input)
  *   .resize(140)
- *   .background({r: 0, g: 0, b: 0, alpha: 0})
- *   .extend({top: 10, bottom: 20, left: 10, right: 10})
+ *   .extend({
+ *     top: 10,
+ *     bottom: 20,
+ *     left: 10,
+ *     right: 10,
+ *     background: { r: 0, g: 0, b: 0, alpha: 0 }
+ *   })
  * ...
  *
  * @param {(Number|Object)} extend - single pixel count to add to all edges or an Object with per-edge counts
@@ -272,6 +283,7 @@ function resize (width, height, options) {
  * @param {Number} [extend.left]
  * @param {Number} [extend.bottom]
  * @param {Number} [extend.right]
+ * @param {String|Object} [extend.background={r: 0, g: 0, b: 0, alpha: 1}] - background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency.
  * @returns {Sharp}
  * @throws {Error} Invalid parameters
  */
@@ -292,6 +304,7 @@ function extend (extend) {
     this.options.extendBottom = extend.bottom;
     this.options.extendLeft = extend.left;
     this.options.extendRight = extend.right;
+    this._setColourOption('extendBackground', extend.background);
   } else {
     throw new Error('Invalid edge extension ' + extend);
   }

View File

@@ -37,6 +37,14 @@ namespace sharp {
   std::string AttrAsStr(v8::Handle<v8::Object> obj, std::string attr) {
     return *Nan::Utf8String(Nan::Get(obj, Nan::New(attr).ToLocalChecked()).ToLocalChecked());
   }
+  std::vector<double> AttrAsRgba(v8::Handle<v8::Object> obj, std::string attr) {
+    v8::Local<v8::Object> background = AttrAs<v8::Object>(obj, attr);
+    std::vector<double> rgba(4);
+    for (unsigned int i = 0; i < 4; i++) {
+      rgba[i] = AttrTo<double>(background, i);
+    }
+    return rgba;
+  }
   // Create an InputDescriptor instance from a v8::Object describing an input image
   InputDescriptor* CreateInputDescriptor(
@@ -72,10 +80,7 @@ namespace sharp {
     descriptor->createChannels = AttrTo<uint32_t>(input, "createChannels");
     descriptor->createWidth = AttrTo<uint32_t>(input, "createWidth");
     descriptor->createHeight = AttrTo<uint32_t>(input, "createHeight");
-    v8::Local<v8::Object> createBackground = AttrAs<v8::Object>(input, "createBackground");
-    for (unsigned int i = 0; i < 4; i++) {
-      descriptor->createBackground[i] = AttrTo<double>(createBackground, i);
-    }
+    descriptor->createBackground = AttrAsRgba(input, "createBackground");
   }
   return descriptor;
 }
@@ -605,7 +610,7 @@ namespace sharp {
   /*
     Apply the alpha channel to a given colour
   */
-  std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, double colour[4]) {
+  std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, std::vector<double> colour) {
     // Scale up 8-bit values to match 16-bit input image
     double const multiplier = sharp::Is16Bit(image.interpretation()) ? 256.0 : 1.0;
     // Create alphaColour colour

View File

@@ -57,7 +57,7 @@ namespace sharp {
     int createChannels;
     int createWidth;
     int createHeight;
-    double createBackground[4];
+    std::vector<double> createBackground;
     InputDescriptor():
       buffer(nullptr),
@@ -70,17 +70,14 @@ namespace sharp {
       page(0),
       createChannels(0),
       createWidth(0),
-      createHeight(0) {
-        createBackground[0] = 0.0;
-        createBackground[1] = 0.0;
-        createBackground[2] = 0.0;
-        createBackground[3] = 255.0;
-      }
+      createHeight(0),
+      createBackground{ 0.0, 0.0, 0.0, 255.0 } {}
   };
   // Convenience methods to access the attributes of a v8::Object
   bool HasAttr(v8::Handle<v8::Object> obj, std::string attr);
   std::string AttrAsStr(v8::Handle<v8::Object> obj, std::string attr);
+  std::vector<double> AttrAsRgba(v8::Handle<v8::Object> obj, std::string attr);
   template<typename T> v8::Local<T> AttrAs(v8::Handle<v8::Object> obj, std::string attr) {
     return Nan::Get(obj, Nan::New(attr).ToLocalChecked()).ToLocalChecked().As<T>();
   }
@@ -258,7 +255,7 @@ namespace sharp {
   /*
     Apply the alpha channel to a given colour
   */
-  std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, double colour[4]);
+  std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, std::vector<double> colour);
 } // namespace sharp

View File

@@ -318,9 +318,9 @@ class PipelineWorker : public Nan::AsyncWorker {
       double const multiplier = sharp::Is16Bit(image.interpretation()) ? 256.0 : 1.0;
       // Background colour
       std::vector<double> background {
-        baton->background[0] * multiplier,
-        baton->background[1] * multiplier,
-        baton->background[2] * multiplier
+        baton->flattenBackground[0] * multiplier,
+        baton->flattenBackground[1] * multiplier,
+        baton->flattenBackground[2] * multiplier
       };
       image = image.flatten(VImage::option()
         ->set("background", background));
@@ -422,7 +422,7 @@ class PipelineWorker : public Nan::AsyncWorker {
       if (image.width() != baton->width || image.height() != baton->height) {
         if (baton->canvas == Canvas::EMBED) {
           std::vector<double> background;
-          std::tie(image, background) = sharp::ApplyAlpha(image, baton->background);
+          std::tie(image, background) = sharp::ApplyAlpha(image, baton->resizeBackground);
           // Embed
@@ -492,7 +492,7 @@ class PipelineWorker : public Nan::AsyncWorker {
       // Extend edges
       if (baton->extendTop > 0 || baton->extendBottom > 0 || baton->extendLeft > 0 || baton->extendRight > 0) {
         std::vector<double> background;
-        std::tie(image, background) = sharp::ApplyAlpha(image, baton->background);
+        std::tie(image, background) = sharp::ApplyAlpha(image, baton->extendBackground);
         // Embed
         baton->width = image.width() + baton->extendLeft + baton->extendRight;
@@ -1097,6 +1097,7 @@ NAN_METHOD(pipeline) {
   using sharp::AttrTo;
   using sharp::AttrAs;
   using sharp::AttrAsStr;
+  using sharp::AttrAsRgba;
   using sharp::CreateInputDescriptor;
   // Input Buffers must not undergo GC compaction during processing
@@ -1140,11 +1141,6 @@ NAN_METHOD(pipeline) {
   } else if (canvas == "ignore_aspect") {
     baton->canvas = Canvas::IGNORE_ASPECT;
   }
-  // Background colour
-  v8::Local<v8::Object> background = AttrAs<v8::Object>(options, "background");
-  for (unsigned int i = 0; i < 4; i++) {
-    baton->background[i] = AttrTo<double>(background, i);
-  }
   // Tint chroma
   baton->tintA = AttrTo<double>(options, "tintA");
   baton->tintB = AttrTo<double>(options, "tintB");
@ -1160,6 +1156,7 @@ NAN_METHOD(pipeline) {
// Resize options // Resize options
baton->withoutEnlargement = AttrTo<bool>(options, "withoutEnlargement"); baton->withoutEnlargement = AttrTo<bool>(options, "withoutEnlargement");
baton->position = AttrTo<int32_t>(options, "position"); baton->position = AttrTo<int32_t>(options, "position");
baton->resizeBackground = AttrAsRgba(options, "resizeBackground");
baton->kernel = AttrAsStr(options, "kernel"); baton->kernel = AttrAsStr(options, "kernel");
baton->fastShrinkOnLoad = AttrTo<bool>(options, "fastShrinkOnLoad"); baton->fastShrinkOnLoad = AttrTo<bool>(options, "fastShrinkOnLoad");
// Join Channel Options // Join Channel Options
@ -1177,6 +1174,7 @@ NAN_METHOD(pipeline) {
} }
// Operators // Operators
baton->flatten = AttrTo<bool>(options, "flatten"); baton->flatten = AttrTo<bool>(options, "flatten");
baton->flattenBackground = AttrAsRgba(options, "flattenBackground");
baton->negate = AttrTo<bool>(options, "negate"); baton->negate = AttrTo<bool>(options, "negate");
baton->blurSigma = AttrTo<double>(options, "blurSigma"); baton->blurSigma = AttrTo<double>(options, "blurSigma");
baton->medianSize = AttrTo<uint32_t>(options, "medianSize"); baton->medianSize = AttrTo<uint32_t>(options, "medianSize");
@ -1194,11 +1192,7 @@ NAN_METHOD(pipeline) {
baton->useExifOrientation = AttrTo<bool>(options, "useExifOrientation"); baton->useExifOrientation = AttrTo<bool>(options, "useExifOrientation");
baton->angle = AttrTo<int32_t>(options, "angle"); baton->angle = AttrTo<int32_t>(options, "angle");
baton->rotationAngle = AttrTo<double>(options, "rotationAngle"); baton->rotationAngle = AttrTo<double>(options, "rotationAngle");
// Rotation background colour baton->rotationBackground = AttrAsRgba(options, "rotationBackground");
v8::Local<v8::Object> rotationBackground = AttrAs<v8::Object>(options, "rotationBackground");
for (unsigned int i = 0; i < 4; i++) {
baton->rotationBackground[i] = AttrTo<double>(rotationBackground, i);
}
baton->rotateBeforePreExtract = AttrTo<bool>(options, "rotateBeforePreExtract"); baton->rotateBeforePreExtract = AttrTo<bool>(options, "rotateBeforePreExtract");
baton->flip = AttrTo<bool>(options, "flip"); baton->flip = AttrTo<bool>(options, "flip");
baton->flop = AttrTo<bool>(options, "flop"); baton->flop = AttrTo<bool>(options, "flop");
@ -1206,7 +1200,9 @@ NAN_METHOD(pipeline) {
baton->extendBottom = AttrTo<int32_t>(options, "extendBottom"); baton->extendBottom = AttrTo<int32_t>(options, "extendBottom");
baton->extendLeft = AttrTo<int32_t>(options, "extendLeft"); baton->extendLeft = AttrTo<int32_t>(options, "extendLeft");
baton->extendRight = AttrTo<int32_t>(options, "extendRight"); baton->extendRight = AttrTo<int32_t>(options, "extendRight");
baton->extendBackground = AttrAsRgba(options, "extendBackground");
baton->extractChannel = AttrTo<int32_t>(options, "extractChannel"); baton->extractChannel = AttrTo<int32_t>(options, "extractChannel");
baton->removeAlpha = AttrTo<bool>(options, "removeAlpha"); baton->removeAlpha = AttrTo<bool>(options, "removeAlpha");
if (HasAttr(options, "boolean")) { if (HasAttr(options, "boolean")) {
baton->boolean = CreateInputDescriptor(AttrAs<v8::Object>(options, "boolean"), buffersToPersist); baton->boolean = CreateInputDescriptor(AttrAs<v8::Object>(options, "boolean"), buffersToPersist);
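The native side above now reads each per-operation background as an RGBA vector via `AttrAsRgba` instead of a shared `baton->background` array. A rough sketch of the colour normalisation involved (a hypothetical helper, not sharp's actual code): a `{ r, g, b, alpha }` object, with `alpha` in the 0-1 range, becomes the `[r, g, b, alpha * 255]` vector the baton stores, with clipping and an opaque-black default.

```javascript
// Hypothetical sketch of normalising a background option into the
// [r, g, b, alpha] vector the native pipeline expects; not sharp's
// actual implementation.
const DEFAULT_BACKGROUND = [0, 0, 0, 255];

function toRgbaVector (background) {
  if (background === undefined) {
    return DEFAULT_BACKGROUND.slice(); // default: opaque black
  }
  // Clip each value to a sensible min/max rather than throwing
  const clip = (value, max) => Math.min(Math.max(value, 0), max);
  const { r = 0, g = 0, b = 0, alpha = 1 } = background;
  // Public API alpha is a float in [0, 1]; the baton stores 0-255
  return [clip(r, 255), clip(g, 255), clip(b, 255), clip(alpha, 1) * 255];
}

console.log(toRgbaVector(undefined));
console.log(toRgbaVector({ r: 255, g: 102, b: 0 }));
console.log(toRgbaVector({ r: 0, g: 0, b: 0, alpha: 0 }));
```

Each of `resizeBackground`, `flattenBackground`, `extendBackground` and `rotationBackground` can then carry its own such vector independently.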

View File

@@ -62,16 +62,17 @@ struct PipelineBaton {
   int channels;
   Canvas canvas;
   int position;
+  std::vector<double> resizeBackground;
   bool hasCropOffset;
   int cropOffsetLeft;
   int cropOffsetTop;
   bool premultiplied;
   std::string kernel;
   bool fastShrinkOnLoad;
-  double background[4];
   double tintA;
   double tintB;
   bool flatten;
+  std::vector<double> flattenBackground;
   bool negate;
   double blurSigma;
   int medianSize;
@@ -89,7 +90,7 @@ struct PipelineBaton {
   bool useExifOrientation;
   int angle;
   double rotationAngle;
-  double rotationBackground[4];
+  std::vector<double> rotationBackground;
   bool rotateBeforePreExtract;
   bool flip;
   bool flop;
@@ -97,6 +98,7 @@ struct PipelineBaton {
   int extendBottom;
   int extendLeft;
   int extendRight;
+  std::vector<double> extendBackground;
   bool withoutEnlargement;
   VipsAccess accessMethod;
   int jpegQuality;
@@ -157,6 +159,7 @@ struct PipelineBaton {
     channels(0),
     canvas(Canvas::CROP),
     position(0),
+    resizeBackground{ 0.0, 0.0, 0.0, 255.0 },
     hasCropOffset(false),
     cropOffsetLeft(0),
     cropOffsetTop(0),
@@ -164,6 +167,7 @@ struct PipelineBaton {
     tintA(128.0),
     tintB(128.0),
     flatten(false),
+    flattenBackground{ 0.0, 0.0, 0.0 },
     negate(false),
     blurSigma(0.0),
     medianSize(0),
@@ -181,12 +185,14 @@ struct PipelineBaton {
     useExifOrientation(false),
     angle(0),
     rotationAngle(0.0),
+    rotationBackground{ 0.0, 0.0, 0.0, 255.0 },
     flip(false),
     flop(false),
     extendTop(0),
     extendBottom(0),
     extendLeft(0),
     extendRight(0),
+    extendBackground{ 0.0, 0.0, 0.0, 255.0 },
     withoutEnlargement(false),
     jpegQuality(80),
     jpegProgressive(false),
@@ -223,12 +229,7 @@ struct PipelineBaton {
     tileContainer(VIPS_FOREIGN_DZ_CONTAINER_FS),
     tileLayout(VIPS_FOREIGN_DZ_LAYOUT_DZ),
     tileAngle(0),
-    tileDepth(VIPS_FOREIGN_DZ_DEPTH_LAST){
-    background[0] = 0.0;
-    background[1] = 0.0;
-    background[2] = 0.0;
-    background[3] = 255.0;
-  }
+    tileDepth(VIPS_FOREIGN_DZ_DEPTH_LAST) {}
 };

 #endif  // SRC_PIPELINE_H_

View File

@@ -19,9 +19,10 @@ describe('Alpha transparency', function () {
   it('Flatten to RGB orange', function (done) {
     sharp(fixtures.inputPngWithTransparency)
-      .flatten()
-      .background({r: 255, g: 102, b: 0})
       .resize(400, 300)
+      .flatten({
+        background: { r: 255, g: 102, b: 0 }
+      })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual(400, info.width);
@@ -32,9 +33,8 @@ describe('Alpha transparency', function () {
   it('Flatten to CSS/hex orange', function (done) {
     sharp(fixtures.inputPngWithTransparency)
-      .flatten()
-      .background('#ff6600')
       .resize(400, 300)
+      .flatten({ background: '#ff6600' })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual(400, info.width);
@@ -46,8 +46,9 @@ describe('Alpha transparency', function () {
   it('Flatten 16-bit PNG with transparency to orange', function (done) {
     const output = fixtures.path('output.flatten-rgb16-orange.jpg');
     sharp(fixtures.inputPngWithTransparency16bit)
-      .flatten()
-      .background({r: 255, g: 102, b: 0})
+      .flatten({
+        background: { r: 255, g: 102, b: 0 }
+      })
       .toFile(output, function (err, info) {
         if (err) throw err;
         assert.strictEqual(true, info.size > 0);
@@ -71,8 +72,7 @@ describe('Alpha transparency', function () {
   it('Ignored for JPEG', function (done) {
     sharp(fixtures.inputJpg)
-      .background('#ff0000')
-      .flatten()
+      .flatten({ background: '#ff0000' })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual('jpeg', info.format);

View File

@@ -69,8 +69,10 @@ describe('Colour space conversion', function () {
   it('From CMYK to sRGB with white background, not yellow', function (done) {
     sharp(fixtures.inputJpgWithCmykProfile)
-      .resize(320, 240, { fit: sharp.fit.contain })
-      .background('white')
+      .resize(320, 240, {
+        fit: sharp.fit.contain,
+        background: 'white'
+      })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual('jpeg', info.format);

View File

@@ -0,0 +1,73 @@
+'use strict';
+
+const assert = require('assert');
+
+const fixtures = require('../fixtures');
+const sharp = require('../../');
+
+describe('Deprecated background', function () {
+  it('Flatten to RGB orange', function (done) {
+    sharp(fixtures.inputPngWithTransparency)
+      .flatten()
+      .background({r: 255, g: 102, b: 0})
+      .resize(400, 300)
+      .toBuffer(function (err, data, info) {
+        if (err) throw err;
+        assert.strictEqual(400, info.width);
+        assert.strictEqual(300, info.height);
+        fixtures.assertSimilar(fixtures.expected('flatten-orange.jpg'), data, done);
+      });
+  });
+
+  it('Flatten to CSS/hex orange', function (done) {
+    sharp(fixtures.inputPngWithTransparency)
+      .flatten()
+      .background('#ff6600')
+      .resize(400, 300)
+      .toBuffer(function (err, data, info) {
+        if (err) throw err;
+        assert.strictEqual(400, info.width);
+        assert.strictEqual(300, info.height);
+        fixtures.assertSimilar(fixtures.expected('flatten-orange.jpg'), data, done);
+      });
+  });
+
+  it('Flatten 16-bit PNG with transparency to orange', function (done) {
+    const output = fixtures.path('output.flatten-rgb16-orange.jpg');
+    sharp(fixtures.inputPngWithTransparency16bit)
+      .flatten()
+      .background({r: 255, g: 102, b: 0})
+      .toFile(output, function (err, info) {
+        if (err) throw err;
+        assert.strictEqual(true, info.size > 0);
+        assert.strictEqual(32, info.width);
+        assert.strictEqual(32, info.height);
+        fixtures.assertMaxColourDistance(output, fixtures.expected('flatten-rgb16-orange.jpg'), 25);
+        done();
+      });
+  });
+
+  it('Ignored for JPEG', function (done) {
+    sharp(fixtures.inputJpg)
+      .background('#ff0000')
+      .flatten()
+      .toBuffer(function (err, data, info) {
+        if (err) throw err;
+        assert.strictEqual('jpeg', info.format);
+        assert.strictEqual(3, info.channels);
+        done();
+      });
+  });
+
+  it('extend all sides equally with RGB', function (done) {
+    sharp(fixtures.inputJpg)
+      .resize(120)
+      .background({r: 255, g: 0, b: 0})
+      .extend(10)
+      .toBuffer(function (err, data, info) {
+        if (err) throw err;
+        assert.strictEqual(140, info.width);
+        assert.strictEqual(118, info.height);
+        fixtures.assertSimilar(fixtures.expected('extend-equal.jpg'), data, done);
+      });
+  });
+});
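The file above keeps the old chainable `background()` under test while it is deprecated. To illustrate why the migration is mechanical, here is a minimal stand-in (hypothetical, not sharp's actual implementation) showing the deprecated chainable call and the new per-operation option producing the same stored background vector:

```javascript
// Hypothetical stand-in for the deprecation shim; real sharp wires these
// options through to the native pipeline instead.
class FakePipeline {
  constructor () {
    this.options = {
      flattenBackground: [0, 0, 0] // default: opaque black
    };
  }

  // Deprecated: remember the colour for subsequent operations
  background (rgba) {
    this._deprecatedBackground = rgba;
    return this;
  }

  // New style: the background travels with the operation itself
  flatten (options) {
    const bg = (options && options.background) ||
      this._deprecatedBackground ||
      { r: 0, g: 0, b: 0 };
    this.options.flattenBackground = [bg.r, bg.g, bg.b];
    return this;
  }
}

const viaDeprecated = new FakePipeline()
  .background({ r: 255, g: 102, b: 0 })
  .flatten();
const viaOption = new FakePipeline()
  .flatten({ background: { r: 255, g: 102, b: 0 } });
// Both call styles end up with the same flattenBackground vector
```

The same mapping applies to `resize`, `extend` and `rotate`, each of which gains its own `background` option in this commit.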

View File

@@ -9,8 +9,13 @@ describe('Extend', function () {
   it('extend all sides equally with RGB', function (done) {
     sharp(fixtures.inputJpg)
       .resize(120)
-      .background({r: 255, g: 0, b: 0})
-      .extend(10)
+      .extend({
+        top: 10,
+        bottom: 10,
+        left: 10,
+        right: 10,
+        background: { r: 255, g: 0, b: 0 }
+      })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual(140, info.width);
@@ -22,8 +27,13 @@ describe('Extend', function () {
   it('extend sides unequally with RGBA', function (done) {
     sharp(fixtures.inputPngWithTransparency16bit)
       .resize(120)
-      .background({r: 0, g: 0, b: 0, alpha: 0})
-      .extend({top: 50, bottom: 0, left: 10, right: 35})
+      .extend({
+        top: 50,
+        bottom: 0,
+        left: 10,
+        right: 35,
+        background: { r: 0, g: 0, b: 0, alpha: 0 }
+      })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual(165, info.width);
@@ -50,9 +60,14 @@ describe('Extend', function () {
   it('should add alpha channel before extending with a transparent Background', function (done) {
     sharp(fixtures.inputJpgWithLandscapeExif1)
-      .background({r: 0, g: 0, b: 0, alpha: 0})
+      .extend({
+        top: 0,
+        bottom: 10,
+        left: 0,
+        right: 10,
+        background: { r: 0, g: 0, b: 0, alpha: 0 }
+      })
       .toFormat(sharp.format.png)
-      .extend({top: 0, bottom: 10, left: 0, right: 10})
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual(610, info.width);
@@ -63,8 +78,13 @@ describe('Extend', function () {
   it('PNG with 2 channels', function (done) {
     sharp(fixtures.inputPngWithGreyAlpha)
-      .background('transparent')
-      .extend({top: 0, bottom: 20, left: 0, right: 20})
+      .extend({
+        top: 0,
+        bottom: 20,
+        left: 0,
+        right: 20,
+        background: 'transparent'
+      })
       .toBuffer(function (err, data, info) {
         if (err) throw err;
         assert.strictEqual(true, data.length > 0);

View File

@ -38,8 +38,10 @@ describe('Resize fit=contain', function () {
it('JPEG within WebP, to include alpha channel', function (done) { it('JPEG within WebP, to include alpha channel', function (done) {
sharp(fixtures.inputJpg) sharp(fixtures.inputJpg)
.resize(320, 240, { fit: 'contain' }) .resize(320, 240, {
.background({r: 0, g: 0, b: 0, alpha: 0}) fit: 'contain',
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.webp() .webp()
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
@ -82,8 +84,10 @@ describe('Resize fit=contain', function () {
it('16-bit PNG with alpha channel onto RGBA', function (done) { it('16-bit PNG with alpha channel onto RGBA', function (done) {
sharp(fixtures.inputPngWithTransparency16bit) sharp(fixtures.inputPngWithTransparency16bit)
.resize(32, 16, { fit: 'contain' }) .resize(32, 16, {
.background({r: 0, g: 0, b: 0, alpha: 0}) fit: 'contain',
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -97,8 +101,10 @@ describe('Resize fit=contain', function () {
it('PNG with 2 channels', function (done) { it('PNG with 2 channels', function (done) {
sharp(fixtures.inputPngWithGreyAlpha) sharp(fixtures.inputPngWithGreyAlpha)
.resize(32, 16, { fit: 'contain' }) .resize(32, 16, {
.background({r: 0, g: 0, b: 0, alpha: 0}) fit: 'contain',
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -112,8 +118,10 @@ describe('Resize fit=contain', function () {
it.skip('TIFF in LAB colourspace onto RGBA background', function (done) { it.skip('TIFF in LAB colourspace onto RGBA background', function (done) {
sharp(fixtures.inputTiffCielab) sharp(fixtures.inputTiffCielab)
.resize(64, 128, { fit: 'contain' }) .resize(64, 128, {
.background({r: 255, g: 102, b: 0, alpha: 0.5}) fit: 'contain',
background: { r: 255, g: 102, b: 0, alpha: 0.5 }
})
.png() .png()
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
@ -152,9 +160,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'top' position: 'top'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -170,9 +178,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right top' position: 'right top'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -188,9 +196,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right' position: 'right'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -206,9 +214,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right bottom' position: 'right bottom'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -224,9 +232,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'bottom' position: 'bottom'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -242,9 +250,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left bottom' position: 'left bottom'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -260,9 +268,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left' position: 'left'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -278,9 +286,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left top' position: 'left top'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -296,9 +304,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.north position: sharp.gravity.north
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -314,9 +322,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northeast position: sharp.gravity.northeast
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -332,9 +340,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.east position: sharp.gravity.east
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -350,9 +358,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.southeast position: sharp.gravity.southeast
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -368,9 +376,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.south position: sharp.gravity.south
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -386,9 +394,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.southwest position: sharp.gravity.southwest
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -404,9 +412,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.west position: sharp.gravity.west
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -422,9 +430,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northwest position: sharp.gravity.northwest
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -440,9 +448,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 100, { .resize(200, 100, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.center position: sharp.gravity.center
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -458,9 +466,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'top' position: 'top'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -476,9 +484,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right top' position: 'right top'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -494,9 +502,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right' position: 'right'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -512,9 +520,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right bottom' position: 'right bottom'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -530,9 +538,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'bottom' position: 'bottom'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -548,9 +556,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left bottom' position: 'left bottom'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -566,9 +574,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left' position: 'left'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -584,9 +592,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left top' position: 'left top'
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -602,9 +610,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.north position: sharp.gravity.north
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@ -620,9 +628,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed) sharp(fixtures.inputPngEmbed)
.resize(200, 200, { .resize(200, 200, {
fit: sharp.fit.contain, fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northeast position: sharp.gravity.northeast
}) })
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) { .toBuffer(function (err, data, info) {
if (err) throw err; if (err) throw err;
assert.strictEqual(true, data.length > 0); assert.strictEqual(true, data.length > 0);
@@ -638,9 +646,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.east
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
@@ -656,9 +664,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.southeast
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
@@ -674,9 +682,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.south
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
@@ -692,9 +700,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.southwest
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
@@ -710,9 +718,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.west
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
@@ -728,9 +736,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.northwest
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
@@ -746,9 +754,9 @@ describe('Resize fit=contain', function () {
       sharp(fixtures.inputPngEmbed)
         .resize(200, 200, {
           fit: sharp.fit.contain,
+          background: { r: 0, g: 0, b: 0, alpha: 0 },
           position: sharp.gravity.center
         })
-        .background({r: 0, g: 0, b: 0, alpha: 0})
         .toBuffer(function (err, data, info) {
           if (err) throw err;
           assert.strictEqual(true, data.length > 0);
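Every hunk in this test file applies the same mechanical migration: the deprecated chained `.background()` call is dropped and the background colour becomes an op-specific option passed to `resize()`. A minimal sketch of the migrated call shape (the exact sharp version in which the chained call is removed is an assumption; only the options-object shape below is taken from the diff):

```javascript
// Before (deprecated):
//   sharp(input).resize(200, 200, { fit: 'contain' })
//     .background({ r: 0, g: 0, b: 0, alpha: 0 })
//
// After (#1392): background is passed as a resize() option.
const resizeOptions = {
  fit: 'contain',                              // equivalent to sharp.fit.contain
  position: 'left top',                        // or a sharp.gravity.* constant
  background: { r: 0, g: 0, b: 0, alpha: 0 }   // fully transparent black
};
// Usage: sharp(fixtures.inputPngEmbed).resize(200, 200, resizeOptions).toBuffer(cb)
console.log(Object.keys(resizeOptions).join(','));
```

The same pattern carries over to `extend()` and `flatten()`, which gain their own `background` option in this commit, so each operation can use a different background without shared mutable state on the pipeline.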