Deprecate background, add op-specific prop to resize/extend/flatten #1392

Lovell Fuller 2018-10-01 20:58:55 +01:00
parent 6007e13a22
commit a64844689e
17 changed files with 378 additions and 296 deletions

View File

@ -1,23 +1,5 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## background
Set the background for the `embed`, `flatten` and `extend` operations.
The default background is `{r: 0, g: 0, b: 0, alpha: 1}`, black without transparency.
Delegates to the _color_ module, which can throw an Error
but is liberal in what it accepts, clipping values to sensible min/max.
The alpha value is a float between `0` (transparent) and `1` (opaque).
### Parameters
- `rgba` **([String][1] \| [Object][2])** parsed by the [color][3] module to extract values for red, green, blue and alpha.
- Throws **[Error][4]** Invalid parameter
Returns **Sharp**
## tint
Tint the image using the provided chroma while preserving the image luminance.

View File

@ -114,11 +114,11 @@ Returns **Sharp**
## flatten
Merge alpha transparency channel, if any, with `background`.
Merge alpha transparency channel, if any, with a background.
### Parameters
- `flatten` **[Boolean][6]** (optional, default `true`)
- `options`
Returns **Sharp**

View File

@ -2,179 +2,126 @@
## resize
Resize image to `width` x `height`.
By default, the resized image is centre cropped to the exact size specified.
Resize image to `width`, `height` or `width x height`.
Possible kernels are:
When both a `width` and `height` are provided, the possible methods by which the image should **fit** these are:
- `nearest`: Use [nearest neighbour interpolation][1].
- `cubic`: Use a [Catmull-Rom spline][2].
- `lanczos2`: Use a [Lanczos kernel][3] with `a=2`.
- `lanczos3`: Use a Lanczos kernel with `a=3` (the default).
- `cover`: Crop to cover both provided dimensions (the default).
- `contain`: Embed within both provided dimensions.
- `fill`: Ignore the aspect ratio of the input and stretch to both provided dimensions.
- `inside`: Preserving aspect ratio, resize the image to be as large as possible while ensuring its dimensions are less than or equal to both those specified.
- `outside`: Preserving aspect ratio, resize the image to be as small as possible while ensuring its dimensions are greater than or equal to both those specified.
Some of these values are based on the [object-fit][1] CSS property.
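The geometry implied by these modes can be sketched as a plain function. This is an illustrative reimplementation, not sharp's actual code; `fitDimensions` is a hypothetical name:

```javascript
// Sketch of the output dimensions each `fit` mode produces for a
// target box, ignoring the rounding details of the real implementation.
function fitDimensions (srcW, srcH, dstW, dstH, fit) {
  const scaleW = dstW / srcW;
  const scaleH = dstH / srcH;
  switch (fit) {
    case 'fill':
      // Ignore aspect ratio, stretch to both provided dimensions
      return { width: dstW, height: dstH };
    case 'inside': {
      // As large as possible while within both provided dimensions
      const s = Math.min(scaleW, scaleH);
      return { width: Math.round(srcW * s), height: Math.round(srcH * s) };
    }
    case 'outside': {
      // As small as possible while covering both provided dimensions
      const s = Math.max(scaleW, scaleH);
      return { width: Math.round(srcW * s), height: Math.round(srcH * s) };
    }
    case 'cover':
    case 'contain':
      // Both produce a canvas of exactly dstW x dstH: cover crops the
      // overflow, contain embeds the scaled image on a background
      return { width: dstW, height: dstH };
    default:
      throw new Error('Unknown fit: ' + fit);
  }
}
```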
### Parameters
When using a `fit` of `cover` or `contain`, the default **position** is `centre`. Other options are:
- `width` **[Number][4]?** pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height.
- `height` **[Number][4]?** pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width.
- `options` **[Object][5]?**
- `options.kernel` **[String][6]** the kernel to use for image reduction. (optional, default `'lanczos3'`)
- `options.fastShrinkOnLoad` **[Boolean][7]** take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern on some images. (optional, default `true`)
### Examples
```javascript
sharp(inputBuffer)
  .resize(200, 300, {
    kernel: sharp.kernel.nearest
  })
  .background('white')
  .embed()
  .toFile('output.tiff')
  .then(function() {
    // output.tiff is a 200 pixels wide and 300 pixels high image
    // containing a nearest-neighbour scaled version, embedded on a white canvas,
    // of the image data in inputBuffer
  });
```
- Throws **[Error][8]** Invalid parameters
Returns **Sharp**
## crop
Crop the resized image to the exact size specified, the default behaviour.
Possible attributes of the optional `sharp.gravity` are `north`, `northeast`, `east`, `southeast`, `south`,
`southwest`, `west`, `northwest`, `center` and `centre`.
- `sharp.position`: `top`, `right top`, `right`, `right bottom`, `bottom`, `left bottom`, `left`, `left top`.
- `sharp.gravity`: `north`, `northeast`, `east`, `southeast`, `south`, `southwest`, `west`, `northwest`, `center` or `centre`.
- `sharp.strategy`: `cover` only, dynamically crop using either the `entropy` or `attention` strategy.
Some of these values are based on the [object-position][2] CSS property.
The experimental strategy-based approach resizes so one dimension is at its target length
then repeatedly ranks edge regions, discarding the edge with the lowest score based on the selected strategy.
- `entropy`: focus on the region with the highest [Shannon entropy][9].
- `entropy`: focus on the region with the highest [Shannon entropy][3].
- `attention`: focus on the region with the highest luminance frequency, colour saturation and presence of skin tones.
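As an illustration of the score used by the `entropy` strategy, Shannon entropy can be computed from the distribution of pixel values in a region. This is a sketch, not libvips' implementation; `shannonEntropy` is a hypothetical helper:

```javascript
// Shannon entropy of an array of pixel values: H = -sum(p * log2(p)).
// A higher value means more "information" in the region, so the edge
// region with the lowest entropy is the one discarded first.
function shannonEntropy (pixels) {
  const counts = new Map();
  for (const p of pixels) {
    counts.set(p, (counts.get(p) || 0) + 1);
  }
  let entropy = 0;
  for (const count of counts.values()) {
    const probability = count / pixels.length;
    entropy -= probability * Math.log2(probability);
  }
  return entropy;
}
```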
Possible interpolation kernels are:
- `nearest`: Use [nearest neighbour interpolation][4].
- `cubic`: Use a [Catmull-Rom spline][5].
- `lanczos2`: Use a [Lanczos kernel][6] with `a=2`.
- `lanczos3`: Use a Lanczos kernel with `a=3` (the default).
### Parameters
- `crop` **[String][6]** A member of `sharp.gravity` to crop to an edge/corner or `sharp.strategy` to crop dynamically. (optional, default `'centre'`)
- `width` **[Number][7]?** pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height.
- `height` **[Number][7]?** pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width.
- `options`
### Examples
```javascript
sharp(input)
  .resize({ width: 100 })
  .toBuffer()
  .then(data => {
    // 100 pixels wide, auto-scaled height
  });
```
```javascript
sharp(input)
  .resize({ height: 100 })
  .toBuffer()
  .then(data => {
    // 100 pixels high, auto-scaled width
  });
```
```javascript
sharp(input)
  .resize(200, 300, {
    kernel: sharp.kernel.nearest,
    fit: 'contain',
    position: 'right top',
    background: { r: 255, g: 255, b: 255, alpha: 0.5 }
  })
  .toFile('output.png')
  .then(() => {
    // output.png is a 200 pixels wide and 300 pixels high image
    // containing a nearest-neighbour scaled version
    // contained within the north-east corner of a semi-transparent white canvas
  });
```
```javascript
const transformer = sharp()
  .resize(200, 200)
  .crop(sharp.strategy.entropy)
  .on('error', function(err) {
    console.log(err);
  .resize({
    width: 200,
    height: 200,
    fit: sharp.fit.cover,
    position: sharp.strategy.entropy
  });
// Read image data from readableStream
// Write 200px square auto-cropped image data to writableStream
readableStream.pipe(transformer).pipe(writableStream);
readableStream
  .pipe(transformer)
  .pipe(writableStream);
```
- Throws **[Error][8]** Invalid parameters
Returns **Sharp**
## embed
Preserving aspect ratio, resize the image to the maximum `width` or `height` specified
then embed on a background of the exact `width` and `height` specified.
If the background contains an alpha value then WebP and PNG format output images will
contain an alpha channel, even when the input image does not.
### Parameters
- `embed` **[String][6]** A member of `sharp.gravity` to embed to an edge/corner. (optional, default `'centre'`)
### Examples
```javascript
sharp('input.gif')
  .resize(200, 300)
  .background({r: 0, g: 0, b: 0, alpha: 0})
  .embed()
  .toFormat(sharp.format.webp)
  .toBuffer(function(err, outputBuffer) {
    if (err) {
      throw err;
    }
    // outputBuffer contains WebP image data of a 200 pixels wide and 300 pixels high
    // containing a scaled version, embedded on a transparent canvas, of input.gif
  });
```
- Throws **[Error][8]** Invalid parameters
Returns **Sharp**
## max
Preserving aspect ratio, resize the image to be as large as possible
while ensuring its dimensions are less than or equal to the `width` and `height` specified.
Both `width` and `height` must be provided via `resize`, otherwise the behaviour will default to `crop`.
### Examples
```javascript
sharp(inputBuffer)
  .resize(200, 200)
  .max()
sharp(input)
  .resize(200, 200, {
    fit: sharp.fit.inside,
    withoutEnlargement: true
  })
  .toFormat('jpeg')
  .toBuffer()
  .then(function(outputBuffer) {
    // outputBuffer contains JPEG image data no wider than 200 pixels and no higher
    // than 200 pixels regardless of the inputBuffer image dimensions
    // outputBuffer contains JPEG image data
    // no wider and no higher than 200 pixels
    // and no larger than the input image
  });
```
Returns **Sharp**
## min
Preserving aspect ratio, resize the image to be as small as possible
while ensuring its dimensions are greater than or equal to the `width` and `height` specified.
Both `width` and `height` must be provided via `resize`, otherwise the behaviour will default to `crop`.
Returns **Sharp**
## ignoreAspectRatio
Ignoring the aspect ratio of the input, stretch the image to
the exact `width` and/or `height` provided via `resize`.
Returns **Sharp**
## withoutEnlargement
Do not enlarge the output image if the input image width _or_ height is already less than the required dimensions.
This is equivalent to GraphicsMagick's `>` geometry option:
"_change the dimensions of the image only if its width or height exceeds the geometry specification_".
Use with `max()` to preserve the image's aspect ratio.
The default behaviour _before_ function call is `false`, meaning the image will be enlarged.
### Parameters
- `withoutEnlargement` **[Boolean][7]** (optional, default `true`)
- Throws **[Error][8]** Invalid parameters
Returns **Sharp**
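The guard this option implements can be sketched as a simple check (illustrative only; `wouldEnlarge` is a hypothetical name):

```javascript
// Sketch of the withoutEnlargement guard: a resize would enlarge the
// image when the input's width *or* height is already below the
// requested dimensions, and in that case the operation is skipped.
function wouldEnlarge (srcW, srcH, dstW, dstH) {
  return srcW < dstW || srcH < dstH;
}
```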
## extend
Extends/pads the edges of the image with the colour provided to the `background` method.
Extends/pads the edges of the image with the provided background colour.
This operation will always occur after resizing and extraction, if any.
### Parameters
- `extend` **([Number][4] \| [Object][5])** single pixel count to add to all edges or an Object with per-edge counts
- `extend.top` **[Number][4]?**
- `extend.left` **[Number][4]?**
- `extend.bottom` **[Number][4]?**
- `extend.right` **[Number][4]?**
- `extend` **([Number][7] \| [Object][9])** single pixel count to add to all edges or an Object with per-edge counts
- `extend.top` **[Number][7]?**
- `extend.left` **[Number][7]?**
- `extend.bottom` **[Number][7]?**
- `extend.right` **[Number][7]?**
- `extend.background` **([String][10] \| [Object][9])** background colour, parsed by the [color][11] module, defaults to black without transparency. (optional, default `{r:0,g:0,b:0,alpha:1}`)
### Examples
@ -183,8 +130,14 @@ This operation will always occur after resizing and extraction, if any.
// to the top, left and right edges and 20 to the bottom edge
sharp(input)
  .resize(140)
  .background({r: 0, g: 0, b: 0, alpha: 0})
  .extend({top: 10, bottom: 20, left: 10, right: 10})
  .extend({
    top: 10,
    bottom: 20,
    left: 10,
    right: 10,
    background: { r: 0, g: 0, b: 0, alpha: 0 }
  })
...
```
@ -202,11 +155,11 @@ Extract a region of the image.
### Parameters
- `options` **[Object][5]**
- `options.left` **[Number][4]** zero-indexed offset from left edge
- `options.top` **[Number][4]** zero-indexed offset from top edge
- `options.width` **[Number][4]** dimension of extracted image
- `options.height` **[Number][4]** dimension of extracted image
- `options` **[Object][9]**
- `options.left` **[Number][7]** zero-indexed offset from left edge
- `options.top` **[Number][7]** zero-indexed offset from top edge
- `options.width` **[Number][7]** dimension of extracted image
- `options.height` **[Number][7]** dimension of extracted image
### Examples
@ -238,27 +191,31 @@ Trim "boring" pixels from all edges that contain values within a percentage simi
### Parameters
- `tolerance` **[Number][4]** value between 1 and 99 representing the percentage similarity. (optional, default `10`)
- `tolerance` **[Number][7]** value between 1 and 99 representing the percentage similarity. (optional, default `10`)
- Throws **[Error][8]** Invalid parameters
Returns **Sharp**
[1]: http://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit
[2]: https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
[2]: https://developer.mozilla.org/en-US/docs/Web/CSS/object-position
[3]: https://en.wikipedia.org/wiki/Lanczos_resampling#Lanczos_kernel
[3]: https://en.wikipedia.org/wiki/Entropy_%28information_theory%29
[4]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
[4]: http://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[5]: https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[6]: https://en.wikipedia.org/wiki/Lanczos_resampling#Lanczos_kernel
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
[8]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
[9]: https://en.wikipedia.org/wiki/Entropy_%28information_theory%29
[9]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[10]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[11]: https://www.npmjs.org/package/color

View File

@ -15,6 +15,10 @@ Requires libvips v8.7.0.
`max().withoutEnlargement()` is now `resize(width, height, { fit: 'inside', withoutEnlargement: true })`.
[#1135](https://github.com/lovell/sharp/issues/1135)
* Deprecate the `background` function.
Per-operation `background` options added to `resize`, `extend` and `flatten` operations.
[#1392](https://github.com/lovell/sharp/issues/1392)
* Drop Node 4 support.
[#1212](https://github.com/lovell/sharp/issues/1212)

View File

@ -1,5 +1,7 @@
'use strict';
const deprecate = require('util').deprecate;
const color = require('color');
const is = require('./is');
@ -16,25 +18,20 @@ const colourspace = {
};
/**
* Set the background for the `embed`, `flatten` and `extend` operations.
* The default background is `{r: 0, g: 0, b: 0, alpha: 1}`, black without transparency.
*
* Delegates to the _color_ module, which can throw an Error
* but is liberal in what it accepts, clipping values to sensible min/max.
* The alpha value is a float between `0` (transparent) and `1` (opaque).
*
* @param {String|Object} rgba - parsed by the [color](https://www.npmjs.org/package/color) module to extract values for red, green, blue and alpha.
* @returns {Sharp}
* @throws {Error} Invalid parameter
* @deprecated
* @private
*/
function background (rgba) {
  const colour = color(rgba);
  this.options.background = [
  const background = [
    colour.red(),
    colour.green(),
    colour.blue(),
    Math.round(colour.alpha() * 255)
  ];
  this.options.resizeBackground = background;
  this.options.extendBackground = background;
  this.options.flattenBackground = background.slice(0, 3);
  return this;
}
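The fan-out performed by the deprecated function can be sketched without the `color` module, assuming an already-parsed `{r, g, b, alpha}` object (`fanOutBackground` is a hypothetical name; the option keys match those set above):

```javascript
// Convert one background colour into the three per-operation options.
// Alpha is scaled from a 0..1 float to a 0..255 integer, and flatten
// discards the alpha channel, matching background.slice(0, 3).
function fanOutBackground (rgba) {
  const background = [
    rgba.r,
    rgba.g,
    rgba.b,
    Math.round((rgba.alpha === undefined ? 1 : rgba.alpha) * 255)
  ];
  return {
    resizeBackground: background,
    extendBackground: background,
    flattenBackground: background.slice(0, 3)
  };
}
```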
@ -102,23 +99,45 @@ function toColorspace (colorspace) {
return this.toColourspace(colorspace);
}
/**
* Update a colour attribute of the this.options Object.
* @private
* @param {String} key
* @param {String|Object} val
* @throws {Error} Invalid key
*/
function _setColourOption (key, val) {
  if (is.object(val) || is.string(val)) {
    const colour = color(val);
    this.options[key] = [
      colour.red(),
      colour.green(),
      colour.blue(),
      Math.round(colour.alpha() * 255)
    ];
  }
}
/**
* Decorate the Sharp prototype with colour-related functions.
* @private
*/
module.exports = function (Sharp) {
  // Public instance functions
  [
    background,
    // Public
    tint,
    greyscale,
    grayscale,
    toColourspace,
    toColorspace
    toColorspace,
    // Private
    _setColourOption
  ].forEach(function (f) {
    Sharp.prototype[f.name] = f;
  });
  // Class attributes
  Sharp.colourspace = colourspace;
  Sharp.colorspace = colourspace;
  // Deprecated
  Sharp.prototype.background = deprecate(background, 'background(background) is deprecated, use resize({ background }), extend({ background }) or flatten({ background }) instead');
};

View File

@ -105,6 +105,7 @@ const Sharp = function (input, options) {
height: -1,
canvas: 'crop',
position: 0,
resizeBackground: [0, 0, 0, 255],
useExifOrientation: false,
angle: 0,
rotationAngle: 0,
@ -116,14 +117,15 @@ const Sharp = function (input, options) {
extendBottom: 0,
extendLeft: 0,
extendRight: 0,
extendBackground: [0, 0, 0, 255],
withoutEnlargement: false,
kernel: 'lanczos3',
fastShrinkOnLoad: true,
// operations
background: [0, 0, 0, 255],
tintA: 128,
tintB: 128,
flatten: false,
flattenBackground: [0, 0, 0],
negate: false,
medianSize: 0,
blurSigma: 0,

View File

@ -171,12 +171,15 @@ function blur (sigma) {
}
/**
* Merge alpha transparency channel, if any, with `background`.
* @param {Boolean} [flatten=true]
* Merge alpha transparency channel, if any, with a background.
* @param {String|Object} [options.background={r: 0, g: 0, b: 0}] - background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black.
* @returns {Sharp}
*/
function flatten (flatten) {
  this.options.flatten = is.bool(flatten) ? flatten : true;
function flatten (options) {
  this.options.flatten = is.bool(options) ? options : true;
  if (is.object(options)) {
    this._setColourOption('flattenBackground', options.background);
  }
  return this;
}
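The per-channel arithmetic that flattening delegates to libvips can be sketched as compositing each pixel over an opaque background (illustrative only; `flattenPixel` is a hypothetical helper):

```javascript
// Composite an 8-bit RGBA pixel over an opaque RGB background:
// out = alpha * value + (1 - alpha) * background, per channel.
function flattenPixel (rgba, background) {
  const alpha = rgba[3] / 255;
  return [0, 1, 2].map(i =>
    Math.round(alpha * rgba[i] + (1 - alpha) * background[i])
  );
}
```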

View File

@ -130,18 +130,18 @@ const mapFitToCanvas = {
* });
*
* @example
* sharp(inputBuffer)
* sharp(input)
*   .resize(200, 300, {
*     kernel: sharp.kernel.nearest,
*     fit: 'contain',
*     position: 'right top'
*     position: 'right top',
*     background: { r: 255, g: 255, b: 255, alpha: 0.5 }
*   })
*   .background('white')
*   .toFile('output.tiff')
*   .toFile('output.png')
*   .then(() => {
*     // output.tiff is a 200 pixels wide and 300 pixels high image
*     // output.png is a 200 pixels wide and 300 pixels high image
*     // containing a nearest-neighbour scaled version
*     // embedded in the north-east corner of a white canvas
*     // contained within the north-east corner of a semi-transparent white canvas
*   });
*
* @example
@ -159,7 +159,7 @@ const mapFitToCanvas = {
* .pipe(writableStream);
*
* @example
* sharp(inputBuffer)
* sharp(input)
*   .resize(200, 200, {
*     fit: sharp.fit.inside,
*     withoutEnlargement: true
@ -178,6 +178,7 @@ const mapFitToCanvas = {
* @param {String} [options.height] - alternative means of specifying `height`. If both are present this takes priority.
* @param {String} [options.fit='cover'] - how the image should be resized to fit both provided dimensions, one of `cover`, `contain`, `fill`, `inside` or `outside`.
* @param {String} [options.position='centre'] - position, gravity or strategy to use when `fit` is `cover` or `contain`.
* @param {String|Object} [options.background={r: 0, g: 0, b: 0, alpha: 1}] - background colour when using a `fit` of `contain`, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency.
* @param {String} [options.kernel='lanczos3'] - the kernel to use for image reduction.
* @param {Boolean} [options.withoutEnlargement=false] - do not enlarge if the width *or* height is already less than the specified dimensions, equivalent to GraphicsMagick's `>` geometry option.
* @param {Boolean} [options.fastShrinkOnLoad=true] - take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern on some images.
@ -234,6 +235,10 @@ function resize (width, height, options) {
throw is.invalidParameterError('position', 'valid position/gravity/strategy', options.position);
}
}
// Background
if (is.defined(options.background)) {
this._setColourOption('resizeBackground', options.background);
}
// Kernel
if (is.defined(options.kernel)) {
if (is.string(kernel[options.kernel])) {
@ -255,7 +260,7 @@ function resize (width, height, options) {
}
/**
* Extends/pads the edges of the image with the colour provided to the `background` method.
* Extends/pads the edges of the image with the provided background colour.
* This operation will always occur after resizing and extraction, if any.
*
* @example
@ -263,8 +268,14 @@ function resize (width, height, options) {
* // to the top, left and right edges and 20 to the bottom edge
* sharp(input)
*   .resize(140)
*   .background({r: 0, g: 0, b: 0, alpha: 0})
*   .extend({top: 10, bottom: 20, left: 10, right: 10})
*   .extend({
*     top: 10,
*     bottom: 20,
*     left: 10,
*     right: 10,
*     background: { r: 0, g: 0, b: 0, alpha: 0 }
*   })
* ...
*
* @param {(Number|Object)} extend - single pixel count to add to all edges or an Object with per-edge counts
@ -272,6 +283,7 @@ function resize (width, height, options) {
* @param {Number} [extend.left]
* @param {Number} [extend.bottom]
* @param {Number} [extend.right]
* @param {String|Object} [extend.background={r: 0, g: 0, b: 0, alpha: 1}] - background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency.
* @returns {Sharp}
* @throws {Error} Invalid parameters
*/
@ -292,6 +304,7 @@ function extend (extend) {
this.options.extendBottom = extend.bottom;
this.options.extendLeft = extend.left;
this.options.extendRight = extend.right;
this._setColourOption('extendBackground', extend.background);
} else {
throw new Error('Invalid edge extension ' + extend);
}

View File

@ -37,6 +37,14 @@ namespace sharp {
std::string AttrAsStr(v8::Handle<v8::Object> obj, std::string attr) {
return *Nan::Utf8String(Nan::Get(obj, Nan::New(attr).ToLocalChecked()).ToLocalChecked());
}
std::vector<double> AttrAsRgba(v8::Handle<v8::Object> obj, std::string attr) {
  v8::Local<v8::Object> background = AttrAs<v8::Object>(obj, attr);
  std::vector<double> rgba(4);
  for (unsigned int i = 0; i < 4; i++) {
    rgba[i] = AttrTo<double>(background, i);
  }
  return rgba;
}
// Create an InputDescriptor instance from a v8::Object describing an input image
InputDescriptor* CreateInputDescriptor(
@ -72,10 +80,7 @@ namespace sharp {
descriptor->createChannels = AttrTo<uint32_t>(input, "createChannels");
descriptor->createWidth = AttrTo<uint32_t>(input, "createWidth");
descriptor->createHeight = AttrTo<uint32_t>(input, "createHeight");
v8::Local<v8::Object> createBackground = AttrAs<v8::Object>(input, "createBackground");
for (unsigned int i = 0; i < 4; i++) {
descriptor->createBackground[i] = AttrTo<double>(createBackground, i);
}
descriptor->createBackground = AttrAsRgba(input, "createBackground");
}
return descriptor;
}
@ -605,7 +610,7 @@ namespace sharp {
/*
Apply the alpha channel to a given colour
*/
std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, double colour[4]) {
std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, std::vector<double> colour) {
// Scale up 8-bit values to match 16-bit input image
double const multiplier = sharp::Is16Bit(image.interpretation()) ? 256.0 : 1.0;
// Create alphaColour colour

View File

@ -57,7 +57,7 @@ namespace sharp {
int createChannels;
int createWidth;
int createHeight;
double createBackground[4];
std::vector<double> createBackground;
InputDescriptor():
buffer(nullptr),
@ -70,17 +70,14 @@ namespace sharp {
page(0),
createChannels(0),
createWidth(0),
createHeight(0) {
createBackground[0] = 0.0;
createBackground[1] = 0.0;
createBackground[2] = 0.0;
createBackground[3] = 255.0;
}
createHeight(0),
createBackground{ 0.0, 0.0, 0.0, 255.0 } {}
};
// Convenience methods to access the attributes of a v8::Object
bool HasAttr(v8::Handle<v8::Object> obj, std::string attr);
std::string AttrAsStr(v8::Handle<v8::Object> obj, std::string attr);
std::vector<double> AttrAsRgba(v8::Handle<v8::Object> obj, std::string attr);
template<typename T> v8::Local<T> AttrAs(v8::Handle<v8::Object> obj, std::string attr) {
return Nan::Get(obj, Nan::New(attr).ToLocalChecked()).ToLocalChecked().As<T>();
}
@ -258,7 +255,7 @@ namespace sharp {
/*
Apply the alpha channel to a given colour
*/
std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, double colour[4]);
std::tuple<VImage, std::vector<double>> ApplyAlpha(VImage image, std::vector<double> colour);
} // namespace sharp

View File

@ -318,9 +318,9 @@ class PipelineWorker : public Nan::AsyncWorker {
double const multiplier = sharp::Is16Bit(image.interpretation()) ? 256.0 : 1.0;
// Background colour
std::vector<double> background {
baton->background[0] * multiplier,
baton->background[1] * multiplier,
baton->background[2] * multiplier
baton->flattenBackground[0] * multiplier,
baton->flattenBackground[1] * multiplier,
baton->flattenBackground[2] * multiplier
};
image = image.flatten(VImage::option()
->set("background", background));
@ -422,7 +422,7 @@ class PipelineWorker : public Nan::AsyncWorker {
if (image.width() != baton->width || image.height() != baton->height) {
if (baton->canvas == Canvas::EMBED) {
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->background);
std::tie(image, background) = sharp::ApplyAlpha(image, baton->resizeBackground);
// Embed
@ -492,7 +492,7 @@ class PipelineWorker : public Nan::AsyncWorker {
// Extend edges
if (baton->extendTop > 0 || baton->extendBottom > 0 || baton->extendLeft > 0 || baton->extendRight > 0) {
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->background);
std::tie(image, background) = sharp::ApplyAlpha(image, baton->extendBackground);
// Embed
baton->width = image.width() + baton->extendLeft + baton->extendRight;
@ -1097,6 +1097,7 @@ NAN_METHOD(pipeline) {
using sharp::AttrTo;
using sharp::AttrAs;
using sharp::AttrAsStr;
using sharp::AttrAsRgba;
using sharp::CreateInputDescriptor;
// Input Buffers must not undergo GC compaction during processing
@ -1140,11 +1141,6 @@ NAN_METHOD(pipeline) {
} else if (canvas == "ignore_aspect") {
baton->canvas = Canvas::IGNORE_ASPECT;
}
// Background colour
v8::Local<v8::Object> background = AttrAs<v8::Object>(options, "background");
for (unsigned int i = 0; i < 4; i++) {
baton->background[i] = AttrTo<double>(background, i);
}
// Tint chroma
baton->tintA = AttrTo<double>(options, "tintA");
baton->tintB = AttrTo<double>(options, "tintB");
@ -1160,6 +1156,7 @@ NAN_METHOD(pipeline) {
// Resize options
baton->withoutEnlargement = AttrTo<bool>(options, "withoutEnlargement");
baton->position = AttrTo<int32_t>(options, "position");
baton->resizeBackground = AttrAsRgba(options, "resizeBackground");
baton->kernel = AttrAsStr(options, "kernel");
baton->fastShrinkOnLoad = AttrTo<bool>(options, "fastShrinkOnLoad");
// Join Channel Options
@ -1177,6 +1174,7 @@ NAN_METHOD(pipeline) {
}
// Operators
baton->flatten = AttrTo<bool>(options, "flatten");
baton->flattenBackground = AttrAsRgba(options, "flattenBackground");
baton->negate = AttrTo<bool>(options, "negate");
baton->blurSigma = AttrTo<double>(options, "blurSigma");
baton->medianSize = AttrTo<uint32_t>(options, "medianSize");
@ -1194,11 +1192,7 @@ NAN_METHOD(pipeline) {
baton->useExifOrientation = AttrTo<bool>(options, "useExifOrientation");
baton->angle = AttrTo<int32_t>(options, "angle");
baton->rotationAngle = AttrTo<double>(options, "rotationAngle");
// Rotation background colour
v8::Local<v8::Object> rotationBackground = AttrAs<v8::Object>(options, "rotationBackground");
for (unsigned int i = 0; i < 4; i++) {
baton->rotationBackground[i] = AttrTo<double>(rotationBackground, i);
}
baton->rotationBackground = AttrAsRgba(options, "rotationBackground");
baton->rotateBeforePreExtract = AttrTo<bool>(options, "rotateBeforePreExtract");
baton->flip = AttrTo<bool>(options, "flip");
baton->flop = AttrTo<bool>(options, "flop");
@ -1206,7 +1200,9 @@ NAN_METHOD(pipeline) {
baton->extendBottom = AttrTo<int32_t>(options, "extendBottom");
baton->extendLeft = AttrTo<int32_t>(options, "extendLeft");
baton->extendRight = AttrTo<int32_t>(options, "extendRight");
baton->extendBackground = AttrAsRgba(options, "extendBackground");
baton->extractChannel = AttrTo<int32_t>(options, "extractChannel");
baton->removeAlpha = AttrTo<bool>(options, "removeAlpha");
if (HasAttr(options, "boolean")) {
baton->boolean = CreateInputDescriptor(AttrAs<v8::Object>(options, "boolean"), buffersToPersist);

View File

@ -62,16 +62,17 @@ struct PipelineBaton {
int channels;
Canvas canvas;
int position;
std::vector<double> resizeBackground;
bool hasCropOffset;
int cropOffsetLeft;
int cropOffsetTop;
bool premultiplied;
std::string kernel;
bool fastShrinkOnLoad;
double background[4];
double tintA;
double tintB;
bool flatten;
std::vector<double> flattenBackground;
bool negate;
double blurSigma;
int medianSize;
@ -89,7 +90,7 @@ struct PipelineBaton {
bool useExifOrientation;
int angle;
double rotationAngle;
double rotationBackground[4];
std::vector<double> rotationBackground;
bool rotateBeforePreExtract;
bool flip;
bool flop;
@ -97,6 +98,7 @@ struct PipelineBaton {
int extendBottom;
int extendLeft;
int extendRight;
std::vector<double> extendBackground;
bool withoutEnlargement;
VipsAccess accessMethod;
int jpegQuality;
@ -157,6 +159,7 @@ struct PipelineBaton {
channels(0),
canvas(Canvas::CROP),
position(0),
resizeBackground{ 0.0, 0.0, 0.0, 255.0 },
hasCropOffset(false),
cropOffsetLeft(0),
cropOffsetTop(0),
@ -164,6 +167,7 @@ struct PipelineBaton {
tintA(128.0),
tintB(128.0),
flatten(false),
flattenBackground{ 0.0, 0.0, 0.0 },
negate(false),
blurSigma(0.0),
medianSize(0),
@ -181,12 +185,14 @@ struct PipelineBaton {
useExifOrientation(false),
angle(0),
rotationAngle(0.0),
rotationBackground{ 0.0, 0.0, 0.0, 255.0 },
flip(false),
flop(false),
extendTop(0),
extendBottom(0),
extendLeft(0),
extendRight(0),
extendBackground{ 0.0, 0.0, 0.0, 255.0 },
withoutEnlargement(false),
jpegQuality(80),
jpegProgressive(false),
@ -223,12 +229,7 @@ struct PipelineBaton {
tileContainer(VIPS_FOREIGN_DZ_CONTAINER_FS),
tileLayout(VIPS_FOREIGN_DZ_LAYOUT_DZ),
tileAngle(0),
tileDepth(VIPS_FOREIGN_DZ_DEPTH_LAST){
background[0] = 0.0;
background[1] = 0.0;
background[2] = 0.0;
background[3] = 255.0;
}
tileDepth(VIPS_FOREIGN_DZ_DEPTH_LAST) {}
};
#endif // SRC_PIPELINE_H_

View File

@ -19,9 +19,10 @@ describe('Alpha transparency', function () {
it('Flatten to RGB orange', function (done) {
  sharp(fixtures.inputPngWithTransparency)
    .flatten()
    .background({r: 255, g: 102, b: 0})
    .resize(400, 300)
    .flatten({
      background: { r: 255, g: 102, b: 0 }
    })
    .toBuffer(function (err, data, info) {
      if (err) throw err;
      assert.strictEqual(400, info.width);
@ -32,9 +33,8 @@ describe('Alpha transparency', function () {
it('Flatten to CSS/hex orange', function (done) {
  sharp(fixtures.inputPngWithTransparency)
    .flatten()
    .background('#ff6600')
    .resize(400, 300)
    .flatten({ background: '#ff6600' })
    .toBuffer(function (err, data, info) {
      if (err) throw err;
      assert.strictEqual(400, info.width);
@ -46,8 +46,9 @@ describe('Alpha transparency', function () {
it('Flatten 16-bit PNG with transparency to orange', function (done) {
  const output = fixtures.path('output.flatten-rgb16-orange.jpg');
  sharp(fixtures.inputPngWithTransparency16bit)
    .flatten()
    .background({r: 255, g: 102, b: 0})
    .flatten({
      background: { r: 255, g: 102, b: 0 }
    })
    .toFile(output, function (err, info) {
      if (err) throw err;
      assert.strictEqual(true, info.size > 0);
@ -71,8 +72,7 @@ describe('Alpha transparency', function () {
it('Ignored for JPEG', function (done) {
  sharp(fixtures.inputJpg)
    .background('#ff0000')
    .flatten()
    .flatten({ background: '#ff0000' })
    .toBuffer(function (err, data, info) {
      if (err) throw err;
      assert.strictEqual('jpeg', info.format);

View File

@ -69,8 +69,10 @@ describe('Colour space conversion', function () {
it('From CMYK to sRGB with white background, not yellow', function (done) {
  sharp(fixtures.inputJpgWithCmykProfile)
    .resize(320, 240, { fit: sharp.fit.contain })
    .background('white')
    .resize(320, 240, {
      fit: sharp.fit.contain,
      background: 'white'
    })
    .toBuffer(function (err, data, info) {
      if (err) throw err;
      assert.strictEqual('jpeg', info.format);

View File

@ -0,0 +1,73 @@
'use strict';
const assert = require('assert');
const fixtures = require('../fixtures');
const sharp = require('../../');
describe('Deprecated background', function () {
  it('Flatten to RGB orange', function (done) {
    sharp(fixtures.inputPngWithTransparency)
      .flatten()
      .background({r: 255, g: 102, b: 0})
      .resize(400, 300)
      .toBuffer(function (err, data, info) {
        if (err) throw err;
        assert.strictEqual(400, info.width);
        assert.strictEqual(300, info.height);
        fixtures.assertSimilar(fixtures.expected('flatten-orange.jpg'), data, done);
      });
  });
  it('Flatten to CSS/hex orange', function (done) {
    sharp(fixtures.inputPngWithTransparency)
      .flatten()
      .background('#ff6600')
      .resize(400, 300)
      .toBuffer(function (err, data, info) {
        if (err) throw err;
        assert.strictEqual(400, info.width);
        assert.strictEqual(300, info.height);
        fixtures.assertSimilar(fixtures.expected('flatten-orange.jpg'), data, done);
      });
  });
  it('Flatten 16-bit PNG with transparency to orange', function (done) {
    const output = fixtures.path('output.flatten-rgb16-orange.jpg');
    sharp(fixtures.inputPngWithTransparency16bit)
      .flatten()
      .background({r: 255, g: 102, b: 0})
      .toFile(output, function (err, info) {
        if (err) throw err;
        assert.strictEqual(true, info.size > 0);
        assert.strictEqual(32, info.width);
        assert.strictEqual(32, info.height);
        fixtures.assertMaxColourDistance(output, fixtures.expected('flatten-rgb16-orange.jpg'), 25);
        done();
      });
  });
  it('Ignored for JPEG', function (done) {
    sharp(fixtures.inputJpg)
      .background('#ff0000')
      .flatten()
      .toBuffer(function (err, data, info) {
        if (err) throw err;
        assert.strictEqual('jpeg', info.format);
        assert.strictEqual(3, info.channels);
        done();
      });
  });
  it('extend all sides equally with RGB', function (done) {
    sharp(fixtures.inputJpg)
      .resize(120)
      .background({r: 255, g: 0, b: 0})
      .extend(10)
      .toBuffer(function (err, data, info) {
        if (err) throw err;
        assert.strictEqual(140, info.width);
        assert.strictEqual(118, info.height);
        fixtures.assertSimilar(fixtures.expected('extend-equal.jpg'), data, done);
      });
  });
});


@@ -9,8 +9,13 @@ describe('Extend', function () {
it('extend all sides equally with RGB', function (done) {
sharp(fixtures.inputJpg)
.resize(120)
.background({r: 255, g: 0, b: 0})
.extend(10)
.extend({
top: 10,
bottom: 10,
left: 10,
right: 10,
background: { r: 255, g: 0, b: 0 }
})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(140, info.width);
@@ -22,8 +27,13 @@ describe('Extend', function () {
it('extend sides unequally with RGBA', function (done) {
sharp(fixtures.inputPngWithTransparency16bit)
.resize(120)
.background({r: 0, g: 0, b: 0, alpha: 0})
.extend({top: 50, bottom: 0, left: 10, right: 35})
.extend({
top: 50,
bottom: 0,
left: 10,
right: 35,
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(165, info.width);
@@ -50,9 +60,14 @@ describe('Extend', function () {
it('should add alpha channel before extending with a transparent Background', function (done) {
sharp(fixtures.inputJpgWithLandscapeExif1)
.background({r: 0, g: 0, b: 0, alpha: 0})
.extend({
top: 0,
bottom: 10,
left: 0,
right: 10,
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.toFormat(sharp.format.png)
.extend({top: 0, bottom: 10, left: 0, right: 10})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(610, info.width);
@@ -63,8 +78,13 @@ describe('Extend', function () {
it('PNG with 2 channels', function (done) {
sharp(fixtures.inputPngWithGreyAlpha)
.background('transparent')
.extend({top: 0, bottom: 20, left: 0, right: 20})
.extend({
top: 0,
bottom: 20,
left: 0,
right: 20,
background: 'transparent'
})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);


@@ -38,8 +38,10 @@ describe('Resize fit=contain', function () {
it('JPEG within WebP, to include alpha channel', function (done) {
sharp(fixtures.inputJpg)
.resize(320, 240, { fit: 'contain' })
.background({r: 0, g: 0, b: 0, alpha: 0})
.resize(320, 240, {
fit: 'contain',
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.webp()
.toBuffer(function (err, data, info) {
if (err) throw err;
@@ -82,8 +84,10 @@ describe('Resize fit=contain', function () {
it('16-bit PNG with alpha channel onto RGBA', function (done) {
sharp(fixtures.inputPngWithTransparency16bit)
.resize(32, 16, { fit: 'contain' })
.background({r: 0, g: 0, b: 0, alpha: 0})
.resize(32, 16, {
fit: 'contain',
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -97,8 +101,10 @@ describe('Resize fit=contain', function () {
it('PNG with 2 channels', function (done) {
sharp(fixtures.inputPngWithGreyAlpha)
.resize(32, 16, { fit: 'contain' })
.background({r: 0, g: 0, b: 0, alpha: 0})
.resize(32, 16, {
fit: 'contain',
background: { r: 0, g: 0, b: 0, alpha: 0 }
})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -112,8 +118,10 @@ describe('Resize fit=contain', function () {
it.skip('TIFF in LAB colourspace onto RGBA background', function (done) {
sharp(fixtures.inputTiffCielab)
.resize(64, 128, { fit: 'contain' })
.background({r: 255, g: 102, b: 0, alpha: 0.5})
.resize(64, 128, {
fit: 'contain',
background: { r: 255, g: 102, b: 0, alpha: 0.5 }
})
.png()
.toBuffer(function (err, data, info) {
if (err) throw err;
@@ -152,9 +160,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'top'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -170,9 +178,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right top'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -188,9 +196,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -206,9 +214,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right bottom'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -224,9 +232,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'bottom'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -242,9 +250,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left bottom'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -260,9 +268,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -278,9 +286,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left top'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -296,9 +304,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.north
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -314,9 +322,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northeast
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -332,9 +340,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.east
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -350,9 +358,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.southeast
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -368,9 +376,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.south
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -386,9 +394,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.southwest
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -404,9 +412,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.west
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -422,9 +430,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northwest
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -440,9 +448,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 100, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.center
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -458,9 +466,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'top'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -476,9 +484,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right top'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -494,9 +502,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -512,9 +520,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'right bottom'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -530,9 +538,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'bottom'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -548,9 +556,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left bottom'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -566,9 +574,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -584,9 +592,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: 'left top'
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -602,9 +610,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.north
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -620,9 +628,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northeast
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -638,9 +646,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.east
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -656,9 +664,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.southeast
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -674,9 +682,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.south
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -692,9 +700,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.southwest
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -710,9 +718,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.west
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -728,9 +736,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.northwest
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
@@ -746,9 +754,9 @@ describe('Resize fit=contain', function () {
sharp(fixtures.inputPngEmbed)
.resize(200, 200, {
fit: sharp.fit.contain,
background: { r: 0, g: 0, b: 0, alpha: 0 },
position: sharp.gravity.center
})
.background({r: 0, g: 0, b: 0, alpha: 0})
.toBuffer(function (err, data, info) {
if (err) throw err;
assert.strictEqual(true, data.length > 0);
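
Taken together, these hunks replace the stateful `.background()` setter with an op-specific `background` option on `resize`, `extend` and `flatten`. The delegation the deprecated path needs can be sketched as below. This is a hypothetical `Pipeline` class for illustration only, not sharp's real implementation:

```javascript
'use strict';

// Documented default: black, fully opaque.
const DEFAULT_BACKGROUND = { r: 0, g: 0, b: 0, alpha: 1 };

class Pipeline {
  constructor () {
    this.ops = [];
    this.deprecatedBackground = null; // set only via the legacy background()
  }

  // Legacy path: one colour shared by all subsequent operations.
  background (rgba) {
    this.deprecatedBackground = rgba;
    return this;
  }

  // New path: the operation resolves its own background, falling back to
  // the deprecated global value, then to the default.
  flatten (options = {}) {
    const background =
      options.background || this.deprecatedBackground || DEFAULT_BACKGROUND;
    this.ops.push({ op: 'flatten', background });
    return this;
  }

  // Accepts the old single-number form and the new options-object form.
  extend (options) {
    const opts = typeof options === 'number'
      ? { top: options, bottom: options, left: options, right: options }
      : options;
    const {
      top = 0, bottom = 0, left = 0, right = 0,
      background = this.deprecatedBackground || DEFAULT_BACKGROUND
    } = opts;
    this.ops.push({ op: 'extend', top, bottom, left, right, background });
    return this;
  }
}

// Old and new call styles produce the same operation parameters.
const legacy = new Pipeline().background('#ff6600').flatten();
const current = new Pipeline().flatten({ background: '#ff6600' });
console.log(JSON.stringify(legacy.ops) === JSON.stringify(current.ops)); // true
```

Keeping the fallback chain inside each operation is what lets the deprecated tests above keep passing while new code opts into the per-operation `background` property.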