Compare commits


183 Commits

Author SHA1 Message Date
Lovell Fuller
eefaa99872 Release v0.32.6 2023-09-18 20:33:39 +01:00
Lovell Fuller
dbce6fab79 Upgrade to libvips v8.14.5 2023-09-18 20:09:54 +01:00
Lovell Fuller
af0fcb37c2 Docs: changelog for #3799 2023-09-18 14:56:03 +01:00
Lovell Fuller
c6f54e59da Bump devDeps 2023-09-18 14:53:44 +01:00
ldrick
846563e45f TypeScript: add definitions for block and unblock (#3799) 2023-09-18 10:42:13 +01:00
Lovell Fuller
9c217ab580 Ensure withMetadata can add RGB16 profiles #3773 2023-08-31 12:49:50 +01:00
Lovell Fuller
e7381e522e Alternative fix for 4340d60, uses existing StaySequential 2023-08-31 12:09:11 +01:00
Lovell Fuller
4340d60ccf Ensure composite tile images fully decoded #3767 2023-08-31 09:04:51 +01:00
Lovell Fuller
7f64d464de Docs: add missing returns property to raw 2023-08-29 11:17:35 +01:00
Lovell Fuller
67e927bdb6 Docs: ensure all functions include method signature #3777 2023-08-29 11:16:18 +01:00
Lovell Fuller
9c7713ed54 Docs: remove mention of EXIF from flip/flop ops 2023-08-29 10:49:21 +01:00
Lovell Fuller
8be6da1def Docs: clarify when rotate op will remove EXIF Orientation 2023-08-29 10:19:07 +01:00
Lovell Fuller
95635683ac Ensure withMetadata skips default profile for RGB16 #3773 2023-08-24 18:13:00 +01:00
Lovell Fuller
44a0ee3fd3 Release v0.32.5 2023-08-15 19:29:42 +01:00
Lovell Fuller
ccd51c8cbf Upgrade to libvips v8.14.4 2023-08-15 16:40:22 +01:00
Lovell Fuller
bb7469b2d1 Ensure withMetadata adds default sRGB profile #3761 2023-08-15 13:02:20 +01:00
Kleis Auke Wolthuizen
a2cac61209 Simplify 90/270 orient-before-resize logic (#3762) 2023-08-15 07:56:07 +01:00
Lovell Fuller
5c19f6dd9b Ensure resize fit=inside respects 90/270 rotate #3756 2023-08-14 13:45:23 +01:00
Lovell Fuller
3d01775972 Docs: changelog entries for #3748 #3755 #3758 2023-08-14 13:33:13 +01:00
sho-xizz
87562a5111 TypeScript: Ensure WebpOptions minSize is boolean (#3758) 2023-08-09 13:45:10 +01:00
Kleis Auke Wolthuizen
2829e17743 Fix build with musl 1.2.4 (#3755) 2023-08-07 21:57:00 +01:00
pilotso11
ffefbd2ecc TypeScript: add missing WebpPresetEnum (#3748) 2023-08-04 10:51:06 +01:00
Kleis Auke Wolthuizen
bc8f983329 Tests: ensure Jimp benchmark uses bicubic as resizing kernel (#3745) 2023-07-30 11:25:45 +01:00
Kleis Auke Wolthuizen
440936a699 Tests: update benchmark deps and container (#3744)
Use Node 18.x in benchmark container
2023-07-30 11:24:27 +01:00
Lovell Fuller
0bc79cdb95 Docs: include paletteBitDepth metadata 2023-07-28 16:04:02 +01:00
Lovell Fuller
9a66e25f53 Docs: ensure resize fit image supports dark mode 2023-07-25 10:06:44 +01:00
Lovell Fuller
8370935ccf Docs: ensure 'fit' values are clearly separated
Smaller text, slightly closer to image, varied fill colour
2023-07-21 23:07:57 +01:00
Kleis Auke Wolthuizen
f908987f35 Docs: use SVG image for the resize fit property example (#3735) 2023-07-21 21:58:02 +01:00
Lovell Fuller
aea368a3a0 Release v0.32.4 2023-07-21 11:41:08 +01:00
Lovell Fuller
7ecbc20d3d Upgrade to libvips v8.14.3 2023-07-21 11:10:21 +01:00
Lovell Fuller
cb0e2a91c4 Bump dep 2023-07-19 16:55:12 +01:00
Lovell Fuller
739b317a6f Expose ability to (un)block libvips ops by name 2023-07-19 16:53:52 +01:00
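A minimal sketch of the operation blocking exposed here, assuming `block`/`unblock` accept an object with an `operation` array of libvips operation names (the specific name below is illustrative, not taken from this changeset):

```js
import sharp from 'sharp';

// Prevent a libvips loader from being used, e.g. to reduce attack surface.
sharp.block({ operation: ['VipsForeignLoadSvg'] });

// Re-enable it later if required.
sharp.unblock({ operation: ['VipsForeignLoadSvg'] });
```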
Lovell Fuller
a0e1c39785 Release v0.32.3 2023-07-14 11:03:39 +01:00
Lovell Fuller
85b26dab68 Expose preset option for WebP output #3639 2023-07-12 19:12:04 +01:00
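A short, hedged example of the new WebP `preset` option (file names and values are placeholders; the presets follow the libwebp-style names such as `photo`):

```js
import sharp from 'sharp';

// Tune the WebP encoder for photographic content.
await sharp('input.png')
  .webp({ preset: 'photo', quality: 80 })
  .toFile('output.webp');
```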
Lovell Fuller
66f7cef253 Docs: fix a few typos 2023-07-12 14:22:29 +01:00
Lovell Fuller
863174f201 CI: FreeBSD: Use 13.2 stable, upgrade to Node.js 20 2023-07-12 12:10:31 +01:00
Lovell Fuller
bcd865cc96 Ensure decoding remains sequential for all ops #3725 2023-07-12 11:35:59 +01:00
Lovell Fuller
16ea04fe80 Release v0.32.2 2023-07-11 11:47:37 +01:00
Lovell Fuller
9c547dc321 Use copy rather than cache to prevent affine overcompute
More predictable behaviour, see commit 14c3346 for context
2023-07-10 13:56:42 +01:00
Lovell Fuller
5522060e9e Limit HEIF output dimensions to 16384x16384
This is a slightly breaking change to sync with the behaviour of a
forthcoming libvips patch release. It also matches the libavif
limit. Having an arbitrary limit is safer than no limit.
2023-07-10 10:24:14 +01:00
Lovell Fuller
d2f0fa855b Tests: loosen threshold for affine rotate then extract
Ignores rounding errors under Rosetta emulation
2023-07-10 08:12:13 +01:00
Lovell Fuller
2bb3ea8170 Bump deps 2023-07-09 11:57:05 +01:00
Lovell Fuller
3434eef5b9 Guard use of smartcrop premultiplied option #3710 2023-07-09 09:57:20 +01:00
Lovell Fuller
2f67823c3d Allow seq read for EXIF-based auto-orient #3725 2023-07-09 09:26:58 +01:00
Lovell Fuller
38c760cdd7 Update to latest (temporary) prebuild patch 2023-07-09 09:10:24 +01:00
Lovell Fuller
14c3346800 Prevent over-compute in affine rotate #3722 2023-07-09 09:04:07 +01:00
Lovell Fuller
0da55bab7e Tests: remove unused dependency 2023-06-29 09:11:25 +01:00
Lovell Fuller
cfb659f576 Bump deps 2023-06-23 08:19:41 +01:00
Lovell Fuller
cc5ac5385f Docs: clarify use of extract before composite 2023-06-23 08:08:39 +01:00
Lovell Fuller
93fafb0c18 CI: Upgrade to latest git v2 within centos 7 containers 2023-06-05 12:32:47 +01:00
Lovell Fuller
41e3c8ca09 Temporarily use patched prebuild with node-gyp v9 2023-06-05 09:45:12 +01:00
Lovell Fuller
da61ea0199 Docs: changelog and credit for #3674 2023-06-05 09:35:07 +01:00
BJJ
7e6a70af44 Improve detection of jp2 filename extensions #3674 2023-06-05 09:31:25 +01:00
Lovell Fuller
f5845c7e61 Ensure exceptions are not thrown when terminating #3569 2023-06-03 11:51:44 +01:00
Lovell Fuller
eb1e53db83 Bump deps 2023-06-03 11:51:12 +01:00
Lovell Fuller
3340120aea Types: include base input options for composite #3669 2023-05-16 13:55:28 +01:00
Lovell Fuller
de0fc07092 Ensure same access method for all inputs #3669 2023-05-16 13:53:31 +01:00
Lovell Fuller
dc4b39f73f Docs: multi-page images cannot be flipped 2023-05-13 08:54:21 +01:00
Lovell Fuller
e873978e53 Docs: clarify which axis is used when mirroring 2023-05-11 10:24:24 +01:00
Lovell Fuller
5255964c79 Docs: ensure headings with digits appear 2023-04-27 10:23:28 +01:00
Lovell Fuller
dea319daf6 Release v0.32.1 2023-04-27 09:58:40 +01:00
Lovell Fuller
a2ca678854 Docs: clarify text align applies to multi-line 2023-04-27 09:00:11 +01:00
Lovell Fuller
e98993a6e2 Bump node-addon-api for Buffer::NewOrCopy 2023-04-23 15:43:54 +01:00
Lovell Fuller
90abd927c9 Install: coerce libc version to semver (refactor) 2023-04-23 11:54:41 +01:00
Lovell Fuller
4d7957a043 Install: coerce libc version to semver #3641 2023-04-23 11:37:43 +01:00
Lovell Fuller
bf9bb56367 Docs: fix affine interpolator example 2023-04-22 13:56:33 +01:00
Lovell Fuller
8408e99aa3 Ensure trim op works with CMYK input #3636 2023-04-20 10:49:39 +01:00
Lovell Fuller
a39f959dcc Docs: add security policy
- Latest version is supported
- Report vulnerabilities via e-mail
2023-04-20 10:46:04 +01:00
Lovell Fuller
d08baa20e6 Install: log possible error when removing vendor dir 2023-04-19 11:06:16 +01:00
Lovell Fuller
391018ad3d Bump semver dep 2023-04-19 11:04:03 +01:00
Lovell Fuller
afed876f90 Docs: ensure inclusion of jp2 function
A misplaced code coverage comment was preventing this.
See ef849fd for the original commit where this broke.
2023-04-17 20:55:12 +01:00
Lovell Fuller
d6b60a60c6 Docs: add example of how to set EXIF GPS metadata 2023-04-17 20:35:47 +01:00
Lovell Fuller
5f8646d937 Support modulate op with non-sRGB pipeline colourspace #3620 2023-04-17 19:53:48 +01:00
Lovell Fuller
b763801d68 Ensure profile-less CMYK roundtrip skips space conv #3620 2023-04-11 20:31:57 +01:00
Lovell Fuller
2e0f789c9b Tests: add retries to text test suite
as font discovery is occasionally slow
in Windows CI environment.
2023-04-09 21:42:09 +01:00
Lovell Fuller
a8645f0f38 Smartcrop performance can take future advantage of
https://github.com/libvips/libvips/commit/de43eea
2023-04-09 21:17:08 +01:00
Lovell Fuller
7b58ad9360 Docs: changelog entry for #3615 2023-04-07 12:23:21 +01:00
TomWis97
9ebbcc3701 Logging: fix notation of proxy URL (#3615) 2023-04-07 12:19:04 +01:00
Lovell Fuller
e87204b92c Doc update and changelog entry for #3461 2023-04-07 11:21:15 +01:00
Anton Marsden
a4c6eba7d4 Add unflatten operation to create an alpha channel (#3461) 2023-04-07 11:01:29 +01:00
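A rough illustration of the `unflatten` operation added in #3461 (file names are placeholders; per the commit title the operation creates an alpha channel from the flattened input):

```js
import sharp from 'sharp';

// Derive an alpha channel from the flattened input and write a PNG.
await sharp('scanned-page.jpg')
  .unflatten()
  .toFile('with-alpha.png');
```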
Lovell Fuller
b9c3851515 Ensure linear op works with 16-bit input #3605 2023-04-01 12:08:14 +01:00
Lovell Fuller
97cf69c26a Ensure use of flip op forces random access read #3600 2023-03-31 09:04:22 +01:00
Lovell Fuller
d5be024bfd Bump devDeps 2023-03-28 14:39:25 +01:00
Lovell Fuller
de01fc44e7 Docs: ensure API fn name linking is consistent 2023-03-28 14:00:52 +01:00
Lovell Fuller
ca102ebd6c Docs: fix perf result copypasta
An ARM64 value was incorrectly using an AMD64 result
2023-03-28 12:08:07 +01:00
Lovell Fuller
b9d4c30a9f Release v0.32.0 2023-03-24 17:05:59 +00:00
Lovell Fuller
148760fe55 Docs: clarify resize reduction/enlargement options refer to scaling
Types: options can be passed as first resize parameter
2023-03-24 15:19:21 +00:00
Lovell Fuller
98ed237734 Docs: use only first year of copyright to match code 2023-03-24 09:59:36 +00:00
Lovell Fuller
b55e58f31e Trim space from end of libvips error messages 2023-03-24 09:58:21 +00:00
Lovell Fuller
0af070ed93 Docs: update performance results, include PNG-based task 2023-03-23 18:57:43 +00:00
Lovell Fuller
9fbb4fcaef Tests: bump benchmark deps 2023-03-22 11:03:16 +00:00
Lovell Fuller
6008ff8a08 Docs: tile-based output requires libgsf 2023-03-22 09:17:07 +00:00
Lovell Fuller
cd5e11bd50 Docs: ensure parameters are indexed as they now appear in a table 2023-03-22 09:04:54 +00:00
Lovell Fuller
08d6822265 Upgrade to libvips v8.14.2 2023-03-21 21:21:24 +00:00
Lovell Fuller
8b8a815fbb Tests: tile-based output optional, will require custom libvips
The prebuilt binaries provided by v0.32.0 will not support
tile-based output, which is (hopefully) a temporary situation
until upstream licensing issues are resolved.
2023-03-21 21:19:56 +00:00
Lovell Fuller
a44da850c1 Docs: add open graph title and image 2023-03-21 12:56:44 +00:00
Lovell Fuller
c5ef4677b1 Bump tsd dep to pick up TypeScript 5 improvements 2023-03-21 12:43:29 +00:00
Lovell Fuller
f8a430bdd3 Tests: reduce CPU cost of RGBA linear test, ~2s faster 2023-03-21 12:42:50 +00:00
Lovell Fuller
cd419a261b Docs: changelog and refresh for #3583 2023-03-21 10:16:31 +00:00
LachlanNewman
d7776e3b98 Add support to normalise for lower and upper percentiles (#3583) 2023-03-21 10:13:12 +00:00
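A brief sketch of the percentile support added to `normalise` in #3583 (the 1/99 values are illustrative):

```js
import sharp from 'sharp';

// Stretch contrast between the 1st and 99th luminance percentiles
// rather than the absolute minimum and maximum.
await sharp('low-contrast.jpg')
  .normalise({ lower: 1, upper: 99 })
  .toFile('normalised.jpg');
```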
Lovell Fuller
1eefd4e562 Docs: how to provide new integrity values for custom binaries 2023-03-17 09:25:24 +00:00
cychub
0a16d26ec7 Docs: fix sharp_binary_host example by adding version (#3568) 2023-03-12 12:58:08 +00:00
Lovell Fuller
fc03fba602 Docs: clarify metadata ignores chained ops 2023-03-10 13:35:06 +00:00
Lovell Fuller
c87fe512b4 Bump devDeps 2023-03-08 16:59:20 +00:00
Lovell Fuller
2eaab59c48 Docs: add note about API Gateway integration 2023-03-08 16:54:49 +00:00
Lovell Fuller
4ec883eaa0 Wrap all async JS callbacks, help avoid possible race #3569 2023-03-01 12:41:11 +00:00
Lovell Fuller
0063df4d4f Ensure clahe op uses random read, simplify validation 2023-02-28 21:59:31 +00:00
Lovell Fuller
6c61ad256f Ensure all source code files contain SPDX licence 2023-02-28 17:01:58 +00:00
Lovell Fuller
b90474affa Docs: clarify formats that support multi-page/anim 2023-02-28 14:39:49 +00:00
Lovell Fuller
34cbc6dec3 Docs: clarify that paths are relative to process working dir 2023-02-23 10:33:13 +00:00
Lovell Fuller
bb8de0cc26 Docs: refresh search index 2023-02-18 12:51:42 +00:00
Lovell Fuller
863e37455a Docs: changelog and credit for #3556 2023-02-18 12:50:58 +00:00
Tomasz Janowski
6f0e6f2e65 Add support to extend for extendWith, allows copy/mirror/repeat (#3556) 2023-02-17 14:01:24 +00:00
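A minimal example of the `extendWith` option from #3556, assuming the fill styles named in the commit title (`copy`, `mirror`, `repeat`, alongside the existing background fill):

```js
import sharp from 'sharp';

// Add a 16px border on every edge, filling it by mirroring the
// image content instead of using a flat background colour.
await sharp('photo.jpg')
  .extend({ top: 16, bottom: 16, left: 16, right: 16, extendWith: 'mirror' })
  .toFile('mirrored-border.jpg');
```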
Lovell Fuller
ebf4ccd124 Bump deps 2023-02-12 19:49:32 +00:00
Lovell Fuller
b96c8e8ba4 Tests: use native fs.rm instead of rimraf 2023-02-12 19:32:00 +00:00
Lovell Fuller
42d2f07e44 Add ignoreIcc input option to ignore embedded ICC profile 2023-02-12 17:51:24 +00:00
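A hedged sketch of the new `ignoreIcc` constructor option (it also appears in the api-constructor diff further down this page; the file name is a placeholder):

```js
import sharp from 'sharp';

// Interpret the input without applying its embedded ICC profile, if any.
const { data, info } = await sharp('tagged.jpg', { ignoreIcc: true })
  .toBuffer({ resolveWithObject: true });
```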
Lovell Fuller
a2988c9edc macOS: use 10.13 as minimum to match prebuilt libvips
This allows clang to use SSE4 intrinsics
2023-02-12 16:10:21 +00:00
Lovell Fuller
24b3344937 Docs: changelog for #3548 2023-02-05 09:49:06 +00:00
Jérémy Lal
9608f219bd Add support for ArrayBuffer input (#3548) 2023-02-05 09:45:17 +00:00
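An illustrative use of the ArrayBuffer input support from #3548 (the URL is a placeholder; the global `fetch` assumes Node.js 18+):

```js
import sharp from 'sharp';

// An ArrayBuffer can now be passed directly to the constructor
// without first wrapping it in a Buffer.
const response = await fetch('https://example.com/image.jpg');
const arrayBuffer = await response.arrayBuffer();
const metadata = await sharp(arrayBuffer).metadata();
```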
Pascal Jufer
4798d9da64 Docs: clarify supported bit depth for AVIF images (#3541) 2023-02-02 17:47:07 +00:00
Lovell Fuller
8d8c6b70eb Prefer integer (un)premultiply for faster RGBA resize
Add changelog, loosen modulate test thresholds
2023-01-24 15:44:39 +00:00
Lovell Fuller
9e2207f376 Prefer integer (un)premultiply for faster RGBA resize 2023-01-24 15:24:58 +00:00
Lovell Fuller
802f560b9b Test: update benchmark dependencies 2023-01-24 15:21:31 +00:00
Lovell Fuller
a532659b0f Types: changes/additions relating to new v0.32.0 features
A separate commit is required as these were not part of the
initial definitions in the v0.31.3 snapshot.

From now on, new features and updates can include the relevant
TypeScript definition changes as part of the same
code/docs/tests commits.
2023-01-17 16:11:04 +00:00
Lovell Fuller
25c6da2bcd Docs: add a couple of missing params/props 2023-01-17 15:01:52 +00:00
Lovell Fuller
02f855d57a Expose own version as sharp.versions.sharp #3471 2023-01-17 09:56:58 +00:00
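A one-line illustration of the property exposed here (the `vips` key was already available; `sharp` is the addition, and the printed values are examples only):

```js
import sharp from 'sharp';

// e.g. "0.32.0 8.14.2"
console.log(sharp.versions.sharp, sharp.versions.vips);
```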
Lovell Fuller
c150263ef1 Respect fastShrinkOnLoad option for WebP input #3516 2023-01-17 09:39:23 +00:00
Lovell Fuller
9f79f80a93 Docs: fastShrinkOnLoad can round-down when auto-scaling 2023-01-16 12:06:50 +00:00
Lovell Fuller
069803b83d Docs: remove Heroku install section 2023-01-16 12:06:08 +00:00
Lovell Fuller
f79760b4f2 Docs: changelog and help for TypeScript defs #3369 #3370 2023-01-16 11:12:00 +00:00
Espen Hovlandsdal
aa5f0f4e40 Include and publish TypeScript definitions (#3370)
Definitions are a snapshot taken from `@types/sharp`,
which remain under the terms of MIT licensing.
2023-01-16 10:48:37 +00:00
Lovell Fuller
286a322622 Docs: changelog and doc refresh for #3470 2023-01-16 09:27:31 +00:00
Emanuel Jöbstl
6d404f4d2c Add coords to output when using attention based crop (#3470) 2023-01-16 09:20:42 +00:00
Lovell Fuller
bdc50e1d6e Unpin node-addon-api, cast CallbackInfo access to size_t
See https://github.com/nodejs/node-addon-api/pull/1253
2023-01-16 09:00:42 +00:00
Lovell Fuller
a9bd0e79f8 Pin node-addon-api to workaround possible bug in 5.1.0 2023-01-15 19:35:27 +00:00
Lovell Fuller
a1e464cc5e Switch to sequential read as default where possible 2023-01-15 18:43:50 +00:00
Lovell Fuller
081debd055 Reduce sharpen op max sigma from 10000 to 10 #3521 2023-01-10 16:29:40 +00:00
Lovell Fuller
ef849fd639 Docs: switch to well-maintained jsdoc2md for JSDoc parsing 2023-01-08 10:15:38 +00:00
Lovell Fuller
a42a975c46 Bump devDeps 2023-01-06 19:25:27 +00:00
Lovell Fuller
e8273580af Docs: add note about use of fastShrinkOnLoad with resize kernel 2023-01-06 19:24:32 +00:00
Lovell Fuller
5be36c2deb Install: log Rosetta detection, improve related docs 2023-01-04 21:11:21 +00:00
Kleis Auke Wolthuizen
6cda090ce2 Tests: remove ICC profile from CIELAB fixture (#3510)
This ICC profile is considered incompatible with this image.

See: https://github.com/libvips/libvips/issues/730
2023-01-01 21:12:29 +00:00
Lovell Fuller
eac6e8b261 Upgrade to libvips v8.14.0-rc1
- Replace GIF 'optimise' option with 'reuse'
- Add 'progressive' option to GIF
- Add 'wrap' option to text creation
- Add 'formatMagick' property to *magick input metadata
2022-12-29 15:53:50 +00:00
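A hedged sketch of the new 'wrap' option for text creation listed in this libvips upgrade (the wrap values `word`, `char`, `charWord` and `none` appear in the api-constructor diff near the end of this page; the text and width are placeholders):

```js
import sharp from 'sharp';

// Render word-wrapped text to an image; 'charWord' prefers breaking
// on characters and falls back to word boundaries.
await sharp({
  text: {
    text: 'The quick brown fox jumps over the lazy dog',
    width: 200,
    wrap: 'charWord'
  }
}).toFile('wrapped-text.png');
```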
Lovell Fuller
844deaf480 Release v0.31.3 2022-12-21 15:57:10 +00:00
Lovell Fuller
efbb0c22fd Docs: add image with examples of resize fit property 2022-12-21 15:47:39 +00:00
Lovell Fuller
da0b594900 Docs: update benchmarks for latest versions, add ARM64 results 2022-12-20 19:49:29 +00:00
Lovell Fuller
78dada9126 Tests: skip mapnik and tensorflow for Docker-run benchmarks
Maintenance of mapnik seems to have stalled, no ARM64 support
Memory requirements of Tensorflow too high, hangs/crashes on AMD64
2022-12-20 18:20:59 +00:00
Lovell Fuller
15f5cd4671 Tests: move mapnik to optional deps
It does not currently support ARM64
2022-12-19 19:47:46 +00:00
Lovell Fuller
9eb2e94404 Tests: update benchmark dependencies 2022-12-17 14:29:11 +00:00
Lovell Fuller
e40b068628 Tests: update leak suppressions for latest dependencies 2022-12-14 21:57:42 +00:00
Lovell Fuller
2c46528269 Docs refresh 2022-12-14 16:17:42 +00:00
Lovell Fuller
584807b4f5 Add runtime detection of V8 memory cage #3384
When using the V8 memory cage, Buffers cannot be wrapped and then
later freed via a callback. When the cage is detected via a throw,
instead fall back to copying Buffer contents to V8 memory.

This approach will be used by Electron 21+ and you should expect
reduced performance and increased memory consumption/fragmentation.
2022-12-14 16:06:04 +00:00
Lovell Fuller
a7fa7014ef Add experimental support for JPEG-XL, requires libvips with libjxl
The prebuilt binaries do not include support for this format.
2022-12-13 21:55:17 +00:00
Lovell Fuller
f92e33fbff Bump devDeps 2022-12-13 10:31:06 +00:00
Lovell Fuller
0f1e7ef6f6 Install: add support for Linux with glibc patch version #3423 2022-12-09 12:03:41 +00:00
Lennart
89e204d824 Docs: clarify failOn property applies to decoding pixel values (#3481) 2022-12-08 16:13:18 +00:00
Lovell Fuller
2a71f1830f Expand range of sharpen params to match libvips #3427 2022-12-07 09:28:01 +00:00
Lovell Fuller
def99a294a Install: log proxy use, if any, to aid with debugging 2022-12-06 19:35:47 +00:00
Lovell Fuller
9d760f3958 Improve perf of ops that introduce non-opaque background #3465 2022-12-05 20:40:41 +00:00
Lovell Fuller
0265d305fe Ensure integral output of linear op #3468 2022-12-04 21:41:15 +00:00
Lovell Fuller
a472aea025 Ignore sequentialRead option for stats #3462 2022-11-20 21:30:45 +00:00
Lovell Fuller
01ffa80338 Improve extractChannel support for 16-bit output #3453 2022-11-15 15:00:32 +00:00
Lovell Fuller
789d4851ea Tests: remove flaky font assertions
Probably due to Windows CI env font discovery
2022-11-15 10:08:43 +00:00
Lovell Fuller
4490a93430 Tests: simplify beforeEach configuration
Remove legacy settings for previous CI providers/hardware
2022-11-15 09:54:29 +00:00
Ingvar Stepanyan
ac0dc10bd5 Tests: convert mocha hooks (#3450) 2022-11-15 08:58:09 +00:00
Lovell Fuller
5740f4545e Expose GIF opts: interFrameMaxError, interPaletteMaxError #3401 2022-11-14 16:09:52 +00:00
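A hedged example of the GIF output options exposed here (the threshold values are illustrative; larger values trade fidelity for smaller files):

```js
import sharp from 'sharp';

// Treat small inter-frame and inter-palette differences as identical,
// which can reduce output size for animated GIFs.
await sharp('animated.gif', { animated: true })
  .gif({ interFrameMaxError: 8, interPaletteMaxError: 16 })
  .toFile('smaller.gif');
```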
Lovell Fuller
a9d692fb43 Reduce chance of race condition in test for... race condition 2022-11-13 10:16:47 +00:00
Lovell Fuller
df971207b8 Prevent possible race condition when reading metadata #3451 2022-11-13 10:04:55 +00:00
Ingvar Stepanyan
3a64a0529a Tests: run in parallel, move settings to config file (#3449)
This makes it easy to invoke Mocha on its own via `npx mocha`, or for tools such as the VSCode Test Explorer to find and run tests with the correct settings automatically.
2022-11-10 21:48:18 +00:00
Peter Whidden
76cda885fb Docs: fix minor typo in resize properties (#3444) 2022-11-09 08:44:05 +00:00
Ingvar Stepanyan
1a563360c6 Fix errors for missing OpenJPEG (#3442)
Fixes a couple of minor issues with JP2 errors:

1. The tests passed as false-positives even if the regex was changed to an arbitrary pattern, because the promise returned from `assert.rejects` was ignored and the test ended prematurely. This is fixed by removing `{ ... }` around the test function body.
2. This, in turn, hid an issue with `toFile` not throwing the expected error message, which was instead propagating `Error: VipsOperation: class "jp2ksave" not found` from libvips. This is now fixed by manually checking the extension before calling into libvips.
3. Pre-creating error instances as `errJp2Save` did is sometimes tempting, but it is problematic for debugging because it hides the actual stacktrace of the error (the stacktrace is collected at the moment of `new Error` creation). This is now turned into a function that creates the error with the correct stack.
2022-11-08 19:53:14 +00:00
Lovell Fuller
ca22af203f Docs: canvas on Windows uses MSVCRT, conflicts with UCRT 2022-11-07 21:14:51 +00:00
Lovell Fuller
9fa516e849 Release v0.31.2 2022-11-04 09:44:37 +00:00
Lovell Fuller
12f472126d CI: Only pin Python version on x64 macOS and Windows
See commit 18be09f
2022-11-03 14:49:12 +00:00
Lovell Fuller
18be09f1d7 CI: Pin Python to 3.10
Python 3.11 removes support for opening files in
'universal newline' mode (e.g. 'rU'), however older
versions of node-gyp such as v6 still use it.
2022-11-03 14:40:16 +00:00
Lovell Fuller
b3c3290f90 Upgrade to libvips v8.13.3 2022-11-03 14:09:23 +00:00
Lovell Fuller
123f95c85a Bump devDeps 2022-11-03 12:50:58 +00:00
Lovell Fuller
5b0fba4c01 Ensure auto-rotate always works without resize #3422 2022-11-02 13:59:34 +00:00
Lovell Fuller
37f7ccfff4 CI: upgrade to checkout v3 2022-10-17 16:05:30 +01:00
Lovell Fuller
51811d06e2 Bump deps 2022-10-17 16:05:03 +01:00
Lovell Fuller
181731f8f4 Tests: increase timeout to 30s
Font discovery appears to be slooow on Windows
2022-10-17 15:54:43 +01:00
Gino Emiliozzi
ae79d26ead Docs: help clarify 'fit' is option name, not value (#3410) 2022-10-17 15:26:59 +01:00
Lovell Fuller
eacb8337fa Ensure manual flip, rotate, resize op order #3391 2022-10-01 11:55:29 +01:00
179 changed files with 6884 additions and 2646 deletions


@@ -1,5 +1,5 @@
freebsd_instance:
image_family: freebsd-14-0-snap
image_family: freebsd-13-2
task:
name: FreeBSD
@@ -9,7 +9,7 @@ task:
prerequisites_script:
- pkg update -f
- pkg upgrade -y
- pkg install -y devel/pkgconf graphics/vips www/node16 www/npm
- pkg install -y devel/git devel/pkgconf graphics/vips www/node20 www/npm
install_script:
- npm install --build-from-source --unsafe-perm
test_script:


@@ -33,6 +33,7 @@ To test C++ changes, you can compile the module using `npm install --build-from-
Please add JavaScript [unit tests](https://github.com/lovell/sharp/tree/main/test/unit) to cover your new feature.
A test coverage report for the JavaScript code is generated in the `coverage/lcov-report` directory.
Please also update the [TypeScript definitions](https://github.com/lovell/sharp/tree/main/lib/index.d.ts), along with the [type definition tests](https://github.com/lovell/sharp/tree/main/test/types/sharp.test-d.ts).
Where possible, the functional tests use gradient-based perceptual hashes
based on [dHash](http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html)

.github/SECURITY.md (new file)

@@ -0,0 +1,18 @@
# Security Policy
## Supported Versions
The latest version of `sharp` as published to npm
and reported by `npm view sharp dist-tags.latest`
is supported with security updates.
## Reporting a Vulnerability
Please use
[e-mail](https://github.com/lovell/sharp/blob/main/package.json#L5)
to report a vulnerability.
You can expect a response within 48 hours
if you are a human reporting a genuine issue.
Thank you in advance.


@@ -22,13 +22,13 @@ jobs:
run:
shell: /usr/bin/arch -arch arm64e /bin/bash -l {0}
steps:
- name: Dependencies
- name: Dependencies (Node.js)
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.nodejs_version }}
architecture: ${{ matrix.nodejs_arch }}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Install
run: npm install --build-from-source --unsafe-perm
- name: Test


@@ -66,6 +66,7 @@ jobs:
if: contains(matrix.container, 'centos')
run: |
curl -sL https://rpm.nodesource.com/setup_${{ matrix.nodejs_version }}.x | bash -
yum install -y https://packages.endpointdev.com/rhel/7/os/x86_64/endpoint-repo.x86_64.rpm
yum install -y centos-release-scl
yum install -y devtoolset-11-gcc-c++ make git python3 nodejs fontconfig google-noto-sans-fonts
echo "/opt/rh/devtoolset-11/root/usr/bin" >> $GITHUB_PATH
@@ -78,14 +79,19 @@ jobs:
- name: Dependencies (Linux musl)
if: contains(matrix.container, 'alpine')
run: apk add build-base git python3 font-noto --update-cache
- name: Dependencies (macOS, Windows)
- name: Dependencies (Python 3.10 - macOS, Windows)
if: contains(matrix.os, 'macos') || contains(matrix.os, 'windows')
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Dependencies (Node.js - macOS, Windows)
if: contains(matrix.os, 'macos') || contains(matrix.os, 'windows')
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.nodejs_version }}
architecture: ${{ matrix.nodejs_arch }}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Fix working directory ownership
if: matrix.container
run: chown root.root .

.gitignore

@@ -3,6 +3,7 @@ node_modules
/coverage
test/bench/node_modules
test/fixtures/output*
test/fixtures/vips-properties.xml
test/leak/libvips.supp
test/saliency/report.json
test/saliency/Image*

.mocharc.jsonc (new file)

@@ -0,0 +1,7 @@
{
"parallel": true,
"slow": 1000,
"timeout": 30000,
"require": "./test/beforeEach.js",
"spec": "./test/unit/*.js"
}


@@ -98,11 +98,9 @@ readableStream
A [guide for contributors](https://github.com/lovell/sharp/blob/main/.github/CONTRIBUTING.md)
covers reporting bugs, requesting features and submitting code changes.
[![Node-API v5](https://img.shields.io/badge/Node--API-v5-green.svg)](https://nodejs.org/dist/latest/docs/api/n-api.html#n_api_n_api_version_matrix)
## Licensing
Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022 Lovell Fuller and contributors.
Copyright 2013 Lovell Fuller and others.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -70,7 +70,9 @@
}, {
'target_name': 'sharp-<(platform_and_arch)',
'defines': [
'NAPI_VERSION=7'
'NAPI_VERSION=7',
'NODE_ADDON_API_DISABLE_DEPRECATED',
'NODE_API_SWALLOW_UNTHROWABLE_EXCEPTIONS'
],
'dependencies': [
'<!(node -p "require(\'node-addon-api\').gyp")',
@@ -179,7 +181,7 @@
],
'xcode_settings': {
'CLANG_CXX_LANGUAGE_STANDARD': 'c++11',
'MACOSX_DEPLOYMENT_TARGET': '10.9',
'MACOSX_DEPLOYMENT_TARGET': '10.13',
'GCC_ENABLE_CPP_EXCEPTIONS': 'YES',
'GCC_ENABLE_CPP_RTTI': 'YES',
'OTHER_CPLUSPLUSFLAGS': [


@@ -74,7 +74,7 @@ covers reporting bugs, requesting features and submitting code changes.
### Licensing
Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022 Lovell Fuller and contributors.
Copyright 2013 Lovell Fuller and others.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -1,14 +1,13 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## removeAlpha
> removeAlpha() ⇒ <code>Sharp</code>
Remove alpha channel, if any. This is a no-op if the image does not have an alpha channel.
See also [flatten][1].
See also [flatten](/api-operation#flatten).
### Examples
```javascript
**Example**
```js
sharp('rgba.png')
.removeAlpha()
.toFile('rgb.png', function(err, info) {
@@ -16,61 +15,66 @@ sharp('rgba.png')
});
```
Returns **Sharp**&#x20;
## ensureAlpha
> ensureAlpha([alpha]) ⇒ <code>Sharp</code>
Ensure the output image has an alpha transparency channel.
If missing, the added alpha channel will have the specified
transparency level, defaulting to fully-opaque (1).
This is a no-op if the image already has an alpha channel.
### Parameters
* `alpha` **[number][2]** alpha transparency level (0=fully-transparent, 1=fully-opaque) (optional, default `1`)
**Throws**:
### Examples
- <code>Error</code> Invalid alpha transparency level
```javascript
**Since**: 0.21.2
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [alpha] | <code>number</code> | <code>1</code> | alpha transparency level (0=fully-transparent, 1=fully-opaque) |
**Example**
```js
// rgba.png will be a 4 channel image with a fully-opaque alpha channel
await sharp('rgb.jpg')
.ensureAlpha()
.toFile('rgba.png')
```
```javascript
**Example**
```js
// rgba is a 4 channel image with a fully-transparent alpha channel
const rgba = await sharp(rgb)
.ensureAlpha(0)
.toBuffer();
```
* Throws **[Error][3]** Invalid alpha transparency level
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.21.2
## extractChannel
> extractChannel(channel) ⇒ <code>Sharp</code>
Extract a single channel from a multi-channel image.
### Parameters
* `channel` **([number][2] | [string][4])** zero-indexed channel/band number to extract, or `red`, `green`, `blue` or `alpha`.
**Throws**:
### Examples
- <code>Error</code> Invalid channel
```javascript
| Param | Type | Description |
| --- | --- | --- |
| channel | <code>number</code> \| <code>string</code> | zero-indexed channel/band number to extract, or `red`, `green`, `blue` or `alpha`. |
**Example**
```js
// green.jpg is a greyscale image containing the green channel of the input
await sharp(input)
.extractChannel('green')
.toFile('green.jpg');
```
```javascript
**Example**
```js
// red1 is the red value of the first pixel, red2 the second pixel etc.
const [red1, red2, ...] = await sharp(input)
.extractChannel(0)
@@ -78,45 +82,50 @@ const [red1, red2, ...] = await sharp(input)
.toBuffer();
```
* Throws **[Error][3]** Invalid channel
Returns **Sharp**&#x20;
## joinChannel
> joinChannel(images, options) ⇒ <code>Sharp</code>
Join one or more channels to the image.
The meaning of the added channels depends on the output colourspace, set with `toColourspace()`.
By default the output image will be web-friendly sRGB, with additional channels interpreted as alpha channels.
Channel ordering follows vips convention:
* sRGB: 0: Red, 1: Green, 2: Blue, 3: Alpha.
* CMYK: 0: Magenta, 1: Cyan, 2: Yellow, 3: Black, 4: Alpha.
- sRGB: 0: Red, 1: Green, 2: Blue, 3: Alpha.
- CMYK: 0: Magenta, 1: Cyan, 2: Yellow, 3: Black, 4: Alpha.
Buffers may be any of the image formats supported by sharp.
For raw pixel input, the `options` object should contain a `raw` attribute, which follows the format of the attribute of the same name in the `sharp()` constructor.
### Parameters
* `images` **([Array][5]<([string][4] | [Buffer][6])> | [string][4] | [Buffer][6])** one or more images (file paths, Buffers).
* `options` **[Object][7]** image options, see `sharp()` constructor.
**Throws**:
<!---->
- <code>Error</code> Invalid parameters
| Param | Type | Description |
| --- | --- | --- |
| images | <code>Array.&lt;(string\|Buffer)&gt;</code> \| <code>string</code> \| <code>Buffer</code> | one or more images (file paths, Buffers). |
| options | <code>Object</code> | image options, see `sharp()` constructor. |
* Throws **[Error][3]** Invalid parameters
Returns **Sharp**&#x20;
## bandbool
> bandbool(boolOp) ⇒ <code>Sharp</code>
Perform a bitwise boolean operation on all input image channels (bands) to produce a single channel output image.
### Parameters
* `boolOp` **[string][4]** one of `and`, `or` or `eor` to perform that bitwise operation, like the C logic operators `&`, `|` and `^` respectively.
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
| Param | Type | Description |
| --- | --- | --- |
| boolOp | <code>string</code> | one of `and`, `or` or `eor` to perform that bitwise operation, like the C logic operators `&`, `|` and `^` respectively. |
**Example**
```js
sharp('3-channel-rgb-input.png')
.bandbool(sharp.bool.and)
.toFile('1-channel-output.png', function (err, info) {
@@ -124,22 +133,4 @@ sharp('3-channel-rgb-input.png')
// If `I(1,1) = [247, 170, 14] = [0b11110111, 0b10101010, 0b00001111]`
// then `O(1,1) = 0b11110111 & 0b10101010 & 0b00001111 = 0b00000010 = 2`.
});
```
* Throws **[Error][3]** Invalid parameters
Returns **Sharp**&#x20;
[1]: /api-operation#flatten
[2]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
[3]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
[4]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array
[6]: https://nodejs.org/api/buffer.html
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
```


@@ -1,27 +1,29 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## tint
> tint(rgb) ⇒ <code>Sharp</code>
Tint the image using the provided chroma while preserving the image luminance.
An alpha channel may be present and will be unchanged by the operation.
### Parameters
* `rgb` **([string][1] | [Object][2])** parsed by the [color][3] module to extract chroma values.
**Throws**:
### Examples
- <code>Error</code> Invalid parameter
```javascript
| Param | Type | Description |
| --- | --- | --- |
| rgb | <code>string</code> \| <code>Object</code> | parsed by the [color](https://www.npmjs.org/package/color) module to extract chroma values. |
**Example**
```js
const output = await sharp(input)
.tint({ r: 255, g: 240, b: 16 })
.toBuffer();
```
* Throws **[Error][4]** Invalid parameter
Returns **Sharp**&#x20;
## greyscale
> greyscale([greyscale]) ⇒ <code>Sharp</code>
Convert to 8-bit greyscale; 256 shades of grey.
This is a linear operation. If the input image is in a non-linear colour space such as sRGB, use `gamma()` with `greyscale()` for the best results.
@@ -30,44 +32,55 @@ This may be overridden by other sharp operations such as `toColourspace('b-w')`,
which will produce an output image containing one color channel.
An alpha channel may be present, and will be unchanged by the operation.
### Parameters
* `greyscale` **[Boolean][5]** (optional, default `true`)
### Examples
| Param | Type | Default |
| --- | --- | --- |
| [greyscale] | <code>Boolean</code> | <code>true</code> |
```javascript
**Example**
```js
const output = await sharp(input).greyscale().toBuffer();
```
Returns **Sharp**&#x20;
## grayscale
> grayscale([grayscale]) ⇒ <code>Sharp</code>
Alternative spelling of `greyscale`.
### Parameters
* `grayscale` **[Boolean][5]** (optional, default `true`)
Returns **Sharp**&#x20;
| Param | Type | Default |
| --- | --- | --- |
| [grayscale] | <code>Boolean</code> | <code>true</code> |
## pipelineColourspace
> pipelineColourspace([colourspace]) ⇒ <code>Sharp</code>
Set the pipeline colourspace.
The input image will be converted to the provided colourspace at the start of the pipeline.
All operations will use this colourspace before converting to the output colourspace, as defined by [toColourspace][6].
All operations will use this colourspace before converting to the output colourspace,
as defined by [toColourspace](#tocolourspace).
This feature is experimental and has not yet been fully-tested with all operations.
### Parameters
* `colourspace` **[string][1]?** pipeline colourspace e.g. `rgb16`, `scrgb`, `lab`, `grey16` [...][7]
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
**Since**: 0.29.0
| Param | Type | Description |
| --- | --- | --- |
| [colourspace] | <code>string</code> | pipeline colourspace e.g. `rgb16`, `scrgb`, `lab`, `grey16` [...](https://github.com/libvips/libvips/blob/41cff4e9d0838498487a00623462204eb10ee5b8/libvips/iofuncs/enumtypes.c#L774) |
**Example**
```js
// Run pipeline in 16 bits per channel RGB while converting final result to 8 bits per channel sRGB.
await sharp(input)
.pipelineColourspace('rgb16')
@@ -75,76 +88,60 @@ await sharp(input)
.toFile('16bpc-pipeline-to-8bpc-output.png')
```
* Throws **[Error][4]** Invalid parameters
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.29.0
## pipelineColorspace
> pipelineColorspace([colorspace]) ⇒ <code>Sharp</code>
Alternative spelling of `pipelineColourspace`.
### Parameters
* `colorspace` **[string][1]?** pipeline colorspace.
**Throws**:
<!---->
- <code>Error</code> Invalid parameters
| Param | Type | Description |
| --- | --- | --- |
| [colorspace] | <code>string</code> | pipeline colorspace. |
* Throws **[Error][4]** Invalid parameters
Returns **Sharp**&#x20;
## toColourspace
> toColourspace([colourspace]) ⇒ <code>Sharp</code>
Set the output colourspace.
By default output image will be web-friendly sRGB, with additional channels interpreted as alpha channels.
### Parameters
* `colourspace` **[string][1]?** output colourspace e.g. `srgb`, `rgb`, `cmyk`, `lab`, `b-w` [...][8]
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
| Param | Type | Description |
| --- | --- | --- |
| [colourspace] | <code>string</code> | output colourspace e.g. `srgb`, `rgb`, `cmyk`, `lab`, `b-w` [...](https://github.com/libvips/libvips/blob/3c0bfdf74ce1dc37a6429bed47fa76f16e2cd70a/libvips/iofuncs/enumtypes.c#L777-L794) |
**Example**
```js
// Output 16 bits per pixel RGB
await sharp(input)
.toColourspace('rgb16')
.toFile('16-bpp.png')
```
* Throws **[Error][4]** Invalid parameters
Returns **Sharp**&#x20;
## toColorspace
> toColorspace([colorspace]) ⇒ <code>Sharp</code>
Alternative spelling of `toColourspace`.
### Parameters
* `colorspace` **[string][1]?** output colorspace.
**Throws**:
<!---->
- <code>Error</code> Invalid parameters
* Throws **[Error][4]** Invalid parameters
Returns **Sharp**&#x20;
[1]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[2]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[3]: https://www.npmjs.org/package/color
[4]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
[6]: #tocolourspace
[7]: https://github.com/libvips/libvips/blob/41cff4e9d0838498487a00623462204eb10ee5b8/libvips/iofuncs/enumtypes.c#L774
[8]: https://github.com/libvips/libvips/blob/3c0bfdf74ce1dc37a6429bed47fa76f16e2cd70a/libvips/iofuncs/enumtypes.c#L777-L794
| Param | Type | Description |
| --- | --- | --- |
| [colorspace] | <code>string</code> | output colorspace. |


@@ -1,13 +1,12 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## composite
> composite(images) ⇒ <code>Sharp</code>
Composite image(s) over the processed (resized, extracted etc.) image.
The images to composite must be the same size or smaller than the processed image.
If both `top` and `left` options are provided, they take precedence over `gravity`.
Any resize or rotate operations in the same processing pipeline
Any resize, rotate or extract operations in the same processing pipeline
will always be applied to the input image before composition.
The `blend` option can be one of `clear`, `source`, `over`, `in`, `out`, `atop`,
@@ -17,52 +16,53 @@ The `blend` option can be one of `clear`, `source`, `over`, `in`, `out`, `atop`,
`hard-light`, `soft-light`, `difference`, `exclusion`.
More information about blend modes can be found at
[https://www.libvips.org/API/current/libvips-conversion.html#VipsBlendMode][1]
and [https://www.cairographics.org/operators/][2]
https://www.libvips.org/API/current/libvips-conversion.html#VipsBlendMode
and https://www.cairographics.org/operators/
### Parameters
* `images` **[Array][3]<[Object][4]>** Ordered list of images to composite
**Throws**:
* `images[].input` **([Buffer][5] | [String][6])?** Buffer containing image data, String containing the path to an image file, or Create object (see below)
- <code>Error</code> Invalid parameters
* `images[].input.create` **[Object][4]?** describes a blank overlay to be created.
**Since**: 0.22.0
* `images[].input.create.width` **[Number][7]?**&#x20;
* `images[].input.create.height` **[Number][7]?**&#x20;
* `images[].input.create.channels` **[Number][7]?** 3-4
* `images[].input.create.background` **([String][6] | [Object][4])?** parsed by the [color][8] module to extract values for red, green, blue and alpha.
* `images[].input.text` **[Object][4]?** describes a new text image to be created.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| images | <code>Array.&lt;Object&gt;</code> | | Ordered list of images to composite |
| [images[].input] | <code>Buffer</code> \| <code>String</code> | | Buffer containing image data, String containing the path to an image file, or Create object (see below) |
| [images[].input.create] | <code>Object</code> | | describes a blank overlay to be created. |
| [images[].input.create.width] | <code>Number</code> | | |
| [images[].input.create.height] | <code>Number</code> | | |
| [images[].input.create.channels] | <code>Number</code> | | 3-4 |
| [images[].input.create.background] | <code>String</code> \| <code>Object</code> | | parsed by the [color](https://www.npmjs.org/package/color) module to extract values for red, green, blue and alpha. |
| [images[].input.text] | <code>Object</code> | | describes a new text image to be created. |
| [images[].input.text.text] | <code>string</code> | | text to render as a UTF-8 string. It can contain Pango markup, for example `<i>Le</i>Monde`. |
| [images[].input.text.font] | <code>string</code> | | font name to render with. |
| [images[].input.text.fontfile] | <code>string</code> | | absolute filesystem path to a font file that can be used by `font`. |
| [images[].input.text.width] | <code>number</code> | <code>0</code> | integral number of pixels to word-wrap at. Lines of text wider than this will be broken at word boundaries. |
| [images[].input.text.height] | <code>number</code> | <code>0</code> | integral number of pixels high. When defined, `dpi` will be ignored and the text will automatically fit the pixel resolution defined by `width` and `height`. Will be ignored if `width` is not specified or set to 0. |
| [images[].input.text.align] | <code>string</code> | <code>&quot;&#x27;left&#x27;&quot;</code> | text alignment (`'left'`, `'centre'`, `'center'`, `'right'`). |
| [images[].input.text.justify] | <code>boolean</code> | <code>false</code> | set this to true to apply justification to the text. |
| [images[].input.text.dpi] | <code>number</code> | <code>72</code> | the resolution (size) at which to render the text. Does not take effect if `height` is specified. |
| [images[].input.text.rgba] | <code>boolean</code> | <code>false</code> | set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for Pango markup features like `<span foreground="red">Red!</span>`. |
| [images[].input.text.spacing] | <code>number</code> | <code>0</code> | text line height in points. Will use the font line height if none is specified. |
| [images[].blend] | <code>String</code> | <code>&#x27;over&#x27;</code> | how to blend this image with the image below. |
| [images[].gravity] | <code>String</code> | <code>&#x27;centre&#x27;</code> | gravity at which to place the overlay. |
| [images[].top] | <code>Number</code> | | the pixel offset from the top edge. |
| [images[].left] | <code>Number</code> | | the pixel offset from the left edge. |
| [images[].tile] | <code>Boolean</code> | <code>false</code> | set to true to repeat the overlay image across the entire image with the given `gravity`. |
| [images[].premultiplied] | <code>Boolean</code> | <code>false</code> | set to true to avoid premultiplying the image below. Equivalent to the `--premultiplied` vips option. |
| [images[].density] | <code>Number</code> | <code>72</code> | number representing the DPI for vector overlay image. |
| [images[].raw] | <code>Object</code> | | describes overlay when using raw pixel data. |
| [images[].raw.width] | <code>Number</code> | | |
| [images[].raw.height] | <code>Number</code> | | |
| [images[].raw.channels] | <code>Number</code> | | |
| [images[].animated] | <code>boolean</code> | <code>false</code> | Set to `true` to read all frames/pages of an animated image. |
| [images[].failOn] | <code>string</code> | <code>&quot;&#x27;warning&#x27;&quot;</code> | @see [constructor parameters](/api-constructor#parameters) |
| [images[].limitInputPixels] | <code>number</code> \| <code>boolean</code> | <code>268402689</code> | @see [constructor parameters](/api-constructor#parameters) |
* `images[].input.text.text` **[string][6]?** text to render as a UTF-8 string. It can contain Pango markup, for example `<i>Le</i>Monde`.
* `images[].input.text.font` **[string][6]?** font name to render with.
* `images[].input.text.fontfile` **[string][6]?** absolute filesystem path to a font file that can be used by `font`.
* `images[].input.text.width` **[number][7]** integral number of pixels to word-wrap at. Lines of text wider than this will be broken at word boundaries. (optional, default `0`)
* `images[].input.text.height` **[number][7]** integral number of pixels high. When defined, `dpi` will be ignored and the text will automatically fit the pixel resolution defined by `width` and `height`. Will be ignored if `width` is not specified or set to 0. (optional, default `0`)
* `images[].input.text.align` **[string][6]** text alignment (`'left'`, `'centre'`, `'center'`, `'right'`). (optional, default `'left'`)
* `images[].input.text.justify` **[boolean][9]** set this to true to apply justification to the text. (optional, default `false`)
* `images[].input.text.dpi` **[number][7]** the resolution (size) at which to render the text. Does not take effect if `height` is specified. (optional, default `72`)
* `images[].input.text.rgba` **[boolean][9]** set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for pango markup features like `<span foreground="red">Red!</span>`. (optional, default `false`)
* `images[].input.text.spacing` **[number][7]** text line height in points. Will use the font line height if none is specified. (optional, default `0`)
* `images[].blend` **[String][6]** how to blend this image with the image below. (optional, default `'over'`)
* `images[].gravity` **[String][6]** gravity at which to place the overlay. (optional, default `'centre'`)
* `images[].top` **[Number][7]?** the pixel offset from the top edge.
* `images[].left` **[Number][7]?** the pixel offset from the left edge.
* `images[].tile` **[Boolean][9]** set to true to repeat the overlay image across the entire image with the given `gravity`. (optional, default `false`)
* `images[].premultiplied` **[Boolean][9]** set to true to avoid premultipling the image below. Equivalent to the `--premultiplied` vips option. (optional, default `false`)
* `images[].density` **[Number][7]** number representing the DPI for vector overlay image. (optional, default `72`)
* `images[].raw` **[Object][4]?** describes overlay when using raw pixel data.
* `images[].raw.width` **[Number][7]?**&#x20;
* `images[].raw.height` **[Number][7]?**&#x20;
* `images[].raw.channels` **[Number][7]?**&#x20;
* `images[].animated` **[boolean][9]** Set to `true` to read all frames/pages of an animated image. (optional, default `false`)
* `images[].failOn` **[string][6]** @see [constructor parameters][10] (optional, default `'warning'`)
* `images[].limitInputPixels` **([number][7] | [boolean][9])** @see [constructor parameters][10] (optional, default `268402689`)
### Examples
```javascript
**Example**
```js
await sharp(background)
.composite([
{ input: layer1, gravity: 'northwest' },
@@ -70,16 +70,16 @@ await sharp(background)
])
.toFile('combined.png');
```
```javascript
**Example**
```js
const output = await sharp('input.gif', { animated: true })
.composite([
{ input: 'overlay.png', tile: true, blend: 'saturate' }
])
.toBuffer();
```
```javascript
**Example**
```js
sharp('input.png')
.rotate(180)
.resize(300)
@@ -94,34 +94,4 @@ sharp('input.png')
// onto orange background, composited with overlay.png with SE gravity,
// sharpened, with metadata, 90% quality WebP image data. Phew!
});
```
* Throws **[Error][11]** Invalid parameters
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.22.0
[1]: https://www.libvips.org/API/current/libvips-conversion.html#VipsBlendMode
[2]: https://www.cairographics.org/operators/
[3]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array
[4]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[5]: https://nodejs.org/api/buffer.html
[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
[8]: https://www.npmjs.org/package/color
[9]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
[10]: /api-constructor#parameters
[11]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
```


@@ -1,6 +1,12 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## Sharp
> Sharp
**Emits**: <code>Sharp#event:info</code>, <code>Sharp#event:warning</code>
<a name="new_Sharp_new"></a>
### new
> new Sharp([input], [options])
Constructor factory to create an instance of `sharp`, to which further methods are chained.
@@ -9,64 +15,57 @@ When using Stream based output, derived attributes are available from the `info`
Non-critical problems encountered during processing are emitted as `warning` events.
Implements the [stream.Duplex][1] class.
Implements the [stream.Duplex](http://nodejs.org/api/stream.html#stream_class_stream_duplex) class.
### Parameters
**Throws**:
* `input` **([Buffer][2] | [Uint8Array][3] | [Uint8ClampedArray][4] | [Int8Array][5] | [Uint16Array][6] | [Int16Array][7] | [Uint32Array][8] | [Int32Array][9] | [Float32Array][10] | [Float64Array][11] | [string][12])?** if present, can be
a Buffer / Uint8Array / Uint8ClampedArray containing JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image data, or
a TypedArray containing raw pixel image data, or
a String containing the filesystem path to an JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image file.
JPEG, PNG, WebP, AVIF, GIF, SVG, TIFF or raw pixel image data can be streamed into the object when not present.
* `options` **[Object][13]?** if present, is an Object with optional attributes.
- <code>Error</code> Invalid parameters
* `options.failOn` **[string][12]** level of sensitivity to invalid images, one of (in order of sensitivity): 'none' (least), 'truncated', 'error' or 'warning' (most), highers level imply lower levels. (optional, default `'warning'`)
* `options.limitInputPixels` **([number][14] | [boolean][15])** Do not process input images where the number of pixels
(width x height) exceeds this limit. Assumes image dimensions contained in the input metadata can be trusted.
An integral Number of pixels, zero or false to remove limit, true to use default limit of 268402689 (0x3FFF x 0x3FFF). (optional, default `268402689`)
* `options.unlimited` **[boolean][15]** Set this to `true` to remove safety features that help prevent memory exhaustion (JPEG, PNG, SVG, HEIF). (optional, default `false`)
* `options.sequentialRead` **[boolean][15]** Set this to `true` to use sequential rather than random access where possible.
This can reduce memory usage and might improve performance on some systems. (optional, default `false`)
* `options.density` **[number][14]** number representing the DPI for vector images in the range 1 to 100000. (optional, default `72`)
* `options.pages` **[number][14]** number of pages to extract for multi-page input (GIF, WebP, AVIF, TIFF, PDF), use -1 for all pages. (optional, default `1`)
* `options.page` **[number][14]** page number to start extracting from for multi-page input (GIF, WebP, AVIF, TIFF, PDF), zero based. (optional, default `0`)
* `options.subifd` **[number][14]** subIFD (Sub Image File Directory) to extract for OME-TIFF, defaults to main image. (optional, default `-1`)
* `options.level` **[number][14]** level to extract from a multi-level input (OpenSlide), zero based. (optional, default `0`)
* `options.animated` **[boolean][15]** Set to `true` to read all frames/pages of an animated image (equivalent of setting `pages` to `-1`). (optional, default `false`)
* `options.raw` **[Object][13]?** describes raw pixel input image data. See `raw()` for pixel ordering.
* `options.raw.width` **[number][14]?** integral number of pixels wide.
* `options.raw.height` **[number][14]?** integral number of pixels high.
* `options.raw.channels` **[number][14]?** integral number of channels, between 1 and 4.
* `options.raw.premultiplied` **[boolean][15]?** specifies that the raw input has already been premultiplied, set to `true`
to avoid sharp premultiplying the image. (optional, default `false`)
* `options.create` **[Object][13]?** describes a new image to be created.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [input] | <code>Buffer</code> \| <code>ArrayBuffer</code> \| <code>Uint8Array</code> \| <code>Uint8ClampedArray</code> \| <code>Int8Array</code> \| <code>Uint16Array</code> \| <code>Int16Array</code> \| <code>Uint32Array</code> \| <code>Int32Array</code> \| <code>Float32Array</code> \| <code>Float64Array</code> \| <code>string</code> | | if present, can be a Buffer / ArrayBuffer / Uint8Array / Uint8ClampedArray containing JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image data, or a TypedArray containing raw pixel image data, or a String containing the filesystem path to an JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image file. JPEG, PNG, WebP, AVIF, GIF, SVG, TIFF or raw pixel image data can be streamed into the object when not present. |
| [options] | <code>Object</code> | | if present, is an Object with optional attributes. |
| [options.failOn] | <code>string</code> | <code>&quot;&#x27;warning&#x27;&quot;</code> | when to abort processing of invalid pixel data, one of (in order of sensitivity): 'none' (least), 'truncated', 'error' or 'warning' (most), higher levels imply lower levels, invalid metadata will always abort. |
| [options.limitInputPixels] | <code>number</code> \| <code>boolean</code> | <code>268402689</code> | Do not process input images where the number of pixels (width x height) exceeds this limit. Assumes image dimensions contained in the input metadata can be trusted. An integral Number of pixels, zero or false to remove limit, true to use default limit of 268402689 (0x3FFF x 0x3FFF). |
| [options.unlimited] | <code>boolean</code> | <code>false</code> | Set this to `true` to remove safety features that help prevent memory exhaustion (JPEG, PNG, SVG, HEIF). |
| [options.sequentialRead] | <code>boolean</code> | <code>true</code> | Set this to `false` to use random access rather than sequential read. Some operations will do this automatically. |
| [options.density] | <code>number</code> | <code>72</code> | number representing the DPI for vector images in the range 1 to 100000. |
| [options.ignoreIcc] | <code>number</code> | <code>false</code> | should the embedded ICC profile, if any, be ignored. |
| [options.pages] | <code>number</code> | <code>1</code> | Number of pages to extract for multi-page input (GIF, WebP, TIFF), use -1 for all pages. |
| [options.page] | <code>number</code> | <code>0</code> | Page number to start extracting from for multi-page input (GIF, WebP, TIFF), zero based. |
| [options.subifd] | <code>number</code> | <code>-1</code> | subIFD (Sub Image File Directory) to extract for OME-TIFF, defaults to main image. |
| [options.level] | <code>number</code> | <code>0</code> | level to extract from a multi-level input (OpenSlide), zero based. |
| [options.animated] | <code>boolean</code> | <code>false</code> | Set to `true` to read all frames/pages of an animated image (GIF, WebP, TIFF), equivalent of setting `pages` to `-1`. |
| [options.raw] | <code>Object</code> | | describes raw pixel input image data. See `raw()` for pixel ordering. |
| [options.raw.width] | <code>number</code> | | integral number of pixels wide. |
| [options.raw.height] | <code>number</code> | | integral number of pixels high. |
| [options.raw.channels] | <code>number</code> | | integral number of channels, between 1 and 4. |
| [options.raw.premultiplied] | <code>boolean</code> | | specifies that the raw input has already been premultiplied, set to `true` to avoid sharp premultiplying the image. (optional, default `false`) |
| [options.create] | <code>Object</code> | | describes a new image to be created. |
| [options.create.width] | <code>number</code> | | integral number of pixels wide. |
| [options.create.height] | <code>number</code> | | integral number of pixels high. |
| [options.create.channels] | <code>number</code> | | integral number of channels, either 3 (RGB) or 4 (RGBA). |
| [options.create.background] | <code>string</code> \| <code>Object</code> | | parsed by the [color](https://www.npmjs.org/package/color) module to extract values for red, green, blue and alpha. |
| [options.create.noise] | <code>Object</code> | | describes a noise to be created. |
| [options.create.noise.type] | <code>string</code> | | type of generated noise, currently only `gaussian` is supported. |
| [options.create.noise.mean] | <code>number</code> | | mean of pixels in generated noise. |
| [options.create.noise.sigma] | <code>number</code> | | standard deviation of pixels in generated noise. |
| [options.text] | <code>Object</code> | | describes a new text image to be created. |
| [options.text.text] | <code>string</code> | | text to render as a UTF-8 string. It can contain Pango markup, for example `<i>Le</i>Monde`. |
| [options.text.font] | <code>string</code> | | font name to render with. |
| [options.text.fontfile] | <code>string</code> | | absolute filesystem path to a font file that can be used by `font`. |
| [options.text.width] | <code>number</code> | <code>0</code> | Integral number of pixels to word-wrap at. Lines of text wider than this will be broken at word boundaries. |
| [options.text.height] | <code>number</code> | <code>0</code> | Maximum integral number of pixels high. When defined, `dpi` will be ignored and the text will automatically fit the pixel resolution defined by `width` and `height`. Will be ignored if `width` is not specified or set to 0. |
| [options.text.align] | <code>string</code> | <code>&quot;&#x27;left&#x27;&quot;</code> | Alignment style for multi-line text (`'left'`, `'centre'`, `'center'`, `'right'`). |
| [options.text.justify] | <code>boolean</code> | <code>false</code> | set this to true to apply justification to the text. |
| [options.text.dpi] | <code>number</code> | <code>72</code> | the resolution (size) at which to render the text. Does not take effect if `height` is specified. |
| [options.text.rgba] | <code>boolean</code> | <code>false</code> | set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for pango markup features like `<span foreground="red">Red!</span>`. |
| [options.text.spacing] | <code>number</code> | <code>0</code> | text line height in points. Will use the font line height if none is specified. |
| [options.text.wrap] | <code>string</code> | <code>&quot;&#x27;word&#x27;&quot;</code> | word wrapping style when width is provided, one of: 'word', 'char', 'charWord' (prefer char, fallback to word) or 'none'. |
* `options.create.width` **[number][14]?** integral number of pixels wide.
* `options.create.height` **[number][14]?** integral number of pixels high.
* `options.create.channels` **[number][14]?** integral number of channels, either 3 (RGB) or 4 (RGBA).
* `options.create.background` **([string][12] | [Object][13])?** parsed by the [color][16] module to extract values for red, green, blue and alpha.
* `options.create.noise` **[Object][13]?** describes a noise to be created.
* `options.create.noise.type` **[string][12]?** type of generated noise, currently only `gaussian` is supported.
* `options.create.noise.mean` **[number][14]?** mean of pixels in generated noise.
* `options.create.noise.sigma` **[number][14]?** standard deviation of pixels in generated noise.
* `options.text` **[Object][13]?** describes a new text image to be created.
* `options.text.text` **[string][12]?** text to render as a UTF-8 string. It can contain Pango markup, for example `<i>Le</i>Monde`.
* `options.text.font` **[string][12]?** font name to render with.
* `options.text.fontfile` **[string][12]?** absolute filesystem path to a font file that can be used by `font`.
* `options.text.width` **[number][14]** integral number of pixels to word-wrap at. Lines of text wider than this will be broken at word boundaries. (optional, default `0`)
* `options.text.height` **[number][14]** integral number of pixels high. When defined, `dpi` will be ignored and the text will automatically fit the pixel resolution defined by `width` and `height`. Will be ignored if `width` is not specified or set to 0. (optional, default `0`)
* `options.text.align` **[string][12]** text alignment (`'left'`, `'centre'`, `'center'`, `'right'`). (optional, default `'left'`)
* `options.text.justify` **[boolean][15]** set this to true to apply justification to the text. (optional, default `false`)
* `options.text.dpi` **[number][14]** the resolution (size) at which to render the text. Does not take effect if `height` is specified. (optional, default `72`)
* `options.text.rgba` **[boolean][15]** set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for pango markup features like `<span foreground="red">Red!</span>`. (optional, default `false`)
* `options.text.spacing` **[number][14]** text line height in points. Will use the font line height if none is specified. (optional, default `0`)
### Examples
```javascript
**Example**
```js
sharp('input.jpg')
.resize(300, 200)
.toFile('output.jpg', function(err) {
@@ -74,8 +73,8 @@ sharp('input.jpg')
// containing a scaled and cropped version of input.jpg
});
```
```javascript
**Example**
```js
// Read image data from readableStream,
// resize to 300 pixels wide,
// emit an 'info' event with calculated dimensions
@@ -87,9 +86,9 @@ var transformer = sharp()
});
readableStream.pipe(transformer).pipe(writableStream);
```
```javascript
// Create a blank 300x200 PNG image of semi-transluent red pixels
**Example**
```js
// Create a blank 300x200 PNG image of semi-translucent red pixels
sharp({
create: {
width: 300,
@@ -102,13 +101,13 @@ sharp({
.toBuffer()
.then( ... );
```
```javascript
**Example**
```js
// Convert an animated GIF to an animated WebP
await sharp('in.gif', { animated: true }).toFile('out.webp');
```
```javascript
**Example**
```js
// Read a raw array of pixels and save it to a png
const input = Uint8Array.from([255, 255, 255, 0, 0, 0]); // or Uint8ClampedArray
const image = sharp(input, {
@@ -122,8 +121,8 @@ const image = sharp(input, {
});
await image.toFile('my-two-pixels.png');
```
```javascript
**Example**
```js
// Generate RGB Gaussian noise
await sharp({
create: {
@@ -138,8 +137,8 @@ await sharp({
}
}).toFile('noise.png');
```
```javascript
**Example**
```js
// Generate an image from text
await sharp({
text: {
@@ -149,8 +148,8 @@ await sharp({
}
}).toFile('text_bw.png');
```
```javascript
**Example**
```js
// Generate an rgba image from text using pango markup and font
await sharp({
text: {
@@ -162,19 +161,17 @@ await sharp({
}).toFile('text_rgba.png');
```
* Throws **[Error][17]** Invalid parameters
Returns **[Sharp][18]**&#x20;
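None of the constructor examples above exercise the `failOn` or `limitInputPixels` options from the parameter table; a minimal, hedged sketch (the input path is illustrative):
**Example**
```js
// Tolerate truncated pixel data but keep the default pixel-count limit
const data = await sharp('possibly-truncated.jpg', {
  failOn: 'error',        // abort on errors and truncation, but not on warnings
  limitInputPixels: true  // true keeps the default limit of 268402689 pixels
})
  .resize(320)
  .toBuffer();
```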
## clone
> clone() ⇒ [<code>Sharp</code>](#Sharp)
Take a "snapshot" of the Sharp instance, returning a new instance.
Cloned instances inherit the input of their parent instance.
This allows multiple output Streams and therefore multiple processing pipelines to share a single input Stream.
### Examples
```javascript
**Example**
```js
const pipeline = sharp().rotate();
pipeline.clone().resize(800, 600).pipe(firstWritableStream);
pipeline.clone().extract({ left: 20, top: 20, width: 100, height: 100 }).pipe(secondWritableStream);
@@ -182,8 +179,8 @@ readableStream.pipe(pipeline);
// firstWritableStream receives auto-rotated, resized readableStream
// secondWritableStream receives auto-rotated, extracted region of readableStream
```
```javascript
**Example**
```js
// Create a pipeline that will download an image, resize it and format it to different files
// Using Promises to know when the pipeline is complete
const fs = require("fs");
@@ -228,42 +225,4 @@ Promise.all(promises)
fs.unlinkSync("optimized-500.webp");
} catch (e) {}
});
```
Returns **[Sharp][18]**&#x20;
[1]: http://nodejs.org/api/stream.html#stream_class_stream_duplex
[2]: https://nodejs.org/api/buffer.html
[3]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array
[4]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8ClampedArray
[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Int8Array
[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint16Array
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Int16Array
[8]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint32Array
[9]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Int32Array
[10]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Float32Array
[11]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Float64Array
[12]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[13]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[14]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
[15]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
[16]: https://www.npmjs.org/package/color
[17]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
[18]: #sharp
```


@@ -1,57 +1,60 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## metadata
> metadata([callback]) ⇒ <code>Promise.&lt;Object&gt;</code> \| <code>Sharp</code>
Fast access to (uncached) image metadata without decoding any compressed pixel data.
This is taken from the header of the input image.
It does not include operations, such as resize, to be applied to the output image.
This is read from the header of the input image.
It does not take into consideration any operations to be applied to the output image,
such as resize or rotate.
Dimensions in the response will respect the `page` and `pages` properties of the
[constructor parameters][1].
[constructor parameters](/api-constructor#parameters).
A `Promise` is returned when `callback` is not provided.
* `format`: Name of decoder used to decompress image data e.g. `jpeg`, `png`, `webp`, `gif`, `svg`
* `size`: Total size of image in bytes, for Stream and Buffer input only
* `width`: Number of pixels wide (EXIF orientation is not taken into consideration, see example below)
* `height`: Number of pixels high (EXIF orientation is not taken into consideration, see example below)
* `space`: Name of colour space interpretation e.g. `srgb`, `rgb`, `cmyk`, `lab`, `b-w` [...][2]
* `channels`: Number of bands e.g. `3` for sRGB, `4` for CMYK
* `depth`: Name of pixel depth format e.g. `uchar`, `char`, `ushort`, `float` [...][3]
* `density`: Number of pixels per inch (DPI), if present
* `chromaSubsampling`: String containing JPEG chroma subsampling, `4:2:0` or `4:4:4` for RGB, `4:2:0:4` or `4:4:4:4` for CMYK
* `isProgressive`: Boolean indicating whether the image is interlaced using a progressive scan
* `pages`: Number of pages/frames contained within the image, with support for TIFF, HEIF, PDF, animated GIF and animated WebP
* `pageHeight`: Number of pixels high each page in a multi-page image will be.
* `loop`: Number of times to loop an animated image, zero refers to a continuous loop.
* `delay`: Delay in ms between each page in an animated image, provided as an array of integers.
* `pagePrimary`: Number of the primary page in a HEIF image
* `levels`: Details of each level in a multi-level image provided as an array of objects, requires libvips compiled with support for OpenSlide
* `subifds`: Number of Sub Image File Directories in an OME-TIFF image
* `background`: Default background colour, if present, for PNG (bKGD) and GIF images, either an RGB Object or a single greyscale value
* `compression`: The encoder used to compress an HEIF file, `av1` (AVIF) or `hevc` (HEIC)
* `resolutionUnit`: The unit of resolution (density), either `inch` or `cm`, if present
* `hasProfile`: Boolean indicating the presence of an embedded ICC profile
* `hasAlpha`: Boolean indicating the presence of an alpha transparency channel
* `orientation`: Number value of the EXIF Orientation header, if present
* `exif`: Buffer containing raw EXIF data, if present
* `icc`: Buffer containing raw [ICC][4] profile data, if present
* `iptc`: Buffer containing raw IPTC data, if present
* `xmp`: Buffer containing raw XMP data, if present
* `tifftagPhotoshop`: Buffer containing raw TIFFTAG\_PHOTOSHOP data, if present
- `format`: Name of decoder used to decompress image data e.g. `jpeg`, `png`, `webp`, `gif`, `svg`
- `size`: Total size of image in bytes, for Stream and Buffer input only
- `width`: Number of pixels wide (EXIF orientation is not taken into consideration, see example below)
- `height`: Number of pixels high (EXIF orientation is not taken into consideration, see example below)
- `space`: Name of colour space interpretation e.g. `srgb`, `rgb`, `cmyk`, `lab`, `b-w` [...](https://www.libvips.org/API/current/VipsImage.html#VipsInterpretation)
- `channels`: Number of bands e.g. `3` for sRGB, `4` for CMYK
- `depth`: Name of pixel depth format e.g. `uchar`, `char`, `ushort`, `float` [...](https://www.libvips.org/API/current/VipsImage.html#VipsBandFormat)
- `density`: Number of pixels per inch (DPI), if present
- `chromaSubsampling`: String containing JPEG chroma subsampling, `4:2:0` or `4:4:4` for RGB, `4:2:0:4` or `4:4:4:4` for CMYK
- `isProgressive`: Boolean indicating whether the image is interlaced using a progressive scan
- `pages`: Number of pages/frames contained within the image, with support for TIFF, HEIF, PDF, animated GIF and animated WebP
- `pageHeight`: Number of pixels high each page in a multi-page image will be.
- `paletteBitDepth`: Bit depth of palette-based image (GIF, PNG).
- `loop`: Number of times to loop an animated image, zero refers to a continuous loop.
- `delay`: Delay in ms between each page in an animated image, provided as an array of integers.
- `pagePrimary`: Number of the primary page in a HEIF image
- `levels`: Details of each level in a multi-level image provided as an array of objects, requires libvips compiled with support for OpenSlide
- `subifds`: Number of Sub Image File Directories in an OME-TIFF image
- `background`: Default background colour, if present, for PNG (bKGD) and GIF images, either an RGB Object or a single greyscale value
- `compression`: The encoder used to compress an HEIF file, `av1` (AVIF) or `hevc` (HEIC)
- `resolutionUnit`: The unit of resolution (density), either `inch` or `cm`, if present
- `hasProfile`: Boolean indicating the presence of an embedded ICC profile
- `hasAlpha`: Boolean indicating the presence of an alpha transparency channel
- `orientation`: Number value of the EXIF Orientation header, if present
- `exif`: Buffer containing raw EXIF data, if present
- `icc`: Buffer containing raw [ICC](https://www.npmjs.com/package/icc) profile data, if present
- `iptc`: Buffer containing raw IPTC data, if present
- `xmp`: Buffer containing raw XMP data, if present
- `tifftagPhotoshop`: Buffer containing raw TIFFTAG_PHOTOSHOP data, if present
- `formatMagick`: String containing format for images loaded via *magick
### Parameters
* `callback` **[Function][5]?** called with the arguments `(err, metadata)`
### Examples
| Param | Type | Description |
| --- | --- | --- |
| [callback] | <code>function</code> | called with the arguments `(err, metadata)` |
```javascript
**Example**
```js
const metadata = await sharp(input).metadata();
```
```javascript
**Example**
```js
const image = sharp(inputJpg);
image
.metadata()
@@ -65,8 +68,8 @@ image
// data contains a WebP image half the width and height of the original JPEG
});
```
```javascript
**Example**
```js
// Based on EXIF rotation metadata, get the right-side-up width and height:
const size = getNormalSize(await sharp(input).metadata());
@@ -78,39 +81,40 @@ function getNormalSize({ width, height, orientation }) {
}
```
Returns **([Promise][6]<[Object][7]> | Sharp)**&#x20;
## stats
> stats([callback]) ⇒ <code>Promise.&lt;Object&gt;</code>
Access to pixel-derived image statistics for every channel in the image.
A `Promise` is returned when `callback` is not provided.
* `channels`: Array of channel statistics for each channel in the image. Each channel statistic contains
* `min` (minimum value in the channel)
* `max` (maximum value in the channel)
* `sum` (sum of all values in a channel)
* `squaresSum` (sum of squared values in a channel)
* `mean` (mean of the values in a channel)
* `stdev` (standard deviation for the values in a channel)
* `minX` (x-coordinate of one of the pixel where the minimum lies)
* `minY` (y-coordinate of one of the pixel where the minimum lies)
* `maxX` (x-coordinate of one of the pixel where the maximum lies)
* `maxY` (y-coordinate of one of the pixel where the maximum lies)
* `isOpaque`: Is the image fully opaque? Will be `true` if the image has no alpha channel or if every pixel is fully opaque.
* `entropy`: Histogram-based estimation of greyscale entropy, discarding alpha channel if any.
* `sharpness`: Estimation of greyscale sharpness based on the standard deviation of a Laplacian convolution, discarding alpha channel if any.
* `dominant`: Object containing most dominant sRGB colour based on a 4096-bin 3D histogram.
- `channels`: Array of channel statistics for each channel in the image. Each channel statistic contains
- `min` (minimum value in the channel)
- `max` (maximum value in the channel)
- `sum` (sum of all values in a channel)
- `squaresSum` (sum of squared values in a channel)
- `mean` (mean of the values in a channel)
- `stdev` (standard deviation for the values in a channel)
- `minX` (x-coordinate of one of the pixels where the minimum lies)
- `minY` (y-coordinate of one of the pixels where the minimum lies)
- `maxX` (x-coordinate of one of the pixels where the maximum lies)
- `maxY` (y-coordinate of one of the pixels where the maximum lies)
- `isOpaque`: Is the image fully opaque? Will be `true` if the image has no alpha channel or if every pixel is fully opaque.
- `entropy`: Histogram-based estimation of greyscale entropy, discarding alpha channel if any.
- `sharpness`: Estimation of greyscale sharpness based on the standard deviation of a Laplacian convolution, discarding alpha channel if any.
- `dominant`: Object containing most dominant sRGB colour based on a 4096-bin 3D histogram.
**Note**: Statistics are derived from the original input image. Any operations performed on the image must first be
written to a buffer in order to run `stats` on the result (see third example).
### Parameters
* `callback` **[Function][5]?** called with the arguments `(err, stats)`
### Examples
| Param | Type | Description |
| --- | --- | --- |
| [callback] | <code>function</code> | called with the arguments `(err, stats)` |
```javascript
**Example**
```js
const image = sharp(inputJpg);
image
.stats()
@@ -118,32 +122,16 @@ image
// stats contains the channel-wise statistics array and the isOpaque value
});
```
```javascript
**Example**
```js
const { entropy, sharpness, dominant } = await sharp(input).stats();
const { r, g, b } = dominant;
```
```javascript
**Example**
```js
const image = sharp(input);
// store intermediate result
const part = await image.extract(region).toBuffer();
// create new instance to obtain statistics of extracted region
const stats = await sharp(part).stats();
```
Returns **[Promise][6]<[Object][7]>**&#x20;
[1]: /api-constructor#parameters
[2]: https://www.libvips.org/API/current/VipsImage.html#VipsInterpretation
[3]: https://www.libvips.org/API/current/VipsImage.html#VipsBandFormat
[4]: https://www.npmjs.com/package/icc
[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/function
[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
```


@@ -1,12 +1,11 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## rotate
> rotate([angle], [options]) ⇒ <code>Sharp</code>
Rotate the output image by either an explicit angle
or auto-orient based on the EXIF `Orientation` tag.
If an angle is provided, it is converted to a valid positive degree rotation.
For example, `-450` will produce a 270deg rotation.
For example, `-450` will produce a 270 degree rotation.
When rotating by an angle other than a multiple of 90,
the background colour can be provided with the `background` option.
@@ -14,7 +13,7 @@ the background colour can be provided with the `background` option.
If no angle is provided, it is determined from the EXIF data.
Mirroring is supported and may infer the use of a flip operation.
The use of `rotate` implies the removal of the EXIF `Orientation` tag, if any.
The use of `rotate` without an angle will remove the EXIF `Orientation` tag, if any.
Only one rotation can occur per pipeline.
Previous calls to `rotate` in the same pipeline will be ignored.
@@ -22,16 +21,20 @@ Previous calls to `rotate` in the same pipeline will be ignored.
Method order is important when rotating, resizing and/or extracting regions,
for example `.rotate(x).extract(y)` will produce a different result to `.extract(y).rotate(x)`.
### Parameters
* `angle` **[number][1]** angle of rotation. (optional, default `auto`)
* `options` **[Object][2]?** if present, is an Object with optional attributes.
**Throws**:
* `options.background` **([string][3] | [Object][2])** parsed by the [color][4] module to extract values for red, green, blue and alpha. (optional, default `"#000000"`)
- <code>Error</code> Invalid parameters
### Examples
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [angle] | <code>number</code> | <code>auto</code> | angle of rotation. |
| [options] | <code>Object</code> | | if present, is an Object with optional attributes. |
| [options.background] | <code>string</code> \| <code>Object</code> | <code>&quot;\&quot;#000000\&quot;&quot;</code> | parsed by the [color](https://www.npmjs.org/package/color) module to extract values for red, green, blue and alpha. |
**Example**
```js
const pipeline = sharp()
.rotate()
.resize(null, 200)
@@ -42,8 +45,8 @@ const pipeline = sharp()
});
readableStream.pipe(pipeline);
```
```javascript
**Example**
```js
const rotateThenResize = await sharp(input)
.rotate(90)
.resize({ width: 16, height: 8, fit: 'fill' })
@@ -54,82 +57,87 @@ const resizeThenRotate = await sharp(input)
.toBuffer();
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
## flip
> flip([flip]) ⇒ <code>Sharp</code>
Flip the image about the vertical Y axis. This always occurs before rotation, if any.
The use of `flip` implies the removal of the EXIF `Orientation` tag, if any.
Mirror the image vertically (up-down) about the x-axis.
This always occurs before rotation, if any.
### Parameters
This operation does not work correctly with multi-page images.
* `flip` **[Boolean][6]** (optional, default `true`)
### Examples
```javascript
| Param | Type | Default |
| --- | --- | --- |
| [flip] | <code>Boolean</code> | <code>true</code> |
**Example**
```js
const output = await sharp(input).flip().toBuffer();
```
Returns **Sharp**&#x20;
## flop
> flop([flop]) ⇒ <code>Sharp</code>
Flop the image about the horizontal X axis. This always occurs before rotation, if any.
The use of `flop` implies the removal of the EXIF `Orientation` tag, if any.
Mirror the image horizontally (left-right) about the y-axis.
This always occurs before rotation, if any.
### Parameters
* `flop` **[Boolean][6]** (optional, default `true`)
### Examples
| Param | Type | Default |
| --- | --- | --- |
| [flop] | <code>Boolean</code> | <code>true</code> |
```javascript
**Example**
```js
const output = await sharp(input).flop().toBuffer();
```
Returns **Sharp**&#x20;
## affine
> affine(matrix, [options]) ⇒ <code>Sharp</code>
Perform an affine transform on an image. This operation will always occur after resizing, extraction and rotation, if any.
You must provide an array of length 4 or a 2x2 affine transformation matrix.
By default, new pixels are filled with a black background. You can provide a background color with the `background` option.
A particular interpolator may also be specified. Set the `interpolator` option to an attribute of the `sharp.interpolator` Object e.g. `sharp.interpolator.nohalo`.
A particular interpolator may also be specified. Set the `interpolator` option to an attribute of the `sharp.interpolators` Object e.g. `sharp.interpolators.nohalo`.
In the case of a 2x2 matrix, the transform is:
* X = `matrix[0, 0]` \* (x + `idx`) + `matrix[0, 1]` \* (y + `idy`) + `odx`
* Y = `matrix[1, 0]` \* (x + `idx`) + `matrix[1, 1]` \* (y + `idy`) + `ody`
- X = `matrix[0, 0]` \* (x + `idx`) + `matrix[0, 1]` \* (y + `idy`) + `odx`
- Y = `matrix[1, 0]` \* (x + `idx`) + `matrix[1, 1]` \* (y + `idy`) + `ody`
where:
- x and y are the coordinates in input image.
- X and Y are the coordinates in output image.
- (0,0) is the upper left corner.
* x and y are the coordinates in input image.
* X and Y are the coordinates in output image.
* (0,0) is the upper left corner.
### Parameters
**Throws**:
* `matrix` **([Array][7]<[Array][7]<[number][1]>> | [Array][7]<[number][1]>)** affine transformation matrix
* `options` **[Object][2]?** if present, is an Object with optional attributes.
- <code>Error</code> Invalid parameters
* `options.background` **([String][3] | [Object][2])** parsed by the [color][4] module to extract values for red, green, blue and alpha. (optional, default `"#000000"`)
* `options.idx` **[Number][1]** input horizontal offset (optional, default `0`)
* `options.idy` **[Number][1]** input vertical offset (optional, default `0`)
* `options.odx` **[Number][1]** output horizontal offset (optional, default `0`)
* `options.ody` **[Number][1]** output vertical offset (optional, default `0`)
* `options.interpolator` **[String][3]** interpolator (optional, default `sharp.interpolators.bicubic`)
**Since**: 0.27.0
### Examples
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| matrix | <code>Array.&lt;Array.&lt;number&gt;&gt;</code> \| <code>Array.&lt;number&gt;</code> | | affine transformation matrix |
| [options] | <code>Object</code> | | if present, is an Object with optional attributes. |
| [options.background] | <code>String</code> \| <code>Object</code> | <code>&quot;#000000&quot;</code> | parsed by the [color](https://www.npmjs.org/package/color) module to extract values for red, green, blue and alpha. |
| [options.idx] | <code>Number</code> | <code>0</code> | input horizontal offset |
| [options.idy] | <code>Number</code> | <code>0</code> | input vertical offset |
| [options.odx] | <code>Number</code> | <code>0</code> | output horizontal offset |
| [options.ody] | <code>Number</code> | <code>0</code> | output vertical offset |
| [options.interpolator] | <code>String</code> | <code>sharp.interpolators.bicubic</code> | interpolator |
```javascript
**Example**
```js
const pipeline = sharp()
.affine([[1, 0.3], [0.1, 0.7]], {
background: 'white',
interpolate: sharp.interpolators.nohalo
interpolator: sharp.interpolators.nohalo
})
.toBuffer((err, outputBuffer, info) => {
// outputBuffer contains the transformed image
@@ -140,47 +148,47 @@ inputStream
.pipe(pipeline);
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.27.0
## sharpen
> sharpen([options], [flat], [jagged]) ⇒ <code>Sharp</code>
Sharpen the image.
When used without parameters, performs a fast, mild sharpen of the output image.
When a `sigma` is provided, performs a slower, more accurate sharpen of the L channel in the LAB colour space.
Separate control over the level of sharpening in "flat" and "jagged" areas is available.
Fine-grained control over the level of sharpening in "flat" (m1) and "jagged" (m2) areas is available.
See [libvips sharpen][8] operation.
See [libvips sharpen](https://www.libvips.org/API/current/libvips-convolution.html#vips-sharpen) operation.
### Parameters
* `options` **([Object][2] | [number][1])?** if present, is an Object with attributes or (deprecated) a number for `options.sigma`.
**Throws**:
* `options.sigma` **[number][1]?** the sigma of the Gaussian mask, where `sigma = 1 + radius / 2`.
* `options.m1` **[number][1]** the level of sharpening to apply to "flat" areas. (optional, default `1.0`)
* `options.m2` **[number][1]** the level of sharpening to apply to "jagged" areas. (optional, default `2.0`)
* `options.x1` **[number][1]** threshold between "flat" and "jagged" (optional, default `2.0`)
* `options.y2` **[number][1]** maximum amount of brightening. (optional, default `10.0`)
* `options.y3` **[number][1]** maximum amount of darkening. (optional, default `20.0`)
* `flat` **[number][1]?** (deprecated) see `options.m1`.
* `jagged` **[number][1]?** (deprecated) see `options.m2`.
- <code>Error</code> Invalid parameters
### Examples
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> \| <code>number</code> | | if present, is an Object with attributes |
| [options.sigma] | <code>number</code> | | the sigma of the Gaussian mask, where `sigma = 1 + radius / 2`, between 0.000001 and 10 |
| [options.m1] | <code>number</code> | <code>1.0</code> | the level of sharpening to apply to "flat" areas, between 0 and 1000000 |
| [options.m2] | <code>number</code> | <code>2.0</code> | the level of sharpening to apply to "jagged" areas, between 0 and 1000000 |
| [options.x1] | <code>number</code> | <code>2.0</code> | threshold between "flat" and "jagged", between 0 and 1000000 |
| [options.y2] | <code>number</code> | <code>10.0</code> | maximum amount of brightening, between 0 and 1000000 |
| [options.y3] | <code>number</code> | <code>20.0</code> | maximum amount of darkening, between 0 and 1000000 |
| [flat] | <code>number</code> | | (deprecated) see `options.m1`. |
| [jagged] | <code>number</code> | | (deprecated) see `options.m2`. |
**Example**
```js
const data = await sharp(input).sharpen().toBuffer();
```
```javascript
**Example**
```js
const data = await sharp(input).sharpen({ sigma: 2 }).toBuffer();
```
```javascript
**Example**
```js
const data = await sharp(input)
.sharpen({
sigma: 2,
@@ -193,34 +201,35 @@ const data = await sharp(input)
.toBuffer();
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
## median
> median([size]) ⇒ <code>Sharp</code>
Apply median filter.
When used without parameters the default window is 3x3.
### Parameters
* `size` **[number][1]** square mask size: size x size (optional, default `3`)
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [size] | <code>number</code> | <code>3</code> | square mask size: size x size |
**Example**
```js
const output = await sharp(input).median().toBuffer();
```
```javascript
**Example**
```js
const output = await sharp(input).median(5).toBuffer();
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
## blur
> blur([sigma]) ⇒ <code>Sharp</code>
Blur the image.
@@ -228,51 +237,81 @@ When used without parameters, performs a fast 3x3 box blur (equivalent to a box
When a `sigma` is provided, performs a slower, more accurate Gaussian blur.
### Parameters
* `sigma` **[number][1]?** a value between 0.3 and 1000 representing the sigma of the Gaussian mask, where `sigma = 1 + radius / 2`.
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
| Param | Type | Description |
| --- | --- | --- |
| [sigma] | <code>number</code> | a value between 0.3 and 1000 representing the sigma of the Gaussian mask, where `sigma = 1 + radius / 2`. |
**Example**
```js
const boxBlurred = await sharp(input)
.blur()
.toBuffer();
```
```javascript
**Example**
```js
const gaussianBlurred = await sharp(input)
.blur(5)
.toBuffer();
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
## flatten
> flatten([options]) ⇒ <code>Sharp</code>
Merge alpha transparency channel, if any, with a background, then remove the alpha channel.
See also [removeAlpha][9].
See also [removeAlpha](/api-channel#removealpha).
### Parameters
* `options` **[Object][2]?**&#x20;
* `options.background` **([string][3] | [Object][2])** background colour, parsed by the [color][4] module, defaults to black. (optional, default `{r:0,g:0,b:0}`)
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.background] | <code>string</code> \| <code>Object</code> | <code>&quot;{r: 0, g: 0, b: 0}&quot;</code> | background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black. |
### Examples
```javascript
**Example**
```js
await sharp(rgbaInput)
.flatten({ background: '#F0A703' })
.toBuffer();
```
Returns **Sharp**&#x20;
## unflatten
> unflatten()
Ensure the image has an alpha channel
with all white pixel values made fully transparent.
Existing alpha channel values for non-white pixels remain unchanged.
This feature is experimental and the API may change.
**Since**: 0.32.1
**Example**
```js
await sharp(rgbInput)
.unflatten()
.toBuffer();
```
**Example**
```js
await sharp(rgbInput)
  .threshold(128, { grayscale: false }) // convert bright pixels to white
.unflatten()
.toBuffer();
```
## gamma
> gamma([gamma], [gammaOut]) ⇒ <code>Sharp</code>
Apply a gamma correction by reducing the encoding (darken) pre-resize at a factor of `1/gamma`
then increasing the encoding (brighten) post-resize at a factor of `gamma`.
@@ -282,95 +321,122 @@ when applying a gamma correction.
Supply a second argument to use a different output gamma value, otherwise the first value is used in both cases.
### Parameters
* `gamma` **[number][1]** value between 1.0 and 3.0. (optional, default `2.2`)
* `gammaOut` **[number][1]?** value between 1.0 and 3.0. (optional, defaults to same as `gamma`)
**Throws**:
<!---->
- <code>Error</code> Invalid parameters
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [gamma] | <code>number</code> | <code>2.2</code> | value between 1.0 and 3.0. |
| [gammaOut] | <code>number</code> | | value between 1.0 and 3.0. (optional, defaults to same as `gamma`) |
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
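The gamma section above has no example; a short sketch using the documented defaults (the input path is illustrative):
**Example**
```js
// Apply the default 2.2 gamma correction around a resize
const data = await sharp('input.jpg')
  .resize(200)
  .gamma() // equivalent to .gamma(2.2, 2.2)
  .toBuffer();
```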
## negate
> negate([options]) ⇒ <code>Sharp</code>
Produce the "negative" of the image.
### Parameters
* `options` **[Object][2]?**&#x20;
* `options.alpha` **[Boolean][6]** Whether or not to negate any alpha channel (optional, default `true`)
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.alpha] | <code>Boolean</code> | <code>true</code> | Whether or not to negate any alpha channel |
### Examples
```javascript
**Example**
```js
const output = await sharp(input)
.negate()
.toBuffer();
```
```javascript
**Example**
```js
const output = await sharp(input)
.negate({ alpha: false })
.toBuffer();
```
Returns **Sharp**&#x20;
## normalise
> normalise([options]) ⇒ <code>Sharp</code>
Enhance output image contrast by stretching its luminance to cover the full dynamic range.
Enhance output image contrast by stretching its luminance to cover a full dynamic range.
### Parameters
Uses a histogram-based approach, taking a default range of 1% to 99% to reduce sensitivity to noise at the extremes.
* `normalise` **[Boolean][6]** (optional, default `true`)
Luminance values below the `lower` percentile will be underexposed by clipping to zero.
Luminance values above the `upper` percentile will be overexposed by clipping to the max pixel value.
### Examples
```javascript
const output = await sharp(input).normalise().toBuffer();
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.lower] | <code>number</code> | <code>1</code> | Percentile below which luminance values will be underexposed. |
| [options.upper] | <code>number</code> | <code>99</code> | Percentile above which luminance values will be overexposed. |
**Example**
```js
const output = await sharp(input)
.normalise()
.toBuffer();
```
**Example**
```js
const output = await sharp(input)
.normalise({ lower: 0, upper: 100 })
.toBuffer();
```
Returns **Sharp**&#x20;
## normalize
> normalize([options]) ⇒ <code>Sharp</code>
Alternative spelling of normalise.
### Parameters
* `normalize` **[Boolean][6]** (optional, default `true`)
### Examples
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.lower] | <code>number</code> | <code>1</code> | Percentile below which luminance values will be underexposed. |
| [options.upper] | <code>number</code> | <code>99</code> | Percentile above which luminance values will be overexposed. |
```javascript
const output = await sharp(input).normalize().toBuffer();
**Example**
```js
const output = await sharp(input)
.normalize()
.toBuffer();
```
Returns **Sharp**&#x20;
## clahe
> clahe(options) ⇒ <code>Sharp</code>
Perform contrast limiting adaptive histogram equalization
[CLAHE][10].
[CLAHE](https://en.wikipedia.org/wiki/Adaptive_histogram_equalization#Contrast_Limited_AHE).
This will, in general, enhance the clarity of the image by bringing out darker details.
### Parameters
* `options` **[Object][2]**&#x20;
**Throws**:
* `options.width` **[number][1]** integer width of the region in pixels.
* `options.height` **[number][1]** integer height of the region in pixels.
* `options.maxSlope` **[number][1]** maximum value for the slope of the
cumulative histogram. A value of 0 disables contrast limiting. Valid values
are integers in the range 0-100 (inclusive) (optional, default `3`)
- <code>Error</code> Invalid parameters
### Examples
**Since**: 0.28.3
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| options | <code>Object</code> | | |
| options.width | <code>number</code> | | Integral width of the search window, in pixels. |
| options.height | <code>number</code> | | Integral height of the search window, in pixels. |
| [options.maxSlope] | <code>number</code> | <code>3</code> | Integral level of brightening, between 0 and 100, where 0 disables contrast limiting. |
**Example**
```js
const output = await sharp(input)
.clahe({
width: 3,
@@ -379,31 +445,29 @@ const output = await sharp(input)
.toBuffer();
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.28.3
## convolve
> convolve(kernel) ⇒ <code>Sharp</code>
Convolve the image with the specified kernel.
### Parameters
* `kernel` **[Object][2]**&#x20;
**Throws**:
* `kernel.width` **[number][1]** width of the kernel in pixels.
* `kernel.height` **[number][1]** height of the kernel in pixels.
* `kernel.kernel` **[Array][7]<[number][1]>** Array of length `width*height` containing the kernel values.
* `kernel.scale` **[number][1]** the scale of the kernel in pixels. (optional, default `sum`)
* `kernel.offset` **[number][1]** the offset of the kernel in pixels. (optional, default `0`)
- <code>Error</code> Invalid parameters
### Examples
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| kernel | <code>Object</code> | | |
| kernel.width | <code>number</code> | | width of the kernel in pixels. |
| kernel.height | <code>number</code> | | height of the kernel in pixels. |
| kernel.kernel | <code>Array.&lt;number&gt;</code> | | Array of length `width*height` containing the kernel values. |
| [kernel.scale] | <code>number</code> | <code>sum</code> | the scale of the kernel in pixels. |
| [kernel.offset] | <code>number</code> | <code>0</code> | the offset of the kernel in pixels. |
**Example**
```js
sharp(input)
.convolve({
width: 3,
@@ -417,74 +481,80 @@ sharp(input)
});
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
## threshold
> threshold([threshold], [options]) ⇒ <code>Sharp</code>
Any pixel value greater than or equal to the threshold value will be set to 255, otherwise it will be set to 0.
### Parameters
* `threshold` **[number][1]** a value in the range 0-255 representing the level at which the threshold will be applied. (optional, default `128`)
* `options` **[Object][2]?**&#x20;
**Throws**:
* `options.greyscale` **[Boolean][6]** convert to single channel greyscale. (optional, default `true`)
* `options.grayscale` **[Boolean][6]** alternative spelling for greyscale. (optional, default `true`)
- <code>Error</code> Invalid parameters
<!---->
* Throws **[Error][5]** Invalid parameters
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [threshold] | <code>number</code> | <code>128</code> | a value in the range 0-255 representing the level at which the threshold will be applied. |
| [options] | <code>Object</code> | | |
| [options.greyscale] | <code>Boolean</code> | <code>true</code> | convert to single channel greyscale. |
| [options.grayscale] | <code>Boolean</code> | <code>true</code> | alternative spelling for greyscale. |
Returns **Sharp**&#x20;
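threshold has no example above; a short hedged sketch (input paths are illustrative):
**Example**
```js
// Binarise to single-channel greyscale at the default level of 128
const binary = await sharp('input.png')
  .threshold()
  .toBuffer();

// Threshold each channel at 200 without converting to greyscale
const perChannel = await sharp('input.png')
  .threshold(200, { greyscale: false })
  .toBuffer();
```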
## boolean
> boolean(operand, operator, [options]) ⇒ <code>Sharp</code>
Perform a bitwise boolean operation with operand image.
This operation creates an output image where each pixel is the result of
the selected bitwise boolean `operation` between the corresponding pixels of the input images.
### Parameters
* `operand` **([Buffer][11] | [string][3])** Buffer containing image data or string containing the path to an image file.
* `operator` **[string][3]** one of `and`, `or` or `eor` to perform that bitwise operation, like the C logic operators `&`, `|` and `^` respectively.
* `options` **[Object][2]?**&#x20;
**Throws**:
* `options.raw` **[Object][2]?** describes operand when using raw pixel data.
- <code>Error</code> Invalid parameters
* `options.raw.width` **[number][1]?**&#x20;
* `options.raw.height` **[number][1]?**&#x20;
* `options.raw.channels` **[number][1]?**&#x20;
<!---->
| Param | Type | Description |
| --- | --- | --- |
| operand | <code>Buffer</code> \| <code>string</code> | Buffer containing image data or string containing the path to an image file. |
| operator | <code>string</code> | one of `and`, `or` or `eor` to perform that bitwise operation, like the C logic operators `&`, `|` and `^` respectively. |
| [options] | <code>Object</code> | |
| [options.raw] | <code>Object</code> | describes operand when using raw pixel data. |
| [options.raw.width] | <code>number</code> | |
| [options.raw.height] | <code>number</code> | |
| [options.raw.channels] | <code>number</code> | |
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
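boolean has no example above; a minimal sketch assuming an illustrative mask image alongside the input:
**Example**
```js
// Bitwise AND each pixel of the input with the corresponding pixel of a mask
const data = await sharp('input.png')
  .boolean('mask.png', 'and')
  .toBuffer();
```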
## linear
> linear([a], [b]) ⇒ <code>Sharp</code>
Apply the linear formula `a` \* input + `b` to the image to adjust image levels.
Apply the linear formula `a` * input + `b` to the image to adjust image levels.
When a single number is provided, it will be used for all image channels.
When an array of numbers is provided, the array length must match the number of channels.
### Parameters
* `a` **([number][1] | [Array][7]<[number][1]>)** multiplier (optional, default `[]`)
* `b` **([number][1] | [Array][7]<[number][1]>)** offset (optional, default `[]`)
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [a] | <code>number</code> \| <code>Array.&lt;number&gt;</code> | <code>[]</code> | multiplier |
| [b] | <code>number</code> \| <code>Array.&lt;number&gt;</code> | <code>[]</code> | offset |
**Example**
```js
await sharp(input)
.linear(0.5, 2)
.toBuffer();
```
```javascript
**Example**
```js
await sharp(rgbInput)
.linear(
[0.25, 0.5, 0.75],
@@ -493,21 +563,25 @@ await sharp(rgbInput)
.toBuffer();
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
## recomb
> recomb(inputMatrix) ⇒ <code>Sharp</code>
Recomb the image with the specified matrix.
Recombine the image with the specified matrix.
### Parameters
* `inputMatrix` **[Array][7]<[Array][7]<[number][1]>>** 3x3 Recombination matrix
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
**Since**: 0.21.1
| Param | Type | Description |
| --- | --- | --- |
| inputMatrix | <code>Array.&lt;Array.&lt;number&gt;&gt;</code> | 3x3 Recombination matrix |
**Example**
```js
sharp(input)
.recomb([
[0.3588, 0.7044, 0.1368],
@@ -516,37 +590,32 @@ sharp(input)
])
.raw()
.toBuffer(function(err, data, info) {
// data contains the raw pixel data after applying the recomb
// data contains the raw pixel data after applying the matrix
// With this example input, a sepia filter has been applied
});
```
* Throws **[Error][5]** Invalid parameters
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.21.1
## modulate
> modulate([options]) ⇒ <code>Sharp</code>
Transforms the image using brightness, saturation, hue rotation, and lightness.
Brightness and lightness both operate on luminance, with the difference being that
brightness is multiplicative whereas lightness is additive.
### Parameters
* `options` **[Object][2]?**&#x20;
**Since**: 0.22.1
* `options.brightness` **[number][1]?** Brightness multiplier
* `options.saturation` **[number][1]?** Saturation multiplier
* `options.hue` **[number][1]?** Degrees for hue rotation
* `options.lightness` **[number][1]?** Lightness addend
| Param | Type | Description |
| --- | --- | --- |
| [options] | <code>Object</code> | |
| [options.brightness] | <code>number</code> | Brightness multiplier |
| [options.saturation] | <code>number</code> | Saturation multiplier |
| [options.hue] | <code>number</code> | Degrees for hue rotation |
| [options.lightness] | <code>number</code> | Lightness addend |
### Examples
```javascript
**Example**
```js
// increase brightness by a factor of 2
const output = await sharp(input)
.modulate({
@@ -554,8 +623,8 @@ const output = await sharp(input)
})
.toBuffer();
```
```javascript
**Example**
```js
// hue-rotate by 180 degrees
const output = await sharp(input)
.modulate({
@@ -563,8 +632,8 @@ const output = await sharp(input)
})
.toBuffer();
```
```javascript
**Example**
```js
// increase lightness by +50
const output = await sharp(input)
.modulate({
@@ -572,9 +641,9 @@ const output = await sharp(input)
})
.toBuffer();
```
```javascript
// decreate brightness and saturation while also hue-rotating by 90 degrees
**Example**
```js
// decrease brightness and saturation while also hue-rotating by 90 degrees
const output = await sharp(input)
.modulate({
brightness: 0.5,
@@ -582,32 +651,4 @@ const output = await sharp(input)
hue: 90,
})
.toBuffer();
```
Returns **Sharp**&#x20;
**Meta**
* **since**: 0.22.1
[1]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
[2]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[3]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String
[4]: https://www.npmjs.org/package/color
[5]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error
[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array
[8]: https://www.libvips.org/API/current/libvips-convolution.html#vips-sharpen
[9]: /api-channel#removealpha
[10]: https://en.wikipedia.org/wiki/Adaptive_histogram_equalization#Contrast_Limited_AHE
[11]: https://nodejs.org/api/buffer.html
```


@@ -1,6 +1,5 @@
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## toFile
> toFile(fileOut, [callback]) ⇒ <code>Promise.&lt;Object&gt;</code>
Write output image data to a file.
@@ -9,92 +8,92 @@ with JPEG, PNG, WebP, AVIF, TIFF, GIF, DZI, and libvips' V format supported.
Note that raw pixel data is only supported for buffer output.
By default all metadata will be removed, which includes EXIF-based orientation.
See [withMetadata][1] for control over this.
See [withMetadata](#withmetadata) for control over this.
The caller is responsible for ensuring directory structures and permissions exist.
A `Promise` is returned when `callback` is not provided.
### Parameters
* `fileOut` **[string][2]** the path to write the image data to.
* `callback` **[Function][3]?** called on completion with two arguments `(err, info)`.
`info` contains the output image `format`, `size` (bytes), `width`, `height`,
`channels` and `premultiplied` (indicating if premultiplication was used).
When using a crop strategy also contains `cropOffsetLeft` and `cropOffsetTop`.
May also contain `textAutofitDpi` (dpi the font was rendered at) if image was created from text.
**Returns**: <code>Promise.&lt;Object&gt;</code> - when no callback is provided
**Throws**:
### Examples
- <code>Error</code> Invalid parameters
```javascript
| Param | Type | Description |
| --- | --- | --- |
| fileOut | <code>string</code> | the path to write the image data to. |
| [callback] | <code>function</code> | called on completion with two arguments `(err, info)`. `info` contains the output image `format`, `size` (bytes), `width`, `height`, `channels` and `premultiplied` (indicating if premultiplication was used). When using a crop strategy also contains `cropOffsetLeft` and `cropOffsetTop`. When using the attention crop strategy also contains `attentionX` and `attentionY`, the focal point of the cropped region. May also contain `textAutofitDpi` (dpi the font was rendered at) if image was created from text. |
**Example**
```js
sharp(input)
.toFile('output.png', (err, info) => { ... });
```
```javascript
**Example**
```js
sharp(input)
.toFile('output.png')
.then(info => { ... })
.catch(err => { ... });
```
* Throws **[Error][4]** Invalid parameters
Returns **[Promise][5]<[Object][6]>** when no callback is provided
## toBuffer
> toBuffer([options], [callback]) ⇒ <code>Promise.&lt;Buffer&gt;</code>
Write output to a Buffer.
JPEG, PNG, WebP, AVIF, TIFF, GIF and raw pixel data output are supported.
Use [toFormat][7] or one of the format-specific functions such as [jpeg][8], [png][9] etc. to set the output format.
Use [toFormat](#toformat) or one of the format-specific functions such as [jpeg](#jpeg), [png](#png) etc. to set the output format.
If no explicit format is set, the output format will match the input image, except SVG input which becomes PNG output.
By default all metadata will be removed, which includes EXIF-based orientation.
See [withMetadata][1] for control over this.
See [withMetadata](#withmetadata) for control over this.
`callback`, if present, gets three arguments `(err, data, info)` where:
* `err` is an error, if any.
* `data` is the output image data.
* `info` contains the output image `format`, `size` (bytes), `width`, `height`,
`channels` and `premultiplied` (indicating if premultiplication was used).
When using a crop strategy also contains `cropOffsetLeft` and `cropOffsetTop`.
May also contain `textAutofitDpi` (dpi the font was rendered at) if image was created from text.
- `err` is an error, if any.
- `data` is the output image data.
- `info` contains the output image `format`, `size` (bytes), `width`, `height`,
`channels` and `premultiplied` (indicating if premultiplication was used).
When using a crop strategy also contains `cropOffsetLeft` and `cropOffsetTop`.
May also contain `textAutofitDpi` (dpi the font was rendered at) if image was created from text.
A `Promise` is returned when `callback` is not provided.
### Parameters
* `options` **[Object][6]?**&#x20;
**Returns**: <code>Promise.&lt;Buffer&gt;</code> - when no callback is provided
* `options.resolveWithObject` **[boolean][10]?** Resolve the Promise with an Object containing `data` and `info` properties instead of resolving only with `data`.
* `callback` **[Function][3]?**&#x20;
| Param | Type | Description |
| --- | --- | --- |
| [options] | <code>Object</code> | |
| [options.resolveWithObject] | <code>boolean</code> | Resolve the Promise with an Object containing `data` and `info` properties instead of resolving only with `data`. |
| [callback] | <code>function</code> | |
### Examples
```javascript
**Example**
```js
sharp(input)
.toBuffer((err, data, info) => { ... });
```
```javascript
**Example**
```js
sharp(input)
.toBuffer()
.then(data => { ... })
.catch(err => { ... });
```
```javascript
**Example**
```js
sharp(input)
.png()
.toBuffer({ resolveWithObject: true })
.then(({ data, info }) => { ... })
.catch(err => { ... });
```
```javascript
**Example**
```js
const { data, info } = await sharp('my-image.jpg')
// output the raw pixels
.raw()
@@ -111,108 +110,123 @@ await sharp(pixelArray, { raw: { width, height, channels } })
.toFile('my-changed-image.jpg');
```
Returns **[Promise][5]<[Buffer][11]>** when no callback is provided
## withMetadata
> withMetadata([options]) ⇒ <code>Sharp</code>
Include all metadata (EXIF, XMP, IPTC) from the input image in the output image.
This will also convert to and add a web-friendly sRGB ICC profile unless a custom
output profile is provided.
This will also convert to and add a web-friendly sRGB ICC profile if appropriate,
unless a custom output profile is provided.
The default behaviour, when `withMetadata` is not used, is to convert to the device-independent
sRGB colour space and strip all metadata, including the removal of any ICC profile.
EXIF metadata is unsupported for TIFF output.
### Parameters
* `options` **[Object][6]?**&#x20;
**Throws**:
* `options.orientation` **[number][12]?** value between 1 and 8, used to update the EXIF `Orientation` tag.
* `options.icc` **[string][2]?** filesystem path to output ICC profile, defaults to sRGB.
* `options.exif` **[Object][6]<[Object][6]>** Object keyed by IFD0, IFD1 etc. of key/value string pairs to write as EXIF data. (optional, default `{}`)
* `options.density` **[number][12]?** Number of pixels per inch (DPI).
- <code>Error</code> Invalid parameters
### Examples
```javascript
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.orientation] | <code>number</code> | | value between 1 and 8, used to update the EXIF `Orientation` tag. |
| [options.icc] | <code>string</code> | <code>&quot;&#x27;srgb&#x27;&quot;</code> | Filesystem path to output ICC profile, relative to `process.cwd()`, defaults to built-in sRGB. |
| [options.exif] | <code>Object.&lt;Object&gt;</code> | <code>{}</code> | Object keyed by IFD0, IFD1 etc. of key/value string pairs to write as EXIF data. |
| [options.density] | <code>number</code> | | Number of pixels per inch (DPI). |
**Example**
```js
sharp('input.jpg')
.withMetadata()
.toFile('output-with-metadata.jpg')
.then(info => { ... });
```
```javascript
// Set "IFD0-Copyright" in output EXIF metadata
**Example**
```js
// Set output EXIF metadata
const data = await sharp(input)
.withMetadata({
exif: {
IFD0: {
Copyright: 'Wernham Hogg'
Copyright: 'The National Gallery'
},
IFD3: {
GPSLatitudeRef: 'N',
GPSLatitude: '51/1 30/1 3230/100',
GPSLongitudeRef: 'W',
GPSLongitude: '0/1 7/1 4366/100'
}
}
})
.toBuffer();
```
```javascript
**Example**
```js
// Set output metadata to 96 DPI
const data = await sharp(input)
.withMetadata({ density: 96 })
.toBuffer();
```
* Throws **[Error][4]** Invalid parameters
Returns **Sharp**&#x20;
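The examples above do not show the `icc` option; a hedged sketch where 'custom-profile.icc' is a hypothetical path, resolved relative to `process.cwd()` as described in the parameter table:
**Example**
```js
// Keep input metadata and attach a custom output ICC profile
const data = await sharp(input)
  .withMetadata({ icc: 'custom-profile.icc' })
  .toBuffer();
```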
## toFormat
> toFormat(format, options) ⇒ <code>Sharp</code>
Force output to a given format.
### Parameters
* `format` **([string][2] | [Object][6])** as a string or an Object with an 'id' attribute
* `options` **[Object][6]** output options
**Throws**:
### Examples
- <code>Error</code> unsupported format or options
```javascript
| Param | Type | Description |
| --- | --- | --- |
| format | <code>string</code> \| <code>Object</code> | as a string or an Object with an 'id' attribute |
| options | <code>Object</code> | output options |
**Example**
```js
// Convert any input to PNG output
const data = await sharp(input)
.toFormat('png')
.toBuffer();
```
* Throws **[Error][4]** unsupported format or options
Returns **Sharp**&#x20;
## jpeg
> jpeg([options]) ⇒ <code>Sharp</code>
Use these JPEG options for output image.
**Throws**:
- <code>Error</code> Invalid options
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.quality] | <code>number</code> | <code>80</code> | quality, integer 1-100 |
| [options.progressive] | <code>boolean</code> | <code>false</code> | use progressive (interlace) scan |
| [options.chromaSubsampling] | <code>string</code> | <code>&quot;&#x27;4:2:0&#x27;&quot;</code> | set to '4:4:4' to prevent chroma subsampling otherwise defaults to '4:2:0' chroma subsampling |
| [options.optimiseCoding] | <code>boolean</code> | <code>true</code> | optimise Huffman coding tables |
| [options.optimizeCoding] | <code>boolean</code> | <code>true</code> | alternative spelling of optimiseCoding |
| [options.mozjpeg] | <code>boolean</code> | <code>false</code> | use mozjpeg defaults, equivalent to `{ trellisQuantisation: true, overshootDeringing: true, optimiseScans: true, quantisationTable: 3 }` |
| [options.trellisQuantisation] | <code>boolean</code> | <code>false</code> | apply trellis quantisation |
| [options.overshootDeringing] | <code>boolean</code> | <code>false</code> | apply overshoot deringing |
| [options.optimiseScans] | <code>boolean</code> | <code>false</code> | optimise progressive scans, forces progressive |
| [options.optimizeScans] | <code>boolean</code> | <code>false</code> | alternative spelling of optimiseScans |
| [options.quantisationTable] | <code>number</code> | <code>0</code> | quantization table to use, integer 0-8 |
| [options.quantizationTable] | <code>number</code> | <code>0</code> | alternative spelling of quantisationTable |
| [options.force] | <code>boolean</code> | <code>true</code> | force JPEG output, otherwise attempt to use input format |
**Example**
```js
// Convert any input to very high quality JPEG output
const data = await sharp(input)
.jpeg({
    quality: 100,
    chromaSubsampling: '4:4:4'
})
.toBuffer();
```
**Example**
```js
// Use mozjpeg to reduce output JPEG file size (slower)
const data = await sharp(input)
.jpeg({ mozjpeg: true })
.toBuffer();
```
## png
> png([options]) ⇒ <code>Sharp</code>
Use these PNG options for output image.
By default, PNG output is full colour at 8 or 16 bits per pixel.
Indexed PNG input at 1, 2 or 4 bits per pixel is converted to 8 bits per pixel.
Set `palette` to `true` for slower, indexed PNG output.
**Throws**:
- <code>Error</code> Invalid options
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.progressive] | <code>boolean</code> | <code>false</code> | use progressive (interlace) scan |
| [options.compressionLevel] | <code>number</code> | <code>6</code> | zlib compression level, 0 (fastest, largest) to 9 (slowest, smallest) |
| [options.adaptiveFiltering] | <code>boolean</code> | <code>false</code> | use adaptive row filtering |
| [options.palette] | <code>boolean</code> | <code>false</code> | quantise to a palette-based image with alpha transparency support |
| [options.quality] | <code>number</code> | <code>100</code> | use the lowest number of colours needed to achieve given quality, sets `palette` to `true` |
| [options.effort] | <code>number</code> | <code>7</code> | CPU effort, between 1 (fastest) and 10 (slowest), sets `palette` to `true` |
| [options.colours] | <code>number</code> | <code>256</code> | maximum number of palette entries, sets `palette` to `true` |
| [options.colors] | <code>number</code> | <code>256</code> | alternative spelling of `options.colours`, sets `palette` to `true` |
| [options.dither] | <code>number</code> | <code>1.0</code> | level of Floyd-Steinberg error diffusion, sets `palette` to `true` |
| [options.force] | <code>boolean</code> | <code>true</code> | force PNG output, otherwise attempt to use input format |
**Example**
```js
// Convert any input to full colour PNG output
const data = await sharp(input)
.png()
.toBuffer();
```
**Example**
```js
// Convert any input to indexed PNG output (slower)
const data = await sharp(input)
.png({ palette: true })
.toBuffer();
```
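Because `quality`, `colours` and `dither` all imply `palette: true`, a smaller indexed output can be requested in one call; an editor's illustrative sketch, not from the generated docs:
**Example** *(Editor's sketch: indexed PNG limited to 128 palette entries.)*
```js
const data = await sharp(input)
  .png({ quality: 80, colours: 128 })
  .toBuffer();
```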
## webp
> webp([options]) ⇒ <code>Sharp</code>
Use these WebP options for output image.
**Throws**:
- <code>Error</code> Invalid options
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.quality] | <code>number</code> | <code>80</code> | quality, integer 1-100 |
| [options.alphaQuality] | <code>number</code> | <code>100</code> | quality of alpha layer, integer 0-100 |
| [options.lossless] | <code>boolean</code> | <code>false</code> | use lossless compression mode |
| [options.nearLossless] | <code>boolean</code> | <code>false</code> | use near_lossless compression mode |
| [options.smartSubsample] | <code>boolean</code> | <code>false</code> | use high quality chroma subsampling |
| [options.preset] | <code>string</code> | <code>&quot;&#x27;default&#x27;&quot;</code> | named preset for preprocessing/filtering, one of: default, photo, picture, drawing, icon, text |
| [options.effort] | <code>number</code> | <code>4</code> | CPU effort, between 0 (fastest) and 6 (slowest) |
| [options.loop] | <code>number</code> | <code>0</code> | number of animation iterations, use 0 for infinite animation |
| [options.delay] | <code>number</code> \| <code>Array.&lt;number&gt;</code> | | delay(s) between animation frames (in milliseconds) |
| [options.minSize] | <code>boolean</code> | <code>false</code> | prevent use of animation key frames to minimise file size (slow) |
| [options.mixed] | <code>boolean</code> | <code>false</code> | allow mixture of lossy and lossless animation frames (slow) |
| [options.force] | <code>boolean</code> | <code>true</code> | force WebP output, otherwise attempt to use input format |
**Example**
```js
// Convert any input to lossless WebP output
const data = await sharp(input)
.webp({ lossless: true })
.toBuffer();
```
**Example**
```js
// Optimise the file size of an animated WebP
const outputWebp = await sharp(inputWebp, { animated: true })
.webp({ effort: 6 })
.toBuffer();
```
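The `preset` option can be combined with the usual quality settings; an editor's illustrative sketch, not from the generated docs:
**Example** *(Editor's sketch: use the 'photo' preprocessing preset.)*
```js
const data = await sharp(input)
  .webp({ preset: 'photo', quality: 80 })
  .toBuffer();
```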
## gif
> gif([options]) ⇒ <code>Sharp</code>
Use these GIF options for the output image.
The first entry in the palette is reserved for transparency.
The palette of the input image will be re-used if possible.
**Throws**:
- <code>Error</code> Invalid options
**Since**: 0.30.0
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.reuse] | <code>boolean</code> | <code>true</code> | re-use existing palette, otherwise generate new (slow) |
| [options.progressive] | <code>boolean</code> | <code>false</code> | use progressive (interlace) scan |
| [options.colours] | <code>number</code> | <code>256</code> | maximum number of palette entries, including transparency, between 2 and 256 |
| [options.colors] | <code>number</code> | <code>256</code> | alternative spelling of `options.colours` |
| [options.effort] | <code>number</code> | <code>7</code> | CPU effort, between 1 (fastest) and 10 (slowest) |
| [options.dither] | <code>number</code> | <code>1.0</code> | level of Floyd-Steinberg error diffusion, between 0 (least) and 1 (most) |
| [options.interFrameMaxError] | <code>number</code> | <code>0</code> | maximum inter-frame error for transparency, between 0 (lossless) and 32 |
| [options.interPaletteMaxError] | <code>number</code> | <code>3</code> | maximum inter-palette error for palette reuse, between 0 and 256 |
| [options.loop] | <code>number</code> | <code>0</code> | number of animation iterations, use 0 for infinite animation |
| [options.delay] | <code>number</code> \| <code>Array.&lt;number&gt;</code> | | delay(s) between animation frames (in milliseconds) |
| [options.force] | <code>boolean</code> | <code>true</code> | force GIF output, otherwise attempt to use input format |
**Example**
```js
// Convert PNG to GIF
await sharp(pngBuffer)
.gif()
.toBuffer();
```
**Example**
```js
// Convert animated WebP to animated GIF
await sharp('animated.webp', { animated: true })
.toFile('animated.gif');
```
**Example**
```js
// Create a 128x128, cropped, non-dithered, animated thumbnail of an animated GIF
const out = await sharp('in.gif', { animated: true })
.resize({ width: 128, height: 128 })
.gif({ dither: 0 })
.toBuffer();
```
**Example**
```js
// Lossy file size reduction of animated GIF
await sharp('in.gif', { animated: true })
.gif({ interFrameMaxError: 8 })
.toFile('optim.gif');
```
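Setting `reuse: false` forces a new palette to be computed, trading speed for colour fidelity; an editor's illustrative sketch, not from the generated docs, with a hypothetical output filename:
**Example** *(Editor's sketch: generate a fresh, smaller palette instead of re-using the input's.)*
```js
await sharp('in.gif', { animated: true })
  .gif({ reuse: false, colours: 128 })
  .toFile('repalette.gif');
```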
## jp2
> jp2([options]) ⇒ <code>Sharp</code>
Use these JP2 options for output image.
Requires libvips compiled with support for OpenJPEG.
The prebuilt binaries do not include this - see
[installing a custom libvips](https://sharp.pixelplumbing.com/install#custom-libvips).
**Throws**:
- <code>Error</code> Invalid options
**Since**: 0.29.1
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.quality] | <code>number</code> | <code>80</code> | quality, integer 1-100 |
| [options.lossless] | <code>boolean</code> | <code>false</code> | use lossless compression mode |
| [options.tileWidth] | <code>number</code> | <code>512</code> | horizontal tile size |
| [options.tileHeight] | <code>number</code> | <code>512</code> | vertical tile size |
| [options.chromaSubsampling] | <code>string</code> | <code>&quot;&#x27;4:4:4&#x27;&quot;</code> | set to '4:2:0' to use chroma subsampling |
**Example**
```js
// Convert any input to lossless JP2 output
const data = await sharp(input)
.jp2({ lossless: true })
.toBuffer();
```
**Example**
```js
// Convert any input to very high quality JP2 output
const data = await sharp(input)
.jp2({
    quality: 100,
    chromaSubsampling: '4:4:4'
  })
.toBuffer();
```
## tiff
> tiff([options]) ⇒ <code>Sharp</code>
Use these TIFF options for output image.
The `density` can be set in pixels/inch via [withMetadata](#withmetadata)
instead of providing `xres` and `yres` in pixels/mm.
**Throws**:
- <code>Error</code> Invalid options
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.quality] | <code>number</code> | <code>80</code> | quality, integer 1-100 |
| [options.force] | <code>boolean</code> | <code>true</code> | force TIFF output, otherwise attempt to use input format |
| [options.compression] | <code>string</code> | <code>&quot;&#x27;jpeg&#x27;&quot;</code> | compression options: none, jpeg, deflate, packbits, ccittfax4, lzw, webp, zstd, jp2k |
| [options.predictor] | <code>string</code> | <code>&quot;&#x27;horizontal&#x27;&quot;</code> | compression predictor options: none, horizontal, float |
| [options.pyramid] | <code>boolean</code> | <code>false</code> | write an image pyramid |
| [options.tile] | <code>boolean</code> | <code>false</code> | write a tiled tiff |
| [options.tileWidth] | <code>number</code> | <code>256</code> | horizontal tile size |
| [options.tileHeight] | <code>number</code> | <code>256</code> | vertical tile size |
| [options.xres] | <code>number</code> | <code>1.0</code> | horizontal resolution in pixels/mm |
| [options.yres] | <code>number</code> | <code>1.0</code> | vertical resolution in pixels/mm |
| [options.resolutionUnit] | <code>string</code> | <code>&quot;&#x27;inch&#x27;&quot;</code> | resolution unit options: inch, cm |
| [options.bitdepth] | <code>number</code> | <code>8</code> | reduce bitdepth to 1, 2 or 4 bit |
**Example**
```js
// Convert SVG input to LZW-compressed, 1 bit per pixel TIFF output
sharp('input.svg')
.tiff({
    compression: 'lzw',
    bitdepth: 1
  })
  .toFile('1-bpp-output.tiff')
.then(info => { ... });
```
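The `tile` and `pyramid` options can be combined to produce a tiled, multi-resolution TIFF; an editor's illustrative sketch, not from the generated docs, with a hypothetical output filename:
**Example** *(Editor's sketch: write a tiled, pyramidal TIFF with JPEG-compressed tiles.)*
```js
await sharp(input)
  .tiff({ tile: true, pyramid: true, compression: 'jpeg' })
  .toFile('pyramid.tiff');
```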
## avif
> avif([options]) ⇒ <code>Sharp</code>
Use these AVIF options for output image.
Whilst it is possible to create AVIF images smaller than 16x16 pixels, most web browsers do not display these properly.
AVIF image sequences are not supported.
**Throws**:
- <code>Error</code> Invalid options
**Since**: 0.27.0
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.quality] | <code>number</code> | <code>50</code> | quality, integer 1-100 |
| [options.lossless] | <code>boolean</code> | <code>false</code> | use lossless compression |
| [options.effort] | <code>number</code> | <code>4</code> | CPU effort, between 0 (fastest) and 9 (slowest) |
| [options.chromaSubsampling] | <code>string</code> | <code>&quot;&#x27;4:4:4&#x27;&quot;</code> | set to '4:2:0' to use chroma subsampling |
**Example**
```js
const data = await sharp(input)
.avif({ effort: 2 })
.toBuffer();
```
**Example**
```js
const data = await sharp(input)
.avif({ lossless: true })
.toBuffer();
```
## heif
> heif([options]) ⇒ <code>Sharp</code>
Use these HEIF options for output image.
Support for patent-encumbered HEIC images using `hevc` compression requires the use of a
globally-installed libvips compiled with support for libheif, libde265 and x265.
**Throws**:
- <code>Error</code> Invalid options
**Since**: 0.23.0
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.quality] | <code>number</code> | <code>50</code> | quality, integer 1-100 |
| [options.compression] | <code>string</code> | <code>&quot;&#x27;av1&#x27;&quot;</code> | compression format: av1, hevc |
| [options.lossless] | <code>boolean</code> | <code>false</code> | use lossless compression |
| [options.effort] | <code>number</code> | <code>4</code> | CPU effort, between 0 (fastest) and 9 (slowest) |
| [options.chromaSubsampling] | <code>string</code> | <code>&quot;&#x27;4:4:4&#x27;&quot;</code> | set to '4:2:0' to use chroma subsampling |
**Example**
```js
const data = await sharp(input)
.heif({ compression: 'hevc' })
.toBuffer();
```
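The default `av1` compression is available without a custom libvips build; an editor's illustrative sketch, not from the generated docs:
**Example** *(Editor's sketch: AV1-compressed HEIF at a higher quality setting.)*
```js
const data = await sharp(input)
  .heif({ compression: 'av1', quality: 80 })
  .toBuffer();
```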
## jxl
> jxl([options]) ⇒ <code>Sharp</code>
Use these JPEG-XL (JXL) options for output image.
This feature is experimental, please do not use in production systems.
Requires libvips compiled with support for libjxl.
The prebuilt binaries do not include this - see
[installing a custom libvips](https://sharp.pixelplumbing.com/install#custom-libvips).
Image metadata (EXIF, XMP) is unsupported.
**Throws**:
- <code>Error</code> Invalid options
**Since**: 0.31.3
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.distance] | <code>number</code> | <code>1.0</code> | maximum encoding error, between 0 (highest quality) and 15 (lowest quality) |
| [options.quality] | <code>number</code> | | calculate `distance` based on JPEG-like quality, between 1 and 100, overrides distance if specified |
| [options.decodingTier] | <code>number</code> | <code>0</code> | target decode speed tier, between 0 (highest quality) and 4 (lowest quality) |
| [options.lossless] | <code>boolean</code> | <code>false</code> | use lossless compression |
| [options.effort] | <code>number</code> | <code>7</code> | CPU effort, between 3 (fastest) and 9 (slowest) |
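The generated docs provide no example for this experimental output; a minimal editor's sketch, assuming libvips has been rebuilt with libjxl support:
**Example** *(Editor's sketch: lossy JPEG-XL output using the default distance-based quality.)*
```js
const data = await sharp(input)
  .jxl({ distance: 1.0, effort: 7 })
  .toBuffer();
```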
## raw
> raw([options]) ⇒ <code>Sharp</code>
Force output to be raw, uncompressed pixel data.
Pixel ordering is left-to-right, top-to-bottom, without padding.
Channel ordering will be RGB or RGBA for non-greyscale colourspaces.
**Throws**:
- <code>Error</code> Invalid options
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | output options |
| [options.depth] | <code>string</code> | <code>&quot;&#x27;uchar&#x27;&quot;</code> | bit depth, one of: char, uchar (default), short, ushort, int, uint, float, complex, double, dpcomplex |
**Example**
```js
// Extract raw, unsigned 8-bit RGB pixel data from JPEG input
const { data, info } = await sharp('input.jpg')
.raw()
.toBuffer({ resolveWithObject: true });
```
**Example**
```js
// Extract alpha channel as raw, unsigned 16-bit pixel data from PNG input
const data = await sharp('input.png')
.ensureAlpha()
  .extractChannel(3)
  .toColourspace('b-w')
  .raw({ depth: 'ushort' })
.toBuffer();
```
## tile
> tile([options]) ⇒ <code>Sharp</code>
Use tile-based deep zoom (image pyramid) output.
Use a `.zip` or `.szi` file extension with `toFile` to write to a compressed archive file format.
The container will be set to `zip` when the output is a Buffer or Stream, otherwise it will default to `fs`.
Requires libvips compiled with support for libgsf.
The prebuilt binaries do not include this - see
[installing a custom libvips](https://sharp.pixelplumbing.com/install#custom-libvips).
**Throws**:
- <code>Error</code> Invalid parameters
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> | | |
| [options.size] | <code>number</code> | <code>256</code> | tile size in pixels, a value between 1 and 8192. |
| [options.overlap] | <code>number</code> | <code>0</code> | tile overlap in pixels, a value between 0 and 8192. |
| [options.angle] | <code>number</code> | <code>0</code> | tile angle of rotation, must be a multiple of 90. |
| [options.background] | <code>string</code> \| <code>Object</code> | <code>&quot;{r: 255, g: 255, b: 255, alpha: 1}&quot;</code> | background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to white without transparency. |
| [options.depth] | <code>string</code> | | how deep to make the pyramid, possible values are `onepixel`, `onetile` or `one`, default based on layout. |
| [options.skipBlanks] | <code>number</code> | <code>-1</code> | threshold to skip tile generation, a value 0 - 255 for 8-bit images or 0 - 65535 for 16-bit images |
| [options.container] | <code>string</code> | <code>&quot;&#x27;fs&#x27;&quot;</code> | tile container, with value `fs` (filesystem) or `zip` (compressed file). |
| [options.layout] | <code>string</code> | <code>&quot;&#x27;dz&#x27;&quot;</code> | filesystem layout, possible values are `dz`, `iiif`, `iiif3`, `zoomify` or `google`. |
| [options.centre] | <code>boolean</code> | <code>false</code> | centre image in tile. |
| [options.center] | <code>boolean</code> | <code>false</code> | alternative spelling of centre. |
| [options.id] | <code>string</code> | <code>&quot;&#x27;https://example.com/iiif&#x27;&quot;</code> | when `layout` is `iiif`/`iiif3`, sets the `@id`/`id` attribute of `info.json` |
| [options.basename] | <code>string</code> | | the name of the directory within the zip file when container is `zip`. |
**Example**
```js
sharp('input.tiff')
.png()
.tile({
    size: 512
  })
  .toFile('output.dz', function(err, info) {
// output_files contains 512x512 tiles grouped by zoom level
});
```
**Example**
```js
const zipFileWithTiles = await sharp(input)
.tile({ basename: "tiles" })
.toBuffer();
```
**Example**
```js
const iiififier = sharp().tile({ layout: "iiif" });
readableStream
.pipe(iiififier)
.pipe(writeableStream);
```
## timeout
> timeout(options) ⇒ <code>Sharp</code>
Set a timeout for processing, in seconds.
Use a value of zero to continue processing indefinitely, the default behaviour.
The clock starts when libvips opens an input image for processing.
Time spent waiting for a libuv thread to become available is not included.
**Since**: 0.29.2
| Param | Type | Description |
| --- | --- | --- |
| options | <code>Object</code> | |
| options.seconds | <code>number</code> | Number of seconds after which processing will be stopped |
**Example**
```js
// Ensure processing takes no longer than 3 seconds
try {
const data = await sharp(input)
    .blur(1000)
    .timeout({ seconds: 3 })
    .toBuffer();
} catch (err) {
if (err.message.includes('timeout')) { ... }
}
```
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## resize
> resize([width], [height], [options]) ⇒ <code>Sharp</code>
Resize image to `width`, `height` or `width x height`.
When both a `width` and `height` are provided, the possible methods by which the image should **fit** these are:
- `cover`: (default) Preserving aspect ratio, attempt to ensure the image covers both provided dimensions by cropping/clipping to fit.
- `contain`: Preserving aspect ratio, contain within both provided dimensions using "letterboxing" where necessary.
- `fill`: Ignore the aspect ratio of the input and stretch to both provided dimensions.
- `inside`: Preserving aspect ratio, resize the image to be as large as possible while ensuring its dimensions are less than or equal to both those specified.
- `outside`: Preserving aspect ratio, resize the image to be as small as possible while ensuring its dimensions are greater than or equal to both those specified.
Some of these values are based on the [object-fit](https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit) CSS property.
<img alt="Examples of various values for the fit property when resizing" width="100%" style="aspect-ratio: 998/243" src="https://cdn.jsdelivr.net/gh/lovell/sharp@main/docs/image/api-resize-fit.svg">
When using a **fit** of `cover` or `contain`, the default **position** is `centre`. Other options are:
- `sharp.position`: `top`, `right top`, `right`, `right bottom`, `bottom`, `left bottom`, `left`, `left top`.
- `sharp.gravity`: `north`, `northeast`, `east`, `southeast`, `south`, `southwest`, `west`, `northwest`, `center` or `centre`.
- `sharp.strategy`: `cover` only, dynamically crop using either the `entropy` or `attention` strategy.
Some of these values are based on the [object-position](https://developer.mozilla.org/en-US/docs/Web/CSS/object-position) CSS property.
The experimental strategy-based approach resizes so one dimension is at its target length
then repeatedly ranks edge regions, discarding the edge with the lowest score based on the selected strategy.
- `entropy`: focus on the region with the highest [Shannon entropy](https://en.wikipedia.org/wiki/Entropy_%28information_theory%29).
- `attention`: focus on the region with the highest luminance frequency, colour saturation and presence of skin tones.
Possible interpolation kernels are:
- `nearest`: Use [nearest neighbour interpolation](http://en.wikipedia.org/wiki/Nearest-neighbor_interpolation).
- `cubic`: Use a [Catmull-Rom spline](https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline).
- `mitchell`: Use a [Mitchell-Netravali spline](https://www.cs.utexas.edu/~fussell/courses/cs384g-fall2013/lectures/mitchell/Mitchell.pdf).
- `lanczos2`: Use a [Lanczos kernel](https://en.wikipedia.org/wiki/Lanczos_resampling#Lanczos_kernel) with `a=2`.
- `lanczos3`: Use a Lanczos kernel with `a=3` (the default).
Only one resize can occur per pipeline.
Previous calls to `resize` in the same pipeline will be ignored.
**Throws**:
- <code>Error</code> Invalid parameters
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [width] | <code>number</code> | | How many pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height. |
| [height] | <code>number</code> | | How many pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width. |
| [options] | <code>Object</code> | | |
| [options.width] | <code>number</code> | | An alternative means of specifying `width`. If both are present this takes priority. |
| [options.height] | <code>number</code> | | An alternative means of specifying `height`. If both are present this takes priority. |
| [options.fit] | <code>String</code> | <code>&#x27;cover&#x27;</code> | How the image should be resized/cropped to fit the target dimension(s), one of `cover`, `contain`, `fill`, `inside` or `outside`. |
| [options.position] | <code>String</code> | <code>&#x27;centre&#x27;</code> | A position, gravity or strategy to use when `fit` is `cover` or `contain`. |
| [options.background] | <code>String</code> \| <code>Object</code> | <code>{r: 0, g: 0, b: 0, alpha: 1}</code> | background colour when `fit` is `contain`, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency. |
| [options.kernel] | <code>String</code> | <code>&#x27;lanczos3&#x27;</code> | The kernel to use for image reduction. Use the `fastShrinkOnLoad` option to control kernel vs shrink-on-load. |
| [options.withoutEnlargement] | <code>Boolean</code> | <code>false</code> | Do not scale up if the width *or* height are already less than the target dimensions, equivalent to GraphicsMagick's `>` geometry option. This may result in output dimensions smaller than the target dimensions. |
| [options.withoutReduction] | <code>Boolean</code> | <code>false</code> | Do not scale down if the width *or* height are already greater than the target dimensions, equivalent to GraphicsMagick's `<` geometry option. This may still result in a crop to reach the target dimensions. |
| [options.fastShrinkOnLoad] | <code>Boolean</code> | <code>true</code> | Take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern or round-down of an auto-scaled dimension. |
**Example**
```js
sharp(input)
.resize({ width: 100 })
.toBuffer()
  .then(data => {
// 100 pixels wide, auto-scaled height
});
```
**Example**
```js
sharp(input)
.resize({ height: 100 })
.toBuffer()
  .then(data => {
// 100 pixels high, auto-scaled width
});
```
**Example**
```js
sharp(input)
.resize(200, 300, {
kernel: sharp.kernel.nearest,
    fit: 'contain',
    position: 'right top',
    background: { r: 255, g: 255, b: 255, alpha: 0.5 }
  })
  .toFile('output.png')
  .then(() => {
    // output.png is a 200 pixels wide and 300 pixels high image
    // containing a nearest-neighbour scaled version
// contained within the north-east corner of a semi-transparent white canvas
});
```
**Example**
```js
const transformer = sharp()
.resize({
width: 200,
    height: 200,
    fit: sharp.fit.cover,
    position: sharp.strategy.entropy
  });
// Read image data from readableStream
// Write 200px square auto-cropped image data to writableStream
readableStream
.pipe(transformer)
.pipe(writableStream);
```
**Example**
```js
sharp(input)
.resize(200, 200, {
fit: sharp.fit.inside,
    withoutEnlargement: true
  })
  .toFormat('jpeg')
  .toBuffer()
  .then(function(outputBuffer) {
    // outputBuffer contains JPEG image data
    // no wider and no higher than 200 pixels
// and no larger than the input image
});
```
**Example**
```js
sharp(input)
.resize(200, 200, {
fit: sharp.fit.outside,
    withoutReduction: true
  })
  .toFormat('jpeg')
  .toBuffer()
  .then(function(outputBuffer) {
    // outputBuffer contains JPEG image data
    // of at least 200 pixels wide and 200 pixels high while maintaining aspect ratio
// and no smaller than the input image
});
```
**Example**
```js
const scaleByHalf = await sharp(input)
.metadata()
.then(({ width }) => sharp(input)
    .resize(Math.round(width * 0.5))
    .toBuffer()
);
```
## extend
> extend(extend) ⇒ <code>Sharp</code>
Extend / pad / extrude one or more edges of the image with either
the provided background colour or pixels derived from the image.
This operation will always occur after resizing and extraction, if any.
**Throws**:
- <code>Error</code> Invalid parameters
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| extend | <code>number</code> \| <code>Object</code> | | single pixel count to add to all edges or an Object with per-edge counts |
| [extend.top] | <code>number</code> | <code>0</code> | |
| [extend.left] | <code>number</code> | <code>0</code> | |
| [extend.bottom] | <code>number</code> | <code>0</code> | |
| [extend.right] | <code>number</code> | <code>0</code> | |
| [extend.extendWith] | <code>String</code> | <code>&#x27;background&#x27;</code> | populate new pixels using this method, one of: background, copy, repeat, mirror. |
| [extend.background] | <code>String</code> \| <code>Object</code> | <code>{r: 0, g: 0, b: 0, alpha: 1}</code> | background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency. |
**Example**
```js
// Resize to 140 pixels wide, then add 10 transparent pixels
// to the top, left and right edges and 20 to the bottom edge
sharp(input)
  .resize(140)
  .extend({
    top: 10,
    bottom: 20,
    left: 10,
    right: 10,
    background: { r: 0, g: 0, b: 0, alpha: 0 }
})
...
```
**Example**
```js
// Add a row of 10 red pixels to the bottom
sharp(input)
.extend({
    bottom: 10,
    background: 'red'
})
...
```
**Example**
```js
// Extrude image by 8 pixels to the right, mirroring existing right hand edge
sharp(input)
.extend({
right: 8,
    extendWith: 'mirror'
})
...
```
## extract
> extract(options) ⇒ <code>Sharp</code>
Extract/crop a region of the image.
- Use `extract` before `resize` for pre-resize extraction.
- Use `extract` after `resize` for post-resize extraction.
- Use `extract` before and after for both.
**Throws**:
- <code>Error</code> Invalid parameters
| Param | Type | Description |
| --- | --- | --- |
| options | <code>Object</code> | describes the region to extract using integral pixel values |
| options.left | <code>number</code> | zero-indexed offset from left edge |
| options.top | <code>number</code> | zero-indexed offset from top edge |
| options.width | <code>number</code> | width of region to extract |
| options.height | <code>number</code> | height of region to extract |
**Example**
```js
sharp(input)
.extract({ left: left, top: top, width: width, height: height })
.toFile(output, function(err) {
// Extract a region of the input image, saving in the same format.
});
```
**Example**
```js
sharp(input)
.extract({ left: leftOffsetPre, top: topOffsetPre, width: widthPre, height: heightPre })
.resize(width, height)
  .extract({ left: leftOffsetPost, top: topOffsetPost, width: widthPost, height: heightPost })
  .toFile(output, function(err) {
    // Extract a region, resize, then extract from the resized image
});
```
## trim
> trim(trim) ⇒ <code>Sharp</code>
Trim pixels from all edges that contain values similar to the given background colour, which defaults to that of the top-left pixel.
If the result of this operation would trim an image to nothing then no change is made.
The `info` response Object, obtained from callback of `.toFile()` or `.toBuffer()`,
will contain `trimOffsetLeft` and `trimOffsetTop` properties.
**Throws**:
- <code>Error</code> Invalid parameters
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| trim | <code>string</code> \| <code>number</code> \| <code>Object</code> | | the specific background colour to trim, the threshold for doing so or an Object with both. |
| [trim.background] | <code>string</code> \| <code>Object</code> | <code>&quot;&#x27;top-left pixel&#x27;&quot;</code> | background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to that of the top-left pixel. |
| [trim.threshold] | <code>number</code> | <code>10</code> | the allowed difference from the above colour, a positive number. |
**Example**
```js
// Trim pixels with a colour similar to that of the top-left pixel.
sharp(input)
.trim()
  .toFile(output, function(err, info) {
...
});
```
**Example**
```js
// Trim pixels with the exact same colour as that of the top-left pixel.
sharp(input)
.trim(0)
  .toFile(output, function(err, info) {
...
});
```
**Example**
```js
// Trim only pixels with a similar colour to red.
sharp(input)
.trim("#FF0000")
  .toFile(output, function(err, info) {
...
});
```
**Example**
```js
// Trim all "yellow-ish" pixels, being more lenient with the higher threshold.
sharp(input)
.trim({
    background: "yellow",
    threshold: 42,
  })
.toFile(output, function(err, info) {
...
});
```
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
## versions
> versions
An Object containing the version numbers of sharp, libvips and its dependencies.
**Example**
```js
console.log(sharp.versions);
```
## interpolators
> interpolators : <code>enum</code>
An Object containing the available interpolators and their proper values
**Read only**: true
**Properties**
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| nearest | <code>string</code> | <code>&quot;nearest&quot;</code> | [Nearest neighbour interpolation](http://en.wikipedia.org/wiki/Nearest-neighbor_interpolation). Suitable for image enlargement only. |
| bilinear | <code>string</code> | <code>&quot;bilinear&quot;</code> | [Bilinear interpolation](http://en.wikipedia.org/wiki/Bilinear_interpolation). Faster than bicubic but with less smooth results. |
| bicubic | <code>string</code> | <code>&quot;bicubic&quot;</code> | [Bicubic interpolation](http://en.wikipedia.org/wiki/Bicubic_interpolation) (the default). |
| locallyBoundedBicubic | <code>string</code> | <code>&quot;lbb&quot;</code> | [LBB interpolation](https://github.com/libvips/libvips/blob/master/libvips/resample/lbb.cpp#L100). Prevents some "[acutance](http://en.wikipedia.org/wiki/Acutance)" but typically reduces performance by a factor of 2. |
| nohalo | <code>string</code> | <code>&quot;nohalo&quot;</code> | [Nohalo interpolation](http://eprints.soton.ac.uk/268086/). Prevents acutance but typically reduces performance by a factor of 3. |
| vertexSplitQuadraticBasisSpline | <code>string</code> | <code>&quot;vsqbs&quot;</code> | [VSQBS interpolation](https://github.com/libvips/libvips/blob/master/libvips/resample/vsqbs.cpp#L48). Prevents "staircasing" when enlarging. |
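These values are typically passed to operations that accept an `interpolator` option, such as `affine` in the operation API; an editor's illustrative sketch, not from the generated docs:
**Example** *(Editor's sketch: apply an affine transform using the nohalo interpolator.)*
```js
const data = await sharp(input)
  .affine([[1, 0.3], [0.1, 0.7]], { interpolator: sharp.interpolators.nohalo })
  .toBuffer();
```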
## format
> format ⇒ <code>Object</code>
An Object containing nested boolean values representing the available input and output formats/methods.
**Example**
```js
console.log(sharp.format);
```
## vendor
> vendor
An Object containing the platform and architecture
of the current and installed vendored binaries.
**Example**
```js
console.log(sharp.vendor);
```
## queue
> queue
An EventEmitter that emits a `change` event when a task is either:
- queued, waiting for _libuv_ to provide a worker thread
- complete
**Example**
```js
sharp.queue.on('change', function(queueLength) {
console.log('Queue contains ' + queueLength + ' task(s)');
});
```
## cache
> cache([options]) ⇒ <code>Object</code>
Gets or, when options are provided, sets the limits of _libvips'_ operation cache.
Existing entries in the cache will be trimmed after any change in limits.
This method always returns cache statistics,
useful for determining how much working memory is required for a particular task.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [options] | <code>Object</code> \| <code>boolean</code> | <code>true</code> | Object with the following attributes, or boolean where true uses default cache settings and false removes all caching |
| [options.memory] | <code>number</code> | <code>50</code> | is the maximum memory in MB to use for this cache |
| [options.files] | <code>number</code> | <code>20</code> | is the maximum number of files to hold open |
| [options.items] | <code>number</code> | <code>100</code> | is the maximum number of operations to cache |
**Example**
```js
const stats = sharp.cache();
```
**Example**
```js
sharp.cache( { items: 200 } );
sharp.cache( { files: 0 } );
sharp.cache(false);
```
## concurrency
> concurrency([concurrency]) ⇒ <code>number</code>
Gets or, when a concurrency is provided, sets
the maximum number of threads _libvips_ should use to process _each image_.
These are from a thread pool managed by glib,
which helps avoid the overhead of creating new threads.
@@ -115,57 +124,43 @@ The maximum number of images that sharp can process in parallel
is controlled by libuv's `UV_THREADPOOL_SIZE` environment variable,
which defaults to 4.
https://nodejs.org/api/cli.html#uv_threadpool_sizesize
For example, by default, a machine with 8 CPU cores will process
4 images in parallel and use up to 8 threads per image,
so there will be up to 32 concurrent threads.
**Returns**: <code>number</code> - concurrency
| Param | Type |
| --- | --- |
| [concurrency] | <code>number</code> |
**Example**
```js
const threads = sharp.concurrency(); // 4
sharp.concurrency(2); // 2
sharp.concurrency(0); // 4
```
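On the libuv side, `UV_THREADPOOL_SIZE` must be set before the thread pool is first used; an editor's hedged sketch of the usual approaches:
**Example** *(Editor's sketch: size the libuv thread pool before loading sharp.)*
```js
// Prefer setting the variable when launching Node, e.g.
//   UV_THREADPOOL_SIZE=8 node app.js
// Assigning it at the very top of the entry script, before any async I/O, can also work:
process.env.UV_THREADPOOL_SIZE = '8';
const sharp = require('sharp');
sharp.concurrency(2); // optionally also reduce per-image threads
```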
## counters
> counters() ⇒ <code>Object</code>
Provides access to internal task counters.
- queue is the number of tasks this module has queued waiting for _libuv_ to provide a worker thread from its pool.
- process is the number of resize tasks currently being processed.
**Example**
```js
const counters = sharp.counters(); // { queue: 2, process: 4 }
```
## simd
> simd([simd]) ⇒ <code>boolean</code>
Get and set use of SIMD vector unit instructions.
Requires libvips to have been compiled with liborc support.
Improves the performance of `resize`, `blur` and `sharpen` operations
by taking advantage of the SIMD vector unit of the CPU, e.g. Intel SSE and ARM NEON.
| Param | Type | Default |
| --- | --- | --- |
| [simd] | <code>boolean</code> | <code>true</code> |
**Example**
```js
const simd = sharp.simd();
// simd is `true` if the runtime use of liborc is currently enabled
```
**Example**
```js
const simd = sharp.simd(false);
// prevent libvips from using liborc at runtime
```
## block
> block(options)
Block libvips operations at runtime.
This is in addition to the `VIPS_BLOCK_UNTRUSTED` environment variable,
which when set will block all "untrusted" operations.
**Since**: 0.32.4
| Param | Type | Description |
| --- | --- | --- |
| options | <code>Object</code> | |
| options.operation | <code>Array.&lt;string&gt;</code> | List of libvips low-level operation names to block. |
[7]: http://en.wikipedia.org/wiki/Acutance
**Example** *(Block all TIFF input.)*
```js
sharp.block({
operation: ['VipsForeignLoadTiff']
});
```
[8]: http://eprints.soton.ac.uk/268086/
[9]: https://github.com/libvips/libvips/blob/master/libvips/resample/vsqbs.cpp#L48
## unblock
> unblock(options)
[10]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean
Unblock libvips operations at runtime.
[11]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
This is useful for defining a list of allowed operations.
[12]: https://nodejs.org/api/cli.html#uv_threadpool_sizesize
**Since**: 0.32.4
| Param | Type | Description |
| --- | --- | --- |
| options | <code>Object</code> | |
| options.operation | <code>Array.&lt;string&gt;</code> | List of libvips low-level operation names to unblock. |
**Example** *(Block all input except WebP from the filesystem.)*
```js
sharp.block({
operation: ['VipsForeignLoad']
});
sharp.unblock({
operation: ['VipsForeignLoadWebpFile']
});
```
**Example** *(Block all input except JPEG and PNG from a Buffer or Stream.)*
```js
sharp.block({
operation: ['VipsForeignLoad']
});
sharp.unblock({
operation: ['VipsForeignLoadJpegBuffer', 'VipsForeignLoadPngBuffer']
});
```

@@ -1,7 +1,11 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const fs = require('fs').promises;
const path = require('path');
const jsdoc2md = require('jsdoc-to-markdown');
[
'constructor',
@@ -14,13 +18,21 @@ const path = require('path');
'output',
'utility'
].forEach(async (m) => {
const documentation = await import('documentation');
const input = path.join('lib', `${m}.js`);
const output = path.join('docs', `api-${m}.md`);
const ast = await documentation.build(input, { shallow: true });
const markdown = await documentation.formats.md(ast, { markdownToc: false });
const ast = await jsdoc2md.getTemplateData({ files: input });
const markdown = await jsdoc2md.render({
data: ast,
'global-index-format': 'none',
'module-index-format': 'none'
});
await fs.writeFile(output, markdown);
const cleanMarkdown = markdown
.replace(/(## )([A-Za-z0-9]+)([^\n]*)/g, '$1$2\n> $2$3\n') // simplify headings to match those of documentationjs, ensures existing URLs work
.replace(/<a name="[A-Za-z0-9+]+"><\/a>/g, '') // remove anchors, let docute add these (at markdown to HTML render time)
.replace(/\*\*Kind\*\*: global[^\n]+/g, '') // remove all "global" Kind labels (requires JSDoc refactoring)
.trim();
await fs.writeFile(output, cleanMarkdown);
});
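
To illustrate what the first `.replace()` above does (the heading string is a hypothetical jsdoc2md output line, not taken from this diff):

```js
// Splits a jsdoc2md heading into a short heading plus a block-quoted signature line
const heading = '## cache([options]) ⇒ <code>Object</code>';
const simplified = heading.replace(/(## )([A-Za-z0-9]+)([^\n]*)/g, '$1$2\n> $2$3\n');
console.log(simplified);
// ## cache
// > cache([options]) ⇒ <code>Object</code>
```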

@@ -1,8 +1,220 @@
# Changelog
## v0.32 - *flow*
Requires libvips v8.14.5
### v0.32.6 - 18th September 2023
* Upgrade to libvips v8.14.5 for upstream bug fixes.
* Ensure composite tile images are fully decoded (regression in 0.32.0).
[#3767](https://github.com/lovell/sharp/issues/3767)
* Ensure `withMetadata` can add ICC profiles to RGB16 output.
[#3773](https://github.com/lovell/sharp/issues/3773)
* Ensure `withMetadata` does not reduce 16-bit images to 8-bit (regression in 0.32.5).
[#3773](https://github.com/lovell/sharp/issues/3773)
* TypeScript: Add definitions for block and unblock.
[#3799](https://github.com/lovell/sharp/pull/3799)
[@ldrick](https://github.com/ldrick)
### v0.32.5 - 15th August 2023
* Upgrade to libvips v8.14.4 for upstream bug fixes.
* TypeScript: Add missing `WebpPresetEnum` to definitions.
[#3748](https://github.com/lovell/sharp/pull/3748)
[@pilotso11](https://github.com/pilotso11)
* Ensure compilation using musl v1.2.4.
[#3755](https://github.com/lovell/sharp/pull/3755)
[@kleisauke](https://github.com/kleisauke)
* Ensure resize with a `fit` of `inside` respects 90/270 degree rotation.
[#3756](https://github.com/lovell/sharp/issues/3756)
* TypeScript: Ensure `minSize` property of `WebpOptions` is boolean.
[#3758](https://github.com/lovell/sharp/pull/3758)
[@sho-xizz](https://github.com/sho-xizz)
* Ensure `withMetadata` adds default sRGB profile.
[#3761](https://github.com/lovell/sharp/issues/3761)
### v0.32.4 - 21st July 2023
* Upgrade to libvips v8.14.3 for upstream bug fixes.
* Expose ability to (un)block low-level libvips operations by name.
* Prebuilt binaries: restore support for tile-based output.
[#3581](https://github.com/lovell/sharp/issues/3581)
### v0.32.3 - 14th July 2023
* Expose `preset` option for WebP output.
[#3639](https://github.com/lovell/sharp/issues/3639)
* Ensure decoding remains sequential for all operations (regression in 0.32.2).
[#3725](https://github.com/lovell/sharp/issues/3725)
### v0.32.2 - 11th July 2023
* Limit HEIF output dimensions to 16384x16384, matches libvips.
* Ensure exceptions are not thrown when terminating.
[#3569](https://github.com/lovell/sharp/issues/3569)
* Ensure the same access method is used for all inputs (regression in 0.32.0).
[#3669](https://github.com/lovell/sharp/issues/3669)
* Improve detection of jp2 filename extensions.
[#3674](https://github.com/lovell/sharp/pull/3674)
[@bianjunjie1981](https://github.com/bianjunjie1981)
* Guard use of smartcrop premultiplied option to prevent warning (regression in 0.32.1).
[#3710](https://github.com/lovell/sharp/issues/3710)
* Prevent over-compute in affine-based rotate before resize.
[#3722](https://github.com/lovell/sharp/issues/3722)
* Allow sequential read for EXIF-based auto-orientation.
[#3725](https://github.com/lovell/sharp/issues/3725)
### v0.32.1 - 27th April 2023
* Add experimental `unflatten` operation.
[#3461](https://github.com/lovell/sharp/pull/3461)
[@antonmarsden](https://github.com/antonmarsden)
* Ensure use of `flip` operation forces random access read (regression in 0.32.0).
[#3600](https://github.com/lovell/sharp/issues/3600)
* Ensure `linear` operation works with 16-bit input (regression in 0.31.3).
[#3605](https://github.com/lovell/sharp/issues/3605)
* Install: ensure proxy URLs are logged correctly.
[#3615](https://github.com/lovell/sharp/pull/3615)
[@TomWis97](https://github.com/TomWis97)
* Ensure profile-less CMYK to CMYK roundtrip skips colourspace conversion.
[#3620](https://github.com/lovell/sharp/issues/3620)
* Add support for `modulate` operation when using non-sRGB pipeline colourspace.
[#3620](https://github.com/lovell/sharp/issues/3620)
* Ensure `trim` operation works with CMYK images (regression in 0.31.0).
[#3636](https://github.com/lovell/sharp/issues/3636)
* Install: coerce libc version to semver.
[#3641](https://github.com/lovell/sharp/issues/3641)
### v0.32.0 - 24th March 2023
* Default to using sequential rather than random access read where possible.
* Replace GIF output `optimise` / `optimize` option with `reuse`.
* Add `progressive` option to GIF output for interlacing.
* Add `wrap` option to text image creation.
* Add `formatMagick` property to metadata of images loaded via *magick.
* Prefer integer (un)premultiply for faster resizing of RGBA images.
* Add `ignoreIcc` input option to ignore embedded ICC profile.
* Allow use of GPS (IFD3) EXIF metadata.
[#2767](https://github.com/lovell/sharp/issues/2767)
* TypeScript definitions are now maintained and published directly, deprecating the `@types/sharp` package.
[#3369](https://github.com/lovell/sharp/issues/3369)
* Prebuilt binaries: ensure macOS 10.13+ support, as documented.
[#3438](https://github.com/lovell/sharp/issues/3438)
* Prebuilt binaries: prevent use of glib slice allocator, improves QEMU support.
[#3448](https://github.com/lovell/sharp/issues/3448)
* Add focus point coordinates to output when using attention based crop.
[#3470](https://github.com/lovell/sharp/pull/3470)
[@ejoebstl](https://github.com/ejoebstl)
* Expose sharp version as `sharp.versions.sharp`.
[#3471](https://github.com/lovell/sharp/issues/3471)
* Respect `fastShrinkOnLoad` resize option for WebP input.
[#3516](https://github.com/lovell/sharp/issues/3516)
* Reduce sharpen `sigma` maximum from 10000 to 10.
[#3521](https://github.com/lovell/sharp/issues/3521)
* Add support for `ArrayBuffer` input.
[#3548](https://github.com/lovell/sharp/pull/3548)
[@kapouer](https://github.com/kapouer)
* Add support to `extend` operation for `extendWith` to allow copy/mirror/repeat.
[#3556](https://github.com/lovell/sharp/pull/3556)
[@janaz](https://github.com/janaz)
* Ensure all async JS callbacks are wrapped to help avoid possible race condition.
[#3569](https://github.com/lovell/sharp/issues/3569)
* Prebuilt binaries: support for tile-based output temporarily removed due to licensing issue.
[#3581](https://github.com/lovell/sharp/issues/3581)
* Add support to `normalise` for `lower` and `upper` percentiles.
[#3583](https://github.com/lovell/sharp/pull/3583)
[@LachlanNewman](https://github.com/LachlanNewman)
## v0.31 - *eagle*
Requires libvips v8.13.2
Requires libvips v8.13.3
### v0.31.3 - 21st December 2022
* Add experimental support for JPEG-XL images. Requires libvips compiled with libjxl.
[#2731](https://github.com/lovell/sharp/issues/2731)
* Add runtime detection of V8 memory cage, ensures compatibility with Electron 21 onwards.
[#3384](https://github.com/lovell/sharp/issues/3384)
* Expose `interFrameMaxError` and `interPaletteMaxError` GIF optimisation properties.
[#3401](https://github.com/lovell/sharp/issues/3401)
* Allow installation on Linux with glibc patch versions e.g. Fedora 38.
[#3423](https://github.com/lovell/sharp/issues/3423)
* Expand range of existing `sharpen` parameters to match libvips.
[#3427](https://github.com/lovell/sharp/issues/3427)
* Prevent possible race condition awaiting metadata of Stream-based input.
[#3451](https://github.com/lovell/sharp/issues/3451)
* Improve `extractChannel` support for 16-bit output colourspaces.
[#3453](https://github.com/lovell/sharp/issues/3453)
* Ignore `sequentialRead` option when calculating image statistics.
[#3462](https://github.com/lovell/sharp/issues/3462)
* Small performance improvement for operations that introduce a non-opaque background.
[#3465](https://github.com/lovell/sharp/issues/3465)
* Ensure integral output of `linear` operation.
[#3468](https://github.com/lovell/sharp/issues/3468)
### v0.31.2 - 4th November 2022
* Upgrade to libvips v8.13.3 for upstream bug fixes.
* Ensure manual flip, rotate, resize operation ordering (regression in 0.31.1)
[#3391](https://github.com/lovell/sharp/issues/3391)
* Ensure auto-rotation works without resize (regression in 0.31.1)
[#3422](https://github.com/lovell/sharp/issues/3422)
### v0.31.1 - 29th September 2022

@@ -263,3 +263,15 @@ GitHub: https://github.com/antonmarsden
Name: Marcos Casagrande
GitHub: https://github.com/marcosc90
Name: Emanuel Jöbstl
GitHub: https://github.com/ejoebstl
Name: Tomasz Janowski
GitHub: https://github.com/janaz
Name: Lachlan Newman
GitHub: https://github.com/LachlanNewman
Name: BJJ
GitHub: https://github.com/bianjunjie1981

@@ -0,0 +1,61 @@
<?xml version="1.0" encoding="utf-8"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="998" height="243" viewBox="0 0 998 243">
<defs>
<g id="placeholder">
<rect width="180" height="128" fill="#64bed8"/>
<circle cx="61.1" cy="36.8" r="19.3" fill="#ffefa9"/>
<circle cx="61.1" cy="36.8" r="18.1" fill="#fdda42"/>
<path d="m67.2 34.7 15.2 46L90 57.9l30.4 38 7.8-15.2 7.5 15.4H44z" fill="#6a696f"/>
<path d="m82.4 80.7-15.2-46-.3 69h22.9z" fill="#474749"/>
<path d="m90.1 58 12.2 15 18.2 23-13.9.1z" fill="#474749"/>
<path d="M135.8 96H131l-2.8-15.3z" fill="#474749"/>
<path d="M35.2 96h107.1c0 1.7-1.4 3.2-3.2 3.2H38.4a3.2 3.2 0 0 1-3.2-3.2z" fill="#b9c861"/>
<path d="m67.2 34.7-.1 31-6.2-3-5.3 2.7z" fill="#fff"/>
<path d="m67.2 34.7 7.6 23-7.7 8z" fill="#b3b1b4"/>
<rect width="30.8" height="7.7" x="71.1" y="27.2" rx="2.8" ry="4.1" fill="#fff"/>
<rect width="30.8" height="7.7" x="82.2" y="34.8" rx="2.8" ry="4.1" fill="#fff"/>
<rect width="30.8" height="7.7" x="36.2" y="19.6" rx="2.8" ry="4.1" fill="#fff"/>
<path d="m89.6 72.8-7.2 7.9L90 57.9l10 23z" fill="#fff"/>
<path d="m90.1 58 10 23 2.2-8z" fill="#b3b1b4"/>
<path d="M131.2 85.2 137 68l9 17.2-8 6z" fill="#8da128"/>
<rect width="109.4" height="6.8" x="33.9" y="99.1" rx="13.2" ry="11.4" fill="#22b0d6"/>
<path d="m137 68-5.8 17.2 6.8 6.1.3-13.7z" fill="#727d2e"/>
<rect width="83.3" height="6.8" x="50.8" y="103.6" rx="10" ry="11.4" fill="#22b0d6"/>
<rect width=".7" height="18.4" x="138" y="77.6" fill="#585657"/>
<rect width=".5" height="5.2" x="2" y="-161.3" fill="#585657" transform="rotate(120)"/>
<rect width=".5" height="5.3" x="5.5" y="-163.3" fill="#585657" transform="rotate(120)"/>
<rect width=".5" height="4.8" x="-142.4" y="77.7" fill="#585657" transform="rotate(240)"/>
<rect width=".5" height="5.1" x="-146" y="75.6" fill="#585657" transform="rotate(240)"/>
</g>
<pattern id="img" height="100%" width="100%" viewBox="0 0 180 128">
<use xlink:href="#placeholder"/>
</pattern>
<pattern id="img-fill" width="100%" height="100%" viewBox="0 0 180 128" preserveAspectRatio="none">
<use xlink:href="#placeholder"/>
</pattern>
</defs>
<rect x="0" y="0" width="998" height="243" fill="#ddd"/>
<g id="cover">
<rect x="22" y="28" width="180" height="132" fill="url(#img)"/>
<rect x="48" y="30" width="128" height="128" fill="none" stroke="#000" stroke-width="4"/>
<text x="112" y="85%" dominant-baseline="middle" text-anchor="middle" font-family="sans" font-size="32" font-weight="bold">cover</text>
</g>
<g id="contain">
<rect x="240" y="30" width="128" height="128" fill="url(#img)" stroke="#000" stroke-width="4"/>
<text x="304" y="85%" dominant-baseline="middle" text-anchor="middle" font-family="sans" font-size="32" font-weight="bold" fill="#555">contain</text>
</g>
<g id="fill">
<rect x="432" y="30" width="128" height="128" fill="url(#img-fill)" stroke="#000" stroke-width="4"/>
<text x="496" y="85%" dominant-baseline="middle" text-anchor="middle" font-family="sans" font-size="32" font-weight="bold">fill</text>
</g>
<g id="inside">
<rect x="624" y="48" width="128" height="92" fill="url(#img)" stroke="#000" stroke-width="4"/>
<rect x="624" y="30" width="128" height="128" fill="none" stroke="#000" stroke-width="4" stroke-dasharray="12 4" stroke-dashoffset="6"/>
<text x="688" y="85%" dominant-baseline="middle" text-anchor="middle" font-family="sans" font-size="32" font-weight="bold" fill="#555">inside</text>
</g>
<g id="outside">
<rect x="792" y="30" width="176" height="128" fill="url(#img)" stroke="#000" stroke-width="4"/>
<rect x="816" y="30" width="128" height="128" fill="none" stroke="#000" stroke-width="4" stroke-dasharray="12 4" stroke-dashoffset="-2"/>
<text x="880" y="85%" dominant-baseline="middle" text-anchor="middle" font-family="sans" font-size="32" font-weight="bold">outside</text>
</g>
</svg>

@@ -5,13 +5,15 @@
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="Resize large images in common formats to smaller, web-friendly JPEG, PNG, WebP, GIF and AVIF images of varying dimensions">
<meta property="og:title" content="sharp - High performance Node.js image processing">
<meta property="og:image" content="https://cdn.jsdelivr.net/gh/lovell/sharp@main/docs/image/sharp-logo-600.png">
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; object-src 'none'; style-src 'unsafe-inline';
img-src 'unsafe-inline' data: https://cdn.jsdelivr.net/gh/lovell/ https://www.google-analytics.com;
connect-src 'self' https://www.google-analytics.com;
script-src 'self' 'unsafe-inline' 'unsafe-eval'
https://www.google-analytics.com/analytics.js;">
<link rel="icon" type="image/svg+xml" href="https://cdn.jsdelivr.net/gh/lovell/sharp@main/docs/image/sharp-logo.svg">
<link rel="icon" type="image/png" sizes="32x32" href="https://cdn.jsdelivr.net/gh/lovell/sharp@main/docs/image/sharp-logo.png">
<link rel="icon" type="image/png" sizes="32x32" href="https://cdn.jsdelivr.net/gh/lovell/sharp@main/docs/image/sharp-logo-32.png">
<link rel="author" href="/humans.txt" type="text/plain">
<link rel="dns-prefetch" href="https://www.google-analytics.com">
<script type="application/ld+json">
@@ -29,7 +31,7 @@
"@type": "Person",
"name": "Lovell Fuller"
},
"copyrightYear": [2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022],
"copyrightYear": 2013,
"license": "https://www.apache.org/licenses/LICENSE-2.0"
}
</script>
@@ -77,9 +79,7 @@
.map(function (sidebarLink) {
return sidebarLink.title;
})[0];
return title
? md.replace(/<!-- Generated by documentation.js. Update this documentation by updating the source code. -->/, '# ' + title)
: md;
return title ? `# ${title}\n${md}` : md;
});
}
};

@@ -28,7 +28,7 @@ is downloaded via HTTPS, verified via Subresource Integrity
and decompressed into `node_modules/sharp/vendor` during `npm install`.
This provides support for the
JPEG, PNG, WebP, AVIF, TIFF, GIF and SVG (input) image formats.
JPEG, PNG, WebP, AVIF (limited to 8-bit depth), TIFF, GIF and SVG (input) image formats.
The following platforms have prebuilt libvips but not sharp:
@@ -115,7 +115,8 @@ and that it can be located using `pkg-config --modversion vips-cpp`.
For help compiling libvips and its dependencies, please see
[building libvips from source](https://www.libvips.org/install.html#building-libvips-from-source).
The use of a globally-installed libvips is unsupported on Windows.
The use of a globally-installed libvips is unsupported on Windows
and on macOS when running Node.js under Rosetta.
## Building from source
@@ -147,7 +148,7 @@ or the `npm_config_sharp_local_prebuilds` environment variable.
URL example:
if `sharp_binary_host` is set to `https://hostname/path`
and the sharp version is `1.2.3` then the resultant URL will be
`https://hostname/path/sharp-v1.2.3-napi-v5-platform-arch.tar.gz`.
`https://hostname/path/v1.2.3/sharp-v1.2.3-napi-v5-platform-arch.tar.gz`.
Filename example:
if `sharp_local_prebuilds` is set to `/path`
@@ -178,6 +179,16 @@ and the libvips version is `4.5.6` then the resultant filename will be
See the Chinese mirror below for a further example.
If these binaries are modified, new integrity hashes can be provided
at install time via `npm_package_config_integrity_platform_arch`
environment variables, for example set
`npm_package_config_integrity_linux_x64` to `sha512-abc...`.
The integrity hash of a file can be generated via:
```sh
sha512sum libvips-x.y.z-platform-arch.tar.br | cut -f1 -d' ' | xxd -r -p | base64 -w 0
```
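
Where a shell is not available, the same value can be produced from Node.js (a sketch; the filename is a placeholder as above):

```js
// Sketch: Node.js equivalent of the shell pipeline above,
// producing a value suitable for the npm_package_config_integrity_* variables
const { createHash } = require('crypto');
const { readFileSync } = require('fs');

const digest = createHash('sha512')
  .update(readFileSync('libvips-x.y.z-platform-arch.tar.br'))
  .digest('base64');
console.log(`sha512-${digest}`);
```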
## Chinese mirror
A mirror site based in China, provided by Alibaba, contains binaries for both sharp and libvips.
@@ -227,16 +238,6 @@ the use of an alternative memory allocator such as
Those using musl-based Linux (e.g. Alpine) and non-Linux systems are
unaffected.
## Heroku
Add the
[jemalloc buildpack](https://github.com/gaffneyc/heroku-buildpack-jemalloc)
to reduce the effects of memory fragmentation.
Set
[NODE_MODULES_CACHE](https://devcenter.heroku.com/articles/nodejs-support#cache-behavior)
to `false` when using the `yarn` package manager.
## AWS Lambda
The `node_modules` directory of the
@@ -255,6 +256,9 @@ SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=gli
To get the best performance select the largest memory available.
A 1536 MB function provides ~12x more CPU time than a 128 MB function.
When integrating with AWS API Gateway, ensure it is configured with the relevant
[binary media types](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html).
## Bundlers
### webpack
@@ -301,6 +305,17 @@ custom:
- npm install --arch=x64 --platform=linux sharp
```
## TypeScript
TypeScript definitions are published as part of
the `sharp` package from v0.32.0.
Previously these were available via the `@types/sharp` package,
which is now deprecated.
When using TypeScript, please ensure `devDependencies` includes
the `@types/node` package.
## Fonts
When creating text images or rendering SVG images that contain text elements,
@@ -343,9 +358,12 @@ Module did not self-register
### Canvas and Windows
The prebuilt binaries provided by `canvas` for Windows depend on the unmaintained GTK 2, last updated in 2011.
The prebuilt binaries provided by `canvas` for Windows
from v2.7.0 onwards depend on the Visual C++ Runtime (MSVCRT).
These conflict with the binaries provided by sharp,
which depend on the more modern Universal C Runtime (UCRT).
These conflict with the modern, up-to-date binaries provided by sharp.
See [Automattic/node-canvas#2155](https://github.com/Automattic/node-canvas/issues/2155).
If both modules are used in the same Windows process, the following error will occur:
```

@@ -2,48 +2,104 @@
A test to benchmark the performance of this module relative to alternatives.
## The contenders
* [jimp](https://www.npmjs.com/package/jimp) v0.16.1 - Image processing in pure JavaScript. Provides bicubic interpolation.
* [mapnik](https://www.npmjs.org/package/mapnik) v4.5.9 - Whilst primarily a map renderer, Mapnik contains bitmap image utilities.
* [imagemagick](https://www.npmjs.com/package/imagemagick) v0.1.3 - Supports filesystem only and "*has been unmaintained for a long time*".
* [gm](https://www.npmjs.com/package/gm) v1.23.1 - Fully featured wrapper around GraphicsMagick's `gm` command line utility.
* [@squoosh/lib](https://www.npmjs.com/package/@squoosh/lib) v0.4.0 - Image libraries transpiled to WebAssembly, includes GPLv3 code.
* [@squoosh/cli](https://www.npmjs.com/package/@squoosh/cli) v0.7.2 - Command line wrapper around `@squoosh/lib`, avoids GPLv3 by spawning process.
* sharp v0.31.0 / libvips v8.13.1 - Caching within libvips disabled to ensure a fair comparison.
## The task
Decompress a 2725x2225 JPEG image,
resize to 720x588 using Lanczos 3 resampling (where available),
then compress to JPEG at a "quality" setting of 80.
## Test environment
* AWS EC2 eu-west-1 [c6a.xlarge](https://aws.amazon.com/ec2/instance-types/c6a/) (4x AMD EPYC 7R13)
* Ubuntu 22.04 (ami-051f7c00cb18501ee)
* Node.js 16.17.0
## Results
| Module | Input | Output | Ops/sec | Speed-up |
| :----------------- | :----- | :----- | ------: | -------: |
| jimp | buffer | buffer | 0.96 | 1.0 |
| squoosh-cli | file | file | 1.10 | 1.1 |
| squoosh-lib | buffer | buffer | 1.87 | 1.9 |
| mapnik | buffer | buffer | 3.48 | 3.6 |
| gm | buffer | buffer | 8.53 | 8.9 |
| gm | file | file | 8.60 | 9.0 |
| imagemagick | file | file | 9.30 | 9.7 |
| sharp | stream | stream | 32.86 | 34.2 |
| sharp | file | file | 34.82 | 36.3 |
| sharp | buffer | buffer | 35.41 | 36.9 |
Greater libvips performance can be expected with caching enabled (default)
and using 8+ core machines, especially those with larger L1/L2 CPU caches.
The I/O limits of the relevant (de)compression library will generally determine maximum throughput.
## Contenders
* [jimp](https://www.npmjs.com/package/jimp) v0.22.7 - Image processing in pure JavaScript.
* [imagemagick](https://www.npmjs.com/package/imagemagick) v0.1.3 - Supports filesystem only and "*has been unmaintained for a long time*".
* [gm](https://www.npmjs.com/package/gm) v1.25.0 - Fully featured wrapper around GraphicsMagick's `gm` command line utility.
* [@squoosh/lib](https://www.npmjs.com/package/@squoosh/lib) v0.4.0 - Image libraries transpiled to WebAssembly, includes GPLv3 code, but "*Project no longer maintained*".
* [@squoosh/cli](https://www.npmjs.com/package/@squoosh/cli) v0.7.3 - Command line wrapper around `@squoosh/lib`, avoids GPLv3 by spawning process, but "*Project no longer maintained*".
* sharp v0.32.0 / libvips v8.14.2 - Caching within libvips disabled to ensure a fair comparison.
## Environment
### AMD64
* AWS EC2 us-east-2 [c6a.xlarge](https://aws.amazon.com/ec2/instance-types/c6a/) (4x AMD EPYC 7R13)
* Ubuntu 22.04 20230303 (ami-0122295b0eb922138)
* Node.js 16.19.1
### ARM64
* AWS EC2 us-east-2 [c7g.xlarge](https://aws.amazon.com/ec2/instance-types/c7g/) (4x ARM Graviton3)
* Ubuntu 22.04 20230303 (ami-0af198159897e7a29)
* Node.js 16.19.1
## Task: JPEG
Decompress a 2725x2225 JPEG image,
resize to 720x588 using Lanczos 3 resampling (where available),
then compress to JPEG at a "quality" setting of 80.
Note: jimp does not support Lanczos 3, bicubic resampling used instead.
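
For reference, the sharp side of this task can be sketched as follows (filenames are hypothetical; the benchmark harness itself is not shown here):

```js
// Sketch of the JPEG benchmark task using sharp
const sharp = require('sharp');
sharp.cache(false); // caching within libvips disabled for a fair comparison, as noted above

sharp('input-2725x2225.jpg')
  .resize(720, 588, { kernel: 'lanczos3' })
  .jpeg({ quality: 80 })
  .toFile('output-720x588.jpg')
  .then(info => console.log(info));
```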
#### Results: JPEG (AMD64)
| Module | Input | Output | Ops/sec | Speed-up |
| :----------------- | :----- | :----- | ------: | -------: |
| jimp | buffer | buffer | 0.84 | 1.0 |
| squoosh-cli | file | file | 1.07 | 1.3 |
| squoosh-lib | buffer | buffer | 1.82 | 2.2 |
| gm | buffer | buffer | 8.41 | 10.0 |
| gm | file | file | 8.45 | 10.0 |
| imagemagick | file | file | 8.77 | 10.4 |
| sharp | stream | stream | 36.36 | 43.3 |
| sharp | file | file | 38.67 | 46.0 |
| sharp | buffer | buffer | 39.44 | 47.0 |
#### Results: JPEG (ARM64)
| Module | Input | Output | Ops/sec | Speed-up |
| :----------------- | :----- | :----- | ------: | -------: |
| jimp | buffer | buffer | 1.02 | 1.0 |
| squoosh-cli | file | file | 1.11 | 1.1 |
| squoosh-lib | buffer | buffer | 2.08 | 2.0 |
| gm | buffer | buffer | 8.80 | 8.6 |
| gm | file | file | 10.05 | 9.9 |
| imagemagick | file | file | 10.28 | 10.1 |
| sharp | stream | stream | 26.87 | 26.3 |
| sharp | file | file | 27.88 | 27.3 |
| sharp | buffer | buffer | 28.40 | 27.8 |
## Task: PNG
Decompress a 2048x1536 RGBA PNG image,
premultiply the alpha channel,
resize to 720x540 using Lanczos 3 resampling (where available),
unpremultiply then compress as PNG with a "default" zlib compression level of 6
and without adaptive filtering.
Note: jimp does not support premultiply/unpremultiply.
### Results: PNG (AMD64)
| Module | Input | Output | Ops/sec | Speed-up |
| :----------------- | :----- | :----- | ------: | -------: |
| squoosh-cli | file | file | 0.40 | 1.0 |
| squoosh-lib | buffer | buffer | 0.47 | 1.2 |
| gm | file | file | 6.47 | 16.2 |
| jimp | buffer | buffer | 6.60 | 16.5 |
| imagemagick | file | file | 7.08 | 17.7 |
| sharp | file | file | 17.80 | 44.5 |
| sharp | buffer | buffer | 18.02 | 45.0 |
### Results: PNG (ARM64)
| Module | Input | Output | Ops/sec | Speed-up |
| :----------------- | :----- | :----- | ------: | -------: |
| squoosh-cli | file | file | 0.40 | 1.0 |
| squoosh-lib | buffer | buffer | 0.48 | 1.2 |
| gm | file | file | 7.20 | 18.0 |
| jimp | buffer | buffer | 7.62 | 19.1 |
| imagemagick | file | file | 7.96 | 19.9 |
| sharp | file | file | 12.97 | 32.4 |
| sharp | buffer | buffer | 13.12 | 32.8 |
## Running the benchmark test
Requires Docker.

File diff suppressed because one or more lines are too long

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const fs = require('fs');
@@ -37,7 +40,7 @@ for (const match of matches) {
].forEach((section) => {
const contents = fs.readFileSync(path.join(__dirname, '..', `api-${section}.md`), 'utf8');
const matches = contents.matchAll(
/\n## (?<title>[A-Za-z]+)\n\n(?<firstparagraph>.+?)\n\n(?<parameters>### Parameters.+?Returns)?/gs
/## (?<title>[A-Za-z]+)\n[^\n]+\n(?<firstparagraph>.+?)\n\n.+?(?<parameters>\| Param .+?\n\n)?\*\*Example/gs
);
for (const match of matches) {
const { title, firstparagraph, parameters } = match.groups;

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const stopWords = require('./stop-words');
@@ -13,8 +16,9 @@ const extractDescription = (str) =>
.trim();
const extractParameters = (str) =>
[...str.matchAll(/options\.(?<name>[^.`]+)/gs)]
[...str.matchAll(/options\.(?<name>[^.`\] ]+)/gs)]
.map((match) => match.groups.name)
.map((name) => name.replace(/([A-Z])/g, ' $1').toLowerCase())
.join(' ');
const extractKeywords = (str) =>

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
module.exports = [
@@ -12,9 +15,11 @@ module.exports = [
'and',
'any',
'are',
'available',
'based',
'been',
'before',
'best',
'both',
'call',
'callback',
@@ -59,14 +64,20 @@ module.exports = [
'must',
'non',
'not',
'now',
'occur',
'occurs',
'one',
'options',
'other',
'out',
'over',
'part',
'perform',
'performs',
'please',
'pre',
'previously',
'produce',
'provide',
'provided',
@@ -74,10 +85,12 @@ module.exports = [
'requires',
'requiresharp',
'returned',
'run',
'same',
'see',
'set',
'sets',
'sharp',
'should',
'since',
'site',
@@ -109,6 +122,7 @@ module.exports = [
'using',
'value',
'values',
'via',
'were',
'when',
'which',
@@ -116,5 +130,6 @@ module.exports = [
'will',
'with',
'without',
'you'
'you',
'your'
];

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const libvips = require('../lib/libvips');

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const fs = require('fs');

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const fs = require('fs');
@@ -8,6 +11,7 @@ const zlib = require('zlib');
const { createHash } = require('crypto');
const detectLibc = require('detect-libc');
const semverCoerce = require('semver/functions/coerce');
const semverLessThan = require('semver/functions/lt');
const semverSatisfies = require('semver/functions/satisfies');
const simpleGet = require('simple-get');
@@ -74,7 +78,11 @@ const verifyIntegrity = function (platformAndArch) {
flush: function (done) {
const digest = `sha512-${hash.digest('base64')}`;
if (expected !== digest) {
libvips.removeVendoredLibvips();
try {
libvips.removeVendoredLibvips();
} catch (err) {
libvips.log(err.message);
}
libvips.log(`Integrity expected: ${expected}`);
libvips.log(`Integrity received: ${digest}`);
done(new Error(`Integrity check failed for ${platformAndArch}`));
@@ -128,23 +136,23 @@ try {
if (arch === 'ia32' && !platformAndArch.startsWith('win32')) {
throw new Error(`Intel Architecture 32-bit systems require manual installation of libvips >= ${minimumLibvipsVersion}`);
}
if (platformAndArch === 'darwin-arm64') {
throw new Error("Please run 'brew install vips' to install libvips on Apple M1 (ARM64) systems");
}
if (platformAndArch === 'freebsd-x64' || platformAndArch === 'openbsd-x64' || platformAndArch === 'sunos-x64') {
throw new Error(`BSD/SunOS systems require manual installation of libvips >= ${minimumLibvipsVersion}`);
}
// Linux libc version check
const libcFamily = detectLibc.familySync();
const libcVersion = detectLibc.versionSync();
if (libcFamily === detectLibc.GLIBC && libcVersion && minimumGlibcVersionByArch[arch]) {
if (semverLessThan(`${libcVersion}.0`, `${minimumGlibcVersionByArch[arch]}.0`)) {
handleError(new Error(`Use with glibc ${libcVersion} requires manual installation of libvips >= ${minimumLibvipsVersion}`));
const libcVersionRaw = detectLibc.versionSync();
if (libcVersionRaw) {
const libcFamily = detectLibc.familySync();
const libcVersion = semverCoerce(libcVersionRaw).version;
if (libcFamily === detectLibc.GLIBC && minimumGlibcVersionByArch[arch]) {
if (semverLessThan(libcVersion, semverCoerce(minimumGlibcVersionByArch[arch]).version)) {
handleError(new Error(`Use with glibc ${libcVersionRaw} requires manual installation of libvips >= ${minimumLibvipsVersion}`));
}
}
}
if (libcFamily === detectLibc.MUSL && libcVersion) {
if (semverLessThan(libcVersion, '1.1.24')) {
handleError(new Error(`Use with musl ${libcVersion} requires manual installation of libvips >= ${minimumLibvipsVersion}`));
if (libcFamily === detectLibc.MUSL) {
if (semverLessThan(libcVersion, '1.1.24')) {
handleError(new Error(`Use with musl ${libcVersionRaw} requires manual installation of libvips >= ${minimumLibvipsVersion}`));
}
}
}
// Node.js minimum version check
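
A sketch of why the coercion above is needed (version strings are examples, not taken from a specific distribution):

```js
// libc versions such as '2.37' are not strict semver (no patch component),
// so coerce them before comparing against the minimum supported version.
const semverCoerce = require('semver/functions/coerce');
const semverLessThan = require('semver/functions/lt');

const raw = '2.37';                        // as reported by detect-libc
const coerced = semverCoerce(raw).version; // '2.37.0'
console.log(semverLessThan(coerced, semverCoerce('2.17').version)); // false
```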
@@ -152,7 +160,6 @@ try {
if (!semverSatisfies(process.versions.node, supportedNodeVersion)) {
handleError(new Error(`Expected Node.js version ${supportedNodeVersion} but found ${process.versions.node}`));
}
// Download to per-process temporary file
const tarFilename = ['libvips', minimumLibvipsVersionLabelled, platformAndArch].join('-') + '.tar.br';
const tarPathCache = path.join(libvips.cachePath(), tarFilename);
@@ -167,7 +174,7 @@ try {
} else {
const url = distBaseUrl + tarFilename;
libvips.log(`Downloading ${url}`);
simpleGet({ url: url, agent: agent() }, function (err, response) {
simpleGet({ url: url, agent: agent(libvips.log) }, function (err, response) {
if (err) {
fail(err);
} else if (response.statusCode === 404) {

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const url = require('url');
@@ -18,7 +21,7 @@ function env (key) {
return process.env[key];
}
module.exports = function () {
module.exports = function (log) {
try {
const proxy = new url.URL(proxies.map(env).find(is.string));
const tunnel = proxy.protocol === 'https:'
@@ -27,6 +30,7 @@ module.exports = function () {
const proxyAuth = proxy.username && proxy.password
? `${decodeURIComponent(proxy.username)}:${decodeURIComponent(proxy.password)}`
: null;
log(`Via proxy ${proxy.protocol}//${proxy.hostname}:${proxy.port} ${proxyAuth ? 'with' : 'no'} credentials`);
return tunnel({
proxy: {
port: Number(proxy.port),

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const is = require('./is');
@@ -98,7 +101,7 @@ function extractChannel (channel) {
} else {
throw is.invalidParameterError('channel', 'integer or one of: red, green, blue, alpha', channel);
}
return this.toColourspace('b-w');
return this;
}
/**

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const color = require('color');
@@ -67,7 +70,8 @@ function grayscale (grayscale) {
* Set the pipeline colourspace.
*
* The input image will be converted to the provided colourspace at the start of the pipeline.
* All operations will use this colourspace before converting to the output colourspace, as defined by {@link toColourspace}.
* All operations will use this colourspace before converting to the output colourspace,
* as defined by {@link #tocolourspace|toColourspace}.
*
* This feature is experimental and has not yet been fully-tested with all operations.
*
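
A usage sketch (not part of this diff) of the pipeline colourspace behaviour described above; input and output names are hypothetical:

```js
// Perform all operations in 16-bit RGB, then convert to sRGB for output
const sharp = require('sharp');

sharp('input.png')
  .pipelineColourspace('rgb16')
  .resize(640)
  .toColourspace('srgb')
  .toFile('output.png')
  .then(() => console.log('done'));
```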

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const is = require('./is');
@@ -43,7 +46,7 @@ const blend = {
* The images to composite must be the same size or smaller than the processed image.
* If both `top` and `left` options are provided, they take precedence over `gravity`.
*
* Any resize or rotate operations in the same processing pipeline
* Any resize, rotate or extract operations in the same processing pipeline
* will always be applied to the input image before composition.
*
* The `blend` option can be one of `clear`, `source`, `over`, `in`, `out`, `atop`,
@@ -105,14 +108,14 @@ const blend = {
* @param {string} [images[].input.text.align='left'] - text alignment (`'left'`, `'centre'`, `'center'`, `'right'`).
* @param {boolean} [images[].input.text.justify=false] - set this to true to apply justification to the text.
* @param {number} [images[].input.text.dpi=72] - the resolution (size) at which to render the text. Does not take effect if `height` is specified.
* @param {boolean} [images[].input.text.rgba=false] - set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for pango markup features like `<span foreground="red">Red!</span>`.
* @param {boolean} [images[].input.text.rgba=false] - set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for Pango markup features like `<span foreground="red">Red!</span>`.
* @param {number} [images[].input.text.spacing=0] - text line height in points. Will use the font line height if none is specified.
* @param {String} [images[].blend='over'] - how to blend this image with the image below.
* @param {String} [images[].gravity='centre'] - gravity at which to place the overlay.
* @param {Number} [images[].top] - the pixel offset from the top edge.
* @param {Number} [images[].left] - the pixel offset from the left edge.
* @param {Boolean} [images[].tile=false] - set to true to repeat the overlay image across the entire image with the given `gravity`.
* @param {Boolean} [images[].premultiplied=false] - set to true to avoid premultipling the image below. Equivalent to the `--premultiplied` vips option.
* @param {Boolean} [images[].premultiplied=false] - set to true to avoid premultiplying the image below. Equivalent to the `--premultiplied` vips option.
* @param {Number} [images[].density=72] - number representing the DPI for vector overlay image.
* @param {Object} [images[].raw] - describes overlay when using raw pixel data.
* @param {Number} [images[].raw.width]

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const util = require('util');
@@ -46,7 +49,7 @@ const debuglog = util.debuglog('sharp');
* readableStream.pipe(transformer).pipe(writableStream);
*
* @example
* // Create a blank 300x200 PNG image of semi-transluent red pixels
* // Create a blank 300x200 PNG image of semi-translucent red pixels
* sharp({
* create: {
* width: 300,
@@ -113,25 +116,25 @@ const debuglog = util.debuglog('sharp');
* }
* }).toFile('text_rgba.png');
*
* @param {(Buffer|Uint8Array|Uint8ClampedArray|Int8Array|Uint16Array|Int16Array|Uint32Array|Int32Array|Float32Array|Float64Array|string)} [input] - if present, can be
* a Buffer / Uint8Array / Uint8ClampedArray containing JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image data, or
* @param {(Buffer|ArrayBuffer|Uint8Array|Uint8ClampedArray|Int8Array|Uint16Array|Int16Array|Uint32Array|Int32Array|Float32Array|Float64Array|string)} [input] - if present, can be
* a Buffer / ArrayBuffer / Uint8Array / Uint8ClampedArray containing JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image data, or
* a TypedArray containing raw pixel image data, or
* a String containing the filesystem path to an JPEG, PNG, WebP, AVIF, GIF, SVG or TIFF image file.
* JPEG, PNG, WebP, AVIF, GIF, SVG, TIFF or raw pixel image data can be streamed into the object when not present.
* @param {Object} [options] - if present, is an Object with optional attributes.
* @param {string} [options.failOn='warning'] - level of sensitivity to invalid images, one of (in order of sensitivity): 'none' (least), 'truncated', 'error' or 'warning' (most), highers level imply lower levels.
* @param {string} [options.failOn='warning'] - when to abort processing of invalid pixel data, one of (in order of sensitivity): 'none' (least), 'truncated', 'error' or 'warning' (most), higher levels imply lower levels, invalid metadata will always abort.
* @param {number|boolean} [options.limitInputPixels=268402689] - Do not process input images where the number of pixels
* (width x height) exceeds this limit. Assumes image dimensions contained in the input metadata can be trusted.
* An integral Number of pixels, zero or false to remove limit, true to use default limit of 268402689 (0x3FFF x 0x3FFF).
* @param {boolean} [options.unlimited=false] - Set this to `true` to remove safety features that help prevent memory exhaustion (JPEG, PNG, SVG, HEIF).
* @param {boolean} [options.sequentialRead=false] - Set this to `true` to use sequential rather than random access where possible.
* This can reduce memory usage and might improve performance on some systems.
* @param {boolean} [options.sequentialRead=true] - Set this to `false` to use random access rather than sequential read. Some operations will do this automatically.
* @param {number} [options.density=72] - number representing the DPI for vector images in the range 1 to 100000.
* @param {number} [options.pages=1] - number of pages to extract for multi-page input (GIF, WebP, AVIF, TIFF, PDF), use -1 for all pages.
* @param {number} [options.page=0] - page number to start extracting from for multi-page input (GIF, WebP, AVIF, TIFF, PDF), zero based.
* @param {number} [options.ignoreIcc=false] - should the embedded ICC profile, if any, be ignored.
* @param {number} [options.pages=1] - Number of pages to extract for multi-page input (GIF, WebP, TIFF), use -1 for all pages.
* @param {number} [options.page=0] - Page number to start extracting from for multi-page input (GIF, WebP, TIFF), zero based.
* @param {number} [options.subifd=-1] - subIFD (Sub Image File Directory) to extract for OME-TIFF, defaults to main image.
* @param {number} [options.level=0] - level to extract from a multi-level input (OpenSlide), zero based.
* @param {boolean} [options.animated=false] - Set to `true` to read all frames/pages of an animated image (equivalent of setting `pages` to `-1`).
* @param {boolean} [options.animated=false] - Set to `true` to read all frames/pages of an animated image (GIF, WebP, TIFF), equivalent of setting `pages` to `-1`.
* @param {Object} [options.raw] - describes raw pixel input image data. See `raw()` for pixel ordering.
* @param {number} [options.raw.width] - integral number of pixels wide.
* @param {number} [options.raw.height] - integral number of pixels high.
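
A usage sketch of the `ArrayBuffer` input support added above (filenames are hypothetical):

```js
// Pass an ArrayBuffer, rather than a Buffer or TypedArray, as input
const fs = require('fs');
const sharp = require('sharp');

const buf = fs.readFileSync('input.jpg'); // Node.js Buffer (a Uint8Array view)
// Copy the view into a standalone ArrayBuffer, avoiding the shared Buffer pool
const arrayBuffer = buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);

sharp(arrayBuffer)
  .resize(320, 240)
  .toFile('output.webp')
  .then(info => console.log(info.format, info.size));
```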
@@ -151,13 +154,14 @@ const debuglog = util.debuglog('sharp');
* @param {string} [options.text.text] - text to render as a UTF-8 string. It can contain Pango markup, for example `<i>Le</i>Monde`.
* @param {string} [options.text.font] - font name to render with.
* @param {string} [options.text.fontfile] - absolute filesystem path to a font file that can be used by `font`.
* @param {number} [options.text.width=0] - integral number of pixels to word-wrap at. Lines of text wider than this will be broken at word boundaries.
* @param {number} [options.text.height=0] - integral number of pixels high. When defined, `dpi` will be ignored and the text will automatically fit the pixel resolution defined by `width` and `height`. Will be ignored if `width` is not specified or set to 0.
* @param {string} [options.text.align='left'] - text alignment (`'left'`, `'centre'`, `'center'`, `'right'`).
* @param {number} [options.text.width=0] - Integral number of pixels to word-wrap at. Lines of text wider than this will be broken at word boundaries.
* @param {number} [options.text.height=0] - Maximum integral number of pixels high. When defined, `dpi` will be ignored and the text will automatically fit the pixel resolution defined by `width` and `height`. Will be ignored if `width` is not specified or set to 0.
* @param {string} [options.text.align='left'] - Alignment style for multi-line text (`'left'`, `'centre'`, `'center'`, `'right'`).
* @param {boolean} [options.text.justify=false] - set this to true to apply justification to the text.
* @param {number} [options.text.dpi=72] - the resolution (size) at which to render the text. Does not take effect if `height` is specified.
* @param {boolean} [options.text.rgba=false] - set this to true to enable RGBA output. This is useful for colour emoji rendering, or support for pango markup features like `<span foreground="red">Red!</span>`.
* @param {number} [options.text.spacing=0] - text line height in points. Will use the font line height if none is specified.
* @param {string} [options.text.wrap='word'] - word wrapping style when width is provided, one of: 'word', 'char', 'charWord' (prefer char, fallback to word) or 'none'.
* @returns {Sharp}
* @throws {Error} Invalid parameters
*/
@@ -196,6 +200,7 @@ const Sharp = function (input, options) {
extendLeft: 0,
extendRight: 0,
extendBackground: [0, 0, 0, 255],
extendWith: 'background',
withoutEnlargement: false,
withoutReduction: false,
affineMatrix: [],
@@ -212,6 +217,7 @@ const Sharp = function (input, options) {
tintB: 128,
flatten: false,
flattenBackground: [0, 0, 0],
unflatten: false,
negate: false,
negateAlpha: true,
medianSize: 0,
@@ -230,6 +236,8 @@ const Sharp = function (input, options) {
gammaOut: 0,
greyscale: false,
normalise: false,
normaliseLower: 1,
normaliseUpper: 99,
claheWidth: 0,
claheHeight: 0,
claheMaxSlope: 3,
@@ -283,13 +291,17 @@ const Sharp = function (input, options) {
webpLossless: false,
webpNearLossless: false,
webpSmartSubsample: false,
webpPreset: 'default',
webpEffort: 4,
webpMinSize: false,
webpMixed: false,
gifBitdepth: 8,
gifEffort: 7,
gifDither: 1,
gifReoptimise: false,
gifInterFrameMaxError: 0,
gifInterPaletteMaxError: 3,
gifReuse: true,
gifProgressive: false,
tiffQuality: 80,
tiffCompression: 'jpeg',
tiffPredictor: 'horizontal',
@@ -306,6 +318,10 @@ const Sharp = function (input, options) {
heifCompression: 'av1',
heifEffort: 4,
heifChromaSubsampling: '4:4:4',
jxlDistance: 1,
jxlDecodingTier: 0,
jxlEffort: 7,
jxlLossless: false,
rawDepth: 'uchar',
tileSize: 256,
tileOverlap: 0,

lib/index.d.ts (vendored, 1661 lines): file diff suppressed because it is too large.
@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const Sharp = require('./constructor');

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const color = require('color');
@@ -21,9 +24,9 @@ const align = {
* @private
*/
function _inputOptionsFromObject (obj) {
const { raw, density, limitInputPixels, unlimited, sequentialRead, failOn, failOnError, animated, page, pages, subifd } = obj;
return [raw, density, limitInputPixels, unlimited, sequentialRead, failOn, failOnError, animated, page, pages, subifd].some(is.defined)
? { raw, density, limitInputPixels, unlimited, sequentialRead, failOn, failOnError, animated, page, pages, subifd }
const { raw, density, limitInputPixels, ignoreIcc, unlimited, sequentialRead, failOn, failOnError, animated, page, pages, subifd } = obj;
return [raw, density, limitInputPixels, ignoreIcc, unlimited, sequentialRead, failOn, failOnError, animated, page, pages, subifd].some(is.defined)
? { raw, density, limitInputPixels, ignoreIcc, unlimited, sequentialRead, failOn, failOnError, animated, page, pages, subifd }
: undefined;
}
@@ -35,8 +38,9 @@ function _createInputDescriptor (input, inputOptions, containerOptions) {
const inputDescriptor = {
failOn: 'warning',
limitInputPixels: Math.pow(0x3FFF, 2),
ignoreIcc: false,
unlimited: false,
sequentialRead: false
sequentialRead: true
};
if (is.string(input)) {
// filesystem
@@ -47,6 +51,11 @@ function _createInputDescriptor (input, inputOptions, containerOptions) {
throw Error('Input Buffer is empty');
}
inputDescriptor.buffer = input;
} else if (is.arrayBuffer(input)) {
if (input.byteLength === 0) {
throw Error('Input bit Array is empty');
}
inputDescriptor.buffer = Buffer.from(input, 0, input.byteLength);
} else if (is.typedArray(input)) {
if (input.length === 0) {
throw Error('Input Bit Array is empty');
@@ -92,6 +101,14 @@ function _createInputDescriptor (input, inputOptions, containerOptions) {
throw is.invalidParameterError('density', 'number between 1 and 100000', inputOptions.density);
}
}
// Ignore embedded ICC profile
if (is.defined(inputOptions.ignoreIcc)) {
if (is.bool(inputOptions.ignoreIcc)) {
inputDescriptor.ignoreIcc = inputOptions.ignoreIcc;
} else {
throw is.invalidParameterError('ignoreIcc', 'boolean', inputOptions.ignoreIcc);
}
}
// limitInputPixels
if (is.defined(inputOptions.limitInputPixels)) {
if (is.bool(inputOptions.limitInputPixels)) {
@@ -327,6 +344,13 @@ function _createInputDescriptor (input, inputOptions, containerOptions) {
throw is.invalidParameterError('text.spacing', 'number', inputOptions.text.spacing);
}
}
if (is.defined(inputOptions.text.wrap)) {
if (is.string(inputOptions.text.wrap) && is.inArray(inputOptions.text.wrap, ['word', 'char', 'wordChar', 'none'])) {
inputDescriptor.textWrap = inputOptions.text.wrap;
} else {
throw is.invalidParameterError('text.wrap', 'one of: word, char, wordChar, none', inputOptions.text.wrap);
}
}
delete inputDescriptor.buffer;
} else {
throw new Error('Expected a valid string to create an image with text.');
@@ -387,8 +411,9 @@ function _isStreamInput () {
/**
* Fast access to (uncached) image metadata without decoding any compressed pixel data.
*
* This is taken from the header of the input image.
* It does not include operations, such as resize, to be applied to the output image.
* This is read from the header of the input image.
* It does not take into consideration any operations to be applied to the output image,
* such as resize or rotate.
*
* Dimensions in the response will respect the `page` and `pages` properties of the
* {@link /api-constructor#parameters|constructor parameters}.
@@ -407,6 +432,7 @@ function _isStreamInput () {
* - `isProgressive`: Boolean indicating whether the image is interlaced using a progressive scan
* - `pages`: Number of pages/frames contained within the image, with support for TIFF, HEIF, PDF, animated GIF and animated WebP
* - `pageHeight`: Number of pixels high each page in a multi-page image will be.
* - `paletteBitDepth`: Bit depth of palette-based image (GIF, PNG).
* - `loop`: Number of times to loop an animated image, zero refers to a continuous loop.
* - `delay`: Delay in ms between each page in an animated image, provided as an array of integers.
* - `pagePrimary`: Number of the primary page in a HEIF image
@@ -423,6 +449,7 @@ function _isStreamInput () {
* - `iptc`: Buffer containing raw IPTC data, if present
* - `xmp`: Buffer containing raw XMP data, if present
* - `tifftagPhotoshop`: Buffer containing raw TIFFTAG_PHOTOSHOP data, if present
* - `formatMagick`: String containing format for images loaded via *magick
*
* @example
* const metadata = await sharp(input).metadata();
@@ -469,7 +496,7 @@ function metadata (callback) {
} else {
if (this._isStreamInput()) {
return new Promise((resolve, reject) => {
this.on('finish', () => {
const finished = () => {
this._flattenBufferIn();
sharp.metadata(this.options, (err, metadata) => {
if (err) {
@@ -478,7 +505,12 @@ function metadata (callback) {
resolve(metadata);
}
});
});
};
if (this.writableFinished) {
finished();
} else {
this.once('finish', finished);
}
});
} else {
return new Promise((resolve, reject) => {
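
A usage sketch of the Stream-based metadata path the change above relates to (filename is hypothetical):

```js
// metadata() resolves once the input Stream has finished being written
const fs = require('fs');
const sharp = require('sharp');

const pipeline = sharp();
fs.createReadStream('input.jpg').pipe(pipeline);
pipeline.metadata().then(meta => console.log(meta.format, meta.width, meta.height));
```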

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
/**
@@ -71,6 +74,14 @@ const typedArray = function (val) {
return false;
};
/**
* Is this value an ArrayBuffer object?
* @private
*/
const arrayBuffer = function (val) {
return val instanceof ArrayBuffer;
};
/**
* Is this value a non-empty string?
* @private
@@ -134,6 +145,7 @@ module.exports = {
bool: bool,
buffer: buffer,
typedArray: typedArray,
arrayBuffer: arrayBuffer,
string: string,
number: number,
integer: integer,

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const fs = require('fs');
@@ -85,8 +88,7 @@ const hasVendoredLibvips = function () {
/* istanbul ignore next */
const removeVendoredLibvips = function () {
const rm = fs.rmSync ? fs.rmSync : fs.rmdirSync;
rm(vendorPath, { recursive: true, maxRetries: 3, force: true });
fs.rmSync(vendorPath, { recursive: true, maxRetries: 3, force: true });
};
/* istanbul ignore next */
@@ -115,6 +117,7 @@ const useGlobalLibvips = function () {
}
/* istanbul ignore next */
if (isRosetta()) {
log('Detected Rosetta, skipping search for globally-installed libvips');
return false;
}
const globalVipsVersion = globalLibvipsVersion();

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const color = require('color');
@@ -8,7 +11,7 @@ const is = require('./is');
* or auto-orient based on the EXIF `Orientation` tag.
*
* If an angle is provided, it is converted to a valid positive degree rotation.
* For example, `-450` will produce a 270deg rotation.
* For example, `-450` will produce a 270 degree rotation.
*
* When rotating by an angle other than a multiple of 90,
* the background colour can be provided with the `background` option.
@@ -16,7 +19,7 @@ const is = require('./is');
* If no angle is provided, it is determined from the EXIF data.
* Mirroring is supported and may infer the use of a flip operation.
*
* The use of `rotate` implies the removal of the EXIF `Orientation` tag, if any.
* The use of `rotate` without an angle will remove the EXIF `Orientation` tag, if any.
*
* Only one rotation can occur per pipeline.
* Previous calls to `rotate` in the same pipeline will be ignored.
@@ -77,8 +80,10 @@ function rotate (angle, options) {
}
/**
* Flip the image about the vertical Y axis. This always occurs before rotation, if any.
* The use of `flip` implies the removal of the EXIF `Orientation` tag, if any.
* Mirror the image vertically (up-down) about the x-axis.
* This always occurs before rotation, if any.
*
* This operation does not work correctly with multi-page images.
*
* @example
* const output = await sharp(input).flip().toBuffer();
@@ -92,8 +97,8 @@ function flip (flip) {
}
/**
* Flop the image about the horizontal X axis. This always occurs before rotation, if any.
* The use of `flop` implies the removal of the EXIF `Orientation` tag, if any.
* Mirror the image horizontally (left-right) about the y-axis.
* This always occurs before rotation, if any.
*
* @example
* const output = await sharp(input).flop().toBuffer();
@@ -111,7 +116,7 @@ function flop (flop) {
*
* You must provide an array of length 4 or a 2x2 affine transformation matrix.
* By default, new pixels are filled with a black background. You can provide a background color with the `background` option.
* A particular interpolator may also be specified. Set the `interpolator` option to an attribute of the `sharp.interpolator` Object e.g. `sharp.interpolator.nohalo`.
* A particular interpolator may also be specified. Set the `interpolator` option to an attribute of the `sharp.interpolators` Object e.g. `sharp.interpolators.nohalo`.
*
* In the case of a 2x2 matrix, the transform is:
* - X = `matrix[0, 0]` \* (x + `idx`) + `matrix[0, 1]` \* (y + `idy`) + `odx`
@@ -128,7 +133,7 @@ function flop (flop) {
* const pipeline = sharp()
* .affine([[1, 0.3], [0.1, 0.7]], {
* background: 'white',
* interpolate: sharp.interpolators.nohalo
* interpolator: sharp.interpolators.nohalo
* })
* .toBuffer((err, outputBuffer, info) => {
* // outputBuffer contains the transformed image
@@ -205,9 +210,11 @@ function affine (matrix, options) {
/**
* Sharpen the image.
*
* When used without parameters, performs a fast, mild sharpen of the output image.
*
* When a `sigma` is provided, performs a slower, more accurate sharpen of the L channel in the LAB colour space.
* Separate control over the level of sharpening in "flat" and "jagged" areas is available.
* Fine-grained control over the level of sharpening in "flat" (m1) and "jagged" (m2) areas is available.
*
* See {@link https://www.libvips.org/API/current/libvips-convolution.html#vips-sharpen|libvips sharpen} operation.
*
@@ -229,13 +236,13 @@ function affine (matrix, options) {
* })
* .toBuffer();
*
* @param {Object|number} [options] - if present, is an Object with attributes or (deprecated) a number for `options.sigma`.
* @param {number} [options.sigma] - the sigma of the Gaussian mask, where `sigma = 1 + radius / 2`.
* @param {number} [options.m1=1.0] - the level of sharpening to apply to "flat" areas.
* @param {number} [options.m2=2.0] - the level of sharpening to apply to "jagged" areas.
* @param {number} [options.x1=2.0] - threshold between "flat" and "jagged"
* @param {number} [options.y2=10.0] - maximum amount of brightening.
* @param {number} [options.y3=20.0] - maximum amount of darkening.
* @param {Object|number} [options] - if present, is an Object with attributes
* @param {number} [options.sigma] - the sigma of the Gaussian mask, where `sigma = 1 + radius / 2`, between 0.000001 and 10
* @param {number} [options.m1=1.0] - the level of sharpening to apply to "flat" areas, between 0 and 1000000
* @param {number} [options.m2=2.0] - the level of sharpening to apply to "jagged" areas, between 0 and 1000000
* @param {number} [options.x1=2.0] - threshold between "flat" and "jagged", between 0 and 1000000
* @param {number} [options.y2=10.0] - maximum amount of brightening, between 0 and 1000000
* @param {number} [options.y3=20.0] - maximum amount of darkening, between 0 and 1000000
* @param {number} [flat] - (deprecated) see `options.m1`.
* @param {number} [jagged] - (deprecated) see `options.m2`.
* @returns {Sharp}
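A minimal usage sketch for the sharpen options listed above; the `input` value and the specific numbers are illustrative, not taken from the diff:

const sharpened = await sharp(input)
  .sharpen({ sigma: 2, m1: 0, m2: 3, x1: 3, y2: 15, y3: 15 })
  .toBuffer();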
@@ -268,44 +275,44 @@ function sharpen (options, flat, jagged) {
}
}
} else if (is.plainObject(options)) {
if (is.number(options.sigma) && is.inRange(options.sigma, 0.01, 10000)) {
if (is.number(options.sigma) && is.inRange(options.sigma, 0.000001, 10)) {
this.options.sharpenSigma = options.sigma;
} else {
throw is.invalidParameterError('options.sigma', 'number between 0.01 and 10000', options.sigma);
throw is.invalidParameterError('options.sigma', 'number between 0.000001 and 10', options.sigma);
}
if (is.defined(options.m1)) {
if (is.number(options.m1) && is.inRange(options.m1, 0, 10000)) {
if (is.number(options.m1) && is.inRange(options.m1, 0, 1000000)) {
this.options.sharpenM1 = options.m1;
} else {
throw is.invalidParameterError('options.m1', 'number between 0 and 10000', options.m1);
throw is.invalidParameterError('options.m1', 'number between 0 and 1000000', options.m1);
}
}
if (is.defined(options.m2)) {
if (is.number(options.m2) && is.inRange(options.m2, 0, 10000)) {
if (is.number(options.m2) && is.inRange(options.m2, 0, 1000000)) {
this.options.sharpenM2 = options.m2;
} else {
throw is.invalidParameterError('options.m2', 'number between 0 and 10000', options.m2);
throw is.invalidParameterError('options.m2', 'number between 0 and 1000000', options.m2);
}
}
if (is.defined(options.x1)) {
if (is.number(options.x1) && is.inRange(options.x1, 0, 10000)) {
if (is.number(options.x1) && is.inRange(options.x1, 0, 1000000)) {
this.options.sharpenX1 = options.x1;
} else {
throw is.invalidParameterError('options.x1', 'number between 0 and 10000', options.x1);
throw is.invalidParameterError('options.x1', 'number between 0 and 1000000', options.x1);
}
}
if (is.defined(options.y2)) {
if (is.number(options.y2) && is.inRange(options.y2, 0, 10000)) {
if (is.number(options.y2) && is.inRange(options.y2, 0, 1000000)) {
this.options.sharpenY2 = options.y2;
} else {
throw is.invalidParameterError('options.y2', 'number between 0 and 10000', options.y2);
throw is.invalidParameterError('options.y2', 'number between 0 and 1000000', options.y2);
}
}
if (is.defined(options.y3)) {
if (is.number(options.y3) && is.inRange(options.y3, 0, 10000)) {
if (is.number(options.y3) && is.inRange(options.y3, 0, 1000000)) {
this.options.sharpenY3 = options.y3;
} else {
throw is.invalidParameterError('options.y3', 'number between 0 and 10000', options.y3);
throw is.invalidParameterError('options.y3', 'number between 0 and 1000000', options.y3);
}
}
} else {
@@ -400,6 +407,32 @@ function flatten (options) {
return this;
}
/**
* Ensure the image has an alpha channel
* with all white pixel values made fully transparent.
*
* Existing alpha channel values for non-white pixels remain unchanged.
*
* This feature is experimental and the API may change.
*
* @since 0.32.1
*
* @example
* await sharp(rgbInput)
* .unflatten()
* .toBuffer();
*
* @example
* await sharp(rgbInput)
 * .threshold(128, { grayscale: false }) // convert bright pixels to white
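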
* .unflatten()
* .toBuffer();
*/
function unflatten () {
this.options.unflatten = true;
return this;
}
/**
* Apply a gamma correction by reducing the encoding (darken) pre-resize at a factor of `1/gamma`
* then increasing the encoding (brighten) post-resize at a factor of `gamma`.
@@ -464,16 +497,50 @@ function negate (options) {
}
/**
* Enhance output image contrast by stretching its luminance to cover the full dynamic range.
* Enhance output image contrast by stretching its luminance to cover a full dynamic range.
*
* Uses a histogram-based approach, taking a default range of 1% to 99% to reduce sensitivity to noise at the extremes.
*
* Luminance values below the `lower` percentile will be underexposed by clipping to zero.
* Luminance values above the `upper` percentile will be overexposed by clipping to the max pixel value.
*
* @example
* const output = await sharp(input).normalise().toBuffer();
* const output = await sharp(input)
* .normalise()
* .toBuffer();
*
* @param {Boolean} [normalise=true]
* @example
* const output = await sharp(input)
* .normalise({ lower: 0, upper: 100 })
* .toBuffer();
*
* @param {Object} [options]
* @param {number} [options.lower=1] - Percentile below which luminance values will be underexposed.
* @param {number} [options.upper=99] - Percentile above which luminance values will be overexposed.
* @returns {Sharp}
*/
function normalise (normalise) {
this.options.normalise = is.bool(normalise) ? normalise : true;
function normalise (options) {
if (is.plainObject(options)) {
if (is.defined(options.lower)) {
if (is.number(options.lower) && is.inRange(options.lower, 0, 99)) {
this.options.normaliseLower = options.lower;
} else {
throw is.invalidParameterError('lower', 'number between 0 and 99', options.lower);
}
}
if (is.defined(options.upper)) {
if (is.number(options.upper) && is.inRange(options.upper, 1, 100)) {
this.options.normaliseUpper = options.upper;
} else {
throw is.invalidParameterError('upper', 'number between 1 and 100', options.upper);
}
}
}
if (this.options.normaliseLower >= this.options.normaliseUpper) {
throw is.invalidParameterError('range', 'lower to be less than upper',
`${this.options.normaliseLower} >= ${this.options.normaliseUpper}`);
}
this.options.normalise = true;
return this;
}
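A rough sketch of the percentile options validated above (the values are illustrative):

// Clip the darkest 2% and brightest 2% of luminance values.
const stretched = await sharp(input)
  .normalise({ lower: 2, upper: 98 })
  .toBuffer();

// lower must be strictly less than upper, so this would throw:
// sharp(input).normalise({ lower: 50, upper: 50 });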
@@ -481,13 +548,17 @@ function normalise (normalise) {
* Alternative spelling of normalise.
*
* @example
* const output = await sharp(input).normalize().toBuffer();
* const output = await sharp(input)
* .normalize()
* .toBuffer();
*
* @param {Boolean} [normalize=true]
* @param {Object} [options]
* @param {number} [options.lower=1] - Percentile below which luminance values will be underexposed.
* @param {number} [options.upper=99] - Percentile above which luminance values will be overexposed.
* @returns {Sharp}
*/
function normalize (normalize) {
return this.normalise(normalize);
function normalize (options) {
return this.normalise(options);
}
/**
@@ -507,35 +578,34 @@ function normalize (normalize) {
* .toBuffer();
*
* @param {Object} options
* @param {number} options.width - integer width of the region in pixels.
* @param {number} options.height - integer height of the region in pixels.
* @param {number} [options.maxSlope=3] - maximum value for the slope of the
* cumulative histogram. A value of 0 disables contrast limiting. Valid values
* are integers in the range 0-100 (inclusive)
* @param {number} options.width - Integral width of the search window, in pixels.
* @param {number} options.height - Integral height of the search window, in pixels.
* @param {number} [options.maxSlope=3] - Integral level of brightening, between 0 and 100, where 0 disables contrast limiting.
* @returns {Sharp}
* @throws {Error} Invalid parameters
*/
function clahe (options) {
if (!is.plainObject(options)) {
if (is.plainObject(options)) {
if (is.integer(options.width) && options.width > 0) {
this.options.claheWidth = options.width;
} else {
throw is.invalidParameterError('width', 'integer greater than zero', options.width);
}
if (is.integer(options.height) && options.height > 0) {
this.options.claheHeight = options.height;
} else {
throw is.invalidParameterError('height', 'integer greater than zero', options.height);
}
if (is.defined(options.maxSlope)) {
if (is.integer(options.maxSlope) && is.inRange(options.maxSlope, 0, 100)) {
this.options.claheMaxSlope = options.maxSlope;
} else {
throw is.invalidParameterError('maxSlope', 'integer between 0 and 100', options.maxSlope);
}
}
} else {
throw is.invalidParameterError('options', 'plain object', options);
}
if (!('width' in options) || !is.integer(options.width) || options.width <= 0) {
throw is.invalidParameterError('width', 'integer above zero', options.width);
} else {
this.options.claheWidth = options.width;
}
if (!('height' in options) || !is.integer(options.height) || options.height <= 0) {
throw is.invalidParameterError('height', 'integer above zero', options.height);
} else {
this.options.claheHeight = options.height;
}
if (!is.defined(options.maxSlope)) {
this.options.claheMaxSlope = 3;
} else if (!is.integer(options.maxSlope) || options.maxSlope < 0 || options.maxSlope > 100) {
throw is.invalidParameterError('maxSlope', 'integer 0-100', options.maxSlope);
} else {
this.options.claheMaxSlope = options.maxSlope;
}
return this;
}
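A short sketch of the CLAHE options handled above (window size is illustrative):

// Local contrast enhancement using a 3x3 search window and the
// default maxSlope of 3.
const enhanced = await sharp(input)
  .clahe({ width: 3, height: 3 })
  .toBuffer();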
@@ -698,7 +768,7 @@ function linear (a, b) {
}
/**
* Recomb the image with the specified matrix.
* Recombine the image with the specified matrix.
*
* @since 0.21.1
*
@@ -711,7 +781,7 @@ function linear (a, b) {
* ])
* .raw()
* .toBuffer(function(err, data, info) {
* // data contains the raw pixel data after applying the recomb
* // data contains the raw pixel data after applying the matrix
* // With this example input, a sepia filter has been applied
* });
*
@@ -768,7 +838,7 @@ function recomb (inputMatrix) {
* .toBuffer();
*
* @example
* // decreate brightness and saturation while also hue-rotating by 90 degrees
* // decrease brightness and saturation while also hue-rotating by 90 degrees
* const output = await sharp(input)
* .modulate({
* brightness: 0.5,
@@ -833,6 +903,7 @@ module.exports = function (Sharp) {
median,
blur,
flatten,
unflatten,
gamma,
negate,
normalise,

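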

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const path = require('path');
@@ -22,10 +25,13 @@ const formats = new Map([
['jp2', 'jp2'],
['jpx', 'jp2'],
['j2k', 'jp2'],
['j2c', 'jp2']
['j2c', 'jp2'],
['jxl', 'jxl']
]);
const errJp2Save = new Error('JP2 output requires libvips with support for OpenJPEG');
const jp2Regex = /\.(jp[2x]|j2[kc])$/i;
const errJp2Save = () => new Error('JP2 output requires libvips with support for OpenJPEG');
const bitdepthFromColourCount = (colours) => 1 << 31 - Math.clz32(Math.ceil(Math.log2(colours)));
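The helper above rounds the bit depth implied by a palette size up to the next power of two; a sketch with explicit parentheses and a few illustrative inputs:

const bitdepth = (colours) => 1 << (31 - Math.clz32(Math.ceil(Math.log2(colours))));
console.log(bitdepth(2));   // 1
console.log(bitdepth(15));  // 4
console.log(bitdepth(200)); // 8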
@@ -37,7 +43,7 @@ const bitdepthFromColourCount = (colours) => 1 << 31 - Math.clz32(Math.ceil(Math
* Note that raw pixel data is only supported for buffer output.
*
* By default all metadata will be removed, which includes EXIF-based orientation.
* See {@link withMetadata} for control over this.
* See {@link #withmetadata|withMetadata} for control over this.
*
* The caller is responsible for ensuring directory structures and permissions exist.
*
@@ -58,6 +64,7 @@ const bitdepthFromColourCount = (colours) => 1 << 31 - Math.clz32(Math.ceil(Math
* `info` contains the output image `format`, `size` (bytes), `width`, `height`,
* `channels` and `premultiplied` (indicating if premultiplication was used).
* When using a crop strategy also contains `cropOffsetLeft` and `cropOffsetTop`.
* When using the attention crop strategy also contains `attentionX` and `attentionY`, the focal point of the cropped region.
* May also contain `textAutofitDpi` (dpi the font was rendered at) if image was created from text.
* @returns {Promise<Object>} - when no callback is provided
* @throws {Error} Invalid parameters
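A brief sketch of reading the `info` object described above (file names and dimensions are illustrative):

const info = await sharp('input.jpg')
  .resize(320, 240, { fit: 'cover', position: sharp.strategy.attention })
  .toFile('output.webp');
// info.format, info.size, info.width, info.height, info.channels and,
// for the attention strategy, info.attentionX / info.attentionY
console.log(info.width, info.height);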
@@ -68,6 +75,8 @@ function toFile (fileOut, callback) {
err = new Error('Missing output file path');
} else if (is.string(this.options.input.file) && path.resolve(this.options.input.file) === path.resolve(fileOut)) {
err = new Error('Cannot use same file for input and output');
} else if (jp2Regex.test(path.extname(fileOut)) && !this.constructor.format.jp2k.output.file) {
err = errJp2Save();
}
if (err) {
if (is.fn(callback)) {
@@ -86,12 +95,12 @@ function toFile (fileOut, callback) {
* Write output to a Buffer.
* JPEG, PNG, WebP, AVIF, TIFF, GIF and raw pixel data output are supported.
*
* Use {@link toFormat} or one of the format-specific functions such as {@link jpeg}, {@link png} etc. to set the output format.
* Use {@link #toformat|toFormat} or one of the format-specific functions such as {@link jpeg}, {@link png} etc. to set the output format.
*
* If no explicit format is set, the output format will match the input image, except SVG input which becomes PNG output.
*
* By default all metadata will be removed, which includes EXIF-based orientation.
* See {@link withMetadata} for control over this.
* See {@link #withmetadata|withMetadata} for control over this.
*
* `callback`, if present, gets three arguments `(err, data, info)` where:
* - `err` is an error, if any.
@@ -153,8 +162,8 @@ function toBuffer (options, callback) {
/**
* Include all metadata (EXIF, XMP, IPTC) from the input image in the output image.
* This will also convert to and add a web-friendly sRGB ICC profile unless a custom
* output profile is provided.
* This will also convert to and add a web-friendly sRGB ICC profile if appropriate,
* unless a custom output profile is provided.
*
* The default behaviour, when `withMetadata` is not used, is to convert to the device-independent
* sRGB colour space and strip all metadata, including the removal of any ICC profile.
@@ -168,12 +177,18 @@ function toBuffer (options, callback) {
* .then(info => { ... });
*
* @example
* // Set "IFD0-Copyright" in output EXIF metadata
* // Set output EXIF metadata
* const data = await sharp(input)
* .withMetadata({
* exif: {
* IFD0: {
* Copyright: 'Wernham Hogg'
* Copyright: 'The National Gallery'
* },
* IFD3: {
* GPSLatitudeRef: 'N',
* GPSLatitude: '51/1 30/1 3230/100',
* GPSLongitudeRef: 'W',
* GPSLongitude: '0/1 7/1 4366/100'
* }
* }
* })
@@ -187,7 +202,7 @@ function toBuffer (options, callback) {
*
* @param {Object} [options]
* @param {number} [options.orientation] value between 1 and 8, used to update the EXIF `Orientation` tag.
* @param {string} [options.icc] filesystem path to output ICC profile, defaults to sRGB.
* @param {string} [options.icc='srgb'] Filesystem path to output ICC profile, relative to `process.cwd()`, defaults to built-in sRGB.
* @param {Object<Object>} [options.exif={}] Object keyed by IFD0, IFD1 etc. of key/value string pairs to write as EXIF data.
* @param {number} [options.density] Number of pixels per inch (DPI).
* @returns {Sharp}
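A short sketch combining the options listed above; the profile path and density are illustrative:

await sharp(input)
  .withMetadata({
    density: 300,
    icc: 'profiles/AdobeRGB1998.icc' // resolved relative to process.cwd()
  })
  .toFile('print-ready.jpg');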
@@ -468,6 +483,7 @@ function png (options) {
* @param {boolean} [options.lossless=false] - use lossless compression mode
* @param {boolean} [options.nearLossless=false] - use near_lossless compression mode
* @param {boolean} [options.smartSubsample=false] - use high quality chroma subsampling
* @param {string} [options.preset='default'] - named preset for preprocessing/filtering, one of: default, photo, picture, drawing, icon, text
* @param {number} [options.effort=4] - CPU effort, between 0 (fastest) and 6 (slowest)
* @param {number} [options.loop=0] - number of animation iterations, use 0 for infinite animation
* @param {number|number[]} [options.delay] - delay(s) between animation frames (in milliseconds)
@@ -502,6 +518,13 @@ function webp (options) {
if (is.defined(options.smartSubsample)) {
this._setBooleanOption('webpSmartSubsample', options.smartSubsample);
}
if (is.defined(options.preset)) {
if (is.string(options.preset) && is.inArray(options.preset, ['default', 'photo', 'picture', 'drawing', 'icon', 'text'])) {
this.options.webpPreset = options.preset;
} else {
throw is.invalidParameterError('preset', 'one of: default, photo, picture, drawing, icon, text', options.preset);
}
}
if (is.defined(options.effort)) {
if (is.integer(options.effort) && is.inRange(options.effort, 0, 6)) {
this.options.webpEffort = options.effort;
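A usage sketch for the new `preset` option validated above (quality and effort values are illustrative):

const photo = await sharp(input)
  .webp({ preset: 'photo', quality: 80, effort: 4 })
  .toBuffer();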
@@ -547,13 +570,21 @@ function webp (options) {
* .gif({ dither: 0 })
* .toBuffer();
*
* @example
* // Lossy file size reduction of animated GIF
* await sharp('in.gif', { animated: true })
* .gif({ interFrameMaxError: 8 })
* .toFile('optim.gif');
*
* @param {Object} [options] - output options
* @param {boolean} [options.reoptimise=false] - always generate new palettes (slow), re-use existing by default
* @param {boolean} [options.reoptimize=false] - alternative spelling of `options.reoptimise`
* @param {boolean} [options.reuse=true] - re-use existing palette, otherwise generate new (slow)
* @param {boolean} [options.progressive=false] - use progressive (interlace) scan
* @param {number} [options.colours=256] - maximum number of palette entries, including transparency, between 2 and 256
* @param {number} [options.colors=256] - alternative spelling of `options.colours`
* @param {number} [options.effort=7] - CPU effort, between 1 (fastest) and 10 (slowest)
* @param {number} [options.dither=1.0] - level of Floyd-Steinberg error diffusion, between 0 (least) and 1 (most)
* @param {number} [options.interFrameMaxError=0] - maximum inter-frame error for transparency, between 0 (lossless) and 32
* @param {number} [options.interPaletteMaxError=3] - maximum inter-palette error for palette reuse, between 0 and 256
* @param {number} [options.loop=0] - number of animation iterations, use 0 for infinite animation
* @param {number|number[]} [options.delay] - delay(s) between animation frames (in milliseconds)
* @param {boolean} [options.force=true] - force GIF output, otherwise attempt to use input format
@@ -562,10 +593,11 @@ function webp (options) {
*/
function gif (options) {
if (is.object(options)) {
if (is.defined(options.reoptimise)) {
this._setBooleanOption('gifReoptimise', options.reoptimise);
} else if (is.defined(options.reoptimize)) {
this._setBooleanOption('gifReoptimise', options.reoptimize);
if (is.defined(options.reuse)) {
this._setBooleanOption('gifReuse', options.reuse);
}
if (is.defined(options.progressive)) {
this._setBooleanOption('gifProgressive', options.progressive);
}
const colours = options.colours || options.colors;
if (is.defined(colours)) {
@@ -589,11 +621,26 @@ function gif (options) {
throw is.invalidParameterError('dither', 'number between 0.0 and 1.0', options.dither);
}
}
if (is.defined(options.interFrameMaxError)) {
if (is.number(options.interFrameMaxError) && is.inRange(options.interFrameMaxError, 0, 32)) {
this.options.gifInterFrameMaxError = options.interFrameMaxError;
} else {
throw is.invalidParameterError('interFrameMaxError', 'number between 0.0 and 32.0', options.interFrameMaxError);
}
}
if (is.defined(options.interPaletteMaxError)) {
if (is.number(options.interPaletteMaxError) && is.inRange(options.interPaletteMaxError, 0, 256)) {
this.options.gifInterPaletteMaxError = options.interPaletteMaxError;
} else {
throw is.invalidParameterError('interPaletteMaxError', 'number between 0.0 and 256.0', options.interPaletteMaxError);
}
}
}
trySetAnimationOptions(options, this.options);
return this._updateFormatOut('gif', options);
}
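A sketch of the reworked GIF options; the error thresholds are illustrative:

// Trade some quality for a smaller animated GIF by allowing
// inter-frame transparency and inter-palette errors.
await sharp('in.gif', { animated: true })
  .gif({ reuse: true, interFrameMaxError: 8, interPaletteMaxError: 16 })
  .toFile('smaller.gif');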
/* istanbul ignore next */
/**
* Use these JP2 options for output image.
*
@@ -627,10 +674,9 @@ function gif (options) {
* @returns {Sharp}
* @throws {Error} Invalid options
*/
/* istanbul ignore next */
function jp2 (options) {
if (!this.constructor.format.jp2k.output.buffer) {
throw errJp2Save;
throw errJp2Save();
}
if (is.object(options)) {
if (is.defined(options.quality)) {
@@ -663,7 +709,7 @@ function jp2 (options) {
}
if (is.defined(options.chromaSubsampling)) {
if (is.string(options.chromaSubsampling) && is.inArray(options.chromaSubsampling, ['4:2:0', '4:4:4'])) {
this.options.heifChromaSubsampling = options.chromaSubsampling;
this.options.jp2ChromaSubsampling = options.chromaSubsampling;
} else {
throw is.invalidParameterError('chromaSubsampling', 'one of: 4:2:0, 4:4:4', options.chromaSubsampling);
}
@@ -708,7 +754,8 @@ function trySetAnimationOptions (source, target) {
/**
* Use these TIFF options for output image.
*
* The `density` can be set in pixels/inch via {@link withMetadata} instead of providing `xres` and `yres` in pixels/mm.
* The `density` can be set in pixels/inch via {@link #withmetadata|withMetadata}
* instead of providing `xres` and `yres` in pixels/mm.
*
* @example
* // Convert SVG input to LZW-compressed, 1 bit per pixel TIFF output
@@ -912,6 +959,71 @@ function heif (options) {
return this._updateFormatOut('heif', options);
}
/**
* Use these JPEG-XL (JXL) options for output image.
*
* This feature is experimental, please do not use in production systems.
*
* Requires libvips compiled with support for libjxl.
* The prebuilt binaries do not include this - see
* {@link https://sharp.pixelplumbing.com/install#custom-libvips installing a custom libvips}.
*
* Image metadata (EXIF, XMP) is unsupported.
*
* @since 0.31.3
*
* @param {Object} [options] - output options
* @param {number} [options.distance=1.0] - maximum encoding error, between 0 (highest quality) and 15 (lowest quality)
* @param {number} [options.quality] - calculate `distance` based on JPEG-like quality, between 1 and 100, overrides distance if specified
* @param {number} [options.decodingTier=0] - target decode speed tier, between 0 (highest quality) and 4 (lowest quality)
* @param {boolean} [options.lossless=false] - use lossless compression
* @param {number} [options.effort=7] - CPU effort, between 3 (fastest) and 9 (slowest)
* @returns {Sharp}
* @throws {Error} Invalid options
*/
function jxl (options) {
if (is.object(options)) {
if (is.defined(options.quality)) {
if (is.integer(options.quality) && is.inRange(options.quality, 1, 100)) {
// https://github.com/libjxl/libjxl/blob/0aeea7f180bafd6893c1db8072dcb67d2aa5b03d/tools/cjxl_main.cc#L640-L644
this.options.jxlDistance = options.quality >= 30
? 0.1 + (100 - options.quality) * 0.09
: 53 / 3000 * options.quality * options.quality - 23 / 20 * options.quality + 25;
} else {
throw is.invalidParameterError('quality', 'integer between 1 and 100', options.quality);
}
} else if (is.defined(options.distance)) {
if (is.number(options.distance) && is.inRange(options.distance, 0, 15)) {
this.options.jxlDistance = options.distance;
} else {
throw is.invalidParameterError('distance', 'number between 0.0 and 15.0', options.distance);
}
}
if (is.defined(options.decodingTier)) {
if (is.integer(options.decodingTier) && is.inRange(options.decodingTier, 0, 4)) {
this.options.jxlDecodingTier = options.decodingTier;
} else {
throw is.invalidParameterError('decodingTier', 'integer between 0 and 4', options.decodingTier);
}
}
if (is.defined(options.lossless)) {
if (is.bool(options.lossless)) {
this.options.jxlLossless = options.lossless;
} else {
throw is.invalidParameterError('lossless', 'boolean', options.lossless);
}
}
if (is.defined(options.effort)) {
if (is.integer(options.effort) && is.inRange(options.effort, 3, 9)) {
this.options.jxlEffort = options.effort;
} else {
throw is.invalidParameterError('effort', 'integer between 3 and 9', options.effort);
}
}
}
return this._updateFormatOut('jxl', options);
}
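Given the quality-to-distance mapping implemented above, quality 90 corresponds to distance 1.0 and quality 100 to 0.1. A sketch, assuming a libvips built with libjxl (file names are illustrative):

// These two calls produce equivalent encoder settings.
await sharp(input).jxl({ quality: 90 }).toFile('out-q90.jxl');
await sharp(input).jxl({ distance: 1.0, effort: 7 }).toFile('out-d1.jxl');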
/**
* Force output to be raw, uncompressed pixel data.
* Pixel ordering is left-to-right, top-to-bottom, without padding.
@@ -934,6 +1046,7 @@ function heif (options) {
*
* @param {Object} [options] - output options
* @param {string} [options.depth='uchar'] - bit depth, one of: char, uchar (default), short, ushort, int, uint, float, complex, double, dpcomplex
* @returns {Sharp}
* @throws {Error} Invalid options
*/
function raw (options) {
@@ -959,6 +1072,10 @@ function raw (options) {
*
* The container will be set to `zip` when the output is a Buffer or Stream, otherwise it will default to `fs`.
*
* Requires libvips compiled with support for libgsf.
* The prebuilt binaries do not include this - see
* {@link https://sharp.pixelplumbing.com/install#custom-libvips installing a custom libvips}.
*
* @example
* sharp('input.tiff')
* .png()
@@ -1282,6 +1399,7 @@ module.exports = function (Sharp) {
tiff,
avif,
heif,
jxl,
gif,
raw,
tile,

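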

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const detectLibc = require('detect-libc');

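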

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const is = require('./is');
@@ -36,6 +39,18 @@ const position = {
'left top': 8
};
/**
* How to extend the image.
* @member
* @private
*/
const extendWith = {
background: 'background',
copy: 'copy',
repeat: 'repeat',
mirror: 'mirror'
};
/**
* Strategies for automagic cover behaviour.
* @member
@@ -103,7 +118,7 @@ function isResizeExpected (options) {
* Resize image to `width`, `height` or `width x height`.
*
* When both a `width` and `height` are provided, the possible methods by which the image should **fit** these are:
* - `cover`: (default) Preserving aspect ratio, ensure the image covers both provided dimensions by cropping/clipping to fit.
* - `cover`: (default) Preserving aspect ratio, attempt to ensure the image covers both provided dimensions by cropping/clipping to fit.
* - `contain`: Preserving aspect ratio, contain within both provided dimensions using "letterboxing" where necessary.
* - `fill`: Ignore the aspect ratio of the input and stretch to both provided dimensions.
* - `inside`: Preserving aspect ratio, resize the image to be as large as possible while ensuring its dimensions are less than or equal to both those specified.
@@ -111,7 +126,9 @@ function isResizeExpected (options) {
*
* Some of these values are based on the [object-fit](https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit) CSS property.
*
* When using a `fit` of `cover` or `contain`, the default **position** is `centre`. Other options are:
* <img alt="Examples of various values for the fit property when resizing" width="100%" style="aspect-ratio: 998/243" src="https://cdn.jsdelivr.net/gh/lovell/sharp@main/docs/image/api-resize-fit.svg">
*
* When using a **fit** of `cover` or `contain`, the default **position** is `centre`. Other options are:
* - `sharp.position`: `top`, `right top`, `right`, `right bottom`, `bottom`, `left bottom`, `left`, `left top`.
* - `sharp.gravity`: `north`, `northeast`, `east`, `southeast`, `south`, `southwest`, `west`, `northwest`, `center` or `centre`.
* - `sharp.strategy`: `cover` only, dynamically crop using either the `entropy` or `attention` strategy.
@@ -214,32 +231,32 @@ function isResizeExpected (options) {
* .toBuffer()
* );
*
* @param {number} [width] - pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height.
* @param {number} [height] - pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width.
* @param {number} [width] - How many pixels wide the resultant image should be. Use `null` or `undefined` to auto-scale the width to match the height.
* @param {number} [height] - How many pixels high the resultant image should be. Use `null` or `undefined` to auto-scale the height to match the width.
* @param {Object} [options]
* @param {String} [options.width] - alternative means of specifying `width`. If both are present this take priority.
* @param {String} [options.height] - alternative means of specifying `height`. If both are present this take priority.
* @param {String} [options.fit='cover'] - how the image should be resized to fit both provided dimensions, one of `cover`, `contain`, `fill`, `inside` or `outside`.
* @param {String} [options.position='centre'] - position, gravity or strategy to use when `fit` is `cover` or `contain`.
* @param {number} [options.width] - An alternative means of specifying `width`. If both are present this takes priority.
* @param {number} [options.height] - An alternative means of specifying `height`. If both are present this takes priority.
* @param {String} [options.fit='cover'] - How the image should be resized/cropped to fit the target dimension(s), one of `cover`, `contain`, `fill`, `inside` or `outside`.
* @param {String} [options.position='centre'] - A position, gravity or strategy to use when `fit` is `cover` or `contain`.
* @param {String|Object} [options.background={r: 0, g: 0, b: 0, alpha: 1}] - background colour when `fit` is `contain`, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency.
* @param {String} [options.kernel='lanczos3'] - the kernel to use for image reduction.
* @param {Boolean} [options.withoutEnlargement=false] - do not enlarge if the width *or* height are already less than the specified dimensions, equivalent to GraphicsMagick's `>` geometry option.
* @param {Boolean} [options.withoutReduction=false] - do not reduce if the width *or* height are already greater than the specified dimensions, equivalent to GraphicsMagick's `<` geometry option.
* @param {Boolean} [options.fastShrinkOnLoad=true] - take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern on some images.
* @param {String} [options.kernel='lanczos3'] - The kernel to use for image reduction. Use the `fastShrinkOnLoad` option to control kernel vs shrink-on-load.
* @param {Boolean} [options.withoutEnlargement=false] - Do not scale up if the width *or* height are already less than the target dimensions, equivalent to GraphicsMagick's `>` geometry option. This may result in output dimensions smaller than the target dimensions.
* @param {Boolean} [options.withoutReduction=false] - Do not scale down if the width *or* height are already greater than the target dimensions, equivalent to GraphicsMagick's `<` geometry option. This may still result in a crop to reach the target dimensions.
* @param {Boolean} [options.fastShrinkOnLoad=true] - Take greater advantage of the JPEG and WebP shrink-on-load feature, which can lead to a slight moiré pattern or round-down of an auto-scaled dimension.
* @returns {Sharp}
* @throws {Error} Invalid parameters
*/
function resize (width, height, options) {
function resize (widthOrOptions, height, options) {
if (isResizeExpected(this.options)) {
this.options.debuglog('ignoring previous resize options');
}
if (is.defined(width)) {
if (is.object(width) && !is.defined(options)) {
options = width;
} else if (is.integer(width) && width > 0) {
this.options.width = width;
if (is.defined(widthOrOptions)) {
if (is.object(widthOrOptions) && !is.defined(options)) {
options = widthOrOptions;
} else if (is.integer(widthOrOptions) && widthOrOptions > 0) {
this.options.width = widthOrOptions;
} else {
throw is.invalidParameterError('width', 'positive integer', width);
throw is.invalidParameterError('width', 'positive integer', widthOrOptions);
}
} else {
this.options.width = -1;
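Both calling conventions handled by `widthOrOptions` remain available; a brief sketch with illustrative dimensions:

// Positional width/height...
await sharp(input).resize(640, 480, { fit: 'inside' }).toBuffer();

// ...or a single options object.
await sharp(input)
  .resize({ width: 640, height: 480, fit: 'contain', background: '#ffffff' })
  .toBuffer();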
@@ -320,7 +337,8 @@ function resize (width, height, options) {
}
/**
* Extends/pads the edges of the image with the provided background colour.
* Extend / pad / extrude one or more edges of the image with either
* the provided background colour or pixels derived from the image.
* This operation will always occur after resizing and extraction, if any.
*
* @example
@@ -346,11 +364,21 @@ function resize (width, height, options) {
* })
* ...
*
* @example
* // Extrude image by 8 pixels to the right, mirroring existing right hand edge
* sharp(input)
* .extend({
* right: 8,
 * extendWith: 'mirror'
* })
* ...
*
* @param {(number|Object)} extend - single pixel count to add to all edges or an Object with per-edge counts
* @param {number} [extend.top=0]
* @param {number} [extend.left=0]
* @param {number} [extend.bottom=0]
* @param {number} [extend.right=0]
* @param {String} [extend.extendWith='background'] - populate new pixels using this method, one of: background, copy, repeat, mirror.
* @param {String|Object} [extend.background={r: 0, g: 0, b: 0, alpha: 1}] - background colour, parsed by the [color](https://www.npmjs.org/package/color) module, defaults to black without transparency.
* @returns {Sharp}
* @throws {Error} Invalid parameters
@@ -391,6 +419,13 @@ function extend (extend) {
}
}
this._setBackgroundColourOption('extendBackground', extend.background);
if (is.defined(extend.extendWith)) {
if (is.string(extendWith[extend.extendWith])) {
this.options.extendWith = extendWith[extend.extendWith];
} else {
throw is.invalidParameterError('extendWith', 'one of: background, copy, repeat, mirror', extend.extendWith);
}
}
} else {
throw is.invalidParameterError('extend', 'integer or object', extend);
}
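A sketch of the `extendWith` handling above (pixel counts are illustrative):

// Pad 16 pixels on every edge by repeating the nearest edge pixels,
// rather than filling with a background colour.
await sharp(input)
  .extend({ top: 16, bottom: 16, left: 16, right: 16, extendWith: 'repeat' })
  .toBuffer();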


@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const platformAndArch = require('./platform')();


@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const fs = require('fs');
@@ -43,7 +46,7 @@ const interpolators = {
};
/**
* An Object containing the version numbers of libvips and its dependencies.
* An Object containing the version numbers of sharp, libvips and its dependencies.
* @member
* @example
* console.log(sharp.versions);
@@ -54,6 +57,7 @@ let versions = {
try {
versions = require(`../vendor/${versions.vips}/${platformAndArch}/versions.json`);
} catch (_err) { /* ignore */ }
versions.sharp = require('../package.json').version;
/**
* An Object containing the platform and architecture
@@ -198,6 +202,72 @@ function simd (simd) {
}
simd(true);
/**
* Block libvips operations at runtime.
*
* This is in addition to the `VIPS_BLOCK_UNTRUSTED` environment variable,
* which when set will block all "untrusted" operations.
*
* @since 0.32.4
*
* @example <caption>Block all TIFF input.</caption>
* sharp.block({
* operation: ['VipsForeignLoadTiff']
* });
*
* @param {Object} options
* @param {Array<string>} options.operation - List of libvips low-level operation names to block.
*/
function block (options) {
if (is.object(options)) {
if (Array.isArray(options.operation) && options.operation.every(is.string)) {
sharp.block(options.operation, true);
} else {
throw is.invalidParameterError('operation', 'Array<string>', options.operation);
}
} else {
throw is.invalidParameterError('options', 'object', options);
}
}
/**
* Unblock libvips operations at runtime.
*
* This is useful for defining a list of allowed operations.
*
* @since 0.32.4
*
* @example <caption>Block all input except WebP from the filesystem.</caption>
* sharp.block({
* operation: ['VipsForeignLoad']
* });
* sharp.unblock({
* operation: ['VipsForeignLoadWebpFile']
* });
*
* @example <caption>Block all input except JPEG and PNG from a Buffer or Stream.</caption>
* sharp.block({
* operation: ['VipsForeignLoad']
* });
* sharp.unblock({
* operation: ['VipsForeignLoadJpegBuffer', 'VipsForeignLoadPngBuffer']
* });
*
* @param {Object} options
* @param {Array<string>} options.operation - List of libvips low-level operation names to unblock.
*/
function unblock (options) {
if (is.object(options)) {
if (Array.isArray(options.operation) && options.operation.every(is.string)) {
sharp.block(options.operation, false);
} else {
throw is.invalidParameterError('operation', 'Array<string>', options.operation);
}
} else {
throw is.invalidParameterError('options', 'object', options);
}
}
/**
* Decorate the Sharp class with utility-related functions.
* @private
@@ -212,4 +282,6 @@ module.exports = function (Sharp) {
Sharp.versions = versions;
Sharp.vendor = vendor;
Sharp.queue = queue;
Sharp.block = block;
Sharp.unblock = unblock;
};


@@ -1,7 +1,7 @@
{
"name": "sharp",
"description": "High performance Node.js image processing, the fastest module to resize JPEG, PNG, WebP, GIF, AVIF and TIFF images",
"version": "0.31.1",
"version": "0.32.6",
"author": "Lovell Fuller <npm@lovell.info>",
"homepage": "https://github.com/lovell/sharp",
"contributors": [
@@ -85,21 +85,24 @@
"Brodan <christopher.hranj@gmail.com",
"Ankur Parihar <ankur.github@gmail.com>",
"Brahim Ait elhaj <brahima@gmail.com>",
"Mart Jansink <m.jansink@gmail.com>"
"Mart Jansink <m.jansink@gmail.com>",
"Lachlan Newman <lachnewman007@gmail.com>"
],
"scripts": {
"install": "(node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-compile && node-gyp rebuild && node install/dll-copy)",
"clean": "rm -rf node_modules/ build/ vendor/ .nyc_output/ coverage/ test/fixtures/output.*",
"test": "npm run test-lint && npm run test-unit && npm run test-licensing",
"test": "npm run test-lint && npm run test-unit && npm run test-licensing && npm run test-types",
"test-lint": "semistandard && cpplint",
"test-unit": "nyc --reporter=lcov --reporter=text --check-coverage --branches=100 mocha --slow=1000 --timeout=20000 ./test/unit/*.js",
"test-unit": "nyc --reporter=lcov --reporter=text --check-coverage --branches=100 mocha",
"test-licensing": "license-checker --production --summary --onlyAllow=\"Apache-2.0;BSD;ISC;MIT\"",
"test-leak": "./test/leak/leak.sh",
"docs-build": "documentation lint lib && node docs/build && node docs/search-index/build",
"test-types": "tsd",
"docs-build": "node docs/build && node docs/search-index/build",
"docs-serve": "cd docs && npx serve",
"docs-publish": "cd docs && npx firebase-tools deploy --project pixelplumbing --only hosting:pixelplumbing-sharp"
},
"main": "lib/index.js",
"types": "lib/index.d.ts",
"files": [
"binding.gyp",
"install/**",
@@ -130,44 +133,45 @@
],
"dependencies": {
"color": "^4.2.3",
"detect-libc": "^2.0.1",
"node-addon-api": "^5.0.0",
"detect-libc": "^2.0.2",
"node-addon-api": "^6.1.0",
"prebuild-install": "^7.1.1",
"semver": "^7.3.7",
"semver": "^7.5.4",
"simple-get": "^4.0.1",
"tar-fs": "^2.1.1",
"tar-fs": "^3.0.4",
"tunnel-agent": "^0.6.0"
},
"devDependencies": {
"@types/node": "*",
"async": "^3.2.4",
"cc": "^3.0.1",
"documentation": "^14.0.0",
"exif-reader": "^1.0.3",
"exif-reader": "^1.2.0",
"extract-zip": "^2.0.1",
"icc": "^2.0.0",
"icc": "^3.0.0",
"jsdoc-to-markdown": "^8.0.0",
"license-checker": "^25.0.1",
"mocha": "^10.0.0",
"mock-fs": "^5.1.4",
"mocha": "^10.2.0",
"mock-fs": "^5.2.0",
"nyc": "^15.1.0",
"prebuild": "^11.0.4",
"rimraf": "^3.0.2",
"semistandard": "^16.0.1"
"prebuild": "^12.0.0",
"semistandard": "^16.0.1",
"tsd": "^0.29.0"
},
"license": "Apache-2.0",
"config": {
"libvips": "8.13.2",
"libvips": "8.14.5",
"integrity": {
"darwin-arm64v8": "sha512-4tsE/HMQDT9srV/ovSJlr7IxKnhvH9qpArCAf5Xpb/uNcAiT7BcZ+HYwX2lbf3UY8REB1TR4ThEL/lmPnzMUHw==",
"darwin-x64": "sha512-D4ZSvlgLpf+KzKB2OD+K8NWl0JKzzIbvWwIjjwBycIHTMkaiams3Kp/AQ/bKudqof02Ks6LtP0X4XWvCaoRoUA==",
"linux-arm64v8": "sha512-9ZvUM2NBluhoeUz9X7/zJ48xJ5d7KzI1cO6lsiv4HKo5fOYw/vEY28XodFJzhyfu9NuKxh3Hs9FtoQGNvvAFkw==",
"linux-armv6": "sha512-vu0R8DF0k7KseU62fzrJadHNk5oeJriFLVn3KxCKEfV+Wkj7rX4lQhiPmOuD7/wRcUY+GGdoZ52vysDwMQhfzA==",
"linux-armv7": "sha512-UdfhJTjGFgrwc3Kaos5G1ZAK2+t/16Prtnl6FAT+m7cG5EXzYAqzgvk4qtakAH7UTnVe8MUgOfbTLt0YiRpfsg==",
"linux-x64": "sha512-sv92VpPyN+3oBv0vi4wDjx51demGdtyhEjd+vDfC3h8S/RSuIUE9Pt/+dBFuf+iv9tRdIq9hH9vzAvsLVy6NYg==",
"linuxmusl-arm64v8": "sha512-TjhK/wHAS/m55l46T8PZ0qvlK+PKYFZGTQfh+c9aG8/z1v/VtG7TQOLNmPWfg0SFDTkXV7YqnJCqvgYLmJPZUg==",
"linuxmusl-x64": "sha512-/su96pn/H9+lDdnlM1xB2whWEoeEDJICFp/RNRJb0+bJPJhnL/IDVIhF4VnVNBq/9AlldBWii3hqMq5rY2eEAA==",
"win32-arm64v8": "sha512-UnSmwCcx3F5u4UOXyrdwTdYsuMK/RtQYc+1y+QxqIkBHiSL7dOlTIH/vKOSQvSaDQTPqxVLFt3wkMN1U7LZwyg==",
"win32-ia32": "sha512-KH/H6vpx5lJ6NEzLQmwxU/QnDg8p1Jxd+WKaPiyWmXq/HpwyKrZhi3WDoyKD4fLwnlfhAXEfVLZbUbhX21pDpQ==",
"win32-x64": "sha512-Xim5F21pqx7MuVQViaQNhSz24zWIiKHC9bm4KCdi7q/ytbvdMhm6bzWDI/mvFGNjI62NRB2SBkTTaqwJvM/pUg=="
"darwin-arm64v8": "sha512-1QZzICfCJd4wAO0P6qmYI5e5VFMt9iCE4QgefI8VMMbdSzjIXA9L/ARN6pkMQPZ3h20Y9RtJ2W1skgCsvCIccw==",
"darwin-x64": "sha512-sMIKMYXsdU9FlIfztj6Kt/SfHlhlDpP0Ups7ftVFqwjaszmYmpI9y/d/q3mLb4jrzuSiSUEislSWCwBnW7MPTw==",
"linux-arm64v8": "sha512-CD8owELzkDumaom+O3jJ8fKamILAQdj+//KK/VNcHK3sngUcFpdjx36C8okwbux9sml/T7GTB/gzpvReDrAejQ==",
"linux-armv6": "sha512-wk6IPHatDFVWKJy7lI1TJezHGHPQut1wF2bwx256KlZwXUQU3fcVcMpV1zxXjgLFewHq2+uhyMkoSGBPahWzlA==",
"linux-armv7": "sha512-HEZC9KYtkmBK5rUR2MqBhrVarnQVZ/TwLUeLkKq0XuoM2pc/eXI6N0Fh5NGEFwdXI2XE8g1ySf+OYS6DDi+xCQ==",
"linux-x64": "sha512-SlFWrITSW5XVUkaFPQOySAaSGXnhkGJCj8X2wGYYta9hk5piZldQyMp4zwy0z6UeRu1qKTKtZvmq28W3Gnh9xA==",
"linuxmusl-arm64v8": "sha512-ga9iX7WUva3sG/VsKkOD318InLlCfPIztvzCZKZ2/+izQXRbQi8VoXWMHgEN4KHACv45FTl7mJ/8CRqUzhS8wQ==",
"linuxmusl-x64": "sha512-yeaHnpfee1hrZLok2l4eFceHzlfq8gN3QOu0R4Mh8iMK5O5vAUu97bdtxeZZeJJvHw8tfh2/msGi0qysxKN8bw==",
"win32-arm64v8": "sha512-kR91hy9w1+GEXK56hLh51+hBCBo7T+ijM4Slkmvb/2PsYZySq5H7s61n99iDYl6kTJP2y9sW5Xcvm3uuXDaDgg==",
"win32-ia32": "sha512-HrnofEbzHNpHJ0vVnjsTj5yfgVdcqdWshXuwFO2zc8xlEjA83BvXZ0lVj9MxPxkxJ2ta+/UlLr+CFzc5bOceMw==",
"win32-x64": "sha512-BwKckinJZ0Fu/EcunqiLPwOLEBWp4xf8GV7nvmVuKKz5f6B+GxoA2k9aa2wueqv4r4RJVgV/aWXZWFKOIjre/Q=="
},
"runtime": "napi",
"target": 7
@@ -193,5 +197,8 @@
"filter": [
"build/include"
]
},
"tsd": {
"directory": "test/types/"
}
}


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <cstdlib>
#include <string>
@@ -93,6 +82,10 @@ namespace sharp {
if (HasAttr(input, "density")) {
descriptor->density = AttrAsDouble(input, "density");
}
// Should we ignore any embedded ICC profile
if (HasAttr(input, "ignoreIcc")) {
descriptor->ignoreIcc = AttrAsBool(input, "ignoreIcc");
}
// Raw pixel input
if (HasAttr(input, "rawChannels")) {
descriptor->rawDepth = AttrAsEnum<VipsBandFormat>(input, "rawDepth", VIPS_TYPE_BAND_FORMAT);
@@ -159,6 +152,9 @@ namespace sharp {
if (HasAttr(input, "textSpacing")) {
descriptor->textSpacing = AttrAsUint32(input, "textSpacing");
}
if (HasAttr(input, "textWrap")) {
descriptor->textWrap = AttrAsEnum<VipsTextWrap>(input, "textWrap", VIPS_TYPE_TEXT_WRAP);
}
}
// Limit input images to a given number of pixels, where pixels = width * height
descriptor->limitInputPixels = static_cast<uint64_t>(AttrAsInt64(input, "limitInputPixels"));
@@ -207,6 +203,9 @@ namespace sharp {
bool IsAvif(std::string const &str) {
return EndsWith(str, ".avif") || EndsWith(str, ".AVIF");
}
bool IsJxl(std::string const &str) {
return EndsWith(str, ".jxl") || EndsWith(str, ".JXL");
}
bool IsDz(std::string const &str) {
return EndsWith(str, ".dzi") || EndsWith(str, ".DZI");
}
@@ -217,6 +216,13 @@ namespace sharp {
return EndsWith(str, ".v") || EndsWith(str, ".V") || EndsWith(str, ".vips") || EndsWith(str, ".VIPS");
}
/*
Trim space from end of string.
*/
std::string TrimEnd(std::string const &str) {
return str.substr(0, str.find_last_not_of(" \n\r\f") + 1);
}
/*
Provide a string identifier for the given image type.
*/
@@ -237,6 +243,7 @@ namespace sharp {
case ImageType::PPM: id = "ppm"; break;
case ImageType::FITS: id = "fits"; break;
case ImageType::EXR: id = "exr"; break;
case ImageType::JXL: id = "jxl"; break;
case ImageType::VIPS: id = "vips"; break;
case ImageType::RAW: id = "raw"; break;
case ImageType::UNKNOWN: id = "unknown"; break;
@@ -281,6 +288,8 @@ namespace sharp {
{ "VipsForeignLoadPpmFile", ImageType::PPM },
{ "VipsForeignLoadFitsFile", ImageType::FITS },
{ "VipsForeignLoadOpenexr", ImageType::EXR },
{ "VipsForeignLoadJxlFile", ImageType::JXL },
{ "VipsForeignLoadJxlBuffer", ImageType::JXL },
{ "VipsForeignLoadVips", ImageType::VIPS },
{ "VipsForeignLoadVipsFile", ImageType::VIPS },
{ "VipsForeignLoadRaw", ImageType::RAW }
@@ -440,6 +449,7 @@ namespace sharp {
->set("justify", descriptor->textJustify)
->set("rgba", descriptor->textRgba)
->set("spacing", descriptor->textSpacing)
->set("wrap", descriptor->textWrap)
->set("autofit_dpi", &descriptor->textAutofitDpi);
if (descriptor->textWidth > 0) {
textOptions->set("width", descriptor->textWidth);
@@ -598,6 +608,15 @@ namespace sharp {
return copy;
}
/*
Remove GIF palette from image.
*/
VImage RemoveGifPalette(VImage image) {
VImage copy = image.copy();
copy.remove("gif-palette");
return copy;
}
/*
Does this image have a non-default density?
*/
@@ -650,6 +669,10 @@ namespace sharp {
if (image.width() > 65535 || height > 65535) {
throw vips::VError("Processed image is too large for the GIF format");
}
} else if (imageType == ImageType::HEIF) {
if (image.width() > 16384 || height > 16384) {
throw vips::VError("Processed image is too large for the HEIF format");
}
}
}
@@ -913,7 +936,7 @@ namespace sharp {
// Add non-transparent alpha channel, if required
if (colour[3] < 255.0 && !HasAlpha(image)) {
image = image.bandjoin(
VImage::new_matrix(image.width(), image.height()).new_from_image(255 * multiplier));
VImage::new_matrix(image.width(), image.height()).new_from_image(255 * multiplier).cast(image.format()));
}
return std::make_tuple(image, alphaColour);
}
@@ -941,12 +964,7 @@ namespace sharp {
}
std::pair<double, double> ResolveShrink(int width, int height, int targetWidth, int targetHeight,
Canvas canvas, bool swap, bool withoutEnlargement, bool withoutReduction) {
if (swap && canvas != Canvas::IGNORE_ASPECT) {
// Swap input width and height when requested.
std::swap(width, height);
}
Canvas canvas, bool withoutEnlargement, bool withoutReduction) {
double hshrink = 1.0;
double vshrink = 1.0;
@@ -1012,4 +1030,13 @@ namespace sharp {
return std::make_pair(hshrink, vshrink);
}
/*
Ensure decoding remains sequential.
*/
VImage StaySequential(VImage image, VipsAccess access, bool condition) {
if (access == VIPS_ACCESS_SEQUENTIAL && condition) {
return image.copy_memory();
}
return image;
}
} // namespace sharp


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#ifndef SRC_COMMON_H_
#define SRC_COMMON_H_
@@ -25,9 +14,9 @@
// Verify platform and compiler compatibility
#if (VIPS_MAJOR_VERSION < 8) || \
(VIPS_MAJOR_VERSION == 8 && VIPS_MINOR_VERSION < 13) || \
(VIPS_MAJOR_VERSION == 8 && VIPS_MINOR_VERSION == 13 && VIPS_MICRO_VERSION < 2)
#error "libvips version 8.13.2+ is required - please see https://sharp.pixelplumbing.com/install"
(VIPS_MAJOR_VERSION == 8 && VIPS_MINOR_VERSION < 14) || \
(VIPS_MAJOR_VERSION == 8 && VIPS_MINOR_VERSION == 14 && VIPS_MICRO_VERSION < 5)
#error "libvips version 8.14.5+ is required - please see https://sharp.pixelplumbing.com/install"
#endif
#if ((!defined(__clang__)) && defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 6)))
@@ -55,6 +44,7 @@ namespace sharp {
size_t bufferLength;
bool isBuffer;
double density;
bool ignoreIcc;
VipsBandFormat rawDepth;
int rawChannels;
int rawWidth;
@@ -81,6 +71,7 @@ namespace sharp {
int textDpi;
bool textRgba;
int textSpacing;
VipsTextWrap textWrap;
int textAutofitDpi;
InputDescriptor():
@@ -92,6 +83,7 @@ namespace sharp {
bufferLength(0),
isBuffer(FALSE),
density(72.0),
ignoreIcc(FALSE),
rawDepth(VIPS_FORMAT_UCHAR),
rawChannels(0),
rawWidth(0),
@@ -114,6 +106,7 @@ namespace sharp {
textDpi(72),
textRgba(FALSE),
textSpacing(0),
textWrap(VIPS_TEXT_WRAP_WORD),
textAutofitDpi(0) {}
};
@@ -152,6 +145,7 @@ namespace sharp {
PPM,
FITS,
EXR,
JXL,
VIPS,
RAW,
UNKNOWN,
@@ -182,10 +176,16 @@ namespace sharp {
bool IsHeic(std::string const &str);
bool IsHeif(std::string const &str);
bool IsAvif(std::string const &str);
bool IsJxl(std::string const &str);
bool IsDz(std::string const &str);
bool IsDzZip(std::string const &str);
bool IsV(std::string const &str);
/*
Trim space from end of string.
*/
std::string TrimEnd(std::string const &str);
/*
Provide a string identifier for the given image type.
*/
@@ -252,6 +252,11 @@ namespace sharp {
*/
VImage RemoveAnimationProperties(VImage image);
/*
Remove GIF palette from image.
*/
VImage RemoveGifPalette(VImage image);
/*
Does this image have a non-default density?
*/
@@ -357,13 +362,15 @@ namespace sharp {
VImage EnsureAlpha(VImage image, double const value);
/*
Calculate the shrink factor, taking into account auto-rotate, the canvas
mode, and so on. The hshrink/vshrink are the amount to shrink the input
image axes by in order for the output axes (ie. after rotation) to match
the required thumbnail width/height and canvas mode.
Calculate the horizontal and vertical shrink factors, taking the canvas mode into account.
*/
std::pair<double, double> ResolveShrink(int width, int height, int targetWidth, int targetHeight,
Canvas canvas, bool swap, bool withoutEnlargement, bool withoutReduction);
Canvas canvas, bool withoutEnlargement, bool withoutReduction);
/*
Ensure decoding remains sequential.
*/
VImage StaySequential(VImage image, VipsAccess access, bool condition = TRUE);
} // namespace sharp


@@ -3679,6 +3679,13 @@ VipsBlob *VImage::webpsave_buffer( VOption *options ) const
return( buffer );
}
void VImage::webpsave_mime( VOption *options ) const
{
call( "webpsave_mime",
(options ? options : VImage::option())->
set( "in", *this ) );
}
void VImage::webpsave_target( VTarget target, VOption *options ) const
{
call( "webpsave_target",


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <numeric>
#include <vector>
@@ -80,6 +69,9 @@ class MetadataWorker : public Napi::AsyncWorker {
if (image.get_typeof(VIPS_META_RESOLUTION_UNIT) == VIPS_TYPE_REF_STRING) {
baton->resolutionUnit = image.get_string(VIPS_META_RESOLUTION_UNIT);
}
if (image.get_typeof("magick-format") == VIPS_TYPE_REF_STRING) {
baton->formatMagick = image.get_string("magick-format");
}
if (image.get_typeof("openslide.level-count") == VIPS_TYPE_REF_STRING) {
int const levels = std::stoi(image.get_string("openslide.level-count"));
for (int l = 0; l < levels; l++) {
@@ -153,7 +145,7 @@ class MetadataWorker : public Napi::AsyncWorker {
// Handle warnings
std::string warning = sharp::VipsWarningPop();
while (!warning.empty()) {
debuglog.Call({ Napi::String::New(env, warning) });
debuglog.MakeCallback(Receiver().Value(), { Napi::String::New(env, warning) });
warning = sharp::VipsWarningPop();
}
@@ -204,6 +196,9 @@ class MetadataWorker : public Napi::AsyncWorker {
if (!baton->resolutionUnit.empty()) {
info.Set("resolutionUnit", baton->resolutionUnit == "in" ? "inch" : baton->resolutionUnit);
}
if (!baton->formatMagick.empty()) {
info.Set("formatMagick", baton->formatMagick);
}
if (!baton->levels.empty()) {
int i = 0;
Napi::Array levels = Napi::Array::New(env, static_cast<size_t>(baton->levels.size()));
@@ -235,24 +230,25 @@ class MetadataWorker : public Napi::AsyncWorker {
info.Set("orientation", baton->orientation);
}
if (baton->exifLength > 0) {
info.Set("exif", Napi::Buffer<char>::New(env, baton->exif, baton->exifLength, sharp::FreeCallback));
info.Set("exif", Napi::Buffer<char>::NewOrCopy(env, baton->exif, baton->exifLength, sharp::FreeCallback));
}
if (baton->iccLength > 0) {
info.Set("icc", Napi::Buffer<char>::New(env, baton->icc, baton->iccLength, sharp::FreeCallback));
info.Set("icc", Napi::Buffer<char>::NewOrCopy(env, baton->icc, baton->iccLength, sharp::FreeCallback));
}
if (baton->iptcLength > 0) {
info.Set("iptc", Napi::Buffer<char>::New(env, baton->iptc, baton->iptcLength, sharp::FreeCallback));
info.Set("iptc", Napi::Buffer<char>::NewOrCopy(env, baton->iptc, baton->iptcLength, sharp::FreeCallback));
}
if (baton->xmpLength > 0) {
info.Set("xmp", Napi::Buffer<char>::New(env, baton->xmp, baton->xmpLength, sharp::FreeCallback));
info.Set("xmp", Napi::Buffer<char>::NewOrCopy(env, baton->xmp, baton->xmpLength, sharp::FreeCallback));
}
if (baton->tifftagPhotoshopLength > 0) {
info.Set("tifftagPhotoshop",
Napi::Buffer<char>::New(env, baton->tifftagPhotoshop, baton->tifftagPhotoshopLength, sharp::FreeCallback));
Napi::Buffer<char>::NewOrCopy(env, baton->tifftagPhotoshop,
baton->tifftagPhotoshopLength, sharp::FreeCallback));
}
Callback().MakeCallback(Receiver().Value(), { env.Null(), info });
} else {
Callback().MakeCallback(Receiver().Value(), { Napi::Error::New(env, baton->err).Value() });
Callback().MakeCallback(Receiver().Value(), { Napi::Error::New(env, sharp::TrimEnd(baton->err)).Value() });
}
delete baton->input;
@@ -270,7 +266,7 @@ class MetadataWorker : public Napi::AsyncWorker {
Napi::Value metadata(const Napi::CallbackInfo& info) {
// V8 objects are converted to non-V8 types held in the baton struct
MetadataBaton *baton = new MetadataBaton;
Napi::Object options = info[0].As<Napi::Object>();
Napi::Object options = info[size_t(0)].As<Napi::Object>();
// Input
baton->input = sharp::CreateInputDescriptor(options.Get("input").As<Napi::Object>());
@@ -279,7 +275,7 @@ Napi::Value metadata(const Napi::CallbackInfo& info) {
Napi::Function debuglog = options.Get("debuglog").As<Napi::Function>();
// Join queue for worker thread
Napi::Function callback = info[1].As<Napi::Function>();
Napi::Function callback = info[size_t(1)].As<Napi::Function>();
MetadataWorker *worker = new MetadataWorker(callback, baton, debuglog);
worker->Receiver().Set("options", options);
worker->Queue();


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#ifndef SRC_METADATA_H_
#define SRC_METADATA_H_
@@ -41,6 +30,7 @@ struct MetadataBaton {
int pagePrimary;
std::string compression;
std::string resolutionUnit;
std::string formatMagick;
std::vector<std::pair<int, int>> levels;
int subifds;
std::vector<double> background;


@@ -1,23 +1,11 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <algorithm>
#include <functional>
#include <memory>
#include <tuple>
#include <vector>
#include <vips/vips8>
#include "common.h"
@@ -57,7 +45,7 @@ namespace sharp {
/*
* Stretch luminance to cover full dynamic range.
*/
VImage Normalise(VImage image) {
VImage Normalise(VImage image, int const lower, int const upper) {
// Get original colourspace
VipsInterpretation typeBeforeNormalize = image.interpretation();
if (typeBeforeNormalize == VIPS_INTERPRETATION_RGB) {
@@ -67,9 +55,11 @@ namespace sharp {
VImage lab = image.colourspace(VIPS_INTERPRETATION_LAB);
// Extract luminance
VImage luminance = lab[0];
// Find luminance range
int const min = luminance.percent(1);
int const max = luminance.percent(99);
int const min = lower == 0 ? luminance.min() : luminance.percent(lower);
int const max = upper == 100 ? luminance.max() : luminance.percent(upper);
if (std::abs(max - min) > 1) {
// Extract chroma
VImage chroma = lab.extract_band(1, VImage::option()->set("n", 2));
@@ -196,6 +186,7 @@ namespace sharp {
VImage Modulate(VImage image, double const brightness, double const saturation,
int const hue, double const lightness) {
VipsInterpretation colourspaceBeforeModulate = image.interpretation();
if (HasAlpha(image)) {
// Separate alpha channel
VImage alpha = image[image.bands() - 1];
@@ -205,7 +196,7 @@ namespace sharp {
{ brightness, saturation, 1},
{ lightness, 0.0, static_cast<double>(hue) }
)
.colourspace(VIPS_INTERPRETATION_sRGB)
.colourspace(colourspaceBeforeModulate)
.bandjoin(alpha);
} else {
return image
@@ -214,7 +205,7 @@ namespace sharp {
{ brightness, saturation, 1 },
{ lightness, 0.0, static_cast<double>(hue) }
)
.colourspace(VIPS_INTERPRETATION_sRGB);
.colourspace(colourspaceBeforeModulate);
}
}
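Modulate now records the interpretation it started from and converts back to it after the LCh-based adjustment, rather than always forcing sRGB, so 16-bit and non-sRGB inputs keep their original colourspace. Illustrative usage via the existing JavaScript modulate() operation:

  const sharp = require('sharp');

  // Brightness/saturation/hue/lightness are applied in LCh, then the image is
  // returned to whatever interpretation it had before the operation.
  sharp('input.png')
    .modulate({ brightness: 1.1, saturation: 0.9, hue: 45, lightness: 5 })
    .toFile('modulated.png');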
@@ -278,30 +269,20 @@ namespace sharp {
if (image.width() < 3 && image.height() < 3) {
throw VError("Image to trim must be at least 3x3 pixels");
}
// Scale up 8-bit values to match 16-bit input image
double multiplier = sharp::Is16Bit(image.interpretation()) ? 256.0 : 1.0;
threshold *= multiplier;
std::vector<double> backgroundAlpha(1);
if (background.size() == 0) {
// Top-left pixel provides the default background colour if none is given
background = image.extract_area(0, 0, 1, 1)(0, 0);
multiplier = 1.0;
} else if (sharp::Is16Bit(image.interpretation())) {
for (size_t i = 0; i < background.size(); i++) {
background[i] *= 256.0;
}
threshold *= 256.0;
}
if (HasAlpha(image) && background.size() == 4) {
// Just discard the alpha because flattening the background colour with
// itself (effectively what find_trim() does) gives the same result
backgroundAlpha[0] = background[3] * multiplier;
}
if (image.bands() > 2) {
background = {
background[0] * multiplier,
background[1] * multiplier,
background[2] * multiplier
};
std::vector<double> backgroundAlpha({ background.back() });
if (HasAlpha(image)) {
background.pop_back();
} else {
background[0] = background[0] * multiplier;
background.resize(image.bands());
}
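The reworked Trim treats the final element of the supplied background as its alpha value, discarding it from the colour vector when the image already carries alpha and resizing the vector to the band count otherwise. A hedged sketch of the JavaScript-level call, assuming trim() accepts the background/threshold pair implied by the trimBackground/trimThreshold baton fields:

  const sharp = require('sharp');

  // Remove a near-white border; threshold is the allowed colour distance
  sharp('scan.png')
    .trim({ background: '#ffffff', threshold: 10 })
    .toFile('trimmed.png');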
int left, top, width, height;
left = image.find_trim(&top, &width, &height, VImage::option()
@@ -342,12 +323,26 @@ namespace sharp {
if (a.size() > bands) {
throw VError("Band expansion using linear is unsupported");
}
bool const uchar = !Is16Bit(image.interpretation());
if (HasAlpha(image) && a.size() != bands && (a.size() == 1 || a.size() == bands - 1 || bands - 1 == 1)) {
// Separate alpha channel
VImage alpha = image[bands - 1];
return RemoveAlpha(image).linear(a, b).bandjoin(alpha);
return RemoveAlpha(image).linear(a, b, VImage::option()->set("uchar", uchar)).bandjoin(alpha);
} else {
return image.linear(a, b);
return image.linear(a, b, VImage::option()->set("uchar", uchar));
}
}
/*
* Unflatten
*/
VImage Unflatten(VImage image) {
if (HasAlpha(image)) {
VImage alpha = image[image.bands() - 1];
VImage noAlpha = RemoveAlpha(image);
return noAlpha.bandjoin(alpha & (noAlpha.colourspace(VIPS_INTERPRETATION_B_W) < 255));
} else {
return image.bandjoin(image.colourspace(VIPS_INTERPRETATION_B_W) < 255);
}
}
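The new Unflatten helper joins an alpha band that is opaque everywhere except where the greyscale value reaches pure white, which is why the threshold comment later in this diff notes it must run before unflattening. Assuming this is exposed as an unflatten() operator on the JavaScript side:

  const sharp = require('sharp');

  // Make white (or thresholded-to-white) regions transparent
  sharp('scan.png')
    .unflatten()
    .toFile('scan-with-alpha.png');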
@@ -395,11 +390,11 @@ namespace sharp {
* Split into frames, embed each frame, reassemble, and update pageHeight.
*/
VImage EmbedMultiPage(VImage image, int left, int top, int width, int height,
std::vector<double> background, int nPages, int *pageHeight) {
VipsExtend extendWith, std::vector<double> background, int nPages, int *pageHeight) {
if (top == 0 && height == *pageHeight) {
// Fast path; no need to adjust the height of the multi-page image
return image.embed(left, 0, width, image.height(), VImage::option()
->set("extend", VIPS_EXTEND_BACKGROUND)
->set("extend", extendWith)
->set("background", background));
} else if (left == 0 && width == image.width()) {
// Fast path; no need to adjust the width of the multi-page image
@@ -411,7 +406,7 @@ namespace sharp {
// Do the embed on the wide image
image = image.embed(0, top, image.width(), height, VImage::option()
->set("extend", VIPS_EXTEND_BACKGROUND)
->set("extend", extendWith)
->set("background", background));
// Split the wide image into frames
@@ -441,7 +436,7 @@ namespace sharp {
// Embed each frame in the target size
for (int i = 0; i < nPages; i++) {
pages[i] = pages[i].embed(left, top, width, height, VImage::option()
->set("extend", VIPS_EXTEND_BACKGROUND)
->set("extend", extendWith)
->set("background", background));
}


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#ifndef SRC_OPERATIONS_H_
#define SRC_OPERATIONS_H_
@@ -33,7 +22,7 @@ namespace sharp {
/*
* Stretch luminance to cover full dynamic range.
*/
VImage Normalise(VImage image);
VImage Normalise(VImage image, int const lower, int const upper);
/*
 * Contrast limiting adaptive histogram equalization (CLAHE)
@@ -97,6 +86,11 @@ namespace sharp {
*/
VImage Linear(VImage image, std::vector<double> const a, std::vector<double> const b);
/*
* Unflatten
*/
VImage Unflatten(VImage image);
/*
* Recomb with a Matrix of the given bands/channel size.
* Eg. RGB will be a 3x3 matrix.
@@ -124,7 +118,7 @@ namespace sharp {
* Split into frames, embed each frame, reassemble, and update pageHeight.
*/
VImage EmbedMultiPage(VImage image, int left, int top, int width, int height,
std::vector<double> background, int nPages, int *pageHeight);
VipsExtend extendWith, std::vector<double> background, int nPages, int *pageHeight);
} // namespace sharp


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <algorithm>
#include <cmath>
@@ -31,18 +20,15 @@
#include "operations.h"
#include "pipeline.h"
#if defined(WIN32)
#ifdef _WIN32
#define STAT64_STRUCT __stat64
#define STAT64_FUNCTION _stat64
#elif defined(__APPLE__)
#define STAT64_STRUCT stat
#define STAT64_FUNCTION stat
#elif defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__) || defined(__DragonFly__)
#define STAT64_STRUCT stat
#define STAT64_FUNCTION stat
#else
#elif defined(_LARGEFILE64_SOURCE)
#define STAT64_STRUCT stat64
#define STAT64_FUNCTION stat64
#else
#define STAT64_STRUCT stat
#define STAT64_FUNCTION stat
#endif
class PipelineWorker : public Napi::AsyncWorker {
@@ -67,6 +53,7 @@ class PipelineWorker : public Napi::AsyncWorker {
vips::VImage image;
sharp::ImageType inputImageType;
std::tie(image, inputImageType) = sharp::OpenInput(baton->input);
VipsAccess access = baton->input->access;
image = sharp::EnsureColourspace(image, baton->colourspaceInput);
int nPages = baton->input->pages;
@@ -81,46 +68,67 @@ class PipelineWorker : public Napi::AsyncWorker {
int pageHeight = sharp::GetPageHeight(image);
// Calculate angle of rotation
VipsAngle rotation;
bool flip = FALSE;
bool flop = FALSE;
VipsAngle rotation = VIPS_ANGLE_D0;
VipsAngle autoRotation = VIPS_ANGLE_D0;
bool autoFlip = FALSE;
bool autoFlop = FALSE;
if (baton->useExifOrientation) {
// Rotate and flip image according to Exif orientation
std::tie(rotation, flip, flop) = CalculateExifRotationAndFlip(sharp::ExifOrientation(image));
std::tie(autoRotation, autoFlip, autoFlop) = CalculateExifRotationAndFlip(sharp::ExifOrientation(image));
image = sharp::RemoveExifOrientation(image);
} else {
rotation = CalculateAngleRotation(baton->angle);
}
// Rotate pre-extract
bool const shouldRotateBefore = baton->rotateBeforePreExtract &&
(rotation != VIPS_ANGLE_D0 || flip || flop || baton->rotationAngle != 0.0);
(rotation != VIPS_ANGLE_D0 || autoRotation != VIPS_ANGLE_D0 ||
autoFlip || baton->flip || autoFlop || baton->flop ||
baton->rotationAngle != 0.0);
if (shouldRotateBefore) {
image = sharp::StaySequential(image, access,
rotation != VIPS_ANGLE_D0 ||
autoRotation != VIPS_ANGLE_D0 ||
autoFlip ||
baton->flip ||
baton->rotationAngle != 0.0);
if (autoRotation != VIPS_ANGLE_D0) {
image = image.rot(autoRotation);
autoRotation = VIPS_ANGLE_D0;
}
if (autoFlip) {
image = image.flip(VIPS_DIRECTION_VERTICAL);
autoFlip = FALSE;
} else if (baton->flip) {
image = image.flip(VIPS_DIRECTION_VERTICAL);
baton->flip = FALSE;
}
if (autoFlop) {
image = image.flip(VIPS_DIRECTION_HORIZONTAL);
autoFlop = FALSE;
} else if (baton->flop) {
image = image.flip(VIPS_DIRECTION_HORIZONTAL);
baton->flop = FALSE;
}
if (rotation != VIPS_ANGLE_D0) {
image = image.rot(rotation);
rotation = VIPS_ANGLE_D0;
}
if (flip) {
image = image.flip(VIPS_DIRECTION_VERTICAL);
}
if (flop) {
image = image.flip(VIPS_DIRECTION_HORIZONTAL);
}
if (rotation != VIPS_ANGLE_D0 || flip || flop) {
image = sharp::RemoveExifOrientation(image);
}
flop = FALSE;
flip = FALSE;
if (baton->rotationAngle != 0.0) {
MultiPageUnsupported(nPages, "Rotate");
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->rotationBackground, FALSE);
image = image.rotate(baton->rotationAngle, VImage::option()->set("background", background));
image = image.rotate(baton->rotationAngle, VImage::option()->set("background", background)).copy_memory();
}
}
// Trim
if (baton->trimThreshold > 0.0) {
MultiPageUnsupported(nPages, "Trim");
image = sharp::StaySequential(image, access);
image = sharp::Trim(image, baton->trimBackground, baton->trimThreshold);
baton->trimOffsetLeft = image.xoffset();
baton->trimOffsetTop = image.yoffset();
@@ -149,13 +157,18 @@ class PipelineWorker : public Napi::AsyncWorker {
int targetResizeWidth = baton->width;
int targetResizeHeight = baton->height;
// Swap input output width and height when rotating by 90 or 270 degrees
bool swap = !baton->rotateBeforePreExtract && (rotation == VIPS_ANGLE_D90 || rotation == VIPS_ANGLE_D270);
// When auto-rotating by 90 or 270 degrees, swap the target width and
// height to ensure the behavior aligns with how it would have been if
// the rotation had taken place *before* resizing.
if (!baton->rotateBeforePreExtract &&
(autoRotation == VIPS_ANGLE_D90 || autoRotation == VIPS_ANGLE_D270)) {
std::swap(targetResizeWidth, targetResizeHeight);
}
// Shrink to pageHeight, so we work for multi-page images
std::tie(hshrink, vshrink) = sharp::ResolveShrink(
inputWidth, pageHeight, targetResizeWidth, targetResizeHeight,
baton->canvas, swap, baton->withoutEnlargement, baton->withoutReduction);
baton->canvas, baton->withoutEnlargement, baton->withoutReduction);
// The jpeg preload shrink.
int jpegShrinkOnLoad = 1;
@@ -191,7 +204,7 @@ class PipelineWorker : public Napi::AsyncWorker {
if (jpegShrinkOnLoad > 1 && static_cast<int>(shrink) == jpegShrinkOnLoad) {
jpegShrinkOnLoad /= 2;
}
} else if (inputImageType == sharp::ImageType::WEBP && shrink > 1.0) {
} else if (inputImageType == sharp::ImageType::WEBP && baton->fastShrinkOnLoad && shrink > 1.0) {
// Avoid upscaling via webp
scale = 1.0 / shrink;
} else if (inputImageType == sharp::ImageType::SVG ||
@@ -205,7 +218,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// pdfload* and svgload*
if (jpegShrinkOnLoad > 1) {
vips::VOption *option = VImage::option()
->set("access", baton->input->access)
->set("access", access)
->set("shrink", jpegShrinkOnLoad)
->set("unlimited", baton->input->unlimited)
->set("fail_on", baton->input->failOn);
@@ -220,7 +233,7 @@ class PipelineWorker : public Napi::AsyncWorker {
}
} else if (scale != 1.0) {
vips::VOption *option = VImage::option()
->set("access", baton->input->access)
->set("access", access)
->set("scale", scale)
->set("fail_on", baton->input->failOn);
if (inputImageType == sharp::ImageType::WEBP) {
@@ -289,7 +302,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Shrink to pageHeight, so we work for multi-page images
std::tie(hshrink, vshrink) = sharp::ResolveShrink(
inputWidth, pageHeight, targetResizeWidth, targetResizeHeight,
baton->canvas, swap, baton->withoutEnlargement, baton->withoutReduction);
baton->canvas, baton->withoutEnlargement, baton->withoutReduction);
int targetHeight = static_cast<int>(std::rint(static_cast<double>(pageHeight) / vshrink));
int targetPageHeight = targetHeight;
@@ -306,18 +319,22 @@ class PipelineWorker : public Napi::AsyncWorker {
if (
sharp::HasProfile(image) &&
image.interpretation() != VIPS_INTERPRETATION_LABS &&
image.interpretation() != VIPS_INTERPRETATION_GREY16
image.interpretation() != VIPS_INTERPRETATION_GREY16 &&
!baton->input->ignoreIcc
) {
// Convert to sRGB/P3 using embedded profile
try {
image = image.icc_transform(processingProfile, VImage::option()
->set("embedded", TRUE)
->set("depth", image.interpretation() == VIPS_INTERPRETATION_RGB16 ? 16 : 8)
->set("depth", sharp::Is16Bit(image.interpretation()) ? 16 : 8)
->set("intent", VIPS_INTENT_PERCEPTUAL));
} catch(...) {
// Ignore failure of embedded profile
sharp::VipsWarningCallback(nullptr, G_LOG_LEVEL_WARNING, "Invalid embedded profile", nullptr);
}
} else if (image.interpretation() == VIPS_INTERPRETATION_CMYK) {
} else if (
image.interpretation() == VIPS_INTERPRETATION_CMYK &&
baton->colourspaceInput != VIPS_INTERPRETATION_CMYK
) {
image = image.icc_transform(processingProfile, VImage::option()
->set("input_profile", "cmyk")
->set("intent", VIPS_INTENT_PERCEPTUAL));
@@ -353,11 +370,12 @@ class PipelineWorker : public Napi::AsyncWorker {
image = sharp::EnsureAlpha(image, 1);
}
VipsBandFormat premultiplyFormat = image.format();
bool const shouldPremultiplyAlpha = sharp::HasAlpha(image) &&
(shouldResize || shouldBlur || shouldConv || shouldSharpen);
if (shouldPremultiplyAlpha) {
image = image.premultiply();
image = image.premultiply().cast(premultiplyFormat);
}
// Resize
@@ -367,43 +385,41 @@ class PipelineWorker : public Napi::AsyncWorker {
->set("kernel", baton->kernel));
}
// Flip (mirror about Y axis)
if (baton->flip || flip) {
image = sharp::StaySequential(image, access,
autoRotation != VIPS_ANGLE_D0 ||
baton->flip ||
autoFlip ||
rotation != VIPS_ANGLE_D0);
// Auto-rotate post-extract
if (autoRotation != VIPS_ANGLE_D0) {
image = image.rot(autoRotation);
}
// Mirror vertically (up-down) about the x-axis
if (baton->flip || autoFlip) {
image = image.flip(VIPS_DIRECTION_VERTICAL);
image = sharp::RemoveExifOrientation(image);
}
// Flop (mirror about X axis)
if (baton->flop || flop) {
// Mirror horizontally (left-right) about the y-axis
if (baton->flop || autoFlop) {
image = image.flip(VIPS_DIRECTION_HORIZONTAL);
image = sharp::RemoveExifOrientation(image);
}
// Rotate post-extract 90-angle
if (!baton->rotateBeforePreExtract && rotation != VIPS_ANGLE_D0) {
if (rotation != VIPS_ANGLE_D0) {
image = image.rot(rotation);
if (flip) {
image = image.flip(VIPS_DIRECTION_VERTICAL);
flip = FALSE;
}
if (flop) {
image = image.flip(VIPS_DIRECTION_HORIZONTAL);
flop = FALSE;
}
image = sharp::RemoveExifOrientation(image);
}
// Join additional color channels to the image
if (baton->joinChannelIn.size() > 0) {
if (!baton->joinChannelIn.empty()) {
VImage joinImage;
sharp::ImageType joinImageType = sharp::ImageType::UNKNOWN;
for (unsigned int i = 0; i < baton->joinChannelIn.size(); i++) {
baton->joinChannelIn[i]->access = access;
std::tie(joinImage, joinImageType) = sharp::OpenInput(baton->joinChannelIn[i]);
joinImage = sharp::EnsureColourspace(joinImage, baton->colourspaceInput);
image = image.bandjoin(joinImage);
}
image = image.copy(VImage::option()->set("interpretation", baton->colourspace));
image = sharp::RemoveGifPalette(image);
}
inputWidth = image.width();
@@ -433,7 +449,7 @@ class PipelineWorker : public Napi::AsyncWorker {
image = nPages > 1
? sharp::EmbedMultiPage(image,
left, top, width, height, background, nPages, &targetPageHeight)
left, top, width, height, VIPS_EXTEND_BACKGROUND, background, nPages, &targetPageHeight)
: image.embed(left, top, width, height, VImage::option()
->set("extend", VIPS_EXTEND_BACKGROUND)
->set("background", background));
@@ -450,6 +466,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Gravity-based crop
int left;
int top;
std::tie(left, top) = sharp::CalculateCrop(
inputWidth, inputHeight, baton->width, baton->height, baton->position);
int width = std::min(inputWidth, baton->width);
@@ -460,16 +477,25 @@ class PipelineWorker : public Napi::AsyncWorker {
left, top, width, height, nPages, &targetPageHeight)
: image.extract_area(left, top, width, height);
} else {
int attention_x;
int attention_y;
// Attention-based or Entropy-based crop
MultiPageUnsupported(nPages, "Resize strategy");
image = image.tilecache(VImage::option()
->set("access", VIPS_ACCESS_RANDOM)
->set("threaded", TRUE));
image = sharp::StaySequential(image, access);
image = image.smartcrop(baton->width, baton->height, VImage::option()
->set("interesting", baton->position == 16 ? VIPS_INTERESTING_ENTROPY : VIPS_INTERESTING_ATTENTION));
->set("interesting", baton->position == 16 ? VIPS_INTERESTING_ENTROPY : VIPS_INTERESTING_ATTENTION)
#if (VIPS_MAJOR_VERSION >= 8 && VIPS_MINOR_VERSION >= 15)
->set("premultiplied", shouldPremultiplyAlpha)
#endif
->set("attention_x", &attention_x)
->set("attention_y", &attention_y));
baton->hasCropOffset = true;
baton->cropOffsetLeft = static_cast<int>(image.xoffset());
baton->cropOffsetTop = static_cast<int>(image.yoffset());
baton->hasAttentionCenter = true;
baton->attentionX = static_cast<int>(attention_x * jpegShrinkOnLoad / scale);
baton->attentionY = static_cast<int>(attention_y * jpegShrinkOnLoad / scale);
}
}
}
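smartcrop is now asked for its attention_x/attention_y point, optionally told whether the input is premultiplied (libvips >= 8.15), and the coordinates are scaled back using the shrink-on-load factors so callers can see roughly where the attention-based crop was centred. A sketch of reading this from the output info, using the attentionX/attentionY fields added to the baton below:

  const sharp = require('sharp');

  sharp('input.jpg')
    .resize(200, 200, { fit: 'cover', position: sharp.strategy.attention })
    .toBuffer({ resolveWithObject: true })
    .then(({ info }) => {
      // Approximate centre of attention, mapped back towards input coordinates
      console.log(info.attentionX, info.attentionY);
    });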
@@ -477,6 +503,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Rotate post-extract non-90 angle
if (!baton->rotateBeforePreExtract && baton->rotationAngle != 0.0) {
MultiPageUnsupported(nPages, "Rotate");
image = sharp::StaySequential(image, access);
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->rotationBackground, shouldPremultiplyAlpha);
image = image.rotate(baton->rotationAngle, VImage::option()->set("background", background));
@@ -498,8 +525,9 @@ class PipelineWorker : public Napi::AsyncWorker {
}
// Affine transform
if (baton->affineMatrix.size() > 0) {
if (!baton->affineMatrix.empty()) {
MultiPageUnsupported(nPages, "Affine");
image = sharp::StaySequential(image, access);
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->affineBackground, shouldPremultiplyAlpha);
vips::VInterpolate interp = vips::VInterpolate::new_from_name(
@@ -514,24 +542,37 @@ class PipelineWorker : public Napi::AsyncWorker {
// Extend edges
if (baton->extendTop > 0 || baton->extendBottom > 0 || baton->extendLeft > 0 || baton->extendRight > 0) {
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->extendBackground, shouldPremultiplyAlpha);
// Embed
baton->width = image.width() + baton->extendLeft + baton->extendRight;
baton->height = (nPages > 1 ? targetPageHeight : image.height()) + baton->extendTop + baton->extendBottom;
image = nPages > 1
? sharp::EmbedMultiPage(image,
baton->extendLeft, baton->extendTop, baton->width, baton->height, background, nPages, &targetPageHeight)
: image.embed(baton->extendLeft, baton->extendTop, baton->width, baton->height,
VImage::option()->set("extend", VIPS_EXTEND_BACKGROUND)->set("background", background));
if (baton->extendWith == VIPS_EXTEND_BACKGROUND) {
std::vector<double> background;
std::tie(image, background) = sharp::ApplyAlpha(image, baton->extendBackground, shouldPremultiplyAlpha);
image = nPages > 1
? sharp::EmbedMultiPage(image,
baton->extendLeft, baton->extendTop, baton->width, baton->height,
baton->extendWith, background, nPages, &targetPageHeight)
: image.embed(baton->extendLeft, baton->extendTop, baton->width, baton->height,
VImage::option()->set("extend", baton->extendWith)->set("background", background));
} else {
std::vector<double> ignoredBackground(1);
image = nPages > 1
? sharp::EmbedMultiPage(image,
baton->extendLeft, baton->extendTop, baton->width, baton->height,
baton->extendWith, ignoredBackground, nPages, &targetPageHeight)
: image.embed(baton->extendLeft, baton->extendTop, baton->width, baton->height,
VImage::option()->set("extend", baton->extendWith));
}
}
// Median - must happen before blurring, due to the utility of blurring after thresholding
if (baton->medianSize > 0) {
image = image.median(baton->medianSize);
}
// Threshold - must happen before blurring, due to the utility of blurring after thresholding
// Threshold - must happen before unflatten to enable non-white unflattening
if (baton->threshold != 0) {
image = sharp::Threshold(image, baton->threshold, baton->thresholdGrayscale);
}
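Edge extension now branches on extendWith: only the background mode applies the alpha-aware background colour, while the other libvips extend modes (copy, repeat, mirror) ignore it. Assuming the JavaScript extend() option is named extendWith, as the baton parsing later in this diff suggests:

  const sharp = require('sharp');

  // Pad by mirroring existing pixels instead of filling with a background colour
  sharp('input.jpg')
    .extend({ top: 16, bottom: 16, left: 16, right: 16, extendWith: 'mirror' })
    .toFile('padded.jpg');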
@@ -541,6 +582,11 @@ class PipelineWorker : public Napi::AsyncWorker {
image = sharp::Blur(image, baton->blurSigma);
}
// Unflatten the image
if (baton->unflatten) {
image = sharp::Unflatten(image);
}
// Convolve
if (shouldConv) {
image = sharp::Convolve(image,
@@ -567,13 +613,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Reverse premultiplication after all transformations
if (shouldPremultiplyAlpha) {
image = image.unpremultiply();
// Cast pixel values to integer
if (sharp::Is16Bit(image.interpretation())) {
image = image.cast(VIPS_FORMAT_USHORT);
} else {
image = image.cast(VIPS_FORMAT_UCHAR);
}
image = image.unpremultiply().cast(premultiplyFormat);
}
baton->premultiplied = shouldPremultiplyAlpha;
@@ -584,6 +624,7 @@ class PipelineWorker : public Napi::AsyncWorker {
for (Composite *composite : baton->composite) {
VImage compositeImage;
sharp::ImageType compositeImageType = sharp::ImageType::UNKNOWN;
composite->input->access = access;
std::tie(compositeImage, compositeImageType) = sharp::OpenInput(composite->input);
compositeImage = sharp::EnsureColourspace(compositeImage, baton->colourspaceInput);
// Verify within current dimensions
@@ -612,7 +653,7 @@ class PipelineWorker : public Napi::AsyncWorker {
if (across != 0 || down != 0) {
int left;
int top;
compositeImage = compositeImage.replicate(across, down);
compositeImage = sharp::StaySequential(compositeImage, access).replicate(across, down);
if (composite->hasOffset) {
std::tie(left, top) = sharp::CalculateCrop(
compositeImage.width(), compositeImage.height(), image.width(), image.height(),
@@ -655,6 +696,7 @@ class PipelineWorker : public Napi::AsyncWorker {
ys.push_back(top);
}
image = VImage::composite(images, modes, VImage::option()->set("x", xs)->set("y", ys));
image = sharp::RemoveGifPalette(image);
}
// Gamma decoding (brighten)
@@ -669,11 +711,13 @@ class PipelineWorker : public Napi::AsyncWorker {
// Apply normalisation - stretch luminance to cover full dynamic range
if (baton->normalise) {
image = sharp::Normalise(image);
image = sharp::StaySequential(image, access);
image = sharp::Normalise(image, baton->normaliseLower, baton->normaliseUpper);
}
// Apply contrast limiting adaptive histogram equalization (CLAHE)
if (baton->claheWidth != 0 && baton->claheHeight != 0) {
image = sharp::StaySequential(image, access);
image = sharp::Clahe(image, baton->claheWidth, baton->claheHeight, baton->claheMaxSlope);
}
@@ -681,9 +725,11 @@ class PipelineWorker : public Napi::AsyncWorker {
if (baton->boolean != nullptr) {
VImage booleanImage;
sharp::ImageType booleanImageType = sharp::ImageType::UNKNOWN;
baton->boolean->access = access;
std::tie(booleanImage, booleanImageType) = sharp::OpenInput(baton->boolean);
booleanImage = sharp::EnsureColourspace(booleanImage, baton->colourspaceInput);
image = sharp::Boolean(image, booleanImage, baton->booleanOp);
image = sharp::RemoveGifPalette(image);
}
// Apply per-channel Bandbool bitwise operations after all other operations
@@ -696,24 +742,6 @@ class PipelineWorker : public Napi::AsyncWorker {
image = sharp::Tint(image, baton->tintA, baton->tintB);
}
// Extract an image channel (aka vips band)
if (baton->extractChannel > -1) {
if (baton->extractChannel >= image.bands()) {
if (baton->extractChannel == 3 && sharp::HasAlpha(image)) {
baton->extractChannel = image.bands() - 1;
} else {
(baton->err).append("Cannot extract channel from image. Too few channels in image.");
return Error();
}
}
VipsInterpretation const interpretation = sharp::Is16Bit(image.interpretation())
? VIPS_INTERPRETATION_GREY16
: VIPS_INTERPRETATION_B_W;
image = image
.extract_band(baton->extractChannel)
.copy(VImage::option()->set("interpretation", interpretation));
}
// Remove alpha channel, if any
if (baton->removeAlpha) {
image = sharp::RemoveAlpha(image);
@@ -735,17 +763,39 @@ class PipelineWorker : public Napi::AsyncWorker {
if (baton->withMetadata && sharp::HasProfile(image) && baton->withMetadataIcc.empty()) {
image = image.icc_transform("srgb", VImage::option()
->set("embedded", TRUE)
->set("depth", sharp::Is16Bit(image.interpretation()) ? 16 : 8)
->set("intent", VIPS_INTENT_PERCEPTUAL));
}
}
// Extract channel
if (baton->extractChannel > -1) {
if (baton->extractChannel >= image.bands()) {
if (baton->extractChannel == 3 && sharp::HasAlpha(image)) {
baton->extractChannel = image.bands() - 1;
} else {
(baton->err)
.append("Cannot extract channel ").append(std::to_string(baton->extractChannel))
.append(" from image with channels 0-").append(std::to_string(image.bands() - 1));
return Error();
}
}
VipsInterpretation colourspace = sharp::Is16Bit(image.interpretation())
? VIPS_INTERPRETATION_GREY16
: VIPS_INTERPRETATION_B_W;
image = image
.extract_band(baton->extractChannel)
.copy(VImage::option()->set("interpretation", colourspace));
}
// Apply output ICC profile
if (!baton->withMetadataIcc.empty()) {
if (baton->withMetadata) {
image = image.icc_transform(
const_cast<char*>(baton->withMetadataIcc.data()),
baton->withMetadataIcc.empty() ? "srgb" : const_cast<char*>(baton->withMetadataIcc.data()),
VImage::option()
->set("input_profile", processingProfile)
->set("embedded", TRUE)
->set("depth", sharp::Is16Bit(image.interpretation()) ? 16 : 8)
->set("intent", VIPS_INTENT_PERCEPTUAL));
}
// Override EXIF Orientation tag
@@ -846,6 +896,7 @@ class PipelineWorker : public Napi::AsyncWorker {
->set("lossless", baton->webpLossless)
->set("near_lossless", baton->webpNearLossless)
->set("smart_subsample", baton->webpSmartSubsample)
->set("preset", baton->webpPreset)
->set("effort", baton->webpEffort)
->set("min_size", baton->webpMinSize)
->set("mixed", baton->webpMixed)
@@ -863,7 +914,10 @@ class PipelineWorker : public Napi::AsyncWorker {
->set("strip", !baton->withMetadata)
->set("bitdepth", baton->gifBitdepth)
->set("effort", baton->gifEffort)
->set("reoptimise", baton->gifReoptimise)
->set("reuse", baton->gifReuse)
->set("interlace", baton->gifProgressive)
->set("interframe_maxerror", baton->gifInterFrameMaxError)
->set("interpalette_maxerror", baton->gifInterPaletteMaxError)
->set("dither", baton->gifDither)));
baton->bufferOut = static_cast<char*>(area->data);
baton->bufferOutLength = area->length;
@@ -902,6 +956,7 @@ class PipelineWorker : public Napi::AsyncWorker {
} else if (baton->formatOut == "heif" ||
(baton->formatOut == "input" && inputImageType == sharp::ImageType::HEIF)) {
// Write HEIF to buffer
sharp::AssertImageTypeDimensions(image, sharp::ImageType::HEIF);
image = sharp::RemoveAnimationProperties(image).cast(VIPS_FORMAT_UCHAR);
VipsArea *area = reinterpret_cast<VipsArea*>(image.heifsave_buffer(VImage::option()
->set("strip", !baton->withMetadata)
@@ -923,6 +978,7 @@ class PipelineWorker : public Napi::AsyncWorker {
if (!sharp::HasAlpha(image)) {
baton->tileBackground.pop_back();
}
image = sharp::StaySequential(image, access, baton->tileAngle != 0);
vips::VOption *options = BuildOptionsDZ(baton);
VipsArea *area = reinterpret_cast<VipsArea*>(image.dzsave_buffer(options));
baton->bufferOut = static_cast<char*>(area->data);
@@ -930,6 +986,21 @@ class PipelineWorker : public Napi::AsyncWorker {
area->free_fn = nullptr;
vips_area_unref(area);
baton->formatOut = "dz";
} else if (baton->formatOut == "jxl" ||
(baton->formatOut == "input" && inputImageType == sharp::ImageType::JXL)) {
// Write JXL to buffer
image = sharp::RemoveAnimationProperties(image);
VipsArea *area = reinterpret_cast<VipsArea*>(image.jxlsave_buffer(VImage::option()
->set("strip", !baton->withMetadata)
->set("distance", baton->jxlDistance)
->set("tier", baton->jxlDecodingTier)
->set("effort", baton->jxlEffort)
->set("lossless", baton->jxlLossless)));
baton->bufferOut = static_cast<char*>(area->data);
baton->bufferOutLength = area->length;
area->free_fn = nullptr;
vips_area_unref(area);
baton->formatOut = "jxl";
} else if (baton->formatOut == "raw" ||
(baton->formatOut == "input" && inputImageType == sharp::ImageType::RAW)) {
// Write raw, uncompressed image data to buffer
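The hunk above adds a JPEG XL branch to the buffer output path, mirrored below for file output and driven by the jxl* baton fields parsed further down. A hedged usage sketch; jxlsave is only available when the linked libvips was built with libjxl support:

  const sharp = require('sharp');

  sharp('input.png')
    .jxl({ distance: 1.0, effort: 7, lossless: false })
    .toBuffer()
    .then((data) => console.log('JXL bytes:', data.length));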
@@ -968,6 +1039,7 @@ class PipelineWorker : public Napi::AsyncWorker {
bool const isTiff = sharp::IsTiff(baton->fileOut);
bool const isJp2 = sharp::IsJp2(baton->fileOut);
bool const isHeif = sharp::IsHeif(baton->fileOut);
bool const isJxl = sharp::IsJxl(baton->fileOut);
bool const isDz = sharp::IsDz(baton->fileOut);
bool const isDzZip = sharp::IsDzZip(baton->fileOut);
bool const isV = sharp::IsV(baton->fileOut);
@@ -1030,6 +1102,7 @@ class PipelineWorker : public Napi::AsyncWorker {
->set("lossless", baton->webpLossless)
->set("near_lossless", baton->webpNearLossless)
->set("smart_subsample", baton->webpSmartSubsample)
->set("preset", baton->webpPreset)
->set("effort", baton->webpEffort)
->set("min_size", baton->webpMinSize)
->set("mixed", baton->webpMixed)
@@ -1043,7 +1116,8 @@ class PipelineWorker : public Napi::AsyncWorker {
->set("strip", !baton->withMetadata)
->set("bitdepth", baton->gifBitdepth)
->set("effort", baton->gifEffort)
->set("reoptimise", baton->gifReoptimise)
->set("reuse", baton->gifReuse)
->set("interlace", baton->gifProgressive)
->set("dither", baton->gifDither));
baton->formatOut = "gif";
} else if (baton->formatOut == "tiff" || (mightMatchInput && isTiff) ||
@@ -1074,6 +1148,7 @@ class PipelineWorker : public Napi::AsyncWorker {
} else if (baton->formatOut == "heif" || (mightMatchInput && isHeif) ||
(willMatchInput && inputImageType == sharp::ImageType::HEIF)) {
// Write HEIF to file
sharp::AssertImageTypeDimensions(image, sharp::ImageType::HEIF);
image = sharp::RemoveAnimationProperties(image).cast(VIPS_FORMAT_UCHAR);
image.heifsave(const_cast<char*>(baton->fileOut.data()), VImage::option()
->set("strip", !baton->withMetadata)
@@ -1085,6 +1160,17 @@ class PipelineWorker : public Napi::AsyncWorker {
? VIPS_FOREIGN_SUBSAMPLE_OFF : VIPS_FOREIGN_SUBSAMPLE_ON)
->set("lossless", baton->heifLossless));
baton->formatOut = "heif";
} else if (baton->formatOut == "jxl" || (mightMatchInput && isJxl) ||
(willMatchInput && inputImageType == sharp::ImageType::JXL)) {
// Write JXL to file
image = sharp::RemoveAnimationProperties(image);
image.jxlsave(const_cast<char*>(baton->fileOut.data()), VImage::option()
->set("strip", !baton->withMetadata)
->set("distance", baton->jxlDistance)
->set("tier", baton->jxlDecodingTier)
->set("effort", baton->jxlEffort)
->set("lossless", baton->jxlLossless));
baton->formatOut = "jxl";
} else if (baton->formatOut == "dz" || isDz || isDzZip) {
// Write DZ to file
if (isDzZip) {
@@ -1093,6 +1179,7 @@ class PipelineWorker : public Napi::AsyncWorker {
if (!sharp::HasAlpha(image)) {
baton->tileBackground.pop_back();
}
image = sharp::StaySequential(image, access, baton->tileAngle != 0);
vips::VOption *options = BuildOptionsDZ(baton);
image.dzsave(const_cast<char*>(baton->fileOut.data()), options);
baton->formatOut = "dz";
@@ -1128,7 +1215,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Handle warnings
std::string warning = sharp::VipsWarningPop();
while (!warning.empty()) {
debuglog.Call({ Napi::String::New(env, warning) });
debuglog.MakeCallback(Receiver().Value(), { Napi::String::New(env, warning) });
warning = sharp::VipsWarningPop();
}
@@ -1157,6 +1244,10 @@ class PipelineWorker : public Napi::AsyncWorker {
info.Set("cropOffsetLeft", static_cast<int32_t>(baton->cropOffsetLeft));
info.Set("cropOffsetTop", static_cast<int32_t>(baton->cropOffsetTop));
}
if (baton->hasAttentionCenter) {
info.Set("attentionX", static_cast<int32_t>(baton->attentionX));
info.Set("attentionY", static_cast<int32_t>(baton->attentionY));
}
if (baton->trimThreshold > 0.0) {
info.Set("trimOffsetLeft", static_cast<int32_t>(baton->trimOffsetLeft));
info.Set("trimOffsetTop", static_cast<int32_t>(baton->trimOffsetTop));
@@ -1170,7 +1261,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Add buffer size to info
info.Set("size", static_cast<uint32_t>(baton->bufferOutLength));
// Pass ownership of output data to Buffer instance
Napi::Buffer<char> data = Napi::Buffer<char>::New(env, static_cast<char*>(baton->bufferOut),
Napi::Buffer<char> data = Napi::Buffer<char>::NewOrCopy(env, static_cast<char*>(baton->bufferOut),
baton->bufferOutLength, sharp::FreeCallback);
Callback().MakeCallback(Receiver().Value(), { env.Null(), data, info });
} else {
@@ -1182,7 +1273,7 @@ class PipelineWorker : public Napi::AsyncWorker {
Callback().MakeCallback(Receiver().Value(), { env.Null(), info });
}
} else {
Callback().MakeCallback(Receiver().Value(), { Napi::Error::New(env, baton->err).Value() });
Callback().MakeCallback(Receiver().Value(), { Napi::Error::New(env, sharp::TrimEnd(baton->err)).Value() });
}
// Delete baton
@@ -1200,7 +1291,7 @@ class PipelineWorker : public Napi::AsyncWorker {
// Decrement processing task counter
g_atomic_int_dec_and_test(&sharp::counterProcess);
Napi::Number queueLength = Napi::Number::New(env, static_cast<double>(sharp::counterQueue));
queueListener.Call(Receiver().Value(), { queueLength });
queueListener.MakeCallback(Receiver().Value(), { queueLength });
}
private:
@@ -1290,6 +1381,7 @@ class PipelineWorker : public Napi::AsyncWorker {
{"lossless", baton->webpLossless ? "TRUE" : "FALSE"},
{"near_lossless", baton->webpNearLossless ? "TRUE" : "FALSE"},
{"smart_subsample", baton->webpSmartSubsample ? "TRUE" : "FALSE"},
{"preset", vips_enum_nick(VIPS_TYPE_FOREIGN_WEBP_PRESET, baton->webpPreset)},
{"min_size", baton->webpMinSize ? "TRUE" : "FALSE"},
{"mixed", baton->webpMixed ? "TRUE" : "FALSE"},
{"effort", std::to_string(baton->webpEffort)}
@@ -1346,7 +1438,7 @@ class PipelineWorker : public Napi::AsyncWorker {
Napi::Value pipeline(const Napi::CallbackInfo& info) {
// V8 objects are converted to non-V8 types held in the baton struct
PipelineBaton *baton = new PipelineBaton;
Napi::Object options = info[0].As<Napi::Object>();
Napi::Object options = info[size_t(0)].As<Napi::Object>();
// Input
baton->input = sharp::CreateInputDescriptor(options.Get("input").As<Napi::Object>());
@@ -1408,6 +1500,7 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
// Operators
baton->flatten = sharp::AttrAsBool(options, "flatten");
baton->flattenBackground = sharp::AttrAsVectorOfDouble(options, "flattenBackground");
baton->unflatten = sharp::AttrAsBool(options, "unflatten");
baton->negate = sharp::AttrAsBool(options, "negate");
baton->negateAlpha = sharp::AttrAsBool(options, "negateAlpha");
baton->blurSigma = sharp::AttrAsDouble(options, "blurSigma");
@@ -1432,6 +1525,8 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
baton->linearB = sharp::AttrAsVectorOfDouble(options, "linearB");
baton->greyscale = sharp::AttrAsBool(options, "greyscale");
baton->normalise = sharp::AttrAsBool(options, "normalise");
baton->normaliseLower = sharp::AttrAsUint32(options, "normaliseLower");
baton->normaliseUpper = sharp::AttrAsUint32(options, "normaliseUpper");
baton->tintA = sharp::AttrAsDouble(options, "tintA");
baton->tintB = sharp::AttrAsDouble(options, "tintB");
baton->claheWidth = sharp::AttrAsUint32(options, "claheWidth");
@@ -1449,6 +1544,7 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
baton->extendLeft = sharp::AttrAsInt32(options, "extendLeft");
baton->extendRight = sharp::AttrAsInt32(options, "extendRight");
baton->extendBackground = sharp::AttrAsVectorOfDouble(options, "extendBackground");
baton->extendWith = sharp::AttrAsEnum<VipsExtend>(options, "extendWith", VIPS_TYPE_EXTEND);
baton->extractChannel = sharp::AttrAsInt32(options, "extractChannel");
baton->affineMatrix = sharp::AttrAsVectorOfDouble(options, "affineMatrix");
baton->affineBackground = sharp::AttrAsVectorOfDouble(options, "affineBackground");
@@ -1538,13 +1634,17 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
baton->webpLossless = sharp::AttrAsBool(options, "webpLossless");
baton->webpNearLossless = sharp::AttrAsBool(options, "webpNearLossless");
baton->webpSmartSubsample = sharp::AttrAsBool(options, "webpSmartSubsample");
baton->webpPreset = sharp::AttrAsEnum<VipsForeignWebpPreset>(options, "webpPreset", VIPS_TYPE_FOREIGN_WEBP_PRESET);
baton->webpEffort = sharp::AttrAsUint32(options, "webpEffort");
baton->webpMinSize = sharp::AttrAsBool(options, "webpMinSize");
baton->webpMixed = sharp::AttrAsBool(options, "webpMixed");
baton->gifBitdepth = sharp::AttrAsUint32(options, "gifBitdepth");
baton->gifEffort = sharp::AttrAsUint32(options, "gifEffort");
baton->gifDither = sharp::AttrAsDouble(options, "gifDither");
baton->gifReoptimise = sharp::AttrAsBool(options, "gifReoptimise");
baton->gifInterFrameMaxError = sharp::AttrAsDouble(options, "gifInterFrameMaxError");
baton->gifInterPaletteMaxError = sharp::AttrAsDouble(options, "gifInterPaletteMaxError");
baton->gifReuse = sharp::AttrAsBool(options, "gifReuse");
baton->gifProgressive = sharp::AttrAsBool(options, "gifProgressive");
baton->tiffQuality = sharp::AttrAsUint32(options, "tiffQuality");
baton->tiffPyramid = sharp::AttrAsBool(options, "tiffPyramid");
baton->tiffBitdepth = sharp::AttrAsUint32(options, "tiffBitdepth");
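The options parsed above map onto the existing JavaScript output methods: webp() gains a preset, and gif() replaces reoptimise with reuse plus progressive and the inter-frame/inter-palette error thresholds. A short illustrative sketch (values chosen only for demonstration):

  const sharp = require('sharp');

  sharp('photo.jpg')
    .webp({ preset: 'photo', effort: 4 })
    .toFile('photo.webp');

  sharp('animation.gif', { animated: true })
    .gif({ reuse: false, progressive: true, interFrameMaxError: 8 })
    .toFile('smaller.gif');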
@@ -1568,6 +1668,10 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
options, "heifCompression", VIPS_TYPE_FOREIGN_HEIF_COMPRESSION);
baton->heifEffort = sharp::AttrAsUint32(options, "heifEffort");
baton->heifChromaSubsampling = sharp::AttrAsStr(options, "heifChromaSubsampling");
baton->jxlDistance = sharp::AttrAsDouble(options, "jxlDistance");
baton->jxlDecodingTier = sharp::AttrAsUint32(options, "jxlDecodingTier");
baton->jxlEffort = sharp::AttrAsUint32(options, "jxlEffort");
baton->jxlLossless = sharp::AttrAsBool(options, "jxlLossless");
baton->rawDepth = sharp::AttrAsEnum<VipsBandFormat>(options, "rawDepth", VIPS_TYPE_BAND_FORMAT);
// Animated output properties
if (sharp::HasAttr(options, "loop")) {
@@ -1590,20 +1694,6 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
baton->tileId = sharp::AttrAsStr(options, "tileId");
baton->tileBasename = sharp::AttrAsStr(options, "tileBasename");
// Force random access for certain operations
if (baton->input->access == VIPS_ACCESS_SEQUENTIAL) {
if (
baton->trimThreshold > 0.0 ||
baton->normalise ||
baton->position == 16 || baton->position == 17 ||
baton->angle % 360 != 0 ||
fmod(baton->rotationAngle, 360.0) != 0.0 ||
baton->useExifOrientation
) {
baton->input->access = VIPS_ACCESS_RANDOM;
}
}
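The block that silently promoted sequential input to random access for trim, normalise, attention/entropy crop and rotation has been removed; those code paths now call sharp::StaySequential at the point of use instead, as seen throughout pipeline.cc above. From JavaScript the input option is unchanged, for example:

  const sharp = require('sharp');

  // sequentialRead is honoured where possible; operations that need random
  // access now handle the switch internally rather than forcing it up front.
  sharp('large.tiff', { sequentialRead: true })
    .rotate(90)
    .normalise()
    .toFile('out.tiff');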
// Function to notify of libvips warnings
Napi::Function debuglog = options.Get("debuglog").As<Napi::Function>();
@@ -1611,7 +1701,7 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
Napi::Function queueListener = options.Get("queueListener").As<Napi::Function>();
// Join queue for worker thread
Napi::Function callback = info[1].As<Napi::Function>();
Napi::Function callback = info[size_t(1)].As<Napi::Function>();
PipelineWorker *worker = new PipelineWorker(callback, baton, debuglog, queueListener);
worker->Receiver().Set("options", options);
worker->Queue();
@@ -1619,7 +1709,7 @@ Napi::Value pipeline(const Napi::CallbackInfo& info) {
// Increment queued task counter
g_atomic_int_inc(&sharp::counterQueue);
Napi::Number queueLength = Napi::Number::New(info.Env(), static_cast<double>(sharp::counterQueue));
queueListener.Call(info.This(), { queueLength });
queueListener.MakeCallback(info.This(), { queueLength });
return info.Env().Undefined();
}


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#ifndef SRC_PIPELINE_H_
#define SRC_PIPELINE_H_
@@ -74,6 +63,9 @@ struct PipelineBaton {
bool hasCropOffset;
int cropOffsetLeft;
int cropOffsetTop;
bool hasAttentionCenter;
int attentionX;
int attentionY;
bool premultiplied;
bool tileCentre;
bool fastShrinkOnLoad;
@@ -81,6 +73,7 @@ struct PipelineBaton {
double tintB;
bool flatten;
std::vector<double> flattenBackground;
bool unflatten;
bool negate;
bool negateAlpha;
double blurSigma;
@@ -107,6 +100,8 @@ struct PipelineBaton {
double gammaOut;
bool greyscale;
bool normalise;
int normaliseLower;
int normaliseUpper;
int claheWidth;
int claheHeight;
int claheMaxSlope;
@@ -122,6 +117,7 @@ struct PipelineBaton {
int extendLeft;
int extendRight;
std::vector<double> extendBackground;
VipsExtend extendWith;
bool withoutEnlargement;
bool withoutReduction;
std::vector<double> affineMatrix;
@@ -157,13 +153,17 @@ struct PipelineBaton {
bool webpNearLossless;
bool webpLossless;
bool webpSmartSubsample;
VipsForeignWebpPreset webpPreset;
int webpEffort;
bool webpMinSize;
bool webpMixed;
int gifBitdepth;
int gifEffort;
double gifDither;
bool gifReoptimise;
double gifInterFrameMaxError;
double gifInterPaletteMaxError;
bool gifReuse;
bool gifProgressive;
int tiffQuality;
VipsForeignTiffCompression tiffCompression;
VipsForeignTiffPredictor tiffPredictor;
@@ -180,6 +180,10 @@ struct PipelineBaton {
int heifEffort;
std::string heifChromaSubsampling;
bool heifLossless;
double jxlDistance;
int jxlDecodingTier;
int jxlEffort;
bool jxlLossless;
VipsBandFormat rawDepth;
std::string err;
bool withMetadata;
@@ -229,11 +233,15 @@ struct PipelineBaton {
hasCropOffset(false),
cropOffsetLeft(0),
cropOffsetTop(0),
hasAttentionCenter(false),
attentionX(0),
attentionY(0),
premultiplied(false),
tintA(128.0),
tintB(128.0),
flatten(false),
flattenBackground{ 0.0, 0.0, 0.0 },
unflatten(false),
negate(false),
negateAlpha(true),
blurSigma(0.0),
@@ -259,6 +267,8 @@ struct PipelineBaton {
gamma(0.0),
greyscale(false),
normalise(false),
normaliseLower(1),
normaliseUpper(99),
claheWidth(0),
claheHeight(0),
claheMaxSlope(3),
@@ -273,6 +283,7 @@ struct PipelineBaton {
extendLeft(0),
extendRight(0),
extendBackground{ 0.0, 0.0, 0.0, 255.0 },
extendWith(VIPS_EXTEND_BACKGROUND),
withoutEnlargement(false),
withoutReduction(false),
affineMatrix{ 1.0, 0.0, 0.0, 1.0 },
@@ -308,13 +319,17 @@ struct PipelineBaton {
webpNearLossless(false),
webpLossless(false),
webpSmartSubsample(false),
webpPreset(VIPS_FOREIGN_WEBP_PRESET_DEFAULT),
webpEffort(4),
webpMinSize(false),
webpMixed(false),
gifBitdepth(8),
gifEffort(7),
gifDither(1.0),
gifReoptimise(false),
gifInterFrameMaxError(0.0),
gifInterPaletteMaxError(3.0),
gifReuse(true),
gifProgressive(false),
tiffQuality(80),
tiffCompression(VIPS_FOREIGN_TIFF_COMPRESSION_JPEG),
tiffPredictor(VIPS_FOREIGN_TIFF_PREDICTOR_HORIZONTAL),
@@ -331,6 +346,10 @@ struct PipelineBaton {
heifEffort(4),
heifChromaSubsampling("4:4:4"),
heifLossless(false),
jxlDistance(1.0),
jxlDecodingTier(0),
jxlEffort(7),
jxlLossless(false),
rawDepth(VIPS_FORMAT_UCHAR),
withMetadata(false),
withMetadataOrientation(-1),


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <napi.h>
#include <vips/vips8>
@@ -42,6 +31,7 @@ Napi::Object init(Napi::Env env, Napi::Object exports) {
exports.Set("simd", Napi::Function::New(env, simd));
exports.Set("libvipsVersion", Napi::Function::New(env, libvipsVersion));
exports.Set("format", Napi::Function::New(env, format));
exports.Set("block", Napi::Function::New(env, block));
exports.Set("_maxColourDistance", Napi::Function::New(env, _maxColourDistance));
exports.Set("_isUsingJemalloc", Napi::Function::New(env, _isUsingJemalloc));
exports.Set("stats", Napi::Function::New(env, stats));

View File

@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <numeric>
#include <vector>
@@ -117,7 +106,7 @@ class StatsWorker : public Napi::AsyncWorker {
// Handle warnings
std::string warning = sharp::VipsWarningPop();
while (!warning.empty()) {
debuglog.Call({ Napi::String::New(env, warning) });
debuglog.MakeCallback(Receiver().Value(), { Napi::String::New(env, warning) });
warning = sharp::VipsWarningPop();
}
@@ -154,7 +143,7 @@ class StatsWorker : public Napi::AsyncWorker {
info.Set("dominant", dominant);
Callback().MakeCallback(Receiver().Value(), { env.Null(), info });
} else {
Callback().MakeCallback(Receiver().Value(), { Napi::Error::New(env, baton->err).Value() });
Callback().MakeCallback(Receiver().Value(), { Napi::Error::New(env, sharp::TrimEnd(baton->err)).Value() });
}
delete baton->input;
@@ -172,16 +161,17 @@ class StatsWorker : public Napi::AsyncWorker {
Napi::Value stats(const Napi::CallbackInfo& info) {
// V8 objects are converted to non-V8 types held in the baton struct
StatsBaton *baton = new StatsBaton;
Napi::Object options = info[0].As<Napi::Object>();
Napi::Object options = info[size_t(0)].As<Napi::Object>();
// Input
baton->input = sharp::CreateInputDescriptor(options.Get("input").As<Napi::Object>());
baton->input->access = VIPS_ACCESS_RANDOM;
// Function to notify of libvips warnings
Napi::Function debuglog = options.Get("debuglog").As<Napi::Function>();
// Join queue for worker thread
Napi::Function callback = info[1].As<Napi::Function>();
Napi::Function callback = info[size_t(1)].As<Napi::Function>();
StatsWorker *worker = new StatsWorker(callback, baton, debuglog);
worker->Receiver().Set("options", options);
worker->Queue();


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#ifndef SRC_STATS_H_
#define SRC_STATS_H_


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#include <cmath>
#include <string>
@@ -30,16 +19,16 @@ Napi::Value cache(const Napi::CallbackInfo& info) {
Napi::Env env = info.Env();
// Set memory limit
if (info[0].IsNumber()) {
vips_cache_set_max_mem(info[0].As<Napi::Number>().Int32Value() * 1048576);
if (info[size_t(0)].IsNumber()) {
vips_cache_set_max_mem(info[size_t(0)].As<Napi::Number>().Int32Value() * 1048576);
}
// Set file limit
if (info[1].IsNumber()) {
vips_cache_set_max_files(info[1].As<Napi::Number>().Int32Value());
if (info[size_t(1)].IsNumber()) {
vips_cache_set_max_files(info[size_t(1)].As<Napi::Number>().Int32Value());
}
// Set items limit
if (info[2].IsNumber()) {
vips_cache_set_max(info[2].As<Napi::Number>().Int32Value());
if (info[size_t(2)].IsNumber()) {
vips_cache_set_max(info[size_t(2)].As<Napi::Number>().Int32Value());
}
// Get memory stats
@@ -69,8 +58,8 @@ Napi::Value cache(const Napi::CallbackInfo& info) {
*/
Napi::Value concurrency(const Napi::CallbackInfo& info) {
// Set concurrency
if (info[0].IsNumber()) {
vips_concurrency_set(info[0].As<Napi::Number>().Int32Value());
if (info[size_t(0)].IsNumber()) {
vips_concurrency_set(info[size_t(0)].As<Napi::Number>().Int32Value());
}
// Get concurrency
return Napi::Number::New(info.Env(), vips_concurrency_get());
@@ -91,8 +80,8 @@ Napi::Value counters(const Napi::CallbackInfo& info) {
*/
Napi::Value simd(const Napi::CallbackInfo& info) {
// Set state
if (info[0].IsBoolean()) {
vips_vector_set_enabled(info[0].As<Napi::Boolean>().Value());
if (info[size_t(0)].IsBoolean()) {
vips_vector_set_enabled(info[size_t(0)].As<Napi::Boolean>().Value());
}
// Get state
return Napi::Boolean::New(info.Env(), vips_vector_isenabled());
@@ -115,7 +104,7 @@ Napi::Value format(const Napi::CallbackInfo& info) {
Napi::Object format = Napi::Object::New(env);
for (std::string const f : {
"jpeg", "png", "webp", "tiff", "magick", "openslide", "dz",
"ppm", "fits", "gif", "svg", "heif", "pdf", "vips", "jp2k"
"ppm", "fits", "gif", "svg", "heif", "pdf", "vips", "jp2k", "jxl"
}) {
// Input
const VipsObjectClass *oc = vips_class_find("VipsOperation", (f + "load").c_str());
@@ -175,6 +164,17 @@ Napi::Value format(const Napi::CallbackInfo& info) {
return format;
}
/*
(Un)block libvips operations at runtime.
*/
void block(const Napi::CallbackInfo& info) {
Napi::Array ops = info[size_t(0)].As<Napi::Array>();
bool const state = info[size_t(1)].As<Napi::Boolean>().Value();
for (unsigned int i = 0; i < ops.Length(); i++) {
vips_operation_block_set(ops.Get(i).As<Napi::String>().Utf8Value().c_str(), state);
}
}
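block() toggles vips_operation_block_set for each named operation and is exported from init() in sharp.cc above; it backs the JavaScript-level sharp.block()/sharp.unblock() helpers. A sketch of the expected calling convention (operation names are standard libvips class names):

  const sharp = require('sharp');

  // Block every libvips loader, then allow JPEG files back in
  sharp.block({ operation: ['VipsForeignLoad'] });
  sharp.unblock({ operation: ['VipsForeignLoadJpegFile'] });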
/*
Synchronous, internal-only method used by some of the functional tests.
Calculates the maximum colour distance using the DE2000 algorithm
@@ -185,10 +185,10 @@ Napi::Value _maxColourDistance(const Napi::CallbackInfo& info) {
// Open input files
VImage image1;
sharp::ImageType imageType1 = sharp::DetermineImageType(info[0].As<Napi::String>().Utf8Value().data());
sharp::ImageType imageType1 = sharp::DetermineImageType(info[size_t(0)].As<Napi::String>().Utf8Value().data());
if (imageType1 != sharp::ImageType::UNKNOWN) {
try {
image1 = VImage::new_from_file(info[0].As<Napi::String>().Utf8Value().c_str());
image1 = VImage::new_from_file(info[size_t(0)].As<Napi::String>().Utf8Value().c_str());
} catch (...) {
throw Napi::Error::New(env, "Input file 1 has corrupt header");
}
@@ -196,10 +196,10 @@ Napi::Value _maxColourDistance(const Napi::CallbackInfo& info) {
throw Napi::Error::New(env, "Input file 1 is of an unsupported image format");
}
VImage image2;
sharp::ImageType imageType2 = sharp::DetermineImageType(info[1].As<Napi::String>().Utf8Value().data());
sharp::ImageType imageType2 = sharp::DetermineImageType(info[size_t(1)].As<Napi::String>().Utf8Value().data());
if (imageType2 != sharp::ImageType::UNKNOWN) {
try {
image2 = VImage::new_from_file(info[1].As<Napi::String>().Utf8Value().c_str());
image2 = VImage::new_from_file(info[size_t(1)].As<Napi::String>().Utf8Value().c_str());
} catch (...) {
throw Napi::Error::New(env, "Input file 2 has corrupt header");
}


@@ -1,16 +1,5 @@
// Copyright 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Lovell Fuller and contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
#ifndef SRC_UTILITIES_H_
#define SRC_UTILITIES_H_
@@ -23,6 +12,7 @@ Napi::Value counters(const Napi::CallbackInfo& info);
Napi::Value simd(const Napi::CallbackInfo& info);
Napi::Value libvipsVersion(const Napi::CallbackInfo& info);
Napi::Value format(const Napi::CallbackInfo& info);
void block(const Napi::CallbackInfo& info);
Napi::Value _maxColourDistance(const Napi::CallbackInfo& info);
Napi::Value _isUsingJemalloc(const Napi::CallbackInfo& info);

test/beforeEach.js (new file, 24 lines)

@@ -0,0 +1,24 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const sharp = require('../');
const usingCache = !process.env.G_DEBUG;
const usingSimd = !process.env.VIPS_NOVECTOR;
const concurrency = Number(process.env.VIPS_CONCURRENCY) || 0;
exports.mochaHooks = {
beforeEach () {
sharp.cache(usingCache);
sharp.simd(usingSimd);
sharp.concurrency(concurrency);
},
afterEach () {
if (global.gc) {
global.gc();
}
}
};
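These root hooks reset the cache, SIMD and concurrency settings before every test and trigger a GC pass afterwards when one is exposed. They only take effect if mocha is told to load the file; a plausible wiring, not part of this diff, would be:

  // .mocharc.js (hypothetical): load the root hook plugin for every test run
  module.exports = {
    require: 'test/beforeEach.js'
  };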


@@ -5,11 +5,11 @@ ARG BRANCH=main
RUN apt-get -y update && apt-get install -y build-essential curl git
# Install latest Node.js LTS
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash -
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash -
RUN apt-get install -y nodejs
# Install benchmark dependencies
RUN apt-get install -y imagemagick libmagick++-dev graphicsmagick libmapnik-dev
RUN apt-get install -y imagemagick libmagick++-dev graphicsmagick
# Install sharp
RUN mkdir /tmp/sharp
@@ -17,10 +17,15 @@ RUN cd /tmp && git clone --single-branch --branch $BRANCH https://github.com/lov
RUN cd /tmp/sharp && npm install --build-from-source
# Install benchmark test
RUN cd /tmp/sharp/test/bench && npm install
RUN cd /tmp/sharp/test/bench && npm install --omit optional
RUN cat /etc/os-release | grep VERSION=
RUN node -v
WORKDIR /tmp/sharp/test/bench
# Workaround for: https://github.com/emscripten-core/emscripten/pull/16917
# This could be removed once Squoosh is an optional dependency.
ENV NODE_OPTIONS="--no-experimental-fetch"
CMD [ "node", "perf" ]


@@ -7,17 +7,18 @@
"scripts": {
"test": "node perf && node random && node parallel"
},
"devDependencies": {
"@squoosh/cli": "0.7.2",
"@squoosh/lib": "0.4.0",
"@tensorflow/tfjs-node": "3.20.0",
"dependencies": {
"@squoosh/cli": "0.7.3",
"@squoosh/lib": "0.5.3",
"async": "3.2.4",
"benchmark": "2.1.4",
"gm": "1.24.0",
"gm": "1.25.0",
"imagemagick": "0.1.3",
"jimp": "0.16.2",
"mapnik": "4.5.9",
"semver": "7.3.7"
"jimp": "0.22.10"
},
"optionalDependencies": {
"@tensorflow/tfjs-node": "4.9.0",
"mapnik": "4.5.9"
},
"license": "Apache-2.0",
"engines": {

View File

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
process.env.UV_THREADPOOL_SIZE = 64;

View File

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const os = require('os');
@@ -7,15 +10,22 @@ const { exec } = require('child_process');
const async = require('async');
const Benchmark = require('benchmark');
const safeRequire = (name) => {
try {
return require(name);
} catch (err) {}
return null;
};
// Contenders
const sharp = require('../../');
const gm = require('gm');
const imagemagick = require('imagemagick');
const mapnik = require('mapnik');
const mapnik = safeRequire('mapnik');
const jimp = require('jimp');
const squoosh = require('@squoosh/lib');
process.env.TF_CPP_MIN_LOG_LEVEL = 1;
const tfjs = require('@tensorflow/tfjs-node');
const tfjs = safeRequire('@tensorflow/tfjs-node');
const fixtures = require('../fixtures');
@@ -25,6 +35,7 @@ const outputWebP = fixtures.path('output.webp');
const width = 720;
const height = 588;
const heightPng = 540;
// Disable libvips cache to ensure tests are as fair as they can be
sharp.cache(false);
@@ -98,7 +109,7 @@ async.series({
jpegSuite.add('squoosh-lib-buffer-buffer', {
defer: true,
fn: function (deferred) {
const pool = new squoosh.ImagePool();
const pool = new squoosh.ImagePool(os.cpus().length);
const image = pool.ingestImage(inputJpgBuffer);
image.decoded
.then(function () {
@@ -138,7 +149,7 @@ async.series({
}
});
// mapnik
jpegSuite.add('mapnik-file-file', {
mapnik && jpegSuite.add('mapnik-file-file', {
defer: true,
fn: function (deferred) {
mapnik.Image.open(fixtures.inputJpg, function (err, img) {
@@ -253,7 +264,7 @@ async.series({
}
});
// tfjs
jpegSuite.add('tfjs-node-buffer-buffer', {
tfjs && jpegSuite.add('tfjs-node-buffer-buffer', {
defer: true,
fn: function (deferred) {
const decoded = tfjs.node.decodeJpeg(inputJpgBuffer);
@@ -528,10 +539,10 @@ async.series({
}
});
}
}).add('sharp-sequentialRead', {
}).add('sharp-random-access-read', {
defer: true,
fn: function (deferred) {
sharp(inputJpgBuffer, { sequentialRead: true })
sharp(inputJpgBuffer, { sequentialRead: false })
.resize(width, height)
.toBuffer(function (err) {
if (err) {
@@ -641,7 +652,7 @@ async.series({
throw err;
} else {
image
.resize(width, height)
.resize(width, heightPng, jimp.RESIZE_BICUBIC)
.deflateLevel(6)
.filterType(0)
.getBuffer(jimp.MIME_PNG, function (err) {
@@ -662,7 +673,7 @@ async.series({
throw err;
} else {
image
.resize(width, height)
.resize(width, heightPng, jimp.RESIZE_BICUBIC)
.deflateLevel(6)
.filterType(0)
.write(outputPng, function (err) {
@@ -676,21 +687,74 @@ async.series({
});
}
});
// squoosh-cli
pngSuite.add('squoosh-cli-file-file', {
defer: true,
fn: function (deferred) {
exec(`./node_modules/.bin/squoosh-cli \
--output-dir ${os.tmpdir()} \
--resize '{"enabled":true,"width":${width},"height":${heightPng},"method":"lanczos3","premultiply":true,"linearRGB":false}' \
--oxipng '{"level":1}' \
"${fixtures.inputPngAlphaPremultiplicationLarge}"`, function (err) {
if (err) {
throw err;
}
deferred.resolve();
});
}
});
// squoosh-lib (GPLv3)
pngSuite.add('squoosh-lib-buffer-buffer', {
defer: true,
fn: function (deferred) {
const pool = new squoosh.ImagePool(os.cpus().length);
const image = pool.ingestImage(inputPngBuffer);
image.decoded
.then(function () {
return image.preprocess({
resize: {
enabled: true,
width,
height: heightPng,
method: 'lanczos3',
premultiply: true,
linearRGB: false
}
});
})
.then(function () {
return image.encode({
oxipng: {
level: 1
}
});
})
.then(function () {
return pool.close();
})
.then(function () {
return image.encodedWith.oxipng;
})
.then(function () {
deferred.resolve();
});
}
});
// mapnik
pngSuite.add('mapnik-file-file', {
mapnik && pngSuite.add('mapnik-file-file', {
defer: true,
fn: function (deferred) {
mapnik.Image.open(fixtures.inputPngAlphaPremultiplicationLarge, function (err, img) {
if (err) throw err;
img.premultiply(function (err, img) {
if (err) throw err;
img.resize(width, height, {
img.resize(width, heightPng, {
scaling_method: mapnik.imageScaling.lanczos
}, function (err, img) {
if (err) throw err;
img.demultiply(function (err, img) {
if (err) throw err;
img.save(outputPng, 'png', function (err) {
img.save(outputPng, 'png32:f=no:z=6', function (err) {
if (err) throw err;
deferred.resolve();
});
@@ -706,13 +770,13 @@ async.series({
if (err) throw err;
img.premultiply(function (err, img) {
if (err) throw err;
img.resize(width, height, {
img.resize(width, heightPng, {
scaling_method: mapnik.imageScaling.lanczos
}, function (err, img) {
if (err) throw err;
img.demultiply(function (err, img) {
if (err) throw err;
img.encode('png', function (err) {
img.encode('png32:f=no:z=6', function (err) {
if (err) throw err;
deferred.resolve();
});
@@ -730,7 +794,7 @@ async.series({
srcPath: fixtures.inputPngAlphaPremultiplicationLarge,
dstPath: outputPng,
width: width,
height: height,
height: heightPng,
filter: 'Lanczos',
customArgs: [
'-define', 'PNG:compression-level=6',
@@ -751,7 +815,7 @@ async.series({
fn: function (deferred) {
gm(fixtures.inputPngAlphaPremultiplicationLarge)
.filter('Lanczos')
.resize(width, height)
.resize(width, heightPng)
.define('PNG:compression-level=6')
.define('PNG:compression-filter=0')
.write(outputPng, function (err) {
@@ -767,7 +831,7 @@ async.series({
fn: function (deferred) {
gm(fixtures.inputPngAlphaPremultiplicationLarge)
.filter('Lanczos')
.resize(width, height)
.resize(width, heightPng)
.define('PNG:compression-level=6')
.define('PNG:compression-filter=0')
.toBuffer(function (err) {
@@ -785,7 +849,7 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(inputPngBuffer)
.resize(width, height)
.resize(width, heightPng)
.png({ compressionLevel: 6 })
.toFile(outputPng, function (err) {
if (err) {
@@ -800,9 +864,9 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(inputPngBuffer)
.resize(width, height)
.resize(width, heightPng)
.png({ compressionLevel: 6 })
.toBuffer(function (err) {
.toBuffer(function (err, data) {
if (err) {
throw err;
} else {
@@ -815,7 +879,7 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(fixtures.inputPngAlphaPremultiplicationLarge)
.resize(width, height)
.resize(width, heightPng)
.png({ compressionLevel: 6 })
.toFile(outputPng, function (err) {
if (err) {
@@ -830,7 +894,7 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(fixtures.inputPngAlphaPremultiplicationLarge)
.resize(width, height)
.resize(width, heightPng)
.png({ compressionLevel: 6 })
.toBuffer(function (err) {
if (err) {
@@ -845,7 +909,7 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(inputPngBuffer)
.resize(width, height)
.resize(width, heightPng)
.png({ compressionLevel: 6, progressive: true })
.toBuffer(function (err) {
if (err) {
@@ -860,7 +924,7 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(inputPngBuffer)
.resize(width, height)
.resize(width, heightPng)
.png({ adaptiveFiltering: true, compressionLevel: 6 })
.toBuffer(function (err) {
if (err) {
@@ -875,7 +939,7 @@ async.series({
minSamples,
fn: function (deferred) {
sharp(inputPngBuffer)
.resize(width, height)
.resize(width, heightPng)
.png({ compressionLevel: 9 })
.toBuffer(function (err) {
if (err) {

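Together with the package.json change above, the pattern in this file is: the heavier contenders (mapnik, @tensorflow/tfjs-node) move to optionalDependencies, are loaded via safeRequire so a failed or skipped install yields null, and every benchmark case that depends on them is guarded with lib && suite.add(...), so the suite still runs when they are absent. A condensed sketch of that guard pattern, using a hypothetical optional dependency name:

// Condensed sketch of the optional-contender guard used in perf.js (names hypothetical).
const Benchmark = require('benchmark');

const safeRequire = (name) => {
  try {
    return require(name);
  } catch (err) {}
  return null;
};

// Resolves to null when the optional dependency was not installed,
// e.g. after `npm install --omit optional`.
const optionalLib = safeRequire('some-optional-contender');

const suite = new Benchmark.Suite('example');
// Register the benchmark case only when the optional dependency is available.
optionalLib && suite.add('optional-contender-case', {
  defer: true,
  fn: function (deferred) {
    // Exercise optionalLib here in a real benchmark.
    deferred.resolve();
  }
});
suite.run({ async: true });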
View File

@@ -1,3 +1,6 @@
// Copyright 2013 Lovell Fuller and others.
// SPDX-License-Identifier: Apache-2.0
'use strict';
const imagemagick = require('imagemagick');

Binary image fixtures: several test fixture images were added or updated in this diff, including the new file test/fixtures/expected/linear-16bit.png; image previews and before/after sizes are not reproduced here.

Some files were not shown because too many files have changed in this diff.