Canvas 2.0 🚀 #743
Sounds great! Especially Pango. Does anyone with the necessary know-how know whether SVG import (not export!) #494 would be possible? Happy to sponsor someone's time to work on this. In fact, more generally, is it possible to donate some money for your and other developers' time? |
Awesome! I'd add:
Re: remove sync streams, any use in adding a sync API? (e.g. |
That's right, I forgot about that one 👍 I think we already have a sync API? |
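For reference, a minimal sketch of what the synchronous path looks like, assuming the 2.x-style API (`createCanvas`, and `toBuffer` with a MIME type); the 1.x equivalent would use `new Canvas(...)`:

```js
const { createCanvas } = require('canvas')

const canvas = createCanvas(640, 480)
const ctx = canvas.getContext('2d')
ctx.fillStyle = '#09f'
ctx.fillRect(0, 0, 640, 480)

// Synchronous: returns a Buffer directly, no stream involved.
const png = canvas.toBuffer('image/png')

// Asynchronous (callback) variant, for comparison.
canvas.toBuffer((err, buf) => {
  if (err) throw err
  console.log('encoded %d bytes', buf.length)
})
```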
Would love to see CMYK JPG import support too #425, or at least an exception thrown if trying to add one to a canvas. |
I would like to have a static build so there are no dependencies on libraries installed on the system; this would fix a lot of install issues. I would also like to have configurable backend support, so node-canvas could be used to draw on the Linux framebuffer and other on-screen outputs. I have support for both of them in my pull request. |
node-pre-gyp seems to be the new way to do that. Canvas could move to that and there wouldn't have to be compilation on the client. I think people would still have to have Cairo and related libraries installed, though, and of course Windows is still a nightmare. Downsides are that it adds another step to publishing, and it relies on S3, which has to be paid for. |
node-pre-gyp is a partial fix; my pull request fixes it for all platforms. |
Another thing I would like to see is a "compliant mode": a way to have a Canvas object that only implements the W3C Canvas API, without the node-canvas extras, so it would be easy to stick to it and be sure that a project will be compatible with both environments. |
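A rough sketch of what such a mode could look like, purely as an illustration; `createCompliantCanvas` and the exposed member list are made up for this example, not an existing node-canvas API:

```js
const { createCanvas } = require('canvas')

// Only members specified by the W3C canvas API are exposed; node-canvas
// extras such as toBuffer() and createPNGStream() are left out on purpose.
function createCompliantCanvas (width, height) {
  const backing = createCanvas(width, height)
  return {
    get width () { return backing.width },
    get height () { return backing.height },
    getContext: (type) => backing.getContext(type),
    toDataURL: (...args) => backing.toDataURL(...args)
  }
}

const canvas = createCompliantCanvas(200, 200)
const ctx = canvas.getContext('2d')
ctx.fillRect(0, 0, 100, 100)
// canvas.toBuffer is undefined here, so non-portable code fails fast.
```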
I have finally managed to update my pull request, with support for statically linked builds and a configurable graphics backend, to nan2, with direct screen support for FBDev and X11; it would also be easy to add other backends. Could someone review and merge it, or give me some comments? |
I'm of the opinion I started using |
Any update on 2.0? |
I'm splitting my pull request into several smaller ones, hoping they can be more easily reviewed and merged. The first one I've done so far is for static building, which would also fix the build dependency problems not only on Windows and OSX but also on Linux. |
Do you accept pull requests focused on the 2.0 release? |
Yes, that's the plan 👍 |
I did a cleaned-up one for the static build; I'll do the same for the backends. I'll first do it for the currently available ones and later move on to the native graphics systems, so it will be easier to merge and develop. What do you think? |
I've split out and isolated the support for backends in pull request #829 and also re-enabled the PDF and SVG backends so tests can pass; I hope it can now be more easily reviewed and merged. I know it's already a big one, but I hope it's easier to review, and I'm open to comments. Once we get it merged into master, I'll also port the support for the FBDev and X11 backends as independent pull requests, and in the future we could also look at adding support for GDI (Windows) and Cocoa (OSX). |
Does node-canvas support Node.js 6.0? |
Both 1.x and 2.x support Node.js 6.0 |
In an attempt to jump start the 2.0 discussion... #740 is ready and I think we still need a PR to remove the sync streams? I'd be happy to do that or anything else needed to get this done. A beta of what is currently on master would be nice too! |
Nice! I'd totally missed that one. I'll merge it, remove sync streams and then publish the first beta in the next few days :) |
@LinusU, would you be able to review the static build pull request too? I would like to have it merged before moving forward with the backends support and starting to add graphics device backends like FBDev or X11. I think these would be good additions for the 2.0 version, and more people are asking for them too... |
The static build is backwards compatible though? So that could land regardless of 2.x or not :) Yeah, I've really been meaning to take a proper look at it. I'm sorry that it's taking so long; there is only so much time in a day |
Yes, it is fully backwards compatible; the code of the "dynamic" version is untouched :-) The static version will be built automatically if the Cairo library is not found (maybe the other libraries and/or the headers should be checked too), or explicitly when using the
Yeah, it seems everybody is too busy lately with work and real-life duties, me too... :-/ |
Heretical question here, but ... why not remove streams altogether? They incur more CPU overhead than the encoding itself. They don't save a palpable amount of memory, since the encoded output is an order of magnitude smaller than the original canvas buffer. They're race-prone unless you dup the buffer -- in which case they cost a ghastly amount of memory, plus the time it takes to malloc. Encoding a 640x480 PNG takes 3ms (Skylake i5-6600K), and encoding JPEG is far faster. On a typical web server, I feel it's smarter to encode first, then stream the encoded Buffer. I'd expect an order of magnitude of memory savings (since fewer raw buffers would hang around) and throughput (since the CPU wouldn't waste cycles on async-function overhead). A 3ms stall is a small price to pay for such a performance increase. What's the real-world use case in which streaming does more good than harm? |
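A minimal sketch of that encode-first pattern, assuming an Express-style server; the `toBuffer` call is node-canvas's documented synchronous API, while the route and drawing code are purely illustrative:

```js
const express = require('express')
const { createCanvas } = require('canvas')

const app = express()

app.get('/image.png', (req, res) => {
  const canvas = createCanvas(640, 480)
  const ctx = canvas.getContext('2d')
  ctx.fillStyle = '#09f'
  ctx.fillRect(0, 0, 640, 480)

  // Encode synchronously (a few milliseconds at this size), then hand the
  // finished, compressed Buffer to the response. No 4-bytes-per-pixel raw
  // buffer has to stay alive while a slow client drains a stream.
  const png = canvas.toBuffer('image/png')
  res.type('png').send(png)
})

app.listen(3000)
```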
I tend to agree (at least that the non-streaming API must be solid), but here's one example where streaming was essentially required: #778 |
Yeah I was thinking of my issue #778 - I never got any further with that, but fortunately didn't have any other customers (yet) with such big files. |
So as I understand it, streaming is good when the output data is >1GB, because V8 doesn't allow values >1GB. For the sake of brainstorming, here's an idea: change the

The reason I'm advocating for destroying streaming APIs: when I first started using node-canvas, I thought I "should" use streams: the README only allows some options (PNG palettes, JPEG encoding) in the async/streaming APIs. The sync APIs are spartan, which suggested to me -- a newbie -- that I shouldn't want them. My impression was completely wrong: for most people, Buffers are the way to go. Streaming has its place, but only after the image is encoded and the Buffer is freed. Separate libraries like stream-buffers are a good fit for that. It seems to me like the streaming APIs add an unneeded learning curve. |
Personally I'd rather keep the streaming APIs so we don't break compatibility and since streams are a reasonable solution for big canvases. Specifying an output file wouldn't (nicely) help in cases where e.g. you want to pipe a big image to an HTTP response. +1 for making your performance points clear in the documentation. On my to-do list is making a significant update to README.md and I'll include these points if no one else does it before I do.
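For the big-canvas case, here is a sketch of the streaming path using the documented `createPNGStream()` API with Node's built-in `http` module (the canvas size and drawing are illustrative):

```js
const http = require('http')
const { createCanvas } = require('canvas')

http.createServer((req, res) => {
  // A deliberately huge canvas, where buffering the whole encoded file first
  // may not be practical.
  const canvas = createCanvas(8000, 8000)
  const ctx = canvas.getContext('2d')
  ctx.fillRect(0, 0, canvas.width, canvas.height)

  res.setHeader('Content-Type', 'image/png')
  // Chunks are written to the response as the encoder produces them.
  canvas.createPNGStream().pipe(res)
}).listen(3000)
```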
It definitely makes sense to ensure the streaming and non-streaming APIs support the same options. It looks like all of the interfaces are consistent, as well as they can be, but again the docs need to be improved.
(Edit: or we add
BTW, I'm curious what makes you say this. ("Just" performance?) libpng's encoder is at its core a streaming encoder; it works nicely together with node's streams. |
Keen to answer :)
Yes, I did that successfully: first with my custom-made lightnpng, and then with turbo-jpeg when I found out how slow PNG conversion is and will always be. (The task was to generate 200,000 images in <2hrs, and PNG conversion can only be as fast as zlib compression.) Raw access is definitely fantastic. At the same time, HTML5
Sure, I'll ramble on the topic :). Yes, PNG and JPEG formats both support streaming. But that stream is far more useful for the client (when decoding) than it is for the server (when encoding). The reason: on the server, if you're generating lots of images concurrently, you'll be limited by CPU and memory. "Great," you'll say, "streaming means I get effectively unlimited memory, and streaming means one task won't hog the CPU to the detriment of other tasks." But that's misleading in this specific case. Streaming actually costs memory in node-canvas: as long as the stream is open, node-canvas keeps an uncompressed buffer with the entire image contents -- that's an order of magnitude more data held in memory than the stream will transmit in total. For instance, streaming a 1024x1024 PNG costs 4MB, even if the stream only produces 200kb of PNG data. Under load, this is out-of-memory territory. A programmer's instinct should be to limit the number of simultaneously-open memory buffers ... which is hard to do with async APIs. The easiest way to limit memory usage is to guarantee a maximum of one open memory buffer at a time -- that is, generate the entire image in a single synchronous function.

As for CPU: jpeg-turbo and (especially) libpng hog CPU, yes ... but for a large image, we're talking milliseconds or tens of milliseconds. If a sync API solves out-of-memory errors at the cost of 5ms stutters when generating images, that seems like a win to me. Furthermore, without the overhead of async function calls, image generation is much faster. (I'm sorry to say I've misplaced my benchmarks that led me to this conclusion.)

Furthermore, streaming has to happen after encoding, otherwise there will be races:

```js
ctx.fillRect(0, 0, 10, 10)
canvas.toDataURL((err, result) => console.log(err, result))
ctx.clearRect(0, 0, canvas.width, canvas.height)
```

... if toDataURL() is async, then the value sent to the callback depends on whether encoding happens before or after the clearRect().

It's great that libpng and turbo-jpeg support streaming: that's really useful when transcribing an uncompressed file into a compressed bytestream using a teensy amount of memory. But transcribing uncompressed memory into a compressed bytestream is a completely different problem. Streams could only be useful for an expert asynchronous programmer generating enormous images concurrently on a RAM-heavy computer. So implementation-wise, if there is to be an async API, I think the best implementation is to first generate a compressed Buffer (synchronously), and then turn that compressed Buffer into a stream. That should lead to better performance, lower RAM usage, and no races. |
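A sketch of that encode-then-stream idea; `Readable.from` is built into Node (12.3+), and a package like the stream-buffers one mentioned above would serve the same purpose on older versions:

```js
const { Readable } = require('stream')
const { createCanvas } = require('canvas')

const canvas = createCanvas(1024, 1024)
canvas.getContext('2d').fillRect(0, 0, 1024, 1024)

// 1. Encode synchronously: later draw calls can no longer race with encoding.
const png = canvas.toBuffer('image/png')

// 2. Wrap the finished, compressed Buffer as a stream (single chunk).
const stream = Readable.from([png])

// 3. Pipe it wherever a stream is expected (socket, file, HTTP response, ...).
stream.pipe(process.stdout)
```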
Side note: I'm happy to talk about fast PNG encoding if you want, my email is in my profile. I've tested a bunch of combinations of PNG encoders, DEFLATE implementations, compression levels and other tweaks and wound up with a method that is 5-10x faster than libpng with similar resulting file sizes (using Intel's igzip and FFmpeg's adler32 impl). Second to that (since igzip is a pain to compile), libdeflate with the custom adler32 was still a significant speedup over zlib. An issue that came up was that encoding time can't be evaluated separately from the file size, as the network or disk I/O times for the difference in file sizes got to be on-par with the difference in encoding times. /ramble. Agree with everything you say 🍻. Two points though:
(BTW, (Also, there's a distinction between async and streaming; I still lean toward (a) improving the docs to encourage people to use |
Since it's been two years... 😨 Once #1111 and #1110 are dealt with (polishing up the prebuild issues), is it time to just publish 2.0.0 and defer other major changes to 3.0.0? A lot of new and open issues are either (a) from users who don't realize the default README.md is for 2.x, or (b) about things that are fixed in 2.x but users don't want to use an alpha. 3.0.0 doesn't have to be far down the road, and the pending semver-major issues are all gonna be hefty to start/finish/land. |
I don't think it's good to have 19 open pull requests, so I'm not sure whether they should be merged before releasing a major 2.0.0, or whether we should do a stable release and start accepting them right after that. |
I would like to start the discussion on the next major bump. This is a quick list, off the top of my head, of what I personally would like to get in.
Would love to get some comments from both maintainers and users of the module 🙌