This repository has been archived by the owner on Mar 25, 2021. It is now read-only.

Feedback about Workers! #6

Closed
addaleax opened this issue Oct 3, 2018 · 33 comments

Comments

addaleax (Member) commented Oct 3, 2018

Hi everyone! Something that has been coming up more and more frequently is moving Workers out of experimental status; we’re tracking that in nodejs/node#22940.

Something that would be very helpful is having feedback from people who use Workers, build libraries on top of them, or can share experiences with the API. Please use this thread for that! Whether your feedback is positive, negative, or anything else, it’s always going to be helpful as long as it’s on-topic.

If you have requests about specific changes you’d like to see, other than moving out of experimental status, feel free to comment here or open an issue at https://github.com/nodejs/node.

wilk commented Nov 4, 2018

Woah, I missed this post! That's great!
And of course, thanks for your work on Node.js worker threads! I think they are the missing piece of the Node.js platform 💪

Here's my feedback.

I'm the author of microjob, a tiny lib built on top of Node.js worker threads.
The purpose of this lib is to simulate Go's lightweight threads, called goroutines.
Please take a look at this example to get the idea: goroutines
Basically, it provides a way to execute some parts of the program into another thread, sharing the runtime context with ease.
For instance: https://play.golang.org/p/D4yU5ndG8bR

AFAIK, on Node.js things are different.
Usually, worker threads are spawned ahead of time, from a separate, static file, as suggested in the docs.
In my case, I needed to be able to send a user-defined, dynamic piece of program to an empty worker thread, giving the user the ability to run some heavy CPU-bound operations in the background, executed in a custom runtime context.

So, I found that there are two ways to pass something to worker threads: via message passing (strings, postMessage) or via a SharedArrayBuffer.
The first can be achieved using v8.serialize/v8.deserialize or JSON.stringify/JSON.parse.
The second is a little trickier because it requires you to convert your data into a buffer.
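
To make the two options concrete, here is a rough sketch (the file names and values are only for illustration):

// main.js
const { Worker } = require('worker_threads')

// 1) copy-based: the value is cloned (structured clone / v8 serialization)
const worker = new Worker('./worker.js', { workerData: { numbers: [1, 2, 3] } })

// 2) shared memory: only the handle travels, the bytes themselves are shared
const shared = new Int32Array(new SharedArrayBuffer(4))
worker.postMessage(shared)

// worker.js
const { parentPort, workerData } = require('worker_threads')
console.log(workerData.numbers)   // a cloned copy of the array
parentPort.on('message', view => {
  Atomics.store(view, 0, 42)      // writes directly into the shared memory
})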

Now, both approaches do work for data and that's great.
However, the situation is quite different with runtime context, such as class instances (both custom and native).
For instance, I would like to pass a new Date() instance (not the stringified version) from the main thread to the worker.
Another example: I have my Person class defined in the main thread, with both static and instance methods/props, and I want to be able to call them inside the worker.
Remember that microjob cannot access the source code of the main thread, so the only way (that I've found) is to pass the context using postMessage.
I've also opened an issue on Node.js Help repo: nodejs/help#1558

So, I would ask for an easy way to pass and execute runtime context (maybe in conjunction with vm.runInContext?), as I can do in Go.
Here is an example of runtime context used in microjob.
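
To give an idea of the mechanism I mean, here is a simplified sketch (the file names are made up, and it only works as long as ctx contains plain, cloneable data, which is exactly the limitation):

// main thread
const { Worker } = require('worker_threads')
const worker = new Worker('./executor.js')
worker.postMessage({ source: (() => a + b).toString(), ctx: { a: 1, b: 2 } })
worker.on('message', result => console.log(result))   // 3

// executor.js
const vm = require('vm')
const { parentPort } = require('worker_threads')

parentPort.on('message', ({ source, ctx }) => {
  const sandbox = vm.createContext({ ...ctx })             // ctx becomes the globals
  const result = vm.runInContext(`(${source})()`, sandbox) // run the user's function there
  parentPort.postMessage(result)
})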

Another thing I'd like to ask: microjob implements the thread-pool pattern, and I think it would be cool to have it built in.

Thanks again and keep going this way! 🔝 💪 👏

Akamaozu commented Nov 4, 2018

Edit: Foiled by daylight savings.

1. Read this

#6 (comment)

2. Then This

I recently streamed myself working on a system where one Node.js process coordinates with a second one via message passing.

Start at 3m35s; you should get the main gist by 10m.

https://www.twitch.tv/videos/330954507?t=3m35s

wilk commented Nov 4, 2018

@Akamaozu

microjob uses a worker pool to avoid the performance cost of spawning a new thread for each execution.
Your suggestion would require microjob to spawn a new worker for each job call, breaking that feature.

Furthermore, passing class instances via workerData does not solve the problem, because it uses the same V8 algorithm to serialize/deserialize the data.

Quoting from the docs:

workerData Any JavaScript value that will be cloned and made available as require('worker_threads').workerData. The cloning will occur as described in the HTML structured clone algorithm, and an error will be thrown if the object cannot be cloned (e.g. because it contains functions).
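
To make it concrete: the instance survives the clone, but it comes out as a plain object with its prototype (and therefore its methods) stripped. Rough sketch, file names made up:

// main.js
const { Worker } = require('worker_threads')
class Person {
  constructor (name) { this.name = name }
  hello () { console.log(`hello from ${this.name}`) }
}
new Worker('./worker.js', { workerData: new Person('foo') })

// worker.js
const { workerData } = require('worker_threads')
console.log(workerData)               // { name: 'foo' } -- just a plain object
console.log(typeof workerData.hello)  // 'undefined' -- the prototype is gone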

Akamaozu commented Nov 4, 2018

@wilk

You don't need to spawn a new process per job.

Spawn the worker, then feed it work via a queue like RabbitMQ.

Most queuing systems have a way to control concurrency, so you can limit each worker to X concurrent jobs.

wilk commented Nov 4, 2018

@Akamaozu

I was talking about spawning new threads, not new processes.
Spawning new threads is more expensive than feeding existing ones.

Maybe the following example can explain the situation better:

class Person {
  constructor(name) {
    this.name = name
  }

  hello() {
    console.log(`hello from ${this.name}`)
  }
}

const foo = new Person('foo')

// passing it to an existing worker thread
job(() => foo.hello())

Now, I don't want the user to have to define and instantiate Person inside the job call, nor in a separate worker-thread source file.
The user should be able to pass the source of the action to perform (like the foo.hello() call) and the corresponding context.

For instance:

job(() => foo.hello(), {ctx: {foo}})

I think that the same serialization/deserialization problem applies to your case (feeding via a queue).
Furthermore, I don't want the user to be tied to an external service (like RabbitMQ).

Akamaozu commented Nov 4, 2018

@wilk

Another option might be to send the program to the data, instead of the other way around.

This means:

  • The data starts off in the worker.

  • The place where the object originally lived (its call/creation site) is now just a remote call site.

  • The original call site sends messages telling the worker to do X on the data; the worker does the work and sends the results back to the original call site.

Now you can work on objects remotely.
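
Rough sketch of the shape I mean (all the names are made up):

// worker.js -- the data lives here
const { parentPort } = require('worker_threads');
const people = new Map();

parentPort.on('message', ({ id, action, args }) => {
  let result = null;
  if (action === 'create') people.set(args.name, { name: args.name });
  if (action === 'hello') result = `hello from ${people.get(args.name).name}`;
  parentPort.postMessage({ id, result });
});

// main.js -- only sends instructions and receives results
const { Worker } = require('worker_threads');
const worker = new Worker('./worker.js');
worker.on('message', ({ id, result }) => console.log(id, result));
worker.postMessage({ id: 1, action: 'create', args: { name: 'foo' } });
worker.postMessage({ id: 2, action: 'hello', args: { name: 'foo' } });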

Akamaozu commented Nov 4, 2018

@wilk OK, let's use your example:

var data = {
  class_path: "./class/person.js", 
  init_args: [ "foo" ],
  action: "get-foo"
};

job( data, ( err, foo )=> {
  // use result
});

wilk commented Nov 4, 2018

@Akamaozu

The internal state of a class instance may vary over time, and it does not depend only on the initial args.
You would have to provide something like event sourcing to reproduce the actual state, and that is too much effort to put on the user's shoulders.

Akamaozu commented Nov 4, 2018

@wilk we are entirely in agreement here 👍

  1. I've been working on taking the load off the user's shoulders here:
    https://github.com/akamaozu/node-supe

  2. You can play with a live demo of code using this.

Things to note about the demo:

- brain doesn't touch any of the files being moved around; it sends instructions and the other Node.js process works with the data, similar to what you're describing.
- you can see what the application is doing via the console.
- the demo is interactive: kill processes and see what happens.

Akamaozu commented Nov 4, 2018

Can't assume you'll be interested in spinning it up to check, so I took a screenshot of some log output.

Red is where brain instructs the other Node.js processes to do work.
Green is them doing the work and logging it.

[screenshot: log output from the demo]

devsnek (Member) commented Nov 4, 2018

@wilk It isn't possible to actually share the runtime between two workers. JS's prototype design makes it pretty much untenable to share any object between two threads (with the exception of SharedArrayBuffer, which works because we can perform atomic operations on it).
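
For example, this is the kind of coordination a SharedArrayBuffer does allow (rough single-file sketch):

const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new Int32Array(new SharedArrayBuffer(4));
  new Worker(__filename, { workerData: shared });
  Atomics.wait(shared, 0, 0);            // block until the worker writes something
  console.log(Atomics.load(shared, 0));  // 42
} else {
  Atomics.store(workerData, 0, 42);      // both threads see the same memory
  Atomics.notify(workerData, 0);
}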

wilk commented Nov 4, 2018

@Akamaozu
Thanks again!
Actually, I'd like to avoid this kind of workaround, especially here, where we can make a difference before worker threads move out of experimental status 😄

@devsnek
I see.
Working with SharedArrayBuffer is fine, but then I'd ask for a new API to let the user serialize/deserialize runtime context easily.
Maybe we need to escalate this to TC39?

devsnek (Member) commented Nov 4, 2018

@wilk the problem isn't that no one has bothered to specify how objects would be shared; it's that there is no sane way to share objects besides stopping each thread while the other thread is running, in which case there's no point in threading and you can put all your code in the same file.

wilk commented Nov 4, 2018

@devsnek
Well, speaking of Go: working on shared memory (like objects) can be achieved in different ways, such as mutexes and atomics.
In theory the same result could be achieved with SharedArrayBuffer between worker threads, but AFAIK there's no way to serialize/deserialize a class instance to/from a buffer.

The feature introduced by microjob is handy, especially if you want to declare some parts of your program to be executed in the background on another thread (for instance, a heavy math calculation).
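
(Coming back to the Go comparison: a mutex can at least be approximated on top of a SharedArrayBuffer with Atomics, roughly like the sketch below, but it only guards raw numbers/bytes inside the shared buffer, never class instances.)

// rough sketch: i32 is an Int32Array over a SharedArrayBuffer, 0 = unlocked, 1 = locked
function lock (i32, index) {
  while (Atomics.compareExchange(i32, index, 0, 1) !== 0) {
    Atomics.wait(i32, index, 1)   // sleep while someone else holds the lock
  }
}

function unlock (i32, index) {
  Atomics.store(i32, index, 0)
  Atomics.notify(i32, index, 1)   // wake one waiter
}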

addaleax (Member Author) commented Nov 4, 2018

So … on the general issue of sharing JS objects between threads: Yes, that’s nothing that Node.js can do on its own; it needs JS engine support, and there’s a good chance JS engines aren’t willing to do this without some sort of standardization (there’s also a good chance that that’s not the case, I guess).

People have put work into modifying V8 to allow some sort of shared heap, though:

But, again, there’s not much that we can do directly as Node.js on our own.

mogill commented Nov 4, 2018

Although sharing data objects between processes/threads is not possible with JavaScript's memory model, this does not prohibit implementing Extended Memory Semantics (EMS), which makes atomic operations on persistent, shared data objects possible in JS (and Python and other languages).

EMS is a native module that implements shared data objects (JSON only) for Node. It is agnostic to the source of parallelism, so it works with Cluster and OS processes, but, like most native add-ons, it is not compatible with the Workers implementation. EMS predates other parallel programming models for JS, so it has built-in support for bulk synchronous parallelism, fork-join parallelism, and loop-level parallelism.

(Disclaimer: I am the author of EMS)

pioardi commented Jan 25, 2020

Hi @addaleax,
Worker threads are a great feature that was missing from Node.js 💯
My feedback is that an example should be provided on how to use AsyncResource when implementing a pool.
Moreover, I find a lot of thread pool implementations; have you ever considered merging these implementations into Node?

addaleax (Member Author) commented Feb 1, 2020

@pioardi Hi, and thanks for the feedback! Sorry I haven’t gotten around to replying yet.

My feedback is that an example should be provided on how to use AsyncResource when implementing a pool.

I agree – that’s overdue. I’ve opened nodejs/node#31601 to hopefully address that.
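
The rough shape is something like the sketch below (heavily simplified to a single task in flight at a time; the names are made up, and it's not the exact example from that PR):

const { AsyncResource } = require('async_hooks');
const { Worker } = require('worker_threads');

// Each submitted task gets its own AsyncResource, so that async_hooks-based
// tooling attributes the callback to the code that submitted the task rather
// than to the worker's 'message' event.
class WorkerPoolTask extends AsyncResource {
  constructor(callback) {
    super('WorkerPoolTask');
    this.callback = callback;
  }
  done(err, result) {
    this.runInAsyncScope(this.callback, null, err, result);
    this.emitDestroy();
  }
}

const worker = new Worker('./task-worker.js');  // hypothetical worker script
let currentTask = null;
worker.on('message', (result) => currentTask.done(null, result));

function runTask(data, callback) {
  currentTask = new WorkerPoolTask(callback);
  worker.postMessage(data);
}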

Moreover, I find a lot of thread pool implementations; have you ever considered merging these implementations into Node?

I have, but I feel like there are a few tradeoffs to be made there, and I’m not sure that Node.js should be picking a default. For example, you could have a very generic Worker pool, where you would post code to be run, or highly specialized Workers that only handle a single type of task.

Also, I’m a fan of implementing features that don’t need to be in Node.js core in userland in general. But that’s just my personal opinion.

pioardi commented Feb 1, 2020

Hi @addaleax,
thanks for your reply; I will look into your pull request while it is being reviewed.

I have, but I feel like there are a few tradeoffs to be made there, and I’m not sure that Node.js should be picking a default. For example, you could have a very generic Worker pool, where you would post code to be run, or highly specialized Workers that only handle a single type of task.

I understand your opinion. It could be interesting to have different types of pools in Node.js, plus a decision tree that guides developers to the most suitable one. I am trying to do that, so take a look if you like, and if you consider including this in Node core in the future, let me know.

Thanks for your help @addaleax

ollisal commented Jun 15, 2020

I've been following various solutions for multi-processing in Node.js for many years now, since some of my use cases for Node.js have been quite processing-intensive, ranging from web scraping to some very complex rule evaluations in an interactive context. Thus, I'm very happy that we nowadays have a standardized worker threads implementation in Node.js core.

That said, I've been having a hard time utilizing worker threads for very many practical use-cases, due to the very restricted data sharing between threads, and copying data between threads being relatively slow.

I recently finished some benchmarks transferring different kinds of data between Node.js threads. The throughput is not admirable, but what I'm happy about is how the use of worker threads can help keep the main event loop responsive at all times.

See here https://www.jakso.me/blog/nodejs-14-worker-threads-benchmarks

ollisal commented Jun 15, 2020

Something that could truly enable a lot more use cases to benefit from worker threads would be if we could "give away" all kinds of (at least JSON-safe) data to another thread, similar to the current transferList support for ArrayBuffer and MessagePort objects. And of course, this would need to be lightweight, or at least significantly cheaper than JSON.stringify + postMessage(string) + JSON.parse. Some kind of zero-copy object ownership transfer. I don't know how well the V8 architecture lends itself to that; probably not that well.

addaleax (Member Author) commented:

And of course, this would need to be lightweight, or at least significantly cheaper than JSON.stringify + postMessage(string) + JSON.parse. Some kind of zero-copy object ownership transfer. I don't know how well the V8 architecture lends itself to that; probably not that well.

@ollisal What postMessage() is doing internally is quite similar to JSON.stringify() + JSON.parse(), but using V8’s internal implementation of the HTML structured clone algorithm. I have looked into it a few times, but have not found any obvious ways to make it significantly faster.

One should not attempt to offload very small tasks to workers, and even with tasks of reasonable size, artificial waiting for previous worker operations to have completed should be kept to a minimum.

I think this sentence from your blog post puts it quite well – communication using .postMessage() is always going to have some overhead, and if you need something faster, SharedArrayBuffer is your friend, but as you noted, that only makes sense when there is no need to create a copy of the data again.

I also agree with all of the takeaways from your post, btw.

stevenvachon commented Jun 16, 2020

@ollisal perhaps the solution is for JS engines to run JSON.* functions within a separate thread. However, they are sync/blocking functions, so it may not matter.

ollisal commented Jun 17, 2020

@stevenvachon Right, so some kind of asynchronous JSON.stringify & JSON.parse variants could be added, with the work done on a native worker thread? If that could be done, it would help quite a few common use cases, and it would also make worker threads viable for other, more complex tasks, because you could use them to communicate more complex data with your workers more efficiently.

For JSON.stringify that probably has the same issue/danger as what I suggested about transferring ownership of entire object trees to another thread - there can still be references to parts of that object tree in the main thread. And it could be changed through those references while it's being JSON.stringified or otherwise used in the other thread, which would lead to unpredictable results.

Of course, it could be documented that you shouldn't access any parts of the object tree reachable through a transferred root object after the post call or after starting to asynchronously stringify them. But that would be a very unusual mechanism in a JS API.

The transferList support for ArrayBuffer does not have this problem because the ArrayBuffer is just one object, and supports "neutering" which is used to mark it as unusable in the thread doing the postMessage. With more complex trees of all kinds of objects, all of them would need to be neutered somehow, which could be expensive as well, and there isn't such a commonly understood way of "neutering" just any regular JS object.
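
For comparison, this is roughly what that looks like for an ArrayBuffer today (sketch; the worker file name is made up):

const { Worker } = require('worker_threads');
const worker = new Worker('./worker.js');

const buf = new ArrayBuffer(1024);
console.log(buf.byteLength);     // 1024
worker.postMessage(buf, [buf]);  // the second argument is the transferList
console.log(buf.byteLength);     // 0 -- detached ("neutered") in this thread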

Another problem, I guess, is that technically there are separate copies of the Array, Object, etc. standard prototype classes in each worker thread's JS context, so things like instanceof Array wouldn't work even if we could otherwise transfer objects into the other JS context's heap as-is.

addaleax (Member Author) commented:

@ollisal @stevenvachon I don’t think asynchronous serialization is possible here, unfortunately:

const a = startAsyncJSONStringify(obj);
otherObj.property = 42;

The JS engine can’t know in advance whether otherObj occurs somewhere inside obj, possibly deep inside the tree, so it will have to wait for the stringify operation to finish before it can continue with JS execution.

Asynchronous parsing is another story, and quite an interesting idea. I don’t see any reasons why this should be impossible, although this would likely require a decent amount of JS engine modifications.

There are some solutions for this problem, though. Microsoft and Alibaba have experimented in the past with shared heaps, i.e. actually making JS objects accessible in multiple threads – that’s not trivial, and in particular nothing that Node.js can do on its own because it also requires extensive JS engine modifications.

The other approach is sharing complex objects through SharedArrayBuffer, e.g. https://github.com/Bnaya/objectbuffer/ follows this approach (although of course some types of data can’t be serialized that way).

stevenvachon commented Jun 17, 2020

@addaleax How would it be different if startAsyncJSONStringify() were a user-land function running in the main thread? Wouldn't it stringify the value either before the mutation or after? I can visualize how it might happen during the mutation, in which case, the JS engine could take a snapshot of references (kind of like a clone) as a means of ignoring mutations.

addaleax (Member Author) commented:

@stevenvachon If the serialization happens on the main thread, that’s not really an issue, true – but then I’m not sure if there’s any advantage over JSON.stringify() anymore?

If serialization and JS mutations happen in parallel, I think it’s too easy to end up with inconsistent results, unfortunately…

I can visualize how it might happen during the mutation, in which case, the JS engine could take a snapshot of references (kind of like a clone) as a means of ignoring mutations.

If we could take a snapshot of an object tree, I don’t think we’d need the serialization step at all :)

stevenvachon commented Jun 17, 2020

If we could take a snapshot of an object tree, I don’t think we’d need the serialization step at all :)

We still need serialization if we want to use Worker, correct? SharedArrayBuffer still uses Worker.

I guess it'd need to be implemented and benchmarked to see which is faster (sync serialization or async threaded serialization with snapshot).

addaleax (Member Author) commented:

@stevenvachon I mean, I guess that depends on what you’re referring to when you say “serialization”… postMessage() does require that there is some kind of object representation that can be used to re-create an object on the receiving side, but what that looks like is pretty open – for example, when a MessagePort instance is being posted, that representation is a pointer to its internal data structures, and not a serialization of its contents into a byte sequence.

davisjam commented:

Just wandering by to share this research paper with techniques for fast JSON parsing. Mison: A Fast JSON Parser for Data Analytics by Li et al.

addaleax (Member Author) commented:

@davisjam That paper certainly looks interesting :) I’m having a hard time thinking of a way to apply the idea to Node.js core, though – on the one hand, we can’t just start using JSON because JSON and HTML structured cloning serialize objects differently, and on the other hand, we don’t just want raw C++ representations of objects as in Mison, but need them in a format that V8 understands. It’s definitely a good idea for people who are okay with doing their own serialization/deserialization and just posting strings in that case.

davisjam commented Jul 2, 2020

@addaleax

people doing their own de/serialization

Yep, that was my intent. Probably not a good candidate for Node.js core :-).

addaleax (Member Author) commented:

Thanks everyone! Future feedback can go to https://github.com/nodejs/node/ directly. :)
