init_orca_context defaults to cores=2, and users may not be aware of this setting and may simply call it with the default parameters. In that case, if a user compares local performance between the original TF/PyTorch script and the Orca version, they will see a large performance drop, since Orca only uses two cores while the original script typically uses all available cores.
Shall we change default cores to "*"?
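For reference, a minimal sketch of the difference from the user's side (assuming local cluster mode; cores="*" is the proposed value, not the current default):

```python
from bigdl.orca import init_orca_context, stop_orca_context

# Current behavior: calling init_orca_context() with no arguments
# only uses 2 local cores, which is easy to overlook.
sc = init_orca_context(cluster_mode="local", cores=2)
stop_orca_context()

# Proposed behavior: default to all available cores locally, matching
# what a plain TF/PyTorch script would use.
sc = init_orca_context(cluster_mode="local", cores="*")
stop_orca_context()
```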
The PyTorch Training Operator's train_batch and forward_batch can be merged to avoid duplicate code, e.g. by sharing a single forward helper as sketched below.
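A rough sketch of the kind of refactoring meant here (the class and attribute names below are illustrative, not the actual Orca operator code):

```python
import torch


class TorchTrainingOperator:
    """Illustrative sketch only: share the forward pass between training and validation."""

    def __init__(self, model, optimizer, criterion):
        self.model = model
        self.optimizer = optimizer
        self.criterion = criterion

    def _forward(self, batch):
        # Common forward logic used by both train_batch and forward_batch.
        features, target = batch
        output = self.model(features)
        loss = self.criterion(output, target)
        return output, loss

    def train_batch(self, batch, batch_info):
        output, loss = self._forward(batch)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return {"train_loss": loss.item()}

    def forward_batch(self, batch, batch_info):
        # Validation reuses the same helper instead of duplicating the logic.
        with torch.no_grad():
            output, loss = self._forward(batch)
        return {"val_loss": loss.item()}
```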
For the issues above in this section, fixes can only be applied to the ray and pyspark backends; the bigdl backend is even less likely to support these cases.
[Customer1 code specific issues]
Support multiple outputs and multiple loss functions (e.g. MMoE for multi-task learning); see the sketch below for the kind of setup users want to express.
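As a concrete illustration of the request, a minimal multi-task setup with two outputs and two losses in plain PyTorch (the model and head names here are hypothetical; the point is that the estimator would need to accept multiple losses or a user-defined combined loss):

```python
import torch
import torch.nn as nn


class ToyMultiTaskModel(nn.Module):
    """Toy multi-task model with a shared bottom and two task-specific heads (MMoE-style outputs)."""

    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_click = nn.Linear(hidden, 1)     # task 1: click prediction
        self.head_purchase = nn.Linear(hidden, 1)  # task 2: purchase prediction

    def forward(self, x):
        h = self.shared(x)
        return self.head_click(h), self.head_purchase(h)


model = ToyMultiTaskModel()
loss_click = nn.BCEWithLogitsLoss()
loss_purchase = nn.BCEWithLogitsLoss()

x = torch.randn(8, 16)
y_click = torch.randint(0, 2, (8, 1)).float()
y_purchase = torch.randint(0, 2, (8, 1)).float()

out_click, out_purchase = model(x)
# The combined objective users want to express: a weighted sum of per-task losses.
total_loss = loss_click(out_click, y_click) + 0.5 * loss_purchase(out_purchase, y_purchase)
total_loss.backward()
```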
[General]
Shall we change default cores to "*"?
Default KMP settings could slow down torch estimator #4370
Default OMP, KMP values would impact other applications #4372
[Customized data and train]
PyTorch Ray Estimator for customized data and train loop #3557
Orca do not support preprocess inputs before training in forward() #4410
Orca only supports basic metrics, some customized metrics do not support #4414
yolox does not require loss creator when running on orca #4412
[Customer1 code specific issues]
cc @jason-dai @shane-huang