Motivation
It should be possible to switch the machine learning backend while keeping the Unreal frontend API the same, allowing for a very flexible dev environment. By specifying an abstract `UMachineLearningBaseComponent` (https://github.com/getnamo/machine-learning-remote-ue4/blob/master/Source/MachineLearningBase/Public/MachineLearningBaseComponent.h) we can sub-class these components into remote, Unreal Python, and native variants. Depending on which backend is needed, these should be swappable without any dev code changes.

From the server side we can also specify a base `MLPluginAPI` which wouldn't be TensorFlow specific. This would open up PyTorch backends without any Unreal frontend code change.

Having a remote server component would also enable Linux/Mac builds without restricting Python/TensorFlow versions to ones compatible with unrealenginepython. It would also enable remote ML on e.g. phone devices (native may support TFLite at some point).
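To make the swappable-backend idea concrete, here is a rough C++ sketch of what such an abstract base could look like. The `SendInput` signature and the `OnResult` delegate are illustrative assumptions, not the actual contents of MachineLearningBaseComponent.h:

```cpp
// Illustrative sketch only - names and signatures are assumptions, not the
// real MachineLearningBaseComponent.h API. The idea: dev code talks to the
// abstract base, and the concrete backend (remote, unreal python, native)
// is chosen simply by swapping which subclass is attached to the actor.

#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "MachineLearningBaseComponent.generated.h"

// Fired when the backend returns a result (here as a JSON string).
DECLARE_DYNAMIC_MULTICAST_DELEGATE_OneParam(FMLResultSignature, const FString&, ResultJson);

UCLASS(Abstract, ClassGroup = (MachineLearning))
class UMachineLearningBaseComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	// Send input to whatever backend the subclass wraps; the caller never
	// needs to know whether this goes over a socket, into embedded python,
	// or into a native inference library.
	UFUNCTION(BlueprintCallable, Category = MachineLearning)
	virtual void SendInput(const FString& InputJson, const FString& FunctionName)
		PURE_VIRTUAL(UMachineLearningBaseComponent::SendInput, );

	// Result callback shared by all backend variants.
	UPROPERTY(BlueprintAssignable, Category = MachineLearning)
	FMLResultSignature OnResult;
};
```

Dev code (C++ or Blueprint) that only references the base class could then switch between the remote, Unreal Python, and native variants by changing which component is attached, with no call-site changes.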
Remote work
Native work
Intended to be inference-focused initially
Tensorflow-ue4 Refactor
Use `UMachineLearningBaseComponent` for this plugin's base class, enabling C++ use and compatibility with the other variants.
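As a hypothetical illustration of that refactor (class and method names assumed to match the sketch above rather than the plugin's real headers), the tensorflow-ue4 component could then reduce to an override of the shared entry point:

```cpp
// Hypothetical sketch - UTensorFlowComponent and its members are assumptions
// for illustration, not the plugin's actual refactored header.

#pragma once

#include "CoreMinimal.h"
#include "MachineLearningBaseComponent.h"
#include "TensorFlowComponent.generated.h"

UCLASS(ClassGroup = (MachineLearning), meta = (BlueprintSpawnableComponent))
class UTensorFlowComponent : public UMachineLearningBaseComponent
{
	GENERATED_BODY()

public:
	// Forward input to the embedded python/tensorflow script. Because the
	// signature matches the abstract base, callers written against
	// UMachineLearningBaseComponent keep working unchanged if this component
	// is swapped for the remote or native variant.
	virtual void SendInput(const FString& InputJson, const FString& FunctionName) override;
};
```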