Updated godot example to use logistic regression use case #63

Merged 6 commits on Sep 27, 2020
2 changes: 2 additions & 0 deletions examples/godot_logistic_regression/.gitignore
@@ -0,0 +1,2 @@
.import
godot_engine/godot
Empty file.
3 changes: 3 additions & 0 deletions examples/godot_logistic_regression/custom_module/.gitignore
@@ -0,0 +1,3 @@
vulkan-kompute
lib
godot
66 changes: 66 additions & 0 deletions examples/godot_logistic_regression/custom_module/README.md
@@ -0,0 +1,66 @@
# Vulkan Kompute Godot Example

## Set Up Dependencies

### Vulkan

You will need the Vulkan SDK; in this example we use version `1.2.148.1`, which you can download from the official site: https://vulkan.lunarg.com/sdk/home#windows

The SDK provides the following component, which will be required later on:

* The Vulkan SDK static library `vulkan-1`

### Kompute

We will be using v0.3.1 of Kompute. As with Vulkan, we need the built static library, but in this case we will build it ourselves.

We can start by cloning the repository on the v0.3.1 branch:

```
git clone --branch v0.3.1 https://github.com/EthicalML/vulkan-kompute/
```

You can then use CMake to generate the build files for your platform:

```
cmake vulkan-kompute/. -Bvulkan-kompute/build
```

Make sure the build is configured with the same flags Godot requires; on Windows, for example, you will need:

* Release build
* Configuration type: static library
* Runtime library: multi-threaded / multi-threaded debug
Now you should see the library built under `build/src/Release`.

## Building Godot

To build Godot you will need to set up a couple of things for the SCons file to work, namely:

* Copy the `vulkan-1` library from your Vulkan SDK folder to `lib/vulkan-1.lib`
* Copy the `kompute.lib` library from the Kompute build to `lib/kompute.lib`
* Make sure the versions above match, as the headers are provided in the `include` folder; if you used different versions, make sure the headers match as well
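The two copy steps above can be scripted. Here is a minimal Python sketch; the source paths are placeholders for wherever your Vulkan SDK and Kompute build actually live:

```python
import shutil
from pathlib import Path

def stage_libs(vulkan_lib: str, kompute_lib: str, dest_dir: str) -> list:
    """Copy the Vulkan and Kompute static libraries into the module's lib/ folder."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in (vulkan_lib, kompute_lib):
        target = dest / Path(src).name
        shutil.copy(src, target)  # overwrites any stale copy from a previous build
        copied.append(target)
    return copied

# Example (placeholder paths):
# stage_libs("C:/VulkanSDK/1.2.148.1/Lib/vulkan-1.lib",
#            "vulkan-kompute/build/src/Release/kompute.lib",
#            "custom_module/lib")
```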

### Clone godot repository

Now we can clone the godot repository - it must live in a separate directory, so you can use the parent directory if you are inside the Kompute repo.

```
cd ../../godot_engine

git clone --branch 3.2.3-stable https://github.com/godotengine/godot

cd godot/
```

And now we can build Godot against our module:

```
scons -j16 custom_modules=../../custom_module/ platform=windows target=release_debug
```

Once built, we can run the generated Godot binary from the `bin/` folder; the custom module will be accessible from anywhere in the project, and we can also create new nodes from the user interface.



@@ -0,0 +1,154 @@
/* KomputeModelMLNode.cpp */

#include <vector>

#include "KomputeModelMLNode.h"

KomputeModelMLNode::KomputeModelMLNode() {
    std::cout << "CALLING CONSTRUCTOR" << std::endl;
    this->_init();
}

void KomputeModelMLNode::train(Array yArr, Array xIArr, Array xJArr) {

    assert(yArr.size() == xIArr.size());
    assert(xIArr.size() == xJArr.size());

    std::vector<float> yData;
    std::vector<float> xIData;
    std::vector<float> xJData;
    std::vector<float> zerosData;

    for (size_t i = 0; i < yArr.size(); i++) {
        yData.push_back(yArr[i]);
        xIData.push_back(xIArr[i]);
        xJData.push_back(xJArr[i]);
        zerosData.push_back(0);
    }

    uint32_t ITERATIONS = 100;
    float learningRate = 0.1;

    std::shared_ptr<kp::Tensor> xI{ new kp::Tensor(xIData) };
    std::shared_ptr<kp::Tensor> xJ{ new kp::Tensor(xJData) };

    std::shared_ptr<kp::Tensor> y{ new kp::Tensor(yData) };

    std::shared_ptr<kp::Tensor> wIn{ new kp::Tensor({ 0.001, 0.001 }) };
    std::shared_ptr<kp::Tensor> wOutI{ new kp::Tensor(zerosData) };
    std::shared_ptr<kp::Tensor> wOutJ{ new kp::Tensor(zerosData) };

    std::shared_ptr<kp::Tensor> bIn{ new kp::Tensor({ 0 }) };
    std::shared_ptr<kp::Tensor> bOut{ new kp::Tensor(zerosData) };

    std::shared_ptr<kp::Tensor> lOut{ new kp::Tensor(zerosData) };

    std::vector<std::shared_ptr<kp::Tensor>> params = { xI, xJ, y,
                                                        wIn, wOutI, wOutJ,
                                                        bIn, bOut, lOut };

    {
        kp::Manager mgr;

        if (std::shared_ptr<kp::Sequence> sq =
              mgr.getOrCreateManagedSequence("createTensors").lock()) {

            sq->begin();

            sq->record<kp::OpTensorCreate>(params);

            sq->end();
            sq->eval();

            // Record op algo base
            sq->begin();

            sq->record<kp::OpTensorSyncDevice>({ wIn, bIn });

            sq->record<kp::OpAlgoBase<>>(
              params, std::vector<char>(LR_SHADER.begin(), LR_SHADER.end()));

            sq->record<kp::OpTensorSyncLocal>({ wOutI, wOutJ, bOut, lOut });

            sq->end();

            // Iterate across all expected iterations
            for (size_t i = 0; i < ITERATIONS; i++) {

                sq->eval();

                for (size_t j = 0; j < bOut->size(); j++) {
                    wIn->data()[0] -= learningRate * wOutI->data()[j];
                    wIn->data()[1] -= learningRate * wOutJ->data()[j];
                    bIn->data()[0] -= learningRate * bOut->data()[j];
                }
            }
        }
    }

    SPDLOG_INFO("RESULT: <<<<<<<<<<<<<<<<<<<");
    SPDLOG_INFO(wIn->data()[0]);
    SPDLOG_INFO(wIn->data()[1]);
    SPDLOG_INFO(bIn->data()[0]);

    this->mWeights = kp::Tensor(wIn->data());
    this->mBias = kp::Tensor(bIn->data());
}

Array KomputeModelMLNode::predict(Array xI, Array xJ) {
    assert(xI.size() == xJ.size());

    Array retArray;

    // We run the inference on the CPU for simplicity,
    // but it could also be implemented on the GPU,
    // which would speed up minibatching.
    for (size_t i = 0; i < xI.size(); i++) {
        float xIVal = xI[i];
        float xJVal = xJ[i];
        float result = (xIVal * this->mWeights.data()[0]
                        + xJVal * this->mWeights.data()[1]
                        + this->mBias.data()[0]);

        // Instead of applying the sigmoid we threshold the raw logit
        Variant var = result > 0 ? 1 : 0;
        retArray.push_back(var);
    }

    return retArray;
}
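A note on the thresholding in `predict` above: returning `result > 0 ? 1 : 0` is equivalent to thresholding the sigmoid at 0.5, because the sigmoid is monotone and `sigmoid(0) = 0.5`. A quick Python check of that equivalence:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# sigmoid(z) > 0.5 exactly when z > 0, so thresholding the raw
# logit gives the same class as thresholding the probability.
for z in [-3.0, -0.1, 0.2, 5.0]:
    assert (sigmoid(z) > 0.5) == (z > 0)
```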

Array KomputeModelMLNode::get_params() {
    Array retArray;

    SPDLOG_INFO(this->mWeights.size() + this->mBias.size());

    if (this->mWeights.size() + this->mBias.size() == 0) {
        return retArray;
    }

    retArray.push_back(this->mWeights.data()[0]);
    retArray.push_back(this->mWeights.data()[1]);
    retArray.push_back(this->mBias.data()[0]);
    retArray.push_back(99.0);

    return retArray;
}

void KomputeModelMLNode::_init() {
    std::cout << "CALLING INIT" << std::endl;
}

void KomputeModelMLNode::_process(float delta) {

}

void KomputeModelMLNode::_bind_methods() {
    ClassDB::bind_method(D_METHOD("_process", "delta"), &KomputeModelMLNode::_process);
    ClassDB::bind_method(D_METHOD("_init"), &KomputeModelMLNode::_init);

    ClassDB::bind_method(D_METHOD("train", "yArr", "xIArr", "xJArr"), &KomputeModelMLNode::train);
    ClassDB::bind_method(D_METHOD("predict", "xI", "xJ"), &KomputeModelMLNode::predict);
    ClassDB::bind_method(D_METHOD("get_params"), &KomputeModelMLNode::get_params);
}

@@ -0,0 +1,88 @@
#pragma once

#include <memory>

#include "kompute/Kompute.hpp"

#include "scene/main/node.h"

class KomputeModelMLNode : public Node {
    GDCLASS(KomputeModelMLNode, Node);

public:
    KomputeModelMLNode();

    void train(Array y, Array xI, Array xJ);

    Array predict(Array xI, Array xJ);

    Array get_params();

    void _process(float delta);
    void _init();

protected:
    static void _bind_methods();

private:
    kp::Tensor mWeights;
    kp::Tensor mBias;
};

static std::string LR_SHADER = R"(
#version 450

layout (constant_id = 0) const uint M = 0;

layout (local_size_x = 1) in;

layout(set = 0, binding = 0) buffer bxi { float xi[]; };
layout(set = 0, binding = 1) buffer bxj { float xj[]; };
layout(set = 0, binding = 2) buffer by { float y[]; };
layout(set = 0, binding = 3) buffer bwin { float win[]; };
layout(set = 0, binding = 4) buffer bwouti { float wouti[]; };
layout(set = 0, binding = 5) buffer bwoutj { float woutj[]; };
layout(set = 0, binding = 6) buffer bbin { float bin[]; };
layout(set = 0, binding = 7) buffer bbout { float bout[]; };
layout(set = 0, binding = 8) buffer blout { float lout[]; };

float m = float(M);

float sigmoid(float z) {
    return 1.0 / (1.0 + exp(-z));
}

float inference(vec2 x, vec2 w, float b) {
    // Compute the linear mapping function
    float z = dot(w, x) + b;
    // Calculate the y-hat with sigmoid
    float yHat = sigmoid(z);
    return yHat;
}

float calculateLoss(float yHat, float y) {
    return -(y * log(yHat) + (1.0 - y) * log(1.0 - yHat));
}

void main() {
    uint idx = gl_GlobalInvocationID.x;

    vec2 wCurr = vec2(win[0], win[1]);
    float bCurr = bin[0];

    vec2 xCurr = vec2(xi[idx], xj[idx]);
    float yCurr = y[idx];

    float yHat = inference(xCurr, wCurr, bCurr);

    float dZ = yHat - yCurr;
    vec2 dW = (1. / m) * xCurr * dZ;
    float dB = (1. / m) * dZ;
    wouti[idx] = dW.x;
    woutj[idx] = dW.y;
    bout[idx] = dB;

    lout[idx] = calculateLoss(yHat, yCurr);
}
)";
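The shader above computes per-sample gradients (`dZ = yHat - y`, `dW = x * dZ / m`, `dB = dZ / m`) from the current weights, and the host loop in `train` then subtracts each gradient scaled by the learning rate. The same math can be sketched in plain Python; the toy dataset in the usage note is an assumption for illustration, not part of the module:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xi, xj, y, iterations=100, lr=0.1):
    """Batch gradient descent mirroring the shader dispatch + host update."""
    m = len(y)
    w = [0.001, 0.001]  # same initial weights as wIn in the C++ code
    b = 0.0
    for _ in range(iterations):
        # One "dispatch": per-sample gradients, all from the current w, b
        grads = []
        for i in range(m):
            z = w[0] * xi[i] + w[1] * xj[i] + b
            dz = sigmoid(z) - y[i]  # dZ = yHat - y
            grads.append((xi[i] * dz / m, xj[i] * dz / m, dz / m))
        # Host update loop: subtract each per-sample gradient, scaled by lr
        for gw0, gw1, gb in grads:
            w[0] -= lr * gw0
            w[1] -= lr * gw1
            b -= lr * gb
    return w, b

def predict(w, b, xi, xj):
    # Threshold the raw logit, as the C++ predict() does
    return [1 if w[0] * a + w[1] * c + b > 0 else 0 for a, c in zip(xi, xj)]
```

For example, on the small separable dataset `xi = [0, 1, 1, 1, 1]`, `xj = [0, 0, 0, 1, 1]`, `y = [0, 0, 0, 1, 1]`, the learned model recovers the labels by weighting `xj` positively.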

@@ -0,0 +1,17 @@
import os

Import('env')

dir_path = os.getcwd()

# Kompute & Vulkan header files
env.Append(CPPPATH = ['include/'])

env.add_source_files(env.modules_sources, "*.cpp")

# Kompute & Vulkan libraries
env.Append(LIBS=[
    File(dir_path + '/lib/kompute.lib'),
    File(dir_path + '/lib/vulkan-1.lib'),
])

@@ -0,0 +1,5 @@
def can_build(env, platform):
    return True

def configure(env):
    pass