
Orca: Add 2 NCF PyTorch examples with data_loader or XShards as inputs. #5691

Merged
merged 77 commits on Nov 3, 2022

Conversation

zpeng1898
Contributor

@zpeng1898 commented Sep 8, 2022

Add NCF PyTorch examples train_data_loader.py and train_xshards.py to the NCF directory, with a shared NCF model defined in model.py.

1. The train_data_loader.py example takes a PyTorch data loader as the input to the estimator and supports fitting with either the Ray or the Spark backend:

# Create the estimator
est = Estimator.from_torch(model=model_creator, optimizer=optimizer_creator,
                           loss=loss_function, metrics=[Accuracy()],
                           backend=Config["backend"])  # backend="ray" or "spark"
# Fit the estimator
est.fit(data=train_loader_func, epochs=1)
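For context, Estimator.from_torch expects creator functions like the ones referenced above. The sketch below only illustrates that contract; the NCF class, config keys, and hyper-parameters are placeholders, not the exact code in this PR.

# Rough sketch of the creator functions assumed above (placeholders, not the PR code).
import torch
import torch.nn as nn

def model_creator(config):
    # Build the NCF model on each worker; the class name and config keys are assumptions.
    from model import NCF
    return NCF(user_num=config["user_num"], item_num=config["item_num"])

def optimizer_creator(model, config):
    # Adam with a learning rate taken from the config (the default value is an assumption).
    return torch.optim.Adam(model.parameters(), lr=config.get("lr", 0.001))

loss_function = nn.BCEWithLogitsLoss()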

2. The train_xshards.py example takes an XShards as the input to the estimator and supports fitting with either the Ray or the Spark backend:

# Create the estimator
est = Estimator.from_torch(model=model_creator, optimizer=optimizer_creator,
                           loss=loss_function, metrics=[Accuracy()],
                           backend=Config["backend"])  # backend="ray" or "spark"
# Fit the estimator
est.fit(data=train_shards, epochs=1, batch_size=Config["batch_size"],
        feature_cols=["x"], label_cols=["y"])
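As a rough illustration of how a train_shards with "x" and "y" columns might be prepared (the file path, separator, and preprocessing below are assumptions, not necessarily what train_xshards.py does):

# Hypothetical sketch: build an XShards of pandas DataFrames with "x" and "y" columns.
from bigdl.orca.data.pandas import read_csv

# Read the ratings file into an XShards of pandas DataFrames (the path is assumed).
train_shards = read_csv("./data/ml-1m.train.rating", sep="\t", header=None,
                        names=["user", "item", "label"])

def preprocess(df):
    # Pack the user/item ids into a single "x" column and rename the label to "y",
    # matching feature_cols=["x"] and label_cols=["y"] in the fit call above.
    df["x"] = df[["user", "item"]].values.tolist()
    df["y"] = df["label"].astype("float32")
    return df[["x", "y"]]

train_shards = train_shards.transform_shard(preprocess)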

@hkvision
Contributor

#5738 can remove model_dir after this PR merges.


# Step 0: Parameters and Configuration

Config={
Contributor

Can we use command-line options and arguments instead of the config dict?

"model_dir": "./model_dir/",
}
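A minimal sketch of the suggestion above, assuming argparse and keeping only the options visible in this snippet (the option names and defaults are assumptions):

# Hypothetical argparse-based replacement for the Config dict.
import argparse

parser = argparse.ArgumentParser(description="Orca NCF example")
parser.add_argument("--backend", type=str, default="spark",
                    help='The backend of the Orca Estimator, either "ray" or "spark".')
parser.add_argument("--model_dir", type=str, default="./model_dir/",
                    help="Directory used to save and load the model.")
args = parser.parse_args()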

Config["train_rating"]=Config["main_path"]+ Config["dataset"]+".train.rating"
Contributor

Please check the code style (spaces around operators).

invalidInputError(isinstance(right, SparkXShards), "right should be a SparkXShards")

from bigdl.orca.data.utils import spark_df_to_pd_sparkxshards
left_df, right_df=left.to_spark_df(), right.to_spark_df()
Contributor

Please check the code style.

@lalalapotter
Contributor

Can we merge the three train_*.py files?

@hkvision
Contributor

hkvision commented Sep 19, 2022

Can we merge the three train_*.py files?

To demonstrate the different inputs, it is clearer to use separate scripts.

Comment on lines 84 to 90
# transform dataset into dict
#train_data = train_data.to_numpy()
#test_data = test_data.to_numpy()
#train_data = {"x": train_data[:, : -1].astype(np.int64),
# "y": train_data[:, -1].astype(np.float)}
#test_data = {"x": test_data[:, : -1].astype(np.int64),
# "y": test_data[:, -1].astype(np.float)}
Contributor

Remove these commented-out lines?

Comment on lines 108 to 109
def forward(self, *args):
user, item = args[0], args[1]
Contributor

Put user and item in the args directly?
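A minimal sketch of that suggestion; the embedding layers and the dot-product output below are placeholders, not the PR's actual NCF model:

import torch.nn as nn

class NCF(nn.Module):
    def __init__(self, user_num, item_num, factor_num=16):
        super().__init__()
        self.embed_user = nn.Embedding(user_num, factor_num)
        self.embed_item = nn.Embedding(item_num, factor_num)

    def forward(self, user, item):
        # Name the inputs explicitly instead of unpacking them from *args.
        return (self.embed_user(user) * self.embed_item(item)).sum(dim=-1)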


import numpy as np
import pandas as pd
import scipy.sparse as sp
Contributor

Move the scipy import to a local import?
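A small sketch of the suggested local import; the function name and the way the sparse matrix is built are assumptions for illustration:

import numpy as np

def build_train_matrix(train_data, user_num, item_num):
    # Import scipy.sparse only where it is used, as suggested above.
    import scipy.sparse as sp
    train_mat = sp.dok_matrix((user_num, item_num), dtype=np.float32)
    for user, item in train_data:  # train_data is assumed to be (user, item) pairs
        train_mat[user, item] = 1.0
    return train_mat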

train_data, _ = train_test_split(data_X, test_size=0.1, random_state=100)

train_dataset = NCFData(train_data, item_num=item_num, train_mat=train_mat, num_ng=4, is_training=True)
train_loader = data.DataLoader(train_dataset, batch_size=256, shuffle=True, num_workers=0)
Contributor

num_workers=4 in the original code?

Contributor

Use batch_size=batch_size here, and put 256 in fit.
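Applying both comments above, the loader creation would look roughly like the sketch below; the creator function and variable names are reused from the snippet, so this is an illustration rather than the final code:

def train_loader_func(config, batch_size):
    # batch_size comes from the Estimator (passed via fit); num_workers=4 as in the original code.
    train_data, _ = train_test_split(data_X, test_size=0.1, random_state=100)
    train_dataset = NCFData(train_data, item_num=item_num,
                            train_mat=train_mat, num_ng=4, is_training=True)
    return data.DataLoader(train_dataset, batch_size=batch_size,
                           shuffle=True, num_workers=4)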

_, test_data = train_test_split(data_X, test_size=0.1, random_state=100)

test_dataset = NCFData(test_data)
test_loader = data.DataLoader(test_dataset, shuffle=False, num_workers=0)
Contributor

Missing batch_size.
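A possible fix, reusing the names from the snippet above (the source of the batch size value is an assumption):

# Pass an explicit batch size to the test DataLoader as well.
test_loader = data.DataLoader(test_dataset, batch_size=batch_size,
                              shuffle=False, num_workers=0)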

loss=loss_function, metrics=[Accuracy()], backend=backend)

# Fit the estimator
est.fit(data=train_loader_func, epochs=1)
Contributor

The original script trains for 20 epochs?

Contributor

batch_size=256
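Taken together, the two comments above suggest a fit call roughly like the following sketch:

# 20 epochs as in the original NCF script, with an explicit batch size of 256.
est.fit(data=train_loader_func, epochs=20, batch_size=256)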

# Step 5: Save and Load the Model

# Evaluate the model
result = est.evaluate(data=test_loader_func)
Contributor

Add one more print to say these are the evaluation results?
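A minimal sketch of the suggested print, assuming evaluate returns a dict of metric names and values:

# Label the evaluation output so it is not confused with the training logs.
result = est.evaluate(data=test_loader_func)
print("Evaluation results:")
for name, value in result.items():
    print(name, ":", value)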


import numpy as np
import pandas as pd
import scipy.sparse as sp
Contributor

Same as above.


# Step 2: Define Dataset

from bigdl.orca.data import XShards
Contributor

Is this import necessary?

return data_XY


def transform_to_dict(data):
Contributor

Rename this function?

Comment on lines 76 to 77
data_XY["y"] = labels_fill
data_XY["y"] = data_XY["y"].astype(np.float)
Contributor

Use label as the column name?

@hkvision hkvision merged commit a8119fc into intel-analytics:main Nov 3, 2022