Correct FIL export format for sklearn/cuml to treelite checkpoint #124

Merged

Conversation

oliverholworthy
Member

Correct FIL export format for sklearn/cuml to treelite checkpoint.

This requires the treelite and treelite_runtime packages, and their versions must match the treelite version used by the Triton FIL backend in order to work correctly.
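
For context, a minimal, hedged sketch of the export this PR moves to, assuming treelite/treelite_runtime 2.x (where `treelite.sklearn.import_model` and `Model.serialize` are available) and that the Triton FIL backend reads the result as a `treelite_checkpoint` named `checkpoint.tl`; the directory layout and filename below are illustrative, not the exact output of the merlin-systems FIL op:

```python
# Illustrative sketch: convert a fitted sklearn forest to a Treelite checkpoint
# that the Triton FIL backend can load. Paths and filename are assumptions.
import pathlib

import sklearn.datasets
import sklearn.ensemble
import treelite.sklearn

X, y = sklearn.datasets.make_regression(n_samples=200, n_features=16, random_state=0)
model = sklearn.ensemble.RandomForestRegressor().fit(X, y)

# Import the fitted forest into Treelite and serialize it as a binary checkpoint.
tl_model = treelite.sklearn.import_model(model)
version_dir = pathlib.Path("model_repository/0_fil/1")
version_dir.mkdir(parents=True, exist_ok=True)
tl_model.serialize(str(version_dir / "checkpoint.tl"))
```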

@oliverholworthy added the bug (Something isn't working) label on Jun 17, 2022
@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #124 of commit eaeb8f4a5b0833021e0f4bd23f3cd97e91f62494, no merge conflicts.
Running as SYSTEM
Setting status of eaeb8f4a5b0833021e0f4bd23f3cd97e91f62494 to PENDING with url https://10.20.13.93:8080/job/merlin_systems/96/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_systems
using credential fce1c729-5d7c-48e8-90cb-b0c314b1076e
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/systems # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/systems
 > git --version # timeout=10
using GIT_ASKPASS to set credentials login for merlin-systems user + githubtoken
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/systems +refs/pull/124/*:refs/remotes/origin/pr/124/* # timeout=10
 > git rev-parse eaeb8f4a5b0833021e0f4bd23f3cd97e91f62494^{commit} # timeout=10
Checking out Revision eaeb8f4a5b0833021e0f4bd23f3cd97e91f62494 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f eaeb8f4a5b0833021e0f4bd23f3cd97e91f62494 # timeout=10
Commit message: "Add test for exported fil model filenames"
 > git rev-list --no-walk 644c74ae55dbd17cb11e55e44b438f8ab9bfc9b1 # timeout=10
[merlin_systems] $ /bin/bash /tmp/jenkins5571583820072609199.sh
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_systems/systems, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 17 items / 2 skipped

tests/unit/test_version.py . [ 5%]
tests/unit/systems/test_ensemble.py ... [ 23%]
tests/unit/systems/test_ensemble_ops.py .. [ 35%]
tests/unit/systems/test_export.py . [ 41%]
tests/unit/systems/test_graph.py . [ 47%]
tests/unit/systems/test_inference_ops.py .. [ 58%]
tests/unit/systems/test_op_runner.py .... [ 82%]
tests/unit/systems/test_tensorflow_inf_op.py ... [100%]

=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18
/var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18: DeprecationWarning: The nvtabular.framework_utils module is being replaced by the Merlin Models library. Support for importing from nvtabular.framework_utils is deprecated, and will be removed in a future version. Please consider using the models and layers from Merlin Models instead.
warnings.warn(

tests/unit/systems/test_ensemble.py: 4 warnings
tests/unit/systems/test_export.py: 1 warning
tests/unit/systems/test_inference_ops.py: 2 warnings
tests/unit/systems/test_op_runner.py: 4 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column x is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column y is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column id is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============ 17 passed, 2 skipped, 15 warnings in 71.05s (0:01:11) =============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/systems/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_systems] $ /bin/bash /tmp/jenkins14167444468338410888.sh

@github-actions

Documentation preview

https://nvidia-merlin.github.io/systems/review/pr-124

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #124 of commit 976729d00438fe722aea2ff985f998349e2b1a8a, no merge conflicts.
Running as SYSTEM
Setting status of 976729d00438fe722aea2ff985f998349e2b1a8a to PENDING with url https://10.20.13.93:8080/job/merlin_systems/97/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_systems
using credential fce1c729-5d7c-48e8-90cb-b0c314b1076e
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/systems # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/systems
 > git --version # timeout=10
using GIT_ASKPASS to set credentials login for merlin-systems user + githubtoken
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/systems +refs/pull/124/*:refs/remotes/origin/pr/124/* # timeout=10
 > git rev-parse 976729d00438fe722aea2ff985f998349e2b1a8a^{commit} # timeout=10
Checking out Revision 976729d00438fe722aea2ff985f998349e2b1a8a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 976729d00438fe722aea2ff985f998349e2b1a8a # timeout=10
Commit message: "Add treelite dependencies to fil workflow"
 > git rev-list --no-walk eaeb8f4a5b0833021e0f4bd23f3cd97e91f62494 # timeout=10
[merlin_systems] $ /bin/bash /tmp/jenkins2747109493792480676.sh
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_systems/systems, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 17 items / 2 skipped

tests/unit/test_version.py . [ 5%]
tests/unit/systems/test_ensemble.py FF. [ 23%]
tests/unit/systems/test_ensemble_ops.py FF [ 35%]
tests/unit/systems/test_export.py . [ 41%]
tests/unit/systems/test_graph.py . [ 47%]
tests/unit/systems/test_inference_ops.py .. [ 58%]
tests/unit/systems/test_op_runner.py .... [ 82%]
tests/unit/systems/test_tensorflow_inf_op.py ... [100%]

=================================== FAILURES ===================================
______________ test_workflow_tf_e2e_config_verification[parquet] _______________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-12/test_workflow_tf_e2e_config_ve0')
dataset = <merlin.io.dataset.Dataset object at 0x7fbb1273e4f0>
engine = 'parquet'

@pytest.mark.skipif(not TRITON_SERVER_PATH, reason="triton server not found")
@pytest.mark.parametrize("engine", ["parquet"])
def test_workflow_tf_e2e_config_verification(tmpdir, dataset, engine):
    # Create a Workflow
    schema = dataset.schema
    for name in ["x", "y", "id"]:
        dataset.schema.column_schemas[name] = dataset.schema.column_schemas[name].with_tags(
            [Tags.USER]
        )
    selector = ColumnSelector(["x", "y", "id"])

    workflow_ops = selector >> wf_ops.Rename(postfix="_nvt")
    workflow = Workflow(workflow_ops["x_nvt"])
    workflow.fit(dataset)

    # Create Tensorflow Model
    model = tf.keras.models.Sequential(
        [
            tf.keras.Input(name="x_nvt", dtype=tf.float64, shape=(1,)),
            tf.keras.layers.Dense(16, activation="relu"),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(1, name="output"),
        ]
    )
    model.compile(
        optimizer="adam",
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.metrics.SparseCategoricalAccuracy()],
    )

    # Creating Triton Ensemble
    triton_chain = (
        selector >> TransformWorkflow(workflow, cats=["x_nvt"]) >> PredictTensorflow(model)
    )
    triton_ens = Ensemble(triton_chain, schema)

    # Creating Triton Ensemble Config
    ensemble_config, node_configs = triton_ens.export(str(tmpdir))

    config_path = tmpdir / "ensemble_model" / "config.pbtxt"

    # Checking Triton Ensemble Config
    with open(config_path, "rb") as f:
        config = model_config.ModelConfig()
        raw_config = f.read()
        parsed = text_format.Parse(raw_config, config)

        # The config file contents are correct
        assert parsed.name == "ensemble_model"
        assert parsed.platform == "ensemble"
        assert hasattr(parsed, "ensemble_scheduling")

    df = make_df({"x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0], "id": [7, 8, 9]})

    output_columns = triton_ens.graph.output_schema.column_names
  response = _run_ensemble_on_tritonserver(str(tmpdir), output_columns, df, triton_ens.name)

tests/unit/systems/test_ensemble.py:112:


tests/unit/systems/utils/triton.py:40: in _run_ensemble_on_tritonserver
response = client.infer(model_name, inputs, outputs=outputs)
/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:1295: in infer
raise_error_grpc(rpc_error)


rpc_error = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Request for unknown model...all.cc","file_line":1069,"grpc_message":"Request for unknown model: 'ensemble_model' is not found","grpc_status":14}"

def raise_error_grpc(rpc_error):
  raise get_error_grpc(rpc_error) from None

E tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Request for unknown model: 'ensemble_model' is not found

/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:62: InferenceServerException
----------------------------- Captured stderr call -----------------------------
2022-06-17 17:42:16.207872: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-17 17:42:17.215414: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1627 MB memory: -> device: 0, name: Tesla P100-DGXS-16GB, pci bus id: 0000:07:00.0, compute capability: 6.0
2022-06-17 17:42:17.216199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 15157 MB memory: -> device: 1, name: Tesla P100-DGXS-16GB, pci bus id: 0000:08:00.0, compute capability: 6.0
------------------------------ Captured log call -------------------------------
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
__________________ test_workflow_tf_e2e_multi_op_run[parquet] __________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-12/test_workflow_tf_e2e_multi_op_0')
dataset = <merlin.io.dataset.Dataset object at 0x7fbae81d29a0>
engine = 'parquet'

@pytest.mark.skipif(not TRITON_SERVER_PATH, reason="triton server not found")
@pytest.mark.parametrize("engine", ["parquet"])
def test_workflow_tf_e2e_multi_op_run(tmpdir, dataset, engine):
    # Create a Workflow
    schema = dataset.schema
    for name in ["x", "y", "id"]:
        dataset.schema.column_schemas[name] = dataset.schema.column_schemas[name].with_tags(
            [Tags.USER]
        )

    workflow_ops = ["name-cat"] >> wf_ops.Categorify(cat_cache="host")
    workflow = Workflow(workflow_ops)
    workflow.fit(dataset)

    embedding_shapes_1 = wf_ops.get_embedding_sizes(workflow)

    cats = ["name-string"] >> wf_ops.Categorify(cat_cache="host")
    workflow_2 = Workflow(cats)
    workflow_2.fit(dataset)

    embedding_shapes = wf_ops.get_embedding_sizes(workflow_2)
    embedding_shapes_1.update(embedding_shapes)
    # Create Tensorflow Model
    model = create_tf_model(["name-cat", "name-string"], [], embedding_shapes_1)

    # Creating Triton Ensemble
    triton_chain_1 = ["name-cat"] >> TransformWorkflow(workflow)
    triton_chain_2 = ["name-string"] >> TransformWorkflow(workflow_2)
    triton_chain = (triton_chain_1 + triton_chain_2) >> PredictTensorflow(model)

    triton_ens = Ensemble(triton_chain, schema)

    # Creating Triton Ensemble Config
    ensemble_config, nodes_config = triton_ens.export(str(tmpdir))
    config_path = tmpdir / "ensemble_model" / "config.pbtxt"

    # Checking Triton Ensemble Config
    with open(config_path, "rb") as f:
        config = model_config.ModelConfig()
        raw_config = f.read()
        parsed = text_format.Parse(raw_config, config)

        # The config file contents are correct
        assert parsed.name == "ensemble_model"
        assert parsed.platform == "ensemble"
        assert hasattr(parsed, "ensemble_scheduling")

    df = dataset.to_ddf().compute()[["name-string", "name-cat"]].iloc[:3]
  response = _run_ensemble_on_tritonserver(str(tmpdir), ["output"], df, triton_ens.name)

tests/unit/systems/test_ensemble.py:165:


tests/unit/systems/utils/triton.py:40: in _run_ensemble_on_tritonserver
response = client.infer(model_name, inputs, outputs=outputs)
/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:1295: in infer
raise_error_grpc(rpc_error)


rpc_error = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Request for unknown model...all.cc","file_line":1069,"grpc_message":"Request for unknown model: 'ensemble_model' is not found","grpc_status":14}"

def raise_error_grpc(rpc_error):
  raise get_error_grpc(rpc_error) from None

E tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Request for unknown model: 'ensemble_model' is not found

/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:62: InferenceServerException
----------------------------- Captured stderr call -----------------------------
I0617 17:42:24.100353 26728 tensorflow.cc:2176] TRITONBACKEND_Initialize: tensorflow
I0617 17:42:24.100472 26728 tensorflow.cc:2186] Triton TRITONBACKEND API version: 1.8
I0617 17:42:24.100479 26728 tensorflow.cc:2192] 'tensorflow' TRITONBACKEND API version: 1.8
I0617 17:42:24.100485 26728 tensorflow.cc:2216] backend configuration:
{"cmdline":{"version":"2"}}
I0617 17:42:24.276836 26728 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f1a66000000' with size 268435456
I0617 17:42:24.277563 26728 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0617 17:42:24.281747 26728 model_repository_manager.cc:997] loading: 0_transformworkflow:1
I0617 17:42:24.381995 26728 model_repository_manager.cc:997] loading: 1_transformworkflow:1
I0617 17:42:24.389089 26728 python.cc:1903] TRITONBACKEND_ModelInstanceInitialize: 0_transformworkflow (GPU device 0)
I0617 17:42:24.482297 26728 model_repository_manager.cc:997] loading: 2_predicttensorflow:1
------------------------------ Captured log call -------------------------------
WARNING absl:signature_serialization.py:146 Function _wrapped_model contains input name(s) name-cat, name-string with unsupported characters which will be renamed to name_cat, name_string in the SavedModel.
WARNING absl:save.py:133 <nvtabular.framework_utils.tensorflow.layers.embedding.DenseFeatures object at 0x7fbae865fd00> has the same name 'DenseFeatures' as a built-in Keras object. Consider renaming <class 'nvtabular.framework_utils.tensorflow.layers.embedding.DenseFeatures'> to avoid naming conflicts when loading with tf.keras.models.load_model. If renaming is not possible, pass the object in the custom_objects parameter of the load function.
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING absl:signature_serialization.py:146 Function _wrapped_model contains input name(s) name-cat, name-string with unsupported characters which will be renamed to name_cat, name_string in the SavedModel.
WARNING absl:save.py:133 <nvtabular.framework_utils.tensorflow.layers.embedding.DenseFeatures object at 0x7fbae865fd00> has the same name 'DenseFeatures' as a built-in Keras object. Consider renaming <class 'nvtabular.framework_utils.tensorflow.layers.embedding.DenseFeatures'> to avoid naming conflicts when loading with tf.keras.models.load_model. If renaming is not possible, pass the object in the custom_objects parameter of the load function.
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
____________________________ test_softmax_sampling _____________________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-12/test_softmax_sampling0')

@pytest.mark.skipif(not TRITON_SERVER_PATH, reason="triton server not found")
def test_softmax_sampling(tmpdir):
    request_schema = Schema(
        [
            ColumnSchema("movie_ids", dtype=np.int32),
            ColumnSchema("output_1", dtype=np.float32),
        ]
    )

    combined_features = {
        "movie_ids": np.random.randint(0, 10000, 100).astype(np.int32),
        "output_1": np.random.random(100).astype(np.float32),
    }

    request = make_df(combined_features)

    ordering = ["movie_ids"] >> SoftmaxSampling(relevance_col="output_1", topk=10, temperature=20.0)

    ensemble = Ensemble(ordering, request_schema)
    ens_config, node_configs = ensemble.export(tmpdir)
  response = _run_ensemble_on_tritonserver(
        tmpdir, ensemble.graph.output_schema.column_names, request, "ensemble_model"
    )

tests/unit/systems/test_ensemble_ops.py:52:


tests/unit/systems/utils/triton.py:40: in _run_ensemble_on_tritonserver
response = client.infer(model_name, inputs, outputs=outputs)
/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:1295: in infer
raise_error_grpc(rpc_error)


rpc_error = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Request for unknown model...all.cc","file_line":1069,"grpc_message":"Request for unknown model: 'ensemble_model' is not found","grpc_status":14}"

def raise_error_grpc(rpc_error):
  raise get_error_grpc(rpc_error) from None

E tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Request for unknown model: 'ensemble_model' is not found

/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:62: InferenceServerException
--------------------------- Captured stderr teardown ---------------------------
0617 17:42:29.085025 26751 pb_stub.cc:821] Non-graceful termination detected.
____________________________ test_filter_candidates ____________________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-12/test_filter_candidates0')

@pytest.mark.skipif(not TRITON_SERVER_PATH, reason="triton server not found")
def test_filter_candidates(tmpdir):
    request_schema = Schema(
        [
            ColumnSchema("candidate_ids", dtype=np.int32),
            ColumnSchema("movie_ids", dtype=np.int32),
        ]
    )

    candidate_ids = np.random.randint(1, 100000, 100).astype(np.int32)
    movie_ids_1 = np.zeros(100, dtype=np.int32)
    movie_ids_1[:20] = np.unique(candidate_ids)[:20]

    combined_features = {
        "candidate_ids": candidate_ids,
        "movie_ids": movie_ids_1,
    }

    request = make_df(combined_features)

    filtering = ["candidate_ids"] >> FilterCandidates(filter_out=["movie_ids"])

    ensemble = Ensemble(filtering, request_schema)
    ens_config, node_configs = ensemble.export(tmpdir)
  response = _run_ensemble_on_tritonserver(
        tmpdir, ensemble.graph.output_schema.column_names, request, "ensemble_model"
    )

tests/unit/systems/test_ensemble_ops.py:84:


tests/unit/systems/utils/triton.py:40: in _run_ensemble_on_tritonserver
response = client.infer(model_name, inputs, outputs=outputs)
/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:1295: in infer
raise_error_grpc(rpc_error)


rpc_error = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Request for unknown model...all.cc","file_line":1069,"grpc_message":"Request for unknown model: 'ensemble_model' is not found","grpc_status":14}"

def raise_error_grpc(rpc_error):
  raise get_error_grpc(rpc_error) from None

E tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Request for unknown model: 'ensemble_model' is not found

/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py:62: InferenceServerException
=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18
/var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18: DeprecationWarning: The nvtabular.framework_utils module is being replaced by the Merlin Models library. Support for importing from nvtabular.framework_utils is deprecated, and will be removed in a future version. Please consider using the models and layers from Merlin Models instead.
warnings.warn(

tests/unit/systems/test_ensemble.py: 4 warnings
tests/unit/systems/test_export.py: 1 warning
tests/unit/systems/test_inference_ops.py: 2 warnings
tests/unit/systems/test_op_runner.py: 4 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column x is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column y is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column id is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/unit/systems/test_ensemble.py::test_workflow_tf_e2e_config_verification[parquet]
FAILED tests/unit/systems/test_ensemble.py::test_workflow_tf_e2e_multi_op_run[parquet]
FAILED tests/unit/systems/test_ensemble_ops.py::test_softmax_sampling - trito...
FAILED tests/unit/systems/test_ensemble_ops.py::test_filter_candidates - trit...
============ 4 failed, 13 passed, 2 skipped, 15 warnings in 34.21s =============
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/systems/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_systems] $ /bin/bash /tmp/jenkins3232587724725404595.sh


def save(self, version_path):
    """Save model to version_path."""
    model_path = pathlib.Path(version_path) / self.model_filename
    with open(model_path, "wb") as model_file:
        pickle.dump(self.model, model_file)
    self.model.convert_to_treelite_model().to_treelite_checkpoint(str(model_path))
@oliverholworthy
Member Author
We don't currently have cuml installed in the test environment, so this path is currently untested in CI.
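
For anyone with a GPU environment handy, a rough local check of this path might look like the following (untested here, as noted; the call chain mirrors the excerpt above, while the output path and filename are assumptions):

```python
# Rough local check of the cuML path (requires a GPU environment with cuml
# installed; not exercised in CI). Output path and filename are assumptions.
import pathlib

import numpy as np
import sklearn.datasets
from cuml.ensemble import RandomForestRegressor

X, y = sklearn.datasets.make_regression(n_samples=200, n_features=16, random_state=0)
model = RandomForestRegressor()
model.fit(X.astype(np.float32), y.astype(np.float32))

version_path = pathlib.Path("model_repository/0_fil/1")
version_path.mkdir(parents=True, exist_ok=True)
# Same call chain as the save() method in the excerpt above.
model.convert_to_treelite_model().to_treelite_checkpoint(str(version_path / "checkpoint.tl"))
```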

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #124 of commit fc4e464729df3bd367bb990310b5f2119af35a46, no merge conflicts.
Running as SYSTEM
Setting status of fc4e464729df3bd367bb990310b5f2119af35a46 to PENDING with url https://10.20.13.93:8080/job/merlin_systems/99/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_systems
using credential fce1c729-5d7c-48e8-90cb-b0c314b1076e
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/systems # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/systems
 > git --version # timeout=10
using GIT_ASKPASS to set credentials login for merlin-systems user + githubtoken
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/systems +refs/pull/124/*:refs/remotes/origin/pr/124/* # timeout=10
 > git rev-parse fc4e464729df3bd367bb990310b5f2119af35a46^{commit} # timeout=10
Checking out Revision fc4e464729df3bd367bb990310b5f2119af35a46 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fc4e464729df3bd367bb990310b5f2119af35a46 # timeout=10
Commit message: "Merge branch 'main' into fil-treelite-checkpoint"
 > git rev-list --no-walk 0d4edc87c4b65e32cacf09ff459314a943375fb0 # timeout=10
[merlin_systems] $ /bin/bash /tmp/jenkins14804937426934297153.sh
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_systems/systems, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 17 items / 2 skipped

tests/unit/test_version.py . [ 5%]
tests/unit/systems/test_ensemble.py ... [ 23%]
tests/unit/systems/test_ensemble_ops.py .. [ 35%]
tests/unit/systems/test_export.py . [ 41%]
tests/unit/systems/test_graph.py . [ 47%]
tests/unit/systems/test_inference_ops.py .. [ 58%]
tests/unit/systems/test_op_runner.py .... [ 82%]
tests/unit/systems/test_tensorflow_inf_op.py ... [100%]

=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18
/var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18: DeprecationWarning: The nvtabular.framework_utils module is being replaced by the Merlin Models library. Support for importing from nvtabular.framework_utils is deprecated, and will be removed in a future version. Please consider using the models and layers from Merlin Models instead.
warnings.warn(

tests/unit/systems/test_ensemble.py: 4 warnings
tests/unit/systems/test_export.py: 1 warning
tests/unit/systems/test_inference_ops.py: 2 warnings
tests/unit/systems/test_op_runner.py: 4 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column x is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column y is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column id is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============ 17 passed, 2 skipped, 15 warnings in 71.90s (0:01:11) =============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/systems/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_systems] $ /bin/bash /tmp/jenkins16967667769923700365.sh

@karlhigley
Contributor

@jperez999 There might be some additional dependencies to add to the CI environments from this PR too
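
Since the checkpoint format is tied to the treelite version bundled with the Triton FIL backend, a small pre-flight check like this can surface a mismatch before it shows up as a model-load failure (illustrative only; the pinned version string is a placeholder, not something this PR prescribes):

```python
# Illustrative pre-flight check that the locally installed treelite packages
# match the treelite version built into the Triton FIL backend image.
import treelite
import treelite_runtime

# Placeholder: substitute the treelite version shipped with your Triton image.
EXPECTED_TREELITE_VERSION = "2.4.0"

for pkg in (treelite, treelite_runtime):
    if pkg.__version__ != EXPECTED_TREELITE_VERSION:
        raise RuntimeError(
            f"{pkg.__name__} {pkg.__version__} != {EXPECTED_TREELITE_VERSION}; "
            "exported treelite checkpoints may fail to load in the FIL backend."
        )
```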

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #124 of commit d1785a8a3f0e2eb07eaf2d21ed52e287a12f76ef, no merge conflicts.
Running as SYSTEM
Setting status of d1785a8a3f0e2eb07eaf2d21ed52e287a12f76ef to PENDING with url https://10.20.13.93:8080/job/merlin_systems/101/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_systems
using credential fce1c729-5d7c-48e8-90cb-b0c314b1076e
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/systems # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/systems
 > git --version # timeout=10
using GIT_ASKPASS to set credentials login for merlin-systems user + githubtoken
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/systems +refs/pull/124/*:refs/remotes/origin/pr/124/* # timeout=10
 > git rev-parse d1785a8a3f0e2eb07eaf2d21ed52e287a12f76ef^{commit} # timeout=10
Checking out Revision d1785a8a3f0e2eb07eaf2d21ed52e287a12f76ef (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d1785a8a3f0e2eb07eaf2d21ed52e287a12f76ef # timeout=10
Commit message: "Add test for FIL ensemble with sklearn treelite checkpoint"
 > git rev-list --no-walk 8986c8a491173b051732e8d54adbbcd04cca1454 # timeout=10
[merlin_systems] $ /bin/bash /tmp/jenkins8448001381901058517.sh
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_systems/systems, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 18 items / 2 skipped

tests/unit/test_version.py . [ 5%]
tests/unit/systems/test_ensemble.py ...F [ 27%]
tests/unit/systems/test_ensemble_ops.py .F [ 38%]
tests/unit/systems/test_export.py . [ 44%]
tests/unit/systems/test_graph.py . [ 50%]
tests/unit/systems/test_inference_ops.py .. [ 61%]
tests/unit/systems/test_op_runner.py .... [ 83%]
tests/unit/systems/test_tensorflow_inf_op.py ... [100%]

=================================== FAILURES ===================================
__________________________ test_fil_treelite_ensemble __________________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-3/test_fil_treelite_ensemble0')

def test_fil_treelite_ensemble(tmpdir):
    rows = 200
    num_features = 16
    X, y = sklearn.datasets.make_regression(
        n_samples=rows,
        n_features=num_features,
        n_informative=num_features // 3,
        random_state=0,
    )
    feature_names = [str(i) for i in range(num_features)]
    df = pd.DataFrame(X, columns=feature_names, dtype=np.float32)

    # Fit RF
    model = sklearn.ensemble.RandomForestRegressor()
    model.fit(X, y)

    input_column_schemas = [ColumnSchema(col, dtype=np.float32) for col in feature_names]
    input_schema = Schema(input_column_schemas)
    selector = ColumnSelector(feature_names)

    triton_chain = selector >> PredictForest(model, input_schema)

    triton_ens = Ensemble(triton_chain, input_schema)

    request_df = df[:5]
  triton_ens.export(tmpdir)

tests/unit/systems/test_ensemble.py:211:


merlin/systems/dag/ensemble.py:105: in export
node_config = node.export(export_path, node_id=node_id, version=version)
merlin/systems/dag/node.py:44: in export
return self.op.export(
merlin/systems/dag/ops/fil.py:94: in export
fil_model_config = self.fil_op.export(
merlin/systems/dag/ops/fil.py:293: in export
self.fil_model.save(version_path)


self = <merlin.systems.dag.ops.fil.SKLearnRandomForest object at 0x7f58c4e53280>
version_path = PosixPath('/tmp/pytest-of-jenkins/pytest-3/test_fil_treelite_ensemble0/0_fil/1')

def save(self, version_path):
    """Save model to version_path."""
    model_path = pathlib.Path(version_path) / self.model_filename
    if treelite_sklearn is None:
      raise RuntimeError(
            "Both 'treelite' and 'treelite_runtime' "
            "are required to save an sklearn random forest model."
        )

E RuntimeError: Both 'treelite' and 'treelite_runtime' are required to save an sklearn random forest model.

merlin/systems/dag/ops/fil.py:493: RuntimeError
____________________________ test_filter_candidates ____________________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-3/test_filter_candidates0')

@pytest.mark.skipif(not TRITON_SERVER_PATH, reason="triton server not found")
def test_filter_candidates(tmpdir):
    request_schema = Schema(
        [
            ColumnSchema("candidate_ids", dtype=np.int32),
            ColumnSchema("movie_ids", dtype=np.int32),
        ]
    )

    candidate_ids = np.random.randint(1, 100000, 100).astype(np.int32)
    movie_ids_1 = np.zeros(100, dtype=np.int32)
    movie_ids_1[:20] = np.unique(candidate_ids)[:20]

    combined_features = {
        "candidate_ids": candidate_ids,
        "movie_ids": movie_ids_1,
    }

    request = make_df(combined_features)

    filtering = ["candidate_ids"] >> FilterCandidates(filter_out=["movie_ids"])

    ensemble = Ensemble(filtering, request_schema)
    ens_config, node_configs = ensemble.export(tmpdir)

    response = _run_ensemble_on_tritonserver(
        tmpdir, ensemble.graph.output_schema.column_names, request, "ensemble_model"
    )
    assert response is not None
  assert len(response.as_numpy("filtered_ids")) == 80

E AssertionError: assert 79 == 80
E + where 79 = len(array([[54633],\n [69540],\n [35671],\n [80656],\n [46034],\n [62855],\n [36342],\n ... [38988],\n [25573],\n [80110],\n [66964],\n [57573],\n [40016],\n [96063]], dtype=int32))
E + where array([[54633],\n [69540],\n [35671],\n [80656],\n [46034],\n [62855],\n [36342],\n ... [38988],\n [25573],\n [80110],\n [66964],\n [57573],\n [40016],\n [96063]], dtype=int32) = <bound method InferResult.as_numpy of <tritonclient.grpc.InferResult object at 0x7f58b84bf730>>('filtered_ids')
E + where <bound method InferResult.as_numpy of <tritonclient.grpc.InferResult object at 0x7f58b84bf730>> = <tritonclient.grpc.InferResult object at 0x7f58b84bf730>.as_numpy

tests/unit/systems/test_ensemble_ops.py:88: AssertionError
----------------------------- Captured stdout call -----------------------------
Signal (2) received.
----------------------------- Captured stderr call -----------------------------
I0623 12:42:23.655003 31571 tensorflow.cc:2176] TRITONBACKEND_Initialize: tensorflow
I0623 12:42:23.655127 31571 tensorflow.cc:2186] Triton TRITONBACKEND API version: 1.8
I0623 12:42:23.655135 31571 tensorflow.cc:2192] 'tensorflow' TRITONBACKEND API version: 1.8
I0623 12:42:23.655141 31571 tensorflow.cc:2216] backend configuration:
{"cmdline":{"version":"2"}}
I0623 12:42:23.847589 31571 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f5776000000' with size 268435456
I0623 12:42:23.848302 31571 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0623 12:42:23.850813 31571 model_repository_manager.cc:997] loading: 0_filtercandidates:1
I0623 12:42:23.956500 31571 python.cc:1903] TRITONBACKEND_ModelInstanceInitialize: 0_filtercandidates (GPU device 0)
I0623 12:42:26.310652 31571 model_repository_manager.cc:1152] successfully loaded '0_filtercandidates' version 1
I0623 12:42:26.310989 31571 model_repository_manager.cc:997] loading: ensemble_model:1
I0623 12:42:26.411497 31571 model_repository_manager.cc:1152] successfully loaded 'ensemble_model' version 1
I0623 12:42:26.411645 31571 server.cc:524]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0623 12:42:26.411740 31571 server.cc:551]
+------------+-----------------------------------------------------------------+-----------------------------+
| Backend | Path | Config |
+------------+-----------------------------------------------------------------+-----------------------------+
| tensorflow | /opt/tritonserver/backends/tensorflow2/libtriton_tensorflow2.so | {"cmdline":{"version":"2"}} |
| python | /opt/tritonserver/backends/python/libtriton_python.so | {} |
+------------+-----------------------------------------------------------------+-----------------------------+

I0623 12:42:26.411809 31571 server.cc:594]
+--------------------+---------+--------+
| Model | Version | Status |
+--------------------+---------+--------+
| 0_filtercandidates | 1 | READY |
| ensemble_model | 1 | READY |
+--------------------+---------+--------+

I0623 12:42:26.460148 31571 metrics.cc:651] Collecting metrics for GPU 0: Tesla P100-DGXS-16GB
I0623 12:42:26.461768 31571 tritonserver.cc:1962]
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.20.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
| model_repository_path[0] | /tmp/pytest-of-jenkins/pytest-3/test_filter_candidates0 |
| model_control_mode | MODE_NONE |
| strict_model_config | 1 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| response_cache_byte_size | 0 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0623 12:42:26.462838 31571 grpc_server.cc:4421] Started GRPCInferenceService at 0.0.0.0:8001
I0623 12:42:26.463061 31571 http_server.cc:3113] Started HTTPService at 0.0.0.0:8000
I0623 12:42:26.528200 31571 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002
W0623 12:42:27.488375 31571 metrics.cc:469] Unable to get energy consumption for GPU 0. Status:Success, value:0
W0623 12:42:28.488585 31571 metrics.cc:469] Unable to get energy consumption for GPU 0. Status:Success, value:0
I0623 12:42:29.411482 31571 server.cc:252] Waiting for in-flight requests to complete.
I0623 12:42:29.411517 31571 model_repository_manager.cc:1029] unloading: ensemble_model:1
I0623 12:42:29.411631 31571 model_repository_manager.cc:1029] unloading: 0_filtercandidates:1
I0623 12:42:29.411750 31571 server.cc:267] Timeout 30: Found 2 live models and 0 in-flight non-inference requests
I0623 12:42:29.411765 31571 model_repository_manager.cc:1135] successfully unloaded 'ensemble_model' version 1
W0623 12:42:29.512938 31571 metrics.cc:469] Unable to get energy consumption for GPU 0. Status:Success, value:0
I0623 12:42:30.411851 31571 server.cc:267] Timeout 29: Found 1 live models and 0 in-flight non-inference requests
I0623 12:42:30.713394 31571 model_repository_manager.cc:1135] successfully unloaded '0_filtercandidates' version 1
I0623 12:42:31.411984 31571 server.cc:267] Timeout 28: Found 0 live models and 0 in-flight non-inference requests
=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18
/var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18: DeprecationWarning: The nvtabular.framework_utils module is being replaced by the Merlin Models library. Support for importing from nvtabular.framework_utils is deprecated, and will be removed in a future version. Please consider using the models and layers from Merlin Models instead.
warnings.warn(

tests/unit/systems/test_ensemble.py: 4 warnings
tests/unit/systems/test_export.py: 1 warning
tests/unit/systems/test_inference_ops.py: 2 warnings
tests/unit/systems/test_op_runner.py: 4 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column x is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column y is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column id is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/unit/systems/test_ensemble.py::test_fil_treelite_ensemble - Runt...
FAILED tests/unit/systems/test_ensemble_ops.py::test_filter_candidates - Asse...
======= 2 failed, 16 passed, 2 skipped, 15 warnings in 71.27s (0:01:11) ========
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/systems/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_systems] $ /bin/bash /tmp/jenkins1691989754933073807.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #124 of commit 078e196a5d84e2e5e41d9cfd50c26fb783162109, no merge conflicts.
Running as SYSTEM
Setting status of 078e196a5d84e2e5e41d9cfd50c26fb783162109 to PENDING with url https://10.20.13.93:8080/job/merlin_systems/102/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_systems
using credential fce1c729-5d7c-48e8-90cb-b0c314b1076e
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/systems # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/systems
 > git --version # timeout=10
using GIT_ASKPASS to set credentials login for merlin-systems user + githubtoken
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/systems +refs/pull/124/*:refs/remotes/origin/pr/124/* # timeout=10
 > git rev-parse 078e196a5d84e2e5e41d9cfd50c26fb783162109^{commit} # timeout=10
Checking out Revision 078e196a5d84e2e5e41d9cfd50c26fb783162109 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 078e196a5d84e2e5e41d9cfd50c26fb783162109 # timeout=10
Commit message: "Move ensemble treelite test from test_ensemble to test_forest"
 > git rev-list --no-walk d1785a8a3f0e2eb07eaf2d21ed52e287a12f76ef # timeout=10
[merlin_systems] $ /bin/bash /tmp/jenkins18228505484791559165.sh
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_systems/systems, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 17 items / 2 skipped

tests/unit/test_version.py . [ 5%]
tests/unit/systems/test_ensemble.py ... [ 23%]
tests/unit/systems/test_ensemble_ops.py .. [ 35%]
tests/unit/systems/test_export.py . [ 41%]
tests/unit/systems/test_graph.py . [ 47%]
tests/unit/systems/test_inference_ops.py .. [ 58%]
tests/unit/systems/test_op_runner.py .... [ 82%]
tests/unit/systems/test_tensorflow_inf_op.py ... [100%]

=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18
/var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18: DeprecationWarning: The nvtabular.framework_utils module is being replaced by the Merlin Models library. Support for importing from nvtabular.framework_utils is deprecated, and will be removed in a future version. Please consider using the models and layers from Merlin Models instead.
warnings.warn(

tests/unit/systems/test_ensemble.py: 4 warnings
tests/unit/systems/test_export.py: 1 warning
tests/unit/systems/test_inference_ops.py: 2 warnings
tests/unit/systems/test_op_runner.py: 4 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column x is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column y is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column id is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============ 17 passed, 2 skipped, 15 warnings in 70.50s (0:01:10) =============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/systems/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_systems] $ /bin/bash /tmp/jenkins5952065253141822704.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #124 of commit fc4e464729df3bd367bb990310b5f2119af35a46, no merge conflicts.
Running as SYSTEM
Setting status of fc4e464729df3bd367bb990310b5f2119af35a46 to PENDING with url https://10.20.13.93:8080/job/merlin_systems/104/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_systems
using credential fce1c729-5d7c-48e8-90cb-b0c314b1076e
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/systems # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/systems
 > git --version # timeout=10
using GIT_ASKPASS to set credentials login for merlin-systems user + githubtoken
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/systems +refs/pull/124/*:refs/remotes/origin/pr/124/* # timeout=10
 > git rev-parse fc4e464729df3bd367bb990310b5f2119af35a46^{commit} # timeout=10
Checking out Revision fc4e464729df3bd367bb990310b5f2119af35a46 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fc4e464729df3bd367bb990310b5f2119af35a46 # timeout=10
Commit message: "Merge branch 'main' into fil-treelite-checkpoint"
 > git rev-list --no-walk 0d4edc87c4b65e32cacf09ff459314a943375fb0 # timeout=10
[merlin_systems] $ /bin/bash /tmp/jenkins11772426066268581552.sh
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_systems/systems, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 17 items / 2 skipped

tests/unit/test_version.py . [ 5%]
tests/unit/systems/test_ensemble.py ... [ 23%]
tests/unit/systems/test_ensemble_ops.py .. [ 35%]
tests/unit/systems/test_export.py . [ 41%]
tests/unit/systems/test_graph.py . [ 47%]
tests/unit/systems/test_inference_ops.py .. [ 58%]
tests/unit/systems/test_op_runner.py .... [ 82%]
tests/unit/systems/test_tensorflow_inf_op.py ... [100%]

=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18
/var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular/framework_utils/__init__.py:18: DeprecationWarning: The nvtabular.framework_utils module is being replaced by the Merlin Models library. Support for importing from nvtabular.framework_utils is deprecated, and will be removed in a future version. Please consider using the models and layers from Merlin Models instead.
warnings.warn(

tests/unit/systems/test_ensemble.py: 4 warnings
tests/unit/systems/test_export.py: 1 warning
tests/unit/systems/test_inference_ops.py: 2 warnings
tests/unit/systems/test_op_runner.py: 4 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column x is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column y is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

tests/unit/systems/test_export.py::test_export_run_ensemble_triton[tensorflow-parquet]
/var/jenkins_home/workspace/merlin_systems/systems/merlin/systems/triton/export.py:304: UserWarning: Column id is being generated by NVTabular workflow but is unused in test_name_tf model
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============ 17 passed, 2 skipped, 15 warnings in 69.95s (0:01:09) =============
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/systems/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_systems] $ /bin/bash /tmp/jenkins3814515974108644763.sh

@karlhigley merged commit efbf7f6 into NVIDIA-Merlin:main on Jun 23, 2022
Labels
bug Something isn't working

4 participants