Experiment Tracking in Kedro #1070

Closed · merelcht opened this issue Nov 25, 2021 · 12 comments
Labels
Component: Experiment Tracking 🧪 (issue/PR that addresses functionality related to experiment tracking) · pinned (issue shouldn't be closed by stale bot) · Type: Technical DR 💾 (decision records: technical decisions made)

Comments

merelcht commented Nov 25, 2021

Why should we care about Experiment Tracking?

Experiment tracking is a way to record all information that you would need to recreate a data science experiment. We think of it as logging for parameters, metrics, models and other artefacts.

Kedro currently has parts of this functionality. For example, it’s possible to log parameters as part of your codebase and snapshot models and other artefacts like plots with Kedro’s versioning capabilities for datasets. However, Kedro is missing a way to log metrics and capture all this logged metadata as a timestamped run of an experiment. It is also missing a way for users to visualise, discover and compare this logged metadata.

This change is essential to us because we want to standardise how logging for ML is done. There should be one easy way to capture this information, and we’re going to give users the Kedro way to do this.

This functionality is also expected to increase Kedro Lab usage by Data Scientists: anecdotally, people performing the Data Engineering workflow get the most benefit from Kedro-Viz, while the Data Science workflow is not yet accounted for.

What evidence do we have to suggest that we do this?

Our users sense the gap, and one of the most common usage patterns is pairing Kedro with MLflow Tracking, which provides this additional functionality. We have seen evidence here:

We also know that our internal users relied on PerformanceAI for this functionality. PerformanceAI has since been sunset, but it was fantastic to use because:

  • It allowed multiple collaborators to share results
  • It integrated nicely with Kedro
  • The UI was great

Our vertical teams, namely C1 (@deepyaman), InsureX (@imdoroshenko @benhorsburgh) and OptimusAI (@mkretsch327) consider this high priority and will be confirmed users of this functionality.

What metrics will we track to prove the success of this feature?

  • kedro viz terminal runs
  • A metric that points to the use of this feature
  • Full adoption of the feature by all vertical teams

What design requirements do we have?

We must allow users to:

  • Keep track of their metrics
  • See the concept of an experiment on Kedro Lab

We must think about:

  • Minimising the number of changes a user would need to make to activate this project from a current Kedro project
  • How users would share their experiment results with other team members
  • How this functionality would work with other MLflow tools (e.g. model serving)
  • How users would disable runs so that they don’t clutter run history
  • How this functionality works with the KedroSession
merelcht commented Nov 25, 2021

(Comment copied over, originally written by @limdauto)

Technical Design

Introduction

This document describes the design of a set of features that enable experiment tracking as a native capability in Kedro. It will also break down the technical work required to implement it in an iterative manner.

Background

Experiment tracking in Kedro means:

  1. The ability to record artefacts related to an experiment. These artefacts include: models, parameters, metrics and other miscellaneous artefacts, e.g. generated plots, pdfs, etc.
  2. The ability to view the recorded experiment artefacts in an iterative manner.
  3. The ability to integrate the recorded data with external systems, e.g. models with external model registries and deployment systems.

Being Kedro native means experiment tracking concepts/abstractions map directly to Kedro concepts/abstractions and can be visualised transparently with Kedro-Viz. This principle informs the following technical choices:

  1. An experiment maps one-to-one to a Kedro pipeline. For the MVP, there will only be one global experiment that maps directly to the main Kedro pipeline. For later iterations, we can support multiple experiments by mapping experiments to modular pipelines within the main Kedro pipeline.
  2. An experiment run, or iteration, maps to a Kedro run.
  3. Each artefact maps directly to a Kedro dataset. Integration with external systems will be handled by the dataset implementation.
  4. The artefact's content is displayed in Kedro-Viz in the metadata side panel, similar to other datasets.
  5. Configuration for external system integration is defined in the conf/ directory.

Milestones

The MVP will be released iteratively with the following milestones (some of these milestones can be worked on in parallel):

Milestone 1: Visualisation of metrics & other JSON-compatible artefacts

During an experiment, users will want to track performance metrics of their ML models. This is usually captured as a dictionary of metric names and values. Furthermore, sometimes they will want to track arbitrary key-value pairs, as seen with the use of mlflow.log_param. We can cater to both of these use cases with a tracking.MetricsDataSet and a tracking.JSONDataSet. For example, consider the following node:

from kedro.pipeline import node

node(
    train_model,
    inputs="model_input",
    outputs=dict(
        model="model",
        features="features",
        metrics="metrics",
    ),
)
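
For illustration only, a train_model function feeding this node might return its tracked metrics as a plain dictionary alongside the model and feature list. The body below is a hypothetical sketch (the training logic and column names are assumptions), not part of the original design:

from typing import Any, Dict

import pandas as pd
from sklearn.linear_model import LinearRegression


def train_model(model_input: pd.DataFrame) -> Dict[str, Any]:
    # Hypothetical training step; only the shape of the outputs matters here.
    features = [col for col in model_input.columns if col != "price"]
    model = LinearRegression().fit(model_input[features], model_input["price"])
    metrics = {
        "r2_score": model.score(model_input[features], model_input["price"]),
        "n_samples": float(len(model_input)),
    }
    # Keys must match the keys of the node's `outputs` mapping above.
    return {"model": model, "features": features, "metrics": metrics}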

To track and visualise quantitative metrics, users can use the following dataset in their catalog:

metrics:
    type: tracking.MetricsDataSet
    path: data/06_models/metrics.json

  • MetricsDataSet is a dictionary with string keys and numerical values.
  • It is versioned by default.
  • When it is displayed in Kedro-Viz, data from the previous X versions will be loaded and plotted as a time series.

To track and visualise an unstructured features list, users can specify:

features:
    type: tracking.JSONDataSet
    path: data/05_model_input/features.json

These datasets are versioned by default. For the first milestone release, Kedro-Viz can simply pick up this dataset's tracking namespace and display it on the metadata side panel with the JSON tree view currently used for parameters:

[Screenshots: tracked data shown in the Kedro-Viz metadata side panel using the JSON tree view]

For metrics, we can construct a time series by loading data from previous runs and automatically render a plot:

[Screenshot: metrics from previous runs rendered as a time series plot]
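
As a rough sketch of how such a plot could be assembled, the snippet below reads the previous versions written by the versioned MetricsDataSet straight from disk and plots one line per metric. The on-disk layout follows Kedro's dataset versioning convention (path/<version>/filename); the use of Plotly here is an assumption, not a decision:

import json
from pathlib import Path

import plotly.graph_objects as go

# Versioned MetricsDataSet writes data/06_models/metrics.json/<version>/metrics.json
versions = sorted(Path("data/06_models/metrics.json").glob("*/metrics.json"))
runs = [json.loads(path.read_text()) for path in versions]   # one dict per run
timestamps = [path.parent.name for path in versions]         # version string = timestamp

fig = go.Figure()
for metric_name in runs[-1]:                                  # one trace per metric
    fig.add_trace(
        go.Scatter(
            x=timestamps,
            y=[run.get(metric_name) for run in runs],
            mode="lines+markers",
            name=metric_name,
        )
    )
fig.show()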

Milestone 2: Runs list visualisation & comparison

[Screenshot: runs list and comparison design]

See #1070 (comment)

Milestone 3: Mlflow compatibility

[Image: MLflow model diagram]

See #1070 (comment)

Milestone 4: Timeline view

@merelcht

(Comments copied over, originally written by @datajoely)

On Milestone 4 - I had a good chat with @mkretsch327 re the benefits of being able to compare different Plotly visualisations together. I think there is a lot of scope for (a) comparing one particular plot across different runs and (b) selecting multiple plots and seeing them side by side.

I would also say that Milestone 2 opens up two important points:

  • Do we enable runs from the UI? I'm of the view that any CLI command should have a GUI counterpart.
  • Style-wise a run list can be done in a very MVP way, however I really want something like what KeyLines does, and I know @idanov thinks this way too!
[Video: time-bar-play-540px.mp4]

In general there is a lot we could take inspiration from in KeyLines...

[Video: kronograph-flow-net.mp4]

@merelcht

(Comments copied over, originally written by @yetudada )

This is fantastic work! I have the following comments:

  • I really like the structure of the milestones because it allows us to get this in the hands of our users sooner while building on features down the line
  • Can we use PAI's comparison view as the basis for Milestone 4? It was one of the most viewed features, so we could copy its structure. I also like @datajoely and @studioswong's thoughts on this.
  • I'm happy for Milestone 5 to be the timeline view; we have a lot of evidence to create this functionality

I have one question about Milestone 1:

  • How does the design to plot metrics over time scale with the number of metrics? This example has three metrics; would there be three graphs?

@merelcht

(Comment copied over, originally written by @limdauto)

Milestone 2: Runs list visualisation - Technical Design

Introduction

In this milestone, on Kedro-Viz, we will display:

  • A list of previous runs of a Kedro project.
  • When users click on a particular run, they can see an aggregation of all tracked data (metrics and JSON) in the run.

Design (internal)

Prototype: https://projects.invisionapp.com/share/E2113DWF5A7R#/screens/452880665
Visual Design: https://app.zeplin.io/project/5d5ea7a05efca76a74f7d0ea/screen/6116543d6a57ce9a026b6bff

Background

Some background knowledge that will be useful:

  • Every Kedro pipeline run is managed in a session.
  • There are other kinds of sessions, e.g. CLI sessions, viz sessions, etc., apart from run sessions.
  • The default session store is non-persistent.
  • The default persistent session store in Kedro is the file-based ShelveStore.
  • Session data in ShelveStore is a simple dictionary with the following shape:
{
  "package_name": "spaceflights_0174",
  "session_id": "2021-08-10T12.12.42.311Z",
  "cli": {
    "args": [],
    "params": {
      "from_inputs": [],
      "to_outputs": [],
      "from_nodes": [],
      "to_nodes": [],
      "node_names": [],
      "runner": null,
      "parallel": false,
      "is_async": false,
      "env": null,
      "tag": [],
      "load_version": {},
      "pipeline": null,
      "config": null,
      "params": {}
    },
    "command_name": "run",
    "command_path": "kedro run"
  },
  "project_path": "/Users/lim_Hoang/Projects/spaceflights-0174"
}
  • Every time a session is created with ShelveStore, a new store is created at sessions/<session_id>/store.db

[Screenshot: sessions/<session_id>/store.db directory layout]

Challenges

  • Currently, there is no easy way to get the list of previous run_ids and related run data from the store location, as non-run sessions are stored in the same location.
  • For built-in CLI commands, we do set save_on_close explicitly to False, so these sessions won't be stored. But since the default of KedroSession.create is True, we can't guarantee there won't be any non-run sessions.
  • In other words, the list of session_ids in the store's location is a superset of run_ids, some of which won't contain experiment tracking information.

To solve this challenge, we could adapt the store location to be divided into sessions/runs and sessions/non-runs. But as we have seen with mlflow, this is essentially reinventing a relational database using the filesystem. It won't scale well to additional requirements such as querying joined data, e.g. finding all runs where the precision metric is greater than a certain threshold.

Proposal

For this milestone, I propose that:

  • We build a new SQLite-backed session store.
  • We could initially host this session store in Kedro-Viz at kedro_viz.integrations.kedro.session_store.SQLiteSession.
  • For the first few releases, while we finalise the features, this will be opt-in: if users want to use it and see the runs list, they will have to explicitly set the SESSION_STORE_CLASS in settings.py to SQLiteSession (see the sketch after this list).
  • Once we are happy with the schema and implementation of the store, we can move it into Kedro core and make it the default session store.
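
A minimal sketch of that opt-in, assuming the class path proposed in this design (the final name, location and constructor arguments may differ):

# settings.py
from pathlib import Path

from kedro_viz.integrations.kedro.session_store import SQLiteSession

SESSION_STORE_CLASS = SQLiteSession
# SESSION_STORE_ARGS is the standard Kedro setting for store keyword arguments;
# the "path" value here is an assumption about where the .db file should live.
SESSION_STORE_ARGS = {"path": str(Path(__file__).parents[2] / "data")}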

Technical

The implementation for this milestone can live entirely in Kedro-Viz as follows:

Session store schema

To start with, we can use the following initial relational schema for the session store:

[Screenshot: proposed relational schema for the session store]

A few notes:

  • Session type is null if it's not a run session. When we move this into core, we can set the type for CLI sessions as well.
  • The session data will be exactly the same as the data currently saved by ShelveStore. It will be serialised as a JSON string to avoid having to load the JSON extension in SQLite.
  • We only store the tracked dataset names in the session store. To get the tracked data itself, we can iterate through the list of tracked dataset names and load the data using catalog.load(tracked_dataset_name), as sketched below.
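
A rough sketch of that lookup, assuming catalog is a kedro.io.DataCatalog and tracked_dataset_names has already been read from the session store row for the run in question:

from typing import Any, Dict, List

from kedro.io import DataCatalog


def load_tracked_data(catalog: DataCatalog, tracked_dataset_names: List[str]) -> Dict[str, Any]:
    tracked: Dict[str, Any] = {}
    for dataset_name in tracked_dataset_names:
        # Each tracked dataset is versioned JSON; by default this loads the
        # latest version -- matching the version to the run's timestamp is a
        # detail left out of this sketch.
        tracked[dataset_name] = catalog.load(dataset_name)
    return tracked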

Data Access

In kedro_viz.data_access.repositories, add a new SessionsRepository to manage the loading of previously saved session data. Specifically, it needs at least two public methods (see the sketch after this list):

  • sessions_repository.get_runs(page_size: int, page_num: int) -> List[Run]: List previous runs from the session store. This can be paginated and ordered by timestamp.
  • sessions_repository.get_tracked_dataset_names_by_run_id(run_id: str) -> List[str]: Get the names of tracked datasets for a given run_id.
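
A hedged sketch of what that repository could look like; Run is a hypothetical dataclass, and the table and column names are assumptions (the actual schema is in the screenshot above):

import sqlite3
from dataclasses import dataclass
from typing import List


@dataclass
class Run:
    run_id: str
    timestamp: str
    session_data: str  # raw JSON string, exactly as stored by the session store


class SessionsRepository:
    def __init__(self, db: sqlite3.Connection):
        self._db = db

    def get_runs(self, page_size: int, page_num: int) -> List[Run]:
        """List previous runs, newest first, paginated."""
        rows = self._db.execute(
            "SELECT run_id, timestamp, session_data FROM sessions "
            "WHERE type = 'run' ORDER BY timestamp DESC LIMIT ? OFFSET ?",
            (page_size, page_size * page_num),
        ).fetchall()
        return [Run(*row) for row in rows]

    def get_tracked_dataset_names_by_run_id(self, run_id: str) -> List[str]:
        """Names of the tracked datasets recorded for a given run."""
        rows = self._db.execute(
            "SELECT dataset_name FROM tracked_data WHERE run_id = ?", (run_id,)
        ).fetchall()
        return [row[0] for row in rows]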

In the future, we will allow users to query by metrics. To that end, we need a metrics-friendly search index. At the very least, we need to set up an index in SQLite to do it: https://www.tutorialspoint.com/sqlite/sqlite_indexes.htm -- but there are other solutions, including an in-memory search index where we pay the cost up front when starting viz, or even a full-blown disk-based search index: https://whoosh.readthedocs.io/en/latest/index.html. There are pros and cons for each approach. I will write a separate design doc just for the metrics query, but it will be for a later iteration.
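
For the SQLite option, the index itself is straightforward; the sketch below assumes metrics have been denormalised into a hypothetical (run_id, name, value) table, which is not part of the schema above:

import sqlite3

conn = sqlite3.connect("sessions.db")  # assumed store location
conn.execute(
    "CREATE TABLE IF NOT EXISTS metrics (run_id TEXT, name TEXT, value REAL)"
)
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_metrics_name_value ON metrics (name, value)"
)
# The kind of query this index would serve:
rows = conn.execute(
    "SELECT run_id FROM metrics WHERE name = ? AND value > ?", ("accuracy", 0.8)
).fetchall()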

Front-end

The technical design for the frontend will be dependent on the product design, i.e. where we want to show the runs list. We might do the following:

  • On first page load, the backend will return a list of runs in the project as part of the /api/main response.
  • When a user clicks on a run, the client will call /api/runs/<run-id> to get data related to that particular run, including experiment metrics.

API

The backend design is trivial: simply integrate the API responses for /api/main and /api/runs with the data access layer mentioned earlier.
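
As a sketch, assuming the FastAPI-style routing used by the Kedro-Viz backend and the repository and catalog objects from the data access layer above (how they are injected is left out):

from fastapi import APIRouter

router = APIRouter()


@router.get("/api/runs/{run_id}")
def get_run(run_id: str):
    # sessions_repository and catalog are assumed to be provided by the
    # data access layer; the response shape is illustrative only.
    dataset_names = sessions_repository.get_tracked_dataset_names_by_run_id(run_id)
    return {
        "run_id": run_id,
        "tracked_data": {name: catalog.load(name) for name in dataset_names},
    }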

Open problems

  • [FE] New navigation architecture.
  • [FE] New communication protocol between FE and BE for real-time update of runs.
  • Where to implement run search (FE or BE)
  • Multi-user experience (?)

@merelcht

Milestone 3: MLflow compatibility

Introduction

In this milestone we will add compatibility with the MLflow Model Registry and potentially the MLflow UI. This means that Kedro users will be able to log MLflow models that can then be registered, viewed and served with MLflow.

Background

MLflow is a popular tool for experiment tracking, and it also contains a model registry. Experiment tracking in Kedro will focus on logging and visualising run data, but not on managing model lineage. Offering compatibility with MLflow models will allow Kedro users to use the MLflow model registry to manage model lifecycles.

Proposal

To enable compatibility with the MLflow Model Registry I propose the following implementation:

  • An MLflowModelDataSet to log MLflow-specific models, similar to the dataset used to log MLflow models in the MLOps2.0 project (internal)

    A catalog entry for such a dataset would look like this:

    regressor:
      type: tracking.mlflow.MLflowModelDataSet
      flavor: mlflow.sklearn
      model_name: regressor
      signature:
        inputs: '[
          {"name": "engines", "type": "double"},
          {"name": "passenger_capacity", "type": "long"},
          {"name": "crew", "type": "double"},
          {"name": "d_check_complete", "type": "boolean"},
          {"name": "moon_clearance_complete", "type": "boolean"}
          ]'
        outputs:
          '[{"name": "price", "type": "double"}]'
      input_example:
        engines: 2.0
        passenger_capacity: 4
        crew: 3.0
        d_check_complete: false
        moon_clearance_complete: false
  • A model is logged to MLflow by calling the log_model method, and it can be automatically registered to the model registry by providing a registered_model_name (see the sketch after this list)

  • The model registry requires a DB-backed storage location to store registry-related model data, i.e. the versions of the model that have been registered and what stage (Staging, Production, Archived) the model is in. By default, we can use the SQLite session store (Milestone 2) for this.

  • By default, all data logged during a Kedro run is stored to one MLflow run. If it's necessary to manage MLflow experiment and run flows differently, we can use a Hook to control the starting and ending of runs and experiments. → Need to verify with MLflow Registry users how they log and separate their data.

  • Use a configuration file to allow users to set MLflow storage and server locations, just like in the Kedro-PAI integration and MLOps2.0 → This does depend on the ongoing research around config in Kedro
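
For reference, the underlying MLflow call that such a dataset would wrap looks roughly like this; mlflow.sklearn.log_model and registered_model_name are standard MLflow APIs, but how the proposed MLflowModelDataSet wires them up is an assumption:

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=100, n_features=5, random_state=0)
regressor = LinearRegression().fit(X, y)

with mlflow.start_run():
    mlflow.sklearn.log_model(
        sk_model=regressor,
        artifact_path="regressor",
        # Auto-registers the model; requires a DB-backed tracking backend,
        # as noted in the bullet above.
        registered_model_name="regressor",
    )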

Outstanding questions

  • Will allowing users to log MLflow models be enough to make the MLflow model registry useful to them or will they need to log metrics and parameters to MLflow as well?
  • How would users use Viz if all experiment tracking data is logged in an MLflow-compatible format (meaning they can view/use it in the MLflow UI)?

@merelcht

(Comment copied over, originally written by @AntonyMilneQB)

This is great stuff, amazing work all! I'm so happy we ended up going with the metrics = dataset idea in the end 😀

So far I've had a good think about everything until Milestone 3 and just have a few suggestions and challenges to make. Not wanting to undermine anything, since I agree with pretty much everything said above, but just some extra things I think we should consider 🙂

General concepts

Experiment

An experiment maps one-to-one to a Kedro pipeline. For the MVP, there will only be one global experiment that maps directly to the main Kedro pipeline. For later iterations, we can support multiple experiments by mapping experiments to modular pipelines within the main Kedro pipeline. [@limdauto]

No experiment (or rather, everything is grouped under one experiment that is the Kedro project) [@MerelTheisenQB]

I think the concept that one kedro run = one set of tracked data = one experiment is correct. However, I wouldn't see this as an MVP but rather the full solution. If a user wants to track multiple models in their kedro run or generate lots of different metrics in different nodes then that's already possible. And the name, organisation and topology of the node/pipeline that generates the tracked datasets already provides the organisation into different "experiments" without the need for explicitly introducing experiment as a new concept.

As such, I would propose that we don't use the term "experiment" at all within kedro. It just seems to be introducing more terminology for something that we don't need, and there are already enough concepts within kedro for a new user to pick up. It does make sense to describe our feature as adding "experiment tracking" to kedro, because that's what mlflow etc. refer to it as. This would provide a bridge for existing mlflow users to understand that kedro now supports experiment tracking as a feature and see how it fits into already existing kedro concepts. But apart from that, I don't see the need for the concept or terminology of experiments in kedro at all, as part of the MVP or the full version.

Milestone 1

How to mark which datasets are tracked on kedro viz

We need to make it clear which datasets are the tracked ones in kedro viz pipeline view or make these datasets easy to find in the search. Chances are that it's just going to be one or two datasets in a big pipeline with lots of datasets and nodes, so the user should easily be able to pick out which ones they need to click on to see their metrics.

How to visualise metrics dataset

Say you've tracked two metrics (accuracy and MSE) over 3 different runs and got this:
[Image: accuracy and MSE values across three runs]

What @limdauto suggests is something like the following plot (N.B. x axis should have uniformly spaced points even if timestamps aren't uniform):
[Image: time series plot of both metrics on a shared y-axis]

As per Yetu/Ignacio's comments, this doesn't work well if you're tracking metrics which have drastically different ranges. In the above plot accuracy is essentially just a flat line at the bottom of the plot since the whole scale is overwhelmed by the values for MSE.

To fix this, you could have two separate y-axes with different scales, but as soon as you have 3 metrics this runs into problems. So it's best just to rescale the points like this:
[Image: time series plot with metrics rescaled to a common range]

@datajoely In all the above graphs, lines connect points corresponding to the same metric. The alternative way to plot that @mkretsch327 is suggesting is a parallel coordinates plot, in which each metric has its own y-axis and lines connect points corresponding to the same timestamp. Here this would look like:
[Image: parallel coordinates plot, one y-axis per metric]

This naturally extends nicely to plotting many metrics, since you can have as many parallel y-axes as you like. PAI plotted metrics like this but with the axes arranged radially rather than parallel (called a radial or spider plot).

Both the time series view and the parallel coordinate plots are useful in different scenarios, so if possible then I think it would be nice to have both options available in kedro viz (though just time series is fine as MVP).
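
To make the two options concrete, here is a small Plotly sketch built from a hypothetical dataframe with one row per run and one column per metric (the numbers are made up):

import pandas as pd
import plotly.express as px

runs = pd.DataFrame(
    {
        "timestamp": ["2021-09-01", "2021-09-02", "2021-09-03"],
        "accuracy": [0.78, 0.81, 0.84],
        "mse": [1250.0, 1100.0, 980.0],
    }
)

# Time series view: one line per metric (rescaling left out for brevity).
px.line(runs, x="timestamp", y=["accuracy", "mse"]).show()

# Parallel coordinates view: one y-axis per metric, one line per run.
px.parallel_coordinates(runs, dimensions=["accuracy", "mse"]).show()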

merelcht commented Nov 25, 2021

(Comment copied over, originally written by @AntonyMilneQB)

Milestone 2

This sounds awesome 🔥 and generally makes a lot of sense but I don't fully understand the design here, sorry @limdauto. I also have some concerns about scalability.

Session type

Session type is null if it's not a run session. When we move this into core, we can also set the type for CLI session as well.

So as I understand it the options are:

  1. KedroSession.create with no session.run inside it - this is what happens when I do kedro X from CLI, where X is anything other than run. We have session_type = cli
  2. KedroSession.create with 1 session.run inside it - this is what happens when I do kedro run from CLI. We have session_type = run
  3. KedroSession.create with more than 1 session.run inside it - this sounds exotic, but I've seen it done in Jupyter notebooks (see KED-2629)

My questions here would be: what is session_type = null? And what should happen in case 3? Should we just disallow it?

Scalability of querying by metric

We only store the tracked dataset names in the session store. To get the tracked data itself, we can iterate through the list of tracked dataset names and load the data using catalog.load(tracked_dataset_name)

The biggest problem with PAI was always the performance, which came from the limitation of mlflow's storage system that you mentioned. Do not underestimate how many kedro runs people do! Let's say you have a pipeline with 10 metrics, you're on a team of 10 people, each of whom runs the pipeline 10 times a day and logs to the same place. These are not unrealistic numbers (on Tiberius we did way more than this). Over the course of a month you'll have 3000 session_ids saved in the database, each of which contains 10 metrics (which could be one dataset or split across several).

Now let's say you want to find all the runs that have accuracy > 0.8. How would this perform? Presumably you need to catalog.load(dataset_name) every single dataset_name in the tracked_data table, even those that don't even contain accuracy. You'll be loading datasets that aren't even metrics. How long would that take for many thousands of json files? (Genuine question... I don't know)

I'm wondering whether it would be wise to speed up querying by including some other information in the tracked_data table, like the dataset type (which would allow you to load only tracking.MetricsDataSet datasets). Or we could just say that you can only ever query by numerical value and only need to include metrics datasets in the tracked_data table in the first place. Or should we just store the metric values directly in this table to avoid the catalog.load calls at all? In that case you're duplicating information with the dataset, though, which isn't great.

I really don't have an idea of how performant the proposed scheme is, so maybe this is going to be a complete non-issue. I would just caution that people are going to end up with a lot of metrics stored over the course of a project, and we should have something that scales well to that.

Scalability of many runs

Related to the above, I'd just warn that you're going to end up with potentially a very long list of runs, and that would need to scale well. Not part of the MVP I know, but I think we should consider how people are going to be able to browse and filter a huge list. In PAI you could filter by run time, author and run tags. This filtering was absolutely essential to be able to use the tool (in particular tags, which allow for very powerful and flexible filtering). We should consider adding some of these things to the session. Dmitrii suggested in the past that the kedro run --tag argument could be used to label and then query runs/sessions.

@merelcht

(Comment copied over, originally written by @limdauto)

@AntonyMilneQB thanks for the amazing comments as always!

Re: General Concept

100% agree that we don't need experiment as an abstraction. I wrote "we can" but I also don't think "we should" do it. I'd be interested to see if any user has any legitimate use case after trying our workflow. It's nice to have an escape hatch in the design.

Re: Milestone 1

How to mark which datasets are tracked on kedro viz

Yea actually this is a great point. Let me bring it up with @GabrielComymQB tomorrow. We can do something similar to the parameters.

Metrics Plot

  • Re x-axis: definitely uniformly spaced.
  • Re y-axis with different scales: I was thinking we could do this, but rescaling on a single axis works too!

Re: Milestone 2

Session Type

I think I'm specifically discussing the data type here when we represent the session in the viz database. For experiment tracking purposes, we only care about run vs non-run sessions, so I'm thinking of just setting other sessions to null for now, including CLI sessions. For CLI, I don't know how granular we want to be, e.g. do we want to split cli and jupyter even though we launch jupyter through the CLI?

Scalability of querying by metrics

This touches on a design iteration that I haven't mentioned. If we want to query by metrics, we need a metrics-friendly search index. At the very least, we need to set up an index in SQLite to do it: https://www.tutorialspoint.com/sqlite/sqlite_indexes.htm -- but there are other solutions, including an in-memory search index where we pay the cost up front when starting viz, or even a full-blown disk-based search index: https://whoosh.readthedocs.io/en/latest/index.html. There are pros and cons for each approach. I will write a separate design doc just for the metrics query, but it will be for a later iteration.

Scalability of many runs

Since this was still being (visually) designed when I wrote the tech design, I didn't put it in. But I absolutely agree with you that the ability to find runs in a long list is essential. In the first iteration, from a product point of view, our solution is:

  • Allowing users to favourite, rename and add a note to a run from the viz UI.
  • Finding runs for the first iteration will be done purely with text search.
  • For later iterations, we will add structured queries to the text search box, e.g. accuracy>=0.8

In terms of technical performance, I'm still considering the pros and cons of whether to perform the search client-side or server-side. But I know for a fact we can do text search client-side for up to thousands of rows easily. For millions of rows, you can employ an embedded in-memory search index to help, such as LokiJS: https://github.com/techfort/LokiJS. I'm still debating though.

merelcht added the pinned label on Nov 25, 2021
kedro-org deleted a comment from merelcht on Nov 25, 2021
Galileo-Galilei pushed a commit to Galileo-Galilei/kedro that referenced this issue on Feb 19, 2022
merelcht added the Type: Technical DR 💾 and Component: Experiment Tracking 🧪 labels on Mar 15, 2022
noklam commented Jun 22, 2022

Experiment tracking is a way to record all information that you would need to recreate a data science experiment. We think of it as logging for parameters, metrics, models and other artifacts.

Some thoughts after today's tech design session.

The general statement above gives the impression that Kedro is offering some "MLOps" capabilities.

I tried to group the experiment tracking features into 2 different categories:

  1. Metrics Tracking / Comparison - more about how to visualise things in the UI and help data scientists do their work.
  2. Reproducible experiments - artefacts/code/environment needed to fully reproduce an experiment.

I think the main focus of this GH issue is on point 1, and I see a lot of consideration given to MLflow, but I argue MLflow isn't the best reference for this space. There is much more offered by tools like wandb, neptune, or clearml. This article summarised them quite well as "Dashboard as Operating System".

So my question is, how big a role do we expect Kedro to play in this space, and how far do we want to go? Or, what are the things that we are not going to do for experiment tracking? (Just as Kedro is not going to do any orchestration work.)
@yetudada @NeroOkwa
CC: @AntonyMilneQB

@astrojuanlu

Is there a way to know at which milestone we stand at the moment? Or is progress mostly captured in linked issues?

merelcht commented Mar 8, 2023

@astrojuanlu I think the plan has evolved a bit from what's written here, after research we did last year. We've done 1 and 2, but 3 isn't really a focus at the moment. We're now working on kedro-org/kedro-viz#1218. AFAIK most tickets are now tracked on the Kedro-Viz project. @NeroOkwa can probably give more insights as well on the priorities now 🙂

Perhaps I should close this issue so it's clear this is not the active plan anymore.

@astrojuanlu

kedro-org/kedro-viz#1218 was closed, so as per @merelcht's comment above, I'm closing this issue.
