JOSS Feedback - Paper text consistency and clarification #13

Merged 1 commit on Mar 3, 2023
8 changes: 3 additions & 5 deletions paper/paper.bib
@@ -133,13 +133,11 @@ @article{golub2018learning
doi={10.1038/s41593-018-0095-3}
}

-@misc{kubeflow,
-author = {A. M. Smith and K. Thaney and M. Hahnel},
+@softwareversion{kubeflow,
 title = {Kubeflow: Machine Learning Toolkit for Kubernetes},
 year = {2018},
-publisher = {GitHub},
-journal = {GitHub repository},
-url = {https://github.com/kubeflow/kubeflow}
+url = {https://github.com/kubeflow/kubeflow},
+version = {\href{https://archive.softwareheritage.org/swh:1:dir:086e4c66360c96571dccaa8d12645d4316a6b991;origin=https://github.com/kubeflow/kubeflow;visit=swh:1:snp:698e9549e4522b550ae2fea3a204c49e1843e21b;anchor=swh:1:rev:1e9535ec06cae3823c33f29fc7890df9d32fcaf9}{swh:1:dir:086e4c66360c96571dccaa8d12645d4316a6b991}}
 }
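For reference, applying this hunk yields the following merged entry (a reconstruction from the diff above; `@softwareversion` is the biblatex-software entry type, and the `version` field carries the Software Heritage identifier):

```bibtex
@softwareversion{kubeflow,
  title   = {Kubeflow: Machine Learning Toolkit for Kubernetes},
  year    = {2018},
  url     = {https://github.com/kubeflow/kubeflow},
  version = {\href{https://archive.softwareheritage.org/swh:1:dir:086e4c66360c96571dccaa8d12645d4316a6b991;origin=https://github.com/kubeflow/kubeflow;visit=swh:1:snp:698e9549e4522b550ae2fea3a204c49e1843e21b;anchor=swh:1:rev:1e9535ec06cae3823c33f29fc7890df9d32fcaf9}{swh:1:dir:086e4c66360c96571dccaa8d12645d4316a6b991}}
}
```

The change drops the incorrect author list and the `publisher`/`journal` fields, and replaces the generic `@misc` type with a software-specific entry that points at an archived snapshot rather than only the live repository.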

@article{vu2018shared,
4 changes: 2 additions & 2 deletions paper/paper.md
@@ -80,11 +80,11 @@ The two distributed workflows provided, Ray and KubeFlow based, each have their

# Evaluation

-A core innovation of AutoLFADS is the integration of PBT for hyperparameter exploration. As the underlying job scheduler and PBT implementation are unique in KubeFlow, we used the MC Maze dataset [@churchland2021mc_maze] from the Neural Latents Benchmark [@pei2021neural] to train and evaluate two AutoLFADS models. One model was trained with the Ray solution and the other with the KubeFlow solution using matching PBT hyperparameters and model configurations to ensure that models of comparable quality can be learned across both solutions. A comprehensive description of the AutoLFADS algorithm and results applying the algorithm to neural data using Ray can be found in @keshtkaran2021large. We demonstrate similar converged model performances on metrics relevant to the quality of inferred firing rates (see table below) [@pei2021neural]. In \autoref{fig:inferred_rates}, inferred firing rates from the KubeFlow trained AutoLFADS model are shown along with conventional firing rate estimation strategies. Qualitatively, these example inferences are similar to those described in @keshtkaran2021large, showing similar consistency across trials and resemblance to peristimulus time histogram PSTHs. In \autoref{fig:hp_progression}, we plot the hyperparameter and associated loss values for the KubeFlow based implementation of AutoLFADS to provide a visualization of the PBT based optimization process on these data. These results demonstrate that although PBT is stochastic, both the original Ray and novel KubeFlow implementations are converging to stable, comparable solutions.
+A core innovation of AutoLFADS is the integration of PBT for hyperparameter exploration. As the underlying job scheduler and PBT implementation are unique in KubeFlow, we used the MC Maze dataset [@churchland2021mc_maze] from the Neural Latents Benchmark [@pei2021neural] to train and evaluate two AutoLFADS models. One model was trained with the Ray solution and the other with the KubeFlow solution using matching PBT hyperparameters and model configurations to ensure that models of comparable quality can be learned across both solutions. A comprehensive description of the AutoLFADS algorithm and results applying the algorithm to neural data using Ray can be found in @keshtkaran2021large. We demonstrate similar converged model performances on metrics relevant to the quality of inferred firing rates (see table below) [@pei2021neural]. In \autoref{fig:inferred_rates}, inferred firing rates from the KubeFlow trained AutoLFADS model are shown along with conventional firing rate estimation strategies. Qualitatively, these example inferences are similar to those described in @keshtkaran2021large, showing similar consistency across trials and resemblance to peristimulus time histograms (PSTH). In \autoref{fig:hp_progression}, we plot the hyperparameter and associated loss values for the KubeFlow based implementation of AutoLFADS to provide a visualization of the PBT based optimization process on these data. These results demonstrate that although PBT is stochastic, both the original Ray and novel KubeFlow implementations are converging to stable, comparable solutions.

\pagebreak

-: AutoLFADS Performance. An evaluation of AutoLFADS performance on Ray and KubeFlow. Test trial performance comparison on four neurally relevant metrics for evaluating latent variable models: co-smoothing on held-out neurons (co-bps), hand trajectory decoding on held-out neurons (vel R2), match to peri-stimulus time histogram (PSTH) on held-out neurons (psth R2), forward prediction on held-in neurons (fp-bps). The trained models converge with less than 5% difference between the frameworks on the above metrics. The percent difference is calculated with respect to the Ray framework.
+: AutoLFADS Performance. An evaluation of AutoLFADS performance on Ray and KubeFlow. Test trial performance comparison on four neurally relevant metrics for evaluating latent variable models: co-smoothing on held-out neurons (co-bps), hand trajectory decoding on held-out neurons (vel R2), match to peristimulus time histogram (PSTH) on held-out neurons (psth R2), forward prediction on held-in neurons (fp-bps). The trained models converge with less than 5% difference between the frameworks on the above metrics. The percent difference is calculated with respect to the Ray framework.


| Framework | co-bps (↑) | vel R2 (↑) | psth R2 (↑) | fp-bps (↑) |
Binary file modified paper/paper.pdf
Binary file not shown.