Nano HPO Roadmap #4890

Open · 46 of 73 tasks · 0 comments

shane-huang (Contributor) commented Jun 20, 2022

The design of HPO is described in issue #3712 (TensorFlow) and issue #3925 (PyTorch).

The table below summarizes the major functions and their corresponding PRs (illustrative usage sketches follow the table).

| Category | PRs |
| --- | --- |
| Common | global hpo settings [#4486, #4488], space [#4458], backend [#4461], callcache [#4461], decorators [#4464], search functions [#4467], utils [#4461, #4464, #4486] |
| TensorFlow | objective [#4467], mixin [#4467], keras.Model [#4499], keras.Sequential [#4499], tf.keras.layers [#4486], tf.keras.activations [#4486], tf.keras.Input [#4486], tf.* functions (e.g. tf.cast [#4486]), tf.keras.optimizers [#4608], learning rate tuning [#4608, #4610], batch size tuning [#4612], allow fit without search [#4566] |
| TF Use Cases | sequential [#4499], functional [#4499], subclassing model [#4499], transfer learning [TODO], quickstart notebook [on-going] |
| PyTorch | objective [#4559], hposearcher [#4559], pl.Trainer [#4560], learning rate tuning [#4631], batch size tuning [#4631] |
| PyTorch Use Cases | subclassing pl.LightningModule [#4559], transfer learning [TODO] |
| Parallel Search | refactor [#4680, #4718], pytorch [#4709], tf custom [#4725], tf functional [#4726], tf sequential [#4728], example [TODO] |
| Resume Search | fix resume flag behavior [#4733], quickstart notebook [#4632] |
| Visualization | quickstart notebook [#4632] |
| Miscellaneous | docs style [#4524], deps refactor [#4560, #4570], user-friendly error reporting, documentation [#4639, #4666, #4734, #4735], CI/CD (PR validation jobs [Done], move PR job to GitHub [TODO]), bug fixes [#4564, #4572, #4623, #4646, #4648] |
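To make the table concrete, here is a minimal sketch of the TensorFlow Sequential workflow it describes (global HPO switch, search spaces, search/search_summary, and fit without search). It assumes an API along the lines of the design in #3712; the exact module paths, names, and signatures are illustrative and may differ from what the merged PRs landed.

```python
# Hedged sketch of the TF Sequential HPO workflow; module paths and argument
# names are assumptions based on the design issues, not a verbatim API.
import numpy as np
import bigdl.nano.automl as automl
import bigdl.nano.automl.hpo.space as space

automl.hpo_config.enable_hpo_tf()  # global on/off switch from the roadmap (assumed name)

from bigdl.nano.tf.keras import Sequential
from bigdl.nano.tf.keras.layers import Dense

x_train, y_train = np.random.rand(256, 4), np.random.rand(256, 1)
x_val, y_val = np.random.rand(64, 4), np.random.rand(64, 1)

model = Sequential()
# Hyperparameters are declared as search spaces instead of fixed values.
model.add(Dense(units=space.Categorical(8, 16),
                activation=space.Categorical('relu', 'tanh')))
model.add(Dense(units=1))
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Run the Optuna-backed search, inspect the trials, then fit with the best
# hyperparameters found; fit also works without a prior search when all
# hyperparameters are fixed.
model.search(n_trials=4, target_metric='val_loss', direction='minimize',
             x=x_train, y=y_train, validation_data=(x_val, y_val))
study = model.search_summary()
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=2)
```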
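The Parallel Search and Resume Search rows suggest these surface as flags on the search call itself. Continuing the sketch above, a hedged guess at how they might look (the n_parallels and resume parameter names are inferred from the roadmap items, not verified API):

```python
# Continuing the TF sketch above; both parameter names are assumptions.
model.search(n_trials=8, n_parallels=2,   # run trials in parallel (assumed flag)
             target_metric='val_loss', direction='minimize',
             x=x_train, y=y_train, validation_data=(x_val, y_val))
model.search(n_trials=4, resume=True,     # continue the previous study (assumed flag)
             target_metric='val_loss', direction='minimize',
             x=x_train, y=y_train, validation_data=(x_val, y_val))
```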

A more detailed list of functions and future TODOs:

  • Basic building blocks
    • Search spaces
    • Decorators to create auto objects (class, func, model)
    • search functions (search, search_summary, etc.)
    • Searcher backend (Optuna utils)
    • [TF][PyTorch] Objective classes
    • [TF] Mixins for Sequential/Model and AutoModel (add search, etc. to class)
    • [TF] Sequential class
    • [TF] Model class
    • [PyTorch] HPOSearcher (architecture similar to Tuner)
    • [PyTorch] Trainer.search/search_summary (see the PyTorch sketch after this list)
  • Basic Workflow Enabling
    • [TF] Sequential API work flow
    • [TF] Functional API workflow (lazily build computation graphs, etc.)
    • [TF] Custom Model API workflow (subclassing tf.keras.Model)
    • [TF] auto layers, functions, etc. (nano.tf.layers.*)
    • [PyTorch] Custom Model (subclassing pl.LightningModule)
    • transfer learning / fine-tuning use case (Nano HPO: support transfer learning use case #4866)
    • a flag to enable and disable HPO
  • RecSys pipeline support
    • Functional API
    • layers: Embedding, Concatenate, Dense, Activation, etc.
    • keras.Input
    • tensor operations (e.g. slicing)
    • special functions: tf.cast
    • activation functions: sigmoid
    • enable tuning learning rate
    • support fit (with fixed hyper params) without search
    • result tuning
    • demo
  • Enhancements, Usability & Performance
  • Bug
  • Examples & Tests
    • [TF] Sequential example
    • [TF] Custom Model example (using decorator)
    • [TF] Functional API example
    • [TF] no-HPO at all
    • [PyTorch] lightning module example
  • Miscellaneous
    • fix docstrings
    • add automl tests to PR validation jobs
    • move optional lib imports to deps
    • add unit tests for various components.
    • clean up the decorator code
    • refactor code so that TensorFlow and PyTorch HPO share common code
    • refactor OptunaBackend to better map config space to Optuna trial suggestions
    • merge updates of nano.tf.keras.layers
    • merge nano.automl.tf.keras.Sequential/Model into nano.tf.keras.Sequential/Model (PENDING)
    • automatically forward method calls to internal model (use proxy_methods)
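And a corresponding sketch for the PyTorch side (the Trainer.search/search_summary and subclassing pl.LightningModule items above), assuming the API shape from the design in #3925. The decorator name, the use_hpo flag, and the search() arguments are assumptions rather than a verbatim copy of the merged API.

```python
# Hedged sketch of the PyTorch (Lightning) HPO workflow; decorator and
# argument names are assumptions based on the design issue, not verified API.
import torch
import pytorch_lightning as pl
import bigdl.nano.automl.hpo as hpo
import bigdl.nano.automl.hpo.space as space
from bigdl.nano.pytorch import Trainer
from torch.utils.data import DataLoader, TensorDataset

@hpo.plmodel()  # makes the LightningModule searchable (assumed decorator name)
class MyModel(pl.LightningModule):
    def __init__(self, hidden=16, lr=1e-3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(4, hidden), torch.nn.ReLU(), torch.nn.Linear(hidden, 1))
        self.lr = lr

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

train_loader = DataLoader(
    TensorDataset(torch.randn(256, 4), torch.randn(256, 1)), batch_size=32)

# Search spaces are passed where fixed values would normally go.
model = MyModel(hidden=space.Categorical(16, 32), lr=space.Real(1e-4, 1e-2))
trainer = Trainer(use_hpo=True, max_epochs=2)  # use_hpo flag is assumed

# search() is assumed to mirror fit()'s dataloader arguments.
best_model = trainer.search(model, n_trials=4, target_metric="train_loss",
                            direction="minimize", train_dataloaders=train_loader)
study = trainer.search_summary()
trainer.fit(best_model, train_dataloaders=train_loader)
```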
shane-huang changed the title from "Nano HPO Umbrella" to "Nano HPO Roadmap" on Jun 20, 2022