
Update Seq2Seq-gait-analysis/README #6

Open
escorciav opened this issue Apr 22, 2019 · 0 comments
Labels: enhancement

Comments


escorciav commented Apr 22, 2019

Is it reasonable to break this into different sections?

  1. Setup and installation
  2. Data setup (optional)
  3. Using a pre-trained model
  4. Training on your own data
  5. Replicating SVM baseline
  • You may consider adding a "table of contents" to ease navigation; see the sketch after this list. Don't forget to use markdown links.

  • It's fine if the sections overlap in content, as long as that makes them easier for the user to follow.

  • If one step requires the output of a previous step, consider mentioning that after citing the command. It helps to troubleshoot errors.

    For example

    # Training
    
    `python conv_classifier_eval.py`
    
    This generates a `model.npz` file with the learned parameters.
    

    In that way, the user knows that if they don't see `model.npz`, the next command is most likely failing because of that.

  • Add complementary remarks after you show the command. For example,

    Train the Seq2Seq encoder-decoder model
    
    `python bidirectional_autoencoder.py`
    
    Using a GPU reduces the time to complete this step.
    

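A minimal sketch of that table of contents, using the section names from the outline above (the anchor slugs are my guess at how GitHub will slugify the corresponding headings):

```markdown
# Table of contents

1. [Setup and installation](#setup-and-installation)
2. [Data setup (optional)](#data-setup-optional)
3. [Using a pre-trained model](#using-a-pre-trained-model)
4. [Training on your own data](#training-on-your-own-data)
5. [Replicating SVM baseline](#replicating-svm-baseline)
```
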
Resources
This short course outlines the mindset that we should follow.

Soldelli added the enhancement label Apr 23, 2019