
feat: create an azure-ml pipeline for eval_prompts() #36

Merged — 6 commits merged into main on Feb 7, 2024

Conversation

anujsinha3
Collaborator

@anujsinha3 anujsinha3 commented Feb 4, 2024

Change Description

Created an azure-ml pipeline for eval_prompts()
closes #35

  • My PR includes a link to the issue that I am addressing

Solution Description

Code Quality

  • I have read the Contribution Guide
  • My code follows the code style of this project
  • My code builds (or compiles) cleanly without any errors or warnings
  • My code contains relevant comments and necessary documentation

Project-Specific Pull Request Checklists

Bug Fix Checklist

  • My fix includes a new test that breaks as a result of the bug (if possible)
  • My change includes a breaking change
    • My change includes backwards compatibility and deprecation warnings (if possible)

New Feature Checklist

  • I have added or updated the docstrings associated with my feature using the NumPy docstring format
  • I have updated the tutorial to highlight my new feature (if appropriate)
  • I have added unit/End-to-End (E2E) test cases to cover my new feature
  • My change includes a breaking change
    • My change includes backwards compatibility and deprecation warnings (if possible)

Documentation Change Checklist

Build/CI Change Checklist

  • If required or optional dependencies have changed (including version numbers), I have updated the README to reflect this
  • If this is a new CI setup, I have added the associated badge to the README

Other Change Checklist

  • Any new or updated docstrings use the NumPy docstring format.
  • I have updated the tutorial to highlight my new feature (if appropriate)
  • I have added unit/End-to-End (E2E) test cases to cover any changes
  • My change includes a breaking change
    • My change includes backwards compatibility and deprecation warnings (if possible)

@anujsinha3 anujsinha3 self-assigned this Feb 4, 2024
@codecov-commenter

codecov-commenter commented Feb 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparing base (7891902, 97.85% coverage) to head (364aa18, 97.32% coverage).

❗ Current head 364aa18 differs from the pull request's most recent head d3105e8. Consider uploading reports for commit d3105e8 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #36      +/-   ##
==========================================
- Coverage   97.85%   97.32%   -0.54%     
==========================================
  Files           5        5              
  Lines         233      224       -9     
==========================================
- Hits          228      218      -10     
- Misses          5        6       +1     


@anujsinha3 anujsinha3 requested a review from carlosgjs February 5, 2024 14:09
Collaborator

@carlosgjs carlosgjs left a comment


Just one comment inline.

Also, can you add a link to a run of this workflow in the description of the PR?

command: >
python -m autora.doc.pipelines.main eval-prompts
${{inputs.data_dir}}/data.jsonl
${{inputs.data_dir}}/all_prompt.json
Collaborator

Let's make the prompts file its own input parameter. This will make it easier to run experiments.

Collaborator Author

Should I parameterize the data file as well? The default value can be 'data.jsonl', but a user could override it if they want. WDYT?
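A sketch of what the parameterization discussed above might look like in the pipeline YAML. This is an illustrative assumption, not the merged implementation: the input names (`data_file`, `prompts_file`), types, and defaults are hypothetical, and only the `eval-prompts` command itself comes from the snippet quoted in the review.

```yaml
# Hypothetical sketch of an Azure ML pipeline job with the data file and
# prompts file as separate, user-overridable inputs (names illustrative).
inputs:
  data_file:
    type: uri_file
    path: azureml:data.jsonl        # assumed default; overridable at submit time
  prompts_file:
    type: uri_file
    path: azureml:all_prompt.json   # assumed default; overridable at submit time

jobs:
  eval_prompts:
    command: >
      python -m autora.doc.pipelines.main eval-prompts
      ${{inputs.data_file}}
      ${{inputs.prompts_file}}
```

With both files exposed as inputs, a run can swap in a different prompts file (or dataset) from the CLI or SDK without editing the pipeline definition, which is what makes experiments easier to run.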

@anujsinha3 anujsinha3 merged commit 5615549 into main Feb 7, 2024
9 checks passed
@anujsinha3 anujsinha3 deleted the feature-azure-ml-pipeline-eval-prompts branch February 7, 2024 00:11
Successfully merging this pull request may close these issues.

Create an azure-ml pipeline for eval_prompts()
3 participants