
fix: Fix server side skip test cases #36572

Merged — 2 commits merged into release on Oct 17, 2024
Conversation

sagar-qa007 (Contributor) commented Sep 26, 2024

Description

Tip

Add a TL;DR when the description is longer than 500 words or extremely technical (helps the content, marketing, and DevRel team).

Please also include relevant motivation and context. List any dependencies that are required for this change. Add links to Notion, Figma or any other documents that might be relevant to the PR.

Fixes #Issue Number
or
Fixes Issue URL

Warning

If no issue exists, please create an issue first, and check with the maintainers if the issue is valid.

Automation

/ok-to-test tags="@tag.IDE"

🔍 Cypress test results

Tip

🟢 🟢 🟢 All cypress tests have passed! 🎉 🎉 🎉
Workflow run: https://github.com/appsmithorg/appsmith/actions/runs/11370534880
Commit: dea75ff
Cypress dashboard.
Tags: @tag.IDE
Spec:


Wed, 16 Oct 2024 17:40:56 UTC

Communication

Should the DevRel and Marketing teams inform users about this change?

  • Yes
  • No

Summary by CodeRabbit

  • Bug Fixes

    • Improved handling and reporting of test failures during the build process.
    • Enhanced logging for better clarity on test results.
  • New Features

    • Introduced detailed output for failed and skipped tests, now available in the GitHub step summary and as comments on pull requests.
    • Added new input parameter is-pg-build for enhanced workflow control.
    • Updated default value for skip-tests to "false" to enforce test execution.
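Pulling the summary together, the two input changes might be declared roughly like this in .github/workflows/server-build.yml (a sketch inferred from the summary bullets; the descriptions and surrounding keys are assumptions, not the actual file contents):

```yaml
on:
  workflow_call:
    inputs:
      is-pg-build:
        description: "Run the build against a PostgreSQL-backed setup"  # assumed wording
        required: false
        type: string
        default: "false"
      skip-tests:
        description: "Skip server unit tests"  # assumed wording
        required: false
        type: string
        default: "false"   # per the summary: enforce test execution by default
```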

coderabbitai bot (Contributor) commented Sep 26, 2024

Walkthrough

The changes made in the pull request focus on enhancing the functionality and control flow within the .github/workflows/server-build.yml file. The workflow now includes an additional input parameter, is-pg-build, and sets the default value for skip-tests to "false". The job server-unit-tests has been refined to include conditional checks for restoring cache and improved logic for handling test results, including better categorization of failures and enhanced error handling. Overall, these modifications aim to clarify and strengthen the test reporting mechanism.

Changes

Files Change Summary
.github/workflows/server-build.yml Added input parameter is-pg-build, updated default for skip-tests to "false", refined job logic for caching and test result handling, improved error handling for test failures.

Possibly related PRs

  • chore: Updated error handling #36288: This PR updates error handling in CI workflows, which is related to the changes in the main PR that enhance the workflow functionality and control flow in .github/workflows/server-build.yml.
  • chore: Add logs for default DB url for CI test workflows #36506: This PR adds a new step in the CI workflow to check the database type, which aligns with the changes in the main PR that introduce a new job related to PostgreSQL builds in the same workflow file.
  • chore: fetch spec with removed third party  #36636: This PR updates third-party URLs in the Fetch_Spec.ts file, which is relevant as the main PR also involves modifications to the CI workflow that may impact how tests are executed, including those related to fetch operations.

Suggested labels

Task

Suggested reviewers

  • sharat87
  • ApekshaBhosale

In the land of code where tests do dwell,
A script was crafted, oh so swell!
It logs the failures, makes them clear,
With outputs and comments, it brings good cheer.
For every module that may fall,
A record is kept, to help us all! 🎉


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@github-actions github-actions bot added the Bug Something isn't working label Sep 26, 2024
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (5)
.github/workflows/server-build.yml (5)

179-182: Good job setting up the database URL conditionally!

Class, let's take a moment to appreciate this clever use of conditional logic. It's like choosing between chocolate and vanilla ice cream - both are delicious, but sometimes we need to pick one based on the situation.

However, I have a small suggestion to make this even better:

Consider adding a comment explaining why we're using different databases. It's like adding a little note in your textbook - it helps everyone understand the lesson better! Here's an example:

+# Set the database URL based on the build type
 if [[ "${{ inputs.is-pg-build }}" == "true" ]]; then
     export APPSMITH_DB_URL="postgresql://postgres:password@localhost:5432/postgres"
 else
     export APPSMITH_DB_URL="mongodb://localhost:27017/mobtools"
 fi
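Outside of the workflow context, the same branch logic can be exercised directly; this standalone sketch substitutes a plain variable for the `${{ inputs.is-pg-build }}` expression (the URLs are the ones from the snippet above):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the workflow expression ${{ inputs.is-pg-build }}
is_pg_build="true"

# Set the database URL based on the build type
if [[ "$is_pg_build" == "true" ]]; then
    export APPSMITH_DB_URL="postgresql://postgres:password@localhost:5432/postgres"
else
    export APPSMITH_DB_URL="mongodb://localhost:27017/mobtools"
fi

echo "$APPSMITH_DB_URL"
```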

190-197: Excellent work on improving test result capture!

Class, this is a great example of how to keep track of your work. It's like writing down your answers during a pop quiz - it helps you remember what you did and where you might have made a mistake.

I do have one small suggestion to make this even more robust:

Consider adding an 'else if' condition to check for "BUILD SUCCESS" explicitly. This way, we account for all possible outcomes, just like in a multiple-choice question. Here's how you could do it:

 mvn test "${args[@]}" | tee mvn_test.log
 
 # Check for "BUILD FAILURE" in the mvn_test.log
 if grep -q "BUILD FAILURE" mvn_test.log; then
     test_result="failed"
+elif grep -q "BUILD SUCCESS" mvn_test.log; then
+    test_result="passed"
 else
-    test_result="passed"
+    test_result="unknown"
 fi

This way, we're prepared for any unexpected outcomes, just like a good student is prepared for any question on the test!
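As a quick sanity check of the three-way classification suggested above, here is a self-contained sketch run against a fabricated log (the log lines are invented for illustration; in the workflow they would come from `mvn test | tee mvn_test.log`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fabricated Maven output standing in for mvn_test.log
log=$(mktemp)
printf '%s\n' \
  '[INFO] Scanning for projects...' \
  '[INFO] BUILD SUCCESS' > "$log"

# Same classification as the review suggestion: failed, passed, or unknown
if grep -q "BUILD FAILURE" "$log"; then
    test_result="failed"
elif grep -q "BUILD SUCCESS" "$log"; then
    test_result="passed"
else
    test_result="unknown"
fi

echo "$test_result"
rm -f "$log"
```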


206-224: Great job on capturing detailed test results!

Class, this is like taking detailed notes during a lecture. It helps us understand exactly where we need to focus our attention for improvement.

I have a small suggestion to make this even more robust:

Consider adding error handling for unexpected log formats. It's like having an "other" option in a multiple-choice question. Here's an example:

 while IFS= read -r line; do
     if [[ $line == *"FAILURE"* ]]; then
         # Extract the module name for FAILURE
         module_name=$(echo "$line" | awk '{print $2}')
         echo "Debug: Found FAILURE in line: $line"
         echo "Debug: Extracted module name: $module_name"
         failed_modules+=("$module_name")
     elif [[ $line == *"SKIPPED"* ]]; then
         # Extract the module name for SKIPPED
         module_name=$(echo "$line" | awk '{print $2}')
         echo "Debug: Found SKIPPED in line: $line"
         echo "Debug: Extracted module name: $module_name"
         skipped_modules+=("$module_name")
+    else
+        echo "Debug: Unrecognized line format: $line"
     fi
 done < mvn_test.log

This way, we're prepared for any unexpected log formats, just like a good student is prepared for any type of question on the test!


231-242: Excellent work on creating a comprehensive failed tests report!

Class, this is like creating a study guide for the topics you struggled with on a test. It helps you focus your revision efforts where they're most needed.

I have a small suggestion to make this even more robust:

Consider using the realpath command to ensure absolute paths are used. This is like using the full name of a book instead of just its title - it helps avoid any confusion. Here's an example:

 for module in "${failed_modules[@]}" "${skipped_modules[@]}"; do
-    module_directories=$(find . -path "*/${module}*/src/test/java/*" -type f -name "*Test.java" -exec dirname {} \; | sort -u)
+    module_directories=$(find . -path "*/${module}*/src/test/java/*" -type f -name "*Test.java" -exec dirname {} \; | xargs -I {} realpath {} | sort -u)
     for module_directory in $module_directories; do
         test_classes=$(find "$module_directory" -type f -name "*Test.java" | sed 's|.*/src/test/java/||; s|\.java$||; s|/|.|g')
         for class_name in $test_classes; do
             if [[ ${#class_name} -le 240 ]] && ! grep -Fxq "$class_name#" "$OUTPUT_FILE"; then
                 echo "${class_name}#" >> "$OUTPUT_FILE"
             fi
         done
     done
 done

This ensures that we're always working with full paths, just like always using the full title of a textbook to avoid any mix-ups!
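The sed pipeline in the loop above turns a test-file path into a dotted class name; this standalone sketch shows the transformation on a single fabricated path (real paths come from the `find` over module directories):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fabricated test-file path for illustration
path="./appsmith-server/src/test/java/com/appsmith/server/ExampleTest.java"

# Strip everything up to src/test/java/, drop the .java suffix, turn slashes into dots
class_name=$(echo "$path" | sed 's|.*/src/test/java/||; s|\.java$||; s|/|.|g')

echo "$class_name"
```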


246-257: Wonderful job on providing immediate feedback through GitHub comments!

Class, this is like getting instant feedback on your homework. It helps you understand where you need to improve right away.

I have a small suggestion to make this even more robust:

Consider adding error handling for the GitHub API call. It's like having a backup plan when your computer crashes while submitting an assignment online. Here's an example:

 if [[ -s $OUTPUT_FILE ]]; then
     content="$(
         echo "## Failed server tests"
         echo
         sed 's/^/- /' "$OUTPUT_FILE"
     )"
     echo "$content" >> "$GITHUB_STEP_SUMMARY"
-    curl --silent --show-error \
-        --header "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-        --data "$(jq -n --arg body "$content" '$ARGS.named')" \
-        "https://api.github.com/repos/$GITHUB_REPOSITORY/issues/${{ inputs.pr }}/comments" \
-        > /dev/null
+    if ! curl --silent --show-error \
+        --header "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
+        --data "$(jq -n --arg body "$content" '$ARGS.named')" \
+        "https://api.github.com/repos/$GITHUB_REPOSITORY/issues/${{ inputs.pr }}/comments"; then
+        echo "Failed to post comment to GitHub PR. Please check the failed tests manually."
+    fi
 fi

This way, we're prepared for any issues with the GitHub API, just like having a backup plan for submitting your homework!

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 347dba5 and 044114c.

📒 Files selected for processing (1)
  • .github/workflows/server-build.yml (1 hunks)
🔇 Additional comments (1)
.github/workflows/server-build.yml (1)

Line range hint 1-366: Overall, excellent work on improving the server build workflow!

Class, I'm impressed with the improvements made to this workflow. It's like upgrading from a basic calculator to a scientific one - you now have more tools at your disposal to solve complex problems.

Here's a summary of what we've learned today:

  1. We've improved our database setup to handle different types of tests.
  2. We've enhanced our test result capture and analysis.
  3. We've created a detailed report of failed and skipped tests.
  4. We're now providing immediate feedback through GitHub comments.

These changes will make our development process more efficient and our code more reliable. Remember, in the world of coding, as in the classroom, attention to detail and continuous improvement are key to success.

Keep up the good work, and don't forget to implement the suggested improvements. They're like extra credit assignments - they may seem small, but they can make a big difference in the long run!

@sagar-qa007 sagar-qa007 added the ok-to-test Required label for CI label Sep 26, 2024
github-actions bot commented Oct 3, 2024

This PR has not seen activity for a while. It will be closed in 7 days unless further activity is detected.

sagar-qa007 (Contributor, Author) commented

@abhvsn can we please review this?

@github-actions github-actions bot removed the Stale label Oct 4, 2024
abhvsn (Contributor) left a comment


Can we keep the failed and skipped tests separately may be within the same file so that if needed to skip the test deliberately it should not get picked up on next re-run?

.github/workflows/server-build.yml — 2 review comments on outdated code (resolved)
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (4)
.github/workflows/server-build.yml (4)

179-182: Excellent work on improving our workflow, students!

Your changes to the server-unit-tests job show great attention to detail. The conditional checks for restoring cache and the improved logic for handling test results will make our pipeline more efficient.

I particularly like the addition of the PostgreSQL step. It's a great example of how we can make our workflow more versatile. However, let's make a small improvement:

Consider adding a comment explaining why we're using these specific PostgreSQL settings. It will help future maintainers understand the reasoning behind our choices. For example:

# Start PostgreSQL with increased max_connections for high concurrency testing
docker run --name appsmith-pg -p 5432:5432 -d -e POSTGRES_PASSWORD=password postgres:alpine postgres -N 1500

190-264: Impressive improvements to our test execution and result handling, class!

Your enhancements to the test execution and result handling are commendable. The robust error checking and result categorization will greatly improve our ability to identify and address issues quickly.

I'm particularly impressed with the new logic for handling failed and skipped modules. It's a thorough approach that will help us pinpoint problems more accurately.

However, let's consider a small optimization:

The current implementation processes the entire mvn_test.log file to extract failed and skipped modules. For very large test suites, this could potentially become time-consuming. Consider using a streaming approach or a more efficient log parsing method. For example:

# Extract failed and skipped modules more efficiently
failed_modules=$(grep "FAILURE" mvn_test.log | awk '{print $2}' | sort -u)
skipped_modules=$(grep "SKIPPED" mvn_test.log | awk '{print $2}' | sort -u)

This approach would process the log file only once and might be more efficient for large test suites.
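To see what that single-pass extraction yields, here is a runnable sketch against fabricated reactor-summary lines (module names and timings are invented for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fabricated Maven reactor summary standing in for mvn_test.log
printf '%s\n' \
  '[INFO] appsmith-git ....................... FAILURE [ 12.345 s]' \
  '[INFO] appsmith-plugins ................... SKIPPED' \
  '[INFO] appsmith-server .................... SUCCESS [901.2 s]' \
  > sample_mvn_test.log

# Extract failed and skipped modules in one pass per category;
# awk field $2 is the module name following the [INFO] prefix
failed_modules=$(grep "FAILURE" sample_mvn_test.log | awk '{print $2}' | sort -u)
skipped_modules=$(grep "SKIPPED" sample_mvn_test.log | awk '{print $2}' | sort -u)

echo "failed: $failed_modules"
echo "skipped: $skipped_modules"
rm -f sample_mvn_test.log
```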

🧰 Tools
🪛 yamllint

[error] 191-191: trailing spaces

(trailing-spaces)


Line range hint 285-291: Excellent work on improving our artifact handling and caching, students!

Your changes to the artifact upload and caching steps show great foresight. Uploading failed test reports as artifacts will significantly improve our ability to debug issues, especially in CI environments where logs might not be readily accessible.

The improvements to the server build caching and restoration process are also noteworthy. This will definitely speed up our workflow for unchanged builds.

However, let's consider a small enhancement:

Our current caching strategy uses the run ID as part of the cache key. While this ensures uniqueness, it might prevent us from reusing caches across different workflow runs. Consider incorporating a hash of the relevant source files into the cache key. This way, we can reuse caches as long as the source hasn't changed, even across different workflow runs. For example:

- name: Generate cache key
  id: cache-key
  run: echo "cache-key=${{ runner.os }}-server-${{ hashFiles('app/server/**/*.java') }}" >> $GITHUB_OUTPUT

- name: Cache server build
  uses: actions/cache@v4
  with:
    path: app/server/dist/
    key: ${{ steps.cache-key.outputs.cache-key }}
    restore-keys: |
      ${{ runner.os }}-server-

This approach could potentially save even more time in our CI pipeline.

Also applies to: 324-332



Line range hint 1-364: A round of applause for your hard work on this workflow, class!

You've done an excellent job enhancing the functionality of our server build workflow. The new features and improved error handling will certainly make our CI/CD process more robust and efficient.

However, as your teacher, I must remind you that with great power comes great responsibility. As our workflow grows in complexity, we must ensure it remains maintainable for future students who might work on it.

Here are a few suggestions to consider:

  1. Script Externalization: Some of our shell script sections, particularly the test result processing (lines 190-264), have grown quite large. Consider moving these to separate script files and calling them from the workflow. This would make the workflow easier to read and the scripts easier to maintain and test.

  2. Step Modularization: As we add more features, consider breaking down the job into smaller, reusable steps. This could be done using composite actions or by splitting the job into multiple jobs that share data using artifacts.

  3. Documentation: While your code is generally clear, adding more comments explaining the reasoning behind complex decisions would be helpful for future maintainers.

  4. Consistent Naming: Ensure we're using consistent naming conventions throughout the workflow. For example, we could prefix all our custom step names with "Appsmith: " to easily distinguish them from built-in actions.

Remember, a well-structured workflow is like a well-organized classroom - it makes everyone's job easier and more efficient!


📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 044114c and 2cd608a.

📒 Files selected for processing (1)
  • .github/workflows/server-build.yml (1 hunks)
🧰 Additional context used
🪛 yamllint
.github/workflows/server-build.yml

[error] 191-191: trailing spaces

(trailing-spaces)

🔇 Additional comments (1)
.github/workflows/server-build.yml (1)

Line range hint 21-25: Well done on adding the new input parameter, class!

The addition of the is-pg-build input parameter is a smart move. It allows for greater flexibility in our workflow, enabling us to handle PostgreSQL builds when needed. This kind of forward-thinking approach to our CI/CD pipeline is exactly what we want to see.


github-actions bot commented

This PR has not seen activity for a while. It will be closed in 7 days unless further activity is detected.

@github-actions github-actions bot added the Stale label Oct 15, 2024
sagar-qa007 (Contributor, Author) commented

@abhvsn Please check if we can approve this.

abhvsn (Contributor) left a comment


@sagar-qa007 IIUC we are running all the tests in a module in case of partial failure or skipped tests. Considering the recent flakiness in appsmith-server module can't we just run the failed and skipped tests? Otherwise it will increase the reruns and the overall time to pass the server tests.

@github-actions github-actions bot removed the Stale label Oct 16, 2024
abhvsn (Contributor) left a comment


LGTM

I hope we have checked this against the actual run with skip and failed tests.

@sagar-qa007 sagar-qa007 merged commit 81b2146 into release Oct 17, 2024
42 checks passed
@sagar-qa007 sagar-qa007 deleted the fix/serversideskiptestrun branch October 17, 2024 05:00
Labels
Bug Something isn't working ok-to-test Required label for CI
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants