fix: Fix server side skip test cases #36572
Conversation
Walkthrough
The changes made in the pull request focus on enhancing the functionality and control flow within the
Changes
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (5)
.github/workflows/server-build.yml (5)
179-182: Good job setting up the database URL conditionally!

Class, let's take a moment to appreciate this clever use of conditional logic. It's like choosing between chocolate and vanilla ice cream - both are delicious, but sometimes we need to pick one based on the situation.
However, I have a small suggestion to make this even better:
Consider adding a comment explaining why we're using different databases. It's like adding a little note in your textbook - it helps everyone understand the lesson better! Here's an example:
```diff
+# Set the database URL based on the build type
 if [[ "${{ inputs.is-pg-build }}" == "true" ]]; then
   export APPSMITH_DB_URL="postgresql://postgres:password@localhost:5432/postgres"
 else
   export APPSMITH_DB_URL="mongodb://localhost:27017/mobtools"
 fi
```
190-197: Excellent work on improving test result capture!

Class, this is a great example of how to keep track of your work. It's like writing down your answers during a pop quiz - it helps you remember what you did and where you might have made a mistake.
I do have one small suggestion to make this even more robust:
Consider adding an 'else if' condition to check for "BUILD SUCCESS" explicitly. This way, we account for all possible outcomes, just like in a multiple-choice question. Here's how you could do it:
```diff
 mvn test "${args[@]}" | tee mvn_test.log

 # Check for "BUILD FAILURE" in the mvn_test.log
 if grep -q "BUILD FAILURE" mvn_test.log; then
   test_result="failed"
+elif grep -q "BUILD SUCCESS" mvn_test.log; then
+  test_result="passed"
 else
-  test_result="passed"
+  test_result="unknown"
 fi
```

This way, we're prepared for any unexpected outcomes, just like a good student is prepared for any question on the test!
206-224: Great job on capturing detailed test results!

Class, this is like taking detailed notes during a lecture. It helps us understand exactly where we need to focus our attention for improvement.
I have a small suggestion to make this even more robust:
Consider adding error handling for unexpected log formats. It's like having an "other" option in a multiple-choice question. Here's an example:
```diff
 while IFS= read -r line; do
   if [[ $line == *"FAILURE"* ]]; then
     # Extract the module name for FAILURE
     module_name=$(echo "$line" | awk '{print $2}')
     echo "Debug: Found FAILURE in line: $line"
     echo "Debug: Extracted module name: $module_name"
     failed_modules+=("$module_name")
   elif [[ $line == *"SKIPPED"* ]]; then
     # Extract the module name for SKIPPED
     module_name=$(echo "$line" | awk '{print $2}')
     echo "Debug: Found SKIPPED in line: $line"
     echo "Debug: Extracted module name: $module_name"
     skipped_modules+=("$module_name")
+  else
+    echo "Debug: Unrecognized line format: $line"
   fi
 done < mvn_test.log
```

This way, we're prepared for any unexpected log formats, just like a good student is prepared for any type of question on the test!
231-242: Excellent work on creating a comprehensive failed tests report!

Class, this is like creating a study guide for the topics you struggled with on a test. It helps you focus your revision efforts where they're most needed.
I have a small suggestion to make this even more robust:
Consider using the `realpath` command to ensure absolute paths are used. This is like using the full name of a book instead of just its title - it helps avoid any confusion. Here's an example:

```diff
 for module in "${failed_modules[@]}" "${skipped_modules[@]}"; do
-  module_directories=$(find . -path "*/${module}*/src/test/java/*" -type f -name "*Test.java" -exec dirname {} \; | sort -u)
+  module_directories=$(find . -path "*/${module}*/src/test/java/*" -type f -name "*Test.java" -exec dirname {} \; | xargs -I {} realpath {} | sort -u)
   for module_directory in $module_directories; do
     test_classes=$(find "$module_directory" -type f -name "*Test.java" | sed 's|.*/src/test/java/||; s|\.java$||; s|/|.|g')
     for class_name in $test_classes; do
       if [[ ${#class_name} -le 240 ]] && ! grep -Fxq "$class_name#" "$OUTPUT_FILE"; then
         echo "${class_name}#" >> "$OUTPUT_FILE"
       fi
     done
   done
 done
```

This ensures that we're always working with full paths, just like always using the full title of a textbook to avoid any mix-ups!
246-257: Wonderful job on providing immediate feedback through GitHub comments!

Class, this is like getting instant feedback on your homework. It helps you understand where you need to improve right away.
I have a small suggestion to make this even more robust:
Consider adding error handling for the GitHub API call. It's like having a backup plan when your computer crashes while submitting an assignment online. Here's an example:
```diff
 if [[ -s $OUTPUT_FILE ]]; then
   content="$(
     echo "## Failed server tests"
     echo
     sed 's/^/- /' "$OUTPUT_FILE"
   )"
   echo "$content" >> "$GITHUB_STEP_SUMMARY"
-  curl --silent --show-error \
-    --header "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-    --data "$(jq -n --arg body "$content" '$ARGS.named')" \
-    "https://api.github.com/repos/$GITHUB_REPOSITORY/issues/${{ inputs.pr }}/comments" \
-    > /dev/null
+  if ! curl --silent --show-error \
+    --header "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
+    --data "$(jq -n --arg body "$content" '$ARGS.named')" \
+    "https://api.github.com/repos/$GITHUB_REPOSITORY/issues/${{ inputs.pr }}/comments"; then
+    echo "Failed to post comment to GitHub PR. Please check the failed tests manually."
+  fi
 fi
```

This way, we're prepared for any issues with the GitHub API, just like having a backup plan for submitting your homework!
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- .github/workflows/server-build.yml (1 hunks)
🔇 Additional comments (1)
.github/workflows/server-build.yml (1)
Line range hint 1-366: Overall, excellent work on improving the server build workflow!

Class, I'm impressed with the improvements made to this workflow. It's like upgrading from a basic calculator to a scientific one - you now have more tools at your disposal to solve complex problems.
Here's a summary of what we've learned today:
- We've improved our database setup to handle different types of tests.
- We've enhanced our test result capture and analysis.
- We've created a detailed report of failed and skipped tests.
- We're now providing immediate feedback through GitHub comments.
These changes will make our development process more efficient and our code more reliable. Remember, in the world of coding, as in the classroom, attention to detail and continuous improvement are key to success.
Keep up the good work, and don't forget to implement the suggested improvements. They're like extra credit assignments - they may seem small, but they can make a big difference in the long run!
This PR has not seen activity for a while. It will be closed in 7 days unless further activity is detected.
@abhvsn can we please review this?
Can we keep the failed and skipped tests separately, maybe within the same file, so that a deliberately skipped test does not get picked up on the next re-run?
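One way to read this suggestion: tag each recorded test with its status in the single output file, so a rerun can filter on failures only. This is a hypothetical sketch, not the actual change in the PR; the `OUTPUT_FILE` name and class names are illustrative assumptions.

```shell
# Hypothetical sketch: record failed and skipped tests in one file with a
# status prefix, so deliberately skipped tests can be excluded on rerun.
OUTPUT_FILE=$(mktemp)

# Assume these arrays were populated while parsing mvn_test.log.
failed_modules=("com.example.FooTest")
skipped_modules=("com.example.BarTest")

for class_name in "${failed_modules[@]}"; do
  echo "FAILED:${class_name}#" >> "$OUTPUT_FILE"
done
for class_name in "${skipped_modules[@]}"; do
  echo "SKIPPED:${class_name}#" >> "$OUTPUT_FILE"
done

# A rerun would pick up only the failures, leaving deliberate skips alone.
grep '^FAILED:' "$OUTPUT_FILE" | cut -d: -f2
```

Both categories stay in the same file, but the prefix lets the rerun step select only what it should re-execute.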
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
.github/workflows/server-build.yml (4)
179-182: Excellent work on improving our workflow, students!

Your changes to the `server-unit-tests` job show great attention to detail. The conditional checks for restoring cache and the improved logic for handling test results will make our pipeline more efficient.

I particularly like the addition of the PostgreSQL step. It's a great example of how we can make our workflow more versatile. However, let's make a small improvement:
Consider adding a comment explaining why we're using these specific PostgreSQL settings. It will help future maintainers understand the reasoning behind our choices. For example:
```shell
# Start PostgreSQL with increased max_connections for high concurrency testing
docker run --name appsmith-pg -p 5432:5432 -d -e POSTGRES_PASSWORD=password postgres:alpine postgres -N 1500
```
190-264: Impressive improvements to our test execution and result handling, class!

Your enhancements to the test execution and result handling are commendable. The robust error checking and result categorization will greatly improve our ability to identify and address issues quickly.
I'm particularly impressed with the new logic for handling failed and skipped modules. It's a thorough approach that will help us pinpoint problems more accurately.
However, let's consider a small optimization:
The current implementation processes the entire `mvn_test.log` file to extract failed and skipped modules. For very large test suites, this could potentially become time-consuming. Consider using a streaming approach or a more efficient log parsing method. For example:

```shell
# Extract failed and skipped modules more efficiently
failed_modules=$(grep "FAILURE" mvn_test.log | awk '{print $2}' | sort -u)
skipped_modules=$(grep "SKIPPED" mvn_test.log | awk '{print $2}' | sort -u)
```

This approach would process the log file only once and might be more efficient for large test suites.
🧰 Tools
🪛 yamllint
[error] 191-191: trailing spaces
(trailing-spaces)
Line range hint 285-291: Excellent work on improving our artifact handling and caching, students!

Your changes to the artifact upload and caching steps show great foresight. Uploading failed test reports as artifacts will significantly improve our ability to debug issues, especially in CI environments where logs might not be readily accessible.
The improvements to the server build caching and restoration process are also noteworthy. This will definitely speed up our workflow for unchanged builds.
However, let's consider a small enhancement:
Our current caching strategy uses the run ID as part of the cache key. While this ensures uniqueness, it might prevent us from reusing caches across different workflow runs. Consider incorporating a hash of the relevant source files into the cache key. This way, we can reuse caches as long as the source hasn't changed, even across different workflow runs. For example:
```yaml
- name: Generate cache key
  id: cache-key
  run: echo "cache-key=${{ runner.os }}-server-${{ hashFiles('app/server/**/*.java') }}" >> $GITHUB_OUTPUT

- name: Cache server build
  uses: actions/cache@v4
  with:
    path: app/server/dist/
    key: ${{ steps.cache-key.outputs.cache-key }}
    restore-keys: |
      ${{ runner.os }}-server-
```

This approach could potentially save even more time in our CI pipeline.
Also applies to: 324-332
🧰 Tools
🪛 yamllint
[error] 191-191: trailing spaces
(trailing-spaces)
Line range hint 1-364: A round of applause for your hard work on this workflow, class!

You've done an excellent job enhancing the functionality of our server build workflow. The new features and improved error handling will certainly make our CI/CD process more robust and efficient.
However, as your teacher, I must remind you that with great power comes great responsibility. As our workflow grows in complexity, we must ensure it remains maintainable for future students who might work on it.
Here are a few suggestions to consider:

1. **Script Externalization:** Some of our shell script sections, particularly the test result processing (lines 190-264), have grown quite large. Consider moving these to separate script files and calling them from the workflow. This would make the workflow easier to read and the scripts easier to maintain and test.
2. **Step Modularization:** As we add more features, consider breaking down the job into smaller, reusable steps. This could be done using composite actions or by splitting the job into multiple jobs that share data using artifacts.
3. **Documentation:** While your code is generally clear, adding more comments explaining the reasoning behind complex decisions would be helpful for future maintainers.
4. **Consistent Naming:** Ensure we're using consistent naming conventions throughout the workflow. For example, we could prefix all our custom step names with "Appsmith: " to easily distinguish them from built-in actions.
Remember, a well-structured workflow is like a well-organized classroom - it makes everyone's job easier and more efficient!
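As a concrete illustration of the script-externalization idea above, the large step body could shrink to a single call into a standalone, independently testable script. This is a hedged sketch; the script name, location, and contents are assumptions, not part of the PR.

```shell
# Hypothetical sketch: move test-result processing into its own script.
# The workflow step would then just run: ./process-test-results.sh mvn_test.log
cat > process-test-results.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
log_file="${1:?usage: process-test-results.sh <mvn_test.log>}"
# Count lines mentioning FAILURE; standalone scripts like this are easy to
# unit-test outside of CI. (|| true keeps a zero-match grep from failing.)
grep -c "FAILURE" "$log_file" || true
EOF
chmod +x process-test-results.sh

# Simulate a Maven log and run the externalized script.
printf 'module-a BUILD FAILURE\nmodule-b OK\n' > mvn_test.log
./process-test-results.sh mvn_test.log
```

The workflow YAML stays short, and the parsing logic gains a home where it can be linted and tested on its own.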
🧰 Tools
🪛 yamllint
[error] 191-191: trailing spaces
(trailing-spaces)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- .github/workflows/server-build.yml (1 hunks)
🧰 Additional context used
🪛 yamllint
.github/workflows/server-build.yml
[error] 191-191: trailing spaces
(trailing-spaces)
🔇 Additional comments (1)
.github/workflows/server-build.yml (1)
Line range hint 21-25: Well done on adding the new input parameter, class!

The addition of the `is-pg-build` input parameter is a smart move. It allows for greater flexibility in our workflow, enabling us to handle PostgreSQL builds when needed. This kind of forward-thinking approach to our CI/CD pipeline is exactly what we want to see.

🧰 Tools
🪛 yamllint
[error] 191-191: trailing spaces
(trailing-spaces)
This PR has not seen activity for a while. It will be closed in 7 days unless further activity is detected.
@abhvsn Please check if we can approve this.
@sagar-qa007 IIUC we are running all the tests in a module in case of partial failure or skipped tests. Considering the recent flakiness in appsmith-server module can't we just run the failed and skipped tests? Otherwise it will increase the reruns and the overall time to pass the server tests.
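Rerunning only the recorded failed/skipped test classes, rather than their whole modules, could look roughly like the sketch below. This is an assumption about how the filter might be built (the `OUTPUT_FILE` format mirrors the workflow's `ClassName#` entries), not the PR's actual behavior.

```shell
# Hypothetical sketch: turn the recorded test classes into a Maven -Dtest
# filter so only those classes rerun, instead of entire modules.
OUTPUT_FILE=$(mktemp)
printf 'com.example.FooTest#\ncom.example.BarTest#\n' > "$OUTPUT_FILE"

# Strip the trailing '#' markers and join the class names with commas.
test_filter=$(sed 's/#$//' "$OUTPUT_FILE" | paste -sd, -)
echo "$test_filter"

# Maven would then be invoked roughly as (not executed in this sketch):
#   mvn test -Dtest="$test_filter" -DfailIfNoTests=false
```

Surefire's `-Dtest` accepts a comma-separated list of classes, which would keep reruns scoped to the flaky tests themselves.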
Force-pushed 99bcf16 to 8be9d50 (Compare)
LGTM
I hope we have checked this against the actual run with skip and failed tests.
Description
Tip
Add a TL;DR when the description is longer than 500 words or extremely technical (helps the content, marketing, and DevRel team).
Please also include relevant motivation and context. List any dependencies that are required for this change. Add links to Notion, Figma or any other documents that might be relevant to the PR.
Fixes #<Issue Number> or Fixes <Issue URL>
Warning
If no issue exists, please create an issue first, and check with the maintainers if the issue is valid.
Automation
/ok-to-test tags="@tag.IDE"
🔍 Cypress test results
Tip
🟢 🟢 🟢 All cypress tests have passed! 🎉 🎉 🎉
Workflow run: https://github.com/appsmithorg/appsmith/actions/runs/11370534880
Commit: dea75ff
Cypress dashboard.
Tags:
@tag.IDE
Spec:
Wed, 16 Oct 2024 17:40:56 UTC
Communication
Should the DevRel and Marketing teams inform users about this change?
Summary by CodeRabbit
Bug Fixes
New Features
- `is-pg-build` for enhanced workflow control.
- `skip-tests` to "false" to enforce test execution.