Remove logging to debug list dataset routes #8248
Conversation
📝 Walkthrough
The pull request removes debug logging from the dataset list routes, touching DatasetController.scala and Dataset.scala.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
app/controllers/DatasetController.scala (1)
226-240: Consider fixing inconsistent indentation.
The nested for-comprehensions have inconsistent indentation, which affects readability. Consider aligning the indentation levels.
-          organization <- organizationDAO.findOne(byOrgaTuple._1)
-          groupedByDataStore = byOrgaTuple._2.groupBy(_._dataStore).toList
-          result <- Fox.serialCombined(groupedByDataStore) { byDataStoreTuple: (String, List[Dataset]) =>
-            for {
-                dataStore <- dataStoreDAO.findOneByName(byDataStoreTuple._1.trim)(GlobalAccessContext)
-                resultByDataStore: Seq[JsObject] <- Fox.serialCombined(byDataStoreTuple._2) { d =>
-                  datasetService.publicWrites(
-                    d,
-                    requestingUser,
-                    Some(organization),
-                    Some(dataStore),
-                    requestingUserTeamManagerMemberships) ?~> Messages("dataset.list.writesFailed", d.name)
-                }
-            } yield resultByDataStore
-          }
+          organization <- organizationDAO.findOne(byOrgaTuple._1)
+          groupedByDataStore = byOrgaTuple._2.groupBy(_._dataStore).toList
+          result <- Fox.serialCombined(groupedByDataStore) { byDataStoreTuple: (String, List[Dataset]) =>
+            for {
+              dataStore <- dataStoreDAO.findOneByName(byDataStoreTuple._1.trim)(GlobalAccessContext)
+              resultByDataStore: Seq[JsObject] <- Fox.serialCombined(byDataStoreTuple._2) { d =>
+                datasetService.publicWrites(
+                  d,
+                  requestingUser,
+                  Some(organization),
+                  Some(dataStore),
+                  requestingUserTeamManagerMemberships) ?~> Messages("dataset.list.writesFailed", d.name)
+              }
+            } yield resultByDataStore
+          }
app/models/dataset/Dataset.scala (1)
118-123: Consider implementing a centralized error handling strategy.
With the removal of inline error messages, this might be a good opportunity to implement a more structured error handling approach (a minimal sketch follows below):
- Consider introducing a centralized error logging/monitoring service
- Define error severity levels to distinguish between debug logs and critical errors
- Implement structured error types for different parsing failures
Also applies to: 223-223
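To make the suggestion above concrete, here is a minimal, self-contained sketch of structured parsing errors with severity levels and a centralized reporter. All names in it (ParsingError, ErrorSeverity, ErrorReporter, parseScale) are hypothetical illustrations and do not exist in the webknossos codebase; the sketch also assumes Scala 2.13+ for toDoubleOption.

// Hypothetical sketch only: these types are not part of the webknossos codebase.
object StructuredErrorsSketch {

  // Severity levels to distinguish debug-style messages from critical failures
  sealed trait ErrorSeverity
  object ErrorSeverity {
    case object Debug    extends ErrorSeverity
    case object Warning  extends ErrorSeverity
    case object Critical extends ErrorSeverity
  }

  // Structured error types for different parsing failures
  sealed trait ParsingError {
    def severity: ErrorSeverity
    def message: String
  }
  final case class MissingField(field: String) extends ParsingError {
    val severity: ErrorSeverity = ErrorSeverity.Critical
    val message: String         = s"Required field '$field' is missing"
  }
  final case class MalformedValue(field: String, raw: String) extends ParsingError {
    val severity: ErrorSeverity = ErrorSeverity.Warning
    val message: String         = s"Field '$field' has malformed value '$raw'"
  }

  // Centralized reporter: one place decides how each severity is surfaced
  trait ErrorReporter {
    def report(error: ParsingError): Unit
  }
  object ConsoleErrorReporter extends ErrorReporter {
    def report(error: ParsingError): Unit =
      println(s"[${error.severity}] ${error.message}")
  }

  // Parsing steps return Either[ParsingError, A] instead of logging inline
  def parseScale(raw: Option[String]): Either[ParsingError, Double] =
    raw match {
      case None        => Left(MissingField("scale"))
      case Some(value) => value.toDoubleOption.toRight(MalformedValue("scale", value))
    }

  def main(args: Array[String]): Unit =
    parseScale(Some("not-a-number")) match {
      case Left(error)  => ConsoleErrorReporter.report(error)
      case Right(scale) => println(s"Parsed scale: $scale")
    }
}

The point of the design is that call sites only describe what went wrong via typed errors, while one reporter decides how (and at what severity) each failure is logged.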
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- app/controllers/DatasetController.scala (1 hunks)
- app/models/dataset/Dataset.scala (2 hunks)
🔇 Additional comments (4)
app/controllers/DatasetController.scala (2)
179-214: LGTM! Clean removal of logging while maintaining core functionality.
The changes to the list method successfully remove unnecessary logging while preserving the essential error handling and both compact and detailed listing modes. The code remains clean and follows functional programming principles.
222-242: LGTM! Clean removal of logging while maintaining core functionality.
The changes to the listGrouped method successfully remove unnecessary logging while preserving the essential grouping functionality and error handling.
app/models/dataset/Dataset.scala (2)
223-223: LGTM! Consistent with logging removal objective.
The removal of error messaging here is consistent with the PR's objective of removing debug logging while maintaining the core parsing functionality.
118-123: Verify error tracking capabilities after removing error messages.
While removing error messages aligns with the PR objectives, ensure that there are alternative ways to track and debug parsing failures in production environments.
✅ Verification successful
Error tracking is sufficiently maintained through alternative mechanisms
Based on the search results, the codebase has comprehensive error tracking through:
- Dedicated error logging services (e.g., DatasetErrorLoggingService)
- Consistent error handling patterns using Fox.failure and logger calls
- Structured error reporting in various components
The removal of ?~> operators in the dataset parsing code doesn't impact the overall error tracking capabilities since:
- The core validation logic remains intact
- Errors are still propagated through the Fox monad (a simplified sketch of this pattern follows below)
- Higher-level error logging mechanisms are in place to catch and report failures
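To illustrate the propagation argument, here is a minimal sketch built around a simplified stand-in for the real Fox type; it is not the actual implementation from the webknossos codebase, and the ?~> operator shown only mimics the message-labeling behavior described above. All names (MiniFox, parseVoxelSize, loadDataset) are hypothetical.

// Simplified stand-in for Fox, for illustration only (not the webknossos implementation).
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object FoxPropagationSketch {

  // MiniFox wraps a Future of Either[errorMessage, value]
  final case class MiniFox[A](futureEither: Future[Either[String, A]]) {
    def flatMap[B](f: A => MiniFox[B]): MiniFox[B] =
      MiniFox(futureEither.flatMap {
        case Right(a)  => f(a).futureEither
        case Left(err) => Future.successful(Left(err)) // failure keeps propagating untouched
      })

    def map[B](f: A => B): MiniFox[B] =
      MiniFox(futureEither.map(_.map(f)))

    // ?~> -style operator: attach a higher-level label to a failure
    def ?~>(label: String): MiniFox[A] =
      MiniFox(futureEither.map {
        case Left(err) => Left(s"$label <- $err")
        case right     => right
      })
  }

  object MiniFox {
    def successful[A](a: A): MiniFox[A]     = MiniFox(Future.successful(Right(a)))
    def failure[A](err: String): MiniFox[A] = MiniFox(Future.successful(Left(err)))
  }

  // Even without an inline message at the parsing step, the failure itself
  // still propagates and can be labeled (and logged) further up the call chain.
  def parseVoxelSize(raw: String): MiniFox[Double] =
    raw.toDoubleOption match {
      case Some(d) => MiniFox.successful(d)
      case None    => MiniFox.failure("could not parse voxel size")
    }

  def loadDataset(rawVoxelSize: String): MiniFox[Double] =
    (for {
      voxelSize <- parseVoxelSize(rawVoxelSize)
    } yield voxelSize) ?~> "dataset.loadFailed"
}

In this sketch, loadDataset("not-a-number") would eventually yield a failure labeled "dataset.loadFailed <- could not parse voxel size", which a higher-level logging service could then report even though the parsing step itself attaches no inline message.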
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for error tracking mechanisms in the codebase
# Look for error tracking/monitoring patterns
rg -A 3 "(?:log|monitor|track|report).*(?:error|failure|exception)" --type scala
# Check for error handling patterns around dataset parsing
ast-grep --pattern 'parse($$$) { $$$_error$$$ }'
Length of output: 17807
Undoes #8247 and #8246
URL of deployed dev instance (used for testing):
Steps to test:
- http://localhost:9000/api/datasets?isActive=false still works