feat: support content map #79
Conversation
Walkthrough

The changes in this pull request introduce several modifications across various files to accommodate a new `language` field and content map support.
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (4)
crates/apub/src/utils/generate_map.rs (1)

`3-11`: **Consider adding documentation**

The function would benefit from documentation explaining:
- The purpose of the function
- The expected format of the language parameter
- Examples of valid and invalid inputs
- The structure of the returned JSON
Add this documentation:

````diff
+/// Generates a language-specific content map
+///
+/// # Arguments
+/// * `content` - The content to be mapped
+/// * `language` - Optional ISO 639-1 language code (e.g., "en", "en-US")
+///
+/// # Returns
+/// * `Some(Value)` - JSON object mapping the 2-letter language code to content
+/// * `None` - If language is invalid or not provided
+///
+/// # Examples
+/// ```
+/// use serde_json::json;
+/// let map = generate_map("Hello", Some("en-US".to_string()));
+/// assert_eq!(map, Some(json!({"en": "Hello"})));
+/// ```
 pub fn generate_map(content: &str, language: Option<String>) -> Option<Value> {
````

crates/db_schema/src/user.rs (1)
`25-25`: **LGTM! Consider adding documentation for the language field.**

The addition of the optional language field aligns well with the PR objectives. However, it would be helpful to add documentation specifying the expected format (e.g., "en-US", "ja-JP") and any constraints.
Add documentation above the field:

```diff
+    /// The user's preferred language in ISO 639-1 or BCP 47 format (e.g., "en-US", "ja-JP").
+    /// This field is populated from the feed's language specification.
     pub language: Option<String>,
```

crates/apub/src/actors/db_user.rs (1)
`95-95`: **Consider broader architectural implications**

The addition of the `language` field is just one part of the content map support. Consider these architectural aspects:
Language Persistence Strategy:
- How will language changes be handled in existing records?
- Should there be a default language fallback?
Content Map Integration:
- How will this language field be used to filter or select content from the content map?
- Consider adding helper methods for language-based content resolution
API Compatibility:
- Ensure the ActivityPub JSON serialization handles the new field correctly
- Consider versioning strategy for API clients
Would you like guidance on implementing any of these architectural components?
crates/apub/src/objects/note.rs (1)
`81-84`: **Add documentation for language handling.**

The language handling logic should be documented to clarify:
- Expected language code format (e.g., "en-US")
- Fallback behavior when language is not specified
- Relationship between item-level and feed-level language settings
Also applies to: 141-141
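The fallback relationship between item-level and feed-level language could be sketched as a simple precedence rule. This is a hedged illustration only; `effective_language` is a hypothetical helper, not part of the crate's API:

```rust
// Hypothetical helper illustrating item-level vs feed-level language
// precedence: prefer the item's own language, fall back to the feed's.
fn effective_language(item: Option<&str>, feed: Option<&str>) -> Option<String> {
    item.or(feed).map(str::to_owned)
}

fn main() {
    // Item-level language wins when both are present.
    assert_eq!(effective_language(Some("ja"), Some("en")), Some("ja".to_string()));
    // Feed-level language is the fallback.
    assert_eq!(effective_language(None, Some("en")), Some("en".to_string()));
    // Neither specified: no language.
    assert_eq!(effective_language(None, None), None);
    println!("ok");
}
```

Documenting whichever precedence the code actually uses would resolve the ambiguity the review points out.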
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (11)
- crates/apub/src/actors/db_user.rs (1 hunks)
- crates/apub/src/actors/db_user_impl.rs (2 hunks)
- crates/apub/src/objects/note.rs (6 hunks)
- crates/apub/src/utils/generate_map.rs (1 hunks)
- crates/apub/src/utils/mod.rs (1 hunks)
- crates/cron/src/tasks/check_feed_item.rs (3 hunks)
- crates/cron/src/tasks/update.rs (3 hunks)
- crates/db_migration/src/lib.rs (2 hunks)
- crates/db_migration/src/m20240131_000001_user.rs (1 hunks)
- crates/db_migration/src/m20241028_000001_user_language.rs (1 hunks)
- crates/db_schema/src/user.rs (1 hunks)
🔇 Additional comments (17)
crates/apub/src/utils/mod.rs (1)
`1-1`: **LGTM! Clean module organization.**

The addition of the `generate_map` module and its public export follows Rust's module system conventions and maintains a consistent structure with the existing `verify_blocked` module. This change appropriately supports the content map feature implementation.

Also applies to: 4-4
crates/apub/src/utils/generate_map.rs (2)
`1-2`: **LGTM: Clean and minimal imports**

The imports are appropriately scoped, importing only the necessary items from serde_json.
`3-11`: ⚠️ **Potential issue: Add validation for language codes**

While the basic implementation works, it could benefit from more robust language code handling:
- The current implementation assumes the language string is at least 2 characters long without validation
- It doesn't validate against ISO 639-1 language codes
- Regional variants (e.g., "en-US") are truncated without validation

Consider this safer implementation:

```diff
 use serde_json::{json, Value};
+use once_cell::sync::Lazy;
+use std::collections::HashSet;
+
+static ISO_LANGUAGES: Lazy<HashSet<&str>> = Lazy::new(|| {
+    // Common ISO 639-1 codes
+    vec!["en", "es", "fr", "de", "it", "ja", "ko", "zh"].into_iter().collect()
+});

 pub fn generate_map(content: &str, language: Option<String>) -> Option<Value> {
     match language {
-        Some(language) if language[..2].chars().all(|char| char.is_ascii_lowercase()) =>
-            Some(json!({
-                language[..2]: content
-            })),
+        Some(language) if language.len() >= 2 => {
+            let lang_code = language[..2].to_lowercase();
+            if ISO_LANGUAGES.contains(lang_code.as_str()) {
+                Some(json!({
+                    lang_code: content
+                }))
+            } else {
+                None
+            }
+        }
         _ => None,
     }
 }
```

This implementation:
- Validates language string length
- Checks against a predefined set of ISO 639-1 codes
- Handles case normalization properly
- Provides a clear structure for adding more language codes
Let's verify the current usage to understand the impact:
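As a self-contained illustration of the truncate-and-validate behavior discussed above, here is a std-only sketch. It returns a `HashMap` instead of `serde_json::Value` to avoid external crates, and the allow-list is illustrative, not the project's actual set:

```rust
use std::collections::HashMap;

// Sketch: validate and normalize an optional language tag to a 2-letter
// ISO 639-1 code, then build a content map keyed by that code.
fn generate_map(content: &str, language: Option<&str>) -> Option<HashMap<String, String>> {
    // Illustrative allow-list of ISO 639-1 codes.
    const KNOWN: [&str; 8] = ["en", "es", "fr", "de", "it", "ja", "ko", "zh"];
    let language = language?;
    // Guard the slice: `language[..2]` would panic on shorter input
    // or a non-ASCII boundary.
    if language.len() < 2 || !language.is_char_boundary(2) {
        return None;
    }
    let code = language[..2].to_lowercase();
    if KNOWN.contains(&code.as_str()) {
        let mut map = HashMap::new();
        map.insert(code, content.to_string());
        Some(map)
    } else {
        None
    }
}

fn main() {
    // Regional variant "en-US" is truncated to "en".
    let map = generate_map("Hello", Some("en-US")).unwrap();
    assert_eq!(map.get("en").map(String::as_str), Some("Hello"));
    // Unknown, too-short, or missing codes yield None.
    assert!(generate_map("Hello", Some("xx")).is_none());
    assert!(generate_map("Hello", Some("e")).is_none());
    assert!(generate_map("Hello", None).is_none());
    println!("ok");
}
```

The length and boundary guard is the key difference from the original code, which indexed `language[..2]` unconditionally.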
crates/db_migration/src/m20241028_000001_user_language.rs (2)
`1-6`: **LGTM! Clean and standard migration setup.**

The migration follows sea-orm conventions with proper imports and struct declaration.
`1-35`: **Verify language code handling in the application code.**

The migration adds support for language codes, but we should ensure proper handling in the application code.
Let's verify the language code handling:
✅ Verification successful
Based on the search results, language handling is implemented in several places:
- Database schema: a nullable `language` string column is added to both the User and UserFeedItem tables; migration `m20241028_000001_user_language.rs` handles this addition.
- Feed handling: language is properly passed through from feed items to ActivityPub notes; language code validation exists in `generate_map()`, which validates that language codes are 2 lowercase ASCII letters.
- ActivityPub integration: language is included in ActivityPub Note objects via the `content_map` field and is preserved when converting between feed items and ActivityPub notes.
The implementation appears to properly handle language codes by:
- Allowing nullable language fields
- Validating language codes are 2-letter codes
- Preserving language information across feed and ActivityPub conversions
- Using standard language code formats
The language code handling appears to be properly implemented and verified.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check if there's proper validation for language codes in the application code

# Look for language code validation patterns
echo "Searching for language code validation..."
rg -l "language.*valid|valid.*language"

# Look for language-related constants or enums
echo "Searching for language-related constants..."
ast-grep --pattern 'const $LANG'

# Check if we're using any language code libraries
echo "Checking for language code libraries in Cargo.toml..."
rg "iso.*lang|lang.*codes?" -g "Cargo.toml"
```

Length of output: 427
Script:
```shell
#!/bin/bash
# Let's try a broader search for language-related code

# Search for language-related fields and methods
echo "Searching for language-related code..."
rg -i "language" -A 3 -B 3

# Search for specific language code patterns (ISO 639-1, etc.)
echo "Searching for language code patterns..."
rg -i "([a-z]{2}(-[A-Z]{2})?)" --type rust

# Look for language in GraphQL schema if exists
echo "Checking GraphQL schema for language field..."
fd -e graphql -x cat {}

# Check for language-related tests
echo "Checking for language-related tests..."
rg -i "language" -g "*test*.rs"
```

Length of output: 409083
crates/db_migration/src/lib.rs (2)
`13-13`: **LGTM! Module declaration follows conventions.**

The new module declaration follows the established naming pattern and is properly placed in the chronological sequence.
`31-31`: **Verify migration timestamp sequence.**

The migration is properly added to the vector and follows the same pattern as other migrations.
Let's verify the chronological ordering of migrations:
✅ Verification successful
Let me gather more information to see the actual order of migrations.
Migration timestamps are in correct chronological order
The migrations are properly ordered chronologically from January 2024 through October 2024 (m20240131_* to m20241028_*). Each timestamp appears twice in the output because they are likely referenced in both the module declarations and the migration vector, which is expected. The new migration m20241028_000001 is correctly placed as the latest migration.
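The ordering property verified here can be sketched in a few lines: because each module name starts with a zero-padded `mYYYYMMDD_NNNNNN` timestamp, plain lexicographic order matches chronological order. The helper below is illustrative, not part of the crate:

```rust
// Migration module names sort chronologically because the timestamp is a
// zero-padded prefix, so lexicographic comparison matches time order.
fn is_sorted(names: &[&str]) -> bool {
    names.windows(2).all(|w| w[0] <= w[1])
}

fn main() {
    // Module names taken from this PR's migration list.
    let migrations = ["m20240131_000001_user", "m20241028_000001_user_language"];
    assert!(is_sorted(&migrations));
    // A swapped pair fails the check.
    assert!(!is_sorted(&["m20241028_000001", "m20240131_000001"]));
    println!("ok");
}
```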
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that migrations are in chronological order
# Expected: Timestamps should be in ascending order

# Extract and sort timestamps from migration module names
echo "Checking migration timestamp order..."
grep -o 'm[0-9]\{8\}_[0-9]\{6\}' crates/db_migration/src/lib.rs | sort -c 2>/dev/null
if [ $? -eq 0 ]; then
  echo "✓ Migrations are in correct chronological order"
else
  echo "⚠ Migrations are not in chronological order"
fi
```

Length of output: 275
Script:
```shell
#!/bin/bash
# Get all migration timestamps in the order they appear in the file
echo "Migration timestamps in file order:"
grep -o 'm[0-9]\{8\}_[0-9]\{6\}' crates/db_migration/src/lib.rs

echo -e "\nSorted migration timestamps:"
grep -o 'm[0-9]\{8\}_[0-9]\{6\}' crates/db_migration/src/lib.rs | sort
```

Length of output: 1040
crates/db_migration/src/m20240131_000001_user.rs (1)
Line range hint `8-42`: **Schema and enum definition mismatch.**

The User enum includes a Language variant, but the corresponding column is not defined in the table schema. This creates a discrepancy between the enum and the actual database structure.
Let's verify if the Language column is defined in other migrations:
If the Language column is needed now, update the table creation to include it:
```diff
             .col(string(User::LastRefreshedAt))
+            .col(string_null(User::Language))
             .to_owned(),
```
crates/cron/src/tasks/check_feed_item.rs (3)
`8-8`: **LGTM: Import added for new UserFeedTopLevel type**

The import is correctly placed and necessary for the new language support feature.
`47-47`: **Consider adding language support validation**

According to the PR objectives, the content map should not be outputted if the specified language is not supported. Consider adding validation in `Note::create` and `Note::update` to handle cases where the language from `top_level` is not supported.

```shell
#!/bin/bash
# Check if Note::create and Note::update implementations include language validation
rg "impl.*Note" -A 10 --type rust
```

Consider:
- Defining a list of supported languages
- Adding validation before creating/updating notes
- Returning appropriate errors for unsupported languages
Also applies to: 92-92
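The three steps above can be sketched together. Everything here is hypothetical scaffolding for illustration — the supported-language list, the error type, and the function name do not exist in the codebase:

```rust
// Hypothetical error type for rejecting notes with unsupported languages.
#[derive(Debug, PartialEq)]
enum NoteError {
    UnsupportedLanguage(String),
}

// Validate before creating/updating a note; return an error for
// unsupported languages instead of silently emitting a content map.
fn validate_language(lang: &str) -> Result<(), NoteError> {
    // Illustrative supported-language list.
    const SUPPORTED: [&str; 3] = ["en", "ja", "zh"];
    if SUPPORTED.contains(&lang) {
        Ok(())
    } else {
        Err(NoteError::UnsupportedLanguage(lang.to_string()))
    }
}

fn main() {
    assert!(validate_language("en").is_ok());
    assert_eq!(
        validate_language("xx"),
        Err(NoteError::UnsupportedLanguage("xx".to_string()))
    );
    println!("ok");
}
```

Calling such a guard at the top of `Note::create`/`Note::update` would make the "skip content map for unsupported languages" behavior explicit rather than implicit.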
`15-16`: **Verify consistent usage of top_level parameter**

The `top_level` parameter has been correctly added to all function signatures and is properly propagated through the call chain. Let's verify that all callers of these functions have been updated accordingly.

Also applies to: 39-40, 75-76
✅ Verification successful
Let me try a different search pattern to find the callers of `check_feed_item`.
Based on the search results, I can now generate the final response:

**`top_level` parameter is correctly propagated**

The verification shows that all callers of `check_feed_item` have been properly updated. The function is called in two locations within `crates/cron/src/tasks/update.rs`, and in both cases the `top_level` parameter is available in scope and correctly passed to the function.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for direct calls to check_feed_item to ensure they include the top_level parameter
rg "check_feed_item\(" --type rust -B 2 -A 2

# Search for any remaining calls that might not have been updated
ast-grep --pattern 'check_feed_item($_, $_, $_)'
```

Length of output: 3786
Script:
```shell
#!/bin/bash
# Search for direct calls to check_feed_item with proper context
rg "check_feed_item" --type rust -B 5 -A 5

# Also search for the function definition to understand its full signature
ast-grep --pattern 'pub async fn check_feed_item($$$)'
```

Length of output: 3157
crates/apub/src/actors/db_user_impl.rs (2)
`54-54`: **LGTM: Language field initialization looks correct.**

The language field is properly initialized from the feed's top-level language setting, maintaining the optional nature which allows for cases where language isn't specified.
Line range hint `54-69`: **Verify language handling across the codebase.**

Let's verify how language codes are handled in other parts of the codebase to ensure consistent implementation.
✅ Verification successful
**Language handling is consistent across the codebase**

The verification shows that language handling is implemented consistently throughout the codebase:
- All language fields are defined as `Option<String>` across different structs.
- Language validation is properly implemented in `generate_map.rs`, which checks for valid two-letter language codes.
- The language field is appropriately propagated between different data structures (User, UserFeedTopLevel, UserFeedItem).
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for language-related code patterns
echo "Searching for language field usage..."
rg -l "language.*Option<String>"

echo "Searching for language validation or processing..."
rg "language.*split|language.*contains"

echo "Searching for language-related constants or supported languages..."
rg "const.*SUPPORTED_LANGUAGES|vec!.*languages"
```

Length of output: 644
Script:
```shell
#!/bin/bash
# Let's examine the language field implementations in the identified files
echo "=== Checking language field in user_feed_item.rs ==="
rg "language" crates/db_schema/src/user_feed_item.rs -A 2 -B 2

echo -e "\n=== Checking language field in user.rs ==="
rg "language" crates/db_schema/src/user.rs -A 2 -B 2

echo -e "\n=== Checking language field in user_feed_top_level.rs ==="
rg "language" crates/feed/src/user_feed_top_level.rs -A 2 -B 2

echo -e "\n=== Checking language field in generate_map.rs ==="
rg "language" crates/apub/src/utils/generate_map.rs -A 2 -B 2

# Let's also check for any language validation or processing in the tests
echo -e "\n=== Checking language-related tests ==="
rg "language.*test" -A 5
```

Length of output: 1709
crates/apub/src/actors/db_user.rs (1)
`95-95`: ⚠️ **Potential issue: Implementation appears incomplete for language support requirements**

The current implementation only adds the `language` field but doesn't fulfill the PR objectives:
- The field is always set to `None` without attempting to read the language from the input JSON.
- No validation of language codes is implemented.
- No documentation explains the purpose and format of the language field
Consider implementing these changes:
```diff
-            language: None,
+            /// ISO 639-1 language code with optional region tag (e.g., "en-US", "ja")
+            language: json.language.as_ref().map(|lang| {
+                // Validate language code format
+                if !is_valid_language_code(lang) {
+                    tracing::warn!("Invalid language code received: {}", lang);
+                    None
+                } else {
+                    Some(lang.to_string())
+                }
+            }).flatten(),
```

Let's verify if the JSON input structure includes the language field:
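The suggestion relies on a helper `is_valid_language_code` that does not exist yet. A minimal std-only sketch of what such a helper might accept — "xx" or "xx-YY" style tags, as an assumption, not a spec-complete BCP 47 parser:

```rust
// Hypothetical validator: accepts a 2-letter lowercase primary subtag,
// optionally followed by a 2-letter uppercase region (e.g. "en", "en-US").
fn is_valid_language_code(lang: &str) -> bool {
    let mut parts = lang.splitn(2, '-');
    let primary = parts.next().unwrap_or("");
    let primary_ok =
        primary.len() == 2 && primary.chars().all(|c| c.is_ascii_lowercase());
    match parts.next() {
        None => primary_ok,
        Some(region) => {
            primary_ok
                && region.len() == 2
                && region.chars().all(|c| c.is_ascii_uppercase())
        }
    }
}

fn main() {
    assert!(is_valid_language_code("en"));
    assert!(is_valid_language_code("en-US"));
    assert!(!is_valid_language_code("EN"));
    assert!(!is_valid_language_code("english"));
    assert!(!is_valid_language_code("en-us"));
    println!("ok");
}
```

A production version would likely delegate to a language-tag crate rather than hand-roll the grammar.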
crates/apub/src/objects/note.rs (3)
`12-12`: **LGTM: Content map implementation aligns with requirements.**

The addition of the `content_map` field and related imports properly sets up the foundation for language-specific content handling.

Also applies to: 24-24, 49-49
`186-197`: **LGTM: Proper handling of timestamps with language support.**

The changes to the `create` and `update` methods correctly integrate language support while maintaining proper timestamp handling:
- Create: Sets current time as published
- Update: Preserves original published time and sets current time as updated
Also applies to: 202-214
`141-141`: **Consider validating language codes.**

The language codes from both `json.language` and `top_level.language` are passed directly to `generate_map` without validation. Consider adding validation to ensure they conform to expected formats (e.g., BCP 47).

Let's check if language validation is implemented elsewhere:
closed #73
Summary by CodeRabbit
Release Notes
**New Features**
- `language` field in the user data model, allowing users to specify their preferred language.
- `content_map` field in the `Note` struct for enhanced content mapping capabilities.
- `generate_map` function to create JSON objects based on language and content.

**Improvements**
- Functions now accept `language` and `top_level` parameters, improving context management during operations.

**Database Migration**
- Migration for the `language` field in the user table, ensuring the database schema is up-to-date.