feat: Improve and add admin features (관리자 기능 개선 및 추가) #459
Conversation
…onLogs - Added a key parameter to the getCronLogs method so logs can be filtered by taskName - Shortened the log task names for cron jobs
# Conflicts: # src/modules/cron/cron.service.ts
# Conflicts: # src/modules/user/user.service.ts
# Conflicts: # src/common/guards/jwtAuth.guard.ts
# Conflicts: # package-lock.json # package.json
# Conflicts: # eslint.config.js # package-lock.json # package.json
# Conflicts: # eslint.config.js
# Conflicts: # src/modules/bury/bury.module.ts
Walkthrough

This pull request introduces several enhancements across multiple files, focusing on database configuration, user and essay management, and pagination support. Key changes include the addition of connection management properties in the TypeORM configuration, new columns in the user entity for improved search capabilities, and migrations to support these schema changes. The admin controllers and services have been updated to include pagination in various methods, enhancing data retrieval efficiency. Additionally, new DTOs have been defined to structure responses better. Overall, the modifications aim to improve the functionality and performance of the application.
Actionable comments posted: 16
🧹 Outside diff range and nitpick comments (9)
src/modules/essay/dto/essay.dto.ts (2)
Line range hint 47-50: Consider enhancing location validation.

Since we're moving from precise coordinates to a string-based location, consider adding more specific validation for the location field.

Consider applying this enhancement:

- @IsString()
- @Expose()
- location: string;
+ @IsString()
+ @Expose()
+ @ApiProperty({ description: 'Human-readable location string' })
+ @IsOptional()
+ @IsString({ message: 'Location must be a valid string' })
+ location: string;
2-2: Clean up unused imports.

The IsLatitude and IsLongitude validators are no longer needed since we removed the corresponding properties.

src/migrations/2024/10/1730048056148-addVectorUser.ts (1)
7-7: Move extension creation to a separate migration.

Creating PostgreSQL extensions should be handled in dedicated migrations (a sketch of such a migration follows this list) because they:
- Affect the entire database
- May require superuser privileges
- Should be tracked independently for better maintenance
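For illustration, a dedicated migration that only manages the extensions could look roughly like the sketch below; the class name and timestamp are placeholders, not code from this PR:

```typescript
import { MigrationInterface, QueryRunner } from 'typeorm';

// Hypothetical stand-alone migration that only manages the search-related extensions.
export class AddSearchExtensions1730000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // May require elevated privileges on the target database.
    await queryRunner.query(`CREATE EXTENSION IF NOT EXISTS pg_trgm`);
    await queryRunner.query(`CREATE EXTENSION IF NOT EXISTS unaccent`);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    // Dropping is optional; other objects may still depend on these extensions.
    await queryRunner.query(`DROP EXTENSION IF EXISTS unaccent`);
    await queryRunner.query(`DROP EXTENSION IF EXISTS pg_trgm`);
  }
}
```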
src/entities/user.entity.ts (2)
185-186: Consider using a more specific type for search_vector.

Instead of using any, consider creating a custom type or using a more specific type for better type safety.

- search_vector?: any;
+ search_vector?: unknown;
185-189: Add documentation for search-related columns.

These columns are crucial for the search functionality. Consider adding JSDoc comments to explain their purpose and relationship with the email field.

+ /**
+  * Full-text search vector for email field.
+  * @remarks Updated automatically via database trigger
+  */
  @Column({ type: 'tsvector', select: false, nullable: true })
  search_vector?: unknown;
+ /**
+  * Normalized (unaccented) version of email for improved search.
+  * @remarks Updated automatically via database trigger when email changes
+  */
  @Column({ type: 'text', nullable: true, select: false })
  unaccented_email?: string;

src/modules/admin/admin.repository.ts (1)
198-205: Add parameter validation for pagination.

The pagination implementation looks good, but consider adding validation for the pagination parameters to handle edge cases.

Apply this diff to add parameter validation:

 async findAdmins(activated: boolean, page?: number, limit?: number) {
+  const validatedPage = Math.max(1, page || 1);
+  const validatedLimit = Math.max(1, Math.min(limit || 10, 100));
   const [admins, total] = await this.adminRepository.findAndCount({
     where: activated !== undefined ? { activated } : {},
-    skip: (page - 1) * limit,
-    take: limit,
+    skip: (validatedPage - 1) * validatedLimit,
+    take: validatedLimit,
     order: { createdDate: 'DESC' },
   });
   return { admins, total };
 }

This change:
- Ensures page is at least 1
- Ensures limit is between 1 and 100
- Provides default values (page=1, limit=10)
src/modules/essay/essay.repository.ts (2)
Line range hint 728-748: Consider adding spatial indexing for location-based queries.

The findNearbyEssays method performs spatial queries. Consider adding a spatial index to improve performance for location-based searches.

Add the following index to your database migrations:
CREATE INDEX idx_essay_coordinates ON essay USING GIST (coordinates);
444-450: Consider implementing query result caching.

For frequently accessed data, consider implementing query result caching using Redis or a similar caching solution (see the sketch after this list). This would be particularly beneficial for:
- Search results with common keywords
- Full essay listings with pagination
Also applies to: 571-624
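As a rough illustration of this suggestion (not code from this PR), a read-through cache around the search call could look like the sketch below; the Redis URL, key format, TTL, and the fetchFromDb callback are assumptions:

```typescript
import Redis from 'ioredis';

// Minimal read-through cache sketch for paginated search results.
const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

export async function searchEssaysCached(
  keyword: string,
  page: number,
  limit: number,
  fetchFromDb: (keyword: string, page: number, limit: number) => Promise<unknown>,
): Promise<unknown> {
  const cacheKey = `essay:search:${keyword}:${page}:${limit}`;

  // Serve from cache when the same keyword/page combination was fetched recently.
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  const result = await fetchFromDb(keyword, page, limit);
  // A short TTL (60 s) keeps hot keywords cheap while staying reasonably fresh.
  await redis.set(cacheKey, JSON.stringify(result), 'EX', 60);
  return result;
}
```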
src/migrations/2024/10/1730047331380-addTrigramUser.ts (1)
46-50: Ensure the down method reverses all changes correctly.

After modifying the migration to use a generated column, make sure the down method accurately reverses these changes by removing any redundant drop statements related to triggers and functions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (12)
- src/config/typeorm.config.ts (1 hunks)
- src/entities/user.entity.ts (1 hunks)
- src/migrations/2024/10/1730047331380-addTrigramUser.ts (1 hunks)
- src/migrations/2024/10/1730048056148-addVectorUser.ts (1 hunks)
- src/modules/admin/admin.controller.ts (6 hunks)
- src/modules/admin/admin.repository.ts (1 hunks)
- src/modules/admin/admin.service.ts (5 hunks)
- src/modules/admin/dto/response/adminsRes.dto.ts (1 hunks)
- src/modules/admin/dto/response/essayInfoRes.dto.ts (1 hunks)
- src/modules/essay/dto/essay.dto.ts (1 hunks)
- src/modules/essay/essay.repository.ts (1 hunks)
- src/modules/user/user.repository.ts (1 hunks)
🔇 Additional comments (23)
src/modules/admin/dto/response/adminsRes.dto.ts (3)
3-4: LGTM: Import statements are appropriate.

The added imports from class-transformer and class-validator are necessary for the DTO's validation and serialization requirements.

8-9: LGTM: Proper serialization decorator added.

The @Expose() decorator ensures the admins array is properly included in the serialized response, maintaining consistency with the new pagination properties.

1-24: Verify consistent usage of pagination DTO.

The DTO structure looks good and aligns with the PR objectives. Let's verify its consistent usage across the codebase.
✅ Verification successful
Let me gather more information about the service implementation to ensure proper usage of the DTO.
Pagination DTO implementation is correctly used
The verification shows that the AdminsResDto is properly implemented and consistently used:
- The service correctly sets all required properties (admins, total, page, totalPage)
- The pagination calculation follows the same pattern as other paginated endpoints
- The DTO transformation is handled through the utilsService.transformToDto utility
- The controller properly declares the return type and documents it with Swagger decorators
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

#!/bin/bash
# Description: Verify the consistent usage of AdminsResDto pagination properties

# Check for any direct instantiation of AdminsResDto to ensure all properties are set
ast-grep --pattern 'new AdminsResDto()'

# Check the controller's return type annotation
rg -A 5 'AdminsResDto' src/modules/admin/

Length of output: 2040

Script:

#!/bin/bash
# Check the service implementation for AdminsResDto usage
rg -A 10 "getAdmins.*{" src/modules/admin/

# Check for any transformation or mapping to AdminsResDto
ast-grep --pattern 'plainToInstance($_, AdminsResDto, $_)'

Length of output: 193

Script:

#!/bin/bash
# Check the service implementation for AdminsResDto usage with fixed regex
rg -A 10 "getAdmins" src/modules/admin/

# Check for class-transformer usage in admin service
rg -A 5 "plainToInstance" src/modules/admin/admin.service.ts

# Look for pagination-related code in admin service
rg -A 10 "page.*limit" src/modules/admin/admin.service.ts

Length of output: 10731
src/modules/essay/dto/essay.dto.ts (1)
Line range hint 1-52: Verify the impact of removing geo-coordinates.

While the removal of latitude/longitude properties simplifies the DTO, we should ensure this change doesn't impact existing features or data consistency.
Let's verify the usage of these coordinates in the codebase:
src/migrations/2024/10/1730048056148-addVectorUser.ts (1)
1-5: LGTM! Migration class follows TypeORM conventions.

The class name and timestamp follow the standard TypeORM migration naming pattern.
src/config/typeorm.config.ts (1)
28-29: LGTM: Connection pool settings are well-configured.

The addition of keepAlive and idleTimeoutMillis settings will help maintain stable connections for the new search features. The values chosen are reasonable (a minimal config sketch follows this list):
- keepAlive: true prevents connection drops
- idleTimeoutMillis: 30000 (30 s) is a good balance between resource utilization and connection reuse
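For reference, a minimal sketch of where such pool options typically live in a NestJS TypeORM configuration is shown below; the exact shape of this repository's config file is an assumption, and only the two reviewed options are shown:

```typescript
import { TypeOrmModuleOptions } from '@nestjs/typeorm';

// Sketch only: the real config also wires credentials, entities, migrations, etc.
export const typeOrmConfigSketch: TypeOrmModuleOptions = {
  type: 'postgres',
  // ...connection settings omitted...
  extra: {
    keepAlive: true, // keep idle sockets alive to avoid dropped connections
    idleTimeoutMillis: 30000, // release pooled connections idle for more than 30 s
  },
};
```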
src/entities/user.entity.ts (1)

185-189: Verify database triggers and indexes.

The search functionality depends on database triggers and indexes. Let's verify their setup:
✅ Verification successful
Let me gather more information about the GIN index specifically for the user table.
Database triggers and indexes are properly configured
The verification shows that:
- A GIN index IDX_USER_SEARCH_VECTOR is created on the search_vector column
- The search_vector is a STORED generated column using to_tsvector on the email field
- The unaccent extension is properly installed
- No trigger is needed as the column is GENERATED ALWAYS AS, which automatically updates the vector
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

#!/bin/bash
# Description: Verify the presence of required triggers and indexes for search functionality

# Test 1: Check for triggers on the user table
echo "Checking triggers..."
rg -A 5 "CREATE TRIGGER.*user" src/migrations

# Test 2: Check for GIN index on search_vector
echo "Checking indexes..."
rg -A 5 "CREATE INDEX.*GIN.*search_vector" src/migrations

# Test 3: Check for unaccent extension
echo "Checking unaccent extension..."
rg -A 5 "CREATE EXTENSION.*unaccent" src/migrations

Length of output: 2650

Script:

#!/bin/bash
# Check for GIN index specifically on user table's search_vector
echo "Checking user table GIN index..."
rg -A 5 "CREATE INDEX.*user.*GIN" src/migrations

# Check for any trigger setup in the vector migration
echo "Checking vector migration..."
cat src/migrations/2024/10/1730048056148-addVectorUser.ts

Length of output: 1353
src/modules/user/user.repository.ts (2)
149-151: LGTM: Clean pagination implementation.

The pagination logic is implemented correctly with proper offset calculation.
155-162: Verify the performance impact of trigram searches.

While the trigram-based search implementation is correct, it might benefit from an index to optimize performance.

Consider adding a GiST index on the unaccented_email column using the trigram operator:

CREATE INDEX idx_user_email_trigram ON "user" USING gist (unaccented_email gist_trgm_ops);

Let's verify if this index exists:
✅ Verification successful
The previous search didn't find any files because we need to first verify if there are any SQL or migration files in the repository. Let's also broaden our search to include any trigram-related configurations or setups.
Trigram search performance is already optimized with GIN index
The codebase already has the necessary trigram index for the unaccented_email column in the user table. This was added in migration 1730047331380-addTrigramUser.ts, which creates a GIN index:

CREATE INDEX "IDX_USER_UNACCENTED_EMAIL" ON "user" USING gin (unaccented_email gin_trgm_ops)

This GIN index is actually more efficient than the suggested GiST index for this use case, as GIN indexes are optimized for static data and perform better for exact and pattern matching queries, which is exactly what's being used in the code (ILIKE with wildcards).

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

#!/bin/bash
# Search for trigram index creation in migration files
rg -l "CREATE INDEX.*trigram" --type sql

Length of output: 191

Script:

#!/bin/bash
# Search for any SQL migration files first
fd -e sql

# Then search for any files that might contain database schema or migration info
fd -e typeorm.ts -e migration.ts

# Look for trigram-related content in any file
rg -i "trigram|trgm"

Length of output: 4532
src/modules/essay/essay.repository.ts (2)
Line range hint 571-624: Well-implemented search optimization!

The search implementation is excellent, with (a sketch of this pattern follows the list):
- Smart switching between trigram and full-text search based on keyword length
- Proper weighting of title and content matches
- Efficient use of PostgreSQL's full-text search capabilities
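To make the described pattern concrete, here is a minimal sketch of a keyword-length switch between trigram similarity and weighted full-text ranking; the table and column names, the length threshold, and the query shapes are illustrative assumptions (requiring the pg_trgm extension), not the repository's actual implementation:

```typescript
import { DataSource } from 'typeorm';

// Short keywords use trigram similarity; longer keywords use full-text search
// with title matches weighted above content matches.
export async function searchEssaysSketch(
  dataSource: DataSource,
  keyword: string,
  page: number,
  limit: number,
) {
  const offset = (page - 1) * limit;

  if (keyword.length < 3) {
    // Trigram similarity copes better with very short or partial keywords.
    return dataSource.query(
      `SELECT e.*, similarity(e.title, $1) AS score
         FROM essay e
        WHERE e.title % $1 OR e.content % $1
        ORDER BY score DESC
        LIMIT $2 OFFSET $3`,
      [keyword, limit, offset],
    );
  }

  return dataSource.query(
    `SELECT e.*,
            ts_rank_cd(
              setweight(to_tsvector('simple', coalesce(e.title, '')), 'A') ||
              setweight(to_tsvector('simple', coalesce(e.content, '')), 'B'),
              plainto_tsquery('simple', $1)
            ) AS rank
       FROM essay e
      WHERE (to_tsvector('simple', coalesce(e.title, '')) ||
             to_tsvector('simple', coalesce(e.content, ''))) @@ plainto_tsquery('simple', $1)
      ORDER BY rank DESC
      LIMIT $2 OFFSET $3`,
    [keyword, limit, offset],
  );
}
```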
444-450: Verify data completeness after join removals.

The removal of joins for story, reports, and reviews might impact features that depend on this data. Please ensure that these relations are not required by the consuming components.

✅ Verification successful
Let me analyze the findings from the shell script results:
essay.story is extensively used across multiple files:
- In story.service.ts for managing story associations
- In essay.service.ts for updating story relationships
- In essay.repository.ts for various query operations including filtering and joins
- Defined as a relationship in story.entity.ts
essay.reports is used in:
- admin.service.ts for processing reports
- essay.repository.ts for querying unprocessed reports
- Defined as a relationship in reportQueue.entity.ts
essay.reviews is used in:
- admin.service.ts for processing reviews
- essay.repository.ts for querying reviews
- Defined as a relationship in reviewQueue.entity.ts
Let me verify if these relations are still maintained in the current implementation of findFullEssays:
Let me try a different approach to get the context of the findFullEssays method:
Let me check where this method is being used to understand its purpose and requirements:
Based on the findings, I can now provide a final response:
The removal of joins is safe and intentional
The findFullEssays method is only used by the admin service's getFullEssays endpoint for basic essay listing with pagination. The removed joins (story, reports, reviews) are not required for this specific use case because:
- The admin service transforms the results to EssayInfoResDto, which doesn't need these relations
- When detailed essay information is needed, other repository methods like findFullEssay (which still maintains all joins) are used instead
- The method is correctly optimized for its specific use case of paginated listing
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usages of removed relations in the codebase
echo "Searching for story usage..."
rg -A 3 'essay\.story' --type ts

echo "Searching for reports usage..."
rg -A 3 'essay\.reports' --type ts

echo "Searching for reviews usage..."
rg -A 3 'essay\.reviews' --type ts

Length of output: 8967

Script:

#!/bin/bash
# Get the full context of findFullEssays method
ast-grep --pattern 'async findFullEssays($_) { $$$ }'

Length of output: 57

Script:

#!/bin/bash
# Search for the findFullEssays method implementation
echo "Searching for findFullEssays method..."
rg -B 5 -A 15 "findFullEssays" src/modules/essay/essay.repository.ts

Length of output: 757

Script:

#!/bin/bash
# Search for usages of findFullEssays
echo "Searching for findFullEssays usage..."
rg -B 3 -A 3 "findFullEssays" --type ts --glob '!*.repository.ts'

# Also get the method signature from the repository interface
echo -e "\nSearching for repository interface definition..."
rg -B 2 -A 2 "findFullEssays" --type ts --glob '*interface*'

Length of output: 1000
src/modules/admin/admin.service.ts (2)
70-71: LGTM: Imports are correctly added.

The new imports for UserResDto and EssayService are properly placed and necessary for the enhanced search functionality.
85-85: LGTM: Dependency injection is properly implemented.

The EssayService is correctly injected following NestJS dependency injection patterns.

src/modules/admin/dto/response/essayInfoRes.dto.ts (3)
2-4: Imports are correctly added.

The import statements for Expose, Type, EssayDto, and UserDto are appropriate and necessary for the updated functionality.
9-10: ⚠️ Potential issue: Ensure UserDto does not expose sensitive information.

Including the author property of type UserDto may inadvertently expose sensitive user information when serialized. Verify that UserDto only contains fields safe for public exposure, avoiding PII such as emails, passwords, or personal identifiers.

Run the following script to list the properties of UserDto and ensure no sensitive fields are included:

#!/bin/bash
# Description: List the properties of UserDto to verify exposed fields.
# Test: Retrieve the properties of UserDto.
rg -A 20 'export class UserDto'
6-6: ⚠️ Potential issue: Confirm that extending EssayDto doesn't introduce unintended properties.

By extending EssayDto, EssayInfoResDto inherits all properties from EssayDto. Ensure that all inherited properties are appropriate for exposure in this DTO, and that no sensitive information is unintentionally included in API responses.

Run the following script to list the properties of EssayDto and verify that only appropriate fields are inherited:

✅ Verification successful
Let me try to locate the EssayDto file first.
Now let's examine the content of the EssayDto class to see its properties.
Let me check the EssayInfoResDto to compare and understand the context better.
Inheritance of EssayDto properties is appropriate and secure.

The inheritance is well-designed and safe because:
- All properties in EssayDto are explicitly marked with the @Expose() decorator, indicating intentional exposure
- The inherited properties (id, title, content, linkedOutGauge, createdDate, updatedDate, thumbnail) are all appropriate for an essay info response
- EssayInfoResDto only adds an author field, which is also properly exposed and typed
- No sensitive information is present in the parent EssayDto
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: List the properties of EssayDto to verify inherited fields.
# Test: Retrieve the properties of EssayDto.
rg -A 20 'export class EssayDto'

Length of output: 1091

Script:

#!/bin/bash
# Search for EssayDto file
fd -e ts -e js "essay.dto"

# If direct filename search fails, try searching for the class definition
rg -l "class EssayDto"

Length of output: 201

Script:

#!/bin/bash
# Show the content of EssayDto class with context
rg -A 30 "export class EssayDto" src/modules/essay/dto/essay.dto.ts

Length of output: 446

Script:

#!/bin/bash
# Show the content of EssayInfoResDto
cat src/modules/admin/dto/response/essayInfoRes.dto.ts

Length of output: 386
src/migrations/2024/10/1730047331380-addTrigramUser.ts (2)
7-8: Verify database permissions for creating extensions.

Ensure that the database user executing this migration has the necessary permissions to create the pg_trgm and unaccent extensions. Without proper permissions, the migration may fail.
40-42: Assess the necessity of the full-text search index on email.

The index IDX_USER_SEARCH_VECTOR uses full-text search on the email field.

src/modules/admin/admin.controller.ts (5)
440-441: LGTM: Added documentation for pagination parameters.

The added documentation correctly describes the page and limit query parameters with their default values, enhancing the API documentation for the getAdmins method.
453-455: LGTM: Included pagination query parameters in method definition.

The @ApiQuery decorators for page and limit have been properly added to the getAdmins method, ensuring that the Swagger documentation reflects these query parameters.
456-460: LGTM: Added pagination parameters to the getAdmins method signature.

The page and limit parameters with default values have been correctly added to the method signature and are passed to the service layer. This enables pagination in the getAdmins endpoint.
482-493: LGTM: Added endpoint for admins to retrieve their own details.

The new getMyAdmin method allows an admin to fetch their own information using req.user.id. This addition enhances the admin's ability to access their profile details.
1151-1162: ⚠️ Potential issue: Fix inconsistency between path and query parameters.

The keyword parameter is defined as a path parameter in the route (:keyword), but it is accessed as a query parameter using @Query('keyword'). To resolve this inconsistency, access keyword using @Param('keyword') and update the Swagger decorators accordingly.

 @Get('essays/search/:keyword')
-@ApiQuery({ name: 'keyword', required: true })
+@ApiParam({ name: 'keyword', required: true })
 @ApiQuery({ name: 'page', required: false })
 @ApiQuery({ name: 'limit', required: false })
 @ApiResponse({
   status: 200,
   type: SummaryEssaysResDto,
 })
 async searchEssays(
-  @Query('keyword') keyword: string,
+  @Param('keyword') keyword: string,
   @Query('page', new PagingParseIntPipe(1)) page: number,
   @Query('limit', new PagingParseIntPipe(10)) limit: number,
 ) {
   return this.adminService.searchEssays(keyword, page, limit);
 }

Likely invalid or redundant comment.
@ApiProperty()
@IsNumber()
@Expose()
page: number;

@ApiProperty()
@IsNumber()
@Expose()
total: number;

@ApiProperty()
@IsNumber()
@Expose()
totalPage: number;
🛠️ Refactor suggestion
Consider adding min/max validation constraints
While the number validation is good, consider adding more specific constraints:
- page should be >= 1
- total should be >= 0
- totalPage should be >= 0
Here's how you can enhance the validation:
@ApiProperty()
- @IsNumber()
+ @IsNumber()
+ @Min(1)
@Expose()
page: number;
@ApiProperty()
- @IsNumber()
+ @IsNumber()
+ @Min(0)
@Expose()
total: number;
@ApiProperty()
- @IsNumber()
+ @IsNumber()
+ @Min(0)
@Expose()
totalPage: number;
Don't forget to add the import:
import { IsNumber, Min } from 'class-validator';
await queryRunner.query(`
  DO $$
  BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_indexes WHERE indexname = 'IDX_USER_SEARCH_VECTOR') THEN
      CREATE INDEX "IDX_USER_SEARCH_VECTOR" ON "user" USING gin (search_vector);
    END IF;
  END
  $$;
`);
🛠️ Refactor suggestion
Simplify index creation using CREATE INDEX IF NOT EXISTS.
The current DO block implementation is unnecessarily complex. PostgreSQL supports CREATE INDEX IF NOT EXISTS directly.
- await queryRunner.query(`
- DO $$
- BEGIN
- IF NOT EXISTS (SELECT 1 FROM pg_indexes WHERE indexname = 'IDX_USER_SEARCH_VECTOR') THEN
- CREATE INDEX "IDX_USER_SEARCH_VECTOR" ON "user" USING gin (search_vector);
- END IF;
- END
- $$;
- `);
+ await queryRunner.query(
+ `CREATE INDEX IF NOT EXISTS "IDX_USER_SEARCH_VECTOR" ON "user" USING gin (search_vector)`,
+ );
// Add search_vector column for full-text search on email
await queryRunner.query(
  `ALTER TABLE "user" ADD COLUMN IF NOT EXISTS "search_vector" TSVECTOR GENERATED ALWAYS AS (to_tsvector('simple', coalesce(email, ''))) STORED`,
);
🛠️ Refactor suggestion
Consider using a more robust text search configuration.
The current implementation uses the 'simple' text search configuration, which:
- Doesn't handle language-specific features
- Ignores stop words
- Doesn't support stemming
Consider using 'english' or another language-specific configuration if the emails contain natural language content.
- `ALTER TABLE "user" ADD COLUMN IF NOT EXISTS "search_vector" TSVECTOR GENERATED ALWAYS AS (to_tsvector('simple', coalesce(email, ''))) STORED`,
+ `ALTER TABLE "user" ADD COLUMN IF NOT EXISTS "search_vector" TSVECTOR GENERATED ALWAYS AS (to_tsvector('english', coalesce(email, ''))) STORED`,
public async down(queryRunner: QueryRunner): Promise<void> {
  await queryRunner.query(`DROP INDEX "IDX_USER_SEARCH_VECTOR"`);
  await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "search_vector"`);
}
Enhance down method robustness.
The current down method implementation could fail if objects don't exist and lacks transaction control.
public async down(queryRunner: QueryRunner): Promise<void> {
- await queryRunner.query(`DROP INDEX "IDX_USER_SEARCH_VECTOR"`);
- await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "search_vector"`);
+ await queryRunner.startTransaction();
+ try {
+ await queryRunner.query(`DROP INDEX IF EXISTS "IDX_USER_SEARCH_VECTOR"`);
+ await queryRunner.query(`ALTER TABLE "user" DROP COLUMN IF EXISTS "search_vector"`);
+ await queryRunner.commitTransaction();
+ } catch (err) {
+ await queryRunner.rollbackTransaction();
+ throw err;
+ }
}
public async up(queryRunner: QueryRunner): Promise<void> {
  await queryRunner.query(`CREATE EXTENSION IF NOT EXISTS unaccent`);

  // Add search_vector column for full-text search on email
  await queryRunner.query(
    `ALTER TABLE "user" ADD COLUMN IF NOT EXISTS "search_vector" TSVECTOR GENERATED ALWAYS AS (to_tsvector('simple', coalesce(email, ''))) STORED`,
  );

  // Check and create index if not exists
  await queryRunner.query(`
    DO $$
    BEGIN
      IF NOT EXISTS (SELECT 1 FROM pg_indexes WHERE indexname = 'IDX_USER_SEARCH_VECTOR') THEN
        CREATE INDEX "IDX_USER_SEARCH_VECTOR" ON "user" USING gin (search_vector);
      END IF;
    END
    $$;
  `);
}
🛠️ Refactor suggestion
Add transaction control to ensure atomic migration.
The migration should be wrapped in a transaction to ensure all changes are applied atomically.
public async up(queryRunner: QueryRunner): Promise<void> {
+ await queryRunner.startTransaction();
+ try {
// ... existing migration code ...
+ await queryRunner.commitTransaction();
+ } catch (err) {
+ await queryRunner.rollbackTransaction();
+ throw err;
+ }
}
Committable suggestion was skipped due to low confidence.
async getAdmins(page: number, limit: number, activated?: boolean) {
  const { admins, total } = await this.adminRepository.findAdmins(activated, page, limit);

  const totalPage: number = Math.ceil(total / limit);

  const adminsDto = this.utilsService.transformToDto(AdminResDto, admins);

  return { admins: adminsDto };
  return { admins: adminsDto, total, page, totalPage };
Add input validation for pagination parameters in getAdmins.
Similar to the search methods, pagination parameters should be validated.
Add validation:
async getAdmins(page: number, limit: number, activated?: boolean) {
+ if (page < 1 || limit < 1 || limit > 100) {
+ throw new HttpException('Invalid pagination parameters', HttpStatus.BAD_REQUEST);
+ }
const { admins, total } = await this.adminRepository.findAdmins(activated, page, limit);
// ...
}
async searchUsers(keyword: string, page: number, limit: number) {
  const { users, total } = await this.userRepository.searchUsers(keyword, page, limit);

  return this.utilsService.transformToDto(UserDetailResDto, user);
  const totalPage: number = Math.ceil(total / limit);

  const usersDto = this.utilsService.transformToDto(UserResDto, users);
  return { usersDto, total, page, totalPage };
}

async searchEssays(keyword: string, page: number, limit: number) {
  return await this.essayService.searchEssays(keyword, page, limit);
Add input validation for pagination parameters.
The pagination parameters (page and limit) should be validated to prevent potential DoS attacks through large page sizes or negative values.
Add validation at the start of both methods:
async searchUsers(keyword: string, page: number, limit: number) {
+ if (page < 1 || limit < 1 || limit > 100) {
+ throw new HttpException('Invalid pagination parameters', HttpStatus.BAD_REQUEST);
+ }
const { users, total } = await this.userRepository.searchUsers(keyword, page, limit);
// ...
}
async searchEssays(keyword: string, page: number, limit: number) {
+ if (page < 1 || limit < 1 || limit > 100) {
+ throw new HttpException('Invalid pagination parameters', HttpStatus.BAD_REQUEST);
+ }
return await this.essayService.searchEssays(keyword, page, limit);
}
await queryRunner.query(`ALTER TABLE "user" ADD COLUMN "unaccented_email" TEXT`);

// Populate unaccented_email initially
await queryRunner.query(`UPDATE "user" SET "unaccented_email" = unaccent(email)`);
🛠️ Refactor suggestion
Simplify by using a generated column instead of triggers.
Consider using a generated column for unaccented_email instead of manually maintaining it with triggers. This approach reduces complexity and improves maintainability.
Apply this diff to modify the migration:
-await queryRunner.query(`ALTER TABLE "user" ADD COLUMN "unaccented_email" TEXT`);
-
-// Populate unaccented_email initially
-await queryRunner.query(`UPDATE "user" SET "unaccented_email" = unaccent(email)`);
-
-// Create trigger function for unaccented_email updates
-await queryRunner.query(`
- CREATE OR REPLACE FUNCTION update_unaccented_email()
- RETURNS TRIGGER AS $$
- BEGIN
- NEW.unaccented_email = unaccent(NEW.email);
- RETURN NEW;
- END;
- $$ LANGUAGE plpgsql;
-`);
-
-// Apply trigger to update unaccented_email on email change
-await queryRunner.query(`
- CREATE TRIGGER trigger_update_unaccented_email
- BEFORE INSERT OR UPDATE OF email ON "user"
- FOR EACH ROW EXECUTE FUNCTION update_unaccented_email();
-`);
+await queryRunner.query(`
+ ALTER TABLE "user" ADD COLUMN "unaccented_email" TEXT GENERATED ALWAYS AS (unaccent(email)) STORED
+`);
Don't forget to adjust the down method to drop the generated column:
-await queryRunner.query(`DROP TRIGGER trigger_update_unaccented_email ON "user"`);
-await queryRunner.query(`DROP FUNCTION update_unaccented_email`);
await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "unaccented_email"`);
async searchUsers(
  @Param('keyword') email: string,
  @Query('page', new PagingParseIntPipe(1)) page: number,
  @Query('limit', new PagingParseIntPipe(10)) limit: number,
) {
🛠️ Refactor suggestion
Rename the parameter for clarity and consistency
In the searchUsers method, the route parameter is :keyword, but it is assigned to the variable email. For clarity, consider renaming the variable to keyword to match the route parameter. This will improve code readability and maintain consistency.
async searchUsers(
- @Param('keyword') email: string,
+ @Param('keyword') keyword: string,
@Query('page', new PagingParseIntPipe(1)) page: number,
@Query('limit', new PagingParseIntPipe(10)) limit: number,
) {
- return this.adminService.searchUsers(email, page, limit);
+ return this.adminService.searchUsers(keyword, page, limit);
}
@@ -67,6 +67,7 @@
import { CreateItemReqDto } from './dto/request/createItemReq.dto';
import { ItemsResDto } from '../home/dto/response/itemsRes.dto';
import { ThemesResDto } from '../home/dto/response/themesRes.dto';
import { SummaryEssaysResDto } from '../essay/dto/response/SummaryEssaysRes.dto';
Correct the import path to match the file name
The import statement at line 70 may have an incorrect file name in the import path. Ensure that the file name matches the actual file in the directory. If the file is named SummaryEssaysResDto.ts, the import should be adjusted accordingly.
-import { SummaryEssaysResDto } from '../essay/dto/response/SummaryEssaysRes.dto';
+import { SummaryEssaysResDto } from '../essay/dto/response/SummaryEssaysResDto';
Committable suggestion was skipped due to low confidence.
# [1.19.0](v1.18.0...v1.19.0) (2024-10-27)

### Features

* Improve and add admin features (관리자 기능 개선 및 추가) ([#459](#459)) ([805af60](805af60))
🎉 This PR is included in version 1.19.0 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
PR Type
Summary
Pagination and keyword search are added to the admin, user, and essay search APIs so data can be retrieved and searched efficiently. The admin search API returns results based on activation status and pagination, while users and essays can be searched by keyword to find similar items.
Describe your changes
Main changes

- Added user email keyword search:
  - Uses the similarity and ts_rank_cd functions to rank results with trigram and full-text search, and adds pagination (a rough query sketch follows this list).
  - GET /admin-auth/users/search/:keyword: new endpoint that searches for users whose emails are similar to the keyword.
- Added essay keyword search:
  - Applies trigram and full-text search for better performance and returns results page by page via pagination.
  - GET /admin-auth/essays/search/:keyword: new endpoint that searches essays containing the keyword.
- Added pagination to the admin search API:
  - The getAdmins API now accepts page and limit query parameters and returns results based on activation status and pagination.
  - AdminService and AdminRepository were refactored to also return total, page, and totalPage for pagination.
  - GET /admin-info/admins: returns the admin list with pagination and an activation-status filter.
- Added an endpoint for admins to view their own information:
  - GET /admin-info/my: returns the currently signed-in admin's information.
- Updated DTOs and response structures:
  - AdminsResDto now includes pagination metadata (page, total, totalPage) so the frontend can implement pagination easily.
  - UserResDto and EssayResDto also include the required pagination information in their responses.
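For orientation only, a rough sketch of how such a ranked email search with pagination metadata could be expressed is shown below; the column names unaccented_email and search_vector come from this PR, but the exact query, scoring, and helper function are illustrative assumptions rather than the actual implementation:

```typescript
import { DataSource } from 'typeorm';

// Ranks users by the better of trigram similarity (unaccented_email) and
// full-text rank (search_vector), and returns pagination metadata.
export async function searchUsersByEmailSketch(
  dataSource: DataSource,
  keyword: string,
  page: number,
  limit: number,
) {
  const offset = (page - 1) * limit;
  const where = `u.unaccented_email % unaccent($1)
                 OR u.search_vector @@ plainto_tsquery('simple', $1)`;

  const users = await dataSource.query(
    `SELECT u.*,
            GREATEST(
              similarity(u.unaccented_email, unaccent($1)),
              ts_rank_cd(u.search_vector, plainto_tsquery('simple', $1))
            ) AS score
       FROM "user" u
      WHERE ${where}
      ORDER BY score DESC
      LIMIT $2 OFFSET $3`,
    [keyword, limit, offset],
  );

  const [{ count }] = await dataSource.query(
    `SELECT COUNT(*)::int AS count FROM "user" u WHERE ${where}`,
    [keyword],
  );

  return { users, total: count, page, totalPage: Math.ceil(count / limit) };
}
```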
Issue Number or Link

#407