feat(internal-script-development): [nan-1629] add internal deploy logic #2655
Conversation
Walkthrough

The changes introduce a new GitHub Actions workflow for automating the deployment of integration templates and enhance the CLI and server components for internal deployment operations. New commands, methods, and API endpoints have been added to facilitate the management and deployment of Nango integrations across various environments. Additionally, type definitions have been improved for better clarity and maintainability.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant DeployService
    participant API
    participant ConfigService
    participant ConnectionService
    User->>CLI: Run admin:deploy-internal
    CLI->>DeployService: Call internalDeploy
    DeployService->>ConfigService: Copy provider config credentials
    DeployService->>ConnectionService: Get connections by environment
    DeployService->>API: Trigger postDeployInternal
    API-->>User: Return deployment result
```
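The sequence above can also be sketched as plain code. The service objects below are illustrative stubs with invented signatures, not Nango's actual APIs; they only show the order of calls the diagram describes:

```typescript
// Invented stand-ins for the services in the sequence diagram; the real
// implementations live in packages/cli and packages/shared.
const steps: string[] = [];

const configService = {
    copyProviderConfigCreds(from: string, to: string): void {
        steps.push(`copy-creds:${from}->${to}`);
    }
};
const connectionService = {
    getConnectionsByEnvironment(env: string): string[] {
        steps.push(`get-connections:${env}`);
        return ['conn-1'];
    }
};
const api = {
    postDeployInternal(env: string): { deployed: boolean } {
        steps.push(`deploy:${env}`);
        return { deployed: true };
    }
};

// internalDeploy orchestrates the three calls in diagram order.
function internalDeploy(targetEnv: string): { deployed: boolean } {
    configService.copyProviderConfigCreds('dev', targetEnv);
    connectionService.getConnectionsByEnvironment('dev');
    return api.postDeployInternal(targetEnv);
}

const result = internalDeploy('template-salesforce');
```

The target environment name (`template-salesforce`) mirrors the `template-$integration` convention used by the workflow later in this PR.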
Actionable comments posted: 6
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (9)
- .github/workflows/deploy-scripts-internally.yaml (1 hunks)
- packages/cli/lib/index.ts (1 hunks)
- packages/cli/lib/services/deploy.service.ts (3 hunks)
- packages/cli/lib/types.ts (1 hunks)
- packages/server/lib/controllers/sync/deploy/postDeployInternal.ts (1 hunks)
- packages/server/lib/routes.ts (2 hunks)
- packages/shared/lib/services/config.service.ts (1 hunks)
- packages/shared/lib/services/connection.service.ts (1 hunks)
- packages/types/lib/deploy/api.ts (2 hunks)
Additional context used
actionlint
.github/workflows/deploy-scripts-internally.yaml
25-25: shellcheck reported issue in this script: SC2086:info:3:26: Double quote to prevent globbing and word splitting
(shellcheck)
25-25: shellcheck reported issue in this script: SC2086:info:5:47: Double quote to prevent globbing and word splitting
(shellcheck)
25-25: shellcheck reported issue in this script: SC2086:info:7:6: Double quote to prevent globbing and word splitting
(shellcheck)
25-25: shellcheck reported issue in this script: SC2086:info:8:9: Double quote to prevent globbing and word splitting
(shellcheck)
25-25: shellcheck reported issue in this script: SC2086:info:8:51: Double quote to prevent globbing and word splitting
(shellcheck)
25-25: shellcheck reported issue in this script: SC2086:info:12:40: Double quote to prevent globbing and word splitting
(shellcheck)
Biome
packages/shared/lib/services/config.service.ts
[error] 294-294: Avoid the delete operator which can impact performance.
Unsafe fix: Use an undefined assignment instead.
(lint/performance/noDelete)
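Biome's `noDelete` suggestion can be illustrated with a small self-contained example (the config shape below is invented to mirror the code under review, not Nango's actual types): omitting a key via rest destructuring produces an object that never had the key, whereas `delete` mutates an existing object and can deoptimize its hidden class in V8.

```typescript
// Hypothetical config shape, for illustration only.
interface ProviderConfig {
    id?: number;
    unique_key: string;
    environment_id: number;
}

const fromConfig: ProviderConfig = { id: 42, unique_key: 'salesforce', environment_id: 1 };

// Instead of cloning and calling `delete copy.id`, pull `id` out with
// rest destructuring; `configWithoutId` simply never has the key.
const { id, ...configWithoutId } = fromConfig;
```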
Additional comments not posted (7)

packages/cli/lib/types.ts (2)

9-9: Type alias `ENV` usage in `DeployOptions` is approved. Centralizing environment types into the `ENV` type alias enhances maintainability and clarity.

18-20: New interface `InternalDeployOptions` is approved. The introduction of `InternalDeployOptions` using the `ENV` type for its `env` property is a good practice, promoting separation of concerns between internal and external deployment configurations.

packages/server/lib/controllers/sync/deploy/postDeployInternal.ts (2)
1-10: Initial setup looks good. The imports and initial setup are appropriate for the functionality described in the PR.

12-28: Validation setup is comprehensive. The use of Zod for schema validation is a good practice, and the defined schemas are comprehensive for the expected inputs.
packages/cli/lib/index.ts (1)

296-305: New CLI command `admin:deploy-internal` is well-configured. The command is correctly marked as hidden and includes the necessary options for specifying the environment and remote environment. This setup is appropriate for internal use and aligns with the PR's objectives.
packages/server/lib/routes.ts (1)

168-168: New internal deployment route added. The addition of the route `/sync/deploy/internal` is correctly implemented, with the `apiAuth` middleware ensuring basic security. However, given the sensitivity of deployment operations, it's crucial to verify that `apiAuth` provides adequate security measures for this context. Would you like me to review the `postDeployInternal` function to ensure it handles the deployment logic correctly?

Verification successful

Security measures in the `apiAuth` middleware verified. The `apiAuth` middleware includes both `authMiddleware.secretKeyAuth` for authentication and `rateLimiterMiddleware` for rate limiting. These components provide adequate security measures for the `/sync/deploy/internal` route, ensuring that only authorized requests are processed and preventing abuse through excessive requests.

- `authMiddleware.secretKeyAuth`: provides key-based authentication.
- `rateLimiterMiddleware`: applies rate limiting to control request frequency.

Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify the security level of `apiAuth` middleware.
# Test: Search for the `apiAuth` middleware implementation.
# Expect: security features like rate limiting, key validation, etc.
rg --type python -A 5 'apiAuth'
```

Length of output: 62

Script:

```shell
#!/bin/bash
# Description: Verify the security level of `apiAuth` middleware.
# Test: Search for the `apiAuth` middleware implementation in TypeScript files.
# Expect: security features like rate limiting, key validation, etc.
rg --type ts -A 10 'apiAuth'
```

Length of output: 16910
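For readers unfamiliar with the chain being verified, the auth-then-rate-limit pipeline can be sketched as a minimal middleware runner. Everything below — names, the secret, the limit — is invented for illustration and is not Nango's implementation:

```typescript
// Minimal Express-style middleware chain: key auth first, then rate limiting.
type Req = { headers: Record<string, string> };
type Res = { status: number | null };
type Next = () => void;
type Middleware = (req: Req, res: Res, next: Next) => void;

const SECRET = 'test-secret'; // stand-in for the account's secret key
let requestCount = 0;
const RATE_LIMIT = 2; // toy limit so the 429 path is visible

const secretKeyAuth: Middleware = (req, res, next) => {
    if (req.headers['authorization'] === `Bearer ${SECRET}`) next();
    else res.status = 401; // unauthenticated requests never reach the limiter
};

const rateLimiter: Middleware = (_req, res, next) => {
    requestCount += 1;
    if (requestCount <= RATE_LIMIT) next();
    else res.status = 429;
};

// Runs the chain; reaching the end stands in for the route handler (200).
function run(chain: Middleware[], req: Req): Res {
    const res: Res = { status: null };
    let i = 0;
    const next: Next = () => {
        const mw = chain[i++];
        if (mw) mw(req, res, next);
        else res.status = 200;
    };
    next();
    return res;
}

const apiAuth: Middleware[] = [secretKeyAuth, rateLimiter];
```

Ordering matters here: because auth runs first, rejected requests do not consume rate-limit budget.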
packages/cli/lib/services/deploy.service.ts (1)

295-356: New method `internalDeploy` added for custom environment deployments. The `internalDeploy` method is well-structured and covers all necessary steps for deploying to a custom environment, including file verification, environment setup, compilation, and deployment. The use of `PostDeployInternal['Body']` for the deployment payload is noted, and it's important to ensure that this type definition is correctly updated to reflect the new deployment logic. The method is approved for merging.

Verification successful

Type definition for `PostDeployInternal` verified. The `PostDeployInternal` type definition includes the necessary fields used in the `internalDeploy` method, such as `reconcile` and `nangoYamlBody`. Ensure that any additional fields like `debug` and `singleDeployMode` are correctly handled in the payload structure. The method is approved for merging.

Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify the type definition changes for `PostDeployInternal`.
# Test: Search for the `PostDeployInternal` type definition.
# Expect: fields that match the deployment logic.
rg --type typescript -A 5 'PostDeployInternal'
```

Length of output: 81

Script:

```shell
#!/bin/bash
# Description: Verify the type definition changes for `PostDeployInternal`.
# Retry with `--type ts`, ripgrep's alias for TypeScript files.
rg --type ts -A 10 'PostDeployInternal'
```

Length of output: 6142
```yaml
run: |
  dirs=$(find integration-templates -maxdepth 1 -type d | tail -n +2)
  for dir in $dirs; do
    integration=$(basename $dir)

    mkdir -p /tmp/nango-temp/nango-integrations/$integration

    cp $dir/nango.yaml /tmp/nango-temp/nango-integrations/
    cp -r $dir/* /tmp/nango-temp/nango-integrations/$integration/

    cd /tmp/nango-temp/nango-integrations
    npm install zod soap botbuilder
    nango admin:deploy-internal template-$integration

    cd -
    rm -rf /tmp/nango-temp
  done
env:
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Address shellcheck issues to prevent potential bugs.
The workflow is well-structured, but there are several shellcheck issues related to unquoted variables that could lead to globbing and word splitting. It's crucial to quote these variables to ensure the script behaves as expected.
Apply a diff along these lines to address the shellcheck issues. Note that simply quoting `$dirs` inside the `for` (as a naive fix would) collapses the whole list into a single word, so the loop is switched to a `while read` fed directly by `find`:

```diff
-dirs=$(find integration-templates -maxdepth 1 -type d | tail -n +2)
-for dir in $dirs; do
-  integration=$(basename $dir)
+find integration-templates -mindepth 1 -maxdepth 1 -type d |
+while IFS= read -r dir; do
+  integration=$(basename "$dir")
-  cp $dir/nango.yaml /tmp/nango-temp/nango-integrations/
-  cp -r $dir/* /tmp/nango-temp/nango-integrations/$integration/
+  cp "$dir"/nango.yaml /tmp/nango-temp/nango-integrations/
+  cp -r "$dir"/* /tmp/nango-temp/nango-integrations/"$integration"/
```
Committable suggestion was skipped due to low confidence.
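The quoting fix can be exercised end to end with a runnable sketch. The directory names below are made up for illustration; the pattern — quote every expansion and let `find` feed the loop instead of word-splitting a `$dirs` string — is what resolves SC2086:

```shell
# SC2086-safe loop over template directories (invented names).
set -eu
tmp=$(mktemp -d)
mkdir -p "$tmp/integration-templates/salesforce" "$tmp/integration-templates/hubspot"

find "$tmp/integration-templates" -mindepth 1 -maxdepth 1 -type d |
sort |
while IFS= read -r dir; do
    # read -r keeps backslashes literal; names containing newlines would
    # need `find -print0` instead.
    integration=$(basename "$dir")
    printf '%s\n' "$integration" >> "$tmp/deployed.txt"
done

deployed=$(paste -sd, "$tmp/deployed.txt")
rm -rf "$tmp"
```

`-mindepth 1` replaces the original `tail -n +2` trick for skipping the top-level directory itself.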
```diff
@@ -1,6 +1,6 @@
 import type { JSONSchema7 } from 'json-schema';

-import type { Endpoint } from '../api.js';
+import type { Endpoint, ApiError } from '../api.js';
```
Approve import changes and new type `PostDeployInternal`.

The addition of `ApiError` to the imports enhances error handling capabilities. The new type `PostDeployInternal` is well-defined, providing a clear structure for internal deployment requests. However, the `Success` type is marked with a TODO to move `SyncDeploymentResult` here, which should be addressed soon for better documentation and clarity.

Would you like me to help document the `Success` type or open a GitHub issue to track this task?

Also applies to: 35-52
```typescript
    return;
}

const queryParamValues = queryStringValidation.safeParse(req.query);

if (!queryParamValues.success) {
    res.status(400).send({
        error: { code: 'invalid_query_params', errors: zodErrorToHTTP(queryParamValues.error) }
    });
    return;
}

const body: PostDeployInternal['Body'] = val.data;

const { account } = res.locals;

if (account.uuid !== process.env['NANGO_SHARED_DEV_ACCOUNT_UUID']) {
    res.status(403).send({ error: { code: 'forbidden', message: 'This endpoint is only available for Nango internal use' } });
    return;
}

const environmentName = queryParamValues.data.customEnvironment;

let environment = await environmentService.getByEnvironmentName(account.id, environmentName);

if (!environment) {
    environment = await environmentService.createEnvironment(account.id, environmentName);

    if (!environment) {
        res.status(500).send({
            error: { code: 'environment_creation_error', message: 'There was an error creating the environment, please try again' }
        });
        return;
    }

    // since we're making a new environment, we want to make sure the config creds and
    // connections are copied from the dev environment
    // so that the syncs and actions can be run right away
    const allProviderConfigKeys = body.flowConfigs.map((flow) => flow.providerConfigKey);
    const uniqueProviderConfigKeys = [...new Set(allProviderConfigKeys)];

    for (const providerConfigKey of uniqueProviderConfigKeys) {
        const devEnvironment = await environmentService.getByEnvironmentName(account.id, 'dev');
        if (devEnvironment) {
            const copiedResponse = await configService.copyProviderConfigCreds(devEnvironment.id, environment.id, providerConfigKey);

            if (copiedResponse) {
                const { copiedFromId, copiedToId } = copiedResponse;
                const connections = await connectionService.getConnectionsByEnvironmentAndConfigId(devEnvironment.id, copiedFromId);
                if (connections.length > 0) {
                    await connectionService.copyConnections(connections, environment.id, copiedToId);
                }
            }
        }
    }
}

const {
    success,
    error,
    response: syncConfigDeployResult
} = await deploy({
    environment,
    account,
    flows: body.flowConfigs,
    nangoYamlBody: body.nangoYamlBody,
    postConnectionScriptsByProvider: body.postConnectionScriptsByProvider,
    debug: body.debug,
    jsonSchema: req.body.jsonSchema,
    logContextGetter,
    orchestrator
});

if (!success || !syncConfigDeployResult) {
    errorManager.errResFromNangoErr(res, error);
    return;
}

if (body.reconcile) {
    const logCtx = syncConfigDeployResult.logCtx;
    const success = await getAndReconcileDifferences({
        environmentId: environment.id,
        flows: body.flowConfigs,
        performAction: body.reconcile,
        debug: body.debug,
        singleDeployMode: body.singleDeployMode,
        logCtx,
        logContextGetter,
        orchestrator
    });
    if (!success) {
        res.status(500).send({
            error: { code: 'server_error', message: 'There was an error deploying syncs, please check the activity tab and report this issue to support' }
        });
        return;
    }
}

res.send(syncConfigDeployResult.result);
});
```
Endpoint logic is well-structured but consider enhancing error handling.

The main logic of the endpoint is well-structured and performs the necessary operations effectively. However, consider adding more specific error messages and handling potential exceptions more gracefully to improve maintainability and debuggability.

Additionally, the restriction based on the account UUID (`NANGO_SHARED_DEV_ACCOUNT_UUID`) is a critical security measure that should be highlighted in the documentation or code comments for clarity.
```typescript
public async copyConnections(connections: StoredConnection[], environment_id: number, config_id: number) {
    const newConnections = connections.map((connection) => {
        return {
            ...connection,
            id: undefined,
            environment_id,
            config_id
        };
    });

    await db.knex.batchInsert('_nango_connections', newConnections);
}
```
Review: `copyConnections` method implementation.

The method efficiently handles the copying of connections to a new environment and config ID. Consider adding error handling for the batch insertion operation to manage potential failures.

The code changes are approved.

Add error handling for batch insertion:

```diff
+try {
     await db.knex.batchInsert('_nango_connections', newConnections);
+} catch (error) {
+    logger.error(`Failed to copy connections: ${error}`);
+    throw new NangoError('batch_insert_failed', { detail: error.message });
+}
```
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
public async copyConnections(connections: StoredConnection[], environment_id: number, config_id: number) {
    const newConnections = connections.map((connection) => {
        return {
            ...connection,
            id: undefined,
            environment_id,
            config_id
        };
    });

    try {
        await db.knex.batchInsert('_nango_connections', newConnections);
    } catch (error) {
        logger.error(`Failed to copy connections: ${error}`);
        throw new NangoError('batch_insert_failed', { detail: error.message });
    }
}
```
```typescript
public async getConnectionsByEnvironmentAndConfigId(environment_id: number, config_id: number): Promise<StoredConnection[]> {
    const result = await db.knex.from<StoredConnection>(`_nango_connections`).select('*').where({ environment_id, config_id, deleted: false });

    if (!result || result.length == 0 || !result[0]) {
        return [];
    }

    return result;
}
```
Review: `getConnectionsByEnvironmentAndConfigId` method implementation.

The method is implemented correctly to fetch connections based on `environment_id` and `config_id`. However, consider adding error handling to manage potential database query failures gracefully.

The code changes are approved.
Add error handling for database operations:

```diff
+try {
     const result = await db.knex.from<StoredConnection>(`_nango_connections`).select('*').where({ environment_id, config_id, deleted: false });

     if (!result || result.length == 0 || !result[0]) {
         return [];
     }

     return result;
+} catch (error) {
+    logger.error(`Failed to fetch connections: ${error}`);
+    throw new NangoError('database_query_failed', { detail: error.message });
+}
```
```typescript
public async getConnectionsByEnvironmentAndConfigId(environment_id: number, config_id: number): Promise<StoredConnection[]> {
    try {
        const result = await db.knex.from<StoredConnection>(`_nango_connections`).select('*').where({ environment_id, config_id, deleted: false });

        if (!result || result.length == 0 || !result[0]) {
            return [];
        }

        return result;
    } catch (error) {
        logger.error(`Failed to fetch connections: ${error}`);
        throw new NangoError('database_query_failed', { detail: error.message });
    }
}
```
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- packages/shared/lib/services/config.service.ts (1 hunks)
Additional comments not posted (1)
packages/shared/lib/services/config.service.ts (1)

281-304: Review of the `copyProviderConfigCreds` method

The newly introduced `copyProviderConfigCreds` method in the `ConfigService` class is designed to copy provider configuration credentials from one environment to another. Here are some observations and suggestions:

Correctness and error handling:
- The method correctly checks that the configuration exists and has a valid ID before proceeding with the copying process. This is good practice to avoid null reference errors.
- The method returns `null` if the configuration cannot be found or if the new configuration cannot be created, which is a clear and consistent error handling strategy.

Performance considerations:
- The method uses object destructuring to avoid using the `delete` operator, which is a performance improvement over the previous implementation. This change addresses the performance issue raised in the previous review.

Security and data integrity:
- The method ensures that the `unique_key` is preserved when copying the configuration, which is crucial for maintaining data integrity across environments.
- However, there is no explicit transaction handling. Consider wrapping the database operations in a transaction to ensure that either all operations succeed or none do, which would enhance the robustness of the method.

Maintainability:
- The method is well-structured and uses modern JavaScript features like async/await and destructuring, making it easy to read and maintain.
- The use of descriptive variable names (`fromConfig`, `configWithoutId`, `foundConfigId`) enhances readability.

Suggestions for improvement:
- Add transaction handling: wrapping the database operations in a transaction would make them atomic and the method more robust.
- Logging and observability: since the method performs the critical operation of copying configurations, adding logging at key steps would improve observability and make debugging easier.

Overall, the method is well-implemented with attention to detail in error handling and performance. With the addition of transaction handling and logging, it would be even more robust and maintainable.

The code changes are approved with suggestions for further enhancements.
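The transaction suggestion can be sketched independently of the database with an in-memory stage-then-commit wrapper (types and names below are invented). Knex's `transaction(async (trx) => { ... })` callback provides the same all-or-nothing guarantee against the real database:

```typescript
// Toy store standing in for the configs and connections tables.
type Store = { configs: string[]; connections: string[] };

// Stage all writes on a copy; commit only if the callback completes.
// If fn throws, the original store is left untouched (the "rollback").
function withTransaction(store: Store, fn: (s: Store) => void): void {
    const staged: Store = { configs: [...store.configs], connections: [...store.connections] };
    fn(staged);
    store.configs = staged.configs;
    store.connections = staged.connections;
}

const store: Store = { configs: [], connections: [] };

// Happy path: both writes land together.
withTransaction(store, (s) => {
    s.configs.push('salesforce');
    s.connections.push('conn-1');
});
```

This is only a model of the semantics; the real fix would pass a `trx` handle into the config-copy and connection-copy queries so the database enforces atomicity.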
Looks okay.

I feel like it would be so much better to have a way to create environments programmatically (via the API), but unfortunately that's not possible with the API key being tied to the env :/

Most of the new code is super specific to us and this env; a bit annoying to expose and maintain that.
```shell
integration=$(basename $dir)

mkdir -p /tmp/nango-temp/nango-integrations/$integration

cp $dir/nango.yaml /tmp/nango-temp/nango-integrations/
cp -r $dir/* /tmp/nango-temp/nango-integrations/$integration/

cd /tmp/nango-temp/nango-integrations
```
This is the Nth time we have done that, just because we enforce `nango-integrations` to be the folder name, without too many reasons. Maybe at some point we should just check whether there is a nango.yaml and call that good.
```typescript
const { debug, nangoRemoteEnvironment } = this.opts();
const fullPath = process.cwd();
await deployService.internalDeploy({ fullPath, environment, debug, options: { env: nangoRemoteEnvironment || 'prod' } });
});
```
Feels bad to expose internal stuff in a public api :/
```typescript
// since we're making a new environment, we want to make sure the config creds and
// connections are copied from the dev environment
// so that the syncs and actions can be run right away
```
It feels unnecessary, we have about 60 connections in total that could have been moved manually
…9-github-action-that-attempts-to-deploy-all-public-templates
Actionable comments posted: 0

Outside diff range, codebase verification and nitpick comments (1)

packages/shared/lib/services/config.service.ts (1)

281-304: Consider handling the scenario where the target environment already has a configuration with the same `unique_key`.

The method currently creates a new configuration with the same `unique_key` in the target environment, even if a configuration with that `unique_key` already exists. This may not be the desired behavior in all cases.

Consider adding a check for an existing configuration with the same `unique_key` in the target environment. If one exists, you can either:
- Update the existing configuration in the target environment with the copied configuration data.
- Generate a new unique `unique_key` for the copied configuration to avoid conflicts.

Here's an example of how you can check for an existing configuration and update it:

```diff
+const existingConfig = await this.getProviderConfig(fromConfig.unique_key, toEnvironmentId);
+if (existingConfig) {
+    await this.editProviderConfig({
+        ...configWithoutId,
+        id: existingConfig.id,
+        environment_id: toEnvironmentId,
+        unique_key: fromConfig.unique_key
+    });
+    return { copiedToId: existingConfig.id, copiedFromId: foundConfigId };
+}
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- packages/shared/lib/services/config.service.ts (1 hunks)
Additional comments not posted (1)
packages/shared/lib/services/config.service.ts (1)

281-304: LGTM!

The `copyProviderConfigCreds` method is well-implemented for its purpose. It handles error scenarios appropriately and uses object destructuring to omit the `id` property when creating a new configuration.
Describe your changes

Adds an internal-only command `admin:deploy-internal ${custom-environment}` that, when the allow-listed account uuid is provided, deploys syncs and actions to a custom environment. The environment is created dynamically, with the integrations and connections copied from the `dev` environment if they exist.

Issue ticket number and link

NAN-1629

Checklist before requesting a review (skip if just adding/editing APIs & templates)
Summary by CodeRabbit

New Features
- `ConfigService`: new method to copy provider configuration credentials between environments.
- `ConnectionService`: new methods for retrieving and duplicating connections based on environment and configuration IDs.

Bug Fixes

Documentation