
feat(internal-script-development): [nan-1629] add internal deploy logic #2655

Conversation


@khaliqgant khaliqgant commented Aug 30, 2024

Describe your changes

Adds an internal-only command, admin:deploy-internal ${custom-environment}, which, when run by the allow-listed account UUID, deploys syncs and actions to a custom environment. That environment is created dynamically, with the integration and connections copied over from the dev environment if they exist. A rough sketch of the CLI wiring follows the list below.

  • Adds a GitHub Action that iterates over all templates and attempts to deploy each one to a custom environment in the shared dev account
  • Adds an internal admin endpoint that creates the custom environment, copies credentials and connections from dev if they exist, and then deploys
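
For orientation, here is a minimal, hypothetical sketch of how such a hidden Commander command could be wired up in the CLI. Only the deployService.internalDeploy call and its argument shape are taken from the diff; the option names, defaults, and the stand-in service object are assumptions.

import { Command } from 'commander';

// Stand-in for the real deployService exported from packages/cli/lib/services/deploy.service.ts.
const deployService = {
    internalDeploy: async (args: { fullPath: string; environment: string; debug: boolean; options: { env: string } }): Promise<void> => {
        console.log('would deploy', args);
    }
};

const program = new Command();

program
    // hidden: true keeps the command out of `nango --help`
    .command('admin:deploy-internal <environment>', { hidden: true })
    .description('Deploy syncs/actions to a dynamically created internal environment')
    .option('--nango-remote-environment <env>', 'Remote Nango environment to target')
    .option('-d, --debug', 'Verbose output', false)
    .action(async function (this: Command, environment: string) {
        const { debug, nangoRemoteEnvironment } = this.opts();
        const fullPath = process.cwd();
        await deployService.internalDeploy({ fullPath, environment, debug, options: { env: nangoRemoteEnvironment || 'prod' } });
    });

program.parse();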

Issue ticket number and link

NAN-1629

Checklist before requesting a review (skip if just adding/editing APIs & templates)

  • I added tests, otherwise the reason is:
  • I added observability, otherwise the reason is:
  • I added analytics, otherwise the reason is:

Summary by CodeRabbit

  • New Features

    • Introduced a new GitHub Actions workflow for automating the deployment of integration templates.
    • Added a hidden CLI command for internal deployment of Nango integrations.
    • Implemented a new API endpoint for internal deployment operations.
    • Enhanced the ConfigService to copy provider configuration credentials between environments.
    • Added methods to the ConnectionService for retrieving and duplicating connections based on environment and configuration IDs.
  • Bug Fixes

    • Improved error handling and validation for deployment operations.
  • Documentation

    • Updated type definitions for better clarity and error management in the API.


coderabbitai bot commented Aug 30, 2024

Walkthrough

The changes introduce a new GitHub Actions workflow for automating the deployment of integration templates and enhance the CLI and server components for internal deployment operations. New commands, methods, and API endpoints have been added to facilitate the management and deployment of Nango integrations across various environments. Additionally, type definitions have been improved for better clarity and maintainability.

Changes

Files and change summaries:
  • .github/workflows/deploy-scripts-internally.yaml: New workflow for automating deployment of integration templates on master branch pushes.
  • packages/cli/lib/index.ts: Added hidden CLI command admin:deploy-internal for internal deployment with environment specification.
  • packages/cli/lib/services/deploy.service.ts: Introduced internalDeploy method in DeployService for handling internal deployments with enhanced parameters.
  • packages/cli/lib/types.ts: Updated DeployOptions interface and added InternalDeployOptions for better type management.
  • packages/server/lib/controllers/sync/deploy/postDeployInternal.ts: New endpoint for post-deployment operations with input validation and environment management.
  • packages/server/lib/routes.ts: Added new route for internal deployment operations, integrating the postDeployInternal function.
  • packages/shared/lib/services/config.service.ts: Added copyProviderConfigCreds method for duplicating provider configurations across environments.
  • packages/shared/lib/services/connection.service.ts: Introduced getConnectionsByEnvironmentAndConfigId and copyConnections methods for managing connections.
  • packages/types/lib/deploy/api.ts: Expanded type definitions, including new PostDeployInternal type for the internal deployment API.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant DeployService
    participant API
    participant ConfigService
    participant ConnectionService

    User->>CLI: Run admin:deploy-internal
    CLI->>DeployService: Call internalDeploy
    DeployService->>ConfigService: Copy provider config credentials
    DeployService->>ConnectionService: Get connections by environment
    DeployService->>API: Trigger postDeployInternal
    API-->>User: Return deployment result

🐰 In a field of green and bright,
I hop with joy, what a delight!
New scripts and commands, oh what fun,
Deploying swiftly, one by one!
With every change, we leap and cheer,
Nango's magic is finally here! 🌼✨



@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 6

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between dbd2c05 and f7ae819.

Files selected for processing (9)
  • .github/workflows/deploy-scripts-internally.yaml (1 hunks)
  • packages/cli/lib/index.ts (1 hunks)
  • packages/cli/lib/services/deploy.service.ts (3 hunks)
  • packages/cli/lib/types.ts (1 hunks)
  • packages/server/lib/controllers/sync/deploy/postDeployInternal.ts (1 hunks)
  • packages/server/lib/routes.ts (2 hunks)
  • packages/shared/lib/services/config.service.ts (1 hunks)
  • packages/shared/lib/services/connection.service.ts (1 hunks)
  • packages/types/lib/deploy/api.ts (2 hunks)
Additional context used
actionlint
.github/workflows/deploy-scripts-internally.yaml

  • 25-25: SC2086:info:3:26: Double quote to prevent globbing and word splitting (shellcheck)
  • 25-25: SC2086:info:5:47: Double quote to prevent globbing and word splitting (shellcheck)
  • 25-25: SC2086:info:7:6: Double quote to prevent globbing and word splitting (shellcheck)
  • 25-25: SC2086:info:8:9: Double quote to prevent globbing and word splitting (shellcheck)
  • 25-25: SC2086:info:8:51: Double quote to prevent globbing and word splitting (shellcheck)
  • 25-25: SC2086:info:12:40: Double quote to prevent globbing and word splitting (shellcheck)

Biome
packages/shared/lib/services/config.service.ts

[error] 294-294: Avoid the delete operator which can impact performance.

Unsafe fix: Use an undefined assignment instead.

(lint/performance/noDelete)

Additional comments not posted (7)
packages/cli/lib/types.ts (2)

9-9: Type alias ENV usage in DeployOptions is approved.

Centralizing environment types into the ENV type alias enhances maintainability and clarity.


18-20: New interface InternalDeployOptions is approved.

The introduction of InternalDeployOptions using the ENV type for its env property is a good practice, promoting separation of concerns between internal and external deployment configurations.

packages/server/lib/controllers/sync/deploy/postDeployInternal.ts (2)

1-10: Initial setup looks good.

The imports and initial setup are appropriate for the functionality described in the PR.


12-28: Validation setup is comprehensive.

The use of Zod for schema validation is a good practice, and the defined schemas are comprehensive for the expected inputs.

packages/cli/lib/index.ts (1)

296-305: New CLI command admin:deploy-internal is well-configured.

The command is correctly marked as hidden and includes necessary options for specifying the environment and remote environment. This setup is appropriate for internal use and aligns with the PR's objectives.

packages/server/lib/routes.ts (1)

168-168: New internal deployment route added.

The addition of the route /sync/deploy/internal is correctly implemented with the apiAuth middleware ensuring basic security. However, given the sensitivity of deployment operations, it's crucial to verify that apiAuth provides adequate security measures for this context.

Would you like me to review the postDeployInternal function to ensure it handles the deployment logic correctly?

Verification successful

Security Measures in apiAuth Middleware Verified

The apiAuth middleware combines authMiddleware.secretKeyAuth for authentication with rateLimiterMiddleware for rate limiting. Together they provide adequate protection for the /sync/deploy/internal route, ensuring that only authorized requests are processed and preventing abuse through excessive requests. A rough sketch of this composition follows the list below.

  • authMiddleware.secretKeyAuth: Provides key-based authentication.
  • rateLimiterMiddleware: Applies rate limiting to control request frequency.
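
As a rough, self-contained illustration of that stack (the route path and the middleware names come from this PR; the middleware bodies below are hypothetical stand-ins, not the real implementations):

import express from 'express';
import type { NextFunction, Request, Response } from 'express';

// Stand-in for authMiddleware.secretKeyAuth: reject requests without a secret key.
const secretKeyAuth = (req: Request, res: Response, next: NextFunction) => {
    const key = req.headers.authorization?.replace('Bearer ', '');
    if (!key) {
        res.status(401).send({ error: { code: 'missing_auth_header' } });
        return;
    }
    next();
};

// Stand-in for rateLimiterMiddleware: a no-op placeholder here.
const rateLimiterMiddleware = (_req: Request, _res: Response, next: NextFunction) => next();

const apiAuth = [secretKeyAuth, rateLimiterMiddleware];

const app = express();
app.use(express.json());

// The real handler is postDeployInternal; a stub stands in for it here.
app.route('/sync/deploy/internal').post(...apiAuth, (_req: Request, res: Response) => {
    res.send({ ok: true });
});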
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the security level of `apiAuth` middleware.

# Test: Search for the `apiAuth` middleware implementation. Expect: Security features like rate limiting, key validation, etc.
rg --type python -A 5 $'apiAuth'

Length of output: 62


Script:

#!/bin/bash
# Description: Verify the security level of `apiAuth` middleware.

# Test: Search for the `apiAuth` middleware implementation in TypeScript files. Expect: Security features like rate limiting, key validation, etc.
rg --type ts -A 10 'apiAuth'

Length of output: 16910

packages/cli/lib/services/deploy.service.ts (1)

295-356: New method internalDeploy added for custom environment deployments.

The internalDeploy method is well-structured and covers all necessary steps for deploying to a custom environment, including file verification, environment setup, compilation, and deployment. The use of PostDeployInternal['Body'] for the deployment payload is noted, and it's important to ensure that this type definition is correctly updated to reflect the new deployment logic.

The method is approved for merging.

Verification successful

Type Definition for PostDeployInternal Verified

The PostDeployInternal type definition includes the necessary fields used in the internalDeploy method, such as reconcile and nangoYamlBody. Ensure that any additional fields like debug and singleDeployMode are correctly handled in the payload structure. The method is approved for merging.
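
For reference, a hedged sketch of what the PostDeployInternal endpoint type might look like, based only on the fields referenced in this PR (flowConfigs, postConnectionScriptsByProvider, nangoYamlBody, reconcile, debug, singleDeployMode, jsonSchema, and the customEnvironment query parameter). The real definition in packages/types/lib/deploy/api.ts is built on the shared Endpoint helper; the placeholder types below are assumptions.

import type { JSONSchema7 } from 'json-schema';

// Placeholder shapes; the real types live in the Nango types package.
type IncomingFlowConfig = { providerConfigKey: string; [key: string]: unknown };
type PostConnectionScriptByProvider = { providerConfigKey: string; scripts: { name: string; fileBody: unknown }[] };
type SyncDeploymentResult = { name: string; version?: string };

type PostDeployInternal = {
    Method: 'POST';
    Path: '/sync/deploy/internal';
    Querystring: { customEnvironment: string };
    Body: {
        flowConfigs: IncomingFlowConfig[];
        postConnectionScriptsByProvider: PostConnectionScriptByProvider[];
        nangoYamlBody: string;
        reconcile: boolean;
        debug: boolean;
        singleDeployMode?: boolean;
        jsonSchema?: JSONSchema7;
    };
    Success: SyncDeploymentResult[];
};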

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the type definition changes for `PostDeployInternal`.

# Test: Search for the `PostDeployInternal` type definition. Expect: Fields that match the deployment logic.
rg --type typescript -A 5 $'PostDeployInternal'

Length of output: 81


Script:

#!/bin/bash
# Description: Verify the type definition changes for `PostDeployInternal`.

# Test: Search for the `PostDeployInternal` type definition. Expect: Fields that match the deployment logic.
rg --type ts -A 10 $'PostDeployInternal'

Length of output: 6142

Comment on lines +25 to +42
run: |
dirs=$(find integration-templates -maxdepth 1 -type d | tail -n +2)
for dir in $dirs; do
integration=$(basename $dir)

mkdir -p /tmp/nango-temp/nango-integrations/$integration

cp $dir/nango.yaml /tmp/nango-temp/nango-integrations/
cp -r $dir/* /tmp/nango-temp/nango-integrations/$integration/

cd /tmp/nango-temp/nango-integrations
npm install zod soap botbuilder
nango admin:deploy-internal template-$integration

cd -
rm -rf /tmp/nango-temp
done
env:

Address shellcheck issues to prevent potential bugs.

The workflow is well-structured, but there are several shellcheck issues related to unquoted variables that could lead to globbing and word splitting. It's crucial to quote these variables to ensure the script behaves as expected.

Apply this diff to address the shellcheck issues (note: $dirs is intentionally left unquoted in the for loop so the newline-separated list is split into entries; alternatively, pipe find straight into a while IFS= read -r dir loop):

- integration=$(basename $dir)
+ integration=$(basename "$dir")
- mkdir -p /tmp/nango-temp/nango-integrations/$integration
+ mkdir -p /tmp/nango-temp/nango-integrations/"$integration"
- cp $dir/nango.yaml /tmp/nango-temp/nango-integrations/
+ cp "$dir"/nango.yaml /tmp/nango-temp/nango-integrations/
- cp -r $dir/* /tmp/nango-temp/nango-integrations/$integration/
+ cp -r "$dir"/* /tmp/nango-temp/nango-integrations/"$integration"/
- nango admin:deploy-internal template-$integration
+ nango admin:deploy-internal "template-$integration"

Committable suggestion was skipped due to low confidence.

Tools: actionlint reported the six SC2086 (unquoted variable) shellcheck issues on line 25 listed in the review summary above.

@@ -1,6 +1,6 @@
 import type { JSONSchema7 } from 'json-schema';

-import type { Endpoint } from '../api.js';
+import type { Endpoint, ApiError } from '../api.js';

Approve import changes and new type PostDeployInternal.

The addition of ApiError to the imports enhances error handling capabilities. The new type PostDeployInternal is well-defined, providing a clear structure for internal deployment requests. However, the Success type is marked with a TODO to move SyncDeploymentResult here, which should be addressed soon for better documentation and clarity.

Would you like me to help document the Success type or open a GitHub issue to track this task?

Also applies to: 35-52

Comment on lines +30 to +133
return;
}

const queryParamValues = queryStringValidation.safeParse(req.query);

if (!queryParamValues.success) {
res.status(400).send({
error: { code: 'invalid_query_params', errors: zodErrorToHTTP(queryParamValues.error) }
});
return;
}

const body: PostDeployInternal['Body'] = val.data;

const { account } = res.locals;

if (account.uuid !== process.env['NANGO_SHARED_DEV_ACCOUNT_UUID']) {
res.status(403).send({ error: { code: 'forbidden', message: 'This endpoint is only available for Nango internal use' } });
return;
}

const environmentName = queryParamValues.data.customEnvironment;

let environment = await environmentService.getByEnvironmentName(account.id, environmentName);

if (!environment) {
environment = await environmentService.createEnvironment(account.id, environmentName);

if (!environment) {
res.status(500).send({
error: { code: 'environment_creation_error', message: 'There was an error creating the environment, please try again' }
});
return;
}

// since we're making a new environment, we want to make sure the config creds and
// connections are copied from the dev environment
// so that the syncs and actions can be run right away
const allProviderConfigKeys = body.flowConfigs.map((flow) => flow.providerConfigKey);
const uniqueProviderConfigKeys = [...new Set(allProviderConfigKeys)];

for (const providerConfigKey of uniqueProviderConfigKeys) {
const devEnvironment = await environmentService.getByEnvironmentName(account.id, 'dev');
if (devEnvironment) {
const copiedResponse = await configService.copyProviderConfigCreds(devEnvironment.id, environment.id, providerConfigKey);

if (copiedResponse) {
const { copiedFromId, copiedToId } = copiedResponse;
const connections = await connectionService.getConnectionsByEnvironmentAndConfigId(devEnvironment.id, copiedFromId);
if (connections.length > 0) {
await connectionService.copyConnections(connections, environment.id, copiedToId);
}
}
}
}
}

const {
success,
error,
response: syncConfigDeployResult
} = await deploy({
environment,
account,
flows: body.flowConfigs,
nangoYamlBody: body.nangoYamlBody,
postConnectionScriptsByProvider: body.postConnectionScriptsByProvider,
debug: body.debug,
jsonSchema: req.body.jsonSchema,
logContextGetter,
orchestrator
});

if (!success || !syncConfigDeployResult) {
errorManager.errResFromNangoErr(res, error);
return;
}

if (body.reconcile) {
const logCtx = syncConfigDeployResult.logCtx;
const success = await getAndReconcileDifferences({
environmentId: environment.id,
flows: body.flowConfigs,
performAction: body.reconcile,
debug: body.debug,
singleDeployMode: body.singleDeployMode,
logCtx,
logContextGetter,
orchestrator
});
if (!success) {
res.status(500).send({
error: { code: 'server_error', message: 'There was an error deploying syncs, please check the activity tab and report this issue to support' }
});
return;
}
}

res.send(syncConfigDeployResult.result);
});

Endpoint logic is well-structured but consider enhancing error handling.

The main logic of the endpoint is well-structured and performs necessary operations effectively. However, consider adding more specific error messages and handling potential exceptions more gracefully to improve maintainability and debuggability.

Additionally, the restriction based on the account UUID (NANGO_SHARED_DEV_ACCOUNT_UUID) is a critical security measure that should be highlighted in the documentation or code comments for clarity.

packages/shared/lib/services/config.service.ts: outdated review comment (resolved)
Comment on lines +641 to +652
public async copyConnections(connections: StoredConnection[], environment_id: number, config_id: number) {
const newConnections = connections.map((connection) => {
return {
...connection,
id: undefined,
environment_id,
config_id
};
});

await db.knex.batchInsert('_nango_connections', newConnections);
}

Review: copyConnections method implementation.

The method efficiently handles the copying of connections to a new environment and config ID. Consider adding error handling for the batch insertion operation to manage potential failures.

The code changes are approved.

Add error handling for batch insertion:

+ try {
    await db.knex.batchInsert('_nango_connections', newConnections);
+ } catch (error) {
+     logger.error(`Failed to copy connections: ${error}`);
+     throw new NangoError('batch_insert_failed', { detail: error.message });
+ }
Committable suggestion (review carefully before committing; ensure it accurately replaces the highlighted code and is tested):

public async copyConnections(connections: StoredConnection[], environment_id: number, config_id: number) {
    const newConnections = connections.map((connection) => {
        return {
            ...connection,
            id: undefined,
            environment_id,
            config_id
        };
    });

    try {
        await db.knex.batchInsert('_nango_connections', newConnections);
    } catch (error) {
        logger.error(`Failed to copy connections: ${error}`);
        throw new NangoError('batch_insert_failed', { detail: error.message });
    }
}

Comment on lines +631 to +639
public async getConnectionsByEnvironmentAndConfigId(environment_id: number, config_id: number): Promise<StoredConnection[]> {
const result = await db.knex.from<StoredConnection>(`_nango_connections`).select('*').where({ environment_id, config_id, deleted: false });

if (!result || result.length == 0 || !result[0]) {
return [];
}

return result;
}

Review: getConnectionsByEnvironmentAndConfigId method implementation.

The method is implemented correctly to fetch connections based on environment_id and config_id. However, consider adding error handling to manage potential database query failures gracefully.

The code changes are approved.

Add error handling for database operations:

+ try {
    const result = await db.knex.from<StoredConnection>(`_nango_connections`).select('*').where({ environment_id, config_id, deleted: false });
    if (!result || result.length == 0 || !result[0]) {
        return [];
    }
    return result;
+ } catch (error) {
+     logger.error(`Failed to fetch connections: ${error}`);
+     throw new NangoError('database_query_failed', { detail: error.message });
+ }
Committable suggestion (review carefully before committing; ensure it accurately replaces the highlighted code and is tested):

public async getConnectionsByEnvironmentAndConfigId(environment_id: number, config_id: number): Promise<StoredConnection[]> {
    try {
        const result = await db.knex.from<StoredConnection>(`_nango_connections`).select('*').where({ environment_id, config_id, deleted: false });

        if (!result || result.length == 0 || !result[0]) {
            return [];
        }

        return result;
    } catch (error) {
        logger.error(`Failed to fetch connections: ${error}`);
        throw new NangoError('database_query_failed', { detail: error.message });
    }
}

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between f7ae819 and 31bb72a.

Files selected for processing (1)
  • packages/shared/lib/services/config.service.ts (1 hunks)
Additional comments not posted (1)
packages/shared/lib/services/config.service.ts (1)

281-304: Review of the copyProviderConfigCreds method

The newly introduced copyProviderConfigCreds method in the ConfigService class is designed to copy provider configuration credentials from one environment to another. Here are some observations and suggestions:

  1. Correctness and Error Handling:

    • The method correctly checks if the configuration exists and if it has a valid ID before proceeding with the copying process. This is good practice to avoid null reference errors.
    • The method returns null if the configuration cannot be found or if the new configuration cannot be created, which is a clear and consistent error handling strategy.
  2. Performance Considerations:

    • The method uses object destructuring to avoid using the delete operator, which is a performance improvement over the previous implementation. This change addresses the performance issue raised in the previous review.
  3. Security and Data Integrity:

    • The method ensures that the unique_key is preserved when copying the configuration, which is crucial for maintaining data integrity across environments.
    • However, there is no explicit transaction handling. Consider wrapping the database operations in a transaction to ensure that either all operations succeed or none do, which would enhance the robustness of the method.
  4. Maintainability:

    • The method is well-structured and uses modern JavaScript features like async/await and destructuring, making it easy to read and maintain.
    • The use of descriptive variable names (fromConfig, configWithoutId, foundConfigId) enhances readability.
  5. Suggestions for Improvement:

    • Add Transaction Handling: To improve the robustness of the method, consider adding transaction handling to ensure that the database operations are atomic.
    • Logging and Observability: Since the method involves critical operations of copying configurations, adding logging at key steps would improve observability and make debugging easier.

Overall, the method is well-implemented with attention to detail in error handling and performance. With the addition of transaction handling and logging, it would be even more robust and maintainable.

The code changes are approved with suggestions for further enhancements.
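
A minimal sketch of what the suggested transaction handling could look like with knex, assuming the provider configs live in a _nango_configs table with environment_id, unique_key, and deleted columns (those names are assumptions; only the destructuring pattern and the copiedFromId/copiedToId result shape come from the review above):

import knex from 'knex';

// Stand-in connection; the real instance is db.knex from the shared package.
const db = { knex: knex({ client: 'pg', connection: process.env['DATABASE_URL'] }) };

// Hypothetical transactional variant of copyProviderConfigCreds: either every row
// written inside the callback is committed, or the whole copy is rolled back.
async function copyProviderConfigCredsTx(
    fromEnvironmentId: number,
    toEnvironmentId: number,
    providerConfigKey: string
): Promise<{ copiedFromId: number; copiedToId: number } | null> {
    return db.knex.transaction(async (trx) => {
        const fromConfig = await trx('_nango_configs')
            .where({ environment_id: fromEnvironmentId, unique_key: providerConfigKey, deleted: false })
            .first();
        if (!fromConfig) {
            return null;
        }
        const { id: copiedFromId, ...configWithoutId } = fromConfig;
        const [inserted] = await trx('_nango_configs')
            .insert({ ...configWithoutId, environment_id: toEnvironmentId })
            .returning('id'); // on Postgres, knex returns [{ id }]
        // Any connection copies would also be issued on `trx` here so the copy stays atomic.
        return { copiedFromId, copiedToId: inserted.id };
    });
}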

@bodinsamuel bodinsamuel (Collaborator) left a comment

Looks okay.

I feel like it would be so much better to have a way to create environments programmatically (via the API), but that's not possible unfortunately with the API key being tied to the env :/
Most of the new code is super specific to us and this env, a bit annoying to expose and maintain that.

Comment on lines +28 to +35
integration=$(basename $dir)

mkdir -p /tmp/nango-temp/nango-integrations/$integration

cp $dir/nango.yaml /tmp/nango-temp/nango-integrations/
cp -r $dir/* /tmp/nango-temp/nango-integrations/$integration/

cd /tmp/nango-temp/nango-integrations

This is the Nth time we have done that, just because we enforce nango-integrations to be the folder, without too many reasons. Maybe at some point, we should just check if there is nango.yaml and that's good

const { debug, nangoRemoteEnvironment } = this.opts();
const fullPath = process.cwd();
await deployService.internalDeploy({ fullPath, environment, debug, options: { env: nangoRemoteEnvironment || 'prod' } });
});

Feels bad to expose internal stuff in a public api :/


// since we're making a new environment, we want to make sure the config creds and
// connections are copied from the dev environment
// so that the syncs and actions can be run right away

It feels unnecessary, we have about 60 connections in total that could have been moved manually

packages/shared/lib/services/config.service.ts: outdated review comment (resolved)
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Outside diff range, codebase verification and nitpick comments (1)
packages/shared/lib/services/config.service.ts (1)

281-304: Consider handling the scenario where the target environment already has a configuration with the same unique_key.

The method currently creates a new configuration with the same unique_key in the target environment, even if a configuration with that unique_key already exists. This may not be the desired behavior in all cases.

Consider adding a check to see if a configuration with the same unique_key already exists in the target environment. If it does, you can either:

  • Update the existing configuration in the target environment with the copied configuration data.
  • Generate a new unique unique_key for the copied configuration to avoid conflicts.

Here's an example of how you can check for an existing configuration and update it:

+ const existingConfig = await this.getProviderConfig(fromConfig.unique_key, toEnvironmentId);
+ if (existingConfig) {
+   await this.editProviderConfig({
+     ...configWithoutId,
+     id: existingConfig.id,
+     environment_id: toEnvironmentId,
+     unique_key: fromConfig.unique_key
+   });
+   return { copiedToId: existingConfig.id, copiedFromId: foundConfigId };
+ }
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 31bb72a and 845c6c5.

Files selected for processing (1)
  • packages/shared/lib/services/config.service.ts (1 hunks)
Additional comments not posted (1)
packages/shared/lib/services/config.service.ts (1)

281-304: LGTM!

The copyProviderConfigCreds method is well-implemented for its purpose. It handles error scenarios appropriately and uses object destructuring to omit the id property when creating a new configuration.
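
For readers unfamiliar with that pattern, a tiny illustration of omitting id via rest destructuring rather than the delete operator flagged by Biome (variable names mirror the review; the object shape is made up):

// `delete copy.id` would trigger lint/performance/noDelete; rest destructuring avoids it.
const fromConfig = { id: 1, unique_key: 'github', environment_id: 2, oauth_client_id: 'abc' };

const { id: _omittedId, ...configWithoutId } = fromConfig;
const copiedConfig = { ...configWithoutId, environment_id: 3 };

console.log(copiedConfig); // { unique_key: 'github', environment_id: 3, oauth_client_id: 'abc' }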

@khaliqgant khaliqgant merged commit d84b2d3 into master Sep 3, 2024
22 checks passed
@khaliqgant khaliqgant deleted the khaliq/nan-1629-github-action-that-attempts-to-deploy-all-public-templates branch September 3, 2024 06:15