fix(config): fix docker compose local setup #1372

Merged · 4 commits · Jun 7, 2023

Changes from all commits are shown below. Most hunks are trailing-whitespace cleanups; the substantive changes remove 410 lines from the generated `Cargo.nix` and drop the `hyperswitch-server-init` build stage from `docker-compose.yml`, so the server container now builds and runs the router in a single `cargo run` step.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.yml
@@ -70,7 +70,7 @@ body:
If yes, please provide the value of the `x-request-id` response header for helping us debug your issue.

If not (or if building/running locally), please provide the following details:
1. Operating System or Linux distribution:
2. Rust version (output of `rustc --version`): ``
3. App version (output of `cargo r -- --version`): ``
validations:
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -19,7 +19,7 @@
- [ ] This PR modifies the database schema
- [ ] This PR modifies application configuration/environment variables

<!--
Provide links to the files with corresponding changes.

Following are the paths where you can find config files:
4 changes: 2 additions & 2 deletions .github/workflows/CI.yml
@@ -374,7 +374,7 @@ jobs:
- name: Cargo hack storage_models
if: env.storage_models_changes_exist == 'true'
run: cargo hack check --each-feature --no-dev-deps -p storage_models

typos:
name: Spell check
runs-on: ubuntu-latest
@@ -384,4 +384,4 @@ jobs:

- name: Spell check
uses: crate-ci/typos@master
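The spell-check job above just runs the `typos` binary over the repository. To reproduce it locally, a minimal sketch, assuming the `typos-cli` crate (which `crate-ci/typos` packages) is installed via cargo:

```bash
# Install the same checker the CI job uses, then run it from the repo root;
# `typos` exits non-zero when it finds misspellings, mirroring the CI failure.
cargo install typos-cli
typos .
```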

4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -326,7 +326,7 @@ All notable changes to HyperSwitch will be documented here.
* **bank_redirects:** modify api contract for sofort (#880) (fc2e4514)
* add template code for connector forte (#854) (7a581a6)
* add template code for connector nexinets (#852) (dee5f61)

### Bug Fixes

* **connector:** [coinbase] make metadata as option parameter (#887) (f5728955)
@@ -335,7 +335,7 @@ All notable changes to HyperSwitch will be documented here.

### Enhancement

* **payments:** make TokenizationAction clonable (#895)

### Integration

410 changes: 0 additions & 410 deletions Cargo.nix

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions INSTALL_dependencies.sh
@@ -5,7 +5,7 @@
#
# Global config

if [[ "${TRACE-0}" == "1" ]]; then
if [[ "${TRACE-0}" == "1" ]]; then
set -o xtrace
fi

@@ -44,8 +44,8 @@ else
SUDO=""
fi

ver () {
    printf "%03d%03d%03d%03d" `echo "$1" | tr '.' ' '`;
}

PROGNAME=`basename $0`
@@ -59,7 +59,7 @@ err () {
}

need_cmd () {
if ! command -v $1 > /dev/null
then
err "Command \"${1}\" not found. Bailing out"
fi
@@ -187,7 +187,7 @@ if [[ ! -x "`command -v psql`" ]] || [[ ! -x "`command -v redis-server`" ]] ; then
install_dep postgresql
install_dep postgresql-contrib # not needed for macos?
install_dep postgresql-devel # needed for diesel_cli in some linux distributions
install_dep postgresql-libs # needed for diesel_cli in some linux distributions
init_start_postgres # installing libpq messes with initdb, creating two copies; better to run it before installing libpq.
install_dep libpq-dev || install_dep libpq
else
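The `ver` helper above pads each dot-separated version component to three digits so version strings can be compared as plain integers. A usage sketch — the `MIN_VERSION` threshold and the comparison are illustrative, not from the script:

```bash
ver () {
    printf "%03d%03d%03d%03d" `echo "$1" | tr '.' ' '`;
}

# "9.6.2" -> 009006002000 and "14.5" -> 014005000000, so a decimal
# comparison with `[` orders versions correctly. `[` is used rather than
# `[[` because `[[` would treat the leading zeros as octal and error out.
MIN_VERSION="14.0"
if [ "$(ver "14.5")" -ge "$(ver "$MIN_VERSION")" ]; then
    echo "version is new enough"
fi
```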
2 changes: 1 addition & 1 deletion config/config.example.toml
@@ -247,7 +247,7 @@ region = "" # The AWS region used by the KMS SDK for decrypting data.
# EmailClient configuration. Only applicable when the `email` feature flag is enabled.
[email]
from_email = "[email protected]" # Sender email
aws_region = "" # AWS region used by AWS SES
base_url = "" # Base url used when adding links that should redirect to self

[dummy_connector]
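Values in this file can usually also be supplied per-environment. A hedged sketch, assuming the router follows a `ROUTER__SECTION__KEY` environment-variable convention — the prefix and nesting scheme are assumptions and should be verified against your deployment:

```bash
# Hypothetical overrides for the [email] section above; the ROUTER__
# prefix and double-underscore nesting are assumptions, not confirmed here.
export ROUTER__EMAIL__AWS_REGION="us-east-1"
export ROUTER__EMAIL__BASE_URL="https://app.example.com"
```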
34 changes: 17 additions & 17 deletions config/redis.conf
@@ -909,10 +909,10 @@ replica-priority 100
# commands. For instance ~* allows all the keys. The pattern
# is a glob-style pattern like the one of KEYS.
# It is possible to specify multiple patterns.
# %R~<pattern> Add key read pattern that specifies which keys can be read
# from.
# %W~<pattern> Add key write pattern that specifies which keys can be
# written to.
# allkeys Alias for ~*
# resetkeys Flush the list of allowed keys patterns.
# &<pattern> Add a glob-style pattern of Pub/Sub channels that can be
@@ -939,10 +939,10 @@ replica-priority 100
# -@all. The user returns to the same state it has immediately
# after its creation.
# (<options>) Create a new selector with the options specified within the
# parentheses and attach it to the user. Each option should be
# space separated. The first character must be ( and the last
# character must be ).
# clearselectors Remove all of the currently attached selectors.
# Note this does not change the "root" user permissions,
# which are the permissions directly applied onto the
# user (outside the parentheses).
@@ -968,7 +968,7 @@ replica-priority 100
# Basically ACL rules are processed left-to-right.
#
# The following is a list of command categories and their meanings:
# * keyspace - Writing or reading from keys, databases, or their metadata
# in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,
# KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,
# key or metadata will also have `write` category. Commands that only read
@@ -1589,8 +1589,8 @@ cluster-config-file nodes-6379.conf
#
cluster-node-timeout 15000

# The cluster port is the port that the cluster bus will listen for inbound connections on. When set
# to the default value, 0, it will be bound to the command port + 10000. Setting this value requires
# you to specify the cluster bus port when executing cluster meet.
cluster-port 16379

@@ -1725,12 +1725,12 @@ cluster-allow-pubsubshard-when-down yes
# PubSub message by default. (client-query-buffer-limit default value is 1gb)
#
cluster-link-sendbuf-limit 0

# Clusters can configure their announced hostname using this config. This is a common use case for
# applications that need to use TLS Server Name Indication (SNI) or dealing with DNS based
# routing. By default this value is only shown as additional metadata in the CLUSTER SLOTS
# command, but can be changed using 'cluster-preferred-endpoint-type' config. This value is
# communicated along the clusterbus to all nodes, setting it to an empty string will remove
# the hostname and also propagate the removal.
#
# cluster-announce-hostname ""
@@ -1739,13 +1739,13 @@ cluster-link-sendbuf-limit 0
# a user defined hostname, or by declaring they have no endpoint. Which endpoint is
# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type
# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls how
# the endpoint returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.
# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'
# will be returned instead.
#
# When a cluster advertises itself as having an unknown endpoint, it's indicating that
# the server doesn't know how clients can reach the cluster. This can happen in certain
# networking situations where there are multiple possible routes to the node, and the
# server doesn't know which one the client took. In this case, the server is expecting
# the client to reach out on the same endpoint it used for making the last request, but use
# the port provided in the response.
@@ -2058,7 +2058,7 @@ client-output-buffer-limit pubsub 32mb 8mb 60
# errors or data eviction. To avoid this we can cap the accumulated memory
# used by all client connections (all pubsub and normal clients). Once we
# reach that limit connections will be dropped by the server freeing up
# memory. The server will attempt to drop the connections using the most
# memory first. We call this mechanism "client eviction".
#
# Client eviction is configured using the maxmemory-clients setting as follows:
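The `%R~`/`%W~` key patterns and selector syntax described in the comments above can be exercised directly with `redis-cli`. A minimal sketch — the user name, password, and key patterns are illustrative, and Redis 7+ is assumed for read/write key-pattern support:

```bash
# Create a user allowed read-category commands plus SET, with reads
# restricted to keys under "stats:" and writes restricted to "cache:",
# per the %R~ / %W~ rules documented above.
redis-cli ACL SETUSER reporter on '>s3cret' '+@read' '+set' '%R~stats:*' '%W~cache:*'

# Inspect the resulting rules for the new user.
redis-cli ACL GETUSER reporter
```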
2 changes: 1 addition & 1 deletion crates/router/src/connector/globalpay/requests.rs
@@ -351,7 +351,7 @@ pub struct Card {
pub struct DigitalWallet {
/// Identifies who provides the digital wallet for the Payer.
pub provider: Option<DigitalWalletProvider>,
/// A token that represents, or is the payment method, stored with the digital wallet.
pub payment_token: Option<serde_json::Value>,
}

19 changes: 1 addition & 18 deletions docker-compose.yml
@@ -14,7 +14,6 @@ networks:


services:

promtail:
image: grafana/promtail:latest
volumes:
@@ -92,21 +91,9 @@ services:
volumes:
- ./:/app

-  hyperswitch-server-init:
-    image: rust:1.70
-    command: cargo build --bin router
-    working_dir: /app
-    networks:
-      - router_net
-    volumes:
-      - ./:/app
-      - cargo_cache:/cargo_cache
-      - cargo_build_cache:/cargo_build_cache
-    environment:
-      - CARGO_TARGET_DIR=/cargo_build_cache
   hyperswitch-server:
     image: rust:1.70
-    command: /cargo_build_cache/debug/router -f ./config/docker_compose.toml
+    command: cargo run -- -f ./config/docker_compose.toml
working_dir: /app
ports:
- "8080:8080"
@@ -127,10 +114,6 @@ start_period: 20s
start_period: 20s
timeout: 10s

-    depends_on:
-      hyperswitch-server-init:
-        condition: service_completed_successfully

hyperswitch-producer:
image: rust:1.70
command: cargo run --bin scheduler -- -f ./config/docker_compose.toml
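With the separate init service gone, the `hyperswitch-server` container now compiles and launches the router in one step via `cargo run`. A sketch of the resulting local workflow — the `/health` path is an assumption based on the compose health check, not confirmed by this diff:

```bash
# Bring up the app server and its dependencies; the first run compiles
# the router inside the container, so startup takes a while.
docker compose up -d hyperswitch-server

# Watch the build/startup logs, then probe the published port.
docker compose logs -f hyperswitch-server
curl -v http://localhost:8080/health   # assumes the health endpoint is /health
```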
4 changes: 2 additions & 2 deletions docs/rfcs/000-issuing-template.md
@@ -3,10 +3,10 @@
### I. Objective
A clear and concise title for the RFC

### II. Proposal
A detailed description of the proposed changes, discussion time frame, technical details and potential drawbacks or alternative solutions that were considered

### III. Open Questions
Any questions or concerns that are still open for discussion and debate within the community

### IV. Additional Context / Previous Improvements
22 changes: 11 additions & 11 deletions docs/rfcs/guidelines.md
@@ -2,7 +2,7 @@

Hyperswitch welcomes contributions from anyone in the open-source community. Although some contributions can be easily reviewed and implemented through regular GitHub pull requests, larger changes that require design decisions will require more discussion and collaboration within the community.

To facilitate this process, Hyperswitch has adopted the RFC (Request for Comments) process from other successful open-source projects like Rust and React. The RFC process is designed to encourage community-driven change and ensure that everyone has a voice in the decision-making process, including both core and non-core contributors.

Here are the steps involved:
1. Prepare an RFC Proposal
@@ -15,7 +15,7 @@ Here are the steps involved:

**Prepare an RFC Proposal:** Anyone interested in proposing a change to Hyperswitch should first create an RFC(in the format given below) that outlines the proposed change. This document should describe the problem the proposal is trying to solve, the proposed solution, and any relevant technical details. The document should also include any potential drawbacks or alternative solutions that were considered.

**Submit Proposal:** Once the RFC document is complete, the proposer should submit it to the Hyperswitch community for review. The proposal can be submitted either as a pull request to the RFC Documents folder or as a GitHub Issue.

**Complete Initial Review:** After the proposal is submitted, the Hyperswitch core team would review it and provide feedback. Feedback can include suggestions for improvements, questions about the proposal, or concerns about its potential impact.

@@ -33,35 +33,35 @@

### Issuing an RFC
```text
**Title**

**Objective**
A clear and concise title for the RFC

**Proposal**
A detailed description of the proposed changes, discussion time frame, technical details and potential drawbacks or alternative solutions that were considered

**Open Questions**
Any questions or concerns that are still open for discussion and debate within the community

**Additional Context / Previous Improvements**
Any relevant external resources or references like slack / discord threads that support the proposal
```

### Resolving an RFC
```text
**Title**
The title of the resolved RFC

**Status**
The final status of the RFC (Accepted / Rejected)

**Resolution**
A description of the final resolution of the RFC, including any modifications or adjustments made during the discussion and review process

**Implementation**
A description of how the resolution will be implemented, including any relevant future scope for the solution

**Acknowledgements**
Any final thoughts or acknowledgements for the community and contributors who participated in the RFC process
```
2 changes: 1 addition & 1 deletion generate_code_coverage.sh
@@ -13,4 +13,4 @@ export LLVM_PROFILE_FILE="$$-%p-%m.profraw"

echo "Running 'cargo test' and generating grcov reports.. This may take some time.."

cargo test && grcov . -s . -t html --branch --binary-path ./target/debug && rm -f $$*profraw && echo "starting server on localhost:$SERVER_PORT" && cd html && python3 -m http.server $SERVER_PORT
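The final line above chains the whole flow: instrumented tests, `grcov` report generation, cleanup of `.profraw` files, and a local HTTP server for the HTML output. A usage sketch — the port value is illustrative, and it assumes `SERVER_PORT` is read from the environment by the script's earlier lines:

```bash
# One-time setup for source-based coverage with grcov.
cargo install grcov
rustup component add llvm-tools-preview

# Run the script, then browse http://localhost:8081 for the report.
SERVER_PORT=8081 bash generate_code_coverage.sh
```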
18 changes: 9 additions & 9 deletions loadtest/README.md
@@ -1,11 +1,11 @@
## Performance Benchmarking Setup

The setup uses docker compose to get the required components up and running. It also handles running database migration
and starts [K6 load testing](https://k6.io/docs/) script at the end. The metrics are visible in the console as well as
through Grafana dashboard.

We have added a callback at the end of the script to compare result with existing baseline values. The env variable
`LOADTEST_RUN_NAME` can be used to change the name of the run which will be used to create json, result summary and diff
benchmark files. The default value is "baseline", and diff will be created by comparing new results against baseline.
See 'How to run' section.

@@ -15,10 +15,10 @@ See 'How to run' section.

`grafana`: data source and dashboard files

`k6`: K6 load testing tool scripts. The `setup.js` contain common functions like creating merchant api key etc.
Each js files will contain load testing scenario of each APIs. Currently, we have `health.js` and `payment-confirm.js`.

`.env`: It provides default values to the docker compose file. Developers can specify which js script they want to run using the env
variable called `LOADTEST_K6_SCRIPT`. The default script is `health.js`. See 'How to run' section.

### How to run
@@ -33,7 +33,7 @@ Run default (`health.js`) script. It will generate baseline result.
bash loadtest.sh
```

The `loadtest.sh` script takes the following flags:

`-c`: _compare_ with baseline results [without argument]
auto assign run name based on current commit number
@@ -42,7 +42,7 @@

`-s`: _script name_ exists in `k6` directory without the file extension as argument (default: health)

`-a`: run loadtest for _all scripts_ existing in `k6` directory [without argument]

For example, to run the baseline for `payment-confirm.js` script.
```bash
@@ -64,7 +64,7 @@
```bash
bash loadtest.sh -ca
```
It uses the `-c` compare flag and `-a` to run the loadtest for all the scripts.

Developer can observe live metrics using [K6 Load Testing Dashboard](http://localhost:3002/d/k6/k6-load-testing-results?orgId=1&refresh=5s&from=now-1m&to=now) in Grafana.
The [Tempo datasource](http://localhost:3002/explore?orgId=1&left=%7B%22datasource%22:%22P214B5B846CF3925F%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22queryType%22:%22nativeSearch%22%7D%5D,%22range%22:%7B%22from%22:%22now-1m%22,%22to%22:%22now%22%7D%7D)
@@ -74,6 +74,6 @@ is available to inspect tracing of individual requests.

1. The script will first "down" the already running docker compose to run loadtest on freshly created database.
2. Make sure that the Rust compiler is happy with your changes before you start running a performance test. This will save a lot of your time.
3. If the project image is available locally then `docker compose up` won't take your new changes into account.
Either first do `docker compose build` or `docker compose up --build k6`.
4. For baseline, make sure you are on the right branch and have built the image before running the loadtest script.
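Putting the flags above together, a baseline-plus-comparison run for one script might look like this — it assumes the baseline was generated on the branch you want to compare against, as note 4 advises:

```bash
# Generate the baseline result for the payment-confirm scenario...
bash loadtest.sh -s payment-confirm

# ...then, after switching branches, compare against that baseline;
# -c derives the run name from the current commit, per the flag list above.
bash loadtest.sh -c -s payment-confirm
```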
8 changes: 4 additions & 4 deletions loadtest/grafana/grafana-dashboard.yaml
@@ -1,8 +1,8 @@
apiVersion: 1
providers:
  - name: 'default'
    org_id: 1
    folder: ''
    type: 'file'
    options:
      path: /var/lib/grafana/dashboards