Social Network

Architecture

Run and apply migrations

$ cp .env.example .env
$ docker-compose up -d
$ docker-compose exec app alembic upgrade head
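To confirm the migrations were applied, the current Alembic revision can be checked inside the same app container (a quick sanity check, assuming the service names from the docker-compose file above):

$ docker-compose exec app alembic current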

Deploy to GCP

$ export GOOGLE_PROJECT=[name]
$ docker-machine create --driver google \
     --google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
     --google-machine-type n1-standard-8 \
     --google-zone europe-west1-b \
     app
$ eval $(docker-machine env app)
$ docker-compose up -d
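To make sure the stack actually came up on the remote machine (the docker-machine environment is still active at this point), list the running services:

$ docker-compose ps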

Generate users

$ flask user generate [count]

WRK Report

Available at the following link

Master-Slave Replication

$ export GOOGLE_PROJECT=[name]
$ docker-machine create --driver google \
     --google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
     --google-machine-type n1-standard-2 \
     app-db-slave
$ eval $(docker-machine env app-db-slave)

Open the MySQL port by adding a firewall rule:

$ gcloud compute firewall-rules create app-db-slave \
     --allow tcp:10101 \
     --target-tags=docker-machine \
     --description="Allow DB slave connections" \
     --direction=INGRESS 

Run containers:

$ docker-compose -f docker-compose-replication.yml up -d
$ docker-compose -f docker-compose-replication.yml exec db_slave bash

Import the current DB dump into the slave:

$ mysql -h34.72.179.20 -uroot -p -P10101 app < dump/app_db.sql

Connect to the slave MySQL server and run the following commands:

CHANGE MASTER TO
MASTER_HOST='34.78.37.195',
MASTER_PORT=10100,
MASTER_USER='root',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=0;

START SLAVE;
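To verify that replication is running, check the slave status on the replica; both Slave_IO_Running and Slave_SQL_Running should report Yes:

SHOW SLAVE STATUS\G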

Master-Slave Semisynchronous Replication


Install the semi-sync plugin on the master:

docker-compose exec db mysql -uroot -p \
  -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';"

Install the semi-sync plugin on the slaves:

docker-compose -f docker-compose-replication.yml exec db_slave mysql -uroot -p \
  -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"

docker-compose -f docker-compose-replication.yml exec db_slave_1 mysql -uroot -p \
  -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"

Enable semi-sync replication on the master and show the result:

docker-compose exec db mysql -uroot -p \
  -e "SET GLOBAL rpl_semi_sync_master_enabled = 1;" \
  -e "SHOW VARIABLES LIKE 'rpl_semi_sync%';"

Enable semi-sync replication on the slaves and show the result:

docker-compose -f docker-compose-replication.yml exec db_slave mysql -uroot -p \
  -e "SET GLOBAL rpl_semi_sync_slave_enabled = 1;" \
  -e "SHOW VARIABLES LIKE 'rpl_semi_sync%';"

docker-compose -f docker-compose-replication.yml exec db_slave_1 mysql -uroot -p \
  -e "SET GLOBAL rpl_semi_sync_slave_enabled = 1;" \
  -e "SHOW VARIABLES LIKE 'rpl_semi_sync%';"

Sharding via Vitess


Prepare the environment

Create a GCP instance:

docker-machine create --driver google \
     --google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
     --google-machine-type n1-standard-4 \
     vitess
eval $(docker-machine env vitess)

Open the Vitess ports by adding a firewall rule:

gcloud compute firewall-rules create vitess \
     --allow tcp:15000,tcp:15001,tcp:15306 \
     --target-tags=docker-machine \
     --description="Sharing vitess ports" \
     --direction=INGRESS

Clone the Vitess repository and build and run it with Docker:

git clone https://github.com/vitessio/vitess.git
cd vitess/ && docker build -f docker/local/Dockerfile -t vitess/local .
docker run -p 15000:15000 -p 15001:15001 -p 15306:15306 --rm -it vitess/local
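Once the local cluster is up, VTGate's MySQL protocol port (15306, exposed above) can be checked with a plain MySQL client; the hostname here is a placeholder for the vitess machine's external IP, and authentication depends on how vtgate is configured in the image:

mysql -h <vitess-host> -P 15306 -e "SHOW DATABASES;"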

Run the application with the master database:

docker-compose up -d

Run the replica databases:

docker-compose -f docker-compose-replication.yml up -d

Move source tables to reshard

Create vttablets backed by the current master and replica databases:

vttablet \
 $TOPOLOGY_FLAGS \
 -logtostderr \
 -tablet-path "zone1-0000000200" \
 -init_keyspace app \
 -init_shard 0 \
 -init_tablet_type replica \
 -port 15200 \
 -grpc_port 16200 \
 -service_map 'grpc-queryservice,grpc-tabletmanager,grpc-updatestream' \
 -db_host 35.195.211.151 \
 -db_port 10100 \
 -db_repl_user root \
 -db_repl_password password \
 -db_filtered_user root \
 -db_filtered_password password \
 -db_app_user root \
 -db_app_password password \
 -db_dba_user root \
 -db_dba_password password \
 -init_db_name_override app \
 -init_populate_metadata \
 > $VTDATAROOT/$tablet_dir/vttablet.out 2>&1 &

vttablet \
 $TOPOLOGY_FLAGS \
 -logtostderr \
 -tablet-path "zone1-0000000201" \
 -init_keyspace app \
 -init_shard 0 \
 -init_tablet_type replica \
 -port 15201 \
 -grpc_port 16201 \
 -service_map 'grpc-queryservice,grpc-tabletmanager,grpc-updatestream' \
 -db_host 35.195.211.151 \
 -db_port 10101 \
 -db_repl_user root \
 -db_repl_password password \
 -db_filtered_user root \
 -db_filtered_password password \
 -db_app_user root \
 -db_app_password password \
 -db_dba_user root \
 -db_dba_password password \
 -init_db_name_override app \
 -init_populate_metadata \
 > $VTDATAROOT/$tablet_dir/vttablet.out 2>&1 &

Mark the first vttablet as master:

vtctlclient InitShardMaster -force app/0 zone1-200
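To verify that both tablets registered in the topology and the master was set, list them (ListAllTablets is a standard vtctlclient command):

vtctlclient ListAllTablets zone1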

Create a new keyspace for resharding:

vtctl $TOPOLOGY_FLAGS CreateKeyspace -sharding_column_name=chat_id chat_message

Create new vttablets for a single shard:

for i in 300 301; do
 CELL=zone1 TABLET_UID=$i ./scripts/mysqlctl-up.sh
 CELL=zone1 KEYSPACE=chat_message TABLET_UID=$i ./scripts/vttablet-up.sh
done

Mark the first vttablet as master:

vtctlclient InitShardMaster -force chat_message/0 zone1-300

Move the chat_message table:

vtctlclient MoveTables -workflow=app2chat_message app chat_message '{"chat_message":{}}'

Show the difference between the source and the target:

vtctlclient VDiff chat_message.app2chat_message

Switch read and write operations without downtime:

vtctlclient SwitchReads -tablet_type=rdonly chat_message.app2chat_message
vtctlclient SwitchReads -tablet_type=replica chat_message.app2chat_message

vtctlclient SwitchWrites chat_message.app2chat_message

Switch the application's database connection credentials used for the chat_message table over to VTGate.
VTGate credentials:

CHAT_MYSQL_HOST=34.66.217.5
CHAT_MYSQL_PORT=15306
CHAT_MYSQL_USER=mysql_user
CHAT_MYSQL_PASSWORD=mysql_password
CHAT_MYSQL_ROOT_PASSWORD=mysql_password
CHAT_MYSQL_DB=chat_message
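Before switching the application over, the VTGate endpoint can be sanity-checked with a plain MySQL client using the same credentials (host, port and user taken from the variables above):

mysql -h 34.66.217.5 -P 15306 -u mysql_user -p -e "SHOW DATABASES;"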

Drop source table:

vtctlclient DropSources chat_message.app2chat_message

The application now uses the VTGate connection for all operations on the chat_message table.

Resharding from shard 0 to shards -80 and 80- without downtime

Create new vttablets for shards -80 and 80-:

for i in 400 401; do
 CELL=zone1 TABLET_UID=$i ./scripts/mysqlctl-up.sh
 SHARD=-80 CELL=zone1 KEYSPACE=chat_message TABLET_UID=$i ./scripts/vttablet-up.sh
done

vtctlclient InitShardMaster -force chat_message/-80 zone1-400

for i in 500 501; do
 CELL=zone1 TABLET_UID=$i ./scripts/mysqlctl-up.sh
 SHARD=80- CELL=zone1 KEYSPACE=chat_message TABLET_UID=$i ./scripts/vttablet-up.sh
done

vtctlclient InitShardMaster -force chat_message/80- zone1-500

Create and apply a VSchema for the chat_message table. The sharding function is reverse_bits.

echo '{
    "sharded": true,
    "vindexes": {
      "hash_f": {
        "type": "reverse_bits"
      }
    },
    "tables": {
      "chat_message": {
        "column_vindexes": [
          {
            "column": "chat_id",
            "name": "hash_f"
          }
        ]
      },
      "/.*": {
        "column_vindexes": [
          {
            "column": "chat_id",
            "name": "hash_f"
          }
        ]
      }
    }
}' > chat_vschema.json

vtctl $TOPOLOGY_FLAGS ApplyVSchema -vschema_file=chat_vschema.json chat_message
rm -f chat_vschema.json
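To confirm the VSchema was stored as expected, it can be read back (GetVSchema is a standard vtctl command):

vtctl $TOPOLOGY_FLAGS GetVSchema chat_message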

Reload the keyspace schema:

vtctlclient ReloadSchemaKeyspace -concurrency=10 chat_message

Run resharding:

vtctlclient Reshard chat_message.chat2chat '0' '-80,80-'

Show the difference between the source and the target:

vtctlclient VDiff chat_message.chat2chat

Switch read and write operations without downtime:

vtctlclient SwitchReads -tablet_type=rdonly chat_message.chat2chat
vtctlclient SwitchReads -tablet_type=replica chat_message.chat2chat

vtctlclient SwitchWrites chat_message.chat2chat

Delete the source shard:

vtctlclient DeleteShard -recursive chat_message/0
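After the cleanup, the keyspace should contain only the new shards; this can be checked with FindAllShardsInKeyspace:

vtctl $TOPOLOGY_FLAGS FindAllShardsInKeyspace chat_message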

Resharding from shard -80 to shards -40 and 40-80 without downtime

Create new vttablets for shards -40 and 40-80:

for i in 600 601; do
 CELL=zone1 TABLET_UID=$i ./scripts/mysqlctl-up.sh
 SHARD=-40 CELL=zone1 KEYSPACE=chat_message TABLET_UID=$i ./scripts/vttablet-up.sh
done

vtctlclient InitShardMaster -force chat_message/-40 zone1-600

for i in 700 701; do
 CELL=zone1 TABLET_UID=$i ./scripts/mysqlctl-up.sh
 SHARD=40-80 CELL=zone1 KEYSPACE=chat_message TABLET_UID=$i ./scripts/vttablet-up.sh
done

vtctlclient InitShardMaster -force chat_message/40-80 zone1-700

Run resharding:

vtctlclient Reshard chat_message.chat2chat-80 '-80' '-40,40-80'

Show the difference between the source and the target:

vtctlclient VDiff chat_message.chat2chat-80

Switch read and write operations without downtime:

vtctlclient SwitchReads -tablet_type=rdonly chat_message.chat2chat-80
vtctlclient SwitchReads -tablet_type=replica chat_message.chat2chat-80

vtctlclient SwitchWrites chat_message.chat2chat-80

Delete the source shard:

vtctlclient DeleteShard -recursive chat_message/-80

Tarantool Replication

  1. Start docker-compose:
docker-compose up -d
  2. Run the Tarantool console and execute the init script within it:
docker-compose exec tarantool console
dofile('/opt/tarantool/init.lua')
  3. Restart the replicator container so it reads the binlog and starts replication:
docker-compose restart tarantool-replicator
  4. Check the replicatord status:
docker-compose exec tarantool-replicator systemctl status replicatord
  5. Analyze the replicatord logs:
docker-compose exec tarantool-replicator tail -f /var/log/replicatord.log
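To spot-check that rows are actually landing in Tarantool, the console from step 2 can be reused; the space name user is an assumption based on the replicated MySQL table:

docker-compose exec tarantool console
box.space.user:len()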

ClickHouse

  1. Create a CSV file with the user table data, then remove the local SQL dump:
sudo mysqldump -h127.0.0.1 -P10100 -uroot -p --tz-utc --quick --fields-terminated-by=, --fields-optionally-enclosed-by=\" --fields-escaped-by=\  --tab="/var/lib/mysql-files/" app user
sudo rm -f /var/lib/mysql-files/user.sql
  2. Copy the CSV file into the ClickHouse directory, then remove the source file from the db container:
docker-compose exec db cat /var/lib/mysql-files/user.txt > clickhouse/dump/user.txt
docker-compose exec db rm -f /var/lib/mysql-files/user.txt
  3. Load the dump data into the ClickHouse user table:
docker-compose exec clickhouse bash
cat "/dump/user.txt" | clickhouse-client --max_partitions_per_insert_block=0 --query="INSERT INTO default.user FORMAT CSV"
