DNM: TiUP Cluster UI #706

Closed
wants to merge 147 commits into from
c33c6a2
test listener
baurine Aug 21, 2020
331db68
add vagrant for test
baurine Aug 21, 2020
e293bcc
update vagrant config
baurine Aug 21, 2020
4bba02f
add comments
baurine Aug 21, 2020
22909d6
calc progress
baurine Aug 22, 2020
1d0af1f
move web to tiup
baurine Aug 22, 2020
e8698b0
run deploy in a routine
baurine Aug 22, 2020
e3325df
update Deploy parameter
baurine Aug 22, 2020
3be76e2
move web ui
baurine Aug 22, 2020
247f532
list cluster frontend
baurine Aug 22, 2020
2448965
show cluster detail
baurine Aug 22, 2020
79dd983
destroy cluster
baurine Aug 22, 2020
48325fa
deploy status
baurine Aug 22, 2020
ad54704
refine
baurine Aug 22, 2020
99dd507
revert
baurine Aug 22, 2020
886306f
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Aug 22, 2020
f6bd841
refine api
baurine Aug 22, 2020
daf6b10
refine
baurine Aug 22, 2020
94f4c19
refine
baurine Aug 22, 2020
1988e58
save cluster name and tidb version
baurine Aug 22, 2020
29b1875
get cluster topo
baurine Aug 22, 2020
1742586
show cluster topo
baurine Aug 22, 2020
bdf27ef
refine
baurine Aug 22, 2020
00d770f
update generate topo
baurine Aug 22, 2020
5e011c1
fix missed error
baurine Aug 22, 2020
5b86dd7
set global login options
baurine Aug 24, 2020
9ce7594
refine
baurine Aug 24, 2020
2dce800
start and stop cluster
baurine Aug 24, 2020
89d74e1
scale in cluster
baurine Aug 24, 2020
263e30c
refine
baurine Aug 24, 2020
6e981e7
revert vagrantfile
baurine Aug 25, 2020
ce5e697
prepare scale out
baurine Aug 25, 2020
d95335f
refine
baurine Aug 25, 2020
457f487
record scale out status
baurine Aug 25, 2020
43cb6b5
scale out frontend
baurine Aug 25, 2020
6c6bd5a
refine
baurine Aug 25, 2020
7042039
refine
baurine Aug 25, 2020
23f072e
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Aug 25, 2020
75a6f18
refine, only show started steps
baurine Aug 25, 2020
aed394d
fix
baurine Aug 25, 2020
9a92132
refine save progress and status for scale out
baurine Aug 26, 2020
541e1cd
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Aug 26, 2020
7896fdc
refine
baurine Aug 27, 2020
820bd3e
Merge remote-tracking branch 'origin' into tiup-web
baurine Aug 27, 2020
023518e
update save deploy and scale out status logic
baurine Aug 27, 2020
d592948
update start cluster, record status as well
baurine Aug 27, 2020
299c066
update stop cluster, record status as well
baurine Aug 27, 2020
d3ad4f1
update scale in cluster, record status as well
baurine Aug 27, 2020
7000f93
update destroy cluster, record status as well
baurine Aug 27, 2020
cee5dcc
adjust router
baurine Aug 27, 2020
6a5400a
update status page
baurine Aug 27, 2020
c356c8b
refine
baurine Aug 27, 2020
18b657c
add scale out page
baurine Aug 27, 2020
2cb3203
move OperationStatus
baurine Aug 27, 2020
9394152
extract CompsManager
baurine Aug 27, 2020
4460b5d
edit and delete component
baurine Aug 27, 2020
d2a09cb
auto get topo
baurine Aug 27, 2020
fcbfc51
refine
baurine Aug 27, 2020
8cfce30
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Aug 28, 2020
b9a36ca
fix compile
baurine Aug 28, 2020
76b3d3c
revert
baurine Aug 28, 2020
2c2499a
wip
baurine Aug 28, 2020
810920f
wip
baurine Aug 28, 2020
2e54aba
handle location labels
baurine Aug 28, 2020
833de2e
sync scale out
baurine Aug 28, 2020
d2fe18f
wip
baurine Aug 28, 2020
bc86612
embed assets
baurine Aug 28, 2020
ff62f32
wip
baurine Aug 28, 2020
11d3ab9
wip
baurine Aug 28, 2020
d4ee68d
use hashrouter
baurine Aug 28, 2020
7a3a493
refine
baurine Aug 28, 2020
f26acef
refine
baurine Aug 29, 2020
eb126d1
fix routers
baurine Aug 29, 2020
4b0baa9
refine, add redirect
baurine Aug 29, 2020
cee57a4
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Aug 29, 2020
201ae0b
refine
baurine Aug 31, 2020
f9c0219
support assign password
baurine Aug 31, 2020
01149df
refine
baurine Aug 31, 2020
17137cd
extract types
baurine Aug 31, 2020
8bd384f
wip
baurine Aug 31, 2020
615e57b
refine
baurine Sep 1, 2020
fe2b3b6
refine
baurine Sep 1, 2020
bc05c60
support setting deploy dir and data dir
baurine Sep 1, 2020
dbe5688
set alias
baurine Sep 1, 2020
af4b9e0
wip
baurine Sep 1, 2020
c7c2e98
fix compile error
baurine Sep 1, 2020
960d9a1
refine
baurine Sep 1, 2020
af8b247
rename utils to apis
baurine Sep 1, 2020
9429443
refine
baurine Sep 1, 2020
0764786
refine
baurine Sep 1, 2020
f010f10
refine
baurine Sep 1, 2020
c3b82a1
refine
baurine Sep 1, 2020
aa746c4
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Sep 1, 2020
5af2a18
refine ListCluster
baurine Sep 1, 2020
ee6ea25
refine Display
baurine Sep 1, 2020
e713d2e
fix deploy progress
baurine Sep 1, 2020
d9eb614
refine
baurine Sep 1, 2020
0f5e2e3
support global dir
baurine Sep 2, 2020
066a2a0
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Sep 2, 2020
5231734
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Sep 9, 2020
68ea2f7
fix compile
baurine Sep 9, 2020
1d8b1e8
fix pd and alertmanger increasePorts method
baurine Sep 10, 2020
62f136e
fix crash when scale out
baurine Sep 10, 2020
13c7418
run cluster web ui by `tiup-cluster --ui`
baurine Sep 10, 2020
31900e3
fix
baurine Sep 11, 2020
a2749d0
rename web-ui to cluster-ui
baurine Sep 11, 2020
fd4a164
update embed assets
baurine Sep 11, 2020
1589547
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Sep 11, 2020
42b82b8
add missed react hook dependency
baurine Sep 11, 2020
7dc97c9
close eslint href check
baurine Sep 11, 2020
d57d330
dismiss react hook warning
baurine Sep 11, 2020
c2864ab
Merge branch 'master' into tiup-web
baurine Sep 14, 2020
d334a3a
Merge branch 'master' into tiup-web
lonng Sep 14, 2020
ab47793
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Sep 16, 2020
f82a768
add v4.0.6 option and support input the version manually
baurine Sep 16, 2020
ea0a9f4
Merge branch 'master' into tiup-web
baurine Sep 17, 2020
d4fdb0c
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Oct 10, 2020
eb717c7
fix compile error
baurine Oct 10, 2020
539c434
support modify cluster configuration by integrate tidb-dashboard
baurine Oct 10, 2020
0e93281
check whether configuration feature is enabled in responding tidb
baurine Oct 12, 2020
d5d90d2
support to supply tidb offline download address
baurine Oct 14, 2020
2c7a95d
support config mirror address when deploying
baurine Oct 14, 2020
6fc3ad6
move setting mirror address to a single page
baurine Oct 15, 2020
d57a6ff
fix set mirror failed bug and refine
baurine Oct 15, 2020
5aed034
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Oct 15, 2020
33550f9
add more entries for dashboard
baurine Oct 16, 2020
b71c7bb
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Oct 16, 2020
a6422ae
refine wording
baurine Oct 16, 2020
26b6c71
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Oct 21, 2020
ef8cf3a
ignore label check when deploying and scaling out
baurine Oct 21, 2020
03165c7
add changelog and a shell for build linux tiup cluster
baurine Oct 21, 2020
5c6b6c0
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Oct 21, 2020
ffa34e7
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Oct 26, 2020
6520a29
fix compile
baurine Oct 26, 2020
89f4c98
change location labels
baurine Oct 26, 2020
a3a4c17
support modify topo manually
baurine Oct 26, 2020
660054c
support config numa_node
baurine Oct 26, 2020
ce6490f
refine
baurine Oct 26, 2020
29975d3
add changelog
baurine Oct 26, 2020
7f75414
refine
baurine Nov 3, 2020
0fc71b9
update changelog
baurine Nov 4, 2020
54545a1
support config arch for machine
baurine Nov 4, 2020
bb5a48a
update CHANGELOG
baurine Nov 4, 2020
83e22f9
refine cluster topo table
baurine Nov 4, 2020
b456ce1
Merge remote-tracking branch 'origin/master' into tiup-web
baurine Nov 12, 2020
307be0b
audit for web commands
baurine Nov 13, 2020
27e1bca
update changelog
baurine Nov 13, 2020
5 changes: 4 additions & 1 deletion Makefile
@@ -32,7 +32,7 @@ include ./tests/Makefile
# Build TiUP and all components
build: tiup components

components: playground client cluster dm bench server
components: playground client cluster dm bench web server

tiup:
$(GOBUILD) -ldflags '$(LDFLAGS)' -o bin/tiup
@@ -58,6 +58,9 @@ doc:
err:
$(GOBUILD) -ldflags '$(LDFLAGS)' -o bin/tiup-err ./components/err

web:
$(GOBUILD) -ldflags '$(LDFLAGS)' -o bin/tiup-web ./components/web

server:
$(GOBUILD) -ldflags '$(LDFLAGS)' -o bin/tiup-server ./server

2 changes: 1 addition & 1 deletion components/cluster/command/display.go
@@ -58,7 +58,7 @@ func newDisplayCmd() *cobra.Command {
return displayDashboardInfo(clusterName)
}

err = manager.Display(clusterName, gOpt)
_, err = manager.Display(clusterName, gOpt)
if err != nil {
return perrs.AddStack(err)
}
3 changes: 2 additions & 1 deletion components/cluster/command/list.go
@@ -22,7 +22,8 @@ func newListCmd() *cobra.Command {
Use: "list",
Short: "List all clusters",
RunE: func(cmd *cobra.Command, args []string) error {
return manager.ListCluster()
_, err := manager.ListCluster()
return err
},
}
return cmd
2 changes: 1 addition & 1 deletion components/dm/command/display.go
@@ -40,7 +40,7 @@ func newDisplayCmd() *cobra.Command {

clusterName = args[0]

err := manager.Display(clusterName, gOpt)
_, err := manager.Display(clusterName, gOpt)
if err != nil {
return perrs.AddStack(err)
}
3 changes: 2 additions & 1 deletion components/dm/command/list.go
@@ -22,7 +22,8 @@ func newListCmd() *cobra.Command {
Use: "list",
Short: "List all clusters",
RunE: func(cmd *cobra.Command, args []string) error {
return manager.ListCluster()
_, err := manager.ListCluster()
return err
},
}
return cmd
133 changes: 133 additions & 0 deletions components/web/main.go
@@ -0,0 +1,133 @@
package main

import (
"fmt"
"io/ioutil"
"net/http"

"github.com/gin-gonic/gin"
"github.com/pingcap/tiup/pkg/cluster"
operator "github.com/pingcap/tiup/pkg/cluster/operation"
"github.com/pingcap/tiup/pkg/cluster/spec"
cors "github.com/rs/cors/wrapper/gin"
)

var tidbSpec *spec.SpecManager
var manager *cluster.Manager

func main() {
if err := spec.Initialize("cluster"); err != nil {
panic("initialize spec failed")
}
tidbSpec = spec.GetSpecManager()
manager = cluster.NewManager("tidb", tidbSpec, spec.TiDBComponentVersion)

router := gin.Default()
router.Use(cors.AllowAll())
api := router.Group("/api")
{
api.GET("/clusters", clustersHandler)
api.GET("/clusters/:clusterName", clusterHandler)
api.DELETE("/clusters/:clusterName", destroyClusterHandler)

api.POST("/deploy", deployHandler)
api.GET("/deploy_status", deployStatusHandler)
}
_ = router.Run()
}

// DeployReq represents the request body of the deploy API
type DeployReq struct {
ClusterName string `json:"cluster_name"`
TiDBVersion string `json:"tidb_version"`
TopoYaml string `json:"topo_yaml"`
}

func deployHandler(c *gin.Context) {
var req DeployReq
if err := c.ShouldBindJSON(&req); err != nil {
_ = c.Error(err)
return
}

// create temp topo yaml file
tmpfile, err := ioutil.TempFile("", "topo")
if err != nil {
_ = c.Error(err)
return
}
defer tmpfile.Close()
_, _ = tmpfile.WriteString(req.TopoYaml)
topoFilePath := tmpfile.Name()
fmt.Println("topo file path:", topoFilePath)

// parse request parameters
// topoFilePath = "/Users/baurine/Codes/Work/tiup/examples/manualTestEnv/multiHost/topology.yaml"
identifyFile := "/Users/baurine/Codes/Work/tiup/examples/manualTestEnv/_shared/vagrant_key"
Review comment (Contributor): This should be changed to use a user-defined path, like -i in the cluster deploy subcommand.

go func() {
_ = manager.Deploy(
req.ClusterName,
req.TiDBVersion,
topoFilePath,
cluster.DeployOptions{
User: "vagrant",
IdentityFile: identifyFile,
},
nil,
true,
120,
Review comment (Contributor): These timeout arguments should also be defined by the user.

Reply (Contributor Author): Currently the frontend has no settings page for these options, so I simply use the default values. We could add a settings page later, depending on requirements.

5,
false,
Review comment (Contributor): Choosing the SSH implementation is also needed in some users' environments.

)
}()

c.JSON(http.StatusOK, gin.H{
"message": "ok",
})
}

func deployStatusHandler(c *gin.Context) {
status := manager.GetDeployStatus()
c.JSON(http.StatusOK, status)
}

func clustersHandler(c *gin.Context) {
clusters, err := manager.ListCluster()
if err != nil {
_ = c.Error(err)
return
}
c.JSON(http.StatusOK, clusters)
}

func clusterHandler(c *gin.Context) {
clusterName := c.Param("clusterName")
instInfos, err := manager.Display(clusterName, operator.Options{
Review comment (Contributor): We should honor the user's SSH implementation choice here as well.

SSHTimeout: 5,
OptTimeout: 120,
APITimeout: 300,
})
if err != nil {
_ = c.Error(err)
return
}
c.JSON(http.StatusOK, instInfos)
}

func destroyClusterHandler(c *gin.Context) {
clusterName := c.Param("clusterName")
err := manager.DestroyCluster(clusterName, operator.Options{
SSHTimeout: 5,
OptTimeout: 120,
APITimeout: 300,
}, operator.Options{}, true)

if err != nil {
_ = c.Error(err)
return
}

c.JSON(http.StatusOK, gin.H{
"message": "ok",
})
}
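As a usage sketch: deployHandler binds a DeployReq from the JSON body with c.ShouldBindJSON, so a client POSTs a payload shaped like the struct above to /api/deploy (router.Run() with no arguments listens on gin's default :8080). The cluster name, version, and topology below are placeholder values, not taken from the PR:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DeployReq mirrors the request body that deployHandler binds with
// c.ShouldBindJSON; the field names and JSON tags are taken from the diff.
type DeployReq struct {
	ClusterName string `json:"cluster_name"`
	TiDBVersion string `json:"tidb_version"`
	TopoYaml    string `json:"topo_yaml"`
}

// buildDeployPayload marshals a sample request. All values here are
// hypothetical placeholders for illustration.
func buildDeployPayload() string {
	req := DeployReq{
		ClusterName: "demo",
		TiDBVersion: "v4.0.4",
		TopoYaml:    "global:\n  user: tidb\n",
	}
	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	// This is the JSON a client would send as the body of POST /api/deploy.
	fmt.Println(buildDeployPayload())
}
```

Because the handler launches manager.Deploy in a goroutine and returns immediately, the client is expected to poll GET /api/deploy_status afterwards to track progress.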
2 changes: 2 additions & 0 deletions examples/manualTestEnv/.gitignore
@@ -0,0 +1,2 @@
.vagrant/
tiup-cluster-*.log
23 changes: 23 additions & 0 deletions examples/manualTestEnv/_shared/Vagrantfile.partial.pubKey.rb
@@ -0,0 +1,23 @@
Vagrant.configure("2") do |config|
ssh_pub_key = File.readlines("#{File.dirname(__FILE__)}/vagrant_key.pub").first.strip

config.vm.box = "hashicorp/bionic64"
config.vm.provision "shell", privileged: false, inline: <<-SHELL
sudo apt install -y zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo chsh -s /usr/bin/zsh vagrant
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
SHELL

config.vm.provision "shell", privileged: true, inline: <<-SHELL
echo "setting ulimit"
sudo echo "fs.file-max = 65535" >> /etc/sysctl.conf
sudo sysctl -p
sudo echo "* hard nofile 65535" >> /etc/security/limits.conf
sudo echo "* soft nofile 65535" >> /etc/security/limits.conf
sudo echo "root hard nofile 65535" >> /etc/security/limits.conf
sudo echo "root soft nofile 65535" >> /etc/security/limits.conf
SHELL
end

# ulimit ref: https://my.oschina.net/u/914655/blog/3067520
27 changes: 27 additions & 0 deletions examples/manualTestEnv/_shared/vagrant_key
@@ -0,0 +1,27 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn
NhAAAAAwEAAQAAAQEAxboZzYumqNoVOQ/hKKhIZHxNhf5tmnkLZry8i6Xur4FPLDiRxos/
xVVDx0ynTPOyQVVaXtNxZnAmbR4HuNBzRvNoklwSXazt5YgWeiKCHtPpKFt3PJeE2cn6FJ
p6F6qFChG0NSPbZxJWWxv4noX0U3PLKgHNIehYK2Fu0E6plhSZazzJEVWapwo9d7aGnAsz
bBCd5TNZ5ogrXn+3bSFcdCbAfWOwYg54a+PzTQlzgt6JmhlEjpFfPhhpBW92pQXxmQ2c17
iPCbA8G++FiaEwA5teex8k1+HzmHf7YjyhPr+I67EzEiIueJg2+0PYbM1p06S8kVTNDXsf
0eJx4Dr8qQAAA9iFPcpVhT3KVQAAAAdzc2gtcnNhAAABAQDFuhnNi6ao2hU5D+EoqEhkfE
2F/m2aeQtmvLyLpe6vgU8sOJHGiz/FVUPHTKdM87JBVVpe03FmcCZtHge40HNG82iSXBJd
rO3liBZ6IoIe0+koW3c8l4TZyfoUmnoXqoUKEbQ1I9tnElZbG/iehfRTc8sqAc0h6FgrYW
7QTqmWFJlrPMkRVZqnCj13toacCzNsEJ3lM1nmiCtef7dtIVx0JsB9Y7BiDnhr4/NNCXOC
3omaGUSOkV8+GGkFb3alBfGZDZzXuI8JsDwb74WJoTADm157HyTX4fOYd/tiPKE+v4jrsT
MSIi54mDb7Q9hszWnTpLyRVM0Nex/R4nHgOvypAAAAAwEAAQAAAQBtk0+/YDgQ9SKzx8AQ
xwmvXk+cBT76T0BpRAj9HwziiDe3GvZ2YC8MDc+NAEbq11ae7E0zpdv/WAGDkRPYcPShij
0Wdx3aef4wqLVEJCGWMfvRWLcAhjuiclM73cvxl5c42EzU8jUhrsDapuql9zhKky4w7mSe
+OL7z3gYyq8isvcQMe+1eXJqiv27AJJfAir+rLJZO/gDW36hOowhnZxYRlVYPgZ8GwetxD
VdCrgwUgR/2HYmbXYdVxI0PwswGc6rEqs5XXOYRzwvPTvRKdD3J5MxmsvJljT7FMr4kCLT
X1+aWysk1cgAUIdzzwQL8DLE/N9PFFYdZyNBkZMgedl9AAAAgCtP3F8XYFR18gQLPGLDyQ
FFg8+JHN9b/yIg2pymC6SI8qEp+GnuEK9IKhqh/Uw14KEKcs/9sgbZo0K9uTBTDG5F6Qmp
hADVbWXJ/97Xeya6kH2Sa56UKLCQ/uQWBKwLQ0auU/qwxATIZowh31XUXjzVBg6wgUjT7Q
+3Fk1zGYxnAAAAgQD5USIRUNwkI+htv+f1g8QdmrFAGymcGEkXAixKvBTon9cWQb2iyiK+
2IO8EwFwRdL5kw2foILCnlp/4FevfxHU7wTcoFEp3PItUlcxYqO8vY2VCZ913oNLKBIt9p
uFfG2BZM5szMRNMh0svelu61FePsfN5Z8J0ltPrS8UKB95ywAAAIEAywbyNbjz1AxEjWIX
2Vbk4/MjQyjui8Wi7H0F+LDWyMfPJHzhnbr79Z/lIZmDAo++3EYU9J9s0C+wJ6vXGK+gvC
7e5qGfT/0J0DwBfLbpeTdDELCa/LmfLWVPzZ9Q+9Fq0AjmW9YXFZ/+qT9xfY1v9XfztFRS
xR1iXJ42q6ff5NsAAAAeYnJlZXpld2lzaEBCcmVlemV3aXNoTUJQLmxvY2FsAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
1 change: 1 addition & 0 deletions examples/manualTestEnv/_shared/vagrant_key.pub
@@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFuhnNi6ao2hU5D+EoqEhkfE2F/m2aeQtmvLyLpe6vgU8sOJHGiz/FVUPHTKdM87JBVVpe03FmcCZtHge40HNG82iSXBJdrO3liBZ6IoIe0+koW3c8l4TZyfoUmnoXqoUKEbQ1I9tnElZbG/iehfRTc8sqAc0h6FgrYW7QTqmWFJlrPMkRVZqnCj13toacCzNsEJ3lM1nmiCtef7dtIVx0JsB9Y7BiDnhr4/NNCXOC3omaGUSOkV8+GGkFb3alBfGZDZzXuI8JsDwb74WJoTADm157HyTX4fOYd/tiPKE+v4jrsTMSIi54mDb7Q9hszWnTpLyRVM0Nex/R4nHgOvyp
36 changes: 36 additions & 0 deletions examples/manualTestEnv/multiHost/README.md
@@ -0,0 +1,36 @@
# multiHost

TiDB, PD, TiKV, and TiFlash each on different hosts.

## Usage

1. Start the box:

```bash
vagrant up
```

Review comment (Contributor): I'd prefer reusing the docker/up.sh utilities with docker-compose instead of introducing Vagrant into the repo. Or even better, find some way to make the tests independent of any virtualization.

1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
tiup cluster deploy multiHost v4.0.4 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box:

```bash
tiup cluster start multiHost
```

1. Start TiDB Dashboard server:

```bash
bin/tidb-dashboard --pd http://10.0.1.11:2379
```

## Cleanup

```bash
tiup cluster destroy multiHost -y
vagrant destroy --force
```
14 changes: 14 additions & 0 deletions examples/manualTestEnv/multiHost/Vagrantfile
@@ -0,0 +1,14 @@
load "#{File.dirname(__FILE__)}/../_shared/Vagrantfile.partial.pubKey.rb"

Vagrant.configure("2") do |config|
config.vm.provider "virtualbox" do |v|
v.memory = 1024
v.cpus = 1
end

(1..3).each do |i|
config.vm.define "node#{i}" do |node|
node.vm.network "private_network", ip: "10.0.1.#{i+10}"
end
end
end
42 changes: 42 additions & 0 deletions examples/manualTestEnv/multiHost/topology.yaml
@@ -0,0 +1,42 @@
global:
user: tidb
deploy_dir: tidb-deploy
data_dir: tidb-data

server_configs:
tikv:
server.grpc-concurrency: 1
raftstore.apply-pool-size: 1
raftstore.store-pool-size: 1
readpool.unified.max-thread-count: 1
readpool.storage.use-unified-pool: false
readpool.coprocessor.use-unified-pool: true
storage.block-cache.capacity: 256MB
raftstore.capacity: 10GB
pd:
replication.enable-placement-rules: true

pd_servers:
- host: 10.0.1.11
- host: 10.0.1.12
- host: 10.0.1.13

tikv_servers:
- host: 10.0.1.12

tidb_servers:
- host: 10.0.1.11
- host: 10.0.1.12
- host: 10.0.1.13

# tiflash_servers:
# - host: 10.0.1.14

grafana_servers:
- host: 10.0.1.11

monitoring_servers:
- host: 10.0.1.11

alertmanager_servers:
- host: 10.0.1.11
36 changes: 36 additions & 0 deletions examples/manualTestEnv/multiReplica/README.md
@@ -0,0 +1,36 @@
# multiReplica
Review comment (Contributor): Directory names should be lower case, e.g. manual-test-env.


Multiple TiKV nodes with different labels.

## Usage

1. Start the box:

```bash
vagrant up
```

1. Use [TiUP](https://tiup.io/) to deploy the cluster to the box (only need to do it once):

```bash
tiup cluster deploy multiReplica v4.0.4 topology.yaml -i ../_shared/vagrant_key -y --user vagrant
```

1. Start the cluster in the box:

```bash
tiup cluster start multiReplica
```

1. Start TiDB Dashboard server:

```bash
bin/tidb-dashboard --pd http://10.0.1.20:2379
```

## Cleanup

```bash
tiup cluster destroy multiReplica -y
vagrant destroy --force
```
10 changes: 10 additions & 0 deletions examples/manualTestEnv/multiReplica/Vagrantfile
@@ -0,0 +1,10 @@
load "#{File.dirname(__FILE__)}/../_shared/Vagrantfile.partial.pubKey.rb"

Vagrant.configure("2") do |config|
config.vm.provider "virtualbox" do |v|
v.memory = 4 * 1024
v.cpus = 2
end

config.vm.network "private_network", ip: "10.0.1.20"
end