---
title: TiDB 2.1 GA Release Notes
category: Releases
---

# TiDB 2.1 GA Release Notes
On November 30, 2018, TiDB 2.1 GA was released. Compared with TiDB 2.0, this release brings great improvements in stability, performance, compatibility, and usability. See the following updates in this release.
- SQL Optimizer
    - Optimize the selection range of `Index Join` to improve the execution performance
    - Optimize the selection of the outer table for `Index Join` and use the table with the smaller estimated Row Count as the outer table
    - Optimize the `TIDB_SMJ` Join Hint so that Merge Join can be used even when no proper index is available
    - Optimize the `TIDB_INLJ` Join Hint to support specifying the inner table of a Join (see the hint sketch after this list)
    - Optimize correlated subqueries, push down Filter, and extend the index selection range, improving the efficiency of some queries by orders of magnitude
    - Support using Index Hints and Join Hints in the `UPDATE` and `DELETE` statements
    - Support pushing down more functions: `ABS`/`CEIL`/`FLOOR`/`IS TRUE`/`IS FALSE`
    - Optimize the constant folding algorithm for the `IF` and `IFNULL` built-in functions
    - Optimize the output of the `EXPLAIN` statement and use a hierarchical structure to show the relationships between operators
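The following is a minimal sketch of the join hint syntax mentioned above; the tables `t1`, `t2` and column `a` are hypothetical, so verify the exact hint semantics against the TiDB 2.1 documentation:

```sql
-- Request a merge join even when no suitable index exists.
SELECT /*+ TIDB_SMJ(t1, t2) */ * FROM t1 JOIN t2 ON t1.a = t2.a;

-- Request an index join; per this release note, the table named in
-- TIDB_INLJ designates the inner table of the join.
SELECT /*+ TIDB_INLJ(t2) */ * FROM t1 JOIN t2 ON t1.a = t2.a;
```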
- SQL Executor
    - Refactor all the aggregation functions and improve the execution efficiency of the `Stream` and `Hash` aggregation operators
    - Implement the parallel `Hash Aggregate` operator and improve the computing performance by 350% in some scenarios
    - Implement the parallel `Project` operator and improve the performance by 74% in some scenarios
    - Read the data of the inner table and outer table of `Hash Join` concurrently to improve the execution performance
    - Optimize the execution speed of the `REPLACE INTO` statement and increase the performance by nearly 10 times
    - Optimize the memory usage of the time data type and decrease it by 50%
    - Optimize the point select performance and improve the Sysbench point select result by 60%
    - Improve the performance of TiDB on inserting or updating wide tables by 20 times
    - Support configuring the memory upper limit of a single statement in the configuration file
    - Optimize the execution of Hash Join: if the Join type is Inner Join or Semi Join and the inner table is empty, return the result without reading data from the outer table
    - Support using the `EXPLAIN ANALYZE` statement to check runtime statistics, including the execution time and the number of returned rows of each operator (see the sketch after this list)
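A minimal sketch of the new runtime statistics, assuming a hypothetical table `t`:

```sql
-- EXPLAIN ANALYZE actually executes the statement and reports, per operator,
-- the execution time and the number of returned rows alongside the plan.
EXPLAIN ANALYZE SELECT COUNT(*) FROM t WHERE a > 10;
```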
- Statistics
    - Support enabling auto `ANALYZE` statistics only during specified periods of the day
    - Support updating table statistics automatically according to query feedback
    - Support configuring the number of buckets in the histogram using the `ANALYZE TABLE WITH BUCKETS` statement (see the sketch after this list)
    - Optimize the Row Count estimation algorithm that uses histograms for mixed equality and range queries
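A hedged sketch of the statistics features above; the table `t`, the bucket count, and the time-window values are illustrative, and the auto-analyze variable names follow the TiDB documentation, so verify them against your version:

```sql
-- Build statistics for table t with an explicit number of histogram buckets.
ANALYZE TABLE t WITH 100 BUCKETS;

-- Restrict automatic ANALYZE to a period of the day
-- (variable names as documented by TiDB; values are examples).
SET GLOBAL tidb_auto_analyze_start_time = '01:00 +0000';
SET GLOBAL tidb_auto_analyze_end_time = '03:00 +0000';
```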
- Expressions
    - Support the following built-in functions (see the sketch after this list):
        - `json_contains`
        - `json_contains_path`
        - `encode`/`decode`
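A brief illustration of the JSON functions above; the literal values are hypothetical:

```sql
-- JSON_CONTAINS checks whether the candidate document is contained in the target.
SELECT JSON_CONTAINS('[1, 2, [3, 4]]', '1');                          -- returns 1
-- JSON_CONTAINS_PATH with 'one' checks whether at least one of the paths exists.
SELECT JSON_CONTAINS_PATH('{"a": 1, "b": 2}', 'one', '$.a', '$.c');   -- returns 1
```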
- Server
    - Support queuing locally conflicting transactions within a tidb-server instance to optimize the performance of conflicting transactions
    - Support Server Side Cursor
    - Add the HTTP API to:
        - Scatter the distribution of table Regions in the TiKV cluster
        - Control whether to enable the `general log`
        - Modify the log level online
        - Check the TiDB cluster information
    - Add the `auto_analyze_ratio` system variable to control the ratio of automatic Analyze
    - Add the `tidb_retry_limit` system variable to control the number of automatic transaction retries
    - Support using the `admin show slow` statement to obtain slow queries (see the sketch after this list)
    - Add the `tidb_slow_log_threshold` environment variable to set the threshold of the slow log automatically
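A hedged sketch of how the new variable and statement above can be used; the values shown are arbitrary:

```sql
-- Limit the number of automatic transaction retries.
SET GLOBAL tidb_retry_limit = 10;

-- Inspect recent slow queries recorded by the tidb-server instance.
ADMIN SHOW SLOW RECENT 10;
```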
- DDL
    - Support the parallel execution of the `ADD INDEX` statement and other statements, so that the time-consuming `ADD INDEX` operation does not block other operations
    - Optimize the execution speed of `ADD INDEX` and improve it greatly in some scenarios
    - Support the `select tidb_is_ddl_owner()` statement to facilitate deciding whether TiDB is the `DDL Owner` (see the sketch after this list)
    - Support the `ALTER TABLE FORCE` syntax
    - Support the `ALTER TABLE RENAME KEY TO` syntax
    - Add the table name and database name to the output of `admin show ddl jobs`
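For example (output depends on the cluster; shown only as a sketch):

```sql
-- Returns 1 when the connected tidb-server instance is currently the DDL owner.
SELECT tidb_is_ddl_owner();

-- The job list now includes the database name and table name for each DDL job.
ADMIN SHOW DDL JOBS;
```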
- Compatibility
    - Support more MySQL syntax
    - Make the `BIT` aggregate function support the `ALL` parameter
    - Support the `SHOW PRIVILEGES` statement
    - Support the `CHARACTER SET` syntax in the `LOAD DATA` statement (see the sketch after this list)
    - Support the `IDENTIFIED WITH` syntax in the `CREATE USER` statement
    - Support the `LOAD DATA IGNORE LINES` statement
    - The `Show ProcessList` statement returns more accurate information
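A hedged example of the extended `LOAD DATA` syntax; the file path, table name, and delimiters are hypothetical:

```sql
-- CHARACTER SET and IGNORE ... LINES are now accepted in LOAD DATA.
LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE t
    CHARACTER SET utf8mb4
    FIELDS TERMINATED BY ','
    IGNORE 1 LINES;
```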
- Optimize availability
    - Introduce the version control mechanism and support rolling update of the cluster compatibly
    - Enable `Raft PreVote` among PD nodes to avoid leader re-election when the network recovers after network isolation
    - Enable `raft learner` by default to lower the risk of unavailable data caused by machine failure during scheduling
    - TSO allocation is no longer affected by the system clock going backwards
    - Support the `Region merge` feature to reduce the overhead brought by metadata
- Optimize the scheduler
    - Optimize the processing of Down Store to speed up making up replicas
    - Optimize the hotspot scheduler to improve its adaptability when traffic statistics jitter
    - Optimize the start of Coordinator to reduce the unnecessary scheduling caused by restarting PD
    - Optimize the issue that the Balance Scheduler schedules small Regions frequently
    - Optimize Region merge to consider the number of rows within the Region
    - Improve the PD simulator to simulate the scheduling scenarios
- API and operation tools
    - Add the `GetPrevRegion` interface to support the TiDB reverse scan feature
    - Add the `BatchSplitRegion` interface to speed up TiKV Region splitting
    - Add the `GCSafePoint` interface to support distributed GC in TiDB
    - Add the `GetAllStores` interface to support distributed GC in TiDB
    - pd-ctl supports:
    - pd-recover does not need to provide the `max-replica` parameter
- Metrics
    - Add metrics related to `Filter`
    - Add metrics about the etcd Raft state machine
- Performance
    - Optimize the performance of Region heartbeat to reduce the memory overhead brought by heartbeats
    - Optimize the Region tree performance
    - Optimize the performance of computing hotspot statistics
- Coprocessor
    - Add more built-in functions
    - Add a Coprocessor `ReadPool` to improve the concurrency of processing requests
    - Fix the time function parsing issue and the time zone related issues
    - Optimize the memory usage for pushdown aggregation computing
- Transaction
    - Optimize the read logic and memory usage of MVCC to improve the performance of the scan operation; the full table scan performance doubles compared with TiDB 2.0
    - Fold continuous Rollback records to ensure the read performance
    - Add the `UnsafeDestroyRange` API to support collecting space for dropped tables/indexes
    - Separate the GC module to reduce the impact on writes
    - Add the `upper bound` support in the `kv_scan` command
- Raftstore
    - Improve the snapshot writing process to avoid RocksDB stalls
    - Add the `LocalReader` thread to process read requests and reduce the latency of read requests
    - Support `BatchSplit` to avoid large Regions brought by large amounts of writes
    - Support `Region Split` according to statistics to reduce the I/O overhead
    - Support `Region Split` according to the number of keys to improve the concurrency of index scans
    - Improve the Raft message process to avoid unnecessary delay brought by `Region Split`
    - Enable the `PreVote` feature by default to reduce the impact of network isolation on services
- Storage Engine
    - Fix the `CompactFiles` bug in RocksDB and reduce the impact on importing data using Lightning
    - Upgrade RocksDB to v5.15 to fix the possible issue of snapshot file corruption
    - Improve `IngestExternalFile` to avoid the issue that flush could block writes
- tikv-ctl
    - The `compact` command supports specifying whether to compact data in the bottommost level
- Fast full import of large amounts of data: TiDB-Lightning
- Support the new TiDB-Binlog
- TiDB 2.1 does not support downgrading to v2.0.x or earlier due to the adoption of the new storage engine
- Parallel DDL is enabled in TiDB 2.1, so clusters with a TiDB version earlier than 2.0.1 cannot upgrade to 2.1 using a rolling update. You can choose either of the following two options:
    - Stop the cluster and upgrade to 2.1 directly
    - Roll update to 2.0.1 or a later 2.0.x version, and then roll update to 2.1
- If you upgrade from TiDB 2.0.6 or earlier to TiDB 2.1, check whether there is any ongoing DDL operation, especially a time-consuming `Add Index` operation, because DDL operations slow down the upgrade process. If there is an ongoing DDL operation, wait for it to finish before performing the rolling update.