Klustron 1.1 Release Notes
During the 1.1 release cycle, the Klustron team developed a series of significant features that further enhance the core feature set and performance of Klustron 1.0. Specifically, we added a number of new features, improved the implementation of several existing features, fixed numerous bugs, and improved overall usability. Here we highlight the major features and key bug fixes of the 1.1 release cycle. For a complete list of all features implemented and bugs fixed during this cycle, please search our task system here.
1 New Features
1.1 Logical Backup and Restore
#1174 Logical backup and restore
Users can now perform logical backups of a Klustron cluster, covering either the entire cluster or specific databases, schemas, or tables. Logical backups allow users to restore the cluster or a specific database/table at any time (a brief example follows the use-case list below).
Common use cases for logical backups include:
a. Backing up a database or table before performing a potentially dangerous operation. If the operation fails, the damaged database/table can be quickly restored without impacting other databases/tables, and the cluster does not need to be taken offline.
b. Backing up different parts of the cluster's data at different frequencies, such as performing a full backup every 12 hours for high-value or frequently updated data, and backing up less frequently changing databases/tables once a week.
c. Exporting data from a Klustron cluster and importing it into another database system. Learn more.
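Klustron's own backup commands are described in the linked documentation. As a minimal sketch, assuming the compute node is PostgreSQL-wire-compatible (the host, port, user, database, and table names below are placeholders), a single table can be dumped and later restored with the stock PostgreSQL client tools driven from Python:

```python
import subprocess

# Placeholder connection details for a Klustron compute node; adjust to
# your deployment. Assumes the node speaks the PostgreSQL wire protocol,
# so the stock pg_dump / psql tools can produce and replay a logical dump.
HOST, PORT, USER, DB = "compute-node-1", "5432", "abc", "postgres"

# Dump one table (schema + data) to a plain-SQL file.
subprocess.run(
    ["pg_dump", "-h", HOST, "-p", PORT, "-U", USER,
     "-t", "public.orders", "-f", "orders_backup.sql", DB],
    check=True,
)

# Later, replay the dump to restore the table without touching the
# rest of the cluster.
subprocess.run(
    ["psql", "-h", HOST, "-p", PORT, "-U", USER,
     "-d", DB, "-f", "orders_backup.sql"],
    check=True,
)
```

Because the dump is plain SQL, it can also be replayed into another compatible database system, which is the export use case in item c.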
1.2 Table Repartition
#1176 Table repartitioning
Table repartitioning allows users to adjust the partitioning of a table when the current partitioning scheme is no longer suitable: for example, a non-partitioned table has grown too large and needs to be partitioned, or a partitioned table has shrunk and no longer needs partitioning. Throughout the operation, the table remains continuously available for reads and writes. Learn more.
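Klustron performs repartitioning with a built-in command that also replays concurrent changes so the table stays online; the exact syntax is in the linked documentation. For background only, the sketch below shows the classic copy-and-swap pattern that such features generalize, using plain PostgreSQL DDL over psycopg2 (all table and column names are hypothetical, and this naive version does not capture writes made during the copy):

```python
import psycopg2

# Placeholder connection details for a PostgreSQL-compatible compute node.
conn = psycopg2.connect(host="compute-node-1", port=5432,
                        user="abc", dbname="postgres")
cur = conn.cursor()

# 1. Create a shadow table with the desired new partitioning scheme.
cur.execute("""
    CREATE TABLE orders_new (
        id          bigint,
        customer_id bigint,
        amount      numeric
    ) PARTITION BY HASH (customer_id)
""")
for i in range(4):
    cur.execute(f"""
        CREATE TABLE orders_new_p{i} PARTITION OF orders_new
        FOR VALUES WITH (MODULUS 4, REMAINDER {i})
    """)

# 2. Copy the existing rows into the new layout.
cur.execute("INSERT INTO orders_new SELECT id, customer_id, amount FROM orders")

# 3. Swap the tables; everything above commits in one transaction.
cur.execute("ALTER TABLE orders RENAME TO orders_old")
cur.execute("ALTER TABLE orders_new RENAME TO orders")
conn.commit()
```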
1.3 Mirror Tables
For tables with small data volumes and infrequent updates, it is efficient to store a full copy on each storage shard: joins against such tables can then be pushed down to the shards, achieving optimal query performance. Learn more.
#218 mirror sharding
#976 replicate mirror tables to newly added shards
1.4 Resource Isolation
When multiple Klustron clusters are deployed on a group of servers and each cluster should receive a guaranteed share of computing resources while being prevented from consuming more than its limit, this feature can be used. Learn more.
#1089 support cgroup
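Klustron's management tooling applies these limits itself; the sketch below is only background on how a cgroup v2 CPU cap works on Linux. It assumes a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup with the cpu controller enabled, must run as root, and the group name and PID are placeholders:

```python
import os

# Hypothetical cgroup for one Klustron cluster's processes.
cg = "/sys/fs/cgroup/klustron_cluster_a"
os.makedirs(cg, exist_ok=True)

# Cap the group at 2 CPUs: 200000us of CPU time per 100000us period.
with open(os.path.join(cg, "cpu.max"), "w") as f:
    f.write("200000 100000")

# Move a process into the group; the cap then applies to it and to any
# children it forks afterwards.
with open(os.path.join(cg, "cgroup.procs"), "w") as f:
    f.write("12345")  # placeholder PID of a cluster process
```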
1.5 MySQL DML Private Syntax
To ensure good MySQL compatibility, so that various applications can work seamlessly with Klustron without any modification or recompilation, we have done extensive work: support for all MySQL private DML syntax, MySQL system-variable read/write syntax, the MySQL SHOW family of commands, MySQL data types, common MySQL system functions, and MySQL's backtick quoting mechanism. In particular, beyond the autocommit transactions supported in version 1.0, Klustron 1.1 adds MySQL's transaction error-handling behavior: when a statement fails, only that statement is rolled back rather than the entire transaction, and the client (application software) decides whether to roll back, continue executing, or commit the transaction.
For more details, please refer to Introduction to Klustron MySQL Connection Protocol, Summary of Unsupported MySQL Syntax and Features in Klustron, and Klustron Support for MySQL Private DML Syntax.
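As a minimal sketch of the MySQL-style error handling, assuming a Klustron compute node accepting MySQL-protocol connections (host, port, credentials, and the table t1 with a primary-key column id are placeholders), a failed statement can simply be caught and the transaction continued:

```python
import pymysql

# Placeholder connection details for a compute node's MySQL-protocol port.
conn = pymysql.connect(host="compute-node-1", port=3306,
                       user="abc", password="abc", database="postgres")
cur = conn.cursor()

conn.begin()
cur.execute("INSERT INTO t1 (id) VALUES (1)")
try:
    # Duplicate primary key (assuming t1.id is a primary key): with
    # MySQL-style error handling, only this statement is rolled back,
    # not the whole transaction.
    cur.execute("INSERT INTO t1 (id) VALUES (1)")
except pymysql.err.IntegrityError:
    pass  # the client decides: roll back, keep going, or commit

conn.commit()  # the first INSERT still takes effect
```

Under PostgreSQL's default behavior the whole transaction would be aborted after the failed INSERT; with MySQL-style handling the first INSERT survives the commit.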
#38 support locking clauses in select stmts SELECT FOR UPDATE, SELECT LOCK IN SHARE MODE
#644 More MySQL private DML grammars
#930 Support mysql syntax "on update current_timestamp" for timestamp[tz] type
#944 Multi-table update and delete statements
#945 Allow updating shard key
1.6 Parallel Query
#1062 server side cursor improvement
#1017 refresh snapshot
#943 parallel readonly query execution
#980 Parallel read only query execution
1.7 MySQL Compatibility
Klustron 1.1 supports the MySQL SHOW family of commands and common MySQL system functions, and in particular supports autocommit transactions together with the MySQL-style transaction error handling described in section 1.5: when a statement fails, only that statement is rolled back rather than the entire transaction, and the client decides whether to roll back, continue, or commit. A brief example follows the issue list below.
#902 Support widely used mysql functions
#915 Implement MySQL private functions
#992 show tables/schemas/database supports WHERE clause
#1000 Support for show table status
#1085 Support MySQL style error handling
#1110 In MySQL connections transform CREATE DATABASE to CREATE SCHEMA by default
#1084 lower case quoted symbol names
#1087 Add MySQL facilities
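For instance, the SHOW extensions in #992 and #1000 accept standard MySQL filtering. A minimal sketch over the same assumed MySQL-protocol connection as above (database and table names are placeholders):

```python
import pymysql

conn = pymysql.connect(host="compute-node-1", port=3306,
                       user="abc", password="abc", database="postgres")
cur = conn.cursor()

# SHOW TABLES exposes a Tables_in_<dbname> column that a WHERE clause
# can filter on, as in stock MySQL (#992).
cur.execute("SHOW TABLES WHERE Tables_in_postgres LIKE 'orders%'")
print(cur.fetchall())

# SHOW TABLE STATUS reports per-table metadata (#1000).
cur.execute("SHOW TABLE STATUS")
for row in cur.fetchall():
    print(row[0])  # the table name is the first column
```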
1.8 DBaaS
Internal features required by Klustron DBaaS; these are not used directly by end users.
#1128 usage stats by session groups
#1129 user accounts for internal connections
#1130 collect usage and consumption status
#1134 DBaaS service registry on public clouds
1.9 Single Shard Mode
#1175 Single shard mode
1.10 Others
#694 Cluster_mgr adds raft group members and information persistence
#95 support for range types in query
#551/552 Allow users to set fullsync config vars
#623 xpanel supports switching between RBR master and slave
2 Optimization and Improvement of Existing Functions
#225 Refuse client requests if local metadata lags too far behind
#485 do not add disabled nodes into computing nodes metadata tables
#609 create dedicated per-thread connection handler for admin connections
#776 Allow illegal chars in symbol names by back-quoting
#923 Use fullsync for metadata shard
#927 Record used port in server_nodes
#929 Refactor distributed transaction ID generation
#931 Allow accessing a new Klustron-server node after it has replayed all ddl logs
2.1 Backup and Restore Optimization
#481 Consider slave node status when selecting source storage node for backup
#940 Record current shard information in metadata cluster when cluster's shard is added or removed
#941 When rolling back, refer to historical event timeline for shard topology changes
#952 Remember next sequence values to start reserve
#996 Backup MetadataCluster
2.2 Scaling Optimization
#1032 Optimize the Catching-up phase during table moving - XA trx id
#1033 replicating too many irrelevant events
#1036 Fullsync per channel fsync ack setting
#1080 Recording the binlog state (filename,pos) before reroute table during scale out
#1113 A statement to clean up binlog dump filters on master storage instance
3 Important Bug Fixes
#760 Generate ID number for autoinc column when given NULL value
#939 MySQL and flashback crash for rbr one-shard testing
#958 Random sorting issues
#959 Issues related to creating indexes
#972 Coredump of computing node caused by partition attachment
#975 DDL applier process replayed DDL that was currently executing on the node
#995 SQLsmith triggered core when calling pg_rotate_logfile() in join
#1122 Server side cursor bug fixes