Insertions and Indexes
TokuDB’s fast indexing enables quick queries through the use of rich indexes, such as covering and clustering indexes. It is worth devoting some time to optimizing index definitions to get the best performance from TokuDB.
Clustering Secondary Indexes
One of the keys to exploiting TokuDB’s strength in indexing is to make use of clustering secondary indexes.
TokuDB allows a secondary key to be defined as a clustering key. This means that all of the columns in the table are clustered with the secondary key. The Percona Server parser and query optimizer support multiple clustering keys when the TokuDB engine is used. This means that the query optimizer will avoid primary clustered index reads and replace them with secondary clustered index reads in certain scenarios.
CREATE TABLE table_name (
PRIMARY KEY index_one (column_one),
CLUSTERING KEY index_two (column_two)) ENGINE = TokuDB;
CREATE TABLE table_name ( .., CLUSTERING KEY identifier (column list), ..
CREATE TABLE table_name ( .., UNIQUE CLUSTERING KEY identifier (column list), ..
CREATE TABLE table_name ( .., CLUSTERING UNIQUE KEY identifier (column list), ..
CREATE TABLE table_name ( .., CONSTRAINT identifier UNIQUE CLUSTERING KEY identifier (column list), ..
CREATE TABLE table_name ( .., CONSTRAINT identifier CLUSTERING UNIQUE KEY identifier (column list), ..
CREATE TABLE table_name (.. column type CLUSTERING [UNIQUE] [KEY], ..)
CREATE TABLE table_name (.. column type [UNIQUE] CLUSTERING [KEY], ..)
ALTER TABLE table_name ADD CLUSTERING INDEX identifier (column list), ..
ALTER TABLE table_name ADD UNIQUE CLUSTERING INDEX identifier (column list), ..
ALTER TABLE table_name ADD CLUSTERING UNIQUE INDEX identifier (column list), ..
ALTER TABLE table_name ADD CONSTRAINT identifier UNIQUE CLUSTERING INDEX identifier (column list), ..
ALTER TABLE table_name ADD CONSTRAINT identifier CLUSTERING UNIQUE INDEX identifier (column list), ..
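As an illustration of why clustering keys matter, consider a range query on the clustered column that also reads other columns; the table and column names below are hypothetical, not from the syntax above:

SELECT column_three FROM table_name
WHERE column_two BETWEEN 100 AND 200;

Because a clustering key on column_two stores all of the table’s columns with the secondary index, a query like this can be answered from the secondary index alone. With an ordinary (non-clustering) secondary index, each matching row would require an additional lookup into the primary index.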
TokuDB allows you to add indexes to an existing table and still perform inserts and queries on that table while the index is being created.
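As a sketch, a hot index build uses ordinary ALTER TABLE syntax; the index and column names here are illustrative:

ALTER TABLE table_name ADD CLUSTERING INDEX index_three (column_three);

While this statement runs, other sessions can continue to insert into and query table_name.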
Column Add, Delete, Expand, and Rename
TokuDB enables you to add or delete columns in an existing table, expand char, varchar, varbinary, and integer type columns in an existing table, or rename an existing column in a table with little blocking of other updates and queries. HCADER (Hot Column Addition, Deletion, Expansion, and Rename) typically blocks other queries with a table lock for no more than a few seconds. After that initial short-term table locking, the system modifies each row for the added, deleted, or expanded columns later, when the row is next brought into main memory from disk. For a column rename, all the work is done during those few seconds of locking; on-disk rows need not be altered.
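The hot schema changes described above use standard ALTER TABLE syntax; the table and column names below are illustrative:

ALTER TABLE table_name ADD COLUMN column_three INT;              -- hot add
ALTER TABLE table_name DROP COLUMN column_three;                 -- hot delete
ALTER TABLE table_name CHANGE column_two column_two VARCHAR(200); -- hot expand
ALTER TABLE table_name CHANGE column_two column_renamed INT;     -- hot rename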
TokuDB offers different levels of compression, which trade off between the amount of CPU used and the compression achieved. Standard compression uses less CPU but generally compresses at a lower level; high compression uses more CPU and generally compresses at a higher level. We have seen compression of up to 25x on customer data.
Compression in TokuDB occurs on background threads, which means that high compression need not slow down your database. Indeed, in some settings, we have seen higher overall database performance with high compression.
The compression of a table is set at creation time via its row format:

CREATE TABLE table_name (
column_one INT NOT NULL PRIMARY KEY,
column_two INT NOT NULL) ENGINE=TokuDB
ROW_FORMAT=row_format;
To change the compression of an existing table, use the following command:
ALTER TABLE table_name ROW_FORMAT=row_format;
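Here row_format is one of the TokuDB row formats, such as TOKUDB_DEFAULT, TOKUDB_FAST, TOKUDB_SMALL, TOKUDB_ZLIB, TOKUDB_QUICKLZ, TOKUDB_LZMA, or TOKUDB_UNCOMPRESSED (the exact set depends on the Percona Server version). For example, to switch a table to the high-compression LZMA format:

ALTER TABLE table_name ROW_FORMAT=TOKUDB_LZMA;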
Transactions and ACID-compliant Recovery
By default, TokuDB checkpoints all open tables regularly and logs all changes between checkpoints, so that after a power failure or system crash, TokuDB will restore all tables to their fully ACID-compliant state. That is, all committed transactions will be reflected in the tables, and any transaction not committed at the time of failure will be rolled back.
The default checkpoint period is every 60 seconds, measured from the start of one checkpoint to the start of the next. If a checkpoint requires more than the defined checkpoint period to complete, the next checkpoint begins immediately. The checkpoint period is also related to the frequency with which log files are trimmed, as described below. The user can induce a checkpoint at any time by issuing the FLUSH LOGS command. When a database is shut down normally, it is also checkpointed and all open transactions are aborted. The logs are trimmed at startup.
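For example, to induce a checkpoint manually, or to change the checkpoint period through the tokudb_checkpointing_period server variable:

FLUSH LOGS;
SET GLOBAL tokudb_checkpointing_period=120;  -- checkpoint every 120 seconds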
TokuDB has a system for tracking the progress of long-running statements, thereby eliminating the need to define triggers to track statement execution.
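Progress of such statements is reported alongside the standard process list, so no extra instrumentation is required:

SHOW PROCESSLIST;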
Migrating to TokuDB
To convert an existing table to use the TokuDB engine, run ALTER TABLE ... ENGINE=TokuDB.
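For example, assuming an existing table named example_table on another storage engine:

ALTER TABLE example_table ENGINE=TokuDB;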