For the moment, the only engines that fully support encryption are XtraDB and InnoDB. The Aria storage engine also supports encryption, but only for temporary tables.

MariaDB supports two different ways to encrypt data in InnoDB/XtraDB:

  1. Specified table encryption: Only tables which you create with PAGE_ENCRYPTION=1 are encrypted. This feature was created by eperi.
  2. Tablespace encryption: Everything is encrypted (including log files). This feature was created by Google and is based on their MySQL branch.

InnoDB Specified Table Encryption

Specified Table encryption means that you choose which tables to encrypt. This allows you to balance security with speed. To use table encryption, you have to:

  • Set the value of encryption-algorithm to the algorithm of your choice.
  • Load the file-key-management-plugin (or a similar key management plugin).
  • Define the location of the key file.
  • Create the keys.
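Putting these steps together, the relevant part of my.cnf might look something like this (a sketch: the algorithm value, plugin library name and key file path are examples, so check the documentation of your MariaDB version for the exact option names):

    [mysqld]
    # Encryption algorithm to use (example value)
    encryption-algorithm = aes_cbc
    # Load a key management plugin; here the file based one (library name may differ by version)
    plugin-load-add = file_key_management
    # Location of the key file (example path)
    file-key-management-filename = /etc/mysql/keys.txt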


Keys can be generated using OpenSSL, for example with the following commands:
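One way to do this (a sketch; any cryptographically secure random source will do) is to let OpenSSL print random hex strings of the required lengths:

    openssl rand -hex 16   # 128-bit IV, hex encoded
    openssl rand -hex 32   # 256-bit key, hex encoded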

The key file is a text file containing a key ID, the hex-encoded IV and the hex-encoded key. Example keys.txt using the values generated above:
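A minimal sketch of such a file, one key per line with the fields separated by semicolons (the values are placeholders, not real secrets, and the exact format may vary between versions):

    1;ba55d4ffe1af0a2dd426e2e0d5f8e450;c1b307ce4b7d654d0e545a54f6da4cba77adfe1a0b7e39a239a5a5511f4a38a2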

After this it is up to the database designer to select the tables that contain sensitive data for encryption. Encryption can be enabled for a table at creation time or later using ALTER TABLE. As an example:
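A sketch of what this could look like, assuming a PAGE_ENCRYPTION_KEY table option for selecting a key ID from the key file (table and column names are made up):

    -- encrypt the table at creation time, using key 1 from keys.txt
    CREATE TABLE customer (
      id INT NOT NULL PRIMARY KEY,
      name VARCHAR(80),
      ssn VARCHAR(16)
    ) ENGINE=InnoDB PAGE_ENCRYPTION=1 PAGE_ENCRYPTION_KEY=1;

    -- encrypt an existing table afterwards, using key 2
    ALTER TABLE invoice PAGE_ENCRYPTION=1 PAGE_ENCRYPTION_KEY=2;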

With table encryption the keys themselves currently cannot be rotated, but the key a table uses can be changed with ALTER TABLE. If no key identifier is provided, a default key is used. The default key can be set either in my.cnf,
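for example like this (a sketch; the variable name innodb_default_page_encryption_key is an assumption based on the 10.1 preview and may differ in your version):

    [mysqld]
    innodb_default_page_encryption_key = 5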

or dynamically using a global setting:
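Assuming the same variable name as above, a sketch would be:

    SET GLOBAL innodb_default_page_encryption_key = 5;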

The default key is then used whenever no key identifier is given for the table, e.g.
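in a statement like the following sketch, where PAGE_ENCRYPTION=1 is set without an explicit key (table and column names are made up):

    CREATE TABLE audit_log (
      id INT NOT NULL PRIMARY KEY,
      payload BLOB
    ) ENGINE=InnoDB PAGE_ENCRYPTION=1;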

InnoDB Tablespace Encryption

With tablespace encryption, all InnoDB tables are encrypted. Additionally, you may encrypt InnoDB log files, Aria tables (ROW_FORMAT=PAGE) and Aria temporary tables. To use tablespace encryption, you have to:

  • Set the value of encryption-algorithm to the algorithm of your choice.
  • Load the example-key-management-plugin (or a similar key management plugin).
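In my.cnf this might look roughly as follows (a sketch: the algorithm value and plugin library name are examples, the example key management plugin generates keys for testing only, and innodb-encryption-rotate-key-age is discussed under key rotation below):

    [mysqld]
    # Encryption algorithm (example value)
    encryption-algorithm = aes_ctr
    # Key management plugin; the example plugin is for testing only
    plugin-load-add = example_key_management
    # Re-encrypt pages whose key is older than this (see key rotation below)
    innodb-encryption-rotate-key-age = 1800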


With tablespace encryption, keys are not static; instead, so-called key rotation is used. During key rotation, the encryption key of a page is changed if the key used on that page is older than innodb-encryption-rotate-key-age seconds.

InnoDB Tablespace Scrubbing

Scrubbing means that there is a background process that regularly scans through all tables and upgrades the encryption keys for the pages. This happens either as part of purge (for non-compressed tables) or by scanning whole tablespaces (added to the key rotation threads). Purge is a type of garbage collection that InnoDB runs internally to improve performance. Configuration for this feature might look as follows:
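A sketch using the innodb-background-scrub-* family of variables (values are illustrative; check which of these options exist in your version):

    [mysqld]
    # Scrub both uncompressed and compressed data in the background
    innodb-background-scrub-data-uncompressed = 1
    innodb-background-scrub-data-compressed = 1
    # How often to check for work, and how old data may get before it is
    # scrubbed, both in seconds (illustrative values)
    innodb-background-scrub-data-check-interval = 3600
    innodb-background-scrub-data-interval = 604800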

Performance Impact

Encrypting tables or tablespaces naturally has some effect on the overall performance of the system. The size of that effect depends on the hardware, the workload and the encryption method used. The goal of this section is to give some indication of how much impact table encryption and tablespace encryption have compared with a setup that uses no encryption.

All experiments were conducted on an Intel Xeon E5-2690 @ 2.9GHz CPU with 2 sockets of 8 cores each and hyper-threading (32 hardware threads in total), running Linux 3.4.12 with 132G of main memory. The database is stored on a Fusion-io ioDrive2 Duo 2.41TB (Firmware v7.2.5, rev 110646, Driver 3.3.4 build 5833069). The database filesystem uses NVMFS, and all test logs and outputs are stored on a second ioDrive using EXT4. We use the On-Line Transaction Processing (OLTP) benchmark from Percona. This TPC-C-like workload involves a mix of five concurrent transaction types executed on-line or queued for deferred execution. The database is comprised of nine tables with a wide range of record and population sizes. Results are measured in terms of transactions per minute (tpmC). We use 1000 warehouses, producing a ~100G database, and a 50G buffer pool, so that the full database does not fit in the buffer pool. Additionally, we use only the InnoDB plugin as the storage engine. Finally, we use a 3 hour measurement time.

In the first graph we compare the resulting tpmC on normal InnoDB tables (unencrypted), on page-encrypted tables, and on tablespace encryption ("google full encrypted" in the graph), using passive key rotation and scrubbing (both intervals set larger than the test time).



MariaDB Corporation would like to thank eperi and Google for their contributions to MariaDB.


When you delete rows, for example, they are only marked as deleted, not physically removed from the indexes, and the freed space is not returned to the operating system for later reuse. The purge thread will physically delete the index keys and rows, but the freed space is still not returned to the operating system, and this can leave holes in the pages. If you have variable-length rows, this can lead to a situation where the free space cannot be used for new rows (if the new rows are larger than the old ones). You can use OPTIMIZE TABLE or ALTER TABLE <table> ENGINE=InnoDB to rebuild the table.

Unfortunately, running OPTIMIZE TABLE against an InnoDB table stored in the shared table-space file ibdata1 does two things:

  • Makes the table’s data and indexes contiguous inside ibdata1
  • Makes ibdata1 grow because the contiguous data and index pages are appended to ibdata1

New defragmentation

In MariaDB 10.1 we have merged Facebook's defragmentation code, prepared for MariaDB by Matt, Seong Uck Lee from Kakao. The only major difference from Facebook's code and Matt's patch is that in MariaDB we decided not to introduce new SQL literals or change server code. Instead we use the already existing OPTIMIZE TABLE, and all code changes are inside the InnoDB/XtraDB storage engines. To enable this new feature you need to add the following to my.cnf (this requirement exists to keep the original behavior of OPTIMIZE TABLE for those users who need it):
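A sketch using the innodb_defragment variable described below:

    [mysqld]
    innodb_defragment = 1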

This new defragmentation feature works in place, so no new tables are created and there is no need to copy data from the old table to a new one. Instead the feature loads n pages, tries to move records so that the pages become full of records, and frees any pages that are completely empty after the operation.

New configuration variables

  • innodb_defragment: Enable/disable InnoDB defragmentation. When set to FALSE, all ongoing defragmentation is paused and new defragmentation commands fail. Paused defragmentation commands resume when this variable is set to TRUE. Default value is FALSE.
  • innodb_defragment_n_pages: Number of pages considered at once when merging multiple pages to defragment. Range of 2–32 and default is 7.
  • innodb_defragment_stats_accuracy: Number of defragment stats changes after which the stats are written to persistent storage. Set to 0 to disable defragment stats tracking. Default 0.
  • innodb_defragment_fill_factor_n_recs: How many records' worth of free space defragmentation should leave on a page. This variable, together with innodb_defragment_fill_factor, is introduced so that defragmentation does not pack a page so full that the next insert causes a page split. Whichever of the two variables indicates the larger defragmentation gain is the one that takes effect. Range of 1–100, default 20.
  • innodb_defragment_fill_factor: A number between 0.7 and 1 that tells defragmentation how full it should fill a page. Default is 0.9. Values below 0.7 do not make much sense. This variable, together with innodb_defragment_fill_factor_n_recs, is introduced so that defragmentation does not pack a page so full that the next insert causes a page split. Whichever of the two variables indicates the larger defragmentation gain is the one that takes effect.
  • innodb_defragment_frequency: Do not defragment a single index more than this number of times per second. This controls the number of times the defragmentation thread can request an X_LOCK on an index. The defragmentation thread checks whether 1/innodb_defragment_frequency seconds have passed since it last worked on the index, and puts the index back in the queue if not enough time has passed. The actual frequency can only be lower than this number.

New status variables

  • Innodb_defragment_compression_failures: Number of defragment re-compression failures.
  • Innodb_defragment_failures: Number of defragment failures.
  • Innodb_defragment_count: Number of defragment operations.


After the CREATE TABLE and INSERT operations we can see the following from INFORMATION_SCHEMA:
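For example, one way to look at this (a sketch; the table name t1 is made up) is to count the pages the table currently occupies in the buffer pool via INNODB_BUFFER_PAGE:

    SELECT COUNT(*) AS pages
    FROM information_schema.INNODB_BUFFER_PAGE
    WHERE TABLE_NAME LIKE '%t1%';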

Now if we delete 3/4 of the records, that will leave holes in the pages, and then we run OPTIMIZE TABLE to execute the defragmentation:
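A sketch, again with made-up table and column names:

    -- remove three quarters of the rows, leaving partially filled pages
    DELETE FROM t1 WHERE id % 4 <> 0;

    -- with innodb_defragment=1 this defragments the table in place
    OPTIMIZE TABLE t1;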

After this we can see that some pages have been freed and some pages merged:
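Repeating the page count above and checking the new status variables gives an idea of what happened (a sketch):

    SELECT COUNT(*) AS pages
    FROM information_schema.INNODB_BUFFER_PAGE
    WHERE TABLE_NAME LIKE '%t1%';

    SHOW GLOBAL STATUS LIKE 'Innodb_defragment%';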


WebScaleSQL Git repository

Facebook Percona Live presentation:


Online DDL is a new feature in MariaDB 10.0. Online DDL is processed through the four tasks below, in sequence.

  1. InnoDB::ha_prepare_inplace_alter_table(..)
  2. InnoDB::ha_inplace_alter_table(..)
  3. InnoDB::ha_commit_inplace_alter_table(..)
  4. mysql_rename_table(..)

In phase 1 the InnoDB storage engine allocates a temporary memory buffer in which row changes made while the ALTER is in progress are logged. The initial size of this buffer is sort_buffer_size and it can grow up to innodb_online_alter_log_max_size. During phase 2 the thread processing the ALTER statement copies the old table's rows to the new, altered table. After this MariaDB takes an exclusive lock on the target table and applies the row log buffer to the new, altered table.

This introduces a new, unpredictable failure case: row log buffer overflow. The MariaDB server will roll back the ALTER statement if the row log buffer overflows. Thus there are the following problems:

  • If the row log buffer is too small, the ALTER statement is rolled back and you have wasted precious time and resources.
  • If the row log buffer is too big, you have wasted precious main memory that could be used e.g. for the buffer pool.
  • Currently, there is no way to see how much of the row log buffer is used and how much free space remains.
  • Currently, there is not even an estimate of how much work has been done and how much is still to be done.
  • Currently, the merge sort phase can also take a long time, and there is no progress information for it.


There are two improvements in MariaDB 10.1: new status variables and progress information for online DDL.

New status variables and progress info

MariaDB Corporation would like to thank Matt, Seong Uck Lee from Kakao for contributing a patch that has now been merged into MariaDB 10.1.

First of all, there are three new global status variables.

  • Innodb_onlineddl_rowlog_rows: Shows how many rows are stored in the row log buffer.
  • Innodb_onlineddl_rowlog_pct_used: Shows the row log buffer usage as a 5-digit integer (10000 means 100.00%).
  • Innodb_onlineddl_pct_progress: Shows the progress of the in-place ALTER TABLE. It might not be very accurate, because the in-place alter is highly dependent on the disk and buffer pool status.

Let's consider an example where we have an InnoDB table containing 150,000 rows and we try to add a new column.
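A sketch of the scenario (table and column names are made up); the ALTER runs in one session while the status variables are polled from another:

    -- session 1: online ALTER on a table with 150 000 rows
    ALTER TABLE t1 ADD COLUMN c_extra INT DEFAULT 0, ALGORITHM=INPLACE;

    -- session 2: watch the new status variables while the ALTER runs
    SHOW GLOBAL STATUS LIKE 'Innodb_onlineddl%';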

Concurrently, if we add a new row, update some rows and delete a few rows,
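for example with statements like these (a sketch; the column names are made up), the status variables change accordingly:

    -- session 3: concurrent DML while the ALTER is still running
    INSERT INTO t1 (id) VALUES (150001);
    UPDATE t1 SET c1 = c1 + 1 WHERE id < 1000;
    DELETE FROM t1 WHERE id > 149000 AND id < 149100;

    -- session 2: check the row log usage and progress again
    SHOW GLOBAL STATUS LIKE 'Innodb_onlineddl%';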

This means that at the time of the status statement there were 2003 rows in the row log, 23% of the memory allocated for the row log was in use, and the online ALTER TABLE had completed 56.77% of its work.

There is also additional progress output in the error log.

Merge sort progress

Additionally, the SHOW PROCESSLIST statement will output an estimate of the index merge sort progress, e.g.
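via MariaDB's Progress column (a sketch; the exact output depends on your client and version):

    -- the connection running the ALTER TABLE shows a Progress value
    -- that includes the merge sort phase
    SHOW PROCESSLIST;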