Introduction

Evaluating the performance of database systems is a demanding task, with many hard choices to be made, for example:

  • Which operating system and version to use
  • Which configuration to use
  • Which benchmarks to use, and how long the warm-up and measurement periods should be
  • Which test setups to use
  • Which version of the database management system to use
  • Which storage engine to use

While performance evaluation mostly consumes machine time, there is still a lot of hard work for the human monitoring the tests. In this blog post we have made the following choices:

  • We’re using a 32-core Intel Xeon E5-2690 @ 2.9GHz setup running Linux 3.4.12 with 132G of main memory. The database is stored on a Fusion-IO ioDrive2 Duo 2.41TB (firmware v7.2.5, rev 110646, driver 3.3.4 build 5833069). The database file system uses NVMFS, and all test logs and outputs are stored on EXT4.
  • We have selected the following benchmarks:
    • LinkBench [1] which is based on traces from production databases that store social graph data at Facebook, a major social network. LinkBench provides a realistic and challenging test for persistent storage of social and web service data.
    • Percona provides an on-line transaction processing (OLTP) benchmark that implements a TPC Benchmark C [2]-like workload simulating a warehouse application. Approved in July 1992, TPC-C is more complex than earlier OLTP benchmarks such as TPC-A because of its multiple transaction types, more complex database, and overall execution structure. It involves a mix of five concurrent transaction types of differing complexity, either executed on-line or queued for deferred execution. The database comprises nine types of tables with a wide range of record and population sizes. TPC-C performance is measured in transactions per minute (tpmC).
  • We are using LinkBench with 10x database size, i.e. about 100G, and TPC-C with 1000 warehouses. The InnoDB buffer pool is set to 50G, so in both benchmarks the database does not fit in main memory.
  • We selected alpha versions of MariaDB 10.1.0 and MySQL 5.7.4-labs-tplc because we are interested in how their development releases perform. We selected exactly these versions because both include support for FusionIO hardware: atomic writes [3,4] and page compression [5].
  • We decided to use the InnoDB storage engine on both servers tested.
  • We use FusionIO hardware atomic writes [3,4] on all tests.

Our reasons for these choices are the following:

  • We want to evaluate performance on modern hardware, not on old, soon-to-be-deprecated hardware. The selected hardware is something that could realistically be acquired today for a new database server.
  • We do not want to evaluate the performance of corner cases, i.e. a single-threaded client, all data in main memory, a ridiculously high or low number of client threads, or read-only/write-only workloads. These cases are interesting for some applications, but not for all.
  • We want to use benchmarks that are well known and approved by the database community.
  • XtraDB is not available for MySQL and we have evidence that XtraDB does not perform that well on FusionIO hardware.
  • We use atomic writes on all tests because they provide better performance, and we have therefore disabled the doublewrite buffer: when using atomic writes it is unnecessary.
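As a concrete sketch, the last two choices translate into two configuration switches. The option names below are the MariaDB spellings; the MySQL labs build may name its equivalents differently:

```ini
# Sketch using MariaDB option names; verify the equivalents on the MySQL labs build.
innodb_use_atomic_writes = 1   # let NVMFS/FusionIO guarantee atomic page writes
innodb_doublewrite       = 0   # redundant once page writes are atomic
```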

MySQL/MariaDB configuration

We can’t use exactly the same configuration file (my.cnf) for both MariaDB and MySQL because some parameters are named differently. Below, the common variables and their values are shown first, followed by the MySQL-specific parameters under the tag oracle and the MariaDB-specific parameters under the tag mariadb. All parameters were selected to be fair and representative, but we do not claim that they are the optimal values for this hardware or these benchmarks.
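The full my.cnf listing did not survive in this copy of the post. As a partial reconstruction, the following sketch collects only the values stated elsewhere in the post and in the comment thread below; it is illustrative, not the complete file:

```ini
[mysqld]
# Values taken from the text and comments; everything else was left at defaults.
innodb_buffer_pool_size      = 50G
innodb_buffer_pool_instances = 16
innodb_log_buffer_size       = 256M
innodb_log_file_size         = 8G
innodb_log_files_in_group    = 2
innodb_thread_concurrency    = 32
innodb_flush_neighbors       = 0

# oracle (MySQL-only):
# innodb_page_cleaners       = 1

# mariadb (MariaDB-only):
# innodb_mtflush_threads     = 16
```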

Benchmark setup

As mentioned before, for LinkBench we use 10x database size, which creates a database of about 100G. The load command is as follows:
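The exact command is missing from this copy of the post; a typical LinkBench load invocation looks roughly like the sketch below, where the properties file name is an assumption and maxid1 inside that file has been scaled 10x to reach roughly 100G:

```shell
# Hypothetical sketch, not the author's exact command; the config file name is assumed,
# and maxid1 in the properties file is scaled 10x to produce the ~100G database.
./bin/linkbench -c config/LinkConfigMysql.properties -l
```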

The LinkBench measure phase uses 64 client threads with maxtime set to 86300 seconds, i.e. just under 24 hours, and the default warm-up of 180 seconds. The measure command is as follows:
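Again the exact command is missing here; with the 64 requester threads, maxtime, and warm-up configured in the properties file, the request (measure) phase is started roughly like this (file name assumed):

```shell
# Hypothetical sketch; requesters=64, maxtime=86300 and the 180 s warm-up are set in
# the properties file referenced by -c.
./bin/linkbench -c config/LinkConfigMysql.properties -r
```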

For TPC-C we use 1000 warehouses, thus the load command is as follows:
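The command itself is absent from this copy; with Percona’s tpcc-mysql tools, loading 1000 warehouses looks roughly like the sketch below (host, database name, and credentials are placeholders):

```shell
# Hypothetical sketch; host, database and credentials are placeholders.
./tpcc_load -h 127.0.0.1 -d tpcc1000 -u root -p "" -w 1000
```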

We use the default number of threads, i.e. 64 client threads (unless otherwise mentioned). We run TPC-C with a time limit of 10800 seconds, i.e. 3 hours, and a 30-second warm-up, i.e. the measure command is as follows:
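The measure command is likewise missing from this copy; with tpcc-mysql, the run described above would be started roughly as follows (connection details are placeholders):

```shell
# Hypothetical sketch; -c is client connections, -r warm-up seconds, -l run time seconds.
./tpcc_start -h 127.0.0.1 -d tpcc1000 -u root -p "" -w 1000 -c 64 -r 30 -l 10800
```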

 

Performance results: LinkBench

Both servers include support for atomic writes [3,4] and page compression [5] on FusionIO hardware. Below we compare the operations (i.e. transactions) per second reported when using page compression:

LinkBench_measure

Both servers provide stable performance after the buffer pool is warm and full. On both servers, performance drops once the buffer pool is full and pages really have to be fetched from the FusionIO hardware. However, MariaDB provides clearly superior performance after that point. In this test we used the LZ4 compression method, which is provided by both servers. MariaDB provides about 30% better performance, and the difference at the end of the test is 5654 operations/second.

Both servers provide the following compression methods if available on the system:

 

MariaDB also provides support for the following compression methods if available on the system:
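The method lists themselves are missing from this copy of the post. As a reconstruction from the MariaDB 10.1 documentation rather than from the original post: both servers support zlib and LZ4, while MariaDB additionally documents LZO, LZMA, bzip2, and snappy, selected along these lines:

```ini
# Reconstructed from MariaDB documentation, not from the original post.
# MariaDB 10.1 values: none | zlib | lz4 | lzo | lzma | bzip2 | snappy
innodb_compression_algorithm = lz4
```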

Below we present the same results using uncompressed tables on both servers.

LinkBench_measure2

Both servers start with very good performance; however, performance slowly drops once the buffer pool is full and write amplification starts to slow down I/O. Write amplification (WA) occurs on flash memory and solid-state drives (SSDs) when the actual amount of data physically written is a multiple of the logical amount intended to be written. Because flash memory must be erased before it can be rewritten, these operations move (or rewrite) user data and metadata more than once. This multiplying effect increases the number of writes over the life of the SSD, shortening the time it can operate reliably, and also consumes bandwidth to the flash memory, which mainly reduces random write performance [6]. MariaDB provides about 8% better performance, and at the end of the test the difference is 2225 operations/second.

The next research point is row-compressed tables. Because MySQL and MariaDB contain the same, or almost the same, implementation, we do not expect any significant performance differences. Results below:

LinkBench_measure3

Results are almost identical, as expected, and the small differences seen are clearly within statistical variation. Finally, let’s combine all previous results to see the differences between uncompressed, row-compressed, and page-compressed tables.

LinkBench_measure4

As expected, the best performance is achieved using uncompressed tables. Note that this result does not contradict the results presented in [5], because here we use atomic writes and have disabled the doublewrite buffer in all tests. At the end of the measurement time, page-compressed performance is about 22% lower than uncompressed performance, a difference of 5500 operations/second. Similarly, row-compressed performance is about 68% lower than uncompressed performance and about 37% lower than page-compressed performance. The differences are 11800 operations/second between uncompressed and row-compressed, and 6568 operations/second between page-compressed and row-compressed.

For our final test, we measured the load time used to create the 10x LinkBench database using MariaDB 10.1.

load_time

 

Clearly, creating a page-compressed database takes the shortest time, followed by an uncompressed database. Creating a row-compressed database is significantly slower than either.

Performance results: TPC-C

We again start by presenting TpmC results for the page compression feature [5] using different numbers of client threads:

 

tpcc_page

Again, MariaDB provides clearly better performance in all tests. The server used has 32 cores, which is why the performance numbers drop when the number of client threads is increased beyond 32. Let’s look at the performance results for New Order transactions using the default number of client threads:

 

tpcc_pagecomp

While MySQL can occasionally complete a higher number of New Order transactions, its variation is significantly higher and its median significantly lower than MariaDB’s results. MariaDB provides quite stable performance over time once the buffer pool is warm. Clearly, the MySQL flush implementation can’t keep up with very fast FusionIO hardware. The bottleneck in all tests in this post is not the FusionIO hardware; it is the MySQL/MariaDB implementation.

In the following graph we have compared the results using uncompressed tables and page compressed tables while varying the number of TPC-C test suite client threads from 4 to 512:

tpcc_full

With between 4 and 32 client threads, MariaDB provides clearly better performance for both uncompressed and page-compressed tables. In fact, MariaDB’s performance with page-compressed tables matches or even exceeds MySQL’s performance with uncompressed tables. As the number of client threads increases, MariaDB’s performance drops earlier than MySQL’s, but not as sharply as MySQL’s does in the final results with 512 client threads. With high numbers of client threads, MariaDB page compression can offer significantly better performance than uncompressed tables.

Conclusions

First of all, both alpha versions of these database servers were very stable. We did not encounter even a single crash or other problem while the performance evaluations were being done. This is a very good sign for the quality of these development releases.

Secondly, there is clearly a need to identify and fix new bottlenecks introduced by the significantly faster FusionIO storage device. In none of the tests could we come even close to using the full throughput the FusionIO hardware provides. FusionIO hardware can be seen as a new level in the memory hierarchy [7] that requires new kinds of optimizations and research, much as main memory has required [8]. Some early research results can be seen in [9].

Finally, MariaDB can clearly offer better performance to applications having a similar workload as presented by LinkBench and TPC-C.

References

[1] Timothy G. Armstrong, Vamsi Ponnekanti, Dhruba Borthakur, and Mark Callaghan. 2013. LinkBench: a database benchmark based on the Facebook social graph. In Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data (SIGMOD ’13). ACM, New York, NY, USA, 1185-1196. http://doi.acm.org/10.1145/2463676.2465296.

[2] http://www.tpc.org/tpcc/

[3] MariaDB Introduces Atomic Writes, https://blog.mariadb.org/mariadb-introduces-atomic-writes/

[4] Xiangyong Ouyang and David W. Nellans and Robert Wipfel and David Flynn and Dhabaleswar K. Panda: Beyond block I/O: Rethinking traditional storage primitives, in Proceedings of the 2011 IEEE 17th International Symposium on High Performance Computer Architecture, pages 301-311, http://dx.doi.org/10.1109/HPCA.2011.5749738

[5] Significant performance boost with new MariaDB page compression on FusionIO, https://blog.mariadb.org/significant-performance-boost-with-new-mariadb-page-compression-on-fusionio/

[6] Write amplification. http://en.wikipedia.org/wiki/Write_amplification

[7] Memory hierarchy. http://en.wikipedia.org/wiki/Memory_hierarchy

[8] Stefan Manegold, Peter A. Boncz, and Martin L. Kersten. 2000. Optimizing database architecture for the new bottleneck: memory access. The VLDB Journal 9, 3 (December 2000), 231-246. DOI=10.1007/s007780000031 http://dx.doi.org/10.1007/s007780000031

[9] Seok-Hoon Kang, Dong-Hyun Koo, Woon-Hak Kang and Sang-Won Lee: A Case for Flash Memory SSD in Hadoop Applications, International Journal of Control and Automation, Vol. 6, No. 1, February 2013. http://www.sersc.org/journals/IJCA/vol6_no1/17.pdf

22 thoughts on “Performance evaluation of MariaDB 10.1 and MySQL 5.7.4-labs-tplc”

  1. Did you really use the default innodb_log_file_size in io-intensive benchmarks with high write load? That’s a great way to harm throughput and since you left adaptive flushing on it’s an even greater way to cause lots of avoidable duplicate flushing of pages. What were the resulting innodb_log_file_sizes in MySQL and MariaDB? I’m asking that in case you are exploiting a difference between the default configurations for this setting.

    Does the server have four CPUs with eight cores each and hyperthreading disabled to halve the number of effective CPUs in use or is it two CPUs with 16 full cores and hyperthreading enabled? Assuming it’s four CPUs I’d have hyperthreading enabled as first try.

    Why did you use innodb_thread_concurrency = 32 on a system that appears able to run 64 threads at once? Given the thread count the cores seem to have this seems unhelpful, though it does depend on workload and hardware so it might be the correct setting.

    Why did you use innodb_thread_concurrency at a non-zero value when the workload never goes above 32 clients, given that non-zero innodb_thread_concurrency settings are traditionally regarded as slower than zero unless the cap is required?

    Why did you leave adaptive flushing enabled on a high load production server workload instead of turning it off and tuning io settings yourself?

    Why did you not change innodb_flush_neighbors from the default of 1 to 0, which is what we recommend for SSD-like storage? Leaving it at 1 is a good way to waste IO and slow down disk setups that do not use spinning disks.

    Why was innodb_buffer_pool_instances changed from the 8 default for both server versions to 16? Did this increase or decrease any performance difference between the two and is it the optimal value for both or best for one and non-optimal for the other?

    Why was innodb_checksum_algorithm left at the default old innodb method instead of the new in 5.6 crc32 method that was introduced specifically to improve performance on very fast storage hardware like that used here, where it was found that checksum computation using the innodb method could be a performance bottleneck?

    Why was the table_open_cache_instances setting not used in MySQL when MariaDB has a similar contention-reducing setting enabled by default? 16 is what we recommend for the sort of hardware you’re using, or even 8 that is also mentioned in the manual could help if the concurrency in this area is an issue. For suitable workloads hoping that people won’t notice this missing setting is one way to exploit the way that MySQL and MariaDB handle this and use misconfiguration to create a performance difference that capable DBAs would already have eliminated.

    What is the output of SHOW VARIABLES for each server? What was the value of SHOW GLOBAL STATUS for each server after each benchmark? What was the output of SHOW ENGINE INNODB STATUS immediately after the end of each run? This is to check for things like table_open_cache and table_definition_cache being set to values that are too small, wasting io with continual closing and opening of tables.

    Did you have PS enabled or disabled or otherwise set to the same settings on each server or did you decide to handicap one by having it on and the other not by having it off?

    My initial impression is that you misconfigured the servers for the workload and hardware, not doing things that I’d expect even fairly inexperienced DBAs to do or explain why they chose not to do them.

    Since I’m not a beginning DBA and have a little experience at the ways that benchmarks can be manipulated through careful choices of settings for configuration of servers and workload, I wondered about that as well.

    Now, none of this contains any sort of sophisticated analysis, it’s all questions that I hope would occur to many capable DBAs. But it does tell you why I think that your post is pretty much useless for anyone with a serious interest in performance differences between MySQL and MariaDB. This doesn’t mean that there isn’t the start of a potentially useful post here, just that there is too much that is odd and unexplained about even the settings at the moment for me to have any useful thoughts at all about any performance differences based on it.

    James Day, MySQL Senior Principal Support Engineer, Oracle

    • On both servers, I used innodb_log_buffer_size=256M, innodb_log_file_size=8G and innodb_log_files_in_group=2. Changing these to bigger values does not make a significant difference, and the effect is similar on both servers. As noted, the bottleneck is not I/O; it is in both the MariaDB and MySQL servers.

      I used innodb_thread_concurrency = 32 because it produced the best performance results on both servers. The workload does use more than 32 clients; in TPC-C the maximum was 512 and in LinkBench 100 client threads.

      I did not see a significant performance difference from the innodb_adaptive_flushing setting. I used innodb_flush_neighbors = 0.

      The buffer pool instances setting seems to be optimal for these tests and hardware. I also tested 0 and 64 and did not see a significant difference.

      If the new crc32 method is better for performance, why is it not the default? I do not think a different setting has an asymptotically significant effect on performance, and changing it would affect both servers similarly (because of the same implementation).

      Again, if a different default value for table_open_cache_instances has an effect, why is it not the default?

      PS was disabled on both servers.

      Are you saying that I misconfigured the servers just because MySQL was shown to be inferior? The configuration was as fair as possible. I did not have time to run tests with all combinations of configuration variables. Most of the variable settings that I did test did not make a significant difference in the end results.

      I used the defaults for both table_open_cache and table_definition_cache because both benchmarks use very few tables.

      • Thanks. As you might have noticed, MySQL has a larger default for table_open_cache than MariaDB but I mentioned that potential problem anyway, even though it could have helped you to notice a performance problem with the MariaDB settings.

        The number of table_open_cache entries needed does not depend only on the number of tables but on how many instances of each are opened by the running queries, so it’s entirely possible for a test with only a few tables to use thousands of table_open_cache entries. That’s one reason why it is necessary to check the counter output after the tests.

        The MySQL 5.5 server does not support the CRC checksum calculation method. If we had set that on by default we would have blocked people from downgrading to 5.5 after upgrading to 5.6. Since the target audience for the defaults would mostly not be using SSD or even faster storage and would also be less likely to have hardware CRC calculation support we made the ease of use choice.

        The table_open_cache_instances setting is unlikely to be an issue for the MySQL 5.6 defaults target audience but it is relevant to higher load servers and benchmarking.

        It’s not easy benchmarking different servers, so you have my sympathy.

        James Day, MySQL Senior Principal Support Engineer, Oracle

    • I added the missing configuration variables to the text. My point is that if I misconfigured the server, I did so for both. Thus, if a different setting would provide better performance, the effect would be similar on both servers. If the default values differ and a different setting would be better, that should be fixed.

      • As others have also observed, sometimes a setting is handled differently in one from the other and may require an option in one but not the other.

        The target audience for the defaults also has to be considered. With MySQL 5.6 I asked for defaults that changed from the old tiny server assumptions to small setups that are probably the majority of all MySQL installations globally, the huge number of hosted and shared setups. A server that chooses a different target audience could choose different defaults and neither would be wrong, just optimised for the different targets.

        Views are my own. For an official Oracle view, consult a PR person.

        James Day, MySQL Senior Principal Support Engineer, Oracle

  2. How much of this is just comparing the FusionIO atomic write implementation of MySQL and MariaDB?

    The vast majority of users don’t use FusionIO. Can you rerun the benchmark on non-FusionIO SSD?

    • This post does not compare atomic write implementations, because atomic writes are handled by NVMFS and the FusionIO hardware. Both servers just issue an ioctl and use normal AIO writes. I know that users do not yet use FusionIO; based on these results, they really should.

      • > This post does not compare atomic write implementation
        Then what does this quote from the post mean:

        “We use FusionIO hardware atomic writes [3,4] on all tests.”

        • In all tests both servers used atomic writes, and this is implemented on both servers in exactly the same way, by calling ioctl() with DFS_IOCTL_ATOMIC_WRITE_SET _IOW(0x95, 2, uint). As noted in the text of the blog, there is a significant difference in flushing: in MariaDB there is the possibility to use multiple threads to do both compression and sending of AIO requests. MySQL does not have that feature.

          • So this goes back to my original point: all this benchmark means could be just that MariaDB’s FusionIO atomic write implementation is faster than MySQL’s. That’s interesting but also useless to the vast majority of users who don’t use FusionIO.

            I know you said FusionIO atomic write was not the cause of difference in performance between MariaDB and MySQL in this benchmark, but there’s no way to prove that when you’re only using FusionIO.

            What is far more useful would be running this benchmark on non-FusionIO SSD. That would have much more real-world relevance.

          • Atomic writes are a FusionIO NVMFS specific feature. The MySQL labs release can transparently compress/decompress pages to any FS that supports sparse files (PUNCH_HOLE). Therefore it should work on EXT4, XFS and Windows.

            I would also like to add that the MySQL labs release has gone through full QA and will also work on systems that don’t support AIO. Transportable tablespaces should work too and in the labs release it will compress all InnoDB files, including the system tablespace and the UNDO logs.

          • Is this an attempt at misinformation? The MySQL labs release with transparent page compression is based on 5.7.4. And as Laurynas points out you need to use –innodb-page-cleaners=16 for a fair comparison. See http://dev.mysql.com/worklog/task/?id=6642

            Your claim that “MySQL does not have that feature” is false. The labs release can and does use the flush threads for compression and the flush threads can and do submit AIO write requests.

  3. By the definitions I use this isn’t benchmarking yet. It is still benchmarketing until you begin to explain why performance is different between the two systems. I know there are performance gurus at SkySQL who have the talent to do that — http://smalldatum.blogspot.com/2014/06/benchmarketing.html

    innodb_lru_scan_size is per buffer pool instance. Setting it to 25k with 16 instances is too high. 2k or 3k is probably better in this case. And “too high” might mean that an InnoDB background thread uses too much CPU time and causes mutex contention from trying too hard to find too many pages from the LRU tail. I frequently get confused because io_capacity is total and lru_scan_depth is per buffer pool instance.

    • Mark, you might be correct that innodb_lru_scan_size is too large, but it was the same for both; I will fix this setting when I return.

  4. Why innodb_mtflush_threads = 16 for MariaDB but innodb_page_cleaners = 1 for MySQL? That’s a glaring difference, although hard to tell its impact because the results are presented in a black box way.

    MariaDB 10.1 does not compile in PFS by default. How much of performance difference is attributable to that?

    • My bad, I did not know there was such a parameter, I will need to rerun some of the tests.

  5. For some reason my comment did not go through the first time. Second try:

    Why parallel flushing is enabled for MariaDB but not MySQL (innodb_mtflush_threads = 16 but innodb_page_cleaners = 1)? This is the biggest difference between the two that does not allow claiming the setup is identical.

    How much of performance difference is attributable to MariaDB compiling PFS away by default?

    • The problem is that MySQL does not have exactly the same feature as parallel flushing. If I set innodb_page_cleaners=16 on MySQL, is that then an identical setup? I do not think so.

      MySQL can also be compiled with PFS removed; I will try the performance with these new configuration parameters.

      • Right, MariaDB and MySQL implement parallel flushing in different ways, but unless one of the implementations is horribly broken by some mutex contention, setting the same number of threads should result in comparable performance, at least that’s my impression having reviewed both.

        Just like there are differences between MariaDB/fusionIO and MySQL page compression implementations too, yet you can still compare those, can’t you?

        • Yes, you are correct. This is my mistake. I did not know there was such a parameter, and I had to look at the source code to see the similarities and differences in the approaches. I will need to run the page-compressed results again with corrected configurations. For uncompressed and row-compressed tables this is not necessary, because those results do not use multi-threaded flush on MariaDB.

  6. Pingback: MySQL Performance: Analyzing LinkBench Workload on MySQL 5.7 and MariaDB 10.1 | InsideMySQL

  7. My apologies to the readers of this post. It is now clear that not all results are totally fair, especially the page-compressed results. I did not know that the innodb_page_cleaners parameter existed in 5.7, and after looking at the code, to be exactly fair I should use the value 16 for that parameter in the page-compressed runs. For the uncompressed and row-compressed runs this is not necessary, because multi-threaded flush was not used even on MariaDB.
