MariaDB 10.1 server is now “Galera ready” with the latest 10.1.1 release. It includes the wsrep (write set replication) patch, which enables the server to load the wsrep provider (galera) library and interact with it to provide multi-master synchronous replication support. The patch implements hooks inside the server and the storage engines to populate write sets on the sender node and apply them on the receiver nodes in a cluster. The wsrep patch also adds a number of system and status variables (prefixed with wsrep) that can be used to configure and monitor the server acting as a node in a Galera cluster.
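
For instance, once the server is running you can list these variables and counters with the usual SHOW statements:

    SHOW GLOBAL VARIABLES LIKE 'wsrep%';
    SHOW GLOBAL STATUS LIKE 'wsrep%';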

Unlike older MariaDB versions, the wsrep patch is now part of the regular (vanilla) MariaDB 10.1 server; that is, starting with 10.1.1 there are no separate vanilla and Galera server versions. As a result, the same 10.1 packages (binary tarballs, deb, rpm, etc.) can now be used to run MariaDB server either as a standalone server or as a node in a MariaDB Galera cluster.

As a standalone server, it requires no additional settings, just your usual favorite options. However, to start it as a MariaDB Galera node, the following settings are mandatory:

  • wsrep_on=ON
  • wsrep_provider
  • wsrep_cluster_address
  • binlog_format=ROW
  • default_storage_engine=InnoDB
  • innodb_autoinc_lock_mode=2
  • innodb_doublewrite=1
  • query_cache_size=0

Without these additional settings Galera replication is essentially disabled and the server functions like a standalone server, with no Galera-related overhead.
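
Put together, a node's configuration file might look roughly like the following sketch; the provider path and the cluster address are placeholders that depend on your installation:

    [mysqld]
    wsrep_on=ON
    # path to the galera library; the exact location varies by distribution
    wsrep_provider=/usr/lib/galera/libgalera_smm.so
    # addresses of the other nodes in the cluster
    wsrep_cluster_address=gcomm://192.168.0.10,192.168.0.11,192.168.0.12
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2
    innodb_doublewrite=1
    query_cache_size=0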

If you are building MariaDB 10.1 server from source, the WITH_WSREP cmake switch (ON by default) controls whether the wsrep patch is included in the build.
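
For example, to build without it (run from your build directory, pointing at the source tree of your choice):

    cmake -DWITH_WSREP=OFF .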


Sometimes users ask for something that doesn’t really make sense. At first glance. But then you start asking questions and realize that the user was right, you were wrong, and it is, actually, a perfectly logical and valid use case.

I had one of these moments when I heard about a request to make triggers work on the slave in row-based replication. Like, really? In RBR all changes made by triggers are replicated from the master to slaves as row events. If triggers were fired on the slave, they would apply their changes twice. And anyway, assuming that one only has triggers on the slave (why?), in statement-based replication the triggers would run on the slave normally, wouldn’t they?

Well, yes, they would, but one cannot always use statement-based replication. If one could, RBR would’ve never been implemented. There are many cases that statement-based replication cannot handle correctly. Galera requires RBR too. And as it turned out, that user, indeed, only had triggers on the slave — the master gets all the updates and slaves maintain summary tables that triggers keep up to date.

That’s why MariaDB 10.1.1 can now optionally invoke triggers for row-based events. This is controlled by the slave_run_triggers_for_rbr system variable, a global variable that can be set to YES, NO, or LOGGING; the default is NO. The value YES will, naturally, make triggers run for RBR events. The value LOGGING will make them run and also write the changes made by the triggers into the binary log. The mode can be changed at runtime, on the command line, or in a config file.
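
For example, to switch a slave over at runtime:

    SET GLOBAL slave_run_triggers_for_rbr = 'YES';
    -- or, to also write the triggers' own changes to the binary log:
    SET GLOBAL slave_run_triggers_for_rbr = 'LOGGING';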

Either way, when slave_run_triggers_for_rbr is not NO, MariaDB slaves will invoke specific triggers for certain row events:

  • Update_row_event will invoke an UPDATE trigger
  • Delete_row_event will invoke a DELETE trigger
  • Write_row_event is a bit tricky. It is applied (yes, in MySQL too) as follows:
    • The slave tries to insert a row.
    • If the table has UNIQUE KEY constraints (or a PRIMARY KEY) and there is a conflicting row, that row will be updated so that all its values match the row that should have been inserted.
    • But if the table also has FOREIGN KEY constraints, or there are other UNIQUE keys in the table, the old row will be deleted and the new row inserted. Two operations instead of one, so it’s slower, but it guarantees that there will be no references to the old row after it disappears.

    That is, Write_row_event can invoke an INSERT trigger, a DELETE trigger, or an UPDATE trigger, depending on whether a conflicting row exists, which UNIQUE constraint was violated, and whether the table is referenced in a foreign key constraint (see the sketch below). If you think it’s too complex — you’re right, we’ll simplify it in 10.1.2.
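
To illustrate, consider a hypothetical pair of tables (all names made up):

    CREATE TABLE parts (
      id INT PRIMARY KEY,
      serial_no INT UNIQUE,
      name VARCHAR(32)
    ) ENGINE=InnoDB;

    CREATE TABLE builds (
      part_id INT,
      FOREIGN KEY (part_id) REFERENCES parts (id)
    ) ENGINE=InnoDB;

    -- a Write_row_event for parts carrying the row (1, 100, 'resistor'):
    --  * no row with id=1 or serial_no=100 exists: a plain insert
    --  * a conflicting row exists and id were the only unique key:
    --    the conflicting row would simply be updated
    --  * here parts has a second unique key (serial_no) and is referenced
    --    from builds, so the old row is deleted and the new row inserted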

In setups like this, particularly if one replicates further, with these slaves being masters to other slaves, there is a risk that triggers will run multiple times for the same row. To guard against this kind of mistake, MariaDB marks row events that have already invoked triggers, and when these marked events are replicated further, they will not cause trigger invocations on downstream slaves. This also means that the master must not have any triggers on the tables in question; otherwise the slave-side triggers will not be invoked.
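
As a sketch of the slave-only use case described above (all table, column, and trigger names are made up for illustration):

    -- the replicated table; it also exists on the master, which has no triggers on it
    CREATE TABLE orders (
      order_id INT PRIMARY KEY,
      customer_id INT,
      amount DECIMAL(12,2)
    ) ENGINE=InnoDB;

    -- on the slave only: a summary table and a trigger that maintains it
    CREATE TABLE order_totals (
      customer_id INT PRIMARY KEY,
      total DECIMAL(12,2) NOT NULL DEFAULT 0
    ) ENGINE=InnoDB;

    CREATE TRIGGER orders_ai AFTER INSERT ON orders FOR EACH ROW
      INSERT INTO order_totals (customer_id, total)
      VALUES (NEW.customer_id, NEW.amount)
      ON DUPLICATE KEY UPDATE total = total + NEW.amount;

    -- with slave_run_triggers_for_rbr=YES the trigger fires when the slave
    -- applies a Write_row_event for orders; with the default NO it does not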

Let me start with a little story. You sit in your house near the fireplace in the living room and need a book from the library… Eh, no, sorry, wrong century. You’re building a robotic arm that will open your beer or brew your coffee or supply you with whatever other drinks of your choice… while you’ll be building the next robotic arm. So you — soldering iron in one hand and Arduino in the other — ask your little brother to bring a box with specific resistors (that you unexpectedly ran out of) from the cellar. The problem — your brother is small and cannot tell a resistor from a respirator. You explain that it’s a small thing with two wires sticking out of it. And he starts going back and forth, bringing you box after box of different small things with two wires.

This is approximately where we were in MySQL when NDB Cluster was just added. The user wants to find a row, say WHERE number_of_wires=2 AND size='small' AND type='resistor' AND ohm=10000. And the optimizer devises an execution plan that uses “ref” access on the number_of_wires column. So the NDB Cluster engine starts retrieving all rows that have number_of_wires=2 from all nodes and sending them over the network to the server, only to see them rejected over and over, because they don’t satisfy the rest of the WHERE condition.

Thus, you see, originally engine condition pushdown was implemented for NDB Cluster. With engine condition pushdown the server can send the complete WHERE clause to NDB, and the latter will send it to all the nodes, so only rows that actually satisfy the complete WHERE clause will be sent over the network back to the server. If the WHERE clause is too complex for nodes to understand (for example, it involves a stored function) the NDB can push only a part of it and let the server enforce the rest.
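Roughly, in EXPLAIN terms (the table name is made up; the Extra text is what MySQL shows for NDB when a condition has been pushed):

    EXPLAIN SELECT * FROM components
     WHERE number_of_wires=2 AND size='small' AND type='resistor' AND ohm=10000;
    -- when the condition is pushed down, the Extra column reads something like
    -- "Using where with pushed condition" and the data nodes filter the rows
    -- themselves, instead of streaming every number_of_wires=2 row to the server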

It wasn’t all perfect, though. If engine condition pushdown is enabled, the server needs to prepare a condition for pushing. In a join this means extracting the parts of the WHERE clause that don’t depend on data from tables that haven’t been read yet. This creates a sizable overhead for every query, even one that doesn’t use NDB Cluster at all. And most queries don’t. So engine condition pushdown was disabled by default.

Fast forward to MariaDB. MariaDB doesn’t have NDB Cluster, but it has many other storage engines. For some of them, for example SphinxSE, engine condition pushdown is not just a useful optimization but a requirement: SphinxSE simply cannot function properly if engine condition pushdown is disabled. We solved this in MariaDB 5.2 by implementing an override — the engine could activate engine condition pushdown for its tables, even if it was disabled for all other tables. No overhead for queries that don’t use SphinxSE. Later other engines started to use this override too.

That’s how we ended up in a strange situation: engine condition pushdown is disabled globally by default, but de facto all engines that can use engine condition pushdown use it anyway, via this override. And if a user enables engine condition pushdown, it adds no additional optimizations or functionality, only the overhead of the server preparing pushdown conditions for engines that cannot use them.

This is why in MariaDB 10.1.1 we deprecate the engine_condition_pushdown flag of optimizer_switch. You can still write
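
    SET optimizer_switch='engine_condition_pushdown=on';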

but it will not do anything besides printing a deprecation warning. Starting from MariaDB 10.1.1, engine condition pushdown is always enabled for engines that support it and always disabled for engines that do not.