DBAHire.com sponsors the “IX Concurso Universitario de Software Libre”

CUSL (family photo)

Photo: Ana Rey CC-BY-SA-2.0

The “Concurso Universitario de Software Libre” (CUSL, Free Software University Contest) is an initiative similar to the Google Summer of Code, but aimed specifically at Spanish university and high school students and organized by a group of Free Software university offices.

As part of the effort to encourage the growth of the free software ecosystem and to introduce young talent to the development of free applications and technologies, DBAHire.com has just become a silver sponsor of the competition, providing resources for the prizes and for the students' travel and accommodation.

The final phase of the competition will take place on 7-8 May in Zaragoza, and on that Friday our MySQL consultant Jaime Crespo will deliver a short talk in Spanish titled “Free Software: Is It Profitable?”.

Stop using FLUSH PRIVILEGES

Mermaids have the same probability of fixing your permission problems, but people continue believing in the FLUSH PRIVILEGES myth.

I see FLUSH PRIVILEGES suggested every time someone writes a tutorial or a solution to a problem about creating a new account or granting privileges. For example, the top post on /r/mysql as of the writing of these lines, “MySQL:The user specified as a definer does not exist (error 1449)-Solutions”, has multiple guilty cases of this (Update: the user has corrected those lines after I posted this article).

It is not my intention to bash that post, but I have seen that mistake committed many, many times. Even if you go to the reference manual for the GRANT command, you will see a comment at the bottom (from a third-party user) using GRANT and then FLUSH PRIVILEGES.

Why should I bother? Is executing FLUSH PRIVILEGES an issue? Why is everybody doing it? The reason that command exists is that, in order to improve performance, MySQL maintains an in-memory copy of the GRANT tables, so that it does not need to read them from disk on every connection, every default database change and every query sent to the server. FLUSH PRIVILEGES forces a reload of this cache by reading the tables directly from disk (or the filesystem cache), as the MySQL reference manual itself clearly indicates (it even has its own section: When Privilege Changes Take Effect). However, its execution is unnecessary in most practical cases because:

If you modify the grant tables indirectly using account-management statements such as GRANT, REVOKE, SET PASSWORD, or RENAME USER, the server notices these changes and loads the grant tables into memory again immediately.

The only reason to perform that reload operation manually is when:

you modify the grant tables directly using statements such as INSERT, UPDATE, or DELETE

For most operations, like creating a user, changing its privileges, or changing its password, you will want to use the high-level statements. Not only are they easier to use and compatible with a larger number of MySQL versions, but they will also prevent you from making mistakes (of course, remember to set the NO_AUTO_CREATE_USER sql mode). They even usually work nicely in a MyISAM-hostile environment like a Galera cluster. There are certainly reasons to edit the tables manually: as an administrator, you may want to tweak the privileges in a special way or import the mysql.* tables from elsewhere, and in those cases running FLUSH PRIVILEGES is mandatory. Please note that, as the manual page says, in most cases (e.g. global privileges) changing a user's grants will only affect new connections, and certainly never ongoing queries, as privileges are checked at the beginning of query processing; read the manual page for details.
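
As a minimal illustration of the difference, here is a sketch using Connector/Python (the account names and statements are made up for the example):

import mysql.connector

cnx = mysql.connector.connect(user='root', host='localhost')
cursor = cnx.cursor()

# High-level account-management statement: the server refreshes its
# in-memory grant tables automatically, no FLUSH PRIVILEGES needed.
cursor.execute("GRANT SELECT ON test.* TO 'app'@'localhost'")

# Direct edit of the grant tables: the server will not notice this
# change until the in-memory copy is explicitly reloaded.
cursor.execute("UPDATE mysql.user SET Host = '10.0.0.%' "
               "WHERE User = 'app' AND Host = 'localhost'")
cursor.execute("FLUSH PRIVILEGES")  # mandatory in this second case

cursor.close()
cnx.close()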

So, again, why my crusade against the overuse of FLUSH PRIVILEGES? After all, in the worst case, the same privileges will simply be loaded again. It is not mainly a question of performance, although in an extreme case it certainly can be an issue. Check, for example, the following script, which executes 10,000 CREATE USER statements (this can only be done in a single thread, as the grant tables are still in MyISAM format, even in 5.7.6).
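
A minimal sketch of such a test with Connector/Python (connection details and account names are illustrative; this is not the original script):

import sys
import time

import mysql.connector

# Pass 'flush' as the first argument to run the FLUSH PRIVILEGES variant
flush = len(sys.argv) > 1 and sys.argv[1] == 'flush'

cnx = mysql.connector.connect(user='root', host='localhost')
cursor = cnx.cursor()

start = time.time()
for i in range(10000):
    cursor.execute("CREATE USER 'test{0}'@'localhost'".format(i))
    if flush:
        cursor.execute("FLUSH PRIVILEGES")  # the unnecessary reload
print("Elapsed: {0:.3f} seconds".format(time.time() - start))

cursor.close()
cnx.close()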

The timings for both executions were:

We can see that using FLUSH PRIVILEGES is 8x slower than not using it. Again, I want to stress that performance is not the main issue here, as most people would execute it only once at the end of each command block, so it wouldn't be a huge overhead. Even if there is some extra read IO load, we must assume that every round trip to the database and every commit takes some server resources, and the same can be said of any command. Additionally, concurrency is not typically a problem for MySQL account creation, as the mysql.user table is not usually (or should not be) very dynamic.

The main issue I have against the overuse of FLUSH PRIVILEGES is that people execute it without really understanding why they do it and what the command actually does. Every time a person has a problem with the MySQL privilege system, the first piece of advice given is to execute this command “just in case”. Check, for example, answers on dba.stackexchange like this, this and this (which I have selected among many others), where the original user was not altering the mysql.* tables manually. The issue is that in most cases this command does nothing, and the real problem lies in a poor understanding of MySQL's permission model. As the saying goes, when you have a hammer, every problem looks like a nail. People read that this is the proper way to solve permission-related problems, and they pass the “knowledge” on, creating basically the MySQL equivalent of an urban myth.

So, the next time you encounter a problem with a user not being able to log in, or privileges not being applied, remember that there are many other sources of issues, such as: using old_passwords; using an authentication method other than native passwords; not having the actual privileges or the WITH GRANT OPTION property needed to apply them; the server not identifying you by the same user or host as the one you are actually connecting from; using skip-name-resolve, so DNS entries are ignored; waiting for a new connection for the changes to take effect; and many other issues that come with authorization and authentication. The MySQL grant system is not exactly obvious or perfect (hello, granting privileges on databases that do not exist?), but taking 5 minutes to read the extensive manual on privileges can save you many headaches in the future. TL;DR: RTFM.

For those of you who already know when to use FLUSH PRIVILEGES and when not to: please, the next time you find someone overusing it, educate them on best practices, so that people no longer rely on magic and urban myths to solve problems; go to Reddit/Stack Overflow/your favorite social network and upvote good practices and comment on bad ones. Today it is FLUSH PRIVILEGES; tomorrow it could be “add OPTIMIZE TABLE in a cron job every 5 minutes for your InnoDB tables” (and yes, that last one was actually found in the wild).

EXPLAIN minor wishlist

EXPLAIN output: filesort

While we always want better performance and more and larger features for MySQL, those cannot just “magically appear” from one version to another; they require deep architecture changes and lots of lines of code. However, there are sometimes smaller features and fixes that could be implemented by an intern or an external contributor, mainly at the SQL layer, and that could make the MySQL ecosystem friendlier to newbies and non-experts. Making a piece of software easier to use is sometimes overlooked, but it is incredibly important: not everybody using MySQL is a DBA, and the more people adopt it, the more people will be able to make a living from it, both upstream and as third-party providers.

Here is my own personal list of fixes for EXPLAIN messages. If you are an experienced MySQL user you are probably aware of their meaning, but that doesn't solve the problem for beginners. The reason I am writing a blog post is to gather opinions on whether they seem important to you or not, and whether my way of solving them seems reasonable, so that we can submit them as feature requests.

EXPLAIN messages

As a MySQL instructor, I see the following case happen a lot with new students. You start with a command like this:

So, “Using index” means that an index is being used, right? No. In this case, type: index is what tells us that an index is being used for scanning or accessing the rows (as opposed to type: ALL, although we could also get a full row scan that uses an index only for ordering or grouping). Extra: Using index indicates that the index is also used for retrieving the data, without actually needing to read the whole row. This is, as far as I know, commonly referred to as a covering index. And that is exactly what I would like to see:

or maybe:

Another common misunderstanding: Using filesort:

At this level, I do not care that filesort is the algorithm being used, and, if I am correct, since 5.6 MySQL can also use a priority queue for the sorting when the number of items is small. Additionally, the “file” in the word filesort can mislead people into thinking that this requires a temporary table on disk. I do not have a perfect alternative (please provide feedback), but maybe something like the following would be clearer:

or maybe:

Another example would be:

I understand that the developers didn't want to confuse us with NDB's pushed condition, but this output is quite misleading, too: it literally reads as “the index condition is being used”, instead of “index condition pushdown is being used”. What about:

There are many other expressions, but those are the most annoying to me in terms of students’ confusion.

Would you agree with me? Would these changes break applications that parse EXPLAIN output? What other small things would you change in MySQL output or error messages? I would especially like to hear from MySQL beginners and people coming from other databases, as the longer we have used MySQL, the more accustomed we become to its MySQLisms.

Regarding MySQL 5.6 temporary tables format

The default_tmp_storage_engine variable was introduced in 5.6.3, allowing the configuration of the default engine for temporary tables. This seems to be in the direction, as I commented before, of making MyISAM an optional engine. In 5.7, a separate tablespace is being created to hold those tables in order to reduce their performance penalty (those tables do not need to be redone if the server crashes, so extra writes are avoided).

However, I have seen many people assume that because default_tmp_storage_engine has the value “InnoDB”, all temporary tables are created in InnoDB format in 5.6. This is not true: first, because implicit temporary tables are still created in memory using the MEMORY engine (sometimes called the HEAP engine), while MyISAM is used for on-disk ones. If you do not trust the reference manual on this, here is a quick test to check it:

mysql> SELECT version();
+------------+
| version()  |
+------------+
| 5.6.23-log |
+------------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL VARIABLES like 'default%';
+----------------------------+--------+
| Variable_name              | Value  |
+----------------------------+--------+
| default_storage_engine     | InnoDB |
| default_tmp_storage_engine | InnoDB |
| default_week_format        | 0      |
+----------------------------+--------+
3 rows in set (0.00 sec)

mysql> SHOW GLOBAL VARIABLES like 'tmpdir';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| tmpdir        | /tmp  |
+---------------+-------+
1 row in set (0.00 sec)

mysql> CREATE TABLE test (id serial, a text);
Query OK, 0 rows affected (0.10 sec)

mysql> insert into test (a) values ('a');
Query OK, 1 row affected (0.06 sec)

mysql> insert into test (a) values ('aa');
Query OK, 1 row affected (0.00 sec)

mysql> insert into test (a) values ('aaa');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT *, sleep(10) FROM test ORDER BY rand();
...

[ec2-user@jynus_com tmp]$ ls -la
total 24
drwxrwxrwt 5 root root 4096 Feb 24 11:55 .
dr-xr-xr-x 23 root root 4096 Jan 28 14:09 ..
drwxrwxrwt 2 root root 4096 Jan 28 14:09 .ICE-unix
-rw-rw---- 1 mysql mysql 0 Feb 24 11:55 #sql_7bbd_0.MYD
-rw-rw---- 1 mysql mysql 1024 Feb 24 11:55 #sql_7bbd_0.MYI
drwx------ 2 ec2-user ec2-user 4096 Feb 24 11:41 ssh-5ZGoXWFwtQ
drwx------ 2 ec2-user ec2-user 4096 Feb 24 11:43 ssh-w9IkW0SvYo

...
+----+------+-----------+
| id | a    | sleep(10) |
+----+------+-----------+
|  1 | a    |         0 |
|  2 | aa   |         0 |
|  3 | aaa  |         0 |
+----+------+-----------+
3 rows in set (30.00 sec)

The only thing I have done above is force the creation of the temporary table on disk by adding a TEXT field (incompatible with the MEMORY engine, so the table has to be created on disk) and using sleep() so that we have enough time to inspect the filesystem. You can see in the output of ls the .MYD and .MYI files particular to the MyISAM engine. That last step would be unnecessary if we just used PERFORMANCE_SCHEMA to check the file waits/IO.
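
A sketch of that alternative check with Connector/Python (it assumes the Performance Schema file instrumentation is enabled; the tmpdir path is the one from the test above):

import mysql.connector

cnx = mysql.connector.connect(user='root', host='localhost')
cursor = cnx.cursor()

# On-disk temporary tables show up in the file I/O statistics
# gathered by the Performance Schema under the tmpdir path.
cursor.execute(
    "SELECT file_name, count_read, count_write "
    "FROM performance_schema.file_summary_by_instance "
    "WHERE file_name LIKE '/tmp/%'")
for (file_name, count_read, count_write) in cursor:
    print("{0} {1} {2}".format(file_name, count_read, count_write))

cursor.close()
cnx.close()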

A second, more obvious reason why it is wrong to think that all temporary tables are created in InnoDB format is that explicit temporary tables can still be created in a different engine with the ENGINE keyword:

mysql> CREATE TEMPORARY TABLE test (i serial) ENGINE=MyISAM;
Query OK, 0 rows affected (0.00 sec)

[ec2-user@jynus_com tmp]$ ls -la
total 36
drwxrwxrwt 5 root root 4096 Feb 24 12:16 .
dr-xr-xr-x 23 root root 4096 Jan 28 14:09 ..
drwxrwxrwt 2 root root 4096 Jan 28 14:09 .ICE-unix
-rw-rw---- 1 mysql mysql 8554 Feb 24 12:12 #sql7bbd_36a3_0.frm
-rw-rw---- 1 mysql mysql 0 Feb 24 12:12 #sql7bbd_36a3_0.MYD
-rw-rw---- 1 mysql mysql 1024 Feb 24 12:12 #sql7bbd_36a3_0.MYI
drwx------ 2 ec2-user ec2-user 4096 Feb 24 11:41 ssh-5ZGoXWFwtQ
drwx------ 2 ec2-user ec2-user 4096 Feb 24 11:43 ssh-w9IkW0SvYo

mysql> DROP TEMPORARY TABLE test;
Query OK, 0 rows affected (0.00 sec)

[ec2-user@jynus_com tmp]$ ls -la
total 20
drwxrwxrwt 5 root root 4096 Feb 24 12:17 .
dr-xr-xr-x 23 root root 4096 Jan 28 14:09 ..
drwxrwxrwt 2 root root 4096 Jan 28 14:09 .ICE-unix
drwx------ 2 ec2-user ec2-user 4096 Feb 24 11:41 ssh-5ZGoXWFwtQ
drwx------ 2 ec2-user ec2-user 4096 Feb 24 11:43 ssh-w9IkW0SvYo

Will this change in the future? 5.7.5 continues to have the same behavior as 5.6. However, as Stewart pointed out some time ago, the performance optimizations in 5.7 make some uses of MEMORY and MyISAM obsolete, so I will not be surprised if that dependency, together with the MyISAM grant tables, is removed in the future.

Update: Morgan has told me by email that the yet-to-be-released (at the time of this writing) 5.7.6 will finally change the default behavior to be full InnoDB for implicit temporary tables, too, as seen in the release notes:

InnoDB: The default setting for the internal_tmp_disk_storage_engine option, which defines the storage engine the server uses for on-disk internal temporary tables (see How MySQL Uses Internal Temporary Tables), is now INNODB. With this change, the Optimizer uses the InnoDB storage engine instead of MyISAM for internal temporary tables.

internal_tmp_disk_storage_engine was introduced in 5.7.5, but its default value then was MYISAM.

This is in order to take advantage of the in-memory performance of InnoDB for variable-length fields, which I am personally 100% for. Thank you Morgan for the extra information!

My Talk at PyConES 2014 Zaragoza about MySQL Fabric

Next Saturday, 8 November 2014, at 19:30 I will be speaking about MySQL Fabric at PyConES 2014 (the Spanish version of PyCon), the annual meeting point for all developers and enthusiasts of Python in Spain.
While I keep saying that I am not a developer, a lot of my time as a MySQL consultant requires implementing automatic procedures (backups, health checks, AWS management, …), and for that I mainly use a combination of Python and Bash.

In my talk, which I have titled “MySQL Fabric, a High Availability Solution for Connector/Python”, I will explain how to set up and configure a set of MySQL servers and Python application clients using Connector/Python in order to provide service resiliency and extra performance for both reads and writes (thanks to its semi-automatic sharding capabilities) in your application. The framework itself (Fabric, part of the MySQL Utilities) is open source and under heavy development (also programmed in Python, of course!). If at any point in your career you have suffered from bad database performance or application downtime, you must come to my talk! I will also compare it to other relatively similar solutions, weighing their pros and cons. The session will be delivered in English.

As a speaker, I have a discount code to share with you, “DescuentoPonentes”, as there is still time to register for the event.

See you in Zaragoza!

Update: Here you have the slides:

My Tutorial on Query Optimization for Percona Live London 2014 (and Important Information If You Wish to Attend)

Percona Live London is next week! On Monday, November 3rd, I will be delivering a tutorial at the greatest European MySQL conference, Percona Live London 2014. The topic is a natural continuation of the one I delivered last year at the same venue, “Query Optimization with MySQL 5.6: Old and New Tricks”. This year I will be focusing on the newest optimizer changes that we can find not only in the already published 5.6 and MariaDB 10, but also in some of the latest features of the still-in-development MySQL 5.7 and MariaDB 10.1. Topics in this workshop, which I have titled “Query Optimization with MySQL 5.7 and MariaDB 10: Even Newer Tricks”, will include: the new 5.7 cost-based optimizer, virtual columns, the query rewrite plugin API, new join methods, subquery optimization, sql mode changes, full-text search and GIS improvements. All of it with easy-to-follow examples and hands-on exercises.

Here you have the full detailed agenda:

  1. Introduction
  2. Break (VM installation)
  3. General Optimizer Improvements
  4. Computed/Virtual Columns
  5. Query Rewrite Plugins
  6. SQL Mode Changes
  7. Join Optimization
  8. Subquery Optimization
  9. Fulltext search
  10. GIS Improvements
  11. Break
  12. Query Profiling
  13. Results and Conclusions
  14. Q&A

Update: Here you have the full deck of slides:

All necessary materials are on: dbahire.com/pluk14

IMPORTANT INFORMATION: if you plan to attend this tutorial, in order to avoid last-minute connectivity problems, please pre-download all necessary files as indicated in the tutorial guide. These include the example database and the 5.7/MariaDB server installation.

I will be bringing removable media with those same files, and you will be able to follow the explanation fully just by watching my screen, as I will show everything myself; but you will get much more out of the tutorial if you take 5 minutes to prepare your system in advance.

I will be giving away several USB drives among those who take the time to set up their systems beforehand and attend my tutorial, as a thank-you for helping make the session smoother. Mention me on Twitter saying something like “I already have everything prepared for the @dbahire_en tutorial http://dbahire.com/pluk14 #perconalive”, so I can reserve yours!

See you next Monday at 9:00 London time at Orchard 2.

Testing the Fastest Way to Import a Table into MySQL (and some interesting 5.7 performance results)

As I mentioned in my last post, where I compared the default configuration options in 5.6 and 5.7, I have been doing some testing for a particular load in several versions of MySQL. What I have been checking is different ways to load a CSV file (the same file I used for testing the compression tools) into MySQL. Seasoned MySQL DBAs and programmers probably know the answer, so you can jump over to my 5.6 versus 5.7 results. However, the first part of this post is dedicated to developers and MySQL beginners who want to know the answer to the title question, in a step-by-step fashion. I must say I also learned something, as I under- and over-estimated some of the effects of certain configuration options for this workload.

Disclaimers: I do not intend to do proper benchmarks; most of the results obtained here were produced in a couple of runs, and many of them with a default configuration. This is intended, as I want to show “bad practices” to people who are just starting to work with MySQL, and what they should avoid doing. Only the 5.6 versus 5.7 comparison has left me wondering. Additional disclaimer: I do not call myself a programmer, and much less a Python programmer, so I apologize in advance for my code. After all, this is about MySQL. The download link for the scripts is at the bottom.

The Rules

I start with a CSV file (remember that it is actually a tab-separated values file) that is 3,700,635,579 bytes in size, has 46,741,126 rows and looks like this:

I want to load it into a table with the following structure:

The import finish time will be defined as the moment the table is crash safe (even if there is some pending IO). That means that, for InnoDB, the last COMMIT has to be successful and innodb_flush_log_at_trx_commit must be equal to 1, so that even if there is pending IO to be made, it is fully durable on disk (crash-resistant). For MyISAM, it means that I force a FLUSH TABLES before finishing the test. Those are, of course, not equivalent, but it is at least a way to make sure that everything is more or less disk-synced. This is the ending part of all my scripts:
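
The actual scripts are downloadable at the bottom of the post; a minimal self-contained sketch of that ending, using Connector/Python (connection details are illustrative), would be:

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

cnx.commit()                    # InnoDB: durable once it returns, given
                                # innodb_flush_log_at_trx_commit = 1
cursor.execute("FLUSH TABLES")  # MyISAM: flush pending table writes

cursor.close()
cnx.close()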

For the hardware and OS, check the specs on this previous post; I used the same environment as the one mentioned there, with the exception of using CentOS 7 instead of 6.5.

The naive method

Let's say I am a developer tasked with loading a file regularly into MySQL. How would I do that? I would probably be tempted to use a CSV parsing library, the mysql connector, and link them together in a loop. That would work, wouldn't it? The main parts of the code would look like this (load_data_01.py):
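
The full script is downloadable at the end of the post; the core of it, sketched here with Connector/Python (the table and column layout are illustrative), is a plain row-by-row insert loop:

import csv

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

with open('nodes.csv') as csv_file:
    # The file is actually tab-separated, as mentioned above
    for row in csv.reader(csv_file, delimiter='\t'):
        cursor.execute("INSERT INTO nodes VALUES (%s, %s, %s, %s)", row)

cnx.commit()  # a single, huge transaction at the very end
cursor.close()
cnx.close()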

As I am playing the role of a developer without MySQL experience, I would also use the default configuration. Let’s see what we get (again, that is why I call these “tests”, and not benchmarks). Lower is better:
load_data_01.py results

MySQL Version          5.1.72     5.5.39     5.6.20      5.7.4
Load time (seconds)  4708.594   6274.304   6499.033   6939.722

Wow, is 5.1 really 50% faster than the rest of the versions? Absolutely not: remember that 5.5 was the first version to introduce InnoDB as the default engine, and InnoDB has additional transactional overhead and usually a poor default configuration (unlike MyISAM, which is so simple that the default options can work in many cases). Let's normalize our results by engine:
load_data_01.py results by engine

MySQL Version    5.1.72     5.5.39     5.6.20      5.7.4
MyISAM         4708.594   5010.655   5149.791   5365.005
InnoDB         6238.503   6274.304   6499.033   6939.722

This seems more reasonable, doesn't it? However, in this case, it seems that there is a slight regression in single-thread performance as the versions go up, especially in MySQL 5.7. Of course, it is too early to draw conclusions, because this method of importing a CSV file, row by row, is one of the slowest ones, and we are using very poor configuration options (the defaults), which vary from version to version and should not be taken into account to draw conclusions.

What we can say is that MyISAM seems to work better by default for this very particular scenario for the reasons I mentioned before, but it still takes 1-2 hours to load such a simple file.

The even more naive method

The next question is not “can we do it better?” but “can we do it even slower?”. A particular passage drew my attention when looking at the MySQL connector documentation:

Since by default Connector/Python does not autocommit, it is important to call this method after every transaction that modifies data for tables that use transactional storage engines.

-from the connector/python documentation
I thought to myself: oh, so maybe we can speed up the import process by committing every single row to the database, one by one, can't we? After all, we are currently inserting the whole table in a single huge transaction. Surely a huge number of small transactions will be better! :-) This is the slightly modified code (load_data_02.py):
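
Sketched the same way as before (again illustrative, not the original script), the only difference is where the commit happens:

import csv

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

with open('nodes.csv') as csv_file:
    for row in csv.reader(csv_file, delimiter='\t'):
        cursor.execute("INSERT INTO nodes VALUES (%s, %s, %s, %s)", row)
        cnx.commit()  # one transaction (and one fsync) per row

cursor.close()
cnx.close()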

And I do not even have a fancy graphic to show you, because after 2 hours, 19 minutes and 39.405 seconds, I canceled the import: only 111533 nodes had been inserted in MySQL 5.1.72 for InnoDB with the default configuration (innodb_flush_log_at_trx_commit = 1). Obviously, millions of fsyncs will not make our load faster; consider this a lesson learned.

Going forward: multi-inserts

The next step I wanted to test is how effective grouping queries into a multi-insert statement is. This method is used by mysqldump, and supposedly minimizes the SQL overhead of handling every single query (parsing, permission checking, query planning, etc.). This is the main code (load_data_03.py):
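
A sketch of the idea (illustrative; note that Connector/Python's executemany() rewrites batched INSERT statements into a single multi-row INSERT):

import csv

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

batch = []
with open('nodes.csv') as csv_file:
    for row in csv.reader(csv_file, delimiter='\t'):
        batch.append(row)
        if len(batch) == 100:  # 100 rows sent per INSERT statement
            cursor.executemany(
                "INSERT INTO nodes VALUES (%s, %s, %s, %s)", batch)
            batch = []
if batch:  # do not forget the last, incomplete batch
    cursor.executemany("INSERT INTO nodes VALUES (%s, %s, %s, %s)", batch)

cnx.commit()
cursor.close()
cnx.close()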

We tested it with a sample of 100 rows inserted with every query. What are the results? Lower is better:
load_data_03.py results

MySQL Version    5.1.72     5.5.39     5.6.20      5.7.4
MyISAM         1794.693   1822.081   1861.341   1888.283
InnoDB         3645.454   3455.800   2849.299   3032.496

With this method we observe an improvement of 262-284% over the original import time for MyISAM, and of 171-229% for InnoDB. Remember that this method will not scale indefinitely, as we will hit the packet size limit (max_allowed_packet) if we try to insert too many rows at the same time. However, it is a clear improvement over the one-row-at-a-time approach.

MyISAM times are essentially the same between versions, while InnoDB shows an improvement over time (which may be due to code and optimization changes, but also to changing defaults like the transaction log size), except, again, between 5.6 and 5.7.

The right method for importing data: Load Data

If you have a minimum of experience with MySQL, you know that there is a specialized keyword for data imports, and that is LOAD DATA. Let's see how the code ends up looking by using this option (load_data_04.py):
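
A sketch of it (the file path and table are illustrative; for a server-side load like this one, the connecting user needs the FILE privilege and the server needs access to the file):

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

# A single statement: parsing and looping happen inside the server
cursor.execute("LOAD DATA INFILE '/tmp/nodes.csv' INTO TABLE nodes "
               "FIELDS TERMINATED BY '\\t'")

cnx.commit()
cursor.close()
cnx.close()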

Simple, isn’t it? With this we are minimizing the SQL overhead, and executing the loop in the compiled C MySQL code. Let’s have a look at the results (lower is better):
load_data_04.py results

MySQL Version    5.1.72     5.5.39     5.6.20      5.7.4
MyISAM          141.414    149.612    155.181    166.836
InnoDB         2091.617   1890.972    920.615   1041.702

In this case, MyISAM has a very dramatic improvement: LOAD DATA speeds up the import by up to 12x. InnoDB, again still with the default parameters, improves by up to 3x, and more significantly in the newer versions (5.6, 5.7) than in the older ones (5.1, 5.5). I predict that this has much more to do with the different configuration of the log files than with the code changes.

Trying to improve Load Data for MyISAM

Can we improve the load times for MyISAM? There are two things that I tried: increasing the key_buffer_size and disabling the Performance Schema for 5.6 and 5.7. I set the key_buffer_size to 600M (trying to fit the primary key in memory) and I set performance_schema = 0, and I tested the 3 remaining combinations. Lower is better:
load_data_04.py results for MyISAM

MySQL Version                   5.1.72    5.5.39    5.6.20     5.7.4
default                        141.414   149.612   155.181   166.836
key_buffer_size=600M           136.649   170.622   182.698   191.228
key_buffer_size=600M, P_S=OFF  133.967   170.677   177.724   186.171
P_S=OFF                        142.592   145.679   150.684   159.702

There are certain things to notice here:

  • P_S=ON and P_S=OFF should have no effect in MySQL 5.1 and 5.5, but they show different results because of measuring errors. We must understand that only 2 significant figures should be taken into account.
  • key_buffer_size does not in general improve performance; in fact, I would say that it statistically worsens it. This is reasonable because, after all, I am writing to the filesystem cache, and a larger key cache might require costlier memory reservations, or more memory copies. This should be researched further before drawing a conclusion.
  • The Performance Schema may worsen performance on this workload, but I am not statistically sure.
  • MyISAM (or maybe the MySQL server) seems to have slightly worsened its performance for this specific workload (single-threaded batch import).

There are more things that I would like to try with MyISAM, like seeing the impact of the several row formats (fixed), but I wanted to move on to the other engines.

Trying to improve Load Data for InnoDB

InnoDB is a much more interesting engine, as it is ACID by default, and more complex. Can we make it as fast as MyISAM for importing?

The first thing I wanted to do is change the default values of innodb_log_file_size and innodb_buffer_pool_size. The log is different by default before and after 5.6, and it is not suitable for a heavy write load. I set it for a first test to 2G, as that is the largest size that 5.1 and 5.5 can use (actually, I set it to 2,147,483,136, as it has to be less than 2G), meaning that we have logs of about 4G. I also set the buffer pool to a convenient size, 8GB, enough to hold the whole dataset. Remember that one of the reasons InnoDB is so slow for imports is that it writes the new pages (at least) twice on disk: once to the log, and once to the tablespace. However, with these parameters, the second write should be mostly buffered in memory. These are the new results (lower is better):
load_data_04.py InnoDB results

MySQL Version                       5.1.72    5.5.39   5.6.20     5.7.4
default                           1923.751  1797.220  850.636  1008.349
log_file_size=2G, buffer_pool=8G  1044.923  1012.488  743.818   850.868

Now this is a test that starts to be more reasonable. We can comment that:

  • Most of the improvement that we saw before in 5.6 and 5.7 with respect to 5.1 and 5.5 was due to the 10x larger logs.
  • Still, 5.6 and 5.7 are faster than 5.1 and 5.5 (reasonable, as 5.6 brought quite some impressive InnoDB changes, both in code and in configuration)
  • InnoDB continues to be at least 5x slower than MyISAM
  • Still, 5.7 is slower than 5.6! We are consistently seeing a 13-18% regression in 5.7 (now I am starting to worry)

I said before that the main overhead of InnoDB is writing the data twice (log and tables). This is actually incomplete, as it may write the data 3 times (to the doublewrite area) and even 4 times, to the binary log. The binary log is not enabled by default, but the doublewrite is, as it protects from corruption. While we would never recommend disabling the doublewrite on a production system, the truth is that during an import we do not care if the data ends up corrupted (we can delete it and import it again). There are also options on certain filesystems that make it unnecessary.

Other features that are in InnoDB for safety, not for performance, are the InnoDB checksums; they were even the cause of bottlenecks on very fast storage devices like flash PCI cards. In those cases, the CPU was too slow to calculate them! I suspected that would not be a problem here, because the more modern versions of MySQL (5.6 and 5.7) have the option to change the checksum to the hardware-accelerated CRC32 function and, mainly, because I am using a magnetic disk, which is the real bottleneck. But let's not just trust what we have learned; let's test it.

The other thing I can check is the performance_schema overhead. I have found workloads where it produces significant overhead, and others where it produces almost none. Let's also test enabling and disabling it.

These are the results (lower is better):
load_data_04.py results for InnoDB, optimized

MySQL Version                               5.1.72    5.5.39   5.6.20    5.7.4
default security and monitoring enabled  1044.923  1012.488  743.818  850.868
doublewrite=off                            896.423   848.110  483.542  468.943
doublewrite=off, checksums=none            889.827   846.552  488.311  476.916
doublewrite=off, checksums=none, P_S=off                      488.273  467.716

There are several things to comment here, some of them I cannot even explain:

  • The doublewrite feature does not halve the performance, but it impacts it significantly (between 15-30%)
  • Without the doublewrite, most of the 5.7 regression goes away (why?)
  • The doublewrite also has a larger impact on 5.6 and 5.7 than on previous versions of MySQL. I would dare to say that most of the other bottlenecks may have been eliminated (or maybe it is just something like the buffer pool partitions being active by default?)
  • The InnoDB checksum makes absolutely no difference for this workload and hardware.
  • Again, I cannot give statistical significance to the overhead of the performance schema. However, I have obtained very variable results in these tests, with latencies up to 10% higher than the central values of the runs with it disabled, so I am not a hundred percent sure of this.

In summary, with just a bit of tweaking, we can get results in InnoDB that are only 2x slower than MyISAM, instead of 5x or 12x.

Import in MyISAM, convert it to InnoDB

I have seen some people on forums recommend importing a table as MyISAM, then converting it to InnoDB. Let's see if we can bust or confirm this myth with the following code (load_data_06.py):
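
A sketch of the approach (illustrative; it reuses the LOAD DATA statement from before and just changes the engine around it):

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

cursor.execute("ALTER TABLE nodes ENGINE=MyISAM")
cursor.execute("LOAD DATA INFILE '/tmp/nodes.csv' INTO TABLE nodes "
               "FIELDS TERMINATED BY '\\t'")
cursor.execute("ALTER TABLE nodes ENGINE=InnoDB")  # the conversion step

cursor.close()
cnx.close()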

These are the comparisons (lower is better):
load_data_06.py results

MySQL Version                                  5.1.72    5.5.39    5.6.20     5.7.4
LOAD DATA InnoDB                             1923.751  1797.220   850.636  1008.349
LOAD DATA MyISAM; ALTER TABLE ENGINE=InnoDB  2075.445  2041.893  1537.775  1600.467

I can see how that could have been almost true in 5.1, but it is definitely not true in supported versions of MySQL. It is, admittedly, faster than importing the table twice, once into MyISAM and once into InnoDB.

I leave it as homework for the reader to check it for other engines, like MEMORY or CSV [Hint: maybe we could import into this last engine in a different way].

Parallel loading

MyISAM writes to tables using a full table lock (although it can perform concurrent inserts in some cases), but InnoDB only requires row-level locks in many cases. Can we speed up the process with parallel loading? This is what I tried to test last. I do not trust my programming skills (nor do I have the time) to perform the file seeking and chunking in a performant way, so I started with a .csv file pre-sliced into 8 chunks. It should not consume much time, but the limited synchronization tools in the default threading library, together with my limited time, made me opt for this plan. We only need to understand that we do not start with the exact same scenario in this case. This is the code (load_data_08.py):
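
A sketch of the parallel version (illustrative: the chunk file names are assumed to be nodes.csv.0 through nodes.csv.7, and each thread gets its own connection):

import threading

import mysql.connector

def load_chunk(path):
    cnx = mysql.connector.connect(user='root', database='test')
    cursor = cnx.cursor()
    cursor.execute("LOAD DATA INFILE '{0}' INTO TABLE nodes "
                   "FIELDS TERMINATED BY '\\t'".format(path))
    cnx.commit()
    cnx.close()

threads = [threading.Thread(target=load_chunk,
                            args=('/tmp/nodes.csv.{0}'.format(i),))
           for i in range(8)]
for thread in threads:
    thread.start()
for thread in threads:  # wait for all chunks to finish loading
    thread.join()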

And these are the results, with different parameters:
load_data_08.py results

MySQL Version                                  5.1.72   5.5.39   5.6.20    5.7.4
1 thread, log_file_size=2G, buffer_pool=8G    894.367  859.965  488.273  467.716
8 threads, log_file_size=2G, buffer_pool=8G   752.233  704.444  370.598  290.343
8 threads, log_file_size=5G, buffer_pool=20G                    301.693  243.544
4 threads, log_file_size=5G, buffer_pool=20G                    295.884  245.569

From this we can see that:

  • There is little difference in performance between loading in parallel with 4 or 8 threads. This is a machine with 4 cores (8 HT)
  • Parallelization helps, although it doesn't scale (4-8 threads gives around a 33% speedup)
  • This is where 5.6 and especially 5.7 shine
  • A larger transaction log and buffer pool (larger than 4G, only available in 5.6+) still helps with the load
  • Parallel load with 5.7 is the fastest way in which I can load this file into a table using InnoDB: 243 seconds. It takes 1.8x the time of the fastest way I can load a MyISAM table (5.1, single-threaded): 134 seconds. That is almost 200K rows/s!

Summary and open questions

  • The fastest way you can import a table into MySQL without using raw files is the LOAD DATA syntax. Use parallelization for InnoDB for better results, and remember to tune basic parameters like your transaction log size and buffer pool. Careful programming and importing can make a >2-hour problem become a 2-minute process. You can temporarily disable some safety features for extra performance
  • There seems to be an important regression in 5.7 for this particular single-threaded insert load, for both MyISAM and InnoDB, with up to 15% worse performance than 5.6. I do not know yet why.
  • On the bright side, there is also an important improvement (up to 20%) relative to 5.6 under a parallel write load.
  • The Performance Schema may have an impact on this particular workload, but I am unable to measure it reliably (it is closer to 0 than my measuring error). That is a good thing.

I would be grateful if you can tell me if I have made any mistakes on my assumptions here.

Here you can download the different scripts in Python tested for the MySQL data loading.

Remember that these were not “formal” benchmarks, and I no longer have access to the machine where I generated them. I have yet to analyze whether the same problem exists in 5.7.5. Other people, like Mark Callaghan, are pointing to regressions under low concurrency; maybe these are related? As usual, post a comment here or reach me on Twitter.

Today is the day in which MyISAM is no longer needed

RIP MyISAM

Of course, this is just a catchy title. As far as I know, not all system tables can be converted to InnoDB yet (e.g. the grant tables), which makes the header technically false. MyISAM is a very simple engine, and that has some inherent advantages (no transactional overhead, easier to “edit” manually, usually a smaller footprint on disk), but also some very ugly disadvantages: not crash safe, no foreign keys, only full-table locks, consistency problems, bugs for large tables, … The 5.7.5 “Milestone 15” release, presented today at Oracle OpenWorld, has an impressive list of changes, which I will need some time to digest, like an in-development (synchronous?) multi-master replication or a revamped query optimizer. But the one change that I want to highlight today is how the last of the “big 3” reasons to use MyISAM has finally vanished. For me (and my customers) those reasons were:

Transportable tablespaces

In MyISAM, moving a table in binary format from one server to another was very easy: shut down the servers and copy the .MYI, .MYD and .frm files. You could even do it in a hot way with due care: you could copy the table files if you executed the infamous FLUSH TABLES WITH READ LOCK; beforehand, and use that as a backup.

innodb_file_per_table was introduced as early as MySQL 4.1, but it was not made the default until 5.6.6 (with a brief indecision in early versions of 5.5). The actual “transportable tablespaces” feature was also added in 5.6.6, and provided a way inside the server to prepare InnoDB tables for copying, by locking them and exporting their portion of the InnoDB data dictionary (FLUSH TABLES ... FOR EXPORT).

Before 5.6, MySQL required a patch for this to work reliably. Now, single tables can be exported and imported without problem in binary format, even between servers.
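
A sketch of the workflow (the table name, hosts and the actual file copy are illustrative; the source session must stay open until the files have been copied):

import mysql.connector

source = mysql.connector.connect(user='root', database='test')
src_cursor = source.cursor()
src_cursor.execute("FLUSH TABLES nodes FOR EXPORT")  # writes nodes.cfg
# ... copy nodes.ibd and nodes.cfg from the source datadir ...
src_cursor.execute("UNLOCK TABLES")

# The destination needs an empty table with the same definition
dest = mysql.connector.connect(user='root', host='destination',
                               database='test')
dst_cursor = dest.cursor()
dst_cursor.execute("ALTER TABLE nodes DISCARD TABLESPACE")
# ... place the copied .ibd and .cfg files in the destination datadir ...
dst_cursor.execute("ALTER TABLE nodes IMPORT TABLESPACE")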

Fulltext indexes

Fulltext search has never been the strong point of MySQL (which is why many people combine it with Sphinx or Apache Lucene/Solr). But many users did not require a Google Search clone, only a quick way to search a smallish website or a description column, and, as we know, BTREE indexes cannot help with LIKE '%term%' expressions.

FULLTEXT indexes and searches have been available since MySQL 3.23.23, but only on MyISAM. I do not know about you, but I have found a relatively high number of customers whose only reason to continue using MyISAM was “we need fulltext search”. Starting with MySQL 5.6.4, fulltext support was added to InnoDB, avoiding the need to decide between transactionality and fast string search. While the start was not precisely great (especially compared to other, more complex external solutions), and it was released with some important crashing bugs, the latest changes indicate that InnoDB fulltext support is still being worked on in order to increase its performance.
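
A minimal sketch of what that looks like on InnoDB (the schema and search term are made up for the example):

import mysql.connector

cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor()

cursor.execute("CREATE TABLE articles ("
               "  id SERIAL,"
               "  body TEXT,"
               "  FULLTEXT (body)"
               ") ENGINE=InnoDB")  # InnoDB fulltext requires 5.6.4+
cursor.execute("SELECT id FROM articles "
               "WHERE MATCH(body) AGAINST ('mysql fulltext')")
print(cursor.fetchall())

cursor.close()
cnx.close()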

GIS support

This is the one that MySQL engineers added in MySQL 5.7.5. Of course, GIS datatypes have been available since MySQL 4.1 for MyISAM, and since 5.0.16 for most other upstream engines, including InnoDB. However, those types are not useful if they cannot be used quickly in common geographical operations, like finding whether 2 polygons overlap or finding all points that are close to another. Most of those operations require indexing in 2 dimensions, something that does not work very well with standard BTREE indexes. For that, we need R-trees or quadtrees, structures that can efficiently index multidimensional values. Up to now, those SPATIAL indexes, as they are called in MySQL syntax, were only available for MyISAM, meaning that you had to decide yet again between transactions and foreign keys or fast GIS operations. This was one of the reasons why projects like OpenStreetMap migrated to PostGIS, while others used the Oracle Spatial extensions.

To be fair, the list of changes regarding GIS seems quite extensive, and I have not yet been able to have a detailed look at it. But from what I can see, there is still no support for projections (after all, that would probably require a full overhaul of this feature) and, with it, no native distance functions, which makes it not a viable alternative to PostGIS in many scenarios. But I can see how InnoDB support, at least at MyISAM's level and beyond, is a huge step forward. Again, sometimes the main MySQL audience does not need a complete set of features, just a minimum set of options to display something like a map on a website efficiently.

MyISAM in a post-myisam world

In summary, these changes, together with the slow but steady migration of system tables to InnoDB format, plus the efforts on reducing the transactional overhead of internal temporary tables, will allow Oracle to make MyISAM optional in the future.

I will continue to use MyISAM myself in certain cases, because sometimes you do not need fully ACID storage, and it works particularly well for small, read-only datasets, even if you have millions of those (hey, it works well for WordPress.com, so why should you not use it, too?).

Also, it will take years for everybody to adopt 5.7, which is not even GA yet.

So tell me, are you planning to migrate engines when 5.7 arrives in your production environment? What are you still using MyISAM for? Which is your favorite 5.7.5 new feature? Which caveats have you found in the newly announced features? Send me a message here or on Twitter.

Changes in Configuration of Global Variables between MySQL 5.6.20 and MySQL 5.7.4 “Milestone 14″

While doing some testing (which I published later here) on the still-in-development MySQL 5.7, I wanted to do some analysis of the configuration to see whether the changes in performance were due to the code changes or just to the new MySQL defaults (something that is very common in the migration from 5.5 to 5.6, due to the default transaction log size and other InnoDB parameters). This is a quick post aiming to identify the global variables that changed between these two versions.

You could tell me that you could just read the release notes, but my experience (and this is not an exception, as you will see) tells me to check these changes by myself.

I do not include changes in the performance_schema tables, as I was running these particular tests with performance_schema = OFF. I also do not include “administrative changes”, my name for variables that do not influence the behaviour or performance of MySQL, like server_uuid, which will be unique for different instances, and innodb_version, which has obviously changed from 5.6.20 to 5.7.4-m14. Please note that some changes have also been back-ported to 5.6, and so are not shown here, or were already available in previous releases of 5.7.

Variables that have changed their value

variable name                              5.6.20 value   5.7.4 value
eq_range_index_dive_limit                  10             200
log_warnings                               1              2
performance_schema_max_statement_classes   168            189

New variables

variable name                                               5.7.4 value
default_authentication_plugin                               mysql_native_password
default_password_lifetime                                   360
have_statement_timeout                                      YES
innodb_buffer_pool_dump_pct                                 100
innodb_log_write_ahead_size                                 8192
innodb_page_cleaners                                        1
innodb_temp_data_file_path                                  ibtmp1:12M:autoextend
log_error_verbosity                                         3
log_timestamps                                              UTC
max_statement_time                                          0
performance_schema_events_transactions_history_long_size   -1
performance_schema_events_transactions_history_size        -1
performance_schema_max_memory_classes                       250
performance_schema_max_metadata_locks                       -1
performance_schema_max_prepared_statements_instances       -1
performance_schema_max_program_instances                    5000
performance_schema_max_statement_stack                      10
rbr_exec_mode                                               STRICT
session_track_schema                                        ON
session_track_state_change                                  OFF
session_track_system_variables                              time_zone,autocommit,character_set_client,character_set_results,character_set_connection
slave_parallel_type                                         DATABASE

Deprecated variables

variable name                     5.6.20 value
binlogging_impossible_mode        IGNORE_ERROR
innodb_additional_mem_pool_size   8388608
innodb_use_sys_malloc             ON
thread_concurrency                10

Some comments:

  • Regarding potential incompatibilities, all deprecated variables but one were literally useless, and I did not usually find them set up, except for innodb_additional_mem_pool_size, which was, in my experience, always configured by mistake, as it has had absolutely no effect in recent versions of InnoDB. The exception is binlogging_impossible_mode, which was added in 5.6.20 and probably not merged in time for this 5.7 milestone. It will probably be added in the future with equivalent functionality. An interesting feature, I would add.
  • eq_range_index_dive_limit changing from 10 to 200 is a very reasonable change, made after a Facebook suggestion. This variable was added in MySQL 5.6, and while it solved the problem of getting more reliable statistics for IN expressions with multiple values, Facebook was completely right that IN clauses commonly have more than 10 items (as it is a feature that many developers/frameworks like).
  • max_statement_time and have_statement_timeout come from the merge or reimplementation of the Twitter statement timeout functionality. A nice change to see upstream.
  • default_authentication_plugin is not a new feature; it has just been promoted from a server parameter to a full global variable that can be inspected (but not changed) at runtime. The real change here is default_password_lifetime, which was really missing from the 5.6 release: automatic password expiration (without having to run PASSWORD EXPIRE manually). What I find amusing is the default value: 360 (passwords expire approximately once a year). I am not saying that that is a right or wrong default, but I predict a lot of controversy/confusion over it. There is more to say about authentication changes, but I will not expand on them here, as they do not concern configuration variables.
  • By allowing slave_parallel_type to be changed to LOGICAL_CLOCK, MySQL enables more fine-grained parallel replication, much better than the limited 5.6 option (only useful in multi-tenant setups)
  • There are some interesting additions to InnoDB, too, like the innodb_page_cleaners variable, allowing multiple threads for flushing pages from the buffer pool in parallel, which was the subject of a recent discussion about a certain benchmark. We also have additions like some extra flexibility regarding the transaction log caching configuration and the location of temporary tables in InnoDB format, but I consider those lesser changes, so I will not go over them in detail.
  • log_warnings has changed and it has not been documented. But to be honest, its functionality is being deprecated in favor of log_error_verbosity, a newly introduced variable that makes all errors, warnings and notes be logged by default. I have submitted bug #73745 (now fixed) about this.
  • A new variable, rbr_exec_mode, seems to have been added in 5.7.1, but it is not documented anywhere in the server variables section or in the release notes, only on that developer's blog. It allows setting, at session level, an IDEMPOTENT mode when replicating events in row format, ignoring all conflicts found. I have created bug #73744 for this issue (now fixed).
  • There have been several performance_schema changes; I will not go over each of them here. Please note that performance_schema_max_statement_classes is not a real change, as it is calculated at startup time and does not have a fixed value.
  • Session tracking variables were added for notification of session changes when using the C connector

In summary, there are some interesting changes, only one default change that may alter performance (eq_range_index_dive_limit), and nothing that will create problems for a migration, with only two exceptions that I predict:

Instances of the (long-useless, as mentioned above) variable innodb_additional_mem_pool_size failing with:

, which should simply be deleted from the configuration file.

And the expiration time set by default to 1 year, which may create lots of:

or even create some difficult-to-debug problems in older drivers, as we have experienced with this functionality in 5.6. I would particularly like your opinion about software defaults for password expiration, as I do not consider myself a security expert. As usual, you can comment here or on Twitter.

EDIT: Morgan Tocker, from Oracle, has commented via Twitter that “innodb_additional_mem_pool_size had been useless for a long time (since the plugin), and that the reason for the change now is the additional problems of parsing but ignoring options”. I am not complaining about those changes; I actually think that they should have been made a long time ago to prevent those very errors. I am just offering here a solution for what I think can be frequent mistakes in a migration. Incompatibility is sometimes the way to go.