Changes in Configuration of Global Variables between MySQL 5.6.20 and MySQL 5.7.4 "Milestone 14"

While doing some testing (which I still have pending to show you) on the still-in-development MySQL 5.7, I wanted to analyse the configuration to see whether the changes in performance were due to code changes or just to the new MySQL defaults (something that is very common in the migration from 5.5 to 5.6 due to the default transaction log size and other InnoDB parameters). This is a quick post aiming to identify the global variables that changed between these two versions.

You could tell me that I could just read the release notes, but my experience (and this post is no exception, as you will see) tells me to check these changes myself.

I do not include changes in the performance_schema tables, as I was running these particular tests with performance_schema = OFF. I also do not include "administrative changes", my name for variables that do not influence the behaviour or performance of MySQL, like server_uuid, which will be unique for different instances and versions, and innodb_version, which obviously changed from 5.6.20 to 5.7.4-m14. Please note that some changes have also been back-ported to 5.6, and so are not shown here, or were already available in previous releases of 5.7.

Variables that have changed their value

variable name                               5.6.20 value   5.7.4 value
eq_range_index_dive_limit                   10             200
log_warnings                                1              2
performance_schema_max_statement_classes    168            189

New variables

variable name                                               5.7.4 value
default_authentication_plugin                               mysql_native_password
default_password_lifetime                                   360
have_statement_timeout                                      YES
innodb_buffer_pool_dump_pct                                 100
innodb_log_write_ahead_size                                 8192
innodb_page_cleaners                                        1
innodb_temp_data_file_path                                  ibtmp1:12M:autoextend
log_error_verbosity                                         3
log_timestamps                                              UTC
max_statement_time                                          0
performance_schema_events_transactions_history_long_size   -1
performance_schema_events_transactions_history_size        -1
performance_schema_max_memory_classes                       250
performance_schema_max_metadata_locks                       -1
performance_schema_max_prepared_statements_instances        -1
performance_schema_max_program_instances                    5000
performance_schema_max_statement_stack                      10
rbr_exec_mode                                               STRICT
session_track_schema                                        ON
session_track_state_change                                  OFF
session_track_system_variables                              time_zone,autocommit,character_set_client,character_set_results,character_set_connection
slave_parallel_type                                         DATABASE

Deprecated variables

variable name                      5.6.20 value
binlogging_impossible_mode         IGNORE_ERROR
innodb_additional_mem_pool_size    8388608
innodb_use_sys_malloc              ON
thread_concurrency                 10

Some comments:

  • Regarding potential incompatibilities, all deprecated variables but one were literally useless, and I rarely found them configured, except for innodb_additional_mem_pool_size, which was, in my experience, always configured by mistake, as it has had absolutely no effect in recent versions of InnoDB. The exception is binlogging_impossible_mode, which was added in 5.6.20 and probably not merged in time for this 5.7 milestone. It will probably be added in the future with equivalent functionality. An interesting feature, I would add.
  • eq_range_index_dive_limit changing from 10 to 200 is a very reasonable change, made after a Facebook suggestion. This variable was added in MySQL 5.6, and while it solved the problem of getting more reliable statistics for IN expressions with multiple values, Facebook was completely right that IN clauses commonly have more than 10 items (as it is a feature that many developers/frameworks like).
  • max_statement_time and have_statement_timeout come from the merge or reimplementation of the Twitter Statement Timeout functionality. A nice change to see upstream.
  • default_authentication_plugin is not a new feature; it has just been moved from a server parameter to a full global variable that can be inspected (but not changed) at runtime. The real change here is default_password_lifetime, which was really missing in the 5.6 release: automatic password expiration (without having to run PASSWORD EXPIRE manually). What I find amusing is the default value: 360, meaning passwords expire approximately once a year. I am not saying that is a right or wrong default, but I predict a lot of controversy/confusion over it (see the sketch after this list). There is more to talk about regarding authentication changes, but I will not expand on it here, as it does not concern configuration variables.
  • By changing slave_parallel_type to LOGICAL_CLOCK, MySQL allows for more fine-grained parallel replication, much better than the limited 5.6 option (only useful in multi-tenant setups).
  • There are some interesting additions to InnoDB, too, like the innodb_page_cleaners variable, allowing multiple threads for flushing pages from the buffer pool in parallel, which was the subject of a recent discussion about a certain benchmark. We also get some extra flexibility regarding the transaction log caching configuration and the location of temporary tables in InnoDB format, but I consider those lesser changes and will not go over them in detail.
  • log_warnings has changed and that has not been documented. To be honest, its functionality is being deprecated in favour of log_error_verbosity, a newly introduced variable that by default makes all errors, warnings and notes be logged. I have submitted bug #73745 about this.
  • A new variable, rbr_exec_mode, seems to have been added in 5.7.1, but it is not documented anywhere in the server variables section or in the release notes, only on that developer’s blog. It allows setting, at session level, an IDEMPOTENT mode when replicating events in row format, ignoring all conflicts found. I have created bug #73744 for this issue.
  • There have been several performance_schema changes; I will not go over each of them here. Please note that performance_schema_max_statement_classes is not a real change, as it is calculated at startup time and does not have a fixed value.
  • Session tracking variables were added for notification of session state changes when using the C connector.
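As a minimal sketch of how the new password expiration default could be adjusted, assuming you want a different policy (the variable is dynamic in 5.7, but verify the behaviour on your exact build before relying on it):

    # disable automatic password expiration globally (0 = passwords never expire)
    mysql -u root -p -e "SET GLOBAL default_password_lifetime = 0;"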

In summary, there are some interesting changes, only one default change that may alter performance (eq_range_index_dive_limit), and nothing that will create problems for a migration, with only two exceptions that I predict myself:

Instances of the (useless for a long time, as mentioned above) variable innodb_additional_mem_pool_size making the server fail at startup with an error like the following:
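(This is only an illustrative sketch of the kind of "unknown variable" message mysqld emits; the exact wording depends on the version and the configured value.)

    [ERROR] unknown variable 'innodb_additional_mem_pool_size=8388608'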

The offending option should simply be deleted from the configuration file.

And the expiration time set by default to 1 year, which may create lots of errors like:
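(Again an illustrative sketch; this is the kind of client error produced by an expired password, but the exact error code and message may differ between versions and connectors.)

    ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.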

or even create some difficult-to-debug problems with older drivers, as we experienced with this functionality in 5.6. I would particularly like to hear your opinion about software defaults for password expiration, as I do not consider myself a security expert. As usual, you can comment here or on Twitter.

EDIT: Morgan Tocker, from Oracle, has commented via Twitter that "innodb_additional_mem_pool_size had been useless for a long time (since the plugin), and that the reason for the change now is the additional problems of parsing but ignoring options". I am not complaining about those changes; I actually think they should have been made a long time ago to prevent those very errors. I am just offering here a solution for what I think can be frequent mistakes on migration. Incompatibility is sometimes the way to go.

Which Compression Tool Should I Use for my Database Backups? (Part II: Decompression)

In my post last week, I analysed some of the most common compression tools and formats, and their compression speed and ratio. While that could give us a good idea of the performance of those tools, the analysis would be incomplete without researching decompression. This is particularly true for database backups: for those cases where the compression process is performed outside of the production boxes, you may not care too much about compression times. In that case, even if compression is relatively slow, it will not affect the performance of your MySQL server (or whatever you are using). The decompression time, however, can be critical, as in many cases it may influence the MTTR of your whole system.

Testing environment

I used the same OpenStreetMap node MySQL dump in CSV format that I mentioned in my previous post and, as some tools used the same format (and should be compatible) but produced different compression ratios, I chose the smallest resulting file for each of them. Here is a table with the compressed size per format, as a reminder:

format                           size (bytes)
original .csv (no compression)   3700635579
gzip                             585756379
bzip2                            508276130
bzip2 (pbzip2-compressed)        508782016
7z                               354107250
lzip                             354095395
lzo                              782234410
lz4                              816582329

Please note that while p7zip and lzip/plzip use the same algorithm, the file formats are different. Also please notice the use of two different compressed files for bzip2: the reason for that will be clarified later.

The hardware specs were the same as for the last post: an almost-idle Intel Quad Core i7-3770 @ 3.40GHz with hyperthreading, exposing 8 CPUs to the kernel; 32 GB of RAM; 2 spinning disks of 3 TB in RAID 1. The filesystem type was ext4 with the default OS options. The operating system had changed to CentOS 7, however.

The methodology was similar to before: for each tool, the following command was executed several times:
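(The exact invocation is not reproduced here; it was of this general form, timing the decompression to a different partition, with the tool name, extension and target path as placeholders:)

    time [tool] -d -c nodes.csv.[ext] > /other_partition/nodes.csv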

Except for dd and 7za, where the syntax is slightly different.

The final file was stored on a different partition of the same RAID. The final file was checked for correctness (the uncompressed file was exactly the same as the original one) and deleted after every execution. I will not repeat here my disclaimer about the filesystem cache usage, but I also added the dd results as a reference.

Global results

This is my table of final results; the analysis and discussion follow below:

These data can be seen more easily, as usual, on a bidimensional graph. Here the X axis represents the median speed of decompression in MB/s (more is better) and the Y axis represents the compressed ratio in percentage of the compressed size over the original size (less is better):

Analysis of different compression tools: Compression ratio vs decompression speed
(not plotted: dd, as it would appear with a 100% compression ratio).

CPU usage was polled every second, and so was the memory usage, which in no test for any of the tools went over 1 MB.

In this case I have plotted the function y = x*0.01 + 12 over the points and, while there is a clear tendency of better compression ratios requiring more time to decompress, the correlation is weaker than in the compression case.

The last thing I want to remark about the global results is that I have not tried variations in parameters for decompression, as in most cases there are few to no options for this process, and the algorithms will do essentially the same for a file that was created with --fast as for one created with --best.

Decompressing gzip and bzip2 formats

Unsurprisingly, the gzip file took less time to decompress than the bzip2 one with the generic GNU tools (17 seconds vs. 56). I used GNU gzip 1.5 and bzip2 1.0.6. I said everything I had to say about the advantages and disadvantages of using the most standard tools, so I will not repeat myself, but I wanted to reiterate the idea that gzip is a great tool for fast compression processes when there is no alternative, as it reached a mean throughput of almost 203 MB/s when decompressing our test file.

Of course, the next step was testing decompression in parallel, and for that I had pigz 2.3.1 and pbzip2 v1.1.6. As a side note, I would like to mention that, at the time of this writing, there were no rpm packages for pbzip2 for CentOS 7, neither in the base distribution nor on EPEL (which is currently in beta for version 7). I used the package for RHEL6.

However, when looking at the pigz results we can see that, while there is certainly an improvement in speed (just over 7 seconds), it is not as dramatic as the 4x+ improvement that we had for compression. Also, if we look at the CPU usage, we can see that the maximum %CPU usage is never over 170. I found the reason for that in the documentation: while pigz uses several threads for read and write I/O, it is unable to parallelise the core gzip decompression algorithm. The improvement over standard gzip, however, is there, with almost 500 MB/s of decompression bandwidth.

When checking pbzip2, on my first try, I realised that there was no parallelization at all, and that the timing results were essentially the same as with regular bzip2. I searched for answers in the documentation and found that the reason was that parallel decompression is possible (unlike with gzip), but only for files created by pbzip2 itself. In other words, both bzip2 and pbzip2 create files with a compatible format (each can decompress the other's files), but parallelization is only fully possible if they are created and decompressed with pbzip2. To test that second case, I took the best-compressed file from my previous results (which was slightly larger than the one created with bzip2) and repeated the tests. That is why there are two different rows for pbzip2 in the global results.
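A minimal sketch of the two scenarios (file names are illustrative, and -p8 assumes the 8 CPUs exposed by this machine):

    # file created by plain bzip2: pbzip2 can read it, but decompression stays single-threaded
    pbzip2 -d -c nodes.csv.bz2 > nodes.csv
    # file created by pbzip2 itself: decompression can use all cores
    pbzip2 -p8 -d -c nodes.pbzip2.csv.bz2 > nodes.csv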

In that second scenario, pbzip2 was a real improvement over bzip2, obtaining decompression rates of 356 MB/s, roughly equivalent to the results of a raw filesystem copy.

As expected, multi-threaded decompression is a clear advantage on SMP systems, with the usual disclaimers about the extra resources consumed and the fact that, as just seen, it is not possible for all file formats.

Lzma decompression

The next group to test is the lzma-based tools: lzip 1.7, p7zip 9.20 and plzip 1.2-rc2. Again, lzip was not available on EPEL-7, so the RHEL6 version was used, and plzip was compiled from source, as we had to do previously.

The lzma algorithm was classified as slow but with good compression in our previous results. A similar thing can be said for decompression: both lzip and 7za provide decompression times of around 30 seconds, with throughputs near 100 MB/s. Although p7zip seems to be a bit better parallelised than lzip (with %CPU usage reaching 150), both provide an essentially single-threaded decompression algorithm. Plzip provides better parallelization, reaching a maximum %CPU of 290, but its throughput never reaches 200 MB/s.

The general evaluation is that they are clearly better tools than single-threaded gzip and bzip2, as they provide similar decompression bandwidths but with much better compression ratios.

Fast tools: lzop and lz4

Finally, we have the fast compression and decompression tools left, in our tests lzop v1.03 and lz4 r121. In this case we can confirm the claims that lz4, while providing compression speed similar to lzop's, is faster for decompression: almost doubling the rate (580 MB/s for lzop vs. 1111 MB/s for lz4). Obviously, the only reason those results are possible is that the filesystem cache is kicking in, so take these results with the due precaution. But it shows what kind of decompression bandwidth can be achieved when the disk latency is not the bottleneck.
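For reference, a sketch of the kind of invocations involved (file names are illustrative; the actual flags used in the tests may have differed slightly):

    # lzop: decompress to stdout
    lzop -d -c nodes.csv.lzo > nodes.csv
    # lz4: decompress to a named output file
    lz4 -d nodes.csv.lz4 nodes.csv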

When the time of the test is so small, I would recommend repeating it with larger file sizes and/or limiting the effect of the filesystem cache. I will leave that as homework for the reader.

Conclusion

Aside from the limitations found in several of the tools regarding decompression parallelization (pigz, pbzip2), no highly surprising results were found. Fast compression tools are fast to decompress (I have become a fan of lz4) and best-compression tools are slower (plzip seems to work very well if we are not constrained by time and CPU). As usual, I will leave you with my recommendation to always test on your environment, with your own files and machines.

Which compression tool(s) do you use for MySQL (or any other database backups)? Leave me a comment here or on Twitter.

Which Compression Tool Should I Use for my Database Backups? (Part I: Compression)

This week we are talking about size, which is a subject that should matter to any system administrator in charge of the backup system of any project, and in particular database backups.

I sometimes get questions about which compression tool is best to apply in a particular backup system: gzip? bzip2? any other?

The testing environment

In order to test several formats and tools, I created a .csv file (comma-separated values) that was 3,700,635,579 bytes in size by transforming a recent dump of all the OpenStreetMap nodes of the European portion of Spain. It had a total of 46,741,126 rows and looked like this:

In fact, the original file is really a tsv (tab-separated values), not a csv, but only because I am lazy and did not want to add the extra FIELDS TERMINATED BY ',' when importing/exporting to MySQL. You can download this file in 7z format, or create your own from the Geofabrik OpenStreetMap Data Extracts.
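(A hedged example of the kind of import implied; the database and table names here are hypothetical, and with a tab-separated file the FIELDS TERMINATED BY clause can simply be omitted, as tab is the default separator:)

    mysql -u root nodes_db -e "LOAD DATA LOCAL INFILE 'nodes.csv' INTO TABLE nodes FIELDS TERMINATED BY ',';"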

All tests were done on an almost-idle Intel Quad Core i7-3770@3.40GHz with hyper threading, exposing 8 cpus to the kernel. 32 GB of ram. 2 spinning disks of 3 TB in RAID 1. All running on CentOS 6.5 x86_64. The filesystem type was ext4 with the default OS options.

On-table sizes

For an import to MySQL, I proposed the following table structure:
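(The original definition is not reproduced here; this is only a plausible sketch, assuming integer-scaled coordinates and the compact datatypes mentioned below. The actual column list, types and engine options may have differed:)

    mysql -u root nodes_db <<'EOF'
    CREATE TABLE nodes (
      id        BIGINT UNSIGNED NOT NULL PRIMARY KEY,
      lat       INT NOT NULL,
      lon       INT NOT NULL,
      changeset BIGINT UNSIGNED NOT NULL,
      ts        TIMESTAMP NOT NULL
    ) ENGINE=InnoDB;
    EOF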

And these are the sizes in the database (once we made sure there were no pending write operations):

  • MySQL MyISAM data file (.MYD): 2,755,256,596 bytes.(*)
  • MySQL InnoDB tablespace (.ibd): 3,686,793,216 bytes.
  • MySQL InnoDB tablespace using row_format=compressed (.ibd): 1,736,441,856 bytes.

Why does it take more space as plain text than in the database? Databases are optimised for fast access, not for space, but here we are using a very compact set of datatypes (integers and timestamps instead of strings), which actually saves disk space. This is why a proper database design is critical for performance!

We can see that one of the few reasons why people are still using MyISAM is that it is a very simple and compact format. (*) However, to be fair, we are not taking into account the extra 674,940,928 bytes for the primary key (.MYI), which makes the difference not so big. On the other hand, we are also not taking into account that InnoDB index size goes up quite significantly when using multiple secondary keys (due to the storage of the primary key, if it is large enough) and the many other structures (tablespace 0, transaction logs) that are needed for InnoDB to work properly, shared with other tables. In general, it is impossible to do a fair comparison between MyISAM and InnoDB because we would be comparing apples and oranges.

What is clear is that compression (in this case I used the default InnoDB zlib algorithm with the default compression level, 6) helps reduce on-disk size, potentially helping in some specific scenarios: more tables fitting on SSDs, or fewer IOPS for a disk-bound database. On the other hand, the initial load from a file took significantly longer. I do not want to show time measurements for the different table imports because it is not trivial to account for the actual time to disk due to all the buffering occurring at database level, and giving only the SQL execution time would be unfair. I will talk more about import times in a future post.

Global results

The sizes in the table above are shown only as a reference; our main goal was to test the several tools available for compressing the original nodes.csv file. I constrained myself to some of the most popular ones, and you can see the final results in the following table (analysis, explanation and discussion of the results follow afterwards):

As you can see, I evaluated several tools in their default modes, plus a "high compression" mode and a "fast" mode. For each, I tried to evaluate 3 different parameters important for the creation of compressed files: time to completion, final file size and resources used. Please note that I only evaluated compression tools, not archiving ones (like tar or zip). The latter tools can usually use different algorithms for compressing each file individually or the final full archive.

The first data column shows the number of seconds of wall clock time that it took for the process to write the compressed file to a different partition on the same set of RAID disks. Several runs of:
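(The exact command line is not reproduced here; it was of this general form, with the tool name, extension and target path as placeholders:)

    time [tool] -c nodes.csv > /other_partition/nodes.csv.[ext]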

were executed (except for 7z and dd, where the syntax is different) and the median value was taken, in order to minimise measurement errors due to external factors. For each run, the final file was checked for correctness (the compressed file is deterministic and can be extracted into the original file without errors or differences) and then deleted. The results are not too scientific, as the filesystem cache can be heavily used for both reads and writes, but I tried to focus on that scenario in particular. An execution of dd (copying the file without compression) is also shown as a control value.

I think the second and third data columns are self-explanatory: the file size, in bytes, of the compressed file and how it compares with the original file.

The last column tries to measure the max and min CPU usage, as reported by the operating system during compression. However, due to the CPU scheduler, the fact that most tools have a synchronisation period at the beginning and at the end of the execution, and the fact that it was obtained by polling its value at intervals, it is not very significant, except for checking the parallelism of the algorithm used. Values greater than 100 mean that more than one core/thread is being used for compression.

I did not record the memory usage (the other important resource) because even in ultra modes its usage was not significant for my 32 GB machine (less than 1/2 GB every time, most of the time much less). I considered it something one should not worry too much about on a machine that should have enough free RAM, like a database server. What you probably would like to take into account is the effect on the filesystem cache, as that could impact directly on MySQL performance. Preventing backup page reads and writes from going into the filesystem cache can be done by playing around with the POSIX_FADV_DONTNEED flag. I also want to mention that there are tools, like bzip2, that have a small-footprint mode: bzip2 --small.

You can find the measures concerning decompression times on a followup post: Part II.

The global results may be appreciated much more clearly plotted on a bidimensional graph. I have plotted the obtained values with the time to compression on the X axis (lower is better) and the compression ratio on the Y axis (lower is better):

Time and ratio comparison of gzip, bzip2, pigz, pbzip2, lzip, p7zip, plzip, lzop and lz4 compression, with different levels and parameters
Not plotted: dd (100% compression ratio), 7za “ultra” (>21 minutes for compression) and lzip (>35 minutes for compression).

In general, we can see that there are no magical tools: a better compression ratio requires more time (compressed size is inversely related to time). I have also plotted the function y = 200/x + 9. That, or something like y = 200/x + 9.5 (it is difficult to establish a good correlation with so few data points, most of them unrelated), seems to provide the lower limit of ratio per unit of time, suggesting that 9%-9.5% would be the best compression ratio obtainable for that file with the tools available at the moment.

Let’s analyse the weaknesses and strong points of each compression format.

The well-known gzip and bzip2

If you want compatibility, gzip and bzip2 are the kings. Not only are they widely recognised compression formats, but the tools to compress and decompress come preinstalled in most unix-like operating systems. Windows is probably the only operating system that doesn’t support gzip by default. gzip and bzip2 are also the only compression formats with their own flag letter in tar (together with compress on BSD and xz on GNU).

Compatibility and availability are the strong points of these tools; however, if we look at the graph, we can see that they are relatively far from the line I mentioned as "ideal" in time/size ratio. bzip2 provides a better compression ratio than gzip in exchange for more CPU cycles, but both tools are single-threaded and neither shines in any aspect. Surprisingly enough, bzip2 -1 gave me a worse compression time and a better ratio than the standard bzip2 execution, and the manual for the GNU version provides an explanation for that:

Probably the best use I can recommend for these tools is gzip --fast (equivalent to gzip -1): while it does not provide a great compression ratio, it works very fast even as a single-threaded application. So it can be useful in those cases where we want to maximise speed without taking up many resources. In other cases, where tool availability is not a problem, I would recommend trying other tools with either better speed or a better compression ratio.

I used GNU versions gzip 1.3.12 and bzip2 1.0.6.

Parallel compression tools: pigz and pbzip2

Things get more interesting if you use the parallel versions of gzip and bzip2 on a multi-core system. While there is more than one implementation, I chose pigz 2.3.1 and pbzip2 1.1.6 for my tests. While they are not part of the official Red Hat/CentOS repositories, they can be found on EPEL and in the Debian repositories.

Both tools auto-detected the number of cores I had and performed the compression in 8 threads, providing comparable compression ratios in about a quarter of the time. The obvious downside is that in a highly demanding environment, like a MySQL server under considerable load, you may not want to (or cannot) give full CPU resources to the backup process. But if you are doing the compression on a separate dedicated server, parallelization is something you should take advantage of, as in general the CPU will be the main bottleneck of a compression algorithm.
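A sketch of the kind of invocations used (file names are illustrative; both tools detect the core count automatically, and the thread flags are shown here only to make the parallelism explicit):

    pigz -p 8 -c nodes.csv > nodes.csv.gz
    pbzip2 -p8 -c nodes.csv > nodes.csv.bz2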

Again, as a highlight, pigz with the default parameters gave me a good compression ratio (16.89%) in less than 28 seconds; that is, compressing at close to 130 MB/s on my modest hardware (more than a third of my copy rate, 350 MB/s).

As a side note, while pbzip2 accepts a compression level as a parameter, the default compression level is -9.

lzma implementations: lzip, 7zip and plzip

The next tests performed were simply different implementations of lzma, an algorithm with a reputation for providing very good compression ratios.

I started with lzip. It is not in the official repositories, so I got it from EPEL, installing lzip 1.7. The compression ratio was, indeed, the best of all the algorithms tested (close to 9.5%), but it took 35 minutes and 38 seconds to produce the output. The algorithm was not the only thing to blame: lzip used a single thread, hence the delay.

After that, I tried p7zip 9.20, in particular its unix tool 7za. This was the only tool tested that did not conform to gzip-style parameters. I had to execute it using:
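(Something along these lines; the exact invocation used in the tests is not reproduced here, and the output file name is illustrative:)

    7za a nodes.csv.7z nodes.csv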

Please note that p7zip is an archiver tool, but I made an exception in order to test an alternative implementation of lzma.

The results were better: while the tool provided a slightly worse compression ratio (10.29%), thanks to some degree of execution in more than one thread, the execution time was reduced to just under 14 minutes. I also tested a suggested "ultra" mode found in the 7za manual, with the following parameters:
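(These are the ultra settings suggested in the 7z/7za man page example; I am assuming they were used as-is, but the exact values in the test may have differed:)

    7za a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on nodes.csv.7z nodes.csv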

In short: maximising the memory usage, compression level and dictionary size, aside from enforcing the archive format and compression algorithm. While this provided a smaller file size (but only 25 MB smaller, less than 1% of the original file), the time went up to more than 21 minutes.

I wanted to try a real parallel implementation of lzma, and plzip was exactly that. I could not find an rpm package anywhere, so I downloaded and built from source Lzlib 1.5 and plzip 1.2-rc2. The results were really good, as expected. plzip provided results comparable to "pigz -9" when running in "fast mode"; but by default, in only 3m37s I got a 359 MB compressed file, or 10.17% of the original file. I then tried to emulate the ultra settings of p7zip (with -9 -m 64 -s 33554432) and got the winning tool in compression ratio (9.57%) in only 7 minutes and 6.486 seconds.

Obviously, the same restrictions that I mentioned for the other parallel tools apply here: the use of multiple CPUs may be discouraged on a very busy server, but if you are storing the backups long-term on a separate server, you may want to have a look at this possibility. In any case, most parallel tools have a way to limit the number of threads created (for example with the --threads option in plzip).

Fast compression tools: lzop and lz4

I didn't want to finish my testing without having a look at some of the high-bandwidth compression tools, and I chose two: lzop and lz4. While I had to install lz4 r119 from EPEL, lzop v1.02rc1 is part of the base packages of Red Hat/CentOS.

They both provide what they promise: very fast compression (in some cases faster than a plain copy of the file, as they are not CPU-bound and have to write a smaller amount of data) in exchange for worse compression ratios (21-30%). For the example file, on my machine, I got better performance from lz4 than from lzop, offering similar ratios in less time (8.5 vs. 15.5 seconds). So if I had to choose, I would probably use lz4 over lzop in my particular case. Additionally, although it has not been tested here, lz4 claims to have better decompression speeds.

As a negative highlight, I would recommend against using lzop -9, as there are tools that can get you better compression ratios in half the time. lz4 also did not perform well with a higher compression level, so I recommend sticking to the defaults or lower compression levels for these tools (in fact, lz4 defaults to -1).

Conclusion

I didn't test other tools like compress (Lempel-Ziv), xz (lzma2) or QuickLZ, but I do not expect too many deviations from the patterns we have seen: the time needed grows as the compression ratio improves. If you want fast compression times, go for lz4. If you want a small file size, go for an implementation of lzma, like p7zip. The bzip2 and gzip formats are good options when compatibility is important (e.g. releasing a file), but when possible, use a parallel implementation to improve performance (plzip, pbzip2, pigz). We can even use a combination of tools for our backups: for example, export our tables in binary format using lz4 to get them off the MySQL server quickly and later, on a separate server, convert them to lzma for long-term storage.

I would also tell you to try the compression methods on your particular dataset and hardware, as you may get different compression ratios and timings, especially depending on the amount of memory available for filesystem caching, your CPU(s) and your read and write rates from secondary storage. What I have tried to provide here, however, is a starting point for you to draw your own conclusions.

Do you agree with me? Do you think I am wrong at some point? Did you miss something? Write a comment or send me a reply on Twitter or by email.

Check out Part II of this analysis for my research on decompression times.

How to install MySQL 5.6 on CentOS 7

CentOS 7 and MySQL 5.6

A bit of history

The latest version of Red Hat Enterprise Linux, one of the most popular and respected Linux distributions in the server market, was released in June 2014, followed by CentOS 7 and Oracle Linux releases in July of the same year.

There are very interesting changes for database administrators in these new releases, among which I would like to highlight the fact that the installer now chooses XFS as its filesystem by default, replacing ext4 as the preferred format for local data storage. Red Hat EL7 also includes Btrfs as a tech preview.

Regarding packages, the most impactful change is arguably the update of both the MySQL and PostgreSQL versions, both badly in need of an update, as the previous version of Red Hat, 6.5, still featured 5-year-old versions of both RDBMSs, currently at the end of their support life. The biggest surprise is that Red Hat has opted for MariaDB 5.5, and not Oracle's MySQL, as the default MySQL-flavoured vendor. This has the hilarious consequence that Oracle Linux actually distributes its competitor's product, MariaDB, in its repositories, with the aim of being 100% compatible. The difference is that, of course, Oracle offers its latest MySQL version in yum repositories, and as a consequence it is available for install on all Red Hat-compatible distributions.

Pre-requisites

In this tutorial we will show how to install MySQL 5.6 on CentOS 7, useful for those who prefer to deploy the latest MySQL GA release. 5.6 introduces a lot of improvements over MySQL 5.5 and, given that Red Hat EL7 has a support cycle of at least ten years, the bundled version may become very outdated in the future. The process we are about to show for CentOS 7 will be identical on RHEL 7 and, to some extent, on other yum-based distributions like the latest versions of Fedora and Amazon Linux.

Please note that the following tutorial assumes that no previous version of MySQL or MariaDB is already installed. You can use the command rpm -qa | grep -i mysql to check for MySQL packages that may have been previously installed, and you can delete them with the yum remove command.
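(A short sketch of that check; the mariadb-libs package is given only as an example of what commonly shows up on a fresh CentOS 7 install, so adjust the remove command to whatever the query actually returns:)

    rpm -qa | grep -i -e mysql -e mariadb
    sudo yum remove mariadb-libs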

Tutorial

The first step is to set up Oracle's MySQL repository. For that, we can go to the mysql.com website, click on "Downloads", then "Yum repository" and then "Red Hat Enterprise Linux 7". At the time of writing these lines, this version of the repository setup package is still in beta, but I had no problems installing it with several combinations of software and hardware. Select "Download" and you can choose to log in or create an Oracle account. We can also skip that step and just copy the link on "No thanks, just start my download". This will give us the address of the rpm that auto-configures the MySQL Community Server repository.

Now, we execute the following on the terminal:
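(For example, something along these lines; the exact rpm file name depends on the version you copied from the download page, so treat this one as a placeholder:)

    sudo yum localinstall mysql-community-release-el7-5.noarch.rpm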

We can check that the repository is effectively active by running:
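(One way to do this check, assuming the default naming of the MySQL yum repository:)

    yum repolist enabled | grep mysql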

We have done a one-time configuration that will allow us to easily install and keep our MySQL installation up to date.

The next step is actually installing the server packages. For that, we write:
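(The package name below is the one used by the MySQL yum repository for the server; add the -y flag only if you do not want to review the changes interactively:)

    sudo yum install mysql-community-server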

As you can see, the community server package defaults to the latest version of MySQL 5.6. During the installation process, only two interruptions may happen (aside from the sudo password), one for confirmation of changes, and another to import the Oracle release engineers’ key on your system, which should be fine if it matches the fingerprint a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5. Remember that for automatic processes, we can give the -y (assume yes to all) flag to yum, but I wanted to avoid that for explanation and security purposes.

We are done with the installation, now we only have to run it and test it. Remember that Red Hat Enterprise Linux 7 replaces the management of services with systemd, so the “correct” way of starting the mysql service is:
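(Assuming the service name mysqld, which is what the MySQL community packages register:)

    sudo systemctl start mysqld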

You can check that it started successfully by doing:
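(For example:)

    sudo systemctl status mysqld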

And now connect from localhost by doing:
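(With the 5.6 community packages the initial root account normally has an empty password, so a plain client connection should work; secure it right afterwards:)

    mysql -u root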

Remember also to activate the mysql auto-start on boot, as you will want in most cases (this has also changed from CentOS 6):
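(Again assuming the mysqld service name:)

    sudo systemctl enable mysqld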

You can check that it was enabled successfully with the previously shown ‘status’ command; it should be shown now as “enabled”.

As a good administrator, your next steps will be to properly configure user accounts and secure the database service, but that is out of the scope of this tutorial.

Thanks to installing via the repository, your packages can now be easily upgraded using yum.

For a more detailed documentation, you can also review the official docs. The Oracle engineers also have an interesting story about the testing process of these packages.

I hope this tutorial has been helpful.