Sunday, July 22, 2007

OLAP Paradise - WITH ROLLUP

Though this is an age-old concept, after seeing how many people are unaware of it, I thought of writing a bit about it. ROLLUP is a GROUP BY modifier that adds extra rows to the output containing summary data. In each super-aggregate row, the column being rolled up is filled with a NULL value.

Let's see a small example. The values are unrealistically small for the kind of data presented, but that keeps it readable.

mysql> CREATE TABLE `rollup_1` (
  `url` varchar(255) DEFAULT NULL,
  `year` int(11) DEFAULT NULL,
  `country` char(2) DEFAULT NULL,
  `visit` int(11) DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

mysql> SELECT * FROM rollup_1;
+----------------------+------+---------+-------+
| url                  | year | country | visit |
+----------------------+------+---------+-------+
| http://www.yahoo.com | 2005 | US      | 12000 |
| http://www.yahoo.com | 2005 | IN      | 10000 |
| http://www.yahoo.com | 2006 | US      | 16000 |
| http://www.yahoo.com | 2006 | IN      | 10700 |
| http://www.yahoo.com | 2007 | US      | 21000 |
| http://www.yahoo.com | 2007 | IN      | 13200 |
+----------------------+------+---------+-------+
6 rows in set (0.00 sec)

mysql> SELECT IFNULL(url, 'TOTAL') AS url, SUM(visit) AS visit FROM rollup_1 GROUP BY url WITH ROLLUP;
+----------------------+-------+
| url                  | visit |
+----------------------+-------+
| http://www.yahoo.com | 82900 |
| TOTAL                | 82900 |
+----------------------+-------+
2 rows in set, 1 warning (0.00 sec)

Complete analysis:
mysql> SELECT IFNULL(url, 'ALL_URLS') AS url,
    -> IFNULL(year, 'ALL_YEARS') AS year,
    -> IFNULL(country, 'ALL_COUNTRIES') AS country,
    -> SUM(visit) FROM rollup_1
    -> GROUP BY url, year, country WITH ROLLUP;
+----------------------+-----------+---------------+------------+
| url                  | year      | country       | SUM(visit) |
+----------------------+-----------+---------------+------------+
| http://www.yahoo.com | 2005      | IN            |      10000 |
| http://www.yahoo.com | 2005      | US            |      12000 |
| http://www.yahoo.com | 2005      | ALL_COUNTRIES |      22000 |
| http://www.yahoo.com | 2006      | IN            |      10700 |
| http://www.yahoo.com | 2006      | US            |      16000 |
| http://www.yahoo.com | 2006      | ALL_COUNTRIES |      26700 |
| http://www.yahoo.com | 2007      | IN            |      13200 |
| http://www.yahoo.com | 2007      | US            |      21000 |
| http://www.yahoo.com | 2007      | ALL_COUNTRIES |      34200 |
| http://www.yahoo.com | ALL_YEARS | ALL_COUNTRIES |      82900 |
| ALL_URLS             | ALL_YEARS | ALL_COUNTRIES |      82900 |
+----------------------+-----------+---------------+------------+
11 rows in set, 3 warnings (0.00 sec)
There is a small gotcha while using ROLLUP. As you can see, my table definition does not put a NOT NULL constraint on any of the fields. You might have already guessed what I'm pointing at; let's see this example.
mysql> INSERT INTO rollup_1 values ('http://www.yahoo.com', 2007, null, 12000);
Query OK, 1 row affected (0.00 sec)

mysql> SELECT IFNULL(url, 'ALL_URLS') AS url,
    -> IFNULL(year, 'ALL_YEARS') AS year,
    -> IFNULL(country, 'ALL_COUNTRIES') AS country,
    -> SUM(visit) FROM rollup_1
    -> GROUP BY url, year, country WITH ROLLUP;
+----------------------+-----------+---------------+------------+
| url                  | year      | country       | SUM(visit) |
+----------------------+-----------+---------------+------------+
| http://www.yahoo.com | 2005      | IN            |      10000 |
| http://www.yahoo.com | 2005      | US            |      12000 |
| http://www.yahoo.com | 2005      | ALL_COUNTRIES |      22000 |
| http://www.yahoo.com | 2006      | IN            |      10700 |
| http://www.yahoo.com | 2006      | US            |      16000 |
| http://www.yahoo.com | 2006      | ALL_COUNTRIES |      26700 |
| http://www.yahoo.com | 2007      | ALL_COUNTRIES |      12000 |
| http://www.yahoo.com | 2007      | IN            |      13200 |
| http://www.yahoo.com | 2007      | US            |      21000 |
| http://www.yahoo.com | 2007      | ALL_COUNTRIES |      46200 |
| http://www.yahoo.com | ALL_YEARS | ALL_COUNTRIES |      94900 |
| ALL_URLS             | ALL_YEARS | ALL_COUNTRIES |      94900 |
+----------------------+-----------+---------------+------------+
12 rows in set, 3 warnings (0.00 sec)
As you can see in the output, there are two rows claiming to provide data for url='http://www.yahoo.com', year=2007 and country='ALL_COUNTRIES'; the question is, which one is which? A few plausible answers are:
  1. The one that appears later in the result set.
  2. The one with the higher value in the SUM(visit) column. This is arguable in general, as there might be negative values; for our example we can say that's impossible.
The best answer, of course, is to avoid the ambiguity altogether. When using ROLLUP with data like this, make sure the grouped fields are marked NOT NULL and specify another DEFAULT such as 'not set' instead.
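A minimal sketch of such a definition, mirroring rollup_1 above (the rollup_2 name and the placeholder defaults are my own choices):

CREATE TABLE `rollup_2` (
  `url` varchar(255) NOT NULL DEFAULT 'not set',
  `year` int(11) NOT NULL DEFAULT 0,
  `country` char(2) NOT NULL DEFAULT '--',
  `visit` int(11) NOT NULL DEFAULT 0
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- Now a NULL in the ROLLUP output can only mean a super-aggregate
-- row, never a missing value in the data.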

While using ROLLUP, there are a few more points you may want to consider.

  • ROLLUP and ORDER BY are mutually exclusive; they cannot be used in the same statement.
  • If you are using a LIMIT clause along with ROLLUP, remember that LIMIT is applied after ROLLUP, so it may include the super-aggregate rows (see the sketch after the example below).
  • The MySQL documentation mentions that "Because the NULL values in the super-aggregate rows are placed into the result set at such a late stage in query processing, you cannot test them as NULL values within the query itself." Yet, as you can see in the queries above, we are in fact testing this value and replacing it with an ALL_* string. As long as your data itself does not contain NULL values, this works wonders. Look at the output below; I have removed the row inserted earlier.
mysql> SELECT IFNULL(url, 'ALL_URLS') AS url,
    -> IFNULL(year, 'ALL_YEARS') AS year,
    -> IFNULL(country, 'ALL_COUNTRIES') AS country,
    -> SUM(visit) FROM rollup_1
    -> GROUP BY url, year, country WITH ROLLUP HAVING country is null;
+----------------------+-----------+---------------+------------+
| url                  | year      | country       | SUM(visit) |
+----------------------+-----------+---------------+------------+
| http://www.yahoo.com | 2005      | ALL_COUNTRIES |      22000 |
| http://www.yahoo.com | 2006      | ALL_COUNTRIES |      26700 |
| http://www.yahoo.com | 2007      | ALL_COUNTRIES |      34200 |
| http://www.yahoo.com | ALL_YEARS | ALL_COUNTRIES |      82900 |
| ALL_URLS             | ALL_YEARS | ALL_COUNTRIES |      82900 |
+----------------------+-----------+---------------+------------+
5 rows in set, 4 warnings (0.00 sec)
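And here is a quick sketch of the LIMIT point from above, against the same six-row rollup_1 table (the LIMIT value is arbitrary):

SELECT year, country, SUM(visit) AS visit
FROM rollup_1
GROUP BY year, country WITH ROLLUP
LIMIT 4;
-- Returns the two 2005 rows, their super-aggregate, and the first
-- 2006 row; the grand total never makes it into the result.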
Hope this blog was helpful to you. Keep posting your comments.

Thursday, July 19, 2007

InnoDB Row Counting using Indexes

It is often mentioned that InnoDB is slower than MyISAM at answering COUNT(*) queries. But as Peter points out in his blog, this holds only for COUNT(*) queries without a WHERE clause. Quoting Peter's blog: "If you have query like SELECT COUNT(*) FROM IMAGE WHERE USER_ID=5 this query will be executed same way both for MyISAM and Innodb tables by performing index range scan. This can be faster or slower both for MyISAM and Innodb depending on various conditions." Let's see what EXPLAIN has in store for us.

mysql> CREATE TABLE `test_index` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `x` int(11) DEFAULT NULL,
  `y` int(11) DEFAULT NULL,
  `z` varchar(255) NOT NULL DEFAULT 'testing',
  PRIMARY KEY (`id`),
  KEY `x` (`x`,`y`)
) ENGINE=InnoDB;

mysql> EXPLAIN SELECT COUNT(*) FROM test_index \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: test_index
         type: index
possible_keys: NULL
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 4875772
        Extra: Using index
1 row in set (0.01 sec)

mysql> SELECT COUNT(*) FROM test_index;
+----------+
| COUNT(*) |
+----------+
|  4915200 |
+----------+
1 row in set (2.61 sec)

The EXPLAIN output states that the counting will be done on the PRIMARY index ("Using index"). The best part is that since it uses the PRIMARY index, which is NOT NULL, MySQL will actually count the values from the index itself. So, contrary to common belief, something like COUNT(1) will not work any faster in this case.
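Peter's WHERE-clause case is easy to check against the same table. A hedged sketch (the constant 100 is arbitrary, and the plan shown is what I would expect, not captured output):

-- COUNT(*) with a WHERE clause on an indexed column is executed the
-- same way on MyISAM and InnoDB: as an index range scan on `x`.
EXPLAIN SELECT COUNT(*) FROM test_index WHERE x < 100;
-- Expected: type: range, key: x, Extra: Using where; Using index

Here is an interesting case from a bug.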

mysql> CREATE TABLE `test` (
  `id` int(11) NOT NULL,
  `int_k` int(11) NOT NULL,
  `data1` varchar(255) NOT NULL,
  `data2` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `int_k` (`int_k`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> CREATE PROCEDURE populate()
begin
  declare i int;
  set i = 0;
  start transaction;
  while i < 300000 do
    insert into test (id, int_k, data1, data2)
      values (i, i, repeat("-", 250), repeat("-", 250));
    set i = i + 1;
    -- START TRANSACTION implicitly commits the previous batch,
    -- so rows are committed in chunks of 1000
    if i % 1000 = 0 then
      start transaction;
    end if;
  end while;
  commit;
end
1 row in set (0.00 sec)

mysql> call populate;
Query OK, 0 rows affected (1 min 0.65 sec)

mysql> SELECT COUNT(*) FROM test;
+----------+
| COUNT(*) |
+----------+
|   300000 |
+----------+
1 row in set (1.08 sec)

mysql> SELECT COUNT(*) FROM test use index (int_k);
+----------+
| COUNT(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.12 sec)

Using a secondary index is faster. But why? Generally speaking, the PRIMARY index scan should be faster, because the primary key is usually in order on disk and can be read with sequential I/O at roughly 15 times the speed of a typically fragmented secondary index. This is actually a special case: the secondary index here was inserted in perfect order, which is very rare. Also, as Heikki points out in the bug:

"Since the minimum record size of InnoDB is about 20 bytes, and the fill-factor of a secondary index is typically 70 %, we can calculate that if the row length is > 15 * 1.5 * 20 = 450 bytes, then scanning the secondary index would probably be a better option." Till this feature is implemented and we have a much better optimized count(*) for InnoDB, please use secondary index explicitly for counting rows if you satisfy any of above conditions. Hope this blog was helpful to you. Keep posting your comments.


Monday, July 16, 2007

Implicit Commits and Transactions

We tend to think that all the statements inside a transaction can be rolled back. Right? No! If the transaction happens to contain statements that cause an implicit commit, then no: these statements commit the transaction mid-air. And if you think such statements are only DDL, beware. There are many statements in MySQL that commit implicitly; LOAD DATA INFILE, for example, was an implicit-commit statement for all storage engines until 5.1.12, and if you are using NDB it still is.

Say, we have statements like this:

1. START TRANSACTION
2. SOME INSERT STATEMENT
3. ONE MORE INSERT STATEMENT
4. ONE IMPLICIT COMMIT STATEMENT
5. ONE MORE INSERT STATEMENT
6. SOME MORE STATEMENTS
7. COMMIT
The implicit commit (statement 4) commits the statements above it and ends the transaction, as already mentioned. So what happens to statements 5 and 6? They run as normal statements outside any transaction, defeating the whole purpose of the transaction. As the MySQL documentation points out, this behavior occurs because transactions cannot be nested. The same link also provides the list of all the implicit-commit statements.
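To make this concrete, here is a minimal sketch (table names are hypothetical) using TRUNCATE TABLE, one of the implicit-commit statements:

CREATE TABLE t  (i INT) ENGINE=InnoDB;
CREATE TABLE t2 (i INT) ENGINE=InnoDB;

START TRANSACTION;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
TRUNCATE TABLE t2;          -- implicit commit: rows 1 and 2 are now permanent
INSERT INTO t VALUES (3);   -- runs outside the transaction (autocommit)
ROLLBACK;                   -- too late; nothing is rolled back

SELECT i FROM t;            -- returns 1, 2 and 3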

There is an open feature request (which I saw lately) asking for this to change, as most users take transactions for granted.

Hope this blog was helpful to you. Keep posting your comments.

Saturday, July 14, 2007

AUTO_INCREMENT in InnoDB

It's very easy to have an AUTO_INCREMENT column in a table and let it maintain your primary keys. Many people think so, and also assume MySQL will do the best possible job for them. With InnoDB, the story is a little different from what you might expect. The basic assumption is that since InnoDB does row-level locking, combining it with AUTO_INCREMENT should ward off all worries. But that is just the beginning.

Let me clarify how AUTO_INCREMENT works. InnoDB maintains the auto-increment value in memory: a good approach. When an insert statement requests an auto-increment value, InnoDB takes the stored value, increments it by one and returns the result. The value in memory is not incremented further at this time, and that is not unreasonable, because who knows how many rows the insert statement has.

But this means InnoDB cannot hand out auto-increment values to other threads until the statement finishes, so we lose concurrent inserts even in InnoDB when an AUTO_INCREMENT column is involved. This is enforced by a special AUTO-INC lock in InnoDB, which is an exclusive table-level lock. Let's see how this can affect your application, along with some workarounds.

Application may suffer

If you have concurrent inserts into an InnoDB table, start worrying. Single-row inserts won't affect performance noticeably. But if you are doing bulk inserts that take long enough, one of your insert threads may die with a lock wait timeout. This can also cause problems for other components of your application (even read-only ones), as the waiting insert statements hold their connections open and you may hit max_connections.
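A sketch of what this looks like from two sessions; both tables are hypothetical, and big_table is assumed to have an AUTO_INCREMENT primary key:

-- Session 1: a long bulk insert holds the exclusive AUTO-INC lock
-- for the duration of the whole statement.
INSERT INTO big_table (data) SELECT data FROM staging_table;

-- Session 2, meanwhile: even a single-row insert has to wait for
-- that lock, and may fail with
-- ERROR 1205: Lock wait timeout exceeded.
INSERT INTO big_table (data) VALUES ('x');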

Workarounds

  1. As you might think, moving to MyISAM won't help: MyISAM takes a table lock while writing by default.
  2. Do some magic and make sure your inserts do not execute concurrently. This is hardly possible for web applications, where one has no control over the incoming inserts; then again, web applications rarely do bulk inserts. It is manageable where a cron job does the bulk inserts, say from a log file.
  3. Get rid of AUTO_INCREMENT. If your application genuinely needs to bulk-insert from multiple sources at the same time and the above workaround doesn't apply, derive your own primary key. Something like an MD5 checksum over the data combined with the length of the data may be a good option in many cases, but do not rely on it blindly (a sketch follows this list). Also remember that a multi-column primary key is not always a good option.
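A minimal sketch of the digest-based key from point 3; the table and column names are hypothetical:

CREATE TABLE events (
  id CHAR(32) NOT NULL,            -- MD5 hex digest as the key
  payload TEXT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

SET @payload = 'some event data';
INSERT INTO events (id, payload)
VALUES (MD5(CONCAT(@payload, LENGTH(@payload))), @payload);
-- As noted above, this is not collision-proof: identical payloads
-- (or an unlucky collision) will fail on the duplicate key.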

Hope this blog was helpful to you. Keep posting your comments.


Thursday, July 12, 2007

InnoDB Tablespace Size

Unlike MyISAM, where the data for each table is stored in its own files, InnoDB stores data in a tablespace. By default there is one single tablespace, and the data of all databases is stored in one file. This file holds the data dictionary, tables and indexes. The global parameter innodb_data_file_path defines this tablespace file; a value like ibdata1:256M:autoextend means that at the beginning a file of 256 MB is created, and whenever the data outgrows it, the file is auto-extended. The innodb_autoextend_increment variable defines, in MB, by how much the file grows on each extension.
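A quick sketch of how to inspect and tune these settings; innodb_data_file_path itself can only be set in the server configuration, but the autoextend step is a dynamic global:

SHOW VARIABLES LIKE 'innodb_data_file_path';
SHOW VARIABLES LIKE 'innodb_autoextend_increment';

-- Raise the autoextend step at runtime (value is in MB):
SET GLOBAL innodb_autoextend_increment = 16;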

Let's see how well can we play around:

  1. Inserts: Suppose you have a lot of inserts and InnoDB is extending the file too frequently. It makes sense to increase innodb_autoextend_increment. Say we raise it to 16 MB; the number of autoextend operations comes down by a factor of 2, and performance improves accordingly. But beware of taking this too easily and raising the value too much: there is a big trap, and we will come to it shortly.
  2. Deletes: Here is the trap. You have a 10 GB tablespace (after many autoextends), you delete some 5 GB of data (data + indexes) and assume the tablespace is now 5 GB. Wrong. InnoDB has no notion of giving space back to the file system, though it will reuse the freed space for further inserts. So deleting data cannot directly be used to free disk space; if you have data you can get rid of, get rid of it quickly, before the next autoextend happens. One thing that can be done to reclaim space within the tablespace is to run OPTIMIZE TABLE frequently enough on tables with a high volume of inserts and deletes. But remember that MySQL locks the table while OPTIMIZE TABLE is running (see the sketch after this list). Another gotcha, right? OPTIMIZE TABLE does several other useful things for which it makes sense to run it, though not that frequently; I will be posting a blog on it soon.
  3. Separate files per table: InnoDB provides the option of storing the data (data + indexes) of each table in its own file, via the global variable innodb_file_per_table. A shared tablespace is still created for the data dictionary and the like, but this approach makes sense: having data in small chunks (separate files) makes it easier to manage and may improve performance in general.
  4. Fixed tablespace size: One way to work around the tablespace size problem is to fix the tablespace size (remove autoextend) at an extrapolated value. When you hit the limit, you know it is time to clean up. This is not viable for every application, as extrapolation is not always possible, and it increases the complexity of the application, which then needs to handle all the resulting error conditions without losing data.
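The sketch promised in point 2; `mytable` is a placeholder name:

-- The Comment column of SHOW TABLE STATUS reports the free space
-- inside the shared tablespace:
SHOW TABLE STATUS LIKE 'mytable' \G
-- Comment: InnoDB free: ... kB

-- Defragment one table; this locks the table while it runs, and with
-- a shared tablespace it frees space for reuse rather than shrinking
-- the ibdata file.
OPTIMIZE TABLE mytable;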

So, where does this end? You need to figure out what your data is, how critical it is, and what you want to do with it. Then take some of the following steps.

  1. Move to MyISAM: Move all tables (or even databases) whose data is not critical enough to need transactions et al. over to MyISAM. For the part of the problem we can't solve completely, we simply destroy the problem.
  2. Separate tablespaces: It's a lot easier to maintain 10 small problems than a single big one.
  3. Delete data / OPTIMIZE TABLE: Figure out how soon you can get rid of data. You don't actually need to delete data outright: transfer it to a MyISAM table, compress the file, archive it somewhere else, and then delete it from the main table (a sketch follows this list). Likewise, there are many ways to do it. Run OPTIMIZE TABLE frequently enough that it doesn't disturb your reads and writes too much and doesn't take too long to run.
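The archive-then-delete sketch from point 3; table names and the cutoff column are hypothetical:

CREATE TABLE logs_archive ENGINE=MyISAM
  SELECT * FROM logs WHERE created < '2007-01-01';

DELETE FROM logs WHERE created < '2007-01-01';
OPTIMIZE TABLE logs;   -- reclaim the freed space for reuse
-- The resulting .MYD/.MYI files can then be compressed (myisampack,
-- or plain gzip after FLUSH TABLES) and moved off the server.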

Hope this blog was helpful to you. Keep posting your comments.

Monday, July 9, 2007

Table Status

Checking the state of the tables of your MySQL setup in production is important for fine-tuning server parameters. MySQL's SHOW TABLE STATUS command comes in very handy for determining table state in a running environment. Here, I explain the output of the command.

mysql> SHOW TABLE STATUS LIKE 'mytable' \G
*************************** 1. row ***************************
           Name: mytable
         Engine: InnoDB
        Version: 9
     Row_format: Dynamic
           Rows: 2465222
 Avg_row_length: 370
    Data_length: 913326080
Max_data_length: NULL
   Index_length: 146817024
      Data_free: 0
 Auto_increment: 18357110
    Create_time: 2007-05-11 22:27:56
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 869376 kB
1 row in set (0.34 sec)

Leaving aside the obvious fields, let's look only at the more relevant and interesting ones.

Row_format: This describes the length behavior of the rows in this table; the value is one of Dynamic, Fixed or Compressed. Dynamic rows have variable length due to fields of types like BLOB, VARCHAR etc. Fixed rows are made up of fields that do not vary in length. Compressed rows exist for compressed tables, such as those in compressed MyISAM.

Rows: The number of rows in the table. This number is always accurate for non-transactional engines like MyISAM, but only an approximation for transactional engines like InnoDB. Ever wondered why SELECT COUNT(*) is faster on MyISAM than on InnoDB? This is why.

Avg_row_length: The name says it all. It's in bytes.

Data_length: The total data (in bytes) in the table. For InnoDB tables, which by default share a single tablespace, this can be a deciding factor for splitting tables into their own tablespaces.

Max_data_length: This is not applicable to every engine. It describes the maximum amount of data the table can hold; MyISAM tables in particular have this restriction.

Index_length: This field should be of utmost interest. It gives the total space (in bytes) taken up by the indexes of the table. The aggregate of this field over all tables under the same engine (or engines that share a buffer for index data) is a good basis for sizing those buffers. For InnoDB tables, it can also play a role in splitting tables into their own tablespaces.
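A sketch of that aggregation using information_schema (available from MySQL 5.0); the totals are only a starting point for sizing key_buffer_size (MyISAM) or the InnoDB buffer pool:

SELECT engine,
       SUM(data_length)  AS total_data_bytes,
       SUM(index_length) AS total_index_bytes
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql')
GROUP BY engine;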

This much explanation is enough for the moment. Take care not to go by the instinct of a single snapshot of this data; rather, collect it periodically so you can actually see the changes and make decisions accordingly.

Hope this blog was helpful to you. Keep posting your comments.

Sunday, July 8, 2007

Introduction

I would appreciate your comments, and I'd love to work on any MySQL-related problems you are facing; that will also keep me motivated to post here. My current project is MyEye, which will be available soon. It is a tool that keeps a vigilant eye on your MySQL installation, provides valuable feedback, and helps with monitoring too.