Monday, April 28, 2008

Variable's Day Out #9: long_query_time

Properties:

Applicable To MySQL Server
Server Startup Option --long-query-time=<value>
Scope Both
Dynamic Yes
Possible Values (< 5.1.21): Integer; (>= 5.1.21): Numeric
Minimum Value (< 5.1.21): 1; (>= 5.1.21): 0
Default Value 10
Category Monitoring

Description:

If, as is generally the case, you want to know about the bottlenecks in your system, MySQL has a small answer in the "Slow Query Log". Any query that takes more time than specified by long_query_time is labeled a slow query and logged. The time is measured in real time, not CPU time, so whether a query counts as slow depends on the system you are running on and the load it is running under.

To many people's delight, as of MySQL 5.1.21 long_query_time can be specified with microsecond resolution and its minimum value has been lowered from 1 to 0. Setting it to 0 logs every query to the slow query log. Note, however, that the microsecond resolution introduced in 5.1.21 is only honored when logging to a file; the microsecond part is ignored when logging to a table (table logging itself was introduced in MySQL 5.1.6).
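
A minimal sketch on a 5.1.21+ server (the 0.5-second threshold is purely illustrative; slow_query_log and log_output are the related switches on 5.1):

    SET GLOBAL slow_query_log = 'ON';   -- 5.1.12+; older servers use --log-slow-queries at startup
    SET GLOBAL log_output = 'FILE';     -- microsecond resolution is honored only for FILE logging
    SET GLOBAL long_query_time = 0.5;   -- anything slower than 500 ms is logged as a slow query
    SHOW GLOBAL VARIABLES LIKE 'long_query_time';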

How to Use:

This depends entirely on your use case, SLAs and so on. I have two systems: one of them runs with long_query_time set to 5 seconds, whereas for the second one we are planning an upgrade so that we can use the microsecond resolution (actually we need up to millisecond resolution ;) ).

Related Variables:

  • log_slow_queries
  • log_queries_not_using_indexes
  • min_examined_row_limit
  • log_output
  • slow_query_log_file

Hope you enjoyed reading this.

Wednesday, April 16, 2008

Variable's Day Out #8: innodb_additional_mem_pool_size

Properties:

Applicable To InnoDB
Server Startup Option --innodb-additional-mem-pool-size=<value>
Scope Global
Dynamic No
Possible Values Integer: Range: 524288 (512K) - 4294967295 (4G)
Default Value 1048576 (1M)
Category Performance

Description:

As per the MySQL documentation, this pool is used to store data dictionary information and other internal data structures. If InnoDB runs out of memory in this pool, it starts allocating from the operating system.

Most of the additional memory pool usage goes to tables in the data dictionary and to connections. Normally, with an increasing number of tables you may need to allocate more memory to this pool, but as modern operating systems have good and fast memory-allocating functions, this variable does not hurt performance much.

Anyhow, if you need to see how much of the additional memory pool is being used, it's available via SHOW ENGINE INNODB STATUS.

    SHOW ENGINE INNODB STATUS \G
    ...
    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total memory allocated XXXXXXXXXXXXX; in additional pool allocated XXXXXXX
    Dictionary memory allocated XXXXX
    ...
"in additional pool allocated" shows the amount of memory currently allocated from the additional memory pool, i.e. its usage. Out of that usage, "Dictionary memory allocated" shows the amount of memory being used for the data dictionary.
Best value:

There is hardly any impact from a smaller value of this variable. Still, it is better to use a value around 20% higher than what SHOW ENGINE INNODB STATUS shows for "in additional pool allocated". The requirement is roughly proportional to (number of tables + number of concurrent connections).
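
A minimal my.cnf sketch, assuming the status output above reported roughly 6-7 MB in the additional pool (the 8M figure is purely illustrative; the variable is not dynamic, so it takes effect on restart):

    # my.cnf (requires a server restart)
    [mysqld]
    innodb_additional_mem_pool_size = 8M

Hope you enjoyed reading this post.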

Tuesday, April 15, 2008

Variable's Day Out #7: innodb_autoinc_lock_mode

Properties:

Applicable To InnoDB
Introduced In 5.1.22
Server Startup Option --innodb-autoinc-lock-mode=<value>
Scope Global
Dynamic No
Possible Values enum(0,1,2)
Interpretation:
0 - Traditional
1 - Consecutive
2 - Interleaved
Default Value 1 (consecutive)
Categories Scalability, Performance

Description:

This variable was introduced in 5.1.22 as a result of [Bug 16979] and comes in very handy when you are stuck with the auto_increment scalability issue, also mentioned in my previous post. So what do traditional, consecutive and interleaved mean?

Traditional is just that, "traditional": it takes InnoDB back to its pre-innodb_autoinc_lock_mode behavior, where a table-level AUTO-INC lock is obtained and held until the end of the statement. This ensures consecutive auto-increment values within a single statement. Remember, this lock is scoped to a statement, not a transaction, and hence is not equivalent to serializing transactions, a question someone raised to me recently.

Consecutive, the default lock mode, switches between the two behaviors. For inserts where the number of rows is not known in advance (bulk inserts), i.e. INSERT ... SELECT, REPLACE ... SELECT and LOAD DATA, it takes a table-level AUTO-INC lock. For inserts where the number of rows is known in advance (simple inserts), it uses a lightweight mutex during the allocation of auto-increment values; the mutex is, of course, used only if no other transaction holds the AUTO-INC lock. However, for inserts where the user provides auto-increment values for some rows (mixed-mode inserts), InnoDB tends to allocate more values than it needs and lose the extra ones.

Interleaved mode just ensures uniqueness for each generated auto-incremented value. This mode never takes an AUTO-INC lock and multiple statements can keep generating values simultaneously.

How to use?

My overall recommendation is not to change this variable and to keep it at its default. If you have mixed-mode insert statements that conflict with that, better look into them. Otherwise, the constraints on usage are as follows (a small sketch for checking and setting the mode follows the list).

  • Use interleaved only when your tables don't have auto-increment columns. Also, if you don't know or don't care whether they do, you have bigger issues to resolve. :)
  • Mixed-mode inserts can lead to lost values with consecutive mode.
  • It's not safe to use statement-based or mixed-format replication with interleaved mode.
  • Traditional mode has scalability issues, but is safe when used with mixed-mode inserts.
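
A minimal sketch for changing the mode (the value 2 is only illustrative; since the variable is not dynamic, it has to be set in the configuration file):

    # my.cnf (requires a server restart)
    [mysqld]
    innodb_autoinc_lock_mode = 2

After the restart, the mode can be verified from a client session:

    SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';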

Hope you enjoyed reading this.

Friday, April 11, 2008

Variable's Day Out #6: Innodb_buffer_pool_reads

Properties:

Applicable To InnoDB
Type Status Variable
Scope GLOBAL | SESSION
Dynamic NA
Possible Values Integer
Default Value NA
Category Performance

Description:

According to the official documentation, this variable is "The number of logical reads that InnoDB could not satisfy from the buffer pool and had to do a single-page read."

But we can go a little further and see what happens in and around this variable. As per the MySQL Internals documentation, a buffer is a memory area that contains copies of pages that ordinarily live in the main data file (read: the InnoDB tablespace), and the "buffer pool" is simply the set of all such buffers. When InnoDB needs to access a page, it looks first in the buffer pool; if the page isn't there, InnoDB reads it from disk into a new buffer, chucking out old buffers (basing its decision on an LRU algorithm) when it has to make space. This read from disk is when this variable gets incremented.

Best value:

Without doubt, 0 (zero). But that is only possible if you are not using InnoDB. :)

So, while using InnoDB, you want this variable to remain as small as possible compared to Innodb_buffer_pool_read_requests. If you are seeing a high value for it, you might want to increase innodb_buffer_pool_size and/or look at the size of your data and indexes at large.
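
A small sketch for eyeballing the miss ratio (the information_schema query assumes 5.1, where GLOBAL_STATUS is available; what counts as an acceptable ratio is workload-dependent):

    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

    -- on 5.1, the ratio can be computed directly:
    SELECT (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
            WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
           /
           (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
            WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests') AS buffer_pool_miss_ratio;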

What else:

I always used to wonder about the difference between Innodb_buffer_pool_reads and Innodb_pages_read, but never too seriously until I read Jay Janssen's query about it, and then thought of figuring it out myself. As I stated there too, "Innodb_pages_read are the pages read from the buffer pool and Innodb_buffer_pool_reads are the pages read for the buffer pool", so no confusion.
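
If you want to see the two counters side by side (assuming the WHERE form of SHOW is available on your version; otherwise two LIKE statements do the same job):

    SHOW GLOBAL STATUS WHERE Variable_name IN ('Innodb_pages_read', 'Innodb_buffer_pool_reads');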

Hope you enjoyed reading this.

Tuesday, April 8, 2008

Variable's Day Out #5: innodb_thread_concurrency

Properties:

Applicable To InnoDB
Server Startup Option --innodb_thread_concurrency=<value>
Scope Global
Dynamic Yes
Possible Values Integer, Range: 0 - 1000
Interpretation:
4.x: 0 sets it to 1; > 500 means infinite concurrency
< 5.0.19: 0 sets it to 1; >= 20 means infinite concurrency
>= 5.0.19: 0 means infinite concurrency
Default Value
< 5.0.8: 8 (finite)
5.0.8 - 5.0.18: 20 (infinite)
5.0.19 - 5.0.20: 0 (infinite)
>= 5.0.21: 8 (finite)
Category Performance

Description:

innodb_thread_concurrency is the variable that limits the number of operating system threads that can run concurrently inside the InnoDB engine. The rest of the threads have to wait in a FIFO queue for execution. Also, threads waiting for locks are not counted in the number of concurrently executing threads.

When a thread is about to enter InnoDB, InnoDB checks the count of already running threads. If it is greater than or equal to the limit set by this variable, the thread waits for some time and tries again. If the second try also fails, the thread enters a FIFO queue. The sleep time was originally 50 ms, was later changed to 10 ms, and is managed by innodb_thread_sleep_delay from 5.0.3 onwards. On entering InnoDB, the thread is given a number of "free tickets", as defined by innodb_concurrency_tickets.

Optimal Setting:

This depends on your environment: the value largely depends on the version of MySQL, the number of CPUs, the number of disks, and the kind of load you are running. As a general principle, you may like to set it to twice the number of CPUs plus disks, considering only disks for a disk-IO-intensive load and only CPUs for a CPU-intensive load. Also, if you have 4 or more CPUs, you may consider lowering this variable on un-patched MySQL versions, courtesy of [Bug 15815] (read the bug for the discussion). For machines with 1 or 2 CPUs, go for the kill and set it to infinite.
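
A minimal sketch, assuming a 5.0.19+ server with 4 CPUs and 4 disks (the numbers are purely illustrative; the variable is dynamic, so this can be tried at runtime):

    -- rule of thumb from above: 2 * (CPUs + disks) = 2 * (4 + 4) = 16
    SET GLOBAL innodb_thread_concurrency = 16;

    -- on a 1-2 CPU box with 5.0.19+, 0 means no limit ("infinite"):
    -- SET GLOBAL innodb_thread_concurrency = 0;

    SHOW GLOBAL VARIABLES LIKE 'innodb_thread_concurrency';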

Hyperthreading is worse than anything else here, and it is in many ways different from multi-CPU problems, because hyperthreading tries to make a single processor work like two while making the logical processors share a lot of pipelines and cache. You might consider turning hyperthreading off when running MySQL.

In my experience, a big enough innodb_buffer_pool_size complements this variable well on a multi-CPU machine, of course with the bug resolved. I will be publishing some numbers on this soon.

Hope this post was helpful to you.

Thursday, April 3, 2008

Variable's Day Out #4: innodb_buffer_pool_size

Properties:

Applicable To InnoDB
Server Startup Option --innodb_buffer_pool_size=<value>
Scope Global
Dynamic No
Possible Values Integer
Range: 1048576 (1MB) - 4294967295 (4GB)
(more for 64 bit machines)
Default Value 8388608 (8MB)
Category Performance

Description:

One of Peter's "what to tune?" parameters, the InnoDB buffer pool is a cache for both indexes and data and should be considered the most important option for InnoDB performance. If your server is hosting an InnoDB-heavy database, you can allocate up to 70-80% of the physical memory to this buffer; anything more than that might cause paging in the operating system. Obviously, if your server is not InnoDB-heavy and you are using a mix of engines, dividing memory can be a lot more complicated. This may also be taken as an argument against mixing engine types. :)

Optimal Setting:

As always, there is no magic number and the optimal value depends on your requirements. If your data size is too big to fit in memory, you can always choose to give this buffer pool as much as possible. While doing so, remember that InnoDB will also eat up some space for data structures related to the buffer pool. On the other hand, if your projected database size (data + indexes) is small enough to fit in memory, allocate around 10-15% more than your data size.
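
As a rough sketch, you could check the current InnoDB data + index footprint from information_schema (5.0 onwards) and size the pool from there:

    -- total InnoDB data + index size, in MB:
    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_mb
    FROM information_schema.TABLES
    WHERE engine = 'InnoDB';

and then set the pool in my.cnf (the 2048M figure is purely illustrative; the variable is not dynamic, so a restart is needed):

    [mysqld]
    innodb_buffer_pool_size = 2048M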

Hope this post was helpful to you. Keep posting your comments.