
I am facing a serious performance issue when inserting, selecting, and updating rows in a MySQL table.

The table structure I am using is

CREATE TABLE `sessions` (
     `sessionid` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
     `expiry` datetime NOT NULL,
     `value` text NOT NULL,
     `data` text,
     PRIMARY KEY (`sessionid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Sessions';

The queries I am having trouble with are:

INSERT INTO sessions (SESSIONID, EXPIRY, DATA, VALUE) VALUES ('b8c10810c505ba170dd9403072b310ed', '2019-05-01 17:25:50', 'PFJlc3BvbnNlIHhtbG5zPSJ1cm46b2FzaXM6bmFtZXM', '7bKDofc/pyFSQhm7QE5jb6951Ahg6Sk8OCVZI7AcbUPb4jZpHdrCAKuCPupJO14DNY3jULxKppLadGlpsKBifiJavZ/');

UPDATE sessions SET EXPIRY = '2019-05-01 17:26:07' WHERE (SESSIONID = 'e99a0889437448091a06a43a44d0f170');

SELECT SESSIONID, EXPIRY, DATA, VALUE FROM sessions WHERE (SESSIONID = '507a752c48fc9cc3043a3dbe889c52eb');

I tried EXPLAINing the queries but was not able to infer much about how to optimize the table or the queries.

From the slow query report, the average time taken is 23.45 s for the SELECT, 15.93 s for the UPDATE and 22.31 s for the INSERT.

Any help in identifying the issue is much appreciated.

Delon
  • What is the setting of `long_query_time`? If it is the default of 10 (seconds), it is catching only a few of the queries. Still, these are unreasonably high. Was a backup occurring at the same time? An `ALTER TABLE`? Something else 'big'? – Rick James May 06 '19 at 17:02
  • long_query_time is 10 seconds. There is no backup configured and we have not done any DDL operations. – Delon May 07 '19 at 13:45
  • Perhaps hundreds of such queries are running simultaneously? See if you can catch that with `SHOW PROCESSLIST;` – Rick James May 07 '19 at 19:18

2 Answers


How many queries per second?

How big is the table?

How much RAM?

What is the value of innodb_buffer_pool_size?
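Both figures are easy to check from any MySQL client; a quick sketch (assuming the table lives in the current schema):

```sql
-- Buffer pool size, in bytes:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Approximate table size (data + index) and average row length:
SELECT table_name,
       data_length + index_length AS total_bytes,
       avg_row_length
FROM   information_schema.TABLES
WHERE  table_schema = DATABASE()
  AND  table_name   = 'sessions';
```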

UUIDs are terrible for performance. (Is that a SHA1?) This is because they are so random that the 'next' query (any of those you mentioned) is likely not to be in cache, hence necessitating a disk hit.

So, with a table that is much larger than the buffer_pool, you won't be able to sustain more than about 100 queries per second with a spinning drive. SSD would be faster.

More on the evils of UUIDs (SHA1 has the same unfortunate properties, but no solution like the one for uuids): http://mysql.rjweb.org/doc.php/uuid
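For reference, the "solution for uuids" mentioned there relies on the id containing a timestamp: on MySQL 8.0, `UUID_TO_BIN(..., 1)` rearranges a time-based UUID so the timestamp leads, giving consecutive inserts some locality in the PRIMARY KEY. A sketch (the UUID value here is illustrative only):

```sql
-- MySQL 8.0+: swap flag = 1 moves the time-high bits to the front,
-- so rows generated close together in time also sort close together.
SELECT HEX(UUID_TO_BIN('b8c10810-c505-1a17-9dd9-403072b310ed', 1));
-- A SHA1 (or any uniformly random id) has no such structure,
-- so no bit rearrangement can help it.
```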

One minor thing you can do is to shrink the table:

sessionid BINARY(20)

and use UNHEX() when inserting/updating/deleting and HEX() when selecting.
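A possible migration, sketched under the assumption that every existing `sessionid` is valid hex (using VARBINARY here avoids fixed-width BINARY's trailing-zero padding when a value is shorter than 20 bytes):

```sql
-- Hypothetical migration: add a packed column, backfill it,
-- then swap it in as the PRIMARY KEY.
ALTER TABLE sessions ADD COLUMN sessionid_bin VARBINARY(20);

UPDATE sessions SET sessionid_bin = UNHEX(sessionid);

ALTER TABLE sessions
    DROP PRIMARY KEY,
    DROP COLUMN sessionid,
    CHANGE sessionid_bin sessionid VARBINARY(20) NOT NULL,
    ADD PRIMARY KEY (sessionid);
```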

More

51KB avg row len --> The TEXT columns are big and stored off-page, hence multiple blocks are needed to work with a row.

0.8GB buffer_pool, but 20GB of data, and 'random' PRIMARY KEY --> The cache is virtually useless.

These mean that there will be multiple disk hits for each query, but probably under 10.

300ms (a fast time) --> about 30 disk hits on HDD (an SSD fits many more hits in that time; which do you have?).

So, I must guess that 20s for a query happened when there was a burst of activity that had the queries stumbling over each other, leading to lots of I/O contention.

What to do? Most of the data looks like hex. If that is true, you could cut the disk footprint in half (and cut back some on disk hits needed) by packing and using BINARY(..) or BLOB.

INSERT INTO sessions (SESSIONID, EXPIRY, DATA, VALUE)
    VALUES (UNHEX('b8c10810c505ba170dd9403072b310ed'),
            '2019-05-01 17:25:50',
            UNHEX('PFJlc3BvbnNlIHhtbG5zPSJ1cm46b2FzaXM6bmFtZXM'),
            UNHEX('7bKDofc/pyFSQhm7QE5jb6951Ahg6Sk8OCVZI7AcbUPb4jZpHdrCAKuCPupJO14DNY3jULxKppLadGlpsKBifiJavZ/'));

UPDATE sessions SET EXPIRY = '2019-05-01 17:26:07'
 WHERE SESSIONID = UNHEX('e99a0889437448091a06a43a44d0f170');

SELECT SESSIONID, EXPIRY, DATA, VALUE FROM sessions
 WHERE SESSIONID = UNHEX('507a752c48fc9cc3043a3dbe889c52eb');

and change the column definitions to

 `sessionid` VARBINARY(20) NOT NULL,
 `expiry` datetime NOT NULL,
 `value` BLOB NOT NULL,
 `data` BLOB,

And ROW_FORMAT=DYNAMIC might be optimal (but this is not critical).
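If you try it, a minimal sketch (the row format takes effect when the table is rebuilt):

```sql
ALTER TABLE sessions ROW_FORMAT=DYNAMIC;
-- DYNAMIC stores long TEXT/BLOB values fully off-page, leaving only a
-- 20-byte pointer in the row, whereas COMPACT keeps a 768-byte prefix
-- in-row -- bloating the clustered index when the values are large.
```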

Rick James
  • Maximum 20 - 40 queries per second, 20000 rows max, 10 GB of RAM, innodb_buffer_pool_size is 772800512, I am using UUID which is generated using a cryptographically strong pseudo random number generator. – Delon May 06 '19 at 09:28
  • InnoDB, version 10, row_format Compact, rows 391,avg_row_length 50995, data_length 19939328, max_data_length 0, index_length 0, data_free 76546048 – Delon May 07 '19 at 13:42
  • We are not using a SSD drive. It is during peek time we are consistently getting this issue. Your suggestion looks solid, I will give it a try. – Delon May 20 '19 at 12:37
  • @Delon - With the data much smaller than the buffer_pool_size, I don't see a RAM problem. Let's analyze your settings; see http://mysql.rjweb.org/doc.php/mysql_analysis#tuning – Rick James May 20 '19 at 14:50

Your queries look fine; the problem is likely with your server. It may not have enough memory to handle such requests, so increasing the database server's memory should give more consistent execution times for the same queries.

Vivek Chaudhari
  • also read this https://stackoverflow.com/questions/172925/how-do-databases-work-internally for more details – Vivek Chaudhari May 02 '19 at 11:27
  • I could see the response time is not consistently large, it some times response in 300milliseconds and sometimes it is 20-60seconds. – Delon May 02 '19 at 11:51
  • @Delon - Even with the worst case of caching and memory size, a single-row query (select/update/delete/insert) should not take more than a second. Something else must be going on?? – Rick James May 06 '19 at 17:00