
I have an InnoDB table into which multiple connections might insert data, and a single MySQL event that runs every 10 seconds deletes some previously inserted records.

But I am getting deadlocks. How can I avoid them?

This code is responsible for inserting the records into the table row by row.

sql = "INSERT INTO ex_result (Result_ID, ParentResult_ID, StepNumber, Name, Type, DB_ID, Session_ID, Status, TotalTime, FunctionCall, FunctionResult, ToolTime, PluginName, Screenshot_FID, IsNegative, ContinueOnError, WantSnapshot, Message, ResultCode, Output, InputArguments) VALUES (@Result_ID, @ParentResult_ID, @StepNumber, @Name, @Type, @DB_ID, @Session_ID, @Status, @TotalTime, @FunctionCall, @FunctionResult, @ToolTime, @PluginName, @Screenshot_FID, @IsNegative, @ContinueOnError, @WantSnapshot, @Message, @ResultCode, @Output, @InputArguments)"

The code for the event is:

DELIMITER //

CREATE EVENT EVENT_RESULT_CLEANER
ON SCHEDULE
EVERY 10 second
COMMENT 'Clean all deleted results'
DO
BEGIN

DROP TEMPORARY TABLE IF EXISTS `Temp_Result_Purge`;

CREATE TEMPORARY TABLE `Temp_Result_Purge` (
`Result_ID` VARCHAR(63) NOT NULL,
PRIMARY KEY (`Result_ID`))
ENGINE = MEMORY;

INSERT INTO Temp_Result_Purge(Result_ID)
(
    SELECT t1.result_id
    FROM ex_result AS t1
    INNER JOIN ex_session as t2
    ON t1.session_id=t2.session_id
    WHERE NOT EXISTS(SELECT t3.result_id FROM ex_result as t3 WHERE t3.parentresult_id=t1.result_id)
    AND t2.DeletedOn IS NOT NULL
    LIMIT 2000
);

DELETE t1 FROM `ex_result` AS t1 INNER JOIN
`Temp_Result_Purge` AS t2 ON t1.Result_ID=t2.Result_ID;

DROP TEMPORARY TABLE `Temp_Result_Purge`;

END//

DELIMITER ;
  • I don't understand why you create a temporary table 'Temp_Result_Purge' and not directly DELETE t1 FROM ex_result AS t1 INNER JOIN ex_session as t2 ON t1.session_id=t2.session_id WHERE NOT EXISTS(SELECT t3.result_id FROM ex_result as t3 WHERE t3.parentresult_id=t1.result_id) AND t2.DeletedOn IS NOT NULL – Mailkov Feb 16 '15 at 10:04
  • Are you sure there is a deadlock? It is very strange how a single INSERT can be the reason for a deadlock. Is there a way the delete procedure could run multiple times simultaneously? That could be the reason for the deadlock. – i486 Feb 16 '15 at 10:07
  • Please provide SHOW CREATE TABLE ex_result. – Rick James Feb 17 '15 at 00:46
  • Right after getting a deadlock do SHOW ENGINE INNODB STATUS and provide us with the two statements that were caught in a deadlock. – Rick James Feb 17 '15 at 00:47
  • @RickJames: The CreateTable and InnoDBStatus for the same were added in the question [link](http://stackoverflow.com/questions/28472027/mysql-innodb-deadlock-while-inserting-bulk-data-in-a-tree-like-table) – Anshuman Chatterjee Feb 17 '15 at 04:22

3 Answers


At the very least, you need to start a transaction in your event code.

This is not done implicitly by the BEGIN ... END block; see http://dev.mysql.com/doc/refman/5.6/en/commit.html

Within all stored programs (stored procedures and functions, triggers, and events), the parser treats BEGIN [WORK] as the beginning of a BEGIN ... END block. Begin a transaction in this context with START TRANSACTION instead.
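
For example, the event could look something like this (the same statements as in the question, just with an explicit transaction around the purge; an untested sketch):

DELIMITER //

CREATE EVENT EVENT_RESULT_CLEANER
ON SCHEDULE EVERY 10 SECOND
COMMENT 'Clean all deleted results'
DO
BEGIN

DROP TEMPORARY TABLE IF EXISTS `Temp_Result_Purge`;

CREATE TEMPORARY TABLE `Temp_Result_Purge` (
`Result_ID` VARCHAR(63) NOT NULL,
PRIMARY KEY (`Result_ID`))
ENGINE = MEMORY;

-- Explicit transaction: inside a stored program, BEGIN only opens the block,
-- so START TRANSACTION is needed here.
START TRANSACTION;

INSERT INTO Temp_Result_Purge(Result_ID)
SELECT t1.result_id
FROM ex_result AS t1
INNER JOIN ex_session AS t2 ON t1.session_id=t2.session_id
WHERE NOT EXISTS(SELECT t3.result_id FROM ex_result AS t3 WHERE t3.parentresult_id=t1.result_id)
AND t2.DeletedOn IS NOT NULL
LIMIT 2000;

DELETE t1 FROM `ex_result` AS t1 INNER JOIN
`Temp_Result_Purge` AS t2 ON t1.Result_ID=t2.Result_ID;

COMMIT;

DROP TEMPORARY TABLE `Temp_Result_Purge`;

END//

DELIMITER ;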

Also, if you run this every 10 seconds, then think about changing your architecture. A relational database is not well suited for data with a short lifespan. An alternative may be a message queue.

Meier

First I have some nasty things to say, then I will get to a possible solution.

"Don't queue it, just do it." -- MySQL does not make a good queuing engine.

Do add BEGIN...COMMIT (as already mentioned). The BEGIN...COMMIT needs to wrap the other code, too.

Do add code to test for deadlocks. Then replay the BEGIN...COMMIT. You cannot avoid all deadlocks, so plan for them.
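
For example, something along these lines inside the event's BEGIN...END block could do the replay (a rough sketch; 1213 is InnoDB's deadlock error, and the purge statements from the question go where the comment is):

DECLARE deadlock INT DEFAULT 0;
DECLARE attempts INT DEFAULT 0;
DECLARE CONTINUE HANDLER FOR 1213 SET deadlock = 1;  -- ER_LOCK_DEADLOCK

REPEAT
    SET deadlock = 0;
    SET attempts = attempts + 1;

    START TRANSACTION;
    -- ... the INSERT INTO Temp_Result_Purge / DELETE statements from the question ...
    IF deadlock THEN
        ROLLBACK;   -- InnoDB normally rolls the transaction back itself on a deadlock
    ELSE
        COMMIT;
    END IF;
UNTIL deadlock = 0 OR attempts >= 3
END REPEAT;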

Drop the LIMIT to only, say, 10. Then put the purger into a continual loop rather than waking up every 10 seconds. (If you are about to say "that makes things worse", then read on; I'll give you a variant that may work better.)

Use a LEFT JOIN ... WHERE ... IS NULL instead of NOT EXISTS ( SELECT ... )

Don't recreate the table over and over; just TRUNCATE TABLE. Or, better yet, just DELETE directly without going through a tmp table. However, that leads to another question...
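
Putting the last two points together, the purge could be a single statement, something like this untested sketch (note that a multi-table DELETE cannot take a LIMIT, which is one more reason to chunk as described below):

DELETE t1
FROM ex_result AS t1
INNER JOIN ex_session AS t2 ON t1.session_id = t2.session_id
LEFT JOIN ex_result AS t3 ON t3.parentresult_id = t1.result_id
WHERE t3.result_id IS NULL
  AND t2.DeletedOn IS NOT NULL;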

How many rows does the query have to go through to find the LIMIT rows? Keep in mind that the SELECT is interfering with the INSERTs. If it usually has to scan a million rows to find 2000 to delete, then we need to figure out a way to make the rows easier to find. For that, we need more details about your app and the table sizes. Or ponder this...

One technique for politely scanning a million rows to find just a few is to walk through the table 1000 rows at a time, usually using the PRIMARY KEY. Note: That's 1000 rows of the table, not 1000 rows eligible for deletion. After each 1000, DELETE the ones you found (if any), then move on to the next 1000. When you get to the end of the table, start over. Details on how to do this chunking are found here: http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks
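
A rough sketch of that chunking, adapted to the table names in the question (the procedure name is made up; treat this as a starting point, not a drop-in replacement for the event):

DELIMITER //
CREATE PROCEDURE purge_results_in_chunks()
BEGIN
    DECLARE chunk_start VARCHAR(63) DEFAULT '';
    DECLARE chunk_end   VARCHAR(63);

    purge_loop: LOOP
        -- Find the end of the next chunk of 1000 *table* rows, by PRIMARY KEY.
        SET chunk_end = NULL;
        SELECT MAX(Result_ID) INTO chunk_end
        FROM (SELECT Result_ID FROM ex_result
              WHERE Result_ID > chunk_start
              ORDER BY Result_ID
              LIMIT 1000) AS chunk;

        IF chunk_end IS NULL THEN
            LEAVE purge_loop;          -- reached the end of the table; start over later
        END IF;

        -- Delete only the eligible rows inside this chunk.
        START TRANSACTION;
        DELETE t1
        FROM ex_result AS t1
        INNER JOIN ex_session AS t2 ON t1.session_id = t2.session_id
        LEFT JOIN ex_result AS t3 ON t3.parentresult_id = t1.result_id
        WHERE t1.Result_ID > chunk_start
          AND t1.Result_ID <= chunk_end
          AND t3.result_id IS NULL
          AND t2.DeletedOn IS NOT NULL;
        COMMIT;

        SET chunk_start = chunk_end;
    END LOOP;
END//
DELIMITER ;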

Rick James
  • Chunking should be in the range of 100 to 1000 rows at a time. Your batch of 5000 will be too long, especially after adding BEGIN...COMMIT. Still, chunking will make things run 10 times as fast, thereby greatly reducing the frequency of deadlocks. – Rick James Feb 22 '15 at 21:26

In my opinion, the best solution would be to use soft deletes.

Just set the status of the deleted object to deleted. If the number of records is really huge and you don't want to keep history data at all, you can schedule a database purge nightly or weekly.

One of the drawbacks is that you will have to rewrite the data retrieval logic by adding one new condition.
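
For example (a DeletedOn column on ex_result is assumed here, mirroring the one that already exists on ex_session):

-- Soft delete: mark the row instead of removing it
UPDATE ex_result
SET DeletedOn = NOW()
WHERE Result_ID = @Result_ID;

-- Every retrieval query then needs the extra condition
SELECT Result_ID, Name, Status
FROM ex_result
WHERE Session_ID = @Session_ID
  AND DeletedOn IS NULL;

-- Nightly/weekly physical purge of everything already soft-deleted
DELETE FROM ex_result WHERE DeletedOn IS NOT NULL;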

Artjoman
  • We have already implemented soft deletes in the table ex_session (see t2.DeletedOn IS NULL). The purging is done by the event, but an INSERT might collide with the purge event at any time. – Anshuman Chatterjee Feb 16 '15 at 12:35