A deadlock is a situation in which different transactions are unable to proceed because each holds a lock that the other needs. Because both transactions are waiting for a resource to become available, neither ever releases the locks it holds.
A deadlock can occur when transactions lock rows in multiple tables (through statements such as UPDATE or SELECT ... FOR UPDATE), but in the opposite order. A deadlock can also occur when such statements lock ranges of index records and gaps, with each transaction acquiring some locks but not others due to a timing issue. A deadlock example is as follows:
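The first step of the example is implied by the discussion that follows: client A creates a table containing one row and acquires a shared (S) lock on that row with a locking read. A plausible reconstruction (the table name t and column i are assumptions):

```sql
-- Client A: create a one-row table, then take an S lock on the row.
CREATE TABLE t (i INT) ENGINE = InnoDB;
INSERT INTO t (i) VALUES (1);
START TRANSACTION;
SELECT * FROM t WHERE i = 1 FOR SHARE;
```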
Next, client B begins a transaction and attempts to delete the row from the table:
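Assuming the same one-row table t on which client A holds an S lock, client B's step would look like:

```sql
-- Client B: DELETE requires an exclusive (X) lock on the row, which
-- conflicts with client A's S lock, so B's request queues and B blocks.
START TRANSACTION;
DELETE FROM t WHERE i = 1;
```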
The delete operation requires an X lock. The lock cannot be granted because it is incompatible with the S lock that client A holds, so the request goes on the queue of lock requests for the row, and client B blocks.
Finally, client A also attempts to delete the row from the table:
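Continuing the assumed example with table t:

```sql
-- Client A: requests an X lock on the same row while still holding its
-- S lock; with B's X-lock request already queued, the waits form a cycle.
DELETE FROM t WHERE i = 1;
```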
Deadlock occurs here because client A needs an X lock to delete the row. However, that lock request cannot be granted because client B already has a request for an X lock and is waiting for client A to release its S lock. Nor can the S lock held by A be upgraded to an X lock, because of the prior request by B for an X lock. As a result, InnoDB generates an error for one of the clients and releases its locks. The client returns this error:
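The message is InnoDB's standard deadlock error:

```
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
```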
At that point, the lock request for the other client can be granted, and it deletes the row from the table.
To reduce the possibility of deadlocks, use transactions rather than LOCK TABLES statements; keep transactions that insert or update data small enough that they do not stay open for long periods of time; when different transactions update multiple tables or large ranges of rows, use the same order of operations (such as SELECT ... FOR UPDATE) in each transaction; and create indexes on the columns used in SELECT ... FOR UPDATE and UPDATE ... WHERE statements. The possibility of deadlocks is not affected by the isolation level, because the isolation level changes the behavior of read operations, while deadlocks occur because of write operations. For more information about avoiding and recovering from deadlock situations, see “How to Minimize and Handle Deadlocks”.
When deadlock detection is enabled (the default) and a deadlock does occur, InnoDB detects the condition and rolls back one of the transactions (the victim). If deadlock detection is disabled using the innodb_deadlock_detect configuration option, InnoDB relies on the innodb_lock_wait_timeout setting to roll back transactions in case of a deadlock. Thus, even if your application logic is correct, you must still handle the case where a transaction must be retried. To view the last deadlock in an InnoDB user transaction, use the SHOW ENGINE INNODB STATUS command. If frequent deadlocks highlight a problem with transaction structure or application error handling, run with the innodb_print_all_deadlocks setting enabled to print information about all deadlocks to the mysqld error log. For more information about how deadlocks are automatically detected and handled, see “Deadlock Detection and Rollback”.
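For example, the two diagnostics mentioned above can be run as follows (setting the global variable requires sufficient privileges):

```sql
-- Inspect the LATEST DETECTED DEADLOCK section of the output.
SHOW ENGINE INNODB STATUS;

-- While debugging, log every deadlock to the error log, not just the latest.
SET GLOBAL innodb_print_all_deadlocks = ON;
```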
Deadlock Detection and Rollback
When deadlock detection is enabled (the default), InnoDB automatically detects transaction deadlocks and rolls back a transaction or transactions to break the deadlock. InnoDB tries to pick small transactions to roll back, where the size of a transaction is determined by the number of rows inserted, updated, or deleted.
InnoDB is aware of table locks if innodb_table_locks = 1 (the default) and autocommit = 0, and the MySQL layer above it knows about row-level locks. Otherwise, InnoDB cannot detect deadlocks where a table lock set by a MySQL LOCK TABLES statement, or a lock set by a storage engine other than InnoDB, is involved. Resolve these situations by setting the value of the innodb_lock_wait_timeout system variable.
When InnoDB performs a complete rollback of a transaction, all locks set by the transaction are released. However, if just a single SQL statement is rolled back as a result of an error, some of the locks set by the statement may be preserved. This happens because InnoDB stores row locks in a format such that it cannot know afterward which lock was set by which statement.
If a SELECT calls a stored function in a transaction, and a statement within the function fails, that statement rolls back. Furthermore, if ROLLBACK is executed after that, the entire transaction rolls back.
If the LATEST DETECTED DEADLOCK section of InnoDB Monitor output includes a message stating, “TOO DEEP OR LONG SEARCH IN THE LOCK TABLE WAITS-FOR GRAPH, WE WILL ROLL BACK FOLLOWING TRANSACTION,” this indicates that the number of transactions on the wait-for list has reached a limit of 200. A wait-for list that exceeds 200 transactions is treated as a deadlock, and the transaction attempting to check the wait-for list is rolled back. The same error may also occur if the locking thread must look at more than 1,000,000 locks owned by transactions on the wait-for list.
For techniques to organize database operations to avoid deadlocks, see “Deadlocks in InnoDB”.
Disabling Deadlock Detection
On high-concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. At times, it may be more efficient to disable deadlock detection and rely on the innodb_lock_wait_timeout setting for transaction rollback when a deadlock occurs. Deadlock detection can be disabled using the innodb_deadlock_detect configuration option.
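A sketch of the runtime settings involved (requires privileges to set global variables):

```sql
-- Disable deadlock detection; rely on lock wait timeouts instead.
SET GLOBAL innodb_deadlock_detect = OFF;
-- Transactions caught in a deadlock are rolled back after this many
-- seconds of waiting (50 is the default).
SET GLOBAL innodb_lock_wait_timeout = 50;
```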
How to Minimize and Handle Deadlocks
This section builds on the conceptual information about deadlocks in “Deadlock Detection and Rollback”. It explains how to organize database operations to minimize deadlocks, and the subsequent error handling required in applications.
Deadlocks are a classic problem in transactional databases, but they are not dangerous unless they are so frequent that you cannot run certain transactions at all. Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.
InnoDB uses automatic row-level locking. You can get deadlocks even in the case of transactions that just insert or delete a single row. That is because these operations are not really “atomic”; they automatically set locks on the (possibly several) index records of the row inserted or deleted.
You can cope with deadlocks and reduce the likelihood of their occurrence with the following techniques:
At any time, issue the SHOW ENGINE INNODB STATUS command to determine the cause of the most recent deadlock. That can help you to tune your application to avoid deadlocks.
If frequent deadlock warnings cause concern, collect more extensive debugging information by enabling the innodb_print_all_deadlocks configuration option. Information about each deadlock, not just the latest one, is recorded in the MySQL error log. Disable this option when you are finished debugging.
Always be prepared to re-issue a transaction if it fails due to deadlock. Deadlocks are not dangerous. Just try again.
Keep transactions small and short in duration to make them less prone to collision.
Commit transactions immediately after making a set of related changes to make them less prone to collision. In particular, do not leave an interactive mysql session open for a long time with an uncommitted transaction.
If you use locking reads (SELECT ... FOR UPDATE or SELECT ... FOR SHARE), try using a lower isolation level such as READ COMMITTED.
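For example (table t and the WHERE clause are assumptions):

```sql
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
-- A locking read; under READ COMMITTED, InnoDB sets fewer gap locks
-- for such statements than under REPEATABLE READ.
SELECT * FROM t WHERE i = 1 FOR UPDATE;
COMMIT;
```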
When modifying multiple tables within a transaction, or different sets of rows in the same table, do those operations in a consistent order each time. Then transactions form well-defined queues and do not deadlock. For example, organize database operations into functions within your application, or call stored routines, rather than coding multiple similar sequences of INSERT, UPDATE, and DELETE statements in different places.
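A sketch of the idea, using hypothetical tables: if every transaction in the application touches accounts before account_log, lock waits form a queue rather than a cycle.

```sql
START TRANSACTION;
-- Convention: accounts is always modified first...
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
-- ...and account_log second, in every transaction in the application.
INSERT INTO account_log (account_id, delta) VALUES (1, -10);
COMMIT;
```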
Add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks. Use EXPLAIN SELECT to determine which indexes the MySQL server regards as the most appropriate for your queries.
Use less locking. If you can afford to permit a SELECT to return data from an old snapshot, do not add the clause FOR UPDATE or FOR SHARE to it. Using the READ COMMITTED isolation level is good here, because each consistent read within the same transaction reads from its own fresh snapshot.
If nothing else helps, serialize your transactions with table-level locks. The correct way to use LOCK TABLES with transactional tables, such as InnoDB tables, is to begin a transaction with SET autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to not call UNLOCK TABLES until you commit the transaction explicitly. For example, if you need to write to table t1 and read from table t2, you can do this:
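Following that description, the session would look something like this (the work done between the locks depends on the application):

```sql
SET autocommit = 0;
LOCK TABLES t1 WRITE, t2 READ;
-- ... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
```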
Table-level locks prevent concurrent updates to the table, avoiding deadlocks at the cost of less responsiveness for a busy system.
Another way to serialize transactions is to create an auxiliary “semaphore” table that contains just a single row. Have each transaction update that row before accessing other tables. In that way, all transactions happen in a serial fashion. Note that the InnoDB instant deadlock detection algorithm also works in this case, because the serializing lock is a row-level lock. With MySQL table-level locks, the timeout method must be used to resolve deadlocks.
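A minimal sketch of the semaphore-table technique (table and column names are illustrative):

```sql
-- One-time setup: an auxiliary table with a single row.
CREATE TABLE semaphore (i INT) ENGINE = InnoDB;
INSERT INTO semaphore (i) VALUES (0);

-- Each transaction updates the semaphore row before touching other
-- tables, so the row-level lock serializes the transactions.
START TRANSACTION;
UPDATE semaphore SET i = i + 1;
-- ... updates to other tables ...
COMMIT;
```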