mysql – Getting “Lock wait timeout exceeded; try restarting transaction” even though I’m not using a transaction

The Question :

289 people think this question is useful

I’m running the following MySQL UPDATE statement:

mysql> update customer set account_import_id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

I’m not using a transaction, so why would I be getting this error? I even tried restarting my MySQL server and it didn’t help.

The table has 406,733 rows.

The Answer 1

223 people think this answer is useful

You are using a transaction; autocommit does not disable transactions, it just makes them automatically commit at the end of the statement.

What is happening is, some other thread is holding a record lock on some record (you’re updating every record in the table!) for too long, and your thread is being timed out.

You can see more details of the event by issuing a

SHOW ENGINE INNODB STATUS\G

after the event (in an SQL client). Ideally, do this on a quiet test machine.
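The situation described above can be reproduced with two sessions. A minimal sketch (the `id` column is an assumption; the table and statement match the question):

```sql
-- Session 1: open a transaction and hold a row lock without committing
START TRANSACTION;
UPDATE customer SET account_import_id = 1 WHERE id = 1;  -- row stays locked

-- Session 2: autocommit is on, but the statement still runs inside its own
-- transaction and must wait for session 1's row lock on that row
UPDATE customer SET account_import_id = 1;
-- After innodb_lock_wait_timeout seconds (default 50):
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
```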

The Answer 2

362 people think this answer is useful

HOW TO FORCE UNLOCK for locked tables in MySQL:

Breaking locks like this may mean that atomicity is not enforced for the SQL statements that caused the lock.

This is hackish, and the proper solution is to fix your application that caused the locks. However, when dollars are on the line, a swift kick will get things moving again.

1) Enter MySQL

mysql -u your_user -p

2) Let’s see the list of locked tables

mysql> show open tables where in_use>0;

3) Let’s see the list of the current processes, one of them is locking your table(s)

mysql> show processlist;

4) Kill one of these processes

mysql> kill <put_process_id_here>;
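Instead of guessing from the process list, you can ask InnoDB directly which transaction holds the lock. A sketch using the InnoDB transaction tables (available in MySQL 5.7+; the `sys` view name is for 5.7/8.0, and in 8.0 the lock tables moved to `performance_schema`):

```sql
-- Transactions currently open in InnoDB, with their connection id and query
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX;

-- Who is waiting on whom (MySQL 5.7+; in 8.0 the underlying lock data
-- comes from performance_schema.data_locks)
SELECT * FROM sys.innodb_lock_waits;
```

The `trx_mysql_thread_id` column is the id to pass to `KILL`.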

The Answer 3

108 people think this answer is useful
mysql> set innodb_lock_wait_timeout=100;

Query OK, 0 rows affected (0.02 sec)

mysql> show variables like 'innodb_lock_wait_timeout';
| Variable_name            | Value |
| innodb_lock_wait_timeout | 100   |

Now trigger the lock again. You now have 100 seconds to issue a SHOW ENGINE INNODB STATUS\G against the database and see which other transaction is locking yours.
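Note that a bare SET only changes the current session. To change the timeout for new connections as well, set the global scope too (requires sufficient privileges):

```sql
SET SESSION innodb_lock_wait_timeout = 100;  -- this connection only
SET GLOBAL  innodb_lock_wait_timeout = 100;  -- new connections from now on
```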

The Answer 4

76 people think this answer is useful

Check whether your database is properly tuned, especially the transaction isolation level. Increasing the innodb_lock_wait_timeout variable is not a good idea.

Check your database transaction isolation level in the mysql cli:

mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation, @@session.tx_isolation;
| @@GLOBAL.tx_isolation | @@tx_isolation  | @@session.tx_isolation |
1 row in set (0.00 sec)

You could get improvements by changing the isolation level; use the Oracle-like READ COMMITTED instead of REPEATABLE READ (the InnoDB default):

mysql> SET tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)


Also, try to use SELECT ... FOR UPDATE only if necessary.
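The reason for that last advice: SELECT ... FOR UPDATE takes row locks that block concurrent writers until the transaction ends. A sketch of the pattern to avoid unless the locks are really needed (the `status` column is hypothetical):

```sql
START TRANSACTION;
-- Locks every row the statement scans; without an index on `status`,
-- this can lock far more rows than the ones actually returned
SELECT * FROM customer WHERE status = 'pending' FOR UPDATE;
-- ... application work while the locks are held ...
COMMIT;  -- locks are only released here
```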

The Answer 5

30 people think this answer is useful

None of the suggested solutions worked for me but this did.

Something is blocking the execution of the query. Most likely another query updating, inserting into, or deleting from one of the tables in your query. You have to find out what that is:

SHOW PROCESSLIST;

Once you locate the blocking process, find its id and run:

KILL {id};

Re-run your initial query.

The Answer 6

12 people think this answer is useful

100% with what MarkR said. autocommit makes each statement a one statement transaction.

SHOW ENGINE INNODB STATUS should give you some clues about the reason for the lock wait. Also take a good look at your slow query log to see what else is querying the table, and try to remove anything that is doing a full table scan. Row-level locking works well, but not when you’re trying to lock all of the rows!
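Checking the slow query log first requires that it is enabled. A minimal sketch (server-wide settings, so use with care on production):

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;            -- log statements slower than 1 second
SHOW VARIABLES LIKE 'slow_query_log_file'; -- where the log is being written
```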

The Answer 7

5 people think this answer is useful

Can you update any other record within this table, or is this table heavily used? What I think is happening is that while your statement is attempting to acquire the lock it needs to update the record, the configured timeout expires. Increasing the timeout may help.

The Answer 8

3 people think this answer is useful

The number of rows is not huge… Create an index on account_import_id if it’s not the primary key.

CREATE INDEX idx_customer_account_import_id ON customer (account_import_id);

The Answer 9

3 people think this answer is useful

If you’ve just killed a big query, it will take time to rollback. If you issue another query before the killed query is done rolling back, you might get a lock timeout error. That’s what happened to me. The solution was just to wait a bit.


I had issued a DELETE query to remove about 900,000 out of about 1 million rows.

I ran this by mistake (removes only 10% of the rows): DELETE FROM table WHERE MOD(id,10) = 0

Instead of this (removes 90% of the rows): DELETE FROM table WHERE MOD(id,10) != 0

I wanted to remove 90% of the rows, not 10%. So I killed the process in the MySQL command line, knowing that it would roll back all the rows it had deleted so far.

Then I ran the correct command immediately, and got a lock timeout exceeded error soon after. I realized that the lock might actually be the rollback of the killed query still happening in the background. So I waited a few seconds and re-ran the query.
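Rather than guessing, you can watch the rollback progress directly. A sketch using `information_schema.INNODB_TRX` (available in modern MySQL versions):

```sql
-- trx_state reads 'ROLLING BACK' while the killed statement is being undone;
-- trx_rows_modified counts the rows that still have to be rolled back
SELECT trx_id, trx_state, trx_rows_modified
FROM information_schema.INNODB_TRX;
```

When the rolling-back transaction disappears from this list, it is safe to re-run the query.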

The Answer 10

2 people think this answer is useful
Run SHOW PROCESSLIST; and then kill the connection that is in the Sleep state. In my case it was 2456:

kill 2456;

The Answer 11

1 person thinks this answer is useful

Make sure the database tables are using InnoDB storage engine and READ-COMMITTED transaction isolation level.

You can check it with SELECT @@GLOBAL.tx_isolation, @@tx_isolation; in the mysql console.

If it is not set to READ-COMMITTED, then you must set it. Before setting it, make sure you have SUPER privileges in MySQL.

Setting this should solve your problem.

You might also want to check that you aren’t attempting to update this in two processes at once. Users (@tala) have encountered similar error messages in this context; maybe double-check that…

The Answer 12

1 person thinks this answer is useful

I came here from Google and I just wanted to add the solution that worked for me. My problem was that I was trying to delete records from a huge table that had a lot of cascading foreign keys, so I got the same error as the OP.

I disabled autocommit, and then it worked just by adding COMMIT at the end of the SQL statement. As far as I understand, this releases the buffer bit by bit instead of waiting until the end of the command.

To keep with the example of the OP, this should have worked:

mysql> set autocommit=0;

mysql> update customer set account_import_id = 1; commit;

Do not forget to reactivate autocommit afterwards if you want to leave the MySQL configuration as before.

mysql> set autocommit=1;

The Answer 13

0 people think this answer is useful

Late to the party (as usual); however, my issue was that I wrote some bad SQL (being a novice) and several processes had a lock on the record(s) (not sure about the appropriate verbiage). I ended up having to just run SHOW PROCESSLIST and then kill the IDs using KILL <id>.

The Answer 14

0 people think this answer is useful

This kind of thing happened to me when I was using the PHP language construct exit; in the middle of a transaction. The transaction then “hangs”, and you need to kill the MySQL process (as described above with SHOW PROCESSLIST;).

The Answer 15

0 people think this answer is useful

In my instance, I was running an abnormal query to fix data. If you lock the table in your query, then you won’t have to deal with the lock timeout:

LOCK TABLES customer WRITE;
update customer set account_import_id = 1;
UNLOCK TABLES;

This is probably not a good idea for normal use.

For more info see: MySQL 8.0 Reference Manual

The Answer 16

0 people think this answer is useful

I ran into this with 2 Doctrine DBAL connections, one of them non-transactional (for important logs); they are intended to run in parallel, not depending on each other.

My integration tests were wrapped in transactions for data rollback after every test.

    TransactionlessConnectionQuery() // CONFLICT

My solution was to disable the wrapping transaction in those tests and reset the DB data in another way.

The Answer 17

-4 people think this answer is useful

I had this same error, even though I was only updating one table with one entry; but after restarting MySQL, it was resolved.
