How to pop from a stack in MySQL


On a recent project, one of the requirements was to provide unique IDs for tracking that could scale and were race-condition safe.

When you work with PHP and MySQL, this can be challenging to achieve, especially when dealing with high-traffic websites.

Considering the requirements, it was clear that this had to be handled purely in MySQL to be effective. After some planning, testing and failing, we came up with a solution that both scaled well and was race-condition safe.

We loaded the table with sixty million unique IDs and started load testing this part of the system. After seeing it process close to a thousand requests per second against this API method over a sustained period without any race condition issues, we decided it was production ready, as this was well above the traffic estimates the client provided in their requirements.

Note: it is important to understand that the hardware you run, on both the database cluster and the web server cluster, determines how fast you can process requests. If you run your website and database on the same server, you will not be able to obtain similar results.

 

To give the full picture, I will go over each of the three pieces of MySQL code required.

For the table, we ended up with a simple design with two columns. The key here is that it should be inexpensive both to delete records from it and to add more. As you can see, “promotion_track_ref” is our unique reference ID. The column is set to be unique and case sensitive; in other words, the value “ukrEP” is different from “UkrEP”. We did this because it significantly increases the number of distinct IDs you can get out of a five-character string: with both letter cases plus digits you have roughly 62^5 (about 916 million) combinations instead of 36^5 (about 60 million).

An important point: since our unique ID is alphanumeric, we had to add the “promotion_track_id” column so that we could order by it. Without it, the result would be sorted by the reference itself, which makes the ID you pull predictable rather than random for as long as a lot of them are queued up.

CREATE TABLE IF NOT EXISTS `promotion_track` (
  `promotion_track_id` INT NOT NULL AUTO_INCREMENT,
  `promotion_track_ref` CHAR(5) CHARACTER SET 'latin1' COLLATE 'latin1_general_cs' NOT NULL,
  PRIMARY KEY (`promotion_track_id`),
  UNIQUE INDEX `unique` (`promotion_track_ref` ASC))
ENGINE = InnoDB;
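
To make the ordering point concrete, here is a small illustration (the reference values are only examples): the AUTO_INCREMENT key records the order the references were queued in, so the row with the lowest promotion_track_id is always the next one handed out.

INSERT INTO promotion_track (promotion_track_ref)
VALUES ('ukrEP'), ('UkrEP'), ('x92Bq');

-- The oldest queued reference comes back first:
SELECT promotion_track_ref
FROM promotion_track
ORDER BY promotion_track_id ASC
LIMIT 1;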

The most effective way to achieve our goal is to create a stored procedure that handles the process of pulling and deleting the record.

What we tried first was to delete the record and then pull the unique ID:

DELETE FROM
	promotion_track
WHERE
	(@track:=promotion_track_ref)
ORDER BY
	promotion_track_id ASC
LIMIT 1;

SELECT @track;

This did not work too well, as the IDs were not deleted according to the position of promotion_track_id the way we initially expected. After some research, we found out that this is because the value of the user-defined variable is assigned before the data is sorted, so @track does not necessarily hold the reference of the row that was actually removed.

To get around this, we changed the order of the queries and ended up with this stored procedure:

DELIMITER //

CREATE PROCEDURE getProspectTrackID()
BEGIN

START TRANSACTION;

-- FOR UPDATE locks the selected row, so two concurrent calls cannot grab
-- the same reference before one of them has deleted it and committed.
SELECT
	@track:=promotion_track_ref as promotion_track_ref
FROM
	promotion_track
ORDER BY
	promotion_track_id ASC
LIMIT 1
FOR UPDATE;

DELETE FROM
	promotion_track
WHERE
	promotion_track_ref=@track;

COMMIT;

END//

DELIMITER ;

Inside the PHP code, we then called it with:

CALL getProspectTrackID();

To get any performance out of this code it is vital that you call the stored procedure before any database transaction is initiated on the PHP side. If you do not, it will cut down the number of requests you can process, and you will be prone to race conditions!
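
One more note on concurrency: if you are running MySQL 8.0 or newer, the locking read inside the procedure can also be written with SKIP LOCKED, so a caller simply grabs the next unlocked row instead of waiting for another transaction that is holding the first one. This is only a sketch of that variation; the rest of the procedure stays the same:

SELECT
	@track:=promotion_track_ref as promotion_track_ref
FROM
	promotion_track
ORDER BY
	promotion_track_id ASC
LIMIT 1
FOR UPDATE SKIP LOCKED;

Each call then deletes only the reference it actually locked, exactly as before.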

 

What I have not covered is the code that inserts the unique IDs into the table. This part will be specific to your business requirements, so I will leave that for you to solve.

What is important is to remember to also compare the new IDs you generate against the table where you keep the IDs you have already popped out. We found that the easiest and fastest method is to create a temporary table, insert a few million candidate records into it, invalidate the ones that already exist, and only then insert the rest.
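
As a rough sketch of that step (the table names promotion_track_candidate and promotion_track_used are hypothetical here, and generating the candidate references themselves is left out): load the fresh references into a temporary table, throw away any that are already queued or already handed out, then queue the rest.

CREATE TEMPORARY TABLE promotion_track_candidate (
  `promotion_track_ref` CHAR(5) CHARACTER SET 'latin1' COLLATE 'latin1_general_cs' NOT NULL,
  UNIQUE INDEX `unique_candidate` (`promotion_track_ref` ASC))
ENGINE = InnoDB;

-- ... bulk insert the freshly generated candidate references here ...

-- Invalidate candidates that are already queued.
DELETE c FROM promotion_track_candidate c
INNER JOIN promotion_track t ON t.promotion_track_ref = c.promotion_track_ref;

-- Invalidate candidates that have already been popped and handed out.
DELETE c FROM promotion_track_candidate c
INNER JOIN promotion_track_used u ON u.promotion_track_ref = c.promotion_track_ref;

-- Queue whatever survived.
INSERT INTO promotion_track (promotion_track_ref)
SELECT promotion_track_ref FROM promotion_track_candidate;

DROP TEMPORARY TABLE promotion_track_candidate;

INSERT IGNORE against the unique index is another way to handle the first check, but the explicit deletes make it easy to see how many candidates were thrown away.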

Please note that as you reach a few hundred million records, this process will start to take longer, but by the time you get to that number you should seriously consider increasing the length of the ID anyway.

If you found this helpful, or have any questions, please post a comment below.

About the author

Formally educated as an electronics engineer, Sven moved on to web development in 2004 after having had it as a hobby for almost a decade. Over the last few years he has accumulated vast knowledge and experience in the field of complicated web-based applications, working with everything from transaction-based systems to high-traffic websites.

He is passionate about clean, efficient and secure code. When working on a project he will not budge until every security aspect has been taken care of.