
I've started to use mysql_data_seek() as an easy way to roll pagination transparently into my database class by shifting the result pointer and reading the current 'page'.

What are the performance implications of doing this? I am only reading the data I need into PHP itself, but in most cases the SELECT covers the whole table - is this bad, and will I hit problems with massive tables?
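For reference, here's a minimal sketch of the pattern in question, assuming a hypothetical articles table and the old mysql_* API:

$perPage = 10;
$page    = isset($_GET['page']) ? max(0, (int)$_GET['page']) : 0;

// Buffered query: the whole result set is copied into PHP's memory.
$result = mysql_query('SELECT id, title FROM articles ORDER BY id');

$offset = $page * $perPage;
if ($offset < mysql_num_rows($result)) {
    // Shift the result pointer to the start of the current 'page'...
    mysql_data_seek($result, $offset);
    // ...then read just one page's worth of rows.
    for ($i = 0; $i < $perPage && ($row = mysql_fetch_assoc($result)); $i++) {
        echo $row['id'] . ': ' . $row['title'] . "\n";
    }
}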

Meep3D

4 Answers


If I understand your idea, you select all records in the table, then skip the first n*page records with mysql_data_seek() in order to "get to" the current page, and read n records. If you have a lot of records you might not want to do that, because http://php.net/mysql_data_seek says:

Note: The function mysql_data_seek() can be used in conjunction only with mysql_query(), not with mysql_unbuffered_query()

The difference between mysql_query() and the unbuffered version is that mysql_query() returns only after the whole result set has been copied from the MySQL server to the PHP process's memory, whereas with the unbuffered version each mysql_fetch_xyz() call receives the next record from the server. If you have a lot of records and you have to transfer all of them for each single request, that sounds a bit suboptimal.

I guess you want to do this to get the total number of records and the current subset with only one query. With MySQL you can have both a LIMIT clause and the total number of records (the count that would be in the result set without the LIMIT clause):
SELECT SQL_CALC_FOUND_ROWS id FROM foo
ORDER BY id LIMIT 20,10

see http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows
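In PHP that might look like the following (using the foo table from the example above; note that FOUND_ROWS() must be queried on the same connection):

// Fetch the current page in one query...
$result = mysql_query('SELECT SQL_CALC_FOUND_ROWS id FROM foo ORDER BY id LIMIT 20,10');

// ...then ask how many rows the query would have matched without the LIMIT.
$totalRow = mysql_fetch_assoc(mysql_query('SELECT FOUND_ROWS() AS total'));
$total    = (int)$totalRow['total'];

while ($row = mysql_fetch_assoc($result)) {
    echo $row['id'] . "\n";
}
echo "$total rows in total\n";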

VolkerK

I imagine that using the appropriate LIMIT clause in your SELECT statement is more efficient (it will definitely save lookup time, at least), but I don't know the internals of how mysql_data_seek() works; specifically, whether it reads x records and discards them, or whether it works like filesystem seek commands do (sending a signal to MySQL telling it to skip sending the next x records).

If it works the first way, I'd expect a minimal speedup doing it this way. If it works the latter way, I'd expect a speedup, but not as much as simply using the appropriate LIMIT clause.
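One way to find out is to time both approaches against the same table. A rough sketch (the articles table and the row offset are made up; results will vary with table size):

// Whole-table select, then a client-side seek to row 10,000.
$start  = microtime(true);
$result = mysql_query('SELECT id FROM articles ORDER BY id');
mysql_data_seek($result, 10000);
$row = mysql_fetch_assoc($result);
printf("data_seek: %.4fs\n", microtime(true) - $start);

// The same row fetched via a LIMIT clause instead.
$start  = microtime(true);
$result = mysql_query('SELECT id FROM articles ORDER BY id LIMIT 10000,1');
$row = mysql_fetch_assoc($result);
printf("LIMIT:     %.4fs\n", microtime(true) - $start);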

Matthew Scharley

The only thing to worry about is table size.

Basically, your tradeoff is between:

  • Reading the whole table/result into a single resultset, saving round-trips to the database
  • or making multiple bite-sized visits to the DB, one per page

If your application means users don't often get past the first page or two, then you're storing big wodges of data unnecessarily. OTOH if your users tend to visit all the pages, and there are many (and each page visit doesn't launch a new page request/resultset, i.e. you're using dynamic pagination or some semi-persistent server-side memory container), then keeping the resultset in one place is fine, particularly if it's cached between multiple users.

However, unless you've built that into your architecture, you're probably better off using something like

...LIMIT 10,10

...LIMIT 20,10

etc. in your query, and suffering the multiple DB hits, to avoid reading more data than you're going to need and storing it unnecessarily.
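For example, computing the offset from the requested page (table and column names are illustrative):

$perPage = 10;
$page    = isset($_GET['page']) ? max(0, (int)$_GET['page']) : 0;

// One bite-sized query per page visit; only this page's rows
// are transferred from the server and held in PHP.
$sql    = sprintf('SELECT id, title FROM articles ORDER BY id LIMIT %d,%d',
                  $page * $perPage, $perPage);
$result = mysql_query($sql);
while ($row = mysql_fetch_assoc($result)) {
    echo $row['id'] . ': ' . $row['title'] . "\n";
}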

Jeremy Smyth

I'll go with mysql_data_seek() if the cost of query execution is high.

Normally pagination needs two query executions (see MySQL pagination without double-querying?):

a) execute the query to find the total number of records returned

b) execute the query with a LIMIT clause to fetch only the desired results

If the query execution cost is very high, I'll not go for the second execution and will opt for the mysql_data_seek() function instead, as I have used it successfully.
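For comparison, the usual two-query version would look something like this (table name assumed):

// a) find the total number of records
$countRow = mysql_fetch_assoc(mysql_query('SELECT COUNT(*) AS total FROM articles'));
$total    = (int)$countRow['total'];

// b) fetch only the rows for the current page
$result = mysql_query('SELECT id, title FROM articles ORDER BY id LIMIT 20,10');
while ($row = mysql_fetch_assoc($result)) {
    echo $row['id'] . ': ' . $row['title'] . "\n";
}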

sakhunzai