
We are in a situation where we must choose between these two databases. We are currently on Firebird, but it sometimes lags because too much transaction history accumulates, and a backup/restore has to be applied to make things better.

In my specific case: the database mostly has tables filled with numeric fields, and the queries are mostly inner joins. I am inserting and selecting at almost the same rate (but in the future I expect much heavier selecting). There are 3 main tables holding a few billion records each (and growing every second).

I would like to see which is best overall under normal loads like those above, and under heavy loads: selecting and working on the selected fields, and overall performance of event triggering and stored procedure execution. I think that is sufficient knowledge to choose between them (more opinions are welcome), and it will probably help other people make the same decision.

I am asking:

  • Is it the same with InterBase?
  • Is it worth the effort of switching to InterBase?
  • Which performs better overall?
  • Does InterBase have the same history issue as Firebird, where the database keeps growing and slowing down?

P.S.: I will leave this question without an accepted answer for now. Maybe someone will actually compare the two databases on normal, everyday queries, which would make the question and its results more useful for me and for other people in the same situation.

Yordan Yanakiev
  • Without knowing InterBase 9, I'd suggest you dig into scheduled garbage collection and updating index statistics in FB. You should be able to avoid the backup/restore procedure. – Teson May 26 '11 at 07:22
  • Oops... this sounds interesting, but I have never done it. Please point me to the topic, so I can post it as another question, which may lead me to a solution and also help other people having this issue. – Yordan Yanakiev May 26 '11 at 07:30

3 Answers


The problem you describe is usually caused by bad transaction management, or by long-running transactions. In general you don't need a backup and restore to fix this; a backup alone should be sufficient (Firebird performs extra cleanup and garbage collection during a backup).

Firebird and InterBase both use Multi Version Concurrency Control (MVCC), meaning that changes are recorded as new record versions. Old record versions are only cleaned up when there are no open transactions that still have an interest in them. Record versions created by a rolled-back transaction are only cleaned up during a sweep.

Bad transaction management (long-running transactions, or using commit retaining instead of a hard commit), unexpected disconnects, etc., can mean that transactions stay open and will need to be cleaned up by the database (a so-called sweep in Firebird). This can slow down your database, because it needs to read multiple versions of the same record.
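To make the application-side fix concrete: keep each transaction short and end it with a hard commit, so the server can garbage-collect old record versions. A minimal sketch of the pattern in Python, using the standard-library sqlite3 module as a stand-in (with Firebird you would use a driver such as fdb, but the DB-API shape is the same; the table and function names here are made up for illustration):

```python
import sqlite3

# Stand-in for a Firebird connection (fdb.connect(...) would look similar).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.commit()

# Anti-pattern: one transaction held open across many units of work.
# In Firebird/InterBase this keeps old record versions alive and
# prevents garbage collection.

# Pattern: a hard commit (not COMMIT RETAINING) after each short unit
# of work, so the server is free to clean up record versions.
def insert_reading(value):
    with conn:  # opens a transaction, commits on successful exit
        conn.execute("INSERT INTO readings (value) VALUES (?)", (value,))

for v in (1.0, 2.5, 3.7):
    insert_reading(v)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 3
```

The key point is that each unit of work gets its own transaction; nothing in the application holds the oldest active transaction back.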

As said, a sweep is performed during a backup, so just doing a backup should be sufficient to remove most of the problems.

For more detailed information, look at gfix housekeeping.
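As a sketch of what that housekeeping looks like from the command line (the database path, user, and password below are placeholders, and exact output varies between Firebird versions):

```shell
# Inspect the database header page: a large gap between the oldest
# (interesting/active) transaction counters and "Next transaction"
# indicates garbage build-up.
gstat -h /data/mydb.fdb

# Run a manual sweep to clean up old record versions.
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

# Set the automatic sweep interval (0 disables automatic sweeping).
gfix -housekeeping 20000 -user SYSDBA -password masterkey /data/mydb.fdb
```

Checking the header counters with gstat before and after a sweep is a quick way to verify whether transaction housekeeping is actually your bottleneck.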

Mark Rotteveel
  • I am using gfix, but it is very slow and time-consuming, especially when the garbage is about 100 GB weekly... :( Is it the same situation with InterBase? – Yordan Yanakiev May 27 '11 at 14:04
  • As InterBase has the same (or at least a similar) architecture, I don't think there will be much difference. It is very hard to tell, but sometimes these kinds of things are caused by bad transaction management in the application, so you might want to review your exact usage of transactions. – Mark Rotteveel May 29 '11 at 10:11
  • So nobody knows whether there are actually any tests revealing the differences between Firebird and InterBase? – Yordan Yanakiev May 29 '11 at 13:11
  • IB has a similar architecture, but different rules. Read-committed, read-only transactions don't stick the OAT in IB. BTW, backup won't fix a stuck OAT on either server; it will only fix a stuck OIT. Backup doesn't sweep, either; it just reads the whole DB, which is similar to a sweep, but not the same. – Craig Stuntz Jun 11 '11 at 02:57

The obvious, and I'm afraid rather tedious, answer is that you are going to have to benchmark the two using your own workloads. The chances are that your application's workloads will be different from every other benchmark or application.

Gareth Davis
  • Actually, your point is correct, but it is doing almost equal writes/reads over a few billion rows, with an overall database size of 22 GB. So I thought it would be a normal 1-vs-1 fight. – Yordan Yanakiev May 26 '11 at 07:32
  • You should add some of that detail to the question; there might be somebody out there with a similar app :). You also might want to consider whether it is even feasible to switch: transferring 22 GB between two databases will take a while. – Gareth Davis May 26 '11 at 08:09
  • Added the rest of the data... let's hope I get some opinions, because this caching really hurts. – Yordan Yanakiev May 26 '11 at 08:44
  • OK. But how are people supposed to form an opinion on which one to choose? In every forum where I have searched for such a comparison, there were no serious benchmarks, only opinions that discouraged further research on the topic. As a result, I am starting to ask myself whether anyone actually knows if there are any differences, which is better in specific situations, and whether they are perfectly equal on everyday queries. – Yordan Yanakiev May 27 '11 at 11:36

You can send the test databases and the test case to the Firebird developers so they can improve the speed.

I am starting to think that the database needs partitioning.

Mariuz
  • Firstly, I wish to see whether someone has done some comparisons with results, even if not close to my workloads. Then I will take further steps :). But yes, for sure I will send feedback to the Firebird developers after I take everything into consideration... after all, preserving free projects is a prime target :) – Yordan Yanakiev May 31 '11 at 09:10
  • 2
    The way i see it is with firebird you do have a chance to profile it and tune it to your workload : think of something low level as oprofile from linux kernel or tracing and debugging from the inside , of course is harder but I think if we have a test case the performance can be improved , Also for cases with lots of transactions i would recommend the ussage of ssd for faster random IO like is in your case also some monitoring tools might help you http://www.sinatica.com/index.php/en/benefits/increased-preformance – Mariuz Jun 02 '11 at 06:27
  • Oww, this is a really interesting tool, Mariuz. Thank you very much! :) – Yordan Yanakiev Jun 02 '11 at 11:09