I'd like to understand the quirks of MySqlDataReader (or IDataReader in general). Whilst researching on the internet I found many resources on how to use MySqlDataReader, but very little about what is happening behind the scenes. I am asking because in some benchmarks I performed, the time needed to execute MySqlCommand.ExecuteReader() is orders of magnitude smaller than the time needed to read all rows with MySqlDataReader.Read(). This holds especially for large result sets. To give an example: reading ~740,000 rows takes 80-100 ms for the query to execute but about 15 s to read all the data. Another example: reading ~2,200 rows has a query time of ~200 ms but takes ~1 s to read all the data.
According to High Performance MySQL (3rd edition, p. 212), retrieved data is buffered by common connectors, and I would assume this also holds for Connector/Net. I understand that with 740,000 rows perhaps not all of the data can or should be buffered, but it should easily be possible to buffer the 2,200 rows from the second example (I am requesting no more than 5-7 columns).
Creating a comparable number of data structures without reading from a database takes <1 ms (measured with System.Diagnostics.Stopwatch), so object creation is not the bottleneck. I am wondering why reading from the reader takes so much time if the data is buffered.
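For completeness, the no-database baseline was measured along these lines. Again a sketch rather than my exact code; the Row type is a stand-in for the columns I actually read.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class AllocationBaseline
{
    // Stand-in for the handful of columns read per row.
    class Row
    {
        public int Id;
        public string Name;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        var rows = new List<Row>(740000);
        for (int i = 0; i < 740000; i++)
        {
            rows.Add(new Row { Id = i, Name = "placeholder" });
        }
        sw.Stop();
        Console.WriteLine($"Constructed {rows.Count} rows in {sw.ElapsedMilliseconds} ms");
    }
}
```

This only exercises allocation and list growth, so it isolates the cost of building the in-memory structures from the cost of pulling data through the reader.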