
I've found a curious quirk of the MySqlDataReader.GetBytes implementation and I'm wondering whether it's well known, as I can't find any articles about it on the net.

If you follow the code example for the SqlDataReader and apply it to the MySqlDataReader, it won't work... unless the number of bytes in the record you are retrieving is exactly divisible by the buffer size. For example, if you are on the last iteration of the loop and there are only 100 bytes left but it tries to read another 1024 bytes, the MySqlDataReader will fail and throw an exception. The SqlDataReader will not.

Unless I'm doing something wrong?

D.

David

1 Answer


Rather than reading the whole buffer size, only ask for at most the buffer size, but also at most what you believe is left. To be honest, you might as well create a buffer of exactly the right size rather than a fixed-size one anyway.

// I assume this works for MySqlDataReader too...
int length = (int)reader.GetBytes(column, 0, null, 0, 0);
byte[] buffer = new byte[length];
int index = 0;

while (index < length)
{
    int bytesRead = (int)reader.GetBytes(column, index,
                                    buffer, index, length - index);
    index += bytesRead;
}

But if you wanted a smaller buffer (e.g. if you were processing it a buffer at a time) you could use:

int length = (int)reader.GetBytes(column, 0, null, 0, 0);
byte[] buffer = new byte[8 * 1024]; // a fixed-size chunk buffer, smaller than the data
int index = 0;

while (index < length)
{
    int bytesRead = (int)reader.GetBytes(column, index, buffer, 0,
                                    Math.Min(buffer.Length, length - index));
    // Process the buffer, up to bytesRead bytes
    // ...
    index += bytesRead;
}
Jon Skeet
  • What I don't understand is the method GetBytes takes a **long** for the fieldOffset but takes an **int** for bufferoffset? Surely that will cause issues when working with very large binary objects? – David Oct 09 '11 at 12:36
  • @David: If you wanted more than 2^31 bytes in a single go, that would certainly be a problem - but this interface lets you get an amount of data from an *enormous* blob, but still limits you to working with the .NET array types which are limited (at least in some places - it's inconsistent) to signed 32 bit integers for indexes. – Jon Skeet Oct 09 '11 at 16:27
  • So in other words if I was storing something with more than 2147483647 bytes in a database record I'd probably have other problems :) – David Oct 09 '11 at 19:04
  • @David: No, you could *store* that much - but you wouldn't be able to get it into a .NET process as a single array, that's all. For example, you could have a 10GB file in the database and fetch it out 1MB at a time to write to local disk. – Jon Skeet Oct 09 '11 at 19:49
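  • The 10GB-file scenario Jon describes above can be sketched roughly as follows; this is a hedged illustration, not tested code (the column index, output path, and 1 MB chunk size are all made up, and it assumes an open `MySqlDataReader` positioned on a row):

```csharp
// Sketch: stream a large blob to disk 1 MB at a time, so the whole
// value never has to fit into a single .NET array.
// `reader` is assumed to be a MySqlDataReader positioned on a row;
// `columnIndex`, "blob.bin", and the chunk size are illustrative.
const int ChunkSize = 1024 * 1024; // 1 MB

// Passing a null buffer returns the total length of the field.
long length = reader.GetBytes(columnIndex, 0, null, 0, 0);
byte[] buffer = new byte[ChunkSize];
long fieldOffset = 0; // fieldOffset is a long, so blobs over 2GB are addressable

using (var output = File.Create("blob.bin"))
{
    while (fieldOffset < length)
    {
        // Never ask for more than remains, to avoid the MySqlDataReader
        // end-of-field exception described in the question.
        int toRead = (int)Math.Min(ChunkSize, length - fieldOffset);
        long bytesRead = reader.GetBytes(columnIndex, fieldOffset, buffer, 0, toRead);
        output.Write(buffer, 0, (int)bytesRead);
        fieldOffset += bytesRead;
    }
}
```

    Only the per-chunk `bufferoffset` is an `int`; it indexes into the small buffer, so the 32-bit limit never applies to the blob as a whole.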
  • @Jon: I had to add some casts, and change `byte index = 0;` to `int index = 0;` so it would work. Although, you might want to treat `fieldOffset` and `bufferoffset` separately (I just throw an exception if length is > `int.MaxValue` and use `buffer.Length`). Thanks for the great answer (+1). – JohnB Apr 10 '12 at 22:09