I'm not sure if this has been asked before, so I'll give it a try.
I have code that loads a large client list (200k clients). Every client is stored in a (currently) fixed-size struct that contains their name, address and phone number, as follows:
struct client {
    char name[80];
    char address[80];
    char phonenumber[80];
};
As you can see, the size of this struct is 240 bytes, so 200k clients take 48 MB of memory. The obvious advantages of such a structure are ease of management and the ability to keep a "free list" for recycling client slots (see the sketch below). However, if tomorrow I needed to load 5M clients, this would grow to 1.2 GB of RAM.
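To make the free-list idea concrete, here is roughly what I have in mind; the pool, free_head, acquire and release names are just for illustration:

#include <deque>

struct client { char name[80]; char address[80]; char phonenumber[80]; };  // as above

// A slot is either a live client or a link in the free list.
union slot {
    client data;
    slot  *next_free;   // valid only while the slot sits on the free list
};

std::deque<slot> pool;      // deque keeps pointers stable as it grows
slot *free_head = nullptr;

slot *acquire() {
    if (free_head) {                  // recycle a previously released slot
        slot *s = free_head;
        free_head = s->next_free;
        return s;
    }
    pool.emplace_back();              // otherwise grow the pool
    return &pool.back();
}

void release(slot *s) {               // chain the slot back onto the free list
    s->next_free = free_head;
    free_head = s;
}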
Now, in most cases the client's name, address and phone number take much less than 80 bytes each, so instead of the above structure I thought of using one like the following:
struct client {
    char *name;
    char *address;
    char *phonenumber;
};
And then have name, address and phonenumber point to dynamically allocated buffers of exactly the size needed to store each field (see the sketch below).
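Roughly, loading and freeing one client would then look something like this; make_field, load_client and free_client are just illustrative helpers:

#include <cstring>

struct client { char *name; char *address; char *phonenumber; };  // as above

// Illustrative helper: allocate a buffer of exactly the required size
// and copy the source string into it.
char *make_field(const char *src) {
    char *p = new char[std::strlen(src) + 1];
    std::strcpy(p, src);
    return p;
}

client load_client(const char *name, const char *address, const char *phone) {
    client c;
    c.name        = make_field(name);
    c.address     = make_field(address);
    c.phonenumber = make_field(phone);
    return c;
}

void free_client(client &c) {
    delete[] c.name;
    delete[] c.address;
    delete[] c.phonenumber;
}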
I suspect, however, that loading clients this way would greatly increase the number of new[] and delete[] calls, and my question is whether this can hurt performance at some point, for example if I suddenly want to delete 500k of 1M clients and replace them with 350k different ones.
In other words, after I have allocated 1M small "variable length" buffers, if I delete many of them and then make new allocations that should recycle the freed memory, won't the allocator incur some overhead finding suitable free blocks (and possibly suffer from fragmentation)?
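To be concrete, the workload I'm worried about looks roughly like this; it assumes the client struct and the load_client/free_client helpers from the sketch above, and the numbers are only examples:

#include <cstddef>
#include <vector>

// Illustrative churn pattern: bulk-delete half the list, then reload
// with differently sized strings, repeated over the program's lifetime.
void churn(std::vector<client> &clients) {
    std::size_t to_remove = clients.size() / 2;        // e.g. 500k of 1M

    for (std::size_t i = 0; i < to_remove; ++i)        // frees ~3 * 500k small buffers
        free_client(clients[i]);
    clients.erase(clients.begin(), clients.begin() + to_remove);

    for (int i = 0; i < 350000; ++i)                   // new, differently sized allocations
        clients.push_back(load_client("new name", "new address", "555-0100"));
}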