What's the best way to exchange moderately large amounts of data (multiple megabytes, but not gigabytes) between UNIX processes?
I think it would be memory-mapped files, since their size limitations seem tolerable enough.
I need bidirectional communication, so common pipes won't help. And with sockets and UDP there are size limitations, as far as I know (also see here).
I'm not sure if TCP would be a good idea for communication between the child and parent process of a fork().
In related questions such as this one, some people have recommended shared memory / mmap and others have recommended sockets.
Is there something else I should look into? For example, is there some higher-level library that helps with IPC, e.g. by providing XML serialization/deserialization of the data?
Edit due to comments:
In my particular case, there is a parent/controller process and several children (I can't use threads). The controller provides children - on request - with some key data that would probably fit into a single UDP packet. The children act on the key data and return information based on the keys to the controller (the size of that information can be 10-100 MB).
Issues: the size of the response data, the mechanism for informing the parent of a key request, and synchronization - the parent has to delete a key from its list after passing it to a child, so that no key is processed twice.
Boost and other third-party libraries unfortunately cannot be used. I should, however, be able to use the libraries shipped with the SunOS 5.10 system.