We have a TCP/IP socket application consisting of a Java client and a C++ server. The data sent across the socket contains numbers: ints, floats, and char arrays. The floats require a precision of four digits after the decimal point. Recently, because of precision concerns, we started representing floats in the data structure/protocol either as char arrays or as ints (multiply the float by 10000 on the sender side and divide by 10000 on the receiver side).
I was told it is difficult to keep the precision if we put floats in the data structure/protocol directly: the sender supposedly cannot put the exact float onto the socket, and the receiver cannot convert it back to the exact float value.
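For reference, here is a minimal sketch of the scaled-integer approach we currently use, assuming a scale factor of 10000 and Java's `DataOutputStream`/`DataInputStream` (which write in network byte order); the class and method names are just illustrative:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FixedPointCodec {
    // Scale factor for 4 digits after the decimal point.
    private static final int SCALE = 10_000;

    // Sender side: scale the float and write it as a 32-bit big-endian int.
    static void writeScaled(DataOutputStream out, float value) throws IOException {
        out.writeInt(Math.round(value * SCALE));
    }

    // Receiver side: read the int back and divide by the same scale.
    static float readScaled(DataInputStream in) throws IOException {
        return in.readInt() / (float) SCALE;
    }
}
```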
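My understanding is that the exact bit pattern of a float can be transmitted as-is. Here is a sketch of what I mean on the Java side, assuming the C++ server reads the 4 bytes in network byte order and reinterprets them as an IEEE 754 single-precision float (true on practically all modern platforms):

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class RawFloatSender {
    // DataOutputStream.writeFloat writes the result of Float.floatToIntBits(value)
    // as 4 bytes in big-endian (network) order, so the receiver sees
    // exactly the same 32-bit IEEE 754 pattern the sender had.
    static void sendFloat(Socket socket, float value) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeFloat(value);
        out.flush();
    }
}
```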
I am not convinced. Reading the Wikipedia article on single-precision floating point again, it seems a single-precision float provides 6 to 9 significant decimal digits of precision:

> This gives from 6 to 9 significant decimal digits of precision (if a decimal string with at most 6 significant decimal digits is converted to IEEE 754 single precision and then converted back to the same number of significant decimal digits, then the final string should match the original; and if an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant decimal digits and then converted back to single precision, then the final number must match the original [3]).
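To see the difference between "4 digits after the decimal point" and "6 significant decimal digits", here is a small self-contained check (illustrative only):

```java
public class RoundTripCheck {
    public static void main(String[] args) {
        // 6 significant digits: covered by the round-trip guarantee, should match.
        check("12.3456");
        // 8 significant digits, but still only 4 digits after the point:
        // outside the 6-digit guarantee, may come back as 1234.5677.
        check("1234.5678");
    }

    static void check(String original) {
        // decimal string -> IEEE 754 single precision -> decimal string (4 places)
        float f = Float.parseFloat(original);
        String restored = String.format("%.4f", f);
        System.out.println(original + " -> " + restored
                + (original.equals(restored) ? " (match)" : " (MISMATCH)"));
    }
}
```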
What is the good practice for transferring floats across the network if the required precision is 4 or 6 digits? What about more than that? Double? How do banks handle larger floating-point numbers?