8

Something has been troubling me for years. I work with a lot of Bluetooth and lately WiFi streams (SPP). These streams always connect to specific devices, and communication happens via simple byte commands.

Some of the devices (their microcontrollers) I program myself, and there I always have to check that the signal on the wire is what I expect, and send and verify CRCs.

I want to do the same on my smartphone, because I access my streams with "readByte" and read byte by byte, and I keep wondering whether it is actually possible that a) one byte from a message goes missing, or b) messages arrive mixed up or "out of sequence".

I have no idea how much the underlying hardware does. Does it check every message with a CRC and request the message again if it was corrupted? Or does it blindly pass every byte through to my "readByte" method?

If the device sends message a and then b, is it possible that the receiver receives b before a and hands my code b before a, or even mixes up the bytes like a zipper and gives me a[0], then b[0], then a[1], and so on?

How much trust should I have in those streams? Some clarification would be appreciated.
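If you want the same defense-in-depth on the phone side as on your microcontrollers, you can always add your own framing and checksum on top of the stream. Below is a minimal sketch, assuming a hypothetical frame format `[STX][length][payload][XOR checksum]` (this is not any real device's protocol, just an illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical frame layout: [0x02 STX][length][payload...][XOR checksum]
public class FrameReader {
    static final int STX = 0x02;

    // Reads one frame; returns the payload, or null on EOF / checksum mismatch.
    static byte[] readFrame(InputStream in) throws IOException {
        int b;
        while ((b = in.read()) != STX) {       // resync on the start byte
            if (b < 0) return null;            // stream ended
        }
        int len = in.read();
        if (len < 0) return null;
        byte[] payload = new byte[len];
        int sum = 0;
        for (int i = 0; i < len; i++) {
            int c = in.read();
            if (c < 0) return null;
            payload[i] = (byte) c;
            sum ^= c;                          // running XOR checksum
        }
        int crc = in.read();
        return (crc == sum) ? payload : null;  // drop a corrupted frame
    }

    public static void main(String[] args) throws IOException {
        // Simulated wire data; on Android this would be the stream from
        // bluetoothSocket.getInputStream() instead.
        byte[] wire = {0x02, 3, 'a', 'b', 'c', (byte) ('a' ^ 'b' ^ 'c')};
        byte[] p = readFrame(new ByteArrayInputStream(wire));
        System.out.println(new String(p)); // prints "abc"
    }
}
```

Whether this is worth the effort depends on the answers below; as they explain, the lower layers already do this work for you.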

NikkyD
  • 2,057
  • 1
  • 13
  • 29

3 Answers

6

I think you can sleep well. WiFi and Bluetooth are packet-switched networks: each packet carries a CRC, and the physical layer has built-in congestion and link-quality control. So, aside from ultra-rare firmware bugs, they are actually more reliable than a wired serial connection.

In other words, error detection and correction happen at a lower level than the one you are using.

As for packet arrival order: point-to-point protocols are not affected by this problem. Packet reordering occurs when packets travel by different routes, so there is no problem when there are no other routes.

You will get the same bytes in the same order as long as you use byte-oriented streams over those protocols, because they are designed with this goal in mind. Raw packet access, on the other hand, is not, but Android does not give you a way to use it anyway.
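One practical caveat when relying on that ordering guarantee: a single `read()` call on an `InputStream` (for example the one from `BluetoothSocket.getInputStream()`) may return fewer bytes than you asked for, even though the bytes themselves arrive intact and in order. A small sketch of looping until a whole message has arrived:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadExactly {
    // The stream delivers bytes in order, but one read() may return fewer
    // bytes than requested - loop until the whole message is in
    // (DataInputStream.readFully does the same job).
    static byte[] readExactly(InputStream in, int n) throws IOException {
        byte[] buf = new byte[n];
        int off = 0;
        while (off < n) {
            int got = in.read(buf, off, n - off);
            if (got < 0) throw new EOFException("stream closed mid-message");
            off += got;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Simulated stream in place of a real Bluetooth socket stream.
        InputStream in = new ByteArrayInputStream("hello world".getBytes());
        System.out.println(new String(readExactly(in, 5))); // prints "hello"
    }
}
```

So the only "splitting" you ever see happens at read-call granularity, not in the byte order itself.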

weaknespase
  • 954
  • 7
  • 15
1

If you have ever learnt about the OSI model of computer networks, you will better understand what I am talking about.

First, TCP/IP has nothing in common with Bluetooth. TCP is a transport-level protocol, whereas Bluetooth is a lower-level protocol. Thus you could run TCP or UDP on top of Bluetooth, just as you run TCP and UDP on top of Ethernet.

Second, when data is transferred between devices over a network socket, TCP is typically used. TCP uses retransmission and congestion-recovery algorithms to ensure data is delivered exactly. Modern implementations of TCP include four intertwined congestion-control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. If you want to know more about these, search the internet; they are more theoretical than programmatic.

SilentKnight
  • 13,063
  • 18
  • 46
  • 76
0

Well, about data corruption I don't have any good insight. But "mixing up the bytes like a zipper and giving you a[0], then b[0], then a[1]" should not happen.

I built an app that parses NMEA messages from an external Bluetooth GPS. I don't check anything, and somehow my app works stably.
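If you ever do want a cheap sanity check there, NMEA sentences already carry one: the `*HH` suffix is the two-digit hex XOR of all characters between `$` and `*`. A minimal validator sketch:

```java
public class NmeaCheck {
    // NMEA 0183 sentences end in "*HH", where HH is the hex XOR of every
    // character between '$' and '*'; checking it catches corrupted sentences.
    static boolean isValid(String sentence) {
        int star = sentence.lastIndexOf('*');
        if (!sentence.startsWith("$") || star < 0) return false;
        int sum = 0;
        for (int i = 1; i < star; i++) sum ^= sentence.charAt(i);
        return sentence.substring(star + 1).trim()
                       .equalsIgnoreCase(String.format("%02X", sum));
    }

    public static void main(String[] args) {
        // A well-known example sentence with a correct checksum.
        System.out.println(isValid("$GPGLL,4916.45,N,12311.12,W,225444,A*31")); // prints "true"
    }
}
```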

Alpha
  • 1,685
  • 3
  • 19
  • 38