3

According to its definition, bandwidth is the width of a frequency spectrum, so it should be measured in Hz. Yet bps, kbps, and Mbps are used as measurements of bandwidth almost everywhere. What I need to know is why units of data transmission rate such as bps and kbps are used to measure the bandwidth of a signal.

buddhi weerasinghe
  • I'm no expert, but my guess would be that a given width of frequency spectrum can carry different bps speeds, which is why basing the measurement purely on the width is not reliable. E.g. 400 Hz-420 Hz will not give you as many bps as 10,000 Hz-10,020 Hz even though the "width" is the same. – hyarion Apr 30 '14 at 13:59
  • I'm voting to close this question as off-topic because it belongs on the Electrical Engineering Stack Exchange. – Koray Tugay Jan 19 '20 at 23:05

4 Answers

0

Throughput, measured in bps, kbps, or Mbps, is mistakenly referred to as bandwidth because of a misconception about the Shannon-Hartley law.

The Shannon-Hartley law states that channel capacity is proportional to frequency bandwidth (for a given signal-to-noise ratio). Channel capacity is the theoretical upper limit on the throughput of a channel. A channel could be a single fiber-optic cable, or an FDMA or TDMA channel.
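
As a minimal sketch of that relationship (the 1 MHz bandwidth and 30 dB SNR figures below are illustrative, not from the answer), capacity scales linearly with bandwidth when the signal-to-noise ratio is held fixed:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Theoretical upper limit on throughput, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Doubling the bandwidth doubles the capacity at the same SNR, which is
# one reason the two quantities are so often conflated.
print(shannon_capacity(1_000_000, 1000))   # ~9.97 Mbps over 1 MHz at 30 dB SNR
print(shannon_capacity(2_000_000, 1000))   # ~19.93 Mbps over 2 MHz at the same SNR
```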

Bandwidth is not the same thing as throughput. Not long ago, radio stations in the United States switched from analog to digital. Under the analog system, stations had been allocated a frequency separated from the adjacent frequencies by 200 kHz. So a station's bandwidth was 200 kHz, from 100 kHz below to 100 kHz above its assigned center frequency. (In practice, some of a station's power might leak outside its assigned frequency band.)

With the digital standard, stations could transmit clearly in a 40 kHz band. So stations could use their allocated band to set up as many as five channels.

For example, a local radio station is assigned 88.1 MHz. Under the analog assignment, they could transmit in the 88.0 MHz to 88.2 MHz band. After switching to digital, they used this band for three channels: 88.00 to 88.04, 88.08 to 88.12, and 88.16 to 88.20 MHz. So they tripled their throughput, but the bandwidth they used was reduced from 200 kHz to 120 kHz.
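
A quick sketch of the arithmetic in that example (all figures taken from the answer above):

```python
# Check the bandwidth-vs-throughput arithmetic in the radio example.
analog_allocation_khz = 200   # one analog station allocation
digital_channel_khz = 40      # width of one digital channel
channels_in_use = 3           # the station runs three digital channels

used_bandwidth_khz = channels_in_use * digital_channel_khz
print(used_bandwidth_khz)                          # 120 kHz actually occupied
print(analog_allocation_khz - used_bandwidth_khz)  # 80 kHz of the allocation left unused
print(channels_in_use)                             # throughput tripled: 3 streams instead of 1
```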

Bandwidth and throughput are not the same thing, so calling throughput, which is measured in bps, by the name bandwidth is mistaken.

Marlin Pierce
  • Actually bps (**bits** per second) is rarely used to measure throughput. Usually that's Bps (**bytes** per second). bps is usually a measure of raw data rate, whereas throughput is the resulting data transfer rate after other losses. Losses can include things like Ethernet frame headers and signal-quality issues resulting in retransmission. – Philip Couling May 02 '21 at 12:22
  • Here is the basis of comparison of FM radio channels. When the station had a 200 kHz analog allocation, it would transmit one three-minute song in three minutes. If, using digital channels, it used three of the five allocated 40 kHz digital channels, it could transmit three three-minute songs in three minutes, for a tripled data rate. – Marlin Pierce May 03 '21 at 17:18
  • I might agree with you that my statements lack rigor. My point is to throw a monkey wrench into the misconception people have of Shannon-Hartley, that the rate at which information can be transmitted is proportional to bandwidth. The theoretical maximum rate is proportional to bandwidth, but actual usage of channels does not reach the theoretical maximum. The reasons for this are noise, error correction, redundancy from not compressing the data, and artifacts of the protocol specification. – Marlin Pierce May 03 '21 at 17:23
0

Short version

  • 1 bps requires 1 Hz of bandwidth when modulated onto a carrier (e.g. radio).
  • When the technology requires no carrier, 1 bps corresponds to a maximum frequency of 0.5 Hz (e.g. wired Ethernet).

So it's often more useful to compare data rates than frequencies. Where the term "bandwidth" applies (a modulated carrier), bits per second = hertz.


To transmit 1, 0, 1, 0, 1, 0 (6 bits) in one second, the signal goes high, low, high, low, high, low. That's 3 full cycles of high to low and back to high. So to transmit 6 bits per second, the information signal oscillates at no more than 3 Hz.

So information frequency = 0.5 * data rate.

(Image: sine wave at 3 Hz)

But when you modulate a (radio) carrier with information, the result is a (radio) signal containing other frequencies near the carrier. The difference between the carrier and these other frequencies is the same as the information frequency.

So the whole signal comprises the carrier and two "sidebands".

I.e. bandwidth = 2 × information frequency.

I.e. bandwidth (Hz) = data rate (bps).
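
A small sketch of those relationships, assuming the answer's simplifying worst case: an alternating 1, 0, 1, 0, ... two-level signal and simple double-sideband modulation:

```python
def baseband_frequency_hz(data_rate_bps: float) -> float:
    # Two bits (one high, one low) make one full cycle of the information signal.
    return 0.5 * data_rate_bps

def modulated_bandwidth_hz(data_rate_bps: float) -> float:
    # Modulating a carrier produces a sideband on each side of it,
    # so the occupied bandwidth is twice the information frequency.
    return 2 * baseband_frequency_hz(data_rate_bps)

print(baseband_frequency_hz(6))    # 3.0 Hz, the 6 bps example above
print(modulated_bandwidth_hz(6))   # 6.0 Hz -> bandwidth (Hz) equals data rate (bps)
```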


Philip Couling
-1

Well, in data communications, bits per second (abbreviated bps or bit/sec) is a common measure of data speed for computer modems and transmission carriers. As the term implies, the speed in bps is equal to the number of bits transmitted or received each second.

It is important to distinguish between bits per second (bps) and bytes per second (Bps). One byte is equal to 8 bits.
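
A minimal sketch of that conversion (the function name is just for illustration):

```python
# Convert a link's advertised bit rate (bps) into bytes per second (Bps).
def bps_to_Bps(bits_per_second: float) -> float:
    return bits_per_second / 8   # one byte is 8 bits

print(bps_to_Bps(100_000_000))   # a "100 Mbps" link moves at most 12.5 million bytes/s
```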

Jebathon
  • This isn't an answer. The question was asking why we refer to bandwidth with a data rate unit of measurement, rather than the bandwidth unit of measurement when talking about internet speeds. – Andrew Oct 03 '18 at 01:48
-1

The first place to start is the spectrum. Here the whole thing, from 0 Hz to a gazillion Hz, has been mapped out in highway-like lanes. Just as on a highway you measure the width of a lane in meters, here the width is measured in Hz. This is an analog concept. However, note that Hz is really a ratio of two quantities, not one: it tells you how many cycles take place per unit time (Hz -> cycles/time). So Hz is not a linear measure like distance. It is a more complex idea, but we tend to forget the time part and plot it as if it were a scalar.

Then we come to digital communications. Here we have two ways of judging how many bits we can transmit per unit time; this also has a time dimension. Nyquist gave us the first way of estimating how many bits per second we can transmit given a bandwidth of B Hz. He says it is 2 times B, at most. But by taking into account the number of signal levels M, a completely independent parameter, the number is actually 2 B log2(M). This Nyquist limit is in terms of bits per second. As you can see in this equation, the term B in Hz has been converted to bps merely by multiplication with a scalar factor, 2 log2(M). So they are really the same thing.
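
A minimal sketch of the Nyquist limit described above, where B is the bandwidth in Hz and M is the number of distinct signal levels (the 3 kHz example bandwidth is illustrative only):

```python
import math

def nyquist_limit_bps(bandwidth_hz: float, levels: int) -> float:
    """Maximum noiseless bit rate: 2 * B * log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

print(nyquist_limit_bps(3000, 2))    # 6000.0 bps with binary (2-level) signalling
print(nyquist_limit_bps(3000, 16))   # 24000.0 bps with 16 levels over the same 3 kHz
```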

Then we have Shannon, who developed an expression that takes into account the noise in the channel and came up with C = B log2(1 + SNR), irrespective of M. This too is in bits per second.
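
As a worked example (the 3 kHz bandwidth and 30 dB SNR, i.e. a power ratio of 1000, are illustrative figures, not taken from the answer):

```latex
C = B \log_2(1 + \mathrm{SNR})
  = 3000\,\text{Hz} \times \log_2(1 + 1000)
  \approx 3000 \times 9.97
  \approx 29{,}900\ \text{bits per second}
```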

Both of these are ways to relate the frequency measure of a channel, which is in Hz, to bits per second. Note that both Hz and bits per second are time-related concepts, so nothing strange has happened here. Hz and bps are essentially the same concept and directly related.

Bit efficiency is often given in just bits (bit efficiency = rate / bandwidth). When you divide rate by bandwidth in Hz, the time units cancel and you are left with just bits, such as 200 bps / 100 Hz -> 2 bits per cycle. But this is often written just as bits.

  • The Nyquist and Shannon formulas give channel capacity, not bandwidth. – Andrew Oct 03 '18 at 02:02
  • @Andrew The Nyquist and Shannon formulae relate bandwidth to channel capacity. Channel capacity is the theoretical maximum throughput, measured in bps. – Marlin Pierce Oct 23 '19 at 10:43