Don't Worry if You Overslept This Class - Byte Order

When I was reading the source code of a network system, htons() and ntohs() were the very first two functions that confused me. To understand them, I decided to refresh my fading college knowledge and squeeze out the last bit of conceptual doubt about byte order.

We are lucky as computer science practitioners, since we already understand the smallest unit in cyberspace: the bit. Thus, we only need to advance in one direction: up.

A bit is simply a 0 or a 1. Physically, it is sometimes represented as a tiny magnetized region on a disk platter.

Moving up, we have the byte, which is 8 bits. We generally use bytes to measure the size of data, just as we use meters to measure the size of objects. So instead of saying "something is 32 bits", we normally say "it is 4 bytes".

Moving upward, we have the word, which is 4 bytes (or 32 bits) on a 32-bit architecture. Note that I use a 32-bit architecture throughout this text to keep it concise; the concepts port easily to 64-bit architectures.

Network and host byte order

Byte order (also known as endianness) controls how a word is stored in memory, and how it is transmitted over the network. In big-endian, the most significant byte sits at the lowest address; whilst in little India, some dishes are really hot…no…in little-endian, the most significant byte sits at the highest address.

[Figures: one direction, or another]

Laying bytes on a physical medium (that is, memory or the network) is like laying floors: in both cases it is OK to lay a wooden tile, or a 4-byte word, in either direction. However, a system designer has to make a decision so as to keep the style consistent. As defined in RFC 1700, network protocol designers chose big-endian. However, some designers of host systems disagree: x86 is little-endian; ARM can be either. That means the actual bytes representing the same value vary across physical media. For instance, the value 0xA1B2C3D4 (2712847316 in decimal) can take two forms, as given below:

[Figure: endianness, the big-endian vs little-endian layouts of 0xA1B2C3D4 in memory]

In the figure above, each box (for example, the one containing A1) represents a byte. Please note that the "endianness shuffling" happens at the level of bytes, not bits.
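To make the figure concrete, here is a minimal sketch (entirely my own construction) that builds both layouts by hand with shift operations. Because the bytes are extracted by value, it prints the same two sequences on any machine: A1 B2 C3 D4 for big-endian and D4 C3 B2 A1 for little-endian.

#include <stdio.h>

int main() {
    unsigned int v = 0xA1B2C3D4;

    /* Extract bytes by value with shifts, independent of how the
     * host itself stores v in memory. */
    unsigned char big[4]    = { (unsigned char)(v >> 24), (unsigned char)(v >> 16),
                                (unsigned char)(v >> 8),  (unsigned char)v };
    unsigned char little[4] = { (unsigned char)v,         (unsigned char)(v >> 8),
                                (unsigned char)(v >> 16), (unsigned char)(v >> 24) };

    /* Print from low address to high address. */
    printf("big-endian:    %02X %02X %02X %02X\n", big[0], big[1], big[2], big[3]);
    printf("little-endian: %02X %02X %02X %02X\n", little[0], little[1], little[2], little[3]);
}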

Machines can process the two formats almost equally well, but humans complain that numbers in little-endian look reversed. Why? Isn't it more logical to put the less significant byte at the lower address, and the more significant byte at the higher one?

The reason is that when we write numbers on paper (which is, well, another physical medium), we use big-endian unconsciously. Taking the above number (0xA1B2C3D4) as an example, our subconscious draws the low and high addresses from left to right even though they are not there:

[Figure: we use big-endian unconsciously]

But if we force our subconscious to draw them from right to left, maybe we can reconcile rationale with intuition.

[Figure: it is NATURALLY little-endian now!]

In my opinion, this is perfectly acceptable: we already juggle all kinds of coordinate systems when locating UI components on a screen (in app and game development, for instance, the origin often sits at the top-left corner with y growing downward).

What do you think?

Next, we look at how these ideas are used, and why they matter, in practice.

htons() and ntohs()

These two functions bridge the format gap between the network and hosts. Technically, when hosts communicate over the network, they are used to convert a packet's byte order; the trailing "s" stands for "short", so they operate on 16-bit values. If the host and network byte orders are the same (that is, the host is also big-endian), the two functions simply do nothing. If the host byte order differs from the network's (that is, the host is little-endian), htons() converts data from little-endian to big-endian, and ntohs() converts it from big-endian back to little-endian.

There is another pair of functions of this kind, htonl() and ntohl(), which operate on 32-bit longs instead of 16-bit shorts. They are based on the same principle, so I will not discuss them further.
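As a quick, hypothetical illustration of where these calls show up in practice, here is a sketch of filling in an IPv4 socket address (the function name make_addr and the port number are my own choices). The kernel expects the port and address fields in network byte order, whatever the host's own order happens to be:

#include <sys/socket.h>  /* AF_INET */
#include <netinet/in.h>  /* struct sockaddr_in, INADDR_ANY */
#include <arpa/inet.h>   /* htons(), htonl() */
#include <string.h>      /* memset() */

/* Fill in an IPv4 socket address for a hypothetical server on port 8080. */
void make_addr(struct sockaddr_in *addr) {
    memset(addr, 0, sizeof(*addr));
    addr->sin_family      = AF_INET;
    addr->sin_port        = htons(8080);         /* 16-bit field: htons() */
    addr->sin_addr.s_addr = htonl(INADDR_ANY);   /* 32-bit field: htonl() */
}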

Fact-check

I always need concrete code to be sure.

#include <stdio.h>
#include <arpa/inet.h>  /* htons() */

int main() {
    unsigned int anint = 0xa1b2c3d4;
    /* Inspect the value byte by byte, exactly as it sits in memory. */
    unsigned char *truth = (unsigned char *)&anint;

    printf("value in decimal: %u\n", anint);
    printf("0x");

    for (int i = 0; i < sizeof(anint); i++) {
        printf("%02X", truth[i]);
    }

    printf("\n");

    /* Note: htons() takes a 16-bit short, so the upper two bytes
     * of anint are truncated before the swap. */
    unsigned int anint_net = htons(anint);
    truth = (unsigned char *)&anint_net;

    printf("value in decimal after hton: %u\n", anint_net);
    printf("0x");

    for (int i = 0; i < sizeof(anint_net); i++) {
        printf("%02X", truth[i]);
    }
    printf("\n");
}

The result on my machine:

value in decimal: 2712847316
0xD4C3B2A1
value in decimal after hton: 54467
0xC3D40000

Look closely at the result: because htons() operates on 16-bit shorts, only the low two bytes of anint (0xC3D4) survived the call, swapped into 0xD4C3 (54467 in decimal), with the upper bytes zeroed. Still, the example shows the essence: htons() rearranges bytes into big-endian to prepare for network transmission, and the receiving host recovers the original value with ntohs(). Though the real network operations are not demonstrated in the example, I think you can get the idea.
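To convert the whole 32-bit word, htonl() is the right tool. Here is a variant of the snippet above (a minimal sketch reusing the same variable names); on a little-endian machine it should print 0xA1B2C3D4, that is, the big-endian layout, ready for the wire:

#include <stdio.h>
#include <arpa/inet.h>  /* htonl() */

int main() {
    unsigned int anint = 0xa1b2c3d4;

    /* htonl() operates on 32-bit longs, so no bytes are lost. */
    unsigned int anint_net = htonl(anint);
    unsigned char *truth = (unsigned char *)&anint_net;

    printf("0x");
    for (int i = 0; i < sizeof(anint_net); i++) {
        printf("%02X", truth[i]);
    }
    printf("\n");
}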

How to determine the byte order

We can reuse some code from the example above to implement a function that reports a machine's byte order:

/* Returns 1 on a little-endian machine, 0 on a big-endian one. */
int isLittle() {
    unsigned int anint = 0xa1b2c3d4;
    unsigned char *truth = (unsigned char *)&anint;
    /* On a little-endian machine, the least significant byte (0xd4)
     * comes first in memory. */
    return truth[0] == 0xd4;
}
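A minimal way to try it out (the wrapper main() is my own):

#include <stdio.h>

int isLittle() {
    unsigned int anint = 0xa1b2c3d4;
    unsigned char *truth = (unsigned char *)&anint;
    return truth[0] == 0xd4;
}

int main() {
    /* Prints "little" on x86; "big" on a big-endian machine. */
    printf("this machine is %s-endian\n", isLittle() ? "little" : "big");
}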

In fact, there is a much simpler way: read the CPU information directly. If you ask Ubuntu, you'll learn the code-free method:

lscpu | grep "Byte Order"
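On a typical x86 machine, the output should look something like:

Byte Order:            Little Endian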

That's it. Did I make a serious mistake, or miss anything important? Or did you simply like the read? Link me on -- I'd be chuffed to hear your feedback.