Good morning Lion Nation, and welcome back to my weekly tech article, Techie Tuesday. I’m Devlyn Coulter, ACH’s resident tech nerd. This week, I’ll be discussing the units of information measurement and what they mean, as well as throwing in a little bit of trivia.
What are bits and bytes?
At their most basic level, all computers speak the same language: binary. Binary is a system of numbers that operates differently from the numbers we know. Instead of going 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and then adding a digit, like our numbers, binary has only 2 digits, 0 and 1. Because of this, binary counts 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010…, hence “binary,” which means “having 2 parts.” Each 0 or 1 is called a “bit.” 8 bits make a byte. It sounds complicated, but with some practice, most people can pick it up pretty quickly.
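If you want to play with binary yourself, Python has it built in. Here's a quick sketch using the standard `bin()` and `int()` functions to count in binary and convert back:

```python
# Count from 0 to 8 in binary using Python's built-in bin().
# bin() returns a string like "0b101"; the [2:] strips the "0b" prefix.
for n in range(9):
    print(n, "->", bin(n)[2:])

# Going the other way: int(text, 2) reads a string as a binary number.
# A byte is 8 bits, so the biggest value one byte can hold is 11111111.
print(int("11111111", 2))  # prints 255
```

Eight 1s come out to 255, which is why so many computer limits land on numbers like 255 and 256.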
I hear “Megabyte” and “Gigabyte” a lot. What do those words mean?
In order to understand how these words work, one must first learn how they are made, and why the words are formed that way.
The words have two parts, a prefix and the word “byte.”
The prefixes are the same ones that are used in SI/metric measurements.
Here’s a little chart for you:
Kilo: 1,000 (thousands)
Mega: 1,000,000 (millions)
Giga: 1,000,000,000 (billions)
Tera: 1,000,000,000,000 (trillions)
Peta: 1,000,000,000,000,000 (quadrillions)
A thousand of each unit makes 1 of the next unit up: 2,400 Megabytes makes 2.4 Gigabytes.
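The conversion in that example is just multiplication and division by powers of 1,000. Here's a small sketch of the same math in Python (the constant names are my own, just for readability):

```python
# SI prefixes from the chart, written as plain numbers.
# Python allows underscores in numbers to make them easier to read.
MEGA = 1_000_000
GIGA = 1_000_000_000

# Converting 2,400 megabytes to gigabytes:
megabytes = 2_400
gigabytes = megabytes * MEGA / GIGA
print(gigabytes)  # prints 2.4
```

Each step up the chart divides by 1,000; each step down multiplies by 1,000.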
Now, as promised, a little trivia:
1: 4 bits are called a “nibble.” This is because it’s half of a byte!
2: Each step up in measurement units is actually 1,024 of the previous measurement, not 1,000. This is because 1,024 is a “power of two,” and powers of two are easier for computers to organize.
3: The code in The Matrix movie is not binary; it’s a random mixture of different Japanese characters.
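That second trivia fact is easy to check for yourself. Here's a quick sketch showing where 1,024 comes from, and how much the “power of two” version and the round-number version drift apart as the units get bigger:

```python
# 1,024 is two multiplied by itself ten times: a power of two.
print(2 ** 10)  # prints 1024
print(10 ** 3)  # prints 1000 -- the round-number version

# The gap grows with each step up. Compare a gigabyte both ways:
power_of_two_gb = 2 ** 30   # 1024 * 1024 * 1024 bytes
round_number_gb = 10 ** 9   # 1,000,000,000 bytes
print(power_of_two_gb - round_number_gb)  # prints 73741824
```

That difference of about 74 million bytes is why a hard drive labeled “1 GB” can show up as smaller once your computer counts it in powers of two.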
That’s all for this week, so until we meet again, remember to stay curious!