Sunday, February 23, 2014

Units of information


In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the information content or entropy of random variables.

The most common units are the bit, the capacity of a system that can exist in only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (binary power prefixes). Information capacity is a dimensionless quantity, because it refers to a count of binary symbols.

In 1928, Ralph Hartley observed a fundamental storage principle,[1] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm log_b N of the number N of possible states of that system. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) · log_b N. Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.

When b is 2, the unit is the "bit" (a contraction of binary digit). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include:

    Base b = 3: the unit is called "trit", and is equal to log_2 3 (≈ 1.585) bits.[2]
    Base b = 10: the unit is called decimal digit, Hartley, ban, decit, or dit, and is equal to log_2 10 (≈ 3.322) bits.[1][3][4][5]
    Base b = e, the base of natural logarithms: the unit is called a nat, nit, or nepit (from Neperian), and is worth log_2 e (≈ 1.443) bits.[1]

The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are sometimes easier to handle than logarithms in other bases.
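To make these conversions concrete, here is a minimal Python sketch (the function name information_content is an illustrative choice, not from any library) that measures the capacity of an N-state system in bits, trits, nats, and bans:

    import math

    def information_content(n_states, base=2):
        # Information stored by a system of n_states equally likely states,
        # in units of log base `base`: 2 = bits, 3 = trits, e = nats, 10 = bans.
        return math.log(n_states, base)

    print(information_content(8, 2))        # 3.0 bits, as in the example above
    print(information_content(8, 3))        # ~1.893 trits
    print(information_content(8, math.e))   # ~2.079 nats
    print(information_content(8, 10))       # ~0.903 bans
    print(math.log2(3))                     # ~1.585: one trit expressed in bits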

Byte


The byte /ˈbaɪt/ is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.

The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte.

Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today it almost always means eight bits, that is, an octet. A byte can represent 256 (2^8) distinct values, such as the integers 0 to 255 or −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
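As a small illustration of those two value ranges, the sketch below reads the same byte both ways using Python's int.from_bytes:

    raw = bytes([0b10000000])  # a single byte with the high bit set

    # Unsigned interpretation: values 0 to 255
    print(int.from_bytes(raw, "big", signed=False))  # 128

    # Signed (two's complement) interpretation: values -128 to 127
    print(int.from_bytes(raw, "big", signed=True))   # -128

    # Either way, one byte distinguishes 2^8 = 256 values
    print(2 ** 8)  # 256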


Considerable confusion exists about the meanings of the SI (or metric) prefixes used with the unit byte, especially concerning prefixes such as kilo (k or K) and mega (M). Because computer memory is designed with binary logic, multiples are expressed in powers of 2. Some portions of the software and computer industries often use powers-of-2 approximations of the SI-prefixed quantities, while producers of computer storage devices prefer strict adherence to SI powers-of-10 values. This is why a hard drive advertised as, say, 100 GB actually holds about 93 GiB of storage space.

While the numerical difference between the decimal and binary interpretations is relatively small for the prefixes kilo and mega, it grows to over 20% for the prefix yotta, as the short calculation below illustrates.
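A minimal sketch of that calculation in plain Python; it reproduces the ~93 GiB figure from the previous paragraph and the growing decimal/binary gap (the prefix list is just for printing):

    # A drive marketed as 100 GB holds 100 * 10^9 bytes; in GiB (2^30 bytes):
    print(100 * 10**9 / 2**30)  # ~93.13

    # The relative difference between 1000^n and 1024^n grows with the prefix:
    prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
    for n, name in enumerate(prefixes, start=1):
        gap = (1024**n - 1000**n) / 1000**n
        print(f"{name:>5}: {gap:6.2%}")  # kilo: 2.40% ... yotta: 20.89%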


Nibble

A group of four bits, or half a byte, is sometimes called a nibble or nybble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit. 
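Since each nibble corresponds to exactly one hexadecimal digit, splitting a byte into its nibbles is a shift and a mask; a minimal sketch:

    byte = 0xB7  # 0b1011_0111

    high = (byte >> 4) & 0xF  # 0b1011 = 0xB
    low = byte & 0xF          # 0b0111 = 0x7

    # The two nibbles are the two hex digits of the byte:
    print(f"{byte:02X} -> {high:X}, {low:X}")  # B7 -> B, 7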


Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 16 bits, but other past and current architectures use word sizes of 8, 24, 32, 36, 56, 64, or 80 bits, among others.

Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").
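Under the 16-bit word convention just described, a word, double word, and quad word occupy 2, 4, and 8 bytes. A small sketch with Python's standard struct module (the format codes H, I, and Q denote unsigned 16-, 32-, and 64-bit integers):

    import struct

    print(struct.calcsize("<H"))  # 2 bytes: word
    print(struct.calcsize("<I"))  # 4 bytes: double word (dword)
    print(struct.calcsize("<Q"))  # 8 bytes: quad word (qword)

    # Packing the value 1 as a little-endian dword:
    print(struct.pack("<I", 1))   # b'\x01\x00\x00\x00'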

Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.

Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
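The page size in use can be checked at runtime; the sketch below uses Python's standard mmap module and typically prints 4096 (4 KiB) on x86 systems:

    import mmap

    # The virtual memory page size of the running system; 4096 bytes on
    # typical x86 hardware (resource.getpagesize() is a Unix-only alternative).
    print(mmap.PAGESIZE)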


Systematic multiples

Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10^3 = 1,000 (kilobit or kbit), mega = 10^6 = 1,000,000 (megabit or Mbit), and giga = 10^9 = 1,000,000,000 (gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in kilobyte (1 kB = 8,000 bits), megabyte (1 MB = 8,000,000 bits), and gigabyte (1 GB = 8,000,000,000 bits).

However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2^28 = 268,435,456 bytes. To avoid such unwieldy numbers, people have often misused the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2^10 = 1,024, mega for 2^20 = 1,048,576, giga for 2^30 = 1,073,741,824, and so on. For example, a random access memory chip with a capacity of 2^28 bytes would be referred to as a 256-megabyte chip. The table below illustrates these differences.

Symbol   Prefix   SI meaning         Binary meaning     Size difference
K        kilo     10^3  = 1000^1     2^10 = 1024^1       2.40%
M        mega     10^6  = 1000^2     2^20 = 1024^2       4.86%
G        giga     10^9  = 1000^3     2^30 = 1024^3       7.37%
T        tera     10^12 = 1000^4     2^40 = 1024^4       9.95%
P        peta     10^15 = 1000^5     2^50 = 1024^5      12.59%
E        exa      10^18 = 1000^6     2^60 = 1024^6      15.29%
Z        zetta    10^21 = 1000^7     2^70 = 1024^7      18.06%
Y        yotta    10^24 = 1000^8     2^80 = 1024^8      20.89%

In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was never consistently applied.

On the other hand, for external storage systems (such as optical disks), the SI prefixes were commonly used with their decimal values (powers of 10). There have been many attempts to resolve the confusion by providing alternative notations for power-of-two multiples. In 1998 the International Electrotechnical Commission (IEC) issued a standard for this purpose, namely a series of binary prefixes that use 1024 instead of 1000 as the main radix.

Symbol   Prefix               Example
Ki       kibi (binary kilo)   1 kibibyte (KiB) = 2^10 bytes = 1024 B
Mi       mebi (binary mega)   1 mebibyte (MiB) = 2^20 bytes = 1024 KiB
Gi       gibi (binary giga)   1 gibibyte (GiB) = 2^30 bytes = 1024 MiB
Ti       tebi (binary tera)   1 tebibyte (TiB) = 2^40 bytes = 1024 GiB
Pi       pebi (binary peta)   1 pebibyte (PiB) = 2^50 bytes = 1024 TiB
Ei       exbi (binary exa)    1 exbibyte (EiB) = 2^60 bytes = 1024 PiB

The JEDEC memory standards, however, define uppercase K, M, and G for the binary powers 2^10, 2^20, and 2^30, reflecting common usage.
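To see the decimal and binary conventions side by side, here is a sketch of a formatter (format_size is an illustrative name, not a library function) that renders a byte count with either IEC binary prefixes or SI decimal prefixes:

    def format_size(n_bytes, binary=True):
        # IEC binary prefixes (KiB, MiB, ...) or SI decimal prefixes (kB, MB, ...).
        step = 1024 if binary else 1000
        units = (["KiB", "MiB", "GiB", "TiB", "PiB", "EiB"] if binary
                 else ["kB", "MB", "GB", "TB", "PB", "EB"])
        value, unit = float(n_bytes), "B"
        for u in units:
            if value < step:
                break
            value /= step
            unit = u
        return f"{value:.2f} {unit}"

    n = 256 * 2**20  # the "256-megabyte" chip from the example above
    print(format_size(n, binary=True))   # 256.00 MiB
    print(format_size(n, binary=False))  # 268.44 MB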

Size examples

  • 1 bit: the answer to a yes/no question.
  • 1 byte: a number from 0 to 255.
  • 90 bytes: enough to store a typical line of text from a book.
  • 512 bytes = ½ KiB: the typical sector size of a hard disk.
  • 1024 bytes = 1 KiB: the classic block size in UNIX filesystems.
  • 2048 bytes = 2 KiB: a CD-ROM sector.
  • 4096 bytes = 4 KiB: a memory page in x86 (since the Intel 80386).
  • 4 kB: about one page of text from a novel.
  • 120 kB: the text of a typical pocket book.
  • 1 MB: a 1024×1024-pixel bitmap image with 256 colors (8 bpp color depth).
  • 3 MB: a three-minute song at a 128 kbit/s bitrate (see the arithmetic check after this list).
  • 650-900 MB: a CD-ROM.
  • 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s.
  • 15 GB: the free storage Google offers per account (as of 2014).
  • 8/16 GB: the capacity of a typical flash drive.
  • 4 TB: the capacity of a $300 hard disk (as of 2014).
  • 966 EB: one prediction of the annual volume of global Internet traffic in 2015.
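A couple of the entries above follow from simple arithmetic; a quick check (figures approximate, as in the list):

    # 1024 x 1024 bitmap at 8 bpp (1 byte per pixel):
    bitmap = 1024 * 1024 * 1
    print(bitmap / 2**20)      # 1.0 MiB (about 1 MB)

    # Three-minute song at a 128 kbit/s bitrate:
    song = 128_000 * 180 / 8   # bits per second * seconds / bits per byte
    print(song / 10**6)        # 2.88, i.e. about 3 MB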
