The decimal numeral system (also called the base-ten positional numeral system, and occasionally called denary /ˈdiːnəri/[1] or decanary) is the standard system for denoting integer and non-integer numbers. It is the extension to non-integer numbers of the Hindu–Arabic numeral system.[2] The way of denoting numbers in the decimal system is often referred to as decimal notation.[3]
A decimal numeral (also often just decimal or, less correctly, decimal number) generally refers to the notation of a number in the decimal numeral system. Decimals may sometimes be identified by a decimal separator (usually "." or "," as in 25.9703 or 3,1415).[4][5] Decimal may also refer specifically to the digits after the decimal separator, such as in "3.14 is the approximation of π to two decimals".
The numbers that may be represented in the decimal system are the decimal fractions. That is, fractions of the form a/10^n, where a is an integer, and n is a non-negative integer.
The decimal system has been extended to infinite decimals for representing any real number, by using an infinite sequence of digits after the decimal separator (see decimal representation). In this context, the decimal numerals with a finite number of non-zero digits after the decimal separator are sometimes called terminating decimals. A repeating decimal is an infinite decimal that, after some place, repeats indefinitely the same sequence of digits (e.g., 5.123144144144144..., often abbreviated as 5.123144 with a bar over the repeating group 144).[6] An infinite decimal represents a rational number, the quotient of two integers, if and only if it is a repeating decimal or has a finite number of non-zero digits.
Many numeral systems of ancient civilizations use ten and its powers for representing numbers, probably because there are ten fingers on two hands and people started counting by using their fingers. Examples are firstly the Egyptian numerals, then the Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Very large numbers were difficult to represent in these old numeral systems, and only the best mathematicians were able to multiply or divide large numbers. These difficulties were completely solved with the introduction of the Hindu–Arabic numeral system for representing integers. This system has been extended to represent some non-integer numbers, called decimal fractions or decimal numbers, forming the decimal numeral system.
For writing numbers, the decimal system uses ten decimal digits, a decimal mark, and, for negative numbers, a minus sign "−". The decimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9;[7] the decimal separator is the dot "." in many countries,[4][8] but also a comma "," in other countries.[5]
For representing a non-negative number, a decimal numeral consists of either a (finite) sequence of digits (such as "2017"), where the entire sequence represents an integer, a_m a_(m−1) ... a_0, or a decimal mark separating two sequences of digits (such as "20.70828"), a_m a_(m−1) ... a_0 . b_1 b_2 ... b_n.
If m > 0, that is, if the first sequence contains at least two digits, it is generally assumed that the first digit a_m is not zero. In some circumstances it may be useful to have one or more 0's on the left; this does not change the value represented by the decimal: for example, 3.14 = 03.14 = 003.14. Similarly, if the final digit on the right of the decimal mark is zero (that is, if b_n = 0), it may be removed; conversely, trailing zeros may be added after the decimal mark without changing the represented number;[note 1] for example, 15 = 15.0 = 15.00 and 5.2 = 5.20 = 5.200.
For representing a negative number, a minus sign is placed before a_m.
The numeral represents the number
a_m 10^m + a_(m−1) 10^(m−1) + ... + a_1 10 + a_0 + b_1/10 + b_2/10^2 + ... + b_n/10^n.
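For readers who want to check this positional rule computationally, here is a minimal Python sketch (the function name decimal_value and its digit-list arguments are choices made for this illustration, not notation from the article); it uses the standard fractions module so every term stays exact:

```python
from fractions import Fraction

def decimal_value(integer_digits, fractional_digits):
    """Value of the numeral a_m ... a_0 . b_1 ... b_n as a sum of digits
    weighted by powers of ten, computed exactly with Fractions."""
    value = Fraction(0)
    for i, a in enumerate(reversed(integer_digits)):    # a_0 carries weight 10^0
        value += a * Fraction(10) ** i
    for j, b in enumerate(fractional_digits, start=1):  # b_1 carries weight 10^-1
        value += b * Fraction(1, 10) ** j
    return value

print(decimal_value([2, 0], [7, 0, 8, 2, 8]))   # Fraction(517707, 25000) == 20.70828
```

Running it on the digits of 20.70828 recovers the decimal fraction 2070828/100000 in reduced form.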
The integer part or integral part of a decimal numeral is the integer written to the left of the decimal separator (see also truncation). For a non-negative decimal numeral, it is the largest integer that is not greater than the decimal. The part from the decimal separator to the right is the fractional part, which equals the difference between the numeral and its integer part.
When the integral part of a numeral is zero, it may occur, typically in computing, that the integer part is not written (for example .1234, instead of 0.1234). In normal writing, this is generally avoided, because of the risk of confusion between the decimal mark and other punctuation.
In brief, the contribution of each digit to the value of a number depends on its position in the numeral. That is, the decimal system is a positional numeral system.
Decimal fractions (sometimes called decimal numbers, especially in contexts involving explicit fractions) are the rational numbers that may be expressed as a fraction whose denominator is a power of ten.[9] For example, the decimals 0.8, 14.89, 0.00024, 1.618 and 3.14159 represent the fractions 8/10, 1489/100, 24/100000, 1618/1000 and 314159/100000, and are therefore decimal numbers.
More generally, a decimal with n digits after the separator represents the fraction with denominator 10^n, whose numerator is the integer obtained by removing the separator.
It follows that a number is a decimal fraction if and only if it has a finite decimal representation.
Expressed as a fully reduced fraction, the decimal numbers are those whose denominator is a product of a power of 2 and a power of 5. Thus the smallest denominators of decimal numbers are 1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, ...
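A short Python check of this characterization (the helper name is_decimal_fraction is invented for this sketch) strips factors of 2 and 5 from the reduced denominator and tests whether anything is left:

```python
from fractions import Fraction

def is_decimal_fraction(q: Fraction) -> bool:
    """True iff q has a finite decimal representation, i.e. its fully
    reduced denominator is a product of a power of 2 and a power of 5."""
    d = q.denominator              # Fraction always stores the reduced form
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(is_decimal_fraction(Fraction(3, 40)))   # True:  3/40 = 0.075
print(is_decimal_fraction(Fraction(1, 3)))    # False: 1/3  = 0.333...
```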
Decimal numerals do not allow an exact representation for all real numbers, e.g. for the real number π. Nevertheless, they allow approximating every real number with any desired accuracy, e.g., the decimal 3.14159 approximates the real π, being less than 10^−5 off; so decimals are widely used in science, engineering and everyday life.
More precisely, for every real number x and every positive integer n, there are two decimals L and u with at most n digits after the decimal mark such that L ≤ x ≤ u and (u − L) = 10^−n.
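This bracketing can be made concrete with Python's standard decimal module; the sketch below (the function name bracket is chosen here for illustration) truncates x downwards to n digits and adds 10^−n to obtain the upper decimal:

```python
from decimal import Decimal, ROUND_FLOOR

def bracket(x: Decimal, n: int):
    """Return decimals L and u with n digits after the mark such that
    L <= x <= u and u - L = 10**-n."""
    quantum = Decimal(1).scaleb(-n)                      # exactly 10**-n
    lower = x.quantize(quantum, rounding=ROUND_FLOOR)    # truncate towards -infinity
    return lower, lower + quantum

print(bracket(Decimal("3.14159265358979"), 5))   # (Decimal('3.14159'), Decimal('3.14160'))
```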
Numbers are very often obtained as the result of measurement. As measurements are subject to measurement uncertainty with a known upper bound, the result of a measurement is well-represented by a decimal with n digits after the decimal mark, as soon as the absolute measurement error is bounded from above by 10^−n. In practice, measurement results are often given with a certain number of digits after the decimal point, which indicate the error bounds. For example, although 0.080 and 0.08 denote the same number, the decimal numeral 0.080 suggests a measurement with an error less than 0.001, while the numeral 0.08 indicates an absolute error bounded by 0.01. In both cases, the true value of the measured quantity could be, for example, 0.0803 or 0.0796 (see also significant figures).
For a real number x and an integer n ≥ 0, let [x]_n denote the (finite) decimal expansion of the greatest number that is not greater than x that has exactly n digits after the decimal mark. Let d_i denote the last digit of [x]_i. It is straightforward to see that [x]_n may be obtained by appending d_n to the right of [x]_(n−1). This way one has
[x]_n = [x]_0.d_1d_2...d_(n−1)d_n,
and the difference of [x]_(n−1) and [x]_n amounts to
|[x]_n − [x]_(n−1)| = d_n · 10^−n ≤ 9 · 10^−n,
which is either 0, if d_n = 0, or gets arbitrarily small as n tends to infinity. According to the definition of a limit, x is the limit of [x]_n when n tends to infinity. This is written as
x = lim_(n→∞) [x]_n or x = [x]_0.d_1d_2...d_n...,
which is called an infinite decimal expansion of x.
Conversely, for any integer [x]_0 and any sequence of digits d_1, d_2, ..., d_n, ..., the (infinite) expression [x]_0.d_1d_2...d_n... is an infinite decimal expansion of a real number x. This expansion is unique if neither all d_n are equal to 9 nor all d_n are equal to 0 for n large enough (for all n greater than some natural number N).
If all d_n for n > N are equal to 9 and [x]_n = [x]_0.d_1d_2...d_n, the limit of the sequence is the decimal fraction obtained by replacing the last digit that is not a 9, i.e. d_N, by d_N + 1, and replacing all subsequent 9s by 0s (see 0.999...).
Any such decimal fraction, i.e. one with d_n = 0 for n > N, may be converted to its equivalent infinite decimal expansion by replacing d_N by d_N − 1 and replacing all subsequent 0s by 9s (see 0.999...).
In summary, every real number that is not a decimal fraction has a unique infinite decimal expansion. Each decimal fraction has exactly two infinite decimal expansions, one containing only 0s after some place, which is obtained by the above definition of [x]_n, and the other containing only 9s after some place, which is obtained by defining [x]_n as the greatest number that is less than x, having exactly n digits after the decimal mark.
Long division allows computing the infinite decimal expansion of a rational number. If the rational number is a decimal fraction, the division stops eventually, producing a decimal numeral, which may be prolonged into an infinite expansion by adding infinitely many zeros. If the rational number is not a decimal fraction, the division may continue indefinitely. However, as all successive remainders are less than the divisor, there are only a finite number of possible remainders, and after some place, the same sequence of digits must be repeated indefinitely in the quotient. That is, one has a repeating decimal. For example, 1/3 = 0.333... and 1/7 = 0.142857142857..., with the groups 3 and 142857 repeating indefinitely.
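The remainder argument above translates directly into a short long-division routine; the following Python sketch (the function name decimal_expansion is chosen for this example) produces the digits after the decimal mark and reports where the repeating block starts, or that the expansion terminates:

```python
def decimal_expansion(numerator: int, denominator: int):
    """Long division of numerator/denominator with 0 <= numerator < denominator.
    Returns (digits, repeat_start); repeat_start is None for a terminating decimal."""
    digits, seen, remainder = [], {}, numerator
    while remainder != 0:
        if remainder in seen:           # a repeated remainder => repeating digits
            return "".join(digits), seen[remainder]
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    return "".join(digits), None        # the division stopped: a decimal fraction

print(decimal_expansion(1, 7))   # ('142857', 0)  -> 0.142857142857...
print(decimal_expansion(3, 8))   # ('375', None)  -> 0.375
```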
The converse is also true: if, at some point in the decimal representation of a number, the same string of digits starts repeating indefinitely, the number is rational.
For example, if x is 0.4156156156...
then 10,000x is 4156.156156156...
and 10x is 4.156156156...
so 10,000x − 10x, i.e. 9,990x, is 4152.000000000...
and x is 4152/9990
or, dividing both numerator and denominator by 6, 692/1665.
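The same shift-and-subtract trick used in this example can be written as a small Python function (repeating_to_fraction is a name invented here); it takes the non-repeating part and the repetend and returns the exact fraction, reduced automatically by the standard Fraction type:

```python
from fractions import Fraction

def repeating_to_fraction(non_repeating: str, repetend: str) -> Fraction:
    """Exact value of a repeating decimal, e.g. ("0.4", "156") for 0.4156156..."""
    int_part, _, prefix = non_repeating.partition(".")
    p, r = len(prefix), len(repetend)
    big = int(int_part + prefix + repetend)   # digits shifted past one full repetend
    small = int(int_part + prefix)            # digits shifted past the prefix only
    # Subtracting cancels the repeating tail, as in 10,000x - 10x above.
    return Fraction(big - small, 10 ** (p + r) - 10 ** p)

print(repeating_to_fraction("0.4", "156"))    # 692/1665, matching the example
```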
Most modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally).[10] For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems.
For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.)
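For instance, in Python (used here only as a convenient illustration), the literal is written in decimal but stored as the nearest binary floating-point value, which leads to the familiar small discrepancies:

```python
from decimal import Decimal

print(Decimal(123.1))      # prints the exact binary value actually stored,
                           # which is close to, but not exactly, 123.1
print(0.1 + 0.2 == 0.3)    # False: none of these decimals is exact in binary
```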
Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal,[11][12] especially in database implementations, but there are other decimal representations in use (including decimal floating point such as in newer revisions of the IEEE 754 Standard for Floating-Point Arithmetic).[13]
Decimal arithmetic is used in computers so that decimal fractional results of adding (or subtracting) values with a fixed length of their fractional part are always computed to this same length of precision. This is especially important for financial calculations, e.g., requiring in their results integer multiples of the smallest currency unit for bookkeeping purposes. This is not possible in binary, because the negative powers of 10 have no finite binary fractional representation; exact results of fixed fractional length are also generally impossible for multiplication (or division).[14][15] See Arbitrary-precision arithmetic for exact calculations.
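As a small illustration of why this matters for currency, Python's standard decimal module (a decimal, rather than binary, arithmetic implementation) keeps sums of cent amounts exact, whereas binary floats drift:

```python
from decimal import Decimal

# Summing ten cents three times: binary floats accumulate rounding error,
# while decimal arithmetic stays exact to the cent.
print(sum([0.10] * 3))               # 0.30000000000000004
print(sum([Decimal("0.10")] * 3))    # 0.30
```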
Many ancient cultures calculated with numerals based on ten, sometimes argued due to human hands typically having ten fingers/digits.[16] Standardized weights used in the Indus Valley Civilization (c. 3300–1300 BCE) were based on the ratios: 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, while their standardized ruler – the Mohenjo-daro ruler – was divided into ten equal parts.[17][18][19] Egyptian hieroglyphs, in evidence since around 3000 BCE, used a purely decimal system,[20] as did the Cretan hieroglyphs (c. 1625−1500 BCE) of the Minoans, whose numerals are closely based on the Egyptian model.[21][22] The decimal system was handed down to the consecutive Bronze Age cultures of Greece, including Linear A (c. 18th century BCE−1450 BCE) and Linear B (c. 1375−1200 BCE); the number system of classical Greece also used powers of ten, including, like the Roman numerals, an intermediate base of 5.[23] Notably, the polymath Archimedes (c. 287–212 BCE) invented a decimal positional system in his Sand Reckoner which was based on 10^8,[23] and later led the German mathematician Carl Friedrich Gauss to lament what heights science would have already reached in his days if Archimedes had fully realized the potential of his ingenious discovery.[24] Hittite hieroglyphs (since 15th century BCE) were also strictly decimal.[25]
Some non-mathematical ancient texts such as the Vedas, dating back to 1700–900 BCE, make use of decimals and mathematical decimal fractions.[26]
The Egyptian hieratic numerals, the Greek alphabet numerals, the Hebrew alphabet numerals, the Roman numerals, the Chinese numerals and early Indian Brahmi numerals are all non-positional decimal systems, and required large numbers of symbols. For instance, Egyptian numerals used different symbols for 10, 20 to 90, 100, 200 to 900, 1000, 2000, 3000, 4000, to 10,000.[27] The world's earliest positional decimal system was the Chinese rod calculus.[28]
Decimal fractions were first developed and used by the Chinese at the end of the 4th century BCE,[29] and then spread to the Middle East and from there to Europe.[28][30] The written Chinese decimal fractions were non-positional.[30] However, counting rod fractions were positional.[28]
Qin Jiushao in his book Mathematical Treatise in Nine Sections (1247[31]) denoted 0.96644 with counting-rod numerals.
J. Lennart Berggren notes that positional decimal fractions appear for the first time in a book by the Arab mathematician Abu'l-Hasan al-Uqlidisi written in the 10th century.[32] The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, anticipating Simon Stevin, but did not develop any notation to represent them.[33] The Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century.[32] Al Khwarizmi introduced fractions to Islamic countries in the early 9th century; a Chinese author has alleged that his fraction presentation was an exact copy of the traditional Chinese mathematical fraction from Sunzi Suanjing.[28] This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar, was also used by al-Uqlidisi and by al-Kāshī in his work "Arithmetic Key".[28][34]
A forerunner of modern European decimal notation was introduced by Simon Stevin in the 16th century.[35]
A method of expressing every possible natural number using a set of ten symbols emerged in India. Several Indian languages show a straightforward decimal system. Many Indo-Aryan and Dravidian languages have numbers between 10 and 20 expressed in a regular pattern of addition to 10.[36]
The Hungarian language also uses a straightforward decimal system. All numbers between 10 and 20 are formed regularly (e.g. 11 is expressed as "tizenegy" literally "one on ten"), as with those between 20 and 100 (23 as "huszonhárom" = "three on twenty").
A straightforward decimal rank system with a word for each order (10 十, 100 百, 1000 千, 10,000 万), and in which 11 is expressed as ten-one and 23 as two-ten-three, and 89,345 is expressed as 8 (ten thousands) 万 9 (thousand) 千 3 (hundred) 百 4 (tens) 十 5 is found in Chinese, and in Vietnamese with a few irregularities. Japanese, Korean, and Thai have imported the Chinese decimal system. Many other languages with a decimal system have special words for the numbers between 10 and 20, and decades. For example, in English 11 is "eleven" not "ten-one" or "one-teen".
Incan languages such as Quechua and Aymara have an almost straightforward decimal system, in which 11 is expressed as ten with one and 23 as two-ten with three.
Some psychologists suggest irregularities of the English names of numerals may hinder children's counting ability.[37]
Some cultures do, or did, use other bases of numbers.
Some of the Germanic languages appear to show traces of an ancient blending of the decimal with the vigesimal system.