Probability 


Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty.^{[note 1]}^{[1]}^{[2]} The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.^{[3]}
Experiment: An operation which can produce some well-defined outcomes is called an experiment.
Example: When we toss a coin, we know that either heads or tails shows up. So, the operation of tossing a coin may be said to have two well-defined outcomes, namely, (a) heads showing up; and (b) tails showing up.
Random Experiment: When we roll a die, we are well aware that any of the numbers 1, 2, 3, 4, 5, or 6 may appear on the upper face, but we cannot say which exact number will show up.
Such an experiment, in which all possible outcomes are known but the exact outcome cannot be predicted in advance, is called a random experiment.
Sample Space: All the possible outcomes of an experiment, taken as a whole, form the sample space.
Example: When we roll a die we can get any outcome from 1 to 6. All the possible numbers which can appear on the upper face form the sample space (denoted by S). Hence, the sample space of a die roll is S = {1, 2, 3, 4, 5, 6}.
Outcome: Any possible result from the sample space S of a random experiment is called an outcome.
Example: When we roll a die, we might obtain 3 or when we toss a coin, we might obtain heads.
Event: Any subset of the sample space S is called an event (denoted by E). When an outcome which belongs to the subset E takes place, the event is said to have occurred; when an outcome which does not belong to E takes place, the event has not occurred.
Example: Consider the experiment of throwing a die. Here the sample space is S = {1, 2, 3, 4, 5, 6}. Let E denote the event 'a number less than 4 appears.' Thus the event E = {1, 2, 3}. If the number 1 appears, we say that event E has occurred. Similarly, if the outcome is 2 or 3, we can say event E has occurred, since these outcomes belong to the subset E.
Trial: By a trial, we mean performing a random experiment.
Example: (i) Tossing a fair coin, (ii) rolling an unbiased die^{[4]}
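The definitions above can be illustrated with a short computational sketch. The following Python snippet is only an illustration (the names sample_space, event and trial are chosen here for clarity and are not part of the formal theory); it models a die roll as a random experiment, an event as a subset of the sample space, and a trial as one performance of the experiment:

```python
import random

# Sample space S of a die roll: all possible outcomes.
sample_space = {1, 2, 3, 4, 5, 6}

# Event E: "a number less than 4 appears" -- a subset of the sample space.
event = {1, 2, 3}

def trial():
    """Perform the random experiment once and return an outcome."""
    return random.choice(sorted(sample_space))

outcome = trial()
print("Outcome:", outcome)
print("Event E occurred:", outcome in event)
```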
When dealing with experiments that are random and well-defined in a purely theoretical setting (like tossing a fair coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. For example, tossing a fair coin twice will yield the outcomes "head-head", "head-tail", "tail-head", and "tail-tail". The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25% (as enumerated in the sketch below). However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability: objective (frequentist) interpretations, which treat probabilities as long-run relative frequencies of outcomes, and subjective (Bayesian) interpretations, which treat probabilities as degrees of belief.
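The counting rule for the coin example can be checked by brute-force enumeration. The following Python sketch (an illustrative example; the names outcomes and desired are assumptions of this sketch) lists the four equally likely results of two fair coin tosses and divides the desired count by the total:

```python
from fractions import Fraction
from itertools import product

# All equally likely outcomes of tossing a fair coin twice.
outcomes = list(product(["head", "tail"], repeat=2))   # 4 outcomes in total

# Desired outcome: "head-head".
desired = [o for o in outcomes if o == ("head", "head")]

print(Fraction(len(desired), len(outcomes)))  # 1/4
```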
The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.^{[10]}
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues^{[note 2]} are still obscured by the superstitions of gamblers.^{[11]}
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances."^{[12]} However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.^{[13]}
Early forms of statistical inference were developed by Middle Eastern mathematicians studying cryptography between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Al-Kindi (801–873) made the earliest known use of statistical inference in his work on cryptanalysis and frequency analysis. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.^{[14]}
The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes^{[15]}). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject.^{[16]} Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics.^{[17]} See Ian Hacking's The Emergence of Probability^{[10]} and James Franklin's The Science of Conjecture^{[18]} for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation.^{[19]} The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.
The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error—disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error.^{[20]} The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his wellknown precocity had probably not made this discovery before he was two years old."^{[20]}
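In modern notation, and up to the choice of normalization, these two laws correspond to what are now called the Laplace (double-exponential) and normal (Gaussian) densities. The precision-type parameters m and h below are introduced only for illustration and do not appear in the source text:

```latex
% First law of error (Laplace, 1774): density decays exponentially in |x|.
\varphi_1(x) = \frac{m}{2}\, e^{-m\lvert x\rvert}
% Second law of error (Laplace, 1778): density decays exponentially in x^2.
\varphi_2(x) = \frac{h}{\sqrt{\pi}}\, e^{-h^2 x^2}
```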
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets).^{[21]} In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,
$$\phi(x) = c\, e^{-h^2 x^2},$$
where $h$ is a constant depending on precision of observation, and $c$ is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850).^{[citation needed]} Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula^{[clarification needed]} for r, the probable error of a single observation, is well known.
In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.
In 1906, Andrey Markov introduced^{[22]} the notion of Markov chains, which played an important role in the theory of stochastic processes and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1931.^{[23]}
On the geometric side, contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin).^{[24]} See integral geometry for more information.
Like other theories, the theory of probability is a representation of its concepts in formal terms—that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually understood laws of probability.
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.
An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.^{[25]}
In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.^{[26]}
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty.^{[27]}
The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as $\Omega$.^{[28]} The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of die rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.
A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.^{[29]}
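These requirements are the standard Kolmogorov axioms, stated compactly below; here $\Omega$ denotes the sample space and $A_1, A_2, \dots$ a countable collection of mutually exclusive events (the notation is standard, though not taken verbatim from the text above):

```latex
% Kolmogorov's axioms for a probability measure P on a sample space Omega.
0 \le P(A) \le 1 \quad \text{for every event } A
P(\Omega) = 1
P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{whenever the } A_i \text{ are pairwise mutually exclusive}
```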
The probability of an event A is written as $P(A)$,^{[28]}^{[30]} $p(A)$, or $\Pr(A)$.^{[31]} This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.
The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as $A'$,^{[28]} $A^{c}$, or $\overline{A}$; its probability is given by $P(\text{not } A) = 1 - P(A)$.^{[32]} As an example, the chance of not rolling a six on a six-sided die is 1 − (chance of rolling a six) $= 1 - \tfrac{1}{6} = \tfrac{5}{6}$. For a more comprehensive treatment, see Complementary event.
If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as $P(A \cap B)$.^{[28]}
If two events, A and B, are independent, then the joint probability is^{[30]} $P(A \text{ and } B) = P(A \cap B) = P(A)\,P(B).$
For example, if two coins are flipped, then the chance of both being heads is $\tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}$.^{[33]}
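This value can also be checked empirically. The following is a minimal Monte Carlo sketch, assuming fair coins and using Python's standard random module; the function name estimate_both_heads and the number of trials are illustrative choices, not part of the source:

```python
import random

def estimate_both_heads(trials=100_000):
    """Estimate the probability that two fair coin flips both come up heads."""
    hits = 0
    for _ in range(trials):
        coin1 = random.choice(["heads", "tails"])
        coin2 = random.choice(["heads", "tails"])
        if coin1 == "heads" and coin2 == "heads":
            hits += 1
    return hits / trials

print(estimate_both_heads())  # close to 0.25
```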
If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.
If two events are mutually exclusive, then the probability of both occurring is denoted as $P(A \cap B)$ and $P(A \text{ and } B) = P(A \cap B) = 0.$
If two events are mutually exclusive, then the probability of either occurring is denoted as $P(A \cup B)$ and $P(A \text{ or } B) = P(A \cup B) = P(A) + P(B) - P(A \cap B) = P(A) + P(B) - 0 = P(A) + P(B).$
For example, the chance of rolling a 1 or 2 on a six-sided die is $P(1 \text{ or } 2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}.$
If the events are not mutually exclusive then $P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B).$
For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J, Q, K) (or one that is both) is $\tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}$, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
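The same count can be verified by exhaustively enumerating a 52-card deck, as in the following Python sketch; the representation of cards as rank-suit pairs and the variable names are illustrative assumptions of this example:

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))                    # 52 cards

hearts = {c for c in deck if c[1] == "hearts"}        # 13 cards
faces = {c for c in deck if c[0] in {"J", "Q", "K"}}  # 12 cards

# Set union counts the 3 heart face cards only once
# (inclusion-exclusion: 13 + 12 - 3 = 22 favourable cards).
print(Fraction(len(hearts | faces), len(deck)))       # 11/26
```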
Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written $P(A \mid B)$,^{[28]} and is read "the probability of A, given B". It is defined by^{[34]} $P(A \mid B) = \dfrac{P(A \cap B)}{P(B)}.$
If $P(B) = 0$ then $P(A \mid B)$ is formally undefined by this expression. In this case A and B are independent, since $P(A \cap B) = P(A)\,P(B) = 0$. However, it is possible to define a conditional probability for some zero-probability events using a σ-algebra of such events (such as those arising from a continuous random variable).^{[citation needed]}
For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is $\tfrac{1}{2}$; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be $\tfrac{1}{3}$, since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be $\tfrac{2}{3}$.
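The dependence of the second draw on the first can be made explicit by enumerating all ordered draws without replacement, as in this Python sketch; the list bag and the helper prob_second_red are illustrative names introduced only for this example:

```python
from fractions import Fraction
from itertools import permutations

bag = ["red", "red", "blue", "blue"]

# All equally likely ordered draws of two balls without replacement.
draws = list(permutations(bag, 2))

def prob_second_red(first_colour):
    """P(second ball is red | first ball drawn was first_colour)."""
    given = [d for d in draws if d[0] == first_colour]
    return Fraction(sum(d[1] == "red" for d in given), len(given))

print(prob_second_red("red"))   # 1/3
print(prob_second_red("blue"))  # 2/3
```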
In probability theory and applications, Bayes' rule relates the odds of event $A_1$ to event $A_2$, before (prior to) and after (posterior to) conditioning on another event $B$. The odds on $A_1$ to event $A_2$ is simply the ratio of the probabilities of the two events. When arbitrarily many events are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as $A$ varies, for fixed or given $B$ (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005). See Inverse probability and Bayes' rule.
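In symbols, the odds form of Bayes' rule can be written as follows; this is the standard statement, with $O(A_1{:}A_2)$ denoting the prior odds and $\Lambda$ the likelihood ratio (notation chosen here for illustration):

```latex
% Posterior odds = prior odds x likelihood ratio.
O(A_1 : A_2 \mid B) \;=\; O(A_1 : A_2)\cdot \Lambda,
\qquad \text{where } O(A_1 : A_2) = \frac{P(A_1)}{P(A_2)}
\text{ and } \Lambda = \frac{P(B \mid A_1)}{P(B \mid A_2)}.
```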
Event  Probability
A  $P(A) \in [0, 1]$
not A  $P(\text{not } A) = 1 - P(A)$
A or B  $P(A \cup B) = P(A) + P(B) - P(A \cap B)$; equals $P(A) + P(B)$ if A and B are mutually exclusive
A and B  $P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)$; equals $P(A)\,P(B)$ if A and B are independent
A given B  $P(A \mid B) = P(A \cap B)/P(B) = P(B \mid A)\,P(A)/P(B)$
In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), though there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. to know them. In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant, 6.02×10^{23}) that only a statistical description of its properties is feasible.
Probability theory is required to describe quantum phenomena.^{[35]} A revolutionary discovery of early 20th-century physics was the random character of all physical processes that occur at subatomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it yields only the probabilities of observing an outcome, with the particular outcome explained by wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice".^{[36]} Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality.^{[37]} In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.