User:Ungläubiger/Probability


Probability is a mathematical concept which can be used to analyze the certainty of events which occur randomly.

Mathematical theory[edit]

In the mathematical theory of probability (stochastics), a probability is a number between 0% and 100% (i.e. between 0 and 1) assigned to an event. The theory describes how to calculate with probabilities.

The mathematical theory is independent of the meaning of these numbers, and multiple interpretations exist. It does not describe where probabilities come from in the first place; that, too, depends on the interpretation.

Interpretations[edit]

Frequentism[edit]

In frequentism, a probability expresses the relative frequency of an outcome when some process is repeated a large number of times. This interpretation allows probabilities to be measured or estimated whenever the process is repeatable, and it forms the basis of standard statistics.

For example, if 1000 coin tosses produce 507 heads, we can estimate the probability of heads to be 50.7%.
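The frequentist recipe can be sketched in a few lines of code. The following is only an illustration: the bias value of 0.507 and the function name are assumptions chosen to mirror the 507-in-1000 example above.

```python
import random

# Minimal sketch of frequentist estimation: repeat the toss many times
# and take the relative frequency of heads as the probability estimate.
# The 0.507 bias is an assumption mirroring the 507-in-1000 example.

def estimate_heads_probability(n_tosses: int, true_bias: float = 0.507) -> float:
    """Toss the coin n_tosses times and return the relative frequency of heads."""
    heads = sum(1 for _ in range(n_tosses) if random.random() < true_bias)
    return heads / n_tosses

if __name__ == "__main__":
    for n in (1_000, 100_000):
        print(n, "tosses ->", estimate_heads_probability(n))
```

As the number of tosses grows, the estimate tends to settle near the underlying frequency, which is exactly what the frequentist interpretation relies on.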

For a single event resulting from a process that is not repeatable, there is no meaningful probability in the frequentist sense.

Bayesian[edit]

According to the Bayesian interpretation (also called subjectivism), a probability expresses the subjective confidence that an event will occur or that a statement is true. This allows probabilities to be assigned even to single events, but makes the probability dependent on the knowledge of the subject.

For example, if one person tosses a coin and sees that the result is heads, the probability of heads is 100% for them. For another person who has not yet seen the result and assumes the coin is fair, the probability of heads is 50%.
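As a rough sketch of how such subjective probabilities change with knowledge, consider two hypotheses about the coin, "fair" and "two-headed", and update a prior belief after observing heads. The 1% prior used below is an arbitrary assumption for illustration, not anything from the example above.

```python
# Illustrative sketch of Bayesian updating; the 0.01 prior is an assumption.
# Hypotheses: the coin is fair (P(heads) = 0.5) or two-headed (P(heads) = 1.0).

def posterior_two_headed(prior_two_headed: float, observed_heads: int) -> float:
    """Posterior probability that the coin is two-headed after seeing
    observed_heads heads in a row, via Bayes' rule."""
    prior_fair = 1.0 - prior_two_headed
    likelihood_fair = 0.5 ** observed_heads   # fair coin: heads with probability 0.5
    likelihood_two = 1.0 ** observed_heads    # two-headed coin: always heads
    evidence = prior_two_headed * likelihood_two + prior_fair * likelihood_fair
    return prior_two_headed * likelihood_two / evidence

if __name__ == "__main__":
    for n in range(6):
        print(n, "heads in a row ->", round(posterior_two_headed(0.01, n), 4))
```

Two observers with different priors or different data will assign different probabilities to the same coin, which is the point of the subjectivist interpretation.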

Propensity[edit]

According to propensity theories, probabilities are objective properties of processes. For repeatable processes, propensities are normally required to agree with the frequentist interpretation, but they are also allowed for single events, without their meaning being specified in any more detail. Such theories are often motivated by the occurrence of probabilities in scientific models, especially quantum mechanics.

Probabilities in models[edit]

It is possible to formulate assumptions about the world such that the resulting model contains numbers which follow the mathematical rules of probability. If at least some of the modeled processes are repeatable, it can be checked whether these probabilities are correct in the frequentist sense.

Testing hypotheses[edit]

Frequentism uses the concept of statistical significance to test whether a hypothesis is tenable. A more detailed analysis is possible when a Bayesian interpretation is used.
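As a sketch of the frequentist side, the following code computes an exact two-sided binomial p-value for the 507-heads-in-1000-tosses example above, testing the null hypothesis of a fair coin; the helper names are only illustrative. The Bayesian example follows below.

```python
from math import comb

# Sketch of a frequentist significance test, reusing the 507/1000 coin-toss
# example: exact two-sided binomial test of the null hypothesis p = 0.5.

def binomial_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p_value(k: int, n: int, p: float = 0.5) -> float:
    observed = binomial_pmf(k, n, p)
    # Sum the probabilities of all outcomes at most as likely as the observed one.
    return sum(binomial_pmf(i, n, p) for i in range(n + 1)
               if binomial_pmf(i, n, p) <= observed)

if __name__ == "__main__":
    print(round(two_sided_p_value(507, 1000), 3))  # well above 0.05: no significant bias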

Suppose there is a rare disease which affects 1 in 1000 people, and we have a test for this disease which is 99% correct. How likely is it that a person is infected if their test is positive?

Assume 100,000 people are tested:

  • There are 99,900 healthy people and for 999 (1%) of them the test gets it wrong and gives a positive result.
  • There are 100 ill people and for 99 (99%) of them the test gets it right and gives a positive result.
  • Of the 1098 people with positive results only 99 (9%) are actually infected.

In this case there are two hypotheses (healthy and infected) with a priori probabilities (999 in 1000 and 1 in 1000), and the calculation gives the a posteriori probability after receiving a positive test result as new data. If we have additional information, such as risk factors, the a priori probabilities may well be different, which would lead to different a posteriori probabilities. The formula for this calculation is called Bayes' rule and is the origin of the name Bayesian probability.
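A minimal sketch of the same calculation written out with Bayes' rule (the numbers are exactly those of the example above; the variable names are only illustrative):

```python
# Bayes' rule applied to the disease-test example above:
# P(infected | positive) = P(positive | infected) * P(infected) / P(positive)

prior_infected = 1 / 1000          # a priori probability of being infected
sensitivity = 0.99                 # P(positive | infected), test is 99% correct
false_positive_rate = 0.01         # P(positive | healthy)

p_positive = (sensitivity * prior_infected
              + false_positive_rate * (1 - prior_infected))
posterior_infected = sensitivity * prior_infected / p_positive

print(round(posterior_infected, 3))   # ~0.09, i.e. roughly 9%, matching 99 of 1098
```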

Overfitting[edit]

If a model is tailored to explain the known data to such a degree that it is no longer a good predictor of future events, it is called overfitted.

Suppose a die is rolled 3 times and the results are 1, 2, and 3. The standard model of a fair die gives a probability of only about 0.5% (1 in 216) for this result. An alternative theory, that a fairy guides the die to always show the sequence 1 to 6, would assign a probability of 100% to it. The second hypothesis is most likely overfitted, and the only way to distinguish between the two hypotheses is additional rolls of the die.
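The fair-die probability is simply (1/6)^3 = 1/216 ≈ 0.46%. The short sketch below also illustrates how additional rolls could separate the two hypotheses; the "new rolls" are made up purely for illustration.

```python
# Sketch: the fair-die model gives (1/6)^3 ≈ 0.46% for the observed 1, 2, 3,
# while the "fairy" hypothesis (die always shows 1..6 in order) gives 100%.
# Only new rolls can separate them: under the fairy hypothesis the next
# three rolls would have to be 4, 5, 6.

p_fair = (1 / 6) ** 3
print(f"fair die: {p_fair:.3%}, fairy hypothesis: 100%")

new_rolls = [2, 6, 1]                 # hypothetical additional data
fairy_prediction = [4, 5, 6]
print("fairy hypothesis survives:", new_rolls == fairy_prediction)
```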

The important consequence is that only data which was unknown when the hypotheses were formulated can be used to test their validity.

Misuse by creationists[edit]

For those living in an alternate reality, Conservapedia has an "article" about Probability

Applying probabilities to single events[edit]

Creationists often talk about the probability of single events like "What is the probability of life in the universe?"

According to frequentism this question doesn't have a meaningful answer since the universe is not a repeatable process.

The Bayesian probability is 100% since we already know that life exists.

Since the existence of life in this universe has been known for a long time, it also cannot be used to validate hypotheses, as doing so would lead to overfitting.

The classic example of an overfitted theory is creationism: it "explains" the known events just as well as the fairy explains the die rolls in the example above, but it is contradicted by newly discovered data such as the age of the universe.

The theory of evolution, on the other hand, correctly predicted relationships between all life forms, which were later confirmed by genetics.

Strawman probabilities[edit]

Creation scientists are eager to quote probabilities of events occurring naturally. To calculate these probabilities they need to make assumptions about the processes which bring these events about. The assumptions they make are usually flawed, and hence the probabilities can only be used to refute their own strawman assumptions. Often they do not even succeed in knocking down the strawman, because they only look at old data and single events.

A common error is to use over-simplified statistical models that do not take all factors into account and groundlessly assume that all outcomes are equally probable; for example, assuming a 50-50 chance of heads when flipping a loaded coin, or assuming a 1-in-4 probability of drawing a card of a certain suit from a Svengali deck, or assuming that since there are four possible gender-pairings (man-man, man-woman, woman-man, and woman-woman), half of all people are gay.[1]
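A small sketch of this error: simulating a loaded coin (the 80% bias is an assumption chosen purely for illustration) shows how far off the naive equal-probability model can be.

```python
import random

# Sketch of the "all outcomes are equally probable" error: a coin that
# actually lands heads 80% of the time (assumed bias, for illustration)
# is modelled as if it were fair.

def simulate_loaded_coin(n: int, bias: float = 0.8) -> float:
    """Return the observed relative frequency of heads in n tosses."""
    return sum(random.random() < bias for _ in range(n)) / n

observed = simulate_loaded_coin(100_000)
assumed = 0.5                         # the naive equal-probability model
print(f"observed frequency of heads: {observed:.3f}, naive model: {assumed}")
```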

Misconceptions such as the balance fallacy occur when people treat such unequal outcomes as equally probable; as silly as it sounds when it is actually explained, this mistake is still quite common. The philosophy professor Norman Swartz links this oversimplifying analysis to Cartesian cogito-ergo-sum rationalism, which completely rejects empirical observation as a "way of knowing."[1]

See also[edit]

Footnotes[edit]

Category:Mathematics