**The Following is an Excerpt from this Book**


Randomness is a concept used in the social sciences and mathematics to describe chance factors that occur in such a way that the individual occurrences in a sequence of events or outcomes show no connection in frequency to one another. Events are independent of each other: the occurrence of one event is not systematically related to the occurrence of any other event in the sequence. Randomness, as an attribute of a series of events, is assumed to be the product of multiple minor causes, each generating a small effect, so that no particular event in the series is systematically predictable.

The combined effect of these many minor factors, some of which cancel one another, is that each case is independent of every other.

It is worth noting that causation in the world of events and interactions is not in dispute. Rather, the causal backdrop to a series of random events is defined as a multiplicity of small effects, some almost infinitesimal in size, some reinforcing and others offsetting one another. The result is a sequence of events that occur independently of one another. In everyday language, the word chance refers to this state in which events are produced independently of each other.

**Randomness and its distribution**

Of particular interest to social scientists is the quantifiable observation that, when a sequence of events occurs in large numbers, certain random distributions follow fairly predictable patterns. Specific results or instances cannot be predicted with absolute certainty, but a random pattern can be shown to follow a broad configuration, such that higher probabilities or likelihoods of occurrence can be attributed to larger areas in the distribution of random events. Many known empirical distributions of random events take on a bilaterally symmetrical shape, generating the bell-shaped curve known as the normal curve.
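The idea that many small, independent causes produce a normal curve can be illustrated with a short simulation. In this sketch (hypothetical data, Python standard library only), each observation is the sum of many tiny chance effects; the resulting distribution is bell-shaped, with roughly 68% of values within one standard deviation of the mean.

```python
import random
import statistics

random.seed(42)

# Each observation is the sum of many small, independent chance effects
# (the "multiple minor causes" described above).
observations = [
    sum(random.uniform(-1, 1) for _ in range(100))  # 100 tiny effects
    for _ in range(10_000)
]

mean = statistics.mean(observations)
stdev = statistics.stdev(observations)

# For a bell-shaped (normal) curve, about 68% of values fall
# within one standard deviation of the mean.
within_one_sd = sum(abs(x - mean) <= stdev for x in observations) / len(observations)
print(f"mean ≈ {mean:.2f}, share within 1 SD ≈ {within_one_sd:.2%}")
```

No single observation is predictable, yet the overall shape of the distribution is.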

Another well-known probability distribution is the binomial distribution, in which only two outcomes are possible in each of a series of random events, such as flipping a fair coin a large, or even infinite, number of times. The binomial distribution gives the probability curve for a long sequence of discrete, independent trials with two possible outcomes. Yet another known distribution of random events is the chi-square distribution, in which the χ² statistic is computed on a number of randomly drawn samples, provided the samples themselves are drawn from a wider universe of randomly distributed frequencies for a variety of observations.
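The coin-flipping case can be made concrete. The sketch below (standard-library Python, illustrative values only) computes the exact binomial probability of a given number of heads and compares it with a large simulation of independent flips:

```python
import math
import random

n, p = 20, 0.5  # 20 flips of a fair coin

def binom_pmf(k: int, n: int, p: float) -> float:
    """Exact binomial probability of exactly k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Simulate many sequences of n independent flips.
random.seed(0)
trials = 100_000
heads_counts = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

k = 10  # the most likely outcome for n=20, p=0.5
exact = binom_pmf(k, n, p)
simulated = heads_counts.count(k) / trials
print(f"P(exactly {k} heads): exact {exact:.4f}, simulated {simulated:.4f}")
```

The simulated frequency converges on the exact binomial probability as the number of trials grows, which is the "fairly predictable pattern" the text describes.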

**Randomness and statistics**

The area of statistics as applied in social science research can be split into two broad branches: descriptive statistics and inferential statistics. The object of the first branch, descriptive statistics, is the summary of data. Measures of central tendency, such as the mean, median, and mode, are the usual summaries; data variation, or dispersion, may be summarized with measures of variance and standard deviation. When two or more variables are compared, a researcher can look for association using measures of correlation. The second broad statistical category is inferential statistics. This is the category that makes use of randomness: defined random distributions are used to analyze empirical distributions for departures from randomness. Such departures can be measured mathematically, which is of great use in the evaluation of causation. The quest for causation begins with confirmation of a correlation or association among the observations, and departures from randomness are of great importance as the initial indication of such an association or correlation. This second broad field of inferential statistics is itself divided into two large segments.
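The descriptive measures just listed can all be computed in a few lines. This sketch uses hypothetical data (years of schooling and income for ten people, invented for illustration) and computes Pearson's r directly from its definition:

```python
import statistics

# Hypothetical data: years of schooling and income (in $1,000s) for ten people.
schooling = [8, 10, 12, 12, 14, 16, 16, 16, 18, 20]
income = [22, 25, 30, 28, 40, 55, 48, 52, 70, 85]

# Central tendency
print("mean income:", statistics.mean(income))
print("median income:", statistics.median(income))
print("modal schooling:", statistics.mode(schooling))

# Dispersion
print("variance:", statistics.variance(income))
print("std dev:", statistics.stdev(income))

def pearson_r(xs, ys):
    """Association between two variables, computed from the definition of r."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(schooling, income)
print(f"correlation r = {r:.3f}")
```

A strong positive r in data like these would be the kind of association that inferential statistics then tests for departure from randomness.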

The first segment, estimation, is attempted when a smaller sample has been drawn from a larger population or universe. An approximate value computed from a sample is called a statistic; a mean measured for a sample is a statistical assertion. If the mean for an entire population were determined, it would be regarded as a parameter. Scientists often need to estimate parameters from sample statistics because of time limitations, shortage of manpower, and above all the cost of studying the total population. Randomness is an important criterion for estimating parameters because, if an inference is to be stated as a probability for the parameter value, the sample must be drawn through a random process. This requirement arises because the parameter estimate, such as a population mean, relies on knowledge of the pattern formed by a sequence of randomly sampled means, that is, a sampling distribution of means. This distribution approximates a normal curve, whose probabilities for regions under the curve are known.
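The distinction between parameter and statistic, and the shape of the sampling distribution of means, can be shown in a small simulation. In this sketch (an invented, deliberately skewed population; Python standard library only), many random samples are drawn and the mean of each is recorded:

```python
import random
import statistics

random.seed(1)

# A skewed hypothetical "population": e.g. household incomes in $1,000s.
population = [random.expovariate(1 / 50) for _ in range(100_000)]
pop_mean = statistics.mean(population)  # the parameter

# Draw many random samples; each sample mean is a statistic.
sample_means = [
    statistics.mean(random.sample(population, 100)) for _ in range(2_000)
]

# The sampling distribution of the mean clusters around the parameter and is
# approximately normal even though the population itself is skewed.
center = statistics.mean(sample_means)
spread = statistics.stdev(sample_means)  # the standard error
print(f"parameter {pop_mean:.1f}, mean of sample means {center:.1f}, SE ≈ {spread:.1f}")
```

Because the sampling distribution approximates a normal curve, the known probabilities for areas under that curve let the researcher attach a margin of error to any single sample mean.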

Random sampling satisfies the randomness assumptions behind the pattern of means in the sampling distribution, and thus allows a researcher to quantify the probability of error when a sample mean is used to estimate the population parameter. The second segment of inferential statistics comprises the methods used for examining hypotheses. Hypothesis testing searches for possible explanations of relationships within a population or wider universe, and here too, if precise statements of probability are to be made about the hypotheses being checked, random samples must be used. Researchers differ in their ways of working, but a standard model uses randomness as follows: the researcher searches for hypothesized correlated variables, that is, for non-randomness, by framing research hypotheses of correlation between categories of empirical observations or variables, and then checks for such correlations with statistical techniques such as difference-of-means tests, chi-square tests, or tests of significance for correlation coefficients computed from randomly drawn samples. In this method, the hypothesis of randomness, or no relationship, called the null hypothesis, is set against a collection of empirical frequencies derived from a random sample.
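The logic of testing a null hypothesis of "pure randomness" can be sketched with a permutation version of the difference-of-means test. The data here are hypothetical test scores for two groups; under the null hypothesis, the group labels are unrelated to the scores, so shuffling the labels shows how often chance alone produces a difference as large as the one observed:

```python
import random
import statistics

random.seed(7)

# Hypothetical test scores for two randomly sampled groups.
group_a = [72, 85, 78, 90, 88, 76, 95, 82, 79, 91]
group_b = [65, 70, 74, 68, 80, 72, 66, 75, 71, 69]

observed = statistics.mean(group_a) - statistics.mean(group_b)

# Null hypothesis: labels are unrelated to scores (pure randomness).
# Shuffle the labels many times and count how often chance alone yields
# a difference at least as large as the observed one.
pooled = group_a + group_b
n_a = len(group_a)
reps = 10_000
extreme = 0
for _ in range(reps):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / reps
print(f"observed difference: {observed:.1f}, p ≈ {p_value:.4f}")
```

A very small p-value means the observed difference would almost never arise from randomness alone, which is exactly the ground on which the null hypothesis is rejected.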

If the test of the difference between means, the χ² statistic, or the sampled correlation coefficient yields a value so large that it would be unlikely to occur by chance in repeated random samples from random distributions with their known statistical patterns, then a correlation or association between the hypothesized variables in the larger universe from which the samples were drawn can be inferred. This family of research techniques is known as significance testing.

Therefore, observation and interpretation of random events help one become familiar with random patterns. This knowledge of random patterns is useful to social scientists because it helps them to make inferences about causes in the social world, which is essentially a non-random structure, and thereby to construct theoretical explanations based on empirical evidence to obtain a deeper understanding of the complex social system in its many organized variations.
