Normal law of distribution of random variables. The normal distribution of a continuous random variable is described by the following definition.

Definition 1

A random variable $X$ has a normal distribution (Gaussian distribution) if the density of its distribution is determined by the formula:

\[\varphi \left(x\right)=\frac{1}{\sqrt{2\pi }\sigma }e^{\frac{-\left(x-a\right)^2}{2\sigma ^2}}\]

Here $a\in \mathbb{R}$ is the mathematical expectation, and $\sigma >0$ is the standard deviation.

Density of the normal distribution.

Let us show that this function is indeed a distribution density. To do this, we check the condition $\int\limits^{+\infty }_{-\infty }{\varphi \left(x\right)dx}=1$.

Consider the improper integral $\int\limits^{+\infty }_{-\infty }{\frac{1}{\sqrt{2\pi }\sigma }e^{\frac{-\left(x-a\right)^2}{2\sigma ^2}}dx}$.

Let's make the substitution $\frac{x-a}{\sigma }=t,\ x=\sigma t+a,\ dx=\sigma dt$.

Since $f\left(t\right)=e^{\frac{-t^2}{2}}$ is an even function and $\int\limits^{+\infty }_{-\infty }{e^{\frac{-t^2}{2}}dt}=\sqrt{2\pi }$ (the Poisson integral), we obtain $\int\limits^{+\infty }_{-\infty }{\varphi \left(x\right)dx}=\frac{1}{\sqrt{2\pi }}\int\limits^{+\infty }_{-\infty }{e^{\frac{-t^2}{2}}dt}=1$.

The equality holds, so the function $\varphi \left(x\right)=\frac{1}{\sqrt{2\pi }\sigma }e^{\frac{-\left(x-a\right)^2}{2\sigma ^2}}$ is indeed the distribution density of some random variable.
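As a quick numerical sanity check of this normalization (an addition to the text; it assumes Python with NumPy and SciPy is available), one can integrate the density directly:

# Numerically verify that the normal density integrates to 1
# (illustrative sketch; any a and sigma > 0 should work).
import numpy as np
from scipy.integrate import quad

a, sigma = 2.0, 0.7   # arbitrary example parameters

def phi(x):
    return 1.0 / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(x - a) ** 2 / (2 * sigma ** 2))

total, err = quad(phi, -np.inf, np.inf)
print(total)  # ~1.0 up to numerical error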

Consider some of the simplest properties of the probability density function of the normal distribution $\varphi \left(x\right)$:

  1. The graph of the probability density function of the normal distribution is symmetrical with respect to the straight line $x=a$.
  2. The function $\varphi \left(x\right)$ reaches its maximum at $x=a$, where $\varphi \left(a\right)=\frac{1}{\sqrt{2\pi }\sigma }e^{\frac{-\left(a-a\right)^2}{2\sigma ^2}}=\frac{1}{\sqrt{2\pi }\sigma }$.
  3. The function $\varphi \left(x\right)$ decreases for $x>a$ and increases for $x<a$.
  4. The function $\varphi \left(x\right)$ has inflection points at $x=a+\sigma $ and $x=a-\sigma $.
  5. The function $\varphi \left(x\right)$ asymptotically approaches the $Ox$ axis as $x\to \pm \infty $.
  6. The schematic graph looks like this (Fig. 1).

Figure 1. Normal distribution density plot.

Note that if $a=0$, then the graph of the function is symmetrical with respect to the $Oy$ axis. Hence the function $\varphi \left(x\right)$ is even.

Distribution function of the normal distribution.

To find the distribution function of the normal distribution, we use the formula $F\left(x\right)=\int\limits^{x}_{-\infty }{\varphi \left(t\right)dt}$.

Hence,

\[F\left(x\right)=\frac{1}{\sqrt{2\pi }\sigma }\int\limits^{x}_{-\infty }{e^{\frac{-\left(t-a\right)^2}{2\sigma ^2}}dt}.\]

Definition 2

The function $F(x)$ is called the standard normal distribution function if $a=0,\ \sigma =1$, that is:

\[F\left(x\right)=\frac{1}{\sqrt{2\pi }}\int\limits^{x}_{-\infty }{e^{\frac{-t^2}{2}}dt}=\frac{1}{2}+\Phi \left(x\right).\]

Here $\Phi \left(x\right)=\frac{1}{\sqrt{2\pi }}\int\limits^{x}_{0}{e^{\frac{-t^2}{2}}dt}$ is the Laplace function.

Definition 3

The function $\Phi \left(x\right)=\frac{1}{\sqrt{2\pi }}\int\limits^{x}_{0}{e^{\frac{-t^2}{2}}dt}$ is called the probability integral.
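The relation $F\left(x\right)=\frac{1}{2}+\Phi \left(x\right)$ used in Definition 2 follows by splitting the integral at zero (a short supporting derivation added here):

\[F\left(x\right)=\frac{1}{\sqrt{2\pi }}\int\limits^{0}_{-\infty }{e^{\frac{-t^2}{2}}dt}+\frac{1}{\sqrt{2\pi }}\int\limits^{x}_{0}{e^{\frac{-t^2}{2}}dt}=\frac{1}{2}+\Phi \left(x\right),\]

since the first term is half of the full Poisson integral.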

Numerical characteristics of the normal distribution.

Mathematical expectation: $M\left(X\right)=a$.

Variance : $D\left(X\right)=(\sigma )^2$.

Standard deviation: $\sigma \left(X\right)=\sigma $.
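These characteristics are easy to confirm by simulation (an illustrative sketch assuming Python with NumPy; it is not part of the original text):

# Draw a large normal sample and compare sample mean / variance with a and sigma^2.
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 2.0, 0.5                      # example parameters
sample = rng.normal(loc=a, scale=sigma, size=1_000_000)

print(sample.mean())   # close to a = 2.0
print(sample.var())    # close to sigma^2 = 0.25
print(sample.std())    # close to sigma = 0.5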

Example 1

An example of solving a problem on the concept of normal distribution.

Task 1: The path length $X$ is a continuous random variable. $X$ is distributed according to the normal law with mean value $4$ kilometers and standard deviation $100$ meters.

  1. Find the distribution density function of $X$.
  2. Construct a diagrammatic plot of the distribution density.
  3. Find the distribution function of the random variable $X$.
  4. Find the variance.
  1. To begin with, let us express all the quantities in the same units: 100 m = 0.1 km.

From Definition 1, we get:

\[\varphi \left(x\right)=\frac{1}{0.1\sqrt{2\pi }}e^{\frac{-\left(x-4\right)^2}{0.02}}\]

(since $a=4$ km, $\sigma =0.1$ km).

  2. Using the properties of the distribution density function, we find that the graph of the function $\varphi \left(x\right)$ is symmetric with respect to the straight line $x=4$.

The function reaches its maximum at the point $\left(a,\ \frac{1}{\sqrt{2\pi }\sigma }\right)=\left(4,\ \frac{1}{0.1\sqrt{2\pi }}\right)$.

The schematic graph looks like:

Figure 2.

  3. By definition of the distribution function $F\left(x\right)=\frac{1}{\sqrt{2\pi }\sigma }\int\limits^{x}_{-\infty }{e^{\frac{-\left(t-a\right)^2}{2\sigma ^2}}dt}$, we have:

\[F\left(x\right)=\frac{1}{0.1\sqrt{2\pi }}\int\limits^{x}_{-\infty }{e^{\frac{-\left(t-4\right)^2}{0.02}}dt}.\]

  4. $D\left(X\right)=\sigma ^2=0.01$.
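For readers who want to verify these results numerically, here is a brief sketch using SciPy's implementation of the normal distribution (an addition; the library and parameter names are not part of the original solution):

# X ~ N(a = 4 km, sigma = 0.1 km) from Example 1.
from scipy.stats import norm

X = norm(loc=4.0, scale=0.1)

print(X.pdf(4.0))   # maximum of the density, 1/(0.1*sqrt(2*pi)) ≈ 3.989
print(X.cdf(4.0))   # F(4) = 0.5 by symmetry
print(X.var())      # variance sigma^2 = 0.01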

Normal law of probability distribution

Without exaggeration, it can be called a philosophical law. Observing various objects and processes of the world around us, we often encounter the fact that there is too little of something, too much of something else, and that there is a certain "norm" in between:


Here is the basic form of the density function of the normal probability distribution, and I welcome you to this most interesting lesson.

What examples can be given? There are countless of them. These are, for example, people's height and weight (and not only people's), their physical strength, mental abilities, and so on. There is a "bulk" (in one sense or another) and there are deviations in both directions.

These are various characteristics of inanimate objects (the same dimensions and weight). This is the random duration of processes, for example the time of a hundred-meter sprint or the time it takes resin to turn into amber. From physics, air molecules come to mind: some are slow, some are fast, but most move at "standard" speeds.

Then we move one more standard deviation away from the center and calculate the heights:

We mark the points on the drawing (in green) and see that this is quite enough.

At the final stage we carefully draw the graph, taking particular care to reflect its convexity/concavity! Well, you probably realized long ago that the abscissa axis is a horizontal asymptote, and "crossing over" it is absolutely forbidden!

When the solution is prepared electronically, the graph is easy to build in Excel, and, unexpectedly even for myself, I recorded a short video on this topic. But first let us talk about how the shape of the normal curve changes depending on the values of $a$ and $\sigma $.

When $a$ increases or decreases (with $\sigma $ unchanged), the graph keeps its shape and moves right or left, respectively. For example, when $a$ is changed so that the graph "moves" 3 units to the left, it lands exactly at the origin:


A normally distributed quantity with zero mathematical expectation has received a perfectly natural name: centered; its density function is even, and the graph is symmetric about the y-axis.

When $\sigma $ changes (with $a$ constant), the graph "stays in place" but changes shape. As $\sigma $ increases, the curve becomes lower and more stretched out, like an octopus spreading its tentacles. Conversely, as $\sigma $ decreases, the graph becomes narrower and taller, a "surprised octopus". Indeed, when $\sigma $ is halved, the previous curve becomes twice as narrow and twice as tall:

Everything is in full accordance with geometric transformations of graphs.

The normal distribution with $\sigma =1$ is called normalized, and if it is also centered (our case), then such a distribution is called standard. It has an even simpler density function, which has already been encountered in the local Laplace theorem: $\varphi \left(x\right)=\frac{1}{\sqrt{2\pi }}e^{\frac{-x^2}{2}}$. The standard distribution has found wide application in practice, and very soon we will finally understand its purpose.
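To see these shift and scale effects for yourself, the curves are easy to plot; below is a minimal sketch (it assumes Python with NumPy, SciPy and Matplotlib, whereas the original lesson does this in Excel):

# Compare normal densities: shifting a moves the curve, changing sigma
# stretches or squeezes it; the standard curve is N(0, 1).
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-6, 6, 500)
for a, sigma in [(0, 1), (-3, 1), (0, 2), (0, 0.5)]:
    plt.plot(x, norm.pdf(x, loc=a, scale=sigma), label=f"a={a}, sigma={sigma}")

plt.legend()
plt.xlabel("x")
plt.ylabel("density")
plt.show()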

Now let's watch a movie:

Yes, quite right: somehow the probability distribution function has undeservedly remained in the shadows. Let us recall its definition:
$F\left(x\right)=P\left(X<x\right)$ - the probability that the random variable will take a value LESS than $x$, which "runs" over all real values up to "plus" infinity.

Inside the integral a different letter is usually used so that it does not "clash" with this notation, because here each value of $x$ is assigned an improper integral, which is equal to some number from the interval $\left(0;1\right)$.

Almost none of these values can be calculated exactly by hand, but, as we have just seen, with modern computing power this is not difficult. So, for the standard distribution function, the corresponding Excel function takes just one argument:

=NORMSDIST(z)

One, two - and you're done:

The drawing clearly shows all the properties of the distribution function; of the technical nuances, pay attention here to the horizontal asymptotes and the inflection point.
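For completeness, the same value can be obtained outside Excel; a small sketch in Python (assuming SciPy; this is an addition, not part of the original lesson):

# Equivalent of Excel's =NORMSDIST(z): the standard normal distribution function F(z).
from scipy.stats import norm

z = 1.0
print(norm.cdf(z))   # ≈ 0.8413, the same number NORMSDIST(1) returns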

Now let us recall one of the key tasks of the topic, namely how to find $P\left(\alpha <X<\beta \right)$, the probability that a normal random variable takes a value from the interval $\left(\alpha ;\beta \right)$. Geometrically, this probability is equal to the area between the normal curve and the x-axis over the corresponding section:

but grinding out an approximate value every time is unreasonable, and therefore it is more rational to use the "easy" formula:
\[P\left(\alpha <X<\beta \right)=\Phi \left(\frac{\beta -a}{\sigma }\right)-\Phi \left(\frac{\alpha -a}{\sigma }\right).\]

! Also remember that $\Phi \left(-x\right)=-\Phi \left(x\right)$.

Here you can use Excel again, but there are a couple of significant "buts": firstly, it is not always at hand, and secondly, "ready-made" values will most likely raise questions from the teacher. Why?

I have talked about this repeatedly before: at one time (and not so long ago) an ordinary calculator was a luxury, and the "manual" way of solving the problem under consideration is still preserved in the educational literature. Its essence is to standardize the values "alpha" and "beta", that is, to reduce the solution to the standard distribution:

Note: the standard density is easy to obtain from the general case using the linear substitution $t=\frac{x-a}{\sigma }$, and it is from this substitution that the formula follows for passing from the values of an arbitrary distribution to the corresponding values of the standard distribution.
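A compact derivation of this transition, using only the substitution $t=\frac{x-a}{\sigma }$ just mentioned (a supporting step added here for clarity):

\[P\left(\alpha <X<\beta \right)=\frac{1}{\sqrt{2\pi }\sigma }\int\limits^{\beta }_{\alpha }{e^{\frac{-\left(x-a\right)^2}{2\sigma ^2}}dx}=\frac{1}{\sqrt{2\pi }}\int\limits^{\frac{\beta -a}{\sigma }}_{\frac{\alpha -a}{\sigma }}{e^{\frac{-t^2}{2}}dt}=\Phi \left(\frac{\beta -a}{\sigma }\right)-\Phi \left(\frac{\alpha -a}{\sigma }\right).\]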

Why is this needed? The point is that these values were scrupulously calculated by our predecessors and summarized in a special table, which can be found in many probability-theory textbooks. But even more common is the table of values of the Laplace function, which we have already dealt with in the integral Laplace theorem:

If we have at our disposal a table of values of the Laplace function $\Phi \left(x\right)$, then we solve through it:

Fractional values are traditionally rounded to 4 decimal places, as is done in the standard table. And for checking, there is Item 5 of the layout.

I remind you of the definition of $\Phi \left(x\right)$ given above, and to avoid confusion always keep track of WHICH function's table is in front of you.

The answer is required to be given as a percentage, so the calculated probability must be multiplied by 100 and the result supplied with a meaningful comment:

- with a flight distance from 5 to 70 m, approximately 15.87% of the shells will fall.

We practice on our own:

Example 3

The diameter of bearings manufactured at the factory is a random variable normally distributed with an expectation of 1.5 cm and a standard deviation of 0.04 cm. Find the probability that the size of a randomly taken bearing ranges from 1.4 to 1.6 cm.

In the sample solution and below, I will use the Laplace function as the most common option. By the way, note that, according to the wording, the ends of the interval could be included in the consideration here. However, this is not critical.
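Before reading the solution, you can verify the expected answer numerically; a short sketch assuming SciPy (not part of the original text):

# Example 3: diameter ~ N(1.5 cm, sigma = 0.04 cm); P(1.4 < X < 1.6).
from scipy.stats import norm

p = norm.cdf(1.6, loc=1.5, scale=0.04) - norm.cdf(1.4, loc=1.5, scale=0.04)
print(p)   # ≈ 0.9876, i.e. 2*Phi(2.5) with the Laplace-table value Phi(2.5) = 0.4938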

And already in this example we have met a special case: the interval is symmetric with respect to the mathematical expectation. In such a situation it can be written in the form $\left(a-\delta ;\ a+\delta \right)$ and, using the oddness of the Laplace function, the working formula can be simplified:


The parameter $\delta $ is called the deviation from the mathematical expectation, and the double inequality can be "packed" using the modulus:

\[P\left(\left|X-a\right|<\delta \right)=2\Phi \left(\frac{\delta }{\sigma }\right)\]

- the probability that the value of the random variable deviates from the mathematical expectation by less than $\delta $.

Well, here is the solution that fits in one line :)
$P\left(\left|X-1.5\right|<0.1\right)=2\Phi \left(\frac{0.1}{0.04}\right)=2\Phi \left(2.5\right)=2\cdot 0.4938=0.9876$ - the probability that the diameter of a bearing taken at random differs from 1.5 cm by no more than 0.1 cm.

The result of this task turned out to be close to unity, but I would like even more reliability, namely to find the boundaries within which the diameters of almost all bearings lie. Is there a criterion for this? There is! The question is answered by the so-called

three sigma rule

Its essence is that it is practically certain that a normally distributed random variable will take a value from the interval $\left(a-3\sigma ;\ a+3\sigma \right)$.

Indeed, the probability of deviating from the expectation by less than $3\sigma $ is:
$P\left(\left|X-a\right|<3\sigma \right)=2\Phi \left(3\right)=2\cdot 0.49865=0.9973$, or 99.73%.

In terms of the "bearings", out of 10,000 pieces these are 9973 with a diameter from 1.38 to 1.62 cm and only 27 "substandard" ones.

In practical research the "three sigma" rule is usually applied in the opposite direction: if it is found statistically that almost all values of the random variable under study fit into an interval of 6 standard deviations, then there are good reasons to believe that this variable is distributed according to the normal law. Verification is carried out using the theory of statistical hypotheses.
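The 68-95-99.7 coverage behind this rule is easy to reproduce; a minimal sketch assuming SciPy (an addition to the text):

# Probability that a normal variable stays within k standard deviations of its mean.
from scipy.stats import norm

for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)   # same for any a and sigma after standardization
    print(k, round(p, 4))            # 1 -> 0.6827, 2 -> 0.9545, 3 -> 0.9973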

We continue to solve the harsh Soviet tasks:

Example 4

The random value of the weighing error is distributed according to the normal law with zero mathematical expectation and a standard deviation of 3 grams. Find the probability that the next weighing will be carried out with an error not exceeding 5 grams in absolute value.

Solution: it is very simple. By the condition, $a=0$ and $\sigma =3$ g, and we note right away that, by the three sigma rule, the next weighing will almost certainly give a result accurate to within 9 grams. But the problem asks about a narrower deviation, $\delta =5$ g, and by the formula:

$P\left(\left|X\right|<5\right)=2\Phi \left(\frac{5}{3}\right)\approx 2\Phi \left(1.67\right)=2\cdot 0.4525=0.905$ - the probability that the next weighing will be carried out with an error not exceeding 5 grams.

Answer: $\approx 0.905$.
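An exact numerical cross-check of this answer (a sketch assuming SciPy; the table-based value above is rounded):

# Example 4: weighing error ~ N(0, sigma = 3 g); P(|X| < 5).
from scipy.stats import norm

p = norm.cdf(5, loc=0, scale=3) - norm.cdf(-5, loc=0, scale=3)
print(p)   # ≈ 0.9044; the Laplace table gives 2*Phi(1.67) = 0.905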

The solved problem differs fundamentally from the seemingly similar Example 3 of the lesson on the uniform distribution. There the error came from rounding the measurement results; here we are talking about the random error of the measurements themselves. Such errors arise due to the technical characteristics of the device itself (the range of permissible errors is, as a rule, indicated in its data sheet), and also through the fault of the experimenter, when, for example, we take readings "by eye" from the needle of the same scales.

Among others, there are also so-called systematic measurement errors. These are nonrandom errors that occur due to incorrect setup or operation of the device. For example, unadjusted floor scales can consistently "add" a kilogram, and a seller can systematically short-weight customers. Or not so systematically, since one can also miscount. In any case, such an error will not be random, and its expectation is different from zero.

…I am urgently developing a sales training course =)

Let's solve the problem on our own:

Example 5

The diameter of a roller is a normally distributed random variable with the given standard deviation (in mm). Find the length of the interval, symmetric with respect to the mathematical expectation, into which the roller diameter will fall with the given probability.

Item 5* of the design layout will help. Note that the mathematical expectation is not known here, but this does not interfere with solving the problem in the least.

And an exam task, which I highly recommend for consolidating the material:

Example 6

A normally distributed random variable is given by its parameters $a$ (mathematical expectation) and $\sigma $ (standard deviation). Required:

a) write down the probability density and schematically depict its graph;
b) find the probability that it will take a value from the given interval;
c) find the probability that it deviates from $a$ in absolute value by no more than the given amount;
d) applying the "three sigma" rule, find the range of practically possible values of the random variable.

Such problems are offered everywhere, and over the years of practice I have solved hundreds and hundreds of them. Be sure to practice drawing by hand and using the printed tables ;)

Well, I will analyze an example of increased complexity:

Example 7

The probability density of a random variable has the given form. Find the unknown coefficient, the mathematical expectation, the variance and the distribution function, plot the density and distribution functions, and find the required probability.

Solution: first of all, note that the condition says nothing about the nature of the random variable. By itself, the presence of an exponential means nothing: it could be, for example, an exponential or an entirely arbitrary continuous distribution. Therefore the "normality" of the distribution still needs to be substantiated:

Since the function is defined for every real value of $x$ and can be reduced to the form $\varphi \left(x\right)=\frac{1}{\sqrt{2\pi }\sigma }e^{\frac{-\left(x-a\right)^2}{2\sigma ^2}}$, the random variable is distributed according to the normal law.

Let us reduce it to this form. To do so, we complete the square and arrange a "three-storey" fraction:


Be sure to perform a check by bringing the exponent back to its original form:

which is what we wanted to see.

Thus:
- by the rule for operations with powers, we "pinch off" the constant factor. And here we can immediately write down the obvious numerical characteristics:

Now let us find the value of the parameter. Since the multiplier of the normal density has the form $\frac{1}{\sqrt{2\pi }\sigma }$, we express the parameter from this and substitute it into our function, after which we once again look over the result and make sure that the obtained function has the required normal form.
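Since the specific density from the task is not reproduced here, the same identification can be illustrated on a hypothetical density of the same shape; a sketch assuming Python with SymPy (the coefficients below are made up purely for illustration):

# Hypothetical density f(x) = C*exp(-(x**2 - 6*x)/8); identify C, a and sigma.
import sympy as sp

x = sp.symbols('x', real=True)
C = sp.symbols('C', positive=True)
f = C * sp.exp(-(x**2 - 6*x) / 8)

# Normalization fixes the constant C.
C_val = sp.solve(sp.Eq(sp.integrate(f, (x, -sp.oo, sp.oo)), 1), C)[0]
f = f.subs(C, C_val)

a = sp.integrate(x * f, (x, -sp.oo, sp.oo))                          # expectation
var = sp.simplify(sp.integrate((x - a)**2 * f, (x, -sp.oo, sp.oo)))  # variance

print(C_val, a, var)   # a = 3, variance = 4, so sigma = 2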

Let's plot the density:

and the graph of the distribution function:

If neither Excel nor even an ordinary calculator is at hand, the last graph is easy to build by hand: at the point $x=a$ the distribution function takes the value $F\left(a\right)=0.5$, and the rest of the curve is sketched around it.

Random variables are associated with random events. We speak of random events when it is impossible to predict unambiguously the result that will be obtained under given conditions.

Suppose we toss an ordinary coin. Usually the result of this procedure is not uniquely determined. One can only say with certainty that one of two things will happen: either heads or tails will come up. Either of these events is random. We can introduce a variable that describes the outcome of this random event. Obviously, this variable will take two discrete values: heads and tails. Since we cannot accurately predict in advance which of the two possible values the variable will take, we can say that in this case we are dealing with a random variable.

Let us now assume that in an experiment we measure the reaction time of a subject upon the presentation of some stimulus. As a rule, it turns out that even when the experimenter takes every measure to standardize the experimental conditions, minimizing or even eliminating possible variations in the presentation of the stimulus, the measured reaction times of the subject will still differ. In this case we say that the subject's reaction time is described by a random variable. Since, in principle, we can obtain any value of the reaction time in the experiment - the set of possible reaction-time values that can be obtained as a result of measurement turns out to be infinite - we speak of the continuity of this random variable.

The question arises: are there any regularities in the behavior of random variables? The answer to this question turns out to be in the affirmative.

Thus, if you toss the same coin an indefinitely large number of times, you will find that the number of occurrences of each of its two sides is approximately the same, provided, of course, that the coin is not counterfeit or bent. To emphasize this regularity, the concept of the probability of a random event is introduced. Clearly, in the case of a coin toss one of the two possible events will certainly occur. This is because the total probability of these two events, otherwise called the overall probability, is 100%. If we assume that both events associated with tossing the coin occur with equal probabilities, then the probability of each outcome separately obviously turns out to be 50%. Thus, theoretical considerations allow us to describe the behavior of the given random variable. Such a description is denoted in mathematical statistics by the term "distribution of a random variable".

The situation is more complicated with a random variable that does not have a well-defined set of values, i.e. one that is continuous. But even in this case some important regularities of its behavior can be noted. Thus, in an experiment measuring a subject's reaction time, it can be noted that different intervals of reaction duration occur with different probabilities. It is rather rare for the subject to react too quickly. For example, in semantic decision tasks subjects are practically unable to respond more or less accurately in less than 500 ms (1/2 s). Similarly, it is unlikely that a subject faithfully following the experimenter's instructions will greatly delay the response. In semantic decision tasks, for example, responses estimated at more than 5 s are usually considered unreliable. Nevertheless, it can be assumed with 100% certainty that the subject's reaction time will lie in the range from 0 to +∞. Since this probability is made up of the probabilities of the individual values of the random variable, the distribution of a continuous random variable can be described by a continuous function y = f(x).

If we are dealing with a discrete random variable, when all its possible values are known in advance, as in the coin example, it is usually not very difficult to build a model of its distribution. It suffices to introduce only some reasonable assumptions, as we did in the example under consideration. The situation is more complicated with the distribution of continuous variables, which take a set of values that is not known in advance. Of course, if we had, for example, developed a theoretical model describing the behavior of a subject in an experiment measuring reaction time in a semantic decision task, we could try to use this model to describe the theoretical distribution of the specific reaction-time values of the same subject upon presentation of one and the same stimulus. However, this is not always possible. Therefore the experimenter may be forced to assume that the distribution of the random variable of interest is described by some law already studied in advance. Most often, although this may not always be absolutely correct, the so-called normal distribution is used for this purpose; it acts as a reference for the distribution of any random variable, regardless of its nature. This distribution was first described mathematically in the first half of the 18th century by de Moivre.

A normal distribution occurs when the phenomenon of interest is subject to the influence of an infinite number of random factors that balance one another. Formally, the normal distribution, as de Moivre showed, can be described by the following relation:

\[P=\frac{1}{\sigma \sqrt{2\pi }}e^{\frac{-\left(X-\mu \right)^2}{2\sigma ^2}},\qquad (1.1)\]

where X represents the random variable of interest whose behavior we study; P is the probability value associated with this random variable; π and e are the well-known mathematical constants describing, respectively, the ratio of the circumference of a circle to its diameter and the base of the natural logarithm; μ and σ² are the parameters of the normal distribution of the random variable, respectively the mathematical expectation and variance of the random variable X.

To describe the normal distribution it turns out to be necessary and sufficient to specify only the parameters μ and σ².

Therefore, if we have a random variable whose behavior is described by equation (1.1) with arbitrary values of μ and σ², we can denote it as N(μ, σ²) without recalling all the details of this equation.

Fig. 1.1.

Any distribution can be represented visually in the form of a graph. Graphically, the normal distribution has the form of a bell-shaped curve whose exact shape is determined by the parameters of the distribution, i.e. the mathematical expectation and the variance. The parameters of a normal distribution can take almost any values, limited only by the measurement scale used by the experimenter. In theory, the mathematical expectation can be any number from -∞ to +∞, and the variance can be any non-negative number. Therefore there are infinitely many kinds of normal distributions and, accordingly, an infinite set of curves representing them (all having, however, a similar bell-shaped form). It is clear that it is impossible to describe them all. However, if the parameters of a particular normal distribution are known, it can be converted to the so-called unit normal distribution, whose mathematical expectation equals zero and whose variance equals one. This normal distribution is also called the standard or z-distribution. The plot of the unit normal distribution is shown in Fig. 1.1, from which it is evident that the top of the bell-shaped curve marks the value of the mathematical expectation. The other parameter of the normal distribution - the variance - characterizes the degree of "spread" of the bell-shaped curve relative to the horizontal (abscissa) axis.

A variable is called random if, as a result of an experiment, it can take real values with certain probabilities. The most complete, exhaustive characteristic of a random variable is its distribution law. The distribution law is a function (table, graph, formula) that allows one to determine the probability that a random variable X takes a certain value xi or falls into a certain interval. If a random variable has a given distribution law, it is said to be distributed according to this law or to obey this distribution law.

Every distribution law is a function that completely describes a random variable from a probabilistic point of view. In practice, the probability distribution of a random variable X often has to be judged only by test results.

Normal distribution

The normal distribution, also called the Gaussian distribution, is a probability distribution that plays a crucial role in many fields of knowledge, especially in physics. A physical quantity obeys a normal distribution when it is influenced by a huge number of random disturbances. This situation is extremely common, so of all distributions the normal one occurs most often in nature, which is where one of its names comes from.

The normal distribution depends on two parameters, shift and scale; that is, from the mathematical point of view it is not one distribution but a whole family of them. The parameter values correspond to the mean (mathematical expectation) and the spread (standard deviation).

The standard normal distribution is a normal distribution with mean 0 and standard deviation 1.

Asymmetry coefficient

The skewness coefficient is positive if the right tail of the distribution is longer than the left, and negative otherwise.

If the distribution is symmetrical with respect to the mathematical expectation, then its skewness coefficient is equal to zero.

The sample skewness coefficient is used to test a distribution for symmetry, as well as for a rough preliminary test of normality. It allows one to reject the hypothesis of normality, but not to accept it.

Kurtosis coefficient

The kurtosis coefficient (coefficient of peakedness) is a measure of the sharpness of the peak of the distribution of a random variable; it equals $\frac{\mu _4}{\sigma ^4}-3$, where $\mu _4$ is the fourth central moment.

The "minus three" at the end of the formula is introduced so that the kurtosis coefficient of the normal distribution equals zero. It is positive if the peak of the distribution near the expected value is sharp, and negative if the peak is smooth.

Moments of a random variable

The moment of a random variable is a numerical characteristic of the distribution of a given random variable.
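As an illustration of how these coefficients behave on data (an added sketch assuming SciPy; note that scipy.stats.kurtosis returns the "minus three" version by default):

# Sample skewness and (excess) kurtosis of normally distributed data are both near zero.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=100_000)

print(skew(data))      # close to 0 for a symmetric distribution
print(kurtosis(data))  # close to 0: the normal law is the reference point for peakedness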

The normal distribution plays an important role in data analysis.

Sometimes, instead of the term normal distribution, the term Gaussian distribution is used, in honor of K. Gauss (older terms, now practically unused: Gauss's law, the Gauss-Laplace distribution).

Univariate normal distribution

The normal distribution has the density:

\[f\left(x\right)=\frac{1}{\sigma \sqrt{2\pi }}e^{\frac{-\left(x-\mu \right)^2}{2\sigma ^2}}\]

In this formula the fixed parameters are $\mu $, the mean, and $\sigma $, the standard deviation.

Density plots for various parameter values are shown below.

The characteristic function of the normal distribution has the form:

\[g\left(t\right)=e^{i\mu t-\frac{\sigma ^2t^2}{2}}\]

Differentiating the characteristic function and setting t = 0, we obtain moments of any order.
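For example, the first two moments follow immediately (an added illustration using the characteristic function $g\left(t\right)$ written above):

\[g'\left(t\right)=\left(i\mu -\sigma ^2t\right)g\left(t\right),\quad g'\left(0\right)=i\mu \ \Rightarrow \ M\left(X\right)=\mu ;\qquad g''\left(0\right)=-\mu ^2-\sigma ^2\ \Rightarrow \ M\left(X^2\right)=\mu ^2+\sigma ^2,\ \ D\left(X\right)=\sigma ^2.\]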

The normal density curve is symmetric with respect to $x=\mu $ and has a single maximum at this point, equal to $\frac{1}{\sigma \sqrt{2\pi }}$.

The standard deviation parameter $\sigma $ varies from 0 to $\infty $.

The mean $\mu $ varies from $-\infty $ to $+\infty $.

As the parameter $\sigma $ increases, the curve spreads out along the x-axis; as $\sigma $ tends to 0, it contracts around the mean value (the parameter $\sigma $ characterizes the spread, the scatter).

When $\mu $ changes, the curve shifts along the x-axis (see the graphs).

By varying the parameters $\mu $ and $\sigma $, we obtain various models of random variables that arise in telephony.

A typical application of the normal law in the analysis of, for example, telecommunications data is the modeling of signals and the description of noise, interference, errors and traffic.

Graphs of the univariate normal distribution

Figure 1. Normal distribution density plot: mean is 0, standard deviation is 1

Figure 2. Density plot of the standard normal distribution with areas containing 68% and 95% of all observations

Figure 3. Density plots of normal distributions with zero mean and different standard deviations (σ=0.5, σ=1, σ=2)

Figure 4. Graphs of two normal distributions N(-2,2) and N(3,2).

Note that the center of the distribution shifts when the mean parameter changes.

Comment

In the STATISTICA program, the notation N(3,2) is understood as a normal (Gaussian) law with the parameters: mean = 3 and standard deviation = 2.

In the literature, the second parameter is sometimes interpreted as the variance, i.e. the square of the standard deviation.

Calculating percentage points of the normal distribution with the STATISTICA probability calculator

Using the STATISTICA probability calculator, it is possible to compute various characteristics of distributions without resorting to the cumbersome tables used in old books.

Step 1. We launch Analysis / Probability Calculator / Distributions.

In the distribution section, choose normal.

Figure 5. Launching the probability distribution calculator

Step 2. Specify the parameters we are interested in.

For example, we want to calculate the 95% quantile of a normal distribution with a mean of 0 and a standard deviation of 1.

Specify these parameters in the fields of the calculator (see fields of the calculator mean and standard deviation).

Enter the parameter p = 0.95.

Checkbox "Reverse f.r.". will be displayed automatically. Check the "Graph" box.

Click the "Calculate" button in the upper right corner.

Figure 6. Parameter setting

Step 3. In the Z field we get the result: the quantile value is 1.64 (see the next window).

Figure 7. Viewing the result of the calculator

Figure 8. Plots of the density and distribution functions. The straight line x=1.644485

Figure 9. Graphs of the normal distribution function. Vertical dotted lines - x=-1.5, x=-1, x=-0.5, x=0

Figure 10. Graphs of the normal distribution function. Vertical dotted lines - x=0.5, x=1, x=1.5, x=2
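The same percentage point can be obtained without the calculator; a one-line check in Python (assuming SciPy; an addition to the text):

# 95% quantile (percentage point) of the standard normal distribution.
from scipy.stats import norm

print(norm.ppf(0.95))   # ≈ 1.6449, matching the value x = 1.644485 above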

Estimation of normal distribution parameters

Values of the normal distribution can also be calculated using an interactive calculator.

Bivariate normal distribution

The univariate normal distribution generalizes naturally to the bivariate (two-dimensional) normal distribution.

For example, if you consider a signal at only one point, a one-dimensional distribution is enough; at two points, a two-dimensional one; at three points, a three-dimensional one, and so on.

The general formula for the density of the bivariate normal distribution is:

\[f\left(x_1,x_2\right)=\frac{1}{2\pi \sigma _1\sigma _2\sqrt{1-\rho ^2}}\exp \left\{-\frac{1}{2\left(1-\rho ^2\right)}\left[\frac{\left(x_1-\mu _1\right)^2}{\sigma _1^2}-\frac{2\rho \left(x_1-\mu _1\right)\left(x_2-\mu _2\right)}{\sigma _1\sigma _2}+\frac{\left(x_2-\mu _2\right)^2}{\sigma _2^2}\right]\right\},\]

where $\rho $ is the pairwise correlation between $x_1$ and $x_2$;

$\mu _1$ and $\sigma _1$ are the mean and standard deviation of the variable $x_1$, respectively;

$\mu _2$ and $\sigma _2$ are the mean and standard deviation of the variable $x_2$, respectively.

If the random variables $X_1$ and $X_2$ are independent, then the correlation is $\rho =0$; accordingly, the middle term in the exponent vanishes, and we have:

f(x1, x2) = f(x1)·f(x2)

For independent quantities, the two-dimensional density decomposes into the product of two one-dimensional densities.
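A brief sketch of evaluating such a bivariate density (assuming SciPy; an addition to the text), using a covariance matrix with 1 on the main diagonal and 0.5 off the diagonal, as in the figures below:

# Bivariate normal density with zero means, unit variances and correlation 0.5.
from scipy.stats import multivariate_normal

mean = [0.0, 0.0]
cov = [[1.0, 0.5],
       [0.5, 1.0]]
rv = multivariate_normal(mean=mean, cov=cov)

print(rv.pdf([0.0, 0.0]))    # density at the origin ≈ 0.1838
print(rv.pdf([1.0, -1.0]))   # lower density away from the center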

Bivariate Normal Density Plots

Figure 11. Density plot of a bivariate normal distribution (zero mean vector, unit covariance matrix)

Figure 12. Section of the density plot of the two-dimensional normal distribution by the plane z=0.05

Figure 13. Density plot of the bivariate normal distribution (zero expectation vector, covariance matrix with 1 on the main diagonal and 0.5 on the side diagonal)

Figure 14. Cross-section of the 2D normal density plot (expectation vector zero, covariance matrix with 1 on the main diagonal and 0.5 on the side diagonal) by the z= 0.05 plane

Figure 15. Density plot of a bivariate normal distribution (zero expectation vector, covariance matrix with 1 on the main diagonal and -0.5 on the side diagonal)

Figure 16. Cross section of the 2D normal distribution density plot (expectation vector zero, covariance matrix with 1 on the main diagonal and -0.5 on the side diagonal) by the z=0.05 plane

Figure 17. Cross-sections of plots of densities of a two-dimensional normal distribution by the plane z=0.05

For a better understanding of the bivariate normal distribution, try the following problem.

Task. Look at the graph of the bivariate normal distribution. Think about it: can it be obtained by rotating the graph of a one-dimensional normal distribution? And when would a stretching (deformation) also be needed?