Sunday, September 22, 2013

6.1: What we're really going to be talking about.

Just kidding, everyone! I was going off the incorrect list, assuming we were going through the book in order. Instead, I should have been looking at the schedule that says we will be going to chapter 6 now.

Think of section 4.1 as another bonus section. Hooray!

Anyway, onto the next summary…

Chapter 6 is entitled “Numerical Methods.”
Section 6.1 is entitled “Euler’s Method.”

Since the beginning of our differential equation journey, we have been using numerical solutions of our ordinary differential equations. However, a numerical solution is less of a solution and more of an approximation. This means we make an error while finding our solution, and now is the time to understand that error.

Let’s consider our already well-known initial value problem

y' = f(t, y),   y(t0) = y0.

We will be interested in our solution on a certain interval, say a ≤ t ≤ b, and we will assume the solution exists on this interval. As always, we will denote this solution by y(t).

Here’s a definition for you: a numerical solution method (also known as a numerical solver) chooses a discrete set of points t0, t1, t2, … , tN within our interval and corresponding values y0, y1, y2, … , yN such that each value yi is approximately equal to y(ti) (for i = 0, 1, 2, … , N). Our initial condition is our first point and starts the method off for us.

The first method we will look at is Euler’s method. This is an example of what is called a fixed-step solver, which means we choose the values of the independent variable so that our interval is split into N equal subdivisions or subintervals. To do this, we set a step size h equal to (b – a)/N.
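
For a quick made-up example: if a = 0, b = 2, and N = 10, then h = (2 – 0)/10 = 0.2, and the mesh points are t0 = 0, t1 = 0.2, t2 = 0.4, … , t10 = 2.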

The idea is that we’re using the tangent line to approximate the solution. We start out with the tangent line at our initial condition and use it to make our first step (to t1). Then, once we have computed t1 and y1, we use those values to make our second step, and so on. The general method is as follows:

ti = ti-1 + h,
yi = yi-1 + h · f(ti-1, yi-1),   for i = 1, 2, … , N.

Something to take note of is that yi only depends on the previous step’s calculations (ti-1 and yi-1). Because of this property, this solver is known as a single-step solver.
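
To make that recipe concrete, here is a minimal sketch of Euler’s method in Python. (This is my own sketch, not the book’s code; the function name euler and the example equation y' = y are just choices I made.)

```python
def euler(f, a, b, y0, N):
    """Approximate the solution of y' = f(t, y), y(a) = y0, on [a, b]
    using Euler's method with N equal steps of size h = (b - a) / N."""
    h = (b - a) / N
    ts = [a]
    ys = [y0]
    for i in range(1, N + 1):
        # Follow the tangent line at (t_{i-1}, y_{i-1}) for one step of length h.
        # The new value depends only on the previous step (single-step solver).
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    return ts, ys

# Example: y' = y with y(0) = 1 on [0, 1]; the exact solution is e^t.
ts, ys = euler(lambda t, y: y, 0.0, 1.0, 1.0, 10)
print(ts[-1], ys[-1])  # y_10 is about 2.5937, while e is about 2.71828
```

Running it with a larger N should land closer to e, which is exactly the step-size story below.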

For this method, the magnitude of the error usually increases with each step. There are exceptions, but for the most part, the error grows as we move farther from the initial condition.

There are two sorts of error involved in Euler’s method: truncation error, which comes from the method itself, and round-off error, which comes from carrying out the arithmetic on a computer, a calculator, or by hand.

Round-off error is pretty straightforward: you make a calculation and round off to, let’s say, three decimal places. You might be rounding up or down depending on your digits, so a small error can be produced in the last place of your calculation. However, calculators and computers carry so many digits that the round-off error is usually negligible. There are exceptions, but negligible round-off error is almost always something we can count on when computing numerical solutions of ordinary differential equations. Therefore, we won’t be talking about round-off error anymore.
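
Before we drop it entirely, here is a tiny toy illustration (my own example, nothing from the book) of how rounding every intermediate result to three decimal places can pile up:

```python
# Toy illustration: add 1/3 a thousand times, once in full precision and once
# rounding every intermediate result to three decimal places.
x_full = 0.0
x_rounded = 0.0
for _ in range(1000):
    x_full += 1 / 3
    x_rounded = round(x_rounded + round(1 / 3, 3), 3)
print(x_full)     # about 333.3333, up to the computer's own tiny round-off
print(x_rounded)  # 333.0, so the per-step rounding errors have accumulated
```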

This leaves us with truncation error. To really understand truncation error, we would have to look at Taylor’s formula. To avoid spitting more formulas at you, here are some handy websites for you to look at:

The point of bringing up Taylor’s formula is the remainder term in Taylor’s formula. That remainder is the error made in each individual step, and this error is the truncation error.
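
For reference, the piece of Taylor’s formula that matters here is a standard fact (this is my restatement, so check the book or the links above for the full story):

```latex
% Taylor's formula with remainder, applied to the true solution y(t) over one step of size h:
y(t_{i-1} + h) = y(t_{i-1}) + h\,y'(t_{i-1}) + \frac{h^{2}}{2}\,y''(\xi)
\quad \text{for some } \xi \text{ between } t_{i-1} \text{ and } t_{i-1} + h.
% Euler's method keeps only the first two terms (with y' = f(t, y)),
% so the remainder (h^2/2) y''(xi) is the truncation error made in a single step.
```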

This graph shows the first two steps of Euler’s method. The error in the first step is purely truncation error (shown by the darker gray line between (t1, y(t1)) and (t1, y1)), while there are two sources of error in the second step. The error below the gray line is the truncation error of that step, while the error above the gray line is what is called propagated truncation error: the error from earlier steps that gets carried forward by the method. It’s a little hard to see from my picture, but the propagated truncation error can sometimes be much larger than the truncation error of a single step. As the number of steps increases, the total error at each step is the sum of the propagated truncation errors from the previous steps plus the truncation error of the current step, and it adds up to a lot.
We can analyze the total error. In the bound on this error, the constants M and L depend only on the function f(t, y), evaluated over a rectangle R in which the solution curve is contained.
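
I’m not copying the book’s exact statement here, but the usual form of this bound for Euler’s method looks like the following (treat the constants as my reconstruction and check the text for the precise version):

```latex
% Standard global error bound for Euler's method, where L and M are constants
% determined by f(t, y) on the rectangle R containing the solution curve:
\lvert y(t_{i}) - y_{i} \rvert \;\le\; \frac{M}{2L}\left(e^{L\,(t_{i} - a)} - 1\right) h
\;\le\; \frac{M}{2L}\left(e^{L\,(b - a)} - 1\right) h .
```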

There is both good news and bad news in this error bound. The good news is that the step size h is a factor in the error, so if we make the step size very small, the error will be very small as well. The bad news is that (b – a) is also a factor, so for large intervals the error can get very large. However, this is true of all numerical solvers.
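
As a quick check of the good news, here is a small experiment reusing the euler sketch from above (again my own example, with y' = y and exact value e at t = 1): each time h is halved, the error at t = 1 roughly halves too.

```python
import math

# Reuses the euler() sketch from earlier: y' = y, y(0) = 1, exact solution e^t.
for N in (10, 20, 40, 80):
    ts, ys = euler(lambda t, y: y, 0.0, 1.0, 1.0, N)
    print(N, abs(math.e - ys[-1]))  # the error roughly halves each time N doubles
```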

Solution methods can be applied to systems of differential equations. You know what’s funny about this? My accidental bonus section (4.1) is where we get the example for this. Remember the spring and mass system?
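
Here is a minimal sketch of what that looks like (my own example, not the book’s code): rewrite the undamped spring-mass equation m y'' = -k y as the first-order system y' = v, v' = -(k/m) y, then take Euler steps on both components at once.

```python
import math

def euler_system(k, m, y0, v0, a, b, N):
    """Euler's method for the spring-mass system y' = v, v' = -(k/m) * y."""
    h = (b - a) / N
    t, y, v = a, y0, v0
    history = [(t, y, v)]
    for _ in range(N):
        # Both components are stepped using only the values from the previous step.
        y_new = y + h * v
        v_new = v - h * (k / m) * y
        t, y, v = t + h, y_new, v_new
        history.append((t, y, v))
    return history

# Example: k = m = 1, y(0) = 1, v(0) = 0, integrated over one full period [0, 2*pi].
path = euler_system(1.0, 1.0, 1.0, 0.0, 0.0, 2 * math.pi, 1000)
print(path[-1])  # the exact solution cos(t) returns to y = 1, v = 0; Euler lands close, but a bit off
```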

Okay, that’s it for 6.1. I’m going to summarize 6.2 for next time (I promise, I’m looking at the correct schedule this time). I’ll see you when I see you. 
