Mon - December 6, 2004

Lecture 36


Quiz #4

The quizzes have been marked and can be picked up from the general office. I will also bring quizzes with me to the tutorial on Dec. 14.

The average on this quiz was lower than on the previous two. However, there were still many high scores, as well as a few perfect scores.

Posted at 01:04 PM    

Thu - December 2, 2004

Lecture 35.


Review for Quiz #4.

Quiz #4 is based on the material we studied on line fitting and polynomial interpolation. I reviewed last year's quiz, as well as assignment #4.

Posted at 01:02 PM    

Tue - November 30, 2004

Lecture 34


Higher Order Taylor Methods

We looked at the higher order Taylor methods and a derivation of the RK-2 formula. I followed the treatment in classes 24 and 25 of the Ellis notes. (By the way, I know that classes 24 and 25 were written by Tom Fevens, so I really should be calling these notes the Ellis/Fevens notes.) Recktenwald uses an entirely different approach to present this material. It may be useful to look at some of the pictures in chapter 12 of Recktenwald to get some intuitive explanation of the RK formulae.

I did not go into much detail, so you should check the formulae in the notes. I concluded with an example that emphasized the huge increase in accuracy obtainable by using the higher order Taylor methods. This and other examples can be found in NMM.
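As a rough illustration (not the worked example from class or the notes), here is a short Python sketch comparing Euler's method with one common RK-2 formula (the Heun / improved Euler variant) on the test problem y' = y, y(0) = 1, whose exact solution at t = 1 is e. The step size and function names are my own choices.

    import math

    def euler_step(f, t, y, h):
        # First-order (Euler) step: y_{n+1} = y_n + h * f(t_n, y_n).
        return y + h * f(t, y)

    def rk2_step(f, t, y, h):
        # RK-2 (Heun / improved Euler) step: average the slope at the start
        # of the step with the slope at a predicted endpoint.
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        return y + 0.5 * h * (k1 + k2)

    f = lambda t, y: y            # test problem y' = y, y(0) = 1
    h, nsteps = 0.1, 10           # 10 steps of size 0.1, out to t = 1
    y_euler = y_rk2 = 1.0
    for i in range(nsteps):
        t = i * h
        y_euler = euler_step(f, t, y_euler, h)
        y_rk2 = rk2_step(f, t, y_rk2, h)

    # RK-2's global error is O(h^2), noticeably smaller than Euler's O(h).
    print("Euler:", y_euler, "RK-2:", y_rk2, "exact:", math.exp(1.0))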

Posted at 11:21 AM    

Fri - November 26, 2004

Lecture 33


Introduction to numerical algorithms for solving differential equations.

We looked at initial value problems for ordinary differential equations. I illustrated how radioactive decay is modelled by a differential equation, and how one can obtain the amount of radioactive substance at time t, given the initial amount, by solving the differential equation. I then illustrated a simple numerical technique, Euler's method, for solving initial value problems.
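The example actually worked out in class is in the PDF linked below; the following is only a generic Python sketch of Euler's method applied to the decay equation N'(t) = -k N(t), with an illustrative decay constant and initial amount of my own choosing.

    import math

    def euler(f, t0, y0, h, nsteps):
        # Euler's method: repeatedly step y_{n+1} = y_n + h * f(t_n, y_n).
        t, y = t0, y0
        history = [(t, y)]
        for _ in range(nsteps):
            y = y + h * f(t, y)
            t = t + h
            history.append((t, y))
        return history

    # Radioactive decay model N'(t) = -k * N(t); k and N(0) are made-up values.
    k = 0.5
    decay = lambda t, N: -k * N
    approx = euler(decay, t0=0.0, y0=100.0, h=0.25, nsteps=8)

    t_end, N_end = approx[-1]
    print("Euler estimate at t = %.2f: %.4f" % (t_end, N_end))
    print("Exact value 100*exp(-k*t):   %.4f" % (100.0 * math.exp(-k * t_end)))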

Here's the example we worked out in class today: EulerODE1.pdf

You can find this material in class 23 of the Ellis notes, or at the beginning of chapter 12 of NMM.

Posted at 12:31 PM    

Thu - November 25, 2004

Lecture 32


Adaptive Quadrature.


Adaptive quadrature is a technique that automatically determines how many panels to use (and where to put them) so that a numerical integral is computed to within a prescribed accuracy. We saw how the discretization error can be estimated, and how this estimate is used in recursive adaptive trapezoid rule and Simpson's rule algorithms.

I wrote out an algorithm for recursive adaptive Simpson's rule quadrature. We then looked at some MATLAB examples of quadrature routines with and without plots. This ends our treatment of numerical quadrature. We will proceed to Chapter 12 of NMM, which looks at differential equations.
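The algorithm written out in class is not reproduced here; what follows is a minimal Python sketch of the usual recursive adaptive Simpson scheme, in which an interval is split in half whenever the discrepancy between the one-panel and two-panel Simpson estimates suggests the tolerance has not yet been met.

    import math

    def simpson(f, a, b):
        # Basic Simpson's rule on the single interval [a, b].
        c = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

    def adaptive_simpson(f, a, b, tol):
        # Compare the one-panel estimate with the two-panel estimate; the
        # difference divided by 15 is the standard error estimate for Simpson.
        c = 0.5 * (a + b)
        whole = simpson(f, a, b)
        left, right = simpson(f, a, c), simpson(f, c, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        # Otherwise recurse on each half with half the tolerance.
        return (adaptive_simpson(f, a, c, 0.5 * tol) +
                adaptive_simpson(f, c, b, 0.5 * tol))

    print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))  # exact value is 2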

Posted at 12:29 PM    

Tue - November 23, 2004

Lecture 31


Gaussian Quadrature

Today we looked at Gaussian quadrature. The method of solving four equations in four unknowns comes from Class 22 of the Ellis notes. As far as the Recktenwald book is concerned, you may skip over all of the theoretical development if you wish; that is, in Chapter 11 you may skip sections 11.3.1 and 11.3.4. Section 11.3.5 discusses composite rules for Gaussian quadrature. I did not explicitly cover this in class, but I think that you should be able to read this section and understand it.
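As a concrete sketch (in Python, not the MATLAB used in the course), here is the two-point Gauss-Legendre rule that comes out of solving those four equations, namely weights 1 and 1 at the nodes -1/sqrt(3) and +1/sqrt(3) on [-1, 1], together with the change of variables needed for a general interval [a, b].

    import math

    def gauss2(f, a, b):
        # Two-point Gauss-Legendre rule, exact for all cubic polynomials.
        # On [-1, 1] the rule is f(-1/sqrt(3)) + f(+1/sqrt(3)); a linear
        # change of variables maps it onto [a, b].
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        x = 1.0 / math.sqrt(3.0)
        return half * (f(mid - half * x) + f(mid + half * x))

    print(gauss2(lambda x: x**3, 0.0, 2.0))        # 4.0, exact for a cubic
    print(gauss2(math.exp, 0.0, 1.0), math.e - 1)  # close, but not exact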

Posted at 12:19 PM    

Fri - November 19, 2004

Lecture 30


Newton-Cotes Integration rules

Today we used Taylor polynomials to derive the midpoint method of integration. The midpoint method is just one of a collection of rules known as the Newton-Cotes formulas. Check the link or your textbook. I illustrated the midpoint, trapezoid, and Simpson's methods on some examples, and we saw how these methods work both analytically and through the pictures we obtained from these examples. We ended by giving an argument that using multiple panels leads to a reduction in the error at the expense of additional computational cost.
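These are not the examples from class, but here is a small Python sketch of the composite midpoint and trapezoid rules showing how doubling the number of panels reduces the error (roughly by a factor of four for these O(h^2) rules) at the cost of more function evaluations.

    import math

    def composite_midpoint(f, a, b, n):
        # Midpoint rule on n equal panels: sample f at each panel's centre.
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    def composite_trapezoid(f, a, b, n):
        # Trapezoid rule on n equal panels.
        h = (b - a) / n
        interior = sum(f(a + i * h) for i in range(1, n))
        return h * (0.5 * (f(a) + f(b)) + interior)

    exact = 2.0  # integral of sin(x) over [0, pi]
    for n in (2, 4, 8, 16):
        m_err = abs(composite_midpoint(math.sin, 0.0, math.pi, n) - exact)
        t_err = abs(composite_trapezoid(math.sin, 0.0, math.pi, n) - exact)
        print(n, m_err, t_err)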

I also mentioned the date for the final exam and possible times for a pre-exam review session.
Check the announcements category of this web log for up-to-date details.

Posted at 11:48 AM    

Thu - November 18, 2004

Lecture 29.


Numerical Quadrature.

Today we began our exploration of numerical quadrature techniques. We saw the trapezoid rule as well as Simpson's rule, and we also saw how to estimate the error for each of these methods.
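As a quick illustration (my own example, not one from the lecture), the sketch below applies the single-interval trapezoid and Simpson's rules to the integral of e^x over [0, 1] and compares the actual errors with the standard error bounds, (b-a)^3/12 * max|f''| for the trapezoid rule and (b-a)^5/2880 * max|f''''| for Simpson's rule.

    import math

    def trapezoid(f, a, b):
        # Trapezoid rule: area under the chord joining (a, f(a)) and (b, f(b)).
        return 0.5 * (b - a) * (f(a) + f(b))

    def simpson(f, a, b):
        # Simpson's rule: integrate the parabola through a, the midpoint, and b.
        c = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

    a, b = 0.0, 1.0
    exact = math.e - 1.0                    # integral of e^x over [0, 1]
    print("trapezoid error:", abs(trapezoid(math.exp, a, b) - exact))
    print("Simpson error:  ", abs(simpson(math.exp, a, b) - exact))

    # Here f'' = f'''' = e^x, so max|f''| = max|f''''| = e on [0, 1].
    print("trapezoid bound:", (b - a)**3 / 12.0 * math.e)
    print("Simpson bound:  ", (b - a)**5 / 2880.0 * math.e)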

These two quadrature techniques belong to a class of quadrature rules known as the Newton-Cotes rules. We will examine these tomorrow.

Posted at 12:01 PM    

Tue - November 16, 2004

Lecture 28


Least squares curve fitting using the normal equations.

Last week we used a geometric approach to derive the system of equations used to obtain the best line that fits a set of data points. Today I used the normal equations approach to derive the same equations. The derivation expresses everything in terms of vectors and matrices and generalizes very easily to the case where we are fitting a linear combination of functions to a set of data points.
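As a sketch of the matrix form (with made-up data points, in Python/NumPy rather than the MATLAB used in the course): build the design matrix A whose columns are the basis functions evaluated at the data, then solve the normal equations (A^T A) c = A^T y.

    import numpy as np

    # Illustrative data, roughly on the line y = 2x + 1 with a little noise.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    # Design matrix for fitting a line c0 + c1*x.  Adding more columns
    # (x**2, sin(x), ...) fits a linear combination of arbitrary functions.
    A = np.column_stack([np.ones_like(x), x])

    # Normal equations: (A^T A) c = A^T y.
    c = np.linalg.solve(A.T @ A, A.T @ y)
    print("intercept %.4f, slope %.4f" % (c[0], c[1]))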

I began the lecture by showing you an interactive applet that demonstrates line fitting. It can be found at:

http://standards.nctm.org/document/eexamples/chap7/7.4/#applet

Posted at 03:39 PM    

Fri - November 12, 2004

Lecture 27.


Quiz #3

Quiz #3 was held today. My impression after marking a small sample of the quizzes is that students are maintaining a very high standing in this course. I will not return the quizzes on Tuesday. Quizzes will be returned on Thursday instead.

You can go ahead with the questions from last year's assignment #4, as they cover the same material that we did this year.

Next Tuesday we will continue looking at least squares curve fitting.

Posted at 02:21 PM    

Thu - November 11, 2004

Lecture 26


Review of Linear Algebra and Gaussian Elimination for Quiz 3.

The material for quiz #3 is based on the linear algebra review and our treatment of Gaussian Elimination. The main points that we covered were vector and matrix norms, condition number, and algorithms for doing Gaussian Elimination and LU factorization.
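For anyone who wants to experiment while reviewing, here is a small Python/NumPy sketch (my own example, not a quiz question) that computes a matrix norm and condition number, and carries out an LU factorization without pivoting in the same way as the Gaussian elimination algorithm.

    import numpy as np

    def lu_no_pivot(A):
        # Doolittle LU factorization without pivoting (assumes nonzero pivots).
        n = A.shape[0]
        L, U = np.eye(n), A.astype(float).copy()
        for k in range(n - 1):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]      # elimination multiplier
                U[i, k:] -= L[i, k] * U[k, k:]   # zero the entry below the pivot
        return L, U

    A = np.array([[4.0, 2.0, 1.0],
                  [2.0, 5.0, 3.0],
                  [1.0, 3.0, 6.0]])
    L, U = lu_no_pivot(A)
    print(np.allclose(L @ U, A))                 # True: A = L U

    # Infinity norm and condition number kappa(A) = ||A|| * ||A^{-1}||.
    print(np.linalg.norm(A, np.inf), np.linalg.cond(A, np.inf))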

I went over quiz #3 from 2003 and assignment #3.

Posted at 02:12 PM    

Tue - November 9, 2004

Lecture 25


Line fitting.

Today I began the lecture by illustrating, with the help of some pictures, the difference between interpolating data and fitting a curve to data.

I then summarized the techniques for computing interpolating cubic spline curves. I went over the necessity for end conditions, and then discussed three alternative ways in which to specify end conditions.

I concluded the lecture with an introduction to finding the "best line" to fit a set of data points. We used a line that minimizes the sum of the squares of vertical distances to the data points. The method of computing such a line involved solving a system of linear equations.
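To make that last step concrete, here is a short Python sketch (with illustrative data, not data from the lecture): setting the partial derivatives of the sum of squared vertical distances to zero gives a 2-by-2 linear system for the slope and intercept.

    # Best-fit line y = a*x + b minimizing the sum of squared vertical distances.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1]
    n = len(xs)

    # The normal equations for a line:
    #   (sum x^2) a + (sum x) b = sum x*y
    #   (sum x)   a +       n b = sum y
    sx  = sum(xs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy  = sum(ys)

    det = sxx * n - sx * sx
    a = (sxy * n - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    print("slope %.4f, intercept %.4f" % (a, b))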

Posted at 11:14 AM    

Fri - November 5, 2004

Lecture 24


Efficient computation of Spline curves.

Today we saw how a problem that initially requires solving a system of 4n equations in 4n unknowns can be reduced to solving a system of n equations in n unknowns. Furthermore, the matrix that represents our system of linear equations is a so-called diagonally dominant tridiagonal matrix. This structure affords an O(n)-time solution.
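The entries of the spline system come from the derivation in the notes; the sketch below (in Python, with made-up coefficients) only shows the O(n) forward-elimination / back-substitution solver for a diagonally dominant tridiagonal system, often called the Thomas algorithm.

    def solve_tridiagonal(a, b, c, d):
        # Solve a tridiagonal system in O(n) time (the Thomas algorithm).
        #   a: sub-diagonal   (length n, a[0] unused)
        #   b: main diagonal  (length n)
        #   c: super-diagonal (length n, c[n-1] unused)
        #   d: right-hand side
        # Diagonal dominance makes the elimination stable without pivoting.
        n = len(b)
        cp, dp = c[:], d[:]
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # A small diagonally dominant example; the exact solution is [1, 1, 1, 1].
    a = [0.0, 1.0, 1.0, 1.0]
    b = [4.0, 4.0, 4.0, 4.0]
    c = [1.0, 1.0, 1.0, 0.0]
    d = [5.0, 6.0, 6.0, 5.0]
    print(solve_tridiagonal(a, b, c, d))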

Next week we will look at least squares curve fitting. This is discussed in NMM chapter 9, and Ellis classes 30 and 31.

Posted at 01:23 PM    

Thu - November 4, 2004

Lecture 23


Cubic Spline Interpolation

Cubic spline interpolation is a practical piecewise interpolation method that is widely used. The development in Class 16 of the Ellis notes is straightforward, but does not make use of the mathematical tricks that can be used to obtain an amazingly efficient algorithm. The treatment in section 10.3.5 of Recktenwald leads to this efficient method. Today we began our exploration of cubic splines, following Ellis. I plan to compare and contrast these two different descriptions of cubic spline interpolation on Friday.

Posted at 01:27 PM    

Tue - November 2, 2004

Lecture 22


Hermite Interpolation

Hermite interpolation and cubic spline interpolation are piecewise cubic polynomial interpolation methods: both use a cubic polynomial to interpolate each of the pieces. Hermite interpolation is named after the French mathematician Charles Hermite. Let me emphasize that, in the material on piecewise polynomial interpolation, the subscript i in the notation Pi(x) refers to the polynomial interpolating the i-th piece, not to the degree of the polynomial. For each piece a separate cubic polynomial is found.
Hermite interpolation is discussed in section 10.3.4 of Recktenwald.
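The notation in Recktenwald differs slightly, but the idea on a single piece can be sketched as follows (my own Python illustration): the cubic P_i on the piece [x_i, x_{i+1}] is determined by the function values and the slopes at the two endpoints.

    import math

    def hermite_piece(x0, x1, y0, y1, d0, d1):
        # Cubic P(x) on [x0, x1] matching the values y0, y1 and the
        # derivatives d0, d1 at the endpoints (standard Hermite basis form).
        h = x1 - x0
        def P(x):
            t = (x - x0) / h                   # map x onto [0, 1]
            h00 = 2*t**3 - 3*t**2 + 1          # Hermite basis functions
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            return h00*y0 + h*h10*d0 + h01*y1 + h*h11*d1
        return P

    # Example piece: match sin(x) on [0, pi/2] using its values and slopes.
    P = hermite_piece(0.0, math.pi / 2, 0.0, 1.0, 1.0, 0.0)
    print(P(math.pi / 4), math.sin(math.pi / 4))   # reasonable agreement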

Posted at 01:25 PM    
