Computational Techniques in Theoretical Physics

Section 2: Review of Numerical Methods

## 2.1 Linear Algebra

Physical Examples:

Atomic vibrational motions in condensed-matter systems, molecules, or proteins.

One-dimensional case:

Newton's law:  F = ma = m d²x/dt²

Hooke's law:  F = - kx

Assumption of oscillatory motion:  x = A cos(ωt + α₀)

Equation of motion:  (k' - ω²) A' = 0    (k' = k/m,  A' = √m A)
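The 1D result can be checked numerically. The sketch below (spring constant, mass, and time step are arbitrary illustrative choices) integrates m d²x/dt² = -kx with the velocity-Verlet scheme, measures one period, and recovers ω = √(k/m):

```python
import math

# Arbitrary illustrative parameters; expected omega = sqrt(k/m) = 2
k, m = 4.0, 1.0
x, v = 1.0, 0.0          # start at amplitude A = 1, at rest
dt = 1e-4

# Velocity-Verlet integration of m x'' = -k x for one full period:
# starting from rest at x = +A, the velocity next crosses zero from
# positive to negative (with x > 0) at t = T
t, prev_v = 0.0, v
a = -k * x / m
steps = 0
while True:
    x += v * dt + 0.5 * a * dt * dt
    a_new = -k * x / m
    v += 0.5 * (a + a_new) * dt
    a = a_new
    t += dt
    steps += 1
    if steps > 10 and prev_v > 0 >= v and x > 0:
        break
    prev_v = v

omega_numeric = 2 * math.pi / t
print(omega_numeric)     # close to sqrt(k/m) = 2
```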

Multi-dimensional system:

Newton's law:  Σ_j F_ij = m_i a_i = m_i d²x_i/dt²

Hooke's law:  F_ij = -k_ij (x_i - x_j)

Assumption of oscillatory motions:  x_i = A_i cos(ωt + α_i)

The equations of motion form a matrix equation (an eigenvalue problem):
Σ_j (k'_ij - ω² δ_ij) A'_j = 0    (k'_ij = k_ij/√(m_i m_j),  A'_i = √(m_i) A_i)
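As an illustration of the matrix form, the sketch below (a hypothetical system: two equal masses between fixed walls, coupled by three equal springs) builds the mass-weighted matrix k'_ij and diagonalizes it with NumPy to obtain the normal-mode frequencies:

```python
import numpy as np

# Illustrative system: two equal masses between fixed walls, three equal
# springs of constant k; expected normal modes omega = sqrt(k/m), sqrt(3k/m)
k, m = 1.0, 1.0
K = np.array([[2*k, -k],
              [-k, 2*k]])            # force-constant matrix k_ij
masses = np.array([m, m])

# Mass-weighted matrix k'_ij = k_ij / sqrt(m_i m_j)
Minv_sqrt = np.diag(1.0 / np.sqrt(masses))
Kp = Minv_sqrt @ K @ Minv_sqrt

# eigh returns eigenvalues omega^2 (ascending) and mass-weighted modes A'
omega2, modes = np.linalg.eigh(Kp)
omega = np.sqrt(omega2)
print(omega)   # in-phase mode sqrt(k/m), out-of-phase mode sqrt(3k/m)
```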

Methods:

Methods are described for solving systems of linear equations, expressed as a matrix equation, Ax = b, in which the vector x is to be determined. The Gauss-Jordan elimination method computes the inverse of the matrix A explicitly. If the inverse is not needed, Gaussian elimination with back-substitution is preferred, as it is about three times faster. For repeated solutions of problems that involve the same matrix but different right-hand sides, the LU decomposition method is recommended.
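A minimal sketch of the LU strategy using SciPy (the 3×3 matrix and the right-hand sides are arbitrary illustrations): factor A once, then reuse the factorization for each new right-hand side:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative 3x3 system; the expensive O(n^3) factorization is done once
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lu, piv = lu_factor(A)

# Each subsequent solve reuses the factors and costs only O(n^2)
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 0.0, 1.0])
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)

print(np.allclose(A @ x1, b1), np.allclose(A @ x2, b2))
```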

Iterative improvements to the solution, dealing with singular problems, and methods for sparse matrices are briefly discussed.
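For sparse systems, one common iterative approach is the conjugate-gradient method; a minimal sketch with SciPy's sparse tools (the tridiagonal test matrix is an arbitrary illustration):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse symmetric positive-definite tridiagonal system (1D Laplacian-like),
# stored without the O(n^2) zeros a dense matrix would carry
n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

# Conjugate-gradient iteration; info == 0 signals convergence
x, info = cg(A, b)
print(info == 0, np.linalg.norm(A @ x - b))
```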

Online slides.

## 2.2 Minimization

This section deals with the task of finding the point that minimizes a one- or multi-dimensional function with as few function calls as possible. There are many applications in statistical data analysis and in optimization problems. A number of methods are described, including the golden-section search, Brent's method, the simplex method, Powell's method, and gradient methods. For multidimensional problems, the importance of minimizing along conjugate directions is demonstrated. Linear programming (optimization) using the simplex method is also explained.
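Several of these methods are available off the shelf in SciPy. The sketch below (test functions chosen purely for illustration) applies Brent's method to a 1D quadratic, and the derivative-free Nelder-Mead simplex and Powell direction-set methods to the Rosenbrock function, whose minimum lies at (1, 1):

```python
import numpy as np
from scipy.optimize import minimize_scalar, minimize, rosen

# 1D: Brent's method (bracketing plus parabolic interpolation)
# on an arbitrary quadratic with its minimum at x = 2
res1 = minimize_scalar(lambda x: (x - 2.0)**2 + 1.0, method='brent')
print(res1.x)        # close to 2.0

# Multi-D: two derivative-free methods on the Rosenbrock function
x0 = np.array([-1.2, 1.0])
res2 = minimize(rosen, x0, method='Nelder-Mead')   # simplex method
res3 = minimize(rosen, x0, method='Powell')        # direction-set method
print(res2.x, res3.x)   # both near [1, 1]
```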