In dynamical systems, one studies the motion of a particle in some geometric space, governed by time-dependent rules. The process can be discrete (the particle jumps from point to point) or continuous (the particle follows a trajectory). Dynamical systems are used in mathematical models of diverse fields such as classical mechanics, economics, traffic modelling, population dynamics, and biological feedback.

A *dynamical system* is, very broadly, a system which changes in time according to some rules. One concrete example of a dynamical system is the following.

Example 1: A billiard ball moving on a frictionless billiard table. In this example, what is changing in time is the position of the ball. There are two rules governing this motion, namely that the ball travels at the same speed for all time, and that the ball banks off a rail at the same angle at which it hit the rail.

Given an initial position and velocity for the ball, these two rules enable one to compute the trajectory of the ball for all time. This illustrates an important property of dynamical systems: they are *deterministic.* The rules governing the dynamical system should, at least in theory, allow one to determine the state of the system at every point in the future, given some initial data. Another, more abstract, example of a dynamical system is
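This determinism can be sketched in a few lines. The sketch below assumes a unit-square table and uses the standard "unfolding" trick, in which a reflected trajectory is straightened out and then folded back into the table; the function names are illustrative, not standard.

```python
def fold(u):
    """Fold a coordinate back into [0, 1].

    A point moving on an interval with elastic reflections at 0 and 1
    traces a triangle wave: reduce mod 2, then mirror the upper half.
    """
    u = u % 2.0
    return 2.0 - u if u > 1.0 else u

def billiard_position(x0, y0, vx, vy, t):
    """Position at time t of a ball on a frictionless unit-square table.

    The two rules (constant speed, angle of incidence equals angle of
    reflection) determine the trajectory completely from the initial data.
    """
    return fold(x0 + vx * t), fold(y0 + vy * t)

# Hit straight up from the center of the table: after time 2 (up half a
# side, down a full side, up half a side) the ball is back where it started.
print(billiard_position(0.5, 0.5, 0.0, 1.0, 2.0))
```

Note that no randomness enters anywhere: the initial data `(x0, y0, vx, vy)` fixes the position at every future time `t`.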

Example 2: A function $f\colon X\to X$, where $X$ is a set. In this example, one thinks of $X$ as a space in which a particle is moving, and $f$ as a rule governing the motion of the particle. Explicitly, if the particle is at the point $x_0\in X$ at time $t = 0$, then at time $t = 1$ it is at the point $x_1:= f(x_0)$, and at time $t = 2$ it is at the point $x_2 := f(x_1)$, etc.

Given the initial position $x_0$ of the particle, its position at time $t = n$ is therefore $f^{\circ n}(x_0)$, where $f^{\circ n}$ denotes the composition of $f$ with itself $n$ times. In this example, studying the dynamical system is equivalent to studying the iterates of $f$.
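Computing $f^{\circ n}(x_0)$ is a short loop; a minimal sketch (the helper name `iterate` is just for illustration):

```python
def iterate(f, x0, n):
    """Return f composed with itself n times, applied to x0 --
    the position at time t = n of a particle started at x0."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: x * x                          # the rule governing the motion
print([iterate(f, 2, n) for n in range(4)])  # the orbit of 2: [2, 4, 16, 256]
```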

Notice that in example 1 the position of the ball is defined for every time $t>0$, whereas in example 2 the position of the particle is defined only at nonnegative integer values of time. Example 1 is called a *continuous time* dynamical system, and example 2 is called a *discrete time* dynamical system. These are the two most commonly studied kinds of dynamical systems.

In both continuous and discrete time dynamical systems, the most commonly asked questions are the following:

- What is the trajectory of the system given specified initial conditions? While these trajectories can in theory be computed, in practice they are often difficult or even impossible to compute.
- What is the long term behavior of the system? What happens after a long time, i.e., as $t\to\infty$?
- Are there any initial conditions which lead to "special" trajectories? For instance, in example 1, if the ball is hit from the center of the table along a line perpendicular to a rail, then its trajectory will be *periodic*, that is, it will repeat itself forever.

### Continuous time dynamical systems

The most classical examples of dynamical systems are continuous time dynamical systems coming from physics. The motion of a particle moving in space under some force is a standard system; the rules governing the system in this situation are Newton's laws of motion. Other common systems are the diffusion of heat through a material, which is governed by the heat equation, and the motion of particles in a fluid, which is determined by a flow.

In each of these, as in most continuous time dynamical systems, the rules governing the system form a system of differential equations. Because of this, there is a great deal of overlap between the study of dynamical systems and the study of differential equations. Questions about the asymptotic behavior of solutions of differential equations very often fall under the heading of dynamical systems.
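As a sketch of how a differential equation determines a continuous time system, one can approximate the trajectory numerically. The example below uses the forward Euler method (a choice made here for simplicity, not something singled out by the text) on the decay equation $dx/dt = -x$, whose exact solution is $x(t) = x_0 e^{-t}$:

```python
import math

def euler_orbit(deriv, x0, t_end, n=1000):
    """Approximate x(t_end) for the system dx/dt = deriv(x), x(0) = x0,
    by taking n small forward-Euler steps."""
    h = t_end / n
    x = x0
    for _ in range(n):
        x += h * deriv(x)
    return x

# dx/dt = -x with x(0) = 1: the numerical trajectory tracks x(t) = e^{-t}.
approx = euler_orbit(lambda x: -x, 1.0, 1.0)
print(approx, math.exp(-1.0))  # the two values agree closely
```

Here the differential equation plays exactly the role that the function $f$ played in the discrete case: it is the rule that, together with the initial condition, determines the entire trajectory.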

### Discrete time dynamical systems

A discrete time dynamical system is given by a function $f\colon X\to X$, where $X$ is a set. In this generality, such a system is hard to study. Usually one imposes more structure:

- If $X$ is a topological space and $f$ is continuous, it is called a *topological dynamical system.*
- If $X$ is a manifold and $f$ is smooth, it is called a *smooth dynamical system.*
- If $X$ is a complex manifold and $f$ is holomorphic, it is called a *complex dynamical system.*
- If $X$ is a measure space and $f$ is measurable, it is called a *measurable dynamical system.*

Each of these types of dynamical systems has a rich theory behind it.

### Chaos and ergodic theory

The most interesting dynamical systems are those that exhibit *chaotic behavior.* For instance, in example 1, suppose one hits the ball from the center of the table in a certain direction, and on another table one hits the ball from the center of the table in a slightly different direction. Then, after a long period of time, the trajectories of the two balls will diverge and be very different. Thus a slight change in initial conditions (direction the ball is hit) results in very different behavior of the two systems. Such extreme sensitivity to initial conditions is referred to as chaotic behavior.
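This sensitivity is easy to see numerically. The sketch below uses the logistic map $x \mapsto 4x(1-x)$, a standard chaotic discrete time system (chosen here as an illustration; it is not the billiard of example 1), and follows two orbits whose starting points differ by one part in $10^8$:

```python
def logistic(x):
    """The logistic map x -> 4x(1-x), a standard chaotic system on [0, 1]."""
    return 4.0 * x * (1.0 - x)

# Two initial conditions differing by a tiny amount...
a, b = 0.2, 0.2 + 1e-8
gap = []
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gap.append(abs(a - b))

# ...end up on completely different trajectories within a few dozen steps.
print(max(gap))  # of order 1, despite the initial separation of 1e-8
```

The tiny initial gap roughly doubles at each step, so after a few dozen iterations the two orbits are effectively unrelated.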

Systems which exhibit chaotic behavior, while interesting, are often more difficult to study. A common method for approaching such systems is to use statistical and probabilistic methods. In example 1, for instance, instead of asking where the ball is at some very large time $t$ (which could be difficult to compute), one could ask where the ball is most likely to be at time $t$. Such questions are usually easier to approach, and fall under the heading of *ergodic theory.*
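A sketch of this statistical viewpoint, again using the logistic map rather than the billiard: instead of asking exactly where the orbit is at a large time, one asks what fraction of its time it spends in a given region. For $x \mapsto 4x(1-x)$ the invariant density is known explicitly, and it gives the interval $[0, 1/2]$ measure exactly $1/2$, so a typical orbit should spend about half its time there:

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)

# Time average: the fraction of time a typical orbit spends in [0, 1/2].
x, hits, n = 0.1234, 0, 100_000
for _ in range(n):
    x = logistic(x)
    hits += x <= 0.5
print(hits / n)  # close to 1/2, the invariant measure of [0, 1/2]
```

The exact position of the orbit after $100{,}000$ steps is hopeless to predict, but this time average is stable: this trade of exact trajectories for statistical statements is the basic move of ergodic theory.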