Sunday, December 28, 2014

Transfinite Interpolation - Algebraic way of mapping the computational domain to the physical domain

Transfinite Interpolation (TFI) was first described by William Gordon in 1973; it is used to map the computational domain to the physical domain. I've been reading the Handbook of Grid Generation by J. Thompson. Transfinite interpolation can be really confusing to people who are new to CFD and grid work.

There are limited resources online on how transfinite interpolation really works. The goal of this post is to explain TFI in detail:
General Concept
From J. Thompson. Shows the computational domain transformed into the physical domain

TFI comes from the Boolean Sum Formulation (J. Thompson, Handbook of Grid Generation):
\begin{align}
U(\xi,\eta,\zeta) = \sum_{i=1}^L \sum_{n=0}^P \alpha_{i}^{n}(\xi) \frac{\partial^n X(\xi_i, \eta, \zeta)}{\partial \xi^n}\\
V(\xi,\eta,\zeta) = \sum_{j=1}^M \sum_{m=0}^Q \beta_{j}^{m}(\eta) \frac{\partial^m X(\xi, \eta_j, \zeta)}{\partial \eta^m}\\
W(\xi,\eta,\zeta) = \sum_{k=1}^N \sum_{l=0}^R \gamma_{k}^{l}(\zeta) \frac{\partial^l X(\xi, \eta, \zeta_k)}{\partial \zeta^l}
\end{align}

$$X(\xi,\eta,\zeta)  =
\begin{bmatrix}
x(\xi,\eta,\zeta) \\
y(\xi,\eta,\zeta) \\
z(\xi,\eta,\zeta) \\
\end{bmatrix}$$

Tensor Products - not defined correctly in Thompson, but defined correctly here.
\begin{align}
UV = VU = \sum_{i=1}^L \sum_{j=1}^M \sum_{n=0}^P \sum_{m=0}^Q \alpha_{i}^{n}(\xi) \beta_{j}^{m}(\eta) \frac{\partial^{n+m} X(\xi_i,\eta_j,\zeta)}{\partial \xi^n \, \partial \eta^m}\\
UW = WU = \sum_{i=1}^L \sum_{k=1}^N \sum_{n=0}^P \sum_{l=0}^R \alpha_{i}^{n}(\xi) \gamma_{k}^{l}(\zeta) \frac{\partial^{n+l} X(\xi_i,\eta,\zeta_k)}{\partial \xi^{n} \, \partial \zeta^{l}}\\
VW = WV = \sum_{j=1}^M \sum_{k=1}^N \sum_{m=0}^Q \sum_{l=0}^R \beta_{j}^{m}(\eta) \gamma_{k}^{l}(\zeta) \frac{\partial^{m+l} X(\xi,\eta_j,\zeta_k)}{\partial \eta^m \, \partial \zeta^l}\\
UVW = \sum_{i=1}^L \sum_{j=1}^M \sum_{k=1}^N \sum_{n=0}^P \sum_{m=0}^Q \sum_{l=0}^R \alpha_{i}^{n}(\xi) \beta_{j}^{m}(\eta) \gamma_{k}^{l}(\zeta) \frac{\partial^{n+m+l} X(\xi_i,\eta_j,\zeta_k)}{\partial \xi^{n} \, \partial \eta^m \, \partial \zeta^{l}}
\end{align}

$$X(\xi,\eta,\zeta) = U+V+W - UV-UW-VW+UVW$$

It looks confusing, with lots of variables. The coefficients $\alpha$, $\beta$, and $\gamma$ are called blending functions, and their form depends on what method we use. Thompson goes into detail on them; for the purpose of this post we will consider the simplest case, linear interpolation.

For linear interpolation the blending functions are simple: $P=Q=R=0$ and $L=M=N=2$.
We are going to consider the 2D case:
\begin{align}
\alpha_{1}^0 = 1-\xi \\
\alpha_{2}^0 = \xi \\
\beta_{1}^0 = 1-\eta \\
\beta_{2}^0 = \eta\\
U(\xi_i,\eta_j) = (1-\xi_i) X(0,\eta_j) + \xi_i X(1,\eta_j) \\
V(\xi_i,\eta_j) = (1-\eta_j)X(\xi_i,0) + \eta_j X(\xi_i,1)\\
UV(\xi_i,\eta_j) = (1-\xi_i)(1-\eta_j) X(0,0) + \xi_i (1-\eta_j) X(1,0) + (1-\xi_i)\eta_j X(0,1) + \xi_i \eta_j X(1,1)
\end{align}

How is it used?
Take the simple duct example below. In order to map a computational domain onto the duct we have to parametrize the geometry as $x(\xi,\eta)$ and $y(\xi,\eta)$. Then we can use the above equations to map the computational domain.

In my code I do a simple mapping, nothing complicated; there may be a few bugs if you change geometries. $\xi$ and $\eta$ are passed into a function that outputs $(x,y)$ coordinates. So where you see $X(\xi,\eta)$, the output is a vector $(x,y)$; that vector is then multiplied by $(1-\xi)$ or $\xi$ (and the corresponding $\eta$ factors) to build $U$, $V$, and $UV$.
$$X(\xi,\eta) = U+V-UV$$
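As a concrete sketch of the 2D linear TFI above, here is a small Python version (my actual code is in MATLAB; the duct boundary curves below are hypothetical stand-ins, not the exact geometry from the figure):

```python
import numpy as np

def tfi_2d(bottom, top, left, right, nxi, neta):
    """Linear transfinite interpolation on the unit computational square.
    Each boundary function maps a parameter in [0,1] to an (x, y) point."""
    xi = np.linspace(0.0, 1.0, nxi)
    eta = np.linspace(0.0, 1.0, neta)
    grid = np.zeros((nxi, neta, 2))
    for i, s in enumerate(xi):
        for j, t in enumerate(eta):
            U = (1 - s) * left(t) + s * right(t)        # blend in xi
            V = (1 - t) * bottom(s) + t * top(s)        # blend in eta
            UV = ((1 - s) * (1 - t) * bottom(0) + s * (1 - t) * bottom(1)
                  + (1 - s) * t * top(0) + s * t * top(1))  # corner terms
            grid[i, j] = U + V - UV                     # Boolean sum X = U + V - UV
    return grid

# Hypothetical duct: flat bottom wall, curved (pinched) top wall
bottom = lambda s: np.array([s, 0.0])
top    = lambda s: np.array([s, 1.0 - 0.3 * np.sin(np.pi * s)])
left   = lambda t: np.array([0.0, t])
right  = lambda t: np.array([1.0, t])
grid = tfi_2d(bottom, top, left, right, 11, 11)
```

Note that the Boolean sum reproduces all four boundary curves exactly; the interior points are interpolated between them.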

Grid mapped onto duct
Link to Matlab Code: GitHub

Wednesday, December 24, 2014

Finite Difference: 2nd Order Central, Any order

The 2nd order Central Difference for the second derivative can be found by adding the Forward and Backward Differences.

Forward Difference:
\begin{equation}
u(x_0+\Delta x,y_0) = u(x_0, y_0) + \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2!} \frac{\partial^2 u}{\partial x^2} + \frac{\Delta x^3}{3!} \frac{\partial^3 u}{\partial x^3} + O(\Delta x^4)
\end{equation}

Backward Difference:
\begin{equation}
u(x_0-\Delta x,y_0) = u(x_0, y_0) - \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2!} \frac{\partial^2 u}{\partial x^2} - \frac{\Delta x^3}{3!} \frac{\partial^3 u}{\partial x^3} + O(\Delta x^4)
\end{equation}

Forward + Backward:
\begin{align}
u(x_0+\Delta x,y_0) + u(x_0-\Delta x,y_0) = 2u(x_0,y_0) + \frac{2 \Delta x^2}{2!} \frac{\partial^2 u}{\partial x^2} + \frac{2 \Delta x^4}{4!} \frac{\partial^4 u}{\partial x^4} \\
\frac{u(x_0+\Delta x,y_0) -2 u(x_0,y_0)+ u(x_0-\Delta x,y_0)}{\Delta x^2} = \frac{\partial^2 u}{\partial x^2} + O(\Delta x^2)
\end{align}

There's a pattern here. We can formulate finite difference schemes of various orders given enough points. Take for example the 3 points $u_{i,j}$, $u_{i-1,j}$, $u_{i-2,j}$. We can use these 3 points to build a one-sided approximation of $u_x$ with $O(\Delta x^2)$ accuracy.


\begin{align} P(u_{i-1,j}) = u_{i,j} - \Delta x u_x + \frac{\Delta x^2}{2!} u_{xx} - \frac{\Delta x^3}{3!} u_{xxx} \\ Q(u_{i-2,j}) = u_{i,j} - 2 \Delta x u_x + \frac{2^2 \Delta x^2}{2!} u_{xx} - \frac{2^3 \Delta x^3}{3!} u_{xxx} \end{align}

We want the $u_{xx}$ terms to cancel and the coefficient of $\Delta x u_x$ in the combination $P u_{i-1,j} + Q u_{i-2,j}$ to equal 1, so we solve
$$\frac{P}{2!} + \frac{4Q}{2!} = 0$$
$$-P - 2Q = 1$$

Substituting the coefficients $P=-2$, $Q=1/2$:
$$-2 u_{i-1,j} = -2 u_{i,j} + 2\Delta x u_x - \Delta x^2 u_{xx} + O(\Delta x^3)$$
$$\frac{1}{2} u_{i-2,j} = \frac{1}{2} u_{i,j} - \Delta x u_x + \Delta x^2 u_{xx} + O(\Delta x^3)$$
Adding the two equations and rearranging:
$$u_x = \frac{3 u_{i,j} - 4 u_{i-1,j} + u_{i-2,j}}{2\Delta x} + O(\Delta x^2)$$
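The resulting one-sided scheme can be verified numerically (a Python sketch on the arbitrary test function $\sin x$):

```python
import numpy as np

def dudx_backward2(u, x0, dx):
    """Second-order one-sided (backward) first derivative from 3 points."""
    return (3*u(x0) - 4*u(x0 - dx) + u(x0 - 2*dx)) / (2*dx)

# Halving dx should cut the error by about 4 for an O(dx^2) scheme
e1 = abs(dudx_backward2(np.sin, 1.0, 1e-3) - np.cos(1.0))
e2 = abs(dudx_backward2(np.sin, 1.0, 5e-4) - np.cos(1.0))
```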

We can use the same idea with 4 points and achieve a greater accuracy.





Finite Difference: Central Difference

The equation for Central Difference can be found by subtracting the equations for the Forward and Backward differencing schemes.

Forward Difference:
\begin{equation}
u(x_0+\Delta x,y_0) = u(x_0, y_0) + \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2!} \frac{\partial^2 u}{\partial x^2} + \frac{\Delta x^3}{3!} \frac{\partial^3 u}{\partial x^3} + O(\Delta x^4)
\end{equation}

Backward Difference:
\begin{equation}
u(x_0-\Delta x,y_0) = u(x_0, y_0) - \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2!} \frac{\partial^2 u}{\partial x^2} - \frac{\Delta x^3}{3!} \frac{\partial^3 u}{\partial x^3} + O(\Delta x^4)
\end{equation}

Subtracting these two equations $u(x_0+\Delta x,y_0) - u(x_0-\Delta x,y_0)$
\begin{align}
u(x_0+\Delta x,y_0) - u(x_0-\Delta x,y_0) = 0 + 2\Delta x \frac{\partial u}{\partial x} + 0 + O(\Delta x^3)\\
\frac{u(x_0+\Delta x,y_0) - u(x_0-\Delta x,y_0)}{2 \Delta x} = \frac{\partial u}{\partial x} + O(\Delta x^2)
\end{align}
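To see the gain in accuracy numerically, here is a small Python sketch comparing the forward and central differences on the arbitrary test function $\sin x$:

```python
import numpy as np

dx, x0 = 1e-3, 1.0
forward = (np.sin(x0 + dx) - np.sin(x0)) / dx              # O(dx)
central = (np.sin(x0 + dx) - np.sin(x0 - dx)) / (2 * dx)   # O(dx^2)
err_f = abs(forward - np.cos(x0))   # roughly (dx/2)|u''|
err_c = abs(central - np.cos(x0))   # roughly (dx^2/6)|u'''|
```

At the same step size, the central difference error is several orders of magnitude smaller than the forward difference error.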

Finite Difference: Forward and Backward Difference

When I first saw the finite difference equation for the first derivative of a function $u$ it seemed easy. It looked just like the definition of the derivative, $\frac{\partial u}{\partial x} \approx \frac{u(x+\Delta x) - u(x)}{\Delta x}$, but there's some form of art to how finite differencing really works.

Forward Difference:
\begin{equation}
\frac{\partial u}{\partial x} =\frac{u(x_0 + \Delta x, y_0) - u(x_0,y_0)}{\Delta x}
\end{equation}

Backward Difference:
\begin{equation}
\frac{\partial u}{\partial x} =\frac{ u(x_0,y_0) - u(x_0 - \Delta x, y_0)}{\Delta x}
\end{equation}

These are two common schemes, the forward difference and the backward difference, but how did we get here? In order to really understand finite differences we need to look at the Taylor series.

\begin{equation}
u(x_0+\Delta x,y_0) = u(x_0, y_0) + \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2!} \frac{\partial^2 u}{\partial x^2} +...+ \frac{\Delta x^n}{n!} \frac{\partial^n u}{\partial x^n}
\end{equation}

Using the Taylor series we can derive the equation of the Forward Difference
\begin{equation}
\frac{u(x_0 + \Delta x, y_0) - u(x_0,y_0)}{\Delta x} = \frac{\partial u}{\partial x} + O(\Delta x)
\end{equation}

On the right hand side we have $O(\Delta x)$; this comes from dividing both sides by $\Delta x$: $\frac{O(\Delta x^2)}{\Delta x} = O(\Delta x)$. This is important when you're trying to find the order of accuracy of your finite difference equation.

To get backward differencing all we have to do is replace $\Delta x$ with $-\Delta x$.
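The first-order behavior is easy to confirm numerically (a Python sketch; $\sin x$ is just an example function): halving $\Delta x$ should roughly halve the error of an $O(\Delta x)$ scheme.

```python
import numpy as np

def dudx_forward(u, x0, dx):
    """First-order forward difference approximation of u'(x0)."""
    return (u(x0 + dx) - u(x0)) / dx

e1 = abs(dudx_forward(np.sin, 1.0, 1e-3) - np.cos(1.0))
e2 = abs(dudx_forward(np.sin, 1.0, 5e-4) - np.cos(1.0))
```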


Tuesday, December 23, 2014

Poisson Equation Setup

The objective of this tutorial is to help you think about how to effectively program a solution to the Poisson equation. The tutorial will go into how to think of node linking within matrices, which can often be confusing.

The Poisson equation is defined as $\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f$. Using central differencing the equation can be discretized to the following:
\begin{equation}
\frac{1}{h^2} \big( u(x_{i-1},y_{j})+u(x_{i},y_{j-1})-4u(x_{i},y_{j})+
u(x_{i+1},y_{j}) +u(x_{i},y_{j+1}) \big) = f(x_{i}, y_{j})
\end{equation}

The grid is set up to be $5\times5$, $n=4$. The formula for creating the grid should always be $(n+1)\times(n+1)$, so the total number of nodes will be $(n+1)^2$. We choose $(n+1)$ because it allows us to model the boundary.

Node list:
1 6 11 16 21
2 7 12 17 22
3 8 13 18 23
4 9 14 19 24
5 10 15 20 25


In the figure below Node 1 is connected to Node 6


We see that there is a pattern for node connectivity, N1 is connected to N6 and N2. N7 is connected to 2 and 12.

There is an easy way of programming this kind of connectivity. Take a look at the node table again: starting at node 1, increasing the first index $i$ by 1 gives node 2, and increasing the second index $j$ by 1 gives node 6. The same pattern holds for node 7.


 function [A,F] = Poisson(n)
   % Assemble the 5-point Laplacian on an (n+1) x (n+1) grid of nodes
   N = (n+1)^2;
   A = zeros(N, N);
   F = zeros(N, 1);
   % G maps grid indices to node numbers: reshape turns the vector
   % 1,2,3,...,N into an (n+1) x (n+1) matrix, column by column
   G = reshape(1:N, n+1, n+1);
   for i = 0:n
     for j = 0:n
       row = G(i+1, j+1);
       if i==0 || j==0 || i==n || j==n   % we are on the boundary: u = 0
         A(row, row) = -4;
         F(row, 1) = 0;
       else                              % interior node: 5-point stencil
         A(row, G(i+1, j))   = 1;
         A(row, G(i+1, j+2)) = 1;
         A(row, G(i,   j+1)) = 1;
         A(row, G(i+2, j+1)) = 1;
         A(row, G(i+1, j+1)) = -4;
         F(row, 1) = 1;                  % f*h^2, taken as 1 here
       end
     end
   end
 end
 % Now solve A\F
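The same assembly scales to large grids when stored sparsely. Here is a Python/SciPy sketch of the pseudocode above (the boundary rows pin $u = 0$, and the right-hand side folds $f h^2$ into a constant 1, as in the listing):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson(n):
    """Assemble and solve the 5-point Poisson system on an
    (n+1) x (n+1) grid, boundary nodes included."""
    N = (n + 1) ** 2
    G = np.arange(N).reshape(n + 1, n + 1)   # node numbering, 0-based
    A = sp.lil_matrix((N, N))
    F = np.zeros(N)
    for i in range(n + 1):
        for j in range(n + 1):
            row = G[i, j]
            if i == 0 or j == 0 or i == n or j == n:   # boundary: u = 0
                A[row, row] = -4.0
            else:                                      # interior 5-point stencil
                A[row, G[i - 1, j]] = 1.0
                A[row, G[i + 1, j]] = 1.0
                A[row, G[i, j - 1]] = 1.0
                A[row, G[i, j + 1]] = 1.0
                A[row, row] = -4.0
                F[row] = 1.0                           # f * h^2, taken as 1
    u = spla.spsolve(A.tocsr(), F)
    return u.reshape(n + 1, n + 1)

u = poisson(20)
```

With $f > 0$ and zero boundary values, the discrete solution dips below zero in the interior and is symmetric on the square domain.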

$120^2 \times 120^2$ Poisson solution using sparse matrices