Commit 2795e4c3 authored by Dirk Ekelschot

Added FieldConvert + shock capturing + variable p

parent f2de1dbb
@@ -9,7 +9,7 @@ both the compressible Euler and the Navier-Stokes equations.
\subsection{Euler equations}
The Euler equations can be expressed as a hyperbolic
conservation law in the form
\begin{equation}
\begin{equation}\label{eq:euler}
\frac{\partial \mathbf{q} }{\partial t} + \frac{\partial \mathbf{f}_i}{\partial x}
+ \frac{\partial \mathbf{g}_i}{\partial y} +
\frac{\partial \mathbf{h}_i}{\partial z} = 0,
@@ -62,10 +62,6 @@ E = \frac{p}{\gamma - 1} + \frac{1}{2} \rho(u^2 + v^2 + w^2),
\label{energy}
\end{equation}
where $\gamma$ is the ratio of specific heats.
\subsection{Compressible Navier-Stokes equations}
The Navier-Stokes equations include the effects of fluid viscosity and heat
conduction and are consequently composed of an inviscid and a viscous
@@ -422,5 +418,125 @@ Under the two following sections it is possible to define the initial conditions
VALUE="pInf/(Gamma-1)+0.5*rhoInf*(uInf*uInf+vInf*vInf+wInf*wInf)"/>
</FUNCTION>
\end{lstlisting}
\subsection{Shock capturing}
Compressible flow is characterised by abrupt changes in density within the flow domain, often referred to as shocks. These discontinuities lead to numerical oscillations (the Gibbs phenomenon), which are prevented by locally adding a diffusion term to the equations to damp the numerical fluctuations. The elements in which this is required are identified using a sensor algorithm that quantifies the smoothness of the solution within each element. The value of the sensor in an element is defined as
\begin{equation}\label{eq:sensor}
S_e=\frac{||\rho^p_e-\rho^{p-1}_e||_{L_2}}{||\rho_e^p||_{L_2}},
\end{equation}
where $\rho^p_e$ is the density in element $e$ at polynomial order $p$ and $\rho^{p-1}_e$ is the same solution projected onto polynomial order $p-1$.
%In order to determine the solution at $p-1$ and threat the numerical fluxes at the edges of the element accordingly, the solution at polynomial order p, determined using a modified basis, has to be projected onto a hierarchical orthogonal expansion basis. Hence, using a more general formulation, the solution of a variable $u$ is expressed as:
%\begin{equation}
%u^0=u
%\end{equation}
%where $u$ and $u^0$ represent the general solution obtained using a modified basis (\textbf{B}) and orthogonal ($\textbf{B}^0$) respectively, hence:
%\begin{equation}
%\textbf{B}^0\hat{u}^0 = \textbf{B}\hat{u} \rightarrow \hat{u}^0 = \left[\textbf{B}^0\right]^{-1}\textbf{B}\hat{u}
%\end{equation}
%where $\hat{u}$ represents the vector of coefficients at polynomial $p^+$. Since a hierarchical basis is used, it is possible to lower the polynomial order to $p^-$ where $p^+ > p^-$. Since the coefficients are not coupled in the hierarchical orthogonal basis and the information about the mean is contained only in the first coefficient, it is possible to apply a cut-off filter to the orthogonal coefficient vector. This cut-off filter sets all the coefficients that are higher than $p^-$ equal to zero. The information contained in the high frequency components is removed without altering the mean value. The orthogonal coefficients represent the solution at $p^-$ using an orthogonal basis, hence the following transformation has to be applied to obtain the modified filtered coefficients of the lower polynomial degree:
%\begin{equation}
%\hat{u}_f=\left [\textbf{B}^{-1}\right] \textbf{B}^0\hat{u}^0_f
%\end{equation}
%The solution $\rho_e^{p-1}$ is obtained from $\rho^p_e$ using this post-processing step.\\
%\\
%The sensor is then used to identify the elements that require the addition of an extra diffusion term to avoid numerical oscillations due to local flow discontinuities (Gibbs phenomenon).
An artificial diffusion term is introduced locally into the Euler equations to deal with flow discontinuities and the resulting numerical oscillations. Two models are implemented: a non-smooth and a smooth artificial viscosity model.
\subsubsection{Non-smooth artificial viscosity model}
In the non-smooth artificial viscosity model, the added artificial viscosity is constant within each element and discontinuous between elements. The Euler system is augmented by a Laplacian term on the right-hand side of equation \ref{eq:euler}. The diffusivity of the system is controlled by a variable viscosity coefficient $\epsilon$, whose value depends on $\epsilon_0$, the maximum viscosity, which in turn depends on the polynomial order ($p$), the mesh size ($h$) and the maximum wave speed, and on the local sensor value. Based on pre-defined sensor threshold values, the variable viscosity is set according to
\begin{equation}
\epsilon = \epsilon_0 \left\{ \begin{array}{ll}
0 & \mbox{if}\ \ S_e < s_\kappa - \kappa, \\[1ex]
\dfrac{1}{2}\left(1+\sin{\dfrac{\pi\left(S_e-s_\kappa\right)}{2\kappa}}\right) & \mbox{if}\ \ s_\kappa-\kappa < S_e < s_\kappa+\kappa, \\[1ex]
1 & \mbox{if}\ \ S_e > s_\kappa + \kappa.
\end{array} \right.
\end{equation}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width = 0.47 \textwidth]{Figures/Mach_P4.eps}
\includegraphics[width = 0.47 \textwidth]{Figures/ArtVisc_P4.eps}
\caption{(a) Steady state solution for $M=0.8$ flow at $\alpha = 1.25^\circ$ past a NACA 0012 profile, (b) Artificial viscosity ($\epsilon$) distribution}
\label{fig:artvisc}
\end{center}
\end{figure}
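The non-smooth model is selected through the \inltt{SOLVERINFO} section, analogously to the smooth model shown further down. A minimal sketch (the value string \inltt{NonSmooth} is an assumption, mirroring the \inltt{Smooth} setting used later in this section):
\begin{lstlisting}[style=XmlStyle]
<SOLVERINFO>
  <I PROPERTY="ShockCaptureType" VALUE="NonSmooth" />
</SOLVERINFO>
\end{lstlisting}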
\subsubsection{Smooth artificial viscosity model}
For the smooth artificial viscosity model, an extra PDE for the artificial viscosity $\epsilon$ is appended to the Euler system:
\begin{equation}\label{eq:eulerplusvis}\begin{split}
\frac{\partial \epsilon}{\partial t} &= \nabla\cdot \left(\nabla \epsilon\right) + \frac{1}{\tau}\left(\frac{h}{p}\lambda_{max}S_\kappa - \epsilon\right) \quad \textrm{on}\quad \Omega,\\
\frac{\partial \epsilon}{\partial n} &= 0 \quad \textrm{on}\quad \Gamma,
\end{split}
\end{equation}
where $S_\kappa$ is a normalised sensor value that serves as a forcing term for the artificial viscosity. A smooth artificial viscosity distribution is obtained.\\
\\
To enable the smooth viscosity model, the following line has to be added to the \inltt{SOLVERINFO} section:
\begin{lstlisting}[style=XmlStyle]
<SOLVERINFO>
<I PROPERTY="ShockCaptureType" VALUE="Smooth" />
</SOLVERINFO>
\end{lstlisting}
Furthermore, the extra viscosity variable \inltt{eps} has to be added to the variable list:
\begin{lstlisting}[style=XmlStyle]
<VARIABLES>
<V ID="0"> rho </V>
<V ID="1"> rhou </V>
  <V ID="2"> rhov </V>
  <V ID="3"> rhow </V>
  <V ID="4"> E </V>
  <V ID="5"> eps </V>
</VARIABLES>
\end{lstlisting}
A similar addition has to be made for the boundary conditions and the initial conditions. In the tests performed so far, a uniform homogeneous boundary condition and initial condition were used for the extra variable.
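As a minimal sketch of the initial conditions with the extra variable included (the zero value for \inltt{eps} reflects the uniform homogeneous conditions mentioned above; the freestream expressions are reused from the earlier example):
\begin{lstlisting}[style=XmlStyle]
<FUNCTION NAME="InitialConditions">
  <E VAR="rho"  VALUE="rhoInf" />
  <E VAR="rhou" VALUE="rhoInf*uInf" />
  <E VAR="rhov" VALUE="rhoInf*vInf" />
  <E VAR="rhow" VALUE="rhoInf*wInf" />
  <E VAR="E"    VALUE="pInf/(Gamma-1)+0.5*rhoInf*(uInf*uInf+vInf*vInf+wInf*wInf)" />
  <E VAR="eps"  VALUE="0" />
</FUNCTION>
\end{lstlisting}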
The following parameters can be set in the XML session file:
\begin{lstlisting}[style=XmlStyle]
<PARAMETERS>
<P> Skappa = -1.3 </P>
<P> Kappa = 0.2 </P>
<P> mu0 = 1.0 </P>
<P> FH = 3 </P>
<P> FL = 0.01*FH </P>
<P> C1 = 0.03 </P>
<P> C2 = 5/3*C1 </P>
</PARAMETERS>
\end{lstlisting}
where, for now, \inltt{FH} and \inltt{FL} are used to tune which range of the sensor is used as a forcing term, and \inltt{C1} and \inltt{C2} are constants that control how diffusive the model is. In practice these constants are generally left fixed.
\subsection{Variable polynomial order}
A sensor-based $p$-adaptive algorithm is implemented to optimise the computational cost and accuracy. %The motivation for implementing an adaptive algorithm in $Nektar^{++}$ is given in figure \ref{fig:conv} where the convergence is shown of the solution for the advection-diffusion-reaction equation in a cube for a given set of boundary conditions and initial conditions. The mesh consist of $8$ hexahedral shaped elements. The red line shows the convergence when uniform $h$-refinement is applied and the blue line shows the convergence of the solution when uniform $p$-refinement is performed.\\
%\begin{figure}[!htbp]
%\begin{center}
%\includegraphics[width = 0.5\textwidth]{figures/hvsp.eps}
%\caption{Comparison between uniform $h$ and $p$ refinement}
%\label{fig:conv}
%\end{center}
%\end{figure}
%Figure \ref{fig:conv} functions as the motivation for implementing this particular adaptive algorithm into a higher order $h/p$ element code ($Nektar^{++}$).
The DG scheme allows one to use different polynomial orders in different elements, since the fluxes between elements are determined using a Riemann solver and there is no further coupling between the elements. Furthermore, the $p$-adaptive algorithm uses the same sensor as the shock-capturing algorithm to identify the smoothness of the local solution, so it is rather straightforward to use both algorithms at the same time.\\
\\
%When dealing with different polynomial degrees between the elements, it is important to ensure an adequate treatment of the two following operations: the change of the polynomial degree of the solution within the element and the computation of the numerical flux on the interfaces between two elements. Both operations rely on the same procedure of projection and filtering of the available solution coefficients. This procedure is also used to determine the sensor in each element given in equation \ref{eq:sensor}.\\
%The solution at $p - 1$ in an element is obtained by projecting the solution at polynomial order $p$, determined using a modified basis (see equations (\ref{eq:modified1D})-(\ref{eq:modified3D})), onto an orthogonal expansion basis (see equations (\ref{eq:ortho1D})-(\ref{eq:ortho3D})) \cite{BookSpencer}. Using conservation arguments, the solution of a general variable $u$ is expressed as $u^0=u$ where $u^0$ and $u$ represent the general solution obtained using a orthogonal and modified basis respectively, hence
%\begin{equation}
%\textbf{B}^0 \hat{\textbf{u}}^0 = \textbf{B}\hat{\textbf{u}} \rightarrow \hat{\textbf{u}}^0 = \left[\textbf{B}^0\right]^{-1}\textbf{B}\hat{\textbf{u}}
%\end{equation}
%where $\hat{u}$ represents the vector of coefficients. %Since the coefficients are not coupled in the orthogonal basis and the information about the mean is contained only in the first coefficient, it is possible to apply a filter to the orthogonal coefficient vector.
%The coefficients $\hat{u}^0$ are filtered by setting all the coefficients that are higher than $p-1$ equal to zero. The orthogonal coefficients represent the solution at $p-1$ using an orthogonal basis, hence the following transformation has to be applied to obtain the coefficients that belong to the modified basis:
%\begin{equation}
%\hat{\textbf{u}}_f=\left[\textbf{B}^{-1}\right] \textbf{B}^0\hat{\textbf{u}}^0_f
%\end{equation}
%where $\hat{u}_f$ represents the filtered coefficient for a modified basis. In this way, the information contained in the high frequency components is removed without altering the mean value. Hence, this procedure is also used when computing the advective numerical fluxes on the interface of two elements with different expansions since the appropriate number of quadrature points has to be used to avoid numerical instabilities.\\
%\subsubsection{$p$-adaption}
The polynomial order in each element can be adjusted based on the obtained sensor value. Initially, a converged solution is obtained, after which the sensor in each element is calculated. Based on the sensor value and the pre-defined sensor thresholds, the degree of the polynomial approximation in each element is increased, decreased or maintained as
\begin{equation}\label{eq:pswitch}
p_e = \left\{ \begin{array}{ll}
p_e - 1 & \mbox{if}\ \ S_e > s_{ds}, \\
p_e + 1 & \mbox{if}\ \ s_{sm} < S_e < s_{ds}, \\
p_e & \mbox{if}\ \ s_{fl} < S_e < s_{sm}, \\
p_e - 1 & \mbox{if}\ \ S_e < s_{fl},
\end{array} \right.
\end{equation}
where $s_{ds}$, $s_{sm}$ and $s_{fl}$ are the threshold values identifying discontinuous, smooth and flat solutions respectively; a new converged solution is then obtained on the adapted mesh.\\
For now, the threshold values $s_{ds}$, $s_{sm}$ and $s_{fl}$ are determined empirically by looking at the sensor distribution in the domain. Once these values are set and the algorithm has run, two text files are output: one containing the composites, called VariablePComposites.txt, and one containing the expansions, called VariablePExpansions.txt. These entries have to be copied into a new .xml file to create the adapted mesh.
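As an illustrative sketch of what the copied entries look like (the composite numbers and mode counts here are hypothetical), the expansions take the usual \inltt{EXPANSIONS} form, with one entry per group of elements sharing a polynomial order:
\begin{lstlisting}[style=XmlStyle]
<EXPANSIONS>
  <E COMPOSITE="C[101]" NUMMODES="4" FIELDS="rho,rhou,rhov,E" TYPE="MODIFIED" />
  <E COMPOSITE="C[102]" NUMMODES="6" FIELDS="rho,rhou,rhov,E" TYPE="MODIFIED" />
</EXPANSIONS>
\end{lstlisting}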
%where $s_{ds}$, $s_{sm}$ and $s_{fl}$ are the threshold values to identify discontinuities, smooth and flat solutions respectively. This procedure is carried out iteratively.\\
%After the sensor is applied and the polynomial order of the element is changed, a similar filtering procedure as described above is performed to compute the advective numerical fluxes on the interface of two elements with different expansions since the appropriate number of quadrature points has to be used. The number of quadrature points has to be equal to the number used by the highest polynomial degree of the two adjacent elements to avoid numerical instabilities \cite{SherwinConference}. To ensure conservation and stability, the continuity of the total flux is required and therefore
%\begin{equation}
%\int_{\Gamma_{f^-}}\textbf{F}_-^ud\Gamma_f = \int_{\Gamma_{f^+}}\textbf{F}_+^ud\Gamma_f
%\end{equation}
%Where $\textbf{F}^u_-$ and $\textbf{F}^u_+$ represent the numerical flux on the edge between two elements with a lower and a higher polynomial order respectively. If the order or the quadrature points is different, the coefficients are copied directly on the higher resolved side, but fewer coefficients have to be set on the other side. The interface flux is then projected on the space of orthogonal polynomials and filtered to delete the high-order frequencies. Once the degree of the orthogonal expansion is decreased to the lower degree, a reverse projection is carried out and modified coefficients are obtained. the newly calculated flux values are then used to determine the boundary integral for the higher order element while for the lower order element they have to be interpolated onto the lower order boundary.
\section{FieldConvert}
In this section the capabilities of the FieldConvert utility are explained. This utility allows the user to convert, and to add extra fields to, the output data (.fld or .chk files) produced by running a \nekpp solver. The output data are converted into a format compatible with external visualisation packages such as Tecplot or ParaView. For now, this utility has the following functionality:
\begin{itemize}
\item Convert a .fld file to a .vtu or .dat file
\item Calculate vorticity
\item Extract a boundary region
\item Specify a sub-range of the domain
\end{itemize}
The FieldConvert executable can be found in the PostProcessing directory of the build tree:\\
\\
\inltt{nektar++/builds/utilities/PostProcessing/FieldConvert/FieldConvert}\\
\\
In the remainder of this section, all commands are run from this FieldConvert directory. Converting an output file obtained from a \nekpp solver into a .dat file is then done as follows:\\
\\
\inltt{FieldConvert test.xml test.fld test.dat}
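Similarly, using the same example file names, a VTK output for ParaView can be produced by giving the output file a .vtu extension:\\
\\
\inltt{FieldConvert test.xml test.fld test.vtu}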
\subsection{Calculating vorticity}
To calculate the vorticity and obtain an output file containing the vorticity solution, FieldConvert has to be run with the vorticity module:\\
\\
\inltt{FieldConvert -m vorticity test.xml test.fld test-vort.fld}\\
\\
where the file test-vort.fld can then be processed in the same way as described above to visualise the solution.
\subsection{Sub-range of the domain}
One can also select a region of the computational domain and process the data only in that region. For example, to process the data in a 2D region defined by $-2\leq x \leq 3$, $-1\leq y \leq 2$, the following command can be run:\\
\\
\inltt{FieldConvert -r -2,3,-1,2 test.xml test.fld test.dat}\\
\\
where \inltt{-r} invokes the range option of the FieldConvert utility, the first two numbers define the range in the $x$ direction and the third and fourth numbers specify the $y$ range. For a 3D domain, a third pair of numbers has to be provided to define the $z$ range. To calculate the vorticity in a specified part of the computational domain, the following command can be run:\\
\\
\inltt{FieldConvert -r -2,3,-1,2 -m vorticity test.xml test.fld test-vort.fld}
\subsection{Extracting a boundary region}
The boundary region of a domain can be extracted from the output data using the following command:\\
\\
\inltt{FieldConvert -m extract:bnd=2:fldtoboundary=1 test.xml test.fld test-boundary.fld}\\
\\
The option \inltt{bnd} specifies which boundary region to extract. Note that this is different from MeshConvert, where the parameter \inltt{surf} is specified and corresponds to composites rather than boundaries. If \inltt{bnd} is not provided, all boundaries are extracted into separate fields. \inltt{fldtoboundary} is an optional argument which copies the expansion of test.fld into the boundary region before outputting the .fld file; this option is on by default. If it is turned off using \inltt{fldtoboundary=0}, the extraction will only evaluate the boundary condition from the XML file. The output will be placed in test-boundary-b2.fld. If more than one boundary region is specified, files with the extensions -b0.fld, -b1.fld, etc. will be output. To process such a file you will need an XML file describing the same region, which can be generated using the command:
\\
\inltt{MeshConvert -m extract:surf=5 test.xml test-b0.xml}\\
\\
The surface to be extracted in this command is given by its composite number, which therefore needs to correspond to the boundary region of interest. Finally, to process the extracted surface file one can use
\\
\inltt{FieldConvert test-b0.xml test-boundary-b0.fld test-b0.dat}\\
\\
This will generate a Tecplot output, since a .dat file is specified as the last argument; a .vtu extension will instead produce a VTK output.\\
\\
If you have compiled \nekpp with MPI support, you may run the utility in parallel:\\
\\
\inltt{mpirun -np <nprocs> FieldConvert test.xml test.fld test.dat }\\
\\
replacing \inltt{<nprocs>} with the number of processors. This will produce multiple .dat files of the form test-P0.dat, test-P1.dat, test-P2.dat, etc. VTK files can be processed in the same manner, as can the vorticity option. In the case of the vorticity option, a directory called test-vort.fld (or the specified output name) will be produced, with the standard parallel field files placed within it.