Compressible flows are characterised by abrupt changes in density within the flow domain, often referred to as shocks. These discontinuities lead to spurious numerical oscillations (the Gibbs phenomenon). This problem is mitigated by locally adding a diffusion term to the equations to damp the oscillations. The affected elements are identified using a sensor that quantifies the smoothness of the solution within each element. The value of the sensor in an element $e$ is defined as
\begin{equation}
S_e=\frac{||\rho^p_e-\rho^{p-1}_e||_{L_2}}{||\rho^p_e||_{L_2}},
\label{eq:sensor}
\end{equation}
where $\rho^{p-1}_e$ is obtained from the order-$p$ density $\rho^p_e$ by projection onto the polynomial space of order $p-1$.


An artificial diffusion term is therefore introduced locally into the Euler equations to deal with flow discontinuities and the resulting numerical oscillations. Two models are implemented: a non-smooth and a smooth artificial viscosity model.

\subsubsection{Non-smooth artificial viscosity model}

For the non-smooth artificial viscosity model, the added artificial viscosity is constant within each element and discontinuous between elements. The Euler system is augmented by a Laplacian term on the right-hand side of equation \ref{eq:euler}. The diffusivity of the system is controlled by a variable viscosity coefficient $\epsilon$, whose value depends on the maximum viscosity $\epsilon_0$ (itself a function of the polynomial order $p$, the mesh size $h$ and the maximum wave speed) and on the local sensor value. Based on pre-defined sensor threshold values, the variable viscosity is set accordingly.
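The precise mapping from sensor value to element viscosity is not reproduced here; a common choice in the literature (a sketch assuming Persson--Peraire-style activation, with threshold $s_\kappa$ and half-width $\kappa$, not necessarily the exact expression implemented in the solver) is
\begin{equation*}
\epsilon_e =
\begin{cases}
0 & \text{if } s_e < s_\kappa - \kappa,\\[2pt]
\dfrac{\epsilon_0}{2}\left(1+\sin\dfrac{\pi\,(s_e-s_\kappa)}{2\kappa}\right) & \text{if } s_\kappa-\kappa \le s_e \le s_\kappa+\kappa,\\[4pt]
\epsilon_0 & \text{if } s_e > s_\kappa+\kappa,
\end{cases}
\end{equation*}
so that the viscosity ramps smoothly from zero to its maximum value as the sensor crosses the activation threshold.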

\caption{(a) Steady state solution for $M=0.8$ flow at $\alpha=1.25^\circ$ past a NACA 0012 profile, (b) Artificial viscosity ($\epsilon$) distribution}

\label{fig:}

\end{center}

\end{figure}

\subsubsection{Smooth artificial viscosity model}

For the smooth artificial viscosity model, an extra PDE for the artificial viscosity is appended to the Euler system

where $S_\kappa$ is a normalised sensor value and serves as a forcing term for the artificial viscosity. This yields a smooth artificial viscosity distribution.

To enable the smooth viscosity model, the following line has to be added to the \inltt{SOLVERINFO} section:

\begin{lstlisting}[style=XmlStyle]
<SOLVERINFO>
  <I PROPERTY="ShockCaptureType" VALUE="Smooth" />
</SOLVERINFO>
\end{lstlisting}

Furthermore, the extra viscosity variable \inltt{eps} has to be added to the variable list:

\begin{lstlisting}[style=XmlStyle]
<VARIABLES>
  <V ID="0"> rho  </V>
  <V ID="1"> rhou </V>
  <V ID="2"> rhov </V>
  <V ID="3"> E    </V>
  <V ID="4"> eps  </V>
</VARIABLES>
\end{lstlisting}

A similar addition has to be made to the boundary conditions and initial conditions. The tests performed so far used uniform, homogeneous boundary and initial conditions.
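As an illustration, the extra variable can be initialised to zero alongside the flow variables. The snippet below is a sketch only: the free-stream parameters \inltt{rhoInf}, \inltt{uInf}, \inltt{pInf} and \inltt{Gamma} are assumed to be defined elsewhere in the session file, and the exact function syntax should be checked against the rest of your configuration.
\begin{lstlisting}[style=XmlStyle]
<FUNCTION NAME="InitialConditions">
  <E VAR="rho"  VALUE="rhoInf" />
  <E VAR="rhou" VALUE="rhoInf*uInf" />
  <E VAR="rhov" VALUE="0" />
  <E VAR="E"    VALUE="pInf/(Gamma-1)+0.5*rhoInf*uInf*uInf" />
  <E VAR="eps"  VALUE="0" />
</FUNCTION>
\end{lstlisting}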

The following parameters can be set in the xml session file:

\begin{lstlisting}[style=XmlStyle]

<PARAMETERS>

<P> Skappa = -1.3 </P>

<P> Kappa = 0.2 </P>

<P> mu0 = 1.0 </P>

<P> FH = 3 </P>

<P> FL = 0.01*FH </P>

<P> C1 = 0.03 </P>

<P> C2 = 5/3*C1 </P>

</PARAMETERS>

\end{lstlisting}

where, for now, \inltt{FH} and \inltt{FL} tune the range of the sensor that is used as a forcing term, and \inltt{C1} and \inltt{C2} are constants that can be adjusted to make the model more or less diffusive. In general, however, these constants are left fixed.
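The role of \inltt{FL} and \inltt{FH} can be pictured as a clamped linear ramp that maps the raw sensor value onto $[0,1]$. The sketch below is a hypothetical illustration of that idea (the function name and the exact functional form are mine, not taken from the solver source):

```python
def normalise_sensor(s, fl=0.03, fh=3.0):
    """Map a raw sensor value s onto [0, 1] with a clamped linear ramp.

    Hypothetical illustration: sensor values below fl produce no
    forcing, values above fh produce full forcing. The actual mapping
    used by the solver may differ.
    """
    if s <= fl:
        return 0.0
    if s >= fh:
        return 1.0
    return (s - fl) / (fh - fl)

# Below the lower bound: no artificial-viscosity forcing.
print(normalise_sensor(0.0))    # 0.0
# Midpoint of the ramp between fl=0.03 and fh=3.0.
print(normalise_sensor(1.515))  # 0.5
# Above the upper bound: full forcing.
print(normalise_sensor(10.0))   # 1.0
```

Widening the \inltt{FL}--\inltt{FH} interval makes the forcing respond more gradually to the sensor; narrowing it makes the model switch on more abruptly.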

\subsection{Variable polynomial order}

A sensor-based $p$-adaptive algorithm is implemented to optimise the computational cost and accuracy.

The DG scheme allows one to use a different polynomial order in each element, since the fluxes across the element interfaces are determined using a Riemann solver and there is no further coupling between the elements. Furthermore, the $p$-adaptive algorithm uses the same sensor as the shock-capturing algorithm to identify the smoothness of the local solution, so it is rather straightforward to use both algorithms at the same time.


The polynomial order in each element can be adjusted based on the sensor value that is obtained. Initially, a converged solution is obtained after which the sensor in each element is calculated. Based on the determined sensor value and the pre-defined sensor thresholds, it is decided to increase, decrease or maintain the degree of the polynomial approximation in each element and a new converged solution is obtained.\\

For now, the threshold values $s_e$, $s_{ds}$, $s_{sm}$ and $s_{fl}$ are determined empirically by looking at the sensor distribution in the domain. Once these values are set, two text files are written: \inltt{VariablePComposites.txt}, containing the composites, and \inltt{VariablePExpansions.txt}, containing the expansions. Their contents have to be copied into a new .xml file to create the adapted mesh.


In this section, the capabilities of the \inltt{FieldConvert} utility are explained. This utility allows the user to convert, and to add extra fields to, existing output data (a .fld or .chk file) produced by running a solver in \nekpp. The data are converted into a format compatible with external visualisers such as Tecplot or ParaView. For now, this utility has the following functionality:

\begin{itemize}

\item Convert a .fld file to a .vtu or .dat file

\item Calculate vorticity

\item Extract a boundary region

\item Specify a sub-range of the domain

\end{itemize}

The executable \inltt{FieldConvert} can be found in the \inltt{PostProcessing} directory located in \inltt{builds}.\\

In the remainder of this section, all commands are issued from the \inltt{FieldConvert} directory located in \inltt{builds}. Converting an output file obtained with \nekpp into a .dat file is then done as follows:\\
\\
\inltt{FieldConvert test.xml test.fld test.dat}

\subsection{Calculating vorticity}

To compute the vorticity and obtain an output file containing the vorticity field, the executable \inltt{FieldConvert} has to be run with the vorticity option:\\

where the file test-vort.fld can be processed in a similar way as described above to visualise the solution.
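As an illustration, in versions of the utility that select post-processing steps via the \inltt{-m} module flag, the invocation would look similar to the following (the exact syntax for your build may differ):\\
\\
\inltt{FieldConvert -m vorticity test.xml test.fld test-vort.fld}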

\subsection{Sub-range of the domain}

One can also select a region of the computational domain and process only the data for that part of the domain. For example, to process the data of a 2D plane defined by $-2\leq x \leq3$, $-1\leq y \leq2$, the following command can be run:\\

where \inltt{-r} defines the range option of the FieldConvert utility: the first two numbers define the range in the $x$ direction and the third and fourth numbers specify the $y$ range. For a 3D part of the domain, a third pair of numbers has to be provided to define the $z$ range. To calculate the vorticity in a specified part of the computational domain, the following command line can be run:\\
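Assuming \inltt{-r} takes a comma-separated list of bounds, the two commands referred to above would look similar to the following (illustrative only; verify against your build):\\
\\
\inltt{FieldConvert -r -2,3,-1,2 test.xml test.fld test.dat}\\
\inltt{FieldConvert -r -2,3,-1,2 -m vorticity test.xml test.fld test-vort.fld}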

The option \inltt{bnd} specifies which boundary region to extract. Note that this is different from MeshConvert, where the parameter \inltt{surf} is specified and corresponds to composites rather than boundaries. If \inltt{bnd} is not provided, all boundaries are extracted to different fields. The \inltt{fldtoboundary} option is an optional command argument which copies the expansion of test.fld into the boundary region before outputting the .fld file. This option is on by default. If it is turned off using \inltt{fldtoboundary=0}, the extraction will only evaluate the boundary condition from the xml file. The output will be placed in test-boundary-b2.fld. If more than one boundary region is specified, the extensions -b0.fld, -b1.fld, etc.\ will be used. To process this file you will need an xml file of the same region. This can be generated using the command:

The surface to be extracted in this command is specified by its composite number, which therefore needs to correspond to the boundary region of interest. Finally, to process the surface file one can use

replacing \inltt{<nprocs>} with the number of processors. This will produce multiple .dat files of the form test-P0.dat, test-P1.dat, test-P2.dat, etc. The VTK files and the vorticity option can be processed in the same manner. In the case of the vorticity option, a directory called test-vort.fld (or the specified output name) will be produced, with the standard parallel field files placed within it.
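Collecting the steps above into one sketch (the module names and argument syntax are assumed from later versions of the tools and should be checked against your installation):\\
\\
\inltt{FieldConvert -m extract:bnd=2 test.xml test.fld test-boundary.fld}\\
\inltt{MeshConvert -m extract:surf=5 test.xml test-b2.xml}\\
\inltt{mpirun -np <nprocs> FieldConvert test-b2.xml test-boundary-b2.fld test-b2.dat}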