Commit 19d247ed by Dirk Ekelschot

 ... ... @@ -423,21 +423,6 @@ Compressible flow is characterised by abrupt changes in density within the flow
\begin{equation}
\label{eq:sensor}
S_e=\frac{||\rho^p_e-\rho^{p-1}_e||_{L_2}}{||\rho_e^p||_{L_2}}
\end{equation}
%In order to determine the solution at $p-1$ and treat the numerical fluxes at the edges of the element accordingly, the solution at polynomial order $p$, determined using a modified basis, has to be projected onto a hierarchical orthogonal expansion basis. Hence, using a more general formulation, the solution of a variable $u$ is expressed as:
%
%u^0=u
%
%where $u$ and $u^0$ represent the general solution obtained using a modified basis ($\textbf{B}$) and an orthogonal basis ($\textbf{B}^0$) respectively, hence:
%
%\textbf{B}^0\hat{u}^0 = \textbf{B}\hat{u} \rightarrow \hat{u}^0 = \left[\textbf{B}^0\right]^{-1}\textbf{B}\hat{u}
%
%where $\hat{u}$ represents the vector of coefficients at polynomial order $p^+$. Since a hierarchical basis is used, it is possible to lower the polynomial order to $p^-$ where $p^+ > p^-$. Since the coefficients are not coupled in the hierarchical orthogonal basis and the information about the mean is contained only in the first coefficient, it is possible to apply a cut-off filter to the orthogonal coefficient vector. This cut-off filter sets all the coefficients that are higher than $p^-$ equal to zero. The information contained in the high-frequency components is removed without altering the mean value. The orthogonal coefficients represent the solution at $p^-$ using an orthogonal basis, hence the following transformation has to be applied to obtain the modified filtered coefficients of the lower polynomial degree:
%
%\hat{u}_f=\left[\textbf{B}^{-1}\right] \textbf{B}^0\hat{u}^0_f
%
%The solution $\rho_e^{p-1}$ is obtained from $\rho^p_e$ using this post-processing step.\\
%\\
%The sensor is then used to identify the elements that require the addition of an extra diffusion term to avoid numerical oscillations due to local flow discontinuities (Gibbs phenomenon).
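The sensor of equation \ref{eq:sensor} can be evaluated cheaply in modal space. The sketch below is a minimal illustration, assuming an orthonormal hierarchical expansion per element, so that the $L_2$ norms reduce to Euclidean norms of the coefficient vectors; the function name and the coefficient layout (one entry per mode, highest modes last) are hypothetical.

```python
import numpy as np

def resolution_sensor(rho_hat, n_top):
    """Discontinuity sensor S_e = ||rho^p - rho^{p-1}||_L2 / ||rho^p||_L2.

    rho_hat : modal coefficients of the density in an orthonormal
              expansion (hypothetical layout: highest modes last).
    n_top   : number of highest-order modes removed by the cut-off
              filter when forming the p-1 solution.

    With an orthonormal basis the L2 norms reduce to Euclidean norms of
    the coefficient vectors, so no quadrature is needed.
    """
    rho_hat = np.asarray(rho_hat, dtype=float)
    num = np.linalg.norm(rho_hat[-n_top:])  # energy in the filtered modes
    den = np.linalg.norm(rho_hat)           # total modal energy
    return num / den

# A smooth solution concentrates energy in the low modes (small S_e);
# significant energy in the top mode signals under-resolution or a shock.
s_smooth = resolution_sensor([1.0, 0.5, 0.1, 0.01], n_top=1)
s_rough = resolution_sensor([1.0, 0.5, 0.4, 0.45], n_top=1)
```

A large sensor value therefore flags elements whose highest modes still carry significant energy, which is where the artificial diffusion below is activated.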
An artificial diffusion term is introduced locally to the Euler equations to deal with flow discontinuities and the resulting numerical oscillations. Two models are implemented: a non-smooth and a smooth artificial viscosity model.
\subsubsection{Non-smooth artificial viscosity model}
For the non-smooth artificial viscosity model, the added artificial viscosity is constant within each element and discontinuous between the elements. The Euler system is augmented by a Laplacian term on the right-hand side of equation \ref{eq:euler}. The diffusivity of the system is controlled by a variable viscosity coefficient $\epsilon$. The value of $\epsilon$ depends on $\epsilon_0$, the maximum viscosity, which is a function of the polynomial order ($p$), the mesh size ($h$) and the maximum wave speed, and on the local sensor value. Based on pre-defined sensor threshold values, the variable viscosity is set accordingly
 ... ... @@ -498,29 +483,9 @@ The following parameters can be set in the xml session file:
\end{lstlisting}
where, for now, \inltt{FH} and \inltt{FL} are used to tune which range of the sensor is used as a forcing term, and \inltt{C1} and \inltt{C2} are fixed constants that can be adjusted to make the model more or less diffusive. In general, however, these constants are kept fixed.
\subsection{Variable polynomial order}
A sensor-based $p$-adaptive algorithm is implemented to optimise the computational cost and accuracy.
%The motivation for implementing an adaptive algorithm in $Nektar^{++}$ is given in figure \ref{fig:conv}, where the convergence of the solution of the advection-diffusion-reaction equation is shown in a cube for a given set of boundary and initial conditions. The mesh consists of $8$ hexahedral elements.
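The exact dependence of $\epsilon$ on the sensor is elided in the hunk above, so the sketch below should be read only as one common realisation of an element-wise constant viscosity of this family (a smooth ramp between two sensor thresholds, scaled by $\epsilon_0 \sim \lambda_{max} h/p$); all names, defaults and the $\log_{10}$ scaling of the sensor are assumptions, not the implemented formula.

```python
import numpy as np

def nonsmooth_viscosity(sensor, p, h, lam_max, s_kappa=-4.0, kappa=1.0):
    """Element-wise constant artificial viscosity (hypothetical sketch).

    eps_0 scales with the sub-cell resolution h/p and the maximum local
    wave speed lam_max. The log10-scaled sensor switches the viscosity
    on between the thresholds s_kappa - kappa and s_kappa + kappa; the
    threshold values here are placeholders, not the code's defaults.
    """
    eps0 = lam_max * h / p                 # maximum admissible viscosity
    s = np.log10(sensor)
    if s < s_kappa - kappa:                # smooth element: no viscosity
        return 0.0
    if s > s_kappa + kappa:                # strong discontinuity: full eps0
        return eps0
    # smooth ramp between the two thresholds
    return 0.5 * eps0 * (1.0 + np.sin(0.5 * np.pi * (s - s_kappa) / kappa))
```

Because the viscosity is constant per element and jumps across element boundaries, no smoothness of $\epsilon$ is required by this model, which is what distinguishes it from the smooth variant.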
%The red line shows the convergence when uniform $h$-refinement is applied and the blue line shows the convergence of the solution when uniform $p$-refinement is performed.\\
%\begin{figure}[!htbp]
%\begin{center}
%\includegraphics[width = 0.5\textwidth]{figures/hvsp.eps}
%\caption{Comparison between uniform $h$ and $p$ refinement}
%\label{fig:conv}
%\end{center}
%\end{figure}
%Figure \ref{fig:conv} functions as the motivation for implementing this particular adaptive algorithm into a higher-order $h/p$ element code ($Nektar^{++}$).
The DG scheme allows one to use different polynomial orders in different elements, since the fluxes over the element boundaries are determined using a Riemann solver and there is no further coupling between the elements. Furthermore, the initial $p$-adaptive algorithm uses the same sensor as the shock-capturing algorithm to identify the smoothness of the local solution, so it is rather straightforward to employ both algorithms at the same time.\\
\\
%When dealing with different polynomial degrees between the elements, it is important to ensure an adequate treatment of the two following operations: the change of the polynomial degree of the solution within the element and the computation of the numerical flux on the interfaces between two elements. Both operations rely on the same procedure of projection and filtering of the available solution coefficients. This procedure is also used to determine the sensor in each element given in equation \ref{eq:sensor}.\\
%The solution at $p-1$ in an element is obtained by projecting the solution at polynomial order $p$, determined using a modified basis (see equations (\ref{eq:modified1D})-(\ref{eq:modified3D})), onto an orthogonal expansion basis (see equations (\ref{eq:ortho1D})-(\ref{eq:ortho3D})) \cite{BookSpencer}.
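The projection-and-filtering procedure described above (modified basis to orthogonal basis, cut-off of the high modes, transformation back) can be sketched numerically. This is a minimal stand-in, not the library's implementation: both bases are represented as square, invertible matrices evaluated at shared quadrature points, with monomials playing the role of the modified basis and Legendre polynomials the orthogonal one; all names are hypothetical.

```python
import numpy as np

def filter_to_lower_order(u_hat, B, B0, n_keep):
    """Project modified-basis coefficients onto an orthogonal basis,
    apply the cut-off filter, and transform back (sketch).

    B, B0  : square matrices whose columns are the modified and the
             orthogonal basis functions evaluated at the same
             quadrature points (assumed invertible).
    n_keep : number of orthogonal modes retained; modes of degree
             n_keep and above are zeroed by the cut-off filter.
    """
    u0 = np.linalg.solve(B0, B @ u_hat)    # u0 = [B^0]^{-1} B u_hat
    u0_f = u0.copy()
    u0_f[n_keep:] = 0.0                    # cut-off: drop high frequencies
    return np.linalg.solve(B, B0 @ u0_f)   # u_f = [B]^{-1} B^0 u0_f

# Stand-in bases on 4 Gauss points: monomials as the "modified" basis,
# Legendre polynomials as the orthogonal one.
x, w = np.polynomial.legendre.leggauss(4)
B = np.vander(x, 4, increasing=True)
B0 = np.stack([np.polynomial.legendre.Legendre.basis(k)(x)
               for k in range(4)], axis=1)

u_hat = np.array([1.0, 2.0, -1.0, 0.5])
u_f = filter_to_lower_order(u_hat, B, B0, n_keep=3)
# Only the first orthogonal coefficient carries the mean, so the
# Gauss-weighted element average of B @ u_f matches that of B @ u_hat.
```

The same mechanics apply whether the filter is used to build the $p-1$ solution for the sensor or to restrict an interface flux to a lower-order neighbour.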
%Using conservation arguments, the solution of a general variable $u$ is expressed as $u^0=u$, where $u^0$ and $u$ represent the general solution obtained using an orthogonal and a modified basis respectively, hence
%
%\textbf{B}^0 \hat{\textbf{u}}^0 = \textbf{B}\hat{\textbf{u}} \rightarrow \hat{\textbf{u}}^0 = \left[\textbf{B}^0\right]^{-1}\textbf{B}\hat{\textbf{u}}
%
%where $\hat{u}$ represents the vector of coefficients.
%Since the coefficients are not coupled in the orthogonal basis and the information about the mean is contained only in the first coefficient, it is possible to apply a filter to the orthogonal coefficient vector.
%The coefficients $\hat{u}^0$ are filtered by setting all the coefficients that are higher than $p-1$ equal to zero. The orthogonal coefficients represent the solution at $p-1$ using an orthogonal basis, hence the following transformation has to be applied to obtain the coefficients that belong to the modified basis:
%
%\hat{\textbf{u}}_f=\left[\textbf{B}^{-1}\right] \textbf{B}^0\hat{\textbf{u}}^0_f
%
%where $\hat{u}_f$ represents the filtered coefficients for a modified basis. In this way, the information contained in the high-frequency components is removed without altering the mean value. Hence, this procedure is also used when computing the advective numerical fluxes on the interface of two elements with different expansions, since the appropriate number of quadrature points has to be used to avoid numerical instabilities.\\
%\subsubsection{$p$-adaption}
The polynomial order in each element can be adjusted based on the sensor value that is obtained. Initially, a converged solution is obtained, after which the sensor in each element is calculated. Based on the determined sensor value and the pre-defined sensor thresholds, it is decided to increase, decrease or maintain the degree of the polynomial approximation in each element, and a new converged solution is obtained.\\
\label{eq:pswitch}
p_e =\left \{
\begin{array}{l}
 ... ... 
@@ -532,11 +497,5 @@ The polynomial order in each element can be adjusted based on the sensor value t
\right.
For now, the threshold values $s_e$, $s_{ds}$, $s_{sm}$ and $s_{fl}$ are determined empirically by looking at the sensor distribution in the domain. Once these values are set, two .txt files are outputted: one containing the composites, called VariablePComposites.txt, and one containing the expansions, called VariablePExpansions.txt. These values have to be copied into a new .xml file to create the adapted mesh.
%where $s_{ds}$, $s_{sm}$ and $s_{fl}$ are the threshold values to identify discontinuous, smooth and flat solutions respectively. This procedure is carried out iteratively.\\
%After the sensor is applied and the polynomial order of the element is changed, a similar filtering procedure as described above is performed to compute the advective numerical fluxes on the interface of two elements with different expansions, since the appropriate number of quadrature points has to be used. The number of quadrature points has to be equal to the number used by the highest polynomial degree of the two adjacent elements to avoid numerical instabilities \cite{SherwinConference}. To ensure conservation and stability, the continuity of the total flux is required and therefore
%
%\int_{\Gamma_{f^-}}\textbf{F}_-^u\,d\Gamma_f = \int_{\Gamma_{f^+}}\textbf{F}_+^u\,d\Gamma_f
%
%where $\textbf{F}^u_-$ and $\textbf{F}^u_+$ represent the numerical flux on the edge between two elements with a lower and a higher polynomial order respectively. If the order of the quadrature points is different, the coefficients are copied directly on the higher-resolved side, while fewer coefficients have to be set on the other side. The interface flux is then projected onto the space of orthogonal polynomials and filtered to delete the high-order frequencies. Once the degree of the orthogonal expansion is decreased to the lower degree, a reverse projection is carried out and modified coefficients are obtained.
%The newly calculated flux values are then used to determine the boundary integral for the higher-order element, while for the lower-order element they have to be interpolated onto the lower-order boundary.
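The threshold-based order switch of equation \ref{eq:pswitch} has its branches elided in the diff above, so the sketch below is only one plausible reading of how the sensor $s_e$ is compared against $s_{ds}$, $s_{sm}$ and $s_{fl}$: elements flagged as discontinuous or flat lose a degree, smooth-but-marginally-resolved elements gain one, and everything else keeps its order. The function name, the branch mapping and the order bounds are all assumptions.

```python
def adapt_order(sensor, p, s_ds, s_sm, s_fl, p_min=1, p_max=12):
    """Hypothetical case split for the p-switch of eq. (pswitch).

    sensor is compared against the discontinuity (s_ds), smoothness
    (s_sm) and flatness (s_fl) thresholds, with s_ds > s_sm > s_fl.
    """
    if sensor > s_ds:           # discontinuity: lower p to limit oscillations
        return max(p - 1, p_min)
    if sensor > s_sm:           # smooth but marginally resolved: raise p
        return min(p + 1, p_max)
    if sensor > s_fl:           # well resolved: keep the current order
        return p
    return max(p - 1, p_min)    # flat solution: a lower order suffices
```

In the iterative procedure described above, this decision would be applied element by element after each converged solve, followed by the interface-flux filtering needed between neighbours of unequal order.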