Commit c48b339b authored by Michael Turner's avatar Michael Turner
Browse files

Merge branch 'master' into 'feature/wss-compressible'

# Conflicts:
#   CHANGELOG.md
parents 7edb1f3d cfed59c8
......@@ -41,6 +41,7 @@ v5.0.0
- Fixed interppoints module (!760)
- Move StreamFunction utility to a FieldConvert module (!809)
- Extend wss module to compressible flows (!810)
- Allow explicitly setting bool options of FieldConvert modules as false (!811)
- Enable output to multiple files (!844)
- Allow using xml file without expansion tag in FieldConvert (!849)
......@@ -91,13 +92,21 @@ v4.4.1
**IncNavierStokesSolver**
- Fix an initialisation issue when using an additional advective field (!779)
- Fix MovingBody boundary condition (!852)
**Utilities**
- Fix vtkToFld missing dependency which prevented compiling with VTK 7.1 (!808)
**Documentation**
- Added missing details on artificial viscosity and dealiasing to compressible
flow solver user guide (!846)
**Packaging**
- Added missing package for FieldUtils library (!755)
**ADRSolver:**
- Fix UnsteadyAdvectionDiffusion with DG (!855)
v4.4.0
------
**Library**:
......
......@@ -221,7 +221,7 @@ to screen;
\item \inltt{TInf} farfield temperature (i.e. $T_{\infty}$). Default value = 288.15 $K$;
\item \inltt{Twall} temperature at the wall when isothermal boundary
conditions are employed (i.e. $T_{w}$). Default value = 300.15 $K$;
\item \inltt{uint} farfield $X$-component of the velocity (i.e. $u_{\infty}$). Default value = 0.1 $m/s$;
\item \inltt{uInf} farfield $X$-component of the velocity (i.e. $u_{\infty}$). Default value = 0.1 $m/s$;
\item \inltt{vInf} farfield $Y$-component of the velocity (i.e. $v_{\infty}$). Default value = 0.0 $m/s$;
\item \inltt{wInf} farfield $Z$-component of the velocity (i.e. $w_{\infty}$). Default value = 0.0 $m/s$;
\item \inltt{mu} dynamic viscosity (i.e. $\mu_{\infty}$). Default value = 1.78e-05 $Pa s$;
......@@ -437,7 +437,6 @@ Compressible flow is characterised by abrupt changes in density within the flow
\begin{equation}\label{eq:sensor}
S_e=\frac{||\rho^p_e-\rho^{p-1}_e||_{L_2}}{||\rho_e^p||_{L_2}}
\end{equation}
By default the comparison is made with the $p-1$ solution, but this can be changed by setting the parameter \inltt{SensorOffset}.
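For instance, to compare against the solution two polynomial orders lower instead (the value is purely illustrative, assuming \inltt{SensorOffset} gives the order offset used in the comparison), one could set in the \inltt{PARAMETERS} section:
\begin{lstlisting}[style=XmlStyle]
<PARAMETERS>
    <P> SensorOffset = 2 </P>
</PARAMETERS>
\end{lstlisting}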
An artificial diffusion term is introduced locally to the Euler equations to deal with flow discontinuities and the consequent numerical oscillations. Two models are implemented: a non-smooth and a smooth artificial viscosity model.
\subsubsection{Non-smooth artificial viscosity model}
For the non-smooth artificial viscosity model the added artificial viscosity is constant in each element and discontinuous between the elements. The Euler system is augmented by an added Laplacian term on the right-hand side of equation \ref{eq:euler}. The diffusivity of the system is controlled by a variable viscosity coefficient $\epsilon$, whose value depends on $\epsilon_0$, the maximum viscosity, which in turn depends on the polynomial order ($p$), the mesh size ($h$) and the maximum wave speed, as well as on the local sensor value. Based on pre-defined sensor threshold values, the variable viscosity is set accordingly
......@@ -450,6 +449,24 @@ For the non-smooth artificial viscosity model the added artificial viscosity is
\end{array}
\right.
\end{equation}
To enable the non-smooth viscosity model, the following line has to be added to the \inltt{SOLVERINFO} section:
\begin{lstlisting}[style=XmlStyle]
<SOLVERINFO>
<I PROPERTY="ShockCaptureType" VALUE="NonSmooth" />
</SOLVERINFO>
\end{lstlisting}
The diffusivity is controlled by the following parameters:
\begin{lstlisting}[style=XmlStyle]
<PARAMETERS>
<P> Skappa = -1.3 </P>
<P> Kappa = 0.2 </P>
<P> mu0 = 1.0 </P>
</PARAMETERS>
\end{lstlisting}
where \inltt{mu0} is the maximum value of the viscosity coefficient,
\inltt{Kappa} is half the width of the transition interval and \inltt{Skappa}
is the value of the centre of the interval.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width = 0.47 \textwidth]{img/Mach_P4.pdf}
......@@ -458,6 +475,7 @@ For the non-smooth artificial viscosity model the added artificial viscosity is
\label{fig:}
\end{center}
\end{figure}
\subsubsection{Smooth artificial viscosity model}
For the smooth artificial viscosity model an extra PDE for the artificial viscosity is appended to the Euler system
\begin{equation}\label{eq:eulerplusvis}\begin{split}
......@@ -512,54 +530,52 @@ The polynomial order in each element can be adjusted based on the sensor value t
\right.
\end{equation}
For now, the threshold values $s_e$, $s_{ds}$, $s_{sm}$ and $s_{fl}$ are determined empirically by looking at the sensor distribution in the domain. Once these values are set, two .txt files are output: one containing the composites (VariablePComposites.txt) and one containing the expansions (VariablePExpansions.txt). These values have to be copied into a new .xml file to create the adapted mesh.
\subsection{De-Aliasing Techniques}
Aliasing effects, arising as a consequence of the nonlinearity of the
underlying problem, need to be addressed to stabilise the simulations. Aliasing
appears when nonlinear quantities are calculated at an insufficient number of
quadrature points. We can identify two types of nonlinearities:
\begin{itemize}
\item PDE nonlinearities, related to the nonlinear and quasi-linear fluxes.
\item Geometrical nonlinearities, related to deformed/curved meshes.
\end{itemize}
We consider two de-aliasing strategies based on the concept of consistent integration:
\subsection{Quasi-1D nozzle flow}
A quasi-1D inviscid flow (flow with area variation) can be obtained using the
\inltt{Quasi1D} forcing in a 1D solution of the Euler equations:
\begin{lstlisting}[style=XmlStyle]
<FORCING>
<FORCE TYPE="Quasi1D">
<AREAFCN> Area </AREAFCN>
</FORCE>
</FORCING>
\end{lstlisting}
In this case, a function named \inltt{Area} must be specified in the \inltt{CONDITIONS} section of the session file.
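A minimal sketch of such a function (the area law and the variable name \inltt{A} are illustrative assumptions, not requirements of the solver):
\begin{lstlisting}[style=XmlStyle]
<FUNCTION NAME="Area">
    <E VAR="A" VALUE="1.0 + 2.2*(x-1.5)*(x-1.5)" />
</FUNCTION>
\end{lstlisting}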
\begin{itemize}
\item Local dealiasing: It only targets the PDE-aliasing sources, applying a consistent integration of them locally.
\item Global dealiasing: It targets both the PDE and the geometrical-aliasing sources. It requires a richer quadrature order to consistently integrate the nonlinear fluxes, the geometric factors, the mass matrix and the boundary term.
\end{itemize}
In this case, it is possible to prescribe the inflow conditions in terms of stagnation properties (density and pressure)
by using the following boundary condition
\begin{lstlisting}[style=XmlStyle]
<BOUNDARYCONDITIONS>
<REGION REF="0">
<D VAR="rho" USERDEFINEDTYPE="StagnationInflow" VALUE="rhoStag" />
<D VAR="rhou" USERDEFINEDTYPE="StagnationInflow" VALUE="0" />
<D VAR="E" USERDEFINEDTYPE="StagnationInflow" VALUE="pStag/(Gamma-1)" />
</REGION>
</BOUNDARYCONDITIONS>
\end{lstlisting}
Since Nektar++ tackles the PDE and geometric aliasing separately during the
projection and solution of the equations, to consistently
integrate all the nonlinearities in the compressible
Navier--Stokes equations the quadrature points should
be selected based on the maximum order of the nonlinearities:
\begin{equation}
Q_{min}= P_{exp}+\frac{\max(2P_{exp},P_{geom})}{2} + \frac{3}{2}
\end{equation}
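As a worked example (the geometric order chosen here is illustrative): for $P_{exp}=6$, i.e. \inltt{NUMMODES} of 7, on a mesh of geometric order $P_{geom}=2$,
\begin{equation*}
Q_{min} = 6 + \frac{\max(12,2)}{2} + \frac{3}{2} = 13.5,
\end{equation*}
so at least 14 quadrature points per direction are needed, consistent with the expansion with \inltt{NUMMODES}="7,7" and \inltt{NUMPOINTS}="14,14" shown further below.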
\subsection{Axi-symmetric flow}
An axi-symmetric inviscid flow (with symmetry axis on x=0) can be obtained using
the \inltt{AxiSymmetric} forcing in a 2D solution of the Euler equations:
\begin{lstlisting}[style=XmlStyle]
<FORCING>
<FORCE TYPE="AxiSymmetric">
</FORCE>
</FORCING>
\end{lstlisting}
The \inltt{StagnationInflow} boundary condition can also be used in this case.
where $Q_{min}$ is the minimum required number of quadrature
points to exactly integrate the highest-degree nonlinearity,
$P_{exp}$ is the order of the polynomial expansion and $P_{geom}$
is the geometric order of the mesh. Bear in mind that we are
using a discontinuous discretisation, meaning that aliasing
effects are not fully controlled, since the boundary terms
introduce non-polynomial functions into the problem.
To enable the global de-aliasing technique, modify the number of quadrature
points by:
Also, by defining the geometry with \inltt{<GEOMETRY DIM="2" SPACE="3">} (i.e. a two-dimensional
mesh in three-dimensional space) and adding the \inltt{rhow} variable, we obtain an axi-symmetric
flow with swirl, in which case the \inltt{StagnationInflow} boundary condition allows prescribing \inltt{rhow}:
\begin{lstlisting}[style=XmlStyle]
<BOUNDARYCONDITIONS>
<REGION REF="0">
<D VAR="rho" USERDEFINEDTYPE="StagnationInflow" VALUE="rhoStag" />
<D VAR="rhou" USERDEFINEDTYPE="StagnationInflow" VALUE="0" />
<D VAR="rhov" USERDEFINEDTYPE="StagnationInflow" VALUE="0" />
<D VAR="rhow" USERDEFINEDTYPE="StagnationInflow" VALUE="x" />
<D VAR="E" USERDEFINEDTYPE="StagnationInflow" VALUE="pStag/(Gamma-1)" />
</REGION>
</BOUNDARYCONDITIONS>
<E COMPOSITE="[101]"
BASISTYPE="Modified_A,Modified_A"
NUMMODES="7,7"
POINTSTYPE="GaussLobattoLegendre,GaussLobattoLegendre"
NUMPOINTS="14,14"
FIELDS="rho,rhou,rhov,E"
/>
\end{lstlisting}
where \inltt{NUMMODES} corresponds to $P+1$, with $P$ the order of the polynomial
used to approximate the solution, and \inltt{NUMPOINTS} specifies the number of
quadrature points.
......@@ -109,7 +109,7 @@ void Module::RegisterConfig(string key, string val)
it->second.m_beenSet = true;
if (it->second.m_isBool)
if (it->second.m_isBool && val=="")
{
it->second.m_value = "1";
}
......
......@@ -196,7 +196,8 @@ public:
virtual ModulePriority GetModulePriority() = 0;
FIELD_UTILS_EXPORT void RegisterConfig(std::string key, std::string value);
FIELD_UTILS_EXPORT void RegisterConfig(std::string key,
std::string value = "");
FIELD_UTILS_EXPORT void PrintConfig();
FIELD_UTILS_EXPORT void SetDefaults();
......
......@@ -104,7 +104,7 @@ void OutputFileBase::Process(po::variables_map &vm)
}
if (m_f->m_writeBndFld)
{
int nfields = m_f->m_exp.size();
int nfields = m_f->m_variables.size();
int normdim = m_f->m_graph->GetMeshDimension();
// Prepare for normals output
......
......@@ -613,7 +613,7 @@ void OutputTecplotBinary::WriteDoubleOrFloat(std::ofstream &outfile,
Array<OneD, NekDouble> &data)
{
// Data format: either double or single depending on user options
bool useDoubles = m_config["double"].m_beenSet;
bool useDoubles = m_config["double"].as<bool>();
if (useDoubles)
{
......@@ -642,7 +642,7 @@ void OutputTecplotBinary::WriteTecplotZone(std::ofstream &outfile)
Array<OneD, NekDouble> fieldMax(m_fields.num_elements());
// Data format: either double or single depending on user options
bool useDoubles = m_config["double"].m_beenSet;
bool useDoubles = m_config["double"].as<bool>();
if ((m_oneOutputFile && m_f->m_comm->GetRank() == 0) || !m_oneOutputFile)
{
......
......@@ -59,7 +59,7 @@ ProcessBoundaryExtract::ProcessBoundaryExtract(FieldSharedPtr f)
// set up default values.
m_config["bnd"] = ConfigOption(false, "All", "Boundary to be processed");
m_config["addnormals"] =
ConfigOption(true, "NotSet", "Add normals to output");
ConfigOption(true, "0", "Add normals to output");
f->m_writeBndFld = true;
f->m_declareExpansionAsContField = true;
......@@ -72,7 +72,7 @@ ProcessBoundaryExtract::~ProcessBoundaryExtract()
void ProcessBoundaryExtract::Process(po::variables_map &vm)
{
m_f->m_addNormals = m_config["addnormals"].m_beenSet;
m_f->m_addNormals = m_config["addnormals"].as<bool>();
// Set up Field options to output boundary fld
string bvalues = m_config["bnd"].as<string>();
......
......@@ -108,7 +108,7 @@ ProcessDisplacement::ProcessDisplacement(FieldSharedPtr f)
ConfigOption(false, "", "Name of file containing high order boundary");
m_config["usevertexids"] = ConfigOption(
false, "0", "Use vertex IDs instead of face IDs for matching");
true, "0", "Use vertex IDs instead of face IDs for matching");
}
ProcessDisplacement::~ProcessDisplacement()
......@@ -129,7 +129,7 @@ void ProcessDisplacement::Process(po::variables_map &vm)
return;
}
bool useVertexIds = m_config["usevertexids"].m_beenSet;
bool useVertexIds = m_config["usevertexids"].as<bool>();
vector<string> files;
files.push_back(toFile);
......
......@@ -60,10 +60,10 @@ ProcessEquiSpacedOutput::ProcessEquiSpacedOutput(FieldSharedPtr f)
: ProcessModule(f)
{
m_config["tetonly"] =
ConfigOption(true, "NotSet", "Only process tetrahedral elements");
ConfigOption(true, "0", "Only process tetrahedral elements");
m_config["modalenergy"] =
ConfigOption(true, "NotSet", "Write output as modal energy");
ConfigOption(true, "0", "Write output as modal energy");
}
ProcessEquiSpacedOutput::~ProcessEquiSpacedOutput()
......@@ -199,7 +199,7 @@ void ProcessEquiSpacedOutput::Process(po::variables_map &vm)
for (int i = 0; i < nel; ++i)
{
e = m_f->m_exp[0]->GetExp(i);
if (m_config["tetonly"].m_beenSet)
if (m_config["tetonly"].as<bool>())
{
if (m_f->m_exp[0]->GetExp(i)->DetShapeType() !=
LibUtilities::eTetrahedron)
......@@ -365,7 +365,7 @@ void ProcessEquiSpacedOutput::Process(po::variables_map &vm)
cnt = 0;
int cnt1 = 0;
if (m_config["modalenergy"].m_beenSet)
if (m_config["modalenergy"].as<bool>())
{
Array<OneD, const NekDouble> phys = m_f->m_exp[n]->GetPhys();
for (int i = 0; i < nel; ++i)
......
......@@ -58,7 +58,7 @@ ProcessHomogeneousPlane::ProcessHomogeneousPlane(FieldSharedPtr f)
{
m_config["planeid"] = ConfigOption(false, "NotSet", "plane id to extract");
m_config["wavespace"] =
ConfigOption(true, "NotSet", "Extract plane in Fourier space");
ConfigOption(true, "0", "Extract plane in Fourier space");
}
ProcessHomogeneousPlane::~ProcessHomogeneousPlane()
......@@ -108,7 +108,7 @@ void ProcessHomogeneousPlane::Process(po::variables_map &vm)
int n = s * nfields + i;
m_f->m_exp[n] = m_f->m_exp[n]->GetPlane(plane);
if (m_config["wavespace"].m_beenSet)
if (m_config["wavespace"].as<bool>())
{
m_f->m_exp[n]->BwdTrans(m_f->m_exp[n]->GetCoeffs(),
m_f->m_exp[n]->UpdatePhys());
......
......@@ -63,7 +63,7 @@ ProcessInnerProduct::ProcessInnerProduct(FieldSharedPtr f) : ProcessModule(f)
false, "NotSet", "Take inner product of multiple field files with "
"ids given in string. i.e. file_0.chk file_1.chk ...");
m_config["allfromflds"] =
ConfigOption(true, "NotSet", "Take inner product between all fromflds, "
ConfigOption(true, "0", "Take inner product between all fromflds, "
"requires multifldids to be set");
}
......@@ -101,7 +101,7 @@ void ProcessInnerProduct::Process(po::variables_map &vm)
string multifldidsstr = m_config["multifldids"].as<string>();
vector<unsigned int> multiFldIds;
vector<string> fromfiles;
bool allfromflds = m_config["allfromflds"].m_beenSet;
bool allfromflds = m_config["allfromflds"].as<bool>();
if (fields.compare("All") == 0)
{
......
......@@ -82,11 +82,11 @@ ProcessIsoContour::ProcessIsoContour(FieldSharedPtr f) :
m_config["fieldvalue"] = ConfigOption(false, "NotSet",
"field value to extract");
m_config["globalcondense"] = ConfigOption(true, "NotSet",
m_config["globalcondense"] = ConfigOption(true, "0",
"Globally condense contour to unique "
"values");
m_config["smooth"] = ConfigOption(true, "NotSet",
m_config["smooth"] = ConfigOption(true, "0",
"Smooth isocontour (might require "
"globalcondense)");
......@@ -197,8 +197,8 @@ void ProcessIsoContour::Process(po::variables_map &vm)
}
// Process isocontour
bool smoothing = m_config["smooth"].m_beenSet;
bool globalcondense = m_config["globalcondense"].m_beenSet;
bool smoothing = m_config["smooth"].as<bool>();
bool globalcondense = m_config["globalcondense"].as<bool>();
if(globalcondense)
{
if(verbose)
......
......@@ -61,7 +61,7 @@ ModuleKey ProcessQualityMetric::className =
ProcessQualityMetric::ProcessQualityMetric(FieldSharedPtr f) : ProcessModule(f)
{
m_config["scaled"] =
ConfigOption(true, "", "use scaled jacobian instead");
ConfigOption(true, "0", "use scaled jacobian instead");
}
ProcessQualityMetric::~ProcessQualityMetric()
......@@ -101,7 +101,7 @@ void ProcessQualityMetric::Process(po::variables_map &vm)
// copy Jacobian into field
LocalRegions::ExpansionSharedPtr Elmt = exp->GetExp(i);
int offset = exp->GetPhys_Offset(i);
Array<OneD, NekDouble> q = GetQ(Elmt,m_config["scaled"].m_beenSet);
Array<OneD, NekDouble> q = GetQ(Elmt,m_config["scaled"].as<bool>());
Array<OneD, NekDouble> out = phys + offset;
ASSERTL0(q.num_elements() == Elmt->GetTotPoints(),
......
......@@ -1075,18 +1075,8 @@ void Mapping::v_UpdateBCs( const NekDouble time)
int nbnds = m_fields[0]->GetBndConditions().num_elements();
// Declare variables
Array<OneD, int> BCtoElmtID;
Array<OneD, int> BCtoTraceID;
Array<OneD, const SpatialDomains::BoundaryConditionShPtr> BndConds;
Array<OneD, MultiRegions::ExpListSharedPtr> BndExp;
StdRegions::StdExpansionSharedPtr elmt;
StdRegions::StdExpansionSharedPtr Bc;
Array<OneD, NekDouble> ElmtVal(physTot, 0.0);
Array<OneD, NekDouble> BndVal(physTot, 0.0);
Array<OneD, NekDouble> coordVelElmt(physTot, 0.0);
Array<OneD, NekDouble> coordVelBnd(physTot, 0.0);
Array<OneD, NekDouble> Vals(physTot, 0.0);
Array<OneD, bool> isDirichlet(nfields);
......@@ -1180,66 +1170,26 @@ void Mapping::v_UpdateBCs( const NekDouble time)
{
BndConds = m_fields[i]->GetBndConditions();
BndExp = m_fields[i]->GetBndCondExpansions();
// Loop boundary conditions again to get correct
// values for cnt
int cnt = 0;
for(int m = 0 ; m < nbnds; ++m)
if( BndConds[n]->GetUserDefined() =="" ||
BndConds[n]->GetUserDefined() =="MovingBody")
{
int exp_size = BndExp[m]->GetExpSize();
if (m==n && isDirichlet[i])
{
for (int j = 0; j < exp_size; ++j, cnt++)
{
m_fields[i]->GetBoundaryToElmtMap(BCtoElmtID,
BCtoTraceID);
/// Casting the bnd exp to the specific case
Bc = std::dynamic_pointer_cast<
StdRegions::StdExpansion>
(BndExp[n]->GetExp(j));
// Get element expansion
elmt = m_fields[i]->GetExp(BCtoElmtID[cnt]);
// Get values on the element
ElmtVal = values[i] +
m_fields[i]->GetPhys_Offset(
BCtoElmtID[cnt]);
// Get values on boundary
elmt->GetTracePhysVals(BCtoTraceID[cnt],
Bc, ElmtVal, BndVal);
// Pointer to value that should be updated
Vals = BndExp[n]->UpdatePhys()
+ BndExp[n]->GetPhys_Offset(j);
// Copy result
Vmath::Vcopy(Bc->GetTotPoints(),
BndVal, 1, Vals, 1);
// Apply MovingBody correction
if ( (i<nvel) &&
BndConds[n]->GetUserDefined() ==
"MovingBody" )
{
// get coordVel in the element
coordVelElmt = coordVel[i] +
m_fields[i]->GetPhys_Offset(
BCtoElmtID[cnt]);
// Get values on boundary
elmt->GetTracePhysVals(
BCtoTraceID[cnt], Bc,
coordVelElmt, coordVelBnd);
// Apply correction
Vmath::Vadd(Bc->GetTotPoints(),
coordVelBnd, 1,
Vals, 1, Vals, 1);
}
}
}
else // setting if m!=n
m_fields[i]->ExtractPhysToBnd(n,
values[i], BndExp[n]->UpdatePhys());
// Apply MovingBody correction
if ( (i<nvel) &&
BndConds[n]->GetUserDefined() ==
"MovingBody" )
{
cnt += exp_size;
// Get coordinate velocity on boundary
Array<OneD, NekDouble> coordVelBnd(BndExp[n]->GetTotPoints());
m_fields[i]->ExtractPhysToBnd(n, coordVel[i], coordVelBnd);
// Apply correction
Vmath::Vadd(BndExp[n]->GetTotPoints(),
coordVelBnd, 1,
BndExp[n]->UpdatePhys(), 1,
BndExp[n]->UpdatePhys(), 1);
}
}
}
......@@ -1256,12 +1206,16 @@ void Mapping::v_UpdateBCs( const NekDouble time)
if ( BndConds[n]->GetBoundaryConditionType() ==
SpatialDomains::eDirichlet)
{
BndExp[n]->FwdTrans_BndConstrained(BndExp[n]->GetPhys(),
BndExp[n]->UpdateCoeffs());
if (m_fields[i]->GetExpType() == MultiRegions::e3DH1D)
if( BndConds[n]->GetUserDefined() =="" ||
BndConds[n]->GetUserDefined() =="MovingBody")
{
BndExp[n]->HomogeneousFwdTrans(BndExp[n]->GetCoeffs(),
BndExp[n]->UpdateCoeffs());
BndExp[n]->FwdTrans_BndConstrained(BndExp[n]->GetPhys(),
BndExp[n]->UpdateCoeffs());
if (m_fields[i]->GetExpType() == MultiRegions::e3DH1D)
{
BndExp[n]->HomogeneousFwdTrans(BndExp[n]->GetCoeffs(),
BndExp[n]->UpdateCoeffs());
}
}
}
}
......
......@@ -2350,22 +2350,6 @@ namespace Nektar
ASSERTL0(false, "This type of BC not implemented yet");
}
}
else if (boost::iequals(m_bndConditions[i]->GetUserDefined(),
"MovingBody"))
{
locExpList = m_bndCondExpansions[i];
if (m_bndConditions[i]->GetBoundaryConditionType()
== SpatialDomains::eDirichlet)
{
locExpList->FwdTrans_IterPerExp(
locExpList->GetPhys(),
locExpList->UpdateCoeffs());
}
else
{
ASSERTL0(false, "This type of BC not implemented yet");
}
}
}
}
} // end of namespace
......
......@@ -251,9 +251,7 @@ namespace Nektar
for (n = 0; n < m_bndCondExpansions.num_elements(); ++n)
{
if (time == 0.0 ||
m_bndConditions[n]->IsTimeDependent() ||
boost::iequals(m_bndConditions[n]->GetUserDefined(),
"MovingBody"))
m_bndConditions[n]->IsTimeDependent() )
{
m_bndCondExpansions[n]->HomogeneousFwdTrans(
m_bndCondExpansions[n]->GetCoeffs(),
......
......@@ -433,7 +433,7 @@ void FilterFieldConvert::CreateModules( vector<string> &modcmds)
if (tmp2.size() == 1)
{
mod->RegisterConfig(tmp2[0], "1");
mod->RegisterConfig(tmp2[0]);
}
else if (tmp2.size() == 2)
{
......@@ -467,6 +467,7 @@ void FilterFieldConvert::CreateModules( vector<string> &modcmds)
module.second = string("equispacedoutput");
mod = GetModuleFactory().CreateInstance(module, m_f);
m_modules.insert(m_modules.end()-1, mod);
mod->SetDefaults();
}
// Check if modules provided are compatible
......
......@@ -79,6 +79,9 @@ IF( NEKTAR_SOLVER_ADR )
# 2D advection diffusion (Imex DG)
ADD_NEKTAR_TEST(UnsteadyAdvectionDiffusion_2D_ImexDG)
# 2D advection diffusion (Weak DG)
ADD_NEKTAR_TEST(UnsteadyAdvectionDiffusion_2D_WeakDG)
# 3D discontinuous advection
ADD_NEKTAR_TEST(Advection3D_m12_DG_hex_periodic_nodal LENGTHY)
ADD_NEKTAR_TEST(Advection3D_m12_DG_hex_nodal)
......
......@@ -271,8 +271,9 @@ namespace Nektar
for (int i = 0; i < nVariables; ++i)
{
Vmath::Vadd(nSolutionPts, &outarray[i][0], 1,
&outarrayDiff[i][0], 1, &outarray[i][0], 1);
Vmath::Svtvp(nSolutionPts, m_epsilon, &outarrayDiff[i][0], 1,
&outarray[i][0], 1,
&outarray[i][0], 1);
}
}
......
<?xml version="1.0" encoding="utf-8"?>
<test>
<description>2D Advection-Diffusion with WeakDG (epsilon = 0.5)</description>
<executable>ADRSolver</executable>
<parameters>UnsteadyAdvectionDiffusion_2D_WeakDG.xml</parameters>
<files>
<file description="Session File">UnsteadyAdvectionDiffusion_2D_WeakDG.xml</file>
</files>
<metrics>
<metric type="L2" id="1">
<value variable="u" tolerance="1e-08">3.087e-08</value>
</metric>
<metric type="Linf" id="2">