Commit d18ef8d8 authored by Chris Cantwell's avatar Chris Cantwell

Merge branch 'feature/FieldConvertCleanUp' into 'master'

Feature/field convert clean up

This MR includes a number of new features and fixes for the FieldConvert utility. It also deprecates a number of standalone utilities.

See merge request !475
parents 6c3be247 23954a9a
......@@ -93,9 +93,9 @@ stands for \inltt{m}odule)..
Specifically, FieldConvert has these additional functionalities
%
\begin{enumerate}
\item \inltt{C0Projection}: Computes the C0 projection of a given output file;
\item \inltt{QCriterion}: Computes the Q-Criterion for a given output file;
\item \inltt{addFld}: Sum two .fld files;
\item \inltt{concatenate}: Concatenate a \nekpp binary output (.chk or .fld) field file into a single file;
\item \inltt{equispacedoutput}: Write data as equi-spaced output using simplices to represent the data for connecting points;
\item \inltt{extract}: Extract a boundary field;
......@@ -103,8 +103,13 @@ Specifically, FieldConvert has these additional functionalities
\item \inltt{interppointdatatofld}: Interpolates given discrete data to a .fld file using a finite difference approximation, given an xml file;
\item \inltt{interppoints}: Interpolates a field onto a given set of points; requires \inltt{fromfld} and \inltt{fromxml} to be defined; a line or plane of points can be defined;
\item \inltt{isocontour}: Extract an isocontour of ``fieldid'' variable and at value ``fieldvalue''. Optionally ``fieldstr'' can be specified for a string definition or ``smooth'' for smoothing;
\item \inltt{jacobianenergy}: Shows high frequency energy of Jacobian;
\item \inltt{printfldnorms}: Print L2 and LInf norms to stdout;
\item \inltt{scalargrad}: Computes scalar gradient field;
\item \inltt{scaleinputfld}: Rescale input field by a constant factor;
\item \inltt{shear}: Computes time-averaged shear stress metrics: TAWSS, OSI, transWSS, TAAFI, TACFI, WSSG;
\item \inltt{vorticity}: Computes the vorticity field;
\item \inltt{wss}: Computes wall shear stress field.
\end{enumerate}
The module list above can be seen by running the command
%
......@@ -117,25 +122,6 @@ In the following we will detail the usage of each module.
%
%
%
%
%
\subsubsection{Smooth the data: \textit{C0Projection} module}
To smooth the data of a given .fld file one can
use the \inltt{C0Projection} module of FieldConvert
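A plausible form of this command, following the argument conventions of the other modules (the file names here are illustrative):
%
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m C0Projection test.xml test.fld test-C0Proj.fld
\end{lstlisting}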
......@@ -162,6 +148,26 @@ to visualise either in Tecplot or in Paraview the result.
%
%
%
\subsubsection{Sum two .fld files: \textit{addFld} module}
To sum two .fld files one can use the \inltt{addFld} module of FieldConvert
%
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m addfld:fromfld=file1.fld:scale=-1 file1.xml file2.fld file3.fld
\end{lstlisting}
%
In this case we use it in conjunction with the option \inltt{scale},
which multiplies the values of a given .fld file by a constant
\inltt{value}. \inltt{file1.fld} is the file multiplied by
\inltt{value}, \inltt{file1.xml} is the associated session file,
\inltt{file2.fld} is the .fld file which is summed to
\inltt{file1.fld} and finally \inltt{file3.fld} is the output which
contains the sum of the two .fld files.
\inltt{file3.fld} can be processed in a similar way as described
in section \ref{s:utilities:fieldconvert:sub:convert} to visualise
the result in either Tecplot or Paraview.
%
%
%
\subsubsection{Concatenate two files: \textit{concatenate} module}
To concatenate \inltt{file1.fld} and \inltt{file2.fld} into \inltt{file-conc.fld}
one can run the following command
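One plausible form of this command, following the argument conventions of the other modules (the session file name here is illustrative):
%
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m concatenate mesh.xml file1.fld file2.fld file-conc.fld
\end{lstlisting}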
......@@ -233,6 +239,20 @@ a Paraview output.
%
%
%
\subsubsection{Compute the gradient of a field: \textit{gradient} module}
To compute the spatial gradients of all fields one can run the following command
%
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m gradient test.xml test.fld test-grad.fld
\end{lstlisting}
%
where the file \inltt{test-grad.fld} can be processed in a similar
way as described in section \ref{s:utilities:fieldconvert:sub:convert}
to visualise the result in either Tecplot or Paraview.
%
%
%
%
\subsubsection{Interpolate one field to another: \textit{interpfield} module}
To interpolate one field to another, one can use the following command:
%
......@@ -312,11 +332,11 @@ The Inverse Distance implementation has no such requirement.
\subsubsection{Interpolate a field to a series of points: \textit{interppoints} module}
You can interpolate one field to a series of given points using the following command:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m interppoints:fromxml=file1.xml:fromfld=file1.fld \
file2.pts file2.dat
\end{lstlisting}
This command will interpolate the field defined by \inltt{file1.xml} and
\inltt{file1.fld} to the points defined in \inltt{file2.pts} and output it to
\inltt{file2.dat}.
The \inltt{fromxml} and \inltt{fromfld} must be specified in this module.
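To sample along a line of points rather than a \inltt{.pts} file, one possible invocation is the following, where the \inltt{line} option syntax and the numeric values are illustrative and should be checked against the module help:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m interppoints:fromxml=file1.xml:fromfld=file1.fld:line=10,0,0,1,1 \
        file2.dat
\end{lstlisting}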
The format of the file \inltt{file2.pts} is of the same form as for the
......@@ -392,16 +412,64 @@ automatically calls a \inltt{globalcondense} option which remove
multiply-defined vertices from the simplex definition, which arise as
isocontours are generated element by element.
In addition to the \inltt{smooth} or \inltt{globalcondense} options
you can specify \inltt{removesmallcontour}=100, which will remove
separate isocontours of fewer than 100 triangles. This option requires
\inltt{smooth} or \inltt{globalcondense} to be specified.
\begin{notebox}
Currently this option is only set up for triangles, quadrilaterals,
tetrahedrons and prisms.
\end{notebox}
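Putting the options above together, a sketch of a full invocation (the field expression, value and file names are illustrative):
%
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m isocontour:fieldstr="u+v":fieldvalue=0.5:globalcondense:\
smooth:removesmallcontour=100 test.xml test.fld test-iso.dat
\end{lstlisting}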
%
%
%
%
\subsubsection{Show high frequency energy of the Jacobian: \textit{jacobianenergy} module}
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m jacobianenergy file.xml file.fld jacenergy.fld
\end{lstlisting}
The option \inltt{topmodes} can be used to specify the number of top modes to
keep.
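For example, to keep only the top mode (the value is illustrative):
%
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m jacobianenergy:topmodes=1 file.xml file.fld jacenergy.fld
\end{lstlisting}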
The output file \inltt{jacenergy.fld} can be processed in a similar
way as described in section \ref{s:utilities:fieldconvert:sub:convert}
to visualise the result in either Tecplot or Paraview.
%
%
%
\subsubsection{Print L2 and LInf norms: \textit{printfldnorms} module}
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m printfldnorms test.xml test.fld
\end{lstlisting}
This module does not create an output file. The L2 and LInf norms of each field variable are printed to stdout.
%
%
%
\subsubsection{Compute the scalar gradient: \textit{scalargrad} module}
The scalar gradient of a field is computed by running:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m scalargrad:bnd=0 test.xml test.fld test-scalgrad.fld
\end{lstlisting}
The option \inltt{bnd} specifies which boundary region to extract. Note this is different to MeshConvert, where the parameter \inltt{surf} is specified and corresponds to composites rather than boundaries. If \inltt{bnd} is not provided, all boundaries are extracted to different fields. To process this file you will need an xml file of the same region.
%
%
%
\subsubsection{Scale a given .fld: \textit{scaleinputfld} module}
To scale a .fld file by a given scalar quantity, the user can run:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m scaleinputfld:scale=value test.xml test.fld test-scal.fld
\end{lstlisting}
The argument \inltt{scale=value} rescales \inltt{test.fld} by the
factor \inltt{value}.
......@@ -409,6 +477,29 @@ The output file \inltt{file-conc.fld} can be processed in a similar
way as described in section \ref{s:utilities:fieldconvert:sub:convert}
to visualise the result in either Tecplot or Paraview.
%
%
%
\subsubsection{Time-averaged shear stress metrics: \textit{shear} module}
Time-dependent wall shear stress derived metrics relevant to cardiovascular fluid dynamics research can be computed using this module. They are
\begin{itemize}
\item TAWSS: time-averaged wall shear stress;
\item OSI: oscillatory shear index;
\item transWSS: transverse wall shear stress;
\item TACFI: time-averaged cross-flow index;
\item TAAFI: time-averaged aneurysm formation index;
\item |WSSG|: wall shear stress gradient.
\end{itemize}
To compute these, the user can run:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m shear:N=value:fromfld=test_id_b0.fld test.xml test-multishear.fld
\end{lstlisting}
The arguments \inltt{N} and \inltt{fromfld} are compulsory. They respectively define the number of \inltt{fld} files (corresponding to the number of discrete equispaced time-steps) and the first \inltt{fld} file, which should have the form \inltt{test\_id\_b0.fld}, where the first underscore in the name marks the starting time-step file ID.
The input \inltt{.fld} files are the outputs of the \textit{wss} module. If they do not contain the surface normals (an optional output of the \textit{wss} module), then the \textit{shear} module will not compute the last metric, |WSSG|.
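A sketch of the overall workflow, assuming ten equispaced checkpoint files starting at ID 0 (the file names and the loop over checkpoints are illustrative): first extract the wall shear stress, with normals, for each checkpoint using the \textit{wss} module, then pass the first of the resulting files to \textit{shear}:
%
\begin{lstlisting}[style=BashInputStyle]
# Extract wall shear stress (with normals) for each of 10 checkpoints
for i in $(seq 0 9); do
    FieldConvert -m wss:bnd=0:addnormals=1 test.xml test_${i}.chk test_${i}_b0.fld
done
# Compute the time-averaged metrics from the 10 wss outputs
FieldConvert -m shear:N=10:fromfld=test_0_b0.fld test.xml test-multishear.fld
\end{lstlisting}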
%
%
%
......@@ -423,6 +514,17 @@ way as described in section \ref{s:utilities:fieldconvert:sub:convert}.
%
%
%
\subsubsection{Computing the wall shear stress: \textit{wss} module}
To obtain the wall shear stress vector and magnitude, the user can run:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert -m wss:bnd=0:addnormals=1 test.xml test.fld test-wss.fld
\end{lstlisting}
The option \inltt{bnd} specifies which boundary region to extract. Note this is different to MeshConvert, where the parameter \inltt{surf} is specified and corresponds to composites rather than boundaries. If \inltt{bnd} is not provided, all boundaries are extracted to different fields. \inltt{addnormals} is an optional argument which, when turned on, outputs the normal vector of the extracted boundary region as well as the shear stress vector and magnitude; it is off by default. To process the output file(s) you will need an xml file of the same region.
%
%
%
\subsubsection{Manipulating meshes with FieldConvert}
FieldConvert has support for two modules that can be used in conjunction with
the linear elastic solver, as shown in chapter~\ref{s:elasticity}. To do this,
......
......@@ -2,12 +2,4 @@
\input{meshconvert}
\input{fieldconvert}
......@@ -875,6 +875,10 @@ namespace Nektar
}
}
/** \brief Get the normals along the specified face
 * Get the face normals interpolated to a points0 x points0
 * type distribution.
 **/
void PrismExp::v_ComputeFaceNormal(const int face)
{
const SpatialDomains::GeomFactorsSharedPtr &geomFactors =
......
......@@ -197,7 +197,7 @@ namespace Nektar
}
}
/**
*
*/
ExpansionType ExpList::GetExpType(void)
......@@ -560,7 +560,7 @@ namespace Nektar
* array of size \f$N_{\mathrm{eof}}\f$.
*/
void ExpList::v_FwdTrans_IterPerExp(const Array<OneD, const NekDouble> &inarray,
Array<OneD, NekDouble> &outarray)
{
Array<OneD,NekDouble> f(m_ncoeffs);
......@@ -1187,7 +1187,7 @@ namespace Nektar
* \f$Q_{\mathrm{tot}}\f$.
*/
void ExpList::v_BwdTrans_IterPerExp(const Array<OneD, const NekDouble> &inarray,
Array<OneD, NekDouble> &outarray)
{
Array<OneD, NekDouble> tmp;
for (int i = 0; i < m_collections.size(); ++i)
......@@ -1235,7 +1235,7 @@ namespace Nektar
NekDouble tol,
bool returnNearestElmt)
{
NekDouble nearpt = 1e6;
if (GetNumElmts() == 0)
{
......@@ -1255,7 +1255,7 @@ namespace Nektar
{
if ((*m_exp)[i]->GetGeom()->ContainsPoint(gloCoords,
locCoords,
tol, nearpt))
{
w.SetX(gloCoords[0]);
w.SetY(gloCoords[1]);
......@@ -1291,7 +1291,7 @@ namespace Nektar
// retrieve local coordinate of point
(*m_exp)[min_id]->GetGeom()->GetLocCoords(gloCoords,
locCoords);
return min_id;
}
else
......@@ -1304,56 +1304,64 @@ namespace Nektar
{
static int start = 0;
int min_id = 0;
NekDouble nearpt_min = 1e6;
Array<OneD, NekDouble> savLocCoords(locCoords.num_elements());
// restart search from last found value
for (int i = start; i < (*m_exp).size(); ++i)
{
if ((*m_exp)[i]->GetGeom()->ContainsPoint(gloCoords,
locCoords,
tol, nearpt))
{
start = i;
return i;
}
else
{
if(nearpt < nearpt_min)
{
min_id = i;
nearpt_min = nearpt;
Vmath::Vcopy(locCoords.num_elements(),locCoords,1,savLocCoords,1);
}
}
}
for (int i = 0; i < start; ++i)
{
if ((*m_exp)[i]->GetGeom()->ContainsPoint(gloCoords,
locCoords,
tol, nearpt))
{
start = i;
return i;
}
else
{
if(nearpt < nearpt_min)
{
min_id = i;
nearpt_min = nearpt;
Vmath::Vcopy(locCoords.num_elements(),
locCoords,1,savLocCoords,1);
}
}
}
std::string msg = "Failed to find point within element to tolerance of "
+ boost::lexical_cast<std::string>(tol)
+ " using local point ("
+ boost::lexical_cast<std::string>(locCoords[0]) +","
+ boost::lexical_cast<std::string>(locCoords[1]) +","
+ boost::lexical_cast<std::string>(locCoords[2])
+ ") in element: "
+ boost::lexical_cast<std::string>(min_id);
WARNINGL1(false,msg.c_str());
if(returnNearestElmt)
{
Vmath::Vcopy(locCoords.num_elements(),savLocCoords,1,locCoords,1);
return min_id;
}
else
......@@ -1817,7 +1825,6 @@ namespace Nektar
ASSERTL0(false,
"This method is not defined or valid for this class type");
LibUtilities::TranspositionSharedPtr trans;
return trans;
}
......@@ -1834,7 +1841,6 @@ namespace Nektar
ASSERTL0(false,
"This method is not defined or valid for this class type");
Array<OneD, unsigned int> NoModes(1);
return NoModes;
}
......@@ -1843,7 +1849,6 @@ namespace Nektar
ASSERTL0(false,
"This method is not defined or valid for this class type");
Array<OneD, unsigned int> NoModes(1);
return NoModes;
}
......@@ -1936,6 +1941,7 @@ namespace Nektar
void ExpList::GeneralGetFieldDefinitions(std::vector<LibUtilities::FieldDefinitionsSharedPtr> &fielddef,
int NumHomoDir,
int NumHomoStrip,
Array<OneD, LibUtilities::BasisSharedPtr> &HomoBasis,
std::vector<NekDouble> &HomoLen,
std::vector<unsigned int> &HomoZIDs,
......@@ -2025,8 +2031,16 @@ namespace Nektar
if(elementIDs.size() > 0)
{
for(int i = 0; i < NumHomoStrip; ++i)
{
LibUtilities::FieldDefinitionsSharedPtr fdef =
MemoryManager<LibUtilities::FieldDefinitions>::
AllocateSharedPtr(shape, elementIDs, basis,
UniOrder, numModes,fields,
NumHomoDir, HomoLen, HomoZIDs,
HomoYIDs);
fielddef.push_back(fdef);
}
}
}
}
......@@ -2395,7 +2409,6 @@ namespace Nektar
"This method is not defined or valid for this class type");
}
void ExpList::v_GetBCValues(Array<OneD, NekDouble> &BndVals,
const Array<OneD, NekDouble> &TotField,
int BndID)
......
......@@ -64,13 +64,13 @@ namespace Nektar
class GlobalMatrix;
enum Direction
{
eX,
eY,
eZ,
eS,
eN
};
enum ExpansionType
{
......@@ -81,21 +81,21 @@ namespace Nektar
e3DH2D,
e3D,
eNoType
};
MultiRegions::Direction const DirCartesianMap[] =
{
eX,
eY,
eZ
};
/// A map between global matrix keys and their associated block
/// matrices.
typedef map<GlobalMatrixKey,DNekScalBlkMatSharedPtr> BlockMatrixMap;
/// A shared pointer to a BlockMatrixMap.
typedef boost::shared_ptr<BlockMatrixMap> BlockMatrixMapShPtr;
/// Base class for all multi-elemental spectral/hp expansions.
class ExpList: public boost::enable_shared_from_this<ExpList>
......@@ -134,7 +134,7 @@ namespace Nektar
/// Returns the type of the expansion
MULTI_REGIONS_EXPORT void SetExpType(ExpansionType Type);
/// Evaulates the maximum number of modes in the elemental basis
/// order over all elements
inline int EvalBasisNumModesMax(void) const;
......@@ -160,12 +160,12 @@ namespace Nektar
/// Returns the total number of qudature points scaled by
/// the factor scale on each 1D direction
inline int Get1DScaledTotPoints(const NekDouble scale) const;
/// Sets the wave space to the one of the possible configuration
/// true or false
inline void SetWaveSpace(const bool wavespace);
///Set Modified Basis for the stability analysis
inline void SetModifiedBasis(const bool modbasis);
......@@ -321,8 +321,8 @@ namespace Nektar
Array<OneD, NekDouble> &coord_0,
Array<OneD, NekDouble> &coord_1 = NullNekDouble1DArray,
Array<OneD, NekDouble> &coord_2 = NullNekDouble1DArray);
// Homogeneous transforms
inline void HomogeneousFwdTrans(
const Array<OneD, const NekDouble> &inarray,
Array<OneD, NekDouble> &outarray,
......@@ -342,7 +342,7 @@ namespace Nektar
const Array<OneD, NekDouble> &inarray2,
Array<OneD, NekDouble> &outarray,
CoeffState coeffstate = eLocal);
inline void GetBCValues(
Array<OneD, NekDouble> &BndVals,
const Array<OneD, NekDouble> &TotField,
......@@ -353,7 +353,7 @@ namespace Nektar
Array<OneD, const NekDouble> &V2,
Array<OneD, NekDouble> &outarray,
int BndID);
/// Apply geometry information to each expansion.
MULTI_REGIONS_EXPORT void ApplyGeomInfo();
......@@ -397,7 +397,7 @@ namespace Nektar
}
void WriteVtkPieceHeader(std::ofstream &outfile, int expansion,
int istrip = 0)
{
v_WriteVtkPieceHeader(outfile, expansion, istrip);
}
......@@ -627,8 +627,8 @@ namespace Nektar
inline void PhysDeriv(
Direction edir,
const Array<OneD, const NekDouble> &inarray,
Array<OneD, NekDouble> &out_d);
/// This function discretely evaluates the derivative of a function
/// \f$f(\boldsymbol{x})\f$ on the domain consisting of all
/// elements of the expansion.
......@@ -738,6 +738,7 @@ namespace Nektar
MULTI_REGIONS_EXPORT void GeneralGetFieldDefinitions(
std::vector<LibUtilities::FieldDefinitionsSharedPtr> &fielddef,
int NumHomoDir = 0,
int NumHomoStrip = 1,
Array<OneD, LibUtilities::BasisSharedPtr> &HomoBasis =
LibUtilities::NullBasisSharedPtr1DArray,
std::vector<NekDouble> &HomoLen =
......@@ -821,15 +822,15 @@ namespace Nektar
const boost::shared_ptr<ExpList> &fromExpList,
const Array<OneD, const NekDouble> &fromCoeffs,
Array<OneD, NekDouble> &toCoeffs);
//Extract data in fielddata into the m_coeffs_list for the 3D stability analysis (base flow is 2D)
MULTI_REGIONS_EXPORT void ExtractDataToCoeffs(
LibUtilities::FieldDefinitionsSharedPtr &fielddef,
std::vector<NekDouble> &fielddata,
std::string &field,
Array<OneD, NekDouble> &coeffs);