Commit 759581f7 authored by Spencer Sherwin's avatar Spencer Sherwin Committed by Dave Moxey

Updated a couple of fixes for extract surf when it does not have curved faces...

Updated a couple of fixes for the extract surf module: handling meshes that do not have curved faces, and reading the geometry when using the part-only options
parent 3583bb1a
......@@ -3,8 +3,3 @@
path = docs/tutorial
url = git@gitlab.nektar.info:nektar/tutorial
ignore = all
[submodule "docs/developer-guide"]
branch = master
path = docs/developer-guide
url = git@gitlab.nektar.info:nektar/developer-guide
ignore = all
......@@ -122,6 +122,7 @@ v5.0.0
- Fixed wss module for compressible flows (!958)
- Made Sutherland's law non-dimensional (!972)
- Add module for removing fields from .fld files (!978)
- Fixed nparts option in FieldConvert and automated Info.xml generation (!995)
- Added if statement to fix case of 1D/2D manifold interpolation in 1D/2D space,
added check on dimensions for interpolation, fixed seg interp (!999)
......
Subproject commit 0724faa50ed893acb7ca256b1582d288396bb5d4
Subproject commit 7c88d86e78a8908720b971d71ce8b5aafdd5eb92
Subproject commit 8f8ed6f96bad562bedd8c242a4dc247e605ca20d
......@@ -1153,63 +1153,13 @@ process each parallel partition in serial, for example when interpolating a
solution field from one mesh to another or creating an output file for
visualization.
\subsection{Using the \textit{nparts} option}
One option is to use the \inltt{nparts} command line
option. For example, the following command will create a
\inltt{.vtu} file using 10 partitions of \inltt{file1.xml}:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert --nparts 10 file1.xml file1.fld file1.vtu
\end{lstlisting}
Note this will create a parallel vtu file as it processes each partition.
Another example is to interpolate \inltt{file1.fld} from one mesh
\inltt{file1.xml} to another \inltt{file2.xml}. If the mesh files are
large we can do this by partitioning \inltt{file2.xml} into 10 (or more)
partitions and interpolating each partition one by one using the
command:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert --nparts 10 -m interpfield:fromxml=file1.xml:fromfld=file1.fld \
file2.xml file2.fld
\end{lstlisting}
Note that internally the routine uses the range option so that it
only has to load the part of \inltt{file1.xml} that overlaps with each
partition of \inltt{file2.xml}.
The resulting output will lie in a directory called \inltt{file2.fld}, with each
of the different parallel partitions in files with names \inltt{P0000000.fld},
\inltt{P0000001.fld}, \dots, \inltt{P0000009.fld}. This is nearly a complete
parallel field file. However, when the output file is in the .fld format,
the \inltt{Info.xml} file, which contains the information about which elements
lie in each partition, is not correct since it will only contain the information for one of the partitions. The correct \inltt{Info.xml} file can be generated by using the command
\begin{lstlisting}[style=BashInputStyle]
FieldConvert file2.xml file2.fld/Info.xml:info:nparts=10
\end{lstlisting}
Note the \inltt{:info} extension on the last argument is necessary to tell
FieldConvert that you wish to generate an info file with the extension
\inltt{.xml}. This syntax prevents the routine from confusing this argument
with the input/output XML files.
\subsection{Running in parallel with the \textit{nparts} option}
The examples above will process each partition serially, which may
take a while when there are many partitions. You can, however, run this
option in parallel using a smaller number of cores than \inltt{nparts}.
For the example of creating a vtu file above, you can use four processors concurrently with the command line:
\begin{lstlisting}[style=BashInputStyle]
mpirun -n 4 FieldConvert --nparts 10 file1.xml file1.fld file1.vtu
\end{lstlisting}
The executable must have been compiled with the MPI option for this to work.
\subsection{Using the \textit{part-only} and \textit{part-only-overlapping} options}
The options above will all load in the full \inltt{file1.xml} and partition
it into \inltt{nparts} files in a directory called \inltt{file1\_xml}.
This can be expensive if \inltt{file1.xml} is already large. Instead you can
pre-partition the file using the \inltt{--part-only}
option. So the command
Loading the full \inltt{file1.xml} can be expensive if
\inltt{file1.xml} is already large. Instead you can pre-partition
the file using the \inltt{--part-only} option. So the
command
\begin{lstlisting}[style=BashInputStyle]
FieldConvert --part-only 10 file.xml file.fld
\end{lstlisting}
......@@ -1218,13 +1168,6 @@ directory called \inltt{file\_xml}. If you enter this directory you will find
partitioned XML files \inltt{P0000000.xml}, \inltt{P0000001.xml}, \dots,
\inltt{P0000009.xml} which can then be processed individually as outlined above.
If you have a partitioned directory, either from a parallel run or using the \inltt{--part-only} option, you can now run \inltt{FieldConvert} using the command
\begin{lstlisting}[style=BashInputStyle]
mpirun -n 4 FieldConvert --nparts 10 file1_xml:xml file1.fld file1.vtu
\end{lstlisting}
Note the \inltt{file1\_xml:xml} syntax tells the code that this is a parallel
partition directory which should be treated as an \inltt{xml}-type file.
There is also a \inltt{--part-only-overlapping} option, which can be run in the
same fashion.
\begin{lstlisting}[style=BashInputStyle]
......@@ -1241,6 +1184,58 @@ within a partition. Using the \inltt{--part-only-overlapping} option will still
yield a shrinking isocontour, but the overlapping partitions help to cover the
partition boundaries.
\subsection{Using the \textit{nparts} option}
If you have a partitioned directory, either from a parallel run or
using the \inltt{--part-only} option, you can now run
\inltt{FieldConvert} using the \inltt{nparts} command line
option, that is
\begin{lstlisting}[style=BashInputStyle]
FieldConvert --nparts 10 file1_xml:xml file1.fld file1.vtu
\end{lstlisting}
Note the \inltt{file1\_xml:xml} syntax tells the code that this is a
parallel partition directory which should be treated as an \inltt{xml}-type
file. The argument of \inltt{nparts} should correspond to the number
of partitions used in generating the \inltt{file1\_xml} directory. This will
create a parallel vtu file as it processes each partition.
Another example is to interpolate \inltt{file1.fld} from one mesh
\inltt{file1.xml} to another \inltt{file2.xml}. If the mesh files are
large we can do this by partitioning \inltt{file2.xml} into 10 (or
more) partitions to generate the \inltt{file\_xml} directory and
interpolating each partition one by one using the command:
\begin{lstlisting}[style=BashInputStyle]
FieldConvert --nparts 10 -m interpfield:fromxml=file1.xml:fromfld=file1.fld \
file2_xml:xml file2.fld
\end{lstlisting}
Note that internally the routine uses the range option so that it only
has to load the part of \inltt{file1.xml} that overlaps with each
partition of \inltt{file2.xml}. The resulting output will lie in a
directory called \inltt{file2.fld}, with each of the different
parallel partitions in files with names \inltt{P0000000.fld},
\inltt{P0000001.fld}, \dots, \inltt{P0000009.fld}. In previous
versions of FieldConvert it was necessary to generate an updated
\inltt{Info.xml} file, but the current version updates this file
automatically.
\subsection{Running in parallel with the \textit{nparts} option}
The examples above will process each partition serially, which may
take a while when there are many partitions. You can, however, run this
option in parallel using a smaller number of cores than \inltt{nparts}.
For the example of creating a vtu file above, you can use four processors
concurrently with the command line:
\begin{lstlisting}[style=BashInputStyle]
mpirun -n 4 FieldConvert --nparts 10 file1_xml:xml file1.fld file1.vtu
\end{lstlisting}
The executable must have been compiled with the MPI
option for this to work.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../user-guide"
......
......@@ -220,6 +220,15 @@ void InputXml::Process(po::variables_map &vm)
m_f->m_session = LibUtilities::SessionReader::CreateInstance(
argc, (char **)argv, files, m_f->m_comm);
if (vm.count("nparts"))
{
// make sure we have a pre-partitioned mesh for the nparts option
ASSERTL0(boost::icontains(files[0], "_xml"),
"Expected the mesh to have been pre-partitioned when "
"using the \"--nparts\" option. Please use the \"--part-only\" "
"option to pre-partition the xml file.");
}
// Free up memory.
delete[] argv;
......
......@@ -317,6 +317,11 @@ protected:
return true;
}
bool v_IsSerial(void)
{
return true;
}
bool v_RemoveExistingFiles(void)
{
return false;
......
......@@ -218,6 +218,9 @@ void OutputFileBase::Process(po::variables_map &vm)
PrintErrorFromExp();
}
}
// put outfile back to filename in case of nparts option
RegisterConfig("outfile", filename);
}
// Restore m_exp
exp.swap(m_f->m_exp);
......
......@@ -107,7 +107,8 @@ void OutputFld::OutputFromExp(po::variables_map &vm)
}
}
}
fld->Write(filename, FieldDef, FieldData, m_f->m_fieldMetaDataMap);
fld->Write(filename, FieldDef, FieldData, m_f->m_fieldMetaDataMap,
false);
}
else
{
......
......@@ -68,75 +68,58 @@ void OutputInfo::Process(po::variables_map &vm)
{
// Extract the output filename and extension
string filename = m_config["outfile"].as<string>();
int i = 0;
// partition mesh
ASSERTL0(m_config["nparts"].as<string>().compare("NotSet") != 0,
"Need to specify nparts for info output");
const int nparts = m_config["nparts"].as<int>();
int nparts = m_config["nparts"].as<int>();
std::vector<std::string> files;
// load .xml ending
for (auto &x : m_f->m_inputfiles["xml"]) {
files.push_back(x);
}
// load any .xml.gz endings
for (auto &x: m_f->m_inputfiles["xml.gz"])
{
files.push_back(x);
}
ASSERTL0(m_f->m_comm->GetSize() == 1,
"OutputInfo module should be run in serial.");
// Default partitioner to use is Scotch. Override default with
// command-line flags if they are set.
string vPartitionerName = "Scotch";
if (m_f->m_session->DefinesCmdLineArgument("use-metis"))
{
vPartitionerName = "Metis";
}
if (m_f->m_session->DefinesCmdLineArgument("use-scotch"))
{
vPartitionerName = "Scotch";
}
// Construct mesh partitioning.
SpatialDomains::MeshPartitionSharedPtr meshPartition =
SpatialDomains::GetMeshPartitionFactory().CreateInstance(
vPartitionerName, m_f->m_session, m_f->m_graph->GetMeshDimension(),
m_f->m_graph->CreateMeshEntities(),
m_f->m_graph->CreateCompositeDescriptor());
meshPartition->PartitionMesh(nparts, true);
// get hold of local partition ids
std::vector<std::vector<unsigned int> > ElementIDs(nparts);
// Populate the list of element ID lists from all processes
for (i = 0; i < nparts; ++i)
{
std::vector<unsigned int> tmp;
meshPartition->GetElementIDs(i, tmp);
ElementIDs[i] = tmp;
}
// Input/output file
LibUtilities::CommSharedPtr c = m_f->m_comm;
std::shared_ptr<LibUtilities::FieldIOXml> fldXml =
std::static_pointer_cast<LibUtilities::FieldIOXml>(
LibUtilities::GetFieldIOFactory().CreateInstance("Xml", c, true));
// Set up output names
// open file and setup meta data.
fs::path pinfilename(filename);
std::vector<std::string> filenames;
for (int i = 0; i < nparts; ++i)
std::vector<std::vector<unsigned int> > ElementIDs;
for (int p = 0; p < nparts; ++p)
{
boost::format pad("P%1$07d.fld");
pad % i;
boost::format pad("P%1$07d.%2$s");
pad % p % "fld";
fs::path fullpath = pinfilename / pad.str();
string fname = LibUtilities::PortablePath(fullpath);
LibUtilities::DataSourceSharedPtr dataSource =
LibUtilities::XmlDataSource::create(fname);
std::vector<LibUtilities::FieldDefinitionsSharedPtr> fielddefs;
std::vector<unsigned int> PartElmtIDs;
// read in header of partition if it exists
fldXml->ImportFieldDefs(dataSource,fielddefs,false);
// create the element ID list for this partition, then use it below
for(int i = 0; i < fielddefs.size(); ++i)
{
for(int j = 0; j < fielddefs[i]->m_elementIDs.size(); ++j)
{
PartElmtIDs.push_back(fielddefs[i]->m_elementIDs[j]);
}
}
ElementIDs.push_back(PartElmtIDs);
filenames.push_back(pad.str());
}
// Write the output file
LibUtilities::CommSharedPtr c = m_f->m_comm;
std::shared_ptr<LibUtilities::FieldIOXml> fldXml =
std::static_pointer_cast<LibUtilities::FieldIOXml>(
LibUtilities::GetFieldIOFactory().CreateInstance("Xml", c, true));
fldXml->WriteMultiFldFileIDs(filename, filenames, ElementIDs);
// Write the Info.xml file
string infofile =
LibUtilities::PortablePath(pinfilename / fs::path("Info.xml"));
fldXml->WriteMultiFldFileIDs(infofile,filenames, ElementIDs);
}
}
}
......@@ -70,19 +70,6 @@ void ProcessWSS::Process(po::variables_map &vm)
int expdim = m_f->m_graph->GetSpaceDimension();
m_spacedim = expdim + m_f->m_numHomogeneousDir;
if (m_spacedim == 2)
{
m_f->m_variables.push_back("Shear_x");
m_f->m_variables.push_back("Shear_y");
m_f->m_variables.push_back("Shear_mag");
}
else
{
m_f->m_variables.push_back("Shear_x");
m_f->m_variables.push_back("Shear_y");
m_f->m_variables.push_back("Shear_z");
m_f->m_variables.push_back("Shear_mag");
}
if (m_f->m_exp[0]->GetNumElmts() == 0)
{
......@@ -108,11 +95,14 @@ void ProcessWSS::Process(po::variables_map &vm)
Array<OneD, MultiRegions::ExpListSharedPtr> BndExp(nshear);
Array<OneD, MultiRegions::ExpListSharedPtr> BndElmtExp(nfields);
// Resize m_exp
m_f->m_exp.resize(nfields + nshear);
for (i = 0; i < nshear; ++i)
// will reuse the nfields expansions to write the shear components.
if(nshear > nfields)
{
m_f->m_exp[nfields + i] = m_f->AppendExpList(m_f->m_numHomogeneousDir);
m_f->m_exp.resize(nshear);
for (i = nfields; i < nshear; ++i)
{
m_f->m_exp[i] = m_f->AppendExpList(m_f->m_numHomogeneousDir);
}
}
// Create map of boundary ids for partitioned domains
......@@ -137,8 +127,7 @@ void ProcessWSS::Process(po::variables_map &vm)
// bnd
for (i = 0; i < nshear; i++)
{
BndExp[i] =
m_f->m_exp[nfields + i]->UpdateBndCondExpansion(bnd);
BndExp[i] = m_f->m_exp[i]->UpdateBndCondExpansion(bnd);
}
for (i = 0; i < nfields; i++)
{
......@@ -294,6 +283,20 @@ void ProcessWSS::Process(po::variables_map &vm)
BndExp[nshear - 1]->UpdateCoeffs());
}
}
if (m_spacedim == 2)
{
m_f->m_variables[0] = "Shear_x";
m_f->m_variables[1] = "Shear_y";
m_f->m_variables[2] = "Shear_mag";
}
else
{
m_f->m_variables[0] = "Shear_x";
m_f->m_variables[1] = "Shear_y";
m_f->m_variables[2] = "Shear_z";
m_f->m_variables[3] = "Shear_mag";
}
}
void ProcessWSS::GetViscosity(
......
......@@ -794,7 +794,6 @@ void FieldIOXml::ImportFieldDefs(
while (loopXml)
{
TiXmlElement *element = loopXml->FirstChildElement("ELEMENTS");
ASSERTL0(element, "Unable to find ELEMENTS tag within nektar tag.");
while (element)
{
......
......@@ -193,7 +193,7 @@ namespace Nektar
LIB_UTILITIES_EXPORT const std::string GetSessionNameRank() const;
/// Returns the communication object.
LIB_UTILITIES_EXPORT CommSharedPtr &GetComm();
/// Returns the communication object.
/// Returns true if the filesystem is shared
LIB_UTILITIES_EXPORT bool GetSharedFilesystem();
/// Finalises the session.
LIB_UTILITIES_EXPORT void Finalise();
......
......@@ -57,6 +57,7 @@ bool Comm::v_RemoveExistingFiles(void)
return true;
}
CommFactory &GetCommFactory()
{
static CommFactory instance;
......
......@@ -134,6 +134,7 @@ public:
LIB_UTILITIES_EXPORT inline CommSharedPtr GetColumnComm();
LIB_UTILITIES_EXPORT inline bool TreatAsRankZero(void);
LIB_UTILITIES_EXPORT inline bool IsSerial(void);
LIB_UTILITIES_EXPORT inline bool RemoveExistingFiles(void);
protected:
......@@ -189,6 +190,7 @@ protected:
virtual CommSharedPtr v_CommCreateIf(int flag) = 0;
virtual void v_SplitComm(int pRows, int pColumns) = 0;
virtual bool v_TreatAsRankZero(void) = 0;
virtual bool v_IsSerial(void) = 0;
LIB_UTILITIES_EXPORT virtual bool v_RemoveExistingFiles(void);
};
......@@ -515,10 +517,16 @@ inline bool Comm::TreatAsRankZero(void)
return v_TreatAsRankZero();
}
inline bool Comm::IsSerial(void)
{
return v_IsSerial();
}
inline bool Comm::RemoveExistingFiles(void)
{
return v_RemoveExistingFiles();
}
}
}
......
......@@ -146,6 +146,21 @@ bool CommMpi::v_TreatAsRankZero(void)
return true;
}
/**
*
*/
bool CommMpi::v_IsSerial(void)
{
return m_size == 1;
}
/**
*
*/
......
......@@ -90,6 +90,7 @@ protected:
virtual void v_Block();
virtual double v_Wtime();
virtual bool v_TreatAsRankZero(void);
virtual bool v_IsSerial(void);
virtual void v_Send(void *buf, int count, CommDataType dt, int dest);
virtual void v_Recv(void *buf, int count, CommDataType dt, int source);
virtual void v_SendRecv(void *sendbuf, int sendcount, CommDataType sendtype,
......
......@@ -85,6 +85,14 @@ bool CommSerial::v_TreatAsRankZero(void)
return true;
}
/**
*
*/
bool CommSerial::v_IsSerial(void)
{
return true;
}
/**
*
*/
......@@ -212,8 +220,17 @@ void CommSerial::v_SplitComm(int pRows, int pColumns)
*/
CommSharedPtr CommSerial::v_CommCreateIf(int flag)
{
ASSERTL0(flag, "Serial process must always be split");
return shared_from_this();
if (flag == 0)
{
// flag == 0 => get back MPI_COMM_NULL, return a null ptr instead.
return std::shared_ptr<Comm>();
}
else
{
// Return a real communicator
return shared_from_this();
}
}
}
}
......@@ -71,6 +71,7 @@ protected:
LIB_UTILITIES_EXPORT virtual void v_Finalise();
LIB_UTILITIES_EXPORT virtual int v_GetRank();
LIB_UTILITIES_EXPORT virtual bool v_TreatAsRankZero(void);
LIB_UTILITIES_EXPORT virtual bool v_IsSerial(void);
LIB_UTILITIES_EXPORT virtual void v_Block();
LIB_UTILITIES_EXPORT virtual NekDouble v_Wtime();
......
......@@ -168,7 +168,7 @@ static inline gs_data *Init(const Nektar::Array<OneD, long> pId,
bool verbose = true)
{
#ifdef NEKTAR_USE_MPI
if (pComm->GetSize() == 1)
if (pComm->IsSerial())
{
return 0;
}
......@@ -201,7 +201,7 @@ static inline void Unique(const Nektar::Array<OneD, long> pId,
const LibUtilities::CommSharedPtr &pComm)
{
#ifdef NEKTAR_USE_MPI
if (pComm->GetSize() == 1)
if (pComm->IsSerial())
{
return;
}
......
......@@ -215,6 +215,13 @@ namespace Nektar
int n = vComm->GetSize();
int p = vComm->GetRank();
if(vComm->IsSerial())
{
// For the FieldConvert Comm this is true, and it resets
// parallel processing back to the serial case
n = 1;
p = 0;
}
// At this point, graph only contains information from Dirichlet
// boundaries. Therefore make a global list of the vert and edge
// information on all processors.
......@@ -384,7 +391,7 @@ namespace Nektar
// partitions
for (i = 0; i < nTotVerts; ++i)
{
if (vComm->GetRank() == vertprocs[i])
if (p == vertprocs[i]) // rank == vertprocs[i]
{
extraDirVerts.insert(vertids[i]);
}
......@@ -393,7 +400,7 @@ namespace Nektar
// Set up list of edges that need to be shared to other partitions
for (i = 0; i < nTotEdges; ++i)
{
if (vComm->GetRank() == edgeprocs[i])
if (p == edgeprocs[i]) // rank == edgeprocs[i]
{
extraDirEdges.insert(edgeids[i]);
}
......@@ -1530,6 +1537,7 @@ namespace Nektar
int mdswitch;
m_session->LoadParameter(
"MDSwitch", mdswitch, 10);
int nGraphVerts =
CreateGraph(locExp, bndCondExp, bndCondVec,
checkIfSystemSingular, periodicVerts, periodicEdges,
......
......@@ -175,6 +175,19 @@ namespace Nektar
{
LibUtilities::CommSharedPtr comm = m_session->GetComm();
if (comm->IsSerial())
{
// Do not try to generate a communicator if we already have a
// serial communicator. This arises with a FieldConvert communicator
// when using --nparts in FieldConvert. Just set the boundary
// communicator to comm in this case.
for (auto &it : m_boundaryRegions)
{
m_boundaryCommunicators[it.first] = comm;
}
return;
}
std::set<int> allids = ShareAllBoundaryIDs(m_boundaryRegions, comm);
for (auto &it : allids)
......
......@@ -307,6 +307,8 @@ public:
inline void SetExpansions(const std::string variable,
ExpansionMapShPtr &exp);