Commit ff501d05 authored by Dave Moxey's avatar Dave Moxey

Merge branch 'feature/FilterFieldConvert' into 'master'

FilterFieldConvert

This MR introduces a new filter that can run FieldConvert modules on a checkpoint. It replaces FilterSampler, so the new functionality also extends to the filters derived from it (AverageFields, ReynoldsStresses). An example setup would be:
``` xml
<FILTER TYPE="FieldConvert">
    <PARAM NAME="OutputFile"> filename.vtu </PARAM>
    <PARAM NAME="OutputFrequency"> 100 </PARAM>
    <PARAM NAME="Modules"> vorticity homplane:planeid=4 </PARAM>
</FILTER>
```
It is unlikely that all options and modules will be supported. However, the main idea is to use this with modules that result in outputs with reduced size (e.g. isocontour, meanmode for 3DH1D). This way we can obtain these outputs more frequently than would be practical by storing the full checkpoint and running FieldConvert later.
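For instance, a sketch of a setup using the `meanmode` module for a 3DH1D simulation (the filename and frequency here are illustrative, not from the MR):
``` xml
<FILTER TYPE="FieldConvert">
    <PARAM NAME="OutputFile"> meanmode.fld </PARAM>
    <PARAM NAME="OutputFrequency"> 50 </PARAM>
    <PARAM NAME="Modules"> meanmode </PARAM>
</FILTER>
```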

I don't know how far I am from finishing this (some cases already work, but I still need to test it further). Anyway, it would be nice to have some feedback on it, especially regarding the compilation and general structure of the code, since I had to create a new FieldConvert library and move some files around.

See merge request !589
parents 790e7416 92a9d46b
...@@ -14,4 +14,3 @@ IndentCaseLabels: true
Standard: Cpp03
AccessModifierOffset: -4
BinPackParameters: false
...@@ -12,6 +12,8 @@ v4.4.0
(!656)
- Sped up interpolation from pts files and fixed parallel pts import (!584)
- Increased required boost version to 1.56.0 (!584)
- New FieldUtils library allows support for most `FieldConvert` post-processing
operations during simulation using a new filter (!589)
**IncNavierStokesSolver:**
- Add ability to simulate additional scalar fields (!624)
...@@ -20,6 +22,8 @@ v4.4.0
- Modify curve module to allow for spline input (!628)
**FieldConvert:**
- Move all modules to a new library, FieldUtils, to support post-processing
during simulations (!589)
- Add module to stretch homogeneous direction (!609)
v4.3.3
...
...@@ -283,7 +283,7 @@ INCLUDE_DIRECTORIES(${CMAKE_SOURCE_DIR}) ...@@ -283,7 +283,7 @@ INCLUDE_DIRECTORIES(${CMAKE_SOURCE_DIR})
# Build active components # Build active components
IF (NEKTAR_BUILD_LIBRARY) IF (NEKTAR_BUILD_LIBRARY)
SET(NEKTAR++_LIBRARIES SolverUtils LibUtilities StdRegions SpatialDomains LocalRegions SET(NEKTAR++_LIBRARIES SolverUtils LibUtilities StdRegions SpatialDomains LocalRegions
MultiRegions Collections GlobalMapping NekMeshUtils) MultiRegions Collections GlobalMapping FieldUtils NekMeshUtils)
INCLUDE_DIRECTORIES(library) INCLUDE_DIRECTORIES(library)
ADD_SUBDIRECTORY(library) ADD_SUBDIRECTORY(library)
INSTALL(EXPORT Nektar++Libraries DESTINATION ${LIB_DIR}/cmake COMPONENT dev) INSTALL(EXPORT Nektar++Libraries DESTINATION ${LIB_DIR}/cmake COMPONENT dev)
......
...@@ -760,6 +760,7 @@ INPUT = @CMAKE_SOURCE_DIR@/docs/doxygen/ \
@CMAKE_SOURCE_DIR@/library/LocalRegions/ \
@CMAKE_SOURCE_DIR@/library/MultiRegions/ \
@CMAKE_SOURCE_DIR@/library/GlobalMapping/ \
@CMAKE_SOURCE_DIR@/library/FieldUtils/ \
@CMAKE_SOURCE_DIR@/library/SolverUtils/ \
@CMAKE_SOURCE_DIR@/library/NekMeshUtils/ \
@CMAKE_SOURCE_DIR@/solvers/ \
...
...@@ -27,6 +27,52 @@ In the following we document the filters implemented. Note that some filters are
solver-specific and will therefore only work for a given subset of the available
solvers.
\subsection{FieldConvert checkpoints}
\begin{notebox}
This filter is still at an experimental stage. Not all modules and options
from FieldConvert are supported.
\end{notebox}
This filter applies a sequence of FieldConvert modules to the solution,
writing an output file. An output is produced at the end of the simulation into
\inltt{session\_fc.fld}, or alternatively every $M$ timesteps as defined by the
user, into a sequence of files \inltt{session\_*\_fc.fld}, where \inltt{*} is
replaced by a counter.
The following parameters are supported:
\begin{center}
\begin{tabularx}{0.99\textwidth}{lllX}
\toprule
\textbf{Option name} & \textbf{Required} & \textbf{Default} &
\textbf{Description} \\
\midrule
\inltt{OutputFile} & \xmark & \texttt{session.fld} &
Output filename. If no extension is provided, \inltt{.fld} is assumed.\\
\inltt{OutputFrequency} & \xmark & \texttt{NumSteps} &
Number of timesteps after which output is written, $M$.\\
\inltt{Modules} & \xmark & &
FieldConvert modules to run, separated by white space.\\
\bottomrule
\end{tabularx}
\end{center}
As an example, consider:
\begin{lstlisting}[style=XMLStyle,gobble=2]
<FILTER TYPE="FieldConvert">
<PARAM NAME="OutputFile">MyFile.vtu</PARAM>
<PARAM NAME="OutputFrequency">100</PARAM>
<PARAM NAME="Modules"> vorticity isocontour:fieldid=0:fieldvalue=0.1 </PARAM>
</FILTER>
\end{lstlisting}
This will create a sequence of files named \inltt{MyFile\_*\_fc.vtu} containing isocontours.
The result will be output every 100 time steps. Output directly to
\inltt{.vtu} or \inltt{.dat} is currently only supported for isocontours.
In other cases, the output should be a \inltt{.fld} file.
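For formats other than \inltt{.vtu} and \inltt{.dat}, a sketch of an equivalent setup writing vorticity to the default \inltt{.fld} format might be (the filename and frequency are illustrative):
\begin{lstlisting}[style=XMLStyle,gobble=2]
  <FILTER TYPE="FieldConvert">
      <PARAM NAME="OutputFile">MyVorticity.fld</PARAM>
      <PARAM NAME="OutputFrequency">1000</PARAM>
      <PARAM NAME="Modules"> vorticity </PARAM>
  </FILTER>
\end{lstlisting}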
\subsection{Time-averaged fields}
This filter computes time-averaged fields for each variable defined in the
...@@ -38,7 +84,8 @@ user, into a sequence of files \inltt{session\_*\_avg.fld}, where \inltt{*} is
replaced by a counter. This latter option can be useful to observe statistical
convergence rates of the averaged variables.
This filter is derived from the FieldConvert filter and therefore supports all
parameters available for that filter. The following additional parameter is
supported:
\begin{center}
\begin{tabularx}{0.99\textwidth}{lllX}
...@@ -46,13 +93,8 @@ The following parameters are supported:
\textbf{Option name} & \textbf{Required} & \textbf{Default} &
\textbf{Description} \\
\midrule
\inltt{SampleFrequency} & \xmark & 1 &
Number of timesteps at which the average is calculated, $N$.\\
\bottomrule
\end{tabularx}
\end{center}
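As an illustration, a sketch combining the parameters inherited from the
FieldConvert filter with \inltt{SampleFrequency} (all values illustrative):
\begin{lstlisting}[style=XMLStyle,gobble=2]
  <FILTER TYPE="AverageFields">
      <PARAM NAME="OutputFile">MyAverageField</PARAM>
      <PARAM NAME="OutputFrequency">100</PARAM>
      <PARAM NAME="SampleFrequency">10</PARAM>
  </FILTER>
\end{lstlisting}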
...@@ -123,7 +165,7 @@ for example:
\end{lstlisting}
By default, this filter uses a simple average. Optionally, an exponential
moving average can be used, in which case the output contains the moving
averages and the Reynolds stresses calculated based on them. For example:
\begin{lstlisting}[style=XMLStyle,gobble=2]
...
SET(LibrarySubDirs FieldUtils GlobalMapping LibUtilities LocalRegions
    Collections MultiRegions SpatialDomains StdRegions SolverUtils NekMeshUtils)
SET(UnitTestSubDirs UnitTests)
SET(DemoSubDirs Demos)
SET(TimingsSubDirs Timings)
...
SET(FieldUtilsHeaders
Module.h
Field.hpp
Interpolator.h
InputModules/InputDat.h
InputModules/InputFld.h
InputModules/InputXml.h
InputModules/InputPts.h
OutputModules/OutputInfo.h
OutputModules/OutputTecplot.h
OutputModules/OutputVtk.h
OutputModules/OutputFld.h
OutputModules/OutputStdOut.h
OutputModules/OutputPts.h
OutputModules/OutputXml.h
ProcessModules/ProcessAddFld.h
ProcessModules/ProcessBoundaryExtract.h
ProcessModules/ProcessCombineAvg.h
ProcessModules/ProcessConcatenateFld.h
ProcessModules/ProcessDeform.h
ProcessModules/ProcessDisplacement.h
ProcessModules/ProcessEquiSpacedOutput.h
ProcessModules/ProcessGrad.h
ProcessModules/ProcessHomogeneousPlane.h
ProcessModules/ProcessHomogeneousStretch.h
ProcessModules/ProcessInnerProduct.h
ProcessModules/ProcessInterpField.h
ProcessModules/ProcessInterpPoints.h
ProcessModules/ProcessInterpPointDataToFld.h
ProcessModules/ProcessIsoContour.h
ProcessModules/ProcessJacobianEnergy.h
ProcessModules/ProcessMapping.h
ProcessModules/ProcessNumModes.h
ProcessModules/ProcessMeanMode.h
ProcessModules/ProcessPointDataToFld.h
ProcessModules/ProcessPrintFldNorms.h
ProcessModules/ProcessScaleInFld.h
ProcessModules/ProcessSurfDistance.h
ProcessModules/ProcessVorticity.h
ProcessModules/ProcessScalGrad.h
ProcessModules/ProcessMultiShear.h
ProcessModules/ProcessWSS.h
ProcessModules/ProcessC0Projection.h
ProcessModules/ProcessQCriterion.h
ProcessModules/ProcessQualityMetric.h
)
SET(FieldUtilsSources
Module.cpp
Interpolator.cpp
InputModules/InputDat.cpp
InputModules/InputFld.cpp
InputModules/InputXml.cpp
InputModules/InputPts.cpp
OutputModules/OutputInfo.cpp
OutputModules/OutputTecplot.cpp
OutputModules/OutputVtk.cpp
OutputModules/OutputFld.cpp
OutputModules/OutputStdOut.cpp
OutputModules/OutputPts.cpp
OutputModules/OutputXml.cpp
ProcessModules/ProcessAddFld.cpp
ProcessModules/ProcessBoundaryExtract.cpp
ProcessModules/ProcessCombineAvg.cpp
ProcessModules/ProcessConcatenateFld.cpp
ProcessModules/ProcessDeform.cpp
ProcessModules/ProcessDisplacement.cpp
ProcessModules/ProcessEquiSpacedOutput.cpp
ProcessModules/ProcessGrad.cpp
ProcessModules/ProcessHomogeneousPlane.cpp
ProcessModules/ProcessHomogeneousStretch.cpp
ProcessModules/ProcessInnerProduct.cpp
ProcessModules/ProcessInterpField.cpp
ProcessModules/ProcessInterpPoints.cpp
ProcessModules/ProcessInterpPointDataToFld.cpp
ProcessModules/ProcessIsoContour.cpp
ProcessModules/ProcessJacobianEnergy.cpp
ProcessModules/ProcessMapping.cpp
ProcessModules/ProcessNumModes.cpp
ProcessModules/ProcessMeanMode.cpp
ProcessModules/ProcessPointDataToFld.cpp
ProcessModules/ProcessPrintFldNorms.cpp
ProcessModules/ProcessScaleInFld.cpp
ProcessModules/ProcessVorticity.cpp
ProcessModules/ProcessScalGrad.cpp
ProcessModules/ProcessSurfDistance.cpp
ProcessModules/ProcessMultiShear.cpp
ProcessModules/ProcessWSS.cpp
ProcessModules/ProcessC0Projection.cpp
ProcessModules/ProcessQCriterion.cpp
ProcessModules/ProcessQualityMetric.cpp
)
ADD_NEKTAR_LIBRARY(FieldUtils lib ${NEKTAR_LIBRARY_TYPE} ${FieldUtilsSources} ${FieldUtilsHeaders})
TARGET_LINK_LIBRARIES(FieldUtils LINK_PUBLIC GlobalMapping)
ADD_DEFINITIONS(-DFIELD_UTILS_EXPORTS)
INSTALL(DIRECTORY ./ DESTINATION ${NEKTAR_INCLUDE_DIR}/FieldUtils COMPONENT dev FILES_MATCHING PATTERN "*.h" PATTERN "*.hpp")
///////////////////////////////////////////////////////////////////////////////
//
// For more information, please see: http://www.nektar.info
//
// The MIT License
//
// Copyright (c) 2006 Scientific Computing and Imaging Institute,
// University of Utah (USA) and Department of Aeronautics, Imperial
// College London (UK).
//
// License for the specific language governing rights and limitations under
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//
//
///////////////////////////////////////////////////////////////////////////////
#ifndef NEKTAR_FIELD_UTILS_DECLSPEC_H
#define NEKTAR_FIELD_UTILS_DECLSPEC_H
#if defined(_MSC_VER)
#ifdef FIELD_UTILS_EXPORTS
#define FIELD_UTILS_EXPORT __declspec(dllexport)
#else
#define FIELD_UTILS_EXPORT __declspec(dllimport)
#endif
#else
#define FIELD_UTILS_EXPORT
#endif
#define LOKI_CLASS_LEVEL_THREADING
#endif // NEKTAR_FIELD_UTILS_DECLSPEC_H
...@@ -33,12 +33,12 @@
//
////////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include <string>
using namespace std;
#include <LibUtilities/BasicUtils/PtsField.h>
#include <LibUtilities/BasicUtils/PtsIO.h>
#include <tinyxml.h>
...@@ -46,14 +46,14 @@ using namespace std;
namespace Nektar
{
namespace FieldUtils
{
ModuleKey InputDat::m_className[1] = {
    GetModuleFactory().RegisterCreatorFunction(
        ModuleKey(eInputModule, "dat"),
        InputDat::create,
        "Reads Tecplot dat file for FE block triangular format."),
};
/**
...@@ -65,7 +65,6 @@ InputDat::InputDat(FieldSharedPtr f) : InputModule(f)
    m_allowedFiles.insert("dat");
}
/**
 *
 */
...@@ -73,29 +72,27 @@ InputDat::~InputDat()
{
}
/**
 *
 */
void InputDat::Process(po::variables_map &vm)
{
    if (m_f->m_verbose)
    {
        if (m_f->m_comm->TreatAsRankZero())
        {
            cout << "Processing input dat file" << endl;
        }
    }
    string line, word, tag;
    std::ifstream datFile;
    stringstream s;
    // Open the file stream.
    string fname = m_f->m_inputfiles["dat"][0];
    datFile.open(fname.c_str());
    if (!datFile.good())
    {
...@@ -111,17 +108,18 @@ void InputDat::Process(po::variables_map &vm)
    {
        getline(datFile, line);
        if (line.find("VARIABLES") != string::npos)
        {
            std::size_t pos = line.find('=');
            pos++;
            // note this expects a comma separated list but
            // does not work for white space separated lists!
            bool valid = ParseUtils::GenerateOrderedStringVector(
                line.substr(pos).c_str(), fieldNames);
            ASSERTL0(valid, "Unable to process list of field variable in "
                            " VARIABLES list: " +
                                line.substr(pos));
            // remove coordinates from fieldNames
            fieldNames.erase(fieldNames.begin(), fieldNames.begin() + dim);
...@@ -141,9 +139,9 @@ void InputDat::Process(po::variables_map &vm)
    {
        getline(datFile, line);
        if ((line.find("ZONE") != string::npos) ||
            (line.find("Zone") != string::npos) ||
            (line.find("zone") != string::npos))
        {
            ReadTecplotFEBlockZone(datFile, line, pts, ptsConn);
        }
...@@ -151,88 +149,81 @@ void InputDat::Process(po::variables_map &vm)
    datFile.close();
    m_f->m_fieldPts = MemoryManager<LibUtilities::PtsField>::AllocateSharedPtr(
        dim, fieldNames, pts);
    m_f->m_fieldPts->SetPtsType(LibUtilities::ePtsTriBlock);
    m_f->m_fieldPts->SetConnectivity(ptsConn);
}
/**
 *
 */
void InputDat::ReadTecplotFEBlockZone(std::ifstream &datFile,
                                      string &line,
                                      Array<OneD, Array<OneD, NekDouble> > &pts,
                                      vector<Array<OneD, int> > &ptsConn)
{
    ASSERTL0(line.find("FEBlock") != string::npos,
             "Routine only set up for FEBlock format");
    ASSERTL0(line.find("ET") != string::npos, "Routine only set up TRIANGLES");
    // read the number of nodes
    stringstream s;
    string tag;
    int start, end;
    s.clear();
    s.str(line);
    tag = s.str();
    // read the number of vertices
    start     = tag.find("N=");
    end       = tag.find_first_of(',', start);
    int nvert = atoi(tag.substr(start + 2, end).c_str());
    // read the number of elements
    start     = tag.find("E=");
    end       = tag.find_first_of(',', start);
    int nelmt = atoi(tag.substr(start + 2, end).c_str());
    // set-up or extend m_pts array;
    int norigpts  = pts[0].num_elements();
    int totfields = pts.num_elements();
    Array<OneD, Array<OneD, NekDouble> > origpts(totfields);
    for (int i = 0; i < totfields; ++i)
    {
        origpts[i] = pts[i];
        pts[i]     = Array<OneD, NekDouble>(norigpts + nvert);
    }
    NekDouble value;
    for (int n = 0; n < totfields; ++n)
    {
        for (int i = 0; i < norigpts; ++i)
        {
            pts[n][i] = origpts[n][i];
        }
        for (int i = 0; i < nvert; ++i)
        {
            datFile >> value;
            pts[n][norigpts + i] = value;
        }
    }
    // read connectivity and add to list
    int intvalue;
    Array<OneD, int> conn(3