(Proposal WIP) Separate Geometry and Discretization (Enabling of high order finite elements, multi-physics on multi-level) #1409
-
For first order elements, are we going to duplicate the geometric objects?
-
There's also been interest in adding support for structured meshes and in simplifying the logic to get to a single element, and I bring them up here because the solutions all influence each other. To simplify the element indexing (and, I think, life in general) I would squash all the element regions and sub-regions into one, storing all the elements in a single allocation (so a single index), and then everything would operate on a set of elements. This logic could be re-used to allow allocating a field on a set of other topological objects as well. If we allow non-contiguous element local indices we can have performant element insertion (for the surface generator) by giving each sub-region some extra capacity to grow, as sketched below. I think that would pair nicely with a structured mesh implementation as well, since if each rank has an ijk structured mesh the complexity of region and sub-region is not needed, although operating on a set of elements would most likely still be required. I also second @rrsettgast's proposal to have a single …
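As a rough illustration of that idea, here is a minimal C++ sketch assuming one flat allocation where each sub-region owns a slice plus some spare capacity; all type and member names (`ElementSet`, `SubRegionSlice`, etc.) are hypothetical, not existing GEOSX classes:

```cpp
// Sketch: one flat allocation for all elements, with each sub-region owning a
// [begin, begin+capacity) slice plus spare slots so new elements (e.g. from the
// surface generator) can be inserted without reindexing everything.
#include <cassert>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct SubRegionSlice
{
  std::string name;
  std::size_t begin;     // first local element index owned by this sub-region
  std::size_t size;      // number of live elements
  std::size_t capacity;  // reserved slots, >= size, allows cheap insertion
};

struct ElementSet
{
  // One allocation, one local index space. Slots in a sub-region's reserved
  // slice beyond begin + size are simply unused "holes".
  std::vector< double > volume;              // example per-element field
  std::vector< SubRegionSlice > subRegions;

  // Insert an element into a sub-region; only fails once spare capacity is used up.
  std::size_t insertElement( std::size_t subRegion, double vol )
  {
    SubRegionSlice & s = subRegions[ subRegion ];
    assert( s.size < s.capacity && "sub-region out of spare capacity" );
    std::size_t const localIndex = s.begin + s.size++;
    volume[ localIndex ] = vol;
    return localIndex;
  }
};

int main()
{
  // Two sub-regions sharing one allocation of 10 slots: [0,5) and [5,10).
  ElementSet elems;
  elems.volume.resize( 10, 0.0 );
  elems.subRegions = { { "matrix",   0, 3, 5 },
                       { "fracture", 5, 0, 5 } };

  // The surface generator can now append fracture elements cheaply.
  std::size_t const newElem = elems.insertElement( 1, 0.25 );
  std::cout << "new fracture element at local index " << newElem << "\n";
  return 0;
}
```

Insertion then only touches the owning sub-region's slice, so existing element indices stay valid.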
-
I do not understand a lot about all of this right now, but here is how I would consider the question. The user writes an …

If we have a clear view of the flows, it will make drawing connections with other parts of the code easier. E.g. for FEM, I assume we need a description of the test/basis functions and the geometric support (cell). Based on this we'll be able to assemble the matrix. Do we need something else, and where? Also, how do we send the data back to the user?… One convenient way could be to write pseudo code so we know the algorithm and what it needs from which … or which element.
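In that spirit, here is a hedged sketch of the assembly flow (1D, first order, dense matrix, kept deliberately small), with comments marking which inputs would come from the geometry and which from the discretization; none of this is meant to reflect actual GEOSX code:

```cpp
// Sketch of the FEM assembly flow: loop over cells, build a local matrix from the
// basis functions and the cell geometry, scatter into a global matrix.
// 1D, P1 elements, assembling the stiffness matrix for -u'' on a uniform mesh.
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
  // --- geometry: vertex coordinates and (implicit) cell-to-vertex connectivity
  std::size_t const numCells = 4;
  std::vector< double > vertexCoord( numCells + 1 );
  for( std::size_t v = 0; v <= numCells; ++v )
    vertexCoord[ v ] = static_cast< double >( v ) / numCells;

  // --- discretization: for P1 the nodes coincide with the vertices, and the
  //     reference-element basis gradients are +/- 1.
  std::size_t const numNodes = numCells + 1;
  std::vector< std::vector< double > > K( numNodes, std::vector< double >( numNodes, 0.0 ) );

  for( std::size_t c = 0; c < numCells; ++c )                     // loop over geometric cells
  {
    double const h = vertexCoord[ c + 1 ] - vertexCoord[ c ];     // from geometry
    // local stiffness of a P1 element: (1/h) * [ 1 -1; -1 1 ]    // from discretization
    double const local[ 2 ][ 2 ] = { {  1.0 / h, -1.0 / h },
                                     { -1.0 / h,  1.0 / h } };
    std::size_t const dofs[ 2 ] = { c, c + 1 };                   // cell-to-node map
    for( int a = 0; a < 2; ++a )
      for( int b = 0; b < 2; ++b )
        K[ dofs[ a ] ][ dofs[ b ] ] += local[ a ][ b ];           // scatter into global matrix
  }

  std::cout << "K(1,1) = " << K[ 1 ][ 1 ] << "\n";  // expect 2/h = 8 here
  return 0;
}
```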
-
WIP. This initial post is a rough proposal, and will evolve based on follow-on discussion.
Currently we have mesh levels that contain both the mesh geometry and the discretization data. The finite element/finite volume "discretization" is implicitly assumed to be embedded in the single geometric "mesh".
However, in order to effectively support high order meshes, one approach would be to break the `MeshLevel` up into a geometric `BaseMesh` consisting of "Cell/Vertex/Face/Edge" data, and a `DiscretizationLevel` consisting of "ElementManager/NodeManager/FaceManager/EdgeManager". The geometric `BaseMesh` object would be fairly static, while the `DiscretizationLevel` would depend on the order/resolution. The physics solvers would then operate on the `DiscretizationLevel` (they currently act on `MeshLevel`), and not on the geometric mesh.

To illustrate the relationship between the objects in the proposed `BaseMesh` and `DiscretizationLevel`, consider the following:
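(A rough, illustrative C++ sketch of one possible layout; the class and member names are placeholders, not actual or final GEOSX types.)

```cpp
// BaseMesh owns the static geometry, while each DiscretizationLevel owns the
// order-dependent node/element data and refers back to the BaseMesh it discretizes.
#include <array>
#include <cstddef>
#include <vector>

struct BaseMesh
{
  // flattened geometry: vertex coordinates plus connectivities of the
  // higher topological objects (edge, face, cell)
  std::vector< std::array< double, 3 > > vertexCoords;
  std::vector< std::array< std::size_t, 2 > > edgeToVertices;
  std::vector< std::vector< std::size_t > > faceToVertices;
  std::vector< std::vector< std::size_t > > cellToVertices;
};

struct DiscretizationLevel
{
  BaseMesh const * baseMesh = nullptr;  // the geometry being discretized
  int order = 1;                        // nodes == vertices only when order == 1

  // managers holding field data on the discretization's objects
  struct NodeManager { std::vector< double > fieldData; } nodeManager;
  struct ElementManager
  {
    std::vector< std::vector< std::size_t > > elementToNodes;  // element connectivity
    std::vector< double > quadratureWeights;                   // quadrature data
  } elementManager;
};

int main()
{
  BaseMesh mesh;                                  // static geometry, built once
  DiscretizationLevel loOrder, hiOrder;           // two discretizations of the same geometry
  loOrder.baseMesh = &mesh;  loOrder.order = 1;   // one physics could live here
  hiOrder.baseMesh = &mesh;  hiOrder.order = 3;   // another physics could live here
  return 0;
}
```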
Whether or not the `BaseMesh` object will need to contain sub-`Group`s is debatable. I would argue that we could just have the `BaseMesh` object contain the coordinates of the vertices and the connectivities of the higher topology objects (edge, face, cell), thus flattening the structure.

The `NodeManager` contains all the "nodal fields", but what do we do with the coordinate data that is held in the `BaseMesh::Vertex`? It makes sense that the `Node` would have an interface for the coordinates, and it can be a dynamic relationship:

- If the `DiscretizationLevel` object/refinement is first order (i.e. nodes overlay on vertices), then the `Vertex` coordinates should be referred to instead of copied.
- If the `DiscretizationLevel` object is a different order/refinement (#nodes != #vertices), then the coordinate data may be copied if that makes sense... otherwise a more complex relationship is needed.

Summary of options:

1. The `NodeManager` contains coordinate data. It is distinct from the `Vertex` coordinate data.
2. The `NodeManager` doesn't contain coordinates, which means that it is just a container for field data. Access to the `BaseMesh::Vertex` is required to calculate the coordinates of a node. This means that, outside the context of an element, a node may not be able to know its coordinates. We would want to do this to avoid carrying around a lot of high order/resolution nodal data which may be calculated from the much less dense `BaseMesh::Vertex` data.
3. Allow the `NodeManager` to contain coordinate data, but only use that data sparingly. We would not refer to `NodeManager::coordinate` inside of element kernels, instead relying on `BaseMesh::Vertex` in that context.
- `Cell` holds the connectivity with the vertices.
- `Element` holds the connectivity with the nodes.
- `Element` holds quadrature data.

### Usage
The only major usage change may be where the current `NodeManager` is required to provide coordinate data... if we remove coordinate data from the `NodeManager`, kernels will have to be modified to take in `BaseMesh::Vertex` coordinate data and calculate the coordinates of the nodes (see the sketch below). Of course, this can be embedded in the interface of the `NodeManager`, but we will end up with some usage issues around properly moving the data between host/device.
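For illustration, here is a small sketch of what such a kernel-side calculation could look like for a bilinear quad; the function and names are hypothetical, and the host/device data movement questions above are not addressed:

```cpp
// Instead of reading a node coordinate array from the NodeManager, the kernel
// receives the cell's vertex coordinates (BaseMesh data) and computes node
// coordinates on the fly at the node's reference coordinates (xi, eta).
#include <array>
#include <iostream>

using Point2 = std::array< double, 2 >;

// Interpolate a physical coordinate from the 4 vertex coordinates of a quad
// at reference coordinates (xi, eta) in [-1, 1]^2.
Point2 nodeCoordFromVertices( std::array< Point2, 4 > const & vtx, double xi, double eta )
{
  double const N[ 4 ] = { 0.25 * ( 1 - xi ) * ( 1 - eta ),
                          0.25 * ( 1 + xi ) * ( 1 - eta ),
                          0.25 * ( 1 + xi ) * ( 1 + eta ),
                          0.25 * ( 1 - xi ) * ( 1 + eta ) };
  Point2 x{ 0.0, 0.0 };
  for( int v = 0; v < 4; ++v )
  {
    x[ 0 ] += N[ v ] * vtx[ v ][ 0 ];
    x[ 1 ] += N[ v ] * vtx[ v ][ 1 ];
  }
  return x;
}

int main()
{
  // A unit quad: a second order discretization would add a node at its center.
  std::array< Point2, 4 > const vtx = { { { 0, 0 }, { 1, 0 }, { 1, 1 }, { 0, 1 } } };
  Point2 const center = nodeCoordFromVertices( vtx, 0.0, 0.0 );
  std::cout << "mid-cell node at (" << center[ 0 ] << ", " << center[ 1 ] << ")\n";  // (0.5, 0.5)
  return 0;
}
```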
### Interaction between `DiscretizationLevel`s

- Each `DiscretizationLevel` will be a discretization on top of the geometric `BaseMesh`.
- Each `DiscretizationLevel` will have a distinct resolution and/or order from the other `DiscretizationLevel`s.
- `DiscretizationLevel`s will interact s.t. physics equations can be applied to one `DiscretizationLevel` and coupled with physics/fields from another `DiscretizationLevel` (see the toy sketch below).
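As a toy illustration of that last point (purely a sketch with made-up names, not a proposed interface), a field living on one level could be sampled onto another level's nodes over the shared geometry:

```cpp
// Two discretization levels over the same 1D geometry: physics solved on a fine
// level reads a field defined on a coarse one via linear interpolation.
#include <cstddef>
#include <iostream>
#include <vector>

struct Level
{
  std::vector< double > nodeCoord;  // node positions for this discretization
  std::vector< double > field;      // a field living on this level's nodes
};

// Evaluate a piecewise-linear coarse field at an arbitrary coordinate x.
double evaluateOn( Level const & coarse, double x )
{
  for( std::size_t i = 0; i + 1 < coarse.nodeCoord.size(); ++i )
  {
    double const a = coarse.nodeCoord[ i ], b = coarse.nodeCoord[ i + 1 ];
    if( x >= a && x <= b )
    {
      double const w = ( x - a ) / ( b - a );
      return ( 1.0 - w ) * coarse.field[ i ] + w * coarse.field[ i + 1 ];
    }
  }
  return coarse.field.back();
}

int main()
{
  Level coarse{ { 0.0, 0.5, 1.0 }, { 0.0, 1.0, 0.0 } };  // e.g. a coarse level
  Level fine;                                            // e.g. a refined level
  for( int i = 0; i <= 8; ++i )
    fine.nodeCoord.push_back( i / 8.0 );
  for( double x : fine.nodeCoord )                       // couple: sample coarse field at fine nodes
    fine.field.push_back( evaluateOn( coarse, x ) );
  std::cout << "fine field at x=0.25: " << fine.field[ 2 ] << "\n";  // expect 0.5
  return 0;
}
```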
One example where this will be used is the plan for fracture modeling. The Embedded approach has yet to explore the use of a fracture criterion for propagation. The current plan is to have a coarse Embedded representation, then have a fine-scale continuum PhaseField representation, and tie the two `Level`s together. Need reference.

### Communications
This proposal will result in modifications to the ghosting process. Ghosting will have to be set up on the `BaseMesh`, and subsequently on each `DiscretizationLevel`.