CellId Class (providing global cell index)
A class to represent a unique ID for a cell in a Triangulation. It is returned by cell->id() if cell is a cell iterator.
In other words, CellId provides the tool with which it is possible to globally and uniquely identify cells in a parallel triangulation, and consequently makes it possible to exchange, between processors, data tied to individual cells.
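For instance, a minimal sketch (my own illustrative names, not code from the deal.II documentation) of how cell->id() can be used to key per-cell data so that it can later be exchanged between processors:

#include <deal.II/distributed/tria.h>

#include <map>
#include <string>

using namespace dealii;

// Collect one value per locally owned cell, keyed by the globally unique
// CellId (converted to its string representation), so the data can later
// be sent to other processors and matched to the same cells there.
template <int dim>
std::map<std::string, double>
collect_cell_data(const parallel::distributed::Triangulation<dim> &tria)
{
  std::map<std::string, double> cell_data;
  for (const auto &cell : tria.active_cell_iterators())
    if (cell->is_locally_owned())
      cell_data[cell->id().to_string()] = cell->measure(); // any per-cell value
  return cell_data;
}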
Extract cell solution
cell->get_dof_values(solution, cell_solution);
https://www.dealii.org/current/doxygen/deal.II/step_61.html
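For context, a hedged sketch of how this call is typically embedded (the per-cell post-processing is omitted; the function and variable names are illustrative):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// For each locally owned cell, copy the entries of the global solution
// vector that belong to this cell's degrees of freedom into a small
// per-cell vector.
template <int dim>
void extract_cell_solutions(const DoFHandler<dim> &dof_handler,
                            const Vector<double>  &solution)
{
  Vector<double> cell_solution(dof_handler.get_fe().dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        cell->get_dof_values(solution, cell_solution);
        // ... use cell_solution, e.g. compute a cell average ...
      }
}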
MTA (Manchester Thermal Analyzer)
It may be helpful for cell operations based on cell_id in a triangulation.
Official website: https://staffnet.cs.manchester.ac.uk/acso/thermal-analyzer/
COARSE MESH PARTITIONING FOR TREE-BASED AMR
It may be helpful for further research where an initially fine mesh needs to be distributed.
I found this paper in a pull request:
https://github.com/dealii/dealii/pull/3956.
I found the pull request while searching for the CellId class in the Google group:
https://groups.google.com/forum/#!searchin/dealii/cell_id|sort:date/dealii/zSBwQBvY9mc/CfgYci10BQAJ
(Note: this discussion was raised by MTA’s author Yi-Chung Chen.)
The following is the paper.
https://epubs.siam.org/doi/pdf/10.1137/16M1103518
FE_DGQ(0)
I asked a question in the deal.II Google group. Dr. W. Bangerth indicated that I should use FE_DGQ(0) to derive the cell index for the density vector x in topology optimization.
FE_DGQ(0) provides exactly one value per cell, which is exactly what my problem requires.
Question asked on the deal.II Google group:
https://groups.google.com/forum/#!topic/dealii/6ldNo1Vkrbg
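A minimal sketch of the idea (a serial triangulation and my own variable names are used here; the parallel version follows in the solution steps below). With FE_DGQ(0) the DoFHandler assigns exactly one degree of freedom per cell, so a vector indexed by these DoFs is effectively a cell-wise vector:

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/lac/vector.h>

#include <vector>

using namespace dealii;

int main()
{
  const int dim = 2;

  Triangulation<dim> triangulation;
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(3);

  FE_DGQ<dim>     fe_dg(0);                 // piecewise constant: one DoF per cell
  DoFHandler<dim> dof_handler_dg(triangulation);
  dof_handler_dg.distribute_dofs(fe_dg);

  // n_dofs() == n_active_cells(), so this is a "cell-based" vector.
  Vector<double> density(dof_handler_dg.n_dofs());

  std::vector<types::global_dof_index> dof_index(1);
  for (const auto &cell : dof_handler_dg.active_cell_iterators())
    {
      cell->get_dof_indices(dof_index);
      density(dof_index[0]) = 1.0; // initial pseudo-density of this cell
    }
}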
Iterators on mesh-like containers
https://www.dealii.org/current/doxygen/deal.II/group__Iterators.html
typename Triangulation<dim>::active_cell_iterator ti = tria.begin_active();
But the approach described there for accessing multiple DoFHandlers turned out to be very time-consuming in my code.
How can I accelerate the access to the distributed vector x (the pseudo-density in topology optimization)?
I found the following syntax to be very time-consuming.
I do not know why yet; I need to figure it out.
opt->x(local_dof_indices_dg_int);
I need to solve this!
I have figured out the solution. I also posted it in the deal.II Google group:
https://groups.google.com/forum/#!topic/dealii/6ldNo1Vkrbg
I summarize it as follows in case it can help someone else as well.
Problem summary: In topology optimization, each cell carries a scalar variable (the pseudo-density). These densities form a vector, but its dimensions differ from those of the solution vector, say LA::MPI::Vector locally_relevant_solution (refer to step-40), because there is only one density value per cell. How can such a “cell-based” vector be built?
Solution steps (inspired by the suggestion from Dr. W. Bangerth):
- Build two FE field objects,
FESystem<dim> fe;
FE_DGQ<dim>   fe_dg;
And initialize them in the initialization list of the class,
fe(FE_Q<dim>(…), …),
fe_dg(0),
And initialize the cell-based vector (the pseudo-density in topology optimization) as follows,
opt->locally_owned_cells = opt->dof_handler_dg.locally_owned_dofs();
// It is named with “cells” because I want to distinguish it from
// “opt->locally_owned_dofs”, which belongs to the first FE field (the FESystem one built from FE_Q).
DoFTools::extract_locally_relevant_dofs(opt->dof_handler_dg,
                                        opt->locally_relevant_cells);
opt->x.reinit(opt->locally_owned_cells,
              opt->locally_relevant_cells,
              opt->mpi_communicator);
- I need to access both FE fields at the same time.
DoFHandler<dim> dof_handler;
DoFHandler<dim> dof_handler_dg;
If I use the default loop,
for (const auto &cell : opt->dof_handler.active_cell_iterators())
{
…
}
how can I access the second FE field, which is dof_handler_dg?
My attempted solution is to set up multiple iterators, just as in the documentation:
https://www.dealii.org/current/doxygen/deal.II/group__Iterators.html,
typename Triangulation<dim>::active_cell_iterator ti =
  opt->triangulation.begin_active();
typename DoFHandler<dim>::active_cell_iterator di1 =
  opt->dof_handler.begin_active();
typename DoFHandler<dim>::active_cell_iterator di2 =
  opt->dof_handler_dg.begin_active();
Therefore, I set up the loop as follows,
while (ti != opt->triangulation.end())
{
  // do something
  ++ti;
  ++di1;
  ++di2;
}
- Now I can access the cell-based vector x successfully as follows,
di2->get_dof_indices(local_dof_indices_dg);
unsigned int local_dof_indices_dg_int = local_dof_indices_dg.at(0);
opt->x(local_dof_indices_dg_int);
It was very time-consuming to execute because I had put the vector access, opt->x(local_dof_indices_dg_int), in the innermost loop. I do not need to access it for every quadrature point or every degree of freedom, because opt->x is fixed for each cell. So I moved the access to opt->x outside those loops, and the computing time is normal now.
In sum, the parallel distributed density vector has been set up; a consolidated sketch of the whole pattern is given below.
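To make the steps above concrete, here is a minimal, self-contained sketch of the whole pattern. It is not the exact code of my project: the opt-> wrapper is dropped, the element degrees and the use of PETScWrappers::MPI::Vector for LA::MPI::Vector are assumptions, and the mesh is just a refined cube.

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/petsc_vector.h>

#include <vector>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm mpi_communicator = MPI_COMM_WORLD;
  const int      dim              = 2;

  parallel::distributed::Triangulation<dim> triangulation(mpi_communicator);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);

  // First field: the displacement field (FESystem); second field: the
  // cell-wise pseudo-density (FE_DGQ(0), one DoF per cell).
  FESystem<dim> fe(FE_Q<dim>(1), dim);
  FE_DGQ<dim>   fe_dg(0);

  DoFHandler<dim> dof_handler(triangulation);
  DoFHandler<dim> dof_handler_dg(triangulation);
  dof_handler.distribute_dofs(fe);
  dof_handler_dg.distribute_dofs(fe_dg);

  // The "cell-based" vector x: one entry per cell of the mesh.
  const IndexSet locally_owned_cells = dof_handler_dg.locally_owned_dofs();
  IndexSet       locally_relevant_cells;
  DoFTools::extract_locally_relevant_dofs(dof_handler_dg,
                                          locally_relevant_cells);

  PETScWrappers::MPI::Vector x(locally_owned_cells,
                               locally_relevant_cells,
                               mpi_communicator);

  // Walk the triangulation and the two DoFHandlers in lock step.
  auto ti  = triangulation.begin_active();
  auto di1 = dof_handler.begin_active();
  auto di2 = dof_handler_dg.begin_active();

  std::vector<types::global_dof_index> local_dof_indices_dg(1);
  for (; ti != triangulation.end(); ++ti, ++di1, ++di2)
    if (ti->is_locally_owned())
      {
        di2->get_dof_indices(local_dof_indices_dg);
        // Read the single per-cell density ONCE per cell, outside any
        // quadrature-point or DoF loops.
        const double density = x(local_dof_indices_dg[0]);
        (void)density; // ... use it in the assembly on this cell ...
      }
}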
Mesh importing from GMSH and mesh generation from CAD model
Mesh importing from GMSH:
https://www.dealii.org/current/doxygen/deal.II/step_49.html
Tetrahedron-to-hexahedron converting tool (also mentioned in the deal.II FAQ):
https://github.com/martemyev/tethex
Mesh generation based on CAD model:
https://www.dealii.org/current/doxygen/deal.II/step_54.html
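A minimal sketch of importing a GMSH mesh, along the lines of step-49 (the file name mesh.msh is a placeholder):

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_in.h>

#include <fstream>

using namespace dealii;

int main()
{
  const int dim = 2;

  Triangulation<dim> triangulation;

  GridIn<dim> grid_in;
  grid_in.attach_triangulation(triangulation);

  // The .msh file must contain quadrilateral (or, in 3d, hexahedral) cells,
  // e.g. after converting a tetrahedral mesh with tethex.
  std::ifstream input_file("mesh.msh");
  grid_in.read_msh(input_file);
}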
Spectral element method (SEM)
This method can use a very coarse mesh to describe very small features such as cracks.
It may be possible to apply this method to LPBF process simulation, because in LPBF the laser spot is very small compared to the whole part being printed. Traditionally, a locally refined mesh is required, which leads to a very large total number of elements. SEM could reduce the number of elements required in LPBF process simulation.
The Wikipedia link:
https://en.wikipedia.org/wiki/Spectral_element_method
Problems to be solved
- How are the classes destructed in deal.II and in my own code?
- How exactly do macros work in C++? And namespaces, using namespace, …
- Need to add detailed comments to all the code!
- Detailed compiling and linking information for all the code!
- [Solved] How does the following code work? (Please refer to the blog related to constraints in deal.II)
opt->constraints.distribute_local_to_global(cell_matrix, cell_rhs, local_dof_indices, opt->system_matrix, opt->system_rhs);
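For context, a hedged sketch of where this call sits in a typical assembly routine (AffineConstraints is assumed; older deal.II versions offer the same member function on ConstraintMatrix; the local assembly itself is omitted and the function name is mine):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

using namespace dealii;

// Distribute the cell contributions into the global matrix and right-hand
// side while resolving constraints (hanging nodes, boundary values) in the
// same step, instead of condensing them afterwards.
template <int dim, typename MatrixType, typename VectorType>
void distribute_cell_contributions(
  const typename DoFHandler<dim>::active_cell_iterator &cell,
  const FullMatrix<double>                             &cell_matrix,
  const Vector<double>                                 &cell_rhs,
  const AffineConstraints<double>                      &constraints,
  MatrixType                                           &system_matrix,
  VectorType                                           &system_rhs)
{
  std::vector<types::global_dof_index> local_dof_indices(
    cell->get_fe().dofs_per_cell);
  cell->get_dof_indices(local_dof_indices);

  constraints.distribute_local_to_global(
    cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
}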