Example stationary heat equation
This is a rather lengthy example on how to solve the stationary heat equation with dune-gdt.
We consider the stationary heat equation in a domain \Omega \subset \R^d, with the boundary \partial\Omega partitioned into a nontrivial Dirichlet boundary \Gamma_D \subseteq \partial\Omega, \Gamma_D \neq \emptyset, and a Neumann boundary \Gamma_N \subset \partial\Omega, such that \Gamma_D \cap \Gamma_N = \emptyset and \Gamma_D \cup \Gamma_N = \partial\Omega. That is:
given

- a diffusion factor \lambda \in L^\infty(\Omega) and a diffusion tensor \kappa \in L^\infty(\Omega)^{d \times d} [such that \lambda\kappa \in L^\infty(\Omega)^{d \times d}],
- a force f \in L^2(\Omega),
- Dirichlet boundary values g_D \in H^{1/2}(\Gamma_D) and
- Neumann boundary values g_N \in H^{1/2}(\Gamma_N),

find u \in H^1(\Omega), such that

-\nabla\cdot( \lambda\kappa \nabla u ) = f   in \Omega,
u = g_D   on \Gamma_D,
(\lambda\kappa \nabla u) \cdot n = g_N   on \Gamma_N,

in a weak sense, where n \in \R^d denotes the unit outer normal to \partial\Omega.
We define

- the bilinear form b: H^1(\Omega) \times H^1(\Omega) \to \R,
- the functional l_f: H^1(\Omega) \to \R and
- the functional l_{g_N}: H^1(\Omega) \to \R

by

b(u, v) := \int_\Omega (\lambda\kappa \nabla u) \cdot \nabla v \dx,
l_f(v) := \int_\Omega f v \dx and
l_{g_N}(v) := \int_{\Gamma_N} g_N v \ds,

respectively.
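Together, b, l_f and l_{g_N} yield the weak formulation of the boundary value problem above; stated here for completeness, using the usual notation for the test space with vanishing Dirichlet trace:

```latex
\text{Find } u \in H^1(\Omega) \text{ with } u = g_D \text{ on } \Gamma_D, \text{ such that} \\
b(u, v) = l_f(v) + l_{g_N}(v)
  \quad \text{for all } v \in H^1_{\Gamma_D}(\Omega)
  := \{ v \in H^1(\Omega) \,|\, v = 0 \text{ on } \Gamma_D \}.
```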
We start with a simpler problem in 2d on the cube [-1, 1]^2 with trivial Dirichlet boundary values and no Neumann boundary. To begin with, apart from non-Dune-specific includes, we have these includes and global defines:
#include <dune/xt/la/container/istl.hh>
#include <dune/xt/grid/boundaryinfo.hh>
#include <dune/xt/grid/grids.hh>
#include <dune/xt/grid/gridprovider/cube.hh>
#include <dune/xt/grid/type_traits.hh>
using namespace Dune;
using namespace Dune::GDT;
using G = YASP_2D_EQUIDISTANT_OFFSET;
static const constexpr size_t d = G::dimension;
using GV = typename G::LeafGridView;
using E = XT::Grid::extract_entity_t<GV>; // Type of Codim 0 elements of the grid.
using I = XT::Grid::extract_intersection_t<GV>; // Type of the Codim 1 intersection between two Codim 0 elements.
using M = XT::LA::IstlRowMajorSparseMatrix<double>; // Matrix
using V = XT::LA::IstlDenseVector<double>; // Vector

After initializing the MPI helper and parsing the command line, the main starts with creating a grid and defining the type of the domain boundary:
auto grid = XT::Grid::make_cube_grid<G>(/*lower_left=*/-1.,
/*upper_right=*/1.,
/*num_elements_per_dim=*/128);
auto grid_view = grid.leaf_view();
XT::Grid::NormalBasedBoundaryInfo<I> boundary_info;
boundary_info.register_new_normal({0., -1.}, new XT::Grid::DirichletBoundary());
boundary_info.register_new_normal({0., 1.}, new XT::Grid::DirichletBoundary());
boundary_info.register_new_normal({-1., 0.}, new XT::Grid::DirichletBoundary());
boundary_info.register_new_normal({1., 0.}, new XT::Grid::DirichletBoundary());

If we want more complicated grids (e.g. read in via GMsh), we can use another GridProvider from dune-xt-grid.
The grid_view now is the actual partition of \Omega.
Next we create the conforming space of globally continuous Lagrange polynomials, S_h^p, and a matrix operator mapping from S_h^p to S_h^p (or rather into its dual, which we identify with S_h^p in the discrete setting):
#include <dune/gdt/spaces/h1/continuous-lagrange.hh>
#include <dune/gdt/operators/matrix-based.hh>
auto space = make_continuous_lagrange_space(grid_view, /*polorder=*/1);
auto lhs_op = make_matrix_operator<M>(space, Stencil::element);

The call to make_matrix_operator<M> creates the matrix operator together with its underlying sparse matrix of type M, sized according to the space and equipped with the sparsity pattern induced by the given stencil: with Stencil::element, two DoFs may couple only if they are associated with a common grid element.
Let us next define the local bilinear form which models the restriction of b to a single grid element, applied to the restriction of the global basis of the test and ansatz space to the element:
const LocalElementIntegralBilinearForm<E> local_bilinear_form(
/*integrand_order=*/[](
const auto& test_base,
const auto& ansatz_base,
const auto& param) {
return std::max(test_base.order(param) - 1, 0)
+ std::max(ansatz_base.order(param) - 1, 0);
},
/*integrand_evaluate=*/[](
const auto& test_base,
const auto& ansatz_base,
const auto& x,
auto& result,
const auto& param) {
const auto test_grads = test_base.jacobians_of_set(x, param);
const auto ansatz_grads = ansatz_base.jacobians_of_set(x, param);
for (size_t ii = 0; ii < test_base.size(param); ++ii)
for (size_t jj = 0; jj < ansatz_base.size(param); ++jj)
result[ii][jj] = 1.0 * test_grads[ii][0] * ansatz_grads[jj][0];
});

Some remarks:
- The integrand is evaluated at quadrature points; x is a point in the reference element, and the quadrature rule is chosen according to the integrand_order lambda.
- test_base and ansatz_base are not sets of shape functions, but the global basis of the respective space restricted to the current element.
- test_grads is a std::vector<FieldMatrix<double, r, d>>: for each element of the basis, we have the Jacobian as a FieldMatrix<double, r, d>, where in our case r = 1.
We also ship some often used local integrands, so the above is equivalent to
#include <dune/gdt/local/integrands/elliptic.hh>
const LocalElementIntegralBilinearForm<E>
local_bilinear_form(LocalEllipticIntegrand<E>(diff_f, diff_t));

Here, diff_f and diff_t are suitable data functions from dune-xt-functions (which will be detailed later) modeling \lambda and \kappa.
The advantage of the provided LocalEllipticIntegrand is also that it is a bit more memory efficient (reusing test_grads and ansatz_grads for instance).
Given this local bilinear form, we can assemble the matrix representation of b into the lhs_op by:
lhs_op.append(local_bilinear_form);

Two things happen here:
- The matrix operator knows what to do with a local bilinear form: it creates a local assembler, which localizes the basis to each grid element, calls the local bilinear form and uses the mapper of the space to copy the local result into the global matrix.
- The matrix operator is a grid walker: it will apply the local assembler on each element. We have no control over the ordering; the local assembler is applied in parallel and is copied for each thread, including the local bilinear form.
Since, on append(), the local bilinear form is copied for each thread anyway, there is not even a need to keep the local bilinear form as a separate object. To conclude the definition of the operator: to obtain the matrix representation of b we only require:
auto lhs_op = make_matrix_operator<M>(space, Stencil::element);
lhs_op.append(LocalElementIntegralBilinearForm<E>(LocalEllipticIntegrand<E>(diff_f, diff_t)));

Similarly, for the right hand side, we create a functional which will assemble its vector representation for a nontrivial force f:
#include <dune/gdt/functionals/vector-based.hh>
#include <dune/gdt/local/functionals/integrals.hh>
auto rhs_func = make_vector_functional<V>(space);
rhs_func.append(LocalElementIntegralFunctional<E>(
/*integrand_order=*/[](
const auto& test_base, const auto& param) {
return test_base.order(param) + 3; },
/*integrand_evaluate=*/[](
const auto& test_base,
const auto& x_local,
auto& result,
const auto& param) {
const auto x_global = test_base.element().geometry().global(x_local);
const auto f_value = M_PI_2 * M_PI * std::cos(M_PI_2 * x_global[0])
* std::cos(M_PI_2 * x_global[1]);
const auto test_values = test_base.evaluate_set(x_local, param);
for (size_t ii = 0; ii < test_base.size(param); ++ii)
result[ii] = f_value * test_values[ii];
}));

As for the local bilinear form, dune-gdt already ships suitable local integrands. The local integrand to model a product between two bases is the LocalElementProductIntegrand. This is again an integrand which expects a test and an ansatz basis. To be usable in the functional, we need to fix one argument of the local integrand (namely, to be the force f), which we achieve by first creating a corresponding data function,
#include <dune/xt/functions/generic/function.hh>
const XT::Functions::GenericFunction<d> force(
3,
[](const auto& x, const auto& /*param*/) {
return M_PI_2 * M_PI * std::cos(M_PI_2 * x[0]) * std::cos(M_PI_2 * x[1]);
});

and then using the correct integrand, which we directly append for assembly to the vector functional:
#include <dune/gdt/local/integrands/conversion.hh>
#include <dune/gdt/local/integrands/product.hh>
auto rhs_func = make_vector_functional<V>(space);
rhs_func.append(LocalElementIntegralFunctional<E>(
local_binary_to_unary_element_integrand(LocalElementProductIntegrand<E>(),
force.as_grid_function<E>())));

We are now ready to actually assemble the matrix and vector representations of the bilinear form and the functional. In addition, we need to determine all DoFs which lie on the Dirichlet boundary in order to solve the arising linear system:
#include <dune/gdt/tools/dirichlet-constraints.hh>
auto dirichlet_constraints = make_dirichlet_constraints(space, boundary_info);
auto walker = XT::Grid::make_walker(grid_view);
walker.append(lhs_op);
walker.append(rhs_func);
walker.append(dirichlet_constraints);
walker.walk(/*thread_parallel=*/true);

Note:
- Using the walker, we can do all the work in one grid walk; the walker takes care of the parallel execution.
- The Dirichlet constraints are also copied for each thread, so each thread assembles the DoFs which belong to its partition. After the walk, these are all collected into the dirichlet_constraints object, which must therefore not be a temporary.
Now we are in the position to clear all Dirichlet DoFs and to solve the linear system:
dirichlet_constraints.apply(lhs_op.matrix(), rhs_func.vector());
auto solution = make_discrete_function<V>(space);
XT::LA::make_solver(lhs_op.matrix()).apply(rhs_func.vector(),
solution.dofs().vector());
solution.visualize("solution");

Note:
- A discrete function either creates a matching DoF vector or takes a reference to an existing one.
- Different solver types are available via:

auto linear_solver = XT::LA::make_solver(lhs_op.matrix());
for (auto type : {"superlu", "bicgstab.ssor", "bicgstab.amg.ssor", "bicgstab.amg.ilu0"})
  linear_solver.apply(rhs_func.vector(), solution.dofs().vector(), type);
Apart from creating the data functions, the complete code to solve the stationary heat equation with trivial Dirichlet boundary values for given data functions is:
auto grid = XT::Grid::make_cube_grid<G>(/*lower_left=*/-1.,
/*upper_right=*/1.,
/*num_elements_per_dim=*/128);
auto grid_view = grid.leaf_view();
XT::Grid::NormalBasedBoundaryInfo<I> boundary_info;
boundary_info.register_new_normal({0., -1.}, new XT::Grid::DirichletBoundary());
boundary_info.register_new_normal({0., 1.}, new XT::Grid::DirichletBoundary());
boundary_info.register_new_normal({-1., 0.}, new XT::Grid::DirichletBoundary());
boundary_info.register_new_normal({1., 0.}, new XT::Grid::DirichletBoundary());
auto space = make_continuous_lagrange_space(grid_view, /*polorder=*/1);
auto lhs_op = make_matrix_operator<M>(space, Stencil::element);
lhs_op.append(LocalElementIntegralBilinearForm<E>(LocalEllipticIntegrand<E>(diff_f,
diff_t)));
auto rhs_func = make_vector_functional<V>(space);
rhs_func.append(LocalElementIntegralFunctional<E>(
local_binary_to_unary_element_integrand(LocalElementProductIntegrand<E>(),
force.as_grid_function<E>())));
auto dirichlet_constraints = make_dirichlet_constraints(space, boundary_info);
auto walker = XT::Grid::make_walker(grid_view);
walker.append(lhs_op);
walker.append(rhs_func);
walker.append(dirichlet_constraints);
walker.walk(/*thread_parallel=*/true);
dirichlet_constraints.apply(lhs_op.matrix(), rhs_func.vector());
auto solution = make_discrete_function<V>(space);
XT::LA::make_solver(lhs_op.matrix()).apply(rhs_func.vector(),
solution.dofs().vector());First we need to create a data function to represent the exact solution:
const XT::Functions::GenericFunction<d> exact_solution(
3,
/*evaluate=*/[](const auto& x, const auto& /*param*/) {
return std::cos(M_PI_2 * x[0]) * std::cos(M_PI_2 * x[1]); },
/*name=*/"exact_solution",
/*parameter_type=*/{},
/*jacobian=*/
[](const auto& x, const auto& /*param*/) {
const auto pre = -0.5 * M_PI;
const auto x_arg = M_PI_2 * x[0];
const auto y_arg = M_PI_2 * x[1];
FieldMatrix<double, 1, d> result;
result[0] = {pre * std::sin(x_arg) * std::cos(y_arg),
pre * std::cos(x_arg) * std::sin(y_arg)};
return result;
});

We can then compute the error in the H^1 semi-norm and in the L^2 norm like this:
const auto error = solution - exact_solution.as_grid_function<E>();
auto h1_prod = make_localizable_bilinear_form(grid_view, error, error);
h1_prod.append(LocalElementIntegralBilinearForm<E>(LocalEllipticIntegrand<E>(diff_f,
diff_t)));
auto l2_prod = make_localizable_bilinear_form(grid_view, error, error);
l2_prod.append(LocalElementIntegralBilinearForm<E>(LocalElementProductIntegrand<E>()));
walker.append(h1_prod);
walker.append(l2_prod);
walker.walk(/*thread_parallel=*/true);
std::cout << "error in H^1 semi-norm: " << std::sqrt(h1_prod.result()) << std::endl;
std::cout << "error in L^2 norm: " << std::sqrt(l2_prod.result()) << std::endl;

To switch to a discontinuous (SWIPDG) discretization, all we need to change in the above implementation is
- the discrete function space
#include <dune/gdt/spaces/l2/discontinuous-lagrange.hh>
auto space = make_discontinuous_lagrange_space(grid_view, /*polorder=*/1);
- the sparsity pattern of the matrix operator
auto lhs_op = make_matrix_operator<M>(space, Stencil::element_and_intersection);

- additional coupling terms on the inner intersections
#include <dune/gdt/local/integrands/elliptic-ipdg.hh>
lhs_op.append(LocalIntersectionIntegralBilinearForm<I>(
                  LocalEllipticIpdgIntegrands::Inner<I>(diff_f, diff_t)), {},
              XT::Grid::ApplyOn::InnerIntersectionsOnce<GV>());
- and the corresponding terms on the Dirichlet boundary
lhs_op.append(LocalIntersectionIntegralBilinearForm<I>(
                  LocalEllipticIpdgIntegrands::DirichletBoundaryLhs<I>(diff_f, diff_t)), {},
              XT::Grid::ApplyOn::CustomBoundaryIntersections<GV>(
                  boundary_info, new XT::Grid::DirichletBoundary()));

- no Dirichlet constraints
To conclude, the complete SWIPDG bilinear form is given by
auto lhs_op = make_matrix_operator<M>(space, Stencil::element_and_intersection);
lhs_op.append(LocalElementIntegralBilinearForm<E>(
LocalEllipticIntegrand<E>(diff_f, diff_t)));
lhs_op.append(LocalIntersectionIntegralBilinearForm<I>(
LocalEllipticIpdgIntegrands::Inner<I>(diff_f, diff_t)), {},
XT::Grid::ApplyOn::InnerIntersectionsOnce<GV>());
lhs_op.append(LocalIntersectionIntegralBilinearForm<I>(
LocalEllipticIpdgIntegrands::DirichletBoundaryLhs<I>(diff_f, diff_t)), {},
XT::Grid::ApplyOn::CustomBoundaryIntersections<GV>(
boundary_info, new XT::Grid::DirichletBoundary()));