
Commit d1cab8c

Authored Dec 15, 2024

Merge pull request #84 from gridap/develop

StaggeredFEOperators

2 parents 1ede622 + cc769ac, commit d1cab8c

15 files changed: +723 -153 lines

CHANGELOG.md (-10)

This file was deleted.

NEWS.md (+18)

@@ -0,0 +1,18 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [Unreleased]
+
+### Added
+
+- Added support for GMG in serial. Since PR [#68](https://github.com/gridap/GridapSolvers.jl/pull/68).
+- Added Vanka-like smoothers in serial. Since PR [#68](https://github.com/gridap/GridapSolvers.jl/pull/68).
+- Added `StaggeredFEOperators` and `StaggeredFESolvers`. Since PR [#84](https://github.com/gridap/GridapSolvers.jl/pull/84).
+
+## Previous versions
+
+A changelog is not maintained for versions older than 0.4.0.

docs/src/BlockSolvers.md (+10)

@@ -59,3 +59,13 @@ BlockTriangularSolver
 BlockTriangularSolver(blocks::AbstractMatrix{<:SolverBlock},solvers ::AbstractVector{<:LinearSolver},)
 BlockTriangularSolver(solvers::AbstractVector{<:LinearSolver})
 ```
+
+## Staggered FEOperators
+
+```@docs
+StaggeredFESolver
+StaggeredFEOperator
+StaggeredAffineFEOperator
+StaggeredNonlinearFEOperator
+solve!(xh, solver::StaggeredFESolver{NB}, op::StaggeredFEOperator{NB}, ::Nothing) where {NB}
+```
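
Usage sketch (illustrative, not part of this diff): the snippet below shows how a two-field staggered problem could be set up and solved with the types documented above. The constructor signatures assumed here (vectors of forms and spaces for `StaggeredAffineFEOperator`, a vector of linear solvers for `StaggeredFESolver`), the expected layout of the initial guess, and the mesh/weak forms are assumptions for illustration; the docstrings listed above are the authoritative reference.

```julia
# Hypothetical two-field staggered problem: field 2 depends on the already-solved field 1.
using Gridap
using GridapSolvers.BlockSolvers

model = CartesianDiscreteModel((0,1,0,1),(16,16))
Ω  = Triangulation(model)
dΩ = Measure(Ω,2)

reffe = ReferenceFE(lagrangian,Float64,1)
V1 = TestFESpace(model,reffe;dirichlet_tags="boundary"); U1 = TrialFESpace(V1,0.0)
V2 = TestFESpace(model,reffe;dirichlet_tags="boundary"); U2 = TrialFESpace(V2,0.0)

# Block k receives the previously solved fields as its first argument.
a1(_,u1,v1)     = ∫(∇(u1)⋅∇(v1))*dΩ
l1(_,v1)        = ∫(v1)*dΩ
a2((x1,),u2,v2) = ∫((1.0 + x1*x1)*∇(u2)⋅∇(v2))*dΩ
l2((x1,),v2)    = ∫(x1*v2)*dΩ

# Assumed constructor signatures; check the docstrings above for the exact API.
op     = StaggeredAffineFEOperator([a1,a2],[l1,l2],[U1,U2],[V1,V2])
solver = StaggeredFESolver([LUSolver(),LUSolver()])

xh = zero(MultiFieldFESpace([U1,U2]))        # initial guess; the expected layout is an assumption
xh, cache = solve!(xh, solver, op, nothing)  # signature as documented above
```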

docs/src/index.md (+29)

@@ -10,6 +10,8 @@ GridapSolvers provides algebraic and non-algebraic solvers for the Gridap ecosys
 
 Solvers follow a modular design, where most blocks can be combined to produce PDE-tailored solvers for a wide range of problems.
 
+## Table of contents
+
 ```@contents
 Pages = [
   "SolverInterfaces.md",
@@ -20,3 +22,30 @@ Pages = [
   "PatchBasedSmoothers.md"
 ]
 ```
+
+## Documentation and examples
+
+Comprehensive documentation is available [here](https://gridap.github.io/GridapSolvers.jl/stable/).
+
+A list of examples is available in the documentation. These include well-known problems such as the Stokes, incompressible Navier-Stokes, and Darcy problems. The featured scripts are available in `test/Applications`.
+
+An example of how to use the library on an HPC cluster is available in `joss_paper/scalability`. The included example and drivers are used to generate the scalability results in our [JOSS paper](https://doi.org/10.21105/joss.07162).
+
+## Installation
+
+GridapSolvers is a registered package in the official [Julia package registry](https://github.com/JuliaRegistries/General). Thus, installing GridapSolvers is straightforward using [Julia's package manager](https://julialang.github.io/Pkg.jl/v1/). Open the Julia REPL, type `]` to enter package mode, and install as follows:
+
+```julia
+pkg> add GridapSolvers
+pkg> build
+```
+
+Building is required to link the external artifacts (e.g., PETSc, p4est) to the Julia environment. Restart Julia after building for the changes to take effect.
+
+### Using custom binaries
+
+The previous installation steps set up GridapSolvers to work with Julia's pre-compiled artifacts for MPI, PETSc and p4est. However, you can also link local copies of these libraries. This can be desirable on clusters, where hardware-specific libraries may be faster or more stable than the ones provided by Julia. To do so, follow the configuration instructions for:
+
+- [MPI.jl](https://juliaparallel.org/MPI.jl/stable/configuration/)
+- [GridapPETSc.jl](https://github.com/gridap/GridapPETSc.jl)
+- [GridapP4est.jl](https://github.com/gridap/GridapP4est.jl), and [P4est_wrapper.jl](https://github.com/gridap/p4est_wrapper.jl)
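
As a side note to the Installation section added above, the same steps can be run non-interactively with the standard Pkg API (plain Julia, not specific to this diff):

```julia
using Pkg
Pkg.add("GridapSolvers")
Pkg.build()  # links the external artifacts (e.g., PETSc, p4est) mentioned above
```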

src/BlockSolvers/BlockFEOperators.jl (+87 -75)

@@ -1,47 +1,62 @@
 
 struct BlockFEOperator{NB,SB,P} <: FEOperator
-  global_op :: FEOperator
-  block_ops :: Matrix{<:Union{<:FEOperator,Missing,Nothing}}
-  is_nonlinear :: Matrix{Bool}
+  global_op :: FEOperator
+  block_ids :: Vector{CartesianIndex{2}}
+  block_ops :: Vector{FEOperator}
+  nonlinear :: Vector{Bool}
 end
 
-const BlockFESpaceTypes{NB,SB,P} = Union{<:MultiFieldFESpace{<:BlockMultiFieldStyle{NB,SB,P}},<:GridapDistributed.DistributedMultiFieldFESpace{<:BlockMultiFieldStyle{NB,SB,P}}}
-
 function BlockFEOperator(
-  res::Matrix{<:Union{<:Function,Missing,Nothing}},
+  res::Vector{<:Union{<:Function,Missing,Nothing}},
   jac::Matrix{<:Union{<:Function,Missing,Nothing}},
+  args...; nonlinear = fill(true,size(res))
+)
+  keep(x) = !ismissing(x) && !isnothing(x)
+  ids = findall(keep, res)
+  @assert ids == findall(keep, jac)
+  _res = [res[I] for I in ids]
+  _jac = [jac[I] for I in ids]
+  return BlockFEOperator(_res,_jac,ids,args...; nonlinear = nonlinear[ids])
+end
+
+function BlockFEOperator(
+  contributions :: Vector{<:Tuple{Any,Function,Function}}, args...; kwargs...
+)
+  ids = [CartesianIndex(c[1]) for c in contributions]
+  res = [c[2] for c in contributions]
+  jac = [c[3] for c in contributions]
+  return BlockFEOperator(res,jac,ids,args...;kwargs...)
+end
+
+function BlockFEOperator(
+  res::Vector{<:Function},
+  jac::Vector{<:Function},
+  ids::Vector{CartesianIndex{2}},
   trial::BlockFESpaceTypes,
-  test::BlockFESpaceTypes;
+  test ::BlockFESpaceTypes;
   kwargs...
 )
   assem = SparseMatrixAssembler(test,trial)
-  return BlockFEOperator(res,jac,trial,test,assem)
+  return BlockFEOperator(res,jac,ids,trial,test,assem;kwargs...)
 end
 
+# TODO: I think nonlinear should not be a kwarg, since it's the whole point of this operator
 function BlockFEOperator(
-  res::Matrix{<:Union{<:Function,Missing,Nothing}},
-  jac::Matrix{<:Union{<:Function,Missing,Nothing}},
+  res::Vector{<:Function},
+  jac::Vector{<:Function},
+  ids::Vector{CartesianIndex{2}},
   trial::BlockFESpaceTypes{NB,SB,P},
   test::BlockFESpaceTypes{NB,SB,P},
   assem::MultiField.BlockSparseMatrixAssembler{NB,NV,SB,P};
-  is_nonlinear::Matrix{Bool}=fill(true,(NB,NB))
+  nonlinear::Vector{Bool}=fill(true,length(res))
 ) where {NB,NV,SB,P}
-  @check size(res,1) == size(jac,1) == NB
-  @check size(res,2) == size(jac,2) == NB
-
-  global_res = residual_from_blocks(NB,SB,P,res)
-  global_jac = jacobian_from_blocks(NB,SB,P,jac)
+  ranges = MultiField.get_block_ranges(NB,SB,P)
+  global_res = residual_from_blocks(ids,ranges,res)
+  global_jac = jacobian_from_blocks(ids,ranges,jac)
   global_op = FEOperator(global_res,global_jac,trial,test,assem)
 
-  trial_blocks = blocks(trial)
-  test_blocks = blocks(test)
-  assem_blocks = blocks(assem)
-  block_ops = map(CartesianIndices(res)) do I
-    if !ismissing(res[I]) && !isnothing(res[I])
-      FEOperator(res[I],jac[I],test_blocks[I[1]],trial_blocks[I[2]],assem_blocks[I])
-    end
-  end
-  return BlockFEOperator{NB,SB,P}(global_op,block_ops,is_nonlinear)
+  block_ops = map(FEOperator,res,jac,blocks(trial),blocks(test),blocks(assem))
+  return BlockFEOperator{NB,SB,P}(global_op,ids,block_ops,nonlinear)
 end
 
 # BlockArrays API
@@ -58,85 +73,82 @@ Algebra.allocate_jacobian(op::BlockFEOperator,u) = allocate_jacobian(op.global_o
 Algebra.jacobian(op::BlockFEOperator,u) = jacobian(op.global_op,u)
 Algebra.residual!(b::AbstractVector,op::BlockFEOperator,u) = residual!(b,op.global_op,u)
 
-function Algebra.jacobian!(A::AbstractBlockMatrix,op::BlockFEOperator{NB},u) where NB
-  map(blocks(A),blocks(op),op.is_nonlinear) do A,op,nl
-    if nl
-      residual!(A,op,u)
+function Algebra.jacobian!(A::AbstractBlockMatrix,op::BlockFEOperator,u)
+  A_blocks = blocks(A)
+  for (i,I) in enumerate(op.block_ids)
+    if op.nonlinear[i]
+      jacobian!(A_blocks[I],op.block_ops[i],u)
     end
   end
   return A
 end
 
 # Private methods
 
-function residual_from_blocks(NB,SB,P,block_residuals)
-  function res(u,v)
-    block_ranges = MultiField.get_block_ranges(NB,SB,P)
-    block_u = map(r -> (length(r) == 1) ? u[r[1]] : Tuple(u[r]), block_ranges)
-    block_v = map(r -> (length(r) == 1) ? v[r[1]] : Tuple(v[r]), block_ranges)
-    block_contrs = map(CartesianIndices(block_residuals)) do I
-      if !ismissing(block_residuals[I]) && !isnothing(block_residuals[I])
-        block_residuals[I](block_u[I[2]],block_v[I[1]])
-      end
-    end
-    return add_block_contribs(block_contrs)
-  end
-  return res
-end
-
-function jacobian_from_blocks(NB,SB,P,block_jacobians)
-  function jac(u,du,dv)
-    block_ranges = MultiField.get_block_ranges(NB,SB,P)
-    block_u  = map(r -> (length(r) == 1) ? u[r[1]]  : Tuple(u[r]) , block_ranges)
-    block_du = map(r -> (length(r) == 1) ? du[r[1]] : Tuple(du[r]), block_ranges)
-    block_dv = map(r -> (length(r) == 1) ? dv[r[1]] : Tuple(dv[r]), block_ranges)
-    block_contrs = map(CartesianIndices(block_jacobians)) do I
-      if !ismissing(block_jacobians[I]) && !isnothing(block_jacobians[I])
-        block_jacobians[I](block_u[I[2]],block_du[I[2]],block_dv[I[1]])
-      end
-    end
-    return add_block_contribs(block_contrs)
-  end
-  return jac
-end
-
-function add_block_contribs(contrs)
-  c = contrs[1]
-  for ci in contrs[2:end]
-    if !ismissing(ci) && !isnothing(ci)
-      c = c + ci
-    end
-  end
-  return c
-end
-
 function BlockArrays.blocks(a::MultiField.BlockSparseMatrixAssembler)
   return a.block_assemblers
 end
 
 function BlockArrays.blocks(f::MultiFieldFESpace{<:BlockMultiFieldStyle{NB,SB,P}}) where {NB,SB,P}
   block_ranges = MultiField.get_block_ranges(NB,SB,P)
   block_spaces = map(block_ranges) do range
-    (length(range) == 1) ? f[range[1]] : MultiFieldFESpace(f[range])
+    (length(range) == 1) ? f[range[1]] : MultiFieldFESpace(f.spaces[range])
   end
   return block_spaces
 end
 
 function BlockArrays.blocks(f::GridapDistributed.DistributedMultiFieldFESpace{<:BlockMultiFieldStyle{NB,SB,P}}) where {NB,SB,P}
   block_gids = blocks(get_free_dof_ids(f))
-
   block_ranges = MultiField.get_block_ranges(NB,SB,P)
   block_spaces = map(block_ranges,block_gids) do range, gids
     if (length(range) == 1)
       space = f[range[1]]
     else
-      global_sf_spaces = f[range]
-      local_sf_spaces = GridapDistributed.to_parray_of_arrays(map(local_views,global_sf_spaces))
+      global_sf_spaces = f.field_fe_space[range]
+      local_sf_spaces = to_parray_of_arrays(map(local_views,global_sf_spaces))
       local_mf_spaces = map(MultiFieldFESpace,local_sf_spaces)
       vector_type = GridapDistributed._find_vector_type(local_mf_spaces,gids)
-      space = MultiFieldFESpace(global_sf_spaces,local_mf_spaces,gids,vector_type)
+      space = DistributedMultiFieldFESpace(global_sf_spaces,local_mf_spaces,gids,vector_type)
     end
     space
   end
   return block_spaces
 end
+
+function liform_from_blocks(ids, ranges, liforms)
+  function liform(v)
+    c = DomainContribution()
+    for (I,lf) in zip(ids, liforms)
+      vk = v[ranges[I]]
+      c += lf(vk)
+    end
+    return c
+  end
+  return liform
+end
+
+function biform_from_blocks(ids, ranges, biforms)
+  function biform(u,v)
+    c = DomainContribution()
+    for (I,bf) in zip(ids, biforms)
+      uk = u[ranges[I[1]]]
+      vk = v[ranges[I[2]]]
+      c += bf(uk,vk)
+    end
+    return c
+  end
+  return biform
+end
+
+function triform_from_blocks(ids, ranges, triforms)
+  function triform(u,du,dv)
+    c = DomainContribution()
+    for (I,tf) in zip(ids, triforms)
+      duk = du[ranges[I[1]]]
+      dvk = dv[ranges[I[2]]]
+      c += tf(u,duk,dvk)
+    end
    return c
  end
  return triform
 end
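
For orientation, here is a minimal sketch of how the new contributions-based `BlockFEOperator` constructor shown above might be called. Only the call pattern (one `(block index, residual, jacobian)` tuple per populated block, block multi-field trial/test spaces, and the `nonlinear` keyword) is taken from the code in this diff; the model, spaces, and weak forms are hypothetical.

```julia
using Gridap
using Gridap.MultiField: BlockMultiFieldStyle
using GridapSolvers.BlockSolvers

model = CartesianDiscreteModel((0,1,0,1),(8,8))
Ω  = Triangulation(model)
dΩ = Measure(Ω,2)

reffe = ReferenceFE(lagrangian,Float64,1)
V = TestFESpace(model,reffe;dirichlet_tags="boundary")
U = TrialFESpace(V,0.0)

# Two-field trial/test spaces with block layout (BlockFESpaceTypes in the code above).
X = MultiFieldFESpace([U,U];style=BlockMultiFieldStyle())
Y = MultiFieldFESpace([V,V];style=BlockMultiFieldStyle())

# One entry per populated block: (block index, residual biform, jacobian triform).
# Off-diagonal blocks are simply omitted.
contribs = [
  ((1,1), (u,v) -> ∫(∇(u)⋅∇(v))*dΩ, (u,du,dv) -> ∫(∇(du)⋅∇(dv))*dΩ),
  ((2,2), (u,v) -> ∫(u*v)*dΩ,       (u,du,dv) -> ∫(du*dv)*dΩ),
]

# `nonlinear` flags which block jacobians are reassembled on each call to jacobian!.
op = BlockFEOperator(contribs, X, Y; nonlinear = [false,false])
```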

src/BlockSolvers/BlockSolvers.jl (+31 -19)

@@ -1,28 +1,40 @@
 module BlockSolvers
-using LinearAlgebra
-using SparseArrays
-using SparseMatricesCSR
-using BlockArrays
-using IterativeSolvers
 
-using Gridap
-using Gridap.Helpers, Gridap.Algebra, Gridap.CellData, Gridap.Arrays, Gridap.FESpaces, Gridap.MultiField
-using PartitionedArrays
-using GridapDistributed
+using LinearAlgebra
+using SparseArrays
+using SparseMatricesCSR
+using BlockArrays
+using IterativeSolvers
 
-using GridapSolvers.MultilevelTools
-using GridapSolvers.SolverInterfaces
+using Gridap
+using Gridap.Helpers, Gridap.Algebra, Gridap.CellData, Gridap.Arrays, Gridap.FESpaces, Gridap.MultiField
+using PartitionedArrays
+using GridapDistributed
 
-include("BlockFEOperators.jl")
+using GridapSolvers.MultilevelTools
+using GridapSolvers.SolverInterfaces
 
-include("BlockSolverInterfaces.jl")
-include("BlockDiagonalSolvers.jl")
-include("BlockTriangularSolvers.jl")
+using Gridap.MultiField: BlockSparseMatrixAssembler
 
-export BlockFEOperator
+using GridapDistributed: to_parray_of_arrays
+using GridapDistributed: DistributedMultiFieldFESpace, DistributedMultiFieldFEFunction
 
-export MatrixBlock, LinearSystemBlock, NonlinearSystemBlock, BiformBlock, TriformBlock
+const MultiFieldFESpaceTypes = Union{<:MultiFieldFESpace,<:GridapDistributed.DistributedMultiFieldFESpace}
+const BlockFESpaceTypes{NB,SB,P} = Union{<:MultiFieldFESpace{<:BlockMultiFieldStyle{NB,SB,P}},<:GridapDistributed.DistributedMultiFieldFESpace{<:BlockMultiFieldStyle{NB,SB,P}}}
+
+include("BlockSolverInterfaces.jl")
+include("BlockDiagonalSolvers.jl")
+include("BlockTriangularSolvers.jl")
+
+include("BlockFEOperators.jl")
+include("StaggeredFEOperators.jl")
+
+export MatrixBlock, LinearSystemBlock, NonlinearSystemBlock, BiformBlock, TriformBlock
+
+export BlockDiagonalSolver
+export BlockTriangularSolver
+
+export BlockFEOperator
+export StaggeredFEOperator, StaggeredAffineFEOperator, StaggeredNonlinearFEOperator, StaggeredFESolver
 
-export BlockDiagonalSolver
-export BlockTriangularSolver
 end