
Idea: MixedTensor #188

Draft · wants to merge 7 commits into master
Conversation

KnutAM (Member) commented Dec 26, 2022

In some cases, a tensor could have mixed bases, e.g. a 2×3 tensor. This PR is a first tryout that is currently not very performant, but I think that can be "easily" fixed by using the same code-generation strategies as are currently used for the existing Tensor type.
The motivation came from looking at the CellVectorValues in Ferrite and thinking that, for the special interpolations that were recently discussed, it might be good to have e.g. 3d shape functions for 2d coordinates or vice versa. @termi-official / Ferrite-FEM/Ferrite.jl#569 / CellRTValues?

Some examples and ideas

julia> x = Vec{2}((0.5,0.5));
julia> N(x) = Vec{3}((1-x[1], x[2], x[1]*x[2]));
julia> ∂N∂x = gradient(N, x)
3×2 MixedTensor{2, (3, 2), Float64, 6}:
 -1.0  -0.0
  0.0   1.0
  0.5   0.5
julia> ∂N∂x' ⋅ ∂N∂x
2×2 Tensor{2, 2, Float64, 4}:
 1.25  0.25
 0.25  1.25

The point of the last result is to show that operations should always eagerly transform back to regular tensors whenever possible, i.e. when allequal(dims) (all bases are the same).
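
A minimal sketch of such an eager fallback, assuming a hypothetical helper that the operations dispatch to (the function name is illustrative, and allequal requires Julia 1.8+):

function _tensor_or_mixed(order::Int, dims::NTuple{N, Int}, data) where {N}
    if allequal(dims)
        return Tensor{order, dims[1]}(data)   # all bases agree: return a plain Tensor
    else
        return MixedTensor{order, dims}(data) # genuinely mixed bases
    end
end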

Parameterization

Currently, the type is parameterized as MixedTensor{order, dims, T, M}, where dims is checked during construction to be an NTuple{order, Int}. I'm not sure what the best option is here; SArray uses Tuple{dims...} (e.g. Tuple{2,3}). Perhaps a more general approach would be to define a basis type, e.g.

struct CartesianBase{dim} end
struct TensorBasis{B <: NTuple{<:Any, CartesianBase}}
    bases::B
end

But this makes construction less convenient, e.g.

MixedTensor{2, TensorBasis((CartesianBase{2}(), CartesianBase{3}()))}((i,j)->i*j)

compared to the current

julia> MixedTensor{2, (2,3)}((i,j)->i*j)
2×3 MixedTensor{2, (2, 3), Int64, 6}:
 1  2  3
 2  4  6
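
For comparison, a minimal sketch of the SArray-style parameterization mentioned above (SizedMixedTensor is purely illustrative, not part of this PR):

# Hypothetical: encode dims the way StaticArrays does, as a Tuple type.
struct SizedMixedTensor{order, S <: Tuple, T, M}
    data::NTuple{M, T}
end

SizedMixedTensor{2, Tuple{2, 3}, Int, 6}(ntuple(identity, 6))  # dims recoverable from S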

KnutAM marked this pull request as draft on December 26, 2022, 12:58
codecov-commenter commented Dec 26, 2022

Codecov Report

Patch coverage: 52.53% and project coverage change: -3.89% ⚠️

Comparison is base (996043c) 96.78% compared to head (1557caa) 92.89%.
Report is 3 commits behind head on master.

❗ Current head 1557caa differs from pull request most recent head 5ad0648. Consider uploading reports for the commit 5ad0648 to get more accurate results


Additional details and impacted files
@@            Coverage Diff             @@
##           master     #188      +/-   ##
==========================================
- Coverage   96.78%   92.89%   -3.89%     
==========================================
  Files          17       18       +1     
  Lines        1338     1394      +56     
==========================================
  Hits         1295     1295              
- Misses         43       99      +56     
Files Changed                      Coverage            Δ
src/Tensors.jl                     94.44% <ø>          (ø)
src/mixed_tensors.jl               44.61% <44.61%>     (ø)
src/automatic_differentiation.jl   97.83% <88.00%>     (-1.21%) ⬇️
src/tensor_products.jl             100.00% <100.00%>   (ø)

... and 3 files with indirect coverage changes


termi-official (Member) commented Dec 29, 2022

This one would be really great to have! As a first note, we do not need this for H(div) and H(curl) spaces. To elaborate on where I utilize such embeddings: I sometimes have problems with surface flows, surface reaction-diffusion systems, or reaction-diffusion systems in "bioelectrical wires", which are all embedded in 3D. However, unlike solid mechanics problems, it is possible to treat these as if they were actual 1D/2D elements (because what happens in the radial/normal direction can be neglected).

Another question here: this should already (or almost already) work for mixed derivatives, right? It turns out that for multiphysics problems (and sometimes also mechanics problems with internal variables) we need partial derivatives of the form $\partial_F \partial_s \phi$ and $\partial_s \partial_F \phi$, where $s$ is some field or internal variable, so this would be quite handy to have in Tensors.jl. If not, I will try to find some time for a follow-up PR to implement such derivatives. :)

Edit 1: To also give some explanation of the CellRTValues - if I understand the implementation correctly, this should be a so-called boundary element method, where you solve the problem only on a discretization of the boundary of your domain. Hence your elements naturally have codimension 1. I need more context from the original author on the exact formulation, because I have no experience with embedded H(curl) and H(div) problems.

KnutAM (Member, Author) commented Dec 29, 2022

Nice to hear! Thanks for the comments!

The MixedTensor type allows taking derivatives of two tensors with different dimensions, so as long as your state variables are tensorial quantities, this should work. If you would like to take derivatives wrt. user-defined types, I'm not sure how to design the abstraction layer to retain the computational efficiency of Tensors.jl.
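
As a sketch of that nested-derivative pattern (assuming this PR's MixedTensor; the potential ϕ and the variables u and s are made up for illustration):

using Tensors
ϕ(u, s) = (u ⋅ u) * (s ⋅ s) / 2            # hypothetical coupled potential
u = Vec{3}((1.0, 2.0, 3.0))
s = Vec{2}((0.5, 1.5))
∂ϕ∂u(u, s) = gradient(u_ -> ϕ(u_, s), u)   # Vec{3}
# The cross derivative ∂²ϕ/∂s∂u couples quantities of different dimension,
# so with this PR it would come out as a 3×2 MixedTensor{2, (3, 2)}:
∂²ϕ∂s∂u = gradient(s_ -> ∂ϕ∂u(u, s_), s)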

In this PR, I have not included support for orders other than the current 1, 2, and 4.
(I don't see any technical difficulty in also implementing 3rd order; it will just require a bit of code to become as performant, plus sufficient testing.)

At this stage, this is just an idea, so I wouldn't count on it being merged any time soon. One potential issue is increased compilation time due to the large number of possible combinations.

termi-official (Member) commented

Is the precompilation increase really that significant? And if this is a problem for users, wouldn't the actual solution be to build a sysimage?

KnutAM (Member, Author) commented Dec 30, 2022

> Is the precompilation increase really that significant? And if this is a problem for users, wouldn't the actual solution be to build a sysimage?

Ah, right - I was worried about "combinatorial type explosion", but as long as nothing is explicitly precompiled, I suppose this should only lead to performance degradation for type-unstable code, which code using Tensors should normally avoid anyway? (Of course there is also a slight degradation in compilation time, but that is likely negligible, as you say.)

@KristofferC or @fredrikekre: would you like such a feature in Tensors?
If so, I'll think more about the appropriate interface/parameterization.
Once the interface is set, this branch can be used for experimenting with Ferrite and similar.
If it turns out to be useful, I'll try to find some hobby programming time to work on making it performant enough to be worthy of Tensors.jl :)

# where the dimensions of the coordinate and the shape function are not
# the same (and the shape function is not scalar)
struct MixedTensor{order, dims, T, M} <: AbstractTensor{order, dims, T}
    data::NTuple{M, T}
Member commented

Maybe it makes sense to have data::SMatrix, which I think already has the correct parameterization?

KnutAM (Member, Author) commented

Yes,

struct MyTensor{order, dims, T, M}
    data::SArray{dims, T, order, M}
end

would simplify (at least) single dot-products a lot!
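
For instance, a minimal sketch of that simplification (MyTensor is the illustrative type from the snippet above, not code from this PR; dot here means the single contraction, following the Tensors.jl convention for ⋅):

using StaticArrays
import LinearAlgebra: dot

struct MyTensor{order, dims, T, M}
    data::SArray{dims, T, order, M}
end

function dot(a::MyTensor{2, Tuple{m, n}}, b::MyTensor{2, Tuple{n, k}}) where {m, n, k}
    data = a.data * b.data  # StaticArrays already unrolls this SMatrix product
    return MyTensor{2, Tuple{m, k}, eltype(data), m * k}(data)
end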

lijas commented Sep 11, 2023

Random comment:
If we don't want to create a new type MixedTensor, I think it could be possible to dispatch on the type of the type parameter order:

struct Tensor{order, dims, T, M, order_t} <: AbstractTensor{order, dims, T}
    n::SArray{...}
end

const MixedTensor{order, dim, T, M} = Tensor{order, dim, T, M, Tuple{Int,Int}}

Not sure if there would be any major benefit with this, though.

lijas commented Sep 11, 2023

Also, MixedTensors could be beneficial when working with shells (and beams), where we have two surface coordinates (ξ and η) in 3D space, and it is quite common to want to take the gradient dx/dΞ (with Ξ = (ξ, η)), which would be a 3×2 tensor. Currently, what I do is just store it as two vectors, dx/dξ and dx/dη, instead (which also works fine).
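
A sketch of that shell case, assuming this PR's MixedTensor (the midsurface map x is made up for illustration):

using Tensors
x(ξ) = Vec{3}((ξ[1], ξ[2], ξ[1] * ξ[2]))  # surface coordinates (ξ, η) ↦ 3D position
ξ = Vec{2}((0.25, 0.75))
∂x∂ξ = gradient(x, ξ)  # would be a 3×2 MixedTensor{2, (3, 2), Float64, 6},
                       # replacing the two separately stored tangent vectors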

KnutAM (Member, Author) commented Sep 11, 2023

Great that it can be useful for more cases!

After talking to @fredrikekre earlier, a possible way forward is to define only the functionality required for specific cases and consider the mixed tensor an in-development feature. In Ferrite, that means we can have shape gradients etc. output mixed tensors instead of the StaticArrays that are currently returned.

> possible to dispatch on the type of the type parameter order

Does that mean that the regular tensor would change to, e.g., Tensor{order,dim,T,M,Nothing} or something like that? (In that case, I guess one could also have Tensor{order,dim,T,M,:regular} and Tensor{order,dim,T,M,:mixed}, but I, too, am not sure if that has any advantage over having a separate type.) Please let me know if I am missing something.
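
For concreteness, a sketch of what that single-type variant might look like (all names and the extra kind parameter are hypothetical):

using Tensors: AbstractTensor

struct UnifiedTensor{order, dims, T, M, kind} <: AbstractTensor{order, dims, T}
    data::NTuple{M, T}
end

const RegularTensor{order, dim, T, M} = UnifiedTensor{order, dim, T, M, :regular}
const MixedTensor2{order, dims, T, M} = UnifiedTensor{order, dims, T, M, :mixed}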

lijas commented Sep 11, 2023

I agree, it makes sense to view mixed tensors (and 3rd order tensors) as in-development, and not to implement all operations/features for them all at once. For example, with shells I have wanted 3rd order tensors basically just for storage (of higher order gradients), not for doing multiplication, addition, or other operations with them.
