Exponential #94
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
`src/implementations/exponential.jl` (outdated):

```julia
function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
    D, V = eigh_full(A, alg.eigh_alg)
    copyto!(expA, V * Diagonal(exp.(diagview(D))) * inv(V))
```
Reduced allocation strategy:
```diff
- copyto!(expA, V * Diagonal(exp.(diagview(D))) * inv(V))
+ iV = inv(V)
+ map!(exp, diagview(D))
+ mul!(expA, rmul!(V, D), iV)
```
It has to be `map!(exp, diagview(D), diagview(D))` instead of `map!(exp, diagview(D))`, but good suggestion otherwise. I have also added it for the `ExponentialViaEig`.

EDIT: the suggested `map!(exp, diagview(D))` form only works from Julia 1.12 onwards. That's why I will keep the version with 3 arguments.
Why not just `diagview(D) .= exp.(diagview(D))`?
Is that more efficient than the current code? If not, I'd prefer to keep it that way, since it feels a bit more natural to me.
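For reference, a self-contained sketch of the reduced-allocation strategy under discussion, using plain LinearAlgebra names: `eigen(Hermitian(A))` stands in for the package's `eigh_full`/`diagview`, and the function name `exp_via_eigh!` is made up for this illustration, so this is not the PR's actual code.

```julia
using LinearAlgebra

# Sketch of exponential-via-hermitian-eigendecomposition with fewer
# temporaries: exponentiate the eigenvalues in place, then multiply
# back directly into the preallocated output.
function exp_via_eigh!(expA::AbstractMatrix, A::AbstractMatrix)
    d, V = eigen(Hermitian(A))     # d :: Vector of eigenvalues
    map!(exp, d, d)                # three-argument form, works before Julia 1.12
    iV = inv(V)                    # for hermitian A this equals V'
    mul!(expA, rmul!(V, Diagonal(d)), iV)
    return expA
end

A = [2.0 1.0; 1.0 3.0]
expA = similar(A)
exp_via_eigh!(expA, A)
expA ≈ exp(A)   # agrees with LinearAlgebra's matrix exponential
```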
lkdvos
left a comment
Overall I'm not fully convinced by the interface of `exponential(!)`; especially in its current form and implementation this looks slightly strange.

LinearAlgebra uses an in-place version, i.e. `exp!` reuses the input array for its result, and looking at the different implementations you have here, it is not obvious that trying to fit this into an `exponentiate!(A, expA, alg)` signature is really helping us. On the contrary, all this is really doing is creating an additional copy at the end, just to make sure that the result is stored in the provided output.

As we discussed for your previous PR, this really is not the purpose of being able to provide the output argument.
For the algorithms, thinking a bit ahead, it might be appropriate to just call these something along the lines of matrix functions via eig, since presumably these approaches are actually generic for all of these implementations.
`src/implementations/exponential.jl` (outdated):

```julia
function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
    D, V = eigh_full(A, alg.eigh_alg)
    copyto!(expA, V * Diagonal(exp.(diagview(D))) * inv(V))
```
Why not just `diagview(D) .= exp.(diagview(D))`?

The idea of putting it in this framework is to allow for different sorts of algorithms, i.e. ones that would also work for `BigFloat`s. I get that in the BLASFloat case we should avoid extra allocations, but could you elaborate on your suggestion for …? I may be missing your point here, but that was the idea behind the …
I am indeed referring to the preallocated output, not the algorithms part. It really only makes sense to have the option of giving a preallocated output if we are actually able to use it, and for the current implementations you have this is not saving us any work; rather, it is increasing it, because you add an extra allocation at the beginning and an extra copy at the end. While it is definitely possible to have …
Sorry, I should have explained that better: I meant that I would like to avoid having to also define …

Is your suggestion then to just skip the whole …?

Okay, I see. I agree and will change this.
Regarding the algorithm names, I agree with Lukas and also think we want to have a general …

Regarding the role of the output arguments, I only partially agree. The whole point of why we started MatrixAlgebraKit.jl is that in TensorKit we first want to define the output tensor, and then compute the result block per block, storing each result in the corresponding block of the output tensor. Ideally, yes, the computation is such that we also use that output data as storage during the computation, in such a way that the end result "naturally" ends up there, but if that is difficult, a final …

Note that the LinearAlgebra …
Regardless of the comment about general matrix functions, it is a fact that the exponential is by far the most useful and common one that we need, so I am also not opposed to first thinking carefully about this one, and having some part of the implementation be specific to matrix exponentials. In particular, one important consideration that we might want to include in this design, specific to our use case, is that we also might be interested in computing …
To comment on the TensorKit interaction, I definitely agree with the purpose, but this is not actually the design we currently ended up with. So basically there are two comments I have. On the one hand, there is the question of whether or not there are implementations that benefit from providing an additional output array. On the other hand, given that interface, if there is no way of naturally making the output end up in the provided destination, I would really like to avoid ending up with a final …
I guess I am a bit confused, because most of the implementations now do actually perform the final step in the calculation in such a way that the result is directly stored in the output array, no? It is only the algorithm that goes via Base/LinearAlgebra that requires the extra … But it is also true that, by the time the final step of the calculation is reached, the memory of …
- change name to `MatrixFunctionViaEig` etc.
- change `decompositions` to `matrixfunctions`
- add default algorithm for `Diagonal` matrices
- add input checks
- add `@test_throws` to catch non-hermitian matrices being given to `MatrixFunctionViaEigh`
- change default exponential algorithm to e.g. `MatrixFunctionViaEig` of the default `eig_alg`
Your PR no longer requires formatting changes. Thank you for your contribution!
The last few commits added the functions …
Could we revisit the names for a second?
There are a few comments that occur twice in the view, which I have reacted to once. The comments that I have taken into account and reacted to with a 'thumbs up' I have not resolved, to make sure you saw they were taken into account. There are, I think, two comments left:
I think I finally managed to work away (some of) my backlog of PRs that I had to review, so this one is among the next on my list. I hope to go through it (and if necessary commit changes myself directly) before the end of the week.
lkdvos
left a comment
I cleaned up part of the implementation, with two major changes:

- `expA` is only allocated if a scalartype change is required
- I refactored/added `map_diagonal(!)` in an attempt to have a specialization hook, thinking ahead a bit about extending this to different types and different matrix functions

I feel like with this change, the specific difference for `exponentiali` feels even smaller.
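As a rough sketch of what such a specialization hook could look like (the name `map_diagonal!` is taken from the comment above, but the signatures and the `Diagonal` specialization here are guesses, not the PR's actual API):

```julia
using LinearAlgebra

# Generic fallback: apply f elementwise to a vector of eigenvalues, in place.
map_diagonal!(f, d::AbstractVector) = map!(f, d, d)

# Possible specialization: for a Diagonal matrix, act on its stored diagonal.
map_diagonal!(f, D::Diagonal) = (map_diagonal!(f, D.diag); D)

d = [0.0, 1.0]
map_diagonal!(exp, d)            # d becomes [exp(0.0), exp(1.0)]
D = Diagonal([0.0, 1.0])
map_diagonal!(exp, D)            # same, via the Diagonal hook
```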
I would still argue that it is a bit strange to default to an imaginary scalar while not supporting a real one; instead, it could be more convenient to have a generic scalar for which you could simply fill in an imaginary value. Thinking about that a bit more, I would even say that the reason we need the `imaginary_evolution` flag in MPSKit for time evolution is precisely because we defaulted to evolving with `exp(im * t * H)` instead of `exp(t * H)`. It is easy to detect real vs complex scalars in a type-stable manner, but less so to detect purely imaginary vs complex.
(I hope I'm explaining this somewhat decently)
Trying to come up with some solutions I think could be better, I can think of some alternative approaches:

- We could introduce a dedicated `ScaledOperator(A, tau)` type, and then (re)use `exponential(ScaledOperator(A, tau), ...)`. This might remove a little bit of overhead, and a default implementation could just instantiate the scaled operator. This avoids the need for `@functiondef 2 ...`.
- We could simply always have a scale factor in the `exponential` function, which by default takes the value `true` or `VectorInterface.One()`, and just correctly handle the complex/real case by checking whether this scalar type is real or complex.
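A minimal sketch of the second option; the name `scaled_exponential`, the `true` default, and the promotion rule are illustrative assumptions, not the PR's API:

```julia
using LinearAlgebra

# Always carry a scale factor; derive the output eltype from the *types*
# of A and tau, so real and complex scales are each handled type-stably.
function scaled_exponential(A::AbstractMatrix, tau::Number = true)
    T = promote_type(eltype(A), typeof(tau))   # complex tau => complex output
    return exp(T.(A .* tau))                   # placeholder: any exp implementation
end

A = [0.0 1.0; -1.0 0.0]
scaled_exponential(A) ≈ exp(A)                 # default scale `true` is a no-op
scaled_exponential(A, 2.0) ≈ exp(2.0 * A)      # real scale keeps a real result
eltype(scaled_exponential(A, im)) <: Complex   # imaginary scale promotes
```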
Some more discussion points/things to consider, some of which I recall briefly bringing up before but might be good to write down here:

- These matrix functions should have the same output types regardless of whether the input is hermitian (`ishermitian`) or not. Instead, most matrix functions distinguish between supporting the entire real line (ℝ → ℝ) or requiring complex extensions to do so (this is of course not relevant for `exp`). Therefore, it might actually be reasonable, instead of splitting `MatrixFunctionViaEig` and `MatrixFunctionViaEigh` into separate types, to simply have a `MatrixFunctionViaEigen` with a runtime flag for hermitianness rather than a compile-time flag (in essence, manually union-splitting with an `if` statement). This could potentially cut down on compile times and the number of required algorithms.
- It might be reasonable to have a global `LinearAlgebraAlgorithm` backend, which functions similarly to `DiagonalAlgorithm` in the sense that it simply dispatches to the respective LinearAlgebra routines. With this, I think we could deprecate most of the GenericLinearAlgebra extension, as this would simply coincide.
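A minimal sketch of the runtime-flag idea (all names here are hypothetical; a real implementation would also need to keep the two branches' output types aligned, as the point above requires):

```julia
using LinearAlgebra

# One algorithm type with a runtime hermitian flag instead of separate
# ViaEig/ViaEigh algorithm types: a manual union-split via `if`.
struct ViaEigen
    hermitian::Bool
end

function matrixfunction(f, A::AbstractMatrix, alg::ViaEigen)
    if alg.hermitian
        d, V = eigen(Hermitian(A))
        return V * Diagonal(f.(d)) * V'   # inv(V) == V' for hermitian input
    else
        d, V = eigen(A)
        return V * Diagonal(f.(d)) / V    # right division by V, i.e. * inv(V)
    end
end

A = [2.0 1.0; 1.0 3.0]
matrixfunction(exp, A, ViaEigen(true)) ≈ exp(A)
```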
```julia
if !ishermitian(A)
    throw(DomainError(A, "Hermitian matrix was expected. Use `project_hermitian` to project onto the nearest hermitian matrix."))
end
```
TODO: do we want to keep this check here, knowing that it will again be checked in the implementation of `eigh_full`?
Indeed, we should remove this test.
```julia
if !ishermitian(A)
    throw(DomainError(A, "Hermitian matrix was expected. Use `project_hermitian` to project onto the nearest hermitian matrix."))
end
```
Same comment here
```julia
using GenericSchur
@testset "exponentiali! for T = $T" for T in GenericFloats
    rng = StableRNG(123)
    m = 2
```
This stood out to me as a very specific choice, but it looks like it is hard to make this converge. Do we know what is going on here?
The `m = 2` choice was only made to make the tests run more quickly locally; I must have forgotten to change it back. If I change it to `m = 54` (as in the other tests), all tests still pass in my case (that is, in my latest version, without yesterday's commits).
I don't get why you say the current code would not work with a real scalar. The argument …

Apart from the fact that I don't see why the changes would make the difference between …

In terms of differentiating between …

If you really feel strongly about any of this, feel free to tailor the implementation to how you see things. The only thing that I really care about is an implementation that works, preferably as generic as possible (again, because I would like to have this in the next TensorKit version). As long as the tests pass, this is all fine by me. I might not be a big fan of the choice of implementation, but naming conventions and interface choices are way less important to me than having either implementation.
The difference is a bit subtle, but as I mentioned before this is about type stability.

```julia
function exponentialr(tau, A)
    expA = tau isa Complex ? complex(A) : A
    return exponentialr!(tau, A, expA) # this computes expA .= exp(A * tau) in whatever way
end

function exponentiali(t, A)
    expA = iszero(real(t)) ? A : complex(A)
    return exponentiali!(t, A, expA) # this computes expA .= exp(A * t * im) in whatever way
end
```

The first implementation is type stable for purely real, purely imaginary, and complex values of `tau`, while the second isn't.
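A runnable distillation of that point: the `scaled_r`/`scaled_i` helpers below are stand-ins for the allocation step only, since the actual `exponentialr!`/`exponentiali!` are not defined in this excerpt.

```julia
# Branching on the *type* of tau is decidable at compile time; branching
# on its *value* is not, so the second function's return type depends on
# runtime data when t is complex.
scaled_r(tau, A) = tau isa Complex ? complex(A) : A   # branch on the type of tau
scaled_i(t, A)   = iszero(real(t)) ? A : complex(A)   # branch on the value of t

A = [1.0 0.0; 0.0 1.0]
scaled_r(2.0, A)   isa Matrix{Float64}      # real tau: real output
scaled_r(2.0im, A) isa Matrix{ComplexF64}   # complex tau: complex output
# For t::ComplexF64 both outcomes are reachable, hence the instability:
scaled_i(2.0im, A)       isa Matrix{Float64}
scaled_i(1.0 + 2.0im, A) isa Matrix{ComplexF64}
```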
The reason we opted for …

For …
I'm afraid that might be difficult: we're trying to get a version released this week, so I don't think integrating and polishing this in time is very realistic. That said, if your main goal is to get something working for your use case in the next TensorKit release, you could already move ahead with something along the following lines:

```julia
function myexp(A::AbstractTensorMap)
    D, V = eig_full(A)
    D.data .= exp.(D.data)
    return V * D * inv(V)
end
```

From my side, I think it's important that we take a bit of time here to come up with a good, consistent interface and to think through allocations and long-term maintenance, especially since this will likely influence all of the matrix functions. I'd really like to avoid having to introduce multiple breaking changes later. But that shouldn't block you from experimenting or using a simpler version for your immediate needs.
This implements the exponential of a matrix for both `BLASFloat`s and `BigFloat`s.

I have named these functions `exponential` and `exponential!`, instead of the usual `exp` and `exp!` from LinearAlgebra, because extending those methods while keeping the current structure using `@algdef` and `@functiondef` results in some naming conflicts. The default for `BLASFloat`s is to use `LinearAlgebra.exp!`. In TensorKit, we can still stick to the `exp` naming convention.