I am getting a Pardiso crash if I use JuMP at the same time, but it seems to depend on the order of package loading. An example is below. If I do `using JuMP` before `using Pardiso`, I get a crash, but if I reverse the order there is no crash. This is all from a fresh Julia session, i.e. I haven't used JuMP for anything, just loaded it.
```julia
using JuMP     # <---- loading JuMP here causes Pardiso to crash
using Pardiso
# using JuMP   # <---- loading JuMP here instead does not
using SparseArrays, LinearAlgebra  # for sparse() and I()

n = 4
A = sparse(I(n) * 1.0)
b = ones(n)

# solve with Pardiso
ps = MKLPardisoSolver()
set_msglvl!(ps, Pardiso.MESSAGE_LEVEL_ON)
Pardiso.solve(ps, A, b)
```
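For completeness, this is the load order that does not crash for me (the same script, just with `Pardiso` loaded before `JuMP`):

```julia
using Pardiso
using JuMP     # loaded after Pardiso: no crash
using SparseArrays, LinearAlgebra

n = 4
A = sparse(I(n) * 1.0)
b = ones(n)

ps = MKLPardisoSolver()
set_msglvl!(ps, Pardiso.MESSAGE_LEVEL_ON)
Pardiso.solve(ps, A, b)
```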
The Pardiso output in the case of a crash is something like this:
```
*** error PARDISO: reordering, symbolic factorization

=== PARDISO: solving a symmetric positive definite system ===
1-based array indexing is turned ON
PARDISO double precision computation is turned ON
METIS algorithm at reorder step is turned ON

Summary: ( starting phase is reordering, ending phase is solution )
================

Times:
======
Time spent in calculations of symmetric matrix portrait (fulladj): 0.001230 s
Time spent in reordering of the initial matrix (reorder)         : 0.000004 s
Time spent in symbolic factorization (symbfct)                   : 0.001095 s
Time spent in allocation of internal data structures (malloc)    : 0.023739 s
Time spent in additional calculations                            : 6.144252 s
Total time spent                                                 : 6.170320 s

Statistics:
===========
Parallel Direct Factorization is running on 6 OpenMP

< Linear system Ax = b >
    number of equations:          4
    number of non-zeros in A:     4
    number of non-zeros in A (%): 25.000000
    number of right-hand sides:   1

< Factors L and U >
    number of columns for each panel:      128
    number of independent subgraphs:       0
< Preprocessing with state of the art partitioning metis >
    number of supernodes:                  4
    size of largest supernode:             1
    number of non-zeros in L:              2817782047
    number of non-zeros in U:              1
    number of non-zeros in L+U:            2817782048
    gflop for the numerical factorization: 0.000000

ERROR: LoadError: Reordering problem.
Stacktrace:
 [1] check_error(ps::MKLPardisoSolver, err::Int32)
   @ Pardiso ~/.julia/packages/Pardiso/3uj3F/src/mkl_pardiso.jl:80
 [2] ccall_pardiso(ps::MKLPardisoSolver, N::Int64, nzval::Vector{Float64}, colptr::Vector{Int64}, rowval::Vector{Int64}, NRHS::Int64, B::Vector{Float64}, X::Vector{Float64})
   @ Pardiso ~/.julia/packages/Pardiso/3uj3F/src/mkl_pardiso.jl:73
 [3] pardiso(ps::MKLPardisoSolver, X::Vector{Float64}, A::SparseMatrixCSC{Float64, Int64}, B::Vector{Float64})
   @ Pardiso ~/.julia/packages/Pardiso/3uj3F/src/Pardiso.jl:346
 [4] solve!(ps::MKLPardisoSolver, X::Vector{Float64}, A::SparseMatrixCSC{Float64, Int64}, B::Vector{Float64}, T::Symbol)
   @ Pardiso ~/.julia/packages/Pardiso/3uj3F/src/Pardiso.jl:260
 [5] solve
   @ ~/.julia/packages/Pardiso/3uj3F/src/Pardiso.jl:222 [inlined]
 [6] solve(ps::MKLPardisoSolver, A::SparseMatrixCSC{Float64, Int64}, B::Vector{Float64})
   @ Pardiso ~/.julia/packages/Pardiso/3uj3F/src/Pardiso.jl:221
```
Note the huge number of non-zeros in `L`. In this example it reports "Reordering problem", but I also see "out of memory" errors on occasion when doing something similar. It seems like loading JuMP first somehow causes a memory leak in Pardiso, but I don't understand how that is possible.
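For scale, here is a minimal sanity check (my own sketch, not part of the Pardiso output) that factorizes the same trivial matrix with Julia's built-in sparse LU; for the 4x4 identity, the factor `L` should hold only `n` nonzeros:

```julia
using SparseArrays, LinearAlgebra

n = 4
A = sparse(I(n) * 1.0)

# UMFPACK LU via SparseArrays; for the identity, L is just the unit diagonal.
F = lu(A)
@show nnz(F.L)  # prints 4 -- nowhere near the 2817782047 that Pardiso reports
```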