Only store non-zero blocks in BlockMap #84

Draft · jagot wants to merge 2 commits into Jutho:master from jagot:master
Conversation

@jagot commented Feb 3, 2020

I generate quite large block maps with sparse block occupancy, so creating BlockMaps via *cat operations is cumbersome. In addition, I wanted to avoid having to fill out the zero blocks with LinearMap(false*I, n) entries if possible (even though the multiplication overhead may be negligible). I therefore changed the implementation so that BlockMap.rows may have vanishing entries, i.e. it can represent rows without any blocks, added explicit dimensions to the BlockMap, and added a new constructor that takes the non-vanishing blocks together with their block coordinates.

By changing the materialization methods to avoid hvcat (which does not work when vanishing blocks are not stored), Matrix(B) now actually returns a Matrix, even if some of the LinearMaps wrap e.g. Diagonal matrices; hvcat returns a SparseMatrixCSC when concatenating non-dense matrices.
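For reference, this is easy to check (output from a Julia version contemporary with this PR; later Julia versions changed concatenation of structured matrices to return dense arrays):

julia> using LinearAlgebra

julia> D = Diagonal(ones(2));

julia> typeof([D D; D D]) # hvcat of structured matrices goes through sparse concatenation
SparseArrays.SparseMatrixCSC{Float64,Int64}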

I did not add any tests yet, since I wanted to know if you like the idea first.

Example usage:

julia> n = 6 # Block size
6

julia> m = 4 # Number of blocks
4

julia> rows = repeat([n], m)
4-element Array{Int64,1}:
 6
 6
 6
 6

julia> T = Int
Int64

julia> D = Diagonal(3ones(T, n))
6×6 Diagonal{Int64,Array{Int64,1}}:
 3  ⋅  ⋅  ⋅  ⋅  ⋅
 ⋅  3  ⋅  ⋅  ⋅  ⋅
 ⋅  ⋅  3  ⋅  ⋅  ⋅
 ⋅  ⋅  ⋅  3  ⋅  ⋅
 ⋅  ⋅  ⋅  ⋅  3  ⋅
 ⋅  ⋅  ⋅  ⋅  ⋅  3

julia> U = UpperTriangular(reshape(one(T):n^2, n, n))
6×6 UpperTriangular{Int64,Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}}}:
 1  7  13  19  25  31
 ⋅  8  14  20  26  32
 ⋅  ⋅  15  21  27  33
 ⋅  ⋅   ⋅  22  28  34
 ⋅  ⋅   ⋅   ⋅  29  35
 ⋅  ⋅   ⋅   ⋅   ⋅  36

julia> L = LowerTriangular(reshape(one(T):n^2, n, n))
6×6 LowerTriangular{Int64,Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}}}:
 1   ⋅   ⋅   ⋅   ⋅   ⋅
 2   8   ⋅   ⋅   ⋅   ⋅
 3   9  15   ⋅   ⋅   ⋅
 4  10  16  22   ⋅   ⋅
 5  11  17  23  29   ⋅
 6  12  18  24  30  36

julia> maps = [D, L, U, D, D, D]
6-element Array{AbstractArray{Int64,2},1}:
 [3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3]
 [1 0 … 0 0; 2 8 … 0 0; … ; 5 11 … 29 0; 6 12 … 30 36]
 [1 7 … 25 31; 0 8 … 26 32; … ; 0 0 … 29 35; 0 0 … 0 36]
 [3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3]
 [3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3]
 [3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3]

julia> is = [1,1,2,2,2,4]
6-element Array{Int64,1}:
 1
 1
 2
 2
 2
 4

julia> js = [1,2,1,2,4,3]
6-element Array{Int64,1}:
 1
 2
 1
 2
 4
 3

julia> B = BlockMap{Int}(m*n, m*n, (rows,rows), maps, is, js)
4×3-blocked 24×24 BlockMap{Int64}

julia> Matrix(B)
24×24 Array{Int64,2}:
 3  0   0   0   0   0  1   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  3   0   0   0   0  2   8   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   3   0   0   0  3   9  15   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   3   0   0  4  10  16  22   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   3   0  5  11  17  23  29   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   3  6  12  18  24  30  36  0  0  0  0  0  0  0  0  0  0  0  0
 1  7  13  19  25  31  3   0   0   0   0   0  0  0  0  0  0  0  3  0  0  0  0  0
 0  8  14  20  26  32  0   3   0   0   0   0  0  0  0  0  0  0  0  3  0  0  0  0
 0  0  15  21  27  33  0   0   3   0   0   0  0  0  0  0  0  0  0  0  3  0  0  0
 0  0   0  22  28  34  0   0   0   3   0   0  0  0  0  0  0  0  0  0  0  3  0  0
 0  0   0   0  29  35  0   0   0   0   3   0  0  0  0  0  0  0  0  0  0  0  3  0
 0  0   0   0   0  36  0   0   0   0   0   3  0  0  0  0  0  0  0  0  0  0  0  3
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  3  0  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  3  0  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  3  0  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  3  0  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  3  0  0  0  0  0  0  0
 0  0   0   0   0   0  0   0   0   0   0   0  0  0  0  0  0  3  0  0  0  0  0  0

EDIT: I just noticed that the heuristic deciding how many block columns the BlockMap consists of does not really work when no row has entries in all columns: the output above says 4×3-blocked 24×24 BlockMap{Int64}, even though js runs up to 4.

@jagot jagot requested a review from dkarrasch February 3, 2020 10:49
@coveralls commented Feb 3, 2020

Coverage decreased (-8.3%) to 89.728% when pulling ce8e7c5 on jagot:master into cda88dc on Jutho:master.

@codecov bot commented Feb 3, 2020

Codecov Report

Merging #84 into master will decrease coverage by 6.81%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master      #84      +/-   ##
==========================================
- Coverage   97.67%   90.86%   -6.82%     
==========================================
  Files          10       10              
  Lines         732      799      +67     
==========================================
+ Hits          715      726      +11     
- Misses         17       73      +56     
Impacted Files Coverage Δ
src/LinearMaps.jl 98.48% <0.00%> (ø) ⬆️
src/transpose.jl 92.85% <0.00%> (ø) ⬆️
src/uniformscalingmap.jl 97.18% <0.00%> (ø) ⬆️
src/conversion.jl 85.71% <0.00%> (-2.05%) ⬇️
src/wrappedmap.jl 100.00% <0.00%> (ø) ⬆️
src/composition.jl 98.61% <0.00%> (ø) ⬆️
src/blockmap.jl 84.88% <0.00%> (-12.93%) ⬇️
src/kronecker.jl 100.00% <0.00%> (ø) ⬆️
src/linearcombination.jl 66.21% <0.00%> (-33.79%) ⬇️
src/functionmap.jl 98.86% <0.00%> (ø) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@jagot (Author) commented Feb 3, 2020

Also, as a humble suggestion, maybe we could avoid having the type of maps as a type argument to BlockMap:
https://github.com/Jutho/LinearMaps.jl/blob/3386eb9f239a4ae6a691e204dcd6888b799c2825/src/blockmap.jl#L1
The problem is that when maps is a heterogeneous collection of many different kinds of maps, you get ridiculously large error messages once many maps of complicated types are involved.

I would prefer to either have maps::Vector{A}, where A is a type parameter of BlockMap, or to retain maps::As but relax the requirements so that As <: Vector is also allowed.
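Concretely, the relaxed struct could look like this (a sketch only, not the code in this PR; everything except the loosened As parameter stays as it is now):

struct BlockMap{T, As} <: LinearMap{T}
    maps::As   # previously As <: Tuple{Vararg{LinearMap}}; now e.g. a
               # Vector of a concrete LinearMap subtype is also possible
    # remaining fields unchanged
end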

@dkarrasch (Member)

Thanks for your contribution, @jagot! Unfortunately, I'm pretty busy this and the next week, as the semester is closing and the exam session is starting.

I'm not sure about the maps::Vector{A} issue. So far, it has been the policy of this package to make the types of the maps inferable. I guess this contributes significantly to the ultimate speed. For development, you could relax this and use Vectors if that helps you, but I strongly assume that we will want to keep the level of type inference up in the end. If you have linear maps of mixed types, what is the type parameter going to be? Any?

@jagot (Author) commented Feb 3, 2020

No worries; I can continue working with my branch.

Regarding the storage type for maps: I am not suggesting changing the default constructor, which does generate tuples of everything; I am only suggesting relaxing the requirement on the field's type in the struct definition itself. This way, the user can go outside the box if necessary, while the old behaviour is kept. The compiler is able to perform the type inference without the explicit <:Tuple{Vararg{LinearMap}} requirement, so you get the same speed.

My use case is this: I have an integral operator which I represent as a subtype <:LinearMap. Every block in my BlockMap is either this IntegralOperator or a LinearMaps.LinearCombination thereof, and I have on the order of thousands of blocks. It feels a bit weird and cumbersome to have a type that is thousands of arguments long, just because the type of each and every block gets pushed into the tuple. It is possible I could somehow coax the tuple into realizing that it should be NTuple{1000,<:Union{IntegralOperator,LinearMaps.LinearCombination{<:IntegralOperator}}} or some such.

I will play around a bit more.

@dkarrasch (Member)

Regardless of details, I like your generalization, and obviously, you come with a use case. Does mat-vec-multiply work with your (generalized) BlockMaps and our current mul! method?

@jagot (Author) commented Feb 3, 2020

Yes, as far as I can tell: Matrix(B) and mul!(out, B, eye), where eye is an appropriately sized identity matrix, yield the same results. The tests did not seem to break either.

@jagot (Author) commented Feb 3, 2020

For the same reason, I would actually like to relax the type requirement of LinearCombination as well. In the above example, even if I wrap each and every block in a LinearCombination, some blocks will have linear combinations with one term and others with multiple terms; since the number of terms is part of the type, this actually makes the BlockMap more type unstable than simply storing the terms in a Vector{L}, where L<:LinearMap is a concrete type. I.e., leave the number of terms up to the Vector and only fix the eltype.
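To illustrate, a minimal check against the current tuple-based LinearCombination (this should evaluate to false, since the term count is baked into the tuple type):

julia> A = LinearMap(rand(3,3));

julia> typeof(A + A) == typeof(A + A + A) # 2-term vs. 3-term tuples differ
false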

@dkarrasch (Member)

it will actually make the BlockMap more type unstable

I'm afraid I don't follow. IMHO, it doesn't get any more type stable than having all the types in the hierarchy available as type parameters.

Also, it's not going to be Vector{L} where L<:LinearMap, but Vector{LinearMap}.
Consider this:

julia> using LinearMaps, LinearAlgebra
julia> A = LinearMap(rand(3,3));
julia> B = LinearMap(I,3);
julia> C = [A, B]
2-element Array{LinearMap,1}:
 LinearMaps.WrappedMap{Float64,Array{Float64,2}}([0.1334265214134871 0.192274602216711 0.019312426807825966; 0.6276101396807128 0.684224418730355 0.02245764986996379; 0.8164557288364198 0.9256657613490129 0.7130602030248578], false, false, false)
 LinearMaps.UniformScalingMap{Bool}(true, 3)
julia> eltype(C)
LinearMap

at which point all type information about the constituents of C is lost. The slightest inhomogeneity in the types of the elements (even if only in their parameters) will make the Vector choose the joint supertype, which is no longer concrete.

I understand that having tuples of length on the order of thousands may not be helpful for the compiler (I think I found that earlier with LinearCombination of 16 or more LinearMaps), but I believe that LinearMaps.jl is fast because the compiler has maximum (type) knowledge about the entire linear map. I guess I still don't fully understand whether the tuple requirement poses a real computational problem, or whether it's more human inconvenience.

@jagot (Author) commented Feb 4, 2020

Sorry, I was not being very clear. Imagine I have a concrete subtype A<:LinearMap. Now I need to insert instances of it into a BlockMap, where each block will have between 1 and n instances of A, and I have thousands of blocks. Since each block will then necessarily have a different type, i.e. the blocks are not collapsible into a single type so that we could use NTuple, the type argument becomes a Tuple with thousands of entries. I don't know yet whether there is a computational advantage/disadvantage, because I'm trying to find an error, but the error trace is incomprehensible because of the type explosion. Even if I wrapped the blocks where I have only one operator into a LinearCombination of length one, that would still result in a type explosion, since LinearCombination in turn also records the number of operators in its type. So my suggestion is the following:

  • Relax type requirements in both LinearCombination and BlockMap
  • Keep the standard constructors the same, which means you get exactly the same types as you do now, unless you specifically ask for the new behaviour, which would
  • allow you to use e.g. Vector as the container for the maps of both LinearCombination and BlockMap. It is then your responsibility to supply the Vector with homogeneous types to ensure type stability:
julia> L = LowerTriangular(reshape(3*(one(T):n^2), n, n))
6×6 LowerTriangular{Int64,Base.ReshapedArray{Int64,2,StepRange{Int64,Int64},Tuple{}}}:
  3    ⋅    ⋅    ⋅    ⋅    ⋅
  6   24    ⋅    ⋅    ⋅    ⋅
  9   27   45    ⋅    ⋅    ⋅
 12   30   48   66    ⋅    ⋅
 15   33   51   69   87    ⋅
 18   36   54   72   90  108

julia> L2 = LowerTriangular(reshape(2*(one(T):n^2), n, n))
6×6 LowerTriangular{Int64,Base.ReshapedArray{Int64,2,StepRange{Int64,Int64},Tuple{}}}:
  2   ⋅   ⋅   ⋅   ⋅   ⋅
  4  16   ⋅   ⋅   ⋅   ⋅
  6  18  30   ⋅   ⋅   ⋅
  8  20  32  44   ⋅   ⋅
 10  22  34  46  58   ⋅
 12  24  36  48  60  72

julia> [LinearMap(L), LinearMap(L2)]
2-element Array{LinearMaps.WrappedMap{Int64,LowerTriangular{Int64,Base.ReshapedArray{Int64,2,StepRange{Int64,Int64},Tuple{}}}},1}:
 LinearMaps.WrappedMap{Int64,LowerTriangular{Int64,Base.ReshapedArray{Int64,2,StepRange{Int64,Int64},Tuple{}}}}([3 0 … 0 0; 6 24 … 0 0; … ; 15 33 … 87 0; 18 36 … 90 108], false, false, false)
 LinearMaps.WrappedMap{Int64,LowerTriangular{Int64,Base.ReshapedArray{Int64,2,StepRange{Int64,Int64},Tuple{}}}}([2 0 … 0 0; 4 16 … 0 0; … ; 10 22 … 58 0; 12 24 … 60 72], false, false, false)

EDIT: To illustrate: by relaxing the type requirement like so

struct LinearCombination{T, As} <: LinearMap{T}
    # Everything else unchanged
end

we can do this

julia> n = 6
6

julia> T = Int
Int64

julia> D = Diagonal(3ones(T, n))
6×6 Diagonal{Int64,Array{Int64,1}}:
 3  ⋅  ⋅  ⋅  ⋅  ⋅
 ⋅  3  ⋅  ⋅  ⋅  ⋅
 ⋅  ⋅  3  ⋅  ⋅  ⋅
 ⋅  ⋅  ⋅  3  ⋅  ⋅
 ⋅  ⋅  ⋅  ⋅  3  ⋅
 ⋅  ⋅  ⋅  ⋅  ⋅  3

julia> L = LowerTriangular(reshape(one(T):n^2, n, n))
6×6 LowerTriangular{Int64,Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}}}:
 1   ⋅   ⋅   ⋅   ⋅   ⋅
 2   8   ⋅   ⋅   ⋅   ⋅
 3   9  15   ⋅   ⋅   ⋅
 4  10  16  22   ⋅   ⋅
 5  11  17  23  29   ⋅
 6  12  18  24  30  36

julia> typeof(LinearMap(D) + LinearMap(L)) # Yields same type as before
LinearMaps.LinearCombination{Int64,Tuple{LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}},LinearMaps.WrappedMap{Int64,LowerTriangular{Int64,Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}}}}}}

julia> typeof(LinearMaps.LinearCombination{Int}([LinearMap(D)])) # Yields new type, but still type-stable
LinearMaps.LinearCombination{Int64,Array{LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}},1}}

julia> typeof(LinearMaps.LinearCombination{Int}([LinearMap(D),LinearMap(D)])) # Same type as one-term case
LinearMaps.LinearCombination{Int64,Array{LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}},1}}

@Jutho (Collaborator) commented Feb 4, 2020

Hi @jagot and @dkarrasch. I've quickly read through this thread, so I might be misinterpreting some things, but here are some suggestions:

  • Regarding zero blocks: we could have a special zero linear map type, not just UniformScalingMap with lambda=0, since it needs to be able to be rectangular (a minimal sketch follows after this list).

  • Regarding the BlockMap type and other types like LinearCombination: if the mere issue is the length of error messages, we can define custom show methods for those types (the actual types, not the instances), which don't print all the type parameters. If there is also a performance problem with these long tuple types, then indeed loosening the As type requirement looks like a good solution, where the default behaviour still creates tuples (as I assume this is mostly used with a small number of blocks).
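A minimal sketch of such a zero map, assuming that implementing size and mul! suffices for the LinearMap interface (ZeroMap is a placeholder name, not a type in the package):

using LinearMaps, LinearAlgebra

struct ZeroMap{T} <: LinearMap{T}
    M::Int   # number of rows; may differ from N, i.e. the map can be rectangular
    N::Int   # number of columns
end

Base.size(Z::ZeroMap) = (Z.M, Z.N)

function LinearAlgebra.mul!(y::AbstractVector, Z::ZeroMap, x::AbstractVector)
    length(x) == Z.N && length(y) == Z.M || throw(DimensionMismatch())
    fill!(y, zero(eltype(y)))   # the action is identically zero, regardless of x
end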

@jagot (Author) commented Feb 4, 2020

Thanks @Jutho for your input!

I went ahead and pushed the type-requirement relaxation, and now I can do the following:

julia> n = 6
6

julia> T = Int
Int64

julia> D = Diagonal(3ones(T, n))
6×6 Diagonal{Int64,Array{Int64,1}}:
 3  ⋅  ⋅  ⋅  ⋅  ⋅
 ⋅  3  ⋅  ⋅  ⋅  ⋅
 ⋅  ⋅  3  ⋅  ⋅  ⋅
 ⋅  ⋅  ⋅  3  ⋅  ⋅
 ⋅  ⋅  ⋅  ⋅  3  ⋅
 ⋅  ⋅  ⋅  ⋅  ⋅  3

julia> A = LinearMaps.LinearCombination{Int}(repeat([LinearMap(D)],inner=40))
LinearMaps.LinearCombination((LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), …, LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true), LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}}([3 0 … 0 0; 0 3 … 0 0; … ; 0 0 … 3 0; 0 0 … 0 3], true, true, true)))

julia> typeof(A)
LinearMaps.LinearCombination{Int64,Array{LinearMaps.WrappedMap{Int64,Diagonal{Int64,Array{Int64,1}}},1}}

julia> eye = Matrix{Int}(I, n, n)
6×6 Array{Int64,2}:
 1  0  0  0  0  0
 0  1  0  0  0  0
 0  0  1  0  0  0
 0  0  0  1  0  0
 0  0  0  0  1  0
 0  0  0  0  0  1

julia> out = similar(eye)
6×6 Array{Int64,2}:
  1   6  12  18  26          32
  2   7  13  19  27          33
 35   8  14  20  28          34
  3   9  15  23  29          21
  4  10  16  24  30          22
  5  11  17  25  31  4421842032

julia> @benchmark mul!(out, A, eye)

BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     8.125 μs (0.00% GC)
  median time:      8.804 μs (0.00% GC)
  mean time:        9.611 μs (0.00% GC)
  maximum time:     46.923 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     3

i.e. the storage is fully type-resolved and there is no allocation (provided, of course, that the multiplications of the individual maps are non-allocating).
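Both properties are easy to double-check in the same session:

julia> isconcretetype(eltype(A.maps))  # the Vector storage has a concrete eltype
true

julia> @allocated mul!(out, A, eye)    # after warm-up, the multiplication allocates nothing
0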

Regarding a special ZeroMap, I guess it could be useful to have, but with this PR it is at least not needed for BlockMap.

@dkarrasch (Member)

@jagot I had another look at your PR (finally!). I think this is really a great extension. So, just to clarify for myself: this addresses the case where the LinearMaps are arranged in a "lattice", right? With possible zero blocks. In some sense, this shares a lot with #65, like having LinearMaps "floating around" while only specifying the origin and target ranges. The challenge in #65 was that if you have one block surrounded by "zero blocks", then multiplying a vector needs to set some indices in y to zero. This issue does not come up if we assume the block occupancy structure to be such that the block map is surjective. Is this accounted for here? (At least I didn't see it.)

@jagot (Author) commented Mar 26, 2020

You're right, that's a case I did not think of. That should be easy enough to add.

I myself had to abandon this approach since it was too slow for my purposes, so I rolled my own specialized block multiplier, which I could also efficiently multithread. I'll explain the problem and how I solved it, and maybe we can generalize the idea so that it finds its way into LinearMaps.jl at some point.

A big part of my calculations is computing the action on a vector of the exponential of the exchange interaction between electrons, i.e. exp(K)*v, where K is the exchange interaction. Computing just the action K*v is itself quite expensive, so right now I use a Runge–Kutta 4 propagator to approximate the exponential, which requires evaluating K on four test vectors per time step.

K is a large blocked operator (square and Hermitian), where each block forms an integral operator: k x = |v⟩⟨u|g|x⟩, i.e. the vector x goes into the inner product, and g is the kernel of the operator. To actually compute this, one solves the Poisson problem for the joint density u'x, and the resultant potential is used as a diagonal matrix applied to v.

Now it turns out that a lot of the Poisson problems are actually used more than once, so they can be solved once (and in parallel). The resultant potentials applied to the various vs also appear multiple times, so I store them as a matrix V, with each column being one potential–v combination. Finally, I take linear combinations of these columns, one for each block in the output vector, storing the expansion coefficients in a sparse matrix (for which purpose I created https://github.com/jagot/ThreadedSparseArrays.jl, to have fast multiplication in this step too).
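Schematically, the action of a single block looks like this (all names are hypothetical placeholders; poissonsolve stands in for whatever Poisson solver is used):

function apply_exchange_block!(y::AbstractVector, poissonsolve,
                               u::AbstractVector, v::AbstractVector,
                               x::AbstractVector)
    ρ = conj.(u) .* x    # joint density u'x, i.e. x goes into the inner product
    V = poissonsolve(ρ)  # solve the Poisson problem for the potential
    y .= V .* v          # apply the potential as a diagonal matrix to v
    return y
end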

Sorry for the long-winded explanation. Do you think providing for these kinds of blocked operators with a lot of data reuse has a place in LinearMaps.jl, or is it too specialized? I must concede that it is not immediately obvious to me how to generalize this easily.

@dkarrasch (Member)

Thanks, @jagot, for the update. I'm not sure I understand every detail, but I'm sure you're well familiar with what the community has to offer in that direction (like DiffEq stuff, and @Jutho's KrylovKit.jl, for instance).

For the time being, I'd like to extract some parts of this PR, if I may (and make you a co-author of the commit, of course). I particularly like the show implementations; I have actually long thought about filing an issue for interested contributors.
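(For illustration only, a compact show for the type, not the instance, along the lines @Jutho suggested above could look like

Base.show(io::IO, ::Type{<:LinearMaps.BlockMap{T}}) where {T} =
    print(io, "BlockMap{", T, ",…}")

which hides the potentially huge As parameter. This is a hypothetical sketch, not the PR's actual implementation.)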

@oschulz (Contributor) commented Feb 17, 2022

@jagot, I'll pick up on this PR, if that's OK with you?

@jagot (Author) commented Feb 17, 2022

By all means 👍
