
Is it possible to parallelize class_group(K, grh=false) #1248

Closed
alexjbest opened this issue Oct 17, 2023 · 8 comments

Comments

@alexjbest
Contributor

I'd like to compute some class groups without GRH. The built-in progress estimator says around 20h; can this be parallelized easily?

Also, I was quite surprised that class_group assumes GRH by default, with no indication of this in the docs that I could see.
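
For reference, a minimal sketch of the kind of computation in question (the example polynomial is illustrative, and the exact spelling of the GRH keyword may differ between versions; see the class_group docstring):

  using Hecke

  # An illustrative number field and its maximal order.
  Qx, x = polynomial_ring(QQ, "x")
  K, a = number_field(x^3 - x - 1, "a")
  OK = maximal_order(K)

  # Class group computation; by default the result is only correct under GRH.
  C, mC = class_group(OK)

  # The unconditional variant from the issue title (keyword spelling may vary):
  # C, mC = class_group(K, grh = false)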

@thofma
Owner

thofma commented Oct 17, 2023

Where would you have liked to see it mentioned?

@alexjbest
Contributor Author

In

help?> class_group(OK)
  class_group(O::NfOrd; bound = -1, method = 3, redo = false, large = 1000) -> GrpAbFinGen, Map

  Returns a group A and a map f from A to the set of ideals of O. The inverse of the map is the projection onto the group of ideals modulo the group of
  principal ideals. redo allows to trigger a re-computation, thus avoiding the cache. bound, when given, is the bound for the factor base.

would be good, I think! (I assume that would then show up in the online docs.)

@thofma
Owner

thofma commented Oct 17, 2023

OK. I am on it!

I will check whether we can parallelize it easily.

@thofma
Owner

thofma commented Oct 27, 2023

There are some improvements in the latest version (]up should give version 0.22.4). If you start with julia -j$n, the class group proof will use $n threads. I am not super happy with it, but it might do the trick. One can still enable verbose printing as before. Let me know if it crashes. Try to copy the backtrace if it does.
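
For concreteness, a sketch of the threaded setup described above (the flag spelling is clarified in the next comment; Threads.nthreads() only confirms what Julia actually received):

  # Start Julia with n threads from the shell, e.g.
  #   julia -t 8
  using Hecke

  Threads.nthreads()   # should report the requested number of threads

  # Then call class_group as before; per the comment above, it is the
  # class group proof that makes use of the threads.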

thofma closed this as completed Oct 27, 2023
@alexjbest
Contributor Author

Hi Tommy, this is great, thank you! I had to use julia -t$n, I think, but that seemed to work, at least at the start.
After I left it running over the weekend, it appears 6 of my 8 threads have died somehow and are not restarted. I wasn't able to catch the reason, but it seems they failed after 10h on average. The remaining two are still going strong 60h later. I'll start a log and see if I can work out why the others crashed.

@alexjbest
Contributor Author

It also appears that they continue running if I interrupt the original!

@alexjbest
Contributor Author

I saw

[1413135] signal (11.1): Segmentation fault
in expression starting at none:0
fmpz_addmul_ui at /workspace/srcdir/flint2/fmpz/addmul_ui.c:23
unknown function (ip: 0x7ffffffe)
Allocations: 320061411284 (Pool: 320061402222; Big: 9062); GC: 504759
Segmentation fault (core dumped)

when I ended my Julia process. I'll restart it now and try to keep a closer eye on it.

@thofma
Owner

thofma commented Oct 31, 2023

Hm, it seems that it is still not thread-safe. I will try to see if I can reproduce it.
