Thanks for sharing a great project. I'm not sure how actively this repository is maintained, but I have a question.
I am running the precompiled CRFSharp v1.2.0.0 on Ubuntu 16.04 with Mono 5.2.0.224.
I love the performance increase in encoding (training), but CPU resources seem underexploited when I decode files. I run with 30 cores, and most cores sit below 10% utilization for half of the time and at 30-50% for the rest. (During encoding I see 100% usage across all cores.) I would like to know whether this is inherent to CRFSharp's decoding algorithm or a configuration issue on my side.
One observation: there is surprisingly little difference in decoding time between a handful of samples and 1000 samples. I measured roughly 4 minutes for 5 sentences (each sentence consists of 150 tags), 5 minutes for 100 sentences, and 10 minutes for 1000 sentences. It looks like most of the time goes into loading the model. I currently use nbest=10, but I previously saw little difference between nbest=1 and nbest=10.
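To sanity-check the "large fixed delay" hypothesis, here is a purely illustrative least-squares fit of the three timings reported above (time = a·n + b for n sentences). It suggests a fixed overhead of roughly 4 minutes, consistent with model loading dominating for small inputs:

```python
# Back-of-the-envelope fit of the reported decode timings:
# 4 min / 5 sentences, 5 min / 100, 10 min / 1000.
# Model: time = a * n_sentences + b (ordinary least squares).
n = [5, 100, 1000]
t = [4.0, 5.0, 10.0]  # minutes
N = len(n)
sx, sy = sum(n), sum(t)
sxx = sum(x * x for x in n)
sxy = sum(x * y for x, y in zip(n, t))
a = (N * sxy - sx * sy) / (N * sxx - sx * sx)  # per-sentence slope (min)
b = (sy - a * sx) / N                          # fixed overhead (min)
print(f"per-sentence cost ~= {a * 60:.2f} s, fixed overhead ~= {b:.1f} min")
```

With these numbers the fit gives about 0.35 s per sentence and a fixed overhead of about 4.2 minutes, so at small batch sizes nearly all of the wall-clock time would be the one-time startup cost rather than decoding itself.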
Let me know if there is anything I can check to help diagnose this.
Thank you!