Replies: 3 comments 19 replies
-
Hey ho, well, we have different problems here. One is that Python doesn't perform well with XML (which is one part of the poor performance). Since a new node holds different reference types to different existing nodes, and a faulty node is always possible (caused by wrong programming or a faulty XML file), you have to check for that, and touch every single node to be sure. I couldn't figure out a more performant way. What does exist (but is hidden more in the background, and at the moment I am not a fan of it) is to pickle the whole address space once it contains everything you need, then dump or load it. But you would still need to run the import at least once, so that the references end up correct.
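The pickling idea above can be sketched roughly as follows. This is a hypothetical illustration, not the library's actual mechanism: `build_address_space` stands in for the slow XML import, and the address space is modeled as a plain dict so the example is self-contained.

```python
import os
import pickle

def build_address_space():
    # Placeholder for the expensive XML import step; in the real library this
    # would parse the nodeset file and resolve all references.
    return {"i=85": {"refs": ["i=86"]}, "i=86": {"refs": []}}

def load_or_build(cache_path="aspace.pickle"):
    """Load the address space from a pickle cache, building it once if absent."""
    if os.path.exists(cache_path):
        # Fast path: reuse the previously pickled address space.
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    # Slow path: the full import still has to run at least once.
    aspace = build_address_space()
    with open(cache_path, "wb") as f:
        pickle.dump(aspace, f)
    return aspace
```

Note the caveat from the comment: the cache is only valid as long as the source XML has not changed, so a real implementation would also need some invalidation scheme (e.g. comparing file modification times or hashes).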
-
Another option would be to split this into a separate background step and add the missing reverse references there, but I don't quite understand whether that is possible?
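One way the suggestion could work is a deferred fixup pass: import all nodes first, then add the missing reverse references in a single pass that runs off the main thread. This is a speculative sketch under assumed data structures (`nodes` maps a node id to its set of forward targets), not the library's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def add_reverse_references(nodes):
    """One pass over all forward references, collecting the reverse direction."""
    reverse = {nid: set() for nid in nodes}
    for source, targets in nodes.items():
        for target in targets:
            if target in reverse:
                # Record source as a reverse reference of target.
                reverse[target].add(source)
    return reverse

def fixup_in_background(nodes):
    """Run the reverse-reference pass in a worker thread after the import."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(add_reverse_references, nodes).result()
```

Whether this is safe depends on when clients may read the address space: until the background pass finishes, browsing in the inverse direction would return incomplete results.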
-
Is import speed really an issue? I would prioritize code readability. That XML code is horrible.
-
Hi everybody! :)
When importing a large number of nodes (from 1,000 to 10,000), I noticed that the import takes a very long time. I started profiling, and it turned out there is a method with a complexity of at least O(n^3).
It was added in this commit.
Import of 1,000 nodes: 0:00:52
Import of 10,000 nodes: 3:46:00
Profiling showed that most of the time was spent in the _add_missing_reverse_references() method in xmlimporter.py.
Is it possible to search for and add the missing reverse references in a different way?
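If the cost comes from repeatedly scanning all nodes to check whether a reverse reference already exists, one standard fix is to index the existing references in a set, which makes each "does the reverse exist?" check O(1) and the whole pass roughly linear in the number of references. This is a hypothetical sketch over simplified (source, target) pairs, not the library's actual `_add_missing_reverse_references()` implementation.

```python
def missing_reverse_references(references):
    """references: list of (source, target) forward references.

    Returns the (target, source) pairs whose reverse reference is not
    yet present, using a set for O(1) membership tests instead of
    rescanning all nodes for every reference.
    """
    existing = set(references)
    missing = []
    for source, target in references:       # iterate the list to keep order
        if (target, source) not in existing:
            missing.append((target, source))
    return missing
```

A real version would also carry the reference type and the inverse flag in the key, but the complexity argument is the same: one pass to build the index, one pass to find the gaps.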