Hi, I've found that adding items to a JSON schema (e.g. increasing the length of an enum) increases the time roughly linearly. However, adding unions of different types of nested objects to the schema seems to increase the complexity supralinearly. For example, I can add an enum with 3000 items for a fairly trivial slowdown, but a union of 3000 classes (each with just one string field) leads to an intractable slowdown.
I see the comment
# This is a bit of a performance hit, as it means get_allowed_characters() is called twice.
but this shouldn't cause a supralinear change, right? I'm looking into this myself, but any support would be appreciated.
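For reference, here is a minimal sketch of the two schema shapes being compared. This is stdlib-only and illustrative — the exact schemas the library receives (e.g. as emitted by pydantic) may differ in detail:

```python
import json

def enum_schema(n):
    # Flat enum with n string values: observed to scale roughly linearly.
    return {"enum": [f"value_{i}" for i in range(n)]}

def union_schema(n):
    # anyOf over n distinct object classes, each with a single string field:
    # observed to slow down supralinearly.
    return {
        "anyOf": [
            {
                "type": "object",
                "properties": {f"field_{i}": {"type": "string"}},
                "required": [f"field_{i}"],
            }
            for i in range(n)
        ]
    }

# Both schemas grow linearly in size with n; only the union case
# becomes intractable to enforce.
print(json.dumps(union_schema(2), indent=2))
```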
(Also this library is amazing, thank you!!)
It's probably due to the recursive nature of computing the allowed tokens. This would require in-depth research to understand how to improve it. I don't think I'll get to it in the coming days; PRs welcome :)
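A toy model of why this could be supralinear (not the library's actual code — just an assumption about where the cost goes): if every `anyOf` branch is advanced independently for each generated character, per-character work is O(n) in the number of branches, whereas a flat enum can be collapsed into a single prefix trie with O(1) work per character:

```python
def naive_matches(candidates, text):
    """Advance character by character, rescanning every live branch each step.

    Models treating each anyOf alternative as an independent matcher:
    total work grows with n_branches * prefix_length.
    """
    work = 0
    live = candidates
    for i, ch in enumerate(text):
        work += len(live)  # one check per still-viable branch
        live = [c for c in live if len(c) > i and c[i] == ch]
    return work

def build_trie(candidates):
    """Merge all candidates into one character trie (how an enum could be handled)."""
    root = {}
    for cand in candidates:
        node = root
        for ch in cand:
            node = node.setdefault(ch, {})
    return root

def trie_matches(root, text):
    """Single dict lookup per character, independent of candidate count."""
    work = 0
    node = root
    for ch in text:
        work += 1
        node = node.get(ch)
        if node is None:
            break
    return work

# Branches share the long prefix '{"field_', so the naive scan pays
# for every branch on every shared character.
cands = ['{"field_%d": "x"}' % i for i in range(3000)]
print(naive_matches(cands, cands[1500]))   # grows with len(cands)
print(trie_matches(build_trie(cands), cands[1500]))  # grows only with text length
```

If the slowdown does come from this kind of per-branch rescanning, merging structurally similar union members into a shared prefix structure (as the trie does here) would be one direction for a fix.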