I'm implementing soft deletion in Prisma ORM using the `prisma-extension-soft-delete` extension. My schema includes a `User` model with an `email` field marked as unique. The issue arises when a user registers, deletes their account (soft delete), and then attempts to re-register with the same email: Prisma throws a unique constraint violation error.
In other words, the current soft deletion implementation doesn't account for reusing unique fields (like `email`) after a soft delete, so entering the same unique value again violates the constraint.
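For reference, this is roughly the failing flow. The extension wiring is omitted and `deletedAt` is just an example soft-delete marker field; the point is that the soft-deleted row keeps its original email, so the second create hits the unique index:

```ts
// Minimal sketch of the failing flow, using plain Prisma Client calls to stand in
// for what the soft-delete extension does. The `deletedAt` field is an assumption.
import { PrismaClient, Prisma } from "@prisma/client";

const prisma = new PrismaClient();

async function reproduce() {
  // 1. Initial registration.
  await prisma.user.create({ data: { email: "alice@example.com" } });

  // 2. "Delete" the account: the soft delete is really an update,
  //    so the row (and its unique email) stays in the table.
  await prisma.user.update({
    where: { email: "alice@example.com" },
    data: { deletedAt: new Date() },
  });

  // 3. Re-registering with the same email now violates the unique index on
  //    `email`; Prisma reports it as a known request error with code P2002
  //    ("Unique constraint failed").
  try {
    await prisma.user.create({ data: { email: "alice@example.com" } });
  } catch (e) {
    if (e instanceof Prisma.PrismaClientKnownRequestError && e.code === "P2002") {
      console.error("Unique constraint violation on re-registration:", e.meta);
    } else {
      throw e;
    }
  }
}

reproduce().finally(() => prisma.$disconnect());
```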
Would it be possible to consider a workaround where a suffix is appended to unique fields upon deletion? For example, rewriting the `email` field to something like `email_[timestamp]` during the soft delete process (a rough sketch of what I mean is included below). This would allow the original unique value to be reused for new records. Or maybe I've overlooked something and this is already solvable with the current implementation? Thanks for your time!
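Here's the kind of helper I have in mind, written against plain Prisma Client rather than the extension itself; the `deletedAt` field and the `softDeleteUser` name are just illustrative, not part of the extension's API:

```ts
// Hypothetical helper illustrating the proposed workaround; not part of
// prisma-extension-soft-delete. Assumes a nullable `deletedAt` column on User.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function softDeleteUser(email: string) {
  const timestamp = Date.now();

  // Mark the row as deleted and rewrite the unique email in the same update,
  // e.g. "alice@example.com" -> "alice@example.com_1717171717171".
  return prisma.user.update({
    where: { email },
    data: {
      deletedAt: new Date(timestamp),
      email: `${email}_${timestamp}`,
    },
  });
}

// After this, a new record with the original email no longer collides:
async function example() {
  await softDeleteUser("alice@example.com");
  await prisma.user.create({ data: { email: "alice@example.com" } }); // succeeds
}
```

The obvious trade-off is that restoring a soft-deleted account would need an extra step to strip the suffix and recover the original email.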