Thank you for your work on this project. I noticed the following when trying this out:

It does not appear to cope with schemas other than public. I tried loading your test data into a schema called anonymise and the anonymizer does not change the row contents, e.g.:

```yaml
anonymise.customer:
  raw: [language, currency]
  pk: customer_id
anonymise.customer_address:
  raw: [country, customer_id]
  custom_rules:
    address_line: aggregate_length
```
Even when I manually drop (cascade) or rename the customer & customer_address tables before running the anonymizer, the rows look the same. Looking at anonymize.py I see hard-coded references to the public schema instead of a parameter value. It should not be assumed that all databases always use the public schema, or indeed that they have one at all.
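As a sketch of the kind of change I mean (the helper name and the psycopg2 usage below are my own illustration, not the project's actual API), the schema could be taken as a parameter and safely quoted rather than baked into the SQL:

```python
# A minimal sketch, assuming psycopg2 is the driver; build_update is a
# hypothetical helper, not code from anonymize.py. The point is that the
# schema is a parameter, safely quoted, instead of the literal "public".
from psycopg2 import sql

def build_update(schema, table, column):
    """Build an UPDATE that schema-qualifies the table; the %s placeholder
    is left for execute() to fill with the anonymised value."""
    return sql.SQL("UPDATE {}.{} SET {} = %s").format(
        sql.Identifier(schema),
        sql.Identifier(table),
        sql.Identifier(column),
    )

# e.g. cur.execute(build_update("anonymise", "customer", "language"), (value,))
```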
In anonymize.py I also see code that anonymizes time & date values and inet values with hard-coded constants. It might be better to substitute random offsets for the time & date values and to randomise the inet addresses. If the tables are being randomised before being used for application testing, fixed values are unhelpful for verifying correct processing.
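For example (purely a sketch of the idea; these helper names and the date range are my own assumptions, not anything in anonymize.py), random replacements could be generated like this:

```python
# A minimal sketch of randomised replacements for fixed constants; the
# helper names and the 2000-2030 range are assumptions for illustration.
import random
from datetime import datetime, timedelta
from ipaddress import IPv4Address

def random_timestamp(start=datetime(2000, 1, 1), end=datetime(2030, 1, 1)):
    """Return a random timestamp in [start, end) instead of a constant."""
    span = int((end - start).total_seconds())
    return start + timedelta(seconds=random.randrange(span))

def random_inet():
    """Return a random IPv4 address instead of a constant."""
    return str(IPv4Address(random.getrandbits(32)))
```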
The default YAML behaviour is to anonymise everything except the columns listed as raw. With hundreds of tables, most of which do not need anonymising, this means every table and column has to be specified as raw. That is why I moved the tables to another schema for anonymising, expecting to move them back after processing.
I just created a small lib because I had the same issues (i.e. schema is not public, uppercase column names, no need to allow all columns one by one, etc.): https://github.com/rap2hpoutre/pg-anonymizer/. I couldn't find a lib that correctly anonymizes my database, so I decided to create one. Not sure it will fix your issues, though. Since it's a late answer, I guess you found a way to fix your issue 😅, still maybe it could help other people!