Commit
* Initial commit
* recon enhancement done to deal with different columns in source and target (#1216)
* Initial commit
* recon enhancement done to deal with different columns in source and target
---------
Co-authored-by: Guenia <[email protected]>
* adjust Silver Job Runs module configuration (#1256)
enable auto-optimized shuffle for module 2011, originally implemented in commit [d751d5f](d751d5f)
* append null columns from cluster snapshot for cluster_spec_silver (#1239)
* Initial commit
* append null columns from cluster snapshot for cluster_spec_silver
* append null columns from cluster snapshot for cluster_spec_silver
---------
Co-authored-by: Guenia <[email protected]>
* 1201 collect all event logs on first run (#1255)
* Initial commit
* cluster event bronze will take all the data from the API for the first run
* Update BronzeTransforms.scala
adjust whitespace around `landClusterEvents()`
---------
Co-authored-by: Guenia <[email protected]>
Co-authored-by: Neil Best <[email protected]>
* Redefine views so that they are created from tables, not locations (#1241)
* Initial commit
* Change publish() function to incorporate views from ETL Tables instead of paths
* Handle view creation in case the table does not exist
---------
Co-authored-by: Guenia <[email protected]>
* 1030 pipeline validation framework (#1071)
* Initial commit
* 19-Oct-23: Added Validation Framework
* 19-Oct-23: Customize the message for customer
* 19-Oct-23: Customize the message for customer
* 26-Oct-23: Added OverwatchID filter in the table
* 26-Oct-23: Change for Coding Best Practices
* Added Function Description for validateColumnBetweenMultipleTable
* Added Pattern Matching in Validation
* Convert if-else in validateRuleAndUpdateStatus to case statement as per comment
* Initial commit
* traceability implemented (#1102)
* traceability implemented
* code review implemented
* missed code implemented (#1105)
* Initial commit
* traceability implemented (#1102)
* traceability implemented
* code review implemented
* missed code implemented
* missed code implemented
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
* Added proper exception for Spark Stream Gold if progress c… (#1085)
* Initial commit
* 09-Nov-23: Added proper exception for Spark Stream Gold if progress column contains only null in SparkEvents_Bronze
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* Gracefully Handle Exception for NotebookCommands_Gold (#1095)
* Initial commit
* Gracefully Handle Exception for NotebookCommands_Gold
* Convert the check in buildNotebookCommandsFact to a single or clause
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* code missed in merge (#1120)
* Fix Helper Method to Instantiate Remote Workspaces (#1110)
* Initial commit
* Change getRemoteWorkspaceByPath and getWorkspaceByDatabase to take in RemoteWorkspace
* Remove Unnecessary println Statements
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
* Ensure we test the write into a partitioned storage_prefix (#1088)
* Initial commit
* Ensure we test the write into a partitioned storage_prefix
* silver warehouse spec fix (#1121)
* added missed copy-pasta (#1129)
* Exclude cluster logs in S3 root bucket (#1118)
* Exclude cluster logs in S3 root bucket
* Omit cluster log paths pointing to s3a as well
* implemented recon (#1116)
* implemented recon
* docs added
* file path change
* review comments implemented
* Added ShuffleFactor to NotebookCommands (#1124)
Co-authored-by: Sourav Banerjee <[email protected]>
* disabled traceability (#1130)
* Added JobRun_Silver in buildClusterStateFact for Cluster E… (#1083)
* Initial commit
* 08-Nov-23: Added JobRun_Silver in buildClusterStateFact for Cluster End Time Imputation
* Impute Terminating Events in CLSF from JR_Silver
* Impute Terminating Events in CLSD
* Impute Terminating Events in CLSD
* Change CLSF to original 0730 version
* Change CLSF to original 0730 version
* Added cluster_spec in CLSD to get job Cluster only
* Make the variable names in buildClusterStateDetail more descriptive
* Make the variable names in buildClusterStateDetail more descriptive
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* Sys table audit log integration (#1122)
* system table integration with audit log
* adding code to resolve issues with response col
* fixed timestamp issue
* adding print statement for from and until time
* adding fix for azure
* removed comments
* removed comments and print statements
* removed comments
* implemented code review comments
* implemented code review comments
* adding review comment
* Sys table integration multi account (#1131)
* added code changes for multi account deployment
* code for multi account system table integration
* Sys table integration multi account (#1132)
* added code changes for multi account deployment
* code for multi account system table integration
* adding code for system table migration check
* changing exception for empty audit log from system table
* adding code to handle sql_endpoint in configs and fix in migration validation (#1133)
* corner case commit (#1134)
* Handle CLSD Cluster Impute when jrcp and clusterSpec is Empty (#1135)
* Handle CLSD Cluster Impute when jrcp and clusterSpec is Empty
* Exclude last_state from clsd as it is not needed in the logic.
---------
Co-authored-by: Sourav Banerjee <[email protected]>
* Exclude 2011 and 2014 as dependency module for 2019 (#1136)
* Exclude 2011 and 2014 as dependency module for 2019
* Added comment in CLSD for understandability
---------
Co-authored-by: Sourav Banerjee <[email protected]>
* corner case commit (#1137)
* Update version
* adding fix for empty EH config for system tables (#1140)
* corner case commit (#1142)
* adding fix for empty audit log for warehouse_spec_silver (#1141)
* recon columns removed (#1143)
* recon columns removed
* recon columns removed
* Initial Commit
* Added Changes in Validation Framework as per comments added during sprint meeting
* added hotfix for warehouse_spec_silver (#1154)
* Added Multiple RunID check in Validation Framework
* Added Other tables in Validation Framework
* Added Multiple WS ID option in Cross Table Validation
* Added change for Pipeline_report
* Change for Pipeline Report
* Added msg for single table validation
* Added negative msg in HealthCheck Report
* Added Negative Msg for Cross Table Validation
* Added extra filter for total cost validation for CLSF
* Changed as per Comments
* Changed as per the comments
* Added some filter condition for cost validation in clsf
* Added Config for all pipeline run
* 19-Oct-23: Added Validation Framework
* 19-Oct-23: Customize the message for customer
* 19-Oct-23: Customize the message for customer
* 26-Oct-23: Added OverwatchID filter in the table
* 26-Oct-23: Change for Coding Best Practices
* Added Function Description for validateColumnBetweenMultipleTable
* Added Pattern Matching in Validation
* Convert if-else in validateRuleAndUpdateStatus to case statement as per comment
* traceability implemented (#1102)
* traceability implemented
* code review implemented
* Added JobRun_Silver in buildClusterStateFact for Cluster E… (#1083)
* Initial commit
* 08-Nov-23: Added JobRun_Silver in buildClusterStateFact for Cluster End Time Imputation
* Impute Terminating Events in CLSF from JR_Silver
* Impute Terminating Events in CLSD
* Impute Terminating Events in CLSD
* Change CLSF to original 0730 version
* Change CLSF to original 0730 version
* Added cluster_spec in CLSD to get job Cluster only
* Make the variable names in buildClusterStateDetail more descriptive
* Make the variable names in buildClusterStateDetail more descriptive
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* corner case commit (#1134)
* Exclude 2011 and 2014 as dependency module for 2019 (#1136)
* Exclude 2011 and 2014 as dependency module for 2019
* Added comment in CLSD for understandability
---------
Co-authored-by: Sourav Banerjee <[email protected]>
* Added Changes in Validation Framework as per comments added during sprint meeting
* Added Multiple RunID check in Validation Framework
* Added Other tables in Validation Framework
* Added Multiple WS ID option in Cross Table Validation
* Added change for Pipeline_report
* Change for Pipeline Report
* Added msg for single table validation
* Added negative msg in HealthCheck Report
* Added Negative Msg for Cross Table Validation
* Added extra filter for total cost validation for CLSF
* Changed as per Comments
* Changed as per the comments
* Added some filter condition for cost validation in clsf
* Added Config for all pipeline run
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
Co-authored-by: Sriram Mohanty <[email protected]>
Co-authored-by: Aman <[email protected]>
* adding fix for duplicate accountId in module 2010 and 3019 (#1270)
* 1218 warehouse state details (#1254)
* test
* code for warehouse_state_detail_silver
* removed comments
* adding warehouseEvents scope
* added exception for table not found
* added exception to check if system tables are getting used or not
* enhance function getWarehousesEventDF
* added code to fix max number of clusters
* change in column names
* refactored code
* Add descriptive `NamedTransformation`s to Spark UI (#1223)
* Initial commit
* Add descriptive job group IDs and named transformations

This makes the Spark UI more developer-friendly when analyzing Overwatch runs. Job group IDs have the form <workspace name>:<OW module name>. Any use of `.transform( df => df)` may be replaced with `.transformWithDescription( nt)` after instantiating a `val nt = NamedTransformation( df => df)` as its argument. This commit contains one such application of the new extension method. (See `val jobRunsAppendClusterName` in `WorkflowsTransforms.scala`.) A usage sketch of this pattern is included at the end of this commit list.

Some logic in `GoldTransforms` falls through to elements of the special job-run-action form of Job Group IDs emitted by the platform, but the impact is minimal relative to the benefit to Overwatch development and troubleshooting. Even so, this form of Job Group ID is still present in initial Spark events before OW ETL modules begin to execute.

* improve TransformationDescriberTest
* flip transformation names to beginning of label for greater visibility in Spark UI. `NamedTransformation` type name now appears in labels' second position.
(cherry picked from commit 2ead752)
* revert modified Spark UI Job Group labels
TODO: enumerate the regressions this would introduce when the labels set by the platform are replaced this way.
---------
Co-authored-by: Guenia <[email protected]>
* adding code for warehouseStateFact gold (#1265)
* adding code for warehouseStateFact gold
* removed hard coded data and fix logic
* removed commented code
* Show `DataFrame` records in logs (#1224)
* Initial commit
* Add extension method to show `DataFrame` records in the log
* catch up with 0820_release

Squashed commit of the following:

commit bbdb61f
Author: Neil Best <[email protected]>
Date: Tue Aug 20 10:11:03 2024 -0500

Add descriptive `NamedTransformation`s to Spark UI (#1223)
* Initial commit
* Add descriptive job group IDs and named transformations

This makes the Spark UI more developer-friendly when analyzing Overwatch runs. Job group IDs have the form <workspace name>:<OW module name>. Any use of `.transform( df => df)` may be replaced with `.transformWithDescription( nt)` after instantiating a `val nt = NamedTransformation( df => df)` as its argument. This commit contains one such application of the new extension method. (See `val jobRunsAppendClusterName` in `WorkflowsTransforms.scala`.)

Some logic in `GoldTransforms` falls through to elements of the special job-run-action form of Job Group IDs emitted by the platform, but the impact is minimal relative to the benefit to Overwatch development and troubleshooting. Even so, this form of Job Group ID is still present in initial Spark events before OW ETL modules begin to execute.

* improve TransformationDescriberTest
* flip transformation names to beginning of label for greater visibility in Spark UI. `NamedTransformation` type name now appears in labels' second position.
(cherry picked from commit 2ead752)
* revert modified Spark UI Job Group labels
TODO: enumerate the regressions this would introduce when the labels set by the platform are replaced this way.
---------
Co-authored-by: Guenia <[email protected]>

commit 3055a22
Author: Aman <[email protected]>
Date: Mon Aug 12 22:59:13 2024 +0530

1218 warehouse state details (#1254)
* test
* code for warehouse_state_detail_silver
* removed comments
* adding warehouseEvents scope
* added exception for table not found
* added exception to check if system tables are getting used or not
* enhance function getWarehousesEventDF
* added code to fix max number of clusters
* change in column names
* refactored code

commit 59daae5
Author: Aman <[email protected]>
Date: Thu Aug 8 20:20:17 2024 +0530

adding fix for duplicate accountId in module 2010 and 3019 (#1270)

commit d6fa441
Author: Sourav Banerjee <[email protected]>
Date: Wed Aug 7 23:24:00 2024 +0530

1030 pipeline validation framework (#1071)
* Initial commit
* 19-Oct-23: Added Validation Framework
* 19-Oct-23: Customize the message for customer
* 19-Oct-23: Customize the message for customer
* 26-Oct-23: Added OverwatchID filter in the table
* 26-Oct-23: Change for Coding Best Practices
* Added Function Description for validateColumnBetweenMultipleTable
* Added Pattern Matching in Validation
* Convert if-else in validateRuleAndUpdateStatus to case statement as per comment
* Initial commit
* traceability implemented (#1102)
* traceability implemented
* code review implemented
* missed code implemented (#1105)
* Initial commit
* traceability implemented (#1102)
* traceability implemented
* code review implemented
* missed code implemented
* missed code implemented
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
* Added proper exception for Spark Stream Gold if progress c… (#1085)
* Initial commit
* 09-Nov-23: Added proper exception for Spark Stream Gold if progress column contains only null in SparkEvents_Bronze
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* Gracefully Handle Exception for NotebookCommands_Gold (#1095)
* Initial commit
* Gracefully Handle Exception for NotebookCommands_Gold
* Convert the check in buildNotebookCommandsFact to a single or clause
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* code missed in merge (#1120)
* Fix Helper Method to Instantiate Remote Workspaces (#1110)
* Initial commit
* Change getRemoteWorkspaceByPath and getWorkspaceByDatabase to take in RemoteWorkspace
* Remove Unnecessary println Statements
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
* Ensure we test the write into a partitioned storage_prefix (#1088)
* Initial commit
* Ensure we test the write into a partitioned storage_prefix
* silver warehouse spec fix (#1121)
* added missed copy-pasta (#1129)
* Exclude cluster logs in S3 root bucket (#1118)
* Exclude cluster logs in S3 root bucket
* Omit cluster log paths pointing to s3a as well
* implemented recon (#1116)
* implemented recon
* docs added
* file path change
* review comments implemented
* Added ShuffleFactor to NotebookCommands (#1124)
Co-authored-by: Sourav Banerjee <[email protected]>
* disabled traceability (#1130)
* Added JobRun_Silver in buildClusterStateFact for Cluster E… (#1083)
* Initial commit
* 08-Nov-23: Added JobRun_Silver in buildClusterStateFact for Cluster End Time Imputation
* Impute Terminating Events in CLSF from JR_Silver
* Impute Terminating Events in CLSD
* Impute Terminating Events in CLSD
* Change CLSF to original 0730 version
* Change CLSF to original 0730 version
* Added cluster_spec in CLSD to get job Cluster only
* Make the variable names in buildClusterStateDetail more descriptive
* Make the variable names in buildClusterStateDetail more descriptive
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* Sys table audit log integration (#1122)
* system table integration with audit log
* adding code to resolve issues with response col
* fixed timestamp issue
* adding print statement for from and until time
* adding fix for azure
* removed comments
* removed comments and print statements
* removed comments
* implemented code review comments
* implemented code review comments
* adding review comment
* Sys table integration multi account (#1131)
* added code changes for multi account deployment
* code for multi account system table integration
* Sys table integration multi account (#1132)
* added code changes for multi account deployment
* code for multi account system table integration
* adding code for system table migration check
* changing exception for empty audit log from system table
* adding code to handle sql_endpoint in configs and fix in migration validation (#1133)
* corner case commit (#1134)
* Handle CLSD Cluster Impute when jrcp and clusterSpec is Empty (#1135)
* Handle CLSD Cluster Impute when jrcp and clusterSpec is Empty
* Exclude last_state from clsd as it is not needed in the logic.
---------
Co-authored-by: Sourav Banerjee <[email protected]>
* Exclude 2011 and 2014 as dependency module for 2019 (#1136)
* Exclude 2011 and 2014 as dependency module for 2019
* Added comment in CLSD for understandability
---------
Co-authored-by: Sourav Banerjee <[email protected]>
* corner case commit (#1137)
* Update version
* adding fix for empty EH config for system tables (#1140)
* corner case commit (#1142)
* adding fix for empty audit log for warehouse_spec_silver (#1141)
* recon columns removed (#1143)
* recon columns removed
* recon columns removed
* Initial Commit
* Added Changes in Validation Framework as per comments added during sprint meeting
* added hotfix for warehouse_spec_silver (#1154)
* Added Multiple RunID check in Validation Framework
* Added Other tables in Validation Framework
* Added Multiple WS ID option in Cross Table Validation
* Added change for Pipeline_report
* Change for Pipeline Report
* Added msg for single table validation
* Added negative msg in HealthCheck Report
* Added Negative Msg for Cross Table Validation
* Added extra filter for total cost validation for CLSF
* Changed as per Comments
* Changed as per the comments
* Added some filter condition for cost validation in clsf
* Added Config for all pipeline run
* 19-Oct-23: Added Validation Framework
* 19-Oct-23: Customize the message for customer
* 19-Oct-23: Customize the message for customer
* 26-Oct-23: Added OverwatchID filter in the table
* 26-Oct-23: Change for Coding Best Practices
* Added Function Description for validateColumnBetweenMultipleTable
* Added Pattern Matching in Validation
* Convert if-else in validateRuleAndUpdateStatus to case statement as per comment
* traceability implemented (#1102)
* traceability implemented
* code review implemented
* Added JobRun_Silver in buildClusterStateFact for Cluster E… (#1083)
* Initial commit
* 08-Nov-23: Added JobRun_Silver in buildClusterStateFact for Cluster End Time Imputation
* Impute Terminating Events in CLSF from JR_Silver
* Impute Terminating Events in CLSD
* Impute Terminating Events in CLSD
* Change CLSF to original 0730 version
* Change CLSF to original 0730 version
* Added cluster_spec in CLSD to get job Cluster only
* Make the variable names in buildClusterStateDetail more descriptive
* Make the variable names in buildClusterStateDetail more descriptive
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
* corner case commit (#1134)
* Exclude 2011 and 2014 as dependency module for 2019 (#1136)
* Exclude 2011 and 2014 as dependency module for 2019
* Added comment in CLSD for understandability
---------
Co-authored-by: Sourav Banerjee <[email protected]>
* Added Changes in Validation Framework as per comments added during sprint meeting
* Added Multiple RunID check in Validation Framework
* Added Other tables in Validation Framework
* Added Multiple WS ID option in Cross Table Validation
* Added change for Pipeline_report
* Change for Pipeline Report
* Added msg for single table validation
* Added negative msg in HealthCheck Report
* Added Negative Msg for Cross Table Validation
* Added extra filter for total cost validation for CLSF
* Changed as per Comments
* Changed as per the comments
* Added some filter condition for cost validation in clsf
* Added Config for all pipeline run
---------
Co-authored-by: Guenia Izquierdo <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
Co-authored-by: Sriram Mohanty <[email protected]>
Co-authored-by: Aman <[email protected]>

commit 3c16b5f
Author: Sourav Banerjee <[email protected]>
Date: Wed Aug 7 23:23:17 2024 +0530

Redefine views so that they are created from tables, not locations (#1241)
* Initial commit
* Change publish() function to incorporate views from ETL Tables instead of paths
* Handle view creation in case the table does not exist
---------
Co-authored-by: Guenia <[email protected]>

commit f3ffd7c
Author: Sourav Banerjee <[email protected]>
Date: Wed Aug 7 23:21:37 2024 +0530

1201 collect all event logs on first run (#1255)
* Initial commit
* cluster event bronze will take all the data from the API for the first run
* Update BronzeTransforms.scala
adjust whitespace around `landClusterEvents()`
---------
Co-authored-by: Guenia <[email protected]>
Co-authored-by: Neil Best <[email protected]>

commit caa3282
Author: Sriram Mohanty <[email protected]>
Date: Wed Aug 7 23:20:25 2024 +0530

append null columns from cluster snapshot for cluster_spec_silver (#1239)
* Initial commit
* append null columns from cluster snapshot for cluster_spec_silver
* append null columns from cluster snapshot for cluster_spec_silver
---------
Co-authored-by: Guenia <[email protected]>

commit f7460bd
Author: Neil Best <[email protected]>
Date: Tue Jul 30 14:52:38 2024 -0500

adjust Silver Job Runs module configuration (#1256)
enable auto-optimized shuffle for module 2011, originally implemented in commit [d751d5f](d751d5f)

commit 25671b7
Author: Sriram Mohanty <[email protected]>
Date: Tue Jul 9 02:02:04 2024 +0530

recon enhancement done to deal with different columns in source and target (#1216)
* Initial commit
* recon enhancement done to deal with different columns in source and target
---------
Co-authored-by: Guenia <[email protected]>

commit 97236ae
Author: Guenia <[email protected]>
Date: Wed May 8 19:43:29 2024 -0400

Initial commit

commit f9c8dd0
Author: Guenia Izquierdo Delgado <[email protected]>
Date: Mon Jun 24 11:28:15 2024 -0400

0812 release (#1249)
* Initial commit
* adding fix for schemaScrubber and StructToMap (#1232)
* fix for null driver_type_id and node_type_id in jrcp (#1236)
* Modify Cluster_snapshot_bronze column (#1234)
* Convert all the struct fields inside the 'spec' column for cluster_snapshot_bronze to mapType
* Dropped Spec column from snapshot
* Removed Redundant VerifyMinSchema
* Update_AWS_instance_types (#1248)
* Update_gcp_instance_types (#1244)
Update_gcp_instance_types
* Update_AWS_instance_types
Update_AWS_instance_types
---------
Co-authored-by: Aman <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
Co-authored-by: Mohan Baabu <[email protected]>

commit 7390d4a
Author: Mohan Baabu <[email protected]>
Date: Fri Jun 21 20:01:46 2024 +0530

Update_Azure_Instance_details (#1246)
* Update_Azure_Instance_details
Update_Azure_Instance_details
* Update Azure_Instance_Details.csv
Updated Standard_NV72ads_A10_v5 types, missed a comma

commit 6cbb9d7
Author: Mohan Baabu <[email protected]>
Date: Fri Jun 21 19:37:57 2024 +0530

Update_gcp_instance_types (#1244)
Update_gcp_instance_types

* add Spark conf option for `DataFrame` logging extension methods

This feature respects the logging level set for the logger in scope (an illustrative implementation sketch is included at the end of this commit list):

```scala
spark.conf.set( "overwatch.dataframelogger.level", "DEBUG")

logger.setLevel( "WARN")
df.log() // no data shown in logs

logger.setLevel( "DEBUG")
df.log() // :)
```

also:
- implement `DataFrameSyntaxTest` suite to test `Dataset`/`DataFrame` extension methods `.showLines()` and `.log()` as implemented within the `DataFrameSyntax` trait.
- move `SparkSessionTestWrapper` into `src/main` and make it extend `SparkSessionWrapper` in order to make `DataFrameSyntax` testable through the use of type parameter `SPARK` and self-typing.
---------
Co-authored-by: Guenia <[email protected]>
* Added deriveRawApiResponseDF fix (#1283)
Co-authored-by: Sourav Banerjee <[email protected]>
* add new module to gold metadata (#1296)
---------
Co-authored-by: Sriram Mohanty <[email protected]>
Co-authored-by: Neil Best <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
Co-authored-by: Aman <[email protected]>
Co-authored-by: Sourav Banerjee <[email protected]>
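
Usage sketch for the `NamedTransformation` change (#1223) referenced above. This is a minimal, self-contained illustration of the pattern the commit message describes (wrap the lambda in a `NamedTransformation`, then call `.transformWithDescription(...)` instead of `.transform(...)`); the stand-in case class, implicit class, and column logic below are assumptions for illustration, not Overwatch's actual implementation:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.upper

object NamedTransformationSketch {

  // Stand-in for Overwatch's NamedTransformation (the real one lives with
  // TransformationDescriber and derives the name automatically); only the call
  // shapes NamedTransformation(df => df) and transformWithDescription(nt) come
  // from the commit text above.
  final case class NamedTransformation(name: String, transformation: DataFrame => DataFrame)

  implicit class NamedTransformationOps(df: DataFrame) {
    // Surface the transformation's name in the Spark UI job description,
    // then apply the wrapped function.
    def transformWithDescription(nt: NamedTransformation): DataFrame = {
      df.sparkSession.sparkContext.setJobDescription(s"${nt.name}: NamedTransformation")
      df.transform(nt.transformation)
    }
  }

  // Analogous to `val jobRunsAppendClusterName` in WorkflowsTransforms.scala,
  // but with placeholder column logic.
  val appendUpperName: NamedTransformation = NamedTransformation(
    "appendUpperName",
    df => df.withColumn("name_upper", upper(df("name")))
  )

  // Before (#1223): someDF.transform(df => df.withColumn(...)) -- anonymous in the Spark UI.
  // After  (#1223): the transformation name appears in the Spark UI label.
  def demo(someDF: DataFrame): DataFrame =
    someDF.transformWithDescription(appendUpperName)
}
```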
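
Similarly, the `overwatch.dataframelogger.level` snippet above only shows the calling side. Below is a rough sketch of how such gated `DataFrame` logging extension methods could be wired up; Overwatch's real `.showLines()`/`.log()` live in its `DataFrameSyntax` trait and differ in detail, so everything here apart from the conf key and the method names is an assumption:

```scala
import java.io.ByteArrayOutputStream

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.DataFrame

object DataFrameLoggingSketch {

  implicit class DataFrameLogOps(df: DataFrame) {

    private val logger: Logger = Logger.getLogger(getClass)

    // Capture the output of df.show(...) as lines instead of printing to stdout.
    def showLines(numRows: Int = 20): Seq[String] = {
      val buffer = new ByteArrayOutputStream()
      Console.withOut(buffer) { df.show(numRows, truncate = false) }
      buffer.toString("UTF-8").split("\n").toSeq
    }

    // Emit the captured lines at the level named by the Spark conf, but only
    // when that level is enabled on the logger in scope -- mirroring the
    // "respects the logging level" behaviour described above.
    def log(numRows: Int = 20): Unit = {
      val levelName = df.sparkSession.conf.get("overwatch.dataframelogger.level", "DEBUG")
      val level = Level.toLevel(levelName, Level.DEBUG)
      if (logger.isEnabledFor(level)) showLines(numRows).foreach(line => logger.log(level, line))
    }
  }
}
```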