feat(fabric_spark_workspace_settings): add more properties to resource and data source
DariuszPorowski committed Jan 16, 2025
1 parent 6b9f87d commit 1bc85c0
Showing 6 changed files with 136 additions and 0 deletions.
9 changes: 9 additions & 0 deletions .changes/unreleased/added-20250116-152025.yaml
@@ -0,0 +1,9 @@
kind: added
body: |
Added additional properties for `fabric_spark_workspace_settings` Data-Source and Resource:
- `high_concurrency.notebook_pipeline_run_enabled` (Boolean)
- `jobs.conservative_job_admission_enabled` (Boolean)
- `jobs.session_timeout_in_minutes` (Number)
time: 2025-01-16T15:20:25.9324812-08:00
custom:
Issue: "111"
11 changes: 11 additions & 0 deletions docs/data-sources/spark_workspace_settings.md
@@ -41,6 +41,7 @@ data "fabric_spark_workspace_settings" "example" {
- `environment` (Attributes) Environment properties. (see [below for nested schema](#nestedatt--environment))
- `high_concurrency` (Attributes) High Concurrency properties. (see [below for nested schema](#nestedatt--high_concurrency))
- `id` (String) The ID of this resource.
- `jobs` (Attributes) (see [below for nested schema](#nestedatt--jobs))
- `pool` (Attributes) Pool properties. (see [below for nested schema](#nestedatt--pool))

<a id="nestedatt--timeouts"></a>
@@ -75,6 +76,16 @@ Read-Only:
Read-Only:

- `notebook_interactive_run_enabled` (Boolean) The status of the high concurrency for notebook interactive run. `false` - Disabled, `true` - Enabled.
- `notebook_pipeline_run_enabled` (Boolean) The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.

<a id="nestedatt--jobs"></a>

### Nested Schema for `jobs`

Read-Only:

- `conservative_job_admission_enabled` (Boolean) Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.
- `session_timeout_in_minutes` (Number) Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).
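
A minimal sketch of reading the new `jobs` values through the data source; the `workspace_id` below is a placeholder:

```terraform
data "fabric_spark_workspace_settings" "example" {
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder workspace ID
}

output "spark_session_timeout" {
  value = data.fabric_spark_workspace_settings.example.jobs.session_timeout_in_minutes
}
```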

<a id="nestedatt--pool"></a>

11 changes: 11 additions & 0 deletions docs/resources/spark_workspace_settings.md
@@ -91,6 +91,7 @@ resource "fabric_spark_workspace_settings" "example2" {
- `automatic_log` (Attributes) Automatic Log properties. (see [below for nested schema](#nestedatt--automatic_log))
- `environment` (Attributes) Environment properties. (see [below for nested schema](#nestedatt--environment))
- `high_concurrency` (Attributes) High Concurrency properties. (see [below for nested schema](#nestedatt--high_concurrency))
- `jobs` (Attributes) Jobs properties. (see [below for nested schema](#nestedatt--jobs))
- `pool` (Attributes) Pool properties. (see [below for nested schema](#nestedatt--pool))
- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))

@@ -122,6 +123,16 @@ Optional:
Optional:

- `notebook_interactive_run_enabled` (Boolean) The status of the high concurrency for notebook interactive run. `false` - Disabled, `true` - Enabled.
- `notebook_pipeline_run_enabled` (Boolean) The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.

<a id="nestedatt--jobs"></a>

### Nested Schema for `jobs`

Optional:

- `conservative_job_admission_enabled` (Boolean) Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.
- `session_timeout_in_minutes` (Number) Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).
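
A minimal sketch of configuring the new properties on the resource; the `workspace_id` is a placeholder and the values are illustrative:

```terraform
resource "fabric_spark_workspace_settings" "example" {
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder workspace ID

  high_concurrency = {
    notebook_pipeline_run_enabled = true
  }

  jobs = {
    conservative_job_admission_enabled = true
    session_timeout_in_minutes         = 60 # must be at most 20160 (14 days)
  }
}
```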

<a id="nestedatt--pool"></a>

18 changes: 18 additions & 0 deletions internal/services/spark/data_spark_workspace_settings.go
@@ -87,6 +87,24 @@ func (d *dataSourceSparkWorkspaceSettings) Schema(ctx context.Context, _ datasou
MarkdownDescription: "The status of the high concurrency for notebook interactive run. `false` - Disabled, `true` - Enabled.",
Computed: true,
},
"notebook_pipeline_run_enabled": schema.BoolAttribute{
MarkdownDescription: "The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.",
Computed: true,
},
},
},
"jobs": schema.SingleNestedAttribute{
Computed: true,
CustomType: supertypes.NewSingleNestedObjectTypeOf[jobsPropertiesModel](ctx),
Attributes: map[string]schema.Attribute{
"conservative_job_admission_enabled": schema.BoolAttribute{
MarkdownDescription: "Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.",
Computed: true,
},
"session_timeout_in_minutes": schema.Int32Attribute{
MarkdownDescription: "Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).",
Computed: true,
},
},
},
"pool": schema.SingleNestedAttribute{
47 changes: 47 additions & 0 deletions internal/services/spark/models_spark_workspace_settings.go
@@ -32,6 +32,7 @@ type baseSparkWorkspaceSettingsModel struct {
AutomaticLog supertypes.SingleNestedObjectValueOf[automaticLogPropertiesModel] `tfsdk:"automatic_log"`
Environment supertypes.SingleNestedObjectValueOf[environmentPropertiesModel] `tfsdk:"environment"`
HighConcurrency supertypes.SingleNestedObjectValueOf[highConcurrencyPropertiesModel] `tfsdk:"high_concurrency"`
Jobs supertypes.SingleNestedObjectValueOf[jobsPropertiesModel] `tfsdk:"jobs"`
Pool supertypes.SingleNestedObjectValueOf[poolPropertiesModel] `tfsdk:"pool"`
}

Expand Down Expand Up @@ -76,6 +77,19 @@ func (to *baseSparkWorkspaceSettingsModel) set(ctx context.Context, from fabspar

to.HighConcurrency = highConcurrency

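// Start with a null jobs object so state stays consistent when the API response omits Jobs.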
jobs := supertypes.NewSingleNestedObjectValueOfNull[jobsPropertiesModel](ctx)

if from.Jobs != nil {
jobsModel := &jobsPropertiesModel{}
jobsModel.set(from.Jobs)

if diags := jobs.Set(ctx, jobsModel); diags.HasError() {
return diags
}
}

to.Jobs = jobs

pool := supertypes.NewSingleNestedObjectValueOfNull[poolPropertiesModel](ctx)

if from.Pool != nil {
@@ -115,10 +129,22 @@ func (to *environmentPropertiesModel) set(from *fabspark.EnvironmentProperties)

type highConcurrencyPropertiesModel struct {
NotebookInteractiveRunEnabled types.Bool `tfsdk:"notebook_interactive_run_enabled"`
NotebookPipelineRunEnabled types.Bool `tfsdk:"notebook_pipeline_run_enabled"`
}

func (to *highConcurrencyPropertiesModel) set(from *fabspark.HighConcurrencyProperties) {
to.NotebookInteractiveRunEnabled = types.BoolPointerValue(from.NotebookInteractiveRunEnabled)
to.NotebookPipelineRunEnabled = types.BoolPointerValue(from.NotebookPipelineRunEnabled)
}

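// jobsPropertiesModel carries the jobs settings between Terraform state and the Fabric Spark API.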
type jobsPropertiesModel struct {
ConservativeJobAdmissionEnabled types.Bool `tfsdk:"conservative_job_admission_enabled"`
SessionTimeoutInMinutes types.Int32 `tfsdk:"session_timeout_in_minutes"`
}

func (to *jobsPropertiesModel) set(from *fabspark.JobsProperties) {
to.ConservativeJobAdmissionEnabled = types.BoolPointerValue(from.ConservativeJobAdmissionEnabled)
to.SessionTimeoutInMinutes = types.Int32PointerValue(from.SessionTimeoutInMinutes)
}

type poolPropertiesModel struct {
@@ -233,6 +259,27 @@ func (to *requestUpdateSparkWorkspaceSettings) set(ctx context.Context, from res
}
}

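// Map the jobs settings into the update request only when set in the plan; an
// entirely empty JobsProperties struct is left out of the request below.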
if !from.Jobs.IsNull() && !from.Jobs.IsUnknown() {
jobs, diags := from.Jobs.Get(ctx)
if diags.HasError() {
return diags
}

var reqJobs fabspark.JobsProperties

if !jobs.ConservativeJobAdmissionEnabled.IsNull() && !jobs.ConservativeJobAdmissionEnabled.IsUnknown() {
reqJobs.ConservativeJobAdmissionEnabled = jobs.ConservativeJobAdmissionEnabled.ValueBoolPointer()
}

if !jobs.SessionTimeoutInMinutes.IsNull() && !jobs.SessionTimeoutInMinutes.IsUnknown() {
reqJobs.SessionTimeoutInMinutes = jobs.SessionTimeoutInMinutes.ValueInt32Pointer()
}

if reqJobs != (fabspark.JobsProperties{}) {
to.Jobs = &reqJobs
}
}

if !from.Pool.IsNull() && !from.Pool.IsUnknown() { //nolint:nestif
pool, diags := from.Pool.Get(ctx)
if diags.HasError() {
40 changes: 40 additions & 0 deletions internal/services/spark/resource_spark_workspace_settings.go
@@ -8,6 +8,7 @@ import (
"fmt"

"github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework-validators/int32validator"
"github.com/hashicorp/terraform-plugin-framework-validators/resourcevalidator"
"github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
"github.com/hashicorp/terraform-plugin-framework/diag"
@@ -138,6 +139,44 @@ func (r *resourceSparkWorkspaceSettings) Schema(ctx context.Context, _ resource.
boolplanmodifier.UseStateForUnknown(),
},
},
"notebook_pipeline_run_enabled": schema.BoolAttribute{
MarkdownDescription: "The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.",
Optional: true,
Computed: true,
PlanModifiers: []planmodifier.Bool{
boolplanmodifier.UseStateForUnknown(),
},
},
},
},
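// jobs is Optional+Computed so practitioners can manage it or fall back to service
// defaults; UseStateForUnknown avoids spurious diffs when the config omits it.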
"jobs": schema.SingleNestedAttribute{
MarkdownDescription: "Jobs properties.",
Optional: true,
Computed: true,
CustomType: supertypes.NewSingleNestedObjectTypeOf[jobsPropertiesModel](ctx),
PlanModifiers: []planmodifier.Object{
objectplanmodifier.UseStateForUnknown(),
},
Attributes: map[string]schema.Attribute{
"conservative_job_admission_enabled": schema.BoolAttribute{
MarkdownDescription: "Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.",
Optional: true,
Computed: true,
PlanModifiers: []planmodifier.Bool{
boolplanmodifier.UseStateForUnknown(),
},
},
"session_timeout_in_minutes": schema.Int32Attribute{
MarkdownDescription: "Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).",
Optional: true,
Computed: true,
Validators: []validator.Int32{
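// 20160 minutes = 14 days, the documented maximum session timeout.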
int32validator.AtMost(20160),
},
PlanModifiers: []planmodifier.Int32{
int32planmodifier.UseStateForUnknown(),
},
},
},
},
"pool": schema.SingleNestedAttribute{
@@ -259,6 +298,7 @@ func (r *resourceSparkWorkspaceSettings) ConfigValidators(_ context.Context) []r
path.MatchRoot("automatic_log"),
path.MatchRoot("environment"),
path.MatchRoot("high_concurrency"),
path.MatchRoot("jobs"),
path.MatchRoot("pool"),
),
}