
Support for Google Identity-Aware Proxy Provider #5

Merged
merged 41 commits into main from brettcurtis/issue2 on Jan 3, 2025

Conversation

@brettcurtis (Contributor) commented Dec 31, 2024

Fixes #2

Summary by CodeRabbit

  • New Features

    • Added Google Cloud Identity-Aware Proxy (IAP) authentication
    • Introduced new configuration for local development environment
    • Enhanced Backstage deployment with improved service account and configuration management
  • Documentation

    • Updated README with testing instructions
    • Expanded Terraform configuration documentation
    • Added details about deployment and authentication processes
  • Configuration Changes

    • Updated application title for local development
    • Removed production and sandbox configuration files
    • Modified Terraform variables and local values
  • Authentication

    • Implemented new sign-in page with environment-specific logic
    • Added support for GCP IAP authentication provider

@brettcurtis brettcurtis self-assigned this Dec 31, 2024
@coderabbitai (bot) commented Dec 31, 2024

Walkthrough

This pull request introduces comprehensive changes to integrate Google Identity-Aware Proxy (IAP) authentication into the Backstage application. The modifications span multiple configuration files across the project, including updates to authentication mechanisms, Terraform deployments, and application configurations. The changes aim to enable secure authentication through Google's IAP, replacing previous authentication methods with a more robust, proxy-based approach.

Changes

| File | Change Summary |
|------|----------------|
| .pre-commit-config.yaml | Updated checkov repository version from 3.2.345 to 3.2.346 |
| README.md | Added a new "Tests" section with instructions for local development |
| app/app-config.yaml | Updated app title to "Backstage (Local Development)" |
| app/packages/app/src/App.tsx | Added ProxiedSignInPage for non-development environments |
| app/packages/backend/src/index.ts | Added import for the GCP IAP authentication provider |
| deployments/main.tf | Added IAP-related resources and service |
| deployments/regional/helm/backstage.yml | Added comprehensive app configuration and service details |

Assessment against linked issues

| Objective | Addressed | Explanation |
|-----------|:---------:|-------------|
| Support Google IAP Authentication | ✅ | |
| Offload User Authentication to Google HTTPS Load Balancer | ✅ | |
| Implement Proxy-based Authentication | ✅ | |

The pull request comprehensively addresses the requirements for supporting Google Identity-Aware Proxy authentication in Backstage, implementing changes across configuration, authentication, and deployment layers.



@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 1, 2025 14:44 — with GitHub Actions Failure
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 1, 2025 15:13 — with GitHub Actions Failure
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 1, 2025 16:01 — with GitHub Actions Failure
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 1, 2025 17:31 — with GitHub Actions Inactive
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 1, 2025 17:48 — with GitHub Actions Inactive
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 1, 2025 18:20 — with GitHub Actions Inactive
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 1, 2025 18:32 — with GitHub Actions Inactive
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 2, 2025 21:48 — with GitHub Actions Failure
@brettcurtis brettcurtis deployed to Sandbox: Main January 2, 2025 21:56 — with GitHub Actions Active
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 2, 2025 21:58 — with GitHub Actions Failure
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 2, 2025 22:05 — with GitHub Actions Failure
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 2, 2025 22:10 — with GitHub Actions Inactive
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 3, 2025 13:31 — with GitHub Actions Failure
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 3, 2025 14:18 — with GitHub Actions Failure
…e Helm chart with organization details and TechDocs configuration
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 3, 2025 14:42 — with GitHub Actions Inactive
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 3, 2025 15:07 — with GitHub Actions Inactive
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 3, 2025 15:17 — with GitHub Actions Failure
@brettcurtis brettcurtis had a problem deploying to Sandbox: Regional - us-east1-b January 3, 2025 15:34 — with GitHub Actions Failure
@brettcurtis brettcurtis temporarily deployed to Sandbox: Regional - us-east1-b January 3, 2025 15:39 — with GitHub Actions Inactive
@brettcurtis brettcurtis marked this pull request as ready for review January 3, 2025 16:14
@coderabbitai (bot) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (11)
app/packages/backend/src/index.ts (1)

24-24: Validate GCP IAP provider initialization
The addition of @backstage/plugin-auth-backend-module-gcp-iap-provider is appropriate for enabling Identity-Aware Proxy support. Ensure the plugin is fully configured (e.g., environment variables, credentials, and the IAP brand/client) and that no steps have been overlooked in the Backstage configuration.
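
For reference, a minimal sketch of wiring the provider into Backstage's new backend system, together with the matching app-config block. The audience string, resolver, and exact keys shown are illustrative assumptions based on the upstream plugin docs, not values taken from this PR:

// app/packages/backend/src/index.ts: register the auth module alongside the other backend plugins
backend.add(import('@backstage/plugin-auth-backend-module-gcp-iap-provider'));

# app-config.yaml: gcpIap expects the IAP JWT audience of the load balancer's backend service
auth:
  providers:
    gcpIap:
      audience: '/projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID' # placeholders
      signIn:
        resolvers:
          - resolver: emailMatchingUserEntityProfileEmail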

deployments/variables.tf (1)

19-22: Enhance the variable description for clarity.

The description could be more detailed to help users understand:

  1. The expected format (service account email)
  2. The purpose (GCP Workload Identity integration)
  3. The relationship with IAP authentication
 variable "k8s_workload_identity_service_account" {
-  description = "The service account to use for the workload identity"
+  description = "The service account email ([email protected]) used for GCP Workload Identity. This enables secure authentication between Kubernetes workloads and GCP services, including IAP."
   type        = string
 }
deployments/regional/variables.tf (2)

48-51: LGTM! Consider adding validation.

The variable replacement from cloud_sql_host_project_id to networking_project_id better reflects its broader scope for shared VPC configuration.

Consider adding validation to ensure the project ID follows GCP naming conventions:

 variable "networking_project_id" {
   description = "The project ID for the shared VPC"
   type        = string
+  validation {
+    condition     = can(regex("^[a-z][a-z0-9-]{4,28}[a-z0-9]$", var.networking_project_id))
+    error_message = "Project ID must be between 6 and 30 characters, start with a letter, and contain only lowercase letters, numbers, and hyphens."
+  }
 }

53-56: Add validation for GCS bucket name format.

The remote bucket variable is correctly defined, but adding validation would help prevent configuration errors.

 variable "remote_bucket" {
   description = "The remote bucket the `terraform_remote_state` data source retrieves the state from"
   type        = string
+  validation {
+    condition     = can(regex("^[a-z0-9][a-z0-9-_.]{1,61}[a-z0-9]$", var.remote_bucket))
+    error_message = "Bucket names must be between 3 and 63 characters, start and end with a number or letter, and can contain dots, hyphens, and underscores."
+  }
 }
deployments/regional/helm/backstage.yml (1)

94-102: Review permission configuration.

The admin permissions configuration uses a simple group-based rule. While functional, consider:

  1. Adding more granular permissions for different user roles
  2. Documenting the group membership requirements

Consider expanding the permissions model:

      permissions:
        rules:
          - name: backstage-admin-rule
            resourceType: all
            policy: allow
            conditions:
              - type: group
                group: admins
+          - name: backstage-developer-rule
+            resourceType: catalog-entity
+            policy: allow
+            conditions:
+              - type: group
+                group: developers
deployments/main.tf (1)

64-67: Ensure secure handling of IAP client credentials

The IAP client will generate OAuth credentials that need to be handled securely. Make sure:

  1. The credentials are stored securely in your secrets management system
  2. The credentials are only accessible to necessary services
  3. There's a rotation policy in place

Consider documenting the credential management process in your operational runbooks.
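
One option, sketched below with a recent google provider in mind, is to push the generated secret straight into Google Secret Manager so services read it from there instead of from Terraform outputs (the secret ID and variable names here are hypothetical):

resource "google_secret_manager_secret" "backstage_iap_client_secret" {
  project   = var.project # assumed variable name
  secret_id = "backstage-iap-client-secret"

  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "backstage_iap_client_secret" {
  secret      = google_secret_manager_secret.backstage_iap_client_secret.id
  secret_data = google_iap_client.this.secret # the client secret exported by the IAP client resource
}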

deployments/README.md (1)

45-48: Add descriptions for the IAP client outputs

The outputs backstage_iap_client_id and backstage_iap_client_secret are missing descriptions. Consider adding meaningful descriptions to help users understand their purpose and usage.

 | Name | Description |
 |------|-------------|
-| <a name="output_backstage_iap_client_id"></a> [backstage\_iap\_client\_id](#output\_backstage\_iap\_client\_id) | n/a |
-| <a name="output_backstage_iap_client_secret"></a> [backstage\_iap\_client\_secret](#output\_backstage\_iap\_client\_secret) | n/a |
+| <a name="output_backstage_iap_client_id"></a> [backstage\_iap\_client\_id](#output\_backstage\_iap\_client\_id) | The OAuth client ID for IAP authentication |
+| <a name="output_backstage_iap_client_secret"></a> [backstage\_iap\_client\_secret](#output\_backstage\_iap\_client\_secret) | The OAuth client secret for IAP authentication |
README.md (1)

43-55: Enhance testing instructions for IAP configuration

While the basic local development setup is documented, consider adding:

  1. Instructions for testing with IAP authentication locally
  2. Steps to verify IAP configuration in different environments
  3. Troubleshooting guide for common IAP-related issues

This will help developers validate the IAP integration properly.
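
As a starting point, a minimal local smoke test might look like the following, assuming the standard yarn scripts a scaffolded Backstage app ships with:

# from the app/ directory
yarn install
yarn dev   # starts frontend and backend; in development the sign-in page falls back to the guest provider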

deployments/regional/locals.tf (1)

43-46: LGTM! The local values support IAP implementation.

The new local values properly configure:

  1. Environment-specific hostnames for IAP endpoints
  2. DNS managed zones for domain management
  3. Remote state integration for infrastructure dependencies

However, consider documenting the expected outputs from the remote state for better maintainability.

Add comments explaining the expected outputs from data.terraform_remote_state.main.outputs:

+  # Outputs from main remote state:
+  # - iap_oauth_client_id: The OAuth client ID for IAP
+  # - iap_oauth_client_secret: The OAuth client secret for IAP
   main               = data.terraform_remote_state.main.outputs
deployments/regional/main.tf (2)

86-97: Consider adding explicit dependency on Kubernetes ingress.

The DNS record correctly uses the ingress IP, but consider adding an explicit depends_on block to ensure proper resource creation order.

 resource "google_dns_record_set" "backstage_a_record" {
   project      = var.networking_project_id
   name         = "${local.hostname}." # Trailing dot is required
   managed_zone = local.managed_zone
   type         = "A"
   ttl          = 300

   rrdatas = [kubernetes_ingress_v1.backstage.status.0.load_balancer.0.ingress.0.ip]
+
+  depends_on = [
+    kubernetes_ingress_v1.backstage
+  ]
 }

147-217: LGTM! Consider additional security headers.

The ingress and IAP configuration is well-structured with proper HTTPS enforcement. Consider adding security-related annotations for enhanced protection.

 metadata {
   name      = "backstage"
   namespace = "backstage"

   annotations = {
     "kubernetes.io/ingress.allow-http"       = "false"
     "networking.gke.io/managed-certificates" = kubernetes_manifest.backstage_tls.manifest.metadata.name
+    "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
+    "nginx.ingress.kubernetes.io/ssl-redirect" = "true"
+    "nginx.ingress.kubernetes.io/hsts-max-age" = "31536000"
+    "nginx.ingress.kubernetes.io/hsts-include-subdomains" = "true"
   }
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0dc3cf9 and f987913.

⛔ Files ignored due to path filters (2)
  • app/packages/backend/package.json is excluded by !**/*.json
  • app/yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (21)
  • .pre-commit-config.yaml (1 hunks)
  • README.md (1 hunks)
  • app/app-config.production.yaml (0 hunks)
  • app/app-config.sandbox.yaml (0 hunks)
  • app/app-config.yaml (1 hunks)
  • app/examples/admin.yaml (1 hunks)
  • app/packages/app/src/App.tsx (2 hunks)
  • app/packages/backend/src/index.ts (1 hunks)
  • deployments/README.md (3 hunks)
  • deployments/main.tf (1 hunks)
  • deployments/outputs.tf (1 hunks)
  • deployments/regional/README.md (3 hunks)
  • deployments/regional/helm/backstage.yml (2 hunks)
  • deployments/regional/locals.tf (1 hunks)
  • deployments/regional/main.tf (3 hunks)
  • deployments/regional/tfvars/us-east1-b-production.tfvars (1 hunks)
  • deployments/regional/tfvars/us-east1-b-sandbox.tfvars (1 hunks)
  • deployments/regional/variables.tf (1 hunks)
  • deployments/shared/helpers.tf (1 hunks)
  • deployments/tfvars/sandbox.tfvars (1 hunks)
  • deployments/variables.tf (1 hunks)
💤 Files with no reviewable changes (2)
  • app/app-config.production.yaml
  • app/app-config.sandbox.yaml
✅ Files skipped from review due to trivial changes (2)
  • .pre-commit-config.yaml
  • deployments/tfvars/sandbox.tfvars
🔇 Additional comments (18)
deployments/regional/tfvars/us-east1-b-sandbox.tfvars (1)

1-2: Validate references to removed variable
The networking_project_id and remote_bucket variables have been introduced, presumably replacing the removed cloud_sql_host_project_id. Ensure any references to cloud_sql_host_project_id are fully removed throughout the Terraform codebase and that the new variables are properly passed to modules that rely on them.

✅ Verification successful

The previous search didn't find any references to cloud_sql_host_project_id, but let's verify:

  1. If these variables are properly defined in Terraform files
  2. How they are being used across the codebase
  3. If there are any remnants of the old variable in tfvars files

This will help us confirm if the transition from cloud_sql_host_project_id to the new variables is complete and properly implemented.


Let's verify how these new variables are being used in the actual Terraform configurations to ensure they are properly integrated and there are no leftover references to the old variable.


Transition from cloud_sql_host_project_id to new variables is complete and correct

The verification shows that:

  1. No references to the old cloud_sql_host_project_id variable exist in the codebase
  2. The new variables are properly defined in deployments/regional/variables.tf
  3. The variables are correctly used:
    • networking_project_id is used for host project ID in Cloud SQL and DNS record configurations
    • remote_bucket is used for backend configuration

The transition appears to be complete and properly implemented with no remnants of the old variable.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search the repository for references to `cloud_sql_host_project_id`
rg -A 3 -B 3 "cloud_sql_host_project_id"

Length of output: 40


Script:

#!/bin/bash
# Let's check for any Terraform variable definitions and their usage
fd -e tf -x cat {} | grep -A 3 -B 3 "variable.*\(networking_project_id\|remote_bucket\|cloud_sql_host_project_id\)"

# Also check tfvars files for these variables
fd -e tfvars -x cat {} | grep -A 3 -B 3 "\(networking_project_id\|remote_bucket\|cloud_sql_host_project_id\)"

Length of output: 1093


Script:

#!/bin/bash
# Let's check how these variables are being used in the Terraform configurations
rg -A 3 -B 3 "\b(networking_project_id|remote_bucket)\b" --type tf

Length of output: 2153

deployments/regional/tfvars/us-east1-b-production.tfvars (1)

1-2: Consistency check across environments
Introducing networking_project_id and remote_bucket in production is aligned with the sandbox environment changes. Verify that all staging/other environment .tfvars or relevant variables have been consistently updated to avoid mismatched configurations between environments.

✅ Verification successful

Variables are missing in some environment configurations

The verification reveals that networking_project_id and remote_bucket variables are only defined in two environment files, likely sandbox and production. If there are staging or other environments, they might be missing these required variables, which could lead to deployment issues.

Let's verify if there are other environment files that should include these variables:


Let's check the content of the other tfvars files to understand if these variables should be present there:


Variables are correctly configured across environments

Based on the verification, the networking_project_id and remote_bucket variables are appropriately configured in the regional tfvars files (deployments/regional/tfvars/) for both sandbox and production environments. The other tfvars files (deployments/tfvars/) contain different sets of variables for different purposes and don't require these networking-related variables.

The configuration is consistent across environments where these variables are needed:

  • us-east1-b-production.tfvars: Contains both variables with prod values
  • us-east1-b-sandbox.tfvars: Contains both variables with sandbox values
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Validate usage of new variables across environment .tfvars files
fd -e tfvars --exec rg "(networking_project_id|remote_bucket)"

Length of output: 268


Script:

#!/bin/bash
# List all tfvars files to check for potential missing environment configurations
fd -e tfvars

Length of output: 196


Script:

#!/bin/bash
# Check content of the other tfvars files
cat deployments/tfvars/production.tfvars deployments/tfvars/sandbox.tfvars

Length of output: 486

app/packages/app/src/App.tsx (2)

64-68: Split environment-specific sign-in flows
This approach introduces a separate GCP IAP sign-in path for non-development environments. This is a solid pattern. Ensure that any local development toggles or environment variables are well-documented so new contributors understand why the fallback logic uses the guest provider locally.
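
A minimal sketch of that pattern, assuming the backend provider is registered under the id gcpIap (adjust to the values actually used in this PR):

// app/packages/app/src/App.tsx
import { SignInPage, ProxiedSignInPage } from '@backstage/core-components';
import { createApp } from '@backstage/app-defaults';

const app = createApp({
  components: {
    // Guest sign-in locally; defer to the IAP proxy everywhere else
    SignInPage: props =>
      process.env.NODE_ENV === 'development' ? (
        <SignInPage {...props} providers={['guest']} />
      ) : (
        <ProxiedSignInPage {...props} provider="gcpIap" />
      ),
  },
  // ...plugins, APIs, etc.
});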


32-32: Confirm new imports are available
The ProxiedSignInPage import from @backstage/core-components is a new addition. Verify that your project dependencies align with this version of @backstage/core-components, ensuring no version conflicts exist.

✅ Verification successful

Dependencies are properly aligned with Backstage versioning

The verification shows that @backstage/core-components is correctly listed as a dependency with the version specifier backstage:^, which follows the same pattern as other Backstage dependencies in the project. This versioning scheme ensures compatibility across the Backstage ecosystem.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for the version of @backstage/core-components
rg "dependencies|core-components" package.json -A 2

Length of output: 106


Script:

#!/bin/bash
# Search for package.json files in the repository and check for @backstage/core-components dependency
fd package.json --exec rg "dependencies|core-components" {} -A 2

Length of output: 466

deployments/outputs.tf (1)

4-11: LGTM! Properly configured IAP client outputs.

The outputs are well-structured and follow Terraform best practices:

  • Appropriate naming convention
  • Sensitive flag set for the secret
  • Documentation link provided
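
For illustration, outputs in that shape look roughly like this (the output names match the deployments README table earlier in this review; client_id and secret are attributes exported by google_iap_client):

output "backstage_iap_client_id" {
  description = "The OAuth client ID for IAP authentication"
  value       = google_iap_client.this.client_id
}

output "backstage_iap_client_secret" {
  description = "The OAuth client secret for IAP authentication"
  value       = google_iap_client.this.secret
  sensitive   = true # keeps the secret out of CLI output and logs
}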

Let's verify the IAP client resource exists:

✅ Verification successful

IAP client outputs are correctly configured and resources exist

The verification confirms that both required resources are properly defined in the codebase:

  • google_iap_client.this resource exists and is properly configured with the brand reference
  • google_iap_brand.this resource exists and is properly configured with application title and project ID

The outputs in outputs.tf correctly reference these existing resources, making the configuration complete and valid.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify the google_iap_client resource is properly defined
rg "resource\s+\"google_iap_client\"\s+\"this\"" -A 5

Length of output: 315


Script:

#!/bin/bash
# Let's verify the IAP brand resource is also properly defined
rg "resource\s+\"google_iap_brand\"\s+\"this\"" -A 5

Length of output: 546

deployments/shared/helpers.tf (1)

9-10: LGTM! Team and repository values updated correctly.

The changes appropriately reflect the Backstage-specific configuration.

Let's verify the latest available version of the helpers module:

✅ Verification successful

Version v0.1.2 of terraform-core-helpers is the latest available

The code is using the most recent version (v0.1.2) of the terraform-core-helpers module, which is confirmed by checking the latest release tag.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check latest release version of terraform-core-helpers
gh api repos/osinfra-io/terraform-core-helpers/releases/latest --jq .tag_name

Length of output: 86

app/examples/admin.yaml (2)

5-11: Consider security implications of exposed admin details.

While the user configuration is valid, consider:

  1. Limiting exposed personal information in version control
  2. Using environment variables or secrets management for sensitive details
  3. Adding a comment explaining the purpose of this admin account
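
A pared-down sketch of a User entity that keeps personal detail to a minimum; the name and email below are placeholders, not values from this PR:

apiVersion: backstage.io/v1alpha1
kind: User
metadata:
  name: admin-user # placeholder
spec:
  profile:
    email: admin@example.com # placeholder; prefer a team alias over a personal address
  memberOf: [admins]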

15-20: Verify group membership configuration.

The group configuration looks good, but ensure:

  1. The empty children array is intentional
  2. Group permissions and roles are properly configured elsewhere
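
For context, a Group entity of that shape looks like the sketch below; an empty children list is valid and simply declares no nested groups:

apiVersion: backstage.io/v1alpha1
kind: Group
metadata:
  name: admins
spec:
  type: team
  children: []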

Let's check for related group configurations:

✅ Verification successful

Let me gather more information about the group configurations and their relationships.


Empty children array is intentional, permissions are properly configured

Based on the verification:

  • The empty children array in the admins group is intentional as it follows the same pattern as the guests group
  • Group permissions are properly configured:
    • Permissions framework is enabled in app/app-config.yaml
    • Admin permissions are set in deployments/regional/helm/backstage.yml with appropriate rules
    • The admin user is correctly associated with the admins group through memberOf
  • The configuration follows Backstage's documented structure (as referenced in the comments)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for additional group configurations
rg "kind:\s+Group" --type yaml
# Search for group permission configurations
rg "permission" --type yaml -A 5

Length of output: 1164


Script:

#!/bin/bash
# Check content of both group files
cat app/examples/org.yaml app/examples/admin.yaml

# Look for any references to these groups in other YAML files
rg "admins|org" --type yaml -A 3

# Check for role bindings or role configurations
rg "role|binding" --type yaml -A 3

Length of output: 2490

deployments/regional/helm/backstage.yml (2)

6-7: Resource configurations look appropriate.

The cloud-sql-proxy configuration and resource limits are well-defined and follow best practices:

  • Private IP connectivity is enabled
  • Resource limits are set appropriately for both memory and CPU
  • Requests are set to reasonable values for initial scaling

Also applies to: 12-15


71-78: Review database connection security.

The database configuration uses environment variables for credentials, which is good. However, there are some security considerations:

  1. The connection is using localhost, implying the cloud-sql-proxy sidecar
  2. No SSL configuration is specified

Consider adding SSL configuration for enhanced security:

      database:
        client: pg
        connection:
          host: localhost
          port: 5432
          user: ${POSTGRES_USER}
          password: ${POSTGRES_PASSWORD}
+         ssl:
+           rejectUnauthorized: true
+           ca: ${POSTGRES_SSL_CERT}
deployments/main.tf (3)

36-36: LGTM: IAP service enabled correctly

The addition of "iap.googleapis.com" is necessary for enabling Identity-Aware Proxy functionality.


72-76: Verify workload identity configuration

The Cloud SQL client role assignment looks correct, but please verify:

  1. The workload identity service account variable is properly set in all environments
  2. The minimal required permissions are granted
✅ Verification successful

Cloud SQL IAM configuration is properly set with minimal permissions

The verification shows that:

  • The workload identity service account variable is properly defined and set in the tfvars files
  • Only the necessary roles/cloudsql.client role is granted, which is the minimal required permission for Cloud SQL access
  • No other excessive Cloud SQL-related permissions are assigned to this service account
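
For readers unfamiliar with the binding's shape, a minimal sketch (the member expression follows this PR's pattern; the project reference is assumed):

resource "google_project_iam_member" "cloud_sql_client" {
  project = var.project # assumed variable name
  role    = "roles/cloudsql.client"
  member  = "serviceAccount:${var.k8s_workload_identity_service_account}"
}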
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for the workload identity variable in tfvars files
rg "k8s_workload_identity_service_account" --type tf

# List all IAM roles assigned to this service account
ast-grep --pattern 'resource "google_project_iam_member" $_$ {
  member = "serviceAccount:${var.k8s_workload_identity_service_account}"
  $$$
}'

Length of output: 369


Script:

#!/bin/bash
# Check the variable definition and any default values
cat deployments/variables.tf | grep -A 5 "k8s_workload_identity_service_account"

# Check for tfvars files that might set this variable
fd ".tfvars$"

# Look for other IAM roles assigned to service accounts in the project
rg 'resource "google_project_iam_member"' -A 5 --type tf

Length of output: 821


Script:

#!/bin/bash
# Check the content of tfvars files for the workload identity variable
cat deployments/regional/tfvars/us-east1-b-production.tfvars deployments/regional/tfvars/us-east1-b-sandbox.tfvars deployments/tfvars/production.tfvars deployments/tfvars/sandbox.tfvars | grep "k8s_workload_identity_service_account"

# Check for any other IAM-related configurations that might grant additional permissions
rg "roles/cloudsql" --type tf

Length of output: 433


50-59: Verify the support email configuration

The support email ([email protected]) needs to be verified as:

  1. A valid email address that your team controls
  2. Either a user email or a Google Groups alias (as noted in comments)
  3. Accessible for IAP-related support requests

This is especially important as the brand cannot be easily modified after creation.
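
Because the brand is effectively write-once, it is worth seeing what gets locked in at creation time. A sketch with placeholder values; the attribute names come from the google_iap_brand resource:

resource "google_iap_brand" "this" {
  support_email     = "backstage-support@example.com" # placeholder; must be a user or Google Group the team controls
  application_title = "Backstage"
  project           = var.project # assumed variable name
}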

deployments/regional/README.md (2)

31-47: LGTM! Comprehensive documentation of IAP infrastructure.

The new resources and data sources properly document the IAP implementation:

  • DNS configuration for IAP endpoints
  • Kubernetes resources for IAP integration
  • Backend configuration for IAP authentication
  • TLS setup for secure communication

61-62: LGTM! Clear documentation of new input variables.

The new input variables properly support:

  • Shared VPC networking for IAP
  • Remote state management for infrastructure dependencies
deployments/regional/main.tf (3)

1-13: LGTM! Well-structured remote state configuration.

The remote state configuration follows Terraform best practices with clear workspace naming and proper GCS backend setup.
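
Such a data source typically takes the following shape (the backend type and bucket variable appear in this PR; the workspace wiring and prefix are illustrative assumptions):

data "terraform_remote_state" "main" {
  backend   = "gcs"
  workspace = terraform.workspace # reuse the current workspace name

  config = {
    bucket = var.remote_bucket
    prefix = "deployments/main" # hypothetical state prefix
  }
}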


22-22: Verify networking_project_id variable definition.

The change to use networking_project_id aligns with the project structure, but let's ensure this variable is properly defined.

✅ Verification successful

Variable networking_project_id is properly defined

The variable networking_project_id is correctly defined in deployments/regional/variables.tf with appropriate description and type declaration, making the change to use this variable in main.tf valid and consistent.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if networking_project_id variable is defined
rg -A 5 'variable.*networking_project_id' --type tf

Length of output: 397


222-234: Verify IAP credentials in remote state and consider encryption.

The IAP secret configuration looks good, but let's verify:

  1. The existence of IAP client outputs in the remote state
  2. Consider using a secret management solution like Google Secret Manager for the IAP credentials

Consider migrating sensitive credentials to Google Secret Manager for enhanced security:

data "google_secret_manager_secret_version" "iap_client_id" {
  secret  = "backstage-iap-client-id"
  project = var.project_id
}

data "google_secret_manager_secret_version" "iap_client_secret" {
  secret  = "backstage-iap-client-secret"
  project = var.project_id
}
✅ Verification successful

IAP credentials configuration is properly set up, but could be enhanced

The verification confirms that:

  • IAP client credentials are properly managed through the google_iap_client resource
  • The outputs are correctly defined with the client secret marked as sensitive
  • The Kubernetes secret is properly configured to use these outputs

While the current setup is functional, here's a security enhancement suggestion:

  • Consider migrating sensitive credentials to Google Secret Manager for additional security controls and audit capabilities
  • This can be implemented using the google_secret_manager_secret_version data source as shown in the original architecture advice
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for IAP client outputs in Terraform files
rg -A 5 'output.*backstage_iap_client' --type tf

Length of output: 419


Script:

#!/bin/bash
# Check for the IAP client resource definition
rg -A 10 'resource.*google_iap_client.*this' --type tf

Length of output: 708

@brettcurtis brettcurtis merged commit 688ec4b into main Jan 3, 2025
8 checks passed
@brettcurtis brettcurtis deleted the brettcurtis/issue2 branch January 3, 2025 16:27