From 0d747a5d656d473d668ae60670455ca3ccaaf4c2 Mon Sep 17 00:00:00 2001 From: Anna Levenberg Date: Tue, 5 Dec 2023 12:59:40 -0500 Subject: [PATCH] sample checkers output --- .github/snippet-bot.yml | 1 - CODE_OF_CONDUCT.md | 58 ++++--- CONTRIBUTING.md | 20 +-- README.md | 31 ++-- SECURITY.md | 6 +- bigquery/write/README.md | 30 ++-- ci/builds/setup-bazel.sh | 28 ++-- ci/builds/setup-conda.sh | 2 +- ci/builds/setup-vcpkg.sh | 2 +- ci/cloudbuild-setup-bazel.yaml | 2 +- ci/cloudbuild-setup-vcpkg.yaml | 2 +- ci/conda.Dockerfile | 2 +- cloud-run-hello-world/README.md | 4 +- .../bootstrap-cloud-run-hello.sh | 36 ++-- gcs-fast-transfers/README.md | 33 ++-- getting-started/README.md | 108 ++++++------ getting-started/gke/README.md | 155 ++++++++---------- getting-started/update/README.md | 119 +++++++------- iot/mqtt-ciotc/.dockerignore | 1 - iot/mqtt-ciotc/README.md | 8 +- iot/mqtt-ciotc/setup_device.sh | 109 ++++++------ populate-bucket/README.md | 58 ++++--- pubsub-open-telemetry/BUILD.bazel | 14 +- pubsub-open-telemetry/README.md | 100 ++++++----- pubsub-open-telemetry/WORKSPACE.bazel | 13 +- setup/WORKSPACE.bazel | 8 + speech/api/README.md | 127 +++++++------- 27 files changed, 564 insertions(+), 513 deletions(-) diff --git a/.github/snippet-bot.yml b/.github/snippet-bot.yml index 8b13789..e69de29 100644 --- a/.github/snippet-bot.yml +++ b/.github/snippet-bot.yml @@ -1 +0,0 @@ - diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index 46b2a08..8ac97a0 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -1,43 +1,41 @@ # Contributor Code of Conduct -As contributors and maintainers of this project, -and in the interest of fostering an open and welcoming community, -we pledge to respect all people who contribute through reporting issues, -posting feature requests, updating documentation, -submitting pull requests or patches, and other activities. 
- -We are committed to making participation in this project -a harassment-free experience for everyone, -regardless of level of experience, gender, gender identity and expression, -sexual orientation, disability, personal appearance, +As contributors and maintainers of this project, and in the interest of +fostering an open and welcoming community, we pledge to respect all people who +contribute through reporting issues, posting feature requests, updating +documentation, submitting pull requests or patches, and other activities. + +We are committed to making participation in this project a harassment-free +experience for everyone, regardless of level of experience, gender, gender +identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality. Examples of unacceptable behavior by participants include: -* The use of sexualized language or imagery -* Personal attacks -* Trolling or insulting/derogatory comments -* Public or private harassment -* Publishing other's private information, -such as physical or electronic -addresses, without explicit permission -* Other unethical or unprofessional conduct. +- The use of sexualized language or imagery +- Personal attacks +- Trolling or insulting/derogatory comments +- Public or private harassment +- Publishing other's private information, such as physical or electronic + addresses, without explicit permission +- Other unethical or unprofessional conduct. Project maintainers have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct. -By adopting this Code of Conduct, -project maintainers commit themselves to fairly and consistently -applying these principles to every aspect of managing this project. -Project maintainers who do not follow or enforce the Code of Conduct -may be permanently removed from the project team. 
+comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct. By adopting this Code of Conduct, project +maintainers commit themselves to fairly and consistently applying these +principles to every aspect of managing this project. Project maintainers who do +not follow or enforce the Code of Conduct may be permanently removed from the +project team. This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. -Instances of abusive, harassing, or otherwise unacceptable behavior -may be reported by opening an issue -or contacting one or more of the project maintainers. +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported by opening an issue or contacting one or more of the project +maintainers. -This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0, -available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/) +This Code of Conduct is adapted from the +[Contributor Covenant](http://contributor-covenant.org), version 1.2.0, +available at +[http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 0ac02f5..ede6319 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -2,18 +2,18 @@ ## Contributor License Agreements -We'd love to accept your patches! Before we can take them, we -have to jump a couple of legal hurdles. +We'd love to accept your patches! Before we can take them, we have to jump a +couple of legal hurdles. Please fill out either the individual or corporate Contributor License Agreement (CLA). 
- * If you are an individual writing original source code and you're sure you - own the intellectual property, then you'll need to sign an - [individual CLA](https://developers.google.com/open-source/cla/individual). - * If you work for a company that wants to allow you to contribute your work, - then you'll need to sign a - [corporate CLA](https://developers.google.com/open-source/cla/corporate). +- If you are an individual writing original source code and you're sure you own + the intellectual property, then you'll need to sign an + [individual CLA](https://developers.google.com/open-source/cla/individual). +- If you work for a company that wants to allow you to contribute your work, + then you'll need to sign a + [corporate CLA](https://developers.google.com/open-source/cla/corporate). Follow either of the two links above to access the appropriate CLA and instructions for how to sign and return it. Once we receive it, we'll be able to @@ -33,5 +33,5 @@ accept your pull requests. ## Style -Samples in this repository follow the [Google C++ Style Guide]( -https://google.github.io/styleguide/cppguide.html). +Samples in this repository follow the +[Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html). diff --git a/README.md b/README.md index faa2e55..a36fe4b 100644 --- a/README.md +++ b/README.md @@ -1,29 +1,34 @@ # C++ Samples -A small collection of samples that demonstrate how to call Google Cloud services from C++. +A small collection of samples that demonstrate how to call Google Cloud services +from C++. -[![style][style-badge]][style-link] [![cloud build][cloud-build-badge]][cloud-build-link] +[![style][style-badge]][style-link] +[![cloud build][cloud-build-badge]][cloud-build-link] -The samples in this repo cover only a _small fraction_ of the total APIs that you can call from C++. See -the [googleapis repo](https://github.com/googleapis/googleapis) to see the full list of APIs callable from C++. 
+The samples in this repo cover only a _small fraction_ of the total APIs that +you can call from C++. See the +[googleapis repo](https://github.com/googleapis/googleapis) to see the full list +of APIs callable from C++. These samples will only build and run on **Linux**. -There is a growing collection of [C++ client libraries] for Google Cloud services. These include Cloud Bigtable, Cloud -Pub/Sub, Cloud Spanner, and Google Cloud Storage. These libraries include -examples of how to use most functions. The examples in this repository typically -involve using a combination of services, or a more specific use-case. +There is a growing collection of [C++ client libraries] for Google Cloud +services. These include Cloud Bigtable, Cloud Pub/Sub, Cloud Spanner, and Google +Cloud Storage. These libraries include examples of how to use most functions. +The examples in this repository typically involve using a combination of +services, or a more specific use-case. ## Contributing changes -* See [CONTRIBUTING.md](CONTRIBUTING.md) +- See [CONTRIBUTING.md](CONTRIBUTING.md) ## Licensing -* See [LICENSE](LICENSE) +- See [LICENSE](LICENSE) -[C++ client libraries]: https://github.com/googleapis/google-cloud-cpp -[style-badge]: https://github.com/GoogleCloudPlatform/cpp-samples/actions/workflows/style.yaml/badge.svg -[style-link]: https://github.com/GoogleCloudPlatform/cpp-samples/actions/workflows/style.yaml +[c++ client libraries]: https://github.com/googleapis/google-cloud-cpp [cloud-build-badge]: https://img.shields.io/badge/cloud%20build-TODO-yellowgreen [cloud-build-link]: https://github.com/GoogleCloudPlatform/cpp-samples/issues/119 +[style-badge]: https://github.com/GoogleCloudPlatform/cpp-samples/actions/workflows/style.yaml/badge.svg +[style-link]: https://github.com/GoogleCloudPlatform/cpp-samples/actions/workflows/style.yaml diff --git a/SECURITY.md b/SECURITY.md index 8b58ae9..50e6d3e 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -2,6 +2,8 @@ To report a security 
issue, please use [g.co/vulnz](https://g.co/vulnz). -The Google Security Team will respond within 5 working days of your report on g.co/vulnz. +The Google Security Team will respond within 5 working days of your report on +g.co/vulnz. -We use g.co/vulnz for our intake, and do coordination and disclosure here using GitHub Security Advisory to privately discuss and fix the issue. +We use g.co/vulnz for our intake, and do coordination and disclosure here using +GitHub Security Advisory to privately discuss and fix the issue. diff --git a/bigquery/write/README.md b/bigquery/write/README.md index c4a3fee..f37978a 100644 --- a/bigquery/write/README.md +++ b/bigquery/write/README.md @@ -1,15 +1,16 @@ # Using BigQuery Storage Write -This example shows how to upload some data to BigQuery using the BigQuery Storage API. -For simplicity, the example uses a hard-coded dataset, table, and schema. It uses -the default "write stream" and always uploads the same data. +This example shows how to upload some data to BigQuery using the BigQuery +Storage API. For simplicity, the example uses a hard-coded dataset, table, and +schema. It uses the default "write stream" and always uploads the same data. If you are not familiar with the BigQuery Storage Write API, we recommend you first read the [API overview] before starting this guide. ## Compiling the Example -This project uses `vcpkg` to install its dependencies. Clone `vcpkg` in your `$HOME`: +This project uses `vcpkg` to install its dependencies. 
Clone `vcpkg` in your +`$HOME`: ```shell git clone -C $HOME https://github.com/microsoft/vcpkg.git @@ -21,8 +22,8 @@ Install the typical development tools, on Ubuntu you would use: apt update && apt install -y build-essential cmake git ninja-build pkg-config g++ curl tar zip unzip ``` -In this directory compile the dependencies and the code, this can take as long as an hour, depending on the performance -of your workstation: +In this directory compile the dependencies and the code, this can take as long +as an hour, depending on the performance of your workstation: ```shell cd cpp-samples/bigquery/write @@ -35,10 +36,10 @@ The program will be in `.build/single_threaded_write`. ## Pre-requisites -You are going to need a Google Cloud project to host the BigQuery dataset and table used in this example. -You will need to install and configure the BigQuery CLI tool. Follow the -[Google Cloud CLI install][install-sdk] instructions, and then the [quickstart][BigQuery CLI tool] for -the BigQuery CLI tool. +You are going to need a Google Cloud project to host the BigQuery dataset and +table used in this example. You will need to install and configure the BigQuery +CLI tool. Follow the [Google Cloud CLI install][install-sdk] instructions, and +then the [quickstart][bigquery cli tool] for the BigQuery CLI tool. 
Verify the CLI is working using a simple command to list the active project: @@ -63,7 +64,8 @@ bq update cpp_samples.singers schema.json ## Run the sample -Run the example, replace the `[PROJECT ID]` placeholder with the id of your project: +Run the example, replace the `[PROJECT ID]` placeholder with the id of your +project: ```shell .build/single_threaded_write [PROJECT ID] @@ -84,6 +86,6 @@ bq rm -f cpp_samples.singers bq rm -f cpp_samples ``` -[API overview]: https://cloud.google.com/bigquery/docs/write-api -[BigQuery CLI tool]: https://cloud.google.com/bigquery/docs/bq-command-line-tool -[install-sdk]: https://cloud.google.com/sdk/docs/install-sdk \ No newline at end of file +[api overview]: https://cloud.google.com/bigquery/docs/write-api +[bigquery cli tool]: https://cloud.google.com/bigquery/docs/bq-command-line-tool +[install-sdk]: https://cloud.google.com/sdk/docs/install-sdk diff --git a/ci/builds/setup-bazel.sh b/ci/builds/setup-bazel.sh index e796e47..8599d09 100755 --- a/ci/builds/setup-bazel.sh +++ b/ci/builds/setup-bazel.sh @@ -17,28 +17,28 @@ set -euo pipefail args=( - "--test_output=errors" - "--verbose_failures=true" - "--keep_going" - "--experimental_convenience_symlinks=ignore" - "--cache_test_results=auto" + "--test_output=errors" + "--verbose_failures=true" + "--keep_going" + "--experimental_convenience_symlinks=ignore" + "--cache_test_results=auto" ) if [[ -n "${BAZEL_REMOTE_CACHE:-}" ]]; then - args+=("--remote_cache=${BAZEL_REMOTE_CACHE}") - args+=("--google_default_credentials") - # See https://docs.bazel.build/versions/main/remote-caching.html#known-issues - # and https://github.com/bazelbuild/bazel/issues/3360 - args+=("--experimental_guard_against_concurrent_changes") + args+=("--remote_cache=${BAZEL_REMOTE_CACHE}") + args+=("--google_default_credentials") + # See https://docs.bazel.build/versions/main/remote-caching.html#known-issues + # and https://github.com/bazelbuild/bazel/issues/3360 + 
args+=("--experimental_guard_against_concurrent_changes") fi # Make some attempts to download dependencies. This is a common source of # flakes, and worth doing for less spurious failures. If they all fail we just # let the `bazel build` invocation below try its luck. for i in 1 2 3; do - if env -C /workspace/setup bazel fetch ...; then - break - fi - sleep 60 + if env -C /workspace/setup bazel fetch ...; then + break + fi + sleep 60 done env -C /workspace/setup bazel build ... diff --git a/ci/builds/setup-conda.sh b/ci/builds/setup-conda.sh index 2b45464..4202f22 100755 --- a/ci/builds/setup-conda.sh +++ b/ci/builds/setup-conda.sh @@ -22,5 +22,5 @@ conda config --set channel_priority strict conda install -y -c conda-forge cmake ninja cxx-compiler google-cloud-cpp libgoogle-cloud # [END cpp_setup_conda_install] -cmake -G Ninja -S /workspace/setup -B /var/tmp/build/setup-conda +cmake -G Ninja -S /workspace/setup -B /var/tmp/build/setup-conda cmake --build /var/tmp/build/setup-conda diff --git a/ci/builds/setup-vcpkg.sh b/ci/builds/setup-vcpkg.sh index 81b4403..8e34972 100755 --- a/ci/builds/setup-vcpkg.sh +++ b/ci/builds/setup-vcpkg.sh @@ -17,5 +17,5 @@ set -euo pipefail cmake -S /workspace/setup -B /var/tmp/build/setup-vcpkg \ - -DCMAKE_TOOLCHAIN_FILE=/usr/local/vcpkg/scripts/buildsystems/vcpkg.cmake + -DCMAKE_TOOLCHAIN_FILE=/usr/local/vcpkg/scripts/buildsystems/vcpkg.cmake cmake --build /var/tmp/build/setup-vcpkg diff --git a/ci/cloudbuild-setup-bazel.yaml b/ci/cloudbuild-setup-bazel.yaml index da52710..2ca8df5 100644 --- a/ci/cloudbuild-setup-bazel.yaml +++ b/ci/cloudbuild-setup-bazel.yaml @@ -37,7 +37,7 @@ steps: env: [ 'BAZEL_REMOTE_CACHE=https://storage.googleapis.com/${_CACHE_BUCKET}/cpp-samples/setup-bazel', ] - args: [ '/workspace/ci/builds/setup-bazel.sh' ] + args: [ '/workspace/ci/builds/setup-bazel.sh' ] # Remove the images created by this build. 
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk' diff --git a/ci/cloudbuild-setup-vcpkg.yaml b/ci/cloudbuild-setup-vcpkg.yaml index c2f034a..4598bac 100644 --- a/ci/cloudbuild-setup-vcpkg.yaml +++ b/ci/cloudbuild-setup-vcpkg.yaml @@ -41,7 +41,7 @@ steps: ] - name: 'gcr.io/${PROJECT_ID}/cpp-samples/ci/devtools:${BUILD_ID}' - args: [ '/workspace/ci/builds/setup-vcpkg.sh' ] + args: [ '/workspace/ci/builds/setup-vcpkg.sh' ] # Remove the images created by this build. - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk' diff --git a/ci/conda.Dockerfile b/ci/conda.Dockerfile index 3ee66c3..4a3101a 100644 --- a/ci/conda.Dockerfile +++ b/ci/conda.Dockerfile @@ -15,4 +15,4 @@ FROM ubuntu:22.04 ENV DEBIAN_FRONTEND=noninteractive -RUN apt update && apt install -y bzip2 curl python3 +RUN apt update && apt install -y bzip2 curl python3 diff --git a/cloud-run-hello-world/README.md b/cloud-run-hello-world/README.md index f63cb01..6aa089f 100644 --- a/cloud-run-hello-world/README.md +++ b/cloud-run-hello-world/README.md @@ -21,8 +21,8 @@ export GOOGLE_CLOUD_PROJECT=... ``` This script will enable the necessary APIs, build a Docker image using Cloud -Build, create a service account for the Cloud Run deployment, and then create -a Cloud Run deployment using the Docker image referenced earlier. +Build, create a service account for the Cloud Run deployment, and then create a +Cloud Run deployment using the Docker image referenced earlier. 
```bash cd google/cloud/examples/cloud_run_hello diff --git a/cloud-run-hello-world/bootstrap-cloud-run-hello.sh b/cloud-run-hello-world/bootstrap-cloud-run-hello.sh index fe30357..f955171 100755 --- a/cloud-run-hello-world/bootstrap-cloud-run-hello.sh +++ b/cloud-run-hello-world/bootstrap-cloud-run-hello.sh @@ -16,8 +16,8 @@ set -eu if [[ -z "${GOOGLE_CLOUD_PROJECT:-}" ]]; then - echo "You must set GOOGLE_CLOUD_PROJECT to the project id hosting Cloud Run C++ Hello World" - exit 1 + echo "You must set GOOGLE_CLOUD_PROJECT to the project id hosting Cloud Run C++ Hello World" + exit 1 fi readonly GOOGLE_CLOUD_PROJECT="${GOOGLE_CLOUD_PROJECT:-}" @@ -25,36 +25,36 @@ readonly GOOGLE_CLOUD_REGION="${REGION:-us-central1}" # Enable (if they are not enabled already) the services will we will need gcloud services enable cloudbuild.googleapis.com \ - "--project=${GOOGLE_CLOUD_PROJECT}" + "--project=${GOOGLE_CLOUD_PROJECT}" gcloud services enable containerregistry.googleapis.com \ - "--project=${GOOGLE_CLOUD_PROJECT}" + "--project=${GOOGLE_CLOUD_PROJECT}" gcloud services enable run.googleapis.com \ - "--project=${GOOGLE_CLOUD_PROJECT}" + "--project=${GOOGLE_CLOUD_PROJECT}" # Build the Docker Images gcloud builds submit \ - "--project=${GOOGLE_CLOUD_PROJECT}" \ - "--config=cloudbuild.yaml" + "--project=${GOOGLE_CLOUD_PROJECT}" \ + "--config=cloudbuild.yaml" # Create a service account that will update the index readonly SA_ID="cloud-run-hello" readonly SA_NAME="${SA_ID}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com" if gcloud iam service-accounts describe "${SA_NAME}" \ - "--project=${GOOGLE_CLOUD_PROJECT}" >/dev/null 2>&1; then - echo "The ${SA_ID} service account already exists" + "--project=${GOOGLE_CLOUD_PROJECT}" >/dev/null 2>&1; then + echo "The ${SA_ID} service account already exists" else - gcloud iam service-accounts create "${SA_ID}" \ - "--project=${GOOGLE_CLOUD_PROJECT}" \ - --description="C++ Hello World for Cloud Run" + gcloud iam service-accounts create 
"${SA_ID}" \ + "--project=${GOOGLE_CLOUD_PROJECT}" \ + --description="C++ Hello World for Cloud Run" fi # Create the Cloud Run deployment to update the index gcloud run deploy cloud-run-hello \ - "--project=${GOOGLE_CLOUD_PROJECT}" \ - "--service-account=${SA_NAME}" \ - "--image=gcr.io/${GOOGLE_CLOUD_PROJECT}/cloud-run-hello:latest" \ - "--region=${GOOGLE_CLOUD_REGION}" \ - "--platform=managed" \ - "--no-allow-unauthenticated" + "--project=${GOOGLE_CLOUD_PROJECT}" \ + "--service-account=${SA_NAME}" \ + "--image=gcr.io/${GOOGLE_CLOUD_PROJECT}/cloud-run-hello:latest" \ + "--region=${GOOGLE_CLOUD_REGION}" \ + "--platform=managed" \ + "--no-allow-unauthenticated" exit 0 diff --git a/gcs-fast-transfers/README.md b/gcs-fast-transfers/README.md index e928c2f..d49f17d 100644 --- a/gcs-fast-transfers/README.md +++ b/gcs-fast-transfers/README.md @@ -2,16 +2,19 @@ ## Status -This software is offered on an _"AS IS", EXPERIMENTAL_ basis, and only guaranteed to demonstrate concepts -- NOT to act -as production data transfer software. Any and all usage of it is at your sole discretion. Any costs or damages resulting -from its use are the sole responsibility of the user. You are advised to read and understand all source code in this -software before using it for any reason. +This software is offered on an _"AS IS", EXPERIMENTAL_ basis, and only +guaranteed to demonstrate concepts -- NOT to act as production data transfer +software. Any and all usage of it is at your sole discretion. Any costs or +damages resulting from its use are the sole responsibility of the user. You are +advised to read and understand all source code in this software before using it +for any reason. ---- +______________________________________________________________________ ## Compiling -This project uses `vcpkg` to install its dependencies. Clone `vcpkg` in your `$HOME`: +This project uses `vcpkg` to install its dependencies. 
Clone `vcpkg` in your +`$HOME`: ```shell git clone -C $HOME https://github.com/microsoft/vcpkg.git @@ -23,8 +26,8 @@ Install the typical development tools, on Ubuntu you would use: apt update && apt install -y build-essential cmake git ninja-build pkg-config g++ curl tar zip unzip ``` -In this directory compile the dependencies and the code, this can take as long as an hour, depending on the performance -of your workstation: +In this directory compile the dependencies and the code, this can take as long +as an hour, depending on the performance of your workstation: ```shell cd cpp-samples/gcs-parallel-download @@ -37,18 +40,20 @@ The program will be in `.build/download`. ## Downloading objects -The program receives the bucket name, object name, and destination file as parameter in the command-line, for example: +The program receives the bucket name, object name, and destination file as +parameter in the command-line, for example: ```shell .build/download my-bucket gcs-does-not-have-folders/my-large-object.bin destination.bin ``` -Will download an object named `gcs-does-not-have-folders/my-large-file.bin` in bucket `my-bucket` to the destination -file `destination.bin`. +Will download an object named `gcs-does-not-have-folders/my-large-file.bin` in +bucket `my-bucket` to the destination file `destination.bin`. -The program uses approximately 2 threads per core (or vCPU) to download an object. To change the number of threads use -the `--thread-count` option. For small objects, the program may use fewer threads, you can tune this behavior by setting -the `--minimum-slice-size` to a smaller number. +The program uses approximately 2 threads per core (or vCPU) to download an +object. To change the number of threads use the `--thread-count` option. For +small objects, the program may use fewer threads, you can tune this behavior by +setting the `--minimum-slice-size` to a smaller number. 
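The tuning options described above can be combined in a single invocation. A hedged sketch (the bucket, object, and flag values are placeholders, and the exact flag ordering accepted by the tool may differ in your build):

```shell
# Hypothetical invocation of the download tool with the tuning flags
# described above. "my-bucket" and the object name are placeholders.
# Larger --thread-count increases parallelism; a smaller
# --minimum-slice-size lets small objects use more threads.
.build/download --thread-count=16 --minimum-slice-size=1048576 \
    my-bucket gcs-does-not-have-folders/my-large-object.bin destination.bin
```
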
## Usage diff --git a/getting-started/README.md b/getting-started/README.md index 89404ef..a89533d 100644 --- a/getting-started/README.md +++ b/getting-started/README.md @@ -2,60 +2,42 @@ ## Motivation -A typical use of C++ in Google Cloud is to perform parallel computations or -data analysis. Once completed, the results of this analysis are stored in some -kind of database. In this guide we will build such an application, we will use +A typical use of C++ in Google Cloud is to perform parallel computations or data +analysis. Once completed, the results of this analysis are stored in some kind +of database. In this guide we will build such an application, we will use "scanning" [GCS] as a simplified example of "analyzing" some data, [Cloud Spanner] as the database to store the results, and and deploy the application to [Cloud Run], a managed platform to deploy containerized applications. - -[Cloud Build]: https://cloud.google.com/build -[Cloud Run]: https://cloud.google.com/run -[Cloud Storage]: https://cloud.google.com/storage -[Cloud Cloud SDK]: https://cloud.google.com/sdk -[Cloud Shell]: https://cloud.google.com/shell -[GCS]: https://cloud.google.com/storage -[Cloud Spanner]: https://cloud.google.com/spanner -[Container Registry]: https://cloud.google.com/container-registry -[Pricing Calculator]: https://cloud.google.com/products/calculator -[cloud-run-quickstarts]: https://cloud.google.com/run/docs/quickstarts -[gcp-quickstarts]: https://cloud.google.com/resource-manager/docs/creating-managing-projects -[buildpacks]: https://buildpacks.io -[docker]: https://docker.com/ -[docker-install]: https://store.docker.com/search?type=edition&offering=community -[sudoless docker]: https://docs.docker.com/engine/install/linux-postinstall/ -[pack-install]: https://buildpacks.io/docs/install-pack/ - ## Overview Google Cloud Storage (GCS) buckets can contain thousands, millions, and even -billions of objects. 
GCS can quickly find an object given its name, or list +billions of objects. GCS can quickly find an object given its name, or list objects with names in a given range, but some applications need more advanced lookups. For example, one may be interested in finding all the objects within a certain size, or with a given object type. In this guide, we will create and deploy an application to scan all the objects in a bucket, and store the full metadata information of each object in a -[Cloud Spanner] instance. Once the information is in a Cloud Spanner table, -one can use normal SQL statements to search for objects. +[Cloud Spanner] instance. Once the information is in a Cloud Spanner table, one +can use normal SQL statements to search for objects. The basic structure of this application is shown below. We will create a -*deployment* that *scans* the object metadata in Cloud Storage. To schedule -work for this deployment we will use Cloud Pub/Sub as a *job queue*. Initially -the user posts an indexing request to Cloud Pub/Sub, asking to index all the -objects with a given "prefix" (often thought of a folder) in a GCS bucket. If a -request fails or times out, Cloud Pub/Sub will automatically resend it to a new +*deployment* that *scans* the object metadata in Cloud Storage. To schedule work +for this deployment we will use Cloud Pub/Sub as a *job queue*. Initially the +user posts an indexing request to Cloud Pub/Sub, asking to index all the objects +with a given "prefix" (often thought of a folder) in a GCS bucket. If a request +fails or times out, Cloud Pub/Sub will automatically resend it to a new instance. If the work can be broken down by breaking the folder into smaller subfolders the indexing job will do so. It will simply post the request to index the -subfolder to itself (though it may be handled by a different instance as the -job scales up). As the number of these requests grows, Cloud Run will -automatically scale up the indexing deployment. 
We do not need to worry about -scaling up the job, or scaling it down at the end. In fact, Cloud Run can -"scale down to zero", so we do not even need to worry about shutting it down. +subfolder to itself (though it may be handled by a different instance as the job +scales up). As the number of these requests grows, Cloud Run will automatically +scale up the indexing deployment. We do not need to worry about scaling up the +job, or scaling it down at the end. In fact, Cloud Run can "scale down to zero", +so we do not even need to worry about shutting it down. ![Application Diagram](assets/getting-started-cpp.png) @@ -65,12 +47,12 @@ This example assumes that you have an existing GCP (Google Cloud Platform) project. The project must have billing enabled, as some of the services used in this example require it. If needed, consult: -* the [GCP quickstarts][gcp-quickstarts] to setup a GCP project -* the [cloud run quickstarts][cloud-run-quickstarts] to setup Cloud Run in your +- the [GCP quickstarts][gcp-quickstarts] to setup a GCP project +- the [cloud run quickstarts][cloud-run-quickstarts] to setup Cloud Run in your project -Use your workstation, a GCE instance, or the [Cloud Shell] to get a -command-line prompt. If needed, login to GCP using: +Use your workstation, a GCE instance, or the [Cloud Shell] to get a command-line +prompt. If needed, login to GCP using: ```sh gcloud auth login @@ -86,15 +68,15 @@ export GOOGLE_CLOUD_PROJECT=[PROJECT ID] > :warning: this guide uses Cloud Spanner, this service is billed by the hour > **even if you stop using it**. The charges can reach the **hundreds** or > **thousands** of dollars per month if you configure a large Cloud Spanner -> instance. Consult the [Pricing Calculator] for details. Please remember to +> instance. Consult the [Pricing Calculator] for details. Please remember to > delete any Cloud Spanner resources once you no longer need them. 
### Configure the Google Cloud CLI to use your project -We will issue a number of commands using the [Google Cloud SDK], a command-line -tool to interact with Google Cloud services. Specifying the project (via the -`--project=$GOOGLE_CLOUD_PROJECT` flag) on each invocation of this tool quickly -becomes tedious. We start by configuring the default project: +We will issue a number of commands using the \[Google Cloud SDK\], a +command-line tool to interact with Google Cloud services. Specifying the project +(via the `--project=$GOOGLE_CLOUD_PROJECT` flag) on each invocation of this tool +quickly becomes tedious. We start by configuring the default project: ```sh gcloud config set project $GOOGLE_CLOUD_PROJECT @@ -103,8 +85,8 @@ gcloud config set project $GOOGLE_CLOUD_PROJECT ### Make sure the necessary services are enabled -Some services are not enabled by default when you create a Google Cloud -Project. We enable all the services we will need in this guide using: +Some services are not enabled by default when you create a Google Cloud Project. +We enable all the services we will need in this guide using: ```sh gcloud services enable run.googleapis.com @@ -138,7 +120,7 @@ cd cpp-samples/getting-started # Output: none ``` -Compile the code into a Docker image. Since we are only planning to build this +Compile the code into a Docker image. Since we are only planning to build this example once, we will use [Cloud Build]. Using [Cloud Build] is simpler, but it does not create a cache of the intermediate build artifacts. Read about [buildpacks] and the pack tool [install guide][pack-install] to run your builds @@ -165,9 +147,9 @@ gcloud builds submit \ ### Create a Cloud Spanner Instance to host your data -As mentioned above, this guide uses [Cloud Spanner] to store the data. We -create the smallest possible instance. If needed we will scale up the instance, -but this is economical and enough for running small jobs. 
+As mentioned above, this guide uses [Cloud Spanner] to store the data. We create +the smallest possible instance. If needed we will scale up the instance, but +this is economical and enough for running small jobs. > :warning: Creating the Cloud Spanner instance incurs immediate billing costs, > even if the instance is not used. @@ -236,8 +218,8 @@ gcloud builds list --ongoing # Output: the list of running jobs ``` -If your build has completed the list will be empty. If you need to wait for -this build to complete (it should take about 15 minutes) use: +If your build has completed the list will be empty. If you need to wait for this +build to complete (it should take about 15 minutes) use: ```sh gcloud builds log --stream $(gcloud builds list --ongoing --format="value(id)") @@ -248,8 +230,8 @@ gcloud builds log --stream $(gcloud builds list --ongoing --format="value(id)") > :warning: To continue, you must wait until the [Cloud Build] build completed. -Once the image is uploaded, we can create a Cloud Run deployment to run it. -This starts up an instance of the job. Cloud Run will scale this up or down as +Once the image is uploaded, we can create a Cloud Run deployment to run it. This +starts up an instance of the job. Cloud Run will scale this up or down as needed: ```sh @@ -360,13 +342,13 @@ google-chrome https://pantheon.corp.google.com/run/detail/us-central1/index-gcs- ## Next Steps -* Automatically update the index as the [bucket changes](update/README.md). -* Learn about how to deploy similar code to [GKE](gke/README.md) +- Automatically update the index as the [bucket changes](update/README.md). +- Learn about how to deploy similar code to [GKE](gke/README.md) ## Cleanup -> :warning: Do not forget to cleanup your billable resources after going -> through this "Getting Started" guide. +> :warning: Do not forget to cleanup your billable resources after going through +> this "Getting Started" guide. 
### Remove the Cloud Spanner Instance @@ -410,3 +392,15 @@ gcloud container images delete gcr.io/$GOOGLE_CLOUD_PROJECT/getting-started-cpp/ # Output: Deleted [gcr.io/$GOOGLE_CLOUD_PROJECT/getting-started-cpp/index-gcs-prefix:latest] # Output: Deleted [gcr.io/$GOOGLE_CLOUD_PROJECT/getting-started-cpp/index-gcs-prefix@sha256:....] ``` + +[buildpacks]: https://buildpacks.io +[cloud build]: https://cloud.google.com/build +[cloud run]: https://cloud.google.com/run +[cloud shell]: https://cloud.google.com/shell +[cloud spanner]: https://cloud.google.com/spanner +[cloud-run-quickstarts]: https://cloud.google.com/run/docs/quickstarts +[container registry]: https://cloud.google.com/container-registry +[gcp-quickstarts]: https://cloud.google.com/resource-manager/docs/creating-managing-projects +[gcs]: https://cloud.google.com/storage +[pack-install]: https://buildpacks.io/docs/install-pack/ +[pricing calculator]: https://cloud.google.com/products/calculator diff --git a/getting-started/gke/README.md b/getting-started/gke/README.md index 22eb4c6..945b04a 100644 --- a/getting-started/gke/README.md +++ b/getting-started/gke/README.md @@ -1,68 +1,46 @@ # Getting Started with GKE and C++ -This guide builds upon the general [Getting Started with C++] guide. -It deploys the GCS indexing application to [GKE] (Google Kubernetes Engine) -instead of [Cloud Run], taking advantage of the long-running servers in -GKE to improve throughput. +This guide builds upon the general [Getting Started with C++] guide. It deploys +the GCS indexing application to [GKE] (Google Kubernetes Engine) instead of +[Cloud Run], taking advantage of the long-running servers in GKE to improve +throughput. -The steps in this guide are self-contained. It is not necessary to go through -the [Getting Started with C++] guide to go through these steps. It may be -easier to understand the motivation and the main components if you do so. 
-Note that some commands below may create resources (such as the [Cloud Spanner]
-instance and database) that are already created in the previous guide.
+The steps in this guide are self-contained. It is not necessary to go through
+the [Getting Started with C++] guide to go through these steps. It may be easier
+to understand the motivation and the main components if you do so. Note that
+some commands below may create resources (such as the [Cloud Spanner] instance
+and database) that are already created in the previous guide.

## Motivation

A common technique to improve throughput in [Cloud Spanner] is to aggregate
-multiple changes into a single transaction, minimizing the synchronization
-and networking overheads. However, applications deployed to Cloud Run
-cannot assume they will remain running after they respond to a request. This
-makes it difficult to aggregate work from multiple [Pub/Sub][Cloud Pub/Sub]
-messages.
+multiple changes into a single transaction, minimizing the synchronization and
+networking overheads. However, applications deployed to Cloud Run cannot assume
+they will remain running after they respond to a request. This makes it
+difficult to aggregate work from multiple [Pub/Sub][cloud pub/sub] messages.

In this guide we will modify the application to:

-* Run in GKE, where applications are long-lived and can assume they remain
+- Run in GKE, where applications are long-lived and can assume they remain
  active after handling a message.
-* Connect to Cloud Pub/Sub using [pull subscriptions], which have lower
+- Connect to Cloud Pub/Sub using [pull subscriptions](https://cloud.google.com/pubsub/docs/pull), which have lower
  overhead and implement a more fine-grained flow control mechanism.
-* Use background threads to aggregate the results from multiple Cloud Pub/Sub
+- Use background threads to aggregate the results from multiple Cloud Pub/Sub
  messages into a single Cloud Spanner transaction.
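The pull-style delivery mentioned above can be previewed from the command line. This is a sketch for illustration only — the worker itself consumes messages through the C++ Pub/Sub client library, and the `gke-gcs-indexing` subscription is only created in a later step of this guide:

```sh
# Illustration only: pull up to 5 messages from the indexing subscription and
# acknowledge them immediately. The C++ worker performs the same kind of pull
# through the Cloud Pub/Sub client library, with explicit flow control.
gcloud pubsub subscriptions pull gke-gcs-indexing \
    --limit=5 \
    --auto-ack
```

With pull delivery the subscriber decides when and how many messages to fetch, which is what makes the fine-grained flow control in the bullet list possible.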
-[Getting Started with C++]: ../README.md -[Cloud Build]: https://cloud.google.com/build -[Cloud Monitoring]: https://cloud.google.com/monitoring -[Cloud Run]: https://cloud.google.com/run -[GKE]: https://cloud.google.com/kubernetes-engine -[Cloud Storage]: https://cloud.google.com/storage -[Cloud Cloud SDK]: https://cloud.google.com/sdk -[Cloud Shell]: https://cloud.google.com/shell -[GCS]: https://cloud.google.com/storage -[Cloud Spanner]: https://cloud.google.com/spanner -[Cloud Pub/Sub]: https://cloud.google.com/pubsub -[Container Registry]: https://cloud.google.com/container-registry -[Pricing Calculator]: https://cloud.google.com/products/calculator -[gke-quickstart]: https://cloud.google.com/kubernetes-engine/docs/quickstart -[gcp-quickstarts]: https://cloud.google.com/resource-manager/docs/creating-managing-projects -[buildpacks]: https://buildpacks.io -[docker]: https://docker.com/ -[docker-install]: https://store.docker.com/search?type=edition&offering=community -[sudoless docker]: https://docs.docker.com/engine/install/linux-postinstall/ -[pack-install]: https://buildpacks.io/docs/install-pack/ - ## Overview -At a high-level, our plan is to replace "Cloud Run" with "Kubernetes Engine" in the -[Getting Started with C++] application: +At a high-level, our plan is to replace "Cloud Run" with "Kubernetes Engine" in +the [Getting Started with C++] application: ![Application Diagram](../assets/getting-started-gke.png) For completeness, the following instructions duplicate some of the steps in the -previous guide. We will need to issue a number of commands to create the -GKE cluster, the Cloud Pub/Sub topics and subscriptions, as well as the -Cloud Spanner instance and database. With this application we will need to -create a service account (sometimes called "robot" accounts) to run the -application, and grant this service account the necessary permissions. +previous guide. 
We will need to issue a number of commands to create the GKE
+cluster, the Cloud Pub/Sub topics and subscriptions, as well as the Cloud
+Spanner instance and database. With this application we will need to create a
+service account (sometimes called "robot" accounts) to run the application, and
+grant this service account the necessary permissions.

## Prerequisites

@@ -70,18 +48,18 @@ This example assumes that you have an existing GCP (Google Cloud Platform)
project. The project must have billing enabled, as some of the services used in
this example require it. If needed, consult:

-* the [GCP quickstarts][gcp-quickstarts] to setup a GCP project
-* the [GKE quickstart][cloud-gke-quickstart] to setup GKE in your project
+- the [GCP quickstarts][gcp-quickstarts] to set up a GCP project
+- the [GKE quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) to set up GKE in your project

-Use your workstation, a GCE instance, or the [Cloud Shell] to get a
-command-line prompt. If needed, login to GCP using:
+Use your workstation, a GCE instance, or the [Cloud Shell] to get a command-line
+prompt. If needed, log in to GCP using:

```sh
gcloud auth login
```

-Throughout the example we will use `GOOGLE_CLOUD_PROJECT` as an
-environment variable containing the name of the project.
+Throughout the example we will use `GOOGLE_CLOUD_PROJECT` as an environment
+variable containing the name of the project.

```sh
export GOOGLE_CLOUD_PROJECT=[PROJECT ID]
@@ -96,8 +74,8 @@ export GOOGLE_CLOUD_PROJECT=[PROJECT ID]

### Configure the Google Cloud CLI to use your project

-We will issue a number of commands using the [Google Cloud SDK], a command-line
-tool to interact with Google Cloud services. Adding the
+We will issue a number of commands using the [Google Cloud SDK](https://cloud.google.com/sdk), a
+command-line tool to interact with Google Cloud services.
Adding the `--project=$GOOGLE_CLOUD_PROJECT` to each invocation of this tool quickly becomes tedious, so we start by configuring the default project: @@ -108,8 +86,8 @@ gcloud config set project $GOOGLE_CLOUD_PROJECT ### Make sure the necessary services are enabled -Some services are not enabled by default when you create a Google Cloud -Project, so we start by enabling all the services we will need. +Some services are not enabled by default when you create a Google Cloud Project, +so we start by enabling all the services we will need. ```sh gcloud services enable cloudbuild.googleapis.com @@ -156,9 +134,9 @@ gcloud builds submit \ ### Create a Cloud Spanner Instance to host your data -As mentioned above, this guide uses [Cloud Spanner] to store the data. We -create the smallest possible instance. If needed we will scale up the instance, -but this is economical and enough for running small jobs. +As mentioned above, this guide uses [Cloud Spanner] to store the data. We create +the smallest possible instance. If needed we will scale up the instance, but +this is economical and enough for running small jobs. > :warning: Creating the Cloud Spanner instance incurs immediate billing costs, > even if the instance is not used. @@ -174,10 +152,10 @@ gcloud beta spanner instances create getting-started-cpp \ ### Create the Cloud Spanner Database and Table for your data A Cloud Spanner instance is just the allocation of compute resources for your -databases. Think of them as a virtual set of database servers dedicated to -your databases. Initially these servers have no databases or tables associated -with the resources. We need to create a database and table that will host the -data for this demo: +databases. Think of them as a virtual set of database servers dedicated to your +databases. Initially these servers have no databases or tables associated with +the resources. 
We need to create a database and table that will host the data +for this demo: ```sh gcloud spanner databases create gcs-index \ @@ -199,7 +177,8 @@ gcloud pubsub topics create gke-gcs-indexing ### Create a Cloud Pub/Sub Subscription for Indexing Requests Subscribers receive messages from Cloud Pub/Sub using a **subscription**. These -are named, persistent resources. We need to create one to configure the application. +are named, persistent resources. We need to create one to configure the +application. ```sh gcloud pubsub subscriptions create --topic=gke-gcs-indexing gke-gcs-indexing @@ -209,14 +188,12 @@ gcloud pubsub subscriptions create --topic=gke-gcs-indexing gke-gcs-indexing ### Create the GKE cluster We use preemptible nodes (the `--preemptible` flag) because they have lower -cost, and the application can safely restart. We also configure the cluster -to grow as needed. The maximum number of nodes (in this case `64`) should be -set based on your available quota or budget. Note that we enable +cost, and the application can safely restart. We also configure the cluster to +grow as needed. The maximum number of nodes (in this case `64`) should be set +based on your available quota or budget. Note that we enable [workload identity][workload-identity], the recommended way for GKE-based applications to consume services in Google Cloud. -[workload-identity]: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity - ```sh gcloud container clusters create cpp-samples \ --region="us-central1" \ @@ -244,11 +221,11 @@ gcloud container clusters --region="us-central1" get-credentials cpp-samples ### Create a GKE service account -GKE recommends configuring a different [workload-identity] for each -GKE workload, and using this identity to access GCP services. To follow -these guidelines we start by creating a service account in the Kubernetes -Cluster. 
Note that Kubernetes service accounts are distinct from GCP service -accounts, but can be mapped to them (as we do below). +GKE recommends configuring a different [workload-identity] for each GKE +workload, and using this identity to access GCP services. To follow these +guidelines we start by creating a service account in the Kubernetes Cluster. +Note that Kubernetes service accounts are distinct from GCP service accounts, +but can be mapped to them (as we do below). ```sh kubectl create serviceaccount worker @@ -286,8 +263,8 @@ gcloud builds list --ongoing # Output: the list of running jobs ``` -If your build has completed the list will be empty. If you need to wait for -this build to complete (it should take about 15 minutes) use: +If your build has completed the list will be empty. If you need to wait for this +build to complete (it should take about 15 minutes) use: ```sh gcloud builds log --stream $(gcloud builds list --ongoing --format="value(id)") @@ -296,8 +273,8 @@ gcloud builds log --stream $(gcloud builds list --ongoing --format="value(id)") ### Deploy the Programs to GKE -We can now create a job in GKE. GKE requires its configuration files to be -plain YAML, without variables or any other expansion. We use a small script to +We can now create a job in GKE. GKE requires its configuration files to be plain +YAML, without variables or any other expansion. We use a small script to generate this file: ```sh @@ -344,9 +321,8 @@ kubectl scale deployment/worker --replicas=128 GKE has detailed tutorials on how to use Cloud Monitoring metrics, such as the length of the work queue, to [autoscale a deployment][gke-autoscale-on-metrics]. -[gke-autoscale-on-metrics]: https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#pubsub - -We also need to scale up the Cloud Spanner instance. We use a `gcloud` command for this: +We also need to scale up the Cloud Spanner instance. 
We use a `gcloud` command +for this: ```sh gcloud beta spanner instances update getting-started-cpp --processing-units=3000 @@ -381,8 +357,8 @@ gcloud spanner databases execute-sql gcs-index --instance=getting-started-cpp \ ## Cleanup -> :warning: Do not forget to cleanup your billable resources after going -> through this "Getting Started" guide. +> :warning: Do not forget to cleanup your billable resources after going through +> this "Getting Started" guide. ### Remove the GKE cluster @@ -436,8 +412,8 @@ done ### Create a service account for the GKE workload -The GKE workload will need a GCP service account to access GCP resources. Pick -a name and create the account: +The GKE workload will need a GCP service account to access GCP resources. Pick a +name and create the account: ```sh readonly SA_ID="gcs-index-worker-sa" @@ -497,3 +473,14 @@ kubectl annotate serviceaccount worker \ iam.gke.io/gcp-service-account=$SA_NAME # Output: serviceaccount/worker annotated ``` + +[cloud pub/sub]: https://cloud.google.com/pubsub +[cloud run]: https://cloud.google.com/run +[cloud shell]: https://cloud.google.com/shell +[cloud spanner]: https://cloud.google.com/spanner +[gcp-quickstarts]: https://cloud.google.com/resource-manager/docs/creating-managing-projects +[getting started with c++]: ../README.md +[gke]: https://cloud.google.com/kubernetes-engine +[gke-autoscale-on-metrics]: https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#pubsub +[pricing calculator]: https://cloud.google.com/products/calculator +[workload-identity]: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity diff --git a/getting-started/update/README.md b/getting-started/update/README.md index 71f9e21..c504413 100644 --- a/getting-started/update/README.md +++ b/getting-started/update/README.md @@ -1,65 +1,45 @@ # Getting Started with GCP and C++: background operations -This guide builds upon the general [Getting Started with C++] guide. 
-It automatically maintains the [GCS (Google Cloud Storage)][GCS] index -described in said guide using an application deployed to [Cloud Run]. +This guide builds upon the general [Getting Started with C++] guide. It +automatically maintains the [GCS (Google Cloud Storage)][gcs] index described in +said guide using an application deployed to [Cloud Run]. -The steps in this guide are self-contained. It is not necessary to go through -the [Getting Started with C++] guide to go through these steps. It may be -easier to understand the motivation and the main components if you do so. -Note that some commands below may create resources (such as the [Cloud Spanner] -instance and database) that are already created in the previous guide. +The steps in this guide are self-contained. It is not necessary to go through +the [Getting Started with C++] guide to go through these steps. It may be easier +to understand the motivation and the main components if you do so. Note that +some commands below may create resources (such as the [Cloud Spanner] instance +and database) that are already created in the previous guide. ## Motivation -In the [Getting Started with C++] guide we showed how to build an index for -GCS buckets. We built this index using a work queue to scan the contents of -these buckets. But what if the contents of the bucket change dynamically? -What if other applications insert new objects? Or delete them? Or update the -metadata for an existing objects? We would like to extend the example to -update the index as such changes take place. 
-[Getting Started with C++]: ../README.md
-[Cloud Build]: https://cloud.google.com/build
-[Cloud Run]: https://cloud.google.com/run
-[Cloud Storage]: https://cloud.google.com/storage
-[Cloud Cloud SDK]: https://cloud.google.com/sdk
-[Cloud Shell]: https://cloud.google.com/shell
-[GCS]: https://cloud.google.com/storage
-[Cloud Spanner]: https://cloud.google.com/spanner
-[Container Registry]: https://cloud.google.com/container-registry
-[Pricing Calculator]: https://cloud.google.com/products/calculator
-[cloud-run-quickstarts]: https://cloud.google.com/run/docs/quickstarts
-[gcp-quickstarts]: https://cloud.google.com/resource-manager/docs/creating-managing-projects
-[buildpacks]: https://buildpacks.io
-[docker]: https://docker.com/
-[docker-install]: https://store.docker.com/search?type=edition&offering=community
-[sudoless docker]: https://docs.docker.com/engine/install/linux-postinstall/
-[pack-install]: https://buildpacks.io/docs/install-pack/
+In the [Getting Started with C++] guide we showed how to build an index for GCS
+buckets. We built this index using a work queue to scan the contents of these
+buckets. But what if the contents of the bucket change dynamically? What if
+other applications insert new objects? Or delete them? Or update the metadata
+for an existing object? We would like to extend the example to update the index
+as such changes take place.

## Overview

-The basic structure of this application is shown below. We will configure one
-or more GCS buckets to send [Pub/Sub notifications] as objects change. A new
+The basic structure of this application is shown below. We will configure one or
+more GCS buckets to send [Pub/Sub notifications] as objects change. A new
application deployed to Cloud Run will receive these notifications, parse them
and update the index accordingly.
![Application Diagram](../assets/getting-started-cpp-update.png)

-[Pub/Sub notifications]: https://cloud.google.com/storage/docs/pubsub-notifications
-
## Prerequisites

This example assumes that you have an existing GCP (Google Cloud Platform)
project. The project must have billing enabled, as some of the services used in
this example require it. If needed, consult:

-* the [GCP quickstarts][gcp-quickstarts] to setup a GCP project
-* the [cloud run quickstarts][cloud-run-quickstarts] to setup Cloud Run in your
+- the [GCP quickstarts][gcp-quickstarts] to set up a GCP project
+- the [cloud run quickstarts][cloud-run-quickstarts] to set up Cloud Run in your
  project

-Use your workstation, a GCE instance, or the [Cloud Shell] to get a
-command-line prompt. If needed, login to GCP using:
+Use your workstation, a GCE instance, or the [Cloud Shell] to get a command-line
+prompt. If needed, log in to GCP using:

```sh
gcloud auth login
@@ -80,8 +60,8 @@ export GOOGLE_CLOUD_PROJECT=[PROJECT ID]

### Configure the Google Cloud CLI to use your project

-We will issue a number of commands using the [Google Cloud SDK], a command-line
-tool to interact with Google Cloud services. Adding the
+We will issue a number of commands using the [Google Cloud SDK](https://cloud.google.com/sdk), a
+command-line tool to interact with Google Cloud services. Adding the
`--project=$GOOGLE_CLOUD_PROJECT` to each invocation of this tool quickly
becomes tedious, so we start by configuring the default project:

@@ -92,8 +72,8 @@ gcloud config set project $GOOGLE_CLOUD_PROJECT

### Make sure the necessary services are enabled

-Some services are not enabled by default when you create a Google Cloud
-Project. We enable all the services we will need in this guide using:
+Some services are not enabled by default when you create a Google Cloud Project.
+We enable all the services we will need in this guide using: ```sh gcloud services enable cloudbuild.googleapis.com @@ -129,7 +109,7 @@ cd cpp-samples/getting-started/update # Output: none ``` -Compile the code into a Docker image. Since we are only planning to build this +Compile the code into a Docker image. Since we are only planning to build this example once, we will use [Cloud Build]. Using [Cloud Build] is simpler, but it does not create a cache of the intermediate build artifacts. Read about [buildpacks] and the pack tool [install guide][pack-install] to run your builds @@ -139,8 +119,8 @@ systems. To learn more about this, consult the buildpack documentation for [cache images](https://buildpacks.io/docs/app-developer-guide/using-cache-image/). You can continue with other steps while this build runs in the background. -Optionally, use the links in the output to follow the build process in your -web browser. +Optionally, use the links in the output to follow the build process in your web +browser. ```sh gcloud builds submit \ @@ -156,9 +136,9 @@ gcloud builds submit \ ### Create a Cloud Spanner Instance to host your data -As mentioned above, this guide uses [Cloud Spanner] to store the data. We -create the smallest possible instance. If needed we will scale up the -instance, but this is economical and enough for running small jobs. +As mentioned above, this guide uses [Cloud Spanner] to store the data. We create +the smallest possible instance. If needed we will scale up the instance, but +this is economical and enough for running small jobs. > :warning: Creating the Cloud Spanner instance incurs immediate billing costs, > even if the instance is not used. @@ -194,8 +174,8 @@ To use the application we need an existing bucket in your project: BUCKET_NAME=... 
# The name of an existing bucket in your project
```

-If you have no buckets in your project, use the [GCS guide] to select a name
-and then create the bucket:
+If you have no buckets in your project, use the [GCS guide] to select a name and
+then create the bucket:

```sh
gsutil mb gs://$BUCKET_NAME
@@ -215,8 +195,6 @@ gsutil notifications create \
Note that this will create the topic (if needed), and set the right IAM
permissions enabling GCS to publish on the topic.

-[GCS Guide]: https://cloud.google.com/storage/docs/creating-buckets
-
### Wait for the build to complete

Look at the status of your build using:

@@ -226,8 +204,8 @@ gcloud builds list --ongoing
# Output: the list of running jobs
```

-If your build has completed the list will be empty. If you need to wait for
-this build to complete (it should take about 15 minutes) use:
+If your build has completed the list will be empty. If you need to wait for this
+build to complete (it should take about 15 minutes) use:

```sh
gcloud builds log --stream $(gcloud builds list --ongoing --format="value(id)")
@@ -238,9 +216,9 @@ gcloud builds log --stream $(gcloud builds list --ongoing --format="value(id)")

> :warning: To continue, you must wait until the [Cloud Build] build completed.

-Once the image is uploaded, we can create a Cloud Run deployment to run it.
-This starts up an instance of the job. Cloud Run will scale this up or down as
-this needed:
+Once the image is uploaded, we can create a Cloud Run deployment to run it. This
+starts up an instance of the job. Cloud Run will scale this up or down as
+needed:

```sh
gcloud run deploy update-gcs-index \
@@ -311,13 +289,13 @@ gcloud spanner databases execute-sql gcs-index --instance=getting-started-cpp \
# Output: metadata for the 10 most recent objects named 'fox.txt'
```

-Use `gsutil` to create, update, and delete additional objects and run
-additional queries.
+Use `gsutil` to create, update, and delete additional objects and run additional
+queries.
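For instance, one quick end-to-end check is the following sketch (the `fox.txt` object name is reused from the query above; the `sleep` duration is a rough guess, since notification delivery timing varies):

```sh
# Create (or overwrite) an object; GCS publishes an OBJECT_FINALIZE
# notification, which the Cloud Run service turns into an index update.
echo "The quick brown fox jumps over the lazy dog" |
    gsutil cp - gs://$BUCKET_NAME/fox.txt

# Give the notification a few seconds to be processed, then delete the
# object; this time the service processes an OBJECT_DELETE notification.
sleep 10
gsutil rm gs://$BUCKET_NAME/fox.txt
```

Re-running the Cloud Spanner query after each step shows the index following the bucket.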
## Cleanup -> :warning: Do not forget to cleanup your billable resources after going -> through this "Getting Started" guide. +> :warning: Do not forget to cleanup your billable resources after going through +> this "Getting Started" guide. ### Remove the Cloud Spanner Instance @@ -369,3 +347,18 @@ gcloud container images delete gcr.io/$GOOGLE_CLOUD_PROJECT/getting-started-cpp/ gsutil notifications delete gs://$BUCKET_NAME # Output: none ``` + +[buildpacks]: https://buildpacks.io +[cloud build]: https://cloud.google.com/build +[cloud run]: https://cloud.google.com/run +[cloud shell]: https://cloud.google.com/shell +[cloud spanner]: https://cloud.google.com/spanner +[cloud-run-quickstarts]: https://cloud.google.com/run/docs/quickstarts +[container registry]: https://cloud.google.com/container-registry +[gcp-quickstarts]: https://cloud.google.com/resource-manager/docs/creating-managing-projects +[gcs]: https://cloud.google.com/storage +[gcs guide]: https://cloud.google.com/storage/docs/creating-buckets +[getting started with c++]: ../README.md +[pack-install]: https://buildpacks.io/docs/install-pack/ +[pricing calculator]: https://cloud.google.com/products/calculator +[pub/sub notifications]: https://cloud.google.com/storage/docs/pubsub-notifications diff --git a/iot/mqtt-ciotc/.dockerignore b/iot/mqtt-ciotc/.dockerignore index 0c59509..7ee3663 100644 --- a/iot/mqtt-ciotc/.dockerignore +++ b/iot/mqtt-ciotc/.dockerignore @@ -1,3 +1,2 @@ ci docker - diff --git a/iot/mqtt-ciotc/README.md b/iot/mqtt-ciotc/README.md index cb74ef1..6b27045 100644 --- a/iot/mqtt-ciotc/README.md +++ b/iot/mqtt-ciotc/README.md @@ -1,7 +1,5 @@ # Deprecation Notice -*

Google Cloud IoT Core will be retired as of August 16, 2023.
-
-* Hence, the samples in this directory are archived and are no longer maintained.
-
-* If you are customer with an assigned Google Cloud account team, contact your account team for more information.
+- Google Cloud IoT Core will be retired as of August 16, 2023.
+
+- Hence, the samples in this directory are archived and are no longer maintained.
+
+- If you are a customer with an assigned Google Cloud account team, contact your account team for more information.
diff --git a/iot/mqtt-ciotc/setup_device.sh b/iot/mqtt-ciotc/setup_device.sh index 47aa06a..66e2846 100755 --- a/iot/mqtt-ciotc/setup_device.sh +++ b/iot/mqtt-ciotc/setup_device.sh @@ -18,79 +18,80 @@ set -eu DEVICE_ID=my-device ARGUMENT_LIST=( - "registry-name" - "registry-region" - "device-id" - "telemetry-topic" + "registry-name" + "registry-region" + "device-id" + "telemetry-topic" ) -opts=$(getopt \ - --longoptions "$(printf "%s:," "${ARGUMENT_LIST[@]}")" \ - --name "$(basename "$0")" \ - --options "" \ - -- "$@" +opts=$( + getopt \ + --longoptions "$(printf "%s:," "${ARGUMENT_LIST[@]}")" \ + --name "$(basename "$0")" \ + --options "" \ + -- "$@" ) eval set -- "$opts" while [[ $# -gt 0 ]]; do - case "$1" in - --registry-name) - REGISTRY_NAME=$2 - shift 2 - ;; + case "$1" in + --registry-name) + REGISTRY_NAME=$2 + shift 2 + ;; - --registry-region) - REGISTRY_REGION=$2 - shift 2 - ;; + --registry-region) + REGISTRY_REGION=$2 + shift 2 + ;; - --device-id) - DEVICE_ID=$2 - shift 2 - ;; + --device-id) + DEVICE_ID=$2 + shift 2 + ;; - --telemetry-topic) - TELEMETRY_TOPIC=$2 - shift 2 - ;; + --telemetry-topic) + TELEMETRY_TOPIC=$2 + shift 2 + ;; - *) - shift 1 -# exit -1 - ;; - esac + *) + shift 1 + # exit -1 + ;; + esac done -if [ -z "${REGISTRY_NAME}" ] || [ -z "${REGISTRY_REGION}" ] \ - || [ -z "${DEVICE_ID}" ] || [ -z "${TELEMETRY_TOPIC}" ]; then - echo "Usage $0 --registry-name CLOUD_IOT_REGISTRY --registry-region CLOUD_IOT_REGION --device-id CLOUD_IOT_DEVICE_ID --telemetry-topic TELEMETRY_TOPIC" - exit -1 +if [ -z "${REGISTRY_NAME}" ] || [ -z "${REGISTRY_REGION}" ] || + [ -z "${DEVICE_ID}" ] || [ -z "${TELEMETRY_TOPIC}" ]; then + echo "Usage $0 --registry-name CLOUD_IOT_REGISTRY --registry-region CLOUD_IOT_REGION --device-id CLOUD_IOT_DEVICE_ID --telemetry-topic TELEMETRY_TOPIC" + exit -1 fi if [ ! 
-f rsa_private.pem ]; then - openssl req -x509 -newkey rsa:2048 -keyout rsa_private.pem -nodes -out rsa_cert.pem -subj "/CN=unused" -fi + openssl req -x509 -newkey rsa:2048 -keyout rsa_private.pem -nodes -out rsa_cert.pem -subj "/CN=unused" +fi HAS_REGISTRY=$(gcloud iot registries list \ - --region=${REGISTRY_REGION} \ - --filter "id = ${REGISTRY_NAME}" \ - --format "csv[no-heading](id)" | grep -c ${REGISTRY_NAME} || true) + --region=${REGISTRY_REGION} \ + --filter "id = ${REGISTRY_NAME}" \ + --format "csv[no-heading](id)" | grep -c ${REGISTRY_NAME} || true) if [ $HAS_REGISTRY == "0" ]; then - gcloud iot registries create ${REGISTRY_NAME} \ - --region=${REGISTRY_REGION} \ - --enable-mqtt-config \ - --no-enable-http-config \ - --event-notification-config=topic=${TELEMETRY_TOPIC} + gcloud iot registries create ${REGISTRY_NAME} \ + --region=${REGISTRY_REGION} \ + --enable-mqtt-config \ + --no-enable-http-config \ + --event-notification-config=topic=${TELEMETRY_TOPIC} fi HAS_DEVICE=$(gcloud iot devices list \ - --registry=${REGISTRY_NAME} \ - --region=${REGISTRY_REGION} \ - --device-ids=${DEVICE_ID} \ - --format "csv[no-heading](id)" | grep -c ${DEVICE_ID} || true) + --registry=${REGISTRY_NAME} \ + --region=${REGISTRY_REGION} \ + --device-ids=${DEVICE_ID} \ + --format "csv[no-heading](id)" | grep -c ${DEVICE_ID} || true) if [ $HAS_DEVICE == "0" ]; then - gcloud iot devices create ${DEVICE_ID} \ - --region=${REGISTRY_REGION} \ - --registry=${REGISTRY_NAME} \ - --public-key=path=./rsa_cert.pem,type=rsa-x509-pem - + gcloud iot devices create ${DEVICE_ID} \ + --region=${REGISTRY_REGION} \ + --registry=${REGISTRY_NAME} \ + --public-key=path=./rsa_cert.pem,type=rsa-x509-pem + fi diff --git a/populate-bucket/README.md b/populate-bucket/README.md index c4303a1..658a6fe 100644 --- a/populate-bucket/README.md +++ b/populate-bucket/README.md @@ -2,28 +2,36 @@ ## Motivation -From time to time the Cloud C++ team needs to generate buckets with millions or hundreds of millions of 
objects to test -our libraries or applications. We often generate synthetic data for these tests. Like many C++ developers, we are -impatient, and we want our synthetic data to be generated as quickly as possible so we can start with the rest of our -work. This directory contains an example using C++, CPS (Google Cloud Pub/Sub), and GKE (Google Kubernetes Engine) to -populate a GCS (Google Cloud Storage) bucket with millions or hundreds of millions of objects. +From time to time the Cloud C++ team needs to generate buckets with millions or +hundreds of millions of objects to test our libraries or applications. We often +generate synthetic data for these tests. Like many C++ developers, we are +impatient, and we want our synthetic data to be generated as quickly as possible +so we can start with the rest of our work. This directory contains an example +using C++, CPS (Google Cloud Pub/Sub), and GKE (Google Kubernetes Engine) to +populate a GCS (Google Cloud Storage) bucket with millions or hundreds of +millions of objects. ## Overview -The basic idea is to break the work into a small number of work items, such as, "create 1,000 objects with this prefix". -We will use a command-line tool to publish these work items to a CPS topic, where they can be reliably delivered to any -number of workers that will execute the work items. We will use GKE to run the workers, as GKE can automatically scale -the cluster based on demand, and as it will restart the workers if needed after a failure. +The basic idea is to break the work into a small number of work items, such as, +"create 1,000 objects with this prefix". We will use a command-line tool to +publish these work items to a CPS topic, where they can be reliably delivered to +any number of workers that will execute the work items. We will use GKE to run +the workers, as GKE can automatically scale the cluster based on demand, and as +it will restart the workers if needed after a failure. 
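Because each work item fully determines the objects it creates, a worker can derive the object names deterministically. The following is a minimal shell sketch — the work-item shape (a name prefix plus an object count) is an assumption for illustration only; the real worker in this directory is a C++ program:

```shell
# Hypothetical work item: a name prefix plus an object count. Deriving the
# object names only from these two fields makes re-execution harmless: every
# run produces exactly the same names, so retried uploads simply overwrite.
object_names_for_item() {
  prefix="$1"
  count="$2"
  for i in $(seq 0 $((count - 1))); do
    printf '%s/object-%08d\n' "${prefix}" "${i}"
  done
}

object_names_for_item "item-000042" 3
# Output: item-000042/object-00000000
# Output: item-000042/object-00000001
# Output: item-000042/object-00000002
```

Re-running the function with the same arguments always yields the same list, which is what lets GKE restart a worker mid-item without corrupting the bucket.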
-Because CPS offers "at least once" semantics, and because the workers may be restarted by GKE, it is important to make
-these work items idempotent, that is, executing the work item times produces the same objects in GCS as executing the
+Because CPS offers "at least once" semantics, and because the workers may be
+restarted by GKE, it is important to make these work items idempotent, that is,
+executing the work item multiple times produces the same objects in GCS as executing the
work item once.

## Prerequisites

-This example assumes that you have an existing GCP (Google Cloud Platform) project. The project must have billing
-enabled, as some of the services used in this example require it. Throughout the example, we will use
-`GOOGLE_CLOUD_PROJECT` as an environment variable containing the name of the project.
+This example assumes that you have an existing GCP (Google Cloud Platform)
+project. The project must have billing enabled, as some of the services used in
+this example require it. Throughout the example, we will use
+`GOOGLE_CLOUD_PROJECT` as an environment variable containing the name of the
+project.

### Make sure the necessary services are enabled

@@ -57,12 +65,12 @@ readonly GOOGLE_CLOUD_REGION

### Create the GKE cluster

-We use preemptible nodes (the `--preemptible` flag) because they have lower cost, and the application can safely
-restart. We also configure the cluster to grow as needed, the maximum number of nodes (in this case `64`), should be
-set based on your available quota or budget. Note that we enable [workload identity][workload-identity], the recommended
-way for GKE-based applications to consume services in Google Cloud.
-
-[workload-identity]: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
+We use preemptible nodes (the `--preemptible` flag) because they have lower
+cost, and the application can safely restart. 
We also configure the cluster to
+grow as needed; the maximum number of nodes (in this case `64`) should be set
+based on your available quota or budget. Note that we enable
+[workload identity][workload-identity], the recommended way for GKE-based
+applications to consume services in Google Cloud.

```sh
gcloud container clusters create cpp-samples \
@@ -83,7 +91,8 @@ gcloud container clusters --region=${GOOGLE_CLOUD_REGION} --project=${GOOGLE_CLO

### Create a service account for the GKE workload

-The GKE workload will need a GCP service account to access GCP resources, pick a name and create the account:
+The GKE workload will need a GCP service account to access GCP resources; pick a
+name and create the account:

```sh
readonly SA_ID="populate-bucket-worker-sa"
@@ -110,7 +119,6 @@ gcloud projects add-iam-policy-binding "${GOOGLE_CLOUD_PROJECT}" \
       "--role=roles/storage.objectAdmin"
```

-
### Create a k8s namespace for the example resources

```sh
@@ -155,8 +163,8 @@ gcloud builds submit \
### Create the Cloud Pub/Sub topic and subscription

```sh
-gcloud pubsub topics create "--project=${GOOGLE_CLOUD_PROJECT}" populate-bucket 
-gcloud pubsub subscriptions create "--project=${GOOGLE_CLOUD_PROJECT}" --topic populate-bucket populate-bucket 
+gcloud pubsub topics create "--project=${GOOGLE_CLOUD_PROJECT}" populate-bucket
+gcloud pubsub subscriptions create "--project=${GOOGLE_CLOUD_PROJECT}" --topic populate-bucket populate-bucket
```

### Run the deployment with workers
@@ -183,3 +191,5 @@ gsutil mb -p ${GOOGLE_CLOUD_PROJECT} gs://${BUCKET_NAME}
    --object-count=1000000 \
    --task-size=100
```
+
+[workload-identity]: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
diff --git a/pubsub-open-telemetry/BUILD.bazel b/pubsub-open-telemetry/BUILD.bazel
index e7df967..b5fffc2 100644
--- a/pubsub-open-telemetry/BUILD.bazel
+++ b/pubsub-open-telemetry/BUILD.bazel
@@ -18,29 +18,29 @@ licenses(["notice"]) # Apache 2.0

cc_library(
    name = "parse_args",
-    hdrs = 
["parse_args.h"], srcs = ["parse_args.cc"],
+    hdrs = ["parse_args.h"],
    deps = [
-        "@boost//:program_options",
-        "@google_cloud_cpp//:pubsub",
-        "@google_cloud_cpp//:opentelemetry",
+        "@boost//:program_options",
+        "@google_cloud_cpp//:opentelemetry",
+        "@google_cloud_cpp//:pubsub",
    ],
)

cc_binary(
    name = "publisher",
    srcs = ["publisher.cc"],
-    deps = [ 
+    deps = [
+        ":parse_args",
        "@google_cloud_cpp//:opentelemetry",
        "@google_cloud_cpp//:pubsub",
-        ":parse_args",
    ],
)

cc_binary(
    name = "quickstart",
    srcs = ["quickstart.cc"],
-    deps = [ 
+    deps = [
        "@google_cloud_cpp//:opentelemetry",
        "@google_cloud_cpp//:pubsub",
    ],
diff --git a/pubsub-open-telemetry/README.md b/pubsub-open-telemetry/README.md
index fd688ae..0336911 100644
--- a/pubsub-open-telemetry/README.md
+++ b/pubsub-open-telemetry/README.md
@@ -2,16 +2,25 @@

## Background

-In v2.16, we GA'd [OpenTelemetry tracing](https://github.com/googleapis/google-cloud-cpp/releases/tag/v2.16.0). This provides basic instrumentation for all the google-cloud-cpp libraries.
+In v2.16, we GA'd
+[OpenTelemetry tracing](https://github.com/googleapis/google-cloud-cpp/releases/tag/v2.16.0).
+This provides basic instrumentation for all the google-cloud-cpp libraries.

-In v2.19 release[^1], we added instrumentation for the Google Cloud Pub/Sub C++ library on the Publish side. This example provides a basic tracing application that exports spans to Cloud Trace.
+In the v2.19 release[^1], we added instrumentation for the Google Cloud Pub/Sub
+C++ library on the Publish side. This example provides a basic tracing
+application that exports spans to Cloud Trace.

-[^1]: The [telemetry data](https://github.com/googleapis/google-cloud-cpp/blob/main/doc/public-api.md#telemetry-data) emitted by the google-cloud-cpp library does not follow any versioning guarantees and is subject to change without notice in later versions. 
+[^1]: The
+    [telemetry data](https://github.com/googleapis/google-cloud-cpp/blob/main/doc/public-api.md#telemetry-data)
+    emitted by the google-cloud-cpp library does not follow any versioning
+    guarantees and is subject to change without notice in later versions.

## Overview

### Quickstart
-The quickstart creates a tracing enabled Pub/Sub Publisher client that publishes 5 messages and sends the collected traces to Cloud Trace.
+
+The quickstart creates a tracing-enabled Pub/Sub Publisher client that publishes
+5 messages and sends the collected traces to Cloud Trace.

#### Example traces

@@ -23,9 +32,11 @@ For an overview of the Cloud Trace UI, see: [View traces overview].

### Publisher

-The publisher application lets the user configure a tracing enabled Pub/Sub Publisher client to see how different configuration settings change the produced telemetry data.
+The publisher application lets the user configure a tracing-enabled Pub/Sub
+Publisher client to see how different configuration settings change the produced
+telemetry data.

-#### Example traces 
+#### Example traces

To find the traces, navigate to the Cloud Trace UI.

@@ -39,34 +50,40 @@ To find the traces, navigate to the Cloud Trace UI.

## Prerequisites

-### 1. Create a project in the Google Cloud Platform Console 
-
+### 1. Create a project in the Google Cloud Platform Console
+
If you haven't already created a project, create one now.

-Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.
+Projects enable you to manage all Google Cloud Platform resources for your app,
+including deployment, access control, billing, and services.

1. Open the [Cloud Platform Console](https://console.cloud.google.com/).
-2. In the drop-down menu at the top, select Create a project.
-3. Give your project a name.
-4. Make a note of the project ID, which might be different from the project name. 
The project ID is used in commands and in configurations. +1. In the drop-down menu at the top, select Create a project. +1. Give your project a name. +1. Make a note of the project ID, which might be different from the project + name. The project ID is used in commands and in configurations. ### 2. Enable billing for your project -If you haven't already enabled billing for your -project, [enable billing now](https://console.cloud.google.com/project/_/settings). Enabling billing allows the -application to consume billable resources such as Pub/Sub API calls. -See [Cloud Platform Console Help](https://support.google.com/cloud/answer/6288653) for more information about billing -settings. +If you haven't already enabled billing for your project, +[enable billing now](https://console.cloud.google.com/project/_/settings). +Enabling billing allows the application to consume billable resources such as +Pub/Sub API calls. + +See +[Cloud Platform Console Help](https://support.google.com/cloud/answer/6288653) +for more information about billing settings. ### 3. Enable APIs for your project -[Click here](https://console.cloud.google.com/flows/enableapi?apiid=speech&showconfirmation=true) to visit Cloud -Platform Console and enable the Pub/Sub and Trace API via the UI. + +[Click here](https://console.cloud.google.com/flows/enableapi?apiid=speech&showconfirmation=true) +to visit Cloud Platform Console and enable the Pub/Sub and Trace API via the UI. Or use the CLI: ``` -gcloud services enable trace.googleapis.com -gcloud services enable pubsub.googleapis.com +gcloud services enable trace.googleapis.com +gcloud services enable pubsub.googleapis.com ``` ### 5. Create the Cloud Pub/Sub topic @@ -78,9 +95,13 @@ gcloud pubsub topics create "--project=${GOOGLE_CLOUD_PROJECT}" ${GOOGLE_CLOUD_T ``` ## Build and run using CMake and Vcpkg + ### 1. Install vcpkg -This project uses [`vcpkg`](https://github.com/microsoft/vcpkg) for dependency management. 
Clone the vcpkg repository
-to your preferred location. In these instructions we use`$HOME`: 
+
+This project uses [`vcpkg`](https://github.com/microsoft/vcpkg) for dependency
+management. Clone the vcpkg repository to your preferred location. In these
+instructions we use `$HOME`:
+
```shell
git clone -C $HOME https://github.com/microsoft/vcpkg.git
cd $HOME/vcpkg
@@ -95,10 +116,13 @@ git clone https://github.com/GoogleCloudPlatform/cpp-samples

### 3. Compile these examples

-Use the `vcpkg` toolchain file to download and compile dependencies. This file would be in the directory you
-cloned `vcpkg` into, `$HOME/vcpkg` if you are following the instructions to the letter. Note that building all the
-dependencies can take up to an hour, depending on the performance of your workstation. These dependencies are cached,
-so a second build should be substantially faster.
+Use the `vcpkg` toolchain file to download and compile dependencies. This file
+would be in the directory you cloned `vcpkg` into, `$HOME/vcpkg` if you are
+following the instructions to the letter. Note that building all the
+dependencies can take up to an hour, depending on the performance of your
+workstation. These dependencies are cached, so a second build should be
+substantially faster.
+
```sh
cd cpp-samples/pubsub-open-telemetry
cmake -S . 
-B .build -DCMAKE_TOOLCHAIN_FILE=$HOME/vcpkg/scripts/buildsystems/vcpkg.cmake -G Ninja @@ -114,6 +138,7 @@ cmake --build .build ``` #### Run basic publisher examples + ```shell .build/publisher [project-name] [topic-id] .build/publisher [project-name] [topic-id] -n 1000 @@ -122,6 +147,7 @@ cmake --build .build ``` #### Flow control example + ```shell .build/publisher [project-name] [topic-id] -n 5 --max-pending-messages 2 --publisher-action reject .build/publisher [project-name] [topic-id] -n 5 --max-pending-messages 2 --publisher-action block @@ -130,6 +156,7 @@ cmake --build .build ``` #### Batching example + ```shell .build/publisher [project-name] [topic-id] -n 5 --max-batch-messages 2 --max-hold-time 100 .build/publisher [project-name] [topic-id] -n 5 --message-size 10 --max-batch-bytes 60 --max-hold-time 1000 @@ -137,7 +164,7 @@ cmake --build .build #### To see all options -```shell +```shell .build/publisher --help Usage: .build/publisher A simple publisher application with Open Telemetery enabled: @@ -145,16 +172,16 @@ A simple publisher application with Open Telemetery enabled: --project-id arg the name of the Google Cloud project --topic-id arg the name of the Google Cloud topic --tracing-rate arg (=1) otel::BasicTracingRateOption value - --max-queue-size arg (=0) If set to 0, uses the default tracing + --max-queue-size arg (=0) If set to 0, uses the default tracing configuration. -n [ --message-count ] arg (=1) the number of messages to publish --message-size arg (=1) the desired message payload size - --enable-ordering-keys arg (=0) If set to true, the messages will be sent - with ordering keys. There will be 3 possible + --enable-ordering-keys arg (=0) If set to true, the messages will be sent + with ordering keys. 
There will be 3 possible ordering keys and they will be set randomly --max-pending-messages arg pubsub::MaxPendingMessagesOption value --max-pending-bytes arg pubsub::MaxPendingBytesOption value - --publisher-action arg pubsub::FullPublisherAction value + --publisher-action arg pubsub::FullPublisherAction value (block|ignore|reject) --max-hold-time arg pubsub::MaxHoldTimeOption value in us --max-batch-bytes arg pubsub::MaxBatchBytesOption value @@ -179,11 +206,13 @@ bazel build //:quickstart ### 3. Run these examples #### Run the quickstart + ```shell bazel run //:quickstart [project-name] [topic-id] ``` #### Run basic publisher examples + ```shell bazel run //:publisher [project-name] [topic-id] bazel run //:publisher -- [project-name] [topic-id] -n 1000 @@ -194,7 +223,7 @@ bazel run //:publisher -- [project-name] [topic-id] --tracing-rate 0.01 -n 10 #### Run with a local version of google-cloud-cpp ```shell -bazel run //:quickstart --override_repository=google_cloud_cpp=$HOME/your-path-to-the-repo/google-cloud-cpp -- [project-name] [topic-id] +bazel run //:quickstart --override_repository=google_cloud_cpp=$HOME/your-path-to-the-repo/google-cloud-cpp -- [project-name] [topic-id] ``` ## Cleanup @@ -228,7 +257,4 @@ set GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=%cd%\roots.pem ``` [grpc-roots-pem-bug]: https://github.com/grpc/grpc/issues/16571 -[choco-cmake-link]: https://chocolatey.org/packages/cmake -[homebrew-cmake-link]: https://formulae.brew.sh/formula/cmake -[cmake-download-link]: https://cmake.org/download/ -[view traces overview]: https://cloud.google.com/trace/docs/trace-overview \ No newline at end of file +[view traces overview]: https://cloud.google.com/trace/docs/trace-overview diff --git a/pubsub-open-telemetry/WORKSPACE.bazel b/pubsub-open-telemetry/WORKSPACE.bazel index 1f18f06..1c7b0b1 100644 --- a/pubsub-open-telemetry/WORKSPACE.bazel +++ b/pubsub-open-telemetry/WORKSPACE.bazel @@ -16,6 +16,7 @@ workspace(name = "pubsub-open-telemetery") # Google Cloud Cpp 
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") + http_archive( name = "google_cloud_cpp", sha256 = "63f009092afd900cb812050bcecf607e37d762ac911e0bcbf4af9a432da91890", @@ -24,25 +25,33 @@ http_archive( ) load("@google_cloud_cpp//bazel:google_cloud_cpp_deps.bzl", "google_cloud_cpp_deps") + google_cloud_cpp_deps() + load("@com_google_googleapis//:repository_rules.bzl", "switched_rules_by_language") + switched_rules_by_language( name = "com_google_googleapis_imports", cc = True, grpc = True, ) + load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps") + grpc_deps() + load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps") + grpc_extra_deps() # Boost http_archive( name = "com_github_nelhage_rules_boost", - url = "https://github.com/nelhage/rules_boost/archive/8a2609acaa1f1317a8b9d9b5d566e8e98c3bf343.tar.gz", + sha256 = "bb0d686145a1580fbbd029ef575f534cb770328e14d85880cc6db11f9586e1c4", strip_prefix = "rules_boost-8a2609acaa1f1317a8b9d9b5d566e8e98c3bf343", - sha256 = "bb0d686145a1580fbbd029ef575f534cb770328e14d85880cc6db11f9586e1c4" + url = "https://github.com/nelhage/rules_boost/archive/8a2609acaa1f1317a8b9d9b5d566e8e98c3bf343.tar.gz", ) load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps") + boost_deps() diff --git a/setup/WORKSPACE.bazel b/setup/WORKSPACE.bazel index d9865f9..50fe4f1 100644 --- a/setup/WORKSPACE.bazel +++ b/setup/WORKSPACE.bazel @@ -17,6 +17,7 @@ workspace(name = "hw") # Add the necessary Starlark functions to fetch google-cloud-cpp. 
# [START cpp_setup_bazel_download]
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
+
http_archive(
    name = "google_cloud_cpp",
    sha256 = "63f009092afd900cb812050bcecf607e37d762ac911e0bcbf4af9a432da91890",
@@ -27,15 +28,22 @@ http_archive(

# [START cpp_setup_bazel_recurse]
load("@google_cloud_cpp//bazel:google_cloud_cpp_deps.bzl", "google_cloud_cpp_deps")
+
google_cloud_cpp_deps()
+
load("@com_google_googleapis//:repository_rules.bzl", "switched_rules_by_language")
+
switched_rules_by_language(
    name = "com_google_googleapis_imports",
    cc = True,
    grpc = True,
)
+
load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
+
grpc_deps()
+
load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")
+
grpc_extra_deps()
# [END cpp_setup_bazel_recurse]
diff --git a/speech/api/README.md b/speech/api/README.md
index 945cf04..ad1efb2 100644
--- a/speech/api/README.md
+++ b/speech/api/README.md
@@ -1,84 +1,101 @@
# Speech Samples.

-These samples demonstrate how to call the [Google Cloud Speech API](https://cloud.google.com/speech/) using C++.
+These samples demonstrate how to call the
+[Google Cloud Speech API](https://cloud.google.com/speech/) using C++.

-We only test these samples on **Linux**. If you are running [Windows](#Windows) and [macOS](#macOS) please see
-the additional notes for your platform.
+We only test these samples on **Linux**. If you are running [Windows](#Windows)
+or [macOS](#macOS), please see the additional notes for your platform.

## Build and Run

-1. **Create a project in the Google Cloud Platform Console**. If you haven't already created a project, create one now.
-   Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control,
-   billing, and services.
-   1. Open the [Cloud Platform Console](https://console.cloud.google.com/).
-   1. In the drop-down menu at the top, select Create a project.
-   1. Give your project a name.
-   1. 
Make a note of the project ID, which might be different from the project name. The project ID is used in commands - and in configurations. - -1. **Enable billing for your project**. If you haven't already enabled billing for your - project, [enable billing now](https://console.cloud.google.com/project/_/settings). Enabling billing allows the - application to consume billable resources such as Speech API calls. - See [Cloud Platform Console Help](https://support.google.com/cloud/answer/6288653) for more information about billing - settings. +1. **Create a project in the Google Cloud Platform Console**. If you haven't + already created a project, create one now. Projects enable you to manage all + Google Cloud Platform resources for your app, including deployment, access + control, billing, and services. + + 1. Open the [Cloud Platform Console](https://console.cloud.google.com/). + 1. In the drop-down menu at the top, select Create a project. + 1. Give your project a name. + 1. Make a note of the project ID, which might be different from the project + name. The project ID is used in commands and in configurations. + +1. **Enable billing for your project**. If you haven't already enabled billing + for your project, + [enable billing now](https://console.cloud.google.com/project/_/settings). + Enabling billing allows the application to consume billable resources such as + Speech API calls. See + [Cloud Platform Console Help](https://support.google.com/cloud/answer/6288653) + for more information about billing settings. 1. **Enable APIs for your project**. - [Click here](https://console.cloud.google.com/flows/enableapi?apiid=speech&showconfirmation=true) to visit Cloud - Platform Console and enable the Speech API via the UI. + [Click here](https://console.cloud.google.com/flows/enableapi?apiid=speech&showconfirmation=true) + to visit Cloud Platform Console and enable the Speech API via the UI. Or use the CLI: - + ``` gcloud services enable speech.googleapis.com ``` -1. 
**If needed, override the Billing Project**. - If you are using a [user account] for authentication, you need to set the `GOOGLE_CLOUD_CPP_USER_PROJECT` - environment variable to the project you created in the previous step. Be aware that you must have - `serviceusage.services.use` permission on the project. Alternatively, use a service account as described next. - -[user account]: https://cloud.google.com/docs/authentication#principals - -1. **Download service account credentials**. These samples can use service accounts for authentication. - 1. Visit the [Cloud Console](http://cloud.google.com/console), and navigate to: - `API Manager > Credentials > Create credentials > Service account key` - 1. Under **Service account**, select `New service account`. - 1. Under **Service account name**, enter a service account name of your choosing. For example, `transcriber`. - 1. Under **Role**, select `Project > Owner`. - 1. Under **Key type**, leave `JSON` selected. - 1. Click **Create** to create a new service account, and download the json credentials file. - 1. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to your downloaded service account - credentials: - ``` - export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/credentials-key.json - ``` - See the [Cloud Platform Auth Guide](https://cloud.google.com/docs/authentication#developer_workflow) for more - information. - -1. **Install vcpkg.** - This project uses [`vcpkg`](https://github.com/microsoft/vcpkg) for dependency management. Clone the vcpkg repository - to your preferred location. In these instructions we use`$HOME`: +1. **If needed, override the Billing Project**. If you are using a + [user account] for authentication, you need to set the + `GOOGLE_CLOUD_CPP_USER_PROJECT` environment variable to the project you + created in the previous step. Be aware that you must have + `serviceusage.services.use` permission on the project. Alternatively, use a + service account as described next. 
+
+1) **Download service account credentials**. These samples can use service
+   accounts for authentication.
+
+   1. Visit the [Cloud Console](http://cloud.google.com/console), and navigate
+      to: `API Manager > Credentials > Create credentials > Service account key`
+   1. Under **Service account**, select `New service account`.
+   1. Under **Service account name**, enter a service account name of your
+      choosing. For example, `transcriber`.
+   1. Under **Role**, select `Project > Owner`.
+   1. Under **Key type**, leave `JSON` selected.
+   1. Click **Create** to create a new service account, and download the JSON
+      credentials file.
+   1. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to
+      your downloaded service account credentials:
+      ```
+      export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/credentials-key.json
+      ```
+
+   See the
+   [Cloud Platform Auth Guide](https://cloud.google.com/docs/authentication#developer_workflow)
+   for more information.
+
+1) **Install vcpkg.** This project uses
+   [`vcpkg`](https://github.com/microsoft/vcpkg) for dependency management.
+   Clone the vcpkg repository to your preferred location. In these instructions
+   we use `$HOME`:
+
   ```shell
   git clone -C $HOME https://github.com/microsoft/vcpkg.git
   ```

-1. **Download or clone this repo** with
+1) **Download or clone this repo** with
+
   ```shell
   git clone https://github.com/GoogleCloudPlatform/cpp-samples
   ```

-1. **Compile these examples:**
-   Use the `vcpkg` toolchain file to download and compile dependencies. This file would be in the directory you
-   cloned `vcpkg` into, `$HOME/vcpkg` if you are following the instructions to the letter. Note that building all the
-   dependencies can take up to an hour, depending on the performance of your workstation. These dependencies are cached,
-   so a second build should be substantially faster.
+1) **Compile these examples:** Use the `vcpkg` toolchain file to download and
+   compile dependencies. 
This file would be in the directory you cloned `vcpkg` + into, `$HOME/vcpkg` if you are following the instructions to the letter. Note + that building all the dependencies can take up to an hour, depending on the + performance of your workstation. These dependencies are cached, so a second + build should be substantially faster. + ```sh cd cpp-samples/speech/api cmake -S. -B.build -DCMAKE_TOOLCHAIN_FILE=$HOME/vcpkg/scripts/buildsystems/vcpkg.cmake cmake --build .build ``` -1. **Run the examples:** +1) **Run the examples:** + ```shell .build/transcribe --bitrate 16000 resources/audio2.raw .build/transcribe resources/audio.flac @@ -119,6 +136,4 @@ set GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=%cd%\roots.pem ``` [grpc-roots-pem-bug]: https://github.com/grpc/grpc/issues/16571 -[choco-cmake-link]: https://chocolatey.org/packages/cmake -[homebrew-cmake-link]: https://formulae.brew.sh/formula/cmake -[cmake-download-link]: https://cmake.org/download/ +[user account]: https://cloud.google.com/docs/authentication#principals
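Both READMEs touched by this patch enable Google Cloud services with `gcloud services enable`. As a supplementary sketch (not part of the patch itself), that step can be wrapped so a re-run skips services that are already on; the `enable_if_needed` helper name is invented for illustration, and it assumes an authenticated gcloud CLI with a default project configured:

```shell
# Sketch (an assumption, not from this patch): enable an API only if it is not
# already enabled, making the README setup steps safe to re-run.
enable_if_needed() {
  local service="$1"
  # `gcloud services list --enabled` prints one enabled service name per line;
  # grep -qx checks for an exact whole-line match.
  if gcloud services list --enabled --format="value(config.name)" |
    grep -qx "${service}"; then
    echo "${service} already enabled"
  else
    gcloud services enable "${service}"
  fi
}

# Example usage (requires an authenticated gcloud CLI):
#   enable_if_needed pubsub.googleapis.com
#   enable_if_needed cloudtrace.googleapis.com
```

This mirrors the existence checks that `setup_device.sh` in this patch already performs (`HAS_REGISTRY`, `HAS_DEVICE`) before creating registries and devices.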