[DOCS-466] Standardize docs about creating API keys and signing in #1054

Merged (4 commits) on Feb 3, 2025
88 changes: 49 additions & 39 deletions content/guides/integrations/add-wandb-to-any-library.md

### User Login

#### Create an API key

An API key authenticates a client or machine to W&B. You can generate an API key from your user profile.

{{% alert %}}
For a more streamlined approach, you can generate an API key by going directly to [https://wandb.ai/authorize](https://wandb.ai/authorize). Copy the displayed API key and save it in a secure location such as a password manager.
{{% /alert %}}

1. Click your user profile icon in the upper right corner.
1. Select **User Settings**, then scroll to the **API Keys** section.
1. Click **Reveal**. Copy the displayed API key. To hide the API key, reload the page.

#### Install the `wandb` library and log in

To install the `wandb` library locally and log in:

{{< tabpane text=true >}}
{{% tab header="Command Line" value="cli" %}}

1. Set the `WANDB_API_KEY` [environment variable]({{< relref "/guides/models/track/environment-variables.md" >}}) to your API key.

    ```bash
    export WANDB_API_KEY=<your_api_key>
    ```

1. Install the `wandb` library and log in.

    ```shell
    pip install wandb

    wandb login
    ```

{{% /tab %}}

{{% tab header="Python" value="python" %}}

```notebook
!pip install wandb

import wandb
wandb.login()
```

{{% /tab %}}
{{< /tabpane >}}

If a user runs `wandb` for the first time without following any of the steps above, they are automatically prompted to log in when your script calls `wandb.init`.

### Start a run

A W&B Run is a unit of computation logged by W&B. Typically, you associate one W&B Run with each training experiment.


{{< /tabpane >}}

### Define a run config

With a `wandb` run config, you can provide metadata about your model, dataset, and so on when you create a W&B Run. You can use this information to compare different experiments and quickly understand the main differences.

{{< img src="/images/integrations/integrations_add_any_lib_runs_page.png" alt="W&B Runs table" >}}
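As an illustration of what "quickly understand the main differences" means in practice, here is a small sketch (this helper is hypothetical, not part of the `wandb` API; the W&B Runs table does this comparison for you):

```python
def config_diff(a, b):
    """Return the keys whose values differ between two run configs."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

# Two hypothetical run configs that differ only in batch size.
run1 = {"batch_size": 32, "lr": 1e-3, "optimizer": "adam"}
run2 = {"batch_size": 64, "lr": 1e-3, "optimizer": "adam"}

print(config_diff(run1, run2))  # {'batch_size': (32, 64)}
```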
```python
config = {"batch_size": 32, ...}
wandb.init(..., config=config)
```

#### Update the run config

Use `wandb.config.update` to update the config. Updating your configuration dictionary is useful when parameters are obtained after the dictionary was defined. For example, you might want to add a model’s parameters after the model is instantiated.

```python
wandb.config.update({"model_parameters": 3500})
```

For more information on how to define a config file, see [Configure Experiments with wandb.config]({{< relref "/guides/models/track/config" >}}).

### Log to W&B

#### Log metrics

Create a dictionary where the key is the name of the metric. Pass this dictionary to [`wandb.log`]({{< relref "/guides/models/track/log" >}}):

```python
metrics = {"train/loss": 0.3}  # illustrative value; keys are metric names
wandb.log(metrics)
```

For more on `wandb.log`, see [Log Data with wandb.log]({{< relref "/guides/models/track/log" >}}).

#### Prevent x-axis misalignments

Sometimes you might need to call `wandb.log` multiple times for the same training step. The wandb SDK has its own internal step counter that is incremented on every `wandb.log` call, so the wandb log step may not be aligned with the training step in your training loop.

One way to avoid this is to log your training step as its own metric:

```python
for step, (input, ground_truth) in enumerate(data):
    ...
    wandb.log({"global_step": step, "train/loss": 0.1})
    wandb.log({"global_step": step, "eval/loss": 0.2})
```

If you do not have access to the independent step variable (for example, "global_step" is not available during your validation loop), wandb automatically uses the previously logged value of "global_step". In this case, log an initial value for the metric so it is defined when it is needed.
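To see why the counters drift, here is a pure-Python sketch of the behavior described above (`StepCounterSketch` is an illustrative stand-in, not the wandb SDK): calling `log` twice per loop iteration advances the internal counter twice as fast as the training step.

```python
class StepCounterSketch:
    """Illustrative stand-in for the wandb SDK's internal step counter."""

    def __init__(self):
        self.step = 0      # internal counter, incremented on every log call
        self.history = []  # (x-axis value, logged data) pairs

    def log(self, data):
        self.history.append((self.step, dict(data)))
        self.step += 1

logger = StepCounterSketch()
for train_step in range(3):
    logger.log({"train/loss": 0.5})     # counter advances once...
    logger.log({"eval/accuracy": 0.8})  # ...and again: two per training step

print(logger.step)            # 6 internal steps for only 3 training steps
print(logger.history[-1][0])  # the last x-axis value is 5, not 2
```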

#### Log images, tables, audio, and more

In addition to metrics, you can log plots, histograms, tables, text, and media such as images, videos, audio, 3D objects, and more.

Some considerations when logging data include:

Refer to [Log Data with wandb.log]({{< relref "/guides/models/track/log" >}}) for a full guide on logging media, objects, plots, and more.

### Distributed training

For frameworks supporting distributed environments, you can adapt any of the following workflows:

For frameworks supporting distributed environments, you can adapt any of the following workflows:

See [Log Distributed Training Experiments]({{< relref "/guides/models/track/log/distributed-training.md" >}}) for more details.

### Log model checkpoints and more

If your framework uses or produces models or datasets, you can log them for full traceability and have wandb automatically monitor your entire pipeline through W&B Artifacts.

When using Artifacts, it might be useful but not necessary to let your users define:
* The path/reference of the artifact being used as input, if any. For example, `user/project/artifact`.
* The frequency for logging Artifacts.
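The three options above could be surfaced to users as a small settings object; a sketch (all names and defaults here are hypothetical, not a wandb API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArtifactSettings:
    """Hypothetical knobs a library might expose for Artifact logging."""
    log_checkpoints: bool = True          # whether to log model checkpoints
    input_artifact: Optional[str] = None  # e.g. "user/project/artifact"
    log_every_n_steps: int = 500          # frequency for logging Artifacts

settings = ArtifactSettings(input_artifact="user/project/flowers")
print(settings.log_every_n_steps)  # 500
```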

#### Log model checkpoints

You can log model checkpoints to W&B. Naming output checkpoints with the unique `wandb` Run ID helps differentiate them between Runs. You can also attach useful metadata and add aliases to each model.
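As a sketch of that naming scheme (pure Python; `run_id`, `epoch`, and the alias strings are illustrative stand-ins for values you would read from `wandb.run` and your training loop):

```python
run_id = "1a2b3c4d"  # stand-in for wandb.run.id
epoch = 10

checkpoint_name = f"model-{run_id}"            # unique per run
aliases = ["latest", f"epoch-{epoch}"]         # human-readable pointers
metadata = {"epoch": epoch, "val_loss": 0.18}  # anything useful for later triage

print(checkpoint_name)  # model-1a2b3c4d
```

When logging for real, these values would typically be passed as `wandb.Artifact(name=checkpoint_name, type="model", metadata=metadata)` followed by `wandb.log_artifact(artifact, aliases=aliases)`.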

For information on how to create a custom alias, see Create a Custom Alias.

You can log output Artifacts at any frequency (for example, every epoch, every 500 steps, and so on) and they are automatically versioned.

#### Log and track pre-trained models or datasets

You can log artifacts that are used as inputs to your training such as pre-trained models or datasets. The following snippet demonstrates how to log an Artifact and add it as an input to the ongoing Run as shown in the graph above.

```python
artifact_input_data = wandb.Artifact(name="flowers", type="dataset")
artifact_input_data.add_file("flowers.npy")
wandb.use_artifact(artifact_input_data)
```

#### Download an artifact

When you re-use an Artifact (a dataset, a model, and so on), `wandb` downloads a copy locally and caches it:

```python
artifact = wandb.use_artifact("user/project/artifact:latest")
local_path = artifact.download()
```

For more information, see [Download and Use Artifacts]({{< relref "/guides/core/artifacts/download-and-use-an-artifact" >}}).

### Tune hyper-parameters

If you want your library to leverage W&B hyper-parameter tuning, you can also add [W&B Sweeps]({{< relref "/guides/models/sweeps/" >}}) to your library.
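For reference, a sweep is driven by a small configuration file; a minimal sketch (the metric and parameter names here are illustrative):

```yaml
method: bayes
metric:
  name: val_loss
  goal: minimize
parameters:
  lr:
    min: 0.0001
    max: 0.1
  batch_size:
    values: [16, 32, 64]
```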

### Advanced integrations

You can also see what advanced W&B integrations look like in the following examples. Note that most integrations are not as complex as these:

97 changes: 56 additions & 41 deletions content/guides/integrations/deepchem.md
If you'd rather dive straight into working code, check out this **Google Colab**.

Set up Weights & Biases for DeepChem models of type [KerasModel](https://deepchem.readthedocs.io/en/latest/api_reference/models.html#keras-models) or [TorchModel](https://deepchem.readthedocs.io/en/latest/api_reference/models.html#pytorch-models).

### Sign up and create an API key

An API key authenticates your machine to W&B. You can generate an API key from your user profile.

{{% alert %}}
For a more streamlined approach, you can generate an API key by going directly to [https://wandb.ai/authorize](https://wandb.ai/authorize). Copy the displayed API key and save it in a secure location such as a password manager.
{{% /alert %}}

1. Click your user profile icon in the upper right corner.
1. Select **User Settings**, then scroll to the **API Keys** section.
1. Click **Reveal**. Copy the displayed API key. To hide the API key, reload the page.

### Install the `wandb` library and log in

To install the `wandb` library locally and log in:

{{< tabpane text=true >}}
{{% tab header="Command Line" value="cli" %}}

1. Set the `WANDB_API_KEY` [environment variable]({{< relref "/guides/models/track/environment-variables.md" >}}) to your API key.

    ```bash
    export WANDB_API_KEY=<your_api_key>
    ```

1. Install the `wandb` library and log in.

    ```shell
    pip install wandb

    wandb login
    ```

{{% /tab %}}

{{% tab header="Python" value="python" %}}

```notebook
!pip install wandb

import wandb
wandb.login()
```

{{% /tab %}}
{{< /tabpane >}}

### Initialize and configure `WandbLogger`

```python
from deepchem.models import WandbLogger

logger = WandbLogger(entity="my_entity", project="my_project")
```

### Log your training and evaluation data to W&B

Training loss and evaluation metrics can be automatically logged to Weights & Biases. Optional evaluation can be enabled using the DeepChem [ValidationCallback](https://github.com/deepchem/deepchem/blob/master/deepchem/models/callbacks.py); the `WandbLogger` detects the `ValidationCallback` and logs the metrics it generates.

{{< tabpane text=true >}}

{{% tab header="TorchModel" value="torch" %}}

```python
from deepchem.models import TorchModel, ValidationCallback

vc = ValidationCallback(…)  # optional
model = TorchModel(…, wandb_logger=logger)
model.fit(…, callbacks=[vc])
logger.finish()
```

{{% /tab %}}

{{% tab header="KerasModel" value="keras" %}}

```python
from deepchem.models import KerasModel, ValidationCallback

vc = ValidationCallback(…)  # optional
model = KerasModel(…, wandb_logger=logger)
model.fit(…, callbacks=[vc])
logger.finish()
```

{{% /tab %}}

{{< /tabpane >}}