Commit 777692c

[DOCS-466] Standardize docs about creating API keys and signing in (#1054)

- Update each integration to use the same headings and structure
- Fix other section headings in the integrations
- Use the same details in the public API guide
- Use the same details in the quickstart
- Add non-notebook Python examples
- Update references to "Weights & Biases" to "W&B"
mdlinville authored Feb 3, 2025
1 parent 1dc8b96 commit 777692c
Showing 16 changed files with 742 additions and 282 deletions.
90 changes: 56 additions & 34 deletions content/guides/integrations/add-wandb-to-any-library.md

### User Login

#### Create an API key

An API key authenticates a client or machine to W&B. You can generate an API key from your user profile.

{{% alert %}}
For a more streamlined approach, you can generate an API key by going directly to [https://wandb.ai/authorize](https://wandb.ai/authorize). Copy the displayed API key and save it in a secure location such as a password manager.
{{% /alert %}}

1. Click your user profile icon in the upper right corner.
1. Select **User Settings**, then scroll to the **API Keys** section.
1. Click **Reveal**. Copy the displayed API key. To hide the API key, reload the page.

#### Install the `wandb` library and log in

To install the `wandb` library locally and log in:

{{< tabpane text=true >}}
{{% tab header="Command Line" value="cli" %}}

{{% tab header="Bash" value="bash" %}}
Log into W&B with a bash command in a terminal:
1. Set the `WANDB_API_KEY` [environment variable]({{< relref "/guides/models/track/environment-variables.md" >}}) to your API key.

    ```bash
    export WANDB_API_KEY=<your_api_key>
    ```

1. Install the `wandb` library and log in.

    ```shell
    pip install wandb
    wandb login
    ```

{{% /tab %}}

{{% tab header="Notebook" value="notebook" %}}
If they're in a Jupyter or Colab notebook, log into W&B like so:
{{% tab header="Python" value="python" %}}

```bash
pip install wandb
```

```python
import wandb
wandb.login()
```
{{% /tab %}}

{{% tab header="Environment Variable" value="environment" %}}
Set a [W&B environment variable]({{< relref "/guides/models/track/environment-variables.md" >}}) for the API key:
{{% /tab %}}

```bash
export WANDB_API_KEY=$YOUR_API_KEY
```
{{% tab header="Python notebook" value="python-notebook" %}}

or
```notebook
!pip install wandb
```python
os.environ['WANDB_API_KEY'] = "abc123..."
import wandb
wandb.login()
```
{{% /tab %}}

{{% /tab %}}
{{< /tabpane >}}

If a user uses wandb for the first time without following any of the steps above, they are automatically prompted to log in when your script calls `wandb.init`.

### Start a run

A W&B Run is a unit of computation logged by W&B. Typically, you associate a single W&B Run with each training experiment.
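For example, a minimal run lifecycle might look like the following sketch; the project name is illustrative:

```python
import wandb

# Create a run to track one training experiment ("my_project" is an example name).
run = wandb.init(project="my_project")

# ... training code that logs metrics to the run ...

# Mark the run as finished when the experiment ends.
run.finish()
```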

### Define a run config

With a `wandb` run config, you can provide metadata about your model, dataset, and so on when you create a W&B Run. You can use this information to compare different experiments and quickly understand the main differences.
{{< img src="/images/integrations/integrations_add_any_lib_runs_page.png" alt="W&B Runs table" >}}

```python
config = {"batch_size": 32, ...}
wandb.init(..., config=config)
```

#### Update the run config

Use `wandb.config.update` to update the config. Updating your configuration dictionary is useful when parameters are obtained after the dictionary was defined. For example, you might want to add a model’s parameters after the model is instantiated.

```python
wandb.config.update({"model_parameters": 3500})
```

For more information on how to define a config file, see [Configure Experiments with wandb.config]({{< relref "/guides/models/track/config" >}}).

### Log to W&B

#### Log metrics

Create a dictionary where the keys are the names of your metrics. Pass this dictionary object to [`wandb.log`]({{< relref "/guides/models/track/log" >}}):
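
A minimal sketch (the metric names and values here are illustrative):

```python
# Assumes a run was already created with wandb.init().
metrics = {"train/loss": 0.35, "train/accuracy": 0.91}
wandb.log(metrics)
```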

For more on `wandb.log`, see [Log Data with wandb.log]({{< relref "/guides/models/track/log" >}}).

#### Prevent x-axis misalignments

Sometimes you might need to call `wandb.log` multiple times for the same training step. The wandb SDK has its own internal step counter that is incremented on every `wandb.log` call, so the wandb step counter may fall out of alignment with the training step in your training loop.

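To avoid this, one common pattern (a sketch; the metric names are illustrative) is to log your step as its own metric and plot against it:

```python
for step, (input, ground_truth) in enumerate(data):
    # ... training code ...
    # Log the step explicitly so both calls share the same x-axis value.
    wandb.log({"global_step": step, "train/loss": 0.1})
    wandb.log({"global_step": step, "eval/loss": 0.2})
```
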
If you do not have access to the independent step variable, for example if "global_step" is not available during your validation loop, wandb automatically uses the previously logged value for "global_step". In this case, ensure you log an initial value for the metric so it has been defined when it's needed.

#### Log images, tables, audio, and more

In addition to metrics, you can log plots, histograms, tables, text, and media such as images, videos, audio, and 3D objects.
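
For instance, a minimal sketch of logging an image and a table (the file name, keys, and table contents are illustrative):

```python
import wandb

wandb.log({
    # Wrap a local image file (or a numpy array) in wandb.Image.
    "examples": [wandb.Image("sample.png", caption="Input sample")],
    # Tables hold structured data for later inspection in the UI.
    "predictions": wandb.Table(columns=["id", "label"], data=[[0, "cat"], [1, "dog"]]),
})
```
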
Refer to [Log Data with wandb.log]({{< relref "/guides/models/track/log" >}}) for a full guide on logging media, objects, plots, and more.

### Distributed training

For frameworks supporting distributed environments, you can adapt any of the following workflows:
See [Log Distributed Training Experiments]({{< relref "/guides/models/track/log/distributed-training.md" >}}) for more details.
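
For example, one minimal pattern (a sketch assuming the common `RANK` environment variable set by distributed launchers) is to create the run only on the main process:

```python
import os
import wandb

# Only the rank-0 process creates a W&B Run; other ranks skip logging.
if int(os.environ.get("RANK", "0")) == 0:
    wandb.init(project="my_project")  # example project name
```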

### Log model checkpoints and more

If your framework uses or produces models or datasets, you can log them for full traceability and have wandb automatically monitor your entire pipeline through W&B Artifacts.

When using Artifacts, it might be useful but not necessary to let your users define:

* The path/reference of the artifact being used as input, if any. For example, `user/project/artifact`.
* The frequency for logging Artifacts.

#### Log model checkpoints

You can log model checkpoints to W&B. It is useful to name output checkpoints with the unique `wandb` Run ID so that you can differentiate checkpoints between Runs. You can also add useful metadata, as well as aliases for each model, as shown below:
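
A sketch of this pattern, assuming a run is active (the metadata, directory name, and aliases are illustrative):

```python
import wandb

metadata = {"eval/accuracy": 0.8, "train/steps": 800}  # example metadata

artifact = wandb.Artifact(
    name=f"model-{wandb.run.id}",  # use the Run ID to make the name unique
    type="model",
    metadata=metadata,
)
artifact.add_dir("output_model")  # local directory containing the checkpoint
wandb.log_artifact(artifact, aliases=["best", "epoch_10"])  # example aliases
```
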
You can log output Artifacts at any frequency (for example, every epoch, every 500 steps, and so on) and they are automatically versioned.

#### Log and track pre-trained models or datasets

You can log artifacts that are used as inputs to your training, such as pre-trained models or datasets. The following snippet demonstrates how to log an Artifact and add it as an input to the ongoing Run, as shown in the graph above.

```python
artifact_input_data.add_file("flowers.npy")
wandb.use_artifact(artifact_input_data)
```

#### Download an artifact

When you re-use an Artifact (dataset, model, and so on), `wandb` downloads a copy locally (and caches it):
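
A minimal sketch, assuming a run is active and using an illustrative artifact path:

```python
import wandb

# Declare the artifact as an input to the current run, then fetch it locally.
artifact = wandb.use_artifact("user/project/artifact:latest")  # example path
local_path = artifact.download()
```
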
For more information, see [Download and Use Artifacts]({{< relref "/guides/core/artifacts/download-and-use-an-artifact" >}}).

### Tune hyper-parameters

To leverage W&B hyper-parameter tuning, you can also add [W&B Sweeps]({{< relref "/guides/models/sweeps/" >}}) to your library.
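
As a rough sketch of what this can look like (the sweep configuration, project name, and `train` function are illustrative assumptions):

```python
import wandb

def train():
    # The user's training function; reads wandb.config for sampled values.
    run = wandb.init()
    lr = run.config.lr
    # ... train and log metrics ...

sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {"lr": {"min": 0.00001, "max": 0.01}},
}
sweep_id = wandb.sweep(sweep_config, project="my_project")
wandb.agent(sweep_id, function=train, count=10)
```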

### Advanced integrations

You can also see what advanced W&B integrations look like in the following examples. Note that most integrations are not as complex as these:
110 changes: 69 additions & 41 deletions content/guides/integrations/deepchem.md

Explore the [Using W&B with DeepChem: Molecular Graph Convolutional Networks](https://wandb.ai/kshen/deepchem_graphconv/reports/Using-W-B-with-DeepChem-Molecular-Graph-Convolutional-Networks--Vmlldzo4MzU5MDc?galleryTag=) article for example charts generated using the W&B DeepChem integration.

To dive straight into working code, check out this [**Google Colab**](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/deepchem/W%26B_x_DeepChem.ipynb).

## Track experiments

Set up W&B for DeepChem models of type [KerasModel](https://deepchem.readthedocs.io/en/latest/api_reference/models.html#keras-models) or [TorchModel](https://deepchem.readthedocs.io/en/latest/api_reference/models.html#pytorch-models).

### Sign up and create an API key

An API key authenticates your machine to W&B. You can generate an API key from your user profile.

{{% tab header="Command Line" value="cli" %}}
{{% alert %}}
For a more streamlined approach, you can generate an API key by going directly to [https://wandb.ai/authorize](https://wandb.ai/authorize). Copy the displayed API key and save it in a secure location such as a password manager.
{{% /alert %}}

1. Click your user profile icon in the upper right corner.
1. Select **User Settings**, then scroll to the **API Keys** section.
1. Click **Reveal**. Copy the displayed API key. To hide the API key, reload the page.

### Install the `wandb` library and log in

To install the `wandb` library locally and log in:

{{< tabpane text=true >}}
{{% tab header="Command Line" value="cli" %}}

1. Set the `WANDB_API_KEY` [environment variable]({{< relref "/guides/models/track/environment-variables.md" >}}) to your API key.

    ```bash
    export WANDB_API_KEY=<your_api_key>
    ```

1. Install the `wandb` library and log in.

    ```shell
    pip install wandb
    wandb login
    ```

{{% /tab %}}

{{% tab header="Notebook" value="notebook" %}}
{{% tab header="Python" value="python" %}}

```bash
pip install wandb
```

```python
import wandb
wandb.login()
```
{{% /tab %}}

{{% tab header="Python notebook" value="python-notebook" %}}

```notebook
!pip install wandb

import wandb
wandb.login()
```

{{% /tab %}}

{{< /tabpane >}}

Initialize and configure the `WandbLogger`:

```python
from deepchem.models import WandbLogger

logger = WandbLogger(entity="my_entity", project="my_project")
```

### Log your training and evaluation data to W&B

Training loss and evaluation metrics can be automatically logged to W&B. Optional evaluation can be enabled using the DeepChem [ValidationCallback](https://github.com/deepchem/deepchem/blob/master/deepchem/models/callbacks.py); the `WandbLogger` detects the ValidationCallback and logs the metrics it generates.

{{< tabpane text=true >}}

{{% tab header="TorchModel" value="torch" %}}

```python
from deepchem.models import TorchModel, ValidationCallback

vc = ValidationCallback(…)  # optional
model = TorchModel(…, wandb_logger=logger)
model.fit(…, callbacks=[vc])
logger.finish()
```

{{% /tab %}}

{{% tab header="KerasModel" value="keras" %}}

```python
from deepchem.models import KerasModel, ValidationCallback

vc = ValidationCallback(…)  # optional
model = KerasModel(…, wandb_logger=logger)
model.fit(…, callbacks=[vc])
logger.finish()
```

{{% /tab %}}

{{< /tabpane >}}