Merge pull request #733 from DHI/more_examples
Documentation examples
Showing 10 changed files with 298 additions and 8 deletions.
@@ -0,0 +1,104 @@
---
title: Dfs0 - CMEMS *in-situ* data
jupyter: python3
---

```{python}
import pandas as pd
import xarray as xr
import mikeio
```

```{python}
fino = xr.open_dataset("../../data/NO_TS_MO_FINO1_202209.nc")
fino
```
CMEMS *in-situ* data is provided in a standardised format, the [OceanSITES time-series data format](http://www.oceansites.org/docs/oceansites_data_format_reference_manual.pdf).

> "*The OceanSITES programme is the global network of open-ocean sustained time series sites, called ocean reference stations, being implemented by an international partnership of researchers and agencies. OceanSITES provides fixed-point time series of various physical, biogeochemical, ecosystem and atmospheric variables at locations around the globe, from the atmosphere and sea surface to the seafloor. The program’s objective is to build and maintain a multidisciplinary global network for a broad range of research and operational applications including climate, carbon, and ecosystem variability and forecasting and ocean state validation.*"
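The format can be recognised from the file's global attributes. A minimal sketch (the attribute names `data_type`, `format_version` and `institution` are assumptions taken from the OceanSITES manual, so we only print those that are present):

```{python}
# show a few OceanSITES global attributes, if present
{k: v for k, v in fino.attrs.items() if k in ("data_type", "format_version", "institution")}
```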
Find out which variables are available and which ones we want to extract:
```{python}
data = [
    {
        "name": fino[var].name,
        "standard_name": fino[var].standard_name,
        "units": fino[var].units,
    }
    for var in fino.data_vars
    if hasattr(fino[var], "units")
]
pd.DataFrame(data)
```
The data have a DEPTH dimension, even though each variable is only measured at a single level; the measurement depth does not vary in time, although the format allows for it.

E.g. temperature (TEMP) is available at level 1 (0.5 m), as the depth plot below shows.
```{python}
fino.DEPH.plot.line(x="TIME")
```
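To confirm, the temperature can be selected at that level directly (a sketch; DEPTH index 1 is assumed to correspond to the 0.5 m level shown above):

```{python}
fino["TEMP"].isel(DEPTH=1)
```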
```{python}
fino["TEMP"].plot.line("-^", x="TIME")
```

```{python}
fino["VHM0"].plot.line("-^", x="TIME")
```
Wave data are only available at the surface.
```{python}
fino[["VHM0", "VTZA", "VPED"]].isel(DEPTH=0)
```
```{python}
df = fino[["VHM0", "VTZA", "VPED"]].isel(DEPTH=0).to_dataframe()
```
The wave parameters are stored on the same, concurrent timesteps:
```{python}
df[["VHM0", "VTZA", "VPED"]].head()
```

```{python}
df[["VHM0", "VTZA"]].plot(style="+")
```
Convert the wave height data to a mikeio `Dataset`, specifying the matching EUM type:
```{python}
ds = mikeio.from_pandas(
    df[["VHM0"]].dropna(),
    items=mikeio.ItemInfo(mikeio.EUMType.Significant_wave_height),
)
ds
```
Store the results in Dfs0 format.
```{python}
ds.to_dfs("FINO1_VHM0.dfs0")
```
Read the file again to verify the contents:
```{python}
ds = mikeio.read("FINO1_VHM0.dfs0")
ds
```
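As an extra sanity check, the round-tripped data can be compared with the original dataframe (a minimal sketch):

```{python}
ds.to_dataframe().head()
```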
@@ -0,0 +1,7 @@
---
title: Dfs0 examples
---

A collection of specific examples of working with dfs0 files. For a general introduction to dfs0, see the [user guide](../../user-guide/dfs0.qmd) and the [API reference](../../api/#dfs).

* [CMEMS *In-situ* data](cmems_insitu.qmd)
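All of these examples follow the same basic pattern: read a dfs0 file into a `mikeio.Dataset` and work with it as a dataframe. A minimal sketch (the file name below is a placeholder):

```python
import mikeio

ds = mikeio.read("example.dfs0")  # placeholder file name
df = ds.to_dataframe()
```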
@@ -0,0 +1,9 @@
---
title: Dfsu examples
---

A collection of specific examples of working with dfsu files. For a general introduction to dfsu, see the [user guide](../../user-guide/dfsu.qmd) and the [API reference](../../api/#dfs).

* [2D spatial interpolation](spatial_interpolation.qmd)
* [Merging subdomain dfsu files](merge_subdomains.qmd)
@@ -0,0 +1,153 @@
---
title: Merging subdomain dfsu files
jupyter: python3
---

During a simulation, MIKE will commonly split the domain into subdomains and output the results as separate dfsu files with a `_p#` suffix. This script merges dfsu files of this type into a single file.

Note: the implementation below assumes a 2D dfsu file. For a 3D dfsu file, the script needs to be modified accordingly.
## Import libraries

```{python}
import mikeio
import numpy as np
from mikeio.spatial import GeometryFM2D
```
```{python}
# (optional) check the first file, items etc.
mikeio.open("../../data/SimA_HD_p0.dfsu")
```
## Choose items to process

```{python}
# choose items to process (when in doubt, inspect one of the files with mikeio.open)
items = ["Surface elevation", "Current speed", "Current direction"]
```
## Read files

Option A: automatically find all files with the `_p#` suffix
```{python}
import glob
import os

basename = "../../data/SimA_HD"  # basename of the dfsu files

def find_dfsu_files(basename):
    pattern = f"{basename}_p*.dfsu"
    files = sorted(glob.glob(pattern))
    if not files:
        raise ValueError(f"No files found matching the pattern: {pattern}")
    return files

dfs_files = find_dfsu_files(basename)
print(f"Found {len(dfs_files)} files:")
for file in dfs_files:
    print(f" - {os.path.basename(file)}")

dfs_list = [mikeio.read(file, items=items) for file in dfs_files]
```
Option B: manually select files
```{python}
# List of input dfsu files
dfs_files = [
    "../../data/SimA_HD_p0.dfsu",
    "../../data/SimA_HD_p1.dfsu",
    "../../data/SimA_HD_p2.dfsu",
    "../../data/SimA_HD_p3.dfsu",
]

# read all dfsu files
dfs_list = [mikeio.read(file, items=items) for file in dfs_files]
```
## Extract data of all subdomains
```{python}
# Create a dictionary to store data for each item
data_dict = {item: [] for item in items}

# Get time steps (assuming all files have the same time axis)
time_steps = dfs_list[0][items[0]].time

# loop over items and time steps and concatenate data from all subdomains
for item in items:
    for i in range(len(time_steps)):
        # Combine data for the current time step from all subdomains
        combined_data = np.concatenate([dfs[item].values[i, :] for dfs in dfs_list])
        data_dict[item].append(combined_data)

    # Convert the list to a numpy array: (n_timesteps, n_elements)
    data_dict[item] = np.array(data_dict[item])

# Stack items: (n_items, n_timesteps, n_elements)
merged_data = np.array([data_dict[item] for item in items])
```
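Since the element axis is the last dimension, the same result can be obtained without the explicit time loop (a compact alternative sketch, assuming all subdomains share the same time axis):

```{python}
# concatenate each item's (n_timesteps, n_elements) arrays along the element axis
merged_alt = np.array(
    [np.concatenate([dfs[item].values for dfs in dfs_list], axis=1) for item in items]
)
assert merged_alt.shape == merged_data.shape
```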
## Merge geometry of all subdomains
```{python}
geometries = [dfs.geometry for dfs in dfs_list]

combined_node_coordinates = []
combined_element_table = []
node_offset = 0

# loop through geometries to combine nodes and elements of all subdomains;
# element indices are shifted by the running node offset so that each
# subdomain's elements still reference its own nodes
for geom in geometries:
    current_node_coordinates = geom.node_coordinates
    current_element_table = geom.element_table

    combined_node_coordinates.extend(current_node_coordinates)
    adjusted_element_table = [element + node_offset for element in current_element_table]
    combined_element_table.extend(adjusted_element_table)

    node_offset += len(current_node_coordinates)

combined_node_coordinates = np.array(combined_node_coordinates)
combined_element_table = np.array(combined_element_table, dtype=object)
projection = geometries[0]._projstr

# create combined geometry (nodes on internal subdomain boundaries are
# duplicated; this is acceptable for element-based data and plotting)
combined_geometry = GeometryFM2D(
    node_coordinates=combined_node_coordinates,
    element_table=combined_element_table,
    projection=projection,
)
```
```{python}
combined_geometry.plot()
```
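A quick consistency check (a sketch): the merged geometry should contain the nodes and elements of all subdomains combined.

```{python}
assert combined_geometry.n_elements == sum(g.n_elements for g in geometries)
assert combined_geometry.n_nodes == sum(g.n_nodes for g in geometries)
```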
## Merge everything into a dataset

```{python}
ds_out = mikeio.Dataset(
    data=merged_data,  # (n_items, n_timesteps, n_elements)
    items=items,
    time=time_steps,
    geometry=combined_geometry,
)
```
```{python}
# plot the first time step of the first item
ds_out[items[0]].isel(time=0).plot()
```
## Write output to a single file

```{python}
output_file = "area_merged.dfsu"
ds_out.to_dfs(output_file)
```
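Finally, the merged file can be re-opened to confirm that it is a valid dfsu file (a sketch):

```{python}
mikeio.open(output_file)
```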
@@ -1,4 +1,8 @@
📢 Example notebooks are moving to a new repo: <https://github.com/DHI/mikeio-examples>

The notebooks in this folder are considered legacy and are being replaced by the documentation available at <https://dhi.github.io/mikeio>.

The contents are moving to either the [user guide](https://dhi.github.io/mikeio/user-guide/getting-started.html) or the [examples](https://dhi.github.io/mikeio/examples/) section of the documentation.

On each page in the documentation, there is a link to download it as a Jupyter notebook. ![download as jupyter](download_jupyter.png)

If you find a notebook here that you would like to see in the documentation, please let us know by creating an issue in the [issue tracker](https://github.com/DHI/mikeio/issues).

🔍 Search for examples here: <https://dhi.github.io/mikeio-examples/search.html>