Update getting_started to use ipython
JoelLucaAdams committed Nov 18, 2024
1 parent 5187ffc commit cd5c0c6
Showing 1 changed file with 24 additions and 48 deletions.
docs/getting_started.rst: 24 additions & 48 deletions
@@ -30,40 +30,25 @@ Usage
Single file loading
~~~~~~~~~~~~~~~~~~~

-Basic usage::
+Basic usage:
+
+.. ipython:: python

    import xarray as xr

-    df = xr.open_dataset("0010.sdf")
-    print(df["Electric_Field_Ex"])
-    # <xarray.DataArray 'Electric_Field_Ex' (X_x_px_deltaf_electron_beam: 16)> Size: 128B
-    # [16 values with dtype=float64]
-    # Coordinates:
-    #   * X_x_px_deltaf_electron_beam  (X_x_px_deltaf_electron_beam) float64 128B 1...
-    # Attributes:
-    #     units:      V/m
-    #     full_name:  "Electric Field/Ex"
+    df = xr.open_dataset("tutorial_dataset_1d/0010.sdf")
+    df["Electric_Field_Ex"]

Multi file loading
~~~~~~~~~~~~~~~~~~

To open a whole simulation at once, pass ``preprocess=sdf_xarray.SDFPreprocess()``
-to `xarray.open_mfdataset`::
+to `xarray.open_mfdataset`:
+
+.. ipython:: python

    import xarray as xr
    from sdf_xarray import SDFPreprocess

-    with xr.open_mfdataset("*.sdf", preprocess=SDFPreprocess()) as ds:
-        print(ds)
-        # Dimensions:
-        #     time: 301, X_Grid_mid: 128, ...
-        # Coordinates: (9) ...
-        # Data variables: (18) ...
-        # Indexes: (9) ...
-        # Attributes: (22) ...
+    xr.open_mfdataset("tutorial_dataset_1d/*.sdf", preprocess=SDFPreprocess())

`SDFPreprocess` checks that all the files are from the same simulation, and
ensures there's a ``time`` dimension so the files are correctly concatenated.
@@ -73,19 +58,12 @@ output at every time step, then those variables will have ``NaN`` values at the
corresponding time points.
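
As a rough sketch, assuming the ``Electric_Field_Ex`` variable from the example above, those gaps can be dropped with plain xarray::

    import xarray as xr
    from sdf_xarray import SDFPreprocess

    with xr.open_mfdataset("tutorial_dataset_1d/*.sdf", preprocess=SDFPreprocess()) as ds:
        # Keep only the time points at which this variable was actually written;
        # the remaining entries along "time" are NaN because no output included it then.
        ex = ds["Electric_Field_Ex"].dropna(dim="time", how="all")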

Alternatively, we can create a separate time dimension for each ``output`` block
-(essentially) using `sdf_xarray.open_mfdataset` with ``separate_times=True``::
+(essentially) using `sdf_xarray.open_mfdataset` with ``separate_times=True``:
+
+.. ipython:: python

    from sdf_xarray import open_mfdataset

-    with open_mfdataset("*.sdf", separate_times=True) as ds:
-        print(ds)
-        # Dimensions:
-        #     time0: 301, time1: 31, time2: 61, X_Grid_mid: 128, ...
-        # Coordinates: (12) ...
-        # Data variables: (18) ...
-        # Indexes: (9) ...
-        # Attributes: (22) ...
+    open_mfdataset("tutorial_dataset_1d/*.sdf", separate_times=True)

This is better for memory consumption, at the cost of perhaps slightly less
friendly comparisons between variables on different time coordinates.
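
If two variables do need to be compared across different time coordinates, a rough sketch is to interpolate one onto the other's times with plain xarray (the dimension names ``time0``/``time1`` follow the example output above, and ``Derived_Number_Density`` is only a placeholder variable name)::

    from sdf_xarray import open_mfdataset

    ds = open_mfdataset("tutorial_dataset_1d/*.sdf", separate_times=True)
    ex = ds["Electric_Field_Ex"]          # assumed to live on the "time0" dimension
    other = ds["Derived_Number_Density"]  # placeholder; assumed to live on "time1"
    # Interpolate onto the Ex output times so the two can be compared point by point.
    other_on_ex_times = other.interp(time1=ex["time0"].values)
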
@@ -95,21 +73,19 @@ Reading particle data

By default, particle data isn't kept as it takes up a lot of space. Pass
``keep_particles=True`` as a keyword argument to `open_dataset` (for single files)
-or `open_mfdataset` (for multiple files)::
+or `open_mfdataset` (for multiple files):

-    df = xr.open_dataset("0010.sdf", keep_particles=True)
+.. ipython:: python
+
+    xr.open_dataset("tutorial_dataset_1d/0010.sdf", keep_particles=True)

-### Loading SDF files directly

Loading SDF files directly
~~~~~~~~~~~~~~~~~~~~~~~~~~

-For debugging, sometimes it's useful to see the raw SDF files::
+For debugging, sometimes it's useful to see the raw SDF files:
+
+.. ipython:: python

    from sdf_xarray import SDFFile

-    with SDFFile("0010.sdf") as sdf_file:
-        print(sdf_file.variables["Electric Field/Ex"])
-        # Variable(_id='ex', name='Electric Field/Ex', dtype=dtype('float64'), ...
-        print(sdf_file.variables["Electric Field/Ex"].data)
-        # [ 0.00000000e+00  0.00000000e+00  0.00000000e+00 ... -4.44992788e+12  1.91704994e+13  0.00000000e+00]
+    sdf_file = SDFFile("tutorial_dataset_1d/0010.sdf")
+    sdf_file.variables["Electric Field/Ex"]
