Support for ROIs with different size or resolution #74

Open: wants to merge 41 commits into base: master

Commits (41)
444c9b1  Add a new version for the latest LabView setup (ageorgou, Sep 7, 2020)
b7a2f35  Split off reading of ROI table (ageorgou, Sep 7, 2020)
c34dffb  Start generalising ROI reader (ageorgou, Sep 8, 2020)
f8a2702  Allow creating ROI readers for newest LabView (ageorgou, Sep 8, 2020)
84826e3  Start differentiating for variable ROIs (ageorgou, Sep 8, 2020)
2745018  add new columns (alessandrofelder, Sep 17, 2020)
7555463  function call to _imaging_section needed (alessandrofelder, Sep 18, 2020)
f3f86c8  create simple test for ROI reader (alessandrofelder, Sep 18, 2020)
55c6228  move FIXME to subclass (alessandrofelder, Sep 18, 2020)
fab0491  add v300 ROI.dat (alessandrofelder, Sep 18, 2020)
c22913c  adapt to possible lack of speed data file (alessandrofelder, Oct 5, 2020)
4fb9626  update signatures (alessandrofelder, Oct 5, 2020)
e958e13  clarify classname to cover all modern versions (alessandrofelder, Oct 5, 2020)
cb0e263  fix typo in comments (alessandrofelder, Oct 5, 2020)
187c3e3  wip:add functionality for variable rois (alessandrofelder, Oct 5, 2020)
3324c94  Use array for ROI sizes for easier computations (ageorgou, Oct 11, 2020)
f56f713  move reading of roi file (alessandrofelder, Oct 14, 2020)
1276a6d  accommodate timings.py to variable size rois (alessandrofelder, Oct 14, 2020)
613731d  fix small bug (alessandrofelder, Oct 15, 2020)
375cf7e  fix isort (alessandrofelder, Oct 15, 2020)
b278687  Don't overwrite other readers' attributes! (ageorgou, Oct 16, 2020)
c07fc67  Check for pointing mode directly in RoiReader (ageorgou, Oct 16, 2020)
270d9d3  Avoid repetition, use new columns for all v3.0.0 (ageorgou, Oct 16, 2020)
8db9623  Tidy up some epochs/trials things (ageorgou, Nov 7, 2020)
16ee127  Use each ROI's limits in its dedicated plane (ageorgou, Nov 8, 2020)
2a9c3ea  Fix some things to get variable ROI import working (ageorgou, Nov 12, 2020)
0104d79  Add a test with variable resolution (ageorgou, Nov 12, 2020)
3528285  Cast some ROI data earlier (ageorgou, Nov 7, 2020)
9afe882  Update datasets to int in some signatures (ageorgou, Nov 11, 2020)
2d8c99a  Avoid some deprecation warnings (ageorgou, Nov 8, 2020)
badb3cf  Update latest signature changes for Travis (ageorgou, Nov 13, 2020)
8115f8f  update signatures for some large files (alessandrofelder, Nov 13, 2020)
023f0f7  wip: getting roi dimensions right (alessandrofelder, Nov 16, 2020)
a06b2a9  wip:futile attempts at accounting for angle or not (alessandrofelder, Nov 16, 2020)
813224f  cast Pixels per ROI correctly (alessandrofelder, Nov 16, 2020)
0e6f167  wip: passes MatLab comparison... (alessandrofelder, Nov 16, 2020)
2416aa0  fix bug in add_rois, remove (possibly) unnecessary get_x_y_range func… (alessandrofelder, Nov 16, 2020)
c62d19c  fix missing newline (alessandrofelder, Nov 16, 2020)
48d4a3f  minor simplification/clarifications (alessandrofelder, Nov 17, 2020)
5b524c7  restore get_x_y_range because needed for pixel mask (alessandrofelder, Nov 17, 2020)
ddbd952  update variable-roi signature (alessandrofelder, Nov 17, 2020)
src/silverlabnwb/nwb_file.py: 109 changes (61 additions, 48 deletions)
@@ -185,6 +185,8 @@ def rel(file_name):
self.read_user_config()
header_fields = self.parse_experiment_header_ini(rel('Experiment Header.ini'))
speed_data, expt_start_time = self.read_speed_data(rel('Speed_Data/Speed data 001.txt'))
+ if expt_start_time is None:
+     expt_start_time = pd.Timestamp.now() # FIXME get experiment start time from header if not present in speed data
Collaborator:
That's an interesting one, because there seems to be some inconsistency in the data! For instance, LabViewData2020/200225_19_09_16 has in the header

Experiment Date Time = "25/02/2020 18:22:17"

but the folder name implies a later start, as does the speed data, and all 3 differ. (Speed data starts:

25/02/2020	19:12:13.160587	4	0.00	332.316)
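For illustration, a minimal sketch of the fallback hinted at by the FIXME above: parse the 'Experiment Date Time' entry from the header when no speed data is available. The helper name, the key access, and the date format are assumptions rather than code from this PR.

    import pandas as pd

    def start_time_from_header(header_fields):
        # Hypothetical fallback, not part of this PR: the header value is
        # assumed to look like "25/02/2020 18:22:17" (day first), as above.
        raw = header_fields['Experiment Date Time'].strip('"')
        return pd.to_datetime(raw, format='%d/%m/%Y %H:%M:%S')

    # e.g. expt_start_time = expt_start_time or start_time_from_header(header_fields)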

localized_start_time = expt_start_time.tz_localize(timezone('Europe/London'))
# Create the NWB file
extensions = ["e-labview.py", "e-pixeltimes.py"]
@@ -248,7 +250,8 @@ def rel(file_name):
"""Return the path of a file name relative to the Labview folder."""
return os.path.join(folder_path, file_name)

- self.add_speed_data(speed_data, expt_start_time)
+ if speed_data is not None:
+     self.add_speed_data(speed_data, expt_start_time)
self.determine_trial_times()
self.add_stimulus()
self.read_cycle_relative_times(folder_path)
@@ -449,17 +452,20 @@ def read_speed_data(self, file_name):
Pandas data table, and initial_time is the experiment start time, which sets the
session_start_time for the NWB file.
"""
- self.log('Loading speed data from {}', file_name)
- assert os.path.isfile(file_name)
- speed_data = pd.read_csv(file_name, sep='\t', header=None, usecols=[0, 1, 2, 3], index_col=0,
-                          names=('Date', 'Time', 'Trial time', 'Speed'),
-                          dtype={'Trial time': np.int32, 'Speed': np.float32},
-                          parse_dates=[[0, 1]], # Combine first two cols
-                          dayfirst=True, infer_datetime_format=True,
-                          memory_map=True)
- initial_offset = pd.Timedelta(microseconds=speed_data['Trial time'][0])
- initial_time = speed_data.index[0] - initial_offset
- return speed_data, initial_time
+ if os.path.isfile(file_name):
+     self.log('Loading speed data from {}', file_name)
+     speed_data = pd.read_csv(file_name, sep='\t', header=None, usecols=[0, 1, 2, 3], index_col=0,
+                              names=('Date', 'Time', 'Trial time', 'Speed'),
+                              dtype={'Trial time': np.int32, 'Speed': np.float32},
+                              parse_dates=[[0, 1]], # Combine first two cols
+                              dayfirst=True, infer_datetime_format=True,
+                              memory_map=True)
+     initial_offset = pd.Timedelta(microseconds=speed_data['Trial time'][0])
+     initial_time = speed_data.index[0] - initial_offset
+     return speed_data, initial_time
+ else:
+     self.log('No speed data found.')
+     return None, None

def add_speed_data(self, speed_data, initial_time):
"""Add acquired speed data the the NWB file.
@@ -512,25 +518,31 @@ def determine_trial_times(self):
There is a short interval between trials which still has speed data recorded, so it's
the second reset which marks the start of the next trial.
"""
- speed_data_ts = self.nwb_file.get_acquisition('speed_data')
+ try: # see if there was speed data in LabView, in which case it's already in our file.
+     speed_data_ts = self.nwb_file.get_acquisition('speed_data')
+ except KeyError: # otherwise, there was no speed data in the LabView folder.
+     speed_data_ts = None
if self.labview_version.is_legacy:
-     self.log('Calculating trial times from speed data')
-     trial_times_ts = self.nwb_file.get_acquisition('trial_times')
-     trial_times = np.array(trial_times_ts.data)
-     # Prepend -1 so we pick up the first trial start
-     # Append -1 in case there isn't a reset recorded at the end of the last trial
-     deltas = np.ediff1d(trial_times, to_begin=-1, to_end=-1)
-     # Find resets and pair these up to mark start & end points
-     reset_idxs = (deltas < 0).nonzero()[0].copy()
-     assert reset_idxs.ndim == 1
-     num_trials = reset_idxs.size // 2 # Drop the extra reset added at the end if
-     reset_idxs = np.resize(reset_idxs, (num_trials, 2)) # it's not needed
-     reset_idxs[:, 1] -= 1 # Select end of previous segment, not start of next
-     # Index the timestamps to find the actual start & end times of each trial. The start
-     # time is calculated using the offset value in the first reading within the trial.
-     rel_times = self.get_times(trial_times_ts)
-     epoch_times = rel_times[reset_idxs]
-     epoch_times[:, 0] -= trial_times[reset_idxs[:, 0]] * 1e-6
+     if speed_data_ts is not None:
+         self.log('Calculating trial times from speed data')
+         trial_times_ts = self.nwb_file.get_acquisition('trial_times')
+         trial_times = np.array(trial_times_ts.data)
+         # Prepend -1 so we pick up the first trial start
+         # Append -1 in case there isn't a reset recorded at the end of the last trial
+         deltas = np.ediff1d(trial_times, to_begin=-1, to_end=-1)
+         # Find resets and pair these up to mark start & end points
+         reset_idxs = (deltas < 0).nonzero()[0].copy()
+         assert reset_idxs.ndim == 1
+         num_trials = reset_idxs.size // 2 # Drop the extra reset added at the end if
+         reset_idxs = np.resize(reset_idxs, (num_trials, 2)) # it's not needed
+         reset_idxs[:, 1] -= 1 # Select end of previous segment, not start of next
+         # Index the timestamps to find the actual start & end times of each trial. The start
+         # time is calculated using the offset value in the first reading within the trial.
+         rel_times = self.get_times(trial_times_ts)
+         epoch_times = rel_times[reset_idxs]
+         epoch_times[:, 0] -= trial_times[reset_idxs[:, 0]] * 1e-6
+     else:
+         raise ValueError("Legacy code relies on having speed data to compute trial times.")
else:
epoch_times = self.trial_times
# Create the epochs in the NWB file
# maybe better thought of as the time of the last junk speed reading.
# We also massage the end time since otherwise data points at exactly that time are
# omitted.
- self.nwb_file.add_epoch_column('epoch_name', 'the name of the epoch')
+ if speed_data_ts is not None:
+     self.nwb_file.add_epoch_column('epoch_name', 'the name of the epoch')
for i, (start_time, stop_time) in enumerate(epoch_times):
assert stop_time > start_time >= 0
-     trial = 'trial_{:04d}'.format(i + 1)
-     self.nwb_file.add_epoch(
-         epoch_name=trial,
-         start_time=start_time if i == 0 else start_time + 1e-9,
-         stop_time=stop_time + 1e-9,
-         timeseries=[speed_data_ts])
+     if speed_data_ts is not None:
+         trial = 'trial_{:04d}'.format(i + 1)
+         self.nwb_file.add_epoch(
+             epoch_name=trial,
+             start_time=start_time if i == 0 else start_time + 1e-9,
+             stop_time=stop_time + 1e-9,
+             timeseries=[speed_data_ts])
Contributor (author):
Can this argument be an empty list? If that doesn't cause an error, it seems simpler to change this argument (e.g. timeseries=[speed_data_ts] if speed_data_ts is not None else []) rather than switching to trials as the "main" interval type.
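For concreteness, the suggestion would look roughly like this (assuming add_epoch accepts an empty timeseries list, which is exactly the point that needs checking):

    # Sketch of the suggested variant; keeps epochs as the main interval type.
    self.nwb_file.add_epoch(
        epoch_name=trial,
        start_time=start_time if i == 0 else start_time + 1e-9,
        stop_time=stop_time + 1e-9,
        timeseries=[speed_data_ts] if speed_data_ts is not None else [])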

# We also record exact start & end times in the trial table, since our epochs
# correspond to trials.
-     self.nwb_file.add_trial(start_time=start_time, stop_time=stop_time)
+     self.nwb_file.add_trial(start_time=start_time if i == 0 else start_time + 1e-9, stop_time=stop_time + 1e-9)
Contributor (author):
Oh, I didn't remember these were different! I wonder if there was a reason for that.

self._write()

def add_stimulus(self):
# TODO We can maybe build puffs and times a bit more efficiently or
# in fewer steps, although it probably won't make a huge difference
# (eg we can now get all the times with a single indexing expression)
- num_epochs = len(self.nwb_file.epochs)
- puffs = [u'puff'] * num_epochs
+ num_trials = len(self.nwb_file.trials)
+ puffs = [u'puff'] * num_trials
times = np.zeros((len(puffs),), dtype=np.float64)
- for i in range(num_epochs):
-     times[i] = self.nwb_file.epochs[i, 'start_time'] + stim['trial_time_offset']
+ for i in range(num_trials):
+     times[i] = self.nwb_file.trials[i, 'start_time'] + stim['trial_time_offset']
attrs['timestamps'] = times
attrs['data'] = puffs
self.nwb_file.add_stimulus(TimeSeries(**attrs))
@@ -611,7 +625,7 @@ def rel(file_name):
timings = LabViewTimingsPre2018(relative_times_path=file_path,
roi_path=roi_path,
dwell_time=self.imaging_info.dwell_time / 1e6)
- elif self.labview_version is LabViewVersions.v231:
+ elif self.labview_version is LabViewVersions.v231 or self.labview_version is LabViewVersions.v300:
Collaborator:
Wouldn't just else be enough here?
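A sketch of that alternative, assuming legacy, v2.3.1 and v3.0.0 are the only versions that can reach this point; the first branch and its condition are placeholders for the existing code, not taken from this file:

    if self.labview_version.is_legacy:
        ...  # pre-2018 handling as above
    else:
        # Any non-legacy version (currently v2.3.1 and v3.0.0) uses the
        # hardware-timed relative times file.
        file_path = rel('Single cycle relative times_HW.txt')
        ...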

file_path = rel('Single cycle relative times_HW.txt')
assert os.path.isfile(file_path)
timings = LabViewTimings231(relative_times_path=file_path,
@@ -660,15 +674,14 @@ def read_functional_data(self, folder_path):
self.log('Loading functional data from {}', folder_path)
assert os.path.isdir(folder_path)
# Figure out timestamps, measured in seconds
- epoch_names = self.nwb_file.epochs[:, 'epoch_name']
- trials = [int(s[6:]) for s in epoch_names] # names start with 'trial_'
+ trials = [s+1 for s in self.nwb_file.trials.id.data]
Contributor (author):
As far as I can see, we only ever care about the number of trials/epochs here, so there's a chance to simplify this a bit!
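A sketch of that simplification, assuming nothing downstream needs the individual trial ids (names follow the diff above; this is not a tested change):

    # Hypothetical simplification: only the trial count is ever used below.
    num_trials = len(self.nwb_file.trials)
    num_times = self.imaging_info.cycles_per_trial * num_trials
    for i in range(num_trials):
        trial_start = self.nwb_file.trials[i, 'start_time']
        # ... fill times[i * cycles_per_trial : (i + 1) * cycles_per_trial] as before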

cycles_per_trial = self.imaging_info.cycles_per_trial
- num_times = cycles_per_trial * len(epoch_names)
+ num_times = cycles_per_trial * len(trials)
single_trial_times = np.arange(cycles_per_trial) * self.cycle_time
times = np.zeros((num_times,), dtype=float)
# TODO Perhaps this loop can be vectorised
- for i in range(len(epoch_names)):
-     trial_start = self.nwb_file.epochs[i, 'start_time']
+ for i in range(len(trials)):
+     trial_start = self.nwb_file.trials[i, 'start_time']
times[i * cycles_per_trial:
(i + 1) * cycles_per_trial] = single_trial_times + trial_start
self.custom_silverlab_dict['cycle_time'] = self.cycle_time