Merge Tables¶
Intro¶
Developer Note: if you may make a PR in the future, be sure to copy this notebook, and use the gitignore prefix temp to avoid future conflicts.
This is one notebook in a multi-part series on Spyglass.
- To set up your Spyglass environment and database, see the Setup notebook
- To insert data, see the Insert Data notebook
- For additional info on DataJoint syntax, including table definitions and inserts, see these additional tutorials
- For information on why we use merge tables, and how to make one, see our documentation
In short, merge tables represent the end processing point of a given way of processing the data in our pipelines. Merge Tables allow us to build a new processing pipeline, or a new version of an existing pipeline, without having to drop or migrate the old tables. They allow data to be processed in different ways, but with a unified end result that downstream pipelines can all access.
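To make the idea concrete, here is a minimal plain-Python sketch of the merge-table pattern, independent of DataJoint. The table and field names (`merge_id`, `source`, the two upstream dicts) are illustrative only and do not reflect the actual Spyglass schema: each merge row records which upstream "part" holds the data, so downstream code can fetch through a single interface.

```python
# Conceptual sketch of a merge table in plain Python (not DataJoint).
# Names here are illustrative, not the real Spyglass schema.
import uuid

# Two upstream "tables" that process the same data in different ways.
common_lfp = {"nwb_file_name": "minirec_.nwb", "lfp_data": [1, 2, 3]}
lfp_v1 = {"nwb_file_name": "minirec_.nwb", "lfp_data": [1.0, 2.0, 3.0]}

# The merge table: each row records which source (part) holds the data.
merge_table = []


def merge_insert(source, key):
    """Add a merge row pointing at one upstream source."""
    merge_table.append({"merge_id": uuid.uuid4(), "source": source, "key": key})


def merge_fetch(merge_id):
    """Fetch data through the merge table without knowing which
    pipeline produced it -- the unified downstream access point."""
    sources = {"CommonLFP": common_lfp, "LFPV1": lfp_v1}
    row = next(r for r in merge_table if r["merge_id"] == merge_id)
    return sources[row["source"]]["lfp_data"]


merge_insert("CommonLFP", common_lfp)
merge_insert("LFPV1", lfp_v1)
```

A downstream consumer only ever calls `merge_fetch`; whether the data came from the old or the new pipeline is recorded in the merge row itself.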
Imports¶
Let's start by importing the spyglass package, along with a few others.
import os
import datajoint as dj
# change to the upper level folder to detect dj_local_conf.json
if os.path.basename(os.getcwd()) == "notebooks":
os.chdir("..")
dj.config.load("dj_local_conf.json") # load config for database connection info
# ignore datajoint+jupyter async warnings
import warnings
warnings.simplefilter("ignore", category=DeprecationWarning)
warnings.simplefilter("ignore", category=ResourceWarning)
warnings.simplefilter("ignore", category=UserWarning)
import spyglass.common as sgc
import spyglass.lfp as lfp
from spyglass.utils.nwb_helper_fn import get_nwb_copy_filename
from spyglass.utils.dj_merge_tables import delete_downstream_parts, Merge
from spyglass.common.common_ephys import LFP as CommonLFP # Upstream 1
from spyglass.lfp.lfp_merge import LFPOutput # Merge Table
from spyglass.lfp.v1.lfp import LFPV1 # Upstream 2
[2024-01-29 16:15:00,903][INFO]: Connecting root@localhost:3309
[2024-01-29 16:15:00,912][INFO]: Connected root@localhost:3309
Example data¶
Check to make sure the data inserted in the previous notebook is still there.
nwb_file_name = "minirec20230622.nwb"
nwb_copy_file_name = get_nwb_copy_filename(nwb_file_name)
nwb_file_dict = {"nwb_file_name": nwb_copy_file_name}
sgc.Session & nwb_file_dict
| nwb_file_name | subject_id | institution_name | lab_name | session_id | session_description | session_start_time | timestamps_reference_time | experiment_description |
|---|---|---|---|---|---|---|---|---|
| minirec20230622_.nwb | 54321 | UCSF | Loren Frank Lab | 12345 | test yaml insertion | 2023-06-22 15:59:58 | 1970-01-01 00:00:00 | Test Conversion |

Total: 1
If you haven't already done so, insert data into a Merge Table.
Note: Some existing parents of Merge Tables perform the Merge Table insert as part of the populate methods. This practice will be revised in the future.
sgc.FirFilterParameters().create_standard_filters()
lfp.lfp_electrode.LFPElectrodeGroup.create_lfp_electrode_group(
nwb_file_name=nwb_copy_file_name,
group_name="test",
electrode_list=[0],
)
lfp_key = {
"nwb_file_name": nwb_copy_file_name,
"lfp_electrode_group_name": "test",
"target_interval_list_name": "01_s1",
"filter_name": "LFP 0-400 Hz",
"filter_sampling_rate": 30_000,
}
lfp.v1.LFPSelection.insert1(lfp_key, skip_duplicates=True)
lfp.v1.LFPV1().populate(lfp_key)
LFPOutput.insert([lfp_key], skip_duplicates=True)
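An insert like the one above must decide which part table the key belongs to. The following plain-Python sketch illustrates one way such dispatch can work, by checking which part's primary key the supplied key satisfies; the part names and field sets here are illustrative assumptions, and the real logic lives in `spyglass.utils.dj_merge_tables`.

```python
# Sketch of routing a key to a matching merge part by primary key.
# Part names and required fields are illustrative, not the real schema.
part_primary_keys = {
    "LFPV1": {
        "nwb_file_name",
        "lfp_electrode_group_name",
        "target_interval_list_name",
        "filter_name",
        "filter_sampling_rate",
    },
    "CommonLFP": {"nwb_file_name", "lfp_sampling_rate"},
}


def resolve_part(key: dict) -> str:
    """Return the single part whose primary key fields all appear in `key`."""
    matches = [
        part
        for part, fields in part_primary_keys.items()
        if fields <= key.keys()
    ]
    if len(matches) != 1:
        raise ValueError(f"Ambiguous or no matching part: {matches}")
    return matches[0]


lfp_key = {
    "nwb_file_name": "minirec20230622_.nwb",
    "lfp_electrode_group_name": "test",
    "target_interval_list_name": "01_s1",
    "filter_name": "LFP 0-400 Hz",
    "filter_sampling_rate": 30_000,
}
```

Here `lfp_key` carries every primary-key field of the hypothetical `LFPV1` part but not of `CommonLFP`, so the dispatch is unambiguous.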