The following results are related to Canada. To view more results, visit OpenAIRE - Explore.
3 Research products, page 1 of 1

  • Canada
  • Research data
  • 2021-2021
  • Canadian Institutes of Health Research
  • ZENODO

  • Open Access English
    Authors: 
    Rochon, Pierre-Luc; Theriault, Catherine; Rangel Olguin, Aline Giselle; Krishnaswamy, Arjun;
    Publisher: Dryad
    Project: NSERC, CIHR

    1) Sample registration data. Stitch2p is a function that registers 2p and 1p images of the retina.

    INPUT: PATH is the string path where the recordings are saved. RANGE is a 1x2 matrix that defines the range of movies to include in the analysis; this works because each recording is assigned a number matching its order (movie_n), e.g., calling [0 3] will analyze movie_0 to movie_3 in the specified path. CENTER is a path containing the blood-vessel pattern acquired during the recording session.

    OUTPUT: ROIS is a struct with all the 2p recording movies and blood vessels used in the stitching. STITCHED is an array with the raw stitched image. TESTIMAGE is a converted 8-bit image with a color multiplier for visualization.

    To use with the provided sample images, open MATLAB and set the path and center variables. For example:
    path = "C:\Downloads\Remapping process files\sample 2p recordings"
    center = "C:\Downloads\Remapping process files\sample 2p recordings\movie_10"
    Stich2p(path,[0 3],center);
    This should display a stitched image and save it, along with a matfile, to the path defined in 'center'.

    Example images: any folder in the "sample 2p recordings" folder contains example files of remapped images along with the original recording data. Remapped images are based on the image in the "confocal ROIs" folder.

    File naming definitions: RoiSet: mask containing the cells of interest. bloodvessels: mask containing the blood vessels for marker intensity calculations. stim_x: individual presented stimuli along with relevant information. expression_23_06_20_mx: CSV file containing the assigned markers. remap: the remapped image from which the markers were assigned. remap_points: landmark points used to make the remapped image; landmark points are listed for the confocal and 2p images.

    2) Visual response data: matfile (*.mat) containing a single struct called 'compiled'. Struct field definitions (a minimal loading sketch follows this record's description):
    mb: struct containing rawTrace: response averaged from 2 presentations of a bar moving in 8 different directions (bar velocity = 1000 um/sec; the 8-bar sequence was preceded by a brief (0.5 s) flash); stimTrace: vector showing stimulus timing; allignedTrace: time x bar array with RGC responses corrected for position; mbAngleOrder: bar direction vector; rawTime: time vector for rawTrace; allignedTime: time vector for allignedTrace.
    ff: struct containing rawTrace: response averaged from 3 presentations of a full-field flash; rawtime: time vector for rawTrace; stimTrace: vector showing stimulus timing.
    mbs: struct ordered the same way as mb; contains data averaged from 2 presentations of a bar moving in 8 different directions (bar velocity = 200 um/sec; the 8-bar sequence was preceded by a brief (0.5 s) flash).
    ROI: struct containing mask: binary mask that defines the RGC; xy: ROI centroid position; ost, Brn3c, nr2, calb, gfp: 8-bit intensity of the indicated marker within the RGC ROI defined by mask; size: ROI area; mrk: marker classification; theta: angular preference computed from the moving-bar stimulus, set relative to retinal orientation; dsi: direction-selectivity index computed from the moving-bar stimulus; osi: orientation-selectivity index computed from the moving-bar stimulus.

    Nearly 50 different mouse retinal ganglion cell (RGC) types sample the visual scene for distinct features. RGC feature selectivity arises from synapses with a specific subset of amacrine cell (AC) and bipolar cell (BC) types, but how RGC dendrites arborize and collect input from these specific subsets remains poorly understood.
    Here we examine the hypothesis that RGCs employ molecular recognition systems to meet this challenge. By combining calcium imaging and type-specific histological stains, we define a family of circuits that express the recognition molecule Sidekick 1 (Sdk1), which includes a novel RGC type (S1-RGC) that responds to local edges. Genetic and physiological studies revealed that Sdk1 loss selectively disrupts S1-RGC visual responses, resulting from a loss of excitatory and inhibitory inputs and selective dendritic deficits on this neuron. We conclude that Sdk1 shapes dendrite growth and wiring to help S1-RGCs become feature selective. Two datasets are provided. 1) Sample registration data: images, code, and ROIs to register two-photon-imaged fields of retinae containing GCaMP6f+ RGCs with the same retinae following staining with antibodies to marker genes for Sdk1 RGC types. 2) Visual response data: responses of Sdk1 RGCs to a full-field flash and moving bar, grouped according to expression of Ost, Brn3c, Nr2f2, and Calbindin.
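    A minimal MATLAB sketch (not part of the deposited code) for loading the visual response matfile and reading the fields defined above; the filename below is a placeholder for the .mat file provided in the dataset:

        S = load('visual_response_data.mat');  % placeholder filename; use the dataset's .mat file
        compiled = S.compiled;                 % the single struct described above
        c = compiled(1);                       % first entry, in case 'compiled' is a struct array
        plot(c.mb.rawTime, c.mb.rawTrace);     % moving-bar response (1000 um/sec condition)
        xlabel('Time'); ylabel('Response');
        r = c.ROI(1);                          % ROI metadata and selectivity indices
        fprintf('DSI = %.2f, OSI = %.2f\n', r.dsi, r.osi);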

  • Open Access
    Authors: 
    Kenney, Justin W.; Steadman, Patrick E.; Young, Olivia; Shi, Meng Ting; Polanco, Maris; Dubaishi, Saba; Covert, Kristopher; Mueller, Thomas; Frankland, Paul W.;
    Publisher: Zenodo
    Project: CIHR

    Subjects: Subjects were AB fish (15-16 weeks of age) of both sexes. Fish were housed in 2 L tanks with 8-12 fish per tank. All fish were bred and raised at the Hospital for Sick Children in high-density racks on a 14:10 light/dark cycle (lights on at 8:30) and fed twice daily with Artemia salina. All procedures were approved by the Hospital for Sick Children Animal Care and Use Committee.

    Sample preparation: Zebrafish were euthanized by anesthesia in 4% tricaine followed by immersion in ice-cold water for five minutes. Animals were then decapitated using a razor blade, and heads were placed in ice-cold PBS for five minutes to let blood drain. Heads were then fixed in 4% PFA overnight, after which brains were carefully dissected into cold PBS and stored at 4 °C until processing for iDISCO+. Brains that were damaged during the dissection process were not used for generating the atlas.

    Tissue staining: Tissue staining and clearing were performed using iDISCO+ (Renier et al., 2016). Samples were first washed three times in PBS at room temperature, followed by dehydration in a series of methanol/water mixtures (an hour each in 20%, 40%, 60%, 80%, 100% methanol). Samples were further washed in 100% methanol, chilled on ice, and then incubated in 5% hydrogen peroxide in methanol overnight at 4 °C. The next day, samples were rehydrated in a methanol/water series at room temperature (80%, 60%, 40%, 20% methanol) followed by a PBS wash and two one-hour washes in PTx.2 (PBS with 0.2% Triton X-100). Samples were then washed overnight at 37 °C in permeabilization solution (PBS with 0.2% Triton X-100, 0.3 M glycine, 20% DMSO) followed by an overnight incubation at 37 °C in blocking solution (PBS with 0.2% Triton X-100, 6% normal donkey serum, and 10% DMSO). Samples were then labelled with TO-PRO3 iodide (TO-PRO) (1 night) or primary antibodies (2-3 nights) via incubation at 37 °C in PTwH (PTx.2 with 10 µg/mL heparin) with 3% donkey serum and 5% DMSO. Samples were then washed at 37 °C for one day with five changes of PTwH. Antibody-stained samples were then incubated with secondary antibodies at 37 °C for 2-3 days in PTwH with 3% donkey serum. For samples labelled with TO-PRO, the secondary antibody labelling step was omitted. Following secondary antibody labelling, samples were again washed at 37 °C in PTwH for one day with five solution changes.

    Tissue clearing: Labelled brains were first dehydrated in a series of methanol/water mixtures at room temperature (an hour each in 20%, 40%, 60%, 80%, 100% (x2) methanol) and then left overnight in 100% methanol. Samples were then incubated at room temperature in 66% dichloromethane in methanol for three hours followed by two 15-minute washes in dichloromethane. After removal of dichloromethane, samples were incubated and stored in dibenzyl ether until imaging.

    Imaging: All imaging was done on a LaVision UltraMicroscope I. Samples were mounted using an ultraviolet-curing resin (adhesive 61 from Norland Optical, Cranbury, NJ) whose refractive index (1.56) matched the imaging solution, dibenzyl ether. Images were acquired in the horizontal plane at 4X magnification.

    Image processing: Datasets from light-sheet imaging were stitched using Fiji's (NIH) Grid Stitching extension (Preibisch et al., 2009) and converted to a single stack corresponding to the z-axis. All image processing steps were run on a Linux workstation with 64 GB of RAM and a 12-core Intel processor. Each stack was converted to a 4 µm isotropic image using custom Python code, with separate files for the autofluorescence channel and for the antibody or TO-PRO channels. These images were resampled to 8 µm isotropic due to system constraints during the image registration stages.
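    A minimal MATLAB sketch of the isotropic-resampling idea described above (the authors used custom Python code; the voxel sizes and volume below are placeholders, not values from the dataset):

        vox     = [2 2 4];                           % placeholder acquired voxel size [rows cols planes], in um
        V       = rand(256, 256, 100, 'single');     % placeholder stitched stack
        target  = 8;                                 % desired isotropic voxel size, in um
        newSize = round(size(V) .* vox ./ target);   % output grid giving ~8 um spacing in every dimension
        Viso    = imresize3(V, newSize);             % requires the Image Processing Toolbox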
    Registration: The TO-PRO and autofluorescence signals were acquired on an initial dataset of 17 samples. To create the initial average, we used image registration to align the TO-PRO images in a parallel, group-wise fashion. Variability was expected to be lower for TO-PRO because these images contained more contrast than the autofluorescence images. The creation of an initial average of the adult zebrafish brain was accomplished using the TO-PRO channel of the 17 samples. This was done with a 3-step registration process, similar to prior work (Lerch et al., 2011), using the pydpiper pipeline framework (Friedel et al., 2014) and the minctracc registration tool (Collins and Evans, 1997). A single sample was taken at random and the 17 samples were registered to it using a 6-parameter linear alignment (LSQ6). This yielded 17 samples in a similar orientation, allowing a 12-parameter linear registration (LSQ12) to be performed in a pair-wise fashion (each sample is paired with all the other samples, to avoid sample bias); the final output of these 12-parameter registrations was a group average, representing a linearly registered average adult zebrafish brain. This average was then used as the target for non-linear registration with each of the linearly registered 17 TO-PRO samples. The non-linear alignment was repeated successively with smaller step sizes and blurring kernels to yield an average with minimal bias from any one sample brain. We then mirrored this average along the long axis of the brain and repeated the registration process described above, but using the mirrored brain rather than a random brain as the 6-parameter target. The result of this second pipeline was an average brain in which each anatomical plane (coronal, sagittal, horizontal) is parallel to the imaging planes (x, y, z). This final average brain represented the starting point of the atlas.

    The linear and non-linear transformations created in the registration pipeline were used to resample the 4 µm isotropic TO-PRO and autofluorescence images to the atlas space, yielding an average signal for each channel. The autofluorescence signal was used to register other sample datasets with the atlas because it is common across all datasets. To add cellular markers that better delineate structures and to examine their distribution across the brain, we converted all images and their channels to 4 µm isotropic images as described above. We then converted them to 8 µm isotropic and used the autofluorescence channel of each set to run the above registration pipeline (LSQ6, LSQ12, and non-linear); the initial target was the autofluorescence average created with the TO-PRO dataset described above. Following each registration pipeline, the transformations were used to resample each autofluorescence and cellular marker channel to the atlas at a resolution of 4 µm isotropic.

    To assess registration precision using TO-PRO or autofluorescence images, for each signal we identified 6 landmarks in the atlas and their corresponding locations in 8 different image sets. These points were then brought into atlas space using the transformations from the registration process. We then computed the Euclidean distance between the points in the atlas image and in the transformed images for the TO-PRO and autofluorescence signals. Precision data are presented as mean ± standard deviation unless otherwise indicated.
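    The precision measure above reduces to a per-landmark Euclidean distance followed by a mean and standard deviation; a minimal MATLAB sketch under that reading, with illustrative placeholder coordinates (the variable names are not from the dataset):

        % Corresponding landmark coordinates (um), one row per landmark point
        atlasPts  = [100 200 300; 150 250 350; 120 220 320];   % landmarks identified in the atlas
        samplePts = atlasPts + 8*randn(size(atlasPts));        % same landmarks after transformation to atlas space
        d = sqrt(sum((samplePts - atlasPts).^2, 2));           % Euclidean distance per landmark
        fprintf('Registration precision: %.1f +/- %.1f um (mean +/- SD)\n', mean(d), std(d));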
    Segmentation: Segmentation was performed using ITK-SNAP, a freely available software package for working with multimodal medical images that enables side-by-side viewing of 3D images registered into the same anatomical space (Yushkevich et al., 2019). Segmentation was primarily guided by comparing TO-PRO nuclear-stained images to the cresyl violet stain of the original atlas (Wullimann et al., 1996). Boundaries of nuclei were often determined using the TO-PRO stain in conjunction with a neuronal marker (HuC/D) and other antibody stains as needed. Terminology largely follows that of the original atlas, with the exception of motor nuclei (Mueller et al., 2004) and the telencephalon (Porter and Mueller, 2020).

    Images are best viewed using ITK-SNAP (http://www.itksnap.org/pmwiki/pmwiki.php); video tutorial: https://youtu.be/uVLqFJd4LDk. Basic usage and viewing after installing and opening ITK-SNAP:
    File --> Open main image, then select 20180219_topro_average_2020.nii.gz.
    To add the segmentation: Segmentation --> Open segmentation, then select 2021-08-21_zfish_segmentation.nii.gz (or the latest segmentation file).
    To add the correct labels and colors to the segmentation: Segmentation --> Import label description, then select 2021-08-21_Label_descriptions.txt (or the label file whose date matches the segmentation loaded in the previous step).
    You can now explore the atlas in the coronal, horizontal, and sagittal planes.
    To add an additional image alongside the main image loaded above (e.g., the tyrosine hydroxylase stains): File --> Add another image, then select 20180505_TH_average.nii.gz (or an image set of your choice).
    To view images side by side you may need to select the tiled layout: Edit --> Layers --> "Enter tiled layout"; alternatively, press the "\" key to toggle between layouts.
    Navigating the atlas and segmentation is most easily done using "crosshair mode" (press "1") and/or "zoom/pan mode" (press "2"); these are the first two selections in the main toolbar in the top left corner.
    The mouse wheel scrolls through the image slices, and a right mouse click zooms in/out. In crosshair mode, a left mouse click moves the crosshair; the abbreviation for the label under the cursor is shown on the left under "Label under cursor:". In zoom/pan mode, holding the left mouse button lets you move the images around.
    Other useful commands: "x" toggles the segmentation; "c" centers the images on the crosshair location in all three planes simultaneously.
    (For scripted access to the same files, see the MATLAB sketch at the end of this record's description.)

    Zebrafish have made significant contributions to our understanding of the vertebrate brain and the neural basis of behavior, earning a place as one of the most widely used model organisms in neuroscience. Their appeal arises from the marriage of low cost, early-life transparency, and ease of genetic manipulation with a behavioral repertoire that becomes more sophisticated as animals transition from larvae to adults. To further enhance the use of adult zebrafish, we created the first fully segmented three-dimensional digital adult zebrafish brain atlas (AZBA). AZBA was built by combining tissue clearing, light-sheet fluorescence microscopy, and three-dimensional image registration of nuclear and antibody stains.
    These images were used to guide segmentation of the atlas into over 200 neuroanatomical regions comprising the entirety of the adult zebrafish brain. As an open-source, online (azba.wayne.edu), and updatable digital resource, AZBA will significantly enhance the use of adult zebrafish in furthering our understanding of vertebrate brain function in both health and disease.
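    A minimal MATLAB sketch for scripted loading of the files listed above, assuming the Image Processing Toolbox's niftiread is available and the files have been downloaded locally (an alternative to the ITK-SNAP workflow the authors recommend):

        % Load the nuclear-stain average and the segmentation labels (filenames from the record above)
        avg = niftiread('20180219_topro_average_2020.nii.gz');
        seg = niftiread('2021-08-21_zfish_segmentation.nii.gz');
        % View the middle slice along the third dimension, with its segmentation labels
        z = round(size(avg, 3) / 2);
        figure; imshow(mat2gray(avg(:, :, z)));          % average image slice
        figure; imagesc(seg(:, :, z)); axis image;       % corresponding segmentation labels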

  • Restricted
    Authors: 
    Magri, Stefania; Di Bella, Daniela; Taroni, Franco;
    Publisher: Zenodo
    Project: CIHR

    Next-generation sequencing data from leukodystrophy gene panel analysis and segregation study data. Study supported by the Italian Ministry of Health (grant numbers GR2016_02363337 and RF2016_02361285).
