Channel: NITRC CONN : functional connectivity toolbox Forum: help

CONN workshop

Hi everyone,

Just a heads up to let you know that the [url=https://web.conn-toolbox.org/workshops]CONN workshop [/url]will be starting in a couple of weeks (on Friday April 16th, 2021) and there is still time to subscribe if you are interested.

As always, the workshop offers intensive, hands-on, and highly interactive courses covering all aspects of functional connectivity analyses in CONN, and both beginners and advanced users are welcome. While we have traditionally held this workshop at MGH in Boston over 5 days in a single week, this year we are organizing it online: classes will take place over Zoom on Fridays (approximately 9am-5pm EDT, UTC−04:00) for five consecutive weeks (starting on April 16th and ending on May 14th), with a bit of homework between consecutive weeks.

For more details and registration see [url=https://education.martinos.org/home/using-the-conn-toolbox-for-functional-connectivity-analysis/]https://education.martinos.org/home/using-the-conn-toolbox-for-functional-connectivity-analysis/[/url]

Best
Alfonso

RE: WARNING: possibly incorrect model

Hey Alfonso,

I am following up on the previous comment. I have reattached the Warning Info doc, now with more variable information. The contrasts creating warning problems (12 in total, 5 of which are included in the document) are listed below; the 'Med' variants add a covariate quantifying anti-hallucinatory medication use.

Current Hallucination (and Med)
Past Hallucination (and Med)
Never Hallucination (and Med)
HC & PTS (and Med)   (HC -> healthy controls, PTS -> patients)
PTS (and Med)
HC
All   (all subjects)

Thank you for your time,
Mikey

How to keep subject name as input file name when importing instead of "Subject 1"

Our lab has just started using CONN (via the GUI). We can't figure out how to keep our participants' names/5-digit codes when importing their data: the subjects automatically show up as "Subject 1", "Subject 2", etc., and we need to track subject characteristics within CONN. We could assume the data are imported in the order they are listed in the directory, but that seems like a big assumption, and I'd rather the subjects not be renamed generically on import. Any help is appreciated!

RE: seed to voxel and roi to roi analysis, networks

[color=#000000]Dear Agurne,[/color]

All are perfectly reasonable options, but if you select the seeds individually then you should apply an additional multiple-comparison correction (e.g. Bonferroni on the cluster-level p-threshold) to take into account the multiple analyses that you are performing. Instead, the more standard approach is to select all seeds simultaneously (because this allows you to make inferences about the presence of effects in [i]any [/i]of the selected seeds), and then use post-hoc analyses to determine which among the selected seeds may be responsible for each of the effects that you are finding (for example by clicking on the '[b]plot effects[/b]' button, the one with an image of a barplot). These post-hoc analyses allow you to better characterize and interpret the effects that you are finding (e.g. is the effect present across all of the selected seeds or just among a subset?), but they are restricted to clusters that were already significant in the primary analysis.
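
For reference, a minimal MATLAB sketch of that Bonferroni adjustment (the number of seeds and the alpha level are placeholders):

% Bonferroni adjustment when each seed is analyzed in a separate analysis
n_seeds        = 4;                        % number of seeds analyzed individually (placeholder)
alpha_nominal  = 0.05;                     % desired family-wise error rate across the set of analyses
alpha_per_seed = alpha_nominal / n_seeds;  % use this value as the cluster-level p threshold in each analysis
fprintf('Use a cluster-level threshold of p < %.4f in each of the %d seed analyses\n', alpha_per_seed, n_seeds);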

Best
Alfonso

[i]Originally posted by Agurne Sampedro:[/i][quote]Dear CONN toolbox experts,

I am interested in analyzing the connectivity within different networks (DMN, DAN, ECN, and the salience network). However, I don't understand how I should carry out the analysis. Would it be more appropriate to do a seed-to-voxel or an ROI-to-ROI analysis?


In the case of an ROI-to-ROI analysis, I understand that I could select just the ROIs of a particular network and analyze each network separately.

However, in the case of a seed-to-voxel analysis, I don't know how I should select the seeds/sources.
Could I select them individually? For example, first select the MPFC of the DMN and see the connections to this region, then select the LP of the DMN and see the connections to this region, etc. Or should I select all the seeds from the same network at the same time?

The problem is that if I select several seeds at the same time, I don't know which seed a significant association corresponds to. For example, if I select the PPC (L) seed I find a significant negative connection to the inferior frontal gyrus. However, if I select the PPC (L) and LPFC (L) seeds at the same time, I get significant positive connections with the subcallosal cortex, but I do not know how to see which of the two seeds is connected with this subcallosal area. I do not understand how I should carry out these analyses and how to interpret them.

Thank you very much in advance.

Best regards,

Agurne[/quote]

RE: Can I load corrected_BETA*files into CONN again to extract the corrected_fALFF values for specific regions?

[color=#000000]Sure, that should be no problem at all: simply overwrite the original BETA* files with the corrected ones, and any second-level analyses will then use those corrected fALFF volumes instead.[/color]
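
[color=#000000]For example, a minimal MATLAB sketch of that overwrite step (the folder and the 'corrected_' prefix are assumptions based on the description below, not CONN conventions):[/color]

% Overwrite the original BETA* files with the atrophy-corrected versions
resultsdir = 'my_BETA_folder';                                      % placeholder: folder containing the BETA* files
corrected  = dir(fullfile(resultsdir, 'corrected_BETA_*.nii'));
for i = 1:numel(corrected)
    original = fullfile(resultsdir, strrep(corrected(i).name, 'corrected_', ''));
    copyfile(fullfile(resultsdir, corrected(i).name), original);   % replace the original volume
end
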
[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by uwcar:[/i][quote]Dear CONN toolbox experts,

I performed a fALFF analysis with CONN. I then corrected the resulting BETA* files for brain atrophy with an SPM script and now wish to extract the corrected fALFF values for the left and right hippocampus. Is it possible to load the corrected_BETA* files into CONN and extract the values as usual?

Thank you very much in advance!

Kind regards,
Carole[/quote]

ROIs data extraction

Hello CONN users,
These days I'm working on my first fMRI data analysis using the CONN toolbox. One of the things I am interested in is time-series extraction from ROIs defined by an atlas. However, my 150 time points represent a "prolonged" task, which means the processing is quite similar to resting-state data processing. Accordingly, for my 14 subjects I defined one session with "one condition that spans entire session". Surprisingly, my output files include the following two: ROI_Subject001_Condition00[b]0[/b].mat and ROI_Subject001_Condition00[b]1[/b].mat. I would like to ask [b]why[/b] I got [b]two files[/b] for the [b]one condition[/b] described above.
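
For reference, a generic MATLAB sketch for listing what each of the two files contains (file paths are placeholders; adjust them to wherever CONN wrote these files):

% List the variables stored in each of the two extraction files
f0 = 'ROI_Subject001_Condition000.mat';
f1 = 'ROI_Subject001_Condition001.mat';
whos('-file', f0)
whos('-file', f1)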

Best,
Nir

ROI-to-ROI gPPI analyses

Dear CONN toolbox experts,

Hi, I'm learning how to use CONN (version 19) for ROI-to-ROI gPPI analyses. Although I have studied the manual carefully and watched the data analysis videos shared by others, I am still not sure about some details. My questions are as follows.

1. If I have only one experimental condition, should I select ROI-to-ROI gPPI analysis or functional connectivity (weighted GLM) in the analysis type section? I have found that most of the previous studies only perform gPPI analysis when there are multiple experimental conditions.

2. If ROI-to-ROI gPPI analysis is possible, should I select "regression (bivariate)" in the analysis options section when there is only one experimental condition? Is it possible to select "correlation (bivariate)"?

Thank you very much in advance.

Best regards,

Ling

Conjunction in CONN

Dear experts,

I am conducting a task-FC analysis in CONN and would like to find the connectivity effects at the network level that are common to two groups. However, it seems that the contrast [1 0 0; 0 1 0; 0 0 1] reflects an OR conjunction rather than an AND conjunction. Are there any other contrasts that I could consider?

Your help will be much appreciated! Thank you in advance!

Best,
Gucy

RE: Error message - Reference to non-existent field 'X1'.

+1 - I am also getting this error consistently when attempting to run rsfMRI connectivity analyses with an imported atlas - any insight would be appreciated!

ERROR DESCRIPTION:

Reference to non-existent field 'X1'.
Error in conn (line 7347)
[CONN_h.menus.m_analyses.X,CONN_h.menus.m_analyses.select,names]=conn_designmatrix(CONN_x.Analyses(ianalysis).regressors,CONN_h.menus.m_analyses.X1,[],{nregressors,nview});
Error in conn (line 6401)
conn gui_analyses;
Error in conn (line 6639)
conn('gui_analysesgo',1);
CONN20.b
SPM12 + DEM FieldMap MEEGtools
Matlab v.2020b
project: CONN20.b
storage: 939.2Gb available

reference to non-existent field?

Hello, 

I'm attempting to develop a pipeline for ROI-to-ROI analyses on 5 subjects (the full dataset is 200 subjects). I have successfully imported the preprocessed functional and structural files using Tools > From FMRIPrep dataset.

I have imported the Schaefer400 parcellation as my atlas file.  
The only preprocessing step I've executed is functional smoothing with a 6mm Gaussian kernel.  

Setup and Denoising run successfully; however, when I get to the 1st-level analysis step I am consistently presented with two error messages stating that there is a reference to a non-existent field, either 'Y' or 'X1'.

Example of errors below: 

ERROR DESCRIPTION:

Reference to non-existent field 'Y'.
Error in conn (line 7458)
if isfield(CONN_h.menus.m_analyses.Y,'issurface')&&CONN_h.menus.m_analyses.Y.issurface, issurface=true; else issurface=false; end
CONN20.b
SPM12 + DEM FieldMap MEEGtools
Matlab v.2020b
project: CONN20.b
storage: 939.2Gb available

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

ERROR DESCRIPTION:

Reference to non-existent field 'X1'.
Error in conn (line 7347)
[CONN_h.menus.m_analyses.X,CONN_h.menus.m_analyses.select,names]=conn_designmatrix(CONN_x.Analyses(ianalysis).regressors,CONN_h.menus.m_analyses.X1,[],{nregressors,nview});
Error in conn (line 6401)
conn gui_analyses;
Error in conn (line 6639)
conn('gui_analysesgo',1);
CONN20.b
SPM12 + DEM FieldMap MEEGtools
Matlab v.2020b
project: CONN20.b
storage: 939.2Gb available


###########################################################################################

May I ask what may be causing this error?
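
Not a fix, but in case it helps to gather more information: a generic MATLAB debugging sketch (standard MATLAB functionality, not CONN-specific):

% Stop in the debugger at the point where the error is thrown
dbstop if error
conn    % relaunch the GUI and reproduce the error
% once stopped, inspecting the menu state may help, e.g.:
% fieldnames(CONN_h.menus.m_analyses)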

Postdoctoral opportunity at Brown University

Dear colleagues,

Brown University's Department of Psychiatry and Human Behavior is seeking an outstanding postdoctoral candidate with a strong background in neuroscience, psychology, or cognitive science to join a new research core focused on clinical neuroimaging. This position is ideal for scientists seeking to transition from basic to clinical neuroscience.

The fellow will gain exposure to research related to a broad range of psychological disorders, psychiatric illnesses, and treatment modalities. Our growing portfolio includes research centered on suicide and self-harm, impulsivity and sensation seeking, risk taking in youths, and mechanisms of treatment response.

In addition, Brown's exceptionally collaborative research environment provides a wealth of opportunities for pilot funding that can help successful fellows transition to a subsequent faculty appointment and/or core leadership role.

Candidates should have 3+ years of graduate or postdoctoral experience in human neuroimaging (e.g., task fMRI, rsfMRI, MRI, diffusion), experience using command-line tools, and proficiency in a relevant programming language (e.g., Python, MATLAB). Candidates must be U.S. citizens or have permanent resident status.

To learn more about the position, please see https://www.brown.edu/clinical-psychology-training/sites/clinical-psychology-training/files/21-22%20Neuro%20Research%20Clinical%20Neuroimaging%20-%20Barredo.pdf

Interested candidates are invited to submit an application through Brown's fellowship portal:
https://www.brown.edu/clinical-psychology-training/postdoctoral-fellowship/postdoctoral-fellowship-applicants/application

Look forward to meeting you!

Jennifer Barredo, PhD

Adding ROIs post analysis

Hello,

I would like to extract FC values from new ROIs. I previously completed the CONN analysis pipeline with a large set of data, and now have new ROIs that I would like to extract values from. Is it possible to extract from these ROIs without going through the whole processing pipeline again? As I understand it now, there is no way to do this other than adding the new ROIs and re-processing all of the data.

Thank you,
Mikey

2nd level covariates per volume, per person?

Hello,
I'm wondering if it is possible to get 2nd-level covariate values (like CSF and white matter values) per volume, per person. I'm trying to make a text file per participant in order to include these values in another analysis, and for that I need one value per volume for each individual.
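
For the export step itself, a minimal MATLAB sketch, assuming the per-volume values for a subject are already available as a vector (the variable names and values below are placeholders):

% Write one text file per participant, one value per line (one line per volume)
subject_id        = 'sub-001';          % placeholder subject label
per_volume_values = randn(150, 1);      % replace with the actual [nVolumes x 1] values for this subject
writematrix(per_volume_values, [subject_id '_per_volume.txt']);
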
Thank you.

Thresholding in CONN for ROI to ROI analysis

Dear CONN experts,

I am very new to the CONN toolbox, and this is the first time I'm using it for the analysis of resting-state fMRI data.
I was successful up to the second-level analysis and got stuck there.

My queries are below:
1) These are the settings I used in the regression analysis for controls to check the effect of urban upbringing: I selected age and gender as covariates and kept the urbanicity index as the variable of interest, i.e. [0 0 0 1], where the first three 0's correspond to controls, age, and sex, and the 1 to the urbanicity index. Does this make sense?

2) In the 'Define Thresholds' box I selected both "threshold ROI-to-ROI connections (by intensity)" and "threshold Seed ROIs (F-test)", as shown in the figure below. I just wanted to know the real meaning of using two thresholds. Also, what happens if I use only the first threshold and uncheck the second one?
Or is this a completely wrong way of doing the thresholding?

Please educate me more in this regard!

Thanks,
Korann

Uploading ROI masks

Dear Alfonso
In the newest version of CONN, how can I upload ROI files? In the Setup tab, I only see an option for creating ROIs from MNI coordinates but I have anatomical ROIs from a previous study that I would like to use. 
Thank you, 
Amy Roy

RE: Smoothing kernel in CONN

[color=#000000]Hi,[/color]

[color=#000000]You may check your project log for that information (from the GUI select 'Tools. Log History', or manually open the logfile.txt file in your conn_* project folder). Typically, though, for ROI-to-ROI analyses the chosen smoothing level does not matter, as the ROIs are typically extracted from the unsmoothed functional data (you may check these settings in the [i]Setup.ROIs [/i]tab, selecting your ROIs and looking at the option that reads something along the lines of '[i]computing average timeseries... from secondary dataset #2 (unsmoothed volumes)[/i]')[/color]
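
[color=#000000]If useful, a small MATLAB sketch for searching the project log from the command line (the project path and search string are guesses; adapt them to your own logfile.txt):[/color]

% Search the project log for lines mentioning smoothing
logtxt   = fileread(fullfile('conn_project01', 'logfile.txt'));   % placeholder project folder
loglines = regexp(logtxt, '[^\n]*[Ss]mooth[^\n]*', 'match');
fprintf('%s\n', loglines{:});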

[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by vst:[/i][quote]I'm a new user in CONN. I was recently shown how to run an ROI - ROI bivariate correlation analysis for a project examining functional connectivity among different brain regions during an fMRI task. Does anyone happen to know how I can retrospectively check what smoothing kernel was used during the preprocessing stage of this analysis? I believe the default in CONN is 8mm, but we want to make sure this wasn't manually changed during the set-up. Many thanks in advance![/quote]

RE: Scrubbing & task-based fMRI

[color=#000000]Hi Natasha,[/color]

[color=#000000]Scrubbing is performed by regressing out the effect of the outlier scans during denoising (rather than by "eliminating" those invalid scans, since, as you mention, that would create problems down the line by affecting the timing of the data), so once you have the denoised data it is generally not considered necessary to include any further information about invalid scans in your 1st-level analyses. [/color]
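
[color=#000000]To illustrate the idea with generic GLM arithmetic (a toy sketch, not CONN's internal code): regressing out one indicator regressor per outlier scan removes the influence of those scans while keeping the timing intact.[/color]

% Toy example: regress out outlier scans instead of deleting them (timing is preserved)
nscans   = 150;
y        = randn(nscans, 1);                 % one denoised timeseries (placeholder data)
outliers = [12 47 98];                       % indices of the invalid scans (placeholder)
S        = zeros(nscans, numel(outliers));   % one indicator column per outlier scan
S(sub2ind(size(S), outliers, 1:numel(outliers))) = 1;
y_clean  = y - S * (S \ y);                  % project out the outlier indicators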

[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by Natasha Mason:[/i][quote]Hi all,
I have an analysis I want to run on task-based fMRI, but I am a bit confused. I scrubbed my volumes during pre-processing and now don't understand what happens to the timing of my task data. I.e., if a volume is not valid, is it removed from the scan, so that my task timing is off? If it is not removed, how is it taken into account? I'm not sure how to model this at the 1st level.
Thank you,
Natasha[/quote]

RE: extracting connectivity values in CONN vs SPM

[color=#000000]Hi Alexandra,[/color]

[color=#000000]They are very similar but not identical. I believe the main difference is that SPM extracts the [b]1st eigenvariate[/b] from the selected voxels (the 1st component from a Singular Value Decomposition of the data, i.e. a weighted combination of the values across all voxels, with the weights chosen so that the resulting variable explains the maximum possible variance across all voxels), while CONN simply extracts the [b]average[/b] across the selected voxels. That said, if the region is relatively small and/or homogeneous, the 1st eigenvariate is in practice very similar to the average, while if the region is large or non-homogeneous these two approaches will differ more clearly. [/color]
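
[color=#000000]As a toy illustration of the difference (synthetic data, not either tool's actual extraction code):[/color]

% Compare the 1st eigenvariate (SVD-based) and the simple average over voxels
t         = randn(200, 1);                   % shared "true" ROI signal over time
Y         = t + 0.5 * randn(200, 50);        % [timepoints x voxels]: homogeneous ROI (shared signal + noise)
[U, S, ~] = svd(Y - mean(Y, 1), 'econ');     % center each voxel over time, then SVD
eig1      = U(:, 1) * S(1, 1);               % 1st eigenvariate (up to sign/scaling conventions)
avg       = mean(Y, 2);                      % simple average across voxels
corrcoef(eig1, avg)                          % close to +/-1 here; less so for large/heterogeneous regions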

[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by Alexandra Muratore:[/i][quote]Hi all,

When reviewing 2nd-level results in CONN, my understanding is that you can extract individual connectivity values by opening the results in "Results Explorer" and then selecting "Import Values" > "Other clusters of interest or ROIs (select mask/ROI file)", and then importing these values. Assuming this is correct, how is this different from opening the SPM.mat file representing the same contrast and extracting first-level eigenvariates using a mask? Will these result in the same values or different ones, and if so, how are they different?

Thanks![/quote]

White matter dimensions denoising step

Hi Conn users!!

I am wondering how "correct" it is to change the number of dimensions of the white matter or CSF confounds in my data. For a couple of subjects the distribution curve in the denoising step is shifted a bit to the left or to the right (i.e. shows more error), so I changed the white matter dimensions from 5 to 10, and now my data show activations in places they didn't before, while some activations that were present with 5 dimensions no longer show up with 10.
I have also seen that when you change the dimensions, the change applies not just to one subject but to all of them.
So my main question is whether it is correct to make those changes in this situation, and how valid this would be when publishing.

The second part of this question is that I have applied a mask in the Setup.Options tab to restrict my analysis. With this (frontal lobe) mask, the distribution curve in the denoising step does not look like it did with the whole-brain mask: there is less error. This is understandable since the mask is smaller, but if I saw some potentially interesting significant results with the whole-brain mask and 10 white matter dimensions, should I keep 10 dimensions now as well? Because if I use 5, the results are not the ones I expect (based on the previous whole-brain setting with 10 white matter dimensions).

Thank you very much in advance!

RE: WARNING: possibly incorrect model

[color=#000000]Hi Mikey,[/color]

Yes, those between-subjects contrasts are likely all non-estimable due to a range of issues, from some of your models having multiple redundancies (e.g. in the PTS model, the Proband and OnlyProband regressors are likely identical within the PTS-group subjects) to the control covariates not having been centered (e.g. what is the zero-level of the "site" covariate "boston/chicago/dallas/etc"?)

I would recommend the following:

1) first center (i.e. subtract the average across all subjects from) any control covariates, e.g. GoodQA2, mFDpower, Age, Sex, boston, chicago, dallas, georgia (alternatively you could also include the average value of these covariates as part of the contrast vector, but I find this centering approach generally simpler; see the sketch after step 3 below)

2) define a model across all subjects (subgroup-specific analyses can be misleading because then the control covariates would be controlled 'separately' within each group, which may or may not be what you are intending to do), e.g.

   GoodQA2;mFDpower;Age;Sex;boston;chicago;dallas;georgia;HealthyControl;Proband

3) use the between-subjects contrast to specify which subgroup you want to evaluate, or which groups you want to compare, e.g.

   [0 0 0 0 0 0 0 0 1 0] to look at the average connectivity within HC
   [0 0 0 0 0 0 0 0 0 1] to look at the average connectivity within PTS
   [0 0 0 0 0 0 0 0 -1 1] to look at the difference between PTS and HC groups
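
As a small sketch of the centering in step 1 (made-up values; only the subtraction matters):

% Center a control covariate across all subjects before entering it in the model
Age          = [23; 31; 45; 28; 52];   % placeholder values, one per subject
Age_centered = Age - mean(Age);        % subtract the across-subjects average
% enter Age_centered (and similarly centered versions of the other control covariates) as
% 2nd-level covariates, then use contrasts such as [0 0 0 0 0 0 0 0 -1 1] for PTS > HC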

Hope this helps
Alfonso


[i]Originally posted by Mikey Malina:[/i][quote]Hey Alfonso,

I am following up on the previous comment. I have reattached the Warning Info doc, now with more variable information. The contrasts creating warning problems (12 in total, 5 of which are included in the document) are listed below; the 'Med' variants add a covariate quantifying anti-hallucinatory medication use.

Current Hallucination (and Med)
Past Hallucination (and Med)
Never Hallucination (and Med)
HC & PTS (and Med)   (HC -> healthy controls, PTS -> patients)
PTS (and Med)
HC
All   (all subjects)

Thank you for your time,
Mikey[/quote]