Channel: NITRC CONN : functional connectivity toolbox Forum: help

Using Freesurfer ROIs in Surface-based analysis

Hello!

I noticed in the CONN manual that if you are interested in performing surface-based ROI-to-ROI analyses, you are supposed to use the subject-space non-smoothed functional image (i.e., auFunc.nii) and the subject-space FreeSurfer structural ROIs (i.e., T1.mgz and aparc+aseg.mgz). My issue is that the non-smoothed functional (auFunc.nii) appears NOT to be in the same space as the subject's T1.mgz and aparc+aseg.mgz images. It DOES seem to be in the same space as my raw T1.nii file, though. How can I get these three files into the same space?
Thanks!

Kaitlin
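
For reference, FreeSurfer's T1.mgz and aparc+aseg.mgz live on the 1 mm "conformed" grid, while rawavg.mgz keeps the geometry of the original T1.nii, so one common workaround is to resample the FreeSurfer volumes back into that raw space with FreeSurfer's own tools. A minimal MATLAB sketch (the subject path is hypothetical; this is an assumption about the intended workflow, not a quote from the CONN manual):

[code]
% Resample the FreeSurfer parcellation into the raw-T1 voxel grid, which
% auFunc.nii is reported to match. Assumes FreeSurfer is installed and on
% the system path, and that rawavg.mgz corresponds to the original T1.nii.
fsdir = '/path/to/freesurfer/SUBJECT';   % hypothetical FreeSurfer subject directory
cmd = sprintf(['mri_vol2vol --mov %s/mri/aparc+aseg.mgz ' ...
               '--targ %s/mri/rawavg.mgz --regheader --nearest ' ...
               '--o %s/mri/aparc+aseg_native.nii'], fsdir, fsdir, fsdir);
system(cmd);   % the same call with T1.mgz (without --nearest) resamples the T1
[/code]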

RE: ART clarification

[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote]"That new .mat file is simply input into CONN as a first-level covariate which, when entered as a Confounding effect during denoising, will make CONN [i]disregard[/i] those identified time-points from any subsequent analysis (by adding a column for each outlier scan -dummy coding the offending timepoint- to the design matrix used during denoising to regress out all effects of no interest)"
[/quote]Hey Alfonso,

I've noticed that QC_timeseries (which is the ART_regression_timeseries) is not added to the confound list by default, while the scrubbing covariate (ART_regression_outliers) is, even though both are first-level covariates.
What is the difference between the two? Should I add both to the confound list?
I think I should only add the outliers file and the rp*.txt file as confounds, but I do not understand what the timeseries file is or why it should be included among the effects.


Thank you

Fatima
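
One way to see what these two files actually contain is to load them in MATLAB. The sketch below assumes the usual SPM "multiple regressors" convention of a single matrix named R inside each .mat file (check with who('-file',...) if your files differ); the file names are hypothetical:

[code]
o = load('art_regression_outliers_auFunc.mat');     % scrubbing/outlier covariate
t = load('art_regression_timeseries_auFunc.mat');   % QC_timeseries covariate
size(o.R)           % [scans x outliers]: one dummy-coded column per flagged scan
size(t.R)           % [scans x measures]: continuous global-signal and motion traces
figure; plot(t.R)   % visualize the timeseries that ART thresholded
[/code]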

RE: CONN17--ART outliers

Hey Yajing 

"First,
the 'art_regression_timeseries_*.mat' file contains two parts, the first line is about the global signal, and the second line is the motion information. But, what do the values in each line represent? How are the values calculated in each line? Does the second line mean the FD or the composite-motion?"

<span style="white-space: pre;"> </span>I cannot answer your questions, but I was wondering if you've used the 'art_regression_timeseries_*.mat' file in your processing at all. 



Thanks 
Fatima

RE: gPPI after denoising the Effect of the conditions

Hi Pedro,

That is a good question. Your description of both effects (the effect of including the task effects during denoising and the equal-term part of the gPPI first-level model step) is perfectly accurate. The net effect is that, at least for gPPI analyses, whether or not one includes the task effects as part of denoising is irrelevant, as those main effects will effectively be removed either way during the gPPI first-level analysis step, and both approaches will lead to exactly the same gPPI interaction estimates (beta#ik in your equations, which are the values that are then passed to the second-level analysis step). Depending on the approach (whether or not you include the task effects during denoising) the gPPI main effects of conditions (beta#i in your equations) will, of course, be different, but that is fine because those estimates are simply not used in CONN other than as a way to ensure that we are controlling the interaction term estimate for typically-correlated main condition effects.

Hope this helps
Alfonso
[i]Originally posted by Pedro Valdes-Hernandez:[/i][quote]Hi CONN experts,
Suppose I have a set of conditions, say rest, stim1 and stim2
I've been wondering if it is correct to do gPPI using the task conditions after having used the Effect of stim1 and Effect of stim2 as confounds in the denoising step.
The way I see it, the Effect of these confounds are regressed out from the BOLD signal in an i-th region/voxel, by estimating:
yi = yi'+beta1i*conv(hrf,stim1)-beta2i*conv(hrf,stim2)
where yi' is the denoised signal

On the other hand, gPPI estimates the betas of the following model, given the target and seed regions/voxels i and k, respectively
yi' = beta1ik*conv(hrf,stim1)*yk'+beta2*conv(hrf,stim2)*yk'+   (PPI interactions)
        beta1i*conv(hrf,stim1)+beta2i*conv(hrf,stim2)+   (main effect of conditions)
        betak*yk'  (main effect of seed)
which may seem to be controlling for the effect of the conditions for the second time.

Is this correct? If so, is it acceptable? Is it irrelevant, i.e. after denoising, the main effect of the conditions in the gPPI model will not be significant (estimates beta1i=beta2i=0)? Or should I denoise without using the Effects of the conditions if gPPI is intended?

Thank you![/quote]
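
To make the model above concrete, here is a minimal MATLAB sketch of the gPPI design being discussed (illustrative only, not CONN's internal code; synthetic data stand in for the denoised seed/target timeseries and the hrf-convolved condition regressors):

[code]
nscan    = 200;
y_seed   = randn(nscan,1);                        % denoised seed timeseries (yk')
y_target = randn(nscan,1);                        % denoised target timeseries (yi')
c1 = double(mod(floor((0:nscan-1)'/20),2)==0);    % stand-in for conv(hrf,stim1)
c2 = double(mod(floor((0:nscan-1)'/30),2)==0);    % stand-in for conv(hrf,stim2)
X = [c1.*y_seed, c2.*y_seed, ...   % PPI interaction terms (beta1ik, beta2ik)
     c1, c2, ...                   % main effects of the conditions (beta1i, beta2i)
     y_seed, ...                   % main effect of the seed (betak)
     ones(nscan,1)];               % constant term
beta = X \ y_target;   % least-squares estimates; beta(1:2) are the interaction
                       % terms that are passed to the second-level analysis
[/code]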

RE: running parallel analyses with different scrubbing parameters?

Hi Emily,

That is a good point. One alternative, if you are using the latest release, is, after preprocessing your data normally, to create a new scrubbing covariate "manually" by doing the following:

1) in Setup.CovariatesFirstLevel, select the menu option named "[i]covariate tools. compute new/derived first-level covariates[/i]", and there click on "compute scrubbing"

2) in the next menu, select your new / alternative threshold options, and also change the "[i]name of output covariate[/i]" field from "scrubbing" to "scrubbing2"

That will create a new set of scrubbing covariate files named "scrubbing2_art_regression_*.mat" (without overwriting your original files named "scrubbing_art_regression_*.mat") and also add it as a new first-level covariate named "scrubbing2" to your CONN project. You could then set up your two projects to use either one of those two sets of covariate files for denoising and run them in parallel without interference.

Hope this helps
Alfonso

[i]Originally posted by Emily Stern:[/i][quote]Hi Conn gurus:

I want to run two full analyses (through second level) trying out both the intermediate and conservative scrubbing parameters offered in the preprocessing pipeline (using art). I was prepared to set up two different .mat files with their associated folders in order to run scrubbing and the remaining steps separately and in parallel. But it occurs to me now that the scrubbing procedures work by creating new art-related files directly in the subject functional data folder, and if I first run scrubbing with one threshold (e.g. intermediate) and proceed with analysis, and then move to a different .mat file to run a new threshold (e.g. conservative), this will overwrite the art files in the subject's functional folder and thus may interfere with the first (e.g. intermediate threshold) analysis being run.
To your knowledge is there any solution other than creating two copies of my subject functional folders in order to use one copy for one scrubbing setup and the other for the other scrubbing setup?

Thanks for any advice!
Emily[/quote]

RE: ERROR message 1st level ROI to ROI analysis

Hi Maria,
Did you figure this out? I just got a very similar error.
Cheers
O

Importing data after dartel preprocessing

Dear conn experts,

We want to preprocess data with the conn toolbox after the creation of a DARTEL template.
We therefore have some partially preprocessed data (realigned images for the grey matter and white matter). Do we need to realign all the other segmented images (CSF) as well before feeding them into the CONN Setup, or do we use the segmented images created by DARTEL by default?

Morgane

Freesurfer's Brainmask preprocessing

Hello,
Because I had trouble skull-stripping my raw T1 images (some skull often left in the posterior aspect of the brain), I decided to use the brainmask.mgz from Freesurfer, which was properly skull-stripped. By using the brainmask, I also allowed Conn to import the segmentation files from Freesurfer. I then ran a modified version of the second volume-based preprocessing pipeline in Conn (see attached image): in brief, I just removed the "functional Creation of voxel-displacement map (VDM) for distortion correction" step and replaced the realignment step with "functional Realignment & unwarp (subject motion estimation and correction)", effectively getting rid of the distortion correction part.

My questions are:
1. Since I used the brainmask.mgz, which is already skull-stripped, will the skull-stripping part of the "functional Indirect Segmentation & Normalization" step alter the image too much by skull-stripping a second time, so that I lose GM or CSF information, for instance?

2. From what I understand, the "functional Indirect Segmentation & Normalization" step also resegments the Grey/White/CSF and overwrites the segmentations from Freesurfer. Is that true?

3. I want to co-register functional and structural volumes and then normalize to MNI while also keeping (and normalizing) the Grey/White/CSF segmentation from Freesurfer. If instead of the step "functional Indirect Segmentation & Normalization" I use the step "functional Indirect Normalization" (which also coregisters structural and functional), will it also normalize the Grey/White/CSF segmentation from Freesurfer?
 - I am asking this because the "[u]functional Indirect Segmentation & Normalization[/u]" step outputs the skull-stripped normalized structural volume, [b]normalized Grey/White/CSF masks[/b] and normalized functional volumes (all in MNI space), [u]whereas[/u] the "functional Indirect Normalization" step outputs the [b]same thing (it also seems to skull-strip; is that normal?)[/b] [u]minus the normalized Grey/White/CSF masks[/u].

4. Finally, if the answer to question #1 is that the second skull-stripping is problematic, can I replace the "functional Indirect Segmentation & Normalization" step (which skull-strips) with "functional Direct Coregistration to structural without reslicing" followed by "functional Direct Normalization", which do not seem to skull-strip again? How should I arrange the steps in the sequence in this case? Also, in this case, how could I make sure that the Grey/White/CSF masks from Freesurfer get normalized to MNI?

Thank you and sorry for the long post,
Olivier

Problem with preprocessing of fMRI data with non-isotropic voxels in Conn

[b]Dear Conn experts,[/b]

I am using conn for preprocessing of resting-state fMRI data. The voxel size of my data is 3x3x4. To do the preprocessing, for the "functional target resolution" I inserted "3", but in the niftiNORMS_Subject001_Session001, which is preprocessed data, some of the slices were identical to their adjacent slices.
Next, I tried "[3 3 4]" as the "functional target resolution", but again I got the wrong result. The slices were not normal in the niftiNORMS_Subject001_Session001 file.

Also, I noticed that the swauSubj01 file has a dimension of 61x73x61 instead of 91x109x91.

It seems that Conn fails to correctly estimate the slice thickness of my data. I was wondering if there is a way to specify the slice thickness in Conn.

Thank you for considering my request!

best,
Mohammad
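
As a quick sanity check, the voxel size and matrix dimensions can be read directly from the NIfTI header with SPM to confirm what CONN actually sees (a minimal sketch; the file name is hypothetical). As an aside, 61x73x61 is the standard MNI bounding box at 3 mm resolution (91x109x91 is the same box at 2 mm), so that output dimension is expected when the functional target resolution is 3 mm:

[code]
V   = spm_vol('Subj01.nii');                 % hypothetical raw functional file
vox = sqrt(sum(V(1).mat(1:3,1:3).^2));       % voxel dimensions in mm (should be [3 3 4])
dim = V(1).dim;                              % matrix dimensions
fprintf('voxel size: %g x %g x %g mm, dim: %d x %d x %d\n', vox, dim);
[/code]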

Error at First Level: Dimension Mismatch

Dear Conn Users,

I am running an analysis with 28 subjects that have 3 sessions each and multiple blocks of an in-scanner task per session.

27/28 have all 3 sessions and one subject only has two.

In the Conditions tab under Setup, the participant who is missing the 3rd session has the 'allow missing data' option selected.

Processing ran normally until the first-level analysis, when I received the error below regarding a dimension mismatch.

If anyone knows how to resolve this error, please let me know.

ERROR DESCRIPTION:

Subscripted assignment dimension mismatch.

Error in conn_process (line 2367)
ConditionWeights{nsub,n1}(:,ncondition)=X1.conditionweights{n1};

Error in conn_process (line 44)
case 'analyses_gui_seedandroi',conn_disp(['CONN: RUNNING ANALYSIS STEP (ROI-to-ROI or seed-to-voxel analyses)']); conn_process([10,11,15],varargin{:});

Error in conn (line 7420)
else conn_process('analyses_gui_seedandroi',CONN_x.Analysis); ispending=false;

Error in conn_menumanager (line 120)
feval(CONN_MM.MENU{n0}.callback{n1}{1},CONN_MM.MENU{n0}.callback{n1}{2:end});

CONN18.b
SPM12 + DEM FieldMap MEEGtools rex xjview
Matlab v.2015b
project: CONN17.f
storage: 17499.7Gb available
spm @ /Users/nnissim18/Dropbox/software/spm12
conn @ /Users/nnissim18/Downloads/conn-2018



Best regards,
Nicole

Not able to load an identical CONN project on different computers

Hi everyone,

I am running into an issue with CONN: I analysed my project on a Mac and saved it properly, but every time I try to open it on a different platform (computer) I receive an error. The error mentions that the files have been relocated/modified; it seems that CONN is not able to read the ROI.nii files or even each individual subject's functional or anatomical data.
I had used CONN before for other similar projects but never faced this problem.

Any ideas why this is happening and how to resolve it?

Thank you,
~Shiva

RE: How is scrubbing implemented in conn?

[i]Originally posted by Sascha Froelich:[/i][quote]Dear Jeff,

thanks!

What I thought is that CONN completely removes volumes of high motion and then replaces them with interpolated data. But apparently this is not the case. So if I understood you correctly, CONN does not remove these volumes, but uses the bad time points to create regressors for nuisance regression, is that correct?

However, I am still a bit confused. I thought the terms "censoring" and "scrubbing" both describe the same procedure, so what is the difference?


Cheers,
Sascha[/quote]
Hi Sascha,

I am currently learning more about this myself. My general understanding is that Conn uses ART to identify the outlier scans based on parameters you select and then uses those scans as nuisance regressors in the first-level analysis. ART also produces a mask of the outliers that can be used as an explicit mask to avoid any influence of the outliers in the first-level analysis. (I got this information from the ART code included in the ART download; it is attached.) I don't see that ART does any interpolation.

In Spiegel et al. (2014), scrubbing means the same thing as censoring: "applying temporal masks to remove high motion volumes", and "In motion censoring, volumes in which head motion exceeded a threshold (a)re withheld from GLM estimation." It sounds like Conn (through ART) does both motion regression (if you enter the regressors into the analysis) and motion censoring, i.e., applies a temporal mask. However, to make the temporal mask work, the commented-out part of the code file notes that "(in SPM you will also need to modify the defaults in order to skip the implicit masking operation, e.g. set defaults.mask.thresh = -inf)".

Of course any or all of this could be wrong. I would appreciate any comments!
Best,
Mary
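
One way to see why entering dummy-coded outlier regressors behaves like censoring is a small numerical check (a minimal sketch, not CONN code): adding an indicator regressor for a single scan yields the same remaining GLM estimates as dropping that scan from the estimation entirely.

[code]
nscan   = 100;
y       = randn(nscan,1);                       % an ROI/voxel timeseries
X       = [randn(nscan,1) ones(nscan,1)];       % an effect of interest + constant
out     = 37;                                   % index of a flagged high-motion scan
D       = zeros(nscan,1); D(out) = 1;           % dummy-coded outlier regressor
b_dummy = [X D] \ y;                            % regression including the dummy
keep    = setdiff(1:nscan, out);
b_drop  = X(keep,:) \ y(keep);                  % regression with the scan removed
disp([b_dummy(1:2) b_drop])                     % the two columns match
[/code]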

Using FSL first levels in conn

Hi all,

I have run fairly standard first levels in FSL on 72 scans, ultimately aiming to do a gPPI with a nucleus accumbens seed while adding several clinical instrument predictors as covariates. So, when I discovered conn, I tried switching programs by following this post:

https://www.nitrc.org/forum/message.php?msg_id=22974

However, after going through most of the setup (much too late), I'm realizing I would also need to re-enter the stimulus onset files. In FSL, .bfsl files are generally used, but now they would need to be in the *_events.tsv format. It's 300 files in total that all need to be combined and reformatted, and I'm a master's student with only a couple of weeks left to wrap up my analyses. Although I know FSL first levels are awkward to use in other programs, seeing as they do not output normalized time series and such, is there any way to work around this?

TL;DR: Is there any way to use FSL first levels (activation maps) in a gPPI analysis without re-doing all the timing files?

Any answer is appreciated!
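
In case it helps, converting the timing files can be scripted rather than redone by hand. A minimal MATLAB sketch (file and condition names are hypothetical), assuming the usual FSL 3-column format of onset, duration, weight and the BIDS-style onset/duration/trial_type columns mentioned above:

[code]
ons = load('cond1.bfsl');                        % columns: onset, duration, weight
fid = fopen('sub-01_task-xyz_events.tsv','w');
fprintf(fid, 'onset\tduration\ttrial_type\n');   % BIDS events header
for k = 1:size(ons,1)
    fprintf(fid, '%.3f\t%.3f\tcond1\n', ons(k,1), ons(k,2));
end
fclose(fid);
% repeat/append for the other conditions belonging to the same run
[/code]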

clarification on surface inputs

I was hoping to get some input on the surface-space processing. I am analyzing HCP data and did some analyses of task data on CIFTIs with surface data. I don't see any updates on conn being able to read in CIFTI or GIFTI, so I'm selecting the T1w.nii in the FreeSurfer mri directory, as described in the manual, but what would be the most appropriate functional input? Would I still use the rfMRI_REST1_LR_hp2000_clean.nii.gz files?

Thanks

Export correlation values for each participant from multiple regression analysis

Greetings all, 

I am currently trying to create some figures with correlational r values between a continuous covariate and ROI-to-ROI functional connectivity estimates. I tried the option to 'Import Values', which allowed me to copy the values into SPSS for further analyses. However, when running a regression between my covariate (i.e., age) and these functional connectivity estimates, the result is different in SPSS. I have tried extracting the functional connectivity values from the multiple regression model in CONN and from a simple one-sample t-test of functional connectivity in response to my task condition. Both of these functional connectivity estimates were exactly the same for each participant, and I was unable to reproduce the results in SPSS that I achieved in CONN. How can I acquire the functional connectivity value for each participant from a one-sample t-test and a multiple regression, and how can I reproduce my multiple regression analysis in SPSS that I produced in CONN?

Thank you for your help, 

William Denomme

ROI mat file missing

Dear Alfonso and other forum members,

I have run a 2nd-level analysis in conn, but the ROI.mat file is missing from the analysis folder even after clicking "Results Explorer" in the bottom left of the CONN 2nd-level interface. Has this happened to anyone here before, and could you share some advice on how to solve this issue?

Many thanks in advance,
Noelia

RE: MP-RAGE structural scan (.nii) all chunky

Hello Alfonso,

After a long hiatus, I now have some more bandwidth to explore CONN.

Same problem as before. I am partway through the steps in your didactic YouTube series leading up to groupICA, and the structural MRI is still chunky, even after toggling off multi-slice. Since the chunky structurals still look decently aligned with the MNI boundaries, I'm forging ahead, but this is a bit disconcerting. I have uploaded a screenshot.

Jim

RE: gPPI after denoising the Effect of the conditions

Thank you Alfonso,
I guess my suspicions were correct.
I apologize for some typos I noticed after re-reading the post, especially the second term -beta2i*conv(hrf,stim2), which should be +beta2i*conv(hrf,stim2).

I guess then that, when doing the second-level analysis of the gPPI results, the contrasts on the conditions, e.g. cond1-cond2, will be contrast functions of the corresponding PPI interaction betas.
In contrast, for the GLM, contrasts are functions of the betas estimated for each condition from their concatenated signal intervals.

Which theoretical differences would you expect between these two approaches?

[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote]Hi Pedro,

That is a good question. Your description of both effects (the effect of including the task effects during denoising and the equal-term part of the gPPI first-level model step) is perfectly accurate. The net effect is that, at least for gPPI analyses, whether or not one includes the task effects as part of denoising is irrelevant, as those main effects will effectively be removed either way during the gPPI first-level analysis step, and both approaches will lead to exactly the same gPPI interaction estimates (beta#ik in your equations, which are the values that are then passed to the second-level analysis step). Depending on the approach (whether or not you include the task effects during denoising) the gPPI main effects of conditions (beta#i in your equations) will, of course, be different, but that is fine because those estimates are simply not used in CONN other than as a way to ensure that we are controlling the interaction term estimate for typically-correlated main condition effects.

Hope this helps
Alfonso
[i]Originally posted by Pedro Valdes-Hernandez:[/i][quote]Hi CONN experts,
Suppose I have a set of conditions, say rest, stim1 and stim2
I've been wondering if it is correct to do gPPI using the task conditions after having used the Effect of stim1 and Effect of stim2 as confounds in the denoising step.
The way I see it, the Effect of these confounds are regressed out from the BOLD signal in an i-th region/voxel, by estimating:
yi = yi'+beta1i*conv(hrf,stim1)-beta2i*conv(hrf,stim2)
where yi' is the denoised signal

On the other hand, gPPI estimates the betas of the following model, given the target and seed regions/voxels i and k, respectively
yi' = beta1ik*conv(hrf,stim1)*yk'+beta2*conv(hrf,stim2)*yk'+   (PPI interactions)
        beta1i*conv(hrf,stim1)+beta2i*conv(hrf,stim2)+   (main effect of conditions)
        betak*yk'  (main effect of seed)
which may seem to be controlling for the effect of the conditions for the second time.

Is this correct? If so, is it acceptable? Is it irrelevant, i.e. after denoising, the main effect of the conditions in the gPPI model will not be significant (estimates beta1i=beta2i=0)? Or should I denoise without using the Effects of the conditions if gPPI is intended?

Thank you![/quote][/quote]

RE: difference between Effect of condition in confounds and first level covariate

Dear Alfonso,
I appreciate your explanation. It confirms what I had figured out by trial and error but was not 100% sure about.
Thank you!
[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote]Dear Pedro,

Sorry that was confusing. You are right that there might not be obvious scenarios when you would want to use the 'copy task-related conditions to covariate list' option, since, as you state, those hrf-convolved condition regressors will already appear by default simply encoded as 'effect of [condition]' during Denoising and future steps, so copying them there as a first-level covariate does not seem to add much, if anything at all.

In general, anything that is defined as a [b]condition[/b] will be hrf-convolved (assuming you have specified a continuous acquisition) and the resulting timeseries will: a) be shown in the default list of potential confounding effects in the [i]Denoising[/i] tab; and b) be used as weights in the [i]first-level[/i] tab for weighted-GLM or gPPI analyses for the estimation of condition-specific connectivity measures. In contrast, anything that is defined as a [b]first-level covariate[/b] will: a) be shown (as is, with no hrf-convolution) in the default list of potential confounding effects in the [i]Denoising[/i] tab; b) appear as a potential seed timeseries in the [i]first-level analysis[/i] tab (only those covariates not selected as confounding effects during the denoising step); and c) appear as a potential interaction term, both in the [i]Setup.Conditions[/i] tab as well as in the [i]first-level analysis[/i] tab for the "other temporal-modulation effects" analysis type.

There are, of course, some somewhat-convoluted scenarios when you would want to genuinely use the 'copy task-related conditions...' option, such as: a) for display purposes (for example, having the hrf-convolved conditions included in the list of first-level covariates allows you to include them in some plots such as QA plots which would otherwise need to be created manually; similarly if you want to use the 'covariate tools' gui to compute some summary measure of your conditions); or b) for more complex interaction analyses (e.g. CONN allows you to define condition*covariate interactions, so sometimes it is useful to copy some subset of conditions into first-level covariates just to be able to then use the resulting timeseries as interaction terms).

In practice, though, the most common use of this '[b]copy[/b] task-related conditions...' option is as a soft way to perform the '[b]move[/b] task-related conditions to covariate list' option in two steps (i.e. first use the 'copy...' option, then, if everything looks fine, simply delete the original conditions). The 'move task-related conditions...' option is useful, as stated in the manual, when you want to perform Fair et al.-style analyses, where you still want to regress out anything that correlates with your conditions from the BOLD signal but you do not want to obtain condition-specific connectivity measures. In that case, moving a condition into a first-level covariate does exactly that: it still shows you the appropriate timeseries during the [i]Denoising[/i] step so it can still be included as a confounding effect, but it is no longer treated as a condition, so CONN does not estimate condition-specific connectivity measures.

Let me know if that clarifies
Alfonso

[i]Originally posted by Pedro Valdes-Hernandez:[/i][quote]Dear CONN experts,

I'd like to know why one would want to copy task-related conditions to first level covariates.
Aren't these regressed out during the temporal preprocessing (denoising) anyway?
The original CONN paper (2012) suggests these effects are indeed removed in the Denoising step. It appears so since the conditions are imported as confounds with the name 'Effect of...'. I guess this is done to obtain "resting state" task-independent FC measures, as in Fair et al (2007).
However, the CONN User Manual states that, in order to achieve this, the conditions must be copied to the 1st-level covariate list. Is this correct?
This is confusing. In a nutshell, what is the purpose of this 1st level covariate list, other than to provide regressors not to be HRF-convolved (like in SPM)?
On the other hand, is the HRF-free regression used to remove task effects or just HRF convolved conditions?
Looking forward any comment on this.

Pedro[/quote][/quote]
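
To illustrate the distinction above, here is a minimal sketch (assumes SPM is on the MATLAB path; the timing values are made up) of how a condition becomes an hrf-convolved regressor, whereas a first-level covariate enters the model as-is:

[code]
TR     = 2;                          % repetition time in seconds (example value)
nscan  = 200;                        % number of volumes (example value)
onsets = [10 60 110 160];            % condition onsets, in scans (example values)
dur    = 15;                         % condition duration, in scans (example value)
box    = zeros(nscan,1);
for k = 1:numel(onsets)
    box(onsets(k):onsets(k)+dur-1) = 1;   % boxcar for the condition
end
reg = conv(box, spm_hrf(TR));        % hrf-convolved 'Effect of condition' regressor
reg = reg(1:nscan);                  % trim to the session length
% A first-level covariate (e.g. realignment parameters or a moved condition)
% would be entered directly, with no hrf convolution applied.
[/code]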

seed to voxel "results explorer" is running very very slow

Hi conn experts,
I am using CONN 18a or b
When I hit the "Results explorer" button in the seed to voxel second level analysis CONN undergoes in some sort of analysis that could last even hours. There is a step that lasts a lot: "Functional data second-level analyses". 

Is this normal? Should this be really quick? I guess it is using SPM engine to perform the analysis, which is a pretty fast implementation.

Thanks in advance.