Channel: NITRC CONN : functional connectivity toolbox Forum: help

RE: Non-parametric cluster correction single subj

[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote][color=#000000]... The main advantage of this "randomization of residuals" approach (e.g. CONN, FSL) compared to the "permutations of residuals" or purely "permutation" approaches (e.g. SnPM, BROCCOLI) is that it works exactly in the same way for all GLM analyses, while a permutation approach requires different permutation schemes for different sorts of analyses and it does not apply to certain scenarios (e.g. one-sample t-test)...[/color]
[/quote]
After reading a bit more about non-parametric inference for functional MRI, I think I understand the above sentence better, and it's great that you took this approach. When you mention FSL, are you referring specifically to FSL PALM? If so, I have read that the approaches implemented there are state-of-the-art and more general, so that's great if CONN implements a similar approach! :D
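For readers curious what this looks like in practice, below is a toy MATLAB illustration of one residual-randomization scheme (sign-flipping) applied to a one-sample t-test, the scenario mentioned above where plain label permutation has nothing to permute. This is only a generic sketch with made-up numbers; it is not claimed to be CONN's or PALM's exact implementation.

% Toy sketch: sign-flipping randomization for a one-sample t-test.
% Under H0 (zero mean) the data/residuals are symmetric around zero,
% so randomly flipping their signs generates a valid null distribution.
rng(1);
y = randn(20,1) + 0.5;                        % made-up per-subject effect sizes
tstat = @(v) mean(v)/(std(v)/sqrt(numel(v))); % one-sample t statistic
t_obs  = tstat(y);
nperm  = 5000;
t_null = zeros(nperm,1);
for i = 1:nperm
    s = sign(randn(size(y)));                 % random +/-1 flips
    t_null(i) = tstat(s.*y);
end
p = mean(abs(t_null) >= abs(t_obs));          % two-sided non-parametric p-value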

Best!
Stephen Larroque

Basic task based fMRI question

I'm conducting an analysis involving a time series featuring two different stimuli, as well as rest.

When looking at the first-level analyses under ROI-to-ROI and Seed-to-Voxel, it appears as though I have to select an ROI and a condition in order to see a correlation map in the preview area. I'm hoping to look at areas of the brain whose BOLD signal correlates with the presence of a particular task, so it doesn't really make sense to pick a particular ROI for this. I've tried selecting the TOTAL option under "Preview first-level analysis results", but that shows "No data to display" in the preview window.

Is there a way to preview the correlation of all brain voxels with a particular task, independent of any seed region?

Kind regards
Chris

RE: use smoothed data for ALFF analysis

Dear Yun,

Hmm, I am not sure what the "u" prefix is; could you describe your preprocessing pipeline in more detail?

Anyway, regarding bandpass filtering: I don't think it is cumulative. By that I mean that applying the same bandpass filter twice gives the same result as applying it once (the filter is idempotent, since it simply suppresses any frequency above or below the band). So if you input a dataset that is already bandpass filtered, this is not necessarily an issue as long as you use the same bandpass settings in CONN.
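As a quick sanity check of that claim, here is a toy MATLAB snippet (not CONN code; the TR and band edges are made up) showing that an ideal band-pass filter applied twice returns the same signal as applying it once:

% Toy check: an ideal (brick-wall) band-pass filter is idempotent.
rng(0);
x  = randn(200,1);                            % made-up BOLD time series
TR = 2;                                       % repetition time in seconds (assumed)
N  = numel(x);
f  = (0:N-1)'/(N*TR);                         % DFT frequency axis in Hz
band = (f >= 0.008 & f <= 0.09) | ...         % pass band ...
       (f >= 1/TR-0.09 & f <= 1/TR-0.008);    % ... and its mirrored negative-frequency half
bp = @(v) real(ifft(fft(v).*band));           % brick-wall band-pass filter
y1 = bp(x);
y2 = bp(y1);                                  % filter the already-filtered data again
max(abs(y1-y2))                               % ~1e-16: the second pass changes nothing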

Hope this helps,
Stephen

RE: Question about ROIs

[color=#000000]Hello, I too would like to be able to look at each ROI separately in another program. I have tried to figure out a way to do this for a while now without any success. Help would be appreciated (see also the sketch after this post).[/color]

[color=#000000]Best,[/color]

[color=#000000]Greg [/color]
[i]Originally posted by Patrick McConnell:[/i][quote]Hi - 

I would like to look at each atlas ROI separately in another program. Is there a convenient way to write out the atlas.nii into its component ROIs? Or perhaps download these from online somewhere?

Best,
Patrick
[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote][color=#000000]Hi Howard,[/color]

[color=#000000]Not really, there is no established standard for a set of ROIs that would optimally summarize the entire brain. CONN uses by default a combination of the Harvard-Oxford atlas and the AAL atlas (see the atlas.info file in the conn/rois directory for additional info), which is a perfectly reasonable starting point, but there are of course many alternative ways to parcellate the brain into meaningful ROIs. CONN supports a very wide range of possible ways of defining your ROIs. You may find, for example, a few alternative atlases in the conn/utils/otherrois/ folder (e.g. Brodmann areas, or more agnostic large-voxel parcellations), or of course you could also define your own (perhaps better tailored to the regions that you may be most interested in). CONN also supports subject-specific ROIs, so you could also define your regions of interest functionally (e.g. using localizer contrasts) or use other automatic parcellation methods (e.g. FreeSurfer), just to name a few alternative approaches. [/color]

[color=#000000]Hope this helps[/color]
[color=#000000]Alfonso[/color]

[i]Originally posted by Howard Morgan:[/i][quote]Hi Alfonso,

I was wondering: what does CONN use to establish the ROIs?
Are the 136 ROIs used in the first-level and second-level analyses of CONN the standard ROIs that all MNI templates use?

Thank you,
Howard[/quote][/quote][/quote]
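Regarding the request above to write atlas.nii out as its component ROIs: below is a rough MATLAB sketch of one way to do it using SPM functions (which CONN already depends on). The atlas path and output file names are assumptions, so adjust them to your installation; the label-to-name mapping lives in the accompanying atlas info/text file in the same folder.

% Rough sketch (assumed paths/names): split a labeled atlas volume into one
% binary-mask NIfTI per ROI. Assumes the atlas is a single 3D volume of
% integer labels, as in conn/rois/atlas.nii.
atlasfile = fullfile(fileparts(which('conn')),'rois','atlas.nii');  % assumed location
V = spm_vol(atlasfile);
Y = spm_read_vols(V);
labels = setdiff(unique(round(Y(:))),0);      % ROI label values (0 = background)
for k = labels(:)'
    Vo = V;                                   % copy the header
    Vo.fname = sprintf('atlas_roi_%03d.nii',k);   % made-up output name
    Vo.pinfo = [1;0;0];                       % no intensity scaling for the mask
    spm_write_vol(Vo, double(Y==k));          % write a 1/0 mask for this label
end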

ROI-to-ROI value extraction

Hi,

I would like to extract subject-wise ROI-to-ROI connectivity values for external use (SPSS). Does anybody know how to do this in CONN? I am using version 17b. 

Thank you,

Martin

Slice order information in conn

Dear All

Is the information about which slice order was used stored somewhere in CONN (in a fully processed pipeline, with preprocessing done in CONN)?

Greetings,
Lucas

PS: I apologise for my frequent questions recently.

seed to voxel output files

I obtained a 2nd-level result from a seed-to-voxel analysis. I would like to run the analysis again using the result as a new ROI. However, when looking at the results folder, there are several files such as beta_0001.nii, beta_0002.nii, con_0001.nii, corr_0001.hdr, mask.nii, results.ROIs.img, and so on. I am wondering which of these is the connectivity file, which files can be used as a new ROI in the Setup stage, and whether preprocessing needs to be done again once the new ROI is specified.

Sorry for the basic question.

Thank you.

jeonho

From CONN to MELODIC and back

Dear CONN users and experts,

I'm running a single-subject ROI-to-ROI analysis. I've successfully gone through the preprocessing and denoising steps, but I also wanted to perform ICA denoising with FSL MELODIC on the data. To do that, I used the /results/preprocessing/niftiDATA_Subject001_Condition000.nii file, detected the noise components with MELODIC, and regressed them out using fsl_regfilt. My question is this: can I import the output file back into CONN to perform ROI-to-ROI analyses? If yes, how can I skip the denoising step (since the images have already been fully denoised)?

Thank you for your help and support,
Davide

An introductory question about interpreting CONN results

Hi All,
Can anyone suggest a source for learning resting-state fMRI functional connectivity from the beginning? I am having difficulty interpreting the outputs given by CONN. For instance, in the attached image there are regions in yellow. That means these regions are functionally connected. Then why is the region above larger than the region below? Shouldn't they be equal in size?

Additionally, sometimes the ICA results show merely a single region that passes the z-score threshold. However, we always say that the goal of functional connectivity is to find spatially distinct regions with related BOLD signals. Based on that, I would expect connectivity results to show spatially remote regions of the same size.

I am looking forward to hearing your suggestions.
Best,
Mustafa

RE: Average within / between network connectivity

[color=#000000]Dear Stephen,[/color]

[color=#000000]While I would prefer to use voxel-wise FDR, as I am very aware of the issues with using uncorrected or badly corrected p-values, for this specific CONN command line (conn_withinbetweenROItest) I am fairly sure uncorrected measures are used for the calculation of between-network connectivity averaged over ROIs.[/color]

[color=#000000]So here I am, doing a group study, having those values as output and needing to apply some sort of correction, or alternatively to not use this command line for network investigation and comparison at all and instead go for ROI-to-ROI connectivity values. However, since my hypothesis concerns whole networks and their between-network connectivity (DMN and Salience networks), I would prefer to find a way to incorporate network analysis.[/color]

Helene
[i]Originally posted by Stephen L.:[/i][quote]Dear Helene,

I cannot answer the second question, but for the first, unfortunately p-uncorrected can never be equated with Bonferroni correction, as p-uncorrected does not perform any multiple-comparisons correction. Even for small volumes, it will always be more optimistic than Bonferroni. For more info and a demonstration, you can read this excellent wiki page, where they advise rather showing unthresholded maps (but I don't know how well those are received by journal editors): http://imaging.mrc-cbu.cam.ac.uk/imaging/UncorrectedThreshold

An alternative can be to use either voxel-wise FDR or cluster-size FDR, as topological (cluster-wise) correction is more liberal than FWE.

Another alternative, particularly if you are doing a single case study, is to use non-parametric correction with a higher cluster-size or cluster-mass threshold than 0.05, using Alfonso's patch which fixes a few edge cases with non-parametric correction:

https://www.nitrc.org/forum/message.php?msg_id=23131

Indeed, since Eklund et al.'s "Cluster failure" paper it is usually recommended, when doing parametric correction, not to raise the cluster-size threshold above 0.05 nor the voxel-wise uncorrected p threshold above 0.001, because higher thresholds would lead to unquantified false positive rates well above the nominal rate; but with non-parametric correction you can use higher thresholds (both at the voxel-wise and cluster-wise levels), as they reliably correct to the nominal rate, as demonstrated in the same paper (and in others).

Hope this helps,
Stephen Larroque[/quote]

Contrast Definition

Hello All,

I am having some difficulty interpreting the differences between the following contrasts:

1) AllSub, behav1

[0 1]

2) behav1

[1]

3) AllSub, behav1, behav2

[0 1 0; 0 0 1]

4) Patients, Controls, behav1

[1 -1 0]

How would (1) be different from (2)? Particularly why would AllSub need to be selected to do a regression of behav1?

If (4) is a one-way ANCOVA controlling for behav1, would (3) essentially be regressions controlling for the other two variables?

From,

Humza Ahmed

RE: Error during importing functional data step

Worked like a charm!

Thanks Stephen.

Set up - MNI Boundaries

Hi,

I am currently going through the QA checks.
It seems like I am getting imperfect MNI boundaries (picture attached).
The boundary is not covering the whole brain in the mid-top region.
I am wondering whether this may lead to inaccurate results.

Also, the boundary looks much better when I display the raw data instead of the smoothed data.

Any tips for solving this would be much appreciated.

Thank you

error in segmentation using conn

Hi everyone,

I would like to know whether any of you have seen this error before.
Does anyone know why I got this error?

Undefined function or variable 'cfg_branch'.

Error in cfg_getfile>reg_filter (line 1185)
dprms = {cfg_branch};
Error in cfg_getfile (line 177)
t = reg_filter(varargin{2:end});
Error in spm_select (line 115)
cfg_getfile('regfilter', 'mesh', {'\.gii$'});
Error in spm_select (line 110)
spm_select('regfilter');
Error in spm_jobman (line 173)
spm_select('init');
Error in conn_setup_preproc (line 1936)
spm_jobman('initcfg');
Error in conn_process (line 25)
case 'setup_preprocessing', disp(['CONN: RUNNING SETUP.PREPROCESSING STEP']);
conn_setup_preproc(varargin{:});
Error in conn_batch (line 955)
conn_process('setup_preprocessing',steps,'subjects',SUBJECTS,OPTIONS{:});

Many thanks in advance for all your help,

Boshra
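For what it's worth, this "Undefined function or variable 'cfg_branch'" error usually suggests that SPM's matlabbatch folders are not (or only partially) on the MATLAB path when the job manager initializes. A minimal diagnostic along those lines (an assumption about the cause, not a guaranteed fix, and assuming SPM12 is installed) might be:

% Minimal path check for the missing cfg_branch class:
which spm                                  % confirm which SPM installation is being used
which cfg_branch                           % should resolve inside spm/matlabbatch
% If cfg_branch is not found, try restoring SPM's batch configuration path:
addpath(fullfile(spm('Dir'),'matlabbatch'));
spm_jobman('initcfg');                     % re-initialize the batch system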

RE: HRF Weighting vs. Task/Condition Weighting?

Hi There,

I have a question concerning the choice of "hrf-convolved task response" or "none". I am running an audio localizer with sparse sampling. I chose the sparse option on the basic Setup page, and I know that means the conditions are not convolved with the HRF. However, will choosing "hrf-convolved task response" in the first-level analysis then convolve them, or will it use the unconvolved ones created by the sparse-sampling option? I kindly thank you in advance.

Many Blessings,
Benson

RE: Single 2nd Level ROI-ROI Analysis with Some Missing ROI's

Hi Stephen,

Sorry for the slow reply. Thanx for the heads-up about using NaN instead of zeros, makes total sense. I guess it sounds like I can't exclude subjects from some of the source-to-target pairs in a single analysis but keep them in the other pairs. It would be a nice feature: say, having 3 ROIs and including all 15 subjects for the connection between two of them, but only 13 for the third. Hope all is well.

Many Blessings,
Benson

RE: ROI-to-ROI value extraction

Hi Greg,

thanks a lot. Unfortunately, I did not find this folder in my project. However, there is another option: in the Results explorer you can import the connectivity values back into CONN as 2nd-level covariates (Import values button). Here you can also specify ROIs that are different from the clusters in the present contrast. After importing the values, switch back to the Setup tab, and there you will find the subject-wise connectivity values.

Best,

Martin
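For anyone preferring a file-based route, the sketch below is one possible alternative. It assumes (please verify on your own project) that CONN 17b stores first-level ROI-to-ROI results under results/firstlevel/<analysis>/ in files named resultsROI_Subject*_Condition*.mat, each containing a connectivity matrix Z plus ROI name lists names and names2; the folder and output file names here are placeholders.

% Rough sketch (paths, file layout and variable names are assumptions):
% export one CSV of ROI-to-ROI values per subject for use in SPSS.
firstlevelDir = fullfile('myproject','results','firstlevel','ANALYSIS_01');  % adjust
files = dir(fullfile(firstlevelDir,'resultsROI_Subject*_Condition001.mat'));
for i = 1:numel(files)
    d = load(fullfile(firstlevelDir,files(i).name),'Z','names','names2');
    T = array2table(d.Z, 'RowNames', d.names, 'VariableNames', ...
        matlab.lang.makeUniqueStrings(matlab.lang.makeValidName(d.names2)));
    writetable(T, sprintf('ROI2ROI_subject%03d.csv',i), 'WriteRowNames', true);
end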

RE: Correlations

[color=#000000]Hi Lucas,[/color]

[color=#000000]The latter is typically the correct way to evaluate the correlation between the post-pre differences in connectivity and symptom scores (selecting "patient_ratings" and "patients" and entering a [1 0] contrast). The former (selecting only "patient_ratings" and entering a [1] contrast value instead) is missing the intercept/constant term, and that is typically incorrect [b]unless[/b] you have good reason to believe that the intercept of the above regression should be zero (e.g. if you want your model to assume that connectivity differences will be zero for those patients with patient_ratings equal to zero)[/color]
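To make the difference concrete, here is a toy MATLAB illustration with made-up numbers (not from any real dataset):

% Toy example: contrast [1 0] over [ratings, constant] vs contrast [1] over ratings alone.
ratings = [2 4 6 8 10]';                 % hypothetical symptom scores
dconn   = [0.30 0.32 0.35 0.36 0.40]';   % hypothetical post-pre connectivity changes
X1 = [ratings ones(5,1)];                % model with constant term  -> contrast [1 0]
X2 = ratings;                            % model without constant    -> contrast [1]
b1 = X1\dconn;                           % slope ~0.012, intercept ~0.27
b2 = X2\dconn;                           % slope ~0.049: inflated, because the fit is
                                         % forced through the origin (zero connectivity
                                         % change at zero rating)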

Hope this helps
Alfonso

[i]Originally posted by Lucas Moro:[/i][quote]Hi!

Given two groups coded as "patients" and "controls", with "patients_rating" and "controls_ratings" (symptom scores), what is the difference between:

selecting only "patients_rating" in the between-subjects contrast: [1] (effect of patients_rating),
or selecting "patients_rating" and "patients": [1 0] (simple main effect of patients_rating)?

Both with [-1 1] in the between-conditions contrast (post > pre).
The question is: what is the correlation between the post-pre difference in connectivity of a seed (seed-to-voxel) and symptoms for patients?

I would appreciate your comments on that.
Best,
Lucas[/quote]

RE: Contrast Definition

[color=#000000]Hi Humza,[/color]

[color=#000000]Regarding the difference between (1) and (2), see [url=forum.php?thread_id=8968&forum_id=1144]this post[/url] (briefly, the AllSubs term models the constant/intercept term in a regression model)[/color]

[color=#000000]Contrast (1) evaluates a bivariate regression model (between connectivity values and behav1 scores). A significant effect there means that connectivity values are associated/correlated with behav scores. [/color]

[color=#000000]Contrast (3) evaluates a multiple regression model (between connectivity values and both behav1 and behav2 scores). A significant effect there means that connectivity values are associated/correlated with one or both (or some linear combination) of the two behav scores. If, instead, you want to evaluate the correlation/association between connectivity values and behav1 scores after controlling for behav2 associations (e.g. associations between connectivity and behav1 that may be mediated by behav2 scores) you would simply use the contrast [0 1 0] instead.[/color]

[color=#000000]And you are right that contrast (4) is a one-way ANCOVA where you are evaluating the between-group differences in connectivity after controlling for potential differences in behav1 scores between the two groups. [/color]
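As a concrete illustration of the regression models behind these contrasts, here is a small MATLAB sketch with simulated data (sample size, effect sizes, and variable relationships are all made up):

% Toy illustration of contrasts (1) and (3) as regression models.
rng(0); n = 20;
behav1 = randn(n,1);
behav2 = 0.5*behav1 + randn(n,1);          % behav1 and behav2 are correlated
conn   = 0.4*behav2 + randn(n,1);          % connectivity driven by behav2 in this toy case
% (1) AllSub + behav1, contrast [0 1]: bivariate association with behav1;
%     b1(2) tends to pick up part of behav2's effect through their correlation
X1 = [ones(n,1) behav1];
b1 = X1\conn;
% (3) AllSub + behav1 + behav2, contrast [0 1 0; 0 0 1]: effect of behav1 and/or behav2;
%     contrast [0 1 0] on this model tests b3(2), i.e. behav1 after controlling
%     for behav2 (expected to be ~0 in this simulation)
X3 = [ones(n,1) behav1 behav2];
b3 = X3\conn;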

[color=#000000]Hope this helps[/color]
[color=#000000]Alfonso[/color]

[i]Originally posted by Humza Ahmed:[/i][quote]Hello All,

I am having some difficulty interpreting the differences between the following contrasts:

1) AllSub, behav1

[0 1]

2) behav1

[1]

3) AllSub, behav1, behav2

[0 1 0; 0 0 1]

4) Patients, Controls, behav1

[1 -1 0]

How would (1) be different from (2)? Particularly why would AllSub need to be selected to do a regression of behav1?

If (4) is a one-way ANCOVA controlling for behav1, would (3) essentially be regressions controlling for the other two variables?

From,

Humza Ahmed[/quote]

RE: Average within / between network connectivity

[color=#000000]Dear Helene,[/color]

[color=#000000]Regarding (1) yes, you are exactly right that a Bonferroni correction of 6 (3 within-/between- network comparisons and 2-tailed) would suffice for these analyses (that is the number of multiple tests that are being evaluated by that script). If you prefer, you could also (perhaps a bit more standard) first convert the three uncorrected p-values output by the script into two-sided p-values (using a [b]p2 = 2*min(p,1-p)[/b] formula), and then apply FDR across those multiple tests (using a [b]P = conn_fdr(p2)[/b] command). That should be similarly valid and a bit less conservative than Bonferroni. [/color]
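In code form, the correction described above would look roughly like this (the three p-values below are made up; conn_fdr.m ships with CONN):

% Sketch of the two-step correction described above (example p-values are made up)
p  = [0.004 0.620 0.981];   % uncorrected p-values from conn_withinbetweenROItest
p2 = 2*min(p,1-p);          % convert to two-sided p-values
P  = conn_fdr(p2);          % FDR-correct across the three network tests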

[color=#000000]Regarding (2), the former computation is used (i.e. the script averages the ROI-to-ROI connectivity values between all ROI pairs with ROI1 in set1 and ROI2 in set2)[/color]

[color=#000000]Hope this helps, and my apologies that this conn_withinbetween* script/functionality is still undocumented; I will eventually get around to making this part of the standard set of ROI/network analyses in CONN[/color]

[color=#000000]Alfonso[/color]
[i]Originally posted by Helene Veenstra:[/i][quote]Dear Alfonso,

Reviving an older thread as I am using this code line for my analyses, as I was interested in network connectivity changes related to covariates and groups. I have a couple of questions. 

1) Since this analysis uses (uncorrected) two-tailed p-values but draws from a small pool of regions to test, would it be appropriate to consider the results adequately corrected with a Bonferroni correction for the number of tests and an appropriate choice of the number of tails?

Example: I investigate several hypotheses (different covariates/group comparisons) for the connectivity within two networks (SN, DMN) and the connectivity between those two. To obtain a p-value <0.05 per hypothesis, I use a corrected p-value with a Bonferroni correction of 6 (p<0.0083) based on 3 network (SN, DMN, SN-DMN) tests * 2 for two-tailed. 

2) as you explained this gives an average over all ROI connections within each chosen network. But how exactly is the between-network value calculated? As an average of every possible ROI (group1) to ROI (group2) connection? Or a calculated connectivity of the averaged connectivity over all ROIs within each group?[/quote]