Channel: NITRC CONN : functional connectivity toolbox Forum: help

RE: Dynamic FC factor loadings

Hello Alfonso,

I'm not sure if you've had a chance to see this post, but I wanted to make sure it didn't get buried. 
I was wondering if you could please provide an explanation about the dynamic connectivity factor loadings, or suggest a reference for how this is calculated in the CONN toolbox. 

Thank you very much for your help and your time in this!
Amanda

RE: Error in dynamic FC - again

Dear Alfonso,
Thank you very much; no further issues have been reported using CONN 16.b.
Best wishes
J.

Batch Multiple ROIs per file

Hello,

I have two ROI files per subject, and each contains 8 subject-specific ROIs.

In the GUI, one can simply click the 'multiple ROIs' box for CONN to recognize all of the ROIs in each file. I had no problem with this. However, I'd like to use batch processing. Is there a way to specify using batch processing that there are multiple ROIs per file? I did not see this as an option in the batch manual.

Thanks!
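For readers looking for the batch equivalent: a minimal sketch along the lines below may work, assuming the ROI fields documented in help conn_batch (in particular the multiplelabels flag, which appears to be the batch counterpart of the GUI 'multiple ROIs' checkbox). All paths, names, and the subject count are placeholders, so verify the field names against your CONN version.

  % Minimal sketch (field names assumed from "help conn_batch"; paths/subject count are placeholders)
  clear batch;
  batch.filename = '/path/to/conn_project.mat';            % existing CONN project file
  NSUBJECTS = 10;
  for nsub = 1:NSUBJECTS
      % two subject-specific ROI files per subject, each containing several labeled ROIs
      batch.Setup.rois.files{1}{nsub} = sprintf('/data/sub%02d/rois_setA.nii', nsub);
      batch.Setup.rois.files{2}{nsub} = sprintf('/data/sub%02d/rois_setB.nii', nsub);
  end
  batch.Setup.rois.names          = {'setA','setB'};       % one name per ROI file
  batch.Setup.rois.multiplelabels = [1 1];                 % treat each file as containing multiple ROIs
  batch.Setup.done      = 1;                               % run the Setup step
  batch.Setup.overwrite = 0;                               % do not overwrite previously processed data
  conn_batch(batch);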

RE: How to use "age" as covariates?

Hi CONN-ers!
I have a similar question, but want to make sure I'm not over-generalizing to my data.
I have a group comparison between patients and controls, and I'd like to see the effect of variables that are not present in the control group (e.g., lesion size and severity). I have set up the contrast as [1 0 -1] for 'control', 'patient', and 'lesion size'. Am I way off? I am hoping to be able to say something like: as lesion size increases, the difference in connectivity between groups increases for these connections. I've looked at the calculator for one of the connections, and this seems to be the case. I just want to make sure the way I've set up the contrast is legit.
Thanks!
Chaleece

RE: Custom mask.surface.brainmask.nii

Hi Daniel,

The mask.volume.brainmask.nii and mask.surface.brainmask.nii files are only used as potential analysis masks (when selecting those in the Setup.Options tab) but are otherwise not used in the 3D ROI-to-ROI displays (they represent the default analysis masks for volume- and surface-based analyses, respectively). If you want to modify the default 3D surfaces shown in the ROI-to-ROI 3D displays (or the surface- and volume-based seed-to-voxel displays) you would typically need to replace the files named lh.* and rh.* in the conn/utils/surf directory (e.g. lh.white.surf, lh.pial.surf, etc.) with equivalent FreeSurfer files for your reference cat brain (these files are in the same format as the lh.white and lh.pial files directly generated by FreeSurfer when processing your structural volumes).
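For example, assuming you already have the corresponding FreeSurfer-format surfaces for the cat brain, the replacement could be scripted along these lines (source paths are placeholders, and it is a good idea to back up the original files first):

  % Replace CONN's default display surfaces with custom FreeSurfer surfaces (placeholder source paths)
  surfdir = fullfile(fileparts(which('conn')), 'utils', 'surf');                       % conn/utils/surf directory
  copyfile(fullfile(surfdir,'lh.pial.surf'),  fullfile(surfdir,'lh.pial.surf.bak'));   % back up the originals
  copyfile(fullfile(surfdir,'lh.white.surf'), fullfile(surfdir,'lh.white.surf.bak'));
  copyfile('/data/cat_freesurfer/surf/lh.pial',  fullfile(surfdir,'lh.pial.surf'));    % install the cat surfaces
  copyfile('/data/cat_freesurfer/surf/lh.white', fullfile(surfdir,'lh.white.surf'));
  % ...and similarly for the remaining lh.* and rh.* files in that directory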

Let me know if you run into any issues.
Best
Alfonso


[i]Originally posted by Daniel Stolzberg:[/i][quote]Hello CONN forum,

I have been trying to modify the default surface files (/conn/utils/surf/) to map ROI-to-ROI results onto a cat brain. We have a surface model made in FreeSurfer and I have simply been replacing the appropriate default *.surf files with the new ones. The Results explorer correctly shows the cat brains around the Connectome Ring; however, the 3D Display does not work. After some debugging, I think the issue is that I also need to change the mask.surface.brainmask.nii file; however, I have no idea what this file actually represents. I replaced the mask.volume.brainmask.nii file with the appropriate cat brain mask, but the mask.surface.brainmask.nii file is hard to decipher.

Anyone have experience with using custom surface files?  Anyone have any idea what the mask.surface.brainmask.nii file contains?

While solving this part will get me a bit closer to a 3D display of the cat brain, does anyone have any further advice on some additional files I would need to replace?  

Thank you in advance,
Dan[/quote]

RE: Output raw correlation (r^2) values

Hi Kevin,

Yes, in the [i]second-level results[/i] ROI-to-ROI tab, simply select all of your seeds/source ROIs of interest in the 'seeds/sources' list, select all of your target ROIs of interest in the 'analysis results' list, and then click on 'import values'; that will import the connectivity values between those pairs of ROIs as new second-level covariates. You may then go to the [i]Setup.SecondLevelCovariates[/i] tab, select the newly defined variables there, and click on 'covariate tools.Export' to create an Excel file with those connectivity values (typically Fisher-transformed correlation values) for each subject/condition and for each ROI pair.

Hope this helps
Alfonso

[i]Originally posted by Kevin Mann:[/i][quote]Hello Alfonso,

Is there a simple way to export the connectivity value (or at least the transformed r^2) for each subject between two chosen regions (ROI analysis)? I have found the REX GUI in the seed-based area, but I think this gives the connectivity within a cluster of correlated regions. I would simply like an R^2 value between two selected regions on a per-subject basis so I can correlate with clinical measures in Excel.

Thank you in advance for your help!

Kevin[/quote]

RE: ROI-to-ROI various p-FDR corrections question

Hi Jeff,

The p-FDR analysis-level correction is typically used when you want to make inferences about individual connections. It corrects the individual connection-level statistics for the total number of individual connections in your entire analysis (e.g. the size of the ROI-to-ROI matrix for the selected ROIs). In this case (for connection-level inferences) you typically just want to use this connection-level threshold (and disregard/uncheck any seed-level or network-level additional thresholding options). A typical example would be if you have, say, 10 ROIs of interest and would like to know whether, among all the 45 connections between those ROIs, any of them show significant differences between two subject groups. To be able to do this you perform a two-sample t-test to evaluate between-group differences in those ROI-to-ROI connections, but you then still need to correct the individual statistics (e.g. one T-value for each connection) for the total number of connections tested (in this example 45), so one way to do this is by selecting a p-FDR analysis-level < .05 threshold, which will apply an FDR correction to those 45 individual p-values. If any connection survives this threshold then you can confidently say that [i]those individual connections[/i] show different strengths between the two subject groups.

[color=#000000]The "intensity"-based thresholding options are part of the Network Based Statistics (NBS) analyses, and these are typically used instead when you want to make inferences either about individual ROIs or about individual networks of ROIs (instead of inferences about individual connections). Often times (when looking at a relatively large number of ROIs and connections) connection-level inferences require a very strong correction and the analysis sensitivity/power may simpy be too low to reach any sort of significance at this level (e.g. for 100 ROIs you now have 4500 individual connections to test, so you may simply not have the power to identify individual connections that survive such a strong correction). Seed- and network- level inferences offer higher sensitivity at the cost of lower specificity. The way they work is by combining a (typically uncorrected) connection-level threshold with a properly corrected seed- or network-level threshold. For example, for the same between-group comparison in the example above, you may now use a connection-level threshold of p<.01 uncorrected (to threshold the individual connection results at this somewhat arbitrary level), and then, for each seed-ROI you may want to simply count the number of significant connections emanating from this seed-ROI (this is what the "NBS (by size)" statistics compute), or alternatively compute the weighted sum of those significan connections emanating from this seed-ROI weighted by the strength of those individual connection effects (this is what the "NBS (by intensity)" statistics compute), and then determine whether those counts are themselves significant (this is performed in NBS using permutation/randomization analyses). If you have more than a single seed-ROI of interest then you would also need to apply a multiple-comparison correction of those seed-level statistics for the number of seeds tested (and this is what the associated seed-level p-FDR threshold does). So, summarizing, in this example you would simply activate/check both a connection-level threshold (and enter there p-uncorrected p<.01) and a seed-level threshold (select "seed-ROI (NBS by size)", and enter a p-FDR < .05 threshold there; note that you would need to click on the 'enable permutation analyses' button first to enable this thresholding option). If any ROI survives this threshold then you can confidently say that [i]those individual ROIs[/i] show different patterns of connectivity between the two subject groups.  [/color]

Let me know if this clarifies things. I realize that the sheer number of potential thresholding combinations in ROI-to-ROI analyses might be a bit excessive/confusing, and we are thinking of ways to simplify this interface and/or make it a bit more intuitive, so any thoughts/suggestions are most welcome.

Best
Alfonso
[i]Originally posted by Jeff Browndyke:[/i][quote]What do the p-FDR intensity correction and p-FDR analysis-level correction options actually denote or measure?
 
Which is more appropriate: not applying p-FDR at the intensity level (whatever that corrects for or denotes) but correcting at the F-test ROI multiple-comparison level, or applying p-FDR at the intensity level and not correcting for subsequent F-test ROI comparisons?
 
Thanks,
Jeff[/quote]

RE: Preprocessed fMRI data into CONN toolbox

Hi Talia,

Typically the way to perform surface-level analyses in CONN from FreeSurfer-processed data is the following:

1) in [i]Setup.Structural[/i] enter the FreeSurfer-generated T1 files (e.g. T1.mgz in the subject-specific folders), and when prompted select 'Yes' if you also want CONN to automatically import the aseg gray/white/CSF masks computed by FreeSurfer
2) in [i]Setup.functional[/i] enter the functional data for each subject that is coregistered to the above structural volumes
3) in [i]Setup.Options[/i] select the 'fsaverage (surface-level analyses)' option in 'Analysis space'

Everything else will be defined in the same way as for volume-based analyses. Note that in [i]Setup.ROIs[/i] you typically would also want to remove the MNI-space default ROIs included there and enter instead surface-based ROIs (e.g. FreeSurfer aparc files in fsaverage space). If you prefer to combine MNI-space ROIs with your surface-based analyses then typically you would want to define an additional MNI-space dataset in [i]Setup.Functionals[/i] pointing to your MNI-normalized functional volumes and then make sure that the MNI-space ROIs are being extracted from that new dataset instead (e.g. select 'extract from dataset-2' for those ROIs in the [i]Setup.ROIs[/i] tab), while the surface-based ROIs are being extracted from the original dataset (e.g. select 'extract from dataset-0' for those ROIs).

Hope this helps
Alfonso
[i]Originally posted by Talia R:[/i][quote]Hi all,
I preprocessed my resting-state data using FreeSurfer (specifically FSFAST) and I'm curious to know whether anyone has tried importing that kind of data into the CONN toolbox, and which files to import in order to do first-level and group-level analyses. I like the connectivity images that the CONN toolbox outputs, but I would prefer not to start from scratch by having the data preprocessed using SPM.
Does anyone have any idea about how I can potentially do this?
Thank you![/quote]

Issue with ART-based scrubbing procedure

Dear Alfonso (or anyone else who may be able to help!),

I am trying to apply the ART-based scrubbing procedure to my mostly preprocessed (external to CONN) dataset. I have tried both saving my 6-parameter motion estimates as a text file in the same directory as the functional data (where the text file is named rp_functionalname.txt) and loading them as a first-level covariate named "realignment"; however, I keep getting the following error:

ERROR DESCRIPTION:

Error using cellstr (line 34)
Input must be a string.
Error in conn_setup_preproc (line 798)
temp=cellstr(CONN_x.Setup.functional{nsubject}{nses}{1});
Error in conn (line 776)
ok=conn_setup_preproc('',varargin{2:end});
Error in conn_menumanager (line 119)
feval(CONN_MM.MENU{n0}.callback{n1}{1},CONN_MM.MENU{n0}.callback{n1}{2:end});
CONN v.16.b
SPM12 + DEM FieldMap MEEGtools
Matlab v.2012b
storage: 2558.3Gb available

I am not sure what the issue is. I have attached an example motion parameter file to this post, in case this may be of use. Any help would be greatly appreciated!

Buddhika

RE: Default preprocessing pipeline with BATCH?

Hi Sascha,

To run the entire preprocessing pipeline using batch commands simply add the line:

  batch.Setup.preprocessing.steps = 'default_mni';

to your batch structure (note: do not use the batch.New fields, as that functionality is obsolete; instead simply enter your functional data in the batch.Setup.functionals field and your structural data in the batch.Setup.structurals field as usual, and add the line above to run the standard preprocessing pipeline; see [i]help conn_batch[/i] for additional details).
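For reference, a minimal self-contained batch along those lines might look as follows (file names, TR, and subject count are placeholders; treat this as a sketch and see help conn_batch for the authoritative field list):

  % Minimal sketch: run the default MNI preprocessing pipeline from a batch script
  clear batch;
  batch.filename = '/path/to/conn_myproject.mat';     % new or existing CONN project (placeholder path)
  NSUBJECTS = 5;                                      % placeholder
  batch.Setup.nsubjects = NSUBJECTS;
  batch.Setup.RT = 2;                                 % repetition time in seconds (placeholder)
  for nsub = 1:NSUBJECTS
      batch.Setup.functionals{nsub}{1} = sprintf('/data/sub%02d/func.nii', nsub);  % session 1
      batch.Setup.structurals{nsub}    = sprintf('/data/sub%02d/anat.nii', nsub);
  end
  batch.Setup.preprocessing.steps = 'default_mni';    % run CONN's default preprocessing pipeline
  batch.Setup.done      = 1;                          % run the Setup step when conn_batch is called
  batch.Setup.overwrite = 1;
  conn_batch(batch);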

Hope this helps
Alfonso

[i]Originally posted by Sascha Froelich:[/i][quote]Hello everyone,

I am trying to write a BATCH script that preprocesses my data like the default preprocessing pipeline does. While there appears to be no command that does all of the component preprocessing steps at once, I could specify them one by one in BATCH.New.steps. However, in the CONN batch scripting manual there seems to be no command for, e.g., Outlier Detection & Scrubbing, etc.

Probably I just overlooked something; however, this is the question: how do I send my data through the "default preprocessing pipeline" with BATCH commands?

Any help is highly appreciated!

Kind regards,
Sascha[/quote]

RE: Unplausible first-level results

Hi Ami,

My guess would be that this reflects a misalignment/miscoregistration between the ROI definition files and your functional data. Standard preprocessing steps in CONN are meant to bring all of your data to MNI space. If you are skipping this step simply make sure that all of your data is appropriately coregistered and in the same space as your ROI files. For example, you may (a quick standalone check is also sketched after this list):

  1) in [i]Setup.ROIs[/i], select your ROI file (e.g. atlas) and click on '[i]ROI tools. Check ROI/functional coregistration[/i]'
  2) in [i]Setup.functional[/i] click on '[i]functional tools. Check functional/anatomical coregistration[/i]'
  3) in the [i]Denoising[/i] tab, the display there overlays the structural data, the functional data, and the analysis mask used, so that is also a good place to detect early on if something looks incorrectly coregistered.
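In addition to the CONN checks listed above, a quick standalone sanity check can be done with standard SPM functions (the file names below are placeholders for your own ROI/atlas file and mean functional image):

  % Quick coregistration sanity check using standard SPM functions (placeholder file names)
  roi_file  = '/data/rois/atlas.nii';
  func_file = '/data/sub01/meanfunc.nii';
  spm_check_registration(char(roi_file, func_file));  % overlay both volumes in the SPM graphics window
  Vroi  = spm_vol(roi_file);                          % also compare the voxel-to-world affines;
  Vfunc = spm_vol(func_file);                         % grossly different matrices suggest different spaces
  disp(Vroi.mat); disp(Vfunc.mat);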

Hope this helps
Alfonso

[i]Originally posted by Ami Tsuchida:[/i][quote]Hello,

I am running simple seed-to-voxel/ROI-to-ROI functional connectivity (correlation) analyses on single-session resting-state data from 200+ subjects. I used already preprocessed and denoised functional data, and skipped all the preprocessing and denoising steps. Although there was some trouble initially setting it up due to the large dataset, I was able to finish everything up to the second-level stage. However, when reviewing the first-level results, I noticed that they were not plausible. Just scanning the results of the first few subjects, some had negative values around the seed region and the homologous region in the opposite hemisphere, where one expects to find the strongest connectivity. I know that first-level results can be noisy, but I expect to see positive values at least around the seed region.

I initially thought these were problems in the data, but we processed the same raw data (preprocessed, denoised) using an in-house program just to look at the seed-to-voxel connectivity, and I could see that in the same subjects there are robust positive correlations around the seed region and the homotopic area in the other hemisphere.

I looked inside various intermediate files, but could not find anything that's obviously wrong. I attached the example of BETA_Subject...nii (source is in Left Hippocampus), resultsROI...mat as well as ROI_Subject...Condition...mat from preprocessing for the same subject. I'm not sure whether these are enough to give you information about what went wrong, but please let me know if there is any other file I should attach.

Thank you!

Ami[/quote]

RE: Use of different dimensions of seed ROIs?

Hi Daniel,

Typically the first dimension is the one most commonly used/reported to characterize each ROI. In particular, when extracting multiple dimensions from an ROI, the first dimension always represents the average BOLD timeseries across all voxels within the ROI, while the following dimensions represent the principal components of the timeseries variability across all voxels within the ROI. If the ROI is relatively small and homogeneous then the average timeseries should adequately characterize the BOLD response within this ROI, while if the ROI is larger and non-homogeneous then the average timeseries might fail to capture the variability in BOLD responses within the ROI (and in those cases using a multivariate representation instead of a single dimension helps capture that additional variability).
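A conceptual sketch of that decomposition is shown below (this is only to illustrate the idea, not CONN's exact implementation, which also involves detrending and other preprocessing details), assuming Y is a time-by-voxels matrix of BOLD timeseries for one ROI:

  % Conceptual sketch of ROI "dimensions": average timeseries + principal components
  Y = randn(200, 350);                            % placeholder [Ntime x Nvoxels] BOLD data for one ROI
  dim1 = mean(Y, 2);                              % dimension 1: average timeseries across ROI voxels
  Yc = Y - repmat(mean(Y,1), size(Y,1), 1);       % center each voxel's timeseries before PCA
  [U, S] = svd(Yc, 'econ');                       % principal components of within-ROI variability
  ncomp = 4;
  dims2toK = U(:,1:ncomp) * S(1:ncomp,1:ncomp);   % dimensions 2..K: top component timeseries
  roi_timeseries = [dim1, dims2toK];              % [Ntime x (1+ncomp)] multivariate ROI representation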

Hope this helps
Alfonso
[i]Originally posted by Daniel Kopala-Sibley:[/i][quote]Hi, hopefully this isn't redundant with another question. I'm new to CONN, and I'm running resting-state connectivity analyses with a focus on the DMN, and asked for seeds in the mPFC, PCC, and left and right parietal cortex. I then had it extract two dimensions for each at the first level. I've been running connectivity analyses with a covariate (parenting behaviors), and find that the results differ quite a bit depending on whether I use the first or second dimension for each seed. Are there any guidelines on which dimension to use?

Thanks in advance

Daniel[/quote]

ICC Technical Question RE: Step Function?

Hi Jeff,

Not exactly: GlobalCorrelation is still a weighted metric; in particular, it simply computes the row-average of the voxel-to-voxel correlation matrix (i.e., for each voxel, compute the connectivity/correlation between this voxel and every other voxel in the brain; the average of all those r values is the GlobalCorrelation value for this voxel). The ICC metric is the same as the GlobalCorrelation metric, but it instead averages the r^2 values (rather than the actual, signed, r values).

All of these voxel-to-voxel measures are simple metrics that describe [i]some aspect[/i] of the connectivity pattern between a voxel and the rest of the brain (and each metric focuses on slightly different aspects of these patterns). ICC focuses on the "strength" of those patterns (how large those individual r-values are, irrespective of sign), while GlobalCorrelation focuses on the "height" of those patterns (what the average r-value in those patterns is). If, for example, you find positive task-related differences in ICC at some region (e.g. higher ICC during task compared to rest), that means that the patterns of connectivity between that region and the rest of the brain are "stronger" (higher absolute values) during the task condition compared to the rest condition (this may indicate stronger positive correlations, stronger anticorrelations, or a combination of the two). You can then look at those actual patterns simply by using the resulting cluster/blob as a seed in standard seed-to-voxel analyses to further interpret what might be driving these "stronger" connectivity patterns during the task condition.

Regarding the actual sign of the individual ICC values: if you are [i]not[/i] using normalized ICC measures (i.e. unchecking the 'normalization' box in the first-level analysis tab) then all of the ICC values will be positive (since they represent average r^2 values, which are always positive). When using normalized measures instead, those average r^2 values are normalized to z-scores (with mean zero and variance 1 across the entire brain, separately for each subject), so in that case you may find positive as well as negative ICC values (and they simply represent above-average or below-average, respectively, original ICC values).
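For illustration only, here is a brute-force version of the two definitions above (CONN computes these much more efficiently via an SVD of the voxel timeseries, and implementation details such as whether the self-correlation is excluded may differ):

  % Brute-force illustration of GlobalCorrelation vs. ICC (not CONN's actual implementation)
  Y = randn(200, 1000);                    % placeholder [Ntime x Nvoxels] denoised BOLD data
  Nvox = size(Y,2);
  R = corrcoef(Y);                         % [Nvoxels x Nvoxels] voxel-to-voxel correlation matrix
  GCOR = (sum(R,2)    - 1)/(Nvox-1);       % per voxel: average signed r, self-correlation (r=1) excluded
  ICC  = (sum(R.^2,2) - 1)/(Nvox-1);       % per voxel: average r^2 ("strength", sign-independent)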

Hope this helps
Alfonso
[i]Originally posted by Jeff Browndyke:[/i][quote]Hi, Alfonso.

I noticed that CONN 16.b includes a few new voxel-wise metrics, one of which is Global Correlation.  Is this an example of an unweighted ICC metric as mentioned in your prior posts?  If it is, maybe it could address the question I have below.

I'm trying to get my head around how to interpret directionality of connectivity (increase/decrease) using the ICC metric results.  I'm obtaining results in default-related networks that should be anti-correlated with task condition, but when I look at the eye (X) analysis bars for group x time (i.e., ICC effect sizes for each condition and group at each time point) some of our significant ICC blobs are positive and others are negative.  If ICC is a weighted metric using absolute values, in which both positive and negative connectivity are incorporated, why am I getting negative ICC effect size bars?  How does one drill down to see if the ICC blobs reflect task-positive or task-negative connectivity?

Thanks,
Jeff[/quote]

RE: Between session differences in movement

Dear Isabella,

If I am understanding correctly, in order to check for potential differences in movement across the two sessions, simply select the 'AllSubjects' term in the predictor variables list, then select both 'max realignment at session2' and 'max realignment at session1' in the outcome variables list and enter a [-1 1] between-measures contrast there. That will implement a paired t-test looking at differences in movement between the two sessions.

Hope this helps
Alfonso

[i]Originally posted by Isabella Breukelaar:[/i][quote]Dear CONN users,

I am trying to test for differences in movement and ART outliers between two sessions, which I have set up as different conditions in my analysis. I have tried the method posted here: https://www.nitrc.org/forum/forum.php?thread_id=5916&forum_id=1144 but as there are no "session 1" and "session 2" variables pre-existing in my second-level covariates/predictor variables in the calculator, I created a subject-level aggregate for max realignment at rest (which spans session 1 and session 2) and then one for each of the sessions. I set up my between-subjects contrasts as "max realignment at session 2" > "max realignment at session 1" [-1 1] and have the outcome variable "max realignment at rest" with a between-measures contrast of 1. This is a little different from the method described in the previous post, so I am unsure whether this is correct.


Thank you,
Isabella[/quote]

RE: Fixed effect analysis

Dear Katia,

Sorry, but fixed-effects analyses are not available in any of the more recent CONN versions. You may check this thread (http://www.nitrc.org/forum/message.php?msg_id=10082) for some alternative options to see if any might fit what you need.

Best
Alfonso

[i]Originally posted by Katia Andrade:[/i][quote]Dear Alfonso,
Thank you very much for always replying so quickly.
Is it possible to run a second-level fixed-effects analysis in CONN?
Thank you again, Kátia.[/quote]

RE: Voxel-Voxel Analysis: Unresolvable error

Hi Sneha,

Regarding the mis-registration shown in the [i]Denoising[/i] display: in your case that perhaps represents an incorrectly defined analysis mask (look in the [i]Setup.Options[/i] 'analysis mask' field to make sure that your analysis mask is defined in the same space as your functional data). In general, this display is showing three things simultaneously: a subject's structural volume; outlier/confounding effects derived from his/her functional data; and the analysis mask (functional results are only computed/shown within the defined analysis mask). So if something looks miscoregistered in this display then it is just a matter of figuring out which one(s) of those three things are miscoregistered (and in your case, since you mention that your analyses are defined in subject space, a good guess would be that the analysis mask was not defined in the same space, as the default mask defined in CONN is in MNI space).

Hope this helps
Alfonso
 
[i]Originally posted by Sneha Pandya:[/i][quote]Hi Alfonso and Vinay,

I want to follow up on the same error posted by Vinay and see if it was resolved. I am having a similar issue while doing voxel-to-voxel analysis. I have tried running the analysis selecting only the voxel-to-voxel option during the setup/options and denoising steps, and also by selecting all the options for both steps, and I still end up with the same error. It does not even create any nifti files under the results folder. In addition, we are trying to do our analysis in subject space, so after preprocessing the coregistration of structural/functional looks fine when displayed under the "functional tools" option and also when using mricron and freeview, but when we go to the denoising step they look misaligned, just as shown in the screenshot attached by Vinay. Is this a bug, or did the coregistration indeed fail?

Could you please help resolve this error?

Thanks,
Sneha[/quote]

event-related functional connectivity

Dear experts,

I am wondering if CONN could be used for event-related designs?

Many thanks for any advice,

Best regards,

Bahri

How to get the subtraction of two images in CONN

Hi all, I am new to CONN and also to neuroscience. I am wondering whether CONN has a feature to compute the subtraction (difference) of two images, in order to find the changes between different sessions. If not, what is the best way to do it?
Thanks
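Within CONN, between-session changes are normally assessed with a between-conditions contrast (e.g. [-1 1]) at the second level, but if all that is needed is a simple voxelwise difference between two NIfTI images (e.g. two first-level connectivity maps), one generic option is SPM's ImCalc, sketched below with placeholder file names:

  % Voxelwise difference of two images using SPM's ImCalc (file names are placeholders)
  Vi = spm_vol(char('/data/sub01/map_session1.nii', ...
                    '/data/sub01/map_session2.nii'));
  Vo = '/data/sub01/map_session2_minus_session1.nii';
  spm_imcalc(Vi, Vo, 'i2 - i1');           % writes the voxelwise difference (second minus first)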

FieldMap PhaseMap error (fixed)

Hi,

In CONN 16a and 16b, we were getting a weird error when we input the vdm files in .nii format using the PhaseMap pipeline without using the batch.

The error said it could not find an "fmfile" in conn_setup_preproc.m

After many tests, I went to the code, and it seems the error was on line 774.

773 - if isempty(tmfile), tmfile=spm_select(1,'^vdm.*',['SUBJECT ',num2str(nsubject),'SESSION ',num2str(nses),' Phase Map volume (vdm*)'],{tmfile},fileparts(ttemp{1})); end

774 - if isempty(tmfile),return;end

It seems the name "fmfile" was an error; I changed it to "tmfile", and the unwarp ran without issues.

Hope this helps and it is a good correction.

Kind regards

Eduardo

compcor clarification

Dear Alfonso, 
I am doing some tests on the aCompCor method and so far I have not been able to reproduce exactly the same denoising results as CONN. I wanted to ask you what exactly the 5 dimensions are. According to your paper (and to some of my tests), it seems that the first component is just the average time course in the ROI. So, I assume that the other 4 dimensions are the first 4 PCA components. Is that true?
With respect to the original aCompCor paper, does CONN apply any additional steps (I am excluding other obvious steps such as filtering, detrending, despiking, PSC transformation, ...)?
Also, I would really appreciate it if you could point me to where the 5 time series are stored.
Thank you, 
Best
Daniele