Channel: NITRC CONN : functional connectivity toolbox Forum: help
Viewing all 6864 articles

missing value in CONN Quality Assurance: # of voxels in GreyMatter

Dear Alfonso,
After completing the denoising step in our analysis, the newly generated QA second-level covariate QC_GreyMatter_vol_session1, as well as the corresponding QC covariates for white matter and CSF, show NaN values for 6 subjects. Why could this be? The files look normal.
What do you suggest?
Thanks
Hale

RE: subject's specific ROIs from freesurfer

Hello!

I have a similar question. I am trying to perform a surface-based (rather than volume-based) analysis for the first time. I have preprocessed my data using the default surface-based parameters. Now, I am trying to enter each subject's Desikan-Killiany atlas from FreeSurfer as an ROI. I am not sure whether this is possible, because each subject's atlas is named "lh.aparc.annot", which doesn't have the expected kind of file extension (e.g., .nii). Is there a straightforward way of doing this?

Thanks!
Kaitlin

problem voxel to voxel analysis

Dear Conn users,

I am using CONN version 18b. When I am at the first-level analysis (voxel-to-voxel) and choose to perform an intrinsic connectivity analysis, CONN starts to run and processes all the different options in the voxel-to-voxel analysis (ICA, MVPA, ALFF...).
Besides the fact that this unnecessarily increases computation time, it has some other consequences:
1) In the Setup domain, I get all ICA components in the 2nd-level covariates. I am not sure what to do with these, or whether it would be good or bad to have them for an intrinsic connectivity analysis.
2) In the Results domain, Intrinsic Connectivity is displayed twice in the list of voxel-to-voxel measures, and each gives different results. Here too, I don't know why it does this or how the two measures differ from each other.

I hope someone could help out with this.
Thanks.

Best,
Steven

RE: Problem Loading ROI

Hello,

I would like to bump this thread because I am getting the same problem, but with MATLAB 2018 and CONN 18b.

Is it indicating that there are NaN values in the ROIs? None of my first- or second-level covariates have NaN values. I used fslmaths -nanm to check whether there were any NaN values in my CSF, WM, and GM masks; no problem there as far as I can tell.
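As a cross-check, here is a minimal Python sketch for counting NaN voxels once an image has been loaded into a NumPy array (in practice you would first load the mask with a library such as nibabel, e.g. `nib.load(path).get_fdata()`; the array below is synthetic for illustration):

```python
import numpy as np

def count_nan_voxels(data):
    """Return the number of NaN voxels in an image array."""
    return int(np.isnan(data).sum())

# Synthetic example: a 4x4x4 "mask" with two NaN voxels injected
mask = np.ones((4, 4, 4))
mask[0, 0, 0] = np.nan
mask[1, 2, 3] = np.nan
print(count_nan_voxels(mask))  # prints 2
```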

Any input is appreciated. Thanks!

- Harris

Removal of initial scans

Hi all,

I have a question regarding the removal of initial functional scans for resting-state analyses. Is there a difference between manually excluding the first X scans when importing functional images into CONN and using the "remove (X) initial scans" step in the included preprocessing pipeline? I imagine there could be a difference regarding, e.g., the motion regressors, but I am not sure how CONN uses the "remove initial scans" step internally.

Best regards,
Johann Philipp Zöllner

Batch processing help!

Hi Alfonso,
Sorry to bother you again about batch processing, I've tried everything I can think of but can't figure out this error in my BATCH script. I've updated my version of CONN and I'm using MATLAB 2017a. I've tried changing many little things in the script but I still keep getting the error:
Error using cell/strmatch (line 19)
Requires character array or cell array of character vectors as inputs.

Error in conn_batch (line 1466)
idx=strmatch(batch.Results.between_subjects.effect_names{neffect},CONN_x.Setup.l2covariates.names,'exact');

Line 19 in my script is BATCH.Setup.preprocessing.steps={'default_mni'}, but I've tried substituting other steps in that array and still get the same error.

Any advice would be much appreciated! My script is below

clear BATCH;
BATCH.filename= '/media/nir/SharedDrive/Ben/BATCHtest.mat';
BATCH.Setup.isnew=0;
BATCH.Setup.nsubjects=3;
for i = 1:3
    for j = 1:2
        filename = sprintf('/media/nir/SharedDrive/Ben/BATCHtest/Sub014%d_Ses1/Sub014%d_Ses1_Scan_0%d_BOLD%d.nii.gz', i, i, j + 1, j)
        BATCH.Setup.functionals{i}{j}= filename;
        clear filename;
    end
end

for i = 1:3
    filename = sprintf('/media/nir/SharedDrive/Ben/BATCHtest/Sub014%d_Ses1/Sub014%d_Ses1_Scan_01_ANAT1.nii.gz', i, i)
    BATCH.Setup.structurals{i}= filename;
    clear filename;
end
BATCH.Setup.preprocessing.steps={'default_mni'};
BATCH.Setup.preprocessing.fwhm=8;
BATCH.Setup.preprocessing.voxelsize_func=2;
BATCH.Setup.preprocessing.sliceorder=[1:2:47,2:2:47];
BATCH.Setup.RT=3.0;
BATCH.Setup.analyses=[1,2];
BATCH.Setup.voxelmask=1;
BATCH.Setup.voxelresolution=1;
BATCH.Setup.outputfiles=[0,1,0];
BATCH.Setup.roi.names={'LeftVentralStriatum','RightVentralStriatum', 'LeftDorsalPutamen', 'RightDorsalPutamen', 'LeftMedialDorsalThalamus', 'RightMedialDorsalThalamus', 'LeftVentralPutamen', 'RightVentralPutamen', 'LeftDorsalStriatum', 'RightDorsalStriatum'};
%works this far
BATCH.Setup.rois.names={'LeftVentralStriatum','RightVentralStriatum', 'LeftDorsalPutamen', 'RightDorsalPutamen', 'LeftMedialDorsalThalamus', 'RightMedialDorsalThalamus', 'LeftVentralPutamen', 'RightVentralPutamen', 'LeftDorsalStriatum', 'RightDorsalStriatum'}
BATCH.Setup.rois.dimensions={1,1,1,1,1,1,1,1,1,1}
BATCH.Setup.rois.files{1}='/media/nir/SharedDrive/Ben/ROIs/binLeftVentralStriatum.nii';
BATCH.Setup.rois.files{2}='/media/nir/SharedDrive/Ben/ROIs/binRightVentralStriatum.nii';
BATCH.Setup.rois.files{3}='/media/nir/SharedDrive/Ben/ROIs/binLeftDorsalPutamen.nii';
BATCH.Setup.rois.files{4}='/media/nir/SharedDrive/Ben/ROIs/binRightDorsalPutamen.nii';
BATCH.Setup.rois.files{5}='/media/nir/SharedDrive/Ben/ROIs/binLeftMedialDorsalThalamus.nii';
BATCH.Setup.rois.files{6}='/media/nir/SharedDrive/Ben/ROIs/binRightMedialDorsalThalamus.nii';
BATCH.Setup.rois.files{7}='/media/nir/SharedDrive/Ben/ROIs/binLeftVentralPutamen.nii';
BATCH.Setup.rois.files{8}='/media/nir/SharedDrive/Ben/ROIs/binRightVentralPutamen.nii';
BATCH.Setup.rois.files{9}='/media/nir/SharedDrive/Ben/ROIs/binLeftDorsalStriatum.nii';
BATCH.Setup.rois.files{10}='/media/nir/SharedDrive/Ben/ROIs/binRightDorsalStriatum.nii';
BATCH.Setup.conditions.names={'preop'};
for i = 1:9
    for j = 1:2
        BATCH.Setup.conditions.onsets{1}{i}{j}=[0];
        BATCH.Setup.conditions.durations{1}{i}{j}=[inf];
    end
end
BATCH.Setup.subjects.effect_names{1}={'HealthyControl';'Jimmy'};
BATCH.Setup.subjects.effects{1}=[1;1;0];
BATCH.Setup.subjects.effects{2}=[0;0;1];
BATCH.Setup.done=1;
BATCH.Setup.overwrite='No';
BATCH.Preprocessing.filter=[.01,.1];
%BATCH.Preprocessing.confounds.names={'White','CSF','realignment'};
%BATCH.Preprocessing.confounds.dimensions={3,3,6};
%BATCH.Preprocessing.confounds.deriv={0,0,1};
BATCH.Preprocessing.done=1;
BATCH.Preprocessing.overwrite='No';
BATCH.Analysis.type=3;
BATCH.Analysis.measure=1;
BATCH.Analysis.weight=2;
BATCH.Analysis.sources.names={'LeftVentralStriatum','RightVentralStriatum', 'LeftDorsalPutamen', 'RightDorsalPutamen', 'LeftMedialDorsalThalamus', 'RightMedialDorsalThalamus', 'LeftVentralPutamen', 'RightVentralPutamen', 'LeftDorsalStriatum', 'RightDorsalStriatum'};
BATCH.Analysis.sources.dimensions={1,1,1};
BATCH.Analysis.sources.deriv={0,0,0};
BATCH.Analysis.done=1;
%BATCH.Analysis.overwrite='No';
BATCH.Results.between_subjects.effect_names={'HealthyControl';'Jimmy'};
BATCH.Results.between_subjects.contrast=[1;-1];
BATCH.Results.between_conditions.effect_names={'preop'};
BATCH.Results.between_conditions.contrast=[1];
BATCH.Results.between_sources.effect_names={'RightVentralStriatum'};
BATCH.Results.between_sources.contrast=[1];
%BATCH.Results.analysis_number=2;
BATCH.Results.done=1;
conn_batch(BATCH);

FIR task regression approach

Hi all,

I was wondering what the best way would be to use the FIR task regression approach described by Cole et al. (Cole, M. W., Ito, T., Schultz, D., Mill, R., Chen, R., & Cocuzza, C. (2019). Task activations produce spurious but systematic inflation of task functional connectivity estimates. NeuroImage, 189, 1-18.) in CONN. I've included a series of regressors (one per time point) for each condition as first order covariates. Anything else I need to pay attention to?
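For what it's worth, here is a minimal NumPy sketch of what one such per-time-point (FIR) regressor set could look like for a single condition. This is only an illustration under simplifying assumptions (onsets aligned to scan indices, a fixed FIR window length), not CONN's internal implementation:

```python
import numpy as np

def fir_regressors(onsets, n_scans, window):
    """Build FIR regressors: one indicator column per post-onset time point.

    onsets:  scan indices at which the condition starts
    n_scans: total number of scans in the run
    window:  number of post-onset time points to model
    """
    X = np.zeros((n_scans, window))
    for onset in onsets:
        for t in range(window):
            if onset + t < n_scans:
                # Column t is 1 exactly t scans after each onset
                X[onset + t, t] = 1.0
    return X

# A condition starting at scans 10 and 40, modeled over 5 time points
X = fir_regressors([10, 40], n_scans=60, window=5)
print(X.shape)  # (60, 5)
```

Each column then enters the model as one regressor, so the mean response at every post-onset time point is removed without assuming a canonical response shape.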

Best regards,
Wouter De Baene

FIR task regression

Dear all,

I was wondering what would be the best way to apply the FIR task regression approach described by Cole et al. (2019, see https://doi.org/10.1016/j.neuroimage.2018.12.054) in CONN. Does it suffice to add a series of regressors (one per time point) for every task condition as covariates in the setup?

Best regards,
Wouter De Baene

running parallel analyses with different scrubbing parameters?

Hi Conn gurus:

I want to run two full analyses (through the second level), trying out both the intermediate and conservative scrubbing settings offered in the preprocessing pipeline (using ART). I was prepared to set up two different .mat files with their associated folders in order to run scrubbing and the remaining steps separately and in parallel. But it occurs to me now that the scrubbing procedures work by creating new ART-related files directly in the subject's functional data folder. So if I first run scrubbing with one threshold (e.g. intermediate) and proceed with the analysis, and then move to a different .mat file to run a new threshold (e.g. conservative), this will overwrite the ART files in the subject's functional folder and thus may interfere with the first (e.g. intermediate-threshold) analysis being run.
To your knowledge, is there any solution other than creating two copies of my subject functional folders, using one copy for each scrubbing setup?

Thanks for any advice!
Emily

RE: ROI-to-ROI various p-FDR corrections question

Hi Alfonso. This is a bit confusing to me, even though I know some of the theory behind it.
Is there any text I can consult to see this in more detail?
You've explained only part of the options, and it is quite confusing!

[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote][color=#000000]Hi Jeff,[/color]

[color=#000000]The p-FDR analysis-level correction is typically used when you want to make inferences about individual connections. It corrects the individual connection-level statistics for the total number of individual connections in your entire analysis (e.g. the size of the ROI-to-ROI matrix for the selected ROIs). In this case (for connection-level inferences) you typically just want to use this connection-level threshold (and disregard/uncheck any seed-level or network-level additional thresholding options). A typical example would be if you have say 10 ROIs of interest and would like to know whether among all the 45 connections between those ROIs any of them show significant differences between two subject groups. To be able to do this you perform a two-sample t-test to evaluate between-group differences in those ROI-to-ROI connections, but then you still need to correct the individual statistics (e.g. one T-value for each connection) by the total number of connections tested (in this example 45), so one way to do this is by selecting a p-FDR analysis-level <.05 threshold which will apply an FDR correction to those 45 individual p-values. If any connection survives this threshold then you can confidently say that [i]those individual connections [/i]show different strengths between the two subject groups. [/color]

[color=#000000]The "intensity"-based thresholding options are part of the Network Based Statistics (NBS) analyses, and these are typically used instead when you want to make inferences either about individual ROIs or about individual networks of ROIs (instead of inferences about individual connections). Oftentimes (when looking at a relatively large number of ROIs and connections) connection-level inferences require a very strong correction and the analysis sensitivity/power may simply be too low to reach any sort of significance at this level (e.g. for 100 ROIs you now have 4950 individual connections to test, so you may simply not have the power to identify individual connections that survive such a strong correction). Seed- and network- level inferences offer higher sensitivity at the cost of lower specificity. The way they work is by combining a (typically uncorrected) connection-level threshold with a properly corrected seed- or network-level threshold. For example, for the same between-group comparison in the example above, you may now use a connection-level threshold of p<.01 uncorrected (to threshold the individual connection results at this somewhat arbitrary level), and then, for each seed-ROI you may want to simply count the number of significant connections emanating from this seed-ROI (this is what the "NBS (by size)" statistics compute), or alternatively compute the weighted sum of those significant connections emanating from this seed-ROI weighted by the strength of those individual connection effects (this is what the "NBS (by intensity)" statistics compute), and then determine whether those counts are themselves significant (this is performed in NBS using permutation/randomization analyses). If you have more than a single seed-ROI of interest then you would also need to apply a multiple-comparison correction of those seed-level statistics for the number of seeds tested (and this is what the associated seed-level p-FDR threshold does). 
So, summarizing, in this example you would simply activate/check both a connection-level threshold (and enter there p-uncorrected p<.01) and a seed-level threshold (select "seed-ROI (NBS by size)", and enter a p-FDR < .05 threshold there; note that you would need to click on the 'enable permutation analyses' button first to enable this thresholding option). If any ROI survives this threshold then you can confidently say that [i]those individual ROIs[/i] show different patterns of connectivity between the two subject groups.  [/color]

[color=#000000]Let me know if this clarifies. I realize that the sheer number of potential thresholding combinations in ROI-to-ROI analyses might be a bit excessive/confusing and we are thinking of ways to simplify this interface and/or make it a bit more intuitive so any thoughts/suggestions are most welcome.[/color]

[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by Jeff Browndyke:[/i][quote]What do the p-FDR intensity correction and p-FDR analysis-level correction options actually denote or measure?
 
Which is more appropriate?  To not p-FDR at the intensity level (whatever that corrects for or denotes), but correct at the F-test ROI multiple comparison correction level?  Or, p-FDR at the intensity level and not correct for subsequent F-test ROI comparisons?
 
Thanks,
Jeff[/quote][/quote]
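As a side note on the connection-level FDR correction described above, here is a minimal Benjamini-Hochberg sketch in Python. This is an illustrative implementation of the generic FDR procedure, not CONN's internal code:

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of p-values significant at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k with p_(k) <= (k/n)*q; all smaller ranks also pass
    below = ranked <= (np.arange(1, n + 1) / n) * q
    passed = np.zeros(n, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])
        passed[order[:kmax + 1]] = True
    return passed

# e.g. 45 connection-level p-values (as in the 10-ROI example above)
rng = np.random.default_rng(0)
pvals = rng.uniform(size=45)
pvals[:3] = [1e-4, 5e-4, 1e-3]  # a few genuinely strong connections
print(fdr_bh(pvals).sum())
```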

Outlier ROI in Seed-Voxel 2nd Level Results?

Hi Alfonso and other CONN users.

I'm doing a seed-to-voxel analysis where I have 30 healthy controls and 68 patients (most of whom have baseline and post scans). I am computing second-level seed-to-voxel analyses for 18 seeds, all in S1.

For an HC vs. patients contrast at baseline, most of the seeds have no clusters at p-unc < .001, p-FDR < .05, but one seed has 11 clusters that are fairly widespread around the pre- and postcentral gyri, all showing decreased connectivity in patients relative to controls. This seed is also fairly close to another seed that has no clusters at the same threshold.

To me this result seems anomalous, potentially indicating a problem with the data. Where should I look to figure this out (the seed timecourse, denoising, individual subject data that might be biasing the result)?

Also, more from a theoretical standpoint, does just the number of seeds selected impact the actual thresholding, or do the specific seeds and their associated data impact the second-level results of other seeds?

Best,
- Harris

Results Interpretation

Hi guys,

I got the attached results with the following set-up and would like to know whether my interpretation of the results is accurate.

Between-subjects contrast
selected 'All subjects' & 'Change in the number of symptoms' >> Effects of the change in the number of symptoms

Between-conditions contrast:
selected 'pre' & 'post' >> Effect of post > pre

Between-sources contrast:
selected default mode network (PCC)

Interpretation:
e.g. The change in the number of symptoms was positively correlated with PCC connectivity with the right temporal occipital fusiform cortex.

If my interpretation is correct, however, I'd like to ask what it means: as the number of symptoms increases, would there be
1) stronger positive PCC connectivity with the right temporal occipital fusiform cortex, OR
2) stronger negative PCC connectivity with the right temporal occipital fusiform cortex? Or can this not be determined from this result? If not, what other analysis should I conduct to find the answer?

Thanks for your kind help.

Can CONN remove lesion artefacts automatically?

Hi,

My new data has lots of white matter hyperintensities scattered throughout the brain. Since the CONN pipeline regresses out white matter and CSF signals, does that mean I don't have to worry about these? Or is there a chance of WMH lesions being misclassified as grey matter? If there is a chance of misclassification, can I regress out any related artefact by using the LSG toolbox on FLAIR images (which creates lesion maps) and entering these maps as an additional regressor in my CONN pipeline? If so, at what stage, and how, should I add the lesion maps to my CONN pipeline?

Any leads on this will be extremely helpful.

Kind regards,
Dilip

Seed-based analysis in subject-space

Hello,
I have structural T1 images processed with FreeSurfer 5.3 that I want to use in CONN, along with some resting-state fMRI images (which are raw, not preprocessed). I am currently tuning the preprocessing steps, since I'm just starting to experiment with CONN (and am new to the MRI world). However, I have a question concerning the analysis I want to do after preprocessing: I want to do a seed-based analysis of resting-state functional connectivity with a specific seed. The problem is that the coordinates I want to use come from the literature and are in MNI space. Is there a way to obtain the appropriate seed coordinates in every subject's space and then use this seed (which would have different subject-space coordinates for each subject) for seed-to-voxel analysis? Since I'm (very) new to the field, I am totally open to suggestions here! :)
Thank you and have a nice day,
Olivier Roy

To use weighted GLM or gPPI and why?

Hello,

I am new to CONN and connectivity analysis in general and as such wanted to ask a few questions to help guide my analysis. 

1: I am running an event-related design using a Go/NoGo task and am unsure whether to use weighted GLM or gPPI. The forum posts I have gone through seem to indicate that gPPI is the preferred method for event-related designs, but I wanted to confirm that this is the case, and possibly why.

2: The creation and removal of task main-effect confounds still confuses me. As far as I understand, the removed task main effects are orthogonal to the interaction, so removing them does not remove the main task-driven connectivity effects or the average task connectivity across the entire time series.

3: For event-related task designs, would you still remove the main task effects from the analysis? I assume that removing them or leaving them in changes how the subsequent results should be interpreted?

Thank you very much,
Austin

Merging studies

Is there a way to merge 2 studies with varying numbers of sessions and conditions?

Thanks in advance for any help!
Ben

Plotting Regressions in CONN

I am currently correlating behavioral data with resting-state functional connectivity maps. In the overlays, the scaling suggests that r-squared values are not what is being plotted. Are these instead transformed into the related t-values for the regression, and then plotted on the surface renderings?

Thanks!

L

Can CONN use GPU CUDA cores to accelerate processing

Hi everyone,

I'm wondering whether the latest version of CONN (18.b) can use GPU CUDA cores to accelerate processing. Please share your ideas and methods; thank you.

download ABIDE dataset

Hello everyone,

I want to download all of the ABIDE DPARSF preprocessed images, but I am having a download problem: when I select some subjects, or all subjects, only the images belonging to one subject are downloaded. How can I download the ABIDE DPARSF preprocessed images for all subjects?

Thank you
sr.shn

QC_timeseries not in confounds

Hey

I really appreciate this forum, I have been able to find answers to questions I've faced several times, and I think that has really helped make CONN a successful and relevant toolbox. 

I'm running a task-related connectivity analysis using CONN 18.a. I have noticed that QC_timeseries, which is a first-level covariate, is not regressed out of the BOLD signal as a confound in the first-level analysis (under the Denoising tab, when choosing confounds). This surprised me, since I thought that by default all first-level covariates are regressed out at this stage. Since this is the output of ART, which includes the raw motion and global signal change timeseries, shouldn't it be regressed out? Is it left in the effects because it is already regressed out earlier, in the preprocessing step?

Thanks
Fatima