Channel: NITRC CONN : functional connectivity toolbox Forum: help

RE: Second-level covariate -> change scores

[color=#000000]Hi Natasha,[/color]

[color=#000000]That's all perfectly fine, and scores with "actual" values of 0 are treated appropriately in group-specific covariates (and differently from the subjects in other groups even if they share those "0" values). Relatedly, coding with "0" the subjects in the opposite group in these group-specific covariates is often just a matter of simplicity/convention, and the choice is irrelevant for most analyses. For example, in your Drug,Placebo,ScoresDrug,ScoresPlacebo [0 0 1 -1] analysis, the results will be [b][i]exactly [/i][/b]the same if you code your Placebo subjects in the "ScoresDrug" covariate and your Drug subjects in the "ScoresPlacebo" covariate with a value of 100 (or any other arbitrary value) instead of 0. Yet I still recommend coding those "opposite-group" subjects with 0's instead of any other arbitrary value, just because in the VERY few cases where that choice matters the choice of 0 is almost invariably the appropriate one. [/color]
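For illustration, a minimal MATLAB sketch of this coding scheme (with made-up subjects and scores; this is not CONN code, just the design matrix and contrast written out by hand):

% hypothetical example: group-specific score covariates, with the opposite group coded as 0
group         = [1 1 1 2 2 2]';               % 1 = Drug, 2 = Placebo
scores        = [4 -2 0 3 1 -5]';             % change scores (zeros and negative values are fine)
Drug          = double(group==1);
Placebo       = double(group==2);
ScoresDrug    = scores.*Drug;                 % 0 for all Placebo subjects
ScoresPlacebo = scores.*Placebo;              % 0 for all Drug subjects
X = [Drug Placebo ScoresDrug ScoresPlacebo];  % second-level design matrix
C = [0 0 1 -1];                               % Drug vs Placebo difference in the score-connectivity association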

[color=#000000]Hope this helps[/color]
[color=#000000]Alfonso[/color]
[color=#000000] 
[/color][i]Originally posted by Natasha Mason:[/i][quote]Hello all,
I've been running an analysis where I want to look at the association between scores on a test and connectivity in two different treatment groups (drug vs placebo; between subjects). I have been running the analysis according to previous help I found on this forum: https://www.nitrc.org/forum/message.php?msg_id=14045

Where, in the second-level analysis, I've: 1) created two second-level covariates (Drug, Placebo) containing 1s and 0s.

2) created my test scores as second-level covariates: an overall covariate with all scores in it ('Scores'), and two additional covariates divided by group (ScoresDrug, ScoresPlacebo; where I put a 0 for everyone who is not in that group).

I then select Drug, Placebo, and Scores and enter a contrast of [0 0 1] to look at the association between scores and connectivity across all of my subjects (jointly across both groups) after discounting potential differences in average connectivity between the groups.

I also select Drug, Placebo, ScoresDrug, ScoresPlacebo and enter a contrast of [0 0 1 -1] to look at the difference between drug and placebo in the association between scores and connectivity.

My question, however, is that for some of these participants the actual scores are a "0", which shows that there is no difference from the baseline test we did with each participant. Thus I'm wondering if I am losing this information, as I code participants as 0's in the treatment-specific covariates to split them into groups. They can also score negatively on the test, so I cannot just add a constant to each participant.[/quote]

RE: WARNING: possibly incorrect model

[color=#000000]Hi Mikey,[/color]

[color=#000000]The "non-estimable contrast" warning means that your contrast cannot be uniquely estimated from the data (typically because ether the between-subjects contrast itself is incorrectly defined, or because your predictors include some redundancies which your contrast does not acknowledge; e.g. if I create an analysis with predictors 'AllSubjects', 'Patients', and 'Controls' , then the contrast [0 1 -1] is perfectly estimable but the contrast [1 0 0] cannot be estimated). [/color]Could you please let me know the details of your second-level model (in particular the choice of 11 predictors entered into your GLM, and the between-subjects contrast used, for at least one of these analyses)?

[color=#000000]Thanks[/color]
[color=#000000]Alfonso[/color]
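For illustration, a minimal MATLAB sketch of the estimability issue mentioned above (with made-up predictors; this is just the standard linear-model estimability check, not CONN code):

% hypothetical example: 'AllSubjects' equals 'Patients' + 'Controls', so the design is rank-deficient
AllSubjects = ones(6,1);
Patients    = [1 1 1 0 0 0]';
Controls    = [0 0 0 1 1 1]';
X = [AllSubjects Patients Controls];              % rank(X) is 2, not 3
isestimable = @(c) norm(c - c*pinv(X)*X) < 1e-8;  % a contrast is estimable if it lies in the row space of X
disp(isestimable([0 1 -1]))                       % 1: Patients vs Controls is estimable
disp(isestimable([1 0 0]))                        % 0: the 'AllSubjects' effect alone is not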
[i]Originally posted by Mikey Malina:[/i][quote]Hello,

I am conducting second-level analyses on a large set of data (1084 subjects, many second-level covariates) and am frequently getting the error below when running contrasts:

WARNING: possibly incorrect model: non-estimable contrasts (suggestion: simplify second-level model)

I'm not too sure what the problem is; I am unable to simplify the model without getting rid of valuable information, and I can't seem to find anything explicitly wrong with the way I have it set up. Of the ~30 contrasts I am running in this analysis, about 15 are showing this warning. I have attached a document showing the warning window in CONN.

Thank you,
Mikey[/quote]

RE: Abnormal beta value

Dear Alfonso,

Just a quick follow up to kindly ask you if you would be able to help with the above inquiry?

Many thanks.

Kind regards,
Hugo

RE: Error while Preprocessing in GUI ("No Executable Modules, but still unresolved dependencies...")

Hello!

I am actually having the exact same issue / error message as Cristina did in 2018. I checked the scans/acquisitions on my files and they all have 56 volumes. Is there any other potential reason for this issue arising?

Thanks!
Neta

RE: Apple cannot verify the developer for the SPM download

[color=#000000]Hi,[/color]

[color=#000000]Try running the following Matlab command:[/color]

   conn bugfix_catalina2019

and then loading CONN again. Let me know if that seems to fix this issue

Best
Alfonso
[i]Originally posted by Maya Brawer-Cohen:[/i][quote]I'm having the exact same issue. I also changed my System Preferences to allow these downloads anyways and I am still getting the message. Let me know what I should do! Thank you.[/quote]

seed to voxel and roi to roi analysis, networks

Dear CONN toolbox experts,

I am interested in analyzing the connectivity within different networks (DMN, DAN, ECN, and salience network). However, I don't understand how I should carry out the analysis. Would it be more appropriate to do a seed-to-voxel or an ROI-to-ROI analysis?


In the case of ROI-to-ROI analysis, I understand that I could select just the ROIs of a particular network and analyze each network separately.

However, in the case of seed-to-voxel analysis, I don't know how I should select the seeds/sources.
Could I select them individually? For example, first select the MPFC of the DMN and see the connections to that region, then select the LP of the DMN and see the connections to that region, etc. Or should I select all the seeds from the same network at the same time?

The problem is that if I select various seeds at the same time, then I don't know to which seed a significant association corresponds. For example, if I select the PPC (L) seed I find a significant negative connection to the inferior frontal gyrus. However, if I select the PPC (L) and LPFC (L) seeds at the same time, I get significant positive connections with the subcallosal cortex, but I do not know how to see which of the two seeds is the one connected with this subcallosal area. I do not understand how I should carry out the analysis or how to interpret the results.

Thank you very much in advance.

Best regards,

Agurne

RE: Export .surf.nii file to Freesurfer?

Hi

Thanks a lot for your answer !

Unfortunately I am a bit lost because I did not find any function called 'conn_surf_read'.
So I tried 'conn_surf_readsurf' instead, but I couldn't read my 4D .surf.nii file with it.

I also tried to proceed as in conn_surf_surf2vol.m to read the .surf.nii file:

a1 = spm_vol(filename);      % read the .surf.nii header information
b1 = spm_read_vols(a1);      % load the surface data into a matrix

Then

save(gifti(permute(b1(:,1,:),[1,3,2])), conn_prepend('lh.',filename,'.gii'));   % first hemisphere (lh)
save(gifti(permute(b1(:,2,:),[1,3,2])), conn_prepend('rh.',filename,'.gii'));   % second hemisphere (rh)

but I am still unable to load the resulting lh./rh.filename.gii file in freeview.

I would really appreciate a little more help ;-)

Best regards,
Mélanie

Can I load corrected_BETA*files into CONN again to extract the corrected_fALFF values for specific regions?

Dear CONN toolbox experts,

I performed a fALFF analysis with CONN. I then corrected the resulting BETA* files for brain atrophy with an SPM script, and now wish to extract the corrected fALFF values for the left and right hippocampus. Is it possible to load the corrected_BETA* files into CONN and extract the values as usual?

Thank you very much in advance!

Kind regards,
Carole

RE: a few questions about small ROIs

[color=#000000]Hi Alfonso,[/color]

[color=#000000]RE: point number 2:[/color]

If there are, for example, 14 voxels at 1mm resolution, will CONN still extract signal from 14 voxels at 2mm even if some of those coordinates fall outside that region? Or would it pull from fewer voxels so that the anatomical specificity remains intact?

Thank you,
Sarah


[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote][color=#000000]Hi Ely,[/color]

[color=#000000]Those are very good questions, some thoughts on these below[/color]
[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by Benjamin Ely:[/i][quote]Hi Alfonso,

I'm working on a CONN analysis (version 15a) that uses several small (2-5 functional voxels), subject-specific ROIs. I have a couple of questions:

1) Is it an issue that the subject-specific ROIs I've generated are already in MNI space? I do most of my preprocessing (i.e. realignment, warping to MNI, smoothing) outside of CONN, so all the structural/functional/ROI files I uploaded into the setup page were already in MNI space. I just noticed from the manual that subject-specific ROIs should be in subject space. Is CONN applying an additional MNI transformation to the ROIs I uploaded, and if so, can I disable this?
[/quote]
That is perfectly fine, and sorry if the manual is not perfectly clear in this regard. CONN does not apply any transformation at all to your ROIs, so the ROI files that you enter into CONN in Setup.ROIs (whether subject-specific or subject-independent) are always expected to be coregistered with (i.e., in the same space as, although not necessarily resampled to the same resolution as) the corresponding functional volumes that you enter in Setup.functionals (after any preprocessing steps, if applicable). So, basically, after any preprocessing steps and right before running the Setup step, CONN expects all of the files (functionals, structurals, ROIs) to already be appropriately coregistered. The manual remark concerning subject-specific ROIs is just referring to the case (not yours) where you have subject-specific ROIs [i]in subject-space[/i] (e.g. defined anatomically from the original -before normalization- structural volumes), and of course in that case you should make sure that your associated functional files are also in the same space. [quote]
2) How does CONN interpolate higher-resolution ROIs to functional resolution? The ROIs I have were created in anatomical space (0.7mm iso); I did my own downsampling to functional space (2mm iso) to ensure fidelity and imported both sets during setup. The results between the two look quite similar, but not identical.
[/quote]
In this regard CONN will always respect the original ROI files resolution. When extracting functional data from an ROI CONN will first get the coordinates of all of the voxels within the ROI (at the resolution of the ROI file), and then extract the functional data at these same coordinates from the functional volumes using nearest neighbor interpolation. So, for example, if an ROI contains 5 voxels (at 0.7mm resolution), CONN will get the MNI coordinates of those 5 voxels and extract the BOLD timeseries from the 5 voxels in the functional data that are closest to these 5 coordinate positions (which may all be from the same voxel in the functional data or from several voxels). After getting these 5 timeseries CONN will compute the average (or PCA for multidimensional ROIs like White/CSF areas) across these 5 timeseries to get the ROI-level BOLD timeseries. The reason it is done this way (instead of the opposite way: sampling the ROI files at the functional data resolution) is because the latter approach can lead to loss of very small ROIs due to resampling, while the former approach guarantees no loss of small ROIs and a more appropriate partial-volume weighting of the functional data.  [quote]
3) In a similar vein, how does CONN decide which functional voxels to exclude when using the grey matter masking option? The grey matter mask I input (generated from the standard SPM segmentation program) is graded, not binary, and is also at anatomical resolution. I'm a bit unclear on how CONN translates this information into a binary grey matter mask at functional resolution.
[/quote]
Same principle as above. The grey matter mask values are sampled at the coordinates of the ROI-file voxels (in the example above we get 5 values from the grey matter files, extracted from the voxels in those files that are closest to the coordinates of the 5 ROI voxels), and any ROI voxels outside of the mask are disregarded. CONN uses a relatively conservative threshold/masking for the 'mask with grey matter voxels' ROI option (it only removes voxels with 0 values in the grey-matter mask, despite the grey-matter mask generated by SPM segmentation being graded). This is mainly because the transitions between 1-values and 0-values in these volumes are typically relatively fast (e.g. removing only the 0-values results in perhaps only ~20% more voxels within the "grey matter" mask than removing values below .5), and because for this masking a more conservative approach is probably preferable (compared to the masks used for the White-matter and CSF areas entered into CompCor, for example, which benefit from a more aggressive approach; for those CONN uses instead a <.5 threshold followed by an additional erosion step). Of course, if you prefer a more aggressive masking you may do so simply by thresholding the corresponding c1*.nii files using any desired approach before running the Setup step.
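For example, a minimal sketch of one such stricter threshold (assuming a hypothetical c1subject01.nii file produced by SPM segmentation):

% hypothetical example: binarize a grey-matter probability map at p>0.5 using spm_imcalc
Vi = spm_vol('c1subject01.nii');             % grey-matter probability map (hypothetical filename)
Vo = Vi; Vo.fname = 'c1subject01_thr.nii';   % thresholded image to enter in Setup instead
spm_imcalc(Vi, Vo, 'i1>0.5');                % keep only voxels with grey-matter probability above 0.5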

Hope this helps
Alfonso[/quote]
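For reference, a minimal MATLAB/SPM sketch of the nearest-neighbor extraction logic described in the quoted reply above (with hypothetical filenames; this is not CONN's actual implementation):

% hypothetical example: sample the functional data at the ROI-file voxel coordinates
Vroi = spm_vol('roi_highres.nii');                % high-resolution ROI mask (hypothetical filename)
Vfun = spm_vol('functional_2mm.nii');             % 4D functional data (hypothetical filename)
[roi, XYZmm] = spm_read_vols(Vroi);               % XYZmm: world (mm) coordinates of every ROI-file voxel
xyz  = XYZmm(:, roi(:)>0);                        % keep the coordinates of the ROI voxels only
ijk  = round(Vfun(1).mat \ [xyz; ones(1,size(xyz,2))]);  % nearest functional-voxel indices
bold = zeros(numel(Vfun), size(xyz,2));
for t = 1:numel(Vfun)                             % one BOLD timeseries per ROI voxel
    bold(t,:) = spm_sample_vol(Vfun(t), ijk(1,:), ijk(2,:), ijk(3,:), 0);  % 0 = nearest-neighbor sampling
end
roi_timeseries = mean(bold, 2);                   % average across ROI voxels (PCA for multi-dimensional ROIs)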

RE: How to add respiration & cardiac data in setup

[color=#000000]Hi everyone,[/color]

[color=#000000]I have a similar question related to physiological noise correction. I have cardiac and respiration data acquired during the rsfMRI and would like to use those data in my CONN analysis. I have slibase.1d files and am curious how to use those files in a CONN analysis.[/color]
Any suggestions or experiences?

Thank you so much!
Aki
 
[i]Originally posted by samane Shojaei:[/i][quote]Dear all,

I know I should add my physiological data in the Setup step in CONN and then use RETROICOR regressors to clean my data; however, I cannot find any help on the CONN web page/manual. My respiration and cardiac data are given to me in .resp and .puls formats. Could you please help me understand where/how to add these data in the CONN GUI, step by step?

Many Thanks,
Samane[/quote]

RE: Unable to display significant values in ROI-ROI connectome ring

Dear Alfonso,

Thanks for your response. Perhaps I framed my question in the wrong way: I am not investigating the whole-brain connectome, I only want ROI-to-ROI effects for a sample of selected ROIs. My concern is that when I select a sample of ROIs instead of the whole-brain connectome, I do not see the ROI effects associated with the selected seed and all the other ROIs in the connectome ring; instead, I see a blank display.

Thanks
Vasudev

RE: REX extract changes image dimensions

Multiple conditions for a contrast

Dear all,

I am new to the CONN toolbox and have a question about setting a between-condition contrast in the second-level analysis. I have four conditions (A, B, C, and baseline); A, B, and C are reading tasks. After the ROI-to-ROI connectivity analysis, I would like to check whether there is an overall task effect for reading versus baseline (i.e., (A, B, C) > baseline), but I am not sure how to set the contrast. Should I use [1 1 1 -3], [1 1 1 -1], [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 -1], or [1 0 0 -1; 0 1 0 -1; 0 0 1 -1]? I have tried all of these options and the results are different, so I would like to make sure which one is correct (if any!). I don't think I have a clear understanding of how conditions can be combined, so I would appreciate your help and suggestions!

best,
YaNing

Error when starting 2nd Level ROI-to-ROI analysis

Hello CONN developers and community members,

I encounter the following error when I attempt to do some second-level analyses. I was able to finish setup, denoising, and first-level analysis of the data with no issues; however, when I begin any second-level analysis I am shown the following error.
Any advice on how to resolve this is very much appreciated. 

Best,
Jagan Jimmy.

ERROR DESCRIPTION:

Undefined function or variable 'Z'.
Error in conn_process (line 4662)
yt=permute(Z(iroi(nroi),:,:),[3,2,1]);
Error in conn_process (line 57)
case 'results_roi', [varargout{1:nargout}]=conn_process(17,varargin{:});
Error in conn (line 9255)
CONN_h.menus.m_results.roiresults=conn_process('results_ROI',CONN_x.Results.xX.nsources,CONN_x.Results.xX.csources);
CONN18.b
SPM12 + AAL3 DEM FieldMap MEEGtools
Matlab v.2019a
project: CONN18.b
storage: 9454.2Gb available
spm @ /usr/local/neuro/spm12
conn @ /usr/local/neuro/conn

Error in Pre-Processing

Hello!

When I am preprocessing, I run into the attached error, which states that there are no executable modules. I have checked to make sure that there are multiple scans/acquisitions for each file (56 volumes each), and I would appreciate any input on what may be causing this error.

Thanks!
Neta

"GE_data-GE_lattice+LE_data-LE_random"

Dear Alfonso or other experts of CONN,

Hi, I found the answer on determining optimal cost in this post and another (https://www.nitrc.org/forum/message.php?msg_id=20071) post really helpful.

I'd like to adopt the threshold that maximizes "GE_data-GE_lattice+LE_data-LE_random".
In the middle of this process, some questions came up:

(1) I noticed that whenever I delete the threshold value, the graphs show somewhat different results. I tried around 10 times and got 4 different values for "GE_data-GE_lattice+LE_data-LE_random". Would you let me know the reason behind this? Can I just choose one of these values?

(2) The current plots only show the results within the 0-0.5 cost range. Is there any way that I can get results above 0.5 cost, e.g., by adjusting an internal function?

(3) It might be a problem with my computer, but the legend indicating data, lattice, and random is not properly shown (I attach a picture). Would you let me know how to fix this, or which line corresponds to random and which to lattice?

(4) Is there any way that I can see "GE_data-GE_lattice+LE_data-LE_random" per cost value in the form of a table or .mat file? I could use the internal function that generates this plot. Would you let me know which function is used to generate the plot?

I appreciate your help, thanks a lot.

Best,
Irene

Smoothing kernel in CONN

I'm a graduate student and a new CONN user. I was recently shown how to run an ROI-to-ROI bivariate correlation analysis in CONN for a project examining functional connectivity among different brain regions during an fMRI task. Does anyone happen to know how I can retrospectively check what smoothing kernel was used during the preprocessing stage of this analysis? I believe the default in CONN is 8mm, but we want to make sure this wasn't manually changed during setup. Many thanks in advance!

Scrubbing & task-based fMRI

Hi all,
I have an analysis I want to run on task-based fMRI, but I am a bit confused. I scrubbed my volumes during preprocessing, and now I don't understand what will happen with the timing of my task data. That is, if a volume is not valid, is it removed from the scan, so that my task timing is off? If it is not removed, how is it taken into account? I'm not sure how to model this at the first level.
Thank you,
Natasha

RE: Apple cannot verify the developer for the SPM download

The problem is that macOS quarantines SPM12 and does not allow it to communicate with other programs. The solution is:

Go to Terminal and run this command: sudo xattr -r -d com.apple.quarantine /Users/<username>/Downloads/spm12

where <username> is the user who initially installed the software on your Mac (e.g. you or someone else); the com.apple.quarantine attribute can only be removed by the user who installed the software. /Downloads/spm12 is the location where the file was downloaded; adjust the path if SPM12 is stored elsewhere.

extracting connectivity values in CONN vs SPM

Hi all,

When reviewing second-level results in CONN, my understanding is that you can extract individual connectivity values by opening the results in "Results Explorer", selecting "Import Values" > "Other clusters of interest or ROIs (select mask/ROI file)", and then importing these values. Assuming this is correct, how is this different from opening the SPM.mat file representing the same contrast and extracting first-level eigenvariates using a mask? Will these produce the same values or different ones, and if different, how do they differ?

Thanks!