Channel: NITRC CONN : functional connectivity toolbox Forum: help
Viewing all 6861 articles
Browse latest View live

v20b Preprocessing stuck at ART

Hi Alfonso,

I am trying to run the default preprocessing pipeline through the GUI on one subject using CONN v20b on my University's cluster. However, it appears to be getting "stuck" at the Outlier detection stage. That is, MATLAB does not throw an error, but the program will not move on from this step, even after the art_screenshot.jpg file is saved. I am not sure what to do, because MATLAB is not throwing an error, so I cannot diagnose the actual problem. 

I have successfully run this pipeline on my local computer, with the same subject, so I am pretty confident that it is not an issue with my data. I am curious if you have come across this issue before when people try to run this version of the toolbox on a cluster?

I am using the following programs:
- CONN v20b
- SPM12 v7771

Any advice would be greatly appreciated!

Thanks,
Ryan

How to correct CONN inflation of subject number/T-statistic

Hello,

This post is in reference to a previous post (Possibly incorrect model, https://www.nitrc.org/forum/message.php?msg_id=32233) on the same project. We have been trying to sort out warnings of non-estimable contrasts, and like you said in your previous post, we believe it has something to do with redundant covariates (SZP and onlySZP, as an example) being included in our contrasts, covering the same exact space for one-sample t-tests.

The solution would be to include only one of these covariates, but this does not work for us because it causes t-statistic inflation. By including just SZP and removing onlySZP, the warning goes away, but our subject number shoots up to 785 when the actual number is 269. To be clear, the expected analysis sample is not even supposed to be 269; it should be 204 after intersection with GoodQA2. Either way, we are getting subject-number inflation. Are there any ways to avoid this problem?
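For what it's worth, the non-estimability itself can be reproduced outside CONN: a contrast over the design columns is estimable only if it lies in the row space of the design matrix, and redundant group indicators break this for single-column contrasts. Here is a toy numpy sketch (made-up indicator columns, not our actual SZP/onlySZP covariates):

```python
import numpy as np

def estimable(c, X, tol=1e-8):
    """A contrast c over the columns of design X is estimable iff
    it is unchanged by projection onto the row space of X
    (pinv(X) @ X is the orthogonal projector onto that row space)."""
    c = np.asarray(c, dtype=float)
    return bool(np.allclose(c @ np.linalg.pinv(X) @ X, c, atol=tol))

# toy design: two identical group indicators covering the same space
g = np.array([1., 1., 1., 0., 0., 0.])
X = np.column_stack([g, g])

estimable([1, 0], X)  # False: a single redundant column cannot be isolated
estimable([1, 1], X)  # True: the sum over the redundant columns is estimable
```

This matches the behavior we see: a one-sample contrast on just one of two overlapping covariates is flagged, while the combined contrast is fine.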

I have attached a document showing a marked difference in results (SZP vs onlySZP vs both) in an example of LSecAud connectivity for our patient sample.

Thank you,
Mikey

GE_data-GE_lattice+LE_data-LE_random

Dear Alfonso or other experts of CONN,

Hi, I found the answers on determining the optimal cost in these posts (https://www.nitrc.org/forum/message.php?msg_id=24601, https://www.nitrc.org/forum/message.php?...) really helpful.

I'd like to adopt the threshold that maximizes "GE_data-GE_lattice+LE_data-LE_random".
In the middle of this process, I have some questions:

(1) I noticed that whenever I delete the threshold value, the graphs show somewhat different results. I tried around 10 times and got 4 different values for "GE_data-GE_lattice+LE_data-LE_random". Could you let me know the reason behind this? Can I choose one of these values?

(2) The current plots only show results within the 0 to 0.5 cost range. Is there any way to get results for costs above 0.5, e.g. by adjusting an internal function (script)?

(3) Is there any way to see "GE_data-GE_lattice+LE_data-LE_random" per cost value in the form of a table, such as a .mat file? I may be able to use the internal function that generates this plot; could you let me know which function that is?
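For reference, my understanding of what the curve computes is roughly the following pure-Python sketch (a toy re-implementation with my own helper names, not CONN's internal code):

```python
from collections import deque
import numpy as np

def threshold_at_cost(corr, cost):
    """Binarize a correlation matrix, keeping the strongest `cost`
    fraction of all possible edges (hypothetical helper, my naming)."""
    n = corr.shape[0]
    iu, ju = np.triu_indices(n, k=1)
    n_keep = int(round(cost * len(iu)))
    keep = np.argsort(corr[iu, ju])[::-1][:n_keep]  # strongest edges first
    adj = np.zeros((n, n), dtype=bool)
    adj[iu[keep], ju[keep]] = True
    return adj | adj.T

def global_efficiency(adj):
    """Mean of 1/shortest-path-length over all ordered node pairs (BFS)."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist if d > 0)
    return total / (n * (n - 1))

# efficiency evaluated at each cost level; nothing in the math itself
# restricts this to costs below 0.5 (that is just where the plot stops)
def efficiency_per_cost(corr, costs):
    return [global_efficiency(threshold_at_cost(corr, c)) for c in costs]
```

If this is roughly what CONN does internally, then question (2) is mainly about where the plotted cost range is hard-coded.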
I appreciate your help, thanks a lot.

Best,
Irene

Preprocessing Error

Dear experts, Dear Alfonso, 

I would be so thankful if you could help me with an error in the preprocessing step of the CONN toolbox, which I have now gotten for the fourth time. I really don't know what's wrong. It's called `segmentation violation`.
I'm working with two paths of 509 NIfTI files, and I successfully completed another project before, so I'm fairly sure I did the setup settings correctly. But in this other workspace I have now gotten this error four times, even though I checked the data each time for correctness: no images are missing, and both the structural and functional sequences are correct as well. CONN stops rather near the end of the process, so I would guess a single file disrupts it, but I'm not sure. Is there any option to localize the problematic file or path?
The error message is below.

Thanks in advance! 
Marie



Configuration:
Crash Decoding : Disabled - No sandbox or build area path
Crash Mode : continue (default)
Current Graphics Driver: NVIDIA Corporation GeForce GTX 1050/PCIe/SSE2 Version 4.5.0 NVIDIA 384.130
Current Visual : 0x21 (class 4, depth 24)
Default Encoding : UTF-8
Deployed : false
GNU C Library : 2.23 stable
Host Name : linuxrechner2
MATLAB Architecture : glnxa64
MATLAB Entitlement ID: 1335693
MATLAB Root : /usr/local/MATLAB/R2017b
MATLAB Version : 9.3.0.713579 (R2017b)
OpenGL : hardware
Operating System : Linux 4.15.0-65-generic #74~16.04.1-Ubuntu SMP Wed Sep 18 09:51:44 UTC 2019 x86_64
Processor ID : x86 Family 143 Model 8 Stepping 2, AuthenticAMD
Virtual Machine : Java 1.8.0_121-b13 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
Window System : The X.Org Foundation (11906000), display :0

Fault Count: 1
Abnormal termination:
Segmentation violation
Register State (from fault):
RAX = 0000000000000001 RBX = 00007f988da53740
RCX = 0000000000000001 RDX = 00007f9995d25b00
RSP = 00007f98a6e63c78 RBP = 0000000000000002
RSI = 0000000000000001 RDI = 00007f988da53740
R8 = 0000000000000000 R9 = 0000000000000000
R10 = 0000000000000003 R11 = 00007f999bdcf7e0
R12 = 0000000000000008 R13 = 0000000000000002
R14 = 0000000000000004 R15 = 0000000000000002
RIP = 0000000000000001 EFL = 0000000000010202
CS = 0033 FS = 0000 GS = 0000
Stack Trace (from fault):
[ 0] 0x0000000000000001 +00000000
If this problem is reproducible, please submit a Service Request via:
http://www.mathworks.com/support/contact_us/
A technical support engineer might contact you with further information.
Thank you for your help.
** This crash report has been saved to disk as /home/demenzbild/matlab_crash_dump.21651-1 **
MATLAB is exiting because of fatal error
Killed

RE: How to keep subject name as input file name when importing instead of "Subject 1"

[color=#000000]Hi Jordan,[/color]

Sorry, CONN's GUI always uses sequential subject numbers as IDs. If you are using the GUI to import your data, then yes, the data will be imported using alphanumeric sorting of folder/filenames. In any case, one quick/simple way to check the order in which your data has been imported is to list all structural files in your CONN project (assuming you have one structural per subject) using something like:

   char(conn_module('get','structurals'))

which will return something like the following:

'/Volumes/ext20/Cambridge/Cambridge_Buckner_part4/sub93269/anat/mprage_anonymized.nii'
'/Volumes/ext20/Cambridge/Cambridge_Buckner_part4/sub93488/anat/mprage_anonymized.nii'
'/Volumes/ext20/Cambridge/Cambridge_Buckner_part4/sub93609/anat/mprage_anonymized.nii'
'/Volumes/ext20/Cambridge/Cambridge_Buckner_part4/sub94304/anat/mprage_anonymized.nii'
'/Volumes/ext20/Cambridge/Cambridge_Buckner_part4/sub95187/anat/mprage_anonymized.nii'
'/Volumes/ext20/Cambridge/Cambridge_Buckner_part4/sub95644/anat/mprage_anonymized.nii'
... etc.

Sometimes it is useful to create a "subject ID" code and enter that into CONN for safekeeping. In the example above, for instance, you might want to read the "sub####" portion of those filenames into a new ID variable using something like:

   ID = str2double(regexp(conn_module('get','structurals'),'(?<=sub)\d+','match','once'));

That would allow you to then save those ID values into CONN as a new 2nd-level covariate using something like:

   conn_module('set','l2covariates',ID(:),{'ID'},{'IDs read from filenames'},true);

Hope this helps
Alfonso

[i]Originally posted by Jordan Galbraith:[/i][quote]Our lab just started using CONN, and we use the GUI. We can't figure out how to keep our participants' names/5-digit codes when importing their data, since the subject names automatically show up as "Subject 1", "Subject 2", etc., and we need to track characteristics of the subjects within CONN. We could assume it imports in the order they are listed in the directory, but that seems like a big assumption, and I'd rather import them without renaming them generically. Any help is appreciated![/quote]

RE: Error message - Reference to non-existent field 'X1'.

[color=#000000]Hi Nikolov,[/color]

[color=#000000]I believe this error indicates that the Denoising step has not really finished correctly yet. Please check in your project conn_*/results/preprocessing folder whether files named ROI_Subject*_Condition*.mat exist there, and if not, please try re-running the denoising step (and let me know if any error/warning messages occur during denoising)[/color]

[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by nikolov:[/i][quote]+1 - I am also getting this error consistently when attempting to do rsfMRI connectivity analyses with an imported atlas - any insight would be appreciated!

ERROR DESCRIPTION:

Reference to non-existent field 'X1'.
Error in conn (line 7347)
[CONN_h.menus.m_analyses.X,CONN_h.menus.m_analyses.select,names]=conn_designmatrix(CONN_x.Analyses(ianalysis).regressors,CONN_h.menus.m_analyses.X1,[],{nregressors,nview});
Error in conn (line 6401)
conn gui_analyses;
Error in conn (line 6639)
conn('gui_analysesgo',1);
CONN20.b
SPM12 + DEM FieldMap MEEGtools
Matlab v.2020b
project: CONN20.b
storage: 939.2Gb available[/quote]

RE: ROI-to-ROI gPPI analyses

[color=#000000]Dear Ling,[/color]

[color=#000000]gPPI analyses allow you to estimate task-dependent connectivity, so you can still apply these analyses if you have only one task-condition -assuming there is a "baseline" reference to compare that condition to. For example, you may have an event-related design with a single finger-tapping task, and you may use gPPI to estimate connectivity related to the tapping task (the estimated gPPI interaction terms will tell you whether connectivity increases or decreases during the tapping task compared to the baseline). Of course, you cannot use gPPI to analyze a pure resting state analysis or an analysis of a task that encompasses the entire scanning session, since in those cases you do not have a "baseline" reference to compare that connectivity to. [/color]

[color=#000000]In general weighted-GLM is often used to look at either resting-state data or task-dependent connectivity when the tasks are "long" (e.g. block designs with >10s blocks, or when comparing entire sessions/runs), and gPPI is often used to look at task-dependent connectivity when the task conditions are "short" (e.g. block-design with <10s blocks, or event-related designs)[/color]
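To make the interaction-term logic concrete, here is a minimal numpy sketch of a PPI-style regression on simulated data (an illustration only, not CONN's actual gPPI implementation, which among other things convolves the condition with a hemodynamic response function):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
task = (np.arange(T) % 40 < 20).astype(float)  # toy boxcar: condition vs baseline
seed = rng.standard_normal(T)                  # toy seed ROI timeseries
# simulate a target whose coupling with the seed doubles during the task
target = 0.5 * seed + 0.5 * task * seed + 0.1 * rng.standard_normal(T)

# PPI-style design: intercept, condition, seed, and their interaction
X = np.column_stack([np.ones(T), task, seed, task * seed])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
# beta[2] estimates the baseline seed-target coupling; beta[3] estimates the
# interaction term, i.e. how much connectivity increases or decreases
# during the task relative to baseline
```

Without a baseline portion of the timeseries, the `task` and intercept columns would be collinear and the interaction term could not be estimated, which is the point made above.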

Hope this helps clarify
[color=#000000]Alfonso[/color]
[i]Originally posted by Huiling Li:[/i][quote]Dear CONN toolbox experts,

Hi, I'm learning how to use CONN (version 19) for ROI-to-ROI gPPI analyses. Although I have studied the manual carefully and watched the data analysis videos shared by others, I am still not sure about some details. My questions are as follows.

1. If I have only one experimental condition, should I select ROI-to-ROI gPPI analysis or functional connectivity (weighted GLM) in the analysis type section? I have found that most of the previous studies only perform gPPI analysis when there are multiple experimental conditions.

2. If ROI-to-ROI gPPI analysis is possible, should I select "regression (bivariate)" in the analysis options section when there is only one experimental condition? Is it possible to select "correlation (bivariate)"?

Thank you very much in advance.

Best regards,

Ling[/quote]

RE: White matter dimensions denoising step

Thank you very much Alfonso! It actually clarifies my problem very well!!

Seed to Voxel Analysis Multiple Seeds FDR Correction

Hello! I'm currently working on a Seed to Voxel Analysis with 5 different seeds.
I am aware that one way to correct for multiple comparisons is to use Bonferroni correction: multiply the cluster-size p-FWE or p-FDR by the number of seeds (5 in my case) and check whether it still survives the threshold (0.05). However, I'm looking for a more liberal FDR approach, and this is what I've got so far:

For all 5 seeds, I have extracted all the cluster-size p-uncorrected values that survive only the voxel threshold p<0.001 in the Advanced Family-wise Error Control settings (that is many more clusters than show up in the Random Field Theory parametric statistics). After combining all the cluster-size p-uncorrected values into a single vector in R, I used the "Benjamini-Hochberg" method to transform them into p-FDR values. This seems correct to me, as I get the same cluster-size p-FDR values as the RFT output when I apply this method to the p-values from a single seed.
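For concreteness, the pooling-and-adjusting step is equivalent to the following numpy sketch (the p-values here are made-up placeholders, not my actual cluster values):

```python
import numpy as np

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure),
    matching R's p.adjust(p, method='BH')."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity: adjusted p_(i) = min over j >= i of ranked[j]
    adj = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

# pool the uncorrected cluster-size p-values across all 5 seeds,
# then adjust them jointly (placeholder numbers)
pooled = [0.001, 0.004, 0.008, 0.02, 0.03, 0.2, 0.5]
q = bh_fdr(pooled)
```

Applied to the p-values of a single seed, this reproduces the cluster-size p-FDR values that the RFT output reports, which is the sanity check I mentioned above.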

I'd very much appreciate it if you could please let me know if this is the correct way to do it; if not, I'd appreciate it if you could tell me other ways to compute the seed-corrected p-FDR values.

Thanks!

Soo Park

RE: CONN workshop

Hi Alfonso,

Being in Australia, will there be a way to watch the workshop in replay?

Thank you,
Hugo

stroke patients in CONN

Hi Alfonso and the rest of the community.

I am trying to perform a connectivity analysis in stroke patients (Stroke patients vs Controls). I have binary masks that delimit the lesion (1 inside the lesion, 0 outside) in the native space of the stroke subjects (one mask per subject). As a potential strategy to regress lesion effects I have planned to include lesion masks in the TPM of the CSF of stroke subjects (as proposed in this paper https://doi.org/10.3389/fnhum.2017.00091). Browsing the CONN forum I found a post in which you advise to modify the spm TPM in order to contemplate the lesion as a new tissue type (https://www.nitrc.org/forum/message.php?msg_id=32176). What strategy should I follow? Would it be correct/possible to modify the CSF maps generated by CONN during segmentation (concatenate them with the lesion maps of each subject) to regress lesion-linked effects during the denoising step? Or, should I just create a mask that contains all the lesions in the stroke group and include it as a new tissue class in the SPM TPM.nii file?


Apologies in advance if this is an absurd and/or poorly formulated question.


Benxa.

RE: v20b Preprocessing stuck at ART

Hi Alfonso,

Thanks for the patch!

I just ran a couple of tests and unfortunately this didn't appear to fix the issue. Again, no error was thrown, but the pipeline appears to get hung up after the art_screenshot.jpg was saved.

Thanks,
Ryan

RE: Export .surf.nii file to Freesurfer ?

Hi everyone,

I'm afraid I am still struggling with my surface data.

Would someone know a way to export surface-smoothed rsfMRI data (3D+t) to textures that I could then display in Freeview and/or use in other surface-based processing tools, please? I would be really grateful :-D

Best,
Mélanie

RE: ROIs data extraction

[i]Originally posted by Nir Habouba:[/i][quote]Hello CONN users,
These days I'm working on my first fMRI data analysis using the CONN toolbox. One of the things I am interested in is time-series extraction from ROIs defined by an atlas. However, I have 150 time points representing a "prolonged" task, which means the processing is quite similar to that of resting-state data. Accordingly, I defined one session for my 14 subjects with "one condition that spans entire session". Surprisingly, my output includes the following two files: ROI_Subject001_Condition00[b]0[/b].mat and ROI_Subject001_Condition00[b]1[/b].mat. I would like to ask [b]why[/b] I got [b]two files[/b] for [b]one condition[/b] as described earlier?

Best,
Nir[/quote]Dear Alfonso,
As a new CONN user I would like, first of all, to thank you for your previous forum comments, which have been very helpful. Since I did not find any solution to the question above, I would appreciate it if you could share your thoughts and suggestions.

Best regards,
Nir

d*.nii files are not in my original folder

Hi all,

I'm using the latest version of CONN (v20b) and I want to use the denoised files. I checked the 'create confound-corrected timeseries' option in Setup.Options, but no d*.nii files appeared in the same folder as my original functional data.

Also, when I edit the file conn_process.m and change the line that reads "DONEWOUTPUTCONFCORR=1;" to "DONEWOUTPUTCONFCORR=2;" the old denoised files are generated (in the preprocessing folder), but not the new ones.

I'm sure this is something easy to solve, but I have no idea what I am missing.

Any help? Thanks!

Len

RE: conn toolbox- behavioral regressor.

Dear Alfonso,
Can you please help me understand how to read the ROI.mat file from this analysis?
- Which variable represents the r correlation value?
- Could you explain the "h", "F" and "p" fields?
Thank you,
Sandy.

[i]Originally posted by Alfonso Nieto-Castanon:[/i][quote][color=#000000]H[/color]i Sandy[/quote][quote]
[color=#000000]You may simply import your behavioral measure into CONN in [i]Setup.Covariates (2nd-level)[/i] (e.g. use the "covariate tools -> import new covariate data from file" menu), and then in the second-level results tab you may select 'AllSubjects' and 'YourCovariate' and enter a [0 1] between-subjects contrast in order to define that correlation analysis. [/color]

(also, if you are using CONN's latest release, after entering your covariate in CONN you will find in the second-level results tab an option that reads "[i]does the connectivity differ from zero?[/i]", and you may simply select that option to have CONN define that correlation analysis automatically for you, without having to select 'AllSubjects', etc.)

[color=#000000]Best[/color]
[color=#000000]Alfonso[/color]
[i]Originally posted by sandy11:[/i][quote]hi,
I have rs-fMRI data, and I would like to find a correlation between the brain connectivity at resting state and a behavioral measure.
Can I add the behavioral measure as a regressor in the conn toolbox and have the analysis there?
Or should I extract the results of the connectivity and do the analysis on SPSS for example?
Im using conn toolbox.
thank you,
Sandy.[/quote][/quote]

RE: Error message - Reference to non-existent field 'X1'.

Dear All,
I'm experiencing the same error.
I have a data set with 3 sessions for each subject, but for some subjects, a session is missing.
I indicated 'allow missing data' at the 'set-up' condition tab, but was now wondering whether the imbalanced design could be related to the error message I get after the denoising (and prior to setting up the first-level analysis)?

Thank you for any input!
All best wishes,
Kaat

Z scores- ICA second level (CONN)

Hello,

I want to extract the z scores from CONN ICA second level analysis. Could you please guide me through it?



Thank you
Regards
Akshita Joshi

Conn project export

Hello,

I wish to export the CONN project to another system, but it asks me to re-run the results. Is there any directory that I need to change?


Thank you
Akshita Joshi

Spatially constrained ICA or something similar?

Hello,

I have a question about the application of the V2V approaches. I have established connectivity between a region 1 (seed) and a larger cluster 2 encompassing the hippocampus. I would now like to use the maps of the seed region 1 to identify hotspots within the hippocampus, to better assess where FC from region 1 is strongest in region 2.

I am curious if there is a method for performing spatially constrained ICA to examine potential hotspots of region 1 functional connectivity, constrained to region 2, the hippocampus (without considering the rest of the brain).

Thank you!
Sarah