Dear Alfonso,
I am still stuck with the issue:
Error using conn (line 872)
Failed to load file /scratch/betka/02012021_5Analysis_5_1015.mat.
- I checked the cluster side and everything is working well (the HPC IT people even complimented the way conn parallelises the jobs).
- I am using the latest conn version 20.
- I have no storage issue (68Tb available).
- The filename is correct and the .mat file exists, but I am not able to open it in MATLAB. When I run the command you suggested on this forum, load filename.mat, nothing happens, and when I assign the result to a variable, the structure has no fields.
- I tried to reproduce the .mat file several times interactively on the HPC (while commenting out the conn_batch(BATCH) line), and MATLAB crashes. I am in contact with the HPC IT team, but I don't think the issue is related to the HPC. I may be wrong, but I don't have enough free space to try such an analysis locally.
- Important point: the analyses work very well with 1, 4, or 10 subjects, but the error occurs when nsubjects = 1015.
I went through the forum and was not able to find a proper solution to this problem.
Should I ask the cluster IT team to install the Linux standalone version of conn on the HPC?
Could it be a memory issue?
Should I just run fewer subjects at once and merge the data at the end "manually"?
Your help would be super appreciated,
Best,
s.