
January 2013

January was mainly filled with programming and testing: a war against bugs and coding mistakes. The good point is that I now have a solid understanding of most of the code.

Preprocessing ABIDE

ABIDE preprocessing is still performed by Elvis. We now have a pretty good preprocessing pipeline and I am working on preprocessed ABIDE images from the Leuven site. Recently, preprocessing has been improved thanks to the use of DARTEL. I have not played with these data yet, as I didn't want to change my data in the middle of a debugging session.

Multi Subject Dictionary Learning

Use Total Variation as a regularizer for the group maps

In the first version of the algorithm, the L1 norm was used to regularize the group maps and impose sparsity. However, about half of the maps consisted of a few scattered voxels. To solve that, using total variation (TV) as a regularizer seems like a good option.

Unfortunately, the proximal operator used to minimize the group maps cannot be computed in closed form for TV; it has to be estimated. To do that, I used the code of Emmanuelle Gouillart (https://github.com/emmanuelle/tomo-tv), which implements Beck and Teboulle's FISTA adapted to TV denoising, as described in "Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems" (2009).
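
To give an idea of how this plugs into the map update, here is a minimal sketch of the proximal step. It is not the actual MSDL code: the real solver is the FISTA-based one from tomo-tv, and scikit-image's Chambolle TV denoiser is only used here as a stand-in; tv_weight is an illustrative parameter name.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_prox(map_3d, tv_weight=0.1):
        # Approximate the prox of tv_weight * TV by running a TV denoiser
        # on the 3D map (stand-in for the FISTA-based solver).
        return denoise_tv_chambolle(map_3d, weight=tv_weight)

    # Toy usage on a random 3D "group map"
    rng = np.random.RandomState(0)
    smooth_map = tv_prox(rng.randn(20, 20, 20), tv_weight=0.2)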

While using this operator, we observed that the global energy of MSDL sometimes increased between two steps, which is abnormal since the algorithm is meant to converge. After some investigation, I discovered that the energy was gained during the masking process. We work with masked data to save memory, but total variation must be computed on a 3D image as it requires geometry information. Our pipeline therefore has to unmask the maps into a 3D volume, run TV denoising there, and re-mask them afterwards.
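
A minimal sketch of that unmask / denoise / re-mask loop, with illustrative names (the actual code lives in the MSDL solver):

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_denoise_masked(masked_map, mask, tv_weight=0.1):
        # 1. Unmask: put the masked values back into a full 3D volume
        volume = np.zeros(mask.shape)
        volume[mask] = masked_map
        # 2. TV needs the 3D geometry, so denoising runs on the volume
        volume = denoise_tv_chambolle(volume, weight=tv_weight)
        # 3. Re-mask to go back to the memory-friendly 1D representation
        return volume[mask]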

We pursued several leads to solve this problem, and we cannot assert yet that it is solved. In fact, due to a problem in the energy computation, I spent the whole week going round in circles trying to figure out why the energy kept increasing even with our tricks in place. Gael helped me track it down today and I have very promising results, but I would like to have validated results before claiming victory!

Positivity constraint on TV

As we had meaningless negative values in our maps, and as values outside of the mask are negative, we thought that imposing positivity while computing the TV denoising might solve our problem. It really helps to get better maps.
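
In the actual solver the constraint is enforced inside the FISTA iterations; the crude sketch below only conveys the idea with a post-hoc projection onto non-negative values.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_prox_positive(volume, tv_weight=0.1):
        denoised = denoise_tv_chambolle(volume, weight=tv_weight)
        # Project onto the non-negative orthant (positivity constraint)
        return np.maximum(denoised, 0.)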

Map constraint on TV

Since we impose a threshold on the values, constraining the solution to stay within the map may help. This has been implemented and gives good results in terms of energy but, as said before, I still have no strong results.
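
One possible reading of this constraint, sketched below with a hypothetical helper, is a simple box projection that keeps the denoised values within the value range of the current map; this is an interpretation, not the actual implementation.

    import numpy as np

    def project_in_map(denoised, current_map):
        # Clip the denoised values to the value range of the current map
        return np.clip(denoised, current_map.min(), current_map.max())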

Inner mask TV computation

As TV increases on the edges of the mask, computing it only inside the mask should get rid of the problem. This is in fact one of the most promising solutions, even back when the energy was not computed correctly.
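
For illustration, here is a sketch of an (anisotropic) TV term computed only on voxel pairs that both lie inside the mask; the real computation happens inside the TV solver itself.

    import numpy as np

    def tv_energy_inside_mask(volume, mask):
        energy = 0.
        for axis in range(volume.ndim):
            diff = np.diff(volume, axis=axis)
            # keep a pair only if both neighbouring voxels are in the mask
            inside = np.logical_and(
                np.take(mask, np.arange(mask.shape[axis] - 1), axis=axis),
                np.take(mask, np.arange(1, mask.shape[axis]), axis=axis))
            energy += np.abs(diff[inside]).sum()
        return energy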

Take outer-mask values into account for energy computation

It is possible to compute the energy on the whole data (even outside of the mask) by caching some computations just after TV denoising.
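
A sketch of that idea, assuming the same unmask / denoise / re-mask step as above: the TV energy term is computed on the full volume right after denoising, before the outside-of-mask values are discarded by re-masking.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_step_with_cached_energy(masked_map, mask, tv_weight=0.1):
        volume = np.zeros(mask.shape)
        volume[mask] = masked_map
        denoised = denoise_tv_chambolle(volume, weight=tv_weight)
        # Cache the TV energy of the full volume now, before re-masking
        # throws away the values outside of the mask.
        tv_energy = sum(np.abs(np.diff(denoised, axis=axis)).sum()
                        for axis in range(denoised.ndim))
        return denoised[mask], tv_energy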

Conclusion

We strongly hope to tune the algorithm by combining these tricks. The map constraint, though, needs to be properly formulated and mathematically validated to ensure the convergence of the TV denoising.

Stochastic Gradient Descent

This was the second thread on algorithm optimization. I have beta code for it but, as I had problems with the energy computation, I have put this feature aside until now (we cannot validate it without a proper energy computation).
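
For the record, a stochastic gradient step on the dictionary looks roughly like the sketch below (data-fit term only, with illustrative names; the real update also involves the regularization and is not this code).

    import numpy as np

    def sgd_dictionary_step(dictionary, data_batch, codes_batch, lr=0.01):
        # dictionary : (n_components, n_voxels) group maps
        # data_batch : (n_samples, n_voxels) mini-batch of masked images
        # codes_batch: (n_samples, n_components) corresponding loadings
        residual = codes_batch.dot(dictionary) - data_batch
        gradient = codes_batch.T.dot(residual) / len(data_batch)
        return dictionary - lr * gradient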

Correlation matrix

We talked about the possibility of extracting correlation matrices from the maps and using them to classify subjects (or do regression on some characteristic). I have written a script to compute correlation matrices but, for the moment, we do not have maps satisfactory enough to get into this. This will come next week!
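
One simple way to do it (not necessarily what my script does): project each subject's time series onto the maps and correlate the resulting per-map signals.

    import numpy as np

    def correlation_matrix_from_maps(subject_data, maps):
        # subject_data : (n_timepoints, n_voxels) masked fMRI data
        # maps         : (n_maps, n_voxels) group maps
        # Least-squares projection gives one time series per map.
        time_series, _, _, _ = np.linalg.lstsq(
            maps.T, subject_data.T, rcond=None)
        return np.corrcoef(time_series)  # (n_maps, n_maps)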
