# 2.3. SpaceNet: decoding with spatial structure for better maps

## 2.3.1. The SpaceNet decoder

SpaceNet implements spatial penalties that improve both decoding performance and the interpretability of the resulting decoder maps:

- penalty="tvl1": priors inspired from TV (Total Variation) [Michel et al. 2011] and TV-L1 [Baldassarre et al. 2012], [Gramfort et al. 2013],
- penalty="graph-net": the GraphNet prior [Grosenick et al. 2013].

These regularize classification and regression problems in brain imaging. The results are brain maps that are both sparse (i.e. regression coefficients are zero everywhere except at predictive voxels) and structured (blobby). The superiority of TV-L1 over methods without structured priors, such as the Lasso, SVM, ANOVA, or Ridge, in yielding more interpretable maps and improved prediction scores is now well established [Baldassarre et al. 2012], [Gramfort et al. 2013], [Grosenick et al. 2013].
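The combined penalty behind these options can be illustrated numerically. Below is a minimal NumPy sketch of a TV-L1 objective (the function name and the `l1_ratio` weighting are illustrative, not nilearn's internal API): a blobby coefficient map incurs a smaller penalty than a scattered one with the same L1 norm, which is why the prior favors structured solutions.

```python
import numpy as np

def tv_l1_penalty(w, l1_ratio=0.5):
    """Illustrative TV-L1 penalty of a 3D coefficient map `w`:
    l1_ratio * ||w||_1 + (1 - l1_ratio) * TV(w),
    with an isotropic total-variation term built from finite differences."""
    l1 = np.abs(w).sum()
    # finite differences along each spatial axis
    grads = [np.diff(w, axis=ax) for ax in range(w.ndim)]
    # pad each difference array back to w.shape so they can be stacked
    padded = []
    for ax, g in enumerate(grads):
        pad = [(0, 0)] * w.ndim
        pad[ax] = (0, 1)
        padded.append(np.pad(g, pad))
    tv = np.sqrt(np.sum(np.stack(padded) ** 2, axis=0)).sum()
    return l1_ratio * l1 + (1 - l1_ratio) * tv

# A compact blob and 8 isolated voxels have the same L1 norm (8.0),
# but the blob has a much smaller TV term.
blob = np.zeros((5, 5, 5))
blob[1:3, 1:3, 1:3] = 1.0
scattered = np.zeros((5, 5, 5))
scattered.ravel()[::16] = 1.0  # 8 isolated voxels
print(tv_l1_penalty(blob) < tv_l1_penalty(scattered))  # True
```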

Note that the TV-L1 prior leads to a difficult optimization problem, and so the solver can be slow to run. Under the hood, a few heuristics are used to make things faster. These include:

- Feature preprocessing, where an F-test is used to eliminate non-predictive voxels, thus reducing the size of the brain mask in a principled way.
- Continuation along the regularization path, where the solution of the optimization problem for a given value of the regularization parameter alpha is used as initialization for the next (smaller) value on the regularization grid.
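These two heuristics can be sketched with scikit-learn building blocks, used here purely as stand-ins for SpaceNet's internal solver (the synthetic data, the `percentile` value, and the alpha grid are all illustrative): an F-test screens out non-predictive features, and `warm_start=True` reuses the previous solution as initialization while sweeping a decreasing grid of alphas.

```python
import numpy as np
from sklearn.feature_selection import SelectPercentile, f_regression
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(50, 200)              # 50 "scans", 200 "voxels"
w_true = np.zeros(200)
w_true[:5] = 2.0                    # only 5 truly predictive voxels
y = X @ w_true + 0.1 * rng.randn(50)

# 1. Feature screening: keep only voxels that pass an F-test,
#    shrinking the problem in a principled way.
screener = SelectPercentile(f_regression, percentile=20).fit(X, y)
X_small = screener.transform(X)     # 20% of 200 voxels survive -> 40

# 2. Continuation: warm-start each solve from the previous alpha's
#    solution while moving down the regularization grid.
model = Lasso(warm_start=True)
for alpha in np.logspace(0, -3, 10):  # decreasing regularization values
    model.set_params(alpha=alpha)
    model.fit(X_small, y)             # reuses previous coef_ as init
print(model.coef_.shape)              # one weight per surviving voxel
```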

**Implementation:** See [Dohmatob et al. 2015 (PRNI)] and [Dohmatob et al. 2014 (PRNI)] for technical details regarding the implementation of SpaceNet.

## 2.3.2. Empirical comparisons

### 2.3.2.2. Comparison on Haxby study

**Code**

The complete script can be found here.

See also

- Age prediction on OASIS dataset with SpaceNet.
- The scikit-learn documentation has very detailed explanations on a large variety of estimators and machine-learning techniques; studying it is a good way to become better at decoding.