In the last post we discussed several methods that can be applied to this problem. Each came up with unique ideas, but for me the approach from the Neural Connectomics Workshop by Antonio Sutera, Arnaud Joly, Vincent François-Lavet, and colleagues is the best one. Since I am interested in their method, I have gone through the research paper in depth to find gaps where I can contribute.

First I will describe the idea behind their Neural Connectomics Workshop entry at a high level. The discussion covers the logic of the algorithm, what data they needed to build it, how the input data relate to each other, and what kind of process leads from those inputs to the connectome as output.
The basic idea is that they have proposed a simple but effective solution to the connectomics problem. Their algorithm involves two steps. First, they process the raw signals, which cannot be analyzed directly: they detect neural peak activities and reduce the noise caused by the tissue around the neurons using a sequence of filtering stages. Then, using partial correlation statistics, they estimate the degree of connection between neurons. In the filtering stage they remove noise mainly due to fluctuations, account for weak fluorescence decay, and diminish the importance of high global activity in the network. Along the way, the paper covers signal processing, connectome inference from partial correlation statistics, data and evaluation metrics, and evaluations.
If you have studied graph theory, the problem is easy to state. The vertices (V) of a graph G can be thought of as neurons, and the edges (E) are the connections between them. Formally, the connectome can be represented as a directed graph G = (V, E): for two nodes v(i) and v(j), an edge E(i, j) represents a direct synaptic connection from neuron i to neuron j.
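To make the graph picture concrete, here is a minimal sketch of a connectome as a directed graph. The neuron labels and the adjacency representation are my own illustration, not anything from the paper:

```python
# A connectome as a directed graph G = (V, E): vertices are neurons,
# edges are synaptic connections. Labels n1..n3 are made up for the example.
connectome = {
    "n1": {"n2", "n3"},   # neuron n1 synapses onto n2 and n3
    "n2": {"n3"},
    "n3": set(),
}

def connected(graph, i, j):
    """True if there is a directed edge E(i, j) from neuron i to neuron j."""
    return j in graph.get(i, set())

print(connected(connectome, "n1", "n2"))  # True
print(connected(connectome, "n2", "n1"))  # False: direction matters
```

Note that in this representation E(i, j) and E(j, i) are distinct edges, which is exactly the distinction their symmetric statistic cannot make.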
One drawback of their approach is that it cannot recognize the direction of the connections between neurons. Related work in this area leans heavily on deep convolutional neural networks and graph decomposition techniques, so this gap means a lot for future development, and the problem extension is worth continuing to work on.
They used three steps to remove the noise from the given data. The first part of their algorithm cleans the raw fluorescence data: the time-series are processed using standard signal processing filters.
The purpose of this processing is to:
1) remove noise mainly due to fluctuations
2) account for weak fluorescence decay
3) reduce the importance of high global activity in the network
Figure: Signal processing pipeline for extracting peaks from the raw fluorescence data. Source: http://www.montefiore.ulg.ac.be/~ernst/uploads/news/id179/connectomics.pdf
They used a number of filtering equations to address these three problems, and then applied partial correlation statistics.
The fluorescence concentrations of all neurons at each time point can be modeled as a set of random variables X = {X(1), ..., X(p)}. From these variables they compute partial correlation statistics and finally obtain the connections between neurons. Now I think you have a clear idea about their research.
They have used partial correlation in their analysis, but here lies the problem: the partial correlation statistic is symmetric (i.e. p(i,j) = p(j,i)). Therefore, their approach cannot identify the direction of the interactions between neurons.
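You can see the symmetry directly by computing partial correlations the standard way, from the inverse covariance (precision) matrix. This is a generic sketch of that formula, not the authors' implementation:

```python
import numpy as np

def partial_correlation(X):
    """Partial correlation matrix for samples X of shape (time, neurons).

    Standard formula from the precision matrix P = inverse covariance:
        p(i, j) = -P[i, j] / sqrt(P[i, i] * P[j, j])
    """
    precision = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(precision))
    p = -precision / np.outer(d, d)
    np.fill_diagonal(p, 1.0)
    return p

# Because the covariance matrix (and hence its inverse) is symmetric,
# p(i, j) == p(j, i): the statistic carries no edge-direction information.
```

Whatever filtering happens beforehand, any method that stops at this matrix can only output an undirected graph.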
We intend to build an algorithm that finds the direction of the connections. I will discuss this method in the next post.
References:
A. Sutera, A. Joly, V. François-Lavet, Z. A. Qiu, G. Louppe, D. Ernst, and P. Geurts, "Simple Connectome Inference from Partial Correlation Statistics in Calcium Imaging," 2014.