
Connectomics deep dive into the challenge project


In the last post we discussed several methods that can be applied to this problem. Each brings unique ideas, but for me the Neural Connectomics Workshop entry by Antonio Sutera, Arnaud Joly, Vincent François-Lavet and colleagues is the best one. Since I'm interested in their method, I went through the research paper in depth to find gaps where I could contribute.
First, I will describe the main idea of their Neural Connectomics Workshop entry. This discussion covers the logic of the algorithm, the data needed to build it, how the input data relate to each other, and the process that leads from those inputs to the connectome as output.
The basic idea is that they propose a simple but effective solution to the connectome inference problem. Their algorithm involves two steps. First, they process the raw fluorescence signals, which cannot be analyzed directly: they detect neural peak activity and reduce the noise using a series of filters. Then, using partial correlation statistics, they estimate the degree of connection between neurons. The filtering removes noise mainly due to fluctuations, accounts for weak fluorescence decay, and diminishes the importance of high global activity in the network. Along the way their paper covers signal processing, connectome inference from partial correlation statistics, data and evaluation metrics, and evaluation.
If you have studied graph theory, the problem is easy to understand. The vertices V of a graph G can be thought of as neurons and the edges E as the connections between them. Formally, the connectome can be represented as a directed graph G = (V, E). For two nodes v(i) and v(j), an edge (i, j) in E represents a direct synaptic connection from neuron i to neuron j.
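The directed-graph view above can be sketched in a few lines. This is a toy example, not part of the paper: the neuron count and edge list are invented for illustration, and the adjacency matrix is just one common way to store G = (V, E).

```python
import numpy as np

# Hypothetical toy connectome with 4 neurons (the vertices V).
# A directed edge (i, j) means neuron i has a synaptic
# connection onto neuron j.
n_neurons = 4
edges = [(0, 1), (1, 2), (2, 0), (3, 2)]

# Represent G = (V, E) as an adjacency matrix A,
# where A[i, j] = 1 iff (i, j) is in E.
A = np.zeros((n_neurons, n_neurons), dtype=int)
for i, j in edges:
    A[i, j] = 1

print(A)
# Because the graph is directed, A is generally not symmetric:
# edge (0, 1) exists but (1, 0) does not.
print(np.array_equal(A, A.T))
```

Note that the asymmetry of A is exactly the information (direction) that, as we will see, a symmetric statistic cannot recover.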
One drawback of their method is that it cannot recognize the direction of the connections between neurons. Future extensions could build on deep convolutional neural networks and graph decomposition techniques, which means there is plenty of room for further work on this problem.
They used three filtering steps to remove the noise from the given data. The first part of their algorithm cleans the raw fluorescence data: the time series are processed using standard signal-processing filters.
The purpose of this processing is to:
1) remove noise mainly due to fluctuations 
2) account for weak fluorescence decay 
3) reduce the importance of high global activity in the network
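The first two steps above can be sketched as a small pipeline: smooth away fast fluctuations, take a backward difference to compensate for the slow fluorescence decay, and threshold to keep only sharp rises as candidate peaks. This is a hedged illustration, not the paper's exact filters; the window size and threshold are invented, and the third step (down-weighting high global network activity) is omitted for brevity.

```python
import numpy as np

def extract_peaks(fluorescence, smooth_win=3, threshold=0.1):
    """Illustrative peak extraction from a raw fluorescence trace.

    1) a short moving average reduces high-frequency noise;
    2) a backward difference f(t) - f(t-1) counteracts the slow
       decay and highlights abrupt rises;
    3) a hard threshold keeps only strong positive jumps, taken
       as candidate neural peaks.
    """
    x = np.asarray(fluorescence, dtype=float)
    kernel = np.ones(smooth_win) / smooth_win
    smooth = np.convolve(x, kernel, mode="same")      # step 1
    diff = np.diff(smooth, prepend=smooth[0])         # step 2
    return (diff > threshold).astype(int)             # step 3

# Toy trace: flat baseline with two abrupt jumps followed by decay.
signal = np.concatenate([np.zeros(5), [1.0], np.full(4, 0.9),
                         [2.0], np.full(4, 1.8)])
peaks = extract_peaks(signal)
```

The output is a binary peak train per neuron; the paper then works with these cleaned series rather than the raw fluorescence.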
Figure: signal processing pipeline for extracting peaks from the raw fluorescence data (source: http://www.montefiore.ulg.ac.be/~ernst/uploads/news/id179/connectomics.pdf).

They use several equations to solve the above three problems, and then apply partial correlation statistics.
The fluorescence concentrations of all neurons at each time point can be modeled as a set of random variables X = (X1, ..., Xp). They use an equation over these variables to map out the dependencies and finally obtain the connections between neurons. By now you should have a clear idea of their research.
They have used partial correlation in their analysis, but here lies the problem: the partial correlation statistic is symmetric (i.e. p(i,j) = p(j,i)). Therefore, their approach cannot identify the direction of the interactions between neurons.
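The symmetry is easy to see concretely. A standard way to compute partial correlations (one the paper's approach is closely related to) is through the precision matrix Theta = S^{-1}, the inverse of the covariance matrix S, with p(i,j) = -Theta(i,j) / sqrt(Theta(i,i) * Theta(j,j)). The sketch below uses synthetic data, not the challenge dataset:

```python
import numpy as np

# Synthetic "fluorescence" data: 1000 time points, 4 neurons,
# with a dependency injected from neuron 0 onto neuron 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
X[:, 1] += 0.8 * X[:, 0]

# Partial correlations from the precision matrix (inverse covariance):
#   p(i,j) = -Theta[i,j] / sqrt(Theta[i,i] * Theta[j,j])
cov = np.cov(X, rowvar=False)
theta = np.linalg.inv(cov)
d = np.sqrt(np.diag(theta))
pcorr = -theta / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)

# The statistic detects the 0-1 link, but it is symmetric:
# p(0,1) == p(1,0), so it cannot tell which neuron drives which.
print(np.allclose(pcorr, pcorr.T))
```

Even though the data were generated with a directed influence (0 drives 1), the matrix pcorr is symmetric by construction, which is exactly why their scores cannot orient the edges.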
We intend to develop an algorithm that finds the direction of the connections. I will discuss this method in the next post.


References :

A. Sutera, A. Joly, V. François-Lavet, Z. A. Qiu, G. Louppe, D. Ernst, and P. Geurts, "Simple connectome inference from partial correlation statistics in calcium imaging," 2014.
