Rurouni Kenshin Week
Day 7 | Free Day
↳ What a Wonderful World…
This version is required listening. The verses don’t go in the same order, but mmmmmmm, super atmospheric.
A round of high-fives for the RK fandom for a fantastic week of weeping feelings! We survived it!! (ノ◕ヮ◕)ノ*:・゚✧
#sound #Arduino #mbed #make #Blender #Unity #GenerativeArt #ARM by prototechno @ http://ift.tt/1RDsP2i
Model sheds light on purpose of inhibitory neurons
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.
The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.
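As a rough illustration of the operation itself (not the researchers’ circuit), winner-take-all is often formalized as selecting the single most strongly driven output. A minimal Python sketch, with invented names:

```python
def winner_take_all(signals):
    """Idealized WTA: exactly one output, the most strongly driven, stays active."""
    winner = max(range(len(signals)), key=lambda i: signals[i])
    return [1 if i == winner else 0 for i in range(len(signals))]

print(winner_take_all([0.2, 0.9, 0.4]))  # -> [0, 1, 0]
```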
Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.
The researchers presented their results at the Conference on Innovations in Theoretical Computer Science (ITCS). Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.
For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.
“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”
Artificial neurology
In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.
An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if they exceed a particular value — the node “fires,” or sends signals along all of its outgoing connections.
Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
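A minimal sketch of the node behavior described in the last two paragraphs, with invented names; real networks vectorize this arithmetic, but the logic is the same:

```python
def node_fires(inputs, weights, threshold):
    """A node fires (outputs 1) when its weighted input sum exceeds its threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def layer_output(inputs, weight_rows, thresholds):
    """Every node in a layer sees the same inputs but applies its own weights."""
    return [node_fires(inputs, w, t) for w, t in zip(weight_rows, thresholds)]

# Two nodes reading three inputs; the second weight row includes a negative
# (inhibitory-style) weight that diminishes the incoming signal.
print(layer_output([1.0, 0.5, 0.2],
                   [[0.4, 0.3, 0.1], [0.2, -0.6, 0.5]],
                   [0.5, 0.0]))  # -> [1, 0]
```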
In artificial-intelligence applications, a neural network is “trained” on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.
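One concrete (and much simplified) instance of that training loop is the classic perceptron rule, sketched below; modern networks use gradient descent instead, but the adjust-until-correct idea is the same. All names and constants here are illustrative:

```python
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Adjust weights and threshold whenever the node's output is wrong."""
    n = len(samples[0])
    weights, threshold = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = 1 if sum(w * xi for w, xi in zip(weights, x)) > threshold else 0
            err = y - out                # +1: should have fired; -1: should not have
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            threshold -= lr * err        # firing too little? lower the bar
    return weights, threshold

# Learn logical AND from its four input/output examples.
w, t = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```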
Biological plausibility
Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory “neurons.” In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.
Many artificial-intelligence applications also use “feed-forward” networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco’s circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.
Finally, the MIT researchers’ network is probabilistic. In a typical artificial neural net, if a node’s input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers’ model. Again, this modification is crucial to enacting the winner-take-all strategy.
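A hedged sketch of that probabilistic rule: one standard way to model it (not necessarily the paper’s exact rule) is to pass the weighted sum through a sigmoid and fire with that probability, so stronger input raises the chance of firing without ever guaranteeing it:

```python
import math
import random

def fires_probabilistically(inputs, weights, bias=0.0):
    """Stronger weighted input -> higher probability of firing, never certainty."""
    potential = sum(x * w for x, w in zip(inputs, weights)) + bias
    p_fire = 1.0 / (1.0 + math.exp(-potential))   # squash into (0, 1)
    return random.random() < p_fire

# The same input can fire on one trial and stay silent on the next.
print([fires_probabilistically([1.0, 0.5], [0.8, 0.4]) for _ in range(5)])
```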
In the researchers’ model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. “We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons,” Parter explains. “We consider neurons to be a resource; we don’t want to spend too much of it.”
Inhibition’s virtues
Parter and her colleagues were able to show that with only one inhibitory neuron, it’s impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons — which the researchers call a convergence neuron — sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron — the stability neuron — sends a much weaker signal as long as any output neurons are firing.
The convergence neuron drives the circuit to select a single output neuron, at which point the convergence neuron itself stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it’s been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.
Without randomness, however, the circuit won’t converge to a single output neuron: Any setting of the inhibitory neurons’ weights will affect all the output neurons equally. “You need randomness to break the symmetry,” Parter explains.
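Putting the pieces together, here is a toy simulation of the dynamic just described: a convergence inhibitor that fires strongly when several outputs are active, a stability inhibitor that fires weakly while any output is active, self-feedback on each output, and randomness to break the tie. All the constants are invented for illustration; the paper proves its guarantees for a specific parameterization that this sketch does not reproduce:

```python
import math
import random

SELF = 3.0      # assumed self-feedback strength on each output
STRONG = 2.4    # assumed convergence-neuron inhibition
WEAK = 1.6      # assumed stability-neuron inhibition

def fire(potential):
    """Fire probabilistically: stronger net input means a higher chance."""
    return random.random() < 1.0 / (1.0 + math.exp(-6.0 * potential))

def step(outputs, drive):
    active = sum(outputs)
    # Convergence neuron fires only when several outputs are active;
    # stability neuron fires (more weakly) while any output is active.
    inhibition = (STRONG if active > 1 else 0.0) + (WEAK if active >= 1 else 0.0)
    return [
        1 if fire(d + SELF * out - inhibition) else 0
        for out, d in zip(outputs, drive)
    ]

outputs = [1, 1, 1, 1, 1]   # all five outputs start active
drive = [1.0] * 5           # identical drive, so only randomness breaks the tie
for _ in range(50):
    outputs = step(outputs, drive)
print(outputs)              # typically exactly one output remains active
```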
The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.
Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn’t improve efficiency. And just one stability neuron is already optimal.
But perhaps more intriguingly, the researchers showed that including excitatory neurons — neurons that stimulate, rather than inhibit, other neurons’ firing — as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn’t observe the distinction between convergence and stability neurons will be less efficient than one that does.
Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?
“This computation of winner-take-all is quite a broad and useful motif that we see throughout the brain,” says Saket Navlakha, an assistant professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. “In many sensory systems — for example, the olfactory system — it’s used to generate sparse codes.”
“There are many classes of inhibitory neurons that we’ve discovered, and a natural next step would be to see if some of these classes map on to the ones predicted in this study,” he adds.
“There’s a lot of work in neuroscience on computational models that take into account much more detail about not just inhibitory neurons but what proteins drive these neurons and so on,” says Ziv Bar-Joseph, a professor of computer science at Carnegie Mellon University. “Nancy is taking a global view of the network rather than looking at the specific details. In return she gets the ability to look at some larger-picture aspects. How many inhibitory neurons do you really need? Why do we have so few compared to the excitatory neurons? The unique aspect here is that this global-scale modeling gives you a much higher-level type of prediction.”
Earth and Moon from Saturn
via reddit
June 17, 2015: An international Earth-observing mission launched in 2011 to study the salinity of the ocean surface ended June 8, when an essential part of the power and attitude control system for the SAC-D spacecraft, which carries NASA’s...
http://www.kurzweilai.net/these-self-propelled-microscopic-carbon-capturing-motors-may-reduce-carbon-dioxide-levels-in-oceans
[OC] Current distribution of Basque, the last pre-Indo-European language in Western Europe [1060x950] (via thelandofmaps.tumblr.com)
Pretty cool
Just the other week, Baltimore Ravens offensive lineman John Urschel co-published a paper in the Journal of Computational Mathematics. The paper “A Cascadic Multigrid Algorithm for Computing the Fiedler Vector of Graph Laplacians” can be found on arXiv.
In an article for The Players’ Tribune, Urschel says, “I am a mathematical researcher in my spare time, continuing to do research in the areas of numerical linear algebra, multigrid methods, spectral graph theory and machine learning. I’m also an avid chess player, and I have aspirations of eventually being a titled player one day.”
This reminded me of a tumblr post by classidiot I saw the other day, which describes how it’s common to see mathematicians who are proficient in some non-mathematical hobby (playing an instrument, dancing, hiking, and so on), but often not the other way around. I think it’s really fantastic that John Urschel does mathematics on the side simply as something he truly enjoys.
Our solar system is huge, so let us break it down for you. Here are 5 things to know this week:
1. Make a Wish
The annual Leonids meteor shower is not known for a high number of “shooting stars” (expect as many as 15 an hour), but the ones it does produce are usually bright and colorful. They’re fast, too: Leonids travel at 71 km (44 miles) per second, making them some of the fastest meteors of any annual shower. This year the Leonids will peak around midnight on Nov. 17-18. The crescent moon will set before midnight, leaving dark skies for watching. Get more viewing tips HERE.
2. Back to the Beginning
Our Dawn mission to the dwarf planet Ceres is really a journey to the beginning of the solar system, since Ceres acts as a kind of time capsule from the formation of the asteroid belt. If you’ll be in the Washington DC area on Nov. 19, you can catch a presentation by Lucy McFadden, a co-investigator on the Dawn mission, who will discuss what we’ve discovered so far at this tiny but captivating world. Find out how to attend HERE.
3. Keep Your Eye on This Spot
The Juno spacecraft is on target for a July 2016 arrival at the giant planet Jupiter. But right now, your help is needed. Members of the Juno team are calling on all amateur astronomers to upload their telescopic images and data of Jupiter. This will help the team plan their observations. Join in HERE.
4. The Ice Volcanoes of Pluto
The more data from July’s Pluto flyby that comes down from the New Horizons spacecraft, the more interesting Pluto becomes. The latest finding? Possible ice volcanoes. Using images of Pluto’s surface to make 3-D topographic maps, scientists discovered that some mountains on Pluto, such as the informally named Piccard Mons and Wright Mons, had structures that suggested they could be cryovolcanoes that may have been active in the recent geological past.
5. Hidden Storm
Cameras aboard the Cassini spacecraft have been tracking an impressive cloud hovering over the south pole of Saturn’s moon Titan. But that cloud has turned out to be just the tip of the iceberg. A much more massive ice cloud system has been found lower in the stratosphere, peaking at an altitude of about 124 miles (200 kilometers).
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
Game of Thrones Filming Locations
Machine Learning, Big Data, Code, R, Python, Arduino, Electronics, robotics, Zen, Native spirituality, and a few other matters.