Spiking neural networks are different from "standard" neural networks.
First, an explanation of what neural networks are. In general, a neural network is a simulation of a central nervous system: the main idea is that by simulating various aspects of the brain, you should be able to replicate some of the brain's capabilities (e.g., pattern recognition, decision-making, learning, etc.).
There are, of course, many different aspects to how a brain works, but the most fundamental is the "neuron": a relatively simple type of cell that takes in a bunch of incoming electrochemical signals (from other neurons) and "decides" whether or not to send out its own such signal. It is in the accumulation and "interpretation" of these signals that the brain does all its work (further discussion along this path would get pretty philosophical).
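That "accumulate and decide" behavior can be sketched in a few lines. This is not Amygdala code, just an illustrative threshold neuron; the function name, weights, and threshold value are all hypothetical:

```python
# A minimal sketch of the neuron idea described above: sum the weighted
# incoming signals and "decide" to fire only if the total reaches a
# threshold. All names and values here are illustrative.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

# Three incoming signals: the first and third are active.
print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.5]))  # 0.6 + 0.5 = 1.1, so it fires
```

Real neurons are far more complicated than this, of course, but the threshold decision is the core of nearly every neuron model.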
Neural network simulations, then, are so called because they are networks of many simulated neurons (not 10^10 neurons like the human brain, but network sizes of 10^3 neurons are very achievable in present-day simulations). Whatever their other differences, all types of neural network simulation share this basic structure, as the name rather obviously implies.
However, there are many further details and nuances about how the brain's "neural network" functions, and neural network simulators necessarily have to pick and choose which of these details they will bother to model and which they will overlook. One such detail is the very nature of the "electrochemical signal" that one neuron sends to another. The majority of present-day neural network simulations
- employ some notion of "clock timing" (much like digital computer processors do) that specifies moments at which inter-neuron signals can be sent, and
- assume that the "amplitude" of the signal sent by any neuron (which has "decided" to send its signal) is constant ("on" or "off") over the full period between clock timings.
This approach is, in fact, very different from how the human brain's neurons send their signals. The signal sent out by a real neuron is a brief "spike": it lasts about one millisecond, and it remains at its peak amplitude (of about 0.1 volts, net) for only a very brief moment. (See the Amygdala logo for a rough illustration; if it were to scale, the horizontal red line would span about 16 milliseconds, and you can see how briefly the neuron's output signal is actually "at peak".)
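One common way to capture this spiking behavior is the leaky integrate-and-fire model: the neuron's membrane potential leaks toward a resting value, accumulates input, and emits a brief spike (then resets) when it crosses threshold. The sketch below is a generic illustration of that model, not Amygdala's implementation; every constant in it is a made-up, illustrative value:

```python
# A sketch of a leaky integrate-and-fire (LIF) neuron. The membrane
# potential decays toward rest, integrates the input current, and a
# spike is recorded (followed by a reset) whenever it crosses the
# threshold. All parameter values are illustrative.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the list of spike times (in ms)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus the input drive.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_threshold:           # threshold crossed: emit a spike
            spikes.append(step * dt)   # record the spike time
            v = v_reset                # and reset the potential
    return spikes

# A constant input drive produces a regular spike train.
spike_times = simulate_lif([0.3] * 500)
print(spike_times)
```

Note how the output is a list of spike *times*, not a list of amplitudes: in a spiking model, when a neuron fires carries the information.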
Spiking neural networks (SNNs), then, are neural network simulations that attempt to use a more accurate "spiking" model of neural output signals. SNNs are also called "pulsed neural networks"; the two terms are interchangeable. Amygdala is, among other things, a tool for simulating SNNs.
Because of this different model of output signals, an SNN is much better suited for applications where the timing of input signals carries important information (e.g., speech recognition and other signal-processing applications). SNNs can also be applied to the same problems as non-spiking neural networks (as the human brain clearly proves).
Note that non-spiking neural network models are certainly not "invalid" (all models have to make some simplifications), nor are they useless (the majority of useful NN work to date uses non-spiking models). But it can be shown mathematically that non-spiking neural networks have considerably less innate processing power than similarly-sized spiking neural networks [1].
Because spiking neurons introduce a time dimension and enough non-linearity, a network of them with recurrent connectivity can form a dynamical system. See the demo page to get an impression of what happens.
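The recurrent dynamics can be sketched with a handful of leaky neurons wired back into each other: spikes from one time step feed back as input at the next, so activity can evolve on its own after the external drive stops. This is a toy illustration, not the demo itself; the network size, weights, and constants are all hypothetical:

```python
# A toy recurrent spiking network: each neuron leaks, receives a brief
# external kick plus recurrent input from last step's spikes, and fires
# when it crosses threshold. All sizes and constants are illustrative.
import random

random.seed(0)
N = 5
# Random recurrent weight matrix (from neuron j to neuron i).
w = [[random.uniform(0.0, 1.5) for _ in range(N)] for _ in range(N)]

v = [0.0] * N          # membrane potentials
spiked = [False] * N   # spikes emitted on the previous time step
activity = []          # number of neurons spiking at each step

for step in range(50):
    external = 0.4 if step < 5 else 0.0   # brief external kick, then silence
    new_spiked = []
    for i in range(N):
        recurrent = sum(w[i][j] for j in range(N) if spiked[j])
        v[i] = 0.9 * v[i] + external + recurrent   # leak + inputs
        if v[i] >= 1.0:        # threshold crossed: spike and reset
            new_spiked.append(True)
            v[i] = 0.0
        else:
            new_spiked.append(False)
    spiked = new_spiked
    activity.append(sum(spiked))

print(activity)
```

Depending on the weights, the activity after the kick can die out, settle into a repeating pattern, or wander irregularly; that spike-driven feedback loop is exactly the dynamical-system behavior described above.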
References
[1] W. Maass. Computation with spiking neurons. In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, 2nd edition. MIT Press, Cambridge, 2001.