Neuromorphic computing is an emerging field of research. In machine learning, spiking neural networks (SNNs) are now widely used to exploit the low-power promise of these brain-inspired systems, saving up to an order of magnitude of energy during inference. Recently, advanced training methods for SNNs have been developed to bridge the performance gap with deep learning, enabling real-life applications at the edge such as continuous heart-rate monitoring in smartwatches or on-sensor detection of dangerous sounds. In particular, the Liquid State Machine (LSM), a recurrent reservoir-based SNN, has come forward as a simple yet inherently powerful computational framework for spatio-temporal data processing. The spike-based processing of time series in a reservoir allows the LSM to extract features in a unique way. Many research questions remain open, such as which type of learning best suits the neuromorphic reservoir and how multiple reservoirs can be connected optimally so that the most important features are passed through. In this proposal we introduce new spike-based learning rules that will allow us to derive relevant features inside the LSM, connect multiple reservoirs optimally by focusing on the important features, and consequently boost the performance of the LSM at low power consumption.
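To make the LSM idea concrete, the following is a minimal sketch (not the proposal's actual method; all weights, parameters, and the toy task are illustrative assumptions): a fixed random recurrent reservoir of leaky integrate-and-fire (LIF) neurons projects an input spike train into a high-dimensional spatio-temporal state, and only a simple linear readout on that state is trained.

```python
# Minimal Liquid State Machine sketch. The reservoir weights are fixed
# and random; only the linear readout is trained (ridge regression).
# All parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, T = 8, 100, 200          # input channels, reservoir neurons, time steps
TAU, V_TH = 20.0, 1.0                 # membrane time constant, spike threshold

# Fixed (untrained) random weights: input -> reservoir, reservoir -> reservoir.
W_in = rng.normal(0.0, 0.8, (N_RES, N_IN))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N_RES), (N_RES, N_RES))

def run_reservoir(in_spikes):
    """Simulate LIF dynamics; return per-neuron spike counts (the 'liquid state')."""
    v = np.zeros(N_RES)               # membrane potentials
    s = np.zeros(N_RES)               # reservoir spikes from the previous step
    counts = np.zeros(N_RES)
    decay = np.exp(-1.0 / TAU)        # leak per time step
    for t in range(T):
        v = decay * v + W_in @ in_spikes[t] + W_rec @ s
        s = (v >= V_TH).astype(float) # threshold crossing -> spike
        v[s == 1.0] = 0.0             # reset neurons that fired
        counts += s
    return counts

# Toy task: classify Poisson spike trains by firing rate (low vs high).
def make_trial(rate):
    return (rng.random((T, N_IN)) < rate).astype(float)

X = np.array([run_reservoir(make_trial(r)) for r in [0.05] * 20 + [0.3] * 20])
y = np.array([0] * 20 + [1] * 20)

# Train only the readout: ridge regression on the reservoir states.
A = np.hstack([X, np.ones((len(X), 1))])  # add bias column
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y)
pred = (A @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

The division of labour shown here is the core appeal of reservoir computing: the recurrent spiking dynamics do the (fixed, cheap) temporal feature extraction, while learning is confined to a lightweight readout.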