
Resistive memory-based zero-shot liquid state machine for multimodal event data learning

A preprint version of the article is available at arXiv.

Abstract

The human brain is a complex spiking neural network (SNN) that can learn multimodal signals in a zero-shot manner by generalizing existing knowledge. Remarkably, it maintains minimal power consumption through event-based signal propagation. However, replicating the human brain in neuromorphic hardware presents both hardware and software challenges. Hardware limitations, such as the slowdown of Moore’s law and the von Neumann bottleneck, hinder the efficiency of digital computers. In addition, SNNs are notoriously difficult to train in software. Here we propose a hardware–software co-design on a 40 nm 256 kB in-memory computing macro that physically integrates a fixed and random liquid state machine (LSM) SNN encoder with trainable artificial neural network (ANN) projections. We showcase zero-shot learning of multimodal events on the N-MNIST and N-TIDIGITS datasets, including visual and audio data association, as well as neural and visual data alignment for brain–machine interfaces. Our co-design achieves classification accuracy comparable to fully optimized software models, resulting in a 152.83-fold and a 393.07-fold reduction in training cost compared with state-of-the-art spiking recurrent neural network-based contrastive learning and prototypical networks, respectively, and a 23.34-fold and a 160-fold improvement in energy efficiency compared with cutting-edge digital hardware. These proof-of-principle prototypes demonstrate zero-shot multimodal event learning for emerging efficient and compact neuromorphic hardware.
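The key architectural split described above — a fixed, random spiking "liquid" encoder whose output feeds a small trainable projection — can be illustrated with a minimal software sketch. This is not the authors' implementation: all sizes, constants and weight distributions below are hypothetical, and the projection is left untrained for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_OUT = 64, 256, 10   # hypothetical input/reservoir/embedding sizes
TAU, V_TH = 20.0, 1.0              # hypothetical LIF time constant and threshold

# The "liquid": fixed, random input and sparse recurrent weights, never trained.
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES)) * (rng.random((N_RES, N_RES)) < 0.1)

def lsm_encode(events):
    """Drive leaky integrate-and-fire neurons with a binary event stream of
    shape (T, N_IN); return per-neuron spike counts as the feature vector."""
    decay = np.exp(-1.0 / TAU)
    v = np.zeros(N_RES)            # membrane potentials
    s = np.zeros(N_RES)            # spikes emitted at the previous step
    counts = np.zeros(N_RES)
    for x in events:
        v = decay * v + W_in @ x + W_res @ s
        s = (v >= V_TH).astype(float)
        v[s > 0] = 0.0             # reset neurons that fired
        counts += s
    return counts

# Only this linear projection would be trained (for example, to align the
# embeddings of different modalities for zero-shot transfer); random here.
W_proj = rng.normal(0.0, 0.1, (N_OUT, N_RES))

def project(events):
    return W_proj @ lsm_encode(events)

# A random 50-step event stream standing in for N-MNIST/N-TIDIGITS events.
stream = (rng.random((50, N_IN)) < 0.2).astype(float)
z = project(stream)
print(z.shape)  # prints (10,)
```

Because the reservoir weights stay fixed, only the final projection incurs training cost, which is the source of the training-cost reduction the abstract reports; in the paper the fixed random weights are additionally realized physically by resistive memory arrays.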


Fig. 1: Hardware–software co-design using the hybrid analog–digital system for a combined LSM–ANN model.
Fig. 2: Event-based image classification with the N-MNIST dataset.
Fig. 3: Event-based audio classification with the N-TIDIGITS dataset.
Fig. 4: Zero-shot transfer learning of multimodal event data.
Fig. 5: Zero-shot transfer learning of brain–machine interface.

Data availability

The N-MNIST dataset (ref. 36), the N-TIDIGITS dataset (ref. 37), the E-MNIST dataset (ref. 45) and the neural recordings dataset (ref. 44) are publicly available. Source data for Figs. 1–5 are provided with this paper and are available at https://doi.org/10.25442/hku.27873162 (ref. 54).

Code availability

The code that supports the plots within this paper is available via GitHub at https://github.com/MrLinNing/MemristorLSM and https://doi.org/10.25442/hku.27873663 (ref. 55).

References

  1. Liu, K. et al. An optoelectronic synapse based on α-In2Se3 with controllable temporal dynamics for multimode and multiscale reservoir computing. Nat. Electron. 5, 761–773 (2022).

  2. Bartolozzi, C., Indiveri, G. & Donati, E. Embodied neuromorphic intelligence. Nat. Commun. 13, 1024 (2022).

  3. Liu, S.-C., van Schaik, A., Minch, B. A. & Delbruck, T. Asynchronous binaural spatial audition sensor with 2 × 64 × 4 channel output. IEEE Trans. Biomed. Circuits Syst. 8, 453–464 (2014).

  4. Jiménez-Fernández, A. et al. A binaural neuromorphic auditory sensor for FPGA: a spike signal processing approach. IEEE Trans. Neural Netw. Learn. Syst. 28, 804–818 (2016).

  5. Choo, K. D. et al. Energy-efficient motion-triggered IoT CMOS image sensor with capacitor array-assisted charge-injection SAR ADC. IEEE J. Solid-State Circuits 54, 2921–2931 (2019).

  6. Finateu, T. et al. 5.10 A 1280 × 720 back-illuminated stacked temporal contrast event-based vision sensor with 4.86 μm pixels, 1.066 GEPS readout, programmable event-rate controller and compressive data-formatting pipeline. In 2020 IEEE International Solid-State Circuits Conference 112–114 (IEEE, 2020).

  7. Gallego, G. et al. Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 44, 154–180 (2020).

  8. Yang, M., Liu, S.-C. & Delbruck, T. A dynamic vision sensor with 1% temporal contrast sensitivity and in-pixel asynchronous delta modulator for event encoding. IEEE J. Solid-State Circuits 50, 2149–2160 (2015).

  9. Liu, S.-C. & Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20, 288–295 (2010).

  10. Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).

  11. Ielmini, D. & Wong, H.-S. P. In-memory computing with resistive switching devices. Nat. Electron. 1, 333–343 (2018).

  12. Yu, S. Neuro-inspired computing with emerging nonvolatile memorys. Proc. IEEE 106, 260–285 (2018).

  13. Chen, X., Han, Y. & Wang, Y. Communication lower bound in convolution accelerators. In 2020 IEEE International Symposium on High Performance Computer Architecture 529–541 (IEEE, 2020).

  14. Rao, M. et al. Thousands of conductance levels in memristors integrated on CMOS. Nature 615, 823–829 (2023).

  15. Neftci, E. O., Mostafa, H. & Zenke, F. Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 36, 51–63 (2019).

  16. Rueckauer, B., Lungu, I.-A., Hu, Y., Pfeiffer, M. & Liu, S.-C. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci. 11, 682 (2017).

  17. Wu, Y. et al. Direct training for spiking neural networks: faster, larger, better. In Proc. AAAI Conference on Artificial Intelligence 1311–1318 (AAAI, 2019).

  18. Bi, G.-Q. & Poo, M.-M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998).

  19. Morrison, A., Diesmann, M. & Gerstner, W. Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern. 98, 459–478 (2008).

  20. Brown, T. et al. Language models are few-shot learners. Adv. Neural Inform. Process. Syst. 33, 1877–1901 (2020).

  21. Dosovitskiy, A. et al. An image is worth 16 × 16 words: transformers for image recognition at scale. In International Conference on Learning Representations (ICLR, 2021).

  22. Karunaratne, G. et al. Robust high-dimensional memory-augmented neural networks. Nat. Commun. 12, 2468 (2021).

  23. Zhong, Y. et al. Dynamic memristor-based reservoir computing for high-efficiency temporal signal processing. Nat. Commun. 12, 408 (2021).

  24. Milano, G. et al. In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks. Nat. Mater. 21, 195–202 (2022).

  25. Dalgaty, T. et al. In situ learning using intrinsic memristor variability via Markov Chain Monte Carlo sampling. Nat. Electron. 4, 151–161 (2021).

  26. Lin, N. et al. In-memory and in-sensor reservoir computing with memristive devices. APL Mach. Learn. 2, 010901 (2024).

  27. Maass, W., Natschläger, T. & Markram, H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002).

  28. Wu, T. F. et al. Brain-inspired computing exploiting carbon nanotube FETs and resistive RAM: hyperdimensional computing case study. In 2018 IEEE International Solid-State Circuits Conference 492–494 (IEEE, 2018).

  29. Radford, A. et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning 8748–8763 (PMLR, 2021).

  30. Li, Y., Geller, T., Kim, Y. & Panda, P. SEENN: towards temporal spiking early exit neural networks. Adv. Neural Inform. Process. Syst. 36, 63327–63342 (2024).

  31. Li, Y., Moitra, A., Geller, T. & Panda, P. Input-aware dynamic timestep spiking neural networks for efficient in-memory computing. In 2023 60th ACM/IEEE Design Automation Conference 1–6 (IEEE, 2023).

  32. Moitra, A., Bhattacharjee, A., Kim, Y. & Panda, P. XPert: peripheral circuit & neural architecture co-search for area and energy-efficient XBar-based computing. In 2023 60th ACM/IEEE Design Automation Conference 1–6 (IEEE, 2023).

  33. Datta, G., Kundu, S., Jaiswal, A. R. & Beerel, P. A. ACE-SNN: algorithm-hardware co-design of energy-efficient & low-latency deep spiking neural networks for 3D image recognition. Front. Neurosci. 16, 815258 (2022).

  34. Apolinario, M. P., Kosta, A. K., Saxena, U. & Roy, K. Hardware/software co-design with ADC-less in-memory computing hardware for spiking neural networks. IEEE Trans. Emerg. Topics Comput. 35–47 (2023).

  35. Shi, Y. et al. Adaptive quantization as a device-algorithm co-design approach to improve the performance of in-memory unsupervised learning with SNNs. IEEE Trans. Electron Devices 66, 1722–1728 (2019).

  36. Orchard, G., Jayawant, A., Cohen, G. K. & Thakor, N. Converting static image datasets to spiking neuromorphic datasets using saccades. Front. Neurosci. 9, 437 (2015).

  37. Anumula, J., Neil, D., Delbruck, T. & Liu, S.-C. Feature representations for neuromorphic audio spike streams. Front. Neurosci. 12, 23 (2018).

  38. Snell, J., Swersky, K. & Zemel, R. Prototypical networks for few-shot learning. Adv. Neural Inform. Process. Syst. 30, 4080–4090 (2017).

  39. Abbott, L. F. Lapicque’s introduction of the integrate-and-fire model neuron (1907). Brain Res. Bull. 50, 303–304 (1999).

  40. Jia, C. et al. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning 4904–4916 (PMLR, 2021).

  41. Sutskever, I. Training Recurrent Neural Networks (Univ. Toronto, 2013).

  42. Simeral, J., Kim, S.-P., Black, M., Donoghue, J. & Hochberg, L. Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array. J. Neural Eng. 8, 025027 (2011).

  43. Bullard, A. J., Hutchison, B. C., Lee, J., Chestek, C. A. & Patil, P. G. Estimating risk for future intracranial, fully implanted, modular neuroprosthetic systems: a systematic review of hardware complications in clinical deep brain stimulation and experimental human intracortical arrays. Neuromodulation Technol. Neural Interface 23, 411–426 (2020).

  44. Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M. & Shenoy, K. V. High-performance brain-to-text communication via handwriting. Nature 593, 249–254 (2021).

  45. Cohen, G., Afshar, S., Tapson, J. & van Schaik, A. EMNIST: extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks 2921–2926 (IEEE, 2017).

  46. Woźniak, S., Pantazi, A., Bohnstingl, T. & Eleftheriou, E. Deep learning incorporating biologically inspired neural dynamics and in-memory computing. Nat. Mach. Intell. 2, 325–336 (2020).

  47. Shen, S. et al. Reservoir transformers. In Proc. 59th Annual Meeting of the Association for Computational Linguistics and 11th International Joint Conference on Natural Language Processing (eds Zong, C. et al.) 4294–4309 (Association for Computational Linguistics, 2021).

  48. Yan, B. et al. RRAM-based spiking nonvolatile computing-in-memory processing engine with precision-configurable in situ nonlinear activation. In 2019 Symposium on VLSI Technology T86–T87 (IEEE, 2019).

  49. Jouppi, N. et al. TPU v4: an optically reconfigurable supercomputer for machine learning with hardware support for embeddings. In Proc. 50th Annual International Symposium on Computer Architecture 82, 1–14 (ACM, 2023).

  50. Soures, N. & Kudithipudi, D. Deep liquid state machines with neural plasticity for video activity recognition. Front. Neurosci. 13, 686 (2019).

  51. Zhang, Y., Li, P., Jin, Y. & Choe, Y. A digital liquid state machine with biologically inspired learning and its application to speech recognition. IEEE Trans. Neural Netw. Learn. Syst. 26, 2635–2649 (2015).

  52. Ponghiran, W., Srinivasan, G. & Roy, K. Reinforcement learning with low-complexity liquid state machines. Front. Neurosci. 13, 883 (2019).

  53. de Azambuja, R., Klein, F. B., Adams, S. V., Stoelen, M. F. & Cangelosi, A. Short-term plasticity in a liquid state machine biomimetic robot arm controller. In 2017 International Joint Conference on Neural Networks 3399–3408 (IEEE, 2017).

  54. Lin, N. Source data for 5 main figures in resistive memory-based zero-shot liquid state machine for multimodal event data learning. HKU Library https://doi.org/10.25442/hku.27873162 (2024).

  55. Lin, N. Source code for resistive memory-based zero-shot liquid state machine for multimodal event data learning. HKU Library https://doi.org/10.25442/hku.27873663 (2024).

Acknowledgements

This research is supported by the National Key R&D Program of China (grant no. 2022YFB3608300), the National Natural Science Foundation of China (grant nos. 62122004 and 62374181), the Strategic Priority Research Program of the Chinese Academy of Sciences (grant no. XDB44000000), the Beijing Natural Science Foundation (grant no. Z210006) and the Hong Kong Research Grants Council (grant nos. 27206321, 17205922 and 17212923). This research is also partially supported by ACCESS – AI Chip Center for Emerging Smart Systems, sponsored by the Innovation and Technology Fund (ITF), Hong Kong SAR.

Author information

Authors and Affiliations

Authors

Contributions

N.L., W.Z. and D.S. conceived the work. N.L., Shaocong Wang, Y.L., B.W., S.S., Y.H. and Songqi Wang contributed to the design and development of the models, software and hardware experiments. N.L., Y.L., W.Z., Y.Y., Y.Z., Xinyuan Zhang, K.W., Songqi Wang, X.C. and X.Q. interpreted, analyzed and presented the experimental results. N.L., W.Z. and D.S. wrote the paper. All authors discussed the results and implications and commented on the paper at all stages.

Corresponding authors

Correspondence to Zhongrui Wang or Dashan Shang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Computational Science thanks Qinyu Chen, Chang Gao, Quanying Liu, Abbas Rahimi and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Jie Pan, in collaboration with the Nature Computational Science team. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–40, Tables 1–31 and Notes 3.1–3.11.

Peer Review File

Source data

Source Data Fig. 1

Statistical source data.

Source Data Fig. 2

Statistical source data.

Source Data Fig. 3

Statistical source data.

Source Data Fig. 4

Statistical source data.

Source Data Fig. 5

Statistical source data.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lin, N., Wang, S., Li, Y. et al. Resistive memory-based zero-shot liquid state machine for multimodal event data learning. Nat Comput Sci 5, 37–47 (2025). https://doi.org/10.1038/s43588-024-00751-z

