A modern, lightweight library for Neuromorphic Audio Processing using Spiking Neural Networks
AcouSpike is a PyTorch-based framework for neuromorphic audio processing with Spiking Neural Networks (SNNs). It provides a flexible, efficient way to build, train, and deploy SNN models for a range of audio tasks.
Flexible Architecture
- Build custom SNN models using PyTorch
- Support for various neuron types and synaptic connections
- Modular design for easy extension
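AcouSpike's own neuron classes are not shown in this README, so as a rough sketch of the kind of neuron dynamics such a framework builds on, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python. The class name, parameters, and reset behavior are illustrative assumptions, not AcouSpike's actual API:

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron (illustrative, not AcouSpike's API)."""

    def __init__(self, decay=0.9, threshold=1.0):
        self.decay = decay          # membrane leak factor applied each time step
        self.threshold = threshold  # fire a spike when the potential reaches this
        self.potential = 0.0

    def step(self, current):
        # Leak the membrane potential, then integrate the input current.
        self.potential = self.decay * self.potential + current
        if self.potential >= self.threshold:
            self.potential = 0.0    # hard reset after a spike
            return 1
        return 0


neuron = LIFNeuron()
# Drive the neuron with a constant input current; it fires periodically.
spikes = [neuron.step(0.4) for _ in range(10)]
print(spikes)  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

In a real SNN layer this update would run over batched tensors (e.g. as a `torch.nn.Module`), with a surrogate gradient replacing the non-differentiable threshold during training.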
Audio Processing
- Built-in support for common audio tasks
- Efficient spike encoding for audio signals
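To make "spike encoding" concrete: one common scheme for converting a continuous audio signal into spikes is delta modulation, which emits a +1 or -1 spike whenever the signal moves more than a threshold away from the last encoded level. The function below is a stdlib-only sketch of that idea; the name and signature are illustrative, not AcouSpike's encoder API:

```python
def delta_encode(samples, threshold=0.1):
    """Delta-modulation spike encoding (illustrative sketch).

    Emits +1 when the signal rises by at least `threshold` above the
    tracked level, -1 when it falls by at least `threshold`, else 0.
    """
    spikes = []
    level = samples[0]  # last encoded signal level
    for s in samples[1:]:
        if s - level >= threshold:
            spikes.append(1)
            level += threshold
        elif level - s >= threshold:
            spikes.append(-1)
            level -= threshold
        else:
            spikes.append(0)
    return spikes


signal = [0.0, 0.04, 0.3, 0.55, 0.52, 0.15]
print(delta_encode(signal))  # → [0, 1, 1, 1, -1]
```

Encodings like this turn a dense waveform into a sparse spike train, which is what lets an SNN process audio event-by-event instead of sample-by-sample.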
Developer Friendly
- Minimal dependencies
- Comprehensive documentation
- Full test coverage
- Easy-to-follow examples
git clone https://github.com/ZhangShimin1/AcouSpike
cd AcouSpike
pip install -i https://test.pypi.org/simple/ acouspike==0.0.0.1
Ready-to-use examples are available in the recipes directory:
- Speaker Identification
cd recipes/speaker_identification
bash run.sh
- Keyword Spotting
cd recipes/keyword_spotting
bash run.sh
Performance benchmarks and comparisons are available in our benchmarks page.
We welcome contributions! Please see our Contributing Guidelines for details.
This project is licensed under the MIT License - see the LICENSE file for details.
- Issue Tracker: GitHub Issues
- Email: [email protected]
- List of contributors
- Supporting organizations
- Related projects and inspirations