NanoFed: Simplifying the development of privacy-preserving distributed ML models.
Federated Learning (FL) is a distributed machine learning paradigm that trains a global model across multiple clients (devices or organizations) without sharing their data. Instead, clients send model updates to a central server for aggregation.
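Most FL systems, NanoFed included, build on federated averaging (FedAvg): after each round of local training, the server replaces the global weights with a data-weighted average of the client updates. With $K$ clients, where client $k$ trains on $n_k$ samples and returns local weights $w_k^{t+1}$:

$$
w^{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_k^{t+1}, \qquad n = \sum_{k=1}^{K} n_k
$$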
| 🌟 Feature | Description |
|---|---|
| 🔒 Privacy Preservation | Data stays securely on devices. |
| 🚀 Resource Efficiency | Decentralized training reduces transfer overhead. |
| 🌐 Scalable AI | Enables collaborative training environments. |
- Python 3.10+
- Dependencies installed automatically
Install the latest release from PyPI:

```bash
pip install nanofed
```

Or install from source:

```bash
git clone https://github.com/camille-004/nanofed.git
cd nanofed
make install
```
📚 Learn how to use NanoFed in our guides and API references. 👉 Read the Docs
- 🔒 Privacy-First: Keep data on devices while training.
- 🚀 Easy-to-Use: Simple APIs with seamless PyTorch integration.
- 🔧 Flexible: Customizable aggregation strategies and extensible architecture.
- 💻 Production Ready: Robust error handling and logging.
| Feature | Description |
|---|---|
| 🔒 Privacy-First | Data never leaves devices. |
| 🚀 Intuitive API | Built for developers, with seamless PyTorch support. |
| 🔧 Flexible Aggregation | Supports custom strategies. |
| 💻 Production Ready | Async communication and robust error handling. |
Train a model using federated learning in just a few lines of code. Follow the tutorial notebook here.
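If it helps to see the moving parts before opening the notebook, the sketch below walks through one full federated run in plain PyTorch, with no NanoFed imports, so every name is standard. It illustrates the workflow NanoFed manages for you; it is not the library's API.

```python
# Plain-PyTorch sketch of federated training: each client trains a copy
# of the global model on its private data, then the server aggregates
# the results with FedAvg. Illustration only, not NanoFed's API.
import copy

import torch
import torch.nn as nn

def local_train(model: nn.Module, data: torch.Tensor, target: torch.Tensor,
                epochs: int = 1, lr: float = 0.1) -> dict:
    """Train a copy of the global model on one client's local data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(local(data), target)
        loss.backward()
        opt.step()
    return local.state_dict()

def fedavg(states: list[dict], weights: list[float]) -> dict:
    """Data-weighted average of client state dicts (FedAvg)."""
    total = sum(weights)
    avg = {k: torch.zeros_like(v) for k, v in states[0].items()}
    for state, w in zip(states, weights):
        for k, v in state.items():
            avg[k] += v * (w / total)
    return avg

global_model = nn.Linear(4, 1)

# Three simulated clients, each with a private dataset that never
# leaves the "device" (here, just local variables).
clients = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(3)]

for round_idx in range(5):
    states = [local_train(global_model, x, y) for x, y in clients]
    sizes = [float(len(x)) for x, _ in clients]
    global_model.load_state_dict(fedavg(states, sizes))
    print(f"round {round_idx}: aggregated {len(states)} client updates")
```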
Need assistance? Here are some helpful resources:
| Resource | Description |
|---|---|
| 📚 Documentation | Learn how to use NanoFed effectively. |
| 🐛 Issue Tracker | Report bugs or request features. |
| 🛠️ Source Code | Browse the NanoFed repository on GitHub. |
NanoFed is licensed under the GNU General Public License (GPL-3.0). See the LICENSE file for details.
Contributions are welcome! We follow the Conventional Commits specification. See our contribution guidelines for detailed instructions.
Example commit message:

```
feat(client): add retry mechanism
```
Core Features for V1
- Basic client-server architecture with HTTP communication
- Simple global model management
- Basic FedAvg implementation
- Local training support
- Support for PyTorch models
- Synchronous training (all clients must complete before aggregation; see the sketch after this list)
- Basic error handling and logging
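To make the synchronous round in the list above concrete, here is a minimal, framework-free sketch of a server loop that waits for every client before aggregating. In-process `asyncio` tasks stand in for real HTTP clients; this is an illustration under those assumptions, not NanoFed's implementation.

```python
# Sketch of a synchronous round: the server gathers updates from ALL
# clients before aggregating. asyncio tasks stand in for HTTP clients;
# this is an illustration, not NanoFed's code.
import asyncio
import random

async def client_update(client_id: int, global_state: float) -> float:
    """Pretend local training: perturb the global state a little."""
    await asyncio.sleep(random.random())  # simulate network/compute delay
    return global_state + random.uniform(-1.0, 1.0)

async def run_round(global_state: float, num_clients: int) -> float:
    # gather() returns only after every client finishes -- the barrier
    # that makes this round synchronous.
    updates = await asyncio.gather(
        *(client_update(i, global_state) for i in range(num_clients))
    )
    return sum(updates) / len(updates)  # unweighted FedAvg for brevity

async def main() -> None:
    state = 0.0
    for r in range(3):
        state = await run_round(state, num_clients=4)
        print(f"round {r}: aggregated state = {state:.3f}")

asyncio.run(main())
```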
Planned Features
- Advanced privacy features: Differential Privacy (DP), Secure Multiparty Computation (MPC), Homomorphic Encryption (HE) (see the sketch after this list)
- Asynchronous updates for faster and more flexible training
- Non-IID data handling for diverse client datasets
- Custom aggregation strategies for specific use cases
- gRPC implementation for high-performance communication
- Model compression techniques to reduce bandwidth usage
- Fault tolerance mechanisms for unreliable clients or servers
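None of these features are shipped yet. As a taste of the direction, the sketch below shows the standard Gaussian mechanism commonly used for differential privacy on model updates: clip the update's L2 norm, then add noise calibrated to the clip bound. The function name and parameter values are illustrative, not a NanoFed API.

```python
# Generic Gaussian-mechanism sketch for DP on a model update: clip the
# update's L2 norm to `clip`, then add Gaussian noise with standard
# deviation clip * noise_multiplier. Parameter values are illustrative.
import torch

def privatize_update(update: dict, clip: float = 1.0,
                     noise_multiplier: float = 1.1) -> dict:
    flat = torch.cat([v.flatten() for v in update.values()])
    scale = min(1.0, clip / (float(flat.norm()) + 1e-12))  # L2 clipping
    return {
        k: v * scale + torch.randn_like(v) * clip * noise_multiplier
        for k, v in update.items()
    }

update = {"weight": torch.randn(4, 4), "bias": torch.randn(4)}
private = privatize_update(update)
print({k: tuple(v.shape) for k, v in private.items()})
```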
Made with ❤️ and 🧠 by Camille Dunning.