Sana, an efficient text-to-image model from NVIDIA, generates high-resolution images quickly, even on a laptop GPU. While it can struggle with complex scenes, results improve noticeably with detailed prompts. This project shows how to create a self-hosted, private API that serves the Sana text-to-image model with LitServe, an easy-to-use, flexible serving engine for AI models built on FastAPI.
The project is structured as follows:
- `server.py`: The main code for the web server (a minimal sketch appears below).
- `client.py`: The code for client-side requests.
- `LICENSE`: The license file for the project.
- `README.md`: The README file with information about the project.
- `assets`: The folder containing screenshots of the application in action.
- `.gitignore`: The list of files and directories to be ignored by Git.
The tech stack used in this project:
- Python (programming language)
- PyTorch (deep learning framework)
- Hugging Face Diffusers (model library)
- LitServe (serving engine)
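For reference, `server.py` wraps the Sana pipeline in a LitServe `LitAPI`. The repository's actual file may differ; the following is only a minimal sketch, assuming the `SanaPipeline` class from Diffusers, a placeholder model ID, and a base64-encoded PNG response:

```python
# server.py -- minimal sketch, not the repository's exact code.
# The model ID and the base64 response format are assumptions.
import base64
import io

import torch
import litserve as ls
from diffusers import SanaPipeline


class SanaLitAPI(ls.LitAPI):
    def setup(self, device):
        # Load the Sana pipeline once per worker (placeholder model ID).
        self.pipe = SanaPipeline.from_pretrained(
            "Efficient-Large-Model/Sana_600M_1024px_diffusers",
            torch_dtype=torch.bfloat16,
        ).to(device)

    def decode_request(self, request):
        # Extract the text prompt from the JSON request body.
        return request["prompt"]

    def predict(self, prompt):
        # Run the diffusion pipeline and keep the first generated image.
        return self.pipe(prompt=prompt).images[0]

    def encode_response(self, image):
        # Serialize the PIL image as base64-encoded PNG for the JSON response.
        buffer = io.BytesIO()
        image.save(buffer, format="PNG")
        return {"image": base64.b64encode(buffer.getvalue()).decode("utf-8")}


if __name__ == "__main__":
    server = ls.LitServer(SanaLitAPI(), accelerator="auto")
    server.run(port=8000)
```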
To get started with this project, follow the steps below:
- Run the server:
python server.py
- Once the server starts successfully, you will see Uvicorn running on port 8000.
- Open a new terminal window.
- Run the client:
python client.py
Now you can see the model's output for the given request: the server returns an image generated from your prompt. A minimal sketch of what `client.py` might look like is shown below.
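This sketch assumes LitServe's default `/predict` endpoint and the base64 PNG response format used in the server sketch above; the repository's `client.py` may differ:

```python
# client.py -- minimal sketch; the endpoint and response format are
# assumptions matching the server sketch above.
import base64

import requests

# Send a text prompt to the locally running server.
response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "A watercolor painting of a lighthouse at sunrise"},
    timeout=300,
)
response.raise_for_status()

# Decode the base64-encoded PNG returned by the server and save it to disk.
image_bytes = base64.b64decode(response.json()["image"])
with open("generated.png", "wb") as f:
    f.write(image_bytes)
print("Saved generated.png")
```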
The project can be used to serve the Sana text-to-image generation model with LitServe. It lets you submit a prompt and generate an image from it, with potential use cases in prototyping, illustration, design, educational content generation, and more.
Contributions are welcome! If you would like to contribute to this project, please raise an issue to discuss the changes you want to make. Once the changes are approved, you can create a pull request.
This project is licensed under the Apache-2.0 License.
If you have any questions or suggestions about the project, please contact me on my GitHub profile.
Happy coding! 🚀