Image Caption Generator App

demo

In this project, an image caption generator was implemented using a CNN-LSTM encoder-decoder model. Image features are extracted with the CNN encoder and fed into the LSTM decoder, which generates the caption word by word. The project workflow consists of the following steps (minimal code sketches follow the list):

  • Created features for each image using the pre-trained VGG16 CNN.
  • Prepared the text data, which involves cleaning and tokenizing the captions.
  • Transformed the image features and text data into input-output pairs for training the CNN-LSTM model.
  • Built, trained, and evaluated the encoder-decoder neural network.
  • Built an image caption generator web application with Streamlit on top of the trained CNN-LSTM model.

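A minimal sketch of the feature-extraction step, assuming the Flicker8k_Dataset images sit under data/ (paths and file names are illustrative): the final classification layer of VGG16 is dropped and the 4096-dimensional fc2 output is kept as the image feature.

```python
import os
import pickle
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Model

# keep everything up to the 4096-dim fc2 layer as the feature extractor
vgg = VGG16()
extractor = Model(inputs=vgg.inputs, outputs=vgg.layers[-2].output)

features = {}
image_dir = "data/Flicker8k_Dataset"  # assumed dataset location
for name in os.listdir(image_dir):
    img = load_img(os.path.join(image_dir, name), target_size=(224, 224))
    x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
    image_id = os.path.splitext(name)[0]
    features[image_id] = extractor.predict(x, verbose=0)  # shape (1, 4096)

with open("data/features.pkl", "wb") as f:
    pickle.dump(features, f)
```
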
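A minimal sketch of caption cleaning, tokenization, and input-output pair creation; the captions dict (image id to list of caption strings) and the helper names are assumptions for illustration. Each caption is expanded into several (image feature, partial caption) -> next-word training pairs.

```python
import string
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

def clean(caption):
    # lowercase, strip punctuation, drop single-character and numeric tokens,
    # and wrap the caption in start/end tokens
    caption = caption.lower().translate(str.maketrans("", "", string.punctuation))
    words = [w for w in caption.split() if len(w) > 1 and w.isalpha()]
    return "startseq " + " ".join(words) + " endseq"

# the tokenizer is assumed to be fitted on all cleaned captions beforehand, e.g.:
# tokenizer = Tokenizer(); tokenizer.fit_on_texts(all_cleaned_captions)

def build_pairs(captions, features, tokenizer, max_length, vocab_size):
    X_image, X_text, y = [], [], []
    for image_id, caps in captions.items():
        for cap in caps:
            seq = tokenizer.texts_to_sequences([cap])[0]
            # every caption yields several (image, partial caption) -> next-word pairs
            for i in range(1, len(seq)):
                in_seq = pad_sequences([seq[:i]], maxlen=max_length)[0]
                out_word = to_categorical([seq[i]], num_classes=vocab_size)[0]
                X_image.append(features[image_id][0])
                X_text.append(in_seq)
                y.append(out_word)
    return np.array(X_image), np.array(X_text), np.array(y)
```
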
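A minimal sketch of the encoder-decoder network, assuming 4096-dimensional VGG16 features and a merge-style architecture; the layer sizes are illustrative, not necessarily the exact configuration used here.

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, LSTM, Embedding, Dropout, add

def define_model(vocab_size, max_length):
    # image feature encoder
    inputs1 = Input(shape=(4096,))
    fe1 = Dropout(0.5)(inputs1)
    fe2 = Dense(256, activation="relu")(fe1)
    # caption sequence encoder
    inputs2 = Input(shape=(max_length,))
    se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
    se2 = Dropout(0.5)(se1)
    se3 = LSTM(256)(se2)
    # decoder: merge both representations and predict the next word
    decoder1 = add([fe2, se3])
    decoder2 = Dense(256, activation="relu")(decoder1)
    outputs = Dense(vocab_size, activation="softmax")(decoder2)
    model = Model(inputs=[inputs1, inputs2], outputs=outputs)
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model
```
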
Usage

  • Clone the repository: git clone https://github.com/miladbehrooz/Image_Caption_Generator.git
  • Download the Flickr8k dataset
  • Unzip it and copy the Flicker8k_Dataset folder into the data folder
  • Install the requirements: pip install -r requirements.txt
  • Run the web app locally: streamlit run app/app.py (a sketch of the caption decoding step follows this list)

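When the app captions an uploaded image, the trained model is decoded word by word until an end token appears. A minimal sketch of such a greedy decoding loop, reusing the model, tokenizer, feature vector, and max_length from the sketches above (all names are illustrative):

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate_caption(model, tokenizer, photo_feature, max_length):
    # photo_feature: VGG16 feature of shape (1, 4096) for one image
    index_to_word = {i: w for w, i in tokenizer.word_index.items()}
    text = "startseq"
    for _ in range(max_length):
        seq = pad_sequences([tokenizer.texts_to_sequences([text])[0]], maxlen=max_length)
        yhat = model.predict([photo_feature, seq], verbose=0)
        word = index_to_word.get(int(np.argmax(yhat)))
        if word is None or word == "endseq":
            break
        text += " " + word
    return text.replace("startseq", "").strip()
```
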
To do

  • Use the Inception pre-trained image model instead of VGG16
  • Use a pre-trained embedding layer in the LSTM model
  • Tune the model configuration to achieve better performance
