An awesome AI that you can use anytime, with or without an internet connection!
Get Started »
View Demo · Report Bug · Request Feature
Kaalaman AI is an open-source web app that uses GPT-Generated Unified Format (GGUF) models such as:
Requires Higher Specs
- llama-2-7b-chat.Q8_0.gguf Download URL
- codellama-7b.Q8_0.gguf Download URL
- mistral-7b-instruct-v0.2.Q8_0.gguf Download URL
- orca-2-7b.Q8_0.gguf Download URL
Low - Mid Specs
- llama-2-7b-chat.Q4_0.gguf Download URL
- codellama-7b.Q4_0.gguf Download URL
- mistral-7b-instruct-v0.2.Q4_0.gguf Download URL
- orca-2-7b.Q4_0.gguf Download URL
Tools and knowledge you need to use this web app:
- GIT
- TYPESCRIPT & JAVASCRIPT
- PYTHON
- NODEJS & NPM
- NEXTJS
- REST API
- GGUF MODELS
- MODEL TRAINING (FINE-TUNING, PYTORCH)
- Download all of the models, create a "server/models" directory, and put the models inside it
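From the repo root, the directory setup looks like this (the example file name follows the model lists above; use the Download URL links to fetch the files):

```shell
# Create the models directory inside the server folder
mkdir -p server/models
# Then move your downloaded .gguf files into it, e.g.:
# mv ~/Downloads/llama-2-7b-chat.Q4_0.gguf server/models/
ls server/models
```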
- Clone the repo
  git clone https://github.com/Chysev/kaalaman-ai
- Install NPM packages (in both server and client)
  npm install or yarn
- Define the API Model - /server/server.js

  // Models
  const Models = {
    llama2_7b_chat_Q8: "llama-2-7b-chat.Q8_0.gguf", // Requires higher specs
    codellama_7b_Q8: "codellama-7b.Q8_0.gguf", // Requires higher specs
    mistral_7b_instruct_Q8: "mistral-7b-instruct-v0.2.Q8_0.gguf", // Requires higher specs
    orca2_7b_Q8: "orca-2-7b.Q8_0.gguf", // Requires higher specs
  };

  const Models2 = {
    llama2_7b_chat_Q4: "llama-2-7b-chat.Q4_0.gguf", // Low - mid specs
    codellama_7b_Q4: "codellama-7b.Q4_0.gguf", // Low - mid specs
    mistral_7b_instruct_Q4: "mistral-7b-instruct-v0.2.Q4_0.gguf", // Low - mid specs
    orca2_7b_Q4: "orca-2-7b.Q4_0.gguf", // Low - mid specs
  };

  // Model Path
  const model = new LlamaModel({
    modelPath: path.join(process.cwd(), "models", Models.llama2_7b_chat_Q8), // Define your chosen model here
  });
- Start the Server
  cd server
  npm run server or yarn server
- Start the Client
  npm run start or yarn start
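Once both server and client are running, the client talks to the server over its REST API. A minimal sketch of calling a chat endpoint from JavaScript — the route, port, and body fields here are assumptions for illustration; check /server/server.js for the actual ones:

```javascript
// Hypothetical request shape - the actual route, port, and body fields
// are defined in /server/server.js and may differ.
async function askKaalaman(prompt) {
  const res = await fetch("http://localhost:5000/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Server error: ${res.status}`);
  return res.json();
}
```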
Distributed under the MIT License. See LICENSE
for more information.
John Layda (Chysev) - Johnlayda92@gmail.com
Project Link: https://github.com/Chysev/kaalaman-ai