More information about the project is available on our website: EIC Chula Robocup.
If you are new to Python or joining the NLP team, please check out the guide.
| Service | Name | Offline | Status |
|---|---|---|---|
| Intent Classification | Rasa Open Source | ✅ | 🟩 |
| Large-Language-Model | OpenAI ChatGPT 3.5/4 | ❌ | 🟩 |
| Large-Language-Model | Meta Llama | ✅ | 🟨 |
| ROS Server | Custom Package | ✅ | 🟨 |
| Speech-to-Text | OpenAI/whisper | ✅ | 🟩 |
| Speech-to-Text | hugging-face/distil-whisper | ✅ | 🟩 |
| Text-to-Speech | IBM Mimic | ✅ | 🟩 |
| Text-to-Speech | Azure Cognitive Service | ❌ | 🟥 |
| Wake Word | Porcupine | ✅ | 🟩 |

Legend: 🟩 Working · 🟨 Developing · 🟥 Broken
- ✅ Transcription as accurate as a human
- ✅ Hardware acceleration on NVIDIA GPUs and Apple Silicon MacBooks
- ✅ Multi-Intent Extraction
- Offline LLM
- Streaming Live-Transcription
- Multi-language Support
  - ✅ English
  - Thai
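Multi-intent extraction is typically handled by having the classifier emit a combined label that is split on a delimiter such as `+`; the delimiter and intent names below are illustrative, not taken from this project's Rasa config. A minimal post-processing sketch:

```python
def split_multi_intent(label: str, sep: str = "+") -> list[str]:
    """Split a combined intent label such as "greet+ask_name" into its parts."""
    return [part for part in label.split(sep) if part]


print(split_multi_intent("greet+ask_name"))  # hypothetical combined label
```

A single intent passes through unchanged, so downstream code can always iterate over the returned list.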
Supported operating systems:
- ✅ macOS
- ✅ Ubuntu
- ❌ Windows [1]
Hardware preferences:
- NVIDIA GPU with CUDA support
- AMD GPU with ROCm support
- MacBook with Apple Silicon

[1]: Library conflicts with Pydub and PyAudio. Dual-booting is recommended.
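The hardware preferences above map directly onto PyTorch device selection. A minimal sketch that falls back to CPU when no accelerator (or PyTorch itself) is available:

```python
def pick_device() -> str:
    """Return the best available PyTorch device string: cuda, mps, or cpu."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed; run on CPU
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU with CUDA (ROCm builds also report here)
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        return "mps"  # Apple Silicon via Metal Performance Shaders
    return "cpu"


print(pick_device())
```

Passing the returned string to `model.to(...)` keeps the rest of the pipeline device-agnostic.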
Software installation is required. Please install the following:

- macOS: In VS Code, open a terminal in the root directory of the project (`/Robocup-2024-NLP`) and run:

  ```sh
  source init_macos.sh
  ```

- Ubuntu: In VS Code, open a terminal in the root directory of the project (`/Robocup-2024-NLP`) and run:

  ```sh
  source init_ubuntu.sh
  ```
If the above scripts do not work, please install manually:

- In VS Code, open a terminal in the root directory of the project (`/Robocup-2024-NLP`) and run:

  ```sh
  # Create a new conda environment
  conda create -n "nlp" python=3.9.16
  # Activate the environment
  conda activate nlp
  # Update pip
  pip install --upgrade pip
  # Install all the requirements
  pip install -r requirements.txt
  # Install the client package, which is used to communicate with the server from Python files
  cd src_client_pkg
  pip install -e .
  ```
- Install PyTorch for GPU acceleration:
  - For macOS, use the following command:

    ```sh
    conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 -c pytorch
    ```

  - For Ubuntu, go to the PyTorch Start Locally page, select your setup, and run, e.g.:

    ```sh
    conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
    ```
This NLP system is built on a client-server architecture. The client is a Python package, `src_client_pkg`, which is an installable library. The servers can be run independently.

Each distinct service has its own unique port number, but servers that provide the same service listen on the same port, which makes switching between them easy. For example, Azure and Mimic share a port number: to switch from Azure to Mimic, simply start one server instead of the other. This avoids having to change the client code.
More information is in `config.py`.

`main.py` runs the available services indexed in `socketconfig.yaml`. When running `main.py`, the user is prompted to select which service to run. Any service's server in the root directory can also be run independently with `python <service_name>.py`.
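The shared-port design can be sketched with a plain TCP client. The port constant and message format below are hypothetical stand-ins for whatever `config.py` actually defines; the point is that the client never changes when the server behind the port does:

```python
import socket

TTS_PORT = 5003  # hypothetical; the real port number lives in config.py


def send_tts_request(text: str, host: str = "localhost", port: int = TTS_PORT) -> str:
    """Send text to whichever TTS server (e.g. Azure or Mimic) is on the shared port."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(text.encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)  # signal end of request
        return sock.recv(1024).decode("utf-8")  # e.g. an acknowledgement
```

Because both TTS backends bind the same port, swapping them requires no edit to this function.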
`main.py` can run any combination of services. To run all services, run the following command and select option 1:

```sh
# Run all services
conda activate nlp
python main.py
```

```text
------ OUTPUT ------
Choose task:
1. nlpall[offline]
2. nlpstt
3. nlptts
4. nlprasa
5. nlpwakeword
task:
```
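The task prompt above can be modeled as a simple dispatch table. The mapping below mirrors the menu, but the actual launch logic in `main.py` may differ:

```python
# Menu options as shown by main.py; the keys are the user's numeric choices.
TASKS = {
    "1": "nlpall[offline]",
    "2": "nlpstt",
    "3": "nlptts",
    "4": "nlprasa",
    "5": "nlpwakeword",
}


def choose_task(selection: str) -> str:
    """Map the user's menu input to a task name, raising on invalid input."""
    try:
        return TASKS[selection.strip()]
    except KeyError:
        raise ValueError(f"unknown task: {selection!r}") from None
```

A dictionary keeps the menu and the dispatch in one place, so adding a service means adding one entry rather than another `if` branch.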
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
First Author: Tinapat (Game) Limsila - LinkedIn - @gametl02 - limsila.limsila@gmail.com
Second Author: Suppakit (Jom) Laomahamek - LinkedIn