Clone the repo into a folder of your choice.
To run with Docker, in the same folder:
- `docker build -t interview_bot .`
- `docker run -p 7860:7860 --env KEY=your_openai_api_key interview_bot`
- navigate to http://127.0.0.1:7860/ with your browser
To run locally instead:
- create a Python 3.9 venv or conda environment
- install the dependencies, e.g. `pip install -r requirements.txt`
- set your OpenAI API key as the "KEY" environment variable (e.g. via a .env file if using VSCode); see the sketch below
- as the UI might not fully render in a small window within an IDE, I recommend connecting to the URL returned by Gradio (by default http://127.0.0.1:7860/) with your browser
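
A minimal sketch of how the "KEY" environment variable can be picked up locally. The use of python-dotenv is an assumption (it may not be in requirements.txt), and the error message is illustrative:

```python
# Sketch only: load the OpenAI API key from the "KEY" environment variable.
import os

try:
    # python-dotenv is an assumption here; VSCode can also load .env files itself.
    from dotenv import load_dotenv
    load_dotenv()  # reads KEY=... from a .env file in the working directory
except ImportError:
    pass  # fall back to the regular process environment

openai_api_key = os.environ.get("KEY")
if not openai_api_key:
    raise RuntimeError('Set the "KEY" environment variable to your OpenAI API key.')
```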
Provide your CV (PDF) on the left-most tab, provide the job description on the middle tab, and begin the interview on the right-most tab.
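
For orientation, here is a minimal sketch of a three-tab Gradio layout like the one described above; the component names and the placeholder handler are illustrative, not the project's actual code:

```python
# Sketch only: three-tab layout (CV upload, job description, interview chat).
import gradio as gr

def answer(user_message, history):
    # Placeholder for the GPT-4 call that would generate the next interview question.
    return history + [(user_message, "…next interview question…")], ""

with gr.Blocks() as demo:
    with gr.Tab("CV"):
        gr.File(label="Upload your CV (PDF)")
    with gr.Tab("Job Description"):
        gr.Textbox(label="Job posting URL")
        gr.Textbox(label="...or paste the job description here", lines=10)
    with gr.Tab("Interview"):
        chat = gr.Chatbot(label="Interview")
        msg = gr.Textbox(label="Your answer")
        msg.submit(answer, [msg, chat], [chat, msg])

demo.launch(server_port=7860)
```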
Copyright 2023, Jozsef Szalma
Creative Commons Attribution-NonCommercial 4.0 International Public License
The Gradio code was partially reused from / informed by this guide.
Before repurposing this code for an HR use case, consider the following:
- OpenAI's usage policies explicitly prohibit: "Activity that has high risk of economic harm, including [...] Automated determinations of eligibility for [...] employment [...]"
- The EU AI Act proposal contains the following language: "AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons [..] should also be classified as high-risk"
- error handling around the job description is incomplete, e.g. if an invalid JD URL is provided the code won't fall back to the copy-pasted JD (see the sketch after this list)
- if no JD and/or CV is provided, GPT-4 might occasionally ignore the instruction to ask only one interview question at a time
- the current workflow consumes a lot of tokens, as the JD and the CV aren't summarized but passed as-is with each question
- the scraping logic breaks once the job posting is in the "no longer accepting applications" status
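
As one way to address the first limitation, a fallback from the JD URL to the pasted text could look roughly like this; the helper is hypothetical (not the project's current code) and the scraping shown is deliberately naive:

```python
# Sketch only: try to scrape the JD from the URL, fall back to the pasted text on failure.
import requests
from bs4 import BeautifulSoup

def resolve_job_description(jd_url: str, jd_pasted: str) -> str:
    if jd_url:
        try:
            response = requests.get(jd_url, timeout=10)
            response.raise_for_status()
            text = BeautifulSoup(response.text, "html.parser").get_text(" ", strip=True)
            if text:
                return text
        except requests.RequestException:
            pass  # invalid URL, network error, or an unparseable page
    return jd_pasted  # fall back to the copy-pasted job description
```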