Focus is an online interview analyser that extracts insights from the video and audio recording of an online interview. The system analyses the interviewee's emotions using image processing and deep learning techniques, and analyses the interview audio using NLP and machine learning techniques. It produces a detailed report of the interview, which companies can use to evaluate the candidate.
Demo video (Video.mp4):
https://drive.google.com/file/d/1PH8WC63IqV0deTqNUHjxg2hr--zlkZJJ/view?usp=sharing
- Clone this repository
$ git clone https://github.com/VaibhaveS/Focus.git
- Change into the project directory
$ cd Focus
- Run the Jupyter notebooks, either by opening them manually or from the command line
$ jupyter notebook Audio.ipynb Process.ipynb
- Enable the Google Cloud Speech-to-Text API for your project (a usage sketch follows these steps)
https://console.cloud.google.com/
- Open Homepage.html in the browser
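The Speech-to-Text step above is what the audio analysis relies on. As a minimal, hedged sketch of a diarized transcription call using the google-cloud-speech client library (the file name, sample rate, and speaker bounds are placeholder assumptions, not values from this repository):

```python
# Sketch: transcribe interview audio with speaker diarization.
# Assumes google-cloud-speech is installed and GOOGLE_APPLICATION_CREDENTIALS
# points at a service-account key; "interview.wav" is a placeholder name.
from google.cloud import speech

client = speech.SpeechClient()

with open("interview.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_time_offsets=True,
    diarization_config=speech.SpeakerDiarizationConfig(
        enable_speaker_diarization=True,
        min_speaker_count=2,  # a normal interview has two speakers,
        max_speaker_count=6,  # but the count may vary
    ),
)

# recognize() handles clips under ~1 minute; longer recordings need
# long_running_recognize() with the audio stored in Google Cloud Storage.
response = client.recognize(config=config, audio=audio)

# With diarization enabled, the last result aggregates every word
# together with the speaker tag it was attributed to.
words = response.results[-1].alternatives[0].words
print("Active speakers:", len({w.speaker_tag for w in words}))
```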
- Average response time of the interviewee
The time delta in seconds between each question and the answer that follows it is computed and added to a running sum, from which the overall average response time is derived. This lets the interviewer gauge how quickly the interviewee thinks and answers questions, i.e., their decision making (a sketch of this calculation follows the list).
- Bar chart signifying the count of each emotion (a plotting sketch follows the list)
- Number of active speakers (in a normal interview setting it should be two, but may vary)
- Number of questions asked
- Percentage of non-trivial questions
- Percentage of interesting questions
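As a rough illustration of the response-time and question-count metrics, the sketch below averages the gap between the end of each interviewer question and the start of the interviewee's answer. The timestamped-utterance structure and sample data are assumptions for illustration, not the notebooks' actual format:

```python
# Sketch: average response time and question count from a timestamped
# transcript. Each utterance is (speaker, start_s, end_s, text); this
# structure is assumed for illustration.
utterances = [
    ("interviewer", 0.0, 4.2, "Can you describe your last project?"),
    ("interviewee", 5.1, 20.3, "Sure, I built a web crawler that ..."),
    ("interviewer", 21.0, 23.5, "What was the hardest part?"),
    ("interviewee", 26.0, 40.0, "Scaling the backend."),
]

total_delta = 0.0  # running sum of per-question deltas
questions = 0

for prev, cur in zip(utterances, utterances[1:]):
    # A question-answer pair: an interviewer turn ending in '?'
    # followed by an interviewee turn.
    if prev[0] == "interviewer" and prev[3].strip().endswith("?") \
            and cur[0] == "interviewee":
        total_delta += cur[1] - prev[2]  # answer start minus question end
        questions += 1

print(f"Questions asked: {questions}")
if questions:
    print(f"Average response time: {total_delta / questions:.1f} s")
```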
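The emotion bar chart can be drawn with matplotlib once per-frame emotion labels are available. The labels below are placeholders; in the real system they would come from the deep learning model applied to the video frames:

```python
# Sketch: bar chart of emotion counts with matplotlib.
# The per-frame labels are placeholders for the model's predictions.
from collections import Counter
import matplotlib.pyplot as plt

frame_emotions = ["neutral", "happy", "neutral", "surprise",
                  "happy", "neutral", "sad"]

counts = Counter(frame_emotions)
plt.bar(list(counts.keys()), list(counts.values()))
plt.xlabel("Emotion")
plt.ylabel("Count")
plt.title("Emotion counts across the interview")
plt.show()
```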