# Emotional expressivity v1.0

| | |
|---|---|
| Date completed | April 13, 2023 |
| Release where first appeared | OpenWillis v1.0 |
| Researcher / Developer | Vijay Yadav |
```python
import openwillis as ow

framewise, summary = ow.emotional_expressivity(filepath='video.mov', baseline_filepath='video_baseline.mov')
```
**Measurement of emotional expressivity in the face**
We utilize deepface to quantify framewise intensity of the following emotions:
- Happiness
- Sadness
- Anger
- Fear
- Disgust
- Surprise
- Neutral (the absence of any emotion)
We also calculate a composite expressivity score, which averages the expressivity of each of the emotions above (except for neutral).
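The composite score can be sketched as a simple row-wise mean over the six emotion columns, excluding neutral. The emotion intensities below are hypothetical values for illustration; column names mirror the framewise output.

```python
import pandas as pd

# Six emotions that feed the composite score; neutral is excluded.
emotions = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

# Hypothetical framewise emotion intensities (0-1) for three frames.
framewise = pd.DataFrame(
    {
        "happiness": [0.80, 0.10, 0.05],
        "sadness":   [0.05, 0.60, 0.10],
        "anger":     [0.02, 0.10, 0.70],
        "fear":      [0.01, 0.05, 0.05],
        "disgust":   [0.01, 0.05, 0.05],
        "surprise":  [0.06, 0.05, 0.02],
        "neutral":   [0.05, 0.05, 0.03],
    }
)

# Composite expressivity: mean of all emotions except neutral, per frame.
framewise["composite"] = framewise[emotions].mean(axis=1)
print(framewise["composite"].round(4).tolist())
```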
Framewise values for each variable, ranging from 0-1, are saved in framewise.
If a baseline input is provided, all values are baseline-corrected and normalized using the same method as the **facial_expressivity** function. The resulting normalized values range between -1 and 1, with negative values signifying expressivity for that emotion below baseline and positive values signifying expressivity above baseline.
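The exact normalization used by facial_expressivity is not reproduced here; as a rough illustration only, the simplest form of baseline correction subtracts each emotion's mean baseline intensity, so that negative values indicate expressivity below baseline.

```python
import pandas as pd

# Illustrative sketch only -- the library's actual normalization follows
# facial_expressivity and may differ from this simple subtraction.
main = pd.DataFrame({"happiness": [0.6, 0.7], "sadness": [0.1, 0.2]})
baseline = pd.DataFrame({"happiness": [0.5, 0.5], "sadness": [0.3, 0.3]})

# Subtract the per-emotion mean baseline intensity from each frame;
# negative results mean expressivity below baseline.
corrected = main - baseline.mean()
print(corrected)
```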
The framewise expressivity values are compiled for the video and saved in the **summary** output, which contains the primary outcome measures of the function, namely the mean expressivity of each emotion over the course of the video.
**filepath**

| | |
|---|---|
| Type | str |
| Description | path to main video |

**baseline_filepath**

| | |
|---|---|
| Type | str, optional |
| Description | path to baseline video |
**framewise**

| | |
|---|---|
| Type | dataframe |
| Description | Dataframe with framewise output of facial emotion expressivity. Columns are emotional expressivity measures, with the last column being composite emotional expressivity, i.e. the mean of all individual emotions except neutral. Values range from -1 to 1 when a baseline is provided and from 0 to 1 otherwise. Rows represent frames in the video. |
What the data frame looks like:

| frame | angry | disgust | fear | happiness | sadness | surprise | neutral | composite |
|---|---|---|---|---|---|---|---|---|
| 0 | | | | | | | | |
| 1 | | | | | | | | |
| ... | | | | | | | | |
**summary**

| | |
|---|---|
| Type | dataframe |
| Description | Dataframe with summary measurements. The first column is the name of the statistic, subsequent columns are facial emotions, and the last column is composite expressivity, i.e. the mean of all emotions except neutral. The first row contains mean expressivity and the second row contains the standard deviation. |
What the data frame looks like:

| stat | happiness | sadness | anger | fear | disgust | surprise | neutral | composite |
|---|---|---|---|---|---|---|---|---|
| mean | | | | | | | | |
| stdev | | | | | | | | |
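The layout above can be sketched as a pandas reduction over the framewise output: one row for the mean and one for the standard deviation of each column. The framewise values below are hypothetical, and pandas' default sample standard deviation is assumed.

```python
import pandas as pd

# Hypothetical framewise output (baseline-corrected, so values in [-1, 1]).
framewise = pd.DataFrame(
    {"happiness": [0.2, 0.4], "sadness": [-0.1, -0.3], "composite": [0.05, 0.05]}
)

# Reduce framewise values to per-emotion statistics: first row mean,
# second row standard deviation (pandas' sample std assumed here).
summary = pd.DataFrame(
    {
        "stat": ["mean", "stdev"],
        **{col: [framewise[col].mean(), framewise[col].std()] for col in framewise},
    }
)
print(summary)
```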
Here, we use the sample data included as part of the repository to calculate emotional expressivity.
```python
import openwillis as ow

framewise, summary = ow.emotional_expressivity(filepath='data/subj01.mp4', baseline_filepath='data/subj01_base.mp4')
framewise.head(2)
```
| frame | angry | disgust | fear | happiness | sadness | surprise | neutral | composite |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.051450 | 0.000031 | 0.250485 | -0.113090 | 0.638163 | -0.000234 | -0.446323 | 0.137801 |
| 1 | 0.006272 | -0.000004 | 0.281984 | -0.113109 | 0.662542 | -0.000263 | -0.452409 | 0.139570 |
Below are dependencies specific to calculation of this measure.
| Dependency | License | Justification |
|---|---|---|
| deepface | MIT | Free and open-source (MIT). Provides emotion detection; performs no action unit or facial landmark detection. |
OpenWillis was developed by a small team of clinicians, scientists, and engineers based in Brooklyn, NY.