Note: Read doc/library_introduction.md before this page.
In order to use and/or slightly extend the OpenPose library, this section explains its 3 main modules. doc/UML/ contains the class diagrams of all these modules.

- The basic module: `core`.
- The multi-threading module: `thread`.
- The multi-person keypoint detection module: `pose`.
The `Array<T>` template class implements a multidimensional data array. It is our basic data container, analogous to `cv::Mat` in OpenCV, `Tensor` in Torch and TensorFlow, or `Blob` in Caffe. It wraps a `cv::Mat` and a `std::shared_ptr`, both of them pointing to the same raw data, i.e. they share the same memory, so the data can be read in both formats with no performance impact. For instance, `op::Datum` has several `op::Array<float>` members, such as the `op::Array<float> pose` with the pose data.
There are 5 different ways to allocate the memory (a short sketch follows the list):

- The constructor `Array(const std::vector<int>& size)`, which calls `reset(size)`.
- The constructor `Array(const int size)`, which calls `reset(size)`.
- The `reset(const std::vector<int>& size)` function: it allocates the memory indicated by `size`. The allocated memory equals the product of all elements in the `size` vector. Internally, it is saved as a 1-D `std::shared_ptr<T[]>`.
- The `reset(const int size)` function: the equivalent for 1-dimensional data (i.e. a vector).
- The `setFrom(const cv::Mat& cvMat)` function: it calls `reset()` and copies the data from `cvMat`.
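For instance, a minimal sketch of these allocation paths (the header path `openpose/core/array.hpp` is assumed; adjust it to your installation):

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <openpose/core/array.hpp> // header location assumed

int main()
{
    // 1. Constructor taking a size vector: a 3 x 720 x 1280 float array
    op::Array<float> multiDim(std::vector<int>{3, 720, 1280});
    // 2. Constructor taking a single int: a 1-D array of 25 floats
    op::Array<float> oneDim(25);
    // 3. reset() with a size vector, reusing an existing (possibly empty) Array
    op::Array<float> reused;
    reused.reset(std::vector<int>{3, 720, 1280});
    // 4. reset() with a single int: the 1-D equivalent
    reused.reset(25);
    // 5. setFrom(): reset() plus a copy of the cv::Mat content
    const cv::Mat frame(720, 1280, CV_32FC1);
    op::Array<float> fromMat;
    fromMat.setFrom(frame);
    return 0;
}
```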
The data can be accessed as a raw pointer, shared pointer or `cv::Mat`. So, given your `Array<T> array` (a sketch follows the list):

- Similar to `std::vector`: `array[index]` or `array.at(index)`. In debug mode, they both have the same functionality (they check the boundaries). In release mode, the only difference is that the `at` function checks whether the index is within the limits of the data.
- As `const cv::Mat`: `array.getConstCvMat()`. We do not allow direct modification of the `cv::Mat`, since some operations might change the dimensional size of the data. If you want to do so, you can clone this `cv::Mat`, perform any desired operation, and copy it back to the `Array` class with `setFrom()`.
- As raw pointer: `T* getPtr()` and `const T* const getConstPtr()`. Similar to `std::shared_ptr::get()`. For instance, CUDA code usually requires raw pointers to access its data.
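A short access sketch under the same assumptions (the function name is just for illustration):

```cpp
#include <opencv2/core/core.hpp>
#include <openpose/core/array.hpp> // header location assumed

float readFirstElement(op::Array<float>& array)
{
    // std::vector-like element access
    const auto unchecked = array[0];  // no boundary check in release mode
    const auto checked = array.at(0); // always checks the boundaries
    // Read-only cv::Mat view over the same memory (no copy)
    const cv::Mat& cvMat = array.getConstCvMat();
    // Raw pointer, e.g. to feed a CUDA kernel
    const float* constRawPtr = array.getConstPtr();
    (void)cvMat;
    (void)constRawPtr;
    return unchecked + checked;
}
```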
There are several functions to get information about the allocated data (example below):

- `bool empty()`: similar to `cv::Mat::empty()`. It checks whether internal data has been allocated.
- `std::vector<int> getSize()`: it returns the size of each dimension.
- `int getSize(const int index)`: it returns the size of the `index` dimension.
- `size_t getNumberDimensions()`: it returns the number of dimensions (i.e. `getSize().size()`).
- `size_t getVolume()`: it returns the total number of internal `T` objects, i.e. the product of all dimension sizes.
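For example (again a sketch, with the header path assumed):

```cpp
#include <vector>
#include <openpose/core/array.hpp> // header location assumed

void printArrayInfo()
{
    const op::Array<float> array(std::vector<int>{3, 720, 1280});
    const bool isEmpty = array.empty();             // false: memory is allocated
    const auto sizes = array.getSize();             // {3, 720, 1280}
    const int height = array.getSize(1);            // 720
    const auto nDims = array.getNumberDimensions(); // 3
    const auto volume = array.getVolume();          // 3 * 720 * 1280 = 2764800
    (void)isEmpty; (void)sizes; (void)height; (void)nDims; (void)volume;
}
```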
The `Datum` class has all the variables that our Workers need to share with each other. The user can inherit from `op::Datum` in order to add extra functionality (e.g. to add new Workers that require extra information between them). We highly recommend not modifying the `op::Datum` source code. Instead, just inherit from it and tell the Workers and `ThreadManager` to use your inherited class. No changes are needed in the OpenPose source code for this.
```cpp
struct UserDatum : public op::Datum {/* op::Datum + extra variables */};

// Worker and ThreadManager example initialization
op::WGui<std::shared_ptr<std::vector<UserDatum>>> userGUI(/* constructor arguments */);
op::ThreadManager<std::shared_ptr<std::vector<UserDatum>>> userThreadManager;
```
Since `UserDatum` inherits from `op::Datum`, all the original OpenPose code will compile and run with your inherited version of `op::Datum`.
The `ThreadManager<TDatums>` class manages and automates the multi-threading configuration and execution. The user just needs to add the desired Worker classes to be executed and choose the parallelization mode, and this class will take care of the rest. To instantiate it, just declare `op::ThreadManager<TypedefDatums> threadManager`.
There are 4 ways to add sequences of workers:

- `void add(const std::vector<std::tuple<unsigned long long, std::vector<TWorker>, unsigned long long, unsigned long long>>& threadWorkerQueues)`.
- `void add(const std::vector<std::tuple<unsigned long long, TWorker, unsigned long long, unsigned long long>>& threadWorkerQueues)`.
- `void add(const unsigned long long threadId, const std::vector<TWorker>& tWorkers, const unsigned long long queueInId, const unsigned long long queueOutId)`.
- `void add(const unsigned long long threadId, const TWorker& tWorker, const unsigned long long queueInId, const unsigned long long queueOutId)`.
There are 3 basic configuration modes: single-threading, multi-threading and smart multi-threading (a mix of single- and multi-threading):

- Single-threading, with 2 variations:
    - Just call:
      ```cpp
      threadManager.add(0, VECTOR_WITH_ALL_WORKERS, 0, 1); // VECTOR_WITH_ALL_WORKERS: a std::vector<TypedefWorker> with every worker
      ```
    - Or add the workers one by one, keeping the same thread id:
      ```cpp
      auto threadId = 0ull;
      auto queueIn = 0ull;
      auto queueOut = 1ull;
      threadManager.add(threadId, {wDatumProducer, wCvMatToOpInput}, queueIn++, queueOut++); // Thread 0, queues 0 -> 1
      threadManager.add(threadId, wPose, queueIn++, queueOut++);                             // Thread 0, queues 1 -> 2
      ```
- Multi-threading: just increase the thread id for each new sequence:
  ```cpp
  auto threadId = 0ull;
  auto queueIn = 0ull;
  auto queueOut = 1ull;
  threadManager.add(threadId++, wDatumProducer, queueIn++, queueOut++);  // Thread 0, queues 0 -> 1
  threadManager.add(threadId++, wCvMatToOpInput, queueIn++, queueOut++); // Thread 1, queues 1 -> 2
  threadManager.add(threadId++, wPose, queueIn++, queueOut++);           // Thread 2, queues 2 -> 3
  ```
- Smart multi-threading: some classes are much faster than others (e.g. pose estimation takes ~100 ms while extracting frames from a video takes only ~10 ms), and any machine has a limited number of threads. Therefore, the library allows the user to merge the faster Workers into a single thread in order to potentially speed up the code. Check the real-time pose demo to see a more complete example:
  ```cpp
  auto threadId = 0ull;
  auto queueIn = 0ull;
  auto queueOut = 1ull;
  threadManager.add(threadId++, {wDatumProducer, wCvMatToOpInput}, queueIn++, queueOut++); // Thread 0, queues 0 -> 1, 2 workers merged into 1 thread
  threadManager.add(threadId++, wPose, queueIn++, queueOut++);                             // Thread 1, queues 1 -> 2, 1 worker
  ```
In order to have X different threads, you just need X different thread ids in the `add()` function. There should not be any missing thread or queue id, i.e. when `start()` is called, all thread ids from 0 to the maximum thread id must have been added with the `add()` function, as well as all queue ids from 0 to the maximum queue id introduced.
The threads will be started following the thread id order (first the lowest id, last the highest one). In practice, thread id ordering might negatively affect the program execution by adding some lag: if the thread ids are assigned in the opposite order to the temporal order of the Workers (e.g. first the GUI and last the webcam reader), then during the first few iterations the GUI Worker will have an empty queue until all the other Workers have processed at least one frame.
Within each thread, the Workers are executed in the order in which they were added to `ThreadManager` by the `add()` function.
In addition, each queue id is forced to be the input and output of at least 1 Worker sequence. The special cases are queue id 0 (only forced to be the input of >= 1 Workers) and the maximum queue id (only forced to be the output of >= 1 Workers). This prevents users from accidentally forgetting to connect some queue ids.
Recursive queuing is allowed, as sketched below. E.g., a Worker might work from queue 0 to 1, another one from 1 to 2, and a third one from 2 to 1, creating a recursive queue/threading. However, index 0 is reserved for the first queue and the maximum index for the last one.
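A minimal sketch of that recursive example (`wA`, `wB` and `wC` are hypothetical Worker instances, and `threadManager` is assumed to be already declared):

```cpp
// wA, wB and wC: hypothetical workers of the appropriate TWorker type
threadManager.add(0, wA, 0, 1); // Thread 0, queues 0 -> 1
threadManager.add(1, wB, 1, 2); // Thread 1, queues 1 -> 2 (max queue id as output)
threadManager.add(2, wC, 2, 1); // Thread 2, queues 2 -> 1 (recursive step)
```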
Classes starting with the letter `W` followed by an upper-case letter (e.g. `WGui`) directly or indirectly inherit from `Worker`. They can be directly added to the `ThreadManager` class, so they can access and/or modify the data as well as be parallelized automatically.
The easiest way to create your own Worker is to inherit from `Worker` and implement the `work()` function such that it just calls a wrapper to your desired functionality (check the source code of some of our basic Workers). Since the Worker classes are templates, they are always compiled. Therefore, placing your desired functionality in a different file lets you compile it only once; otherwise, it would be compiled every time any code that uses your Worker is compiled.
All OpenPose Workers are templates, i.e. they are not limited to working with the default `op::Datum`. However, if you intend to use some of our Workers, your custom `TDatums` class (the one substituting `op::Datum`) should implement the same variables and functions that those Workers use. The easiest solution is to inherit from `op::Datum` and extend its functionality.
Users can directly implement their own `W` classes, inheriting from `Worker` or any other sub-inherited `Worker[...]` class, and add them to `ThreadManager`. For that, they just need to do one of the following (see the sketch after this list):

- Inherit from `Worker<T>` and implement the functionality `work(T& tDatum)`, i.e. it will use and modify `tDatum`.
- Inherit from `WorkerProducer<T>` and implement the functionality `T work()`, i.e. it will create and return `tDatum`.
- Inherit from `WorkerConsumer<T>` and implement the functionality `work(const T& tDatum)`, i.e. it will use but not modify `tDatum`.
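For example, a minimal `Worker<T>` sketch (the class name and body are hypothetical, and the exact virtual functions to override may vary slightly between OpenPose versions):

```cpp
#include <openpose/thread/worker.hpp> // header location assumed

// TDatums would typically be std::shared_ptr<std::vector<op::Datum>>
template<typename TDatums>
class WMyWorker : public op::Worker<TDatums>
{
public:
    void initializationOnThread() override
    {
        // One-time initialization, run inside the worker's own thread
    }

    void work(TDatums& tDatums) override
    {
        // Use and/or modify tDatums here
        if (tDatums != nullptr)
            for (auto& tDatum : *tDatums)
                (void)tDatum; // process each datum...
    }
};
```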
We suggest that users also start their inherited `Worker<T>` classes with the letter `W` for code clarity; this is required if they want to send us a pull request.
All Workers wrap and call a non-Worker, non-template equivalent which actually performs their functionality, e.g. `WPoseExtractor<T>` and `PoseExtractor`. In this way, threading and functionality are completely decoupled. This gives us the best of both templates and normal classes:
- Templates allow us to use different classes, e.g. the user could use their own specific equivalent of `op::Datum`. However, they must be compiled every time any function that uses them changes.
- Classes can be compiled only once and simply used afterwards. However, they can only be used with specific argument types.
By separating the functionality from its `Worker<T>` wrapper, we get the advantages of both approaches while eliminating their drawbacks. In this way, the user is able to (see the sketch after this list):

- Change `std::shared_ptr<std::vector<op::Datum>>` for a custom class, implementing their own `Worker` templates while reusing the already implemented functionality to create new custom `Worker` templates.
- Create a `Worker` which wraps several non-`Worker` classes.
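A sketch of this wrapping pattern (`MyAlgorithm` and `WMyAlgorithm` are hypothetical names, and the `pose` member follows the `op::Datum` naming mentioned above):

```cpp
#include <memory>
#include <openpose/core/array.hpp>    // header locations assumed
#include <openpose/thread/worker.hpp>

// Non-template, non-Worker class: compiled once, holds the actual logic
class MyAlgorithm
{
public:
    void process(op::Array<float>& pose); // defined in a .cpp file
};

// Thin template wrapper: only this small class depends on TDatums
template<typename TDatums>
class WMyAlgorithm : public op::Worker<TDatums>
{
public:
    explicit WMyAlgorithm(const std::shared_ptr<MyAlgorithm>& algorithm) :
        mAlgorithm{algorithm}
    {}
    void initializationOnThread() override {}
    void work(TDatums& tDatums) override
    {
        if (tDatums != nullptr)
            for (auto& tDatum : *tDatums)
                mAlgorithm->process(tDatum.pose); // threading-agnostic call
    }
private:
    const std::shared_ptr<MyAlgorithm> mAlgorithm;
};
```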
The human body pose detection is wrapped in the `WPoseExtractor<T>` Worker and its equivalent non-template `PoseExtractor`. In addition, the pose can be rendered and/or blended into the original frame with the `(W)PoseRenderer` class.
Currently, only `PoseExtractorCaffe` is implemented, which uses the Caffe framework. We might add other popular frameworks later (e.g. Torch or TensorFlow). If you compile our library with any other framework, please email us or make a pull request! We are really interested in adding any other deep net framework, and the code is mostly prepared for it. Just create the equivalent `PoseExtractorDesiredFramework` and make the pull request!
In order to be initialized, `PoseExtractorCaffe` has the following constructor and parameters: `PoseExtractorCaffe(const Point<int>& netInputSize, const Point<int>& netOutputSize, const Point<int>& outputSize, const int scaleNumber, const double scaleGap, const PoseModel poseModel, const std::string& modelsFolder, const int gpuId)`. A construction sketch follows the parameter list below.
- `netInputSize` is the resolution of the first layer of the deep net, i.e. the frames given to this class must have that size.
- `netOutputSize` is the resolution of the last layer of the deep net, i.e. the resulting heatmaps will have this size. Currently, it must be set to the same size as `netInputSize`.
- `outputSize` is the final desired resolution. The human pose keypoint locations will be scaled to this output size. However, the heatmaps will keep the `netOutputSize` size for performance reasons.
- `scaleNumber` and `scaleGap` specify the multi-scale parameters, explained in the demo section of README.md.
- `poseModel` specifies the model to load (e.g. COCO or MPI).
- `modelsFolder` is the folder where the pose model files are located.
- `gpuId` specifies the GPU where the deep net will run. To parallelize the process across the available GPUs, just create one instance of this class per GPU, each with the same parameters but a different GPU id.
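A construction sketch (the header path and the concrete values are assumptions, not prescribed defaults):

```cpp
#include <memory>
#include <openpose/pose/poseExtractorCaffe.hpp> // header location assumed

const op::Point<int> netInputSize{656, 368};
const op::Point<int> netOutputSize{656, 368}; // currently must match netInputSize
const op::Point<int> outputSize{1280, 720};
const auto poseExtractor = std::make_shared<op::PoseExtractorCaffe>(
    netInputSize, netOutputSize, outputSize,
    1,                      // scaleNumber: a single scale
    0.3,                    // scaleGap: irrelevant with a single scale
    op::PoseModel::COCO_18, // poseModel
    "models/",              // modelsFolder: path to the model files
    0                       // gpuId
);
```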
In order to detect the human pose:

- First, run the deep net over the desired target image by using `forwardPass(const Array<float>& inputNetData, const Point<int>& inputDataSize)`. `inputNetData` is the input image scaled to `netInputSize`, while `inputDataSize` indicates the original frame resolution before being rescaled to `netInputSize` (this is required because we resize images keeping the original aspect ratio).
- Afterwards, you can choose to get:
    - The people pose as an `op::Array`: `Array<float> getPose()`.
    - The scale used (keeping the aspect ratio) to rescale from `netOutputSize` to `outputSize`: `double getScaleNetToOutput()`.
    - The people pose as a constant GPU float pointer (not implemented yet): `const float* getPoseGpuConstPtr()`.
    - The heatmap data as a constant CPU or GPU float pointer: `const float* getHeatMapCpuConstPtr()` and `const float* getHeatMapGpuConstPtr()`.
For performance reasons, we only copy the people pose data returned by `getPose()`. We do not copy the heatmap and GPU pose values; we just give you a raw pointer to them. Hence, you need to manually copy that data if you intend to use it later, since we reuse that memory on each `forwardPass()`.
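Putting both steps together (a sketch; it assumes `poseExtractor` was created as above and has already been initialized on its thread):

```cpp
void estimatePose(const std::shared_ptr<op::PoseExtractorCaffe>& poseExtractor,
                  const op::Array<float>& inputNetData, // frame scaled to netInputSize
                  const op::Point<int>& inputDataSize)  // original frame resolution
{
    // 1. Run the deep net
    poseExtractor->forwardPass(inputNetData, inputDataSize);
    // 2. Retrieve the results
    const auto pose = poseExtractor->getPose();                         // copied: safe to keep
    const auto scaleNetToOutput = poseExtractor->getScaleNetToOutput();
    const float* heatMaps = poseExtractor->getHeatMapCpuConstPtr();     // raw pointer:
    // copy this memory before the next forwardPass() if you need it later
    (void)pose; (void)scaleNetToOutput; (void)heatMaps;
}
```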
After estimating the pose, you will usually want to visualize it. The `PoseRenderer` class does this work for you.
In order to be initialized, `PoseRenderer` has the following constructor and parameters: `PoseRenderer(const Point<int>& netOutputSize, const Point<int>& outputSize, const PoseModel poseModel, const std::shared_ptr<PoseExtractor>& poseExtractor, const float alphaKeypoint, const float alphaHeatMap)`.
- `netOutputSize`, `outputSize` and `poseModel` are the same as the ones used for `PoseExtractorCaffe`.
- `poseExtractor` is the pose extractor used previously. It is only used for heatmap and PAF rendering, since the GPU data is not copied to `op::Datum` for performance reasons. If any of the heatmaps are going to be rendered, `PoseRenderer` must be placed in the same thread as `PoseExtractor`; otherwise, it will throw a runtime exception.
- `alphaKeypoint` and `alphaHeatMap` control the blending coefficients between the original frame and the rendered pose or heatmap/PAF, respectively. A value of `alphaKeypoint = 1` will render the pose with no transparency at all, while `alphaKeypoint = 0` will make it invisible. Similarly, `alphaHeatMap = 1` shows only the heatmap, while `alphaHeatMap = 0` shows only the original frame.
In order to render the detected human pose, run `std::pair<int, std::string> renderPose(Array<float>& outputData, const Array<float>& pose, const double scaleNetToOutput)`, as in the sketch after this list.
- `outputData` is the `Array<float>` where the original image, resized to `outputSize`, is located.
- `pose` is given by `PoseExtractor::getPose()`.
- `scaleNetToOutput` is given by `PoseExtractor::getScaleNetToOutput()`.
- The resulting `std::pair` contains the id of the rendered element and its name, e.g. `<0, "Nose">` or `<19, "Part Affinity Fields">`.
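A rendering sketch under the same assumptions (the function and parameter names are just for illustration):

```cpp
#include <openpose/pose/poseRenderer.hpp> // header location assumed

void renderDetectedPose(op::PoseRenderer& poseRenderer,
                        op::PoseExtractor& poseExtractor,
                        op::Array<float>& outputData) // original frame resized to outputSize
{
    const auto elementRendered = poseRenderer.renderPose(
        outputData,
        poseExtractor.getPose(),
        poseExtractor.getScaleNetToOutput());
    // elementRendered.first:  id of the rendered element, e.g. 0
    // elementRendered.second: its name, e.g. "Nose" or "Part Affinity Fields"
    (void)elementRendered;
}
```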