optional hardware accelerated scaling
- extend hardware config with input_width, input_height fields
- if specified and different from width/height, perform hardware-accelerated scaling before encoding
- update examples
- update docs
- update readme

Adds dependency on libavfilter.

Closes #24
Makes #5 more complex
Indirectly related to #25
Indirectly related to #6
bmegli committed Apr 10, 2020
1 parent 98be90d commit 14fa19b
Showing 6 changed files with 232 additions and 59 deletions.
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -5,7 +5,7 @@ project(
)

add_library(hve hve.c)
-target_link_libraries(hve avcodec avutil)
+target_link_libraries(hve avcodec avutil avfilter)
install(TARGETS hve DESTINATION lib)
install(FILES hve.h DESTINATION include)

28 changes: 14 additions & 14 deletions README.md
@@ -1,6 +1,6 @@
# HVE - Hardware Video Encoder C library

-This library wraps hardware video encoding in a simple interface.
+This library wraps hardware video encoding and scaling in a simple interface.
There are no performance losses (at the cost of library flexibility).

Currently it supports VAAPI and various codecs (H.264, HEVC, ...).\
@@ -20,7 +20,7 @@ Raw encoding (H264, HEVC, ...):
- raw dumping (H264, HEVC, ...)
- ...

-Complex pipelines (muxing, scaling, color conversions, filtering) are beyond the scope of this library.
+Complex pipelines (muxing, filtering) are beyond the scope of this library.

## Platforms

@@ -34,7 +34,7 @@ Intel VAAPI compatible hardware encoders ([Quick Sync Video](https://ark.intel.c
## Dependencies

Library depends on:
-- FFmpeg `avcodec` and `avutil` (at least 3.4 version)
+- FFmpeg `avcodec`, `avutil`, `avfilter` (at least 3.4 version)

Works with system FFmpeg on Ubuntu 18.04 and doesn't on 16.04 (outdated FFmpeg and VAAPI ecosystem).

@@ -46,7 +46,7 @@ Tested on Ubuntu 18.04.
# update package repositories
sudo apt-get update
# get avcodec and avutil (and ffmpeg for testing)
-sudo apt-get install ffmpeg libavcodec-dev libavutil-dev
+sudo apt-get install ffmpeg libavcodec-dev libavutil-dev libavfilter-dev
# get compilers and make and cmake
sudo apt-get install build-essential
# get cmake - we need to specify libcurl4 for Ubuntu 18.04 dependencies problem
@@ -108,7 +108,7 @@ You should see procedurally generated video (moving through greyscale).

## Using

-See examples directory for a more complete and commented examples with error handling.
+See examples directory for more complete and commented examples with error handling.

There are just 4 functions and 3 user-visible data types:
- `hve_init`
@@ -117,15 +117,15 @@ There are just 4 functions and 3 user-visible data types:
- `hve_close`

```C
-struct hve_config hardware_config = {WIDTH, HEIGHT, FRAMERATE, DEVICE, ENCODER,
-	PIXEL_FORMAT, PROFILE, BFRAMES, BITRATE, QP, GOP_SIZE, COMPRESSION_LEVEL};
+struct hve_config hardware_config = {WIDTH, HEIGHT, INPUT_WIDTH, INPUT_HEIGHT, FRAMERATE,
+	DEVICE, ENCODER, PIXEL_FORMAT, PROFILE, BFRAMES, BITRATE, QP, GOP_SIZE, COMPRESSION_LEVEL};
struct hve *hardware_encoder=hve_init(&hardware_config);
struct hve_frame frame = { 0 };

//later assuming PIXEL_FORMAT is "nv12" (you may use something else)

//fill with your stride (width including padding if any)
-frame.linesize[0] = frame.linesize[1] = WIDTH;
+frame.linesize[0] = frame.linesize[1] = INPUT_WIDTH;

AVPacket *packet; //encoded data is returned in FFmpeg packet
int failed; //error indicator while encoding
@@ -170,13 +170,13 @@ You have several options.
For static linking of HVE and dynamic linking of FFmpeg libraries (easiest):
- copy `hve.h` and `hve.c` to your project and add them in your favourite IDE
-- add `avcodec` and `avutil` to linked libraries in IDE project configuration
+- add `avcodec`, `avutil`, `avfilter` to linked libraries in IDE project configuration
For dynamic linking of HVE and FFmpeg libraries:
- place `hve.h` where compiler can find it (e.g. `make install` for `/usr/local/include/hve.h`)
- place `libhve.so` where linker can find it (e.g. `make install` for `/usr/local/lib/libhve.so`)
- make sure `/usr/local/...` is considered for libraries
-- add `hve`, `avcodec` and `avutil` to linked libraries in IDE project configuration
+- add `hve`, `avcodec`, `avutil`, `avfilter` to linked libraries in IDE project configuration
- make sure `libhve.so` is reachable to your program at runtime (e.g. set `LD_LIBRARY_PATH`)
### CMake
@@ -208,7 +208,7 @@ add_library(hve SHARED hardware-video-encoder/hve.c)
add_executable(your-project main.cpp)
target_include_directories(your-project PRIVATE hardware-video-encoder)
-target_link_libraries(your-project hve avcodec avutil)
+target_link_libraries(your-project hve avcodec avutil avfilter)
```

For example see [realsense-ir-to-vaapi-h264](https://github.com/bmegli/realsense-ir-to-vaapi-h264)
@@ -219,14 +219,14 @@ Assuming your `main.c`/`main.cpp` and `hve.h`, `hve.c` are all in the same directory:

C
```bash
-gcc main.c hve.c -lavcodec -lavutil -o your-program
+gcc main.c hve.c -lavcodec -lavutil -lavfilter -o your-program
```

C++
```bash
gcc -c hve.c
g++ -c main.cpp
-g++ hve.o main.o -lavcodec -lavutil -o your program
+g++ hve.o main.o -lavcodec -lavutil -lavfilter -o your-program
```

## License
@@ -240,7 +240,7 @@ This is similar to LGPL but more permissive:
Like in LGPL, if you modify this library, you have to make your changes available.
Making a github fork of the library with your changes satisfies those requirements perfectly.

-Since you are linking to FFmpeg libraries. Consider also `avcodec` and `avutil` licensing.
+Since you are linking to FFmpeg libraries, consider also `avcodec`, `avutil` and `avfilter` licensing.

## Additional information

18 changes: 10 additions & 8 deletions examples/hve_encode_raw_h264.c
@@ -16,6 +16,8 @@

const int WIDTH=1280;
const int HEIGHT=720;
+const int INPUT_WIDTH=1280; //optional hardware scaling if different from width
+const int INPUT_HEIGHT=720; //optional hardware scaling if different from height
const int FRAMERATE=30;
int SECONDS=10;
const char *DEVICE=NULL; //NULL for default or device e.g. "/dev/dri/renderD128"
@@ -40,8 +42,8 @@ int main(int argc, char* argv[])
return -1;

//prepare library data
-struct hve_config hardware_config = {WIDTH, HEIGHT, FRAMERATE, DEVICE, ENCODER,
-	PIXEL_FORMAT, PROFILE, BFRAMES, BITRATE, QP, GOP_SIZE, COMPRESSION_LEVEL};
+struct hve_config hardware_config = {WIDTH, HEIGHT, INPUT_WIDTH, INPUT_HEIGHT, FRAMERATE,
+	DEVICE, ENCODER, PIXEL_FORMAT, PROFILE, BFRAMES, BITRATE, QP, GOP_SIZE, COMPRESSION_LEVEL};
struct hve *hardware_encoder;

//prepare file for raw H.264 output
@@ -76,20 +78,20 @@ int encoding_loop(struct hve *hardware_encoder, FILE *output_file)
//we are working with NV12 because we specified nv12 pixel format
//when calling hve_init, in principle we could use other format
//if hardware supported it (e.g. RGB0 is supported on my Intel)
-uint8_t Y[WIDTH*HEIGHT]; //dummy NV12 luminance data
-uint8_t color[WIDTH*HEIGHT/2]; //dummy NV12 color data
+uint8_t Y[INPUT_WIDTH*INPUT_HEIGHT]; //dummy NV12 luminance data
+uint8_t color[INPUT_WIDTH*INPUT_HEIGHT/2]; //dummy NV12 color data

//fill with your stride (width including padding if any)
-frame.linesize[0] = frame.linesize[1] = WIDTH;
+frame.linesize[0] = frame.linesize[1] = INPUT_WIDTH;

//encoded data is returned in FFmpeg packet
AVPacket *packet;

for(f=0;f<frames;++f)
{
//prepare dummy image data, normally you would take it from camera or other source
-memset(Y, f % 255, WIDTH*HEIGHT); //NV12 luminance (ride through greyscale)
-memset(color, 128, WIDTH*HEIGHT/2); //NV12 UV (no color really)
+memset(Y, f % 255, INPUT_WIDTH*INPUT_HEIGHT); //NV12 luminance (ride through greyscale)
+memset(color, 128, INPUT_WIDTH*INPUT_HEIGHT/2); //NV12 UV (no color really)

//fill hve_frame with pointers to your data in NV12 pixel format
frame.data[0]=Y;
@@ -149,7 +151,7 @@ int hint_user_on_failure(char *argv[])
void hint_user_on_success()
{
printf("finished successfully\n");
-printf("output written to \"outout.h264\" file\n");
+printf("output written to \"output.h264\" file\n");
printf("test with:\n\n");
printf("ffplay output.h264\n");
}
18 changes: 10 additions & 8 deletions examples/hve_encode_raw_hevc10.c
@@ -16,6 +16,8 @@

const int WIDTH=1280;
const int HEIGHT=720;
+const int INPUT_WIDTH=1280; //optional scaling if different from width
+const int INPUT_HEIGHT=720; //optional scaling if different from height
const int FRAMERATE=30;
int SECONDS=10;
const char *DEVICE=NULL; //NULL for default or device e.g. "/dev/dri/renderD128"
@@ -40,8 +42,8 @@ int main(int argc, char* argv[])
return -1;

//prepare library data
-struct hve_config hardware_config = {WIDTH, HEIGHT, FRAMERATE, DEVICE, ENCODER,
-	PIXEL_FORMAT, PROFILE, BFRAMES, BITRATE, QP, GOP_SIZE, COMPRESSION_LEVEL};
+struct hve_config hardware_config = {WIDTH, HEIGHT, INPUT_WIDTH, INPUT_HEIGHT, FRAMERATE,
+	DEVICE, ENCODER, PIXEL_FORMAT, PROFILE, BFRAMES, BITRATE, QP, GOP_SIZE, COMPRESSION_LEVEL};
struct hve *hardware_encoder;

//prepare file for raw HEVC output
@@ -76,21 +78,21 @@ int encoding_loop(struct hve *hardware_encoder, FILE *output_file)
//we are working with P010LE because we specified p010le pixel format
//when calling hve_init, in principle we could use other format
//if hardware supported it (e.g. RGB0 is supported on my Intel)
-uint16_t Y[WIDTH*HEIGHT]; //dummy p010le luminance data (or p016le)
-uint16_t color[WIDTH*HEIGHT/2]; //dummy p010le color data (or p016le)
+uint16_t Y[INPUT_WIDTH*INPUT_HEIGHT]; //dummy p010le luminance data (or p016le)
+uint16_t color[INPUT_WIDTH*INPUT_HEIGHT/2]; //dummy p010le color data (or p016le)

//fill with your stride (width including padding if any)
-frame.linesize[0] = frame.linesize[1] = WIDTH*2;
+frame.linesize[0] = frame.linesize[1] = INPUT_WIDTH*2;

//encoded data is returned in FFmpeg packet
AVPacket *packet;

for(f=0;f<frames;++f)
{
//prepare dummy image data, normally you would take it from camera or other source
-for(int i=0;i<WIDTH*HEIGHT;++i)
+for(int i=0;i<INPUT_WIDTH*INPUT_HEIGHT;++i)
	Y[i] = UINT16_MAX * f / frames; //linear interpolation between 0 and UINT16_MAX
-for(int i=0;i<WIDTH*HEIGHT/2;++i)
+for(int i=0;i<INPUT_WIDTH*INPUT_HEIGHT/2;++i)
color[i] = UINT16_MAX / 2; //dummy middle value for U/V, equals 128 << 8, equals 32768
//fill hve_frame with pointers to your data in P010LE pixel format
//note that we have actually prepared P016LE data but it is binary compatible with P010LE
@@ -152,7 +154,7 @@ int hint_user_on_failure(char *argv[])
void hint_user_on_success()
{
printf("finished successfully\n");
-printf("output written to \"out.hevc\" file\n");
+printf("output written to \"output.hevc\" file\n");
printf("test with:\n\n");
printf("ffplay output.hevc\n");
}