
Release v0.3.0

@MRXLT released this 11 Jun 03:25
66fdcce
  • New features
    • Add ir_optim and use_mkl arguments (use_mkl is for the CPU version only)
    • Support custom DAG for prediction service
    • HTTP service supports batch prediction
    • HTTP service supports startup via uWSGI
    • Support model file monitoring, remote pulling, and hot loading
    • Support A/B testing (ABTest)
    • Add image preprocessing, Chinese word segmentation preprocessing, and Chinese sentiment analysis preprocessing modules, as well as image segmentation and image detection postprocessing modules, to paddle-serving-app
    • Add pre-trained model and sample code downloads to paddle-serving-app, and integrate a profiling function
    • Release CentOS 6 Docker images for compiling Paddle Serving
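The HTTP batch-prediction support above boils down to sending several feed instances in a single request body. A minimal sketch of building such a request follows; the feed key "x", fetch name "price", endpoint path, and port are assumptions drawn from typical Paddle Serving web-service demos, not from these notes:

```python
import json

# Two hypothetical feature vectors; a batch request simply carries
# multiple entries in the "feed" list of one JSON body.
instances = [{"x": [0.0] * 13}, {"x": [0.1] * 13}]
payload = {"feed": instances, "fetch": ["price"]}
body = json.dumps(payload)

# To actually send it, a running serving instance is required, e.g.:
# import requests
# r = requests.post("http://127.0.0.1:9393/uci/prediction",
#                   data=body, headers={"Content-Type": "application/json"})
print(len(payload["feed"]))  # number of instances in the batch
```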
  • Bug fixes
  • New documents
  • Performance optimization
    • Optimized the memory-copy cost of numpy.array-format inputs and outputs. On the ResNet50 ImageNet classification task with single client-side concurrency and batch size 1, QPS is 100.38% higher than in v0.2.0.
  • Compatibility optimization
    • The client side removes the dependency on patchelf
    • Released paddle-serving-client for Python 2.7, 3.6, and 3.7
    • Server and client can be deployed on CentOS 6/7 and Ubuntu 16/18
  • More demos
    • Chinese sentiment analysis task: lac+senta
    • Image segmentation task: deeplabv3, unet
    • Image detection task: faster_rcnn
    • Image classification task: mobilenet, resnet_v2_50