A collection of models that automatically detect UI components of design systems on design screens such as Figma or Zeplin.
This repository provides detection models custom-trained on the UI data of Mobbin, RICO, CLAY, and Enrico. These models are useful for out-of-the-box inference if you are interested in these design systems.
They are also useful for initializing your own models when training on your UI design assets. Please see this blog guide and my Mobbin-training Colab for more details.
Dataset | Model name | Backbone | Training | Note
---|---|---|---|---
Mobbin | Saved Model / Web Model | SSD MobileNet V2 FPNLite 640x640 | 50,000 steps | Tensorboard
Mobbin | Saved Model / Web Model | SSD MobileNet V2 FPNLite 640x640 | 7,800 steps | Colab
RICO | Saved Model / Web Model | SSD MobileNet V2 FPNLite 640x640 | 50,000 steps | Tensorboard
RICO | TF Lite Model | EfficientNet-lite | 74,540 steps | Colab
RICO | Saved Model / Web Model | SSD MobileNet V2 FPNLite 640x640 | 7,800 steps | |
CLAY | Saved Model / Web Model | SSD MobileNet V2 FPNLite 640x640 | 7,500 steps | |
CLAY | Saved Model / Web Model | SSD MobileNet V2 FPNLite 640x640 | 50,000 steps | Tensorboard
ENRICO | Web Model | | Batch=32, epoch=20 | Colab
VINS | Web Model | | Batch=6, epoch=20 | Colab
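The SSD SavedModel variants follow the TensorFlow Object Detection API output convention: normalized boxes as `[ymin, xmin, ymax, xmax]` plus per-box confidence scores and class IDs. A minimal post-processing sketch is shown below; the score threshold and the example detections are illustrative assumptions, not values produced by these models:

```python
# Post-process raw detector output in the TF Object Detection API format:
# normalized boxes [ymin, xmin, ymax, xmax] with per-box scores and classes.
def filter_detections(boxes, scores, classes, image_width, image_height,
                      score_threshold=0.5):
    """Keep detections above the threshold and convert normalized
    boxes to pixel coordinates (left, top, right, bottom)."""
    results = []
    for box, score, cls in zip(boxes, scores, classes):
        if score < score_threshold:
            continue  # drop low-confidence candidates
        ymin, xmin, ymax, xmax = box
        results.append({
            "class_id": int(cls),
            "score": float(score),
            "box": (
                int(xmin * image_width),   # left
                int(ymin * image_height),  # top
                int(xmax * image_width),   # right
                int(ymax * image_height),  # bottom
            ),
        })
    return results

# Example: two hypothetical detections on a 640x640 screen capture.
detections = filter_detections(
    boxes=[[0.10, 0.20, 0.30, 0.60], [0.50, 0.50, 0.55, 0.55]],
    scores=[0.92, 0.31],
    classes=[1, 3],
    image_width=640,
    image_height=640,
)
print(detections)  # only the 0.92-score box survives the 0.5 threshold
```

The same filtering applies to the Web Model (TensorFlow.js) outputs once the tensors are read back as plain arrays.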
Special thanks to my friend who lent me his computing resources.