Releases: carefree0910/carefree-learn
carefree-learn 0.1.15
Release Notes
carefree-learn 0.1.15
improved overall performance.
DDP
Since PyTorch is introducing the ZeRO optimizer, we decided to remove the deepspeed dependency and use PyTorch's native DDP.
import cflearn

# tr_file / te_file are placeholder paths to your training / test files
results = cflearn.ddp(tr_file, world_size=2)
predictions = results.m.predict(te_file)
JitLSTM
Since PyTorch's native RNNs do not support dropouts on w_ih and w_hh, we followed the official implementation of the jit version of LSTM and implemented these dropouts.
m = cflearn.make(
    "rnn",
    model_config={
        "pipe_configs": {
            "rnn": {
                "extractor": {
                    "cell": "JitLSTM",
                }
            }
        }
    },
)
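For context, these are DropConnect-style dropouts: entries of the weight matrices themselves are dropped, rather than the activations. Below is a minimal, hypothetical sketch of the idea in plain PyTorch (not carefree-learn's actual JitLSTM implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    # sketch of an LSTM cell with dropout applied to w_ih / w_hh
    def __init__(self, input_size: int, hidden_size: int, w_dropout: float = 0.1):
        super().__init__()
        self.w_dropout = w_dropout
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x, state):
        h, c = state
        # dropout is applied to the weight matrices themselves, not to activations
        w_ih = F.dropout(self.w_ih, self.w_dropout, self.training)
        w_hh = F.dropout(self.w_hh, self.w_dropout, self.training)
        gates = x @ w_ih.t() + h @ w_hh.t() + self.bias
        i, f, g, o = gates.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)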
Misc
- Fixed NNB when std is 0 (177363e).
- Fixed summary in some edge cases (945ca15, f95f667, 2768153).
- Introduced ONNXWrapper for more general ONNX exports (226de5b).
carefree-learn 0.1.14
Release Notes
carefree-learn 0.1.14
improved overall performance.
Summary
For non-distributed training, carefree-learn now prints out a model summary by default (inspired by torchsummary):
========================================================================================================================
Layer (type) Input Shape Output Shape Trainable Param #
------------------------------------------------------------------------------------------------------------------------
RNN [-1, 5, 1] [-1, 256] 198,912
GRU [-1, 5, 1] [[-1, 5, 256], [-1, 128, 256]] 198,912
FCNNHead [-1, 256] [-1, 1] 395,777
MLP [-1, 256] [-1, 1] 395,777
Mapping-0 [-1, 256] [-1, 512] 132,096
Linear [-1, 256] [-1, 512] 131,072
BN [-1, 512] [-1, 512] 1,024
ReLU [-1, 512] [-1, 512] 0
Dropout [-1, 512] [-1, 512] 0
Mapping-1 [-1, 512] [-1, 512] 263,168
Linear [-1, 512] [-1, 512] 262,144
BN [-1, 512] [-1, 512] 1,024
ReLU [-1, 512] [-1, 512] 0
Dropout [-1, 512] [-1, 512] 0
Linear [-1, 512] [-1, 1] 513
========================================================================================================================
Total params: 594,689
Trainable params: 594,689
Non-trainable params: 0
------------------------------------------------------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.30
Params size (MB): 2.27
Estimated Total Size (MB): 2.57
------------------------------------------------------------------------------------------------------------------------
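For instance, any plain fit call should print such a summary before training starts; a sketch reusing the high-level APIs shown elsewhere in these notes:

import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
# a model summary like the one above is printed right before training begins
m = cflearn.make().fit(x, y)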
Zoo.search
Now carefree-learn supports empirical HPO via Zoo.search (2c19505), which can achieve good performance without searching a large search space.
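These notes don't show Zoo.search's signature, but "empirical HPO" here means choosing among a handful of presets known to work well, rather than sweeping a large space. A rough, hand-rolled illustration of the idea, using only the high-level APIs that appear elsewhere in these notes (the candidate list is arbitrary):

import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
# try a few empirically good presets and keep the best one by (in-sample) MAE;
# Zoo.search automates this kind of selection
best_m, best_mae = None, float("inf")
for model in ["fcnn", "tree_dnn"]:
    m = cflearn.make(model).fit(x, y)
    mae = float(np.abs(m.predict(x) - y).mean())
    if mae < best_mae:
        best_m, best_mae = m, mae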
carefree-learn 0.1.13
Weekly patch with miscellaneous fixes and updates.
carefree-learn 0.1.12
Weekly patch with miscellaneous fixes and updates.
carefree-learn 0.1.11
carefree-learn 0.1.11
is mainly a patch release which supports more customizations. However, these features are still at an early stage and are likely to change in the future. If you want to customize carefree-learn, it is still highly recommended to clone this repo and install it in edit mode (pip install -e .).
carefree-learn 0.1.10
Release Notes
carefree-learn 0.1.10
improved overall performance and deepspeed
accessibility.
Versioning
carefree-learn
now supports checking its version via __version__
:
import cflearn
cflearn.__version__ # '0.1.10'
Distributed Training
carefree-learn
now provides out-of-the-box API for distributed training with deepspeed
:
import cflearn
import numpy as np
x = np.random.random([1000000, 10])
y = np.random.random([1000000, 1])
m = cflearn.deepspeed(x, y, cuda="0,1,2,3").m
Misc
- Supported use_final_bn in FCNNHead (#75).
- Ensured that models are always in eval mode during inference (#67).
- Supported specifying resource_config of Parallel in Experiment (#68).
- Implemented profile_forward for Pipeline.
carefree-learn 0.1.9
Release Notes
carefree-learn 0.1.9
improved overall performance and accessibility.
ModelConfig
carefree-learn
now introduces ModelConfig
to manage configurations more easily.
Modify extractor_config, head_config, etc.
v0.1.8:
head_config = {...}
cflearn.make(
    model_config={
        "pipe_configs": {
            "fcnn": {"head": head_config},
        },
    },
)

v0.1.9:
head_config = {...}
cflearn.ModelConfig("fcnn").switch().head_config = head_config
Switch to a preset config
v0.1.8:
# Not accessible, must register a new model
# with the corresponding config:
cflearn.register_model(
    "pruned_fcnn",
    pipes=[
        cflearn.PipeInfo(
            "fcnn",
            head_config="pruned",
        )
    ],
)
cflearn.make("pruned_fcnn")

v0.1.9:
cflearn.ModelConfig("fcnn").switch().replace(head_config="pruned")
cflearn.make("fcnn")
Misc
- Enhanced LossBase (#66).
- Introduced callbacks to Trainer (#65).
- Enhanced Auto and supported specifying extra_config with a json file path (752f419).
carefree-learn 0.1.8
Release Notes
carefree-learn 0.1.8
mainly registered all PyTorch schedulers and enhanced mlflow
integration.
Backward Compatibility Breaking
carefree-learn
now keeps a copy of the original user-defined configs (#48), which changes the saved config file:
v0.1.7 (config.json):
{
    "data_config": {
        "label_name": "Survived"
    },
    "cuda": 0,
    "model": "tree_dnn",
    // the `binary_config` was injected into `config.json`
    "binary_config": {
        "binary_metric": "acc",
        "binary_threshold": 0.49170631170272827
    }
}

v0.1.8 (config_bundle.json):
{
    "config": {
        "data_config": {
            "label_name": "Survived"
        },
        "cuda": 0,
        "model": "tree_dnn"
    },
    "increment_config": {},
    "binary_config": {
        "binary_metric": "acc",
        "binary_threshold": 0.49170631170272827
    }
}
New Schedulers
carefree-learn now supports the following schedulers, based on their PyTorch counterparts:
- step: the StepLR, with lr_floor supported.
- exponential: the ExponentialLR, with lr_floor supported.
- cyclic: the CyclicLR.
- cosine: the CosineAnnealingLR.
- cosine_restarts: the CosineAnnealingWarmRestarts.
These schedulers can be utilized easily by specifying scheduler=... in any high-level API of carefree-learn, e.g.:
m = cflearn.make(scheduler="cyclic").fit(x, y)
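Since lr_floor is an extension on top of PyTorch's StepLR / ExponentialLR, it would presumably be passed through the scheduler's configuration. These notes do not show the exact keyword, so the scheduler_config name and its keys below are assumptions:

import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
m = cflearn.make(
    scheduler="step",
    # assumption: scheduler kwargs are forwarded via `scheduler_config`;
    # `step_size` / `gamma` mirror StepLR, while `lr_floor` sets a lower
    # bound on the learning rate as described above
    scheduler_config={"step_size": 100, "gamma": 0.9, "lr_floor": 1e-8},
).fit(x, y)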
Better mlflow Integration
To utilize mlflow better, carefree-learn now handles some best practices for you under the hood, e.g.:
- Makes the initialization of mlflow multi-thread safe in distributed training.
- Automatically handles the run_name in distributed training.
- Automatically handles the parameters for log_params.
- Updates the artifacts periodically.
The (brief) documentation for the mlflow integration can be found here.
carefree-learn 0.1.7.1
Release Notes
carefree-learn 0.1.7
integrated mlflow
and cleaned up Experiment
API, which completes the machine learning lifecycle.
v0.1.7.1: Hotfixed a critical bug which would load the worst saved checkpoint.
mlflow
mlflow can help us visualize, reproduce, and serve our models. In carefree-learn, we can quickly play with mlflow by specifying mlflow_config as an empty dict:
import cflearn
import numpy as np
x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
m = cflearn.make(mlflow_config={}).fit(x, y)
After that, we can execute mlflow ui in the current working directory to inspect the tracking results (e.g. loss curves, metric curves, etc.).
We're planning to add documentation for the mlflow integration; it should be available in v0.1.8.
Experiment
The Experiment API was embarrassingly user-unfriendly before, but it has been cleaned up and is ready to use since v0.1.7. Please refer to the documentation for more details.
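A sketch of the cleaned-up workflow, following the documentation of that era (treat the exact method names as assumptions):

import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
experiment = cflearn.Experiment()
# dump the data once so that every task can share it
data_folder = experiment.dump_data_bundle(x, y)
for model in ["linear", "fcnn", "tree_dnn"]:
    experiment.add_task(model=model, data_folder=data_folder)
results = experiment.run_tasks()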
Misc
- Integrated DeepSpeed for distributed training of a single model (experimental).
- Enhanced Protocol for downstream usages (e.g. Quantitative Trading, Computer Vision, etc.) (experimental).
- Fixed other bugs.
- Optimized TrainMonitor (#39).
- Optimized some default settings.