[Hackathon 6th Code Camp No.15] support earthformer #816

Status: Closed. Yang-Changhui wants to merge 53 commits into PaddlePaddle:develop from Yang-Changhui:add_earthformer.

Changes shown are from 8 of the 53 commits.

Commits
4673353 add-earthformer
b1d026b add-earthformer
cc17151 Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleScien…
5b8ec36 add-earthformer
fd7aadb add-earthformer
074a32d add-earthformer
Yang-Changhui 6f08e82 add-earthformer
Yang-Changhui 2dfbabb Merge branch 'develop' into add_earthformer
Yang-Changhui 457b8a7 add-earthformer
Yang-Changhui 54463dd add-earthformer
Yang-Changhui 0eebf42 Merge branch 'develop' into add_earthformer
Yang-Changhui 541ae87 Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleScien…
Yang-Changhui ede9618 Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui a58869b add-earthformer
Yang-Changhui 872c763 add-earthformer
Yang-Changhui ff41984 add-earthformer
Yang-Changhui a98e284 add-earthformer
Yang-Changhui 5111ab8 Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleScien…
Yang-Changhui 83f221e add-earthformer
Yang-Changhui 18eb8aa add-earthfromer
Yang-Changhui 9ca51c9 Merge branch 'develop' into add_earthformer
Yang-Changhui 2d4d682 add-earthformer
Yang-Changhui f946a1f Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui f4cb1ca add-earthformer
Yang-Changhui e4ae8f3 Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleScien…
Yang-Changhui 4979d89 'add-earthfromer'
Yang-Changhui b3ec330 add-earthfromer
Yang-Changhui d64e3e6 add-earthformer
Yang-Changhui 5d37aa1 Merge branch 'develop' into add_earthformer
Yang-Changhui 90d343d add-earthformer
Yang-Changhui 4109f12 Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui 6ad448c Merge branch 'develop' into add_earthformer
Yang-Changhui ca8d402 add-earthformer
Yang-Changhui fc10355 Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleScien…
Yang-Changhui c3a2ed6 Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui bc19623 add-earthformer
Yang-Changhui 0ae6c9c add-earthformer
Yang-Changhui 43f7a46 Merge branch 'develop' into add_earthformer
Yang-Changhui 149ac74 add-earthfromer
Yang-Changhui ec87043 Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui 25c7bc8 Merge branch 'develop' into add_earthformer
Yang-Changhui 79aed10 add-earthformer
Yang-Changhui bddc5e4 add-earthformer
Yang-Changhui b44c2a5 add-earthformer
Yang-Changhui b15ca3a Merge branch 'develop' into add_earthformer
Yang-Changhui 50d5137 add-earthformer
Yang-Changhui cdf1216 Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui 8d06a9b Merge branch 'develop' into add_earthformer
Yang-Changhui 884e8f5 add-earthformer
Yang-Changhui 0d3deb1 Merge branch 'add_earthformer' of https://github.com/Yang-Changhui/Pa…
Yang-Changhui 87e158f Merge branch 'develop' into add_earthformer
Yang-Changhui e240608 Merge branch 'develop' into add_earthformer
Yang-Changhui 3dccb22 Merge branch 'develop' into add_earthformer
Docs change: architecture API list.

@@ -26,5 +26,6 @@
 - HEDeepONets
 - ChipDeepONets
 - AutoEncoder
+- CuboidTransformerModel
 show_root_heading: true
 heading_level: 3
Docs change: dataset API list.

@@ -23,4 +23,6 @@
 - MeshCylinderDataset
 - RadarDataset
 - build_dataset
+- ENSODataset
+- SEVIRDataset
 show_root_heading: true
examples/earthformer/enso/conf/earthformer_enso_pretrain.yaml (new file, 155 additions, 0 deletions)
hydra:
  run:
    # dynamic output directory according to running time and override name
    dir: outputs_earthformer_pretrain
  job:
    name: ${mode} # name of logfile
    chdir: false # keep current working directory unchanged
    config:
      override_dirname:
        exclude_keys:
          - TRAIN.checkpoint_path
          - TRAIN.pretrained_model_path
          - EVAL.pretrained_model_path
          - mode
          - output_dir
          - log_freq
  callbacks:
    init_callback:
      _target_: ppsci.utils.callbacks.InitCallback
  sweep:
    # output directory for multirun
    dir: ${hydra.run.dir}
    subdir: ./

# general settings
mode: train # running mode: train/eval/export/infer
seed: 0
output_dir: ${hydra:run.dir}
log_freq: 20

# set train and evaluate data path
FILE_PATH: ./datasets/enso/enso_round1_train_20210201

# dataset setting
DATASET:
  label_keys: ["sst_target", "nino_target"]
  in_len: 12
  out_len: 14
  nino_window_t: 3
  in_stride: 1
  out_stride: 1
  train_samples_gap: 2
  eval_samples_gap: 1
  normalize_sst: true

# model settings
MODEL:
  self_pattern: "axial"
  cross_self_pattern: "axial"
  cross_pattern: "cross_1x1"
  afno:
Review comment: `afno` is the model-settings key from the FourCastNet config; here the settings should be used directly, and the training code should be changed accordingly to read them via `cfg.MODEL`, e.g.:

MODEL:
  input_keys: ["sst_data"]
  output_keys: ["sst_target", "nino_target"]
  input_shape: [12, 24, 48, 1]
  target_shape: [14, 24, 48, 1]
  base_units: 64
  scale_alpha: 1.0
  ...
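As a rough, hypothetical sketch of what this review comment implies (not part of the diff): with the nested layout the training script has to unpack `cfg.MODEL.afno`, whereas a flattened `MODEL` section can be passed through directly. The constructor name `ppsci.arch.CuboidTransformerModel` is taken from the docs change above; everything else here is illustrative only.

```python
import hydra
import ppsci
from omegaconf import DictConfig


@hydra.main(version_base=None, config_path="./conf", config_name="earthformer_enso_pretrain")
def main(cfg: DictConfig) -> None:
    # Current layout of this PR: constructor kwargs are nested under MODEL.afno.
    model = ppsci.arch.CuboidTransformerModel(**cfg.MODEL.afno)

    # Layout suggested in the review: drop the extra "afno" level and pass the
    # MODEL section through directly (pattern keys handled separately), e.g.
    #   model = ppsci.arch.CuboidTransformerModel(**cfg.MODEL)
    print(type(model))


if __name__ == "__main__":
    main()
```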
    input_keys: ["sst_data"]
    output_keys: ["sst_target", "nino_target"]
    input_shape: [12, 24, 48, 1]
    target_shape: [14, 24, 48, 1]
    base_units: 64
    # block_units: null
Review comment: Is the `block_units` parameter actually needed? If not, it can be removed.
    scale_alpha: 1.0

    enc_depth: [1, 1]
    dec_depth: [1, 1]
    enc_use_inter_ffn: true
    dec_use_inter_ffn: true
    dec_hierarchical_pos_embed: false

    downsample: 2
    downsample_type: "patch_merge"
    upsample_type: "upsample"

    num_global_vectors: 0
    use_dec_self_global: false
    dec_self_update_global: true
    use_dec_cross_global: false
    use_global_vector_ffn: false
    use_global_self_attn: false
    separate_global_qkv: false
    global_dim_ratio: 1

    dec_cross_last_n_frames: null

    attn_drop: 0.1
    proj_drop: 0.1
    ffn_drop: 0.1
    num_heads: 4

    ffn_activation: "gelu"
    gated_ffn: false
    norm_layer: "layer_norm"
    padding_type: "zeros"
    pos_embed_type: "t+h+w"
    use_relative_pos: true
    self_attn_use_final_proj: true
    dec_use_first_self_attn: false

    z_init_method: "zeros"
    initial_downsample_type: "conv"
    initial_downsample_activation: "leaky"
    initial_downsample_scale: [1, 1, 2]
    initial_downsample_conv_layers: 2
    final_upsample_conv_layers: 1
    checkpoint_level: 2

    attn_linear_init_mode: "0"
    ffn_linear_init_mode: "0"
    conv_init_mode: "0"
    down_up_linear_init_mode: "0"
    norm_init_mode: "0"

# training settings
TRAIN:
  epochs: 100
  save_freq: 20
  eval_during_train: true
  eval_freq: 10
  lr_scheduler:
    epochs: ${TRAIN.epochs}
    learning_rate: 0.0002
    by_epoch: True
    min_lr_ratio: 1.0e-3
  wd: 1.0e-5
  batch_size: 16
  pretrained_model_path: null
  checkpoint_path: null

# evaluation settings
EVAL:
  pretrained_model_path: ./checkpoint/enso/earthformer_enso.pdparams
  compute_metric_by_batch: False
  eval_with_no_grad: true
  batch_size: 1

INFER:
  pretrained_model_path: ./checkpoint/enso/earthformer_enso.pdparams
  export_path: ./inference/earthformer/enso
  pdmodel_path: ${INFER.export_path}.pdmodel
  pdpiparams_path: ${INFER.export_path}.pdiparams
  device: gpu
  engine: native
  precision: fp32
  onnx_path: ${INFER.export_path}.onnx
  ir_optim: true
  min_subgraph_size: 10
  gpu_mem: 4000
  gpu_id: 0
  max_batch_size: 16
  num_cpu_threads: 4
  batch_size: 1
  data_path: ./datasets/enso/infer/SODA_train.nc
  in_len: 12
  in_stride: 1
  out_len: 14
  out_stride: 1
  samples_gap: 1
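For readers who want to sanity-check a config like this outside a full training run, a small sketch using Hydra's compose API is shown below; the relative config path and the working directory are assumptions based on the file location above, not something this PR prescribes.

```python
from hydra import compose, initialize

# Load the pretrain config and apply command-line-style overrides.
# config_path is relative to this script, which is assumed to sit at
# examples/earthformer/enso/ next to the conf/ directory.
with initialize(version_base=None, config_path="./conf"):
    cfg = compose(
        config_name="earthformer_enso_pretrain",
        overrides=["mode=eval", "EVAL.batch_size=4"],
    )

print(cfg.DATASET.in_len, cfg.DATASET.out_len)  # 12 14
print(cfg.MODEL.afno.base_units)                # 64
```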
New Python file, 143 additions (ENSO metric and training helper functions):

from typing import Dict
from typing import Optional
from typing import Union

import numpy as np
import paddle
from paddle.nn import functional as F

from ppsci.data.dataset.enso_dataset import NINO_WINDOW_T
from ppsci.data.dataset.enso_dataset import scale_back_sst


def compute_enso_score(
    y_pred, y_true, acc_weight: Optional[Union[str, np.ndarray, paddle.Tensor]] = None
):
    """
    Parameters
    ----------
    y_pred: paddle.Tensor
    y_true: paddle.Tensor
    acc_weight: Optional[Union[str, np.ndarray, paddle.Tensor]]
        None: not used
        "default": use the default acc_weight specified at https://tianchi.aliyun.com/competition/entrance/531871/information
        np.ndarray: custom weights

    Returns
    -------
    acc
    rmse
    """
    pred = y_pred - y_pred.mean(axis=0, keepdim=True)  # (N, 24)
    true = y_true - y_true.mean(axis=0, keepdim=True)  # (N, 24)
    cor = (pred * true).sum(axis=0) / (
        paddle.sqrt(paddle.sum(pred**2, axis=0) * paddle.sum(true**2, axis=0))
        + 1e-6
    )

    if acc_weight is None:
        acc = cor.sum()
    else:
        nino_out_len = y_true.shape[-1]
        if acc_weight == "default":
            acc_weight = paddle.to_tensor(
                [1.5] * 4 + [2] * 7 + [3] * 7 + [4] * (nino_out_len - 18)
            )[:nino_out_len] * paddle.log(paddle.arange(nino_out_len) + 1)
        elif isinstance(acc_weight, np.ndarray):
            acc_weight = paddle.to_tensor(acc_weight[:nino_out_len])
        elif isinstance(acc_weight, paddle.Tensor):
            acc_weight = acc_weight[:nino_out_len]
        else:
            raise ValueError(f"Invalid acc_weight {acc_weight}!")
        acc_weight = acc_weight.to(y_pred)
        acc = (acc_weight * cor).sum()
    rmse = paddle.mean((y_pred - y_true) ** 2, axis=0).sqrt().sum()
    return acc, rmse


def sst_to_nino(sst: paddle.Tensor, normalize_sst: bool = True, detach: bool = True):
    """
    Parameters
    ----------
    sst: paddle.Tensor
        Shape = (N, T, H, W)
    normalize_sst: bool
        If True, scale the SST values back before computing the index.
    detach: bool
        If True, detach the tensor from the computation graph first.

    Returns
    -------
    nino_index: paddle.Tensor
        Shape = (N, T-NINO_WINDOW_T+1)
    """
    if detach:
        nino_index = sst.detach()
    else:
        nino_index = sst
    if normalize_sst:
        nino_index = scale_back_sst(nino_index)
    nino_index = nino_index[:, :, 10:13, 19:30].mean(axis=[2, 3])  # (N, 26)
    nino_index = nino_index.unfold(axis=1, size=NINO_WINDOW_T, step=1).mean(
        axis=2
    )  # (N, 24)

    return nino_index


def train_mse_func(
    output_dict: Dict[str, "paddle.Tensor"],
    label_dict: Dict[str, "paddle.Tensor"],
    *args,
) -> paddle.Tensor:
    return F.mse_loss(output_dict["sst_target"], label_dict["sst_target"])


def eval_rmse_func(
    output_dict: Dict[str, "paddle.Tensor"],
    label_dict: Dict[str, "paddle.Tensor"],
    nino_out_len=12,
    *args,
) -> Dict[str, paddle.Tensor]:
    pred = output_dict["sst_target"]
    sst_target = label_dict["sst_target"]
    nino_target = label_dict["nino_target"].astype("float32")
    # mae
    mae = F.l1_loss(pred, sst_target)
    # mse
    mse = F.mse_loss(pred, sst_target)
    # rmse
    nino_preds = sst_to_nino(sst=pred[..., 0])
    # wrap predictions/targets into single-element lists before concatenation
    nino_preds_list, nino_target_list = map(list, zip((nino_preds, nino_target)))
    nino_preds_list = paddle.concat(nino_preds_list, axis=0)
    nino_target_list = paddle.concat(nino_target_list, axis=0)

    valid_acc, valid_nino_rmse = compute_enso_score(
        y_pred=nino_preds_list, y_true=nino_target_list, acc_weight=None
    )
    valid_weighted_acc, _ = compute_enso_score(
        y_pred=nino_preds_list, y_true=nino_target_list, acc_weight="default"
    )
    valid_acc /= nino_out_len
    valid_nino_rmse /= nino_out_len
    valid_weighted_acc /= nino_out_len
    valid_loss = -valid_acc

    return {
        "valid_loss_epoch": valid_loss,
        "mse": mse,
        "mae": mae,
        "rmse": valid_nino_rmse,
        "corr_nino3.4_epoch": valid_acc,
        "corr_nino3.4_weighted_epoch": valid_weighted_acc,
    }


def get_parameter_names(model, forbidden_layer_types):
    result = []
    for name, child in model.named_children():
        result += [
            f"{name}.{n}"
            for n in get_parameter_names(child, forbidden_layer_types)
            if not isinstance(child, tuple(forbidden_layer_types))
        ]
    # Add model-specific parameters (defined with nn.Parameter) since they are not in any child.
    result += list(model._parameters.keys())
    return result
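As a rough illustration of how these helpers behave, the sketch below exercises them on random tensors. It assumes the module above is importable (its path is not shown in this diff, so `enso_helpers` is a placeholder name), that PaddleScience's `ppsci.data.dataset.enso_dataset` is installed, and that the Paddle version in use provides `Tensor.unfold`.

```python
import paddle
import paddle.nn as nn

# Placeholder import: the real module path is not visible in this diff.
from enso_helpers import compute_enso_score, get_parameter_names, sst_to_nino

# Random "SST" field: 4 samples, 26 time steps on a 24x48 grid
# (sst_to_nino averages over the box [:, :, 10:13, 19:30]).
sst_pred = paddle.randn([4, 26, 24, 48])
nino_true = paddle.randn([4, 24])

# (N, 26) box mean -> (N, 24) rolling NINO_WINDOW_T mean
nino_pred = sst_to_nino(sst_pred, normalize_sst=False)
acc, rmse = compute_enso_score(nino_pred, nino_true, acc_weight=None)
print(float(acc), float(rmse))

# get_parameter_names: collect parameter names while skipping LayerNorm layers,
# e.g. to exclude them from weight decay.
model = nn.Sequential(nn.Linear(8, 8), nn.LayerNorm(8))
decay_params = get_parameter_names(model, [nn.LayerNorm])
print(decay_params)  # ['0.weight', '0.bias']
```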
Review comment: It is suggested to rename CuboidTransformerModel to CuboidTransformer.