consolidation fluxnet hybrid #30
base: main
Conversation
…ndbadOptmization in the same Project.toml
| # "experiment.hybrid.fold.fold_path" => "/", | ||
| # "experiment.hybrid.fold.which_fold" => 10, | ||
| # "experiment.model_output.path" => path_output, |
"hybrid.ml_training.options.batch_size" => 16,
"hybrid.ml_training.which_fold" => 10000,
"hybrid.ml_training.fold_path" => "blablabla",
"hybrid.ml_model.options.n_layers" => 300,
"hybrid.ml_model.options.n_neurons" => 5000,
This works as we need it to.
Good, they work.
We also need the syntax for the experiment output: currently it is relative to the project's directory, but on the cluster we want to send the output to an absolute path. On Raven I was doing this:
```julia
remote_raven = "/ptmp/lalonso/HybridOutputALL/HyALL_ALL_fold_$(_nfold)_nlayers_$(nlayers)_n_neurons_$(n_neurons)_batch_size_$(batch_size)/"
```
and then passing that to `path_experiment=checkpoint_path`.
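A minimal sketch of that pattern, assuming the same `replace_info` override mechanism applies (the output-path key is the one quoted at the top of this thread; the per-job values are illustrative):

```julia
# Sketch only: per-job values (illustrative) interpolated into an absolute cluster path
_nfold, nlayers, n_neurons, batch_size = 10, 3, 32, 16
remote_raven = "/ptmp/lalonso/HybridOutputALL/HyALL_ALL_fold_$(_nfold)_nlayers_$(nlayers)_n_neurons_$(n_neurons)_batch_size_$(batch_size)/"
# pass it as the experiment output path override (key from this thread)
replace_info = Dict("experiment.model_output.path" => remote_raven)
```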
It's better to change the whole output directory to avoid overwriting in parallelized runs, e.g.:

```julia
n_fold = 5
n_layers = 3
n_neurons = 32
batch_size = 32
replace_info = Dict(
    "experiment.basics.name" => "hybrid_fold_$(n_fold)_nlayers_$(n_layers)_n_neurons_$(n_neurons)_batch_size_$(batch_size)",
)
```
Then the checkpoints will be saved in:

```
/Users/skoirala/research/RnD/SINDBAD/examples/exp_WROASTED/../exp_fluxnet_hybrid/output_FLUXNET_hybrid_fold_5_nlayers_3_n_neurons_32_batch_size_32/hybrid/training_checkpoints
```
And the base path for the output can be set with a full path as:

```julia
"experiment.model_output.path" => "/Users/youruser/yourdirectory/tmp_abspath_test",
```

or in the experiment.json. Note that you can also add the unique info of folds and layers to the base path instead of using the information in experiment.basics.name.
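A minimal sketch of that variant, i.e. putting the unique fold/layer info directly into an absolute base path (the path and the `run_tag` variable are illustrative):

```julia
# Sketch only: encode the unique run info in the base output path itself,
# leaving experiment.basics.name untouched
run_tag = "fold_$(n_fold)_nlayers_$(n_layers)_n_neurons_$(n_neurons)_batch_size_$(batch_size)"
replace_info = Dict(
    "experiment.model_output.path" => "/ptmp/youruser/HybridOutput/$(run_tag)",
)
```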
Yes, I would rather use an absolute path. Thanks, I will update accordingly.
```julia
space_forcing = run_helpers.space_forcing;
space_observations = run_helpers.space_observation;
space_output = run_helpers.space_output;
space_spinup_forcing = run_helpers.space_spinup_forcing;
```
@dr-ko spinup per site should be taken care of here, I think. Not sure if this is part of prepareHybrid; we need to double-check this.
It looks just like the forcing and not the sequence.
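A quick hypothetical sanity check, using only the names from the snippet above (not part of the PR):

```julia
# Sketch only: there should be one spinup forcing per site if spinup is
# handled per site here
@assert length(space_spinup_forcing) == length(space_forcing)
```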
```julia
land_init = run_helpers.loc_land;
loc_forcing_t = run_helpers.loc_forcing_t;

space_cost_options = [prepCostOptions(loc_obs, info.optimization.cost_options) for loc_obs in space_observations];
```
@dr-ko spinup/costOptions per site should be taken care of here, I think.
Also, the cost per site.
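A hypothetical sketch of how the per-site pieces might be paired up; `costOfSite` is a stand-in name, not the PR's API:

```julia
# Sketch only: one cost evaluation per site, pairing each site's forcing,
# observations, and prepared cost options
costOfSite(forcing, obs, cost_opts) = 0.0  # hypothetical placeholder
space_cost = [costOfSite(loc_forcing, loc_obs, loc_cost)
              for (loc_forcing, loc_obs, loc_cost) in
                  zip(space_forcing, space_observations, space_cost_options)]
```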
There was a small error for Polyester; now it is working. And it is really fast!

```
> julia -t 24
julia> include("exp_fluxnet_hybrid_replace_r.jl")
```
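The speed is consistent with Polyester's lightweight threading; a minimal sketch of the `@batch` pattern (the loop body is a placeholder, not the PR's code):

```julia
using Polyester  # provides @batch, a low-overhead threaded for loop

runOneSite(x) = x^2  # hypothetical stand-in for the per-site model run

function run_sites!(results, sites)
    # @batch splits the iterations across the threads Julia was started with (-t 24)
    @batch for i in eachindex(sites)
        results[i] = runOneSite(sites[i])
    end
    return results
end

run_sites!(zeros(100), collect(1.0:100.0))
```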
Things to consider:
I'm using `fAPAR` with the `cVegLeafBareFrac` approach, where we consider the bare soil. See:
SINDBAD/examples/exp_fluxnet_hybrid/settings_fluxnet_hybrid/model_structure.json, Line 100 in b701b3f,
with `fAPAR_bare` used later in the cost function. See:
SINDBAD/examples/exp_fluxnet_hybrid/settings_fluxnet_hybrid/model_structure.json, Line 85 in b701b3f.