List of keywords#
For your convenience, the description of the keywords in the ERT configuration file is divided into the following groups:
- Commonly used keywords not related to parameterization, i.e. keywords giving the data, grid, and observation files, and defining how to run simulations and how to store results. These keywords are described in Commonly used keywords.
- Keywords related to parameterization of the ECLIPSE model. These keywords are described in Parameterization keywords.
- Keywords related to the simulation, described in Keywords controlling the simulations.
- Advanced keywords not related to parameterization. These keywords are described in Advanced keywords.
Table of keywords#
| Keyword name | Required | Default value | Purpose |
|---|---|---|---|
| ANALYSIS_SET_VAR | NO | | Set analysis module internal state variable |
| CASE_TABLE | NO | | Deprecated |
| DATA_FILE | NO | | Provide an ECLIPSE data file for the problem |
| DATA_KW | NO | | Replace strings in ECLIPSE .DATA files |
| DEFINE | NO | | Define keywords with config scope |
| ECLBASE | NO | | Define a name for the ECLIPSE simulations |
| STD_CUTOFF | NO | 1e-6 | Determines the threshold for ensemble variation in a measurement |
| ENKF_ALPHA | NO | 3.0 | Parameter controlling outlier behaviour in the EnKF algorithm |
| ENKF_TRUNCATION | NO | 0.98 | Cutoff used on singular value spectrum |
| ENSPATH | NO | storage | Folder used for storage of simulation results |
| FIELD | NO | | Adds grid parameters |
| FORWARD_MODEL | NO | | Add the running of a job to the simulation forward model |
| GEN_DATA | NO | | Specify a general type of data created/updated by the forward model |
| GEN_KW | NO | | Add a scalar parameter |
| GRID | NO | | Provide an ECLIPSE grid for the reservoir model |
| HISTORY_SOURCE | NO | REFCASE_HISTORY | Source used for historical values |
| HOOK_WORKFLOW | NO | | Install a workflow to be run automatically |
| IES_DEC_STEPLENGTH | NO | 2.5 | Gauss-Newton steplength decline |
| IES_MAX_STEPLENGTH | NO | 0.6 | Gauss-Newton maximum steplength |
| IES_MIN_STEPLENGTH | NO | 0.3 | Gauss-Newton minimum steplength |
| INCLUDE | NO | | Include contents from another ert config |
| INSTALL_JOB | NO | | Install a job for use in a forward model |
| INVERSION | NO | | Set inversion method for analysis module |
| ITER_CASE | NO | IES%d | Case name format - iterated ensemble smoother |
| ITER_COUNT | NO | 4 | Number of iterations - iterated ensemble smoother |
| ITER_RETRY_COUNT | NO | 4 | Number of retries for an iteration - iterated ensemble smoother |
| JOBNAME | NO | <CONFIG_FILE>-<IENS> | Name used for simulation files |
| JOB_SCRIPT | NO | | Python script managing the forward model |
| LOAD_WORKFLOW | NO | | Load a workflow into ERT |
| LOAD_WORKFLOW_JOB | NO | | Load a workflow job into ERT |
| LOCALIZATION | NO | False | Enable experimental adaptive localization correlation |
| LOCALIZATION_CORRELATION_THRESHOLD | NO | 0.30 | Specify adaptive localization correlation threshold |
| MAX_RUNTIME | NO | 0 | Set the maximum runtime in seconds for a realization (0 means no runtime limit) |
| MAX_SUBMIT | NO | 2 | How many times the queue system should retry a simulation |
| MIN_REALIZATIONS | NO | 0 | Set the minimum number of realizations that have to succeed for the run to continue (0 means identical to NUM_REALIZATIONS - all must pass) |
| NUM_CPU | NO | 1 | Set the number of CPUs. Interpretation varies depending on context |
| NUM_REALIZATIONS | YES | | Set the number of reservoir realizations to use |
| OBS_CONFIG | NO | | File specifying observations with uncertainties |
| QUEUE_OPTION | NO | | Set options for an ERT queue system |
| QUEUE_SYSTEM | NO | LOCAL_DRIVER | System used for running simulation jobs |
| REFCASE | NO | | Reference case used for observations and plotting (see HISTORY_SOURCE and SUMMARY) |
| RESULT_PATH | NO | results/step_%d | Define where ERT should store results |
| RUNPATH | NO | realization-<IENS>/iter-<ITER> | Directory to run simulations; simulations/realization-<IENS>/iter-<ITER> |
| RUNPATH_FILE | NO | .ert_runpath_list | Name of a file with the paths of all forward models that ERT has run, for use by user-defined scripts to find the realizations |
| RUN_TEMPLATE | NO | | Install arbitrary files in the runpath directory |
| SETENV | NO | | Modify the UNIX environment with SETENV calls |
| SIMULATION_JOB | NO | | Lightweight alternative to FORWARD_MODEL |
| STOP_LONG_RUNNING | NO | FALSE | Stop long-running realizations after the minimum number of realizations (MIN_REALIZATIONS) have run |
| SUMMARY | NO | | Add summary variables for internalization |
| SURFACE | NO | | Surface parameter read from RMS IRAP file |
| TIME_MAP | NO | | Ability to manually enter a list of dates to establish the report step <-> dates mapping |
| UPDATE_LOG_PATH | NO | update_log | Directory in which summaries of the update steps are stored |
| WORKFLOW_JOB_DIRECTORY | NO | | Directory containing workflow jobs |
Commonly used keywords#
Keywords controlling the simulations#
Parameterization keywords#
The keywords in this section are used to define a parameterization of the ECLIPSE model, i.e. to define which parameters to change in a sensitivity analysis and/or history matching project.
GEN_KW
The General Keyword, or GEN_KW,
is meant for specifying a limited number of parameters.
A configuration example is shown below:
GEN_KW ID priors.txt
where ID
is an arbitrary unique identifier,
and priors.txt
is a file containing a list of parameters and a prior distribution for each.
Given a priors.txt
file with the following distribution:
A UNIFORM 0 1
where A
is an arbitrary unique identifier for this parameter,
and UNIFORM 0 1
is the distribution.
The various prior distributions available for the GEN_KW
keyword are described here.
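The distribution file format shown above (one parameter name, a distribution name, and its arguments per line) can be parsed with a few lines of Python. This is an illustrative sketch only, not part of the ERT API:

```python
def parse_priors(path):
    """Parse a GEN_KW distribution file with lines like 'A UNIFORM 0 1'.

    Illustrative sketch only -- not part of the ERT API.
    Returns {name: (distribution, [numeric args])}.
    """
    priors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue  # skip blank lines
            name, dist, *args = fields
            priors[name] = (dist, [float(a) for a in args])
    return priors
```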
When the forward model is started, the parameter values are added to a file in the runpath called parameters.json:
{
  "ID" : {
    "A" : 0.88
  }
}
This can then be used in a forward model, an example from python below:
#!/usr/bin/env python
import json

if __name__ == "__main__":
    with open("parameters.json", encoding="utf-8") as f:
        parameters = json.load(f)
    # parameters is a dict: {"ID": {"A": <value>}}
Note: A file named parameters.txt
is also created, containing the same information,
but it is recommended to use parameters.json
.
GEN_KW
also has an optional templating functionality; an example
of the specification is as follows:
GEN_KW ID templates/template.txt include.txt priors.txt
where ID
is an arbitrary unique identifier,
templates/template.txt
is the name of a template file,
include.txt
is the name of the file created for each realization
based on the template file,
and priors.txt
is a file containing a list of parameters and a prior distribution for each.
As a more concrete example, let’s configure GEN_KW
to estimate pore volume multipliers,
or MULTPV
, by for example adding the following line to an ERT config-file:
GEN_KW PAR_MULTPV multpv_template.txt multpv.txt multpv_priors.txt
In the GRID or EDIT section of the ECLIPSE data file, we would insert the following include statement:
INCLUDE
'multpv.txt' /
The template file multpv_template.txt
would contain some parametrized ECLIPSE
statements:
BOX
1 10 1 30 13 13 /
MULTPV
300*<MULTPV_BOX1> /
ENDBOX
BOX
1 10 1 30 14 14 /
MULTPV
300*<MULTPV_BOX2> /
ENDBOX
Here, <MULTPV_BOX1>
and <MULTPV_BOX2>
will act as magic
strings. Note that the <
and >
must be present around the magic
strings. In this case, the parameter configuration file
multpv_priors.txt
could look like this:
MULTPV_BOX2 UNIFORM 0.98 1.03
MULTPV_BOX1 UNIFORM 0.85 1.00
In general, the first keyword on each line in the parameter configuration file
defines a key, which when found in the template file enclosed in <
and >
,
is replaced with a value. The rest of the line defines a prior distribution
for the key.
Note that ERT only stores values sampled from a standard normal distribution; a transformation is performed based on the configuration that is loaded from file. This means that if the distribution file is changed, the transformed values written to the run path will be different the next time ERT is started, even though the underlying values stored by ERT have not changed.
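As an illustration of such a transformation (a sketch; the exact form ERT uses is an assumption here, not stated in this document), a UNIFORM prior can be obtained from a standard-normal sample by passing it through the normal CDF:

```python
import math

def trans_uniform(x, lo, hi):
    """Map a standard-normal sample x to UNIFORM(lo, hi) via the normal CDF.

    Sketch of the kind of transformation described above (assumed form).
    """
    cdf = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return lo + (hi - lo) * cdf
```

Under this convention the value stored by ERT is x, while the value written to the run path is trans_uniform(x, lo, hi): changing lo/hi in the distribution file changes the written value without touching the stored x.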
Example: Using GEN_KW to estimate fault transmissibility multipliers
Previously ERT supported a datatype MULTFLT for estimating fault transmissibility multipliers. This has now been deprecated, as the functionality can be easily achieved with the help of GEN_KW. In the ERT config file:
GEN_KW MY-FAULTS MULTFLT.tmpl MULTFLT.INC MULTFLT.txt
Here MY-FAULTS
is the (arbitrary) key assigned to the fault multipliers,
MULTFLT.tmpl
is the template file, which can look like this:
MULTFLT
'FAULT1' <FAULT1> /
'FAULT2' <FAULT2> /
/
and finally the initial distribution of the parameters FAULT1 and FAULT2 are
defined in the file MULTFLT.txt
:
FAULT1 LOGUNIF 0.00001 0.1
FAULT2 UNIFORM 0.00 1.0
Loading GEN_KW values from an external file
The default use of the GEN_KW keyword is to let the ERT application sample
random values for the elements in the GEN_KW instance, but it is also possible
to tell ERT to load a pre-created set of data files. This can for instance be
used as a component in an experimental design based workflow. When using
external files to initialize the GEN_KW instances you supply an extra keyword
INIT_FILES:/path/to/priors/files%d
which tells where the prior files are:
GEN_KW MY-FAULTS MULTFLT.tmpl MULTFLT.INC MULTFLT.txt INIT_FILES:priors/multflt/faults%d
In the example above you must prepare files priors/multflt/faults0
,
priors/multflt/faults1
, … priors/multflt/faultsn
which ERT
will load when you initialize the case. The format of the GEN_KW input
files can be of two varieties:
The files can be plain ASCII text files with a list of numbers:
1.25
2.67
The numbers will be assigned to parameters in the order found in the
MULTFLT.txt
file.
Alternatively values and keywords can be interleaved as in:
FAULT1 1.25
FAULT2 2.56
In this case the ordering can differ between the init files and the parameter file.
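A workflow that prepares such prior files, one per realization, can be as simple as the following sketch. The directory layout and file naming mirror the INIT_FILES example above; the random sampling is purely illustrative:

```python
import os
import random

def write_prior_files(directory, names, n_realizations, seed=0):
    """Write one 'faults<iens>' file per realization, each containing
    interleaved '<name> <value>' lines (hypothetical helper)."""
    rng = random.Random(seed)
    os.makedirs(directory, exist_ok=True)
    for iens in range(n_realizations):
        with open(os.path.join(directory, f"faults{iens}"), "w") as f:
            for name in names:
                f.write(f"{name} {rng.uniform(0.0, 1.0):.6f}\n")
```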
The heritage of the ERT program is based on the EnKF algorithm, and the EnKF algorithm evolves around Gaussian variables - internally the GEN_KW variables are assumed to be samples from the N(0,1) distribution, and the distributions specified in the parameters file are based on transformations starting with a N(0,1) distributed variable. The slightly awkward consequence of this is that to let your sampled values pass through ERT unmodified you must configure the distribution NORMAL 0 1 in the parameter file; alternatively if you do not intend to update the GEN_KW variable you can use the distribution RAW.
Regarding templates: You may supply the arguments TEMPLATE:/template/file and KEY:MaGiCKEY. The template file is an arbitrary existing text file, and KEY is a magic string found in this file. When ERT is running the magic string is replaced with parameter data when the ECLIPSE_FILE is written to the directory where the simulation is run from. Consider for example the following configuration:
TEMPLATE:/some/file KEY:Magic123
The template file can look like this (only the Magic123 is special):
Header line1
Header line2
============
Magic123
============
Footer line1
Footer line2
When ERT is running the string Magic123 is replaced with parameter values, and the resulting file will look like this:
Header line1
Header line2
============
1.6723
5.9731
4.8881
.....
============
Footer line1
Footer line2
SURFACE
The SURFACE keyword can be used to work with surfaces from RMS in the IRAP format. The SURFACE keyword is configured like this:
SURFACE TOP OUTPUT_FILE:surf.irap INIT_FILES:Surfaces/surf%d.irap BASE_SURFACE:Surfaces/surf0.irap
The first argument, TOP in the example above, is the identifier you want to use for this surface in ERT. The OUTPUT_FILE key is the name of the surface file which ERT will generate for you, INIT_FILES points to a list of files which are used to initialize, and BASE_SURFACE must point to one existing surface file. When loading the surfaces, ERT will check that all the headers are compatible. An example of a surface IRAP file is:
-996 511 50.000000 50.000000
444229.9688 457179.9688 6809537.0000 6835037.0000
260 -30.0000 444229.9688 6809537.0000
0 0 0 0 0 0 0
2735.7461 2734.8909 2736.9705 2737.4048 2736.2539 2737.0122
2740.2644 2738.4014 2735.3770 2735.7327 2733.4944 2731.6448
2731.5454 2731.4810 2730.4644 2730.5591 2729.8997 2726.2217
2721.0996 2716.5913 2711.4338 2707.7791 2705.4504 2701.9187
....
The surface data will typically be fed into other programs like Cohiba or RMS. The data can be updated using e.g. the smoother.
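ERT's actual header-compatibility check is internal; as a rough illustration, comparing the header lines visible in the example above could look like this. The assumption that the header occupies exactly four lines is taken from the example, not from a format specification:

```python
def headers_compatible(path_a, path_b, n_header_lines=4):
    """Rough compatibility check: compare the first n header lines of two
    IRAP ASCII surface files (illustrative only)."""
    with open(path_a) as fa, open(path_b) as fb:
        head_a = [next(fa) for _ in range(n_header_lines)]
        head_b = [next(fb) for _ in range(n_header_lines)]
    return head_a == head_b
```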
Initializing from the FORWARD MODEL
Parameter types like FIELD and SURFACE (but not GEN_KW) can be initialized from the forward model. To achieve this you just add the setting FORWARD_INIT:True to the configuration. When using forward init, the initialization works like this:
- The explicit initialization from the case menu, or when you start a simulation, is ignored.
- When the FORWARD_MODEL is complete, ERT tries to initialize the node based on files created by the forward model. If the init fails, the job as a whole fails.
- If a node has been initialized, it will not be initialized again if you run again.
When using FORWARD_INIT:True ERT will consider the INIT_FILES setting to find which file to initialize from. If the INIT_FILES setting contains a relative filename, it will be interpreted relatively to the runpath directory. In the example below we assume that RMS has created a file petro.grdecl which contains both the PERMX and the PORO fields in grdecl format; we wish to initialize PERMX and PORO nodes from these files:
FIELD PORO PARAMETER poro.grdecl INIT_FILES:petro.grdecl FORWARD_INIT:True
FIELD PERMX PARAMETER permx.grdecl INIT_FILES:petro.grdecl FORWARD_INIT:True
Observe that the forward model creates the file petro.grdecl, while the PORO and PERMX nodes create the ECLIPSE input files poro.grdecl and permx.grdecl. To ensure that ECLIPSE finds these input files, the forward model should contain a job which copies/converts petro.grdecl -> (poro.grdecl, permx.grdecl); this job should not overwrite existing versions of permx.grdecl and poro.grdecl. These extra hoops are not strictly needed in all cases, but strongly recommended to ensure that you have control over which data is used, and that everything is consistent if the forward model is run again.
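The "do not overwrite" behaviour recommended for the copy/convert job can be sketched as a small Python forward-model job (a hypothetical helper, not a built-in ERT job):

```python
import os
import shutil

def copy_if_missing(src, dst):
    """Copy src -> dst only if dst does not already exist, so a re-run of
    the forward model keeps the data the realization was initialized from."""
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
```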
SUMMARY
The SUMMARY keyword is used to add variables from the ECLIPSE summary file to the parametrization. The keyword expects a string of the format VAR:WGRNAME, where VAR is a quantity, such as WOPR, WGOR, RPR or GWCT, and WGRNAME refers to a well, group or region. For a field property, such as FOPT, WGRNAME need not be set.
Example:
-- Using the SUMMARY keyword to add diagnostic variables
SUMMARY WOPR:MY_WELL
SUMMARY RPR:8
SUMMARY F* -- Use of wildcards requires that you have entered a REFCASE.
The SUMMARY keyword has limited support for ‘*’ wildcards: if your key contains one or more ‘*’ characters, all matching variables from the refcase are selected. Observe that if your summary key contains wildcards you must supply a refcase with the REFCASE key; otherwise only fully expanded keywords will be used.
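The wildcard expansion against the refcase can be illustrated with fnmatch-style matching (a sketch of the behaviour, not ERT's implementation):

```python
import fnmatch

def expand_summary_keys(pattern, refcase_keys):
    """Expand a wildcard SUMMARY key against the keys available in a refcase.

    Illustrative sketch: shell-style '*' matching over summary key names.
    """
    return sorted(k for k in refcase_keys if fnmatch.fnmatch(k, pattern))
```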
Note: Properties added using the SUMMARY keyword are only diagnostic. I.e. they have no effect on the sensitivity analysis or history match.
Analysis module#
The term analysis module refers to the underlying algorithm used for the analysis, or update step of data assimilation. The keywords to load, select and modify the analysis modules are documented here.
ANALYSIS_SET_VAR
The analysis modules can have internal state, like e.g. truncation cutoff values. These can be manipulated from the config file using the ANALYSIS_SET_VAR keyword for either the STD_ENKF or IES_ENKF module.
ANALYSIS_SET_VAR <STD_ENKF|IES_ENKF> ENKF_TRUNCATION 0.98
INVERSION
The analysis modules can specify the inversion algorithm used. This can be set from the config file using the ANALYSIS_SET_VAR keyword for either the STD_ENKF or IES_ENKF module.
STD_ENKF
| Description | INVERSION | IES_INVERSION (deprecated) | Note |
|---|---|---|---|
| Exact inversion with diagonal R=I | EXACT | 0 | |
| Subspace inversion with exact R | SUBSPACE_EXACT_R / SUBSPACE | 1 | Preferred name: SUBSPACE |
| Subspace inversion using R=EE’ | SUBSPACE_EE_R | 2 | Deprecated, maps to: SUBSPACE |
| Subspace inversion using E | SUBSPACE_RE | 3 | Deprecated, maps to: SUBSPACE |
IES_ENKF
| Description | INVERSION | IES_INVERSION (deprecated) | Note |
|---|---|---|---|
| Exact inversion with diagonal R=I | EXACT / DIRECT | 0 | Preferred name: DIRECT |
| Subspace inversion with exact R | SUBSPACE_EXACT_R / SUBSPACE_EXACT | 1 | Preferred name: SUBSPACE_EXACT |
| Subspace inversion using R=EE’ | SUBSPACE_EE_R / SUBSPACE_PROJECTED | 2 | Preferred name: SUBSPACE_PROJECTED |
| Subspace inversion using E | SUBSPACE_RE | 3 | Deprecated, maps to: SUBSPACE_PROJECTED |
Setting the inversion method
-- Example for the `STD_ENKF` module
ANALYSIS_SET_VAR STD_ENKF INVERSION EXACT
IES_MAX_STEPLENGTH
The analysis modules can specify the Gauss-Newton maximum steplength,
for the IES_ENKF module only.
The default is 0.60; valid values lie in the range [0.1, 1.00].
ANALYSIS_SET_VAR IES_ENKF IES_MAX_STEPLENGTH 0.6
IES_MIN_STEPLENGTH
The analysis modules can specify the Gauss-Newton minimum steplength,
for the IES_ENKF module only.
The default is 0.30; valid values lie in the range [0.1, 1.00].
ANALYSIS_SET_VAR IES_ENKF IES_MIN_STEPLENGTH 0.3
IES_DEC_STEPLENGTH
The analysis modules can specify the Gauss-Newton steplength decline,
for the IES_ENKF module only.
The default is 2.5; valid values lie in the range [1.1, 10.0].
ANALYSIS_SET_VAR IES_ENKF IES_DEC_STEPLENGTH 2.5
LOCALIZATION
The analysis module can enable experimental adaptive localization with a
correlation threshold. This can be enabled from the config file using the
ANALYSIS_SET_VAR keyword, and is valid for the STD_ENKF module only.
The default is False.
ANALYSIS_SET_VAR STD_ENKF LOCALIZATION True
LOCALIZATION_CORRELATION_THRESHOLD
The adaptive localization correlation threshold value can be specified
from the config file using the ANALYSIS_SET_VAR keyword, and is valid
for the STD_ENKF module only.
The default is 0.30.
ANALYSIS_SET_VAR STD_ENKF LOCALIZATION_CORRELATION_THRESHOLD 0.30
ENKF_TRUNCATION
Truncation factor for the SVD-based EnKF algorithm (see Evensen, 2007). In this algorithm, the forecasted data will be projected into a low dimensional subspace before assimilation. This can substantially improve on the results obtained with the EnKF, especially if the data ensemble matrix is highly collinear (Saetrom and Omre, 2010). The subspace dimension, p, is selected such that
\(\frac{\sum_{i=1}^{p} s_i^2}{\sum_{i=1}^r s_i^2} \geq \mathrm{ENKF\_TRUNCATION}\)
where si is the ith singular value of the centered data ensemble matrix and r is the rank of this matrix. This criterion is similar to the explained variance criterion used in Principal Component Analysis (see e.g. Mardia et al. 1979).
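The selection criterion above can be sketched in Python: given the singular values of the centered data ensemble matrix, pick the smallest p whose cumulative squared singular values reach the truncation threshold.

```python
def subspace_dimension(singular_values, truncation=0.98):
    """Smallest p such that the first p squared singular values account
    for at least `truncation` of the total squared singular values."""
    total = sum(s * s for s in singular_values)
    cumulative = 0.0
    for p, s in enumerate(singular_values, start=1):
        cumulative += s * s
        if cumulative / total >= truncation:
            return p
    return len(singular_values)
```

For a rapidly decaying spectrum most of the energy sits in the first few singular values, so p is small; for a flat spectrum p approaches the rank r.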
-- Example for the `IES_ENKF` module
ANALYSIS_SET_VAR IES_ENKF ENKF_TRUNCATION 0.98
The default value of ENKF_TRUNCATION is 0.98. If ensemble collapse is a big problem, a smaller value should be used (e.g. 0.90 or smaller). However, this does not guarantee that the problem of ensemble collapse will disappear. Note that setting the truncation factor to 1.00 will recover the standard EnKF algorithm if and only if the covariance matrix for the observation errors is proportional to the identity matrix.
References
Evensen, G. (2007). “Data Assimilation, the Ensemble Kalman Filter”, Springer.
Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979). “Multivariate Analysis”, Academic Press.
Saetrom, J. and Omre, H. (2010). “Ensemble Kalman filtering with shrinkage regression techniques”, Computational Geosciences (online first).
Keywords controlling the ES algorithm#
ENKF_ALPHA
This controls the scaling factor used when detecting outliers. Increasing this factor means that more observations will potentially be included in the assimilation. The default value is 3.00.
Including outliers in the Smoother algorithm can dramatically increase the coupling between the ensemble members. It is therefore important to filter out these outlier data prior to data assimilation. An observation, \(\textstyle d^o_i\), will be classified as an outlier if
\(|d^o_i - \bar{d}_i| > \mathrm{ENKF\_ALPHA} \left(s_{d_i} + \sigma_{d^o_i}\right)\)
where \(\textstyle\boldsymbol{d}^o\) is the vector of observed data, \(\textstyle\boldsymbol{\bar{d}}\) is the average of the forecasted data ensemble, \(\textstyle\boldsymbol{s_{d}}\) is the vector of estimated standard deviations for the forecasted data ensemble, and \(\textstyle\boldsymbol{\sigma_{d^o}}\) is the vector of standard deviations for the observation errors (specified a priori).
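The outlier test above is a simple componentwise check; in Python it can be sketched as:

```python
def is_outlier(obs, ens_mean, ens_std, obs_std, alpha=3.0):
    """Componentwise outlier test: deactivate observation i when
    |d_o - d_bar| > alpha * (s_d + sigma_o)."""
    return abs(obs - ens_mean) > alpha * (ens_std + obs_std)
```

Increasing alpha (ENKF_ALPHA) widens the acceptance band, so more observations are kept in the assimilation.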
Observe that for the updates, many settings are applied on the analysis module in question.
STD_CUTOFF
If the ensemble variation for one particular measurement is below this limit the observation will be deactivated. The default value for this cutoff is 1e-6.
Observe that for the updates, many settings are applied on the analysis module in question.
UPDATE_LOG_PATH
A summary of the data used for updates is stored in this directory.
ITER_CASE
Case name format - iterated ensemble smoother. By default, this value is
set to default_%d
.
ITER_COUNT
Number of iterations - iterated ensemble smoother. Default is 4.
ITER_RETRY_COUNT
Number of retries for an iteration - iterated ensemble smoother. Defaults to 4.
MAX_SUBMIT
How many times the queue system should retry a simulation. Default is 2.
Advanced keywords#
The keywords in this section control advanced features of ERT. Insight into the internals of ERT and/or ECLIPSE may be required to fully understand their effect. Moreover, many of these keywords are defined in the site configuration, and are thus optional for the user to set, but required when installing ERT at a new site.
TIME_MAP
Normally the mapping between report steps and true dates is inferred by ERT indirectly by loading the ECLIPSE summary files. In cases where you do not have any ECLIPSE summary files you can use the TIME_MAP keyword to specify a file with dates which are used to establish this mapping. This is only needed in cases where GEN_OBSERVATION is used with the DATE keyword, or cases with SUMMARY observations without REFCASE.
Example:
-- Load a list of dates from external file: "time_map.txt"
TIME_MAP time_map.txt
The format of the TIME_MAP file should just be a list of dates formatted as YYYY-MM-DD. The example file below has four dates:
2000-01-01
2000-07-01
2001-01-01
2001-07-01
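Reading such a file is straightforward. The sketch below also checks that the dates are non-decreasing, which is an assumption about what a sensible report-step mapping should look like, not a documented ERT requirement:

```python
from datetime import date

def read_time_map(path):
    """Read a TIME_MAP file: one YYYY-MM-DD date per non-empty line.

    Illustrative sketch; the non-decreasing check is an assumption.
    """
    dates = []
    with open(path) as f:
        for line in f:
            if line.strip():
                dates.append(date.fromisoformat(line.strip()))
    if dates != sorted(dates):
        raise ValueError("TIME_MAP dates must be non-decreasing")
    return dates
```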
Workflow hooks#
HOOK_WORKFLOW
With the keyword HOOK_WORKFLOW
you can configure workflow
‘hooks’: workflows which will be run automatically at certain
points during ERT’s execution. Currently there are five points in ERT’s
flow of execution where you can hook in a workflow:
- PRE_SIMULATION: before the simulations (all forward models for a realization) start,
- POST_SIMULATION: after all the simulations have completed,
- PRE_UPDATE: before the update step,
- POST_UPDATE: after the update step, and
- PRE_FIRST_UPDATE: only before the first update.
For non-iterative algorithms, PRE_FIRST_UPDATE
is equal to PRE_UPDATE
.
The POST_SIMULATION
hook is typically used to trigger QC workflows.
HOOK_WORKFLOW initWFLOW PRE_SIMULATION
HOOK_WORKFLOW preUpdateWFLOW PRE_UPDATE
HOOK_WORKFLOW postUpdateWFLOW POST_UPDATE
HOOK_WORKFLOW QC_WFLOW1 POST_SIMULATION
HOOK_WORKFLOW QC_WFLOW2 POST_SIMULATION
In this example the workflow initWFLOW
will run after all the
simulation directories have been created, just before the forward
model is submitted to the queue. The workflow preUpdateWFLOW
will be run before the update step and postUpdateWFLOW
will be
run after the update step. When all the simulations have completed the
two workflows QC_WFLOW1
and QC_WFLOW2
will be run.
Observe that the workflows being ‘hooked in’ with the
HOOK_WORKFLOW
must be loaded with the LOAD_WORKFLOW
keyword.
LOAD_WORKFLOW
Workflows are loaded with the configuration option LOAD_WORKFLOW:
LOAD_WORKFLOW /path/to/workflow/WFLOW1
LOAD_WORKFLOW /path/to/workflow/workflow2 WFLOW2
The LOAD_WORKFLOW
takes the path to a workflow file as the first
argument. By default the workflow will be labeled with the filename
internally in ERT, but you can optionally supply a second extra argument
which will be used as the name for the workflow. Alternatively,
you can load a workflow interactively.
LOAD_WORKFLOW_JOB
Before the jobs can be used in workflows they must be “loaded” into ERT. This can be done either by specifying jobs by name, or by specifying a directory containing jobs.
Use the keyword LOAD_WORKFLOW_JOB
to specify jobs by name:
LOAD_WORKFLOW_JOB jobConfigFile JobName
The LOAD_WORKFLOW_JOB
keyword will load one workflow job.
The name of the job is optional, and will be fetched from the configuration file if not provided.
WORKFLOW_JOB_DIRECTORY
Alternatively, you can use the command
WORKFLOW_JOB_DIRECTORY
which will load all the jobs in a
directory.
Use the keyword WORKFLOW_JOB_DIRECTORY
to specify a directory containing jobs:
WORKFLOW_JOB_DIRECTORY /path/to/jobs
The WORKFLOW_JOB_DIRECTORY
loads all workflow jobs found in the /path/to/jobs directory.
Observe that all the files in the /path/to/jobs directory
should be job configuration files. The jobs loaded in this way will
all get the name of the file as the name of the job. The
WORKFLOW_JOB_DIRECTORY
keyword will not load configuration
files recursively.
Manipulating the Unix environment#
SETENV
You can use the SETENV keyword to alter the unix environment where ERT runs forward models.
Example:
-- Setting environment variables
SETENV MY_VAR World
SETENV MY_OTHER_VAR Hello$MY_VAR
This will result in two environment variables being set on the compute side and available to all jobs: MY_VAR will be “World”, and MY_OTHER_VAR will be “HelloWorld”. The variables are expanded in order on the compute side, so the environment where ERT is running has no impact and is not changed.
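The in-order expansion described above can be modelled with a small sketch (illustrative only; ERT's own handling of $ expansion may differ in details such as escaping or partial-name matches):

```python
def expand_env(assignments):
    """Apply SETENV-style assignments in order, expanding $NAME references
    against variables set by earlier assignments only (illustrative model)."""
    env = {}
    for name, value in assignments:
        for key, val in env.items():
            value = value.replace("$" + key, val)
        env[name] = value
    return env
```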