Base class for recording information from a pywr.model.Model.
Recorder components are used to calculate, aggregate and save data from a simulation. This base class provides the basic functionality for all recorders. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| agg_func | Scenario aggregation function to use when aggregated_value is called | Yes |
| name | Name of the recorder | |
| comment | Comment or description of the recorder | |
| ignore_nan | Flag to ignore NaN values when calling aggregated_value | |
| is_objective | Flag to denote the direction, if any, of optimisation undertaken with this recorder | |
| epsilon | Epsilon distance used by some optimisation algorithms | |
| constraint_lower_bounds, constraint_upper_bounds | The value(s) to use for lower and upper bound definitions. These values determine whether the recorder instance is marked as a constraint. Either bound can be None (the default) to disable the respective bound. If both bounds are None then the is_constraint property will return False. The lower bound must be strictly less than the upper bound. An equality constraint can be created by setting both bounds to the same value. The constraint bounds are not used during model simulation. Instead they are intended for use by optimisation wrappers (or other external tools) to define constrained optimisation problems. | |
coming soon...
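Until a worked example is added, the sketch below shows how these shared attributes might appear on a concrete recorder entry in the Pywr JSON format used by the WaterStrategy editor. The recorder type (totalflownoderecorder) and the node name demand1 are illustrative assumptions, not part of this base class.

```json
{
    "type": "totalflownoderecorder",
    "node": "demand1",
    "comment": "Total flow delivered to demand1; treated as an objective to maximise",
    "agg_func": "mean",
    "is_objective": "maximise"
}
```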
Base class for recorders that track Parameter values. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| param | The parameter to record | Yes |
| agg_func | Scenario aggregation function to use when aggregated_value is called | Yes |
| name | Name of the recorder | Yes |
| comment | Comment or description of the recorder | Yes |
| ignore_nan | Flag to ignore NaN values when calling aggregated_value | Yes |
| is_objective | Flag to denote the direction, if any, of optimisation undertaken with this recorder | Yes |
| epsilon | Epsilon distance used by some optimisation algorithms | Yes |
| constraint_lower_bounds, constraint_upper_bounds | The value(s) to use for lower and upper bound definitions. These values determine whether the recorder instance is marked as a constraint. Either bound can be None (the default) to disable the respective bound. If both bounds are None then the is_constraint property will return False. The lower bound must be strictly less than the upper bound. An equality constraint can be created by setting both bounds to the same value. The constraint bounds are not used during model simulation. Instead they are intended for use by optimisation wrappers (or other external tools) to define constrained optimisation problems. | Yes |
coming soon...
Recorder for timeseries information from a Storage node.
This class stores volume from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method.
coming soon...
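In the meantime, a minimal sketch of this recorder in the Pywr JSON format (the node name reservoir1 is a hypothetical Storage node):

```json
{
    "type": "numpyarraystoragerecorder",
    "node": "reservoir1",
    "comment": "Save the simulated volume of reservoir1 at every timestep"
}
```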
An overview of Pywr recorders supported by WaterStrategy
This page describes the process of creating a new Recorder in WaterStrategy, along with a selection of the most commonly used Pywr recorder types and their most commonly used attributes. The full list of built-in recorders, with an exhaustive list of their attributes, is available in the Pywr API documentation.
1. In a Network page, click the 'Recorders' Tab:
2. Next to the 'Recorders-Type Categories' Text, click the '+' button and select 'PYWR_RECORDER':
3. Enter the name of your recorder. This can be anything you like, but must be unique within the network.
4. Populate the recorder in the JSON editor:
Currently, there are two different ways to input a Recorder in WaterStrategy. ....
This class stores flow from a specific node for each time-step of a simulation. The results of a recorder are output on the Network Attributes panel, and will be named 'simulated_<recordername>'
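As an illustration of step 4, a flow-timeseries recorder of this kind might be entered in the JSON editor as sketched below. The fragment follows the standard Pywr JSON layout, where each recorder sits under the top-level "recorders" section keyed by its name; the recorder name abstraction_flow and node name river_abstraction are hypothetical.

```json
{
    "recorders": {
        "abstraction_flow": {
            "type": "numpyarraynoderecorder",
            "node": "river_abstraction",
            "comment": "Save simulated flow at river_abstraction for every timestep and scenario"
        }
    }
}
```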
Recorder for level timeseries from a Storage node.
This class stores level from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method.
coming soon...
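In the meantime, a minimal sketch in the Pywr JSON format (the node name reservoir1 is a hypothetical Storage node with a level defined):

```json
{
    "type": "numpyarraylevelrecorder",
    "node": "reservoir1",
    "comment": "Save the simulated level of reservoir1 at every timestep"
}
```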
Recorder for timeseries information from a Node.
| Attribute | Description | Required | Default |
|---|---|---|---|
| type | numpyarraynoderecorder | Yes | None |
| node | Name of the node to record | Yes | None |
| temporal_agg_func | | Optional | mean |
| agg_func | | Optional | mean |
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | Node instance to record | Yes |
| proportional | Whether to record proportional [0, 1.0] or absolute storage volumes (default=False) | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Yes |
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | Node instance to record | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Yes |
Utility class for computing aggregate values.
Users are unlikely to use this class directly. Instead Recorder sub-classes will use this functionality to aggregate their results across different dimensions (e.g. time, scenarios, etc.). API Reference
| Attribute | Description | Required |
|---|---|---|
| func | The aggregation function to use. Can be a string or dict defining aggregation functions, or a callable custom function that performs aggregation. When a string it can be one of: “sum”, “min”, “max”, “mean”, “median”, “product”, or “count_nonzero”. These strings map to and cause the aggregator to use the corresponding numpy functions. A dict can be provided containing a “func” key, and optional “args” and “kwargs” keys. The value of “func” should be a string corresponding to the aforementioned numpy function names with the additional options of “percentile” and “percentileofscore”. These latter two functions require additional arguments (the percentile and score) to function and must be provided as the values in either the “args” or “kwargs” keys of the dictionary. Please refer to the corresponding numpy (or scipy) function definitions for documentation on these arguments. Finally, a callable function can be given. This function must accept either a 1D or 2D numpy array as the first argument, and support the “axis” keyword as an integer value that determines the axis over which the function should apply aggregation. The axis keyword is only supplied when a 2D array is given. Therefore, the callable function should behave in a similar fashion to the numpy functions. | Yes |
| func_args | func_args: list | Yes |
| func_kwargs | func_kwargs: dict | Yes |
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| agg_func | Scenario aggregation function to use when aggregated_value is called | Yes |
| name | Name of the recorder | |
| comment | Comment or description of the recorder | |
| ignore_nan | Flag to ignore NaN values when calling aggregated_value | |
| is_objective | Flag to denote the direction, if any, of optimisation undertaken with this recorder | |
| epsilon | Epsilon distance used by some optimisation algorithms | |
| constraint_lower_bounds, constraint_upper_bounds | The value(s) to use for lower and upper bound definitions. These values determine whether the recorder instance is marked as a constraint. Either bound can be None (the default) to disable the respective bound. If both bounds are None then the is_constraint property will return False. The lower bound must be strictly less than the upper bound. An equality constraint can be created by setting both bounds to the same value. The constraint bounds are not used during model simulation. Instead they are intended for use by optimisation wrappers (or other external tools) to define constrained optimisation problems. | |
coming soon...
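As a sketch of the dict form described above, an aggregation specification can be passed wherever an aggregation function is accepted, for example as the temporal_agg_func of a timeseries recorder; the recorder type and node name below are illustrative assumptions.

```json
{
    "type": "numpyarraynoderecorder",
    "node": "river_gauge",
    "comment": "Per-scenario value is the 95th percentile of simulated flow",
    "temporal_agg_func": {"func": "percentile", "args": [95]},
    "agg_func": "mean"
}
```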
Recorder for timeseries information from a Parameter.
This class stores the value from a specific Parameter for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| param | Parameter instance to record | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Yes |
coming soon...
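In the meantime, a minimal sketch in the Pywr JSON format; the parameter is referenced by name, and reservoir_control_curve is a hypothetical parameter assumed to be defined elsewhere in the model.

```json
{
    "type": "numpyarrayparameterrecorder",
    "parameter": "reservoir_control_curve",
    "comment": "Save the value of reservoir_control_curve at every timestep"
}
```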
Recorder for area timeseries from a Storage node.
This class stores area from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | Node instance to record | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Yes |
coming soon...
Recorder for timeseries information from a Node.
This class stores flow from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method.
coming soon...
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | Node instance to record | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument. | Yes |
| factor | A factor can be provided to scale the total flow (e.g. for calculating operational costs). | Yes |
This recorder calculates a storage duration curve for each scenario. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | The node to record | Yes |
| percentiles | The percentiles to use in the calculation of the flow duration curve. Values must be in the range 0-100 | Yes |
| agg_func | Function used for aggregating the FDC across percentiles. Numpy style functions that support an axis argument are supported | Yes |
| sdc_agg_func | Optional different function for aggregating across scenarios | Yes |
coming soon...
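In the meantime, a minimal sketch in the Pywr JSON format (the node name reservoir1 and the percentile values are hypothetical choices):

```json
{
    "type": "storagedurationcurverecorder",
    "node": "reservoir1",
    "percentiles": [5, 25, 50, 75, 95],
    "comment": "Storage duration curve for reservoir1 in each scenario"
}
```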
Recorder for timeseries of deficit from a Node. This class stores deficit from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | | |
| node | Node instance to record | Optional |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument. | Optional |
coming soon...
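In the meantime, a minimal sketch in the Pywr JSON format (demand1 is a hypothetical demand node):

```json
{
    "type": "numpyarraynodedeficitrecorder",
    "node": "demand1",
    "comment": "Timeseries of deficit (max_flow minus delivered flow) at demand1"
}
```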
This recorder calculates a flow duration curve for each scenario for a given season specified in months. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | The node to record | Yes |
| percentiles | The percentiles to use in the calculation of the flow duration curve. Values must be in the range 0-100 | Yes |
| agg_func | Function used for aggregating the FDC deviations across percentiles. Numpy style functions that support an axis argument are supported | Yes |
| fdc_agg_func | Optional different function for aggregating across scenarios | Yes |
| months | The numeric values of the months the flow duration curve should be calculated for | Yes |
coming soon...
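In the meantime, a minimal sketch in the Pywr JSON format; the node name, percentiles and the June-August season are hypothetical choices.

```json
{
    "type": "seasonalflowdurationcurverecorder",
    "node": "river_gauge",
    "percentiles": [5, 25, 50, 75, 95],
    "months": [6, 7, 8],
    "comment": "Flow duration curve at river_gauge for the June-August season"
}
```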
This recorder calculates a Flow Duration Curve (FDC) for each scenario and then calculates their deviation from upper and lower target FDCs. The 2nd dimension of the target duration curves and percentiles list must be of the same length and have the same order (high to low values or low to high values).
Deviation is calculated as positive if actual FDC is above the upper target or below the lower target. If actual FDC falls between the upper and lower targets zero deviation is returned. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | The node to record | Yes |
| percentiles | The percentiles to use in the calculation of the flow duration curve. Values must be in the range 0-100 | Yes |
| lower_target_fdc | The lower FDC against which the scenario FDCs are compared | Yes |
| upper_target_fdc | The upper FDC against which the scenario FDCs are compared | Yes |
| agg_func | Function used for aggregating the FDC deviations across percentiles. Numpy style functions that support an axis argument are supported | Yes |
| fdc_agg_func | Optional different function for aggregating across scenarios | Yes |
coming soon...
Recorder for timeseries of ratio of supplied flow from a Node. This class stores supply ratio from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | | |
| node | Node instance to record | Optional |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Optional |
coming soon...
Recorder for timeseries of curtailment ratio from a Node. This class stores curtailment ratio from a specific node for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | | |
| node | Node instance to record | Optional |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Optional |
coming soon...
This Recorder is used to aggregate across multiple other Recorder objects.
The class provides a method to produce a complex aggregated recorder by taking the results of other recorders. The .values() method first collects unaggregated values from the provided recorders. These are then aggregated on a per-scenario basis and returned by this class's .values() method. This allows an AggregatedRecorder to be used as a recorder in other AggregatedRecorder instances.
By default the same agg_func function is used for both steps, but an optional recorder_agg_func can undertake a different aggregation across the recorders. For example, summing recorders per scenario, and then taking a mean of the sum totals. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| recorders | The other Recorder instances to perform aggregation over | Optional |
| agg_func | Scenario aggregation function to use when aggregated_value is called (default=”mean”) | Optional |
| recorder_agg_func | Recorder aggregation function to use when aggregated_value is called (default=`agg_func`) | Optional |
coming soon...
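In the meantime, a minimal sketch in the Pywr JSON format; the two recorder names are hypothetical and assumed to be defined elsewhere in the recorders section.

```json
{
    "type": "aggregatedrecorder",
    "recorders": ["deficit_demand1", "deficit_demand2"],
    "recorder_agg_func": "sum",
    "agg_func": "mean",
    "comment": "Sum the two deficit recorders per scenario, then average across scenarios"
}
```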
Record the mean flow for a Node.
A factor can be provided to scale the total flow (e.g. for calculating operational costs). API Reference
| Attribute | Description | Required |
|---|---|---|
| model | | |
| name | The name of the recorder | Optional |
| nodes | List of pywr.core.Node instances to record | Optional |
| factors | List of factors to apply to each node | Optional |
coming soon...
Recorder to total the difference between modelled flow and max_flow for a Node. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | | |
| node | Node instance to record | Optional |
| comment | Comment or description of the recorder | Optional |
coming soon...
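In the meantime, a minimal sketch in the Pywr JSON format (demand2 is a hypothetical demand node):

```json
{
    "type": "totaldeficitnoderecorder",
    "node": "demand2",
    "comment": "Total difference between max_flow and delivered flow at demand2"
}
```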
Record the mean value of a Parameter during a simulation.
This recorder can be used to track the mean of the values returned by a Parameter during a model's simulation. An optional factor can be provided to apply a linear scaling of the values. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| name | The name of the recorder | Optional |
| param | The parameter to record | Required |
| factor | Scaling factor for the values of param | Optional |
coming soon...
Recorder to return the frequency of timesteps with a failure to meet max_flow. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | | |
| node | Node instance to record | Optional |
| comment | Comment or description of the recorder | Optional |
coming soon...
Records the mean flow of a Node for the previous N timesteps. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| node | The node to record | Required |
| name | The name of the recorder | Optional |
| timesteps | The number of timesteps to calculate the mean flow for | Optional |
coming soon...
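In the meantime, a minimal sketch in the Pywr JSON format; the node name and the 30-timestep window are hypothetical choices.

```json
{
    "type": "rollingmeanflownoderecorder",
    "node": "river_gauge",
    "timesteps": 30,
    "comment": "Mean flow at river_gauge over the previous 30 timesteps"
}
```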
Record the minimum volume in a Storage node during a simulation. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| node | The node to record | Required |
| name | The name of the recorder | Optional |
coming soon...
Record the number of times an index parameter exceeds a threshold for each scenario.
This recorder will count the number of timesteps so will be a daily count when running on a daily timestep. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| parameter | The parameter to record | Required |
| threshold | The threshold to compare the parameter to | Optional |
coming soon...
For each scenario, count the number of times a list of parameters exceeds a threshold in each year. If multiple parameters exceed the threshold in one timestep then it is only counted once.
Output from data property has shape: (years, scenario combinations). API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| parameters | List of pywr.core.IndexParameter to record against | Required |
| name | The name of the recorder | Optional |
| threshold | Threshold to compare parameters against | Optional |
| exclude_months | Optional list of month numbers to exclude from the count | Optional |
coming soon...
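In the meantime, a minimal sketch in the Pywr JSON format; the index parameters are referenced by name, and both names and the threshold value are hypothetical.

```json
{
    "type": "annualcountindexthresholdrecorder",
    "parameters": ["drought_index_zone1", "drought_index_zone2"],
    "threshold": 2,
    "comment": "Days per year on which either drought index reaches level 2 or above"
}
```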
Recorder for timeseries information from an IndexParameter.
This class stores the value from a specific IndexParameter for each time-step of a simulation. The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| param | Parameter instance to record | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. This can be used to return, for example, the median flow over a simulation. For aggregation over scenarios see the agg_func keyword argument | Yes |
coming soon...
Recorder for an annual profile from a Parameter.
This recorder stores a daily profile returned by a specific parameter. For each day of the year it stores the value encountered for that day during a simulation. This results in the final profile being the last value encountered on each day of the year during a simulation. This recorder is useful for returning the daily profile that may result from the combination of one or more parameters. For example, during optimisation of new profiles non-daily parameters (e.g. RbfProfileParameter) and/or aggregations of several parameters might be used. With this recorder the daily profile used in the simulation can be easily saved.
The data is saved internally using a memory view. The data can be accessed through the data attribute or to_dataframe() method. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| param | Parameter instance to record | Yes |
| temporal_agg_func | Aggregation function used over time when computing a value per scenario. For aggregation over scenarios see the agg_func keyword argument | Yes |
coming soon...
For each scenario, record the total flow in each year across a list of nodes. Output from data property has shape: (years, scenario combinations).
A list of factors can be provided to scale the total flow (e.g. for calculating operational costs).
coming soon...
| Attribute | Description | Required |
|---|---|---|
| model | | |
| name | The name of the recorder | Optional |
| nodes | List of pywr.core.Node instances to record | Optional |
| factors | List of factors to apply to each node | Optional |
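A minimal sketch of this recorder in the Pywr JSON format; the node names and cost factors are hypothetical.

```json
{
    "type": "annualtotalflowrecorder",
    "nodes": ["works_north", "works_south"],
    "factors": [1.2, 0.8],
    "comment": "Annual total flow across both works, scaled by unit-cost factors"
}
```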
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| parameter | The parameter to record | Required |
| name | The name of the recorder | Optional |
| threshold | Threshold to compare the parameter against | Optional |
This recorder calculates a flow duration curve for each scenario. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Yes |
| node | The node to record | Yes |
| percentiles | The percentiles to use in the calculation of the flow duration curve. Values must be in the range 0-100 | Yes |
| agg_func | Function used for aggregating the FDC across percentiles. Numpy style functions that support an axis argument are supported | Yes |
| fdc_agg_func | Optional different function for aggregating across scenarios | Yes |
coming soon...
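In the meantime, a minimal sketch in the Pywr JSON format (the node name and percentile values are hypothetical choices):

```json
{
    "type": "flowdurationcurverecorder",
    "node": "river_gauge",
    "percentiles": [1, 5, 25, 50, 75, 95, 99],
    "fdc_agg_func": "mean",
    "comment": "Flow duration curve at river_gauge for each scenario"
}
```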
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| parameter | The parameter to record | Required |
| name | The name of the recorder | Optional |
| temporal_agg_func | | Optional |
| window | | Optional |
A recorder that saves to PyTables CArray
This Recorder creates a CArray for every node passed to the constructor. Each CArray stores the data for all scenarios on the specific node. This is useful for analysis of Node statistics across multiple scenarios. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | The model to record nodes from | Optional |
| h5file | The tables file handle or filename to attach the CArray objects to. If a filename is given the object will open and close the file handles | Required |
| nodes | Nodes to save in the tables database. Can be an iterable of Node objects or node names. It can also be an iterable of tuples with a node specific where keyword as the first item and a Node object or name as the second item. If an iterable of tuples is provided then the node specific where keyword is used in preference to the where keyword (see below) | Required |
| parameters | Parameters to save. Similar to the nodes keyword, except refers to Parameter objects or names thereof | Required |
| where | Default path to create the CArrays inside the database | Required |
| time | Default full node path to save a time tables.Table. If None no table is created | Optional |
| scenarios | Default full node path to save a scenarios tables.Table. If None no table is created | Optional |
| routes_flows | Relative (to where) node path to save the routes flow CArray. If None (default) no array is created | Optional |
| routes | Full node path to save the routes tables.Table. If None no table is created | Optional |
| filter_kwds | Filter keywords to pass to tables.open_file when opening a file | Required |
| mode | Mode argument to pass to tables.open_file. Defaults to ‘w’ | Optional |
| metadata | Dict of user defined attributes to save on the root node (root._v_attrs) | Required |
| create_directories | If a file path is given and create_directories is True then attempt to make the intermediate directories. This uses os.makedirs() underneath | Optional |
coming soon...
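In the meantime, a rough sketch in the Pywr JSON format using the attribute names listed in the table above; the file path, node names and parameter name are hypothetical, and the exact attribute accepted for the output file may differ between Pywr versions, so check the API Reference.

```json
{
    "type": "tablesrecorder",
    "h5file": "outputs/results.h5",
    "nodes": ["reservoir1", "demand1"],
    "parameters": ["reservoir_control_curve"],
    "create_directories": true
}
```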
Record whether a Storage node falls below a particular volume threshold during a simulation.
This recorder will return a value of 1.0 for scenarios where the volume of the Storage node is less than or equal to the threshold at any time-step during the simulation. Otherwise it will return zero. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| node | The node to record | Required |
| name | The name of the recorder | Optional |
| threshold | The threshold to compare the volume to | Optional |
coming soon...
Calculates the total energy production using the hydropower equation from a model run.
This recorder saves the total energy production in each scenario during a model run. It does not save a timeseries of power, but rather the total energy. API Reference
| Attribute | Description | Required |
|---|---|---|
| water_elevation_parameter | Elevation of water entering the turbine. The difference of this value with the turbine_elevation gives the working head of the turbine | Required |
| turbine_elevation | Elevation of the turbine itself. The difference between the water_elevation and this value gives the working head of the turbine | Required |
| efficiency | The efficiency of the turbine | Optional |
| density | The density of water | Optional |
| flow_unit_conversion | A factor used to transform the units of flow to be compatible with the equation here. This should convert flow to units of | Optional |
| energy_unit_conversion | A factor used to transform the units of total energy. Defaults to 1e-6 to return | Optional |
coming soon...
Calculates the power production using the hydropower equation.
This recorder saves an array of the hydropower production in each timestep. It can be converted to a dataframe after a model run has completed. It does not calculate total energy production. API Reference
| Attribute | Description | Required |
|---|---|---|
| water_elevation_parameter | Elevation of water entering the turbine. The difference of this value with the turbine_elevation gives the working head of the turbine | Required |
| turbine_elevation | Elevation of the turbine itself. The difference between the water_elevation and this value gives the working head of the turbine | Required |
| efficiency | The efficiency of the turbine | Optional |
| density | The density of water | Optional |
| flow_unit_conversion | A factor used to transform the units of flow to be compatible with the equation here. This should convert flow to units of | Optional |
| energy_unit_conversion | A factor used to transform the units of total energy. Defaults to 1e-6 to return | Optional |
coming soon...
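In the meantime, a rough sketch in the Pywr JSON format. In Pywr this recorder is attached to a node whose flow passes through the turbine, so a node attribute is included here even though it is not listed in the table above; all names and numeric values are hypothetical.

```json
{
    "type": "hydropowerrecorder",
    "node": "turbine1",
    "water_elevation_parameter": "reservoir_water_level",
    "turbine_elevation": 35.0,
    "efficiency": 0.85,
    "comment": "Power produced at turbine1 from its flow and working head"
}
```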
A Recorder that saves Node values to a CSV file.
This class uses the csv package from the Python standard library. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | The model to record nodes from | Optional |
| csvfile | The path to the CSV file | Required |
| scenario_index | The scenario index of the model to save | Required |
| nodes | An iterable of nodes to save data. It defaults to None which is all nodes in the model | Required |
| kwargs | Additional keyword arguments to pass to the csv.writer object | Optional |
coming soon...
Record the total value of a Parameter during a simulation.
This recorder can be used to track the sum total of the values returned by a Parameter during a model's simulation. An optional factor can be provided to apply a linear scaling of the values. If the parameter represents a flux the integrate keyword argument can be used to multiply the values by the time-step length in days. API Reference
| Attribute | Description | Required |
|---|---|---|
| model | pywr.core.Model | Optional |
| param | The parameter to record | Required |
| name | The name of the recorder | Optional |
| factor | Scaling factor for the values of param | Optional |
| integrate | Whether to multiply by the time-step length in days during summation | Optional |
coming soon...
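A minimal sketch in the Pywr JSON format; the parameter is referenced by name, and daily_release_cost is a hypothetical parameter representing a flux, hence integrate is set to true.

```json
{
    "type": "totalparameterrecorder",
    "parameter": "daily_release_cost",
    "integrate": true,
    "comment": "Sum the parameter values over the run, multiplied by timestep length in days"
}
```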