15.6. Changelog¶
Versions
- pSeven Core 6.34
- pSeven Core 6.33
- pSeven Core 6.32
- pSeven Core 6.31.1
- pSeven Core 6.31
- pSeven Core 6.30
- pSeven Core 6.29
- pSeven Core 6.28
- pSeven Core 6.27
- pSeven Core 6.26
- pSeven Core 6.25
- pSeven Core 6.24
- pSeven Core 6.23
- pSeven Core 6.22
- pSeven Core 6.21
- pSeven Core 6.20
- pSeven Core 6.19
- pSeven Core 6.18
- pSeven Core 6.17
- pSeven Core 6.16.4
- pSeven Core 6.16.3
- pSeven Core 6.16.2
- pSeven Core 6.16.1
- pSeven Core 6.16
- pSeven Core 6.15.1
- pSeven Core 6.15
- pSeven Core 6.14.4
- pSeven Core 6.14.3
- pSeven Core 6.14.2
- pSeven Core 6.14.1
- pSeven Core 6.14
- pSeven Core 6.13 Service Pack 1
- pSeven Core 6.13
- pSeven Core 6.12 Service Pack 2
- pSeven Core 6.12 Service Pack 1
- pSeven Core 6.12
- pSeven Core 6.11 Service Pack 1
- pSeven Core 6.11
- pSeven Core 6.10
- pSeven Core 6.9 Service Pack 1
- pSeven Core 6.9
- pSeven Core 6.8
- pSeven Core 6.7
- MACROS 6.6
- MACROS 6.5 Service Pack 1
- MACROS 6.5
- MACROS 6.4
- MACROS 6.3
- MACROS 6.2
- MACROS 6.1
- MACROS 6.0
- MACROS 6.0 Release Candidate 1
- MACROS 5.3
- MACROS 5.2
- MACROS 5.1
- MACROS 5.0
- MACROS 5.0 Release Candidate 3
- MACROS 5.0 Release Candidate 2
- MACROS 5.0 Release Candidate 1
- MACROS 4.3
- MACROS 4.2 Service Pack 1
- MACROS 4.2
- MACROS 4.1
- MACROS 4.0
- MACROS 4.0 Release Candidate 1
- MACROS 4.0 Beta 1
- MACROS 3.4
- MACROS 3.3
- MACROS 3.2
- MACROS 3.1
- MACROS 3.0
- MACROS 3.0 Release Candidate 2
- MACROS 3.0 Release Candidate 1
- MACROS 3.0 Beta 2
- MACROS 3.0 Beta 1
- MACROS 2.4
- MACROS 2.3
- MACROS 2.2
- MACROS 2.1
- MACROS 2.1 Release Candidate 2
- MACROS 2.1 Release Candidate 1
- MACROS 2.0
- MACROS 2.0 Release Candidate 2
- MACROS 2.0 Release Candidate 1
- MACROS 1.11.1
- MACROS 1.11.0
- MACROS 1.10.5
- MACROS 1.10.4
- MACROS 1.10.3
- MACROS 1.10.2
- MACROS 1.10.1
- MACROS 1.10.0
- MACROS 1.9.6
- MACROS 1.9.5
- MACROS 1.9.4
- MACROS 1.9.3
- MACROS 1.9.2
- MACROS 1.9.1
- MACROS 1.9.0
- MACROS 1.8.5
- MACROS 1.8.4
- MACROS 1.8.3
- MACROS 1.8.2
- MACROS 1.8.1
- MACROS 1.8.0
- MACROS 1.7.7
- MACROS 1.7.6
- MACROS 1.7.5
- MACROS 1.7.4
- MACROS 1.7.3
- MACROS 1.7.2
- MACROS 1.7.1
- MACROS 1.7.0
- MACROS 1.6.3
- MACROS 1.6.2
- MACROS 1.6.1
- MACROS 1.6.0
- MACROS 1.5.3
- MACROS 1.5.2
- MACROS 1.5.1
15.6.1. pSeven Core 6.34¶
15.6.1.1. Updates and Changes¶
- 252 GTOpt, GTDoE: minor performance improvements in some kinds of optimization and adaptive design tasks.
15.6.1.2. Bugfixes¶
- 246 GTOpt, GTDoE: fixed an issue with re-using the same problem instance in different solve() or build_doe() runs, where designs() in an optimization or adaptive design Result always returned all designs obtained in all runs that had used the same problem instance, instead of returning only designs from the last run as intended.
15.6.2. pSeven Core 6.33¶
15.6.2.1. New Features¶
- 237 Problem validation support: GTOpt and GTDoE provide methods to validate your problem definition before running; see gtopt.Solver.validate() and gtdoe.Generator.validate().
15.6.2.2. Updates and Changes¶
- 249 GTOpt: added support for categorical variables. A problem with categorical variables is solved as a number of subproblems, one for each possible combination of categories (levels of categorical variables).
- 221 GTApprox: improved quality of MoA models with categorical variables.
- 212 GTApprox: improved smart training (build_smart()) performance when training RSM models.
- 249 GTDoE: added support for categorical variables in the Adaptive Design technique. An adaptive design that includes categorical variables is generated by running an independent subtask for each possible combination of categories.
- 247 GTDoE: blackboxes with constraints are now supported by all DoE techniques.
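The per-combination decomposition described above can be sketched in plain Python. This is an illustration of the idea only, not pSeven Core code; the variable names and category levels are made-up examples:

```python
from itertools import product

# Hypothetical categorical variables with their levels (example values).
categories = {
    "material": ["steel", "aluminum"],
    "layout": ["A", "B", "C"],
}

def enumerate_subproblems(categories):
    """Yield one fixed combination of category levels per subproblem,
    mirroring how a problem with categorical variables is decomposed."""
    names = sorted(categories)
    for combo in product(*(categories[n] for n in names)):
        yield dict(zip(names, combo))

subproblems = list(enumerate_subproblems(categories))
print(len(subproblems))  # 2 levels x 3 levels -> 6 independent subproblems
```

Each yielded dictionary fixes every categorical variable, leaving only the continuous part of the problem to be solved in that subtask.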
15.6.2.3. Documentation¶
- 237 Added the gtopt.Solver.validate() and gtdoe.Generator.validate() method descriptions.
- 237 Added gtopt.ValidationResult and gtdoe.ValidationResult descriptions.
- 247 249 Updated the build_doe() method description.
- 247 249 Updated section Types of Variables in the GTDoE guide.
15.6.2.4. Bugfixes¶
- 242 GTOpt: fixed an issue in robust optimization problems with computationally cheap responses, where GTOpt could produce a solution that clearly violates a chance constraint.
- 221 236 GTApprox: fixed an issue where a MoA model with categorical variables could evaluate some inputs incorrectly and output NaN.
- 212 GTApprox: fixed an issue with RSM technique settings in smart training (build_smart()) where the GTApprox/RSMFeatureSelection option was internally set to "RidgeLS" by default.
- 85 GTDoE: fixed issues with watcher support.
15.6.3. pSeven Core 6.32¶
15.6.3.1. Updates and Changes¶
- 86 redmine-19155 GTOpt, GTDoE: in optimization and Adaptive Design, you can now limit the number of evaluations individually for any response (objective or constraint), regardless of its computational cost, using the new @GT/EvaluationLimit response hint. In Adaptive Design, you can also use the new hint to require a certain number of linear response evaluations to train a linear response model. See the GTOpt and GTDoE hint references for full details.
This update deprecates the @GTOpt/ExpensiveEvaluations response hint, which works only with computationally expensive responses. GTOpt will keep supporting @GTOpt/ExpensiveEvaluations for compatibility with previous pSeven Core versions, but you are advised to use @GT/EvaluationLimit instead of that deprecated hint.
- 231 GTApprox: optimized operations with models that have a high number of inputs or outputs. You may notice an increase in performance when loading such models from disk or accessing their details.
15.6.3.2. Documentation¶
- 86 redmine-19155 Updated Hint Reference in the GTOpt guide: added the @GT/EvaluationLimit description and related details.
- 86 redmine-19155 Updated Hint Reference in the GTDoE guide: added the @GT/EvaluationLimit hint description, updated the @GTOpt/LinearityType hint description.
- 86 redmine-19155 Updated the GTOpt/RestoreAnalyticResponses and GTDoE/AdaptiveDesign/RestoreAnalyticResponses option descriptions regarding the @GT/EvaluationLimit hint behavior for linear responses.
- Fixed outdated descriptions of the @GTApprox/QualityMetrics and @GTApprox/AcceptableQualityLevel smart training hints: the R2 error metric was missing from the descriptions.
15.6.3.3. Bugfixes¶
- 234 GTOpt, GTDoE: fixed an issue where you could get an empty result if variable or response names contain Unicode characters, for example, when using a GTApprox model with Unicode names of inputs and outputs as a blackbox in build_doe().
- 219 231 GTApprox: fixed an issue where training a model with a high number of outputs could fail at the finalization stage with a MemoryError exception.
- 243 GTApprox: fixed an issue where you could not compile a model exported to C (program source, MEX source, Excel-compatible DLL source) or one of the supported FMI formats due to a syntax error in the exported C code (undefined symbol).
15.6.4. pSeven Core 6.31.1¶
15.6.4.1. Updates and Changes¶
- 217 GTOpt, GTDoE: improved accuracy of linear response models trained internally when you enable the GTOpt/RestoreAnalyticResponses or GTDoE/AdaptiveDesign/RestoreAnalyticResponses option.
- 227 GTOpt, GTDoE: improved stability and accuracy when training internal models of quadratic constraints, in particular, equality type constraints, which previously could lead to an infeasible problem.
15.6.4.2. Bugfixes¶
- 230 227 GTOpt, GTDoE: fixed an issue with determining the evaluation limit (the maximum number of iterations allowed) when it is not set by the user: in problems where all variables are non-continuous, meaning there is a finite number of possible combinations of variable values, the evaluation limit determined by optimization or Adaptive Design techniques could get higher than the number of possible combinations, leading to duplicate evaluations and other unwanted effects.
15.6.5. pSeven Core 6.31¶
15.6.5.1. New Features¶
- 176 FMI 2.0 support: you can export a GTApprox model to a Functional Mock-up Unit (FMU) for Model Exchange and Co-Simulation, a new type of FMU introduced by the FMI 2.0 standard. See section Model Export and export_fmi_20() for details. GTApprox also keeps the support for exporting an FMI 1.0 compliant FMU for Model Exchange (export_fmi_me()) and FMU for Co-Simulation (export_fmi_cs()).
- 170 GTApprox provides a new function to split a data sample into train and test subsets optimized for model training and validation; see train_test_split().
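To illustrate the general train/test splitting idea, here is a minimal standard-library sketch. It is not the GTApprox implementation: a plain random split is shown, whereas train_test_split() additionally optimizes the subsets for model training and validation.

```python
import random

def simple_train_test_split(sample, test_ratio=0.2, seed=0):
    """Plain random split for illustration only; the GTApprox function
    also optimizes which points go into each subset."""
    rng = random.Random(seed)
    indices = list(range(len(sample)))
    rng.shuffle(indices)
    n_test = max(1, int(len(sample) * test_ratio))
    test_idx = set(indices[:n_test])
    train = [sample[i] for i in range(len(sample)) if i not in test_idx]
    test = [sample[i] for i in range(len(sample)) if i in test_idx]
    return train, test

# A toy (x, y) sample of 10 points, 30% held out for testing.
data = [(x, x * x) for x in range(10)]
train, test = simple_train_test_split(data, test_ratio=0.3)
print(len(train), len(test))  # 7 3
```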
15.6.5.2. Updates and Changes¶
- 98 GTApprox: improved quality of models with many categorical variables, or a few categorical variables that have large sets of levels.
15.6.5.3. Documentation¶
- 98 Updated section Regression Model Information in the GTApprox guide.
- 170 Added the train_test_split() function description.
15.6.5.4. Compatibility Issues¶
This release changes handling of categorical variables in GTApprox, which results in improved quality of models with categorical variables but creates a minor compatibility issue in RSM models with categorical variables. For such models, details may now omit the "Regression Model" key, which previously was present in most cases. See sections Version Compatibility Issues and Regression Model Information for details.
15.6.5.5. Bugfixes¶
- 219 GTApprox: fixed a “maximum recursion depth exceeded” error in smart training (build_smart()) when training a model with a high number of outputs.
- 223 GTDoE: fixed an issue in Adaptive Design of Experiments where it ignored the batch size settings (GTDoE/ResponsesScalability or GTDoE/Adaptive/OneStepCount) in tasks with non-continuous variables.
- 215 GTDoE: fixed an issue with the Adaptive Design technique where the number of designs generated could be less than expected, if some responses evaluate to NaN.
15.6.6. pSeven Core 6.30¶
15.6.6.1. Updates and Changes¶
- 135 216 136 GTApprox: several updates in smart training aimed at improving the balance between training performance and final model quality, and providing better support for incremental training (training with an initial model). In particular, the MoA and GBRT techniques are now enabled in smart training by default.
- 136 GTApprox: smart training models are no longer required to support gradients. This change was necessary to enable the GBRT technique in smart training by default. To train a model with gradient support, add the gradient requirement using the @GTApprox/ModelFeatures hint (see section Model Features for details).
- 134 GTApprox: improved performance of the RSM technique when training quadratic models.
15.6.6.2. Documentation¶
- 136 Updated description of the model features smart training hint (section Model Features and @GTApprox/ModelFeatures).
15.6.6.3. Bugfixes¶
- 203 GTOpt, GTDoE: fixed an issue where optimization or adaptive DoE results could contain NaN values of linear responses.
- 216 GTOpt, GTDoE, GTApprox: fixed an internal issue which sometimes caused performance degradation of optimization, adaptive DoE, or smart training in the Windows version of pSeven Core. That issue did not affect the quality of optimization or adaptive DoE results, or quality of GTApprox models.
- 134 201 GTApprox: fixed a few issues with training parallelization, which could cause performance degradation with large training samples.
- 195 GTApprox: fixed an issue with train and test subsets selection in smart training, which could negatively affect the final model quality.
- 209 GTApprox: fixed a deadlock issue in smart training, which could appear if the GBRT technique is enabled.
- 135 GTApprox: fixed an issue where details of a MoA model obtained from smart training did not contain information about MoA option values used in training.
- 135 GTApprox: fixed an issue where it was impossible to interrupt training when the MoA technique is used.
- 134 GTApprox: fixed an issue where smoothing was enabled for RSM models even though that feature is practically useless for RSM.
15.6.7. pSeven Core 6.29¶
15.6.7.1. New Features¶
- 169 The Adaptive Design technique in GTDoE now supports batch mode, which enables efficient usage of blackboxes that support concurrent response evaluations. To set up concurrency, use the new GTDoE/ResponsesScalability option.
- 186 When training a GTApprox model with linear dependency between outputs, you can specify an error threshold for the internal model of that dependency using the new GTApprox/PartialDependentOutputs/RRMSThreshold option. For details, see section Output Dependency Modes and the option description.
- 189 150 Introduced a new type of variables — stepped variables, supported in GTOpt and GTDoE. This type is intended for a rather common case of a variable that represents a continuous quantity, but can be changed only in certain increments due to various practical reasons. To specify variable type, set the @GT/VariableType hint when adding a variable. For more information about stepped variables, see section Types of Variables in the GTDoE guide.
- 189 In optimization, you can specify resolution for variables, which enables GTOpt to adjust precision of optimization algorithms accordingly and often improves performance. For details, see the @GT/Resolution hint description. A similar hint is also supported in GTDoE.
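The stepped-variable concept introduced above can be illustrated with a small standard-library sketch (this is illustrative only, not pSeven Core code; the thickness example and values are made up):

```python
def snap_to_step(value, lower, step):
    """Round value to the nearest point of the grid lower, lower+step, ...,
    illustrating a continuous quantity that changes only in fixed increments."""
    k = round((value - lower) / step)
    return lower + k * step

# For example, a sheet thickness available only in 0.5 mm increments.
print(snap_to_step(3.2, lower=0.0, step=0.5))  # 3.0
print(snap_to_step(3.3, lower=0.0, step=0.5))  # 3.5
```

A stepped variable behaves like a continuous one for modeling purposes, but the solver only proposes values that lie on such a grid.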
15.6.7.2. Updates and Changes¶
- 90 GTOpt, GTDoE: you can set up a strict requirement to train internal models of linear responses, which requires initial sampling of such responses prior to starting optimization but makes the behavior regarding linear responses more predictable: if modeling succeeds, the algorithm continues to use the obtained internal response model; if linear response modeling fails, the algorithm stops immediately. See the GTOpt/RestoreAnalyticResponses and GTDoE/AdaptiveDesign/RestoreAnalyticResponses option descriptions for more details.
- 177 GTDoE: unified behavior of DoE techniques in cases when it is not possible to generate the requested number of points. Space-filling techniques no longer raise an exception in such cases but issue a warning and return a result, which contains all points the technique could generate.
- 188 GTApprox, GTDF, GTDR: the following methods are deprecated and will be removed in future versions:
15.6.7.3. Documentation¶
- 186 Updated section Output Dependency Modes in the GTApprox guide.
- 186 Added the GTApprox/PartialDependentOutputs/RRMSThreshold option description.
- 169 Added the GTDoE/ResponsesScalability option description.
- 169 Updated the GTDoE/Adaptive/OneStepCount option description.
- 90 Added the GTDoE/AdaptiveDesign/RestoreAnalyticResponses option description.
- 90 Updated the GTOpt/RestoreAnalyticResponses option description.
- 150 Updated the following parts of documentation with regard to stepped variables:
- Section Hint Reference in the GTOpt guide.
- Section Types of Variables in the GTDoE guide.
- @GT/VariableType GTDoE hint description.
- gtopt.ProblemGeneric.add_variable() method description.
- gtdoe.ProblemGeneric.add_variable() method description.
- build_doe() method description.
- 189 Added the @GT/Resolution GTOpt hint description.
- 189 Added the @GT/Resolution GTDoE hint description.
- 188 Fixed an outdated list of available techniques in the GTApprox/MoATechnique option description.
- 188 Noted the following methods as deprecated:
15.6.7.4. Bugfixes¶
- 191 GTOpt: fixed an issue where getting the string representation of an infeasible problem result (for example, printing the result) raised an exception.
- 199 GTOpt, GTDoE: fixed an issue where optimization and adaptive DoE techniques in tasks with discrete variables that have non-uniform distribution of levels (for example: [0.0, 0.5, 1.0, 2.0]) incorrectly handled linear constraints that do not depend on those discrete variables. Due to internal variable and response type conversions, all linear constraints were treated as generic functions, which often led to their violation when it could be avoided. Now linear constraints that do not depend on discrete variables of the said kind are always processed as linear functions. Note that if a constraint is defined as linear and it depends on a discrete variable, behavior does not change from the previous version: such constraints are internally treated as generic.
- 177 GTApprox: fixed a compatibility issue in export of TA models with BSPL factors to the Octave format.
- 197 GTApprox: fixed an issue where details of a model with categorical variables could contain incorrect information about input constraints.
- 187 GTDoE: fixed an issue where, if the Adaptive Design technique was used in a task with no responses, result designs() returned an empty array.
- 161 GTDoE: fixed an issue where using a GTApprox model with a neglected variable as a blackbox in build_doe() caused an exception.
- 196 GTDoE: fixed an issue where using a GTApprox model with categorical outputs as a blackbox in build_doe() raised an exception in space-filling DoE techniques that actually support such models.
- 192 GTDoE: fixed incorrect behavior of the GTDoE/Sequential/Leap and GTDoE/Sequential/Skip options in the Random Sampling technique.
15.6.8. pSeven Core 6.28¶
15.6.8.1. Updates and Changes¶
- 158 GTApprox: improved time limit implementation (@GTApprox/TimeLimit) in smart training (build_smart()) to take advantage of the time-quality trade-off provided by the GTApprox/Accelerator option. Smart training now automatically adjusts GTApprox/Accelerator individually for every approximation technique it tries. With lower time limits, internal models created during smart training are less accurate but train faster, which allows GTApprox to try more techniques in the given time. This potentially yields better results than training internal models with maximum quality but limiting the set of used techniques, as in previous versions.
- 141 GTDoE: improved automatic selection of levels for continuous variables in the Full Factorial technique. Levels are now selected with the aim to generate a full factorial design whose size is as close as possible to the requested point count. The technique is now able to select a different number of levels for each continuous variable. For example: in previous versions, the Full Factorial technique always selected the same number of levels for each continuous variable, so for a design with 5 continuous variables and a requested point count of 242 (3^5 − 1) it generated 32 points (2^5), because it could select only 2 levels for each of the 5 variables. The new version in this case generates 216 points; variables in the resulting design have a varying number of levels (4×3×3×3×2 = 216).
- 141 GTDoE: similarly, improved selection of levels for continuous variables in the Orthogonal Array technique in the case when the number of levels is not specified. The technique now selects a varying number of levels for continuous variables in order to generate a proper or balanced array (see Array Types and Requirements) with size close to the requested point count. Previously it often generated a full factorial DoE with a minimum number of levels.
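The 216-point example above (level counts 4, 3, 3, 3, 2 for five variables) can be reproduced with a minimal full-factorial sketch; the heuristic that chooses the per-variable level counts is internal to GTDoE and not shown here, and the [0, 1] bounds are example values:

```python
from itertools import product

def full_factorial(levels_per_variable):
    """Generate all combinations of evenly spaced levels.
    levels_per_variable is a list of (lower, upper, n_levels) tuples."""
    grids = [
        [lo + i * (hi - lo) / (n - 1) for i in range(n)]
        for (lo, hi, n) in levels_per_variable
    ]
    return list(product(*grids))

# Five continuous variables on [0, 1] with 4, 3, 3, 3, and 2 levels each,
# matching the changelog example: 4 * 3 * 3 * 3 * 2 = 216 points.
design = full_factorial([(0.0, 1.0, n) for n in (4, 3, 3, 3, 2)])
print(len(design))  # 216
```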
15.6.8.2. Bugfixes¶
- 158 GTApprox: fixed an issue with using the GBRT technique in smart training (build_smart()) where it stopped prematurely after training a candidate GBRT model, if you did not use a test sample.
- 185 GTApprox: fixed an issue where internal validation information was missing from a model trained in the partial linear output dependency mode, if the training sample contained duplicates.
- 158 GTApprox: fixed an issue where parallel threads sometimes were not used in training even though parallelization was enabled.
- 141 GTDoE: fixed various issues with categorical variables in the Adaptive Design of Experiments technique.
- 141 GTDoE: fixed various issues with discrete variables in the Optimal Design technique.
- 141 GTDoE: fixed an issue where the Parametric Study technique incorrectly selected levels for continuous variables, using one of the variable bounds instead of a central point placed between bounds.
- 141 GTDoE: fixed various issues related to using constant variables in different DoE techniques (discrete and categorical variables defined with 1 level or “frozen” continuous variables defined with equal bounds).
15.6.9. pSeven Core 6.27¶
15.6.9.1. Updates and Changes¶
- 111 GTDoE: the following techniques now have a default for the number of DoE points they can generate: Full Factorial, Fractional Factorial, Parametric Study, Optimal Design, Box-Behnken, Orthogonal Array. To use the default, set the count of points to generate to 0.
- 151 You can now get all available problem data from p7core.Result using the new designs() method.
method. - 122 GTApprox: exported models with categorical inputs or outputs now support string values of those inputs and outputs.
- 122 GTApprox: models exported to C can now load samples from CSV files.
- 122 GTApprox, GTDR: models exported to C now output only model values (no gradient values) to simplify data piping: for example, now you can redirect a GTDR model output to a GTApprox model input as is. To output gradient values as in previous versions, add the -d option to the model’s command line.
15.6.9.2. Documentation¶
- 111 Updated descriptions of GTDoE methods build_doe() and generate().
- 151 Updated the p7core.Result description.
- Fixed the GTDoE/OrthogonalArray/LevelsNumber option description, which did not describe convenient option settings for designs with a mix of continuous and categorical (or discrete) variables.
15.6.9.3. Bugfixes¶
- 174 GTApprox: fixed an issue where training a model with partial linear output dependency mode caused an error if point weights were specified.
- 146 GTApprox: fixed an issue where smart training with an initial model sometimes failed because it could not train a model with specified options and hints.
- 160 GTApprox: fixed several issues with training a model with categorical outputs, which could cause errors in training.
- 123 GTApprox: fixed issues with incorrect statistics for categorical outputs in model details.
- 122 GTApprox: fixed several issues related to using string values with models that have categorical inputs or outputs.
- 122 GTApprox: fixed an issue where Unicode names of model variables could cause malformed log messages and other errors in logging.
- 151 GT: fixed an issue where da.p7core.Result.solutions() returned result data with field order not as specified by its fields argument.
15.6.10. pSeven Core 6.26¶
15.6.10.1. Updates and Changes¶
- 106 GTOpt: improved stability of surrogate-based optimization algorithms in problems where local search methods are applicable to improve solution accuracy.
- 95 GTDoE: discrete and categorical variables are now supported in most DoE techniques with some technique-specific limitations — see section Types of Variables in the GTDoE guide for details.
- 95 GTDoE: the build_doe() method now supports all GTDoE techniques.
15.6.10.2. Documentation¶
- 56 Updated section Internal Validation of the GTApprox guide to explain the use of internal validation options (updated in 6.19) in common model validation scenarios such as leave-one-out cross-validation.
- 159 Updated the GTDoE guide:
- Added section Types of Variables describing types of variables recognized by GTDoE and support for them in different GTDoE techniques.
- Added section Hint Reference describing hints used to specify types of variables and responses in GTDoE.
- 159 Added usage details to the build_doe() method description.
- 159 Updated GTDoE option descriptions:
15.6.10.3. Bugfixes¶
- 137 GTOpt: fixed an issue with batch evaluation support where the GTOpt/ResponsesScalability option was sometimes ignored in multi-objective problems or if local search is enabled.
- 173 GTApprox: fixed an issue with internal validation where specifying size of validation subsets (GTApprox/IVSubsetSize) could lead to an error if the training sample contains ambiguous points (different response values for same inputs).
- 163 GTApprox: fixed an issue with smart training where build_smart() produced a warning if the GBRT technique is enabled in training settings.
- 147 GTDoE: fixed an issue with the Adaptive Design of Experiments technique where it could raise an exception if the initial sample consists of a single point or contains a constant input column.
15.6.11. pSeven Core 6.25¶
15.6.11.1. New Features¶
- 94 GTApprox, GTDF and GTDR now support using pandas DataFrame and Series as data samples when training and evaluating models. In GTApprox in particular, pandas data samples may be used to specify categorical inputs and outputs; see sections Categorical Variables and Categorical Outputs in the GTApprox guide for details. Model evaluation methods now return pandas data samples if the sample to evaluate is a pandas.DataFrame or pandas.Series.
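A minimal sketch of what a pandas training sample with a categorical input could look like. The column names and values here are made-up examples, and the sketch only builds the sample; see the GTApprox guide for the actual conventions on passing it to training:

```python
import pandas as pd

# Hypothetical training sample: one continuous and one categorical input.
x = pd.DataFrame({
    "speed": [10.0, 20.0, 30.0, 40.0],
    "material": pd.Categorical(["steel", "steel", "alu", "alu"]),
})
y = pd.Series([1.2, 2.3, 0.7, 1.9], name="stress")

# The categorical dtype is what lets a consumer of the sample distinguish
# category levels from ordinary string data.
print(x["material"].dtype)                 # category
print(list(x["material"].cat.categories))  # ['alu', 'steel']
```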
15.6.11.2. Updates and Changes¶
- 119 GTDoE: improved quality of the Adaptive Design technique in cases where objective evaluations are prohibited (sample-based Adaptive Design with objectives), or the blackbox returns NaN values of an objective.
- 116 GTApprox: improved parallelization of the stepwise regression algorithm in RSM, resulting in increased performance in cases where stepwise regression is used as the RSM feature selection method (see GTApprox/RSMFeatureSelection), and also increased performance of smart training (build_smart()) with stepwise regression enabled (default).
- 120 GTApprox: improved tuning of GBRT and MoA technique parameters in smart training.
- 140 GTApprox: introduced new model input and output variability types: set (input) and piecewise constant (output). Variability type is automatically determined by GTApprox when training a model. See section Input and Output Descriptions in the GTApprox guide for details.
- 120 140 GTApprox: the TBL technique now supports model update (training with an initial model). Note that the TBL technique can update only TBL models.
- 120 GTApprox: clarified various error and warning messages related to handling categorical data in training samples.
- 144 GTOpt: the GTOpt/Techniques option that limits the set of allowed optimization techniques is now deprecated and kept for version compatibility only. It should no longer be used, as it will be removed in future versions.
15.6.11.3. Documentation¶
- 143 Updated descriptions of GTApprox, GTDF and GTDR methods that now support pandas DataFrame and Series as data samples.
- 143 Updated array-like definition in the Glossary and removed the related note in section API Reference as outdated.
- 143 Updated sections Categorical Variables and Categorical Outputs in the GTApprox guide to explain the use of pandas DataFrame and Series with categorical data.
- 140 144 Updated section Input and Output Descriptions in the GTApprox guide.
- 14 Removed the outdated Adaptive DoE example.
- Noted the GTOpt/Techniques option as deprecated.
15.6.11.4. Bugfixes¶
- 125 116 GTApprox: fixed several issues in smart training (build_smart()), due to which it did not behave as intended when being interrupted by a watcher (see Watchers) and could ignore interrupts, become unresponsive or enter a deadlock.
- 116 GTApprox: fixed an issue where using the @GTApprox/TimeLimit hint in smart training could lead to a deadlock.
- 149 GTDoE: fixed an issue in the Adaptive Design technique where it ignored the initial sample in problems with categorical or discrete variables.
- 140 GTApprox: fixed several issues in determining types of model inputs and outputs, which could lead to inability to use an initial model in training.
- 140 GTApprox: fixed an issue with editing metainformation in models with categorical outputs, where values of output levels (class labels) could be replaced with indexes of levels (class indexes) if you use modify().
. - 94 GTApprox: fixed an issue with updating a model with categorical inputs (using an initial model in training), where values of categorical inputs in the training sample were not checked against the levels of categorical inputs in the initial model.
- 94 GTApprox: fixed an issue with GBRT models where categorical inputs were described as continuous in model details.
- 132 GTOpt: fixed an issue where a problem sometimes could be incorrectly qualified as a NaN/Inf problem if it returns NaN values of objectives or constraints at some points.
15.6.12. pSeven Core 6.24¶
15.6.12.1. New Features¶
- 52 GTApprox, GTDR: added support for exporting a model to a set of source files, which helps avoid compilation problems when exporting large models. You can also now pack source files into an archive upon export. See the export_to(), compress_export_to(), and decompress_export_to() method descriptions for details.
15.6.12.2. Updates and Changes¶
- 127 GTOpt: improved solver behavior when the GTOpt/ResponsesScalability option is specified so that sizes of point batches sent for evaluation are better adjusted to the GTOpt/ResponsesScalability value. Note that it does not guarantee that point batch size is a multiple of GTOpt/ResponsesScalability at every solving iteration; see the option description for details.
- 128 GTOpt: all problem classes now support skipped response evaluations, so you are no longer required to implement gtopt.ProblemGeneric.evaluate() just for this one feature. Skipped evaluations are indicated by None values in evaluation results. In addition, evaluate() now also supports None response values.
- 52 GTApprox, GTDR: model export to C and Octave now generates less code.
15.6.12.3. Documentation¶
- 131 Updated descriptions of GTApprox model export methods and functions: see export_to(), export_fmi_cs(), export_fmi_me().
- 131 Updated descriptions of GTDR model export methods: see compress_export_to(), decompress_export_to().
- 128 130 Updated the gtopt.ProblemGeneric.evaluate() method description.
- 128 130 Updated descriptions of constraint and objective definition methods in GTOpt problem classes.
15.6.12.4. Bugfixes¶
- 107 GTApprox: fixed an issue with multithreading on processors with a high number of cores (10 or more) which caused training performance degradation compared to pSeven Core 6.16.
- 96 GTApprox: fixed an issue where you could not update a GBRT model (use incremental training), if the initial model was trained with a sample that contains duplicates.
- 129 GTOpt: fixed an error when using analytical gradients in an optimization problem with a combination of cheap and expensive responses.
15.6.13. pSeven Core 6.23¶
15.6.13.2. Updates and Changes¶
- 89 GTApprox: improved smart training algorithms to avoid loss of model quality in cases where the training sample is a Cartesian product of several factor sets (as in Tensor Approximation).
- 114 GTApprox: improved smart training algorithms to avoid performance degradation when training models with a high number of input variables.
- 114 GTApprox: the GTApprox/Accelerator option now affects RSM parameter estimation in smart training.
15.6.13.3. Documentation¶
- 72 Updated section Watchers.
- 114 Updated the GTApprox/Accelerator option description.
15.6.13.4. Bugfixes¶
- 103 GTApprox: fixed an issue where training a model with named inputs and outputs in Python 2.7 raised a UnicodeEncodeError exception if the names contain local language characters.
- 115 GTApprox: fixed an issue where you could not use a model with categorical outputs as an initial model in training (update a model).
- 99 GTApprox: fixed issues with compatibility of GTApprox/CategoricalOutputs with the GTApprox/Componentwise and GTApprox/DependentOutputs options.
- 99 GTApprox: fixed an issue where the training_sample of a model with categorical outputs contained enumerations of output levels instead of actual output values.
- 99 GTApprox: fixed meaningless values in output sample statistics for models with categorical outputs.
- 72 GTApprox: fixed a rare error that occurred when training a model with categorical variables with the @GTApprox/EnabledTechniques smart training hint.
- 104 GTDoE: fixed the Adaptive Design technique behavior in cases where the blackbox returns NaN values of an objective.
15.6.14. pSeven Core 6.22¶
15.6.14.1. New Features¶
- GTApprox now supports training models with categorical (discrete) outputs, which can take values only from a predefined set (the output levels) and can be strings. See section Categorical Outputs for details.
15.6.14.2. Updates and Changes¶
- 78 GTOpt, GTApprox: improved the support for interactive Python environments, such as JupyterLab or interactive Python consoles. You can now use the Ctrl+C hotkey to stop optimization or model training that runs in an interactive session.
- 67 GTApprox: added the GTApprox/CategoricalOutputs option.
- 67 GTApprox: updated details:
- For a categorical output indexed j, details["Output Variables"][j]["variability"] is "enumeration", and details["Output Variables"][j]["enumerators"] contains the list of output levels.
- Added the cross-entropy loss metric for categorical outputs. In dictionaries containing model accuracy information, values of this metric are stored under the "LogLoss" key (smaller values are better). For continuous outputs, this metric is always NaN.
- 67 GTApprox, GTDF: updates similar to the above appear in gtapprox.Model.validate(), gtapprox.Model.iv_info, gtdf.Model.validate(), and gtdf.Model.iv_info — see their descriptions for details.
- 67 GTApprox: if the model has string categorical outputs, calc() returns an ndarray with dtype=object to accommodate string values.
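The LogLoss metric mentioned above is the standard cross-entropy loss. A minimal sketch of how such a metric is computed (illustrative only, with hypothetical level names — not pSeven Core's internal code):

```python
import math

def log_loss(true_levels, predicted_probs, levels):
    """Cross-entropy ("LogLoss") for a categorical output: the mean
    negative log of the probability assigned to the true level;
    smaller values are better."""
    total = 0.0
    for y, probs in zip(true_levels, predicted_probs):
        p = max(probs[levels.index(y)], 1e-15)  # clip to avoid log(0)
        total -= math.log(p)
    return total / len(true_levels)

levels = ["low", "high"]
# a perfectly confident, correct prediction gives LogLoss 0
assert log_loss(["low"], [[1.0, 0.0]], levels) == 0.0
```

A maximally uncertain two-level prediction ([0.5, 0.5]) yields log 2 per point, which is the natural baseline to beat.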
15.6.14.3. Documentation¶
- Added section Categorical Outputs for GTApprox.
- 67 Added the GTApprox/CategoricalOutputs option description.
- 67 Updated sections Input and Output Descriptions and Accuracy in Model Details with information about categorical outputs.
15.6.14.4. Bugfixes¶
- 83 GTOpt: fixed a bad allocation error when solving a large-scale problem with integer variables.
- 83 GTOpt: fixed slow initialization of large-scale problems.
- 67 GTApprox: fixed a bug in the GBRT technique, which caused an error when training a GBRT model with many categorical variables with large sets of levels.
15.6.15. pSeven Core 6.21¶
15.6.15.1. New Features¶
- 63 GTDoE: improved Adaptive Design of Experiments to minimize the RRMS error of Gaussian process (GP) models trained using a DoE generated by this technique. Added two new options to control the technique behavior:
- GTDoE/Adaptive/AcceptableQualityLevel sets the acceptable RRMS error of the internal model trained by the Adaptive DoE technique.
- GTDoE/Adaptive/InitialModelQualityLevel sets the RRMS error threshold, after which the technique switches from the space-filling to the adaptive generation mode.
- 63 GTDoE: the Adaptive DoE generation result now includes the internal model trained during generation (see gtdoe.Result.model).
15.6.15.2. Updates and Changes¶
- 26 GTApprox: increased performance and improved stability of the PLA training technique.
- 26 GTApprox: added extrapolation support to models trained using the PLA technique.
15.6.15.3. Documentation¶
- 63 79 Added the GTDoE/Adaptive/AcceptableQualityLevel option description.
- 63 79 Added the GTDoE/Adaptive/InitialModelQualityLevel option description.
- 63 79 Added the gtdoe.Result.model attribute description.
15.6.15.4. Bugfixes¶
- 71 GT: fixed an issue with multithreading in the Windows version of pSeven Core, due to which since version 6.17 it did not utilize available CPU cores if running on a system with several processor groups — only one processor group was used.
- 75 GTApprox: fixed an issue in build_smart() due to which it performed internal validation twice if a testing sample is specified.
- 75 GTApprox: fixed a bug in internal validation, due to which it produced incorrect results if any of the training samples is a slice of a NumPy array.
15.6.16. pSeven Core 6.20¶
15.6.16.1. New Features¶
- 2 SHapley Additive exPlanations (SHAP) support for GTApprox models. Use shap_value() to get SHAP values from a model; see the SHapley Additive exPlanations example for details.
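As background on what SHAP values represent (a generic sketch, not the pSeven Core implementation): for a purely linear model the Shapley value of each input reduces to its weighted deviation from a baseline, and the values sum to the difference between the prediction and the baseline prediction (the "efficiency" property).

```python
import numpy as np

def linear_shap(weights, background, point):
    """Exact SHAP values for a linear model f(x) = w . x + b with
    independent inputs: phi_i = w_i * (x_i - E[x_i]), where the
    expectation is taken over a background sample."""
    weights = np.asarray(weights, dtype=float)
    baseline = np.asarray(background, dtype=float).mean(axis=0)
    return weights * (np.asarray(point, dtype=float) - baseline)

w = np.array([2.0, -1.0])
background = np.array([[0.0, 0.0], [2.0, 2.0]])  # baseline mean = [1, 1]
phi = linear_shap(w, background, [3.0, 1.0])

# efficiency check: SHAP values sum to f(point) - f(baseline)
f = lambda x: float(w @ x)
assert np.isclose(phi.sum(), f(np.array([3.0, 1.0])) - f(background.mean(axis=0)))
```

For nonlinear models the exact computation averages over input subsets, which is what dedicated SHAP algorithms approximate efficiently.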
15.6.16.3. Documentation¶
- Added the SHapley Additive exPlanations GTApprox example.
- 19260 Updated descriptions of the following methods, which implement blackbox-based techniques:
- Updated section Blackbox.
- Updated the GTDoE/AdaptiveDesign/ContourLevel option description.
15.6.16.4. Bugfixes¶
- 43 GTOpt: fixed an issue where solving a valid robust optimization problem raised an UnsupportedProblemError.
- 57 GTApprox: fixed an error that could occur when loading a model from a file, if the model was trained with an accuracy evaluation requirement (GTApprox/AccuracyEvaluation enabled).
15.6.17. pSeven Core 6.19¶
15.6.17.1. New Features¶
- 16 The Adaptive Design technique in GTDoE (build_doe()) now supports two essential settings: the aim for the number of feasible points to generate (the count argument) and the response evaluation limit (see GTDoE/AdaptiveDesign/MaximumIterations, GTDoE/AdaptiveDesign/MaximumExpensiveIterations). Generation stops either when it finds the aimed number of feasible points, or when it reaches the evaluation limit — whichever happens first. You can also use the method with such parameter combinations as:
- Aim for the maximum possible number of feasible points within some evaluation limits: set count to 0 and impose limits with the above options.
- Aim for a specific number of feasible points without predefined evaluation limits: set count to the aimed number of points and leave the limiting options default.
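The stopping logic above can be sketched as a toy loop (hypothetical helper names; this is not the actual GTDoE algorithm, only an illustration of the count-versus-limit interaction):

```python
def adaptive_design_stub(count, max_iterations, propose, is_feasible):
    """Toy generation loop: stop when the aimed number of feasible
    points is found, or when the evaluation limit is reached,
    whichever happens first. count == 0 means "collect as many
    feasible points as the limit allows"."""
    feasible = []
    for iteration in range(max_iterations):
        point = propose(iteration)
        if is_feasible(point):
            feasible.append(point)
            if count and len(feasible) >= count:
                break  # aimed number of feasible points reached
    return feasible

# propose the integers 0..9; even points count as "feasible"
assert adaptive_design_stub(3, 10, lambda i: i, lambda p: p % 2 == 0) == [0, 2, 4]
assert adaptive_design_stub(0, 10, lambda i: i, lambda p: p % 2 == 0) == [0, 2, 4, 6, 8]
```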
15.6.17.2. Updates and Changes¶
- 16 GTOpt: the solver can now handle evaluation failures — evaluate() may return a “could not calculate” response, and optimization continues if the number of failures is not too high. See the method description for details.
- 16 GTOpt: when optimization finishes or is interrupted, and there are no results to return, solve() now returns an empty solution with an appropriate result status, instead of raising an exception.
- 34 GTApprox, GTDF: more flexible internal validation settings. Set either the number or size of cross-validation data subsets with GTApprox/IVSubsetCount or GTApprox/IVSubsetSize (and similar GTDF options). A suitable number of training sessions is determined automatically, or you can limit it with GTApprox/IVTrainingCount (GTDF/IVTrainingCount). See the option descriptions for full details.
- 9 GTApprox: the GBRT technique now handles NaN values of variables in the training sample in a specific way. If you set the GTApprox/InputNanMode option, it keeps points where some (but not all) variables are NaN and actually uses them in training.
- 9 GTApprox: if the model validation sample contains invalid data, validate() now issues a warning instead of raising an exception.
- 25 GTDoE: improved uniformity and variability of DoE generated by the Adaptive Design technique in tasks with constraint responses and no objectives.
- 16 GTDoE: due to the recent improvements in performance and quality of the OLHS technique, it is now the default initial DoE technique in the blackbox-based adaptive mode in generate() (see GTDoE/Adaptive/InitialDoeTechnique).
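The internal validation options mentioned above interact in a simple way: subset count and subset size are two views of the same split, and the training count caps how many hold-out sessions actually run. A toy sketch of that bookkeeping (hypothetical helper; actual GTApprox behavior is defined by the option descriptions):

```python
def iv_schedule(sample_size, subset_count=None, subset_size=None,
                max_training_count=None):
    """Illustrative cross-validation bookkeeping: the sample is split
    into subsets (given either their count or their size), each
    training session holds out one subset, and an optional cap means
    not every subset has to be held out."""
    if subset_count is None:
        subset_count = -(-sample_size // subset_size)  # ceiling division
    training_count = subset_count
    if max_training_count is not None:
        training_count = min(training_count, max_training_count)
    return subset_count, training_count

# 100 points in subsets of 10 -> 10 subsets; cap at 3 hold-out sessions
assert iv_schedule(100, subset_size=10, max_training_count=3) == (10, 3)
```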
15.6.17.3. Documentation¶
- 16 Updated the evaluate() method description.
- 16 Updated the build_doe() method description.
- 16 Added the GTDoE/AdaptiveDesign/MaximumIterations and GTDoE/AdaptiveDesign/MaximumExpensiveIterations option descriptions.
- 34 40 Updated the GTApprox/IVSubsetCount, GTApprox/IVSubsetSize, and GTApprox/IVTrainingCount option descriptions.
- 34 40 Updated the GTDF/IVSubsetCount, GTDF/IVSubsetSize, and GTDF/IVTrainingCount option descriptions.
- 9 Updated the GTApprox/InputNanMode option description.
- 17 Updated the GTApprox/TAModelReductionRatio option description.
15.6.17.4. Bugfixes¶
- 16 GTOpt, GTDoE: fixed some issues in validation of variable bounds, which rarely led to incorrectly qualifying the problem as infeasible.
- 17 GTApprox: fixed an issue where a TA or iTA model training failed if GTApprox/TAModelReductionRatio is 1 and exact fit is required. Reduction ratio 1 means no reduction, so the exact fit requirement is valid in this case.
- 54 GTApprox: fixed a crash when smoothing an SGP model trained on noisy data with given output noise variance (see Data with Errorbars).
- 19 GTDF: fixed the inability to export some GTDF models to C#. Note that GTDF model export requires you to convert it to a GTApprox model first by loading the GTDF model from a file via the gtapprox.Model constructor.
- 8 GT: fixed various issues with incorrect or excessively strict argument type checks, which did not agree with common Python practices.
15.6.18. pSeven Core 6.18¶
Note
This release drops the support for old versions of Python 2 and the support for 32-bit Linux platforms.
- Using pSeven Core in Python 2 now requires at least Python 2.7.
- pSeven Core for Linux no longer supports 32-bit platforms. pSeven Core for Windows continues to support both 32-bit and 64-bit Windows editions.
15.6.18.1. Updates and Changes¶
- 4 33 37 18293 GTApprox: the Tensor Approximation (TA) technique now supports piecewise linear approximation (PLA) as a technique that you can use for TA factors. In particular, this enables TA to provide n-linear approximations on a grid.
- 18423 GTApprox: improved compatibility of exported model code, added more validity checks to avoid compilation issues.
- 18293 GTApprox: updated training sample size requirements for the HDA, GP, HDAGP, SGP, and MoA techniques. See section Sample Size Requirements in the GTApprox guide for details.
- 18552 GTApprox: models converted from GTDF models (by loading a GTDF model from a file) now support accuracy evaluation.
- 19226 GTDoE: added the da.p7core.gtdoe.measures.discrepancy() function to calculate the DoE discrepancy metric, which is a robust uniformity metric (see section Uniformity).
- 18552 GTDF: added the gtdf.Model.grad_ae() method to calculate the model’s accuracy evaluation gradients.
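As background on discrepancy as a uniformity measure: a common textbook variant is the centered L2 discrepancy, where lower values indicate a more uniform design. The sketch below implements that generic formula — it is not necessarily the exact metric computed by da.p7core.gtdoe.measures.discrepancy().

```python
import numpy as np

def centered_l2_discrepancy(sample):
    """Centered L2 discrepancy of a sample scaled to [0, 1]^d
    (Hickernell's formula); lower means more uniform."""
    x = np.asarray(sample, dtype=float)
    n, d = x.shape
    c = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = np.prod(1.0 + 0.5 * c - 0.5 * c ** 2, axis=1).sum() * 2.0 / n
    diff = np.abs(x[:, None, :] - x[None, :, :])
    term3 = np.prod(1.0 + 0.5 * (c[:, None, :] + c[None, :, :]) - 0.5 * diff,
                    axis=2).sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

uniform = [[0.125], [0.375], [0.625], [0.875]]
clustered = [[0.1], [0.12], [0.14], [0.16]]
# an evenly spread sample scores lower (better) than a clustered one
assert centered_l2_discrepancy(uniform) < centered_l2_discrepancy(clustered)
```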
15.6.18.2. Documentation¶
- 24 Updated section Sample Size Requirements in the GTApprox guide.
- 31 Updated section Installation due to changes in system requirements.
- 4 Updated the GTApprox/TensorFactors option description.
- 19226 Added the da.p7core.gtdoe.measures.discrepancy() function description.
- 18552 Added the gtdf.Model.grad_ae() method description.
- 19517 Fixed a few minor issues in examples.
15.6.18.3. Bugfixes¶
- 18796 GTApprox: fixed an extrapolation issue in some MoA models, which returned 0 everywhere outside of the training sample domain.
- 10 GTApprox: fixed an issue where evaluating a MoA model exported to Octave (.m) raised an error if the input point is outside of the training sample domain.
15.6.19. pSeven Core 6.17¶
Note
pSeven Core 6.17 is going to be the last major release that provides support for old versions of Python 2 and for 32-bit Linux platforms.
- Future versions of pSeven Core will not support Python 2.5 and 2.6. Using pSeven Core in Python 2 will require at least Python 2.7.
- Future versions of pSeven Core for Linux will not provide a 32-bit setup package. This concerns only the Linux version: pSeven Core for Windows will continue to support both its 32-bit and 64-bit editions.
15.6.19.1. New Features¶
- 18958 GTApprox: you can now select which techniques are enabled or disabled in smart training using the new @GTApprox/EnabledTechniques hint. See section Training Features in Smart Training for details.
- 17845 GTApprox: added the support for model output thresholds — another kind of output bounds. When thresholds are set for a model, the model guarantees that its outputs are always within bounds: if it calculates some output value which exceeds a threshold, it returns the threshold value instead. See sections Model Metainformation and Output Constraints for details.
- 19166 GTDoE: the OA technique now supports the property-preservation mode: given an initial sample which is an orthogonal array, it updates the sample with the requested number of points in such a way that the resulting sample is also an orthogonal array.
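The output-threshold behavior described above amounts to clamping: any output value beyond a threshold is replaced by the threshold value itself. A minimal sketch with a hypothetical helper (the actual feature is configured through model metainformation, not this function):

```python
import numpy as np

def apply_thresholds(outputs, lower, upper):
    """Replace any output value beyond a threshold with the threshold
    value itself, so outputs always stay within bounds."""
    return np.clip(np.asarray(outputs, dtype=float), lower, upper)

clamped = apply_thresholds([-5.0, 0.3, 9.0], lower=0.0, upper=1.0)
assert clamped.tolist() == [0.0, 0.3, 1.0]
```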
15.6.19.2. Updates and Changes¶
- 19494 GTOpt, GTDoE: added more robust checks for the initial sample data passed to Surrogate-Based Optimization and Adaptive Design. When analyzing the initial sample, these techniques now update it as needed to ensure that the sample properly covers the design space defined by variable bounds. In particular, this helps in avoiding unwanted localization in tasks where the initial sample is clustered in a relatively small area of the design space.
- 19508 GTOpt: added an optional local search mode to the Surrogate-Based Optimization technique. In this mode, GTOpt searches an area close to the current optimum, adjusting the location and size of this area for new iterations. It uses local response models, which take less time to train, thus making the algorithm less time-consuming. See the GTOpt/LocalSearch option for details.
- 18797 18946 19083 GTApprox: several internal improvements in smart training (build_smart()), which in many cases result in increased model quality and higher training performance.
- 19083 GTApprox: updated the GP and HDAGP techniques to better detect overtraining and adjust the algorithm so it preserves model quality.
- 18954 GTApprox: improved the parallel training implementation so its performance under Linux now scales better with the number of CPU cores.
- 18807 GTApprox: improved performance of the PLA, RSM, SPLT, TA, and iTA techniques in the case when the model is trained with an option to discover and keep linear dependencies between outputs (GTApprox/DependentOutputs set to "PartialLinear").
- 17845 GTApprox: when you train with an initial model, its metainformation is now re-used — GTApprox copies it to the trained model and updates with new information if you want to make changes. See Model Metainformation for details.
- 18946 GTApprox: the model accuracy and internal validation statistics in smart training are now calculated using the entire training dataset, even if the data was split to the train and test subsets using the @GTApprox/TrainingSubsampleRatio hint. Previously smart training did not use the test subset data when calculating statistics.
- 19165 GTApprox: changed the formatting of model accuracy and internal validation information printed to the training log for better readability.
- 18551 GTApprox: optimized the output constraints formula for brevity (see Constraints Formula).
- 19113 GTApprox: the GTApprox/GPType, GTApprox/RSMFeatureSelection, GTApprox/RSMType, and GTApprox/SPLTContinuity options now support the "Auto" value, which becomes the default. This value is primarily intended to explicitly “unlock” an option for automatic tuning, which is a part of smart training.
- 19120 18927 19500 GTApprox: improved compatibility with GTDF models that are loaded from a file with the gtapprox.Model constructor in order to convert a GTDF model to GTApprox.
- 18927 GTDF: gtdf.Model.details now contains information about the training technique, options, model accuracy, and training sample statistics.
- 18914 GTDoE: significantly increased the OA technique performance and stability for big samples and designs where variables have many levels (about 10 or more). Added an auto setting for the GTDoE/OrthogonalArray/MultistartIterations option, which is now the default.
- 19196 GTDoE: noticeably increased performance of the OLHS technique in cases where it is used to generate big samples (thousands of points).
- 19166 GTDoE: improved the support for categorical variables in the LHS and OLHS techniques, resulting in better sample point distribution when generating a DoE with a combination of continuous and categorical variables.
- 19147 GTDoE: improved OLHS generation quality in the property-preservation mode.
- 19022 GTDR: updated the NLPCA technique to better avoid overtraining.
- 19219 GTDR: model details now contain information about options used on training.
- 18435 GTApprox, GTDF, GTDR: improved the string representation of models.
- 18425 GTApprox, GTDF, GTDR: leading and trailing whitespaces are now stripped from strings stored to model metainformation. Also, Unicode strings in metainformation are NFKC-normalized.
- 19262 GTApprox, GTDF, GTDR: updated the methods that export model C code for compatibility with the Tiny C Compiler.
- 19162 GT: changed the upper limit for the number of parallel processes set by the GTApprox/MaxParallel, GTDF/MaxParallel, GTDR/MaxParallel, GTDoE/MaxParallel, GTOpt/MaxParallel, and GTSDA/MaxParallel options to 512.
15.6.19.3. Documentation¶
- 17845 18837 Updated sections Model Metainformation and Output Constraints in Model Details with details on model output thresholds.
- 18958 19106 Updated section Training Features in Smart Training.
- 18958 19106 Added the @GTApprox/EnabledTechniques hint description.
- 17845 18837 Updated the gtapprox.Builder.build() and gtapprox.Model.modify() method descriptions.
- 18927 18837 Updated the gtdf.Model.details attribute description.
- 19166 Updated the gtdoe.Generator.generate() method description with details on the property-preservation mode support now available for orthogonal arrays.
- 19508 Added the GTOpt/LocalSearch option description.
- 19117 Updated the GTApprox/GPType, GTApprox/RSMFeatureSelection, GTApprox/RSMType, and GTApprox/SPLTContinuity option descriptions with the new "Auto" value.
- 18914 18837 Updated the GTDoE/OrthogonalArray/MultistartIterations option description.
- 19162 18837 Updated the GTApprox/MaxParallel, GTDF/MaxParallel, GTDR/MaxParallel, GTDoE/MaxParallel, GTOpt/MaxParallel, and GTSDA/MaxParallel option descriptions.
15.6.19.4. Compatibility Issues¶
This release changes the upper limit for the options that set the maximum number of parallel threads created by pSeven Core (GTApprox/MaxParallel and similar), as well as the upper limit for GTDoE/OrthogonalArray/MultistartIterations. These changes may require minor updates in your code — see section Version Compatibility Issues for details.
15.6.19.5. Bugfixes¶
- 19190 19195 GTOpt: fixed incorrect processing of user-defined gradients in mean variance problems.
- 19157 GTApprox: fixed a bug in smart training (build_smart()), which could cause a training error when @GTApprox/TryOutputTransformations is enabled. In such cases, training continued but some submodels were not trained completely, so the final model returned by build_smart() could miss some information or contain other issues.
- 19127 GTApprox: fixed an error when training is configured to limit the model input domain (GTApprox/InputDomainType is set), and the training sample contains NaN values of variables which should be ignored (GTApprox/InputNanMode is set to "ignore").
- 19269 GTApprox: added correct error handling for the case when export_to() cannot export a GBRT model to C code due to its size. It now raises an exception when out of memory, instead of generating incorrect C code that does not compile.
- 19203 GTApprox: fixed an issue where model training could freeze if the training sample contains many NaN output values and the GTApprox/OutputNanMode option is set to "Predict".
- 19131 GTApprox: fixed an issue where the GP technique could enter an infinite loop if the training sample contains degenerate data.
- 19165 GTApprox: fixed an issue where build_smart() raised an exception if the HDAGP technique was selected manually.
- 19165 GTApprox: fixed a bug in the HDAGP technique which could lead to overtraining in some cases.
- 18807 GTApprox: fixed the GTApprox/InputNanMode option compatibility with the SPLT technique.
- 18807 GTApprox: fixed the support for output transformation (GTApprox/OutputTransformation) in the PLA technique.
- 18946 GTApprox: fixed a bug that rarely led to incorrect calculation of model accuracy and internal validation statistics, if the training sample contains many NaN output values.
- 17879 GTApprox: fixed an issue where model details contained incorrect information about constant outputs.
- 18807 GTApprox: fixed an issue where the training log contained incorrect values of the GTApprox/OutputTransformation option.
- 19041 GTApprox: fixed an issue where GTApprox could not export a MoA model, which was trained in a deprecated pSeven Core version, to C# code.
- 19111 GTApprox: fixed a bug in model export to Octave code, due to which GTApprox failed to export a model that was trained with GTApprox/OutputNanMode set to "Predict".
- 19512 GTApprox: fixed a compatibility issue in the exported Octave code.
- 19137 GTApprox: fixed an incorrect error message in the case when smart training fails to train any model.
- 19165 GTApprox: fixed several Python 2.5 compatibility issues.
- 18432 GTDF: fixed incorrect training option information in model details.
- 19153 GTDoE: fixed a few problems with the adaptive DoE algorithms in generate() which negatively affected their performance, stability, and result quality.
- 19185 GTDoE: fixed an issue where exception messages, which report a technique error, contained a technique number instead of its name.
- 19184 GT: fixed several cases where pSeven Core did not correctly release a license, which caused incorrect license usage count and other license issues.
15.6.20. pSeven Core 6.16.4¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.20.1. Documentation¶
- 18318 Updated the GTApprox Smart Selection example to show the intended usage of gtapprox.Model.details instead of gtapprox.Model.info, and updated its results description.
- 17926 Updated examples in section Quick Start for compatibility with Python 3.
- 19178 Minor updates to some other examples.
15.6.21. pSeven Core 6.16.3¶
15.6.21.1. Updates and Changes¶
- 18983 GTOpt: increased performance of surrogate-based optimization algorithms, resulting in noticeably faster problem solving.
- 19019 GTOpt: increased solver performance in problems with a high number (about 500 or more) of variables or constraints.
- 19036 GTApprox: improved stability of several approximation techniques, including MoA and GBRT.
- 19092 GTApprox: clarified some error messages.
- 19047 19079 GTDoE: increased performance of the Adaptive Design technique (build_doe()) and improved its stability and result quality in certain kinds of tasks, in particular when generating a uniform adaptive DoE with linear constraints.
- 18811 GT: a LicenseError exception message now contains the text of a FlexNet Publisher license server error message when available.
15.6.21.2. Documentation¶
- 19112 Updated the gtopt.Solver.solve() method description.
- 19104 Fixed the GTDF/HFA/SurrogateModelType option description: "GBRT" and "PLA" were not listed as valid values while the techniques are actually supported.
- 18811 Added section License in Known Issues.
15.6.21.3. Bugfixes¶
- 19112 GTOpt: fixed incorrect behavior in cases when the initial sample (the sample_x argument to solve()) contains an invalid value of a discrete or integer variable. The method now informs about such errors by raising an InvalidProblemError exception.
- 19068 GTApprox: fixed a bug in internal validation of models trained on samples with a grid-like structure (multigrid mesh), which could cause an access violation error in the case when the training sample contains a high number (order of 10^5) of points.
- 18902 GTApprox: fixed an issue where using an initial model in training caused an error, if this initial model was a GTDF model trained in the componentwise mode (see GTDF/DependentOutputs) and loaded to GTApprox with the gtapprox.Model constructor.
- 19104 GTApprox: fixed an issue where export to the Octave format was not available for GTDF models trained with the MFGP or DA techniques and loaded to GTApprox with the gtapprox.Model constructor.
- 19057 GTDoE: fixed incorrect behavior of adaptive DoE algorithms in tasks with linear constraints when any of the following conditions is met:
- the design includes discrete variables,
- an initial sample is given, or
- the design includes variables with different orders of magnitude, and design space normalization (GTDoE/Normalize) is disabled.
- 19104 GTDF: fixed the GTDF/HFA/SurrogateModelType option not recognizing "GBRT" and "PLA" (actually supported approximation techniques) as its valid values.
- 18811 GT: fixed license requests performed by pSeven Core to make them thread-safe.
15.6.22. pSeven Core 6.16.2¶
15.6.22.1. Updates and Changes¶
- 19003 GTApprox: added a summary of model output properties to the end of the model training log. This summary lists training options applied to each output and contains a few other details, such as used approximation techniques and output dependencies and constraints, if they exist.
- 18992 GTApprox: the automatic sample splitting algorithm, which is used in smart training (enabled by setting @GTApprox/TrainingSubsampleRatio to 0, see Training Features), was adjusted for the case of a large training sample (several thousand points). This change helps to avoid certain negative effects on model quality, which were observed in previous versions when automatic splitting was used with large samples.
15.6.22.2. Bugfixes¶
- 19024 GTApprox: fixed an issue where the trained model could show unexpectedly high errors for some outputs, if the search for linear dependencies between outputs is enabled (GTApprox/DependentOutputs is set to "PartialLinear"), and internal validation is on.
15.6.23. pSeven Core 6.16.1¶
15.6.23.1. Updates and Changes¶
- 18509 GTApprox: the C# source code export format is now supported for all GTApprox models (not yet supported for GTDF models loaded to gtapprox.Model). Previously, it was available only for GTApprox models trained with the HDA, RSM, or TBL technique. Note that C# source export requires an up-to-date license valid for pSeven Core 6.16 and above.
- 18962 GTApprox: improved quality of approximation techniques based on Gaussian processes (GP, SGP, and others) for noisy training data and for models with high input dimension.
- 18888 GTDoE: changed the default value of the GTDoE/OrthogonalArray/MultistartIterations option to 500. The old default (10) was often too low to generate a proper orthogonal array.
- 18888 GTDoE: removed the deprecated GTDoE/OrthogonalArray/MaxIterations option. Use GTDoE/OrthogonalArray/MultistartIterations to control the time-quality trade-off.
15.6.23.2. Documentation¶
- Updated the GTDoE/OrthogonalArray/MultistartIterations option description.
- Removed the deprecated GTDoE/OrthogonalArray/MaxIterations option description.
- Updated the gtapprox.ExportedFormat description with notes on C# source export.
15.6.23.3. Compatibility Issues¶
This release removes the deprecated GTDoE/OrthogonalArray/MaxIterations option, which may require updates in your code if you use the Orthogonal Array DoE technique. See section Version Compatibility Issues for details.
15.6.23.4. Bugfixes¶
- 18918 GTOpt: fixed an issue with occasionally long solving time of the Surrogate-Based Optimization technique in problems with a high number of linear constraints.
- 18913 GTApprox: fixed an error in the internal validation procedure, which occurred when the training sample contains many non-numeric values in the input part (values of variables), and GTApprox/InputNanMode is set to "ignore".
- 18888 GTDoE: fixed incorrect behavior of the GTDoE/OrthogonalArray/MultistartIterations option.
15.6.24. pSeven Core 6.16¶
15.6.24.1. New Features¶
- 17970 GTApprox: you can now limit the model’s input domain, so it returns NaN outputs for input points which do not satisfy the input constraints. The constraints can be specified manually or determined automatically by GTApprox, based on the bounding box of the training sample. See the GTApprox/InputDomainType option for details.
- 18469 18494 GTDoE: the Adaptive Design technique and space-filling DoE techniques, which use internal quality criteria based on spatial measures (OLHS in particular), now by default perform design space normalization internally to ensure that their criteria work correctly. Also added an option to control this behavior, see GTDoE/Normalize.
- 17455 18403 GTDoE: completely reimplemented the Orthogonal Array technique. The new version is significantly faster, more convenient to use and eventually more stable. One of its distinctive features is the ability to adjust to any sample size by generating a “nearly orthogonal” array — in contrast with the old implementation, which always required a specific sample size. See section Orthogonal Array for details.
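The input-domain feature described above (item 17970) can be illustrated with the automatic bounding-box case: points outside the axis-aligned bounding box of the training inputs get NaN outputs. A minimal sketch with a hypothetical helper (GTApprox configures this via the GTApprox/InputDomainType option, not this function):

```python
import numpy as np

def evaluate_with_domain(model, x, train_inputs):
    """Evaluate a model, returning NaN for points outside the
    axis-aligned bounding box of the training inputs."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    train = np.asarray(train_inputs, dtype=float)
    lo, hi = train.min(axis=0), train.max(axis=0)
    y = np.array([model(p) for p in x], dtype=float)
    outside = ~np.all((x >= lo) & (x <= hi), axis=1)
    y[outside] = np.nan  # point violates the input constraints
    return y

train = [[0.0, 0.0], [1.0, 1.0]]
y = evaluate_with_domain(lambda p: p.sum(), [[0.5, 0.5], [2.0, 0.5]], train)
assert y[0] == 1.0 and np.isnan(y[1])
```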
15.6.24.2. Updates and Changes¶
- 18023 16990 17770 18500 18463 18248 GTOpt: several improvements and corrections in the internal solving algorithms resulting in higher stability and performance for certain problem types, in particular problems with no objectives (constraint satisfaction) or without constraints (unconstrained problems), and problems where design variables have significantly different orders of magnitude.
- 18467 GTOpt: added an option to disable the special treatment of response evaluation failures, which is used by default in surrogate-based optimization (in the current and previous pSeven Core versions). Note that disabling it is useful only in rare cases where evaluation failures can occur at random. See GTOpt/DetectNaNClusters for details.
- 17348 GTOpt: the minimum allowed number of evaluations for computationally expensive responses is now 1 instead of 3 (see GTOpt/MaximumExpensiveIterations).
- 17881 GTApprox: improved the model validation procedure used internally in smart training to estimate quality of intermediate (candidate) models in the case when you do not supply a separate test sample. The new method provides more accurate quality estimates, eventually resulting in more accurate final models. See section Smart Training for more details.
- 18320 18460 18462 GTApprox: several performance and stability improvements for the MoA technique. In particular, increased performance when parallelization is enabled (see GTApprox/MaxParallel).
- 17443 GTApprox: extended the set of GP technique options tuned by the smart training algorithm (build_smart()), potentially allowing it to create more accurate GP models.
- 18394 GTApprox: added initial support for a periodic kernel to Gaussian process-based techniques, see GTApprox/GPType.
- 17443 GTApprox: further improvements (continuing from pSeven Core 6.14) in the data analysis algorithm, which is used in smart training to prevent oscillations of model outputs when a smooth model is required.
- 17265 GTApprox: you can now convert a GTDF model into a GTApprox model by loading it from a file with the gtapprox.Model constructor. In particular, this allows exporting GTDF models to various formats using gtapprox.Model.export_to().
- 18351 GTApprox: models trained with the HDA, RSM, or TBL technique can now be exported to C# source code (see Model Export and export_to()). Note that C# export requires an updated pSeven Core license.
- 17968 17908 GTApprox: added information on model input and output constraints to gtapprox.Model.details. Also, details now stores a copy of warnings extracted from the model’s training log (build_log). See sections Model Details, Input Constraints, and Output Constraints for details.
- 18063 GTDoE: when an initial sample with missing values of objectives or constraints is given to adaptive DoE, it calculates the missing values when possible and includes these calculations in the final result.
- 18467 GTDoE: the Adaptive Design technique (provided by build_doe()) now supports special treatment of response evaluation failures (similar to GTOpt), which enables it to avoid design space areas where response functions are undefined. Also added an option to control this behavior, see GTDoE/AdaptiveDesign/DetectNaNClusters.
- 18444 GTDoE: categorical variables with only 1 level are now allowed (see GTDoE/CategoricalVariables).
- 17908 GTDF, GTDR: gtdf.Model.details and gtdr.Model.details now store a copy of warnings extracted from the model’s training log.
15.6.24.3. Documentation¶
- 17945 Updated section Training Features in the GTApprox guide.
- 17943 Updated section Model Details in the GTApprox guide, added sections Input Constraints and Output Constraints.
- 17970 Updated section Model Metainformation with explanations on how to specify model input domain.
- 17970 Added the GTApprox/InputDomainType option description.
- 17945 Updated the @GTApprox/TrainingSubsampleRatio smart training hint description.
- 18404 Updated the GTApprox/OutputTransformation option description.
- 18394 18586 Updated the GTApprox/GPType option description.
- 17348 Updated the GTOpt/MaximumExpensiveIterations option description.
- 18467 Added the GTOpt/DetectNaNClusters and GTDoE/AdaptiveDesign/DetectNaNClusters option descriptions.
- 17701 Updated the Orthogonal Array section in the GTDoE guide.
- 18469 Added the GTDoE/Normalize option description.
- 17701 Added the GTDoE/OrthogonalArray/ArrayType option description.
- 17701 Updated the GTDoE/OrthogonalArray/MultistartIterations and GTDoE/OrthogonalArray/LevelsNumber option descriptions.
- 17701 Noted the GTDoE/OrthogonalArray/MaxIterations option as deprecated.
- 17943 Updated the gtdf.Model.details description.
- 17943 Updated the gtdr.Model.details description.
- 16990 18379 Updated section Open Source Components.
15.6.24.4. Compatibility Issues¶
This release contains a new implementation of the Orthogonal Array DoE technique, which is generally compatible with the old version and does not require changes in code, but introduces some changes in options and features. It may also slightly affect results of the Taguchi ranking technique in GTSDA, which uses orthogonal arrays internally. See sections Version Compatibility Issues and Orthogonal Array for more details.
15.6.24.5. Bugfixes¶
- 18450 GTOpt: fixed the support for integer variables in problems with no objectives (constraint satisfaction problems).
- 18478 GTApprox: fixed an exception in the MoA technique when it is used with an initial model and a training sample which has a grid-like structure (multigrid mesh).
- 18050 GTApprox: fixed the incompatibility of exported model C code with 32-bit compilers.
- 18066 GTApprox: fixed build_smart() sometimes not saving option values to model details for options that were tuned automatically during model training.
- 18461 GTApprox: fixed a cross-validation bug in smart training, which could lead to an error when training a model with categorical variables and point weights without a separate test sample.
- 18461 GTApprox: fixed a bug in smart training, which could negatively affect the quality of models with categorical variables if GTApprox/IVSavePredictions is disabled.
- 18404 GTApprox: fixed the inability to use an initial model with output transformation (GTApprox/OutputTransformation) when training a model with the GBRT or HDAGP technique.
- 18806 GTApprox: fixed the RSM, SPLT, TA, iTA, and PLA techniques ignoring the linear output dependency option (see GTApprox/DependentOutputs).
- 18556 GTApprox: fixed a bug in the HDA technique due to which a HDA model could produce false predictions of NaN outputs when the NaN prediction feature is enabled.
- 18798 GTApprox: fixed a bug in MoA model export to the C source code format, which could lead to incorrect behavior of exported MoA models.
- 18348 GTApprox: fixed a bug in model export to C due to which exported TA and iTA models with SPLT factors could return NaN outputs for points which are out of the region covered by the training sample, instead of using linear extrapolation as intended.
- 17968 GTApprox: fixed incomplete model decomposition information in details for models trained to keep linear dependencies between outputs (GTApprox/DependentOutputs set to "PartialLinear").
- 18392 GTApprox: fixed a bug in distributed training (set_remote_build()) due to which it sometimes did not delete temporary directories on the remote host. Also fixed a bug due to which it sometimes did not close the SSH connection properly, so the training never finished.
- 17994 GTDoE: fixed the GTDoE/Adaptive/OneStepCount option behavior in generate(): the blackbox-based adaptive DoE algorithm used by generate() always generated 1 point per iteration, even if more points were allowed by GTDoE/Adaptive/OneStepCount, so it was effectively ignoring this option. Note that the intended option behavior remains the same: it sets an upper limit on the number of points generated per iteration, but does not require exactly this number.
- 18444 GTDoE: fixed incorrect behavior of some techniques when generating a DoE with “frozen” variables (with equal lower and upper bounds).
- 18452 GTOpt, GTDoE: removed some redundant messages from run logs.
- 18379 GT: fixed pSeven Core for Windows becoming unresponsive when the system it runs on has a long uptime, or the system event log was cleared recently.
- 18387 GT: fixed several misleading exception messages.
- 16644 GT: fixed some inconsistencies in option descriptions returned by the Options interface.
15.6.25. pSeven Core 6.15.1¶
15.6.25.1. New Features¶
- 14017 GTApprox: the Mixture of Approximators technique now supports using an initial model, so it can update existing models with new data or improve their accuracy by retraining. See Initial Model for details.
15.6.25.2. Updates and Changes¶
- 14017 GTApprox: added several internal improvements to the Mixture of Approximators technique, in many cases making it faster and more accurate.
15.6.25.3. Documentation¶
- 18307 Added section Initial Model in Mixture of Approximators.
- 18307 Updated the GTApprox build() method description.
15.6.25.4. Bugfixes¶
- 18287 GTApprox: fixed a possible crash when using a large training sample with point weights specified (see Sample Weighting) and GTApprox/OutputTransformation set to "auto".
15.6.26. pSeven Core 6.15¶
15.6.26.1. New Features¶
- 15764 GTOpt: significantly improved performance of several internal algorithms used in constrained optimization problems.
- 17376 GTOpt: added the support for discrete variables — non-integer variables which take values from a predefined set.
- 17723 GTApprox: added a special training mode, in which GTApprox tests the training sample to find linear dependencies between responses, and trains a special model which keeps these dependencies. See section Output Dependency Modes for details.
- 17724 GTApprox: the smart training mode was updated with a new, even more accurate method which decides whether to apply log transformation to the outputs data in the training sample. Log transformation may improve model accuracy when training outputs are exponentially distributed. The new method essentially compares various models trained with and without the transformation and selects the best one. Note that this testing can noticeably increase model training time. See section Training Features and the @GTApprox/TryOutputTransformations hint for details.
- 17989 17886 GTApprox: improved quality and stability of all techniques based on Gaussian processes. In particular, the robust version of the Gaussian processes algorithm was significantly improved and now provides both high accuracy and robustness. This version of the algorithm is now default, see the GTApprox/GPLearningMode option for details.
- GTDoE: improvements in the Adaptive Design technique (da.p7core.gtdoe.Generator.build_doe()), including:
- 15764 Improved technique performance and stability when generating a DoE with constraints.
- 17955 Significantly increased performance when generating a high-dimensional DoE with no objectives but with a high number of linear constraints, solving the specific problem of generating a uniform DoE in a narrow subset of the design space in high dimension.
- 17935 18042 Better support for sample-based generation when response evaluation is restricted. The technique will now generate uniformly distributed points instead of concentrating them in a limited design space area.
- 17811 GTSDA: data points containing non-numeric values can now be automatically removed from the analysis (see GTSDA/NanMode).
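The automatic removal of non-numeric points (the idea behind GTSDA/NanMode) amounts to filtering out sample rows containing NaN or infinite values before any statistics are computed. A minimal generic sketch in NumPy, not the pSeven Core API (the function name is hypothetical):

```python
import numpy as np

def drop_non_numeric_rows(sample):
    """Remove rows that contain NaN or infinite values.

    Generic pre-analysis cleanup, similar in spirit to an automatic
    NaN-handling mode that excludes invalid points from analysis.
    """
    sample = np.asarray(sample, dtype=float)
    mask = np.isfinite(sample).all(axis=1)  # True only for fully numeric rows
    return sample[mask]

data = np.array([[1.0, 2.0],
                 [np.nan, 3.0],   # dropped: NaN in first column
                 [4.0, 5.0]])
clean = drop_non_numeric_rows(data)
print(clean.shape)  # (2, 2)
```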
15.6.26.2. Updates and Changes¶
- 9767 GTOpt: removed compatibility with OpenTURNS, since the supported version (OpenTURNS 0.15) is very outdated and incompatible with the current OpenTURNS release, and the feature itself was rarely used. This simplifies the configuration of a robust optimization problem: set_stochastic() now requires only the distribution argument, which can be an object of any custom class implementing the probability distribution. This change does not affect compatibility with code based on previous versions of pSeven Core.
- 18042 GTOpt: NaN values in the response part of an initial sample are now processed as special values indicating that a response failed to evaluate. Previously such points were simply excluded from consideration.
- 15711 GTOpt: global search mode now always requires bounds for variables. Previously, variables with no bounds were allowed in global search, but GTOpt silently switched to the local search mode in this case. Now it returns a Result with the unsupported problem status (see Statuses).
- 17366 GTOpt: improved stability when solving problems with stochastic variables (robust optimization methods).
- 17817 17821 17922 GTOpt: improved stability in mixed-integer linear problems which include variables bound to a very small range (on the order of 10⁻⁶ or less).
- 16693 GTOpt: noticeably reduced memory consumption for some problems.
- 17959 GTApprox: improved overall quality and performance of smart training, in particular thanks to the latest updates in the HDA and GP techniques.
- 17846 GTApprox: improved load balancing and computational resource utilization when training models in parallel mode.
- 17959 GTApprox: improved quality of the HDA technique when used with default settings.
- 17487 GTApprox: improved the support for categorical variables in the SPLT and GBRT techniques.
- 17842 GTApprox: when loading models trained in versions prior to 6.8, GTApprox now automatically calculates the R2 error metric, which was not stored in the model by those old versions, and adds it to model details and iv_info.
- 17415 GTApprox: when exporting a model, descriptions of its inputs and outputs are now added to the comment in the exported code along with other model information.
15.6.26.3. Documentation¶
- 9767 Updated section Using Stochastic Variables in the GTOpt guide, and updated the set_stochastic() method description.
- 13962 Updated section Hint Reference in the GTOpt guide with more details on hint usage.
- 13962 Fixed incorrect hint usage in example_gtopt_rdo.py.
- 17815 Updated the GTOpt/Deterministic option description.
- 15711 Updated the GTOpt/Techniques option description.
- 17724 Updated section Training Features in Smart Training.
- 17937 Updated section Categorical Variables in the GTApprox guide.
- 17938 Updated section Output Dependency Modes in the GTApprox guide.
- 17938 Updated the GTApprox/DependentOutputs option description.
- 17944 Fixed some outdated details in section Exact Fit.
- 17944 Updated the GTApprox/ExactFitRequired option description.
- 17944 17886 Updated the GTApprox/GPLearningMode option description.
- 17921 Added the @GTApprox/TryOutputTransformations hint description.
- 17939 Added the GTSDA/NanMode option description.
15.6.26.4. Bugfixes¶
- 18042 17375 GTOpt: fixed a bug in processing response evaluation results in specific cases when a limit is imposed on the allowed number of evaluations (for example, GTOpt/MaximumExpensiveIterations is set). Due to this bug, when reaching the evaluation limit, GTOpt stopped evaluating the response, but further internally treated new points where the response was not evaluated as points where it failed to evaluate. This did not affect the solution quality, but the skipped points were incorrectly attributed as infeasible, which led to misinterpretation of results.
- 15711 GTOpt: fixed a minor bug in the surrogate-based multistart method due to which its behavior depended on the GTOpt/ResponsesScalability option while it should not.
- 15711 GTOpt: fixed several inconsistencies in the GTOpt/Techniques option behavior.
- 17989 17886 GTApprox: fixed several inconsistencies related to interaction of the GTApprox/ExactFitRequired option with other options and techniques.
- 17967 GTApprox: fixed a bug in build_smart() which caused a crash with a training sample containing categorical variables when @GTApprox/TrainingSubsampleRatio is specified and internal validation is disabled.
- 17817 GTDF, GTDR: fixed gtdf.Builder and gtdr.Builder sometimes ignoring the log level options (GTDF/LogLevel, GTDR/LogLevel).
- 17672 GTDF: fixed an error in automatic technique selection.
- 17376 17901 GTDoE: fixed the support for discrete variables in the Adaptive Design technique.
- 17901 GTDoE: fixed the Adaptive Design technique sometimes ignoring the initial samples.
- 17916 GTDoE: fixed the Adaptive Design technique (build_doe()) crashing when functional constraints in the generation space are defined in such a way that there is only one feasible point.
- 17441 GTDoE: fixed the Orthogonal Array technique ignoring the GTDoE/Deterministic option.
- 17383 GTDoE: fixed incorrect error messages in the blackbox-based DoE mode regarding insufficient blackbox budget.
15.6.27. pSeven Core 6.14.4¶
15.6.27.1. Updates and Changes¶
- 17284 This version contains certain changes in behavior of pseudorandom generators used in adaptive DoE. These changes were required for the GTDoE/Adaptive/OneStepCount option fix described below. The changes do not break compatibility (there is no need to update your code), but affect adaptive DoE results in the deterministic mode (with GTDoE/Deterministic enabled). The results of deterministic adaptive DoE from versions 6.14.3 and below cannot be exactly reproduced in 6.14.4: adaptive DoE algorithms in 6.14.4 and 6.14.3 will generate different DoE for the same GTDoE/Seed. Otherwise, the deterministic adaptive DoE in 6.14.4 is not changed — if you use a fixed seed, generation in 6.14.4 is fully reproducible, although it does not match the results from previous pSeven Core versions. Note also that other DoE techniques (space-filling DoE) were not affected.
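The reproducibility property described above is the usual behavior of seeded pseudorandom generation: a fixed seed reproduces results exactly within one version of the generator algorithm, while a changed algorithm produces a different (but equally valid) design for the same seed. A generic NumPy sketch of seeded, reproducible DoE generation, not pSeven Core's actual generator (lhs_sample is a hypothetical illustration):

```python
import numpy as np

def lhs_sample(n_points, dim, seed):
    """Tiny Latin hypercube sketch: one point per stratum, shuffled per axis."""
    rng = np.random.RandomState(seed)
    # stratify [0, 1) into n_points equal bins, one random point per bin
    u = (np.arange(n_points)[:, None] + rng.rand(n_points, dim)) / n_points
    for j in range(dim):
        rng.shuffle(u[:, j])  # permute strata independently in each dimension
    return u

a = lhs_sample(5, 2, seed=42)
b = lhs_sample(5, 2, seed=42)
assert np.array_equal(a, b)  # fixed seed: fully reproducible within one version
```

Changing the internals of lhs_sample (for example, the order of rng calls) would break bit-for-bit reproducibility against older results while still honoring the seed, which is exactly the situation described for 6.14.4.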
15.6.27.2. Bugfixes¶
- 17500 GTApprox: fixed a bug in the heteroscedastic Gaussian processes algorithm (see Heteroscedastic data) due to which the GTApprox/Heteroscedastic option had no effect.
- 17500 GTApprox: fixed a bug in output noise variance handling which could cause the “invalid point weights” error when the output noise data is not available for some output component (the outputNoiseVariance argument to build() contains a column filled with NaN values).
- 17468 GTApprox: fixed build() requiring numeric values of output noise variance (elements of outputNoiseVariance) for NaN values of outputs (elements of the y training sample).
- 17284 GTDoE: fixed incorrect behavior of the GTDoE/Adaptive/OneStepCount option, due to which the number of points added to the DoE per iteration was always 1 even if a greater number was allowed.
15.6.28. pSeven Core 6.14.3¶
15.6.28.1. New Features¶
- 17496 GTApprox now supports model export to a Functional Mock-up Unit for Model Exchange (FMI standard) in addition to the FMU for Co-Simulation export support added in version 6.9. See section Model Export and export_fmi_me() for details.
- 17486 You can now remove the accuracy evaluation and smoothing information from GTApprox models. For some models, this can noticeably reduce the model size (memory consumption) or the volume of exported code. See modify() for details.
15.6.28.2. Updates and Changes¶
- 17716 GTApprox, GTDF, GTDR: added training time to model details (see section Training Time in Model Details). The training time is now also printed to the training log.
- 17678 GTDoE: slightly improved the generation speed and quality of adaptive DoE produced by build_doe().
- 17764 GT: removed needless details from some exception messages for clarity.
15.6.28.3. Documentation¶
- 17702 Updated section Model Export in the GTApprox guide.
- 17716 Updated section Model Details in the GTApprox guide.
- 17702 Added the gtapprox.export_fmi_me() function description (model export to FMU for Model Exchange).
- 17700 Updated the gtapprox.Model.modify() method description.
- 17716 Updated the gtdf.Model.details and gtdr.Model.details descriptions (training time added).
15.6.29. pSeven Core 6.14.2¶
15.6.29.1. Documentation¶
- 17779 Updated section License Usage.
- 17779 Updated the GTDoE/Technique option description.
15.6.29.2. Bugfixes¶
- 17766 17779 GT: Fixed a crash when trying to get license information (see License Usage) for a standalone license.
15.6.30. pSeven Core 6.14.1¶
This is a maintenance release, which does not contain any functional changes or updates.
Note
Starting with this release, updates following a release of a stable pSeven Core version have short version numbers, for example 6.14.1 instead of 6.14 Service Pack 1. This change does not affect the internal version format used when importing the da package.
15.6.31. pSeven Core 6.14¶
Note
17492 This release requires an updated license file. Licenses issued for the previous versions of pSeven Core are not compatible with versions 6.14 and above. For more details on how to obtain a new license file, see section License Setup.
15.6.31.1. New Features¶
- 17074 GTApprox is updated with a new method of model training parallelization, which can provide a significant increase in performance when training composite models on multi-core CPUs. See section Submodels and Parallel Training for details.
- 15542 Added the support for input and output metainformation to GTApprox, GTDF and GTDR models. See section Model Metainformation for details.
- 16153 11821 17268 Added the Adaptive Design technique, provided by build_doe(). This technique further improves the blackbox-based adaptive DoE methods:
- Supports adaptive DoE generation in the presence of functional constraints. To generate a constrained DoE, use gtopt.ProblemGeneric as a blackbox.
- Improved quality of adaptive DoE generation and an option to control the trade-off between speed and quality, see GTDoE/AdaptiveDesign/SearchIntensity.
- Supports the response contour restoration mode, see GTDoE/AdaptiveDesign/ContourLevel.
15.6.31.2. Updates and Changes¶
- GTOpt:
- 17476 Improved multi-objective optimization algorithms for high-dimensional problems.
- 17012 When provided with an initial data sample, GTOpt now does not request additional evaluations for linear and quadratic functions but restores their analytical form using the initial sample data only.
- 16688 Some performance improvements in surrogate-based optimization methods.
- 16153 11821 Added support for the new common p7core.Result class.
- GTApprox:
- 17326 Smart training (build_smart()) now supports the automatic output transformation feature (applies GTApprox/OutputTransformation when needed), which helps in cases when values of some training outputs are exponentially distributed. Also improved the accuracy of the statistical tests used to determine whether transformation should be applied when GTApprox/OutputTransformation is manually set to "auto".
- 16175 Improved the special data analysis algorithm used in smart training to prevent oscillations of model outputs when a smooth model is required.
- 17419 Optimized the exported C code for the accuracy evaluation (AE) information in Gaussian Processes models. Compiled GP models with AE are now up to 2 times smaller. This change does not affect the models which do not store AE information.
- 17326 An existing HDA model can now be used as an initial model for the HDAGP technique (see the initial_model parameter to build()).
- 16951 GTApprox is now more responsive to interrupts when training a model with the GP, HDAGP, or SGP technique.
- 16951 Added an upper limit of 4000 for the GTApprox/SGPNumberOfBasePoints option.
- 16951 The maximum training sample size supported by the GP and HDAGP techniques is now limited to 4000 points.
- 17367 Added new special values for the HDA technique options which enable automatic tuning; these values are now default. The updated options are GTApprox/HDAFDGauss, GTApprox/HDAFDLinear, GTApprox/HDAFDSigmoid, GTApprox/HDAMultiMax, GTApprox/HDAMultiMin, GTApprox/HDAPhaseCount, GTApprox/HDAPMax, and GTApprox/HDAPMin.
- 17074 Added the "Auto" setting for the GTApprox/RSMStepwiseFit/inmodel option, which selects the type of initial RSM model when using the stepwise regression method. This setting (now default) performs automatic selection based on the number of model terms. This modification noticeably speeds up stepwise regression RSM model training in some cases, for example, in the case of quadratic dependency with multiple inputs or in the presence of categorical variables.
- 17436 Improved GTApprox stability by addressing a few potential memory issues.
- 15542 GTDR: added the annotations and comment attributes and the modify() method to the GTDR Model.
- 16661 GTDoE: Improved quality of the OLHS technique in high-dimension cases.
- 17329 GTSDA: Improved accuracy and performance of outlier detection algorithms.
- GT:
- 17389 Increased the timeout between watcher calls (see da.p7core.watchers) so that a watcher now receives no more than 1-2 messages per second.
- 16222 Added a new common p7core.Result class, which is now used by GTDoE (build_doe() only) and GTOpt when compatibility mode is disabled (see compatibility in solve()).
- 16153 16222 ProblemGeneric is now used as the base blackbox class both in GTOpt and GTDoE in order to support the constrained DoE generation method provided by build_doe().
- 16153 16222 Added the new "@GT/VariableType" hint in order to support discrete and categorical variables in ProblemGeneric (required by build_doe()). See Hint Reference for details.
15.6.31.3. Documentation¶
- 17074 17287 Added section Submodels and Parallel Training.
- 15542 17266 Updates related to the model input and output metainformation support:
- Added section Model Metainformation.
- Added section Input and Output Descriptions.
- Updated section Regression Model Information.
- Updated section Structure Reference.
- Updated section Approximation Model Structure.
- Updated section Model Details.
- Updated section Data Fusion Model Structure.
- 17074 17287 Added the GTApprox/SubmodelTraining option description.
- 16153 11821 17268 16222 Added the build_doe() method description.
- 16153 11821 17268 16222 Added the da.p7core.Result description.
- 16153 11821 17268 16222 Added the GTDoE/AdaptiveDesign/SearchIntensity and GTDoE/AdaptiveDesign/ContourLevel option descriptions.
- 17074 17287 Updated section Output Dependency Modes.
- 17074 17287 Updated section Cross-validation procedure details.
- 16951 Updated section Sample Size Requirements.
- 16153 11821 17268 16222 Updated section Hint Reference.
- 17270 Updated the GTOpt/BatchSize and GTOpt/ResponsesScalability option descriptions.
- Updated GTApprox option descriptions:
- 17367 GTApprox/HDAFDGauss
- 17367 GTApprox/HDAFDLinear
- 17367 GTApprox/HDAFDSigmoid
- 17367 GTApprox/HDAMultiMax
- 17367 GTApprox/HDAMultiMin
- 17367 GTApprox/HDAPhaseCount
- 17367 GTApprox/HDAPMax
- 17367 GTApprox/HDAPMin
- 17270 17450 GTApprox/MaxAxisRotations
- 17450 GTApprox/MoATechnique
- 17074 17287 GTApprox/RSMStepwiseFit/inmodel
- 17287 GTApprox/DependentOutputs
- 17267 16951 GTApprox/SGPNumberOfBasePoints
- 17371 Updated section System Requirements.
- 15542 17266 Added and updated method and attribute descriptions related to the model input and output metainformation support:
- gtapprox.export_fmi_cs()
- gtapprox.Builder.build()
- gtapprox.Builder.build_smart()
- gtapprox.Model.annotations
- gtapprox.Model.available_sections()
- gtapprox.Model.comment
- gtapprox.Model.details
- gtapprox.Model.fromstring()
- gtapprox.Model.is_smoothed
- gtapprox.Model.load()
- gtapprox.Model.modify()
- gtdf.Builder.build()
- gtdf.Builder.build_BB()
- gtdf.Builder.build_MF()
- gtdf.Model.annotations
- gtdf.Model.available_sections()
- gtdf.Model.comment
- gtdf.Model.details
- gtdf.Model.fromstring()
- gtdf.Model.load()
- gtdf.Model.modify()
- gtdr.Builder.build()
- gtdr.Model.annotations
- gtdr.Model.comment
- gtdr.Model.details
15.6.31.4. Compatibility Issues¶
This release contains a number of changes in GTApprox which do not create major compatibility issues but may require some updates in your code and introduce certain changes in its behavior in some cases. For details, see section Version Compatibility Issues.
15.6.31.5. Bugfixes¶
- GTApprox:
- 16174 Fixed incorrect clustering of data samples having tensor structure in the Mixture of Approximators technique, which could lead to inability to train a model if GTApprox/MoATechnique is set to "TA" or "TGP".
- 17450 Fixed a bug in the Mixture of Approximators technique, which could cause an unhandled error when GTApprox/MoATechnique is set to "iTA".
- 17404 Fixed a numerical issue which could cause the InvalidProblemError exception when using the Gaussian Process technique with a small training sample containing a constant input and specifying different sample point weights.
- 17353 Fixed a slowdown of the Tensor Approximation technique in the case when some tensor factors are processed with the HDA technique and GTApprox/MaxAxisRotations is used.
- 17669 Fixed incorrect (sometimes crashing) internal validation for Tensor Approximation models in the case when the training sample contains ambiguous points.
- 17673 Fixed a crash in internal validation when all point weights are set to 0.
- 16174 Several minor fixes in distributed model training (see set_remote_build()).
- 17436 Minor internal algorithm fixes.
- GTDoE:
- 17276 Fixed a bug in the OLHS technique due to which it sometimes did not preserve properties of the initial LHS data sample (see Property Preservation Mode).
- 17289 Fixed incorrect behavior of the Adaptive DoE technique with an initial sample in the presence of categorical variables.
- 17329 GTSDA: fixed incorrect calculation of correlation coefficients in some cases, in particular when the input sample contains constant components or duplicated points.
15.6.32. pSeven Core 6.13 Service Pack 1¶
15.6.32.1. New Features¶
- GTApprox can now automatically apply log transformation to outputs in the training sample, which improves model accuracy in cases when values of some outputs are exponentially distributed. You can also select transformations for specific outputs manually — see GTApprox/OutputTransformation for details.
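The effect of log output transformation can be illustrated outside of GTApprox: when output values grow exponentially, fitting in log space and transforming predictions back is far more accurate than fitting the raw values. A minimal NumPy sketch (illustrative only, not the GTApprox algorithm):

```python
import numpy as np

x = np.linspace(0.1, 5.0, 50)
y = np.exp(1.3 * x + 0.2)  # exponentially distributed output values

# direct linear fit: poor for exponential data
coef_direct = np.polyfit(x, y, 1)
pred_direct = np.polyval(coef_direct, x)

# fit in log space, then transform predictions back
coef_log = np.polyfit(x, np.log(y), 1)
pred_log = np.exp(np.polyval(coef_log, x))

def rmse(pred):
    return np.sqrt(np.mean((pred - y) ** 2))

print(rmse(pred_direct) > rmse(pred_log))  # True: the log transform helps
```

The same principle applies with any approximation technique in place of the linear fit; GTApprox applies such a transformation automatically when it detects that it is beneficial.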
15.6.32.2. Documentation¶
- Added the GTApprox/OutputTransformation option description.
15.6.32.3. Bugfixes¶
- GTOpt: fixed incorrect behavior in some cases when the direct surrogate-based optimization method (see Direct SBO) is applied to a robust optimization problem: the solving process could not finish for a problem which defines an objective with the @GTOpt/EvaluationCostType hint set to "Expensive", adds stochastic variables (see set_stochastic()), and sets GTOpt/GlobalPhaseIntensity to 0.
15.6.33. pSeven Core 6.13¶
15.6.33.1. New Features¶
- pSeven Core for Windows x64 now supports systems with more than 64 logical processors or several processor groups (see the Processor Groups section in Microsoft Docs for details). As a result, pSeven Core 6.13 shows a significant performance increase when using high-degree parallelization on Windows systems with more than 64 logical processors, compared to previous versions of pSeven Core for Windows, where the ineffective distribution of threads among logical processors could negatively affect performance.
15.6.33.2. Updates and Changes¶
- GTOpt:
- Added the @GTOpt/ExpensiveEvaluations hint, which allows specifying the evaluation limit individually for any expensive objective or constraint.
- Optimization results are now always sorted by values of variables.
- GTApprox:
- Improved the algorithm that splits the data sample given to build_smart() into the training and test subsamples when @GTApprox/TrainingSubsampleRatio is specified. The test sample points now better cover the function (response) space.
- GTDF:
- Improved quality of the low-fidelity model bias compensation algorithm used when GTDF/UnbiasLowFidelityModel is enabled.
- General:
- Exceptions raised when input samples contain invalid data now provide more details explaining why the input was not accepted.
- The FlexNet License Finder dialog will no longer be shown when a license is not found.
15.6.33.3. Documentation¶
- Added the @GTOpt/ExpensiveEvaluations hint description.
- Updated the GTOpt/MaximumIterations option description.
- Updated the GTOpt/MaximumExpensiveIterations option description.
- Removed the deprecated Release Cycle section since the release schedule described there is no longer used.
15.6.33.4. Bugfixes¶
- GTOpt:
- Fixed a bug which in some cases led to exceeding the evaluation budget set by GTOpt/MaximumIterations.
- Fixed incorrect calculation of the budget for expensive evaluations in some cases: the evaluations of initial guesses expended the budget set by GTOpt/MaximumExpensiveIterations although they should not.
- Fixed incorrect caching of expensive response values when working in batch mode: the Solver lost useful information because it did not cache the values of responses it did not request to evaluate, even if these responses were returned by the problem.
- Fixed a bug in handling NaN response values in batch mode which could negatively affect the solution quality.
- Fixed a bug which sometimes caused a stack corruption warning in Python 2.5.
- GTApprox:
- Fixed incorrect build_smart() behavior when training a model where all variables are categorical (see GTApprox/CategoricalVariables) and exact fit is required (see @GTApprox/ModelFeatures and GTApprox/ExactFitRequired).
- Fixed a bug due to which wrong indices of categorical variables were stored in model details.
- GTDF:
- Fixed a bug in the low-fidelity model bias compensation algorithm (see GTDF/UnbiasLowFidelityModel) due to which it worked incorrectly when parallelization is enabled.
- GTApprox, GTDF, GTDR:
- Fixed a malformed exception on loading a corrupt model.
- General:
- Fixed a FutureWarning exception when using pSeven Core with NumPy 1.14.
15.6.34. pSeven Core 6.12 Service Pack 2¶
15.6.34.1. Updates and Changes¶
- GT: if you are using the Linux version of pSeven Core, you may notice a general performance increase thanks to recent pSeven Core for Linux build optimizations.
- GTOpt: the GTOpt/BatchSize option is now respected when evaluating points from the initial sample (the sample_x argument to solve()). Previously the initial sample points were never evaluated in batches even if batch mode was enabled for the solver.
- GTApprox: slightly improved algorithm stability for the Gaussian Processes approximation technique.
- GTSDA: Mutual Information, Distance Correlation, and Partial Distance Correlation techniques now work significantly faster.
- GTSDA: reduced memory consumption when calculating Distance Correlation.
- GTSDA: improved the dependency-based feature selection algorithm (see Dependency-based feature selection).
- GTSDA: if GTSDA/Checker/PValues/Method is "Auto", GTSDA now selects the p-values calculation algorithm taking sample size into account (previously the selection was based only on the technique specified by GTSDA/Checker/Technique).
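As a point of reference for the Distance Correlation technique mentioned above, its standard definition can be sketched in a few lines of plain Python. This is a standalone illustration, not pSeven Core code; the brute-force O(n²) distance matrices it builds also show why memory consumption is a concern for this technique:

```python
def _centered_distances(xs):
    # Pairwise absolute distances, double-centered (row, column, grand means).
    n = len(xs)
    d = [[abs(xs[i] - xs[j]) for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d]
    col = [sum(d[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - col[j] + grand for j in range(n)] for i in range(n)]

def distance_correlation(xs, ys):
    # dCor = sqrt(dCov^2 / sqrt(dVar(x) * dVar(y))), brute force over all pairs.
    n = len(xs)
    a, b = _centered_distances(xs), _centered_distances(ys)

    def mean_product(m1, m2):
        return sum(m1[i][j] * m2[i][j] for i in range(n) for j in range(n)) / n ** 2

    dcov2, dvar_x, dvar_y = mean_product(a, b), mean_product(a, a), mean_product(b, b)
    if dvar_x == 0 or dvar_y == 0:
        return 0.0
    return (dcov2 / (dvar_x * dvar_y) ** 0.5) ** 0.5

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x + 1.0 for x in xs]   # exactly linearly dependent on xs
dcor = distance_correlation(xs, ys)  # ≈ 1.0 for an exact dependence
```

Unlike Pearson correlation, distance correlation is zero only for independent samples, which is why it is used for dependency checks.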
15.6.34.2. Documentation¶
- Updated the GTSDA/Checker/PValues/Method option description.
15.6.34.3. Bugfixes¶
- GT: fixed a bug in handling invalid option names.
- GT: fixed some exceptions causing an additional warning (“During handling of the above exception, another exception occurred”) in Python 3.
- GTOpt: fixed a bug due to which some of the evaluation results for points from the initial guesses sample (the sample_x argument to solve()) were sometimes ignored by the solver.
- GTOpt: fixed a small discrepancy between the values of variables provided in the initial guesses sample and their values actually sent to evaluate() to obtain the values of objectives and constraints when the latter are not provided as sample_f and sample_c to solve().
- GTOpt: fixed a rare memory violation when a large number of parallel threads is used by the solver.
- GTApprox: fixed a bug in Gaussian Processes model evaluation due to which model output for a single input point could be slightly different from the output for the same input point passed as a part of an input sample.
- GTSDA: fixed GTSDA/Ranker/NoiseCorrection affecting the Screening Indices technique (this option is intended for the Sobol indices only).
- GTSDA: fixed incorrect Sobol indices calculation for very noisy inputs in case when GTSDA/Ranker/NoiseCorrection is enabled.
- GTSDA: fixed incomplete RankResult.info (info["Ranker"]["Detailed info"] missing) for Sobol indices calculated using the EASI technique.
15.6.35. pSeven Core 6.12 Service Pack 1¶
15.6.35.1. Updates and Changes¶
- GTSDA: added the GTSDA/MaxParallel option.
- GTSDA: improved string representation of GTSDA result objects.
15.6.35.2. Documentation¶
- Added the GTSDA/MaxParallel option description.
- Updated the pSeven Core End-User License Agreement.
15.6.35.3. Bugfixes¶
- GTOpt: fixed a bug due to which GTOpt and some DoE techniques repeatedly degraded the default number of parallel threads to use when the GTOpt solver (or DoE generator) was used in a cycle, even if the optimizer (or generator) was re-created at each cycle step.
- GTOpt: fixed a numerical instability in solving some linear problems that led to infeasible results even though an internally feasible solution was found.
15.6.36. pSeven Core 6.12¶
15.6.36.1. Updates and Changes¶
- GTOpt: added the capability to run in deterministic mode which makes the optimization process reproducible exactly. See the related GTOpt/Deterministic and GTOpt/Seed options.
- GTOpt: added an example to illustrate GTOpt/ResponsesScalability use-cases.
- GTApprox: added an example to illustrate GTApprox/MaxAxisRotations use-cases.
- GTOpt, GTApprox, GTDF, GTDR, GTDoE: changed the default behavior of GTOpt/MaxParallel, GTApprox/MaxParallel, GTDF/MaxParallel, GTDR/MaxParallel, and GTDoE/MaxParallel options to limit parallelization on hyper-threading CPUs. See option descriptions for details.
- GTSDA: the control variables sample argument is now mandatory for check() if a partial correlation method is selected using the GTSDA/Checker/Technique option.
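The deterministic mode added above follows the usual seeded-PRNG contract: the same GTOpt/Seed reproduces the same run. A plain-Python analogy of that contract (an illustration only, not the pSeven Core API):

```python
import random

def pseudo_solver_run(seed, n_points):
    # Stand-in for a seeded optimization run: with a fixed seed,
    # the generated point sequence is reproduced exactly.
    rng = random.Random(seed)
    return [rng.uniform(0.0, 1.0) for _ in range(n_points)]

first = pseudo_solver_run(seed=42, n_points=5)
second = pseudo_solver_run(seed=42, n_points=5)
assert first == second  # identical seeds give identical runs
```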
15.6.36.2. Documentation¶
- Added GTOpt/Deterministic and GTOpt/Seed option descriptions.
- Updated GTApprox/MaxParallel, GTDF/MaxParallel, GTDR/MaxParallel, and GTOpt/MaxParallel option descriptions.
15.6.36.3. Bugfixes¶
- GTApprox: fixed SmartSelection not being able to train a model if the test sample contained NaN values. Now user-provided test samples are checked for NaN values and constant columns. Depending on GTApprox/InputNanMode and GTApprox/OutputNanMode, either an exception is raised or the test point is removed from the test sample.
- GTApprox: fixed the bug due to which internal validation ignored the GTApprox/MaxParallel option value.
- GTApprox: fixed a crash that could occur when calling the select_subsample function.
- GTSDA: fixed a bug which sometimes resulted in p_values miscalculation in check().
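The NaN handling fix above implies a cleanup pass over user-provided test samples. Below is a hypothetical plain-Python sketch of the "remove the test point" policy; the actual behavior is governed by GTApprox/InputNanMode and GTApprox/OutputNanMode:

```python
import math

def drop_nan_rows(sample):
    # Discard any test point (row) that contains a non-numeric (NaN) value.
    return [row for row in sample if not any(math.isnan(v) for v in row)]

test_sample = [[1.0, 2.0], [float("nan"), 3.0], [4.0, 5.0]]
clean = drop_nan_rows(test_sample)  # [[1.0, 2.0], [4.0, 5.0]]
```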
15.6.37. pSeven Core 6.11 Service Pack 1¶
15.6.37.1. New Features¶
- GT: introduced official Python 3.6 support. The current implementation of pSeven Core will also support Python 2 versions from 2.5 to 2.7. See section System Requirements for details.
- GTApprox: the smart training procedure was extended with a special analysis that helps prevent oscillations of the model between the training points when the smooth model requirement is specified (see section Model Features for details).
- GTApprox: added a new GTApprox/MaxAxisRotations option that enables using model gradients during training in order to obtain smoother models.
- GTOpt: added a new ProblemFitting class to solve fitting problems (see section Data Fitting Problem).
- GTOpt: added the GTOpt/ResponsesScalability option that changes the computational model used by surrogate-based and robust optimization methods.
- GTOpt: added a new scheduler taking into account the value of GTOpt/ResponsesScalability option and using this information to facilitate calculations.
15.6.37.2. Updates and Changes¶
- GTOpt: added a watcher implementation to interrupt the optimization process using the Ctrl+C hotkey.
- GTOpt: in batch mode, the optimizer will now determine the batch size automatically unless GTOpt/BatchSize is set to a non-default value. See the option description for details.
- GTOpt: removed the gtopt.ProblemGeneric.elements_unused_hints() method.
15.6.37.3. Documentation¶
- Updated section System Requirements.
- Updated section Introduction.
- Added the GTApprox/MaxAxisRotations option description.
- Added the GTOpt/ResponsesScalability option description.
- Updated the GTOpt/BatchSize option description.
- Removed section Regression tests.
- Removed paragraph Integrating GTApprox with Excel.
- Renamed section Benchmarks and Tests.
15.6.37.4. Compatibility Issues¶
- The new computational model tuned by the GTOpt/ResponsesScalability option may introduce changes in optimizer behavior if it was used with a non-default value of GTOpt/BatchSize. To reproduce previous optimization results for cases where the GTOpt/BatchSize value was other than 1, the GTOpt/ResponsesScalability value should be set equal to GTOpt/BatchSize.
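For example, a run that previously used a non-default batch size could be reproduced with option values like the following (the value 8 is purely illustrative):

```python
# Hypothetical option values: 8 stands in for whatever batch size was used before.
options = {
    "GTOpt/BatchSize": 8,
    "GTOpt/ResponsesScalability": 8,  # set equal to GTOpt/BatchSize to reproduce old results
}
```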
15.6.37.5. Bugfixes¶
- GTOpt: eliminated the possible influence of initial designs violating confluent bounds on the initial training sample — now such initial designs are ignored as intended.
- GTOpt: fixed incorrect behavior when received evaluation results are greater than the maximum floating point value.
15.6.38. pSeven Core 6.11¶
15.6.38.1. New Features¶
- GTOpt: added Direct Surrogate-Based Optimization (Direct SBO) technique which uses an alternative criterion to select new points for evaluation (see section Direct SBO for details). The new technique can be useful for constrained or multi-objective optimization problems when searching for non-trivial trade-offs between different expensive objectives and constraints.
- GTApprox: introduced weighted descriptive statistics in approximation models. See Quality Assessment for details.
15.6.38.2. Updates and Changes¶
- GTOpt: increased performance of stochastic optimization algorithms.
- GTOpt: intermediate optimization result can now be requested using a watcher. See section Watchers for details.
- GTApprox: increased robustness of the MoA clustering algorithm.
- GTApprox: added the ability to stop PLA model building at any time with no intermediate result.
- GTApprox: updated VBA wrapper code to provide better compatibility with Excel.
- GTApprox: added details about output noise variance to model info.
- GTApprox, GTDR: improved compatibility of models exported to the Octave format.
- GTApprox, GTDF, GTDR: accelerated calculations in case of batch operations.
- GTApprox, GTDF: added new methods that return the list of sections available in a model or a model file.
- GTSDA: improved performance of the Kendall Correlation technique.
- GTDoE: improved OLHS performance to enhance the space-filling criterion of generated DoE.
- GTDoE: changed generation result structure — now initial designs extended by new generated points are returned.
- GTDR: added support for the models exported to the Excel DLL format.
15.6.38.3. Documentation¶
- Updated section License Setup. Added a note on the necessity to update the vendor daemon and instructions on how to update.
- Updated the GTApprox/MoACovarianceType option description.
- Updated the GTOpt/GlobalPhaseIntensity option description.
- Updated section Known Issues.
- Updated section Surrogate Based Optimization: added section Direct SBO.
- Updated the GTDoE usage guide.
15.6.38.4. Compatibility Issues¶
- The GTOpt/GlobalPhaseIntensity option behavior has changed due to the addition of the Direct SBO technique. The default is now "Auto", and 0 turns on Direct SBO, which does not consider error estimations of surrogate models. See the Direct SBO section for more details.
15.6.38.5. Bugfixes¶
- GTOpt: fixed a rare crash that could occur when solving an expensive mixed-integer problem with frozen variables.
- GTOpt: fixed a crash when a problem had analytical gradients and some frozen variables.
- GTOpt: fixed incorrect treatment of initial designs violating confluent bounds — now such initial designs are ignored.
- GTOpt: fixed potential problems with DoE in highly eccentric polytopes.
- GTApprox: fixed a bug in the PLA technique that caused NaN values being returned in unexpected cases.
- GTApprox: fixed a bug that occurred when training PLA models in case of one-dimensional sorted input data.
- GTApprox: fixed validation of categorical variables when training models.
- GTApprox: fixed incorrect handling of point weights in MoA technique.
- GTApprox: fixed Unicode symbols issue in model annotations.
- GTDoE: fixed an error which resulted in wrong behavior of adaptive DoE in case of small initial sample.
15.6.39. pSeven Core 6.10¶
15.6.39.1. Updates and Changes¶
- GTOpt: improved the support for NaN objective and constraint values and missing values in the initial designs data. Such points are now taken into consideration if values of variables are within bounds defined by the problem. Also, GTOpt requests additional evaluations to fill the design points with missing values, and these evaluations do not consume the solving budget.
- GTApprox, GTDR: reduced memory usage when exporting models.
- GTApprox, GTDR: added str aliases for export formats in export_to(), compress_export_to(), and decompress_export_to().
15.6.39.2. Documentation¶
- Updated GTOpt documentation: sections Optimal Solution and Mean Variance Optimization, more details in the evaluate(), size_full(), enable_constraints_gradient(), and enable_objectives_gradient() descriptions.
- Updated the export_to(), compress_export_to(), and decompress_export_to() method descriptions.
- Updated the gtapprox.ExportedFormat and gtdr.ExportedFormat descriptions with str aliases that can be used to specify the model export format.
15.6.39.3. Bugfixes¶
- GTOpt: fixed non-deterministic behavior of surrogate based optimization algorithms in multithreaded environment.
- GTOpt: fixed incorrect behavior when a floating point number is specified as an initial guess for an integer variable. GTOpt will now raise the InvalidProblemError exception in this case.
- GTApprox: fixed internal validation (see section Internal Validation) ignoring GTApprox options when training the cross-validation models.
- GTApprox: fixed a bug in the HDA technique that caused accuracy degradation of HDA models.
- GTApprox: fixed a bug in tensor approximation due to which model smoothing could not be applied to some TA and TGP models.
- GTApprox: fixed incorrect behavior of the @GTApprox/TrainingSubsampleRatio smart training hint in some cases.
- GTApprox, GTDF, GTDR: correct exceptions are now raised when trying to load a model from an invalid file.
- GTDR: fixed incorrect export of Feature Extraction models to C.
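The integer initial-guess fix above amounts to rejecting non-integral guesses instead of silently accepting them. A hypothetical plain-Python sketch of such a check (not GTOpt's actual implementation, which raises InvalidProblemError):

```python
def check_integer_guess(value):
    # Reject a floating-point initial guess for an integer variable
    # unless it is exactly integral.
    if value != int(value):
        raise ValueError("initial guess %r is not integral" % value)
    return int(value)

check_integer_guess(3.0)  # accepted, returns 3
```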
15.6.40. pSeven Core 6.9 Service Pack 1¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.40.1. Documentation¶
- Updated section System Requirements.
- Updated section License Setup.
15.6.40.2. Bugfixes¶
- Fixed incompatibility with some old Linux distributions that was caused by a build inconsistency: pSeven Core required a newer version of the GNU C Library (glibc). Note that the current requirements (Linux kernel 2.6.18, glibc 2.5) are still different from older pSeven Core versions — see System Requirements for details.
- Fixed a bug that could cause conflicts with third-party Python modules.
15.6.41. pSeven Core 6.9¶
Note
This release depends on the updated licensing system: floating licenses for pSeven Core 6.9 and above require the updated DATADVANCE vendor daemon. To get the updated daemon, re-download the License Server package available from the Download page at DATADVANCE. The new version of the daemon is compatible with earlier versions of the license server and pSeven Core licenses. For more details on the server and daemon setup, see Server Configuration.
15.6.41.1. New Features¶
- GTApprox: added support for exporting an approximation model as a Functional Mock-up Unit for Co-Simulation (FMI standard). See section Model Export and export_fmi_cs() for details.
- GTApprox: optimized model training on HPC clusters for less load and higher performance.
- GTApprox: added the capability to train a model on a remote host which is not a cluster submit node (remote training without an HPC cluster). See details in set_remote_build().
- GTOpt: you can now specify which optimization methods to use when solving the problem. By default, GTOpt selects algorithms automatically, but now you can also explicitly configure the problem as single- or multi-objective, enable robust or surrogate based optimization, select global search methods, and more. See GTOpt/Techniques for details.
15.6.41.2. Updates and Changes¶
- GTApprox: added the capability to set remote environment variables in remote model training (see set_remote_build()).
- GTApprox: added support for calculating all model outputs and writing them to a cell range for models exported to the Excel DLL format.
- GTApprox: improved performance of GTApprox models exported to the Excel DLL format.
- GTApprox: if the model includes submodels, the technique name in model details will now be "Composite", not "Auto".
- GTApprox: improved handling of categorical variables in SmartSelection algorithms.
. - GTApprox: improved handling of categorical variables in SmartSelection algorithms.
- GTApprox: clarified error messages in SmartSelection.
- GTOpt: reworked the computational budget allocation policies in various GTOpt algorithms, making them more consistent and, as a result, increasing the stability of the affected optimization methods. This change primarily affects methods involving global optimization stages, such as the surrogate based optimization methods that deal with computationally expensive problems (including the use of these methods in robust optimization problems).
- GTOpt: improved results filtering in multi-objective robust optimization.
- GTOpt: added proper support for NaN objective and constraint values and missing values in the initial designs data.
15.6.41.3. Documentation¶
- Updated GTOpt documentation:
- Added section Local and Global Methods.
- Updated section Robust Optimization — see Robust Problem Optimal Solution.
- Added GTOpt/Techniques description.
- Updated section Model Details.
- Updated section Model Export.
- Added the export_fmi_cs() description.
- Updated the set_remote_build() description.
15.6.41.4. Compatibility Issues¶
- Due to updates in computational budget allocation policies in GTOpt, you can observe changes in optimization results as compared to the results from previous versions. This is not a compatibility issue at the user level and should not require changes in code, except possibly adjusting optimization options in specific cases — for details, see sections Version Compatibility Issues and Local and Global Methods.
15.6.41.5. Bugfixes¶
- GTApprox: fixed SmartSelection not being able to train the model if given a test sample that contains only one point.
- GTApprox: fixed incorrect SmartSelection behavior in some tasks where the training sample includes categorical variables.
- GTApprox: fixed smoothing methods accepting NaN smoothness factor values.
- GTApprox: fixed incorrect order of compiler options in the Model Export example.
- GTApprox: fixed incorrect export of model to C in case when the model has many independent outputs.
- GTApprox: fixed model details sometimes including option values that were not set by user (showing defaults).
- GTApprox: fixed a bug in data type conversion that could crash the builder.
- GTApprox: fixed builder always raising exception when interrupted by a watcher, even if the model is already built.
- GTOpt: fixed a bug due to which optimization log could appear in the console even if a logger is not set.
- GTSDA: fixed a bug in Kendall correlation due to which it could be less than -1.
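For context on the Kendall correlation bound mentioned above: the coefficient counts concordant versus discordant pairs, which by construction keeps it within [-1, 1]. A brute-force plain-Python sketch of the standard tau-a definition (not pSeven Core code):

```python
def kendall_tau(xs, ys):
    # Tau-a: (concordant - discordant pairs) / total pairs, hence in [-1, 1].
    n = len(xs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = (xs[i] > xs[j]) - (xs[i] < xs[j])  # sign of x difference
            dy = (ys[i] > ys[j]) - (ys[i] < ys[j])  # sign of y difference
            s += dx * dy
    return 2.0 * s / (n * (n - 1))

xs = [1.0, 2.0, 3.0, 4.0]
kendall_tau(xs, xs)                  # 1.0: fully concordant
kendall_tau(xs, list(reversed(xs)))  # -1.0: fully discordant
```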
15.6.42. pSeven Core 6.8¶
15.6.42.1. New Features¶
- GTApprox: approximation models can now predict undefined function behavior, and training samples including non-numeric values are supported. Depending on option settings, data points with non-numeric values can be either accepted (and used in prediction) or automatically removed from the training sample — see section Sample Cleanup for details.
- GTApprox: smart training can now select train and test subsets from the given data sample. See section Training Features for details.
- GTApprox: added coefficient of determination (R2) to model error metrics. See section Error metrics for details.
- GTApprox: added the support for exporting models into a special format that provides better compatibility with Excel. It exports C code ready to be compiled into a DLL that can be imported in Excel without making changes in the code manually.
- GTApprox: added the Table Function technique (simple table lookup). See section Table Function for details.
- GTDF, GTDR: data points containing non-numeric values can now be automatically removed from the training sample (see GTDF/InputNanMode, GTDR/InputNanMode).
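The coefficient of determination added to the error metrics above is the standard R² statistic; a minimal plain-Python illustration of the textbook formula (not pSeven Core's implementation):

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot; 1.0 means a perfect fit,
    # 0.0 means no better than predicting the mean.
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 1.0 (perfect fit)
```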
15.6.42.2. Updates and Changes¶
- GTApprox: significantly improved performance of cross validation in smart training. Also increased speed of smart training in general thanks to internal improvements in technique parameter selection.
- GTApprox: pre- and post-processing functionality is now completely integrated into Smart Selection (see Smart Training) and is no longer available as a separate tool.
- GTApprox: training data saved to the model (training_sample) now also contains the test sample if it was used when training the model.
- GTApprox: model details now include smart training hints, test sample statistics, and detailed information about component-wise models. See section Model Details.
- GTApprox: improved performance of the High-Dimensional Approximation (HDA) technique.
- GTApprox: removed the deprecated Linear Regression (LR) technique.
- GTApprox: the number of numeric values in componentwise iv_info (iv_info["Componentwise"]["Count"]) for constant outputs is now set to 0 to indicate that the validation sample is empty in this case.
- GTOpt: improved solving algorithm for mixed-integer surrogate based optimization (SBO) problems — the points to evaluate that are generated at later solving stages are now more evenly distributed in the design space.
- GTOpt: narrowed the selection of optimum solutions in robust optimization problems to avoid adding dominated points into the optimal set.
15.6.42.3. Documentation¶
- Added an example of integrating GTApprox with Excel showing how to train and evaluate approximation models from an Excel workbook.
- GTApprox documentation:
- Updated section Model Details.
- Updated section Sample Cleanup, added a description of NaN value handling.
- Updated section Training Features.
- Updated section Model Smoothing.
- Added option descriptions: GTApprox/InputNanMode and GTApprox/OutputNanMode.
- Added the @GTApprox/TrainingSubsampleRatio hint description.
- Added the GTDF/InputNanMode option description.
- Added the GTDR/InputNanMode option description.
- Updated the ExportedFormat description.
- Corrected the evaluate() implementation in GTOpt code samples: example_gtopt_gradients.py and example_gtopt_generic.py.
- Fixed some math rendering issues in WebKit-based browsers.
15.6.42.4. Compatibility Issues¶
- The deprecated Linear Regression (LR) technique in GTApprox is finally removed in this release. It can no longer be selected; if you used LR, it has to be replaced with the RSM technique with GTApprox/RSMType set to "Linear". This will result in training the same model as before, since LR actually used the linear RSM technique internally.
- Since GTApprox pre- and post-processing functionality is now a part of the Smart Selection algorithm (see Smart Training), the older methods were removed in this release as deprecated. See section Version Compatibility Issues for details.
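The LR-to-RSM migration described above is a pure options change; an illustrative sketch of the replacement option values:

```python
# Before (no longer accepted): {"GTApprox/Technique": "LR"}
# After: linear RSM trains the same model LR produced internally.
options = {
    "GTApprox/Technique": "RSM",
    "GTApprox/RSMType": "Linear",
}
```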
15.6.42.5. Bugfixes¶
- GTApprox: fixed incorrect smart training behavior in the case when no suitable training technique can be found. A correct warning is now raised by build_smart().
- GTApprox: fixed a bug in anisotropic smoothing (smooth_anisotropic()) that led to an exception when the x_weights argument contained a row of zeros.
- GTApprox: fixed a few bugs in error-based smoothing (smooth_errbased()) related to incorrect handling of method arguments.
- GTApprox: fixed empty build_log after using modify().
. - GTApprox: fixed incorrect calculation of internal validation errors for models trained with the TA, iTA, or TGP technique in component-wise mode.
- GTApprox: corrected the text of the exception message from validate().
- GTApprox: fixed the private key request when remote model training is used (set_remote_build()).
- GTDF: corrected the text of the exception message from validate().
- GTOpt: fixed a bug in DoE generation at the model training stage in multi-objective surrogate based optimization (SBO) problems due to which the generated DoE did not fill the design space properly if problem variables had significantly different scales.
- GTOpt: fixed a bug in multi-objective surrogate based optimization due to which it did not count NaN results as performed evaluations, consequently exceeding the GTOpt/MaximumExpensiveIterations budget.
- GTOpt: fixed incorrect behavior (exceeding the budget) of the robust surrogate based optimization algorithm when NaN values are found in results of evaluations performed at the model training stage.
- GTOpt: fixed a bug in surrogate based optimization (SBO) due to which it could crash when training an internal approximation model.
- GTOpt: fixed the possibility of repeating evaluations in the global optimization mode.
- GTOpt: fixed ignoring invalid names or values of optimization hints. Correct warnings are now raised.
- GTOpt: fixed a rare crash that could occur when problem solving is finished.
- GTOpt: fixed incompatibility with Python 2.5 in the Surrogate Based Robust Optimization example.
- GTSDA: fixed a bug in the Mutual Information technique that could cause an index error.
15.6.43. pSeven Core 6.7¶
Note
Since this release, MACROS is renamed to pSeven Core. The algorithms and methods implemented in MACROS form the algorithmic core of pSeven, DATADVANCE’s design space exploration software platform, so the MACROS project was renamed to better reflect the relationship between the two products. The new name, pSeven Core, is now used throughout this manual.
15.6.43.1. Updates and Changes¶
- GTApprox: improved smart training performance, in particular when training GBRT models.
- GTApprox: you can now limit the total time of smart training using the @GTApprox/TimeLimit hint. See section Smart Training for details.
- GTApprox: significantly reduced memory usage of high-dimensional RSM and HDA models.
- GTApprox: added the capability to control the median, the 95th percentile, and the 99th percentile of absolute error when performing error-based model smoothing (see the error_type and error_thresholds arguments to smooth_errbased()).
15.6.43.2. Documentation¶
- Added the Surrogate Based Robust Optimization example showing the usage of the new robust optimization algorithm for problems that include expensive objectives or constraints (implemented in version 6.6).
- Updated section Smart Training.
- Added the description of the @GTApprox/TimeLimit smart training hint.
- Added section Noise Correction to section Sobol Indices in the GTSDA usage guide.
- Added a warning on possibly long run time in GTOpt examples with high CPU usage.
15.6.43.3. Compatibility Issues¶
- Due to the project rename, the main macros module was also renamed to p7core. The old module name can still be used as an alias, so this rename does not affect compatibility with scripts implemented for previous versions. However, the names macros and p7core cannot be used interchangeably in the same scope — see section Version Compatibility Issues for details.
15.6.43.4. Bugfixes¶
- GTApprox: fixed an error in smart training when the effective size of the input training sample is 1 point and this point is ambiguous.
- GTApprox: fixed incorrect smoothing of RSM models trained using the ElasticNet algorithm (see section Parameters Estimation in Response Surface Model).
- GTApprox: fixed incomplete model details (the "Regression Model" key missing) for RSM models trained using the ElasticNet algorithm.
- GTApprox: fixed incorrect handling of very large point weight values which could result in training invalid models when the GBRT, iTA, HDA, HDAGP, or RSM technique is used.
key missing) for RSM models trained using the ElasticNet algorithm. - GTApprox: fixed incorrect handling of very large point weight values which could result in training invalid models when GBRT, iTA, HDA, HDAGP, or RSM technique is used.
- GTApprox: fixed a rare bug due to which invalid data could be saved in iv_info or training_sample for models with multidimensional output or models including categorical variables.
- GTOpt: fixed a bug in the surrogate based optimization algorithm which could cause it to run indefinitely in some problems.
15.6.44. MACROS 6.6¶
15.6.44.1. New Features¶
- GTOpt: implemented a new robust optimization algorithm for problems that include expensive objectives or constraints. The new method is aimed to significantly reduce the required number of function evaluations at the cost of increasing the computational time spent in solving internal subproblems. Note that this overhead can be significant, since the method is intended for cases where function evaluations are time consuming or limited in number.
- GTApprox: added a new model training method that automatically chooses an approximation technique and tunes its option values in order to obtain the most accurate model. See build_smart() and section Smart Training for details.
- GTApprox: introduced a new model storage format that makes it possible to store training samples with the model, add detailed comments, and work with specific parts of the model in order to save or load it faster and consume less memory. See section Approximation Model Structure for full details.
- GTApprox: distributed model training (see set_remote_build()) has become even more effective thanks to the support for parallel training of componentwise models (which is now the default mode) and a higher degree of parallelization possible for models that include categorical variables (sub-models for different combinations of categorical variables can be trained in parallel).
- GTDF: data fusion models now use a storage format similar to GTApprox, providing the same capabilities. See section Data Fusion Model Structure for full details.
- GTSDA: added score2rank() — a convenience method to transform ranker scores returned by rank() into a sorted list which can be directly passed to select() as the ranking argument.
- GTSDA: added an option to calculate confidence intervals for the screening and first-order Sobol indices — see GTSDA/Ranker/VarianceEstimateRequired.
- GTSDA: added noise correction capability to the algorithm calculating Sobol indices. See sections Sobol Indices and Noise Correction for details.
15.6.44.2. Updates and Changes¶
- GTOpt: solve() now accepts initial samples containing points with values of variables that are outside of the bounds set in add_variable(). Previously these points were automatically removed from the initial sample. Now they are used, in particular, to build internal approximation models, thus increasing model accuracy and stability.
- GTOpt: improved result filtering — now it excludes solutions that are insufficiently close to optimum even if they satisfy problem constraints. Due to this the number of points in the optimization result can decrease compared to previous versions, but the result will contain more high-quality points.
- GTOpt: the @GTOpt/LinearityType hint for objectives and constraints is no longer ignored in robust optimization problems. However, GTOpt now assumes that objectives hinted as linear or quadratic do not depend on any stochastic variable. Note that it can lead to unexpected behavior in some problems with invalid formulation that were solved in previous versions of MACROS (see section Version Compatibility Issues for details).
- GTOpt: clarified messages describing reasons for stopping the optimization process when it is interrupted by a watcher (see Watchers).
- GTSDA: added p-value correction in correlation checks in order to avoid overconfident results (zero p-values).
- GTSDA: the screening method in GTSDA ranker (Screening Indices) no longer sets all scores to NaN when it encounters a NaN value in analyzed outputs.
15.6.44.3. Documentation¶
- Updated the GTOpt/OptimalSetType option description with regard to robust optimization problems.
- Clarified the Result.optimal.c and Result.infeasible.c descriptions with regard to robust optimization problems.
- Added section Smart Training to the GTApprox guide.
- Added the GTApprox build_smart() method description.
- Added section Approximation Model Structure describing the new GTApprox model storage format.
- Updated GTApprox Model method and attribute descriptions with regard to the new model storage format.
- Added more details to the GTApprox model iv_info attribute description.
- Added a description of the ElasticNet algorithm in section Response Surface Model (see Parameters Estimation).
- Updated the set_remote_build() function description.
- Added section Data Fusion Model Structure describing the new GTDF model storage format.
- Updated GTDF Model method and attribute descriptions with regard to the new model storage format.
- Updated section Sobol Indices: CSTA Method with more details on calculating second-order Sobol indices.
- Added GTSDA/Ranker/NoiseCorrection option description.
- Added GTSDA/Ranker/Sobol/IndicesType option description.
- Added GTSDA/Ranker/VarianceEstimateRequired option description.
- Added the score2rank() method description.
- Updated section Outlier Detection.
- Updated section Open Source Components.
15.6.44.4. Compatibility Issues¶
- A potential issue is the change in treatment of the @GTOpt/LinearityType hint for objectives and constraints in robust optimization problems. It does not directly affect compatibility but can lead to unexpected behavior in some problems with invalid formulation that were solved in previous versions of MACROS. See section Version Compatibility Issues for details.
15.6.44.5. Bugfixes¶
- GTApprox: fixed the incompatibility of models exported to the Octave format with older versions of Octave (3.0.5 and below).
- GTApprox: fixed incorrect behavior of postprocess() when validation is requested but the test sample is not given.
- GTApprox: fixed a crash when the training sample includes only 1 point and this point is passed as an array slice.
- GTApprox: fixed the Training with Limited Memory example not working in Python 2.5.
- GTApprox: fixed a runtime warning when the training sample contains NaN values.
- GTDF: fixed build_MF() mutating the list passed as the samples argument.
- GTDoE: fixed an incorrect error message in the case when the initial sample provided for adaptive DoE is too small.
- GTOpt: fixed a bug which could cause incorrect initial sample generation in multi-objective and surrogate based optimization algorithms.
- GTSDA: fixed a bug due to which Sobol indices calculated using the CSTA method (see Sobol Indices: CSTA Method) could have values greater than 1.
- GTSDA: fixed incorrect calculation of main Sobol indices (see Sobol Indices) for very noisy data.
- GTSDA: fixed runtime warnings when using partial Pearson correlation.
- GTSDA: fixed some minor typos in error messages.
- Statistical utilities: fixed incorrect handling of memory overflow errors in detect_outliers().
- General: fixed incorrect error messages about invalid option values for options that have special default values that are outside the valid range.
15.6.45. MACROS 6.5 Service Pack 1¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.46. MACROS 6.5¶
15.6.46.1. New Features¶
- GTApprox: added the support for model export, sample weighting and categorical variables to the piecewise-linear approximation (PLA) technique.
15.6.46.2. Updates and Changes¶
- GTOpt: the internal limit on the maximum number of expensive function evaluations will no longer override GTOpt/MaximumExpensiveIterations if the latter is set to a higher value.
- GTOpt: added several internal changes in multi-objective and surrogate based optimization algorithms that improve their stability in certain cases.
15.6.46.3. Documentation¶
- Added a detailed GTDoE manual in section GTDoE.
- Added section Categorical Variables that provides more details on training approximation models containing categorical (discrete) variables.
- Updated section Sample Weighting.
- Updated the GTApprox/CategoricalVariables option description.
- Updated the GTOpt/MaximumExpensiveIterations option description.
- Updated section GTIVE to GTSDA Migration Guide.
- Added GTSDA examples in section Examples.
- Minor improvements in example descriptions.
- Fixed incorrect embedded sRGB profiles in some images in the manual.
15.6.46.4. Bugfixes¶
- GTOpt: fixed a bug due to which a solution to a robust optimization problem or a problem with an expensive function (surrogate based optimization) could contain a non-optimal point if an initial sample containing values of variables and objectives is given.
15.6.47. MACROS 6.4¶
15.6.47.1. New Features¶
- GTApprox: the GBRT technique now supports memory overflow prevention, allowing the GBRT incremental training feature to automatically process very large training samples — see GTApprox/MaxExpectedMemory and the Training with Limited Memory example for details.
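The memory overflow prevention above rests on the incremental training pattern: build a model on one chunk of the sample, then pass it back as the initial model for the next chunk. The sketch below illustrates only the pattern, with a toy running-mean "model" standing in for GBRT; the build()/initial_model names mirror the changelog, but the implementation is not the pSeven Core one.

```python
# Toy illustration of incremental (chunked) training, the pattern GBRT uses
# to keep memory bounded. The "model" here is a running mean of y, NOT GBRT.
def build(x, y, initial_model=None):
    count, total = initial_model if initial_model else (0, 0.0)
    return (count + len(y), total + sum(y))  # update sufficient statistics

def predict(model):
    count, total = model
    return total / count

# pretend the full sample is too large to hold at once: process it in chunks
data_y = list(range(100))        # 0..99, mean 49.5
model = None
for start in range(0, 100, 25):  # four chunks of 25 points
    chunk = data_y[start:start + 25]
    model = build(None, chunk, initial_model=model)

print(predict(model))  # running mean over all four chunks: 49.5
```

The point is that each build() call only ever sees one chunk, so peak memory is bounded by the chunk size rather than the full sample size.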
15.6.47.2. Documentation¶
- Added an example showing how GTApprox/MaxExpectedMemory can be used to avoid memory overflow errors when training a model on a huge dataset — see Training with Limited Memory.
- Added the GTApprox/MaxExpectedMemory option description.
- Updated the GBRT Incremental Training section with regard to GTApprox/MaxExpectedMemory.
15.6.47.3. Bugfixes¶
- GTApprox: fixed the GBRT technique crashing when the minimum weight of points in a leaf set by GTApprox/GBRTMinChildWeight is too high for the given (small) sample size.
- GTSDA: fixed excessive memory consumption when performing sensitivity analysis on high-dimensional data samples.
15.6.48. MACROS 6.3¶
15.6.48.1. New Features¶
- GTApprox: added new piecewise-linear approximation technique. See section Piecewise Linear Approximation for details.
- GTApprox: added an option to tolerate deviations of input values from a grid-like DoE, applying input rounding and allowing to use tensor approximation techniques with noisy samples with “almost factorial” DoE. See GTApprox/InputsTolerance and section Sample Cleanup (in particular, Input Rounding) for more details.
- GTApprox: added the support for discrete (categorical) variables in the HDA, GP, SGP, HDAGP, TGP, and iTA techniques. Discrete variables are specified by GTApprox/CategoricalVariables; note that the same option now specifies discrete variables for the RSM and TA techniques, making the GTApprox/RSMCategoricalVariables and GTApprox/TADiscreteVariables options deprecated. The latter options are kept for version compatibility only and should not be used further, as they are going to be removed in future versions.
- GTDoE: added the support for grid-based adaptive DoE generation. Setting GTDoE/CategoricalVariables now forces the adaptive generation algorithm to work on a user-defined grid, so the generated sample includes only grid points.
15.6.48.2. Updates and Changes¶
- GTApprox, GTDF: componentwise approximation is now enabled by default in order to avoid problems when training models with many independent outputs. Old behavior (disabling componentwise approximation) can be achieved using the GTApprox/DependentOutputs option (or GTDF/DependentOutputs, respectively). Note that the GTApprox/Componentwise and GTDF/Componentwise options are now deprecated and are kept for version compatibility only. These options should not be used further, as they are going to be removed in future versions.
- General: added a new exception type OutOfMemoryError to indicate memory allocation problems that previously raised generic exceptions such as InternalError. This makes it possible to detect whether a problem is actually related to data size — for example, a training data set that is too big for approximation.
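The value of the dedicated exception type is that callers can branch on the failure cause. A minimal sketch with stand-in classes (the names OutOfMemoryError and InternalError come from the changelog, but the class hierarchy and the trainer below are illustrative assumptions, not the da.p7core definitions):

```python
# Stand-in exception classes; the real ones live in da.p7core. Whether
# OutOfMemoryError derives from InternalError is an assumption for this sketch.
class InternalError(Exception):
    pass

class OutOfMemoryError(InternalError):
    pass

def train(n_points, memory_budget):
    # hypothetical trainer: refuses samples that would exceed the budget
    if n_points > memory_budget:
        raise OutOfMemoryError("training sample too large for available memory")
    return "model"

try:
    train(n_points=10**9, memory_budget=10**6)
except OutOfMemoryError:
    # the specific type tells us the failure is data-size related, so the
    # caller can subsample or reduce model complexity and retry
    outcome = "retry with a smaller sample"
print(outcome)
```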
15.6.48.3. Documentation¶
- Added an example showing one of the possible approaches to checking whether a given point belongs to the validity domain of an approximation model — see Checking Model Domain.
- Added an example of using grid-based adaptive DoE in section Code Samples, see example_gtdoe_adaptive_grid.py.
- Added section Piecewise Linear Approximation.
- Updated the GTDoE/CategoricalVariables option description with regard to grid-based adaptive DoE generation.
- Updated section Tensor Products of Approximations and the GTApprox/TensorFactors option description with regard to the support for SGP factors added in MACROS 6.2.
- Updated section Sample Cleanup and added section Input Rounding.
- Added the GTApprox/InputsTolerance option description.
- Added the GTApprox/DependentOutputs and GTDF/DependentOutputs option descriptions.
- Updated the GTApprox/Componentwise and GTDF/Componentwise option descriptions.
- Added the GTApprox/CategoricalVariables option description.
- Updated the GTApprox/RSMCategoricalVariables and GTApprox/TADiscreteVariables option descriptions.
- Updated section GTIVE to GTSDA Migration Guide.
15.6.48.4. Bugfixes¶
- General: fixed incorrect import of MACROS modules if the path to the current working directory contains Unicode characters.
- General: fixed a thread safety issue that could cause a crash when running multiple MACROS threads in parallel — for example, using the threading module (this issue does not affect the built-in parallelization, such as with GTApprox/MaxParallel).
- GTOpt: fixed a fatal error when solving a mixed-integer problem without constraints.
- GTApprox: fixed a bug that sometimes led to an inability to train a tensor approximation model when model reduction (GTApprox/TAModelReductionRatio) is enabled.
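The thread safety fix above concerns the usage pattern where several workers run in parallel via the stdlib threading module, each owning its own tool instance. A minimal sketch of that pattern (a plain function stands in for a MACROS tool; nothing here is the actual da.p7core API):

```python
import threading

# Each worker owns its own "tool instance"; a plain function stands in for a
# MACROS tool here. Results go into per-thread slots, so no locking is needed.
def run_tool(task_id, results):
    results[task_id] = task_id * task_id  # stand-in for model training work

results = [None] * 4
threads = [threading.Thread(target=run_tool, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 4, 9]
```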
15.6.49. MACROS 6.2¶
15.6.49.1. New Features¶
- GTApprox: added an option to reduce the complexity of tensor approximation models — see GTApprox/TAModelReductionRatio and section Model Complexity Reduction for details.
- GTDoE: the adaptive DoE method now supports updating a sample generated using the LHS or OLHS technique in a way that preserves the sample’s space-filling properties — that is, ensures that the new sample is also an (optimized) Latin hypercube. See generate() for details.
- GTSDA: added the robust Pearson correlation technique, see section Robust Pearson Correlation.
- GTSDA: added calculation of second order Sobol indices in the CSTA method, see section Sobol Indices: CSTA Method for details.
- GTSDA: added Scott’s method of determining the bin size for histogram-based estimation of mutual information. This method noticeably increases the mutual information technique performance and is now the default. To switch back to the previously used full search method, use the GTSDA/Checker/MutualInformation/BinsMethod option.
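Main (first-order) Sobol indices, which the entries above refer to, measure the share of output variance attributable to each input alone. A minimal Monte Carlo estimator in the Saltelli style (a generic textbook sketch, unrelated to the CSTA implementation) for f(x1, x2) = x1 + 2·x2 on the unit square, whose exact indices are 0.2 and 0.8:

```python
import random

def f(x):
    return x[0] + 2.0 * x[1]   # exact main indices: S1 = 0.2, S2 = 0.8

rng = random.Random(0)
n, dim = 20000, 2
A = [[rng.random() for _ in range(dim)] for _ in range(n)]
B = [[rng.random() for _ in range(dim)] for _ in range(n)]
fA = [f(x) for x in A]
fB = [f(x) for x in B]
mean = sum(fA) / n
var = sum((v - mean) ** 2 for v in fA) / n   # total output variance estimate

indices = []
for i in range(dim):
    # A_B^i: rows of A with column i taken from B (Saltelli's design)
    fABi = [f(x[:i] + [B[j][i]] + x[i + 1:]) for j, x in enumerate(A)]
    s_i = sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / n / var
    indices.append(s_i)
print(indices)  # close to [0.2, 0.8]
```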
15.6.49.2. Updates and Changes¶
- GTOpt: an improvement in the surrogate based optimization (SBO) method enables solving problems with a lower limit for GTOpt/MaximumExpensiveIterations if an initial sample containing variable, objective, and constraint values is supplied. This update finalizes the improvement of SBO for large-scale optimization that was gradually implemented in previous MACROS versions and now includes:
  - The support for high dimensional problems with hundreds of design variables (see sections High-Dimensional SBO and Parallel SBO for details).
  - Noticeable runtime reduction thanks to hierarchical and multilevel surrogate based optimization.
  - Improved support for NaN responses.
- GTOpt: specific methods of handling cusp-like singularities in problem functions are no longer used if analytical gradients are enabled in the problem.
- GTApprox: significantly reduced the size of models trained using the GBRT technique.
- GTApprox: reduced the size of componentwise (GTApprox/Componentwise on) models trained using the GP technique.
- GTApprox: improved the algorithm that selects the subset of the training sample to be stored in a GBRT model. The stored subset is now more smoothly distributed over the initial training sample, which positively affects the quality of incremental model training.
- GTApprox: added the support for SGP (sparse Gaussian processes) factors to the Tensor Approximation technique.
15.6.49.3. Documentation¶
- Added sections High-Dimensional SBO and Parallel SBO in Surrogate Based Optimization.
- Added algorithm details in section Gradient Boosted Regression Trees.
- Added section Robust Pearson Correlation.
- Updated section Sobol Indices: CSTA Method.
- Added the GTSDA/Checker/MutualInformation/BinsMethod option description.
- Updated the generate() description.
- Added section Model Complexity Reduction in Tensor Products of Approximations.
- Fixed several incorrect references in GTApprox option descriptions.
15.6.49.4. Bugfixes¶
- GTOpt: fixed a bug which could lead to incorrect behavior of the surrogate based optimization method in mixed-integer problems with small values of the GTOpt/MaximumExpensiveIterations option.
- GTApprox: fixed a bug in determining the GTApprox/HDAMultiMax bounds that did not allow setting a value less than 5 (the default value for GTApprox/HDAMultiMin).
- GTDoE: fixed a bug in the OLHS technique due to which it could generate a sample which, in terms of the ϕp metric (see phi_p()), is worse than a non-optimized LHS with the same random seed.
- GTSDA: fixed incorrect calculation of total Sobol indices if the function (output) is constant.
- GTSDA: fixed a bug in the mutual information technique due to which it produced different results if the order of inputs is changed.
- GTSDA: fixed a bug in select() due to which it could enter an infinite loop.
- GTSDA: fixed incorrect behavior of the rank() and check() methods regarding watchers (see section Watchers).
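The ϕp metric in the OLHS fix above is the Morris–Mitchell space-filling criterion: ϕp = (Σ over point pairs d⁻ᵖ)^(1/p), where d is the pairwise distance, and lower values mean better-spread designs. A minimal sketch (the name phi_p matches the changelog, but this implementation is a generic reconstruction, not the pSeven Core one):

```python
import math
from itertools import combinations

def phi_p(points, p=50):
    # Morris-Mitchell criterion: heavily penalizes close point pairs,
    # so a lower value indicates a better-spread (more space-filling) design
    total = 0.0
    for a, b in combinations(points, 2):
        total += math.dist(a, b) ** (-p)
    return total ** (1.0 / p)

spread = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]       # well-separated points
clustered = [(0.0, 0.0), (0.05, 0.05), (1.0, 1.0)]  # two nearly coincident points
print(phi_p(spread) < phi_p(clustered))  # True: the spread design scores lower
```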
15.6.50. MACROS 6.1¶
15.6.50.1. New Features¶
- GTApprox: added the elastic net regularization method in the RSM technique; see GTApprox/RSMFeatureSelection and GTApprox/RSMElasticNet/L1_ratio.
- GTApprox: in addition to the C interface to evaluate GTApprox models, MACROS now provides a similar Java interface (see GTModel for Java).
- GTSDA: added the Taguchi sensitivity analysis technique — see section Taguchi Indices.
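The elastic net regularization added to RSM (controlled by GTApprox/RSMFeatureSelection and GTApprox/RSMElasticNet/L1_ratio) blends the lasso (L1) and ridge (L2) penalties. A sketch of the standard penalty formula (a generic illustration, not the pSeven Core implementation; note how the l1_ratio endpoints recover pure lasso and pure ridge):

```python
def elastic_net_penalty(weights, alpha=1.0, l1_ratio=0.5):
    # standard elastic net regularization term:
    #   alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) / 2 * ||w||_2^2)
    l1 = sum(abs(w) for w in weights)
    l2 = sum(w * w for w in weights)
    return alpha * (l1_ratio * l1 + (1.0 - l1_ratio) / 2.0 * l2)

w = [1.0, -2.0]
print(elastic_net_penalty(w, l1_ratio=1.0))  # 3.0: pure lasso (L1 norm)
print(elastic_net_penalty(w, l1_ratio=0.0))  # 2.5: pure ridge (L2 norm / 2)
```

Intermediate l1_ratio values trade the sparsity-inducing L1 term against the L2 term's stability with correlated inputs.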
15.6.50.2. Updates and Changes¶
- GTOpt: solving mixed-integer problems no longer requires using the surrogate based optimization (SBO) method. Previously, to solve a mixed-integer problem, GTOpt required at least one objective or constraint to be defined as expensive using the @GTOpt/EvaluationCostType hint.
- GTOpt: improved handling of cusp-like singularities in objective and constraint functions.
- GTApprox: GBRT models now store a portion of the training sample, which improves the quality of incremental training (see the initial_model argument to build()).
- GTApprox: internal improvements in the GBRT algorithm.
15.6.50.3. Documentation¶
- Added a detailed GTApprox manual in section GTApprox.
- Added an example of using already available evaluation data in surrogate based optimization (SBO) — see Re-using Evaluation Data in SBO.
- Added section Taguchi Indices in the GTSDA manual.
- Updated the da.p7core.gtapprox.Model.details description.
- Updated the GTApprox/RSMFeatureSelection option description (added the elastic net regularization method).
- Added the GTApprox/RSMElasticNet/L1_ratio option description.
- Added section GTModel for Java to the GTModel guide.
- Added GTModel for Java API page.
- Added GTModel for Java usage sample, see EvaluateGTModel.java.
- Fixed an incorrect plot height in the GTDF vs GTApprox example.
15.6.50.4. Bugfixes¶
- GTOpt: fixed malformed header in the evaluation history file if analytical constraint gradients are enabled in the problem.
- GTOpt: fixed the enormous amount of logger output when solving a mixed integer problem with multi-threading enabled.
- GTSDA: fixed incorrect behavior of the partial Pearson correlation technique in the case when the analyzed sample contains constant columns.
- GTDF: fixed an algorithmic bug in the MFGP technique due to which it could be unable to train a model if the difference in responses in the training samples of different fidelity is low enough.
- GTDF: fixed GTDF/MaxParallel not being applied to the approximator used internally by GTDF.
15.6.51. MACROS 6.0¶
- GTApprox
- More information added to model details — see the attribute’s description.
- Parallel training is now disabled for small training samples by default due to computational inefficiency. See the GTApprox/MaxParallel description for more details.
- Fixed a bug in pre-processing due to which it crashed when the input dimension is higher than the number of sample points.
- Updated non-regression test results — see section GTApprox Test Report.
- GTDoE
- Added a new technique for generating mixed orthogonal arrays — see section Orthogonal Array for details.
- GTOpt
- All gradient-based methods are now capable of dealing with cusp-like singularities in objective and constraint functions (codimension-one discontinuities of first derivatives). In particular, this can result in better performance when solving problems like structural optimization, where cusp singularities are common.
- Updated non-regression test results — see section GTOpt Test Report.
- GTSDA
- Significantly improved overall performance.
- Updated the FAST method of calculating Sobol sensitivity indices so it provides more accurate estimates.
- Corrected partial Pearson correlation coefficients calculation in the case when only the input dataset is provided.
- Added a guide on using GTSDA instead of GTIVE – see section GTIVE to GTSDA Migration Guide.
- Various smaller documentation updates and fixes.
15.6.52. MACROS 6.0 Release Candidate 1¶
Warning
This release removes the deprecated GTIVE tool. Sensitivity analysis methods are available in GTSDA.
- GTSDA
- Added getting started examples: Correlation Analysis, Sensitivity Analysis, and Feature Selection.
15.6.53. MACROS 5.3¶
- GTApprox
- GBRT models now support incremental training (see the initial_model argument to build()).
- Distributed training on an HPC cluster is now supported for all componentwise models (previously available for MoA models only). See set_remote_build() for details.
for details. - Fixed a bug in the SPLT technique which sometimes caused discontinuities in SPLT models.
- GTSDA
- Significantly improved the performance of the Kendall correlation technique in check().
- Added a fast correlation-based feature selection algorithm. See GTSDA/Selector/QualityMeasure and section Dependency-based feature selection for details.
- Added a correlation-based method to compute Sobol sensitivity indices. See section Sobol Indices: CSTA Method for details.
- Correlation scores calculated using the Mutual Information technique are now normalized to [0,1] by default. Normalization can be disabled by GTSDA/Checker/MutualInformation/Normalize.
- Fixed a bug in the partial Pearson correlation technique which led to incorrect results when the analyzed sample contains the input part only.
15.6.54. MACROS 5.2¶
- GTApprox
- RSM models now provide detailed information that allows obtaining the model in analytical form (see details).
- Added the support for model pickling and unpickling.
- Fixed the GBRT technique incorrectly treating deterministic mode options (GTApprox/Deterministic, GTApprox/Seed).
- Fixed exception when GTApprox/GBRTNumberOfTrees is set to 0 and allowed 0 as a special value (auto setting, default).
- Fixed GTApprox/Accelerator working incorrectly for the HDAGP technique.
- Prohibited infinite point weights in the training sample (see the weights argument to build()) because they can lead to numerical instability in certain cases.
- GTDF
- Added the support for model pickling and unpickling.
- Added option GTDF/Deterministic to switch between the deterministic and non-deterministic training modes.
- Added option GTDF/Seed that sets the fixed seed used in the deterministic training mode.
- Fixed GTDF/Accelerator not working for the SVFGP and MFGP techniques.
- GTDR
- Added the support for model pickling and unpickling.
- GTOpt
- Fixed a bug due to which the surrogate based optimization (SBO) could end prematurely and without notable improvement in case of limited GTOpt/MaximumExpensiveIterations.
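The pickling support added in this release (GTApprox, GTDF, and GTDR models) means a trained model can go through the standard pickle round trip, for example to cache it on disk. A minimal sketch with a plain object standing in for a model (pickle.dumps/loads are the stdlib calls; the dict below is not a real da.p7core model):

```python
import pickle

# stand-in for a trained model object (a real da.p7core model would go here)
model = {"technique": "RSM", "coefficients": [0.5, -1.2, 3.0]}

blob = pickle.dumps(model)     # serialize to bytes, e.g. to store on disk
restored = pickle.loads(blob)  # deserialize back into an equivalent object
print(restored == model)       # True: the round trip preserves the model
```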
15.6.55. MACROS 5.1¶
- GTApprox
- Added the Gradient Boosted Regression Trees (GBRT) technique. See section Gradient Boosted Regression Trees for details.
- GTDF
- GTDF no longer has separate sample- and blackbox-based model classes. Instead, build_BB() returns an instance of Model which supports blackbox-based calculations (see the updated calc(), grad(), calc_ae(), validate() methods and has_bb, has_ae_bb). This change fixes several inconsistencies related to blackbox-based training technique support. Version compatibility was not affected: ModelWithBlackbox is kept as an alias for Model, and ModelWithBlackbox methods are added to Model (though considered deprecated).
15.6.56. MACROS 5.0¶
GTApprox
Reworked options related to various randomized algorithms used in GTApprox:
- Added option GTApprox/Deterministic to switch between the deterministic and non-deterministic training modes.
- Added option GTApprox/Seed that sets the fixed seed used in the deterministic training mode.
- The GTApprox/HDAInitialization, GTApprox/HDARandomSeed, and GTApprox/SGPSeedValue options were removed. Randomized aspects of the HDA, HDAGP and SGP techniques are now controlled with GTApprox/Deterministic and GTApprox/Seed.
- Added GTApprox/IVDeterministic to switch between the deterministic and non-deterministic cross validation modes.
- Renamed GTApprox/IVRandomSeed to GTApprox/IVSeed and changed it to work with GTApprox/IVDeterministic.
- Renamed the GTApprox/Postprocess/IVRandomSeed hint to GTApprox/Postprocess/IVSeed.
For more details, see Version Compatibility Issues and option descriptions.
- GTDF
- Added changes similar to GTApprox:
- Added GTDF/IVDeterministic to switch between the deterministic and non-deterministic cross validation modes.
- Renamed GTDF/IVRandomSeed to GTDF/IVSeed and changed it to work with GTDF/IVDeterministic.
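The Deterministic/Seed option pairs introduced above follow the usual seeded-RNG contract: in deterministic mode a fixed seed makes two training runs take identical random decisions, while the non-deterministic mode lets them vary. A sketch of that contract using the stdlib random module (illustrative only; pSeven Core sets these via builder options, not via random directly):

```python
import random

def randomized_step(seed=None):
    # deterministic mode: fixed seed, reproducible draws;
    # non-deterministic mode: seed=None, draws vary between runs
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

run1 = randomized_step(seed=42)
run2 = randomized_step(seed=42)
print(run1 == run2)  # True: same seed, identical results
```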
15.6.57. MACROS 5.0 Release Candidate 3¶
- GTOpt
- Added a more detailed usage guide in the GTOpt section — see section Optimization Workflow.
- GTSDA
- Fixed incorrect behavior of the Kendall Correlation technique.
- Reworked GTSDA options interface. Most options were renamed to make their functions more clear; see GTSDA Option Reference for details.
- Statistical Utilities
- Fixed incorrect calculation of the Kendall rank correlation coefficient for rank components in calculate_statistics().
15.6.58. MACROS 5.0 Release Candidate 2¶
- GT
- Added an option to set the maximum number of parallel threads MACROS tools can use, so there is no need to change the OMP_NUM_THREADS environment variable. This option is currently available in GTApprox, GTDF, GTDoE, GTDR and GTOpt (see GTApprox/MaxParallel, GTDF/MaxParallel, GTDoE/MaxParallel, GTDR/MaxParallel, and GTOpt/MaxParallel). Note that the option applies to a specific tool instance only, which is also more convenient than the system-wide OMP_NUM_THREADS setting.
- GTApprox
- Extended sample point weighting support: point weights are now supported in the LR, RSM, HDA, GP, SGP, HDAGP, iTA, and MoA techniques (previously available in the iTA technique only). Point weights affect the model fit to the training sample — see build() for details (the weights argument).
- GTDF
- Added sample point weighting support similar to GTApprox. See build() and build_MF() for details.
- GTOpt
- Performance improvements and significant internal bugfixes in mixed-integer linear problems.
- Fixed undefined behavior in case of low GTOpt/TimeLimit.
15.6.59. MACROS 5.0 Release Candidate 1¶
- GTSDA
- Added mutual information technique in correlation analysis. See section Mutual Information for details.
- GTOpt
- Fixed a crash when solving a robust constraint satisfaction problem (a special problem type which includes no objectives, and constraint functions depend on some stochastic variables).
15.6.60. MACROS 4.3¶
- GTApprox
- Added initial support for running approximation model training on a remote host or an HPC cluster (currently only LSF clusters are supported). See set_remote_build() for details.
15.6.61. MACROS 4.2 Service Pack 1¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.62. MACROS 4.2¶
- GTApprox
- Improved the Tensor Approximation (TA) model quality in certain cases with noisy training data. This update can have an effect if the technique is configured to use only BSPL and DV factors (see GTApprox/TensorFactors), and GTApprox/ExactFitRequired is off.
- Improved the incomplete Tensor Approximation (iTA) model quality in similar cases of noisy training data. Note that this update can have an effect only if GTApprox/ExactFitRequired is off. However, it is not affected by GTApprox/TensorFactors (iTA always uses BSPL factors and ignores the latter option).
- Added exact fit support to the iTA technique. Note it is now affected by the GTApprox/ExactFitRequired option.
- Fixed a bug in the Tensor Approximation (TA) technique due to which grad() returned nonsensical gradient values for discrete variables. All partial derivatives with respect to discrete variables are now NaN.
- GTOpt
- Updated the Surrogate Based Optimization (SBO) algorithm in such a way that it includes a finalization stage where convergence to the optimum is more smooth and the search becomes more localized. This can potentially result in better solutions because the algorithm is now able to “push” to the optimum more actively.
15.6.63. MACROS 4.1¶
- GTOpt
- Added mixed-integer linear programming support. Single-objective problems with a mix of integer and continuous variables can now be solved without using the Surrogate Based Optimization (SBO) method, provided that all objectives and constraints are linear functions (see @GTOpt/LinearityType in Hint Reference) and the problem supports analytical gradients (see enable_objectives_gradient(), enable_constraints_gradient()).
- Overall performance increased thanks to significant improvements of internal algorithms — in particular, the methods to solve saddle point systems and quadratic problems.
- The trace level log output (see GTOpt/VerboseOutput) is now more readable.
15.6.64. MACROS 4.0¶
- GT
- Added a more convenient method to configure saving the history of blackbox evaluations — see set_history(), also clear_history() and disable_history(). Note that enable_history() becomes deprecated; it is now recommended to use set_history() instead.
- GTDF
- New technique: Multiple Fidelity Gaussian Processes (MFGP), which allows using more than two samples of different fidelity to train a data fusion model. MFGP also supports additional output noise variance data in training samples — see the build_MF() description for details.
- GTDoE
- Fixed a bug which could cause generate() to freeze when used in the blackbox-based adaptive mode without an initial sample.
- GTOpt
- Added a more convenient method to configure saving the history of problem evaluations — see set_history(), also clear_history() and disable_history(). Note that enable_history() becomes deprecated; it is now recommended to use set_history() instead.
- GTSDA
- GTSDA is no longer in beta. Note that it means GTIVE will become deprecated in future versions; it is now recommended to use GTSDA for sensitivity analysis and related tasks. For details on features and methods implemented in GTSDA, see the GTSDA Guide.
15.6.65. MACROS 4.0 Release Candidate 1¶
- GTDoE
- Fixed a bug in the sample-based adaptive DoE algorithm that works with a training sample (both init_x and init_y, see generate()). Due to this bug, the algorithm performed iterative retraining of the internal approximation model, updating the training sample with data obtained from the model itself.
15.6.66. MACROS 4.0 Beta 1¶
- GT
- Example scripts illustrating MACROS usage are now automatically installed with the package and can be run without manually copying the .py files, using the python -m option. See the updated section Running Examples for details.
- GTDF
- Fixed a bug in preprocessing of the high-fidelity sample for the blackbox-based VFGP_BB technique, due to which presence of duplicates in the sample could alter the trained model.
- GTDoE
- The Optimal Design technique is now available as an initial sampling technique for the blackbox-based adaptive DoE (see GTDoE/Adaptive/InitialDoeTechnique).
- Fixed blackbox-based adaptive DoE always including all points of the initial sample into the final result, even if some of them violate specified generation bounds. Now the result will include only those initial points that do not violate the bounds.
- Fixed adaptive DoE crashing with certain values of GTDoE/Adaptive/TrainIterations.
GTOpt
Major improvements in Surrogate Based Optimization (SBO):
- The method now uses a new family of internal surrogate models with greatly decreased training time and complexity which are specifically tuned for SBO.
- Internal SBO algorithms are now multi-scale and multi-resolution capable, meaning that functions with intricate landscape could be modelled easily.
Note that the updated SBO algorithms are slightly more demanding with respect to required budget: the total time to solve usually decreases due to the increased algorithm efficiency, but new algorithms require more function evaluations.
15.6.67. MACROS 3.4¶
- GT
- Lowered the NumPy requirement to version 1.6.0 (was 1.6.1). As before, MACROS installation can proceed without NumPy — see the System Requirements section for details.
- Changed the licensing system to count license features on a per-process basis. See section License Usage for details.
- GTApprox
- MACROS now provides a simple C interface to evaluate GTApprox models. See the GTModel Guide for details.
- Improved pre-processing: added an option to average output values for coincident input points and to calculate variance for averaged output (useful for build() with outputNoiseVariance). See GTApprox/Preprocess/AverageCoincidentPointValues for details.
15.6.68. MACROS 3.3¶
- GTApprox
- Added smoothing support to response surface models (the models built with the RSM technique).
- Fixed a bug in model export to C due to which exported models performed the following evaluations incorrectly:
- Accuracy estimation gradient for HDAGP models and GP models with linear or quadratic trend (see GTApprox/GPTrendType).
- Model gradient for componentwise (GTApprox/Componentwise on) TA models with discrete variables (see GTApprox/TADiscreteVariables) and at least one nonlinear multidimensional tensor factor set up to use the HDA technique (see GTApprox/TensorFactors).
- GTDF
- Fixed a specific bug which crashed the model builder when VFGP technique is selected (either manually or automatically), GTDF/AccuracyEvaluation is off, GTDF/Componentwise is on, and GTDF/ExactFitRequired is on.
15.6.69. MACROS 3.2¶
- GTApprox
- Allowed automatic selection of the Tensor Gaussian Processes (TGP) technique.
- The updated gradient accuracy evaluation method (see grad_ae()) provides significantly faster gradient AE calculation for Tensor Gaussian Processes (TGP) models.
- Added quadratic trend support to the Gaussian Processes (GP) technique. Due to this, the GTApprox/GPLinearTrend option becomes deprecated and is replaced by GTApprox/GPTrendType.
- GTApprox/RSMType default changed to "purequadratic".
- The Linear Regression (LR) technique is added back to automatic selection (was excluded in MACROS 1.11.1). This is due to the change of the default GTApprox/RSMType option value.
- GTDoE
- Fixed non-uniform distribution of points generated by adaptive DoE when GTDoE/Adaptive/Criterion is set to "Uniform".
- GTOpt
- Added information on applied hints (see Hint Reference) to the string representation of GTOpt problem classes.
15.6.70. MACROS 3.1¶
- GTDoE
- The adaptive DoE technique now supports functions with multidimensional output (in generate(): init_y with 2 or more columns, and/or blackbox with size_f() 2 or greater).
- Adaptive DoE can now handle NaN values in the response part of an initial sample and in function responses (init_y and blackbox in generate(), respectively).
- The adaptive DoE technique now supports functions with multidimensional output (in
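The new NaN tolerance can be illustrated with a minimal NumPy sketch of an initial sample whose response part contains a failed evaluation. The array names init_x and init_y follow the `generate()` arguments from the changelog; the filtering shown is only illustrative, not the GTDoE internals.

```python
import numpy as np

# Hypothetical initial sample for adaptive DoE: 2 inputs, 2 outputs.
# A NaN in init_y marks a failed or missing response evaluation, which
# adaptive DoE now tolerates instead of rejecting the whole sample.
init_x = np.array([[0.0, 0.0],
                   [0.5, 0.5],
                   [1.0, 1.0]])
init_y = np.array([[1.0, 2.0],
                   [np.nan, 2.5],   # first response failed at this point
                   [3.0, 4.0]])

# Rows that keep at least one valid response still carry information:
usable = ~np.all(np.isnan(init_y), axis=1)
print(int(usable.sum()))  # 3
```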
- GTOpt
- Fixed an error in results processing due to which the `infeasible` set was always empty in case of an infeasible problem, while in fact (assuming GTOpt/OptimalSetType is `"Extended"`) it should contain the points that did not violate the threshold set by GTOpt/OptimalSetRigor, even if there were no feasible points (that is, no evaluated point satisfied the problem constraints).
15.6.71. MACROS 3.0¶
- GT
- Added the `da.p7core.gtsda` module. GTSDA performs global sensitivity analysis, correlation tests, and forward/backward feature selection, and is meant to replace GTIVE in future releases. Note it is currently in beta state.
- GTApprox
- Implemented internal validation support for the Mixture of Approximators (MoA) technique.
- GTOpt
- Implemented mixed-integer optimization support in SBO (see Surrogate Based Optimization). SBO now allows solving single- and multi-objective problems with a mix of integer and continuous variables.
- Removed the converged point set from `Result`, since it is intended for internal purposes and is usually of no interest to the end user. See Version Compatibility Issues for more details.
15.6.72. MACROS 3.0 Release Candidate 2¶
- GTApprox
- New technique, Tensor Gaussian Processes (TGP), which is a further development of the methods first introduced in the Tensor Approximation (TA) technique. TGP modifies the Gaussian Processes (GP) algorithm so it is able to handle a very large data set, provided it was obtained using a Cartesian product DoE, and also provides the accuracy evaluation support not available in TA.
- Fixed a bug in Octave code generated by `export_to()` for Tensor Approximation models featuring discrete variables, which caused errors when evaluating the exported model.
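The Cartesian product requirement mentioned in the TGP description above can be illustrated with plain NumPy and itertools. This grid construction is a sketch of the sample structure such a DoE produces, not part of the pSeven Core API.

```python
import itertools
import numpy as np

# Per-variable levels of a Cartesian product (full factorial) DoE --
# the sample structure required by the Tensor GP (TGP) technique.
factors = [
    [0.0, 0.5, 1.0],          # variable 1: 3 levels
    [10.0, 20.0],             # variable 2: 2 levels
    [-1.0, 0.0, 1.0, 2.0],    # variable 3: 4 levels
]

grid = np.array(list(itertools.product(*factors)))
print(grid.shape)  # (24, 3): 3 * 2 * 4 points, one column per variable
```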
- GTDoE
- Added the Fractional Factorial technique (generates 2-level fractional designs only). This technique supports the GTDoE/CategoricalVariables option and allows semi-automatic and fine-tuned fraction selection (see options GTDoE/FractionalFactorial/MainFactors and GTDoE/FractionalFactorial/GeneratingString, respectively).
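As a sketch of what a 2-level fractional design looks like, the following hand-rolled example builds a half-fraction of a 2^3 full factorial from the generating relation C = AB. The actual syntax of GTDoE/FractionalFactorial/GeneratingString is not assumed here; this only illustrates the underlying construction.

```python
import itertools
import numpy as np

# Half-fraction of a 2^3 full factorial: enumerate the two base factors
# A and B at levels -1/+1, then derive C from the relation C = A*B.
base = np.array(list(itertools.product([-1, 1], repeat=2)))  # columns A, B
c = base[:, 0] * base[:, 1]                                  # generator C = AB
design = np.column_stack([base, c])

print(design.shape)                                  # (4, 3): half of 8 runs
print(bool((design[:, 2] == design[:, 0] * design[:, 1]).all()))  # True
```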
15.6.73. MACROS 3.0 Release Candidate 1¶
- GT
- Following the introduction of the NumPy requirement in version 1.9.6, MACROS now uses `ndarray` as the primary data type for return values and internal interfaces. This change allows faster data processing, especially in iterative methods, but can lead to version compatibility issues. Most methods that work with data samples still accept array-like arguments, but now return `ndarray` when possible. Notable exceptions are `da.p7core.gtopt.ProblemGeneric.evaluate()`, `da.p7core.blackbox.Blackbox.evaluate()`, and similar methods — see section Version Compatibility Issues for more details.
- GTOpt
- Improved quality of internal approximations used in Surrogate Based Optimization (SBO).
- Changed the queryx argument type in `evaluate()` from `list[list[float]]` to `ndarray`. As a result, `evaluate()` no longer supports name indexing of variables (such as `queryx[0]["var_name"]` or `queryx[0].var_name`). See section Version Compatibility Issues for details.
- All `Result` attributes now contain NumPy arrays instead of array-like data types. Name indexing for variables is no longer supported.
- Fixed the exception type in case of a too low GTOpt/MaximumIterations value.
15.6.74. MACROS 3.0 Beta 2¶
- GT
- The finish status found in `gtdoe.Result` and `gtopt.Result` is now a `Status` object. See section Status for details. This update may impact compatibility — see section Version Compatibility Issues.
- GTApprox
- Added support for incomplete output noise variance data. Missing values can now be specified using NaN elements in the noise variance array (see the outputNoiseVariance argument to `build()`).
- Removed the GTApprox/InterpolationRequired option (deprecated since 1.8.0) in favor of GTApprox/ExactFitRequired.
- Removed the `save_to_octave()` method (deprecated since 1.8.0) in favor of `export_to()`.
- Fixed smoothing methods not working for some models trained with GTApprox/Componentwise on.
- Fixed a bug in the Gaussian Processes technique which could cause an exception if the training sample contains values greater than 10^15.
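A minimal sketch of such an incomplete noise variance array, using NumPy only. The commented `build()` call merely indicates where the array would be passed, assuming a `builder` and training sample already exist.

```python
import numpy as np

# Output noise variance for a sample of 4 points with 2 outputs.
# NaN marks entries where the noise level is unknown -- allowed since
# this release instead of requiring a fully populated array.
noise_variance = np.array([[0.01, 0.02],
                           [np.nan, 0.02],   # noise unknown for output 1 here
                           [0.01, np.nan],   # noise unknown for output 2 here
                           [0.01, 0.02]])

# model = builder.build(x, y, outputNoiseVariance=noise_variance)  # sketch only
print(int(np.isnan(noise_variance).sum()))  # 2 missing values
```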
- GTDF
- Removed the GTDF/InterpolationRequired option (deprecated since 1.8.1) in favor of GTDF/ExactFitRequired.
- GTDoE
- Removed the GTDoE/Adaptive/InterpolationRequired option (deprecated since 1.8.1) in favor of GTDoE/Adaptive/ExactFitRequired.
- GTDR
- Added compression and decompression model export to Octave, MEX file and C (see `compress_export_to()`, `decompress_export_to()`). This update makes the `save_to_octave_compress()` and `save_to_octave_decompress()` methods deprecated; the latter were removed.
- GTOpt
- Removed the GTOpt/GlobalSearch option (deprecated since 1.10.5) in favor of GTOpt/GlobalPhaseIntensity. See section Version Compatibility Issues for an update guide.
- Added `names` (the names of problem variables, objectives, and constraints) to `Result`.
15.6.75. MACROS 3.0 Beta 1¶
- GTApprox
- Fixed a defect in the internal validation procedure due to which all RRMS error values in `iv_info` appeared to be NaN in case of leave-one-out cross-validation (GTApprox/IVSubsetCount set equal to the effective size of the training sample).
- Better support for 64-bit compilation in the Model Export example. The script now compiles a shared library model properly if run by a 64-bit Python interpreter under 64-bit Windows.
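For reference, RRMS is commonly defined as the root-mean-square error normalized by the standard deviation of the true responses. The exact GTApprox formula may differ in detail, so the sketch below is illustrative only.

```python
import numpy as np

def rrms(y_true, y_pred):
    """Relative root-mean-square error: RMS of the residuals divided by
    the RMS deviation of the true values from their mean (a common
    definition; the exact GTApprox formula may differ in detail)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    num = np.sqrt(np.mean((y_true - y_pred) ** 2))
    den = np.sqrt(np.mean((y_true - y_true.mean()) ** 2))
    return num / den

y = np.array([0.0, 1.0, 2.0, 3.0])
print(rrms(y, y))                     # 0.0 -- perfect model
print(rrms(y, np.full(4, y.mean())))  # 1.0 -- no better than the mean
```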
- GTDF
- Added the possibility to save model values calculated during internal validation (see option GTDF/IVSavePredictions).
- Fixed the same internal validation defect as in GTApprox.
- GTOpt
- The optimization result can now include additional points that satisfy optimality criteria but violate problem constraints and feasibility measures to a certain extent (the `infeasible` point set). Related new options are:
- GTOpt/OptimalSetType — whether to include infeasible points in the result.
- GTOpt/OptimalSetRigor — sets the allowed degree of feasibility violation.
- New option GTOpt/RestoreAnalyticResponses allows restoring analytic forms of problem objectives and constraints hinted as linear or quadratic, so they are evaluated internally by the solver without calling `evaluate()`.
15.6.76. MACROS 2.4¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.77. MACROS 2.3¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.78. MACROS 2.2¶
- GTOpt
- Fixed an important bug in multi-objective Surrogate Based Optimization. Incorrect processing of constraints at the anchor search stage in this mode could lead to selecting infeasible solutions as anchor points, potentially causing problems during solving.
15.6.79. MACROS 2.1¶
- GTOpt
- Fixed a bug in evaluated set filtering which could sometimes lead to incorrect identification of optimal points in multi-objective problems (a point that is marginally worse than optimal could be included in the result).
15.6.80. MACROS 2.1 Release Candidate 2¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.81. MACROS 2.1 Release Candidate 1¶
- GTOpt
- Fixed a bug that could lead to incorrect results in multi-objective Surrogate Based Optimization in a badly scaled design space.
15.6.82. MACROS 2.0¶
- GTApprox
- Fixed an incorrect exception in the Tensor Approximation technique that occurred when GTApprox/TensorFactors defined a discrete factor and GTApprox/ExactFitRequired was set.
- GTDF
- Improved the low-fidelity sample bias compensation algorithm implemented in version 1.10.4 (see GTDF/UnbiasLowFidelityModel).
- GTDoE
- Implemented the sample-based adaptive DoE technique, which allows performing the adaptive DoE process without a blackbox (see option GTDoE/Technique and combinations of arguments in `generate()`).
15.6.83. MACROS 2.0 Release Candidate 2¶
- GTApprox
- Added the possibility to save model values calculated during internal validation (see option GTApprox/IVSavePredictions).
- Fixed an internal bug in the Mixture of Approximators (MoA) technique which created version compatibility issues in side-by-side installations (see Version Upgrade).
- Fixed model smoothing being unavailable in Mixture of Approximators (MoA) models that provide accuracy evaluation (trained with GTApprox/AccuracyEvaluation on).
15.6.84. MACROS 2.0 Release Candidate 1¶
- GT
- Since this release, MACROS provides separate setup packages for 32-bit and 64-bit platforms.
- GTApprox
- Fixed incorrect training cache size calculation which could cause excessive memory consumption when using the Sparse Gaussian Processes (SGP) technique.
- Fixed a bug in model evaluation which resulted in `calc()` returning incorrect values when evaluating a Gaussian Processes (GP) model built using a large training sample (more than 1024 points). Techniques other than GP were not affected. Also, since the evaluation method is not stored within a model when saving it to a file with `save()`, models saved in previous MACROS versions will now evaluate correctly even if they earlier produced unexpected results due to this bug. Note there is no need to rebuild such models.
- When using internal validation (see the GTApprox/InternalValidation option), predicted IV outputs are now saved in the `iv_info` model property.
- GTDoE
- Fixed incorrect handling of NumPy slices in adaptive DoE, which corrupted initial sample data if the init_x and init_y arguments to `generate()` were NumPy slices.
- GTOpt
- Implemented initial data sample support; see the sample_x, sample_f, and sample_c arguments to `da.p7core.gtopt.Solver.solve()`. An initial sample allows specifying multiple initial guesses for the solver (if only sample_x is specified) or using cached data when solving a problem (if sample_f and/or sample_c is specified in addition to sample_x).
- Increased the maximum points batch size in the batch optimization mode to 16384 (see GTOpt/BatchSize).
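A minimal sketch of assembling such an initial sample. The consistency-check helper is hypothetical (not a da.p7core API), and the commented `solve()` call only indicates where the arrays would be passed.

```python
import numpy as np

def check_initial_sample(sample_x, sample_f=None, sample_c=None):
    """Illustrative consistency check (not a da.p7core API): cached
    responses, if given, must have one row per point in sample_x."""
    n = len(sample_x)
    for name, arr in (("sample_f", sample_f), ("sample_c", sample_c)):
        if arr is not None and len(arr) != n:
            raise ValueError("%s must have %d rows" % (name, n))
    return n

sample_x = np.array([[0.1, 0.9], [0.4, 0.6], [0.8, 0.2]])  # initial guesses
sample_f = np.array([[1.2], [0.7], [0.9]])                  # cached objectives
print(check_initial_sample(sample_x, sample_f))  # 3
# solver.solve(problem, sample_x=sample_x, sample_f=sample_f)  # sketch only
```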
15.6.85. MACROS 1.11.1¶
- GTApprox
- The Tensor Approximation (TA) and incomplete Tensor Approximation (iTA) techniques now may be selected automatically if the training sample fits TA/iTA technique requirements. Note that due to this the GTApprox/EnableTensorFeature option is now on by default.
- Removed the Linear Regression (LR) technique from automatic selection. In cases when LR was selected previously, the RSM technique will be selected instead. Note that RSM options (for example, GTApprox/RSMType) do apply when RSM is selected automatically. This change does not affect the manual LR technique selection.
- Corrected the exported function name generation in `export_to()`.
- GTOpt
- Implemented support for stochastic variables in Surrogate Based Optimization (SBO), thus allowing the robust optimization and SBO approaches to be combined.
- Enabled using analytical gradients in SBO problems. Note that in this case GTOpt requests gradient values for cheap functions only, so there is no need to calculate expensive gradients in `evaluate()`. However, if gradients were enabled as dense (see the arguments to `enable_objectives_gradient()` and `enable_constraints_gradient()`), then `evaluate()` should still return placeholder values to preserve the response structure. The placeholder value itself is ignored; only the return structure is meaningful.
- Fixed a budget violation in multi-objective SBO.
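The placeholder requirement can be sketched as follows. The response layout (objective values followed by a dense gradient block) and the zero placeholders are illustrative assumptions; only the fact that the return structure must be preserved comes from the changelog.

```python
import numpy as np

n_vars = 2

def expensive_simulation(x):
    # Stand-in for a costly solver call.
    return float(np.sum(np.asarray(x) ** 3))

def evaluate_row(x):
    """Illustrative response row for a problem with dense objective
    gradients enabled (layout is a sketch, not the exact GTOpt format):
    objective values first, then the dense gradient block."""
    f0 = x[0] ** 2 + x[1] ** 2     # cheap objective ...
    g0 = [2.0 * x[0], 2.0 * x[1]]  # ... with its analytical gradient
    f1 = expensive_simulation(x)   # expensive objective
    g1 = [0.0] * n_vars            # placeholder; the value is ignored,
                                   # only the structure matters
    return [f0, f1] + g0 + g1

row = evaluate_row([1.0, 2.0])
print(len(row))  # 6 = 2 objectives + 2 gradients of length 2
```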
15.6.86. MACROS 1.11.0¶
- GT
- All blackbox-based techniques now use the common `Blackbox` class. Deprecated blackbox classes (`da.p7core.blackbox.InteractiveBlackbox`, GTDR blackboxes) were removed, and their functionality has been moved to the common blackbox. This change breaks compatibility with previous versions; see the Version Compatibility Issues section for details.
- Added support for analytical gradients and evaluation history to `Blackbox`. See `enable_gradients()` and `enable_history()`.
- GTDF
- Blackbox-based data fusion now uses the common `Blackbox` class; `da.p7core.blackbox.InteractiveBlackbox` was removed.
- Fixed an algorithmic bug which in rare cases could freeze the builder.
- Fixed the GTDF/IVSubsetCount option not being applied correctly.
- GTDoE
- Adaptive DoE now uses the common `Blackbox` class; `da.p7core.blackbox.InteractiveBlackbox` was removed.
- GTDR
- Blackbox-based Feature Extraction now uses the common `Blackbox` class; `da.p7core.gtdr.Blackbox` was removed.
- GTOpt
- Added evaluation history support to problem classes; see `enable_history()`.
- Statistical Utilities
- Fixed a bug in `calculate_statistics()` which produced an error if the effective input dimension was 1.
15.6.87. MACROS 1.10.5¶
- GT
- Corrected NumPy version checking, which previously allowed using MACROS with an older NumPy version, leading to unexpected behavior in various cases. From now on, if MACROS detects an older version of NumPy on initialization, it raises an exception. See the System Requirements section for the required version.
- GTApprox
- Fixed old GP-based models (trained using versions 1.8.4 and older) setting their `has_smoothing` attribute to `True` on load despite not supporting smoothing.
- Fixed a bug in model export to C which resulted in an assertion failure from `export_to()` in a specific case when the exported model was trained using the MoA technique and the training sample contained constant input or output components.
- GTDF
- Fixed the RRMS value not being found in the `validate()` result.
- GTOpt
- Improved global search: now allows finer tuning via the GTOpt/GlobalPhaseIntensity option controlling the complexity of applied globalization algorithms. This option also deprecates GTOpt/GlobalSearch.
15.6.88. MACROS 1.10.4¶
- GTApprox
- Corrected processing of strings, NaN and Inf values in training samples when using the MoA technique.
- Fixed the MoA technique to support output noise variance properly (see the outputNoiseVariance argument to `build()`).
- Updated automatic technique selection logic: in the 1-dimensional case, if the sample size is less than 5 points, the LR technique is now selected provided that both GTApprox/AccuracyEvaluation and GTApprox/ExactFitRequired are off (previously the selection tried SPLT and stopped).
- Corrected exception type and message in case of duplicate values in X sample corresponding to different values in Y.
- GTDF
- Implemented a method to compensate for the low-fidelity sample bias. See the GTDF/UnbiasLowFidelityModel option description for details.
- GTDoE
- Added an adaptive DoE example.
- Updated the `measures` module documentation.
- GTOpt
- Fixed an internal solver bug which could cause a fatal error if GTOpt/TimeLimit is set.
15.6.89. MACROS 1.10.3¶
- GT
- Better exception handling in user functions: exceptions from user-defined methods will now contain informative error messages.
- GTApprox
- The GP technique was updated to support Gaussian processes with additive kernel. Primarily, this feature improves model quality in high-dimensional cases or when the functional dependence contains interaction terms. To use it, the additive covariance function type has to be specified via the GTApprox/GPType option.
- Updated automatic technique selection logic. Selecting the HDAGP technique now depends on the response dimensionality: if it is greater than 15, GTApprox will select the GP technique where it selected HDAGP before. For more details, see Section 5.2 in the GTApprox User Manual.
- GTDoE
- Fixed a bug in the `measures` module which did not allow calculating metrics if the input sample included only two points.
- GTOpt
- Fixed `set_stochastic()` always requiring a name for the stochastic distribution (the name argument).
- Statistical Utilities
- Fixed a bug in `detect_outliers()` which sometimes produced probability `scores` that exceeded 1.
15.6.90. MACROS 1.10.2¶
- GTApprox
- Lowered the sample size requirements; in particular, even 1 point is now enough to build a model if using LR or RSM techniques, and accuracy evaluation and exact fit options are off. See the GTApprox Sample Size Requirements section for details.
- Fixed a bug which led to a division by zero error in the MoA technique when GTApprox/MoAWeightsConfidence is explicitly set to its default value.
- Fixed GTApprox/Technique appearing in the options list twice.
- GTDF
- HFA technique now allows manual approximation algorithm selection — see option GTDF/HFA/SurrogateModelType.
- Corrected some sample size restrictions, see the GTDF Sample Size and Budget Requirements section.
- GTDoE
- Corrected adaptive DoE generator behavior in the case when GTDoE/Adaptive/Criterion is `"IntegratedMseGainMaxVar"`.
- GTOpt
- Implemented multi-objective Surrogate Based Optimization support. See section Surrogate Based Optimization for details.
15.6.91. MACROS 1.10.1¶
- GT
- The seed for random number generation in Generic Tools modules is now received from the system random device instead of using the time-based seed. This fixes the weakness in the behavior of non-deterministic random number generators under Windows (previously it was possible to occasionally initialize different generators with the same seed).
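A sketch of the general idea, seeding from the OS random device rather than the clock. This is illustrative Python, not the MACROS implementation.

```python
import os

def system_seed(nbytes=16):
    """Draw a seed from the OS random device (CryptGenRandom on Windows,
    /dev/urandom on Unix) instead of the current time, so two generators
    created in the same instant still get distinct seeds."""
    return int.from_bytes(os.urandom(nbytes), "big")

s1, s2 = system_seed(), system_seed()
print(s1 != s2)  # True (collision odds over 128 bits are negligible)
```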
- GTApprox
- Correctly prohibited the `0` and `1` values for the GTApprox/MoAPointsAssignmentConfidence option to avoid crashing the `Builder`.
- GTDoE
- Added the Box-Behnken design generation technique; see the GTDoE/Technique option values and the technique-specific GTDoE/BoxBehnken/IsFull option. Also available as an initial sampling technique for adaptive DoE (see GTDoE/Adaptive/InitialDoeTechnique).
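The classic Box-Behnken construction can be sketched in a few lines: each pair of factors takes the four (+1/-1) level combinations while the remaining factors stay at the center, plus a center point. Whether GTDoE generates exactly this point set (and how GTDoE/BoxBehnken/IsFull changes it) is not assumed here.

```python
import itertools
import numpy as np

def box_behnken(n_factors):
    """Classic Box-Behnken construction: for each pair of factors take
    the four (+/-1, +/-1) combinations with all other factors at 0,
    then add a single center point (real designs replicate the center)."""
    points = []
    for i, j in itertools.combinations(range(n_factors), 2):
        for a, b in itertools.product((-1.0, 1.0), repeat=2):
            p = [0.0] * n_factors
            p[i], p[j] = a, b
            points.append(p)
    points.append([0.0] * n_factors)  # center point
    return np.array(points)

design = box_behnken(3)
print(design.shape)  # (13, 3): 3 pairs * 4 combinations + 1 center
```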
- GTOpt
- Added GTOpt Tests.
15.6.92. MACROS 1.10.0¶
- GT
- Mixture of Approximators has been moved from Extras, and MoA is now directly available from GTApprox as one of its approximation techniques — see GTApprox/Technique and MoA options in the GTApprox Option Reference. This change may break the compatibility with previous versions, please see Version Compatibility Issues.
- Added the `da.p7core.stat` module implementing various statistical analysis methods.
- GTApprox
- Fixed a bug in model gradient calculation (`grad()`) which made the results non-reproducible.
- Fixed the GTApprox/GPLearningMode option not appearing in the public interface.
- GTDF
- Fixed a bug in internal validation in the blackbox-based mode which caused the error estimates to be NaN when some of the high-fidelity sample points were outside the blackbox domain. There is still a possibility that internal validation results are NaN, but it happens only when cross-validation subsets were generated in such a way that all sample points in at least one of them are outside the blackbox domain. See section Version Compatibility Issues for details and workarounds.
- GTDoE
- Added the Parametric Study technique (see GTDoE/Technique). Also available as initial sampling technique for adaptive DoE (see GTDoE/Adaptive/InitialDoeTechnique).
- GTDR
- The GTDR/SurrogateModelType option for sample-based Feature Extraction now allows selecting any approximation technique except LR and SPLT.
- Fixed a bug in the Feature Extraction algorithm which led to incorrect results in classification tasks.
- Fixed the `decompress()` method mutating the compressed vector given as an argument.
15.6.93. MACROS 1.9.6¶
- GT
- MACROS now requires NumPy version 1.6.1 or newer to run. Installation will proceed without NumPy, but the installer will issue a warning; see the System Requirements section for details. NumPy is included in the system requirements because it is a widely used package in scientific computing and is usually already installed on target systems; MACROS, in turn, benefits from the capabilities NumPy provides. This update does not break version compatibility: scripts made for previous MACROS versions will work as is.
- GTApprox
- Implemented several significant improvements to the Gaussian Processes (GP) technique. As a result:
- Resolved the problem of GP models degrading to constant in certain parameter space investigation tasks.
- Allowed better handling of noisy sample data, in particular avoiding automatic exact fit when it is not wanted.
- Added a new option to control the trade-off between model accuracy and robustness in GP models — see GTApprox/GPLearningMode for details.
- GTApprox models exported to C will now contain two dedicated methods to calculate model values and accuracy estimation. Both methods also allow to calculate values and gradients separately.
- Fixed incorrect processing of point weights in the incomplete Tensor Approximation technique, which sometimes led to undefined `Builder` behavior.
- GTDF
- Updated the data fusion algorithm to prevent degenerate behavior of Data Fusion models in the cases when the high- and low-fidelity training samples are located in different design space regions.
- GTDR
- Added an option to specify the algorithm for the internal approximator in the sample-based Feature Extraction mode — see GTDR/SurrogateModelType.
- GTOpt
- Fixed unstable methods in multithreading implementation which could lead to crashes.
15.6.94. MACROS 1.9.5¶
- GTApprox
- Added sample point weighting support to the incomplete Tensor Approximation technique (iTA). Point weights affect the iTA model fit to the training sample — see the `build()` method description for details (new optional parameter, weights).
- Mixture of Approximators
- Added the method to evaluate model gradients to MoA models.
15.6.95. MACROS 1.9.4¶
- GTApprox
- Implemented linear extrapolation support for BSPL factors in Tensor Approximation models. See options GTApprox/TALinearBSPLExtrapolation and GTApprox/TALinearBSPLExtrapolationRange.
- Model export to C: fixed a bug in C99 header generation which made the generated headers unusable in C++ code.
- Fixed a few bugs with parameter types in smoothing methods which could lead to incorrect parameter interpretation in `smooth_anisotropic()` and `smooth_errbased()`.
- Fixed `Builder` not updating internal validation accuracy data in the model `info` properly, which, when the same `Builder` instance was used to train several models in succession, led to the same data (actually related to the first model) appearing in the internal validation accuracy information for all models.
- Certain internal dependencies of the GTApprox techniques required allowing degenerate smoothing for linear models. To clarify: linear models will no longer throw an exception on an attempt to use smoothing (`has_smoothing` is set to `True` for linear models), but all smoothing methods will simply return a copy of the original model. This includes all models built by the LR technique, and RSM models that include only constant and linear terms. Note the latter may happen even if GTApprox/RSMType was set to something other than `"linear"`, because this option only restricts the term types allowed in the model and, for example, will not create a quadratic model if the original dependency is linear. Proper smoothing for interaction and quadratic RSM models is to be implemented in future releases.
15.6.96. MACROS 1.9.3¶
- GTApprox
- Corrected the Model Export example so it will work on Windows (provided that gcc is installed).
15.6.97. MACROS 1.9.2¶
- GTApprox
- New technique, incomplete Tensor Approximation (iTA), allows applying the tensor approximation approach when the training sample was obtained using an incomplete Cartesian product DoE (such as a full factorial with some points missing) or a combination of several complete sets with different Cartesian product DoEs.
- New option GTApprox/TAReducedBSPLModel allows a trade-off between model size and accuracy for Tensor Approximation models.
- Fixed a bug in Tensor Approximation model smoothing which made `smooth()` fail randomly when used from a TA model.
- Fixed a Tensor Approximation bug which resulted in an exception from `build()` if the training sample contained repeating values.
- Fixed incorrect parsing of the array form of the x_weights parameter in `smooth_anisotropic()` which caused the method to apply wrong smoothing settings.
15.6.98. MACROS 1.9.1¶
- GTDoE
- Added the support for classic DoE methods (discrete parameters sampling) to a greater number of GTDoE techniques: now supported by Full Factorial, Latin Hypercube Sampling, Optimal Latin Hypercube Sampling, and Optimal Design for RSM. See also the GTDoE/CategoricalVariables option.
- Documentation
- Added a simpler example of using analytical gradients in GTOpt to Code Samples (see example_gtopt_gradients.py).
15.6.99. MACROS 1.9.0¶
- GT
- In addition to Generic Tools modules, the MACROS package now includes derived tools for specific tasks — the MACROS Extras. The first of them, Mixture of Approximators (approximation based on space partitioning), is added in this release.
- Changed in version 1.10.0: the Mixture of Approximators functionality has been moved to the GTApprox module, making it available as one of the approximation techniques (see GTApprox/Technique).
- MACROS switched to a new build system, increasing the number of compatible platforms (see the System Requirements section for details).
- GTApprox
- The dynamic smoothing feature of GTApprox models (the one which used the smoothness parameter of the `calc()`, `grad()`, `calc_ae()` and `grad_ae()` methods) was completely replaced by a more convenient and easier to use `smooth()` method. Also, two advanced smoothing methods are implemented — see `smooth_anisotropic()` and `smooth_errbased()`. This change breaks compatibility with previous versions; see the Version Compatibility Issues section for details.
- Increased the number of pre- and postprocessing hints.
- Implemented a new version of the GP and HDAGP techniques (based on Gaussian Processes) for processing input samples with heteroscedastic noise variance, see the GTApprox/Heteroscedastic option description.
- Added an option to limit the number of points selected from the training set to calculate model accuracy, because this test may be time-consuming for large training samples. See the GTApprox/TrainingAccuracySubsetSize option description for details.
15.6.100. MACROS 1.8.5¶
- GTApprox
- BSPL technique (one of the techniques available in Tensor Approximation, see the GTApprox/TensorFactors option description) is now affected by the GTApprox/ExactFitRequired option: the technique will not try to fit training data exactly when the option is off.
- Corrected exception messages for the Tensor Approximation technique.
- Fixed a bug in the implementation of the Gaussian Processes based techniques (GP, HDAGP, SGP) which made GTApprox build different models depending on the number of OpenMP threads set by the user.
- GTDF
- Removed superfluous training set accuracy information from the GTDF model `info`. It now contains accuracy values only for the model it belongs to.
15.6.101. MACROS 1.8.4¶
- GT
- Revised the Installation section to be more verbose.
- The License Setup section now includes detailed instructions on using floating licenses as well as a node-locked license (see Floating License, Node-Locked License) in addition to the general information on the MACROS licensing system.
- GTApprox
- Fixed a wrong exception type when the vector of mean values specified by the GTApprox/GPMeanValue option has an incorrect dimension.
- Fixed an internal bug in the compactness calculation of an array which could lead to a segmentation fault when building a model.
- Preprocessing/postprocessing fixes:
- Fixed incorrect parsing of the GTApprox/Postprocess/Artifacts hint.
- Corrected the mentions of default option values in the recommendations received as the result of preprocessing to avoid confusion.
- Corrected exception types in pre- and postprocessing functions.
- GTDF
- Fixed a bug in the blackbox-based version of the Difference Approximation technique which could lead to a segmentation fault when building a model.
- GTOpt
- Fixed a Surrogate Based Optimization bug which caused a noticeable slowdown when solving 1-dimensional SBO problems.
15.6.102. MACROS 1.8.3¶
- GTApprox
- Fixed the pre- and postprocessing functions to remove the internal NumPy dependency.
15.6.103. MACROS 1.8.2¶
- GTApprox
- Improved model export to C:
- Now supports exporting models built using the Splines with Tension (SPLT) technique.
- Supports exporting model gradient and accuracy evaluation features if they are available in the model.
- Improved model export to C:
- GTOpt
- Global search now works in multi-objective optimization problems.
15.6.104. MACROS 1.8.1¶
- GT
- Added section “Regression Tests”.
- Fixed an installation bug which could lead to a file access violation error under Windows when reinstalling the same version of MACROS on top of the installed one (without prior uninstall).
- GTApprox
- Updated the documentation on the pre- and postprocessing modes of the builder and added examples.
- Added the Model Export example to illustrate the functionality of exporting a GTApprox model to C code implemented before in 1.8.0.
- Fixed a bug in the GTApprox/Accelerator option processing which made it apply wrong settings if accelerator was set to 5 while Gaussian Processes or another GP-based technique was used. This bugfix also affects other tools using GTApprox internally (for example, GTDF).
- GTDF
- The GTDF/InterpolationRequired option was renamed to GTDF/ExactFitRequired for the name to be consistent with the GTApprox/ExactFitRequired option and for the same disambiguation purposes (see the MACROS 1.8.0 changelog). The old name is kept for compatibility but is considered deprecated.
- GTDoE
- The GTDoE/Adaptive/InterpolationRequired option was renamed to GTDoE/Adaptive/ExactFitRequired for the name to be consistent with the GTApprox/ExactFitRequired option and for the same disambiguation purposes (see the MACROS 1.8.0 changelog). The old name is kept for compatibility but is considered deprecated.
15.6.105. MACROS 1.8.0¶
- GTApprox
- Added model export to Octave, MEX file and C (see the export_to() function description). Makes save_to_octave() deprecated in GTApprox.
- The Builder now allows saving user comments with the model; see the comment parameter in the build() function description.
- Added the RRMS factor to the da.p7core.gtapprox.Model.validate() output.
- Significantly improved the quality of noisy function approximation with the Gaussian Processes technique.
- Due to an ambiguity in the meaning of “interpolation”, the GTApprox/InterpolationRequired option was renamed to GTApprox/ExactFitRequired. The old name is kept for compatibility but is considered deprecated.
- Sample pre- and postprocessing functionality added, see preprocess() and postprocess().
- GTOpt
- New option to force searching for the global optimum: GTOpt/GlobalSearch.
Changed in version 1.10.5: GTOpt/GlobalSearch was replaced by the advanced GTOpt/GlobalPhaseIntensity option.
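To make the 1.8.0 additions concrete, here is a minimal sketch of how the new comment parameter and export_to() might be used together. The pSeven Core calls are shown as comments because they require the library itself; the names come from this changelog, while the exact signatures are assumptions to verify against the reference. Only the NumPy sample preparation is executable.

```python
import numpy as np

# A small training sample for the sketch: one input, one noisy response.
rng = np.random.RandomState(0)
x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
f = np.sin(2.0 * np.pi * x) + 0.01 * rng.randn(50, 1)

# Hypothetical pSeven Core calls (commented out; signatures assumed):
# from da.p7core import gtapprox
# builder = gtapprox.Builder()
# model = builder.build(x, f, comment="demo model")  # `comment` is new in 1.8.0
# model.export_to("c", "model.c")                    # export_to() is new in 1.8.0
```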
15.6.106. MACROS 1.7.7¶
- GTApprox
- Fixed the strong dependency of internal validation results on the random seed.
- GTDF
- Fixed a bug in the internal validation procedure which led to a crash in blackbox mode.
- GTDoE
- Fixed da.p7core.gtdoe.Generator.generate() not accepting a NumPy array as an initial sample in adaptive DoE mode.
- In adaptive DoE mode, if the generator and the blackbox have different bounds, the generator now correctly requests values only for the points which belong to the intersection of the two domains.
15.6.107. MACROS 1.7.6¶
Warning
This release drops Python 2.4 support (see the System Requirements section).
- GT
- Since this release, MACROS supports installing and using multiple versions on the same host (see Version Upgrade for details).
15.6.108. MACROS 1.7.5¶
- GTApprox
- Added response noise variance information to the internal validation output.
- Added a method to calculate accuracy evaluation gradients, see da.p7core.gtapprox.Model.grad_ae().
15.6.109. MACROS 1.7.4¶
- GTApprox
- Added an optional parameter to specify the variance of response values in the training sample (improves the quality of noisy approximations when given an estimate of noise variance over the sample data). See the da.p7core.gtapprox.Builder.build() description.
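As a hedged illustration of the idea behind this parameter, the sketch below derives a per-point noise variance estimate from repeated measurements, which is the kind of estimate the parameter expects. The build() call is commented out, and its parameter name is an assumption to verify against the da.p7core.gtapprox.Builder.build() description.

```python
import numpy as np

# Estimate per-point response variance from 10 noisy repeats per point.
rng = np.random.RandomState(1)
x = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
replicates = np.sin(2.0 * np.pi * x) + 0.05 * rng.randn(20, 10)
f = replicates.mean(axis=1, keepdims=True)             # averaged response
f_var = replicates.var(axis=1, ddof=1, keepdims=True)  # noise variance estimate

# Hypothetical pSeven Core call (parameter name assumed):
# model = gtapprox.Builder().build(x, f, outputNoiseVariance=f_var)
```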
15.6.110. MACROS 1.7.3¶
This is a maintenance release, which does not contain any functional changes or updates.
15.6.111. MACROS 1.7.2¶
- GT
- Improved the support for non-English system locales.
- GTApprox
- New technique added: RSM (Response Surface Model). See GTApprox/Technique and corresponding RSM options in the GTApprox Option Reference.
- GTDoE
- New technique added, Optimal Design. See GTDoE/Technique and corresponding options in the GTDoE Option Reference.
- GTOpt
- Added surrogate based optimization support, see section Surrogate Based Optimization and SBO example.
- Implemented batch mode, see GTOpt/BatchSize option and the Batch Mode example.
15.6.112. MACROS 1.7.1¶
- GT
- The Examples section was reworked and now contains annotated examples. More examples can also be found in the Code Samples section.
- GTApprox
- Implemented a new smoothing method which allows the user to create and save models with default smoothness, see ironing().
Changed in version 1.9.0: this method was replaced with more advanced smoothing methods, see the MACROS 1.9.0 changelog.
- Multiple Tensor Approximation technique improvements:
- Sped up calculation of TA model values.
- Added export to a human-readable format (Octave).
- Implemented internal validation for TA.
- Implemented discrete variables support, see the GTApprox/TADiscreteVariables option.
- Fixed the automatic technique selection logic in case of conflicting options.
- Fixed a bug that made smoothing unusable for exact fit models.
- GTDF
- Fixed multithreading bug that could make GTDF stop or crash.
- Fixed a gradient calculation problem in the case of a model trained on a sample with a constant output value.
- Fixed a bug in automatic technique selection in blackbox mode.
- Evaluation of a model built by a blackbox technique no longer requires input from the blackbox which was used when creating the model.
- Implemented cross-validation for blackbox techniques.
- GTDoE
- Implemented an Adaptive Design of Experiments technique based on Gaussian Processes, see the da.p7core.gtdoe.Generator.generate() description for details.
- GTOpt
- Implemented new internal solver algorithm (SQCQP - Sequential Quadratic Constraints Quadratic Programming) which further reduces the number of objective functions and constraints evaluations.
- Updated GTOpt documentation to disambiguate the use of optimizer hints: see the add_variable(), add_objective(), and add_constraint() descriptions and the Hint Reference.
- New example in the Optimization tutorial.
15.6.113. MACROS 1.7.0¶
- GTApprox
- Added new technique: TA (Tensor Approximation).
- Added capability to perform model validation using separate test set in addition to internal validation.
- New options for Gaussian Processes technique: GTApprox/GPLinearTrend and GTApprox/GPMeanValue.
- GTDF
- Completely reworked tool. Added support for large training sets, O(100 000) points. Added support for blackbox-based model training.
- Implemented internal validation; a number of corresponding options added (see GTDF/InternalValidation, GTDF/IVRandomSeed, GTDF/IVSubsetCount, GTDF/IVTrainingCount).
Changed in version 5.0: GTDF/IVRandomSeed renamed to GTDF/IVSeed.
- Added capability to perform model validation using a separate test set in addition to internal validation.
- Documentation updated.
- GTDR
- Added capability of exporting Dimension Reduction and Feature Extraction procedures to the human-readable Octave format (see the save_to_octave_compress() and save_to_octave_decompress() methods).
- GTOpt
- Improved performance of Multi-Objective Robust Optimization algorithm.
- Introduced a new problem type, the mean variance problem; see the ProblemMeanVariance class.
- Options added: GTOpt/EnsureFeasibility makes the optimizer always stay within the feasible domain; GTOpt/RobustGradientTolerance sets the gradient threshold for robust optimization.
15.6.114. MACROS 1.6.3¶
- GTApprox
- Significantly reduced the memory footprint of the approximation training process with the SGP technique for big samples (100 000 points and more).
- Fixed wrong model serialization on some platforms.
- GTOpt
- Bugfixes in internal algorithms.
15.6.115. MACROS 1.6.2¶
- GT
- Fixed warning-level log messages sometimes showing at the error log level.
- More fixes for wrong exception types.
- The documentation has undergone a complete revision, fixing more inconsistencies with the current development state, expanding descriptions, and disambiguating various issues.
- GTApprox
- It is now possible to serialize a model to/from a string using the tostring() and fromstring() methods.
- GTDF
- It is now possible to serialize a model to/from a string using the tostring() and fromstring() methods.
- GTDR
- Added the has_variable_compression() method which allows the user to check whether the model supports variable compressed vector size.
- It is now possible to serialize a model to/from a string using the tostring() and fromstring() methods.
- Applied some corrections to the dimension estimation procedure.
- GTOpt
- Added the GTOpt/VerboseOutput option which allows the user to turn on trace-level logging of the optimization process.
- Added the Robust Optimization section to the documentation.
15.6.116. MACROS 1.6.1¶
- GT
- Multiple wrong exception type bugfixes.
- Fixed a problem where a model trained and saved on a 32-bit OS could not be loaded on a 64-bit platform.
- Multiple minor documentation inconsistencies were fixed.
- All interfaces became more strict and safe: they now check argument types and values, for instance for NaN and Inf.
- GTApprox
- Introduced a technique based on a sparse Hessian for additional high-precision tuning of HDA approximation in the case of big samples. The new GTApprox/HDAHessianReduction option was added for HDA; it controls the trade-off between time and accuracy. The GTApprox/HDAHPMode option was removed as obsolete.
- Implemented an algorithm for sparse approximation of the inverse covariance matrix and its determinant to accelerate the GP training process.
- Additional tuning of the acceleration switch for the GPHDA technique.
- Fixed the lower bound of GTApprox/SGPNumberOfBasePoints.
- Setting the GTApprox/SGPSeedValue or GTApprox/SGPNumberOfBasePoints option to -1 now causes an exception to be thrown immediately instead of at approximation build time.
- Fixed the allowed range for GTApprox/SGPNumberOfBasePoints.
- GTDR
- DR in FE mode outputs the cumulative loadings matrix into the model info. This is important for getting qualitative information about which input coordinate has the most influence on the output.
- The dimensionality of the compressed space in FE mode became an optional parameter; it is chosen automatically if omitted.
- Added a check to ensure that the number of points in X equals the number of points in F.
- GTOpt
- It is now possible to hint the optimizer with additional information about variables, objectives, and constraints, such as ‘LinearityType’ for objectives and constraints.
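A minimal sketch of the hint mechanism, assuming a dictionary-style hint interface: the ‘LinearityType’ name comes from this changelog, while the key format, accepted values, and method signatures are assumptions to check against the Hint Reference.

```python
# A hint telling the optimizer that an objective is linear, so it can avoid
# unnecessary evaluations (hypothetical key format and value).
linear_hint = {"@GTOpt/LinearityType": "Linear"}

# Inside a da.p7core.gtopt problem definition (commented sketch):
# self.add_variable((0.0, 10.0), 1.0, "x")
# self.add_objective("cost", hints=linear_hint)
```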
15.6.117. MACROS 1.6.0¶
- GT
- Trained models now store the build log inside the model.
- Multiple improvements in GenericTools algorithms.
- All calc-like methods now accept both scalar and vector inputs.
- Multiple bugfixes.
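The scalar-or-vector acceptance mentioned above can be sketched generically in NumPy; this illustrates the calling pattern only and is not pSeven Core code (the quadratic stand-in function is invented for the example).

```python
import numpy as np

def calc(points):
    """Accept a single point or a batch of points, NumPy-style."""
    pts = np.atleast_2d(np.asarray(points, dtype=float))
    values = np.sum(pts ** 2, axis=1)  # stand-in for a real model evaluation
    # Return a scalar for a single point, a vector for a batch.
    return float(values[0]) if np.ndim(points) <= 1 else values

single = calc([1.0, 2.0])               # scalar result: 5.0
batch = calc([[1.0, 2.0], [3.0, 4.0]])  # vector result: [5.0, 25.0]
```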
- GTApprox
- New approximation technique added: SGP - Sparse Gaussian Processes. It makes accuracy evaluation available for large data samples.
- Handling of constant columns in training set is improved, especially for GP and HDAGP techniques.
- Fixed AccuracyEvaluation not working correctly in Componentwise modes.
- Multiple bugfixes in option handling.
- GTDF
- Introduced new public interface.
- DF tool completely reworked.
- Added new DF algorithms (techniques): DA (Difference Approximation), HFA (High Fidelity Approximation) and VFGP (Variable Fidelity Gaussian Process).
- Implemented Decision Tree for automatic selection of DF algorithms.
- GTDoE
- Several minor bugfixes.
- GTDR
- DR model now provides gradients.
- It is possible to force DR to work in PCA mode by setting GTDR/Technique option.
- A Feature Extraction (FE) model can now be built on a blackbox.
- Algorithms of sample-based techniques were improved. In particular, additional iterations of gradient estimation are performed to improve FE model accuracy.
- GTOpt
- Added robust optimization functionality.
- Robust optimization functionality uses well-known OpenTURNS distribution generation mechanisms.
15.6.118. MACROS 1.5.3¶
- GT
- Fixed a minor bug causing a zero byte to appear in log output.
- Overall improvement of exception handling. Tools now throw more adequate exceptions on errors.
- GTApprox
- Constant columns are removed automatically in HDA technique.
- Fixed rare crashes on model save/load.
- GTDoE
- Fixed a bug which caused a second request for generated points to return an empty list (batch technique).
- GTOpt
- Initial guess values are now checked for correctness.
15.6.119. MACROS 1.5.2¶
- GT
- Default log level set to ‘Info’.
- GTDR
- Zero-dimensional compressed space is forbidden.
- GTOpt
- Intermediate result callback is simplified and is now only used to interrupt the optimizer.
15.6.120. MACROS 1.5.1¶
- GTApprox
- Several minor bugfixes.
- Added Approximation tutorial.
- GTDF
- Added the new GTDF/Accelerator option. It allows control over the trade-off between speed and accuracy.
- GTDoE
- Added Design of Experiments tutorial.
- GTDR
- The model builder accepts the dimensionality of the compressed space even in the case of the FE technique; it is treated as a default value in the model.
- GTOpt
- Optimizer options are now split into two groups, Basic and Advanced, for convenience.
- Two basic options added: GTOpt/ObjectivesSmoothness and GTOpt/ConstraintsSmoothness. They allow the user to hint the optimizer about what kind of optimization problem it deals with.
- Fixed some CUTEr issues.
- The number of evaluations of objectives and constraints was decreased significantly (approximately by half).
- Added examples of solving problems with DFO methods.
- Added Optimization tutorial.
- Fixed a bug which led to a failure when NumPy arrays were used in objective functions.
- Fixed missing status messages.