v. 0.14.1

v. 0.14.0

The changes are relatively small, but some of them are potentially breaking, hence the version is bumped to 0.14.0.

v. 0.13.1

v. 0.13.0

This release brings an updated implementation of the PLS algorithm (SIMPLS), which is more numerically stable and produces considerably fewer warnings about using too many components when you work with small y-values. The overall speed of the pls() method has also been improved.

Another important change is that cross-validation of regression and classification models has been rewritten as a simpler solution, and you can now also use your own custom splits by providing a vector with a segment index associated with each measurement. For example, if you run PLS with the parameter cv = c(1, 2, 3, 4, 1, 2, 3, 4, 1, 2), it is assumed that you want to use a venetian blinds split with four segments and that your dataset has 10 measurements. See more details in the tutorial, where the description of the cross-validation procedure has been moved to a separate section.
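For instance, a custom split can be passed like this (a minimal R sketch, assuming a predictor matrix x with 10 rows and a response vector y, both names illustrative):

    library(mdatools)

    # one segment index per measurement: a venetian blinds split with four segments
    cv.ind = c(1, 2, 3, 4, 1, 2, 3, 4, 1, 2)
    m = pls(x, y, ncomp = 3, cv = cv.ind)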

Other changes and improvements:

v. 0.12.0

This release is mostly about preprocessing: some new methods have been added, the existing ones improved, and it is now possible to combine preprocessing methods (including their parameter values) and apply them all together in the correct sequence. See the preprocessing section in the tutorial for details.
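As an illustration, a combined sequence can look like the sketch below. It assumes the prep() and employ.prep() helpers described in the preprocessing section of the tutorial and a matrix spectra with raw spectral data; the parameter values are placeholders only.

    library(mdatools)

    # define the sequence of methods together with their parameters
    myprep = list(
       prep("snv"),
       prep("savgol", list(width = 15, porder = 2, pdiff = 1))
    )

    # apply all methods to the data in the given order
    pspectra = employ.prep(myprep, spectra)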

New features and improvements

Bug fixes

Other changes

v. 0.11.5

v. 0.11.4

v. 0.11.3

v. 0.11.2

v. 0.11.1

v. 0.11.0

New features

Improvements and bug fixes

v. 0.10.4

v. 0.10.3

v. 0.10.2

v. 0.10.1

v. 0.10.0

Many changes have been made in this version, but most of them are under the hood. The code has been refactored significantly in order to improve its efficiency and make future support easier. Some functionality has been rewritten from scratch. Most of the code is backward compatible, which means your old scripts should run with this version without problems. However, some changes are incompatible and can lead to occasional errors and warning messages. All details are given below; pay special attention to the breaking changes part.

Another important thing is the way cross-validation works starting from this version. It was decided to use cross-validation only for computing performance statistics, e.g. the error of predictions in PLS or the classification error in SIMCA or PLS-DA. Decomposition results, such as explained variance or residual distances, are no longer computed for cross-validation. This was a bad idea from the beginning, as the way it was implemented is not fully correct: distances and variances measured for different local models should not be compared directly. After long consideration it was decided to implement this part in a more correct and conservative way.

Finally, all model results (calibration, cross-validation and test set validation) are now combined into a single list, model$res. This makes a lot of things easier. However, the old way of accessing the result objects (e.g. model$calres or model$cvres) still works: you can access, for example, the calibration results both via model$res$cal and model$calres, so this change will not break compatibility.
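For example, both of the lines below point to the same calibration results object (a minimal sketch, assuming a fitted model m):

    # new way of accessing the calibration results
    res = m$res$cal

    # old way, still supported
    res = m$calres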

Below is a more detailed list of changes. The tutorial has been updated accordingly.

Breaking changes

Here are changes which can potentially lead to error messages in previously written code.

General

Plotting functions

PCA

As mentioned above, the biggest change, which can potentially cause issues with your old code, is that cross-validation is no longer available for PCA models.

Other changes:

* Default value of the lim.type parameter is "ddmoments" (before it was "jm"). This changes the default method for computing critical limits for orthogonal and score distances.
* Added new tools for assessing the complexity of a model (e.g. DoF plots, see the tutorial for details).
* More options are available for the analysis of residual distances (e.g. marking objects as extremes, etc.).
* Method setResLimits() is renamed to setDistanceLimits() and has an extra parameter, lim.type, which allows changing the method for critical limits calculation without rebuilding the PCA model itself (see the sketch after this list).
* Extended output for summary() of a PCA model, including DoF for distances (Nh and Nq).
* plotExtreme() is now also available for PCA models (before it was used only for SIMCA models).
* For most PCA model plots you can now provide a list with result objects to show the plot for. This makes it possible to combine, for example, results from the calibration set and new predictions on the same plot.
* You can now add a convex hull or a confidence ellipse to groups of points on a scores or residuals plot made for a result object.
* New method categorize() allows categorizing data rows as “regular”, “extreme” or “outliers” based on residual distances and the corresponding critical limits.
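The sketch below shows a few of these tools together. It assumes a data matrix x and uses illustrative parameter values; see the tutorial for the exact options.

    library(mdatools)

    # calibrate a PCA model using the new default method for critical limits
    m = pca(x, ncomp = 4, lim.type = "ddmoments")

    # change the method for critical limits without rebuilding the model
    m = setDistanceLimits(m, lim.type = "ddrobust", alpha = 0.05)

    # categorize rows of the calibration set based on residual distances
    categories = categorize(m, m$res$cal)
    summary(m)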

SIMCA/SIMCAM

Regression coefficients

PLS regression

As mentioned above, PLS calibration has been simplified: selectivity ratio and VIP scores are no longer computed automatically when a PLS model is created. This makes calibration faster and makes the light parameter unnecessary (it has been removed). Also, Jack-Knifing is now applied every time you use cross-validation, so there is no need to specify the coeffs.alpha and coeffs.ci parameters anymore (both have been removed). Since it adds virtually no computational time, it was decided to run it automatically.
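If needed, both statistics can still be computed on demand after the model is calibrated, as in this minimal sketch (assuming predictors x and response y):

    library(mdatools)

    # calibrate a PLS model with full (leave one out) cross-validation
    m = pls(x, y, ncomp = 5, cv = 1)

    # compute VIP scores and selectivity ratio for a chosen number of components
    vip = vipscores(m, ncomp = 3)
    sr = selratio(m, ncomp = 3)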

Other changes are listed below:

v. 0.9.6

v. 0.9.5

v. 0.9.4

v. 0.9.3

v. 0.9.2

v. 0.9.1

v. 0.9.0

v. 0.8.4

v. 0.8.3

v. 0.8.2

v. 0.8.1

v. 0.8.0

v. 0.7.2

v. 0.7.1

v. 0.7.0

v. 0.6.2

v. 0.6.1

v. 0.6.0

v. 0.5.3

v. 0.5.2

v. 0.5.1

v. 0.5.0

v. 0.4.0

v. 0.3.2

v. 0.3.1

v. 0.3.0