Model tester

class pyfume.Tester.SugenoFISTester(model, test_data, variable_names, golden_standard=None, list_of_outputs=['OUTPUT'])

Bases: object

Creates a new Tester object that can be used to calculate performance metrics of the fuzzy model.

Parameters
  • model – The model for which the performance metrics should be calculated

  • test_data – The data to be used to compute the performance metrics

  • variable_names – A list of the variable names of the test data (which should correspond to the variable names used in the model).

  • golden_standard – The ‘true’ labels of the test data. If not provided, only prediction labels can be generated and the error will not be calculated (default = None).

  • list_of_outputs – A list of the output names (which should correspond to the output names used in the model) (default = ['OUTPUT']).
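
A minimal usage sketch: the data file name, number of clusters, test arrays, and variable names below are placeholders, and the model is assumed to have been estimated beforehand with pyFUME's top-level pyFUME class and its get_model() accessor.

    import numpy as np
    from pyfume import pyFUME
    from pyfume.Tester import SugenoFISTester

    # Estimate a Takagi-Sugeno model (file path and cluster count are placeholders).
    FIS = pyFUME(datapath='my_data.csv', nr_clus=3)
    model = FIS.get_model()

    # Held-out test data: rows are samples, columns follow variable_names,
    # which must match the variable names used in the model.
    x_test = np.array([[0.1, 0.5], [0.3, 0.2], [0.8, 0.9]])
    y_test = np.array([0.4, 0.3, 1.1])   # the 'golden standard' labels
    var_names = ['x1', 'x2']

    tester = SugenoFISTester(model=model,
                             test_data=x_test,
                             variable_names=var_names,
                             golden_standard=y_test)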

calculate_MAE()

Calculates the Mean Absolute Error of the model given the test data.

Returns

The Mean Absolute Error of the fuzzy model.

calculate_MAPE()

Calculates the Mean Absolute Percentage Error of the model given the test data.

Returns

The Mean Absolute Percentage Error of the fuzzy model.

calculate_MSE()

Calculates the Mean Squared Error of the model given the test data.

Returns

The Mean Squared Error of the fuzzy model.

calculate_RMSE()

Calculates the Root Mean Squared Error of the model given the test data.

Returns

The Root Mean Squared Error of the fuzzy model.
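
Continuing from the tester constructed above, each of the four error metrics is obtained with a single call (this requires that a golden standard was supplied):

    mae = tester.calculate_MAE()     # Mean Absolute Error
    mape = tester.calculate_MAPE()   # Mean Absolute Percentage Error
    mse = tester.calculate_MSE()     # Mean Squared Error
    rmse = tester.calculate_RMSE()   # Root Mean Squared Error
    print('MAE:', mae, 'MAPE:', mape, 'MSE:', mse, 'RMSE:', rmse)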

calculate_performance(metric='MAE')

Calculates the performance of the model given the test data.

Parameters
  • metric – The performance metric to be used to evaluate the model. Choose from: Mean Absolute Error (‘MAE’), Mean Squared Error (‘MSE’), Root Mean Squared Error (‘RMSE’), Mean Absolute Percentage Error (‘MAPE’) (default = ‘MAE’).

Returns

The performance as expressed by the chosen performance metric.
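
This acts as a single entry point to the metrics above, selected by name; for example, the following call is expected to give the same result as calculate_RMSE():

    rmse = tester.calculate_performance(metric='RMSE')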

predict()

Calculates the prediction labels of the test data using the fuzzy model.

Returns

Tuple containing (result, error)
  • result: The prediction labels.

  • error: The difference between the prediction labels and the ‘true’ labels.
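
A short sketch, again using the tester constructed earlier (with a golden standard supplied, so the error term is meaningful):

    # Prediction labels for the test data and the corresponding error.
    y_pred, error = tester.predict()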