Tool detects bias in machine learning models

December 14, 2020 // By Nick Flaherty
Tackling bias in edge AI systems
Tools are emerging that make it easy to add machine learning to camera systems, along with tools to address potential bias in edge AI frameworks

End-to-end machine learning tools such as AI Studio from Blaize are aimed at removing the need for data scientists in developing and deploying edge AI applications, especially in camera systems. But data scientists have also highlighted that AI frameworks are vulnerable to bias in the data used for training.

New tools are emerging this week to analyse and even correct bias in such frameworks, which is a key step for edge AI system developers.

Amazon SageMaker Clarify is a new tool released this week that helps customers detect bias in machine learning (ML) models and increase transparency by helping to explain a model's behaviour. These models are built by training algorithms that learn statistical patterns present in datasets, but it can be hard to see how such a model arrives at a prediction, or to detect anomalies in its behaviour.
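As an illustration, a pre-training bias check with Clarify can be run from the SageMaker Python SDK. The sketch below is a minimal example, not a full recipe: the S3 paths, column names and the "gender" facet are placeholders, and it assumes the code runs with a valid SageMaker execution role.

```python
# A minimal sketch of a SageMaker Clarify pre-training bias check.
# S3 paths, column names and the audited facet are placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker notebook/role

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and where the bias report should go
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",   # placeholder
    s3_output_path="s3://my-bucket/clarify-report",  # placeholder
    label="approved",                                # target column
    headers=["age", "income", "gender", "approved"],
    dataset_type="text/csv",
)

# Which outcome counts as positive, and which attribute (facet) to audit
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
)

# Computes pre-training metrics such as class imbalance (CI) and
# difference in proportions of labels (DPL), writing a report to S3
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```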

With AI Studio, launched yesterday, Blaize has tried to address some of this with transparent steps and open frameworks such as ONNX and openVL.
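One practical benefit of an open format like ONNX is that a model trained in one framework can be exported for inspection or deployment elsewhere. A minimal sketch with PyTorch, using an off-the-shelf network and an illustrative camera-frame input shape:

```python
# A minimal sketch of exporting a PyTorch model to the open ONNX format.
# The network and input shape are illustrative placeholders.
import torch
import torchvision

model = torchvision.models.resnet18()
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one RGB camera frame
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=11,
)
```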

Even with the best of intentions, bias issues may exist in datasets and be introduced into models, with business, ethical, and regulatory consequences, says Julien Simon, Artificial Intelligence & Machine Learning Evangelist for EMEA at Amazon Web Services (AWS). This means it is important for model administrators to be aware of potential sources of bias in production systems.
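To see what such a dataset check actually measures, one of the standard pre-training metrics, the difference in proportions of labels (DPL), can be computed by hand. The sketch below uses a tiny invented dataset with hypothetical column names:

```python
# An illustrative check for label imbalance across a sensitive attribute:
# the difference in proportions of positive labels (DPL) between groups.
# The dataset and column names are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["m", "f", "m", "f", "m", "f", "m", "f"],
    "approved": [1,    0,   1,   0,   1,   1,   1,   0],
})

rates = df.groupby("gender")["approved"].mean()
dpl = rates["m"] - rates["f"]
print(f"positive rate by group:\n{rates}\nDPL: {dpl:+.2f}")
# A DPL far from zero suggests the training data favours one group.
```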

For simple and well-understood algorithms such as linear regression or tree-based models, it's reasonably easy to crack the model open, inspect the parameters that it learned during training, and figure out which features it predominantly uses, he says.
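A minimal sketch of that kind of inspection, using scikit-learn models on synthetic data with invented feature names:

```python
# For simple models the learned parameters can be read off directly.
# Synthetic data; the feature names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["age", "income", "tenure", "usage"]  # illustrative names

linear = LogisticRegression().fit(X, y)
forest = RandomForestClassifier(random_state=0).fit(X, y)

# Coefficients show each feature's weight in a linear model...
for name, coef in zip(features, linear.coef_[0]):
    print(f"{name:>8}: weight {coef:+.3f}")

# ...and tree ensembles expose per-feature importances directly
for name, imp in zip(features, forest.feature_importances_):
    print(f"{name:>8}: importance {imp:.3f}")
```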

However, as models become more complex with deep learning, this kind of analysis becomes impossible. Many companies and organizations may need ML models to be explainable before they can be used in production. In addition, some regulations may require explainability when ML models are used as part of consequential decision making and, closing the loop, explainability can also help detect bias.
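Clarify exposes this kind of explainability as a SHAP-based processing job, attributing each prediction to the input features. A hedged sketch, assuming a model already deployed in SageMaker and using placeholder names, paths and baseline values:

```python
# A minimal sketch of a SageMaker Clarify explainability (SHAP) job.
# Model name, S3 paths and the baseline row are placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
processor = clarify.SageMakerClarifyProcessor(
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",    # placeholder
    s3_output_path="s3://my-bucket/clarify-explain",  # placeholder
    label="approved",
    headers=["age", "income", "gender", "approved"],
    dataset_type="text/csv",
)

# Points at a model already created in SageMaker (hypothetical name)
model_config = clarify.ModelConfig(
    model_name="my-trained-model",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP compares each input against a baseline row (placeholder values,
# gender encoded numerically) to attribute the prediction to features
shap_config = clarify.SHAPConfig(
    baseline=[[35, 50000, 0]],
    num_samples=100,
    agg_method="mean_abs",
)

processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```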

Picture: Tools from synthesized.io can analyse AI systems and generate synthetic data to correct bias
