However, many people are concerned about how AI systems integrate with existing processes and workflows, how they handle data, and how they protect user privacy.
Microsoft is cognisant of all these concerns and is taking steps to address them, resolve problems and mitigate risks. The initiatives below are just some ways it ensures its AI products and systems are safe, secure and fit for purpose.
Fairlearn

Fairlearn is ‘an open-source, community-driven project to help data scientists improve [the] fairness of AI systems’.
It’s a ‘sociotechnical’ tool built to recognise that algorithms and other applications of AI, machine learning and other tools may contain unacknowledged biases and cause social harm.
To cite a well-known example, banks’ models to approve or reject loan applications may make more errors for one group of applicants than others. Fairlearn can assess how these errors impact different groups and suggest how to mitigate such impacts.
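Fairlearn automates this kind of disaggregated assessment across groups. As a rough, stdlib-only illustration of the underlying idea (with made-up loan-decision data, not Fairlearn’s actual API):

```python
# Toy illustration of disaggregated error analysis: compute the error rate
# for each applicant group and the gap between the best and worst group.
# (Fairlearn's MetricFrame performs this kind of analysis on real models.)

records = [
    # (group, model_was_correct)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

errors, totals = {}, {}
for group, correct in records:
    totals[group] = totals.get(group, 0) + 1
    errors[group] = errors.get(group, 0) + (not correct)

error_rates = {g: errors[g] / totals[g] for g in totals}
disparity = max(error_rates.values()) - min(error_rates.values())

print(error_rates)  # {'A': 0.25, 'B': 0.5}
print(disparity)    # 0.25
```

In this toy data, group B’s applications are rejected in error twice as often as group A’s; quantifying that gap is the first step towards mitigating it.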
AI fairness checklist
Microsoft acknowledges that many organisations have guidelines to ensure their AI projects and deployments consider fairness, privacy and security. But in practice, not every organisation provides the checklists, procedures and other tools that project and team leads need to comply with those guidelines.
Microsoft’s AI fairness checklist project involves working with AI practitioners to get their input into checklist design and support. It also focuses on supporting checklists’ adoption and integration into AI projects at the design, development and deployment stages.
HAX Workbook

The HAX Workbook is a tool for structuring early conversations across the different roles needed to implement the Guidelines for Human-AI Interaction. It supports bringing together user experience, project management, engineering and AI disciplines when planning a project.
It helps avoid the problems that can arise when guidelines are tilted too heavily towards one discipline, such as engineering. Such deployments can be too rigid to implement other disciplines’ requirements, thus putting entire projects at risk. The Workbook is a downloadable resource that guides teams through the necessary steps to avoid such problems.
InterpretML

InterpretML is a project to ensure responsible machine learning. It’s a community-driven open-source toolkit that supports different models and algorithms and uses state-of-the-art techniques to explain model behaviour.
Specifically, it can help organisations understand how their models perform, including how performance changes across different data subsets, and it supports exploring errors and analysing datasets.
It can also help organisations understand their models holistically: running ‘what-if?’ analyses, filtering data to observe feature importance, and exploring global and local explanations.
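At its simplest, a ‘what-if?’ analysis changes one input at a time and observes how the prediction shifts. A minimal sketch with a hypothetical linear scoring function (illustrative only, not InterpretML’s actual API):

```python
# Hypothetical linear credit-scoring model: a what-if analysis perturbs one
# feature at a time and records how the prediction moves. Toolkits such as
# InterpretML wrap this idea with richer models and visualisations.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant):
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
baseline = score(applicant)

# What if each feature increased by one unit?
impacts = {}
for feature in applicant:
    tweaked = dict(applicant, **{feature: applicant[feature] + 1.0})
    impacts[feature] = score(tweaked) - baseline

print(impacts)  # in a linear model, each delta equals the feature's weight
```

Real models are rarely linear, which is why per-prediction (local) explanations can differ from the model-wide (global) picture; tools like InterpretML surface both.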
Error analysis

Error analysis is a toolkit that identifies data cohorts with unusually high error rates. It has two essential functions: identification and diagnosis.
The tool can surface discrepancies in error rates using decision trees or heatmaps, as required. Decision trees partition the data by feature and report indicators such as error rate, error coverage and data representation for each cohort. Heatmaps show how input features impact error rates across cohorts.
Diagnosis tools include data exploration, global explanations, local explanations and ‘what-if?’ analysis. These tools allow organisations to understand the reasons underlying identified error rates and take appropriate steps to counter them.
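The identification step amounts to slicing predictions by feature value and flagging any cohort whose error rate sits well above the overall rate. A small, stdlib-only sketch of that idea (illustrative data; the real toolkit uses decision trees and heatmaps):

```python
# Flag data cohorts whose error rate exceeds the overall error rate by a
# margin -- the identification step that tools like Error Analysis automate.

predictions = [
    # (cohort_feature, model_was_correct)
    ("urban", True), ("urban", True), ("urban", True), ("urban", False),
    ("rural", False), ("rural", False), ("rural", True), ("rural", False),
]

overall_error = sum(not ok for _, ok in predictions) / len(predictions)

cohorts = {}
for cohort, ok in predictions:
    cohorts.setdefault(cohort, []).append(ok)

# Flag cohorts whose error rate exceeds the overall rate by more than 10 points.
flagged = {
    cohort: sum(not ok for ok in results) / len(results)
    for cohort, results in cohorts.items()
    if sum(not ok for ok in results) / len(results) > overall_error + 0.1
}

print(flagged)  # {'rural': 0.75}
```

Here the overall error rate is 50%, but the ‘rural’ cohort fails 75% of the time; diagnosis tools would then be used to explain why.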
Counterfit

Counterfit is ‘an automation tool for security testing AI systems’ that Microsoft released as an open-source project. It can help organisations ensure their algorithms are ‘robust, reliable and trustworthy’.
Consumers want assurance that AI systems powering critical areas, including healthcare, finance and defence, are secure against attack. But assessing AI security is complex and difficult. The Counterfit tool was born from Microsoft’s need to evaluate its own systems’ vulnerabilities and has evolved into an automation tool that can attack multiple AI systems.
It’s environment- and model-agnostic and strives to be data-agnostic. It provides an automation layer over adversarial AI frameworks such as the Adversarial Robustness Toolbox and TextAttack. It can assist organisations with penetration testing, vulnerability scanning and logging for AI systems.
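At its core, an adversarial attack searches for the smallest input change that flips a model’s decision. A toy, stdlib-only sketch of that search against a hypothetical keyword-based spam filter (not Counterfit’s actual interface):

```python
import itertools

# Toy "spam filter": flags a message if it contains enough trigger words.
TRIGGERS = {"free", "winner", "prize"}

def is_flagged(words):
    return sum(w in TRIGGERS for w in words) >= 2

# Adversarial search: find the fewest word substitutions that evade the
# filter -- the kind of attack that tools like Counterfit automate at scale.
SUBSTITUTIONS = {"free": "fr3e", "winner": "w1nner", "prize": "pr1ze"}

def evade(words):
    candidates = [i for i, w in enumerate(words) if w in SUBSTITUTIONS]
    for k in range(len(candidates) + 1):
        for combo in itertools.combinations(candidates, k):
            attempt = [SUBSTITUTIONS[w] if i in combo else w
                       for i, w in enumerate(words)]
            if not is_flagged(attempt):
                return attempt
    return None

message = ["claim", "your", "free", "prize", "now"]
print(is_flagged(message))  # True: two trigger words present
print(evade(message))       # a minimal perturbation that slips past the filter
```

Running such searches systematically against a deployed model, and logging the results, is essentially what automated AI security testing does.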
With these initiatives and more, Microsoft hopes to build AI systems that are not only useful and trustworthy but also used and trusted. Here at Eagle360, we’re excited by AI’s potential to help businesses. Why not contact us today to discover how it might help yours?