
EXAMPLE: AI risks

Written by Cameron Olshansky
Updated over a year ago

Risk Grouping: AI Governance
Risk Statement: Accountability - Lack of accountability or responsibility over AI systems could lead to regulatory violations.
Risk Description: Responsibility over the AI system is not well defined.

Risk Grouping: AI Governance
Risk Statement: AI Expertise - Lack of dedicated specialists with interdisciplinary skill sets/expertise to assess, develop, and deploy AI systems could result in flawed AI system development or specifications.
Risk Description: Users do not have sufficient understanding of how to use the AI system and are not empowered to detect/override erroneous decisions/outputs.

Risk Grouping: AI Governance
Risk Statement: Availability and Quality of Training/Test Data - Training, test, and production data do not fit the intended behavior or purpose of the AI system, resulting in data type and quality issues.
Risk Description: Training/test data were not validated to ensure their currency/relevance to the intended purpose, OR the amount of training/testing data was not sufficient to provide strong predictive power for the AI system. Data quality is not ensured in this AI system.
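
To make this concrete, here is a minimal sketch of the kind of pre-training data validation this risk describes. It assumes a pandas DataFrame with a `label` column; the threshold values and column names are illustrative assumptions, not prescribed controls.

```python
import pandas as pd

def validate_training_data(df: pd.DataFrame, label_col: str = "label",
                           min_rows: int = 1_000, max_missing: float = 0.05) -> list[str]:
    """Return a list of data-quality findings; an empty list means no issues found."""
    findings = []
    # Sufficient volume: too few rows gives weak predictive power.
    if len(df) < min_rows:
        findings.append(f"only {len(df)} rows; expected at least {min_rows}")
    # Completeness: flag columns with excessive missing values.
    for col, frac in df.isna().mean().items():
        if frac > max_missing:
            findings.append(f"column '{col}' is {frac:.0%} missing")
    # Class balance: a heavily skewed label undermines training/test validity.
    if label_col in df.columns:
        counts = df[label_col].value_counts(normalize=True)
        if counts.max() > 0.9:
            findings.append(f"label '{counts.idxmax()}' dominates at {counts.max():.0%}")
    return findings
```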

Risk Grouping: AI Governance
Risk Statement: Environmental Impact - The use of AI can have adverse environmental effects.
Risk Description: Environmental impacts of using the AI system are not considered.

Risk Grouping: AI Governance
Risk Statement: Fairness - The use of AI systems for automated decision-making can be unfair to specific persons or groups of persons.
Risk Description: Unfair outcomes can be caused by bias in objective functions, imbalanced data sets, and human biases in training data, along with inaccurate feedback provided to the system.
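
One way to surface this risk in practice is to measure outcome disparity across groups. The sketch below computes a simple demographic parity gap with NumPy on toy predictions; the data and the 0/1 group encoding are illustrative assumptions.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups (0 = parity)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy predictions skewed against group 1.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.8 -> large disparity
```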

Risk Grouping: AI Governance
Risk Statement: Maintainability - AI systems based on ML are trained rather than following a rule-based approach, leading to possible defects or the inability to adjust to new requirements.
Risk Description: Maintainability of an AI system and its implications need to be investigated.

Risk Grouping: AI Governance
Risk Statement: Privacy - The misuse or disclosure of sensitive data can have harmful effects on data subjects.
Risk Description: AI systems may infer sensitive personal data, and data used for building/operating the AI system must be protected so that AI systems cannot be used to give unwarranted access to data.

Risk Grouping: AI Governance
Risk Statement: Robustness - The inability of a system to maintain its level of performance under various circumstances/usage can lead to improper functioning.
Risk Description: The AI system needs to be tested against invalid inputs or stressful environmental conditions to ensure its ability to reproduce credible measures/results.
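
As an illustration of the kind of testing this risk calls for, the sketch below probes a stand-in scikit-learn classifier with increasing input noise and reports how stable its predictions remain. The model, data, and noise scales are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a stand-in model on synthetic data (illustrative only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Robustness probe: how often do predictions stay stable under input noise?
rng = np.random.default_rng(0)
for noise_scale in (0.01, 0.1, 1.0):
    X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
    agreement = (model.predict(X) == model.predict(X_noisy)).mean()
    print(f"noise={noise_scale}: {agreement:.1%} of predictions unchanged")
```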

Risk Grouping: AI Governance
Risk Statement: Safety - The inability of the AI system to respond in a safe manner can endanger human life, health, property, or the environment.
Risk Description: Specific standards for particular application domains need to be considered.

Risk Grouping: AI Governance
Risk Statement: Security - Data Poisoning - An adversary may intentionally compromise a training dataset used by an AI (or machine learning) model, resulting in the manipulation of its operations.
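
A hedged sketch of how this risk manifests: the snippet below simulates a label-flipping poisoning attack on synthetic data and compares test accuracy before and after. The 20% flip rate and scikit-learn models are illustrative choices, not a claim about any particular attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Simulated poisoning: an adversary flips 20% of the training labels.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)

print(f"clean: {clean_acc:.2%}, poisoned: {poisoned_acc:.2%}")
```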

Risk Grouping: AI Governance
Risk Statement: Security - Adversarial Attacks - An adversary may manipulate input data in an AI system to trick the system into making incorrect decisions.
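
To illustrate, the sketch below applies a fast-gradient-sign-style perturbation to the inputs of a logistic regression model, for which the input gradient of the loss has the closed form (p - y) * w. The synthetic data and epsilon value are assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a stand-in target model on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

# FGSM-style step: move each input in the sign of the loss gradient.
# For logistic regression, d(loss)/dx = (p - y) * w.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad = (p - y)[:, None] * w[None, :]
X_adv = X + 0.3 * np.sign(grad)

print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```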

Risk Grouping: AI Governance
Risk Statement: Security - Model Stealing - An adversary may reverse engineer a machine learning model, leading to significant losses of intellectual property and various security risks.
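
A minimal sketch of model extraction, assuming the adversary has query-only access to a "victim" model: random inputs are labeled by the victim and used to train a surrogate, and the agreement rate shows how much behavior was copied. All models and data here are stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# "Victim" model the adversary can only query, not inspect.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
victim = DecisionTreeClassifier(random_state=0).fit(X, y)

# Extraction: label random queries with the victim, then fit a surrogate.
rng = np.random.default_rng(0)
X_query = rng.normal(size=(5000, 10))
surrogate = LogisticRegression(max_iter=1000).fit(X_query, victim.predict(X_query))

# Agreement rate indicates how much behavior was copied.
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate matches victim on {agreement:.1%} of inputs")
```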

Risk Grouping: AI Governance
Risk Statement: Transparency - Failure to provide stakeholders with appropriate information about an AI system's capabilities (or limitations) can prevent them from assessing the development, operation, or use of the AI system against their objectives.
Risk Description: A lack of information about how AI systems are applied, how data is collected, which measures are implemented to manage AI systems, or how risks can be controlled leads to a lack of transparency.

Risk Grouping: AI Governance
Risk Statement: Explainability - Lack of the ability to rationalize or help understand how the AI system generated its outcome can lead to a lack of trust in the AI system.
Risk Description: AI systems need to be explainable in order to build trust and accountability in the system. At the same time, excessive transparency/explainability could lead to risks in relation to privacy, security, confidentiality requirements, and intellectual property.
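
As one concrete (and deliberately simple) explainability technique, the sketch below uses scikit-learn's permutation importance to show which input features a stand-in model actually relies on; the model and data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```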

Risk Grouping: AI Governance
Risk Statement: Complexity of AI System Environment - Partial understanding due to the high complexity of the AI environment can lead to uncertainty, which causes risk.
Risk Description: The inability to identify, during the design/development process, all relevant situations the AI system is expected to handle, and to obtain training/test data covering these situations, may add risk in a complex environment.

Risk Grouping: AI Governance
Risk Statement: Level of Automation - Automated decisions of AI systems can impact various areas of concern, such as safety, fairness, or security.
Risk Description: The handover from an AI system to a human agent can increase risk due to time constraints or the limited attention of the human agent.

Risk Grouping: AI Governance
Risk Statement: Machine Learning Risks - Lack of quality data can affect the functionality of the AI system; collected data may not be representative of the application domain, and the data source may incur significant ethical and legal risks. Continuous learning may cause AI systems to drift from their intended behavior.
Risk Description: Machine learning is dependent on the data used for training and introduces risks related to data quality, data collection, and continuous learning.
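
One common way to monitor the continuous-learning/data risk in this row is a drift statistic. The sketch below implements the Population Stability Index (PSI) between a baseline and a live sample; the bin count and the 0.25 "major drift" reading are conventional rules of thumb, not fixed standards.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range production values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)    # avoid log(0) in sparse bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # training-time feature distribution
shifted  = rng.normal(0.5, 1.0, 10_000)   # drifted production distribution
print(psi(baseline, baseline[:5000]))     # near 0: stable
print(psi(baseline, shifted))             # > 0.25 is commonly read as major drift
```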

Risk Grouping: AI Governance
Risk Statement: System Hardware Issues - Risk of hardware errors based on defective components.

Risk Grouping: AI Governance
Risk Statement: System Hardware Issues - Risk of soft errors, such as unwanted temporary state changes of memory cells (or logic components) caused by high-energy radiation.

Risk Grouping: AI Governance
Risk Statement: System Hardware Issues - Risks of transferring trained ML models between different systems constrained by different hardware capabilities in terms of processing power, memory, and availability of dedicated AI hardware accelerators.

Risk Grouping: AI Governance
Risk Statement: System Hardware Issues - Risks when an AI system requires remote processing and storage: network errors, bandwidth restrictions, and increased latency due to the limited/shared nature of network resources.

Risk Grouping: AI Governance
Risk Statement: System Lifecycle Issues - Design and Development - A flawed design process fails to anticipate the contexts in which the AI system is used, resulting in unexpected failures when used in these contexts.

Risk Grouping: AI Governance
Risk Statement: System Lifecycle Issues - Verification and Validation - An inadequate verification/validation process for releasing updated versions of an AI system can lead to accidental regressions (or unintended deterioration/degradation) in quality, reliability, or safety.

Risk Grouping: AI Governance
Risk Statement: System Lifecycle Issues - Deployment - Inadequate deployment configuration can lead to resource problems related to memory, compute, network, storage, redundancy, or load balancing.

Risk Grouping: AI Governance
Risk Statement: System Lifecycle Issues - Maintenance, Update, and Revision - An AI system that is no longer supported/maintained by the developer but still in use can present long-term risks or liability to the developing organization.

Risk Grouping: AI Governance
Risk Statement: System Lifecycle Issues - Reuse - A functioning AI system can be used in a context for which it was not originally designed, causing problems due to differing requirements between the designed and actual use.

Risk Grouping: AI Governance
Risk Statement: System Lifecycle Issues - Decommissioning - Terminating the use of an AI system (or a component based on AI technologies) can result in the loss of information (or decision expertise) that had been provided by the decommissioned system.
Risk Description: If another system is used to replace the decommissioned one, the way information is processed or decisions are made can change.

Risk Grouping: AI Governance
Risk Statement: Technology Readiness - Less mature technologies used in the development/application of an AI system can impose risks that are unknown or hard to assess.
Risk Description: Risks of more mature technologies may be easier to identify and assess; however, risks of complacency and technical debt can rise.
