What is AI risk?

While the particular forms of AI risk continue to evolve, at least the following factors can create, contribute to, or exacerbate such risks:

  1. AI outputs that are inaccurate or misleading;

  2. AI outputs that are unreliable, ineffective, or not robust;

  3. AI outputs that are discriminatory or have a discriminatory effect;

  4. AI outputs that contribute to actions or decisions resulting in harmful or unsafe outcomes, including AI outputs that lower the barrier for people to take intentional and harmful actions;

  5. AI being used for tasks to which it is poorly suited or being inappropriately repurposed in a context for which it was not intended;

  6. AI being used in a context in which affected people have a reasonable expectation that a human is or should be primarily responsible for a decision or action; and

  7. the adversarial evasion or manipulation of AI, such as an entity purposefully inducing AI to misclassify an input (an illustrative sketch follows this list).
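
For illustration of item 7 only, the sketch below shows one widely studied form of adversarial evasion, a gradient-sign perturbation, in which an input is nudged along the loss gradient until a classifier mislabels it. It assumes a hypothetical PyTorch image classifier with inputs scaled to [0, 1]; the model, labels, and epsilon value are placeholders, not anything defined in this document.

```python
# Minimal sketch of adversarial evasion via a gradient-sign perturbation.
# The model, input tensor, and epsilon are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_label, epsilon=0.03):
    """Return a copy of input x perturbed to encourage misclassification."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp the result
    # back to the valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even a small epsilon can flip a model's prediction, which is why evaluations of AI robustness often test behavior under this kind of deliberate input manipulation.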

Safety-Impacting AI: The term “safety-impacting AI” refers to AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of:

  1. Human life or well-being, including loss of life, serious injury, bodily harm, biological or chemical harms, occupational hazards, harassment or abuse, or mental health, including both individual and community aspects of these harms;

  2. Climate or environment, including irreversible or significant environmental damage;

  3. Critical infrastructure, including the critical infrastructure sectors defined in Presidential Policy Directive 21 or any successor directive and the infrastructure for voting and protecting the integrity of elections; or

  4. Strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government.
