Glossary of terms

Bias

Bias refers to prejudice in favor of or against certain individuals, groups, or outcomes, often leading to unfair results. This bias can arise from various sources, such as the training data used, the algorithm’s design, or human intervention during development.


Explainability

Explainability refers to the ability to understand and describe how an AI model arrives at its decisions or outcomes. It’s essential for building trust, especially in critical areas like healthcare, finance, and law, where decisions can have significant consequences. Explainable AI allows users to understand the reasoning behind predictions, helping ensure that the system behaves as intended.
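One simple form of explanation applies to linear scoring models, where each feature's contribution to the final score can be read off directly. The sketch below is illustrative only: the feature names, weights, and applicant values are invented, not drawn from any real system.

```python
# Minimal sketch: per-feature contributions for a hypothetical linear
# credit-scoring model. Weights and inputs are invented for illustration.

weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 2.0}

# Each contribution is weight * feature value; their sum is the model's score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# List features from most to least influential (by absolute contribution).
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"score: {score:.2f}")
```

For a linear model these contributions are an exact explanation; for non-linear models, dedicated techniques (such as permutation importance or Shapley-value methods) approximate the same idea.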


Accountability

Accountability in AI means that individuals, organizations, or institutions are responsible for the outcomes and actions of an AI system. When AI systems cause harm, make unfair decisions, or malfunction, there must be mechanisms in place to hold the creators and operators accountable. This ensures that there is recourse and that corrective actions can be taken.


Transparency

Transparency in AI means making AI systems and their decision-making processes clear and understandable to all stakeholders. It involves disclosing how AI models are built, the data they use, and how decisions are made. Transparent AI helps build trust, ensures accountability, and allows users to understand and challenge decisions if necessary. In complex systems, such as deep neural networks, full transparency can be difficult to achieve in practice.


Fairness

Fairness in AI is the principle that AI-driven decisions should be free from discrimination and treat all groups equitably. It ensures that AI models don’t perpetuate biases or create unfair advantages for specific groups. Fairness can be approached in different ways depending on the context, such as striving for demographic parity or ensuring equality of opportunity.
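Demographic parity, mentioned above, can be checked with a simple rate comparison. The following is a minimal sketch with invented decisions and group labels, not data from any real model:

```python
# Minimal sketch: demographic parity difference between two groups.
# Decisions and group labels below are invented for illustration.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# 1 = positive decision (e.g. loan approved), 0 = negative.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(gap)  # 0.5: group A is approved 75% of the time, group B only 25%
```

A gap of 0 would mean both groups receive positive decisions at the same rate; whether that is the right fairness criterion depends on the context.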


Algorithmic Bias

Algorithmic bias refers to a systematic and repeatable error in AI systems that produces unfair outcomes, often favoring one group over another. This bias can emerge from skewed data used in training or from flawed assumptions in the model itself. For example, a recruitment algorithm may unfairly prioritize certain candidates because its historical training data reflects past discriminatory hiring patterns.
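One common way to screen for this kind of bias is to compare selection rates between groups, for instance using the "four-fifths" rule of thumb. The sketch below uses invented screening outcomes purely for illustration:

```python
# Minimal sketch: flag potential disparate impact in a hypothetical
# recruitment screen using the four-fifths rule of thumb.
# All candidate outcomes are invented for illustration.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(outcomes_a), selection_rate(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = candidate advanced by the screening model, 0 = rejected.
group_a = [1, 1, 1, 0, 1]   # 80% advance
group_b = [1, 0, 0, 0, 1]   # 40% advance

ratio = disparate_impact_ratio(group_a, group_b)
print(f"{ratio:.2f}")        # 0.50
if ratio < 0.8:              # common four-fifths threshold
    print("Possible disparate impact; audit the model and its training data.")
```

A ratio below 0.8 does not prove bias on its own, but it is a widely used signal that the model and its training data deserve closer scrutiny.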
