

INTRODUCTION
TO AI ETHICS

Welcome to your Introduction to AI Ethics course page. This course is designed to help you develop practical ethical reasoning skills for working with AI. We move from core moral frameworks to AI governance, operational and contextual values, and ethical design. Along the way, you will work with real cases and structured dilemmas, and develop your own AI ethics guidelines for a context you know.


This page will be updated throughout the course with session materials, resources, and links. Slides become available only after each session.


Remember: We meet weekly on Wednesdays at 19:00 (SAST) for 2-hour sessions, starting 4 February, via meet.ethicedge.co.za

1

Foundations for Ethical Reasoning

Wed, 4 Feb 2026 @19:00 (SAST)

Instructor(s)

Cindy, Brian & Kristy

Session Overview

This session introduces the foundations of ethical reasoning and explores how different moral frameworks shape how we judge right and wrong. You will reflect on your own ethical instincts, learn to recognise how others reason ethically, and understand how ethical approaches influence decisions in AI and technology. We then introduce four key ethical frameworks (consequentialism, deontology, virtue ethics, and Ubuntu), which will serve as core tools for ethical analysis throughout the course.

Lecture Slides

Resources

2

The Landscape of AI Ethics Guidelines

Wed, 11 Feb 2026 @19:00 (SAST)

Instructor(s)

Kristy

Session Overview

In this session, we explore major international and industry-led frameworks that guide AI ethics today. While many organisations share common principles, such as fairness, transparency, and accountability, they differ in emphasis, cultural framing, and enforcement mechanisms. We also clarify the relationship between ethics and law: ethics and regulation are not the same, but ethical reasoning shapes how decisions about regulation are made, including what should be governed and why.

Lecture Slides

Resources

3

Operational Values

Wed, 18 Feb 2026 @19:00 (SAST)

Instructor(s)

Cindy

Session Overview

This session develops the distinction between operational values and contextual values. Operational values are those built into the AI system itself (e.g. privacy protocols, data protection, explainability, and robustness), while contextual values are the societal, cultural, political, and economic values present in the environment where AI is deployed (e.g. responsibility/accountability, fairness, non-discrimination, and sustainability). We start with data protection, a critical operational value, because AI depends entirely on data: without it, AI systems cannot function. By beginning with data protection, we can see how technical safeguards form a foundation for ethical AI before moving on, in later sessions, to the broader contextual values that shape its impact.

Lecture Slides

Resources

4

Explainability & Privacy

Wed, 25 Feb 2026 @19:00 (SAST)

Instructor(s)

Brian

Session Overview

In the previous session, we were introduced to the distinction between operational and contextual values, and we discussed one specific operational value: data protection. In this session, we discuss two further operational values: explainability and privacy. Explainability is important in AI ethics because it allows users, developers, and regulators to understand how AI systems make decisions, increasing trust, accountability, and fairness. Without explainability, errors, biases, or harmful outcomes may go unnoticed or uncorrected. Privacy, often implemented through privacy protocols, is crucial because it protects individuals’ personal data from misuse or unauthorised access, ensuring autonomy, security, and ethical compliance. Alongside data protection, strong privacy measures are essential for maintaining public trust and responsible AI deployment.

Lecture Slides

Resources

5

Privacy & Non-Discrimination

Wed, 4 Mar 2026 @19:00 (SAST)

Instructor(s)

Cindy

Session Overview

The previous two sessions focused on operational values. We now shift to contextual values, and in this session, we specifically explore privacy and non-discrimination. Although privacy was discussed previously as an operational value, we now examine it as a contextual value, because different societies define and prioritise privacy in different ways. This highlights that the line between operational and contextual values is not always clear-cut. We then turn to non-discrimination, focusing on how it functions as a contextual value in AI, particularly in relation to bias in automated decision-making. 

Lecture Slides

Resources

6

Contextual Values

Wed, 11 Mar 2026 @19:00 (SAST)

Instructor(s)

Brian

Session Overview

This session focuses on contextual values, with sustainability as a central example. We examine how ethical concerns around AI are shaped by social, environmental, and political contexts, rather than by technical design alone. In particular, we explore the tension between using AI to support sustainability goals and the environmental and social costs of AI itself. The session highlights why ethical evaluation of AI must consider broader impacts, including energy use, resource extraction, and long-term effects on communities and future generations.

Lecture Slides

Resources

7

Ethical AI Design

Wed, 18 Mar 2026 @19:00 (SAST)

Instructor(s)

Kristy

Session Overview

AI ethics deals with the moral principles guiding the development, deployment, and use of AI. It can be approached reactively (i.e. addressing harm after it occurs) or proactively (i.e. preventing harm from the start). In this session, we introduce a proactive approach in the form of the EthicEdge Design Cycle, which guides us in designing AI that aligns with human values and promotes benefit while avoiding harm. Ethical AI design matters because technologies not only reflect the world but also shape it. Developers are therefore responsible not just for how a system works, but also for how it affects people’s lives. As we will see, ethics should be built into AI from the ground up, not simply added afterward.

Lecture Slides

Resources

8

Personalised Ethics Guidelines

Wed, 25 Mar 2026 @19:00 (SAST)

Instructor(s)

Cindy, Brian & Kristy

Session Overview

What would AI ethics look like if you were the one in charge?

In this final session, you bring together everything from the course to develop and reflect on your own personalised AI ethics guidelines. You will make explicit the ethical framework you are working from, the values you prioritise, and the trade-offs you are willing to defend. The focus is not on producing perfect or universal rules, but on taking responsibility for ethical decision-making in a real context. Through presentation, discussion, and reflection, this session emphasises ethical judgement, transparency about limitations, and learning from critique.

Lecture Slides

Resources
