Elements of AI Ethics
Guidance for understanding and acting on organisational pitfalls, risks and dangers in efforts to approach "AI" responsibly and ethically.
-
Organisation
Accountability projection
Organisations tend not only to evade moral responsibility but also to project the very real accountability that comes with making products and services. The term is borrowed from psychology: manufacturers, and those who implement "AI", project their own responsibilities, shortcomings and failures onto a tool or product.
How do we ensure that "AI complexity" does not keep us from fully understanding and regularly evaluating how well the system works? How do we prevent "it's the fault of the AI" from becoming an acceptable answer to users and stakeholders?
-
Organisation
Monoculture
Today, there are nearly 7,000 languages and dialects in the world. Only 7% are reflected in published online material, and 98% of the internet's web pages are published in just 12 languages, more than half of them in English. Even a system sourcing the entire internet thus draws on a small part of humanity. How "AI" is built and implemented limits perspectives and influences who benefits.
How does our "AI" reinforce the worldview of a small subset of humanity while appearing neutral and universal? How can we actively seek out and incorporate diverse cultural perspectives in "AI" use?
-
Organisation
Deceptive anthropomorphism
Systems are often designed to provide the illusion of talking with something that is thinking, considering or even feeling remorse. These design decisions reinforce a perception of "AI" systems as sentient beings. This contributes to complex trust and relationship issues, and can affect psychological health and wellbeing. Read AI-Human Communication.
What are the ethical implications of using "AI" designed to appear human? How do we help users understand the limited computational aspects of "AI" when they are interacting with an automated system?
-
Organisation
Power concentration
When power rests with a few, their own needs and concerns will naturally be top of mind and prioritised. The more their needs and interests are prioritised, the more power they gain, taking control from others.
How does our use of "AI" contribute to or challenge the concentration of power among a few large technology providers and owners? What steps do we take to diversify "AI" suppliers and reduce dependency on dominant platforms and viewpoints?
-
Organisation
Fearmongering hyperbole
Exaggerated rumours of impending danger and overstated "AI" capabilities are often used, and repeated, to grab attention and control the narrative. There is talk of sentience and of an unstoppable, inevitable, all-powerful intelligence. Careless reporting repeats what tech leaders say with little scrutiny.
How does sensationalist media coverage of "AI" affect our decision-making around "AI" adoption? Are we addressing real, present harms or speculative future risks? How do we critically evaluate claims about "AI" capabilities and risks?
-
Machine
Acceleration of bias and prejudice
"AI" acts as an accelerator of other harms. Unsupervised training data is riddled with biases, abandoned values and prejudiced commentary, all of which are reproduced in outputs, more or less obscured.
How might biases manifest in our specific context? How do we monitor for signs of bias or prejudice, especially toward marginalised groups? What would it take for us to halt or modify a service if discriminatory outputs were discovered?
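One way to make such monitoring concrete is a periodic spot-check of outcome rates across groups in logged decisions. The sketch below is a minimal illustration; the field names ("group", "approved") and the 10% gap threshold are placeholder assumptions, not recommendations, and any flag it raises is a signal for human investigation rather than a verdict.

```python
# Minimal sketch of a disparity spot-check over logged "AI" decisions.
# Field names and the gap threshold are illustrative assumptions.
from collections import defaultdict

def outcome_rates(decisions):
    """Return the share of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        positives[d["group"]] += int(d["approved"])
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flag(rates, max_gap=0.10):
    """Flag for review when best- and worst-served groups differ by more than max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
rates = outcome_rates(log)
needs_review, gap = disparity_flag(rates)
print(rates, f"gap={gap:.2f}", "REVIEW" if needs_review else "ok")
```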
-
Machine
Invisible decision-making
Complex algorithms are hard to understand. Understanding lessens even more as more people get involved, time passes, integrations with other systems are made and documentation falls behind. Critical code and design decisions are often proprietary and hidden from scrutiny. Even the makers themselves can admit to losing sight of the full picture of how the code works.
Can we explain to users how key decisions made by our "AI" systems are reached, and if not, what does that mean for trust? How do we ensure that individuals affected by automated decisions have meaningful recourse? What documentation is needed for understanding and auditing "AI" decision-making over time?
-
Society
Skill displacement & value reform
When deferring decision-making to machines (built by someone else and trained on an obscure mass of content), many people will give up work and skills that may have been more important than the decisions themselves. And when societal values change over time, breaking free from old thinking can prove difficult when those "thoughts" are embedded in a technological system.
What skills are we at risk of losing when reasoning, judgment, or reflection is outsourced to "AI"? How do we ensure that deferring to "AI" does not erode ethical reasoning and professional judgment? When "AI" efficiency is prioritised, what values or practices might be quietly abandoned, and is this purposeful?
-
Society
Acceleration of misinformation
An abundance of believable misinformation can be generated at virtually no cost by bad, unwitting or careless actors.
What policies do we have for verifying generated content before it is published or shared? How might generated misinformation — even when unintentional — affect trust in our organisation? What training do we need to critically evaluate "AI" outputs rather than accepting them as authoritative?
-
Human
Acceleration of fraud & deepfakes
"AI" can be used to believably imitate voice and likeness, making anyone a target for abuse and criminal deception. It opens up the playing field for fraudulent activities that can harm minds, finances, reputations and relationships.
What procedures do we have for verifying the identity of people in communications? How do we know if our brand, voice, or likeness is being fraudulently imitated using "AI"? What guidance do employees and stakeholders need about the risk of "AI" fraud?
-
Human
Acceleration of injustice
Systemic, embedded bias will have dire consequences for people who are already disempowered. For example, scoring systems deployed via automated decision-making can affect job opportunities, welfare/housing eligibility and judicial outcomes.
Do we use automated scoring or decision-making tools that affect people's access to services or opportunities? How are the tools audited? How might existing inequalities in our context be amplified by "AI" tools that were trained on historically biased data? How can people challenge automated decisions that affect them unfairly?
-
Human
Content moderator trauma
Workers employed to label, tag and moderate content are often exploited and suffer trauma and distress without adequate care for their wellbeing. Part of their work is to read about or watch physical violence, self-harm, child abuse, killings and torture, generally to filter this content and prevent it from reaching "ordinary" users.
Do we rely on content moderation, and do we know under what conditions those workers operate? What responsibility do we carry for the wellbeing of workers who process harmful content on our behalf? How do we vet "AI" vendors for their ethical treatment of human reviewers and content moderators?
-
Human
Data & privacy breaches
Personal data makes its way into "AI" tools in several ways, for example through training, online scraping and negligent use. Data points from different sources can, in unison, lead to complex and unwanted revelations about people's lives.
What personal data is our organisation, knowingly or not, feeding into "AI" tools, and where does that data go? How do we ensure employees do not share sensitive client or user data with "AI" services? What are the effects when personal data entered into our "AI" tools is exposed or used in unexpected ways?
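As one concrete safeguard, prompts can be screened for obvious personal data before they leave the organisation. The sketch below is a deliberately simple illustration; the two patterns are assumptions that will miss many kinds of personal data, so screening of this kind complements, and never replaces, policy and training.

```python
# Minimal sketch: flag obvious personal data in text before it is sent
# to an external "AI" service. Patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_personal_data(text):
    """Return (label, match) pairs for suspected personal data in text."""
    return [(label, m) for label, rx in PATTERNS.items() for m in rx.findall(text)]

prompt = "Summarise the complaint from jane.doe@example.com, phone +46 70 123 45 67."
for label, match in find_personal_data(prompt):
    print(f"blocked before sending: {label} -> {match}")
```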
-
Human
Instigation of self-harm and violence
Numerous witness reports tell of "AI" chatbots encouraging or endorsing users' abuse towards themselves or others.
When our "AI" interacts with vulnerable users, what safeguards exist to prevent harmful outcomes? How do we respond if our "AI" produces content that encourages self-harm or violence? What level of liability do we accept for the outputs of "AI" tools we deploy or recommend?
-
Supervision
Obscured data theft
Image and text generators have been trained on vast amounts of data that were not intended for this purpose, and whose owners and makers were not asked for consent.
Do we know whether any "AI" we use was trained on copyrighted material without consent, and does that matter to us? How would we respond if a creator or rights holder challenged our use of "AI" trained on their work? What standards do we apply when evaluating the provenance and legality of "AI" training data?
-
Supervision
Regulatory avoidance
"AI" makers appear to avoid regulation despite many oversteps reported as part of deployment and use. The exploitation of other people's content is becoming normalised, and proponents see it as inevitable. Effective oversight is often described as unattainable, even as efforts to regulate "AI" systems are underway in most countries around the world.
Do we wait for regulation, or do we proactively set our own standards? What would responsible self-regulation look like in our context, given that existing laws may not yet cover our use of "AI"? How do we stay informed on "AI" regulation, and how does it feed into our policies?
-
Environment
Supply chain neglect
To realise digital services and solutions we need both software and hardware. Behind the supply chains of "AI" there are any number of oppressive relationships and human rights abuses. Minerals can, for example, be mined under unfair conditions in exploitative environments reminiscent of slavery.
Do we know what hardware and materials underpin the "AI" tools in our organisation? Do we know the human or environmental costs involved? How do we assess the ethical supply chain of the tech vendors we rely on? At what point do we choose a less convenient "AI" tool because of ethical concerns regarding supply chains?
-
Environment
Carbon, water cooling and e-waste
The energy and water required to source data, train models, power them and compute outputs are reported as significant. While exact figures are often kept under wraps, there have been many studies into the large environmental cost of developing "AI", including its contributions to e-waste. With this in mind, it is apt to ask whether every challenge really calls for an "AI" solution.
Do we calculate or estimate the carbon footprint of the "AI" tools we use? How do we decide when the efficiency gains from "AI" justify its environmental cost? Are we considering energy consumption and environmental impact as criteria in "AI" procurement decisions?
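Even a rough estimate beats no estimate. The sketch below shows a back-of-envelope calculation; both constants are placeholder assumptions, since vendors rarely disclose real per-query figures, and should be replaced with vendor data or published study estimates where available.

```python
# Back-of-envelope sketch: rough annual CO2e estimate for "AI" usage.
# Both constants are placeholder assumptions; substitute real figures.
ENERGY_PER_QUERY_KWH = 0.003    # assumed energy per generated response
GRID_KG_CO2E_PER_KWH = 0.4      # assumed grid carbon intensity

def annual_co2e_kg(queries_per_day, days=365):
    """Estimated yearly emissions in kg CO2e for a given daily query volume."""
    return queries_per_day * days * ENERGY_PER_QUERY_KWH * GRID_KG_CO2E_PER_KWH

print(f"~{annual_co2e_kg(10_000):,.0f} kg CO2e/year at 10,000 queries/day")
```

An estimate like this, set against the concrete efficiency gain of a given use case, makes the trade-off explicit in procurement discussions.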