
Episode 3: AI Ethics

AI ethics covers a broad range of topics, often dominated by futurist-style debates about sentient machines and military uses of AI. More recently, discussions have focused on how to fairly share the economic benefits expected from the "AI boom." This blog narrows its focus to ethical issues relevant to intereach as AI adoption grows, using the risk framework introduced in Episode 2. Because there are too many applicable use cases to cover succinctly in a blog, a recommended reading by an Australian AI research expert is included. The blog concludes with a summary of major lessons from business AI ethics failures, plus AI prompts for further research.
Model Ethics - Development & Training
intereach faces three main ethical considerations when adopting AI models:
  • Model Transparency: Closed models (see AI Prompt) like GPT-4 offer limited insight into their training foundations, making it difficult to assess their suitability for use by intereach participants and staff. It may be impossible to verify whether an AI model has been appropriately trained for intereach's needs.
  • Data Training Bias: Biased training data can result in unethical outcomes, particularly for underrepresented groups. There is already a long history of reputational and legal issues for businesses in the US that have ignored this ethical concern.
  • Data Consent: Maintaining effective AI performance requires representative data, raising challenges for intereach around balancing model effectiveness (especially closed models) with the data consent rights of participants.
Implementation Ethics - Labour & Intelligibility
intereach faces two main ethical considerations when implementing AI models:
  • Workforce Displacement: AI-related workforce displacement presents a growing ethical challenge for intereach. Whilst Agentic AI rarely replaces entire roles, it is increasingly being used to automate substantial job components, swiftly disrupting the employment landscape. intereach executives should draw insight from historical labour-saving revolutions (e.g. the industrial revolution), noting that AI's impact is likely to be widespread and swift, bringing the challenge of retraining workers for an AI-augmented workplace.
  • Lack of Explainability/Intelligibility: This is a material ethical concern, as access to many social services already comes with significant overhead. The introduction of AI-supported processing may make service navigation more difficult and opaque, unfairly disadvantaging intereach participants who struggle with this type of complexity.
Usage Ethics - Accountability and Misrepresentation
intereach faces two main ethical considerations when considering the usage of AI models:
  • Accountability for AI: AI models are fallible and frequently fail when faced with boundary conditions (e.g. responding in an emergency). For intereach, the decision to adopt AI solutions will also bring organisational accountability for any harm they cause (see AI Prompt).
  • AI Misrepresentation Concerns: Another ethical usage concern is organisations either falsely claiming to use AI when relying on basic automation or human effort, or failing to disclose actual AI usage. Transparency is crucial for building trust with customers; intereach will need to consider when AI usage disclosure becomes necessary.
Want to know more?
In most situations AI ethics concerns will translate into direct operational risks (see Episode 2). For further reading, the Australian-authored text "Made by Humans: The AI Condition" by Ellen Broad is recommended, providing a more detailed treatment of the AI ethics and risks landscape relevant to intereach. (NOTE: Yvette, I have ordered you a copy of this book and had it shipped to your office in Bendigo)
Lessons from AI Ethical Failures
  • There are already many well-documented AI ethical failures globally (see AI Prompt), with an increasing pattern of governments, legislators and regulators holding organisations directly accountable for the behaviour of their AI systems (ignorance is no excuse).
  • Multiple ethical failures compound the harm: biased training data, combined with an unfair implementation and no accountability in usage. Marginalised groups suffer the biggest consequences of these failures - a key insight for intereach.
  • Based on previous ethical failures, the reputational damage to the offending organisation frequently exceeds the direct costs.

AI Prompts for further research...

"What does it mean when an AI model is 'closed,' and why does that matter for businesses and users?"
"What are the key ethical principles and practical frameworks organizations should adopt to ensure clear accountability when their AI systems fail or cause harm, and why is AI accountability uniquely challenging compared to traditional technology failures?"
"What are compelling real-world case studies of AI ethical failures organized by: model ethics (bias, data issues), implementation ethics (workforce, explainability), and usage ethics (accountability, transparency)? Include the business consequences for each."
