
Artificial Intelligence: Ethics in AI

Last Updated: Jan 8, 2024 3:29 PM

What are "ethics" when it comes to AI?

With the rise of big data, companies have shifted their focus to drive automation and data-driven decision-making across their organizations.

Automating tasks can be a cost-saving benefit for many companies, as it increases efficiency and reduces time spent on manual work that a machine can do.

While the intention is usually to improve business outcomes, businesses are experiencing unforeseen consequences in some of their AI applications, particularly due to poor upfront research design and biased datasets.

While many AI applications are built with good intentions, many are not. Regardless of the reasoning, the tech world is finding itself questioning the morality of AI use cases more and more.


Below are explanations of ethical concerns surrounding the use of AI.

Facial Recognition

There are several elements of facial recognition technology that are widely viewed as unethical due to their potential for being abused or misused. For example, some governments have used facial recognition technology to monitor and track citizens, violating their privacy and civil liberties. Facial recognition could also be used to target specific groups or individuals, leading to discrimination and bias.

Additionally, facial recognition software has been shown to be less accurate for certain groups, such as people of color and women, due to bias in the algorithms and data used to train them. This can lead to false positives and false accusations, further perpetuating discrimination and injustice.

Another concern is the lack of transparency and consent in the use of facial recognition technology. Many people may not be aware that their image is being captured and analyzed, and may not have given their consent for this use of their personal data. According to Politico, Amazon has shared video from Ring doorbell cameras without users' permission.

Labor for AI Training

Chatbots are trained using Natural Language Processing (NLP). IBM defines Natural Language Processing as

"A branch of computer science—and more specifically, the branch of artificial intelligence or AI—concerned with giving computers the ability to understand text and spoken words in much the same way human beings can.

NLP combines computational linguistics—rule-based modeling of human language—with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to ‘understand’ its full meaning, complete with the speaker or writer’s intent and sentiment."

This means that the more conversations a chatbot processes, the more it learns and the smarter it gets.
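As a toy illustration of the idea that "more conversations processed" means better predictions (and not a representation of how production chatbots are actually built), consider a minimal bigram model that simply counts which word tends to follow which. All of the training sentences below are invented for the example:

```python
from collections import defaultdict, Counter

class BigramModel:
    """A toy statistical language model: counts which word tends to
    follow which, and predicts the most frequent follower."""

    def __init__(self):
        self.following = defaultdict(Counter)

    def train(self, sentence):
        # Record every adjacent word pair in the sentence.
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def predict_next(self, word):
        # Return the most frequently observed next word, if any.
        followers = self.following[word.lower()]
        if not followers:
            return None
        return followers.most_common(1)[0][0]

model = BigramModel()
model.train("the conference was held at dartmouth")
model.train("the conference on artificial intelligence")
model.train("the conference was very influential")

# After three "conversations", "was" has followed "conference" twice
# and "on" only once, so the model's prediction sharpens.
print(model.predict_next("conference"))  # -> was
```

Each additional sentence adjusts the counts, which is the sense in which processing more conversations makes the model's predictions stronger.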

Training AI bots is a practice that has the potential to be very exploitative. 

An investigative report by Time revealed that OpenAI, the company behind ChatGPT, outsourced work to Kenyan laborers who earned less than $2 per hour. OpenAI is one of the most valuable AI companies; at the time of the report, it was in talks with investors to raise funds at a $29 billion valuation.

Another element of the "ethical grey area" of training AI chatbots is the use of public websites, like Reddit, to analyze and interpret conversations to continue training them. The New York Times said, "In recent years, Reddit’s array of chats also have been a free teaching aid for companies like Google, OpenAI and Microsoft. Those companies are using Reddit’s conversations in the development of giant artificial intelligence systems that many in Silicon Valley think are on their way to becoming the tech industry’s next big thing."

Algorithmic Bias

Wikipedia defines algorithmic bias as "systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others." Algorithmic bias can present itself in many ways. One example, provided by the Brookings Institution, is:

Bias in online recruitment tools

Online retailer Amazon, whose global workforce is 60 percent male and where men hold 74 percent of the company’s managerial positions, recently discontinued use of a recruiting algorithm after discovering gender bias. The data that engineers used to create the algorithm were derived from the resumes submitted to Amazon over a 10-year period, which were predominantly from white males. The algorithm was taught to recognize word patterns in the resumes, rather than relevant skill sets, and these data were benchmarked against the company’s predominantly male engineering department to determine an applicant’s fit. As a result, the AI software penalized any resume that contained the word “women’s” in the text and downgraded the resumes of women who attended women’s colleges, resulting in gender bias.
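The mechanism behind the Brookings example can be sketched in a few lines. This is a deliberately naive model (the records and labels below are invented, and Amazon's actual system was far more complex): if the historical hiring labels are skewed, a model that scores words by past outcomes simply replays that skew.

```python
from collections import defaultdict

# Hypothetical historical hiring data: (words on the resume, hired?).
# The labels reflect past human decisions, which skewed male.
history = [
    ({"chess", "club", "captain"}, True),
    ({"engineering", "society"}, True),
    ({"debate", "team"}, True),
    ({"women's", "chess", "club"}, False),
    ({"women's", "engineering", "society"}, False),
]

def word_scores(records):
    """Fraction of past resumes containing each word that were hired.
    A model 'trained' this way learns the bias in its labels, not skill."""
    seen, hired = defaultdict(int), defaultdict(int)
    for words, label in records:
        for w in words:
            seen[w] += 1
            hired[w] += label
    return {w: hired[w] / seen[w] for w in seen}

scores = word_scores(history)
print(scores["women's"])      # 0.0 — penalized, though it says nothing about skill
print(scores["engineering"])  # 0.5 — dragged down only where it co-occurred
```

The word "women's" gets the worst possible score purely because of how past decisions were labeled, which is exactly the failure mode the Brookings example describes.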

Academic Ethics in AI

Using Chatbots in an Academic Setting:

Using chatbots to help generate ideas can be fun, but always use your best judgment when communicating with an AI chatbot. An unethical use of a chatbot is asking it to write a paper for you. Not only is that against school policy, the output is also often inaccurate. Here is an example of an interaction between a student and ChatGPT:

You can't tell by looking at it, but all of the sources the bot provided are fake. Why? Because chatbots are essentially "word pullers." If you ask for a source, the bot generates the response it predicts you want to see. The "source" provided looks exactly like a legitimate citation, but in actuality it is many pieces of text from all over the internet, compiled to look like a source.

*The above photo is a screenshot of a conversation with ChatGPT that reads:

Student: Do you have a source you can provide on the dartmouth conference?

ChatGPT: Certainly! Here are a few sources you can consult for more information on the Dartmouth Conference:

  • "The Dartmouth Conference: The Birth of AI" by John Markoff, in AI Magazine (Volume 27, Number 4, 2006). This article provides an overview of the conference and its significance, including quotes from some of the attendees.
  • "The Dartmouth Conference, 1956: The Birth of AI" by James Hendler, in IEEE Intelligent Systems (Volume 21, Number 4, 2006). This article provides a detailed account of the conference, including its organization, goals, and outcomes.
  • "The Dartmouth Conference: An Artificial Intelligence Milestone" by Oren Etzioni, in Communications of the ACM (Volume 57, Number 10, 2014). This article provides a brief history of the conference and its impact on the field of AI.

These sources should give you a good starting point for learning more about the Dartmouth Conference and its significance.
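The "word puller" behavior in the transcript above can be mimicked with a toy sketch. Every name, title, and venue below is invented for illustration; the point is that a generator which assembles citation-shaped text never checks whether the result refers to anything real:

```python
import random

# A chatbot has no citation database; it produces text that is *shaped*
# like a source. This toy generator fills a citation template with
# plausible fragments, with no check that the publication exists.
authors = ["J. Smith", "A. Johnson", "M. Lee"]
titles = ["The Dartmouth Conference: The Birth of AI",
          "Early Milestones in Artificial Intelligence"]
venues = ["AI Magazine", "IEEE Intelligent Systems",
          "Communications of the ACM"]

def fake_citation(rng):
    # Assemble a convincing-looking but unverified reference.
    return (f'"{rng.choice(titles)}" by {rng.choice(authors)}, '
            f'in {rng.choice(venues)} (Volume {rng.randint(10, 60)}, '
            f'Number {rng.randint(1, 12)}).')

rng = random.Random(0)
print(fake_citation(rng))  # Formatted like a real reference, but nothing
                           # here was checked against any actual publication.
```

This is why a chatbot's citations must always be verified against a library catalog or database before being trusted or cited.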

Academic Integrity:

"The University at Buffalo takes very seriously its commitment to principles of academic integrity as foundational to the enterprise of teaching and learning. Our academic community expects the highest ethical behavior on the part of our students in the completion and submission of academic work. Regardless of whether instruction is in-person or remote, it is every student’s responsibility to complete their work in an honest fashion, upholding both the expectations of their individual instructors as well as the university’s broader academic integrity policy. Only through these practices can students maintain their personal integrity and ensure UB’s tradition of academic excellence."

Please click here to view academic integrity policies.