Topics: Artificial Intelligence
Artificial Intelligence (AI) is used in many applications in society, and its future appears to be limitless. Doctoral students were asked to take "Pro", "Con", and "In Between" positions regarding the usefulness and harm of AI.
AI is hotly debated because its potential risks and benefits sit at opposite ends of a wide spectrum.
Team
- Amiji, A.
- Brown, K.
- Estevam, J.
- Holbrook, O.
- Meng, Z.
- Oh, S.
- Pentikis, J.
- Przybyla, A.
- Sevilla, R.
(authors and sources listed alphabetically)
Results Summary
AI services provide overall benefit to society
Pro arguments
- Various sectors of society can benefit from AI. (Platform benefits)
- Manufacturing and healthcare can benefit from automation; educational systems, from personalized learning tools; and the environmental sector and social services, from creative problem solving, outcome modeling, and streamlined solutions. Thus, AI serves as a platform with wide applications.
- AI services will allow humankind to innovate more rapidly than ever before. (Efficiency/rapidity)
- Many problems that are too complicated to solve efficiently by hand can be addressed more quickly using AI, including complex logic, engineering, and mathematical problems, and even ethical dilemmas. In general, AI can increase efficiency and reduce mental load, since working out such solutions manually can be very challenging, time-consuming, or otherwise resource-intensive.
- AI is already benefiting society in the same way other transformative innovations and inventions, like antibiotics, electricity, modern transportation, and calculators/computers, have. People have been using AI-powered services for years, and they have integrated into our lives so seamlessly that many people don't even realize they are using them. Here are some examples: (Impact)
- AI-powered services appear across many domains:
- In healthcare, AI services aid diagnostic image analysis and can help with diagnosing disease and recommending treatment plans. Assistive technologies such as screen readers and computer-generated speech (which Stephen Hawking famously used) help the disabled community improve their quality of life.
- Financial institutions use AI to detect fraudulent account activity and block suspicious transactions.
- Virtual assistants such as Siri and Alexa are powered by AI to answer questions and control devices through voice commands.
- Translation services, like Google Translate, are also powered by AI.
- AI tools help with decision making in fully autonomous and semi-autonomous vehicles.
- Personalized recommendations on streaming services, such as Netflix, YouTube, and Spotify, rely on AI, enabling us to find content we will likely enjoy.
- These examples show the various ways AI technology positively affects health and well-being, protects personal and business finances, reduces language barriers, and generally makes our lives easier and more enjoyable.
- A guide to exploring the black box
- Even though AI is trained on input/output relationships rather than on how those relationships arise, it provides a framework for studying in detail how the steps from input to output are connected.
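The idea of studying input/output relationships in an opaque model can be sketched with a simple perturbation probe. This is a minimal hypothetical illustration (the `black_box` function and its hidden rule are invented for the example), not any specific tool:

```python
# Toy illustration: probing an opaque model by nudging each input
# and watching how much the output moves.

def black_box(features):
    # Stand-in for a trained model whose internals we cannot inspect.
    # Hidden rule: output depends strongly on features[0], weakly on
    # features[2], and not at all on features[1].
    return 3.0 * features[0] + 0.1 * features[2]

def sensitivity(model, baseline, delta=1.0):
    """Return how much the output changes when each input is nudged."""
    base_out = model(baseline)
    scores = []
    for i in range(len(baseline)):
        probe = list(baseline)
        probe[i] += delta
        scores.append(abs(model(probe) - base_out))
    return scores

scores = sensitivity(black_box, [1.0, 1.0, 1.0])
print(scores)  # feature 0 moves the output most; feature 1 not at all
```

Even without seeing inside the model, the probe reveals which inputs actually drive the output, which is the first step toward connecting input to output.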
AI services like ChatGPT cause greater harm to society
Con arguments
- The inscrutability of AI (the "black box" problem)
- This refers to the difficulty of understanding how artificial intelligence systems make decisions, especially systems built on complex algorithms like deep learning models (e.g., image recognition). The decision-making processes of such systems aren't fully known to users, developers, or even the people who designed them.
- For example, an AI model trained to detect abnormalities in X-ray images may instead focus on irrelevant data, such as the presence of a doctor's thumb in the image, and treat it as part of the medical diagnosis. (A similar real-world example: https://doi.org/10.48550/arXiv.2103.01938)
- AI could also perpetuate hidden biases absorbed from its training data (e.g., around ethnicity, income, or gender identity); when used in settings such as university admissions, it could select students unfairly.
- When people use AI for complex tasks in healthcare, business, or scientific research, this inscrutability makes the results hard to trust or validate.
- Dependence and Degradation of Learning
- As a tool, AI has become very useful for navigation, planning trips, gathering information, writing a passable essay, and even decision-making. However, this convenience may erode our cognitive skills, as we no longer feel the need to develop certain knowledge and abilities.
- This dependency could also affect the educational system: as students come to rely on AI for problem solving, educators face the challenge of identifying work produced with AI.
- Generating Misinformation (Misinformation source)
- AI is very good at generating information that seems credible, which is dangerous because it allows misinformation to spread more easily than ever before. This capability is particularly concerning in areas such as news, politics, and public health, where inaccuracy can threaten democracy and public safety.
- A recent example is AI-generated robocalls impersonating President Biden in an apparent attempt to suppress votes: https://www.pbs.org/newshour/politics/ai-robocalls-impersonate-president-biden-in-an-apparent-attempt-to-suppress-votes-in-new-hampshire
- Infringement on Privacy
- AI requires large amounts of data to train, which can include stock photos or personal information you inadvertently posted to a website many years ago. There is a risk of this data being misused by AI systems or their creators. Moreover, AI is now capable of generating realistic videos of people from their likeness, personal style, or behavior, which could lead to false depictions.
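The "black box" concern above, where a model latches onto a spurious cue such as a doctor's thumb in an X-ray, can be illustrated with a toy learner. The data and the learner here are invented for the sketch; real models are far more complex, but the failure mode is the same:

```python
# Toy illustration of a model latching onto a spurious feature.
# Each "image" is reduced to (signal, marker, label): the real signal
# is noisy, while a spurious marker (e.g., a technician's thumb)
# happens to appear in every positive training example.
training_data = [
    (1, 1, 1), (1, 1, 1), (0, 1, 1), (1, 1, 1),
    (0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 0, 0),
]

def best_single_feature(data, n_features=2):
    """A deliberately naive learner: pick the feature whose value
    agrees with the label most often on the training set."""
    best, best_acc = 0, 0.0
    for i in range(n_features):
        acc = sum(row[i] == row[-1] for row in data) / len(data)
        if acc > best_acc:
            best, best_acc = i, acc
    return best, best_acc

feature, acc = best_single_feature(training_data)
print(feature, acc)  # picks feature 1, the spurious marker
```

Because the marker predicts the label perfectly on this training set, the learner prefers it over the genuine signal, and the resulting model would fail on real cases where no thumb is present.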
Overall team summary of analysis (pre and post-discussion)
The overall benefit of AI is hotly debated because the risks and benefits are so far from each other on either end of the spectrum. AI has wide-reaching potential to benefit society, from education to social services. The counterpoint is that it could replace too many jobs, and it is difficult to fully appreciate the ethical limitations introduced by a programmer or the algorithmic bias produced by the data themselves. The pace of innovation AI enables will depend on the balance society strikes between adopting it and regulating it. Imposing controls that allow properly paced development without stifling improvement and implementation will be important.
There are many examples of how AI has already seamlessly integrated itself into routine existence: banks tracking fraudulent activity, translation services such as Google Translate, virtual assistants such as Siri, and streaming services such as Spotify all rely on AI technology. There are just as many examples of how its integration has been detrimental, such as misdiagnoses when used in medical imaging, hidden bias during hiring or university admissions, overdependence on AI as a guide for navigation, and inaccurate, unethical, or dangerous image, audio, and video generation. A major problem with AI services is how widely they are already used and how little oversight there is to control their use and output. While the benefits and drawbacks are polarizing, the topic is not particularly conducive to a middle ground, since there is general consensus surrounding the most frightening aspects of AI. Unfortunately, these are innately tied to how the technology works.
Prior to our class discussion, seven people had an overall con stance toward AI's benefit to society, driven by its impact and its nature as a black box. After our class discussion, six people had an overall con stance toward AI's societal benefit, and the major driving factors had shifted toward fear of it as an unknown or untamed "black box". We spent much of our discussion focused on this topic, specifically using it as the major counterpoint to AI's benefits, so this shift in focus is in line with how we spent our time. Before our class discussion, there was a single major pro-stance driver in support of AI: its potential for "impact". Con-stance rankings were almost evenly split between "dependence" on these sorts of tools and their "black box" nature. After our class discussion, pro-stance support was still influenced by AI's potential for impact, but in the con-stance rankings its "black box" nature became a more important factor. Again, I think the amount of time spent in class on AI's "black box" nature is what drove its dominance in determining overall positions on the topic. This issue was not addressed at all in the pro-stance argument, which may have been a missed opportunity to change class opinion. That said, when experts in the field publicly comment on AI, this unknown and difficult-to-control phenomenon is frequently described as its most dangerous aspect. This lack of control and understanding seems to be a common experience for both laymen and experts. My opinion is that AI technology has been so widely adopted that there is no turning back. The dilemma of appropriate regulation will soon become a more important argument than its implementation. AI's benefit to society hinges on how it achieves the delicate balance between what it offers and its potential to harm.