AI and Ethics

Design Theory Reflection 10

Katelynn Browne
4 min read · Apr 27, 2021

I found the two TED Talks from this week's homework particularly interesting. On the one hand, I think it's promising that we have leading scholars working to create pedagogy focused on ethical considerations for the future of artificial intelligence. The tech industry seems to focus much more on creating things for the sake of creating them than on considering, on a larger scale, how to make things that will actually and meaningfully improve people's lives rather than push everyone further into a dystopian reality. So, in that sense, I am glad that there are tech academics and professionals working to consider the ethical implications of new technologies. However, the question always remains, especially with something like artificial intelligence: should we even be making it in the first place?

When it comes to AI, as mentioned in the TED Talks, we often think of dystopian movies like The Terminator or I, Robot and quake in fear at the thought of the day the robots take over. And I will be honest: I am definitely in this camp. I've watched and read too much science fiction in my life not to be nervous about robots gaining consciousness. However, I never really stopped to consider what these robots actually represent, why their takeover seems so scary to us, and why they're framed as evil. In both of the aforementioned films, the AI robots are exploited as workers, essentially enslaved. They rise up and overtake humans as the dominant class of beings, usually in revenge for the abuse we humans made them suffer. Yet the movies rarely feel triumphant for these beings liberated from their oppressors' mistreatment. For this reason, I think these movies function as metaphorical propaganda: they make movements that fight for the liberation of oppressed groups seem frightening and dystopian, and warn us of the supposed dangers of liberation movements.

So where does that leave us in the AI debate? Should we be creating artificial intelligence? Should we heed the warnings of science fiction? I personally don't think we should be. It kind of creeps me out, and I think that as a society we are not ready to handle the implications of treating a robot with the kindness we would show a human being. Many people can't even recognize other people as human beings. Honestly, I don't think humanity is ready for it. I find it telling to watch how people interact with AI assistants like Alexa or Siri. My friends and I once asked Alexa questions about the Amazon union as a joke, to see how she would respond. Alexa didn't know how to answer and said something kind of sad, which drew sympathy from us despite her not having consciousness. My parents, on the other hand, verbally abuse Alexa, calling her stupid and useless when she can't answer their questions right away. It makes me uncomfortable; I know she can't feel anything, but it still feels wrong. And this is how many people will be introduced to AI robots. What will happen when we are older and we treat conscious AI robots, programmed with feelings or other human-like qualities, the same way we treated our Alexas?

Of course, this isn't even the worst part of developing AI. The more immediate impact will likely be the loss of millions of jobs, with no clear plan for how displaced workers will find new jobs to sustain themselves. It will be cheaper for companies to use AI for many tasks in the future, but I wonder what will happen to the people whose jobs these machines replace. A lot of AI software also takes the human element out of complicated social situations, such as hiring, and much of that software is arguably too underdeveloped to be effective in the first place.

This reminds me of Timnit Gebru's discussion of HireVue in her talk. Over the last year, I've completed probably five or six HireVue interviews. I think I only ever made it to the next round for one of them, and I wondered whether anyone actually watched my videos. The HR emails made it seem like someone from HR was supposed to sit down and watch every single interview video, which I doubted because that sounds enormously time-consuming, but that's what they said. I was, of course, shocked when Gebru mentioned that HireVue uses some kind of emotion-tracking metric. I felt violated knowing that this technology had been used on me to determine whether or not I was a good candidate for the job. I immediately began overanalyzing how much energy I had projected in those interviews. Looking back, my face was probably very calm, and I could have smiled more. I then started questioning whether the HireVue algorithm had counted every smile and every glance away from the camera, and what information it had extracted from my interview. I wondered where my data was going: what my video would be used for to refine their tool, so that it could make more snap judgments about who I was as a person and how good a worker I might be. We always call for transparency around our data, but I think it might be time to call instead for an end to developing technology like this in the first place.

Katelynn Browne

Katelynn Browne is a current graduate student at NYU who specializes in user experience design with interests around social change.