Science Unscripted: Decoding Bias in AI


– Good evening. (crowd applauding) You all know UMBC, good evening. – [Audience] Good evening. – See, that’s the way we do that. My name is Karl Steiner,
I’m the vice president for research here at UMBC. I’m really delighted to welcome you to the second evening of
Science Unscripted AI. How many of you were here yesterday, just a quick show of hands? About a third. I’m glad to see a lot of new faces here and I’m thrilled that
so many of you decided to come back for a second
evening of these conversations. When Andy Rathmann-Noonan,
the Executive Director of the National Science and Technology Medals Foundation, approached us back in February of this year, so
it started a long time ago, about the idea of hosting this event here at UMBC, I immediately became intrigued. Talking about AI, a set
of technologies and a set of new ways of looking at data
is a rapidly growing market. I looked it up on the
famous Internet, right? $20 billion currently in that market, but people are projecting that in about five, six years it will be $200 billion and growing
very rapidly after that. But even more importantly, that set of technologies does pervade
all of society, right? If you’re looking at automotive
technologies, all kinds of health technologies, national security, so many segments that AI
is being implemented in. And on our side here at
UMBC, we just announced a formal partnership with the school of medicine at
the University of Maryland in Baltimore about six, seven miles from here where UMBC is
going to provide a new cyber and artificial intelligence core into their Center for
Conservational Research. And really applying some of
the expertise that we have and we all need in some way
into these rapidly growing sets of data that have been generated in the healthcare community. And I’m also proud that our own
professor, Tim Finin, who is a professor in computer science and electrical engineering, was just named a Fellow of the Association for
Computing Machinery. Only 1% of the members
get that recognition, but the reason why I point
him out is he got his for his work in artificial intelligence and semantic web technology. So this is an area that we are
very strong in that we have a strong footprint in and
so I could really not think of a better place to host
this event together with the foundation than right here at UMBC. So before I hand it over,
special thanks to the organizers, to our partners, the National Science
and Technology Medals Foundation, especially Andy, who we'll introduce in a second, Ryan Alaska, Allison Courten, and the entire UMBC
community that was behind the scenes to make this all happen. And with that, it’s really my honor to introduce Andy Rathmann-Noonan who’s the Executive Director of the National Science
and Technology Medals Foundation. Give him a hand and welcome him to UMBC. (crowd applauding) – Hi, everyone, for those who were here last night,
thanks for coming back. Again, as you heard, my name
is Andy Rathmann-Noonan, I’m the Executive Director of the National Science and
Technology Medals Foundation. I do wanna say a special thanks to UMBC for hosting this with us. I do wanna thank the National Science Foundation,
the United States Patent and Trademark Office, and the Howard Hughes Medical Institute. We do receive a considerable
amount of funding from them and they do allow us to
make these events free and open to students like
all of you here tonight as well as the students
tuning in on our live stream. For those of you who
weren’t here last night, just a little background on us. We were founded about 30
years ago based around the belief that scientific
and technological advancement are powerful agents of positive change. So today, we not only
celebrate STEM excellence. We also advocate for the creation
of inclusive, diverse, and equitable STEM communities, and the tangible benefits they have on scientific and technological progress. So over the last two days here
on this campus, we've joined with high school students over lunch and college students like you here tonight, students at UMBC, to have the opportunity to discuss the many challenges we see occurring over a growing technological landscape. In many cases, and perhaps in all cases, the pursuit of technological progress needs to be ensured and needs to be pursued with the consideration of an end result or a
discovery that is inclusive and equitable for all. In the case of artificial
intelligence development, it’s absolutely imperative that this worthy and sometimes
controversial pursuit is approached with an
allegiance and adherence to ethical practices and an opportunity to create a better world
for the entire population. So with that in mind, I'd like to introduce our honored guests as well as our moderator tonight. Our first guest is Dr. Jimmy Foulds. Dr. Foulds is an assistant professor here at UMBC in the Department
of Information Systems. His work aims to promote the practice of probabilistic modeling
for computational and social science, and improve AI’s role in society regarding privacy and fairness. Our second guest is Dr. Loretta Cheeks. She is a data science
expert research consultant and CEO of Strong Ties. During her tenure, she has helped organizations
gain dynamic data insights, serving enterprises,
governments and non-profits. Our third guest is Emmanuel Johnson. He is a National Science
Foundation graduate research fellow pursuing a PhD in Computer Science at the University of Southern California. His research focuses on
building automated systems for teaching soft skills,
specifically negotiation. And we’re thrilled to have
Deborah Kariuki here tonight who will be leading the discussion. Professor Kariuki is a member of the computer science
education faculty here at UMBC, she is an
evangelist and a visionary of computer science education committed to equitable education for all by increasing
computer science education from kindergarten all the way through the academic experience. So why don’t you join me as we welcome these four individuals to the stage? (crowd applauding) – Good evening, thank
you so much for coming. We appreciate you coming. As they say, I am Deborah Kariuki. I am new to UMBC. I am creating a program
to teach people how to teach computer science. That includes teachers and
anybody else who would later want to teach computer science
so that's my new role here. I was a software engineer for 16 years before I got into academia, and since we keep saying computer science for all, I decided, why not me? If you have all the expertise and all the knowledge
then you should step up and that’s why I’m here and I’m very grateful
that you’re here today and I’m going to be the
moderator, and go ahead. – Hello, everyone, I am Emmanuel Johnson. I am a PhD student at the
University of Southern California and my work primarily focuses on teaching negotiation
as was said earlier. And I think sort of the
reason why we focus on this, if you think about it, AI
has been primarily focused on teaching hard skills. So you have intelligent
tutoring systems for math, for reading, so on and so forth, but if you realize, we all have
to interact with each other and there’s these social
skills that we need, especially negotiation
whether it’s trying to decide which movie to go to with
a friend, or getting that job offer and deciding how best to maximize your initial salary. And so these are very critical skills
that we don’t typically think about as things that
AI systems could teach and that’s where my work lies. – I’m Dr. Loretta Cheeks and I am an artificial
intelligence thought leader as well as a STEM advocate. In my AI research, I focus
on unstructured bodies of text, and I am interested in that topic because I'm interested in social influence and migration paths as they pertain to societal issues. For instance, my research
focused on water insecurity in the southwest region
because that's where I live, in Arizona: understanding how allocations are made, how people think about sharing,
and so on, and so forth. – Hi, everyone, I'm Jimmy Foulds. I am an assistant professor in the information systems
department here at UMBC. And I’ve been working on
AI and machine learning for a long time. How do you make these algorithms better? How do you learn from
data, studying things like text and social networks? And more recently I’ve been
looking at, how do you make sure that these AI and machine learning data mining algorithms are fair? So how do you make sure that they don’t unintentionally discriminate against certain populations
based on what’s in the data or perhaps because of
the algorithm itself? – All right, so today we are going to be talking about
decoding the bias in AI. And before we can talk about
that, we wanna talk about our own implicit biases. So what is implicit bias? We all look at it differently
because we all have our own background that we come
with, and whatever background or culture or schooling or environmental exposures we have can create biases in us, in how we look at things. So everybody does have a
bias and so we are going to be talking about
artificial intelligence bias and I’m going to let them
go ahead and explain what their own bias is that
they can talk about. – Right, so when I think about bias, and especially bias in AI, it can often be these algorithms that are more and more being used in situations where humans have typically been. And then often times when we put an algorithm there, we
often assume that, well, if a machine gave us this
answer then it must be objective and in actuality often times
those results aren’t objective. And so what we try to do in our work is to say, “Well, how do you
begin to teach people how “to interact with different
cultures and train them “to better interact with those
“who are different from them?” And the hope is that by
doing that, you begin to address your bias, and so
as you go on, whether you’re a programmer or you’re in a
different field, you can begin to see that in the way that
you interact and address that. – And the way that I think about this whole idea of implicit bias is to think of it in math, right? I think about implicit bias as a coefficient, and bias as that strong slant, right? If you remember, even in algebra, in linear algebra, what the coefficient represents: when you're thinking about biases, the implicit ones are the internal biases that may be collected along the way, maybe unintentionally. They may be informed by one view of things, but understanding how that factor, just as simple as it may seem, the coefficient of someone's strong slant on something, may have influence or may have unintended consequences that one may not be aware of.
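To make that coefficient analogy concrete, here is a minimal sketch (the weights and features are made up for illustration, not from the panel) of how one slanted coefficient tilts a linear model's score:

```python
import numpy as np

# A linear model: score = w @ x + b. Each coefficient in w is a "slant":
# it controls how strongly its feature pushes the final score.
w = np.array([0.5, -0.8])   # hypothetical weights: [years_experience, group_flag]
b = 1.0

candidate_a = np.array([5.0, 0.0])  # 5 years of experience, group 0
candidate_b = np.array([5.0, 1.0])  # identical record, group 1

print(w @ candidate_a + b)  # 3.5
print(w @ candidate_b + b)  # 2.7: a lower score for the same qualifications,
# purely because of the coefficient attached to group membership
```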
– And that's a really nice way to put it, but I'm just gonna pick up on some of the same things you're both saying. So my perspective is: we've created all of these AI tools, and they're taking over more and more parts of our lives. Anytime you go on Netflix and it recommends a movie to you; when you're applying for a college and they decide whether you can get into that college; if you interact with the criminal justice system and you've been arrested, maybe the police would try to predict, or the judge would try to predict, whether you might re-offend, and that may affect your sentence. When you're applying for a job, it's more and more common that AI tools may have impacted whether
or not you get that job. So we’ve gone and built all
these beautiful AI systems. They work pretty well most of the time, but we haven’t taken stock
and really thought about, what are the consequences of just blindly applying these systems? So bias can come up in a lot of different ways. It can come up because our society is biased, and the bias in society filters into the bias of the data, and the data then feeds the algorithm, and then we learn our
algorithm based on the data. It also could be a matter
of how we collect the data, but the ultimate effect
is potentially there are unfair harms that
happen to people because of AI systems, so we need
to look at why that happens and what we can do to
prevent that from happening. – And I just wanna make sure everybody understands what's out there, just surveying some of the ways that AI has most recently shown bias: everything from the classification of a group that was not represented in the data set, meaning images, for instance. Then the pattern recognition, the facial recognition, doesn't recognize that group of people, so there's underrepresentation in the data set. And then if you think about it, as most computer scientists,
we’re just trying to get something done
(laughing) with some level of performance so that we can
write some publication, right? And there are unintended consequences of moving forward with just that view of performance and not realizing that there are consequences if we don't ground our research in human experience. For instance, in my research, I was very intentional to reach out across disciplines to make sure that I did a content analysis. So even when I was training my data, I trained it on the content analysis, the labels that were applied based on humans actually reading and framing; my interest was to understand framing, the frames that were in these news articles, which were my corpus, my use case of unstructured text. And so it's very important that we always include the human in the problem space and not just look only at performance,
that’s one thing as well. – Yeah, that’s a very good statement because my next question was,
how would we address the fears of AI, because many people worry about AI taking their jobs and also making decisions
that are not equitable? So you started on that and we would like to continue on that line. – Great, and I think to the
point that was made earlier is that I think AI is
essentially reflecting many of the biases that we
see in society already. So this isn’t anything new and
I think one way we can begin to address that is to look at
what has happened in the past and to look at a diversity of authors and what they’ve said regarding that. And we go back to quite
a few inventions now. If you look at when the
first movie was created, one of the first movies shown to us was The Birth of a Nation. And when we first started taking motion pictures with Thomas Edison, one of the first films taken was of a Black man eating watermelon. So those things came out as, “Oh, this is how this group is.” So I think looking at what
we’ve done in the past and saying, “Okay, how do
we make sure we don’t make “these mistakes and then
also including groups “that haven’t traditionally
been represented “in these fields.” So that they bring a different perspective because the idea is that if
we want these systems to work for everybody, we have to make
sure that we have everybody at the table helping to make the decision because no matter who
you are, you're going to come with a set of lenses. And you may cover 90% of the use cases, but often times that 10% that you're not thinking of can be addressed if you had other people in the room, helping to collect the data, analyze it, build the system, as well as provide feedback on how well it should work. – So to pick up on one thing you said, you said that it's nothing
new that there’s a bias in AI because we have bias in data and that’s always been there. One thing that’s new is the
scale that this is happening on. So now that we’ve deployed these systems in almost every facet of life,
those same issues that have always been there are
being magnified, amplified, and they’re having a bigger impact on our lives even though they’re not new. And in some ways it can also be that AI can amplify
the bias in society. So in some experiments that my colleagues and I have done at UMBC, we found that AI would take the bias that was in the data, so perhaps one group would underperform on some tasks compared to other groups, and that disparity would then be magnified by the algorithm's predictions. So the bias that's in the data, it's always been there; AI can make it worse, and it can make it more harmful potentially.
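A rough sketch of the kind of check being described (synthetic data and a stand-in model, not the actual UMBC experiments): compare the group gap in the training labels with the gap in a trained model's predictions; a wider prediction gap means the model amplified the bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # synthetic group membership
x = rng.normal(size=(n, 3))
x[:, 0] += 0.6 * group                     # a feature correlated with group
y = (x[:, 0] + rng.normal(scale=1.0, size=n) > 0.9).astype(int)  # labels

model = LogisticRegression().fit(x, y)
pred = model.predict(x)

for name, v in [("training labels", y), ("model predictions", pred)]:
    gap = v[group == 1].mean() - v[group == 0].mean()
    print(f"{name}: positive-rate gap between groups = {gap:.2f}")
# If the second gap is larger than the first, the algorithm has taken the
# disparity that was in the data and magnified it.
```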
– Another thing I was thinking about, with what Emmanuel just said about looking at the images of a group of people and how those images may have formed how people view that group. What I thought about as well: as we know, one of the models that we use in institutions, and for a good reason, is to partner with corporations. And so one of the things that
students particularly have to just keep in mind as we partner with corporations,
corporations are for profit and so when you’re coming
up with a solution, even though it may be profitable, you have to always ask the
question, who will it harm? Who’s it impacting? What are some of the consequences? And have that lens and to me that’s where the interdisciplinary
comes into play, giving you those lenses to ask the questions
the social scientists may be interested in, the
humanities may be interested in, and getting the lens of: these are the potential risks, or unintended consequences and uncertainties, that we just may not know about based on where we are. So as we're racing to
create that next next, whatever that is, we always
keep in mind that it’s not all the time about just the profit and sometimes our view is skewed. Especially if you create
something great, you want it to get out there, but,
yeah, so we just have to keep that in mind as well. – Right, I think a big
challenge of bias in AI is to get the corporations
to take this on board and be motivated to take action. One way is to threaten them with lawsuits, but maybe if we can change the culture from the ground up, maybe at the point where we teach
our students to eventually become those software engineers
to make sure that they think about these issues, and will raise the alarm when things are going wrong. – Even in terms of whenever
we're creating our data set or deciding the attributes that we want to include in our model, we have to think about what are the implications of including one factor versus the other. For instance, we've seen that with, I think it was Amazon: their hiring tool was trained on a data set that really caused disparity for women, right? You were looking at gender, and some things that had some correlation with it, so whenever you ran the model it was still more male-slanted, more male bias inserted in there. So being aware of that, I love this idea of asking where we are in the factors and the attributes, but what are the implications of that as it pertains to others?
– Right, so we can think of some attributes as something called a proxy variable. So you can take away from your system the explicit protected attributes, so your gender or your race; you don't wanna make decisions based on those, where somebody gets a loan because they're male or they're white or something. You can delete those attributes from your system, but everything else is potentially correlated with those things. So for example, a zip code is highly correlated with race and social class. So if you delete race and social class from your system and leave in zip code, then this apparently innocuous variable is actually going to allow your algorithm to discriminate.
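A minimal sketch of the zip-code point (entirely synthetic data, hypothetical variable names): delete the protected attribute, keep the proxy, and the protected attribute is still effectively there.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
race = rng.integers(0, 2, n)          # protected attribute (synthetic)
# In this toy world, zip code agrees with race about 85% of the time
zip_code = np.where(rng.random(n) < 0.85, race, 1 - race)

# Train on zip code alone, with race deleted from the feature set
clf = LogisticRegression().fit(zip_code.reshape(-1, 1), race)
acc = clf.score(zip_code.reshape(-1, 1), race)
print(f"race recovered from zip code alone: {acc:.0%}")
# Roughly 85%: removing the protected attribute did not remove the signal.
```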
– And I think another piece is, even when we write papers and we publish these algorithms, it's best to describe the data set that we used. So if others decide to use that data set for what they're doing, there should be, or we should strive to have, some disclaimer that tells you, “Look, this was trained on this population and it may not work well across the board.” So that when others decide to use it, they're aware of the biases in your data set.
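That disclaimer idea echoes published proposals like datasheets for datasets and model cards; a sketch of what a minimal, machine-readable version might look like (all fields and values below are made up):

```python
# A hypothetical disclaimer shipped alongside a data set.
DATASHEET = {
    "name": "toy-hiring-resumes-v1",
    "collected_from": "resumes submitted to one company, 2014-2017",
    "population": "predominantly male applicants for software roles",
    "known_gaps": ["few female applicants", "single industry and country"],
    "caution": ("Models trained on this data may not work well across the "
                "board; audit for gender and regional bias before reuse."),
}

for field, value in DATASHEET.items():
    print(f"{field}: {value}")
```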
– Yeah, so again, you are mentioning very, very good, important points. So now let's talk about biases in algorithms. That's a phenomenon that occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions, which you have mentioned a lot, in the machine learning process. Can you explain to the students and the audience more about some of the issues you have gone through, and some of the machine learning insights that you have found? – Well, access, having access; we were just talking earlier
about the idea of access. And access particularly in communities that may not have access to a certain technology and I’ll
tell you what that looks like, and being able to understand
how to appropriate or contextualize the technology, the AI that you’re
inserting; understanding that something that works in Europe may not work in Africa, all right. And so, I love, I'm
fascinated with AI and art. I love it, what it allows
for, and spatial computing, but often times those who are creating it are very unrepresentative, and there are consequences of that in terms of appropriation, because that is a part of our future. And so access, to me, and
being able to appropriate or contextualize the AI within the culture that it’s intended or even
you think about cultures that may need the AI, but
if you build it without that thinking of going back to interdisciplinary learning
without the design thinking in place then you’re
going to make a misstep. – So as an example of a case where you have a system that's trained on a certain set of people and then used on a different set of people: there's a tool used in criminal justice, including in Maryland, called the VRAG, the Violence Risk Appraisal Guide I think it stands for, which was designed to predict the risk of recidivism, which means re-offending after you've been arrested. This system was developed using data from Canadians and is now used in Maryland, so that's one example of a system where the data that it was designed with may not be applicable to the places
where it's being used. There are other issues with the system, including the types of questions that are used in the assessment. So, things like whether your parents divorced before a certain age; that's one of the questions, and it is a question that is very split along racial lines, and there's a number of such problematic questions that are being used to assess your risk score. The system also doesn't take into account anything that happened since you've been in jail, or your good behavior, anything like that. You might have been in jail for 30 years, and all these factors that were measured when you came in are used to determine whether you're likely to re-offend 30 years later. So those are some of the ways that these systems can go wrong.
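A minimal sketch of that train-here, deploy-there failure mode (synthetic populations only; this is not the real VRAG or its data): the same risk factors can point in different directions in a different population.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def population(n, w0):
    # Synthetic population: how feature 0 relates to the outcome
    # is controlled by w0, which differs between populations.
    x = rng.normal(size=(n, 2))
    y = (w0 * x[:, 0] - x[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

x_train, y_train = population(5000, w0=1.0)    # population the tool was built on
x_deploy, y_deploy = population(5000, w0=-1.0) # population it is deployed on

model = LogisticRegression().fit(x_train, y_train)
print("accuracy where it was developed:", round(model.score(x_train, y_train), 2))
print("accuracy where it was deployed:", round(model.score(x_deploy, y_deploy), 2))
# The second number drops toward chance: the learned relationships don't transfer.
```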
– And I think one recent example I saw in the news is in medical data, where you have various populations that have different disorders at varying levels. So in these cases you're training these medical diagnostic systems on one group and then using them across a wide range of groups, and it's causing a huge issue. – All right, so we have
mentioned quite a bit about AI and biases and some of the bad things, so can we talk about what is good about AI and what it does, but also about how the process could introduce some biases, where the biases end up producing good results? – So I guess one example is personalization. There are more and more algorithms that really allow us to have suggestions personalized to our tastes. We all know with Netflix, if you use Netflix it recommends you movies you like. If you go on Amazon, it's going to pick other products you might like to buy. So we interact with these systems a lot, and this is a good way that the system is biased towards you, hopefully.
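A toy sketch of the kind of personalization being described; real recommenders are far more elaborate, but the core idea of leaning toward users who look like you fits in a few lines (the ratings below are made up):

```python
import numpy as np

# Rows = users, columns = movies; 0 means "hasn't rated it yet"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def recommend(user):
    # Cosine similarity between this user and every other user
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
    sims = ratings @ ratings[user] / norms
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ ratings               # similarity-weighted ratings
    scores[ratings[user] > 0] = -np.inf   # only suggest unseen movies
    return int(np.argmax(scores))

print("suggested movie for user 0:", recommend(0))  # movie 2 in this toy data
```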
– I think one example is what we do with personalized learning, because in those cases you want an algorithm that's able to learn your behavior and then give you feedback based off of your learning style. So in our case specifically, when it comes to teaching negotiation, we have a system that's able to pick up on different signals about your negotiation style and then provides feedback there. And in that case we want
the system to be biased because it’s tailored towards your needs. And I think what’s key about
that is, we often think of bias in a very negative way, but it can be beneficial if done right and done in the right context. – I think about smart
information, having AI that understands context and you; again, the personalization when you're reading a book and you're able to have some more information in that book. That is a beautiful example also. I think about the nurse robot, right? And being able to have that strong slant, but you'll have it for good,
for the elderly community who may get scammed and we
know that exists, right? Or that humanoid actually
can be an advocate, so that’s a good use and can
be trained to help for good. – All right. – Just another example before we move on, personalized medicine, so this is something that
I think is really going to change all of our
lives in the near future, being able to have medicine
that’s designed just for you. AI can really help to make that happen. – All right, that’s very good. So we have touched on this, but I’d like you to
elaborate on it a little bit. So society has historically injected biases into the design of many of the infrastructures of this country; so, how did we get here? And what needs to happen to make the technology that is being built less biased? – So I think about the
challenge of mitigating some of the biases, right? The good thing about it, I came into the industry when software
engineering was coming into being, and even safety-critical systems. And you think about disruptive technologies; this is not new, that we have disruptive technologies. I think one of the things we learn, even as we see with the recent case with Boeing and the MAX, right, is that processes are still good. So in this area of AI, which is pretty new, new in the sense that there is a lot of momentum and a lot of investment right now in the promise of AI, we make sure that we take from our past some of the safety nets that we instituted back in the day for nuclear, for all of the areas of society where we knew it was safety-critical, where we needed to make sure that our people were safe, that there was no harm done. And what's really good to me is making sure that we have the right set of processes in place to make sure that we're not only looking at things from one perspective. Another thing, again:
interdisciplinary learning is to me key or even when you’re creating a product, you make sure that it’s not just all engineers in the room. That is almost a winning strategy
right there that you take and make sure that
you’re getting the views of others so that they can
at least insert some voice of reason when we're on this path of “we've gotta get it done and it's gonna be great,” but yes, at what risk? – You'll probably wanna
make sure your team of engineers is also diverse and represents different interests. There’s a danger, if your
engineering team is always, say, white men that they don’t
realize the problems that it might cause for other groups because maybe they’re testing the system on their own data when
they’re playing with it. Maybe they are only noticing
things that affect them. So if you have a diverse team in the room, that may also help. – Right.
– Excellent. – And I think to the point
of how did we get here? Is I think we’ve historically
not listened to every group and so because many of the
things, even when you look at what’s happening with
the Me Too Movement: these are things that women have been saying for years; it's not new. It's just that now, with social media, we have these platforms for
people to express themselves and to gain that support, and gain that community quite rapidly. And I think that’s allowing us to bring some of these problems to life. And I think that as these
communities are more empowered and, as both panelists said, we create a more inclusive culture, we
can begin to address that, but these things have
been around for a while. – I think about how world systems are built and how world systems were created. One of the problems that we had in even ushering in industrialization is that we took these rural people and assembled them so that they could build something, and now we have a problem with space, we have a problem with people. And AI is one of the solutions, with our smart grids and Internet of Things; that's one of the promises that we have. We have to be careful, and look at things like what AI for society or AI for social good is doing, or these bodies or institutions that are looking at, “Okay, what is AI in society?” And the reason I think that's good is because often times the way that people approach the problems of today is the way that they were taught to approach them historically. For instance: well, we have a people-in-a-space problem, we will categorize them, we will pin people into their individual bins and categorize them in that way, right? It wasn't done using algorithms, because we didn't have a plethora of data like we have now, or the intelligence that we have now, or the computing power, but what AI offers us is a way to grapple with people in space, and we should hopefully go beyond the classification of people. And we look at other problems in the world where AI will afford us to
live better as human beings. – Just to come back to the original question about how systems in society that hold people down are already part of the problem: I think that our systems in society really reflect power, power relationships between people, and so humanities scholars, such as intersectional feminists,
have been studying this a long time and then my wife, Rosy, is an intersectional feminist
who’s been telling me about all this so I’m really
channeling her right now, but anything like the criminal
justice system, other parts of society reflect the power
dynamics that are in society. And so our AI systems are often just going to reflect those same power structures, so when an AI becomes biased,
it’s really just building on what’s already happening,
maybe just taking it a step further perhaps amplifying it. And in many cases, those in power may not be interested
in fixing the problems. So it’s up to the rest
of us to actually raise the issues, realize that
people are being harmed, and trying to draw attention
to that when it’s happening. – And I would say that the uniqueness about these institutions
like this is you drive the way that the world would
go in terms of your research. And so the good thing about it is that the ways you are actually approaching problem spaces will cause people to think. And your publications,
they are very important because what you’re publishing
now, the implications of that may not be manifested
for two years or so, but it will get there and this
body of manuscripts and body of ways of looking at
things, it will inform what businesses are doing. When I go now to the conferences
where there's a big arm in business, what they're doing is what we were doing in AI years back, but it's catching up. It's like, now we can see
what we can do with this. It’s real, it’s really real,
so what you’re doing in terms of researching and
discovering and exploring ways and asking very good
questions about things, and allowing your mind to go to places where other people may not have thought about different solutions
and exposing oneself to the problems of the world is very, very important for understanding how and where AI can sit. I'm thinking particularly, I don't know if it's a shout out, but there is an approach that someone was offering, Yoshua Bengio, Dr. Yoshua Bengio, and he was saying that we should focus on problems of the world, right? We have a lot in America, we're so blessed.
cause us to not understand the power of what we have in this technology that
will enable, I’m talking about us and other people
to live and be and also to institute processes and innovations that would revolutionize the way that we all can
be better connected. – And that’s a really good point because when we were talking to the high school students most of them were thinking in that way. Some of the questions
that they were asking were about how AI is affecting medicine and also how we can use AI for sustainability and climate change. So they are thinking about the future, and if they are our future, then they really are thinking about some of these issues. So I would like you to talk more, as you had just talked, Loretta, with the students, you know, about what you've been thinking. How do these technologies affect people's lives? How do they perpetuate injustices in hiring, which we talked a bit about, but also in retail, in security, and as they may already be doing in the criminal justice system? – So one of the things about
the introduction of me: I'm the CEO of Strong Ties, and just talking about that name, Strong Ties is grounded in the work of Granovetter on strong and weak ties in network theory. And my organization particularly works with underrepresented and underserved students on understanding and contextualizing the problems of the world, in terms of introducing engagement in STEM. And what that means is I use, for instance, the Sustainable Development Goals for understanding: these are the world problems in general, these are the goals that all of us in the world should be achieving. Now how do I introduce AI and data? I use the human development index for introducing students to what we know as just a plain list, and then we go on to understanding metrics, we go on to understanding correlations,
and what that means as it pertains to solving, for instance, we’re focusing right now on ending child marriage, right? And child marriage is
big around the world, and what does that mean in
terms of poverty and economics, and health and wellness, and so forth? And all of the Sustainable Development Goals. So start that early. We were speaking to high school students during the lunchtime period, and these high school students, I called them brilliant, because they were thinking about things, and they were really a good example of what happens if you give a child, a high schooler, a middle schooler, as early as you can, some way to contextualize or be introduced to subjects that we want them to start thinking about as they matriculate into higher learning. You have to start there, and, I always say, my motto is: you give a student what they know so that you can introduce what they don't know, just as simple as that. And in that context, the
students were interested in climate change and you’re able to now say, “Okay, well, climate change “is just something that many
people are also interested in.” I definitely am when I study water to understand how attitudes and beliefs and values are shifted about this idea of who gets water,
where did our water come from and is it equitable? – A lot of my students
are also really interested in these social good applications. I don't have to motivate them to do that; just by themselves, those are the projects they pick. So I don't know how many of my students are in the audience, but some of them are working on things like storm prediction or
wildfires, predicting crimes. They were looking at access to food and how that affects crime, whether food deserts are a problem. And so my students are very, very motivated to do those types of things. So I really have a lot of hope for the future, where AI is going to help us solve a lot of real world problems. – And I would say, well, one of the things that has
helped sort of shape my view is just traveling, traveling to places that you wouldn't commonly go to, meeting other people from different backgrounds, interacting and learning about their
experiences and don’t come at it with a very defensive lens. Just listen to people, take in what they say, go back, think about it, and even if you’re not
able to travel there are many organizations on
campus that aren't necessarily in your major, not in your discipline, or not in your focus; go to those organizations, look at what they're doing,
volunteer, just talk to people. I think you start to get a sense of how other people’s world
view differs from yours. In many ways, in those
interactions, you start to understand some of the
problems you’re dealing with. I think often times we
tend to sit in the lab and read papers and think through all of these elaborate
solutions without talking to the people that they're going to affect, because often times papers may be focused on one aspect of the problem, but as you
begin to talk and engage with people, you realize
that there’s other issues that you’re very well equipped to solve. – I was thinking as you were
talking about that: I mentor a student that's from the Congo, and I also take students around the world, and I said, “Before I take you back to Africa, I need you to talk to Africa; she's from the Congo.” And because, again, people
are, like here, different. You have people from
all over the world here and so much to learn. I know that food is my friend and food is a good conversation piece to actually share food
that’s from other places and start a conversation about differences and understanding other’s views, be it even if you think
that it’s different from yours, I say refrain from judgment and listen so that you can
understand someone else’s views because you may be
working alongside of them, or they may be the creators of the next big thing, and you want to be a part of that conversation so that you can interject
your view of things. – All right, thank you so much. This has been very, very wonderful and we have appreciated it. So if you just have last
comments, we just have a couple of minutes for anything you wanna leave with the students, the importance of AI and all that, and then after that we're
going to have question time. During the question
time, if you would like to ask a question you are
going to walk over to the mic and as soon as you ask
your question you sit. If you would like to have
a follow-up, you go back and line up behind other people, so that's how we are going to do it. So just give them a quick wrap-around. – Let's start with Jimmy. – So we have said a lot of things about AI, some
of them not so nice. So I wanted to also talk about
the great potential for AI. It’s really changing
our lives, it’s really doing things that weren’t
possible before, like being a champion at the game of Go. We never thought that would happen in even 10 or 20 years, and all of a sudden it happened, thanks to deep learning and very great engineering. So it's really exciting
what’s happening with AI and I really think also
responsible AI is the frontier so we do know that there
are these issues with AI and we are working hard to solve them. I hope that everybody
can be involved with that and be part of the conversation. – Well, I would say,
especially for the sake of AI survival, don’t be fearful, be cautious, ask good
questions, because I was around. When I did my masters, I did my masters in artificial neural networks, and we know that time was called the second AI winter. And that means I did all this nice work and Westinghouse said, “Thank you, but we can't use that, we're not ready, okay?” And so we are at a time
where there’s so much momentum and as much as we do know that we should be cautious, we want to move AI forward and we want to explore it in
possibilities where people are thinking about what is,
what is actually possible? That’s why I love the art. And the art has been a good window in which to say let’s look at something and allow
open-ended questions to be answered, allow the observer to look in, and it creates a window in
which we can actually explore. – And to add to that, I would
say, as students, faculty, and those of us who are both
in and outside of AI, we have to realize that this is our world being shaped. And so whether you're developing the technology or not, find ways to get engaged and understand what's
happening, being able to creatively think about it because you never know
where you’re going to be in the future and your ability to help shape where this field is going. And as students we have to
realize, this is our world that is being shaped and being a part of that conversation allows us to get AI to do the things that we care about and lastly, take the time while you’re in college to meet people that are not like you, meet people that have different experiences and so that informs your worldview, and I think that’s very valuable as we move into this new
interconnected world. – All right, good, let’s go
ahead and give them a hand, and then we go ahead
and clap for everyone. (crowd applauding) (laughing) So if you would like to ask a
question, let me get the mic. – [Attendee] Hi, there,
I’ll ask a question. So thanks for being here,
I really appreciate it. I am curious, why did each of you decide to get involved in AI? There are so many different things you could have studied within
the degrees that you chose, what made you decide specifically
artificial intelligence was important to you? – Oh, sorry, so my
oldest son was diagnosed with autism, he’s 33 years old and I wanted to know what happened to his brain, as simple as that. And so I started to study
the brain and cognition and that was the early
90s, and he was born in ’85, he was diagnosed about ’87 or so, and I really wanted to
understand what happened because I had no context
of that in my family, in my environment and artificial
neural networks allowed me to explore what damage may look like and also to explore other ways of learning, so that it
created possibilities for me to teach and have him learn. And so he's doing fine
now, he’s quite funny, so he’s a good guy. – I think for me, I grew
up in the inner city, I was a first-generation college student, and even going into
undergrad I didn’t have the best grades coming out of high school, but I never felt like I could not compete or that I was dumb. And so as I went through my
undergraduate career, one of the things I started to realize was that
there were certain spaces where I wasn't prepared. And when I started learning about AI and this idea that technology can do the same things that people do, I kept thinking to myself, man, imagine if I had some computer
system that could've filled in those gaps, or that
could have even alerted me to those gaps that I had
in my knowledge and so one of the things I was interested
in was how people learn and then how do you get machines to teach people certain skills? And so that drove my interest
in AI and as I got more and more into it, I realized
was that what helped me be successful wasn’t, well,
although my technical skills are very important, it was
also those soft skills, the ability to connect
with people, the ability to communicate and I wanted to make sure that other students who came from my background got that because, and many times, people aren’t, when you meet people, they’re
not gonna pull out an exam and test your abilities, your knowledge and what you know comes
through in what you say and how you say it. And also what you’re
able to convey to people as you interact with them and so that’s what kinda got me interested in where I’m at now. – My story is probably
a little less inspiring. (laughing) So I was an angsty teenager way back and I was thinking about
studying philosophy because I was feeling so
frustrated with the world and I thought philosophy
maybe would give me all the answers, but then I was thinking also about computer science because I enjoyed playing computer games and I liked messing
around with programming and programming my own games. And then so I was deciding, and then computer science
gave me a scholarship, so that made the choice easier. (laughing) And then I took AI because it allowed me to take some philosophy
courses on the side. Now, AI was not such a big thing back then. It was really just, it was the AI winter, as you said. It had been a big thing for a while, and it was no longer a big thing. It wasn't clear if it was gonna be a big thing again. So I didn't get into it because I thought it was gonna be such a huge step in my career, that it was gonna make everything great. I just kind of stumbled into it, but then I did really fall in love with it, because it was such a beautiful marriage of interesting theory and actual applications that matter, really making a difference to the world, and that was just really cool, so I stuck with it.
just getting into AI, with an AI fellowship from MIT, so I'm going to be doing this this whole year. And I'm going to learn more about AI. I'm excited to see how AI works with education, so that's what I'll do, and hopefully next year when we meet again, I can
have more information. (laughing) – [Attendee] Thank you, so
this talk was incredibly timely because just this week,
I believe it was in the Washington Post, there was a report about an AI program that was being used by a healthcare company, and
what they’ve discovered was there was incredible
disparity in treatment of people who are white
versus African Americans, and one of the things that
the reporting brought attention to is the fact that the
academic researchers who had evaluated this program
had unprecedented access to proprietary artificial
intelligence development that had been done commercially. So I think you all have
sort of hit on that fact, but this potential for
that partnership between the academic researchers who
aren’t necessarily invested in the profit versus commercial
entities that wanna use it to try and help their bottom line is a really important intersection and I would welcome your
comments on that, thank you. – So I’m really glad that you brought up the medical context. I think this is really relevant
to what your question is. So my colleagues and I at UMBC,
we just got a grant to work on fairness in the allocation
of healthcare resources. So it’s in partnership with
the Hilltop Institute which is an institute on campus
which deals with Medicaid data. So they recently deployed a system that is re-ranking the Medicaid wait list; it previously was first in, first out, and now the time to being admitted to a nursing home is predicted by a statistical model. And it's not clear yet whether there's bias in that system, but it seems very likely that there would be; that article you mentioned showed another similar case where there was such a bias, so I believe that it will be there. And so we're going to develop the techniques to make sure that these allocations of healthcare resources
are going to be fair. – And I think that relationship
between corporation and academia is very critical because before anybody was working for a company, nine times out of 10 you were at an institution, and so by bringing us that data into institutions, we were
able to understand it as well as prepare students to tackle
those biases that you see in the data and to better understand it. And I think by us understanding
that then you create better and more informed engineers as
they go into these companies and you still can maintain
that relationship so that even if the company releases a
product that hasn’t been thoroughly vetted, then the academic institution
has some way of interjecting and saying, “Hey, these are the aspects “of it that we need to be aware of.” – All right, so it’s really
important to start teaching the students early to think
about ethics and to think about the consequences of their
decisions that they make that may affect many people's lives when they build these systems. So we just recently had a
workshop in DC in Alexandria about how to teach ethics in AI and we really had some good
conversations about that and how one of my colleagues,
Dr. Vondernogenatia described this as data firefighters
that we send out into the world who are really out there, potentially fixing
problems as they see them. They have a lot of agency to really solve these kinds of issues as they come up, and this really flows on to the businesses and corporations that they end up working for. – [Attendee] Hi, considering the future of AI, where do you realistically see bias in our AI systems going as our
social landscape progresses, say, 10, 20, 30 years in the future? – Well– – Go ahead, go ahead. – Well, I mean, the obvious
is diversity and inclusion. As in the question before: the underrepresentation of data, and also, you may be training on an inaccurate or incomplete data set. So having representation is very important in terms of even the future of AI. That challenges us to make sure that those who aren't included in the creating process, those who are not creating, get into the business of creating. I was talking about this. So in architecture,
a long time ago we came up with this notion of plug and play, and understanding that our software architecture styles should allow for portability and adaptability, usability, and all these other “ilities,” right? And the same thing should
be happening when we think about creating AI, that we’re
thinking about the future in terms of adaptability, plug and play, and making sure that others
are there and represented. Representation is really huge, and again, unfortunately, those decisions aren't usually made with all included. So we have to challenge what we see when there isn't inclusion, and it's a risk, because the ones who do may be the token so that other people can come through, and that's okay if you know that. And so we want to challenge to make sure
to plug in and make sure that as we’re creating the next
augmentation, spatial computing, working aggressively into our future as we know AI, and all of the tracking, and the motion, the recognition, the extractions, whatever way we express an AI, that it really includes. And that really starts here. You're educating, you are the future. So your peers and your labs,
it’s you that will be called the future and so what you
model right now will probably be what is in the future. So making sure that you
lean across the table and when you have a team then you say, “Okay,
let’s make sure we have a female “in this group, let’s make sure we have “at least some representation “of other than the guys getting together.” Because I know you guys, you get very good at, you can chop away,
you can speed through, but make sure that you bring along those who may not be as fast as you are or have all of the skills that you have, and include other ways people learn. I'm an advocate for people with disabilities. And as such, it's not their inability, it's their ability, them learning in different ways; and so slow down a little bit so that other people can chime in and be more inclusive in this space. – And so we need to
think ahead to what's going to happen in 20 years, as was requested. I think the interventions that you're talking about are so important, because you can imagine two very different futures 10 or 20 years from now. One is a dystopia where AI makes all the decisions about us, and they're all biased and they're all really unfair, and our lives are in misery because we're just afflicted by endless decisions that are made by uninterrogable machines that we can't question. The other future is one where AI
has been really beneficial and we have gone and
made these interventions like ensuring that we have
diverse teams of engineers and all the other things,
bias mitigation algorithms that we’ve been talking about. If we do all those things we
can make sure that the impacts of AI are all positive ones and they end up making our
lives better and not worse. – Go ahead. – Thank you so much for
sharing all this knowledge. I am a masters student in my
first year, studying data science. AI is a big field, so students like me often really struggle with where to start, so can you share what's the
first step we should take in order to start learning about the AI? – The obvious is there are
free data sets out there. So with those data sets, like data.gov; I'm a NASA Datanaut, so go to nasa.gov, there is data there; NOAA has data. Find out what you're interested in, the human development index, right? And then everything starts with data, it literally does. So start just looking through data and use what you have in your hand. Start using Excel, start there, and then when you start using Excel, start thinking about, “Well, what did I learn in mathematics about correlation?” How do these things relate to each other? And ask questions: what does this have to do with the other thing? And just even starting there, you can then go into more intriguing questions that arise as you look at the data. But there are data sets out there, and the cool thing about it is, for instance, NASA not only gives you the data, you can also get your own data set, and then there are websites like Kaggle, and you can go in there and you can look at, for instance, if you go to Kaggle and you search HDI, the human development index, it'll bring up people who have been looking at HDI, for instance, Akiva, for how much investment in a country, right? Or other people, and then you can look at their notes. And then you go and try out those programs using Python so that you can get some contextualization. So bring what you know. I always say you start with what's in your hand; you have a lot in your hand. – [Attendee] That's probably the right path, what you have said. – Yes, and so I also advocate getting your hands dirty playing with data. Going into Kaggle competitions, I think that's a great idea. Learn Python, maybe take a Coursera course on deep learning, that would probably be a great one to shift your thinking forward.
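In that spirit, a first pass over an HDI-style table can be just a few lines (the file and column names here are hypothetical; substitute whatever data set you pull down):

```python
import pandas as pd

# Hypothetical CSV: one row per country, HDI-style indicators as columns
df = pd.read_csv("human_development_index.csv")

# The starting question suggested above: how do these
# indicators relate to each other?
cols = ["hdi", "mean_years_schooling", "life_expectancy", "gni_per_capita"]
print(df[cols].corr())

# Then chase whatever correlation looks intriguing
print(df.sort_values("hdi").head(10))
```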
– [Attendee] Looking at it from a free market perspective, most of the time bias is only going to be addressed in the black-box algorithms of these companies when it presents a problem that is very humiliating for them. And so my question to you all is, how do we address the need for bringing in more inclusion, bringing in more oversight, and being more involved in the development of these algorithms, when many companies focus on avoiding that by means of pursuing their profit motives and their bottom lines? So how do we make AI more,
I don’t know, beneficial? – Well, I would say, it sounds
like it’s a daunting task, but we have to learn from our history. It wasn’t that long ago
that software engineering, that whole discipline of
software engineering came into play and what did the government say? If you are going to
get government dollars, so it basically follows
the money, if you are going to get government dollars, you have to ensure that you’re
giving us a system that is compliant, that's safe, that considers the customer, that the
requirements are met. And so you have this line of processes, and really that's accountability, that whole idea of fairness, accountability, and transparency in AI. Engaging in policy is a big deal, and so it starts with your vote. – So what– – I think that's timely. (laughing) – So you said that companies may not actually fix their flaws, so one way is just to embarrass
them, and so there was the Angwin et al. magazine article about the bias in the COMPAS criminal justice system that predicts whether people will re-offend, which was found to be biased against African Americans, so then they are now forced to engage with that once it comes into the public sphere. So actually going out
there and embarrassing the companies is one way to do it. I think we can also start
from the pedagogy and try and make sure that our students
who become the engineers are going to have these values and maybe from the inside they can
try and change things. And we can always talk about regulation. I think it’s a tricky issue whether we can introduce new regulation to address the challenges of bias and AI. You’re right, the companies
may not be motivated to do it themselves
unless there’s some kind of stick that says if you
do this you’re gonna get in trouble, or maybe embarrassed, or maybe even have a
lawsuit, so if we need to do that, we may have to change things. Of course our existing laws about discrimination still apply to AI. So Title VII, all those
things, the Americans with Disabilities Act, they are all applicable, so we can always enforce against AI bias through those existing legal mechanisms, but there may come a point where we need to make further AI-specific interventions.
Human, I just read it. You read that book? – [Attendee] I was at the All-Star Students Conference in New York. – Yeah, I loved that book. At first I was kinda mad when I started it. I was like, "I don't like the way this is going." But then I loved the end, which is: engage. You have to be engaged. We learn as human beings either because our mothers or fathers teach us something and we heed their warning, or because we go and experience it ourselves and then we learn. And it's no different with anyone else. That's how people learn. Either there's advocacy or education, where we do the upfront work, or there are consequences if we don't, and then we have to learn. So it's really, how do we as a society want to learn? – There's another book that's relevant, by Safiya Noble, called
Algorithms of Oppression, and she was looking at bias in Google search. She found things like: if you typed in "black girls," instead of finding positive representations of black girls, you found negative things. Google was then embarrassed by that, and their response was that they can't fix every possible query to make it unbiased, but then they quietly did go and fix that issue. (laughing) So we can potentially embarrass companies into fixing these issues. – Right, and I think one of the things that we can do is take a system from a company, try it out, and expose what the bias is.
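A minimal sketch of what such an audit might look like: compare error rates across groups in the system's decisions, which is roughly how the ProPublica analysis of COMPAS surfaced the bias mentioned earlier (the decisions and outcomes below are invented):

```python
# Toy bias audit: false positive rate per group on a model's decisions.
import pandas as pd

audit = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "predicted":  [1,   0,   1,   1,   1,   0],  # model flagged as high risk
    "reoffended": [0,   0,   1,   0,   0,   0],  # what actually happened
})

# False positive rate: flagged as high risk among those who did not re-offend.
did_not_reoffend = audit[audit["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["predicted"].mean()
print(fpr_by_group)  # a large gap between groups is evidence worth publishing
```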
Now we have social media, we have various ways of sharing our views and our opinions, and also, if you're able to create a system that's better, do that and start a company. I think then you are able to create a more applicable system that challenges the norm, and I think that causes people to change their behavior, because oftentimes a company can say, "There's no way we can eliminate that." But now you're saying, "Well, no, here's a product that I just invented." You can even make it open-source so that people don't use these companies' products, or you can sell it and make money off a new and more equitable system. So I think that's one way to address that as individuals. – Thank you for that question. – [Attendee] Hi, thank
you, so I have a doubt. As we are talking about bias today: what is considered morally right today can, after 20 years, be considered morally wrong, so how do you think AI is going to be affected by that? – I'm sorry, could you explain just a little more? – So let's say there's an issue which is considered wrong now and we have developed an algorithm around that, but after 20 years the same issue is considered right. Don't you think the whole point of the AI, the knowledge or the effort that went into the algorithm, is wasted after a while? So isn't this a much
broader topic to talk about? – Well, I think about the way that we learn. Most algorithms have updates, and the idea is that the model is continually learning, and the meaning of things should be embedded as well. So you're not just training models that learn only once; you're training models that can backpropagate, that can have this feedback loop and update, right? And even in that space, you still contextualize it, you insert that meaning, the linguistic portion, into the embeddings of the model. You look at that as well, and I think that will help you to inform the morality in that day, in that context, because it will shift, it will change over time. We will change over time. There's nothing that's constant.
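A minimal sketch of that update loop, using scikit-learn's `partial_fit` interface (the data stream and the drift in it are simulated here; in practice new batches would come from freshly labeled, periodically reviewed data):

```python
# Continually updating a model as new batches arrive, instead of
# training once and freezing it.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

for step in range(10):
    # Simulate a new batch of labeled examples arriving over time;
    # the underlying relationship drifts a little at each step.
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] + 0.1 * step > 0).astype(int)

    # Update the existing model with the new batch.
    model.partial_fit(X, y, classes=classes)
```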
– And ethics and morality change by geographic region or culture. Even within our society, we have a lot of different ideas on what is ethically right, so part of that is thinking about the context. I totally agree about
updating the algorithms. There's a book, Weapons of Math Destruction by Cathy O'Neil, and this is something that she really emphasizes. In her definition, a weapon of math destruction, which is an algorithm or system that's gone wrong, is usually one that is not being kept up to date. It's one where people have just built the system, deployed it, and then haven't thought about it anymore. But if we keep critiquing our algorithms and our systems, keep improving them, and go back and change them when our values change, then we will make sure that we don't get these weapons of math destruction. – Thank you for that question. – Hi, I'm a senior CS major here and I'm interested in doing
research on AI and equity. I know you guys have talked about reducing bias in AI through things such as access, checking your assumptions, and how you make algorithms. In terms of data sets, though, I know there's gonna be a mad scramble to make data sets more inclusive, more diverse. How can companies do that without exploiting people of color and women? Is there a creative way to do that? – I guess they need to be compensated; if you're asking people to be in your data set, that's important. Another aspect really is privacy. So we have a notion of privacy called differential privacy that has come up in the computing community, which roughly says that if I add my data to this data set, then I won't be harmed by the algorithm, because it won't affect the algorithm's behavior too much. And so you hide within the crowd: if you have 100,000 people in the data, then one more person won't change the algorithm very much, so no one can reverse-engineer the algorithm to figure out your credit score or gender or whatever else. So that's one way we can make sure that when people contribute their data they are not harmed, but I think compensation is also very important.
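A minimal sketch of the idea behind it, using the Laplace mechanism, one standard way differential privacy is achieved for a simple counting query (the population below is made up):

```python
# Differentially private count via the Laplace mechanism.
import numpy as np

rng = np.random.default_rng()

def dp_count(data, epsilon=0.1):
    """Return a noisy count of 1s in data.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy.
    """
    return sum(data) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# 100,000 people; each entry is 1 if that person has some attribute.
population = [1] * 40_000 + [0] * 60_000
print(dp_count(population))  # close to 40,000, but hides any one individual
```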
– I will say, definitely including them is one way. If you just take people's data without including them in the decision making, that's bad, because you may not understand the context from which that data was taken. The more you have people from diverse backgrounds at the table as you're making decisions about these data sets, the more they can help give context to what's happening. So for example, this is a simple example, but I did my masters in England
and in the British system you rarely get a 90 or above; a 70 to an 80 is really good. So if you took the grades from British schools without somebody who understands that system, you might assume that British students just do worse in certain subjects than we do. So that's an example of how just having the data from a different group doesn't necessarily mean that you're going to get rid of those biases, but it helps to have somebody who understands those groups at the table as you're interpreting that data.
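A minimal sketch of one way to handle that when the data really must be combined: standardize scores within each grading system before comparing them (the numbers below are made up):

```python
# Compare grades across grading systems by standardizing within each cohort.
import pandas as pd

grades = pd.DataFrame({
    "system": ["UK", "UK", "UK", "US", "US", "US"],
    "score":  [68,   72,   75,   88,   92,   95],
})

# Raw scores make the UK students look weaker. A z-score within each
# system shows where each student sits relative to their own cohort.
grades["z_score"] = grades.groupby("system")["score"].transform(
    lambda s: (s - s.mean()) / s.std()
)
print(grades)
```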
Thomas Kuhn said, there’s a subjective nature to science. Just because we spit out
a number doesn’t mean that there isn’t a context to that value. And so we have to be cognizant
of that including people who are represented in the data at the table. – But only if they want to be included and they are compensated
for their time, right? – Absolutely.
– I think maybe that speaks to your question. Maybe there is a temptation to say, "Okay, let's force one of our employees from a minority or underrepresented group to be on this committee or do all the work." – [Emmanuel] Right. – So you also have to make sure, if we go to do that, that it's done in an equitable and fair way, and that they are compensated for their labor. – Right.
– All right, and again, we wanna say thank you very much for coming and after this we are going to have a reception right
out here and we can continue with this conversation,
so give them another hand. (crowd applauding) All right, and we’ll see you outside.
