Bias & Technology

“Our individual encounters with bias embedded into coded systems – a phenomenon I call the ‘coded gaze’ – are only shadows of persistent problems with inclusion in tech and in machine learning.”
– Joy Buolamwini

What?

The technologies we use in our personal and professional lives have a significant impact, and our dependence on them has grown exponentially in a post-pandemic world. The topic of bias & technology explores how some of these technologies may have bias embedded in their software or business processes, and how that bias may be undermining our inclusion efforts without our realizing it. Examples include facial recognition software, the data analytics and visualization models used to create narratives, and the bias that may be embedded in natural language processing tools and search algorithms.

Why?

There is a push toward digitization, in which manual, paper-based processes are automated by leveraging the processing power of technology. There has also been exponential growth in the need for quick access to data and for using that data to build models that predict future outcomes. Given these dependencies, it is important to ensure that the data, software, and business processes used to build these systems do not have bias embedded in them. In her book Algorithms of Oppression, Dr. Safiya Noble provides a number of examples in which search engines such as Google have returned racist and sexist results for certain search phrases. While we as end users may not have direct control over these algorithms, it will surely benefit us to examine and analyze the data, technology, and processes used at our institutions with the goal of minimizing any implicit bias inherent in them.

How?

Reducing or eliminating bias in technology within higher education requires a change in individual attitudes, coupled with organizational practices and policies. As individuals, we rely on technology more than we would like to admit. Dr. Jennifer Eberhardt and her team's effort to introduce friction into the Nextdoor app, interrupting and reducing racial profiling, illustrates how we need to be more responsible when using or building such apps. Data analytics is used in most aspects of an organization's business; data stewards need to educate themselves by asking where the data comes from, whom or what it is being used for, and how its misuse can be prevented. Finally, when considering vendor software for chatbots, campus security and surveillance systems, and facial recognition systems, we need to be mindful of how these systems may negatively affect marginalized communities. Beyond the software itself, we need to advocate for diversity and inclusion in our business and design processes and in our software lifecycle models, so that equity is a forethought rather than an afterthought and we establish a strong foundation for the future.

The Basics

8 types of bias in data analysis and how to avoid them

Bias in data analysis can come from human sources: unrepresentative data sets, leading survey questions, and biased reporting and measurements. Bias often goes unnoticed until you have made a decision based on your data, such as building a predictive model that turns out to be wrong. Read more…
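
As a concrete illustration of the first of those sources, the minimal sketch below (not from the linked article) uses pandas to compare how often each group appears in a data set against an expected baseline, so an unrepresentative sample can be flagged before a predictive model is built. The column name, groups, and baseline shares are hypothetical.

```python
# Minimal sketch: compare the makeup of a sample to a known baseline to
# flag under-represented groups before a model is trained.
# The column name, groups, and baseline shares below are illustrative only.
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, baseline: dict) -> pd.DataFrame:
    """Return each group's share in the data versus its expected baseline share."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in baseline.items():
        share = observed.get(group, 0.0)
        rows.append({
            "group": group,
            "share_in_data": round(share, 3),
            "expected_share": expected,
            "gap": round(share - expected, 3),
        })
    return pd.DataFrame(rows)

# Example with made-up survey responses and population shares.
survey = pd.DataFrame({"age_group": ["18-24"] * 70 + ["25-44"] * 20 + ["45+"] * 10})
baseline_shares = {"18-24": 0.30, "25-44": 0.35, "45+": 0.35}
print(representation_gap(survey, "age_group", baseline_shares))
```

A large negative gap for any group is a prompt to ask the questions raised above: where did this data come from, and whom does it leave out?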

