Robots Don’t Solve Bias, People Do 15:15, August 28, 2016

Large businesses lean heavily on technology. Data analytics, whether through big data or robots (artificial intelligence, or "AI"), is a popular way to ground decisions in facts and evidence. It is not, however, a panacea for biased or unethical decisions; that responsibility lies squarely in human hands.

The Technology Out There

At its core, data analytics is the science of analyzing raw data and drawing conclusions from it, usually to assist decisions. Big data is a form of data analytics in which computers quickly draw patterns from huge sets of different types of data. AI, like Google's AlphaGo computer, can not only draw patterns from big data sets but also make decisions based on the patterns it discovers (called "intuition" in the case of AlphaGo). Data analytics has existed for decades, big data has gained recent popularity, and AI is no longer science fiction.
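To make the "raw data in, pattern out" idea concrete, here is a minimal, hypothetical sketch of the simplest end of that spectrum. The records and field names are invented for illustration; real analytics pipelines operate on far larger and messier data.

```python
from collections import defaultdict

# Hypothetical raw data: one record per closed customer support ticket.
tickets = [
    {"channel": "email", "hours_to_close": 26},
    {"channel": "email", "hours_to_close": 30},
    {"channel": "phone", "hours_to_close": 4},
    {"channel": "phone", "hours_to_close": 6},
    {"channel": "chat",  "hours_to_close": 2},
]

# Basic analytics: aggregate the raw records into a pattern
# (average resolution time per channel) that can inform a decision.
totals = defaultdict(lambda: [0, 0])  # channel -> [sum_hours, count]
for t in tickets:
    totals[t["channel"]][0] += t["hours_to_close"]
    totals[t["channel"]][1] += 1

for channel, (hours, count) in totals.items():
    print(f"{channel}: {hours / count:.1f} hours on average")
```

Big data and AI scale this same loop up by orders of magnitude and, in the case of AI, act on the patterns rather than just reporting them.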

Robots Making Business Decisions

In his post Will Artificial Intelligence Make All Business Decisions?, Christopher Koch of SAS argues that AI can help solve "unconscious behavioral biases that can lead to deeply flawed decisions" in business by using "massive quantities of analysis" that don't contain the biases that humans have (and we all have them). Koch isn't alone in this belief. Fast Company highlights many companies throughout the US that are using AI to "spot nuanced biases in workplace language and behavior" that pervade critical areas like performance reviews and interviews, where they can favor certain groups of people over others. Bias can lead to sexual harassment, discrimination based on military status, and employees feeling excluded and undervalued. Tools like big data and AI that reduce bias can be good things that make the workplace better; a rough sketch of the idea appears below.
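The language-screening tools described above are far more sophisticated than this, but the core idea can be sketched in a few lines: scan review text for terms that are often flagged as gendered or loaded, and surface them for a human to reconsider. The watch list and review text here are invented for illustration.

```python
import re

# Hypothetical watch list of terms often flagged as gendered or loaded in
# performance reviews; real tools use richer models and consider context.
WATCH_TERMS = {"abrasive", "bossy", "emotional", "aggressive", "pushy"}

def flag_terms(review_text: str) -> list[str]:
    """Return any watch-list terms that appear in the review text."""
    words = set(re.findall(r"[a-z']+", review_text.lower()))
    return sorted(words & WATCH_TERMS)

review = "Strong results this quarter, but she can be abrasive in meetings."
print(flag_terms(review))  # ['abrasive'] -- surfaced for human review, not auto-judged
```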

It is tempting to imagine data analysis as incorruptible and objective. It isn't. Big data and AI have weaknesses that we, as humans, are responsible for creating and addressing.

The Data Weak Spot

Big data is not always "free of bias." In January 2016 the Federal Trade Commission (FTC) released a report on the dangers of big data, emphasizing that the data sets and algorithms behind an identified pattern may "reproduce existing patterns of discrimination, inherit the prejudice of prior decision-makers, or simply reflect the widespread biases that persist in society." If the input (data) reflects bias, so can the output (decisions). For example, if an employer collected data on successful existing employees to define the "best employee" to hire, previous discrimination and bias could be embedded in that data, producing false positives in future employment decisions. This raises compliance issues because an employer may be relying on years of bias that excludes people based on age, race, or gender.
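A toy example, with entirely invented records, shows how this happens: if historical hires skew toward one group for reasons unrelated to qualifications, any model that learns from those records inherits the skew and reproduces it in its recommendations.

```python
# Hypothetical historical hiring records: past bias means candidates from
# group "A" were hired far more often than equally qualified "B" candidates.
history = [
    {"group": "A", "score": 7, "hired": True},
    {"group": "A", "score": 5, "hired": True},
    {"group": "A", "score": 6, "hired": True},
    {"group": "B", "score": 7, "hired": False},
    {"group": "B", "score": 8, "hired": False},
    {"group": "B", "score": 6, "hired": True},
]

def hire_rate(records, group):
    """Share of candidates in the given group who were hired historically."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

# A naive model that "learns" the historical hire rate per group will
# simply reproduce the bias embedded in the data it was trained on.
for group in ("A", "B"):
    print(group, f"historical hire rate: {hire_rate(history, group):.0%}")
```

Note that the group "B" candidates in this invented data set have equal or better scores; the model's output looks data-driven while merely echoing past decisions.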

Like big data, AI also has critics. The once-radical idea that technology and AI will overtake humans is now a mainstream concern. It isn't just conspiracy theorists, but tech and science leaders like Stephen Hawking, Elon Musk, and Bill Gates who warn of AI threatening our existence if we're "very foolish" with its development. Asking AI to do human things produces an "ethical dilemma" to address and "calls for rigorous safety and preventative measures that are fail-safe." AI can perpetuate biases under the same paradigm that big data can. After all, "[h]umans are a model from which the AI learns," explains startup lawyer Olga Mack in The San Francisco Daily Journal. "All members of the community have the responsibility to contribute, or at least stay aware of, AI development," she explains further.

Ethical Decision Making

Just like scientists and engineers need to be careful when developing AI, we all need to be careful and deliberate about addressing bias, discrimination, and harassment in the workplace. A company that has effective ethics training and robust policies promotes ethical behaviors and better decision making. While robots can help us to make better decisions, they still need to be scrutinized and managed by humans. This is an ethical decision that both organizational leaders and employees should make. To learn more, check out our white paper on ethical decision making and how it can benefit the workplace.

Douglas Kelly
Douglas Kelly is EverFi's lead legal editor. He writes on corporate compliance and culture, analyzing new case law, legislation and regulations affecting US companies. Before joining EverFi, he litigated federal and state employment cases and wrote about legal trends. He earned his JD from Berkeley Law and BBA from Emory University.
