I believe I may have coined a new term, the AI Divide, in my paper, "The Potential Societal Impact of the AI Divide" (at least, the term turned up nothing when I googled it before submitting the article). About two weeks ago, I shared a paper and presentation on the AI Divide at the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium on AI and Society: Ethics, Safety, and Trustworthiness in Intelligent Agents, held at Stanford University. My paper and presentation discussed what the AI Divide is and asked the following questions:
- Does an AI Divide exist?
- Are there populations that are negatively impacted by the AI Divide?
- Should there be public policy that will protect “AI-marginalized” populations?
- Should we provide AI Literacy for all citizens?
- Will the AI Divide continue to grow, or will it shrink?
These are questions that will need continued discussion, exploration, and answers. The Digital Divide began in the 1980s with the advent of the personal computer and, later, the Internet, as disparities grew in access to computing devices, fast Internet connections, and online knowledge sources. Those disparities have contributed to socio-economic gaps, including in education quality, college readiness, career outlook, and income.
The AI Divide is developing because AI is becoming ubiquitous in our daily lives: in e-commerce (e.g., Amazon), natural language recognition (e.g., Siri), social media (e.g., Facebook), information technology, and even wearable tech (e.g., the Apple Watch). Several startups and automakers want to make AI ubiquitous in driverless cars as well, although Uber and Tesla have recently been involved in fatal crashes tied to the AI in their automated driving systems.
Before this week, I saw the AI Product Cycle as involving people as the consumers of AI products and the companies that develop and control the hardware, data, and algorithms as the producers of AI products. People interact with that hardware and generate data closely tied to their emotions and behavior, which is in turn used by the companies' algorithms to produce AI-enabled products (e.g., the Facebook app).
But as we are seeing recently, companies such as Facebook are using people's personal data to fuel the algorithms behind their social networks and to influence people's behavior: in this case, how people vote as well as what they buy. Facebook is not unique. Other big companies, including the usual suspects (e.g., Google), do the same, monetizing these algorithms through ads that influence behavior.
One user put it best in an online interview about Facebook's tactics: he realized that he is the product. His data was being sold so that others could feed their psychographic machine learning algorithms and learn how best to exploit his personal information to make him vote or buy the way the company wanted him to. His remark raises the question: are YOU the product of the companies that use your data and AI algorithms to influence YOUR behavior?
The AI Divide is the split between the companies that own the hardware, data, algorithms, and applications that you and I use, and that can exploit our emotions and behavior, and those of us who don't own them. Most people are AI illiterate: they don't understand the basics of how their data is used or how these machine learning algorithms work. This disparity between those who create, own, use, and understand these algorithms and those who don't is the AI Divide, and it has the potential to create disparities in quality of health, safety and security, and prosperity.
Unlike the Digital Divide, the AI Divide won't necessarily fall along racial, socio-economic, or even political and educational lines. It can cut across all of these lines, separating the producers and owners of AI's hardware, data, algorithms, and applications from those who are only its consumers and, in some cases, the living and breathing "products" sold and influenced by AI.
What can be done to address the AI Divide? That is the question we need to answer before it's too late.
Andrew B. Williams, Ph.D., is Associate Dean for Diversity, Equity, and Inclusion for the School of Engineering and the Charles E. and Mary Jane Spahr Professor in Electrical Engineering and Computer Science at the University of Kansas (KU). Dr. Williams is also Director of the Humanoid Engineering & Intelligent Robotics (HEIR) Lab at KU.
© 2018 Andrew B. Williams
This article was written on April 9, 2018.