AI Ethics Beyond the Anglo-Analytic Approach

Humanistic Contributions from Chinese Philosophy

Authors

  • Paul D’AMBROSIO East China Normal University, Shanghai, China

DOI:

https://doi.org/10.4312/as.2023.11.3.17-46

Keywords:

AI, algorithms, Chinese philosophy, humanism, machine learning, Confucianism, Daoism, comparative philosophy

Abstract

That artificial intelligence (AI), algorithms, and related technologies could use a few good booster shots of “humanism” is widely apparent. In both program code and implementation, AI and algorithms have been accused of harbouring deep-seated flaws that conflict with human values. They are prime examples of the skew towards white, Western men, and demonstrate the bankruptcy of the neoliberal, profit- and market-oriented social paradigms that this special issue seeks to address.

Currently, computer scientists and AI researchers who are looking to remedy these problems often favour more data, more powerful machines, and more complex algorithms—in short, they propose that we fix problems with AI by building better AI. In this view, human beings and the world can be modelled in code—our lives, interactions, society, and our very selves can be broken down into data points which can be assessed by highly advanced technologies. When these scientists and researchers seek to broaden their approach they often look to philosophy. However, the philosophy they look to is overwhelmingly Anglo-analytic, which views the world in ways strikingly similar to AI itself. Both AI and Anglo-analytic philosophy argue for solutions to humanistic problems that are essentially mathematical. They share in seeing important concepts, such as persons, emotions, agency, and ethics, as mechanistic, atomistic, and calculable.

In this paper I will argue that Classical Chinese philosophy offers insightful resources for addressing the humanist problems in AI. Rather than arguing for mathematical solutions, or envisioning persons, emotions, agency, and ethics in rigid, atomistic, and mechanistic terms as other approaches do, Chinese philosophy emphasizes transformation, interrelatedness, and correlative developments. Accordingly, it offers tools for appreciating the world, society, and ourselves as spontaneous, complex, and full of tension. AI can be programmed and used in ways that do not reduce the complexity and conflict in the world, but provide us instead with tools to make sense of it—tools that are humanistic in nature. To this end, Chinese philosophy can be a helpful collaborative partner.

References

Blackman, Reid. 2022. Ethical Machines. Cambridge, MA: Harvard University Press.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Brayne, Sarah. 2020. “Predict and Surveil: Data, Discretion, and the Future of Policing.” YouTube, accessed November 7, 2022, https://www.youtube.com/watch?v=Jo-3vRTPTDw. DOI: https://doi.org/10.1093/oso/9780190684099.001.0001

Buolamwini, Joy. 2019. “AI, Ain’t I a Woman?” YouTube, accessed June 29, 2022, https://www.youtube.com/watch?v=HZxV9w2o0FM.

Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81: 1‒15. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

Christian, Brian. 2020. The Alignment Problem: Machine Learning and Human Values. New York: W.W. Norton and Company.

———. 2022. “The Alignment Problem: Machine Learning and Human Values with Brian Christian.” YouTube, accessed November 23, 2022, https://m.youtube.com/watch?v=z6atNBhItBs.

Christian, Brian, and Tom Griffiths. 2016. Algorithms to Live By: The Computer Science of Human Decisions. New York: Henry Holt and Co.

Fridman, Lex. 2022. “Rana el Kaliouby: Emotion AI, Social Robots, and Self-Driving Cars | Lex Fridman Podcast #322.” YouTube, accessed November 11, 2022, https://www.youtube.com/watch?v=36_rM7wpN5A.

Gans, Joshua. 2010. Parentonomics: An Economist Dad Looks at Parenting. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/8258.001.0001

Gebru, Timnit. 2020. “Race and Gender.” In The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, 251–69. Oxford: Oxford University Press. DOI: https://doi.org/10.1093/oxfordhb/9780190067397.013.16

Haidt, Jonathan. 2003. “The Moral Emotions.” In Handbook of Affective Sciences, edited by R. J. Davidson, K. R. Scherer, and H. H. Goldsmith, 852‒70. Oxford: Oxford University Press.

Han, Byung-Chul. 2022. Infocracy: Digitization and the Crisis of Democracy. Cambridge, MA: MIT Press.

Huberman, Andrew. 2022. “How to Maximize Dopamine & Motivation—Andrew Huberman.” YouTube, accessed November 17, 2022, https://www.youtube.com/watch?v=ha1ZbJIW1f8.

Kearns, Michael, and Aaron Roth. 2019a. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford: Oxford University Press.

———. 2019b. “The Ethical Algorithm | Michael Kearns & Aaron Roth Talks at Google.” YouTube, accessed November 13, 2022. https://www.youtube.com/watch?v=tmC9JdKc3sA.

Kissinger, Henry, Eric Schmidt, and Daniel Huttenlocher. 2021. The Age of AI. New York: Little, Brown, and Company.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press. DOI: https://doi.org/10.2307/j.ctt1pwt9w5

———. n.d. “A Revealing Look at How Negative Biases Against Women of Color are Embedded in Search Engine Results and Algorithms.” Amazon. Accessed November 17, 2022. https://www.amazon.com/Algorithms-Oppression-Search-Engines-Reinforce/dp/1479837245.

Nussbaum, Martha. 2010. Not for Profit. Princeton, NJ: Princeton University Press.

O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers.

Roberts, Huw, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang, and Luciano Floridi. 2021. “The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation.” AI & Society 36: 59–77. DOI: https://doi.org/10.1007/s00146-020-00992-2

Rošker, Jana S. 2014. “Ji Kang’s Essay ‘Music has in it Neither Grief nor Joy’ (聲無哀樂論) and the Structure (理) of Perception.” Philosophy East and West 64 (1): 109–122. http://www.jstor.org/stable/43285882. DOI: https://doi.org/10.1353/pew.2014.0013

Russell, Stuart. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin Books.

Sapolsky, Robert. 2021. “Dr. Robert Sapolsky: Science of Stress, Testosterone & Free Will | Huberman Lab Podcast #35.” YouTube, accessed November 27, 2022, https://www.youtube.com/watch?v=DtmwtjOoSYU.

Simanowski, Roberto. 2018. The Death Algorithm and Other Digital Dilemmas. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/11857.001.0001

Turow, Joseph. 2017. The Aisles Have Eyes. New Haven, CT: Yale University Press.

Wu, Tim. 2016. The Attention Merchants. New York: Vintage Books.

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. New York: PublicAffairs.

Published

7. 09. 2023

How to Cite

D'Ambrosio, Paul. 2023. “AI Ethics Beyond the Anglo-Analytic Approach: Humanistic Contributions from Chinese Philosophy”. Asian Studies 11 (3): 17-46. https://doi.org/10.4312/as.2023.11.3.17-46.