According to the Online Etymology Dictionary (n.d.a), the adjective “artificial” entered English in the late 14th century from Old French, meaning “not natural or spontaneous”; from the 16th century it came to describe “anything made in imitation of, or as a substitute for, what is natural”. The noun “mind” is rooted in the late 12th century, when the word “mynd” described “that which feels, wills, and thinks; the intellect” (Online Etymology Dictionary, n.d.b); it derives from the Old English word “gemynd”, which encompassed memory, conscience, intention, and purpose, among other things. This essay explores the concept of “artificial minds”, some psychological perspectives on it, and what all this reveals about human minds.
What is an artificial mind? Given that the word “artificial” has for centuries carried the meaning of imitation of, and substitution for, what is natural (in this case, human nature), it is not surprising that some people have reported feeling afraid about the possibility of robots taking over the world (McDonald, 2019). Assuming that machines think in the same way as humans is akin to anthropomorphising (ascribing human qualities to nonhuman entities; Hewson, Ramsden and Turner, 2015). Yasemin J. Erden discusses this difficulty as the “problem of other minds” in her chapter of Living Psychology: From the Everyday to the Extraordinary (2015, p. 109), where she poses the question “how do you know that the author of this chapter is a person?”. Her name appears as the author of the chapter, the paragraphs are written in a meticulously eloquent manner, and the content is highly specialised. Yet the reader is invited to question all this, and to consider the possibility that the author is a machine. A search of her name shows that she is human, and a philosopher at St Mary’s University in London (Google, n.d.). Nevertheless, her question should not be underestimated in any way, as there already exist bots that can write essays for humans (EssayBot, n.d.).
If the mind is software and the body is hardware (Computational modeling of the brain – Sylvain Baillet, 2016), does that mean the two work independently? Descartes initially doubted whether matter (the body) was anything more than a product of mind (imagination). Declaring “I think, therefore I am”, he entertained the possibility that the body was a manifestation, or even a hallucination, of thought (Erden, 2015, pp. 111-112); he eventually refined his position to argue that mind and body are connected, specifically through the pineal gland of the brain. By the same token, dualist theorists believe that the strongest evidence for the existence of mind as an entity separate from the brain is the concept of qualia (discussed by Chalmers, 1996, as cited in Erden, 2015), which encompasses the subjective, first-person experience of the individual. Erden (2015) illustrates this concept with an analogy of eating and enjoying chocolate: it is one thing to understand how the body absorbs and digests chocolate, and quite another to enjoy its taste. Could a bot understand the experience of enjoyment? After all, not even some legislators seem to agree on what the “enjoyment” of, for example, human rights amounts to in law (United Nations, n.d.; ECHR, 1950).
In contrast, materialist theorists claim that specific parts of the human brain are responsible for intelligent functions such as the processing of sensory inputs (stimuli) and the generation of responses (outputs) (Erden, 2015, pp. 115-117). But what is meant by intelligence? The answer to Alan Turing’s (1950) question “can machines think?”, cited in Erden (2015, pp. 120-121), depends on how the words “mind” and “thinking” are used (Erden, 2015, p. 122). For instance, the intelligent nature of human memory is highly complex (Prosecution Witness Janeen DeMarte Explains Why She Does Not Believe Jodi Arias’ Memory Fog Story, 2013). Could a machine learn to absorb, encode, store, and retrieve information the way a person does? To investigate this, Naoyuki Sato and Yoko Yamaguchi (2010) reviewed computational models of the hippocampus, the paired brain structure mainly responsible for the formation of episodic memory (remembering what, where, and when). Their evidence suggests that when the hippocampal system is damaged, the ability to process objects in space relative to the self is lost. Nevertheless, they note that further brain regions are involved in the process, and that models able to account for more than one brain region simultaneously still need to be developed.
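The what/where/when structure of episodic memory described above can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the Sato and Yamaguchi model; the names `Episode` and `EpisodicStore` are invented for this example. It shows only the surface pattern of encoding episodes and retrieving them from a partial cue, which is trivially easy for a program yet says nothing about the subjective recollection a person experiences.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Episode:
    what: str   # the event
    where: str  # the place
    when: int   # a simple timestamp

class EpisodicStore:
    """Toy episodic memory: encode episodes, retrieve by partial cue."""
    def __init__(self):
        self.episodes = []

    def encode(self, what, where, when):
        self.episodes.append(Episode(what, where, when))

    def retrieve(self, **cue):
        # Return every episode matching all fields given in the cue,
        # e.g. retrieve(where="kitchen") recalls all kitchen events.
        return [e for e in self.episodes
                if all(getattr(e, k) == v for k, v in cue.items())]

memory = EpisodicStore()
memory.encode("ate chocolate", "kitchen", 1)
memory.encode("read essay", "library", 2)
print([e.what for e in memory.retrieve(where="kitchen")])  # ['ate chocolate']
```

The ease of this exact-match lookup is precisely what makes the contrast with human memory, with its reconstructive errors and “memory fog”, so striking.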
This is why one of the biggest challenges in computational modelling is to equip artificial minds and robotic bodies with proprioception (Erden, 2015), the sense of one’s own body’s position and movement in space, time, and context. Understanding such computational complications elucidates the everyday complexity of human nature, including our perceptual and sensorimotor abilities (Erden, 2015). For humans, working from point A to point B can be relatively straightforward, and if uncertainties or anomalies arise, these can usually be dealt with successfully (e.g. avoiding an obstacle). However, with no hippocampus and no cognitive map on which to rely, robots find it overwhelming to work out the where, when, and what of situations, especially when unexpected contingencies or events arise. John McCarthy and Patrick Hayes (1969), cited in Erden (2015), called this phenomenon the frame problem. As a consequence, researchers such as Aaron Sloman (The Open University, 2019b) have placed their emphasis on the computational modelling of the human information-processing system. Erden (2015, p. 124) describes this framework as the computational theory of mind (CTM), and the most advanced artificially intelligent robots are equipped with proprioceptive sensors that allow them to compute and interact with the world around them more competently (Erden, 2015). Nevertheless, Margaret Boden of the University of Sussex in England states that some mysterious processes, such as creativity, are difficult to model, because humans do not always understand how they do what they do (The Open University, 2019a).
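The idea of a cognitive map can be sketched as a toy path-planner. The example below is a minimal illustrative sketch, not a model from the robotics literature: the `plan` function and the grid layout are invented here. A robot whose internal map already marks an obstacle can route around it mechanically; the frame problem arises precisely when the world changes in ways the map does not represent.

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search over a toy 'cognitive map': a grid in
    which True cells are free and False cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # remembers how each cell was reached
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk back through prev to rebuild the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # the map offers no way around

grid = [[True, True,  True],
        [True, False, True],   # a known obstacle in the centre
        [True, True,  True]]
route = plan(grid, (0, 0), (2, 2))
```

As long as the obstacle is on the map, a detour is found; an obstacle the map does not represent leaves the planner blind, which is one everyday face of the frame problem.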
To summarise, the concept of artificial minds has helped cognitive scientists understand the complex functions of everyday human living. Machines can, in a sense, think; they just do not think in the same way as humans. Human intelligence and its neuroscientific structure are not easy to model in full, and not all of its functions are understood clearly enough to warrant replication. The human mind remains somewhat mysterious, and subjective experience remains an area for further research. Could this be what is captured by the Latin phrase deus ex machina (“god from the machine”)? The future is uncertain.
References
Computational modeling of the brain – Sylvain Baillet (2016) Youtube video, added by Serious Science [Online]. Available at https://www.youtube.com/watch?v=2oW6DN08wwE (Accessed 29 October 2019).
Council of Europe, European Convention on Human Rights, as amended by Protocols Nos. 11 and 14, ECHR (4 November 1950) [Online]. Available at https://www.echr.coe.int/Documents/Convention_ENG.pdf (Accessed 28 October 2019).
Erden, Y. J. (2015) ‘Artificial minds’, in Turner, J., Hewson, C., Mahendran, K. and Stevens, P. (eds), Living Psychology: From the Everyday to the Extraordinary, Milton Keynes, The Open University, pp. 109-146.
EssayBot (n.d.) How It Works [Online]. Available at https://www.essaybot.com/ (Accessed 28 October 2019).
Google (n.d.) “Yasemin J. Erden” Search Results [Online]. Available at https://www.google.com/search?q=%22Yasemin+J.+Erden%22&oq=%22Yasemin+J.+Erden%22&aqs=chrome..69i57j0l2.7175j1j4&sourceid=chrome&ie=UTF-8 (Accessed 28 October 2019).
Hewson, C., Ramsden P., and Turner, J. (2015) ‘Animal minds’, in Turner, J., Hewson, C., Mahendran, K. and Stevens, P. (eds), Living Psychology: From the Everyday to the Extraordinary, Milton Keynes, The Open University, pp. 63-99.
McDonald, H. (2019) ‘Ex-Google worker fears ‘killer robots’ could cause mass atrocities’, The Guardian, 15 September [Online]. Available at https://www.theguardian.com/technology/2019/sep/15/ex-google-worker-fears-killer-robots-cause-mass-atrocities (Accessed 28 October 2019).
Online Etymology Dictionary (n.d.a) Artificial (adj) [Online]. Available at https://www.etymonline.com/word/artificial (Accessed 28 October 2019).
Online Etymology Dictionary (n.d.b) Mind (n) [Online]. Available at https://www.etymonline.com/word/mind (Accessed 28 October 2019).
Prosecution Witness Janeen DeMarte Explains Why She Does Not Believe Jodi Arias’ Memory Fog Story (2013) Youtube video, added by PK Report [Online]. Available at https://www.youtube.com/watch?v=NlnoRHufmok (Accessed 29 October 2019).
Sato, N. and Yamaguchi, Y. (2010) ‘Simulation of Human Episodic Memory by Using a Computational Model of the Hippocampus’, Advances in Artificial Intelligence, Japan, Future University/Brain Science Institute, pp. 1-11 [Online]. Available at http://downloads.hindawi.com/archive/2010/392868.pdf (Accessed 29 October 2019).
The Open University (2019a) ‘5.6 Margaret Boden: artificial intelligence’, DD210-19J Week 5: artificial minds [Online]. Available at https://learn2.open.ac.uk/mod/oucontent/view.php?id=1467711&section=5.6 (Accessed 28 October 2019).
The Open University (2019b) ‘5.3 Aaron Sloman: AI and cognitive modelling’, DD210-19J Week 5: artificial minds [Online]. Available at https://learn2.open.ac.uk/mod/oucontent/view.php?id=1467711&section=5.3 (Accessed 29 October 2019).
United Nations (n.d.) Human Rights Law [Online]. Available at https://www.un.org/en/sections/universal-declaration/human-rights-law/ (Accessed 28 October 2019).