Artificial Defined

The word artificial, like many other words, is a mixed bag, best conceived of as a semantic continuum encompassing positive, neutral and negative connotations. An exploration of the etymology and historical development of the usage of the word can greatly increase our understanding of its more complex nuances, ambiguities and shifts in meaning and, therefore, prevent us from falling into prescriptive, dogmatic or monolithic conceptions which confine it within a single meaning. 

In discussing Artificial Intelligence (AI), the term coined in 1956, we may be tempted to paraphrase the ‘Four legs good, two legs bad’ refrain of George Orwell’s Animal Farm as ‘Natural good, artificial bad’, especially in the light of the fears of the late Stephen Hawking that AI ‘could spell the end of the human race.’ Speaking at MIT in 2014, Tesla and SpaceX leader and innovator Elon Musk, though hardly a technological pessimist, also called AI humanity’s ‘biggest existential threat’ and compared it to ‘summoning the demon.’ We will return to such fears and reservations about ‘transhumanism’ and ‘dehumanisation’ in due course, for they are shared to a varying degree by so many who are justifiably concerned about the future of humanity, but I first want to anchor the subject in the actual word artificial by excavating its roots.

Going back as far as possible to its earliest known (or hypothetical) roots, the word artificial comes from Indo-European ar-dhe-. The root ar- had the sense of ‘joining’ or ‘fitting together’, very much like the modern sense of an orderly, congruent and aesthetically pleasing arrangement of parts, as in its Greek derivative harmonia. The root dhe- had the sense of ‘to set down, put, make, shape’ (as in its derivative thesis and all its relatives, for example, hypothesis, prosthesis, synthesis, thesaurus). These senses come through in its Latin derivatives: artificialis ‘of or belonging to art’, from artificium ‘a work of art; skill; theory, system’, from artifex ‘craftsman, artist, master of an art’, from ars ‘skill, art’ + -fex ‘maker’, from facere ‘to do, make’. The original sense of the ‘skill’ required to ‘join things together’ is retained in the English word artisan.

Further etymological excavation reveals that the ar- root is not only the source of Greek harmonia but also produced the Greek word areté, which is usually translated as 'virtue' although it is not a specifically moral term. It was used to refer not only to human skills but also to inanimate objects, natural substances and domestic animals. A good knife had the virtue (areté) of being able to cut well 'by virtue of' its sharpness. The term denoted any sort of excellence, distinctive power, capacity, skill or merit, rather like Latin virtus, which, like the Greek, also had the sense of bravery and strength. The Italian word virtuoso preserves the sense of exceptional skill. The connotation of excellence in the word areté also comes through in the related word aristos, ‘fittest, best’.  

Homer often associates areté with courage, but more often with effectiveness. The person of areté uses all their faculties to achieve their objectives, often in the face of difficult circumstances, hardship or danger. One heroic model is Odysseus, not only brave and eloquent, but also wily, shrewd and resourceful, with the practical intelligence and wit (in the sense of quick thinking) of the astute tactician able to use a cunning ruse to win the day.

Although the Latin word virtus comes from vir, 'man' (source of virility or manliness), itself originally from the Indo-European base wi-ro, 'man', Homer uses the word areté to describe not only male Greek and Trojan heroes but also female figures, such as Penelope, the wife of Odysseus, who embodies areté by showing how misfortune and sorrow can be stoically endured to an excellent degree. Such is the virtue of sabr (patient endurance) in Islamic tradition, in the same way as the aesthetic sense of refinement the Greeks also associated with areté converges at one level with that of Arabic ihsan, 'doing what is good and beautiful', behaving in an excellent manner. In Islamic ethics and spirituality, ihsan embraces the aesthetic, moral and spiritual dimensions of a beautiful and virtuous character (akhlaq and adab).  In the same way, the concept of 'beauty' expressed by the word husn transcends what is merely decorative in appearance and encompasses not only the aesthetic sense of beauty in its homage to the 'due measure and proportion' with which all of creation is endowed, but also the intimate equation between what is beautiful and what is good. In this sense, great art always transcends artifice. 

In the original Greek of the New Testament, areté is included in the list of virtues for cultivation in Christian moral development, and is associated primarily with the moral excellence of Jesus. It figures in the celebrated 'Admonition of Paul' in Philippians 4:8: 'Finally, brethren, whatever is true, whatever is honourable, whatever is just, whatever is pure, whatever is lovely, whatever is gracious, if there is any excellence (areté), if there is anything worthy of praise, think on these things.' 

While in modern English the word artificial can still carry an essentially neutral sense of simply being caused or produced by human agency (e.g. social, political, medical, nutritional), it has progressively taken on some ambiguous and even negative connotations. The sense of ‘made by man, contrived by human skill and labour’ (from the early fifteenth century) could also carry the implication of ‘unnatural, lacking in spontaneity’. From the sixteenth century it could refer to anything made in imitation of, or as a substitute for, what is natural. The meaning ‘full of affectation, insincere’ is from the 1590s and ‘fictitious, not genuine, sham’ from the 1640s.

A striking embodiment of the connotation of insincerity is the Artificial Smile (also called Pretend Smile, Fake Smile, Phony Smile, or False Smile). The Body Language Project describes this as ‘a feigned smile where the orbicularis oculi muscles surrounding the eyes play no part and the lips are only stretched across the face with the help of the zygomatic muscles surrounding the mouth. The tell-tale cue of the fake smile is the lack of crow’s feet. The teeth are often bared, with a tense jaw and the lips show asymmetry.’ Such a fake smile, where the eyes play no part, shows others that one is not sincerely expressing approval or happiness. It is used to appear cooperative and to appease others in a polite way, whilst showing that one is not really on board with another person or their ideas.

The negative connotations of artificial come through unambiguously in the related word artifice. Although this originally meant something made with technical skill (an ‘artifact’) it has come to mean ‘artfulness’ in its negative sense of a clever trick, cunning ruse, tactical manoeuvre, ploy or stratagem. The same ambiguity can be found in the word fabrication, which can mean either something man-made or a lie. There is a revealing semantic correspondence here with the word craft in English. This originally meant ‘strength’ or ‘power’ in Germanic, but it also developed the additional sense of ‘skill’ in Old English, probably because of the way in which skill (as in the forging of weapons) was an obvious source of power. The pejorative sense of ‘crafty’ as deviously ‘artful’ or ‘cunning’ may have arisen from the influence of the church in rejecting any association of power with pre-Christian pagan culture. ‘The Craft’ can be used to refer to sorcery, as much as to the society of Freemasons. In the same way, the word cunning itself did not always mean ‘skillfully deceitful’ (as in the wiles of the devil), but simply meant ‘knowledge’ or ‘ability’, as preserved in the words can and canny, or Scots ken.

The varying connotations of artificial are easily illustrated in modern usage. At the positive end of the continuum, artificial replacements (prostheses) for limbs and joints developed in orthopaedic surgery have been a godsend to countless disabled people. They are among the world's truly great inventions. I speak from experience, having had a hemiarthroplasty to replace my upper left arm after a recent accident. I remember the joy my mother expressed at the age of 80 when knee replacements enabled her to return to active work in the garden she so loved, and the relief experienced by my daughter when she had hip replacements as a young woman as a result of congenital hip dislocation that caused her severe pain and progressive immobility. Prosthetic arms and hands have now advanced to the point where they give individual control of all five fingers. In the same way, the transformational benefits of artificial hearing aids, spectacles and dentures can hardly be disputed. Dentures used to be called false teeth, and it’s interesting to note that the word ‘false’ carries no negative connotation in this context. 

AI will also bring numerous other medical benefits. Major developments are being hailed in almost daily reports. Recently, news came through of how an AI algorithm was used to discern a powerful antibiotic (halicin) that kills some of the most dangerous antibiotic-resistant strains of bacteria in the world. These include Acinetobacter baumannii and Enterobacteriaceae, two of the three high-priority pathogens that the World Health Organization ranks as ‘critical’ for new antibiotics to target. Another example of the major impact of AI on healthcare is the better detection of cancer through improved radiology. A recent article in The Lancet reports that the performance level of an AI algorithm in detecting breast cancer on mammograms was significantly higher and much faster than that of radiologists without AI assistance. 

When it comes to complex human organs, artificial replacements are much more elusive. An artificial replacement for the heart remains a long-sought ‘holy grail’ of modern medicine. The demand for organs always greatly exceeds supply, so a functional synthetic heart would be a boon by reducing the need for heart transplants. The heart, however, is not merely a pump, and its subtleties defy straightforward emulation with synthetic materials and power supplies. Severe foreign-body rejection limited the lifespan of early human recipients to hours or days. A new concept of an artificial heart was presented in the Journal of Artificial Organs in 2017 by Nicholas Cohrs and colleagues. This ‘soft artificial heart’ (SAH) was created from silicone with the help of 3D printing technology. The goal was to develop an artificial heart that imitates the human heart as closely as possible in form and function. This sounds promising, but the stark reality is that this SAH prototype only managed to achieve 3,000 beats in a hybrid mock circulation machine. The working life of a more recent Cohrs prototype (replacing silicone with various polymers) was still limited, according to reports in early 2018, with that model providing a useful life of one million heartbeats, or about ten days in a human body. Since then, Cohrs and his team have been striving to develop a model that would last up to fifteen years, but Cohrs admits that it cannot be predicted when a working heart which fulfils all requirements and is ready for implantation would be available.

As for artificial brains, in an article in Futurism, Lou Del Bello proclaims that ‘Scientists Are Closer to Making Artificial Brains That Operate Like Ours Do.’ He claims that a new superconducting switch could soon empower computers to make decisions in much the same way we do, essentially turning them into artificial brains. The switch ‘learns’ by processing incoming electrical signals and generating appropriate output signals, and this is held to mirror the function of biological synapses in the brain which allow neurons to communicate with each other. What is more, the performance of this synthetic switch surpasses its biological counterpart, using much less energy than our brains and firing signals much faster at one billion times per second, in comparison to fifty times per second for natural synapses. Researchers are confident that the new artificial synapse may eventually power a new generation of artificial brains capable of improving on the current capabilities of AI systems. This enhanced capacity would include the ability to deal with ethical conundrums that impinge on decision-making. For example, the development of driverless cars needs to factor in the imperative for the AI driver to resolve the moral dilemma of having to decide whether to prioritise the safety of its own passengers or others who might be involved in a collision. Such challenges highlight only too clearly how far there is to go before an artificial brain is capable, if ever, of encompassing the full range of faculties residing in the human brain. Just as the heart is much more than a mechanical pump, so the brain should not be reduced to a mere calculating machine.

It would be useful at this point to consider other instances of the ‘artificial’ that carry ambivalent implications with a varying degree of balance between pros and cons. Take the practice of ‘artificial insemination’ (first recorded in the lexicon in 1894). This common practice in animal breeding and a fertility treatment for humans has brought many obvious benefits, although it has to be said that the rise of dependency on assisted reproductive technologies (ARTs) also has some potentially negative social implications, including the pressure placed on couples to conceive. Where parenthood is culturally mandatory, childlessness becomes socially unacceptable.

As in the case of ARTs, it is not easy to assess the extent to which the benefits of artificial aids are offset by negative consequences. Artificial lighting is a case in point. Despite its numerous benefits, it is exerting pervasive, long-term stress on ecosystems, from coasts to farmland to urban waterways, many of which are already suffering from other, better-known forms of pollution. As Solarspot warns, the constant glare of artificial lighting can have adverse effects on physical and mental health. Ninety percent of our light sources now use LED (light-emitting diode) lighting, and this includes a blue spectrum that is more intense than it is in natural sunlight. The blue light emitted at peak emission from smartphones, tablets and computers can disrupt natural sleeping and waking patterns (the circadian rhythm). Exposure to too much artificial light can also have a negative effect on memory and lead to a build-up of neurotoxins, with increased risk of breast and prostate cancer. It can also degrade eyesight through macular degeneration, not only in older people but also in children.

Varying degrees of pros and cons are also associated with such things as artificial sweeteners, flavourings and colourings. In the US, the Food and Drug Administration (FDA) has approved six artificial sweeteners – saccharin, acesulfame, aspartame, neotame, sucralose and stevia – all of which are used to help combat obesity, metabolic syndrome, and diabetes (all risk factors for heart disease). Nevertheless, overstimulation of sugar receptors from frequent use of these hyper-intense sweeteners may limit tolerance for more complex tastes and can make you shun healthy, filling, and highly nutritious foods while consuming more artificially flavoured foods with less nutritional value. Participants in one heart study who drank more than twenty-one diet drinks per week were twice as likely to become overweight or obese as those who didn’t drink diet soda.

As for artificial flavourings and colourings, defenders claim that they must pass strict safety testing, but there is growing concern about their potential health risks, and not only amongst ‘organic purists.’ In particular, the effect on children’s development and behaviour is a topic of ongoing discussion. The health risks related to the consumption of artificial food additives include allergic reactions such as anaphylaxis, food hypersensitivity, and the worsening of asthmatic symptoms. Azo-dyes, a group of food colourings commonly used to add bright colours to edible products, contain chemical substances that when metabolised by intestinal bacteria may become potentially carcinogenic. As is so often the case, the gravity of such risks is difficult to evaluate because any toxic effects depend on the amount of colouring ingested, which is typically negligible, and in any case the azo-dyes also tend to be poorly absorbed into the bloodstream. Concern has also been raised as to a possible link between food additives and neurological development, including attention deficit hyperactivity disorder (ADHD) in children, although no conclusive evidence has been found to date.

This brings us to Artificial Intelligence. As might be expected, there is a wide divergence in perceptions of its benefits and risks. On the plus side, as an article for the World Economic Forum by Julia Bossmann claims, ‘intelligent machine systems are transforming our lives for the better, optimizing logistics, detecting fraud, composing art, conducting research, providing translations’ and so on. She confidently predicts that ‘as these systems become more capable, our world becomes more efficient and consequently richer.’ On the minus side, we have doom-laden prophecies of a dystopian future, or even the extinction of humanity. It is likely that most people would occupy the middle ground in a broad continuum of views, neither being taken in by a utopian vision of AI as a panacea to increase efficiency and reduce the ever-increasing complexity of modern life, nor persuaded by the fearful predictions of Stephen Hawking and Elon Musk. After all, Hawking’s relationship with AI was far more complex than the oft-cited soundbite that it could ‘spell the end of the human race.’ As Ana Santos Rutschman, associate professor with the Center for Health Law Studies at Saint Louis University, explains, ‘the deep concerns he expressed were about superhuman AI, the point at which AI systems not only replicate human intelligence processes, but also keep expanding them – a stage that is at best decades away, if it ever happens at all.’ The fact that we need to avoid a one-sided evaluation of AI is pointedly highlighted by the irony that, as Rutschman notes, Hawking’s very ability to communicate his thoughts and feelings depended on basic AI technology.

The ‘One Hundred Year Study on Artificial Intelligence’, launched by Stanford University in 2014, highlighted various concerns, but so far it has identified no evidence that AI will pose any imminent threat to humankind. The ‘top nine ethical concerns’ that ‘keep AI experts up at night’ identified by the World Economic Forum include Unemployment (What happens after the end of jobs?), Humanity (How do machines affect our behaviour and interaction?), and Singularity (How do we stay in control of a complex intelligent system?). There is also increasing concern about the threat of ‘data colonialism’, by which major world powers might exert domination over the less powerful, and enact total surveillance arising from the monitoring of every aspect of a citizen’s life to a degree that previous totalitarian regimes could only dream of. In 1978, Zbigniew Brzezinski was already describing the ‘technetronic era’ as one involving the gradual appearance of a stringently controlled society, ‘dominated by an elite, unrestrained by traditional values’, with the technological means to impose almost continuous surveillance over every citizen. Advances in facial recognition technology make such mass surveillance even more attainable, although politicians and police forces are quick to justify this technology on the grounds that it improves public safety by reducing crime.

At the World Economic Forum (WEF) in Davos in January 2020, Sundar Pichai, CEO of Alphabet Inc. (one of the world’s biggest companies, valued at $1 trillion) and its subsidiary Google LLC, stated that AI ‘has tremendous, positive sides to it, but it has real negative consequences.’ His message, however, was mainly upbeat. Asked what kept him awake at night, Pichai replied, ‘I worry that we turn our backs on technology. I worry when people do that they get left behind. It is our duty to drive this growth in an inclusive way.’ History shows, however, that advances in technology do leave people behind. The Luddites, the nineteenth-century radical faction which destroyed textile machinery as a form of protest against the replacement of their skills by machines, did not manage to prevent the rapid expansion of the ‘dark satanic mills’ which so appalled William Blake as a sign of the social, moral and spiritual devastation wrought by the industrial revolution. Bringing the same narrative of the ‘left behind’ up to date, Jen Schradie’s recent study, The Revolution That Wasn’t: How Digital Activism Favors Conservatives, unearths the way in which digital technology, far from being the democratising and levelling force that it was expected to be, has actually become another weapon in the arsenal of the wealthy and powerful. This is because digital platforms like Google and Facebook work better for top-down, well-funded, well-organized movements, which favours conservatives rather than liberal, progressive or leftist groups.

So, should we be scared of artificial intelligence? The futurist Bernard Marr identifies what he considers the six greatest risks relating to AI that we ought to be thinking about. The first is the development of autonomous weapons with a mind of their own, which, once deployed, will likely be difficult to dismantle or combat. The second is AI's power for social manipulation, as was all too evident in the way Cambridge Analytica misused the data from fifty million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.'s Brexit referendum. ‘By spreading propaganda to individuals identified through algorithms and personal data, AI can target them and spread whatever information they like, fact or fiction.’ Thirdly, there is the risk of invasion of privacy and social oppression that I have already discussed in relation to the exponential growth in surveillance capability through ubiquitous cameras and facial recognition algorithms. Fourthly, humans and machines may find that they are not on the same page when it comes to following instructions. For example, an AI may not be concerned with roadway regulations or public safety in fulfilling a request to get a passenger from point A to point B in the shortest possible time. Accidents could easily arise from this simple misalignment in the hierarchy of interests. The fifth danger identified by Marr is ‘discrimination’ based on the vast amount of personal information that machines can collect, track and analyse about individuals and therefore use against them (as, for example, in denying them employment or other opportunities). The sixth is the misuse of AI for dangerous or malicious purposes.

The website CBInsights goes further, surveying the predictions of fifty-two experts on ‘How AI will go out of control.’ One of them, Dr. Oren Etzioni, Chief Executive of the Allen Institute for Artificial Intelligence, believes that it will be a challenge to build ‘common sense’ into AI systems, when it is not even a guarantee in human beings. Science and technology writer Clive Thompson agrees, pointing out that the years spent feeding neural nets vast amounts of data have produced ‘crazy-smart’ machines, but they have absolutely no ‘common sense’ and ‘just don’t appear to work the way human brains do.’ For a start, they’re insatiably data-hungry, requiring thousands or millions of examples to learn from. Worse, you have to start from scratch each time you want a neural net to recognize a new type of item. ‘A neural net trained to recognize only canaries isn’t of any use in recognizing, say, birdsong or human speech.’ Thompson cites Gary Marcus, a professor of psychology and neuroscience at New York University, who is convinced that AI ‘will never produce generalized intelligence, because truly humanlike intelligence isn’t just pattern recognition.’ There are intractable limitations to ‘deep learning’, such as ‘visual-recognition systems that can be easily fooled by changing a few inputs, making a deep-learning model think a turtle is a gun.’ Marcus uses the example of how children come to learn about the world. Their education does not require the review of massive quantities of data to recognise one object. Rather, they generalise, building the world out of commonalities. He explains how a tractor can be encountered as being somewhat like a car, and from that simple move a knowledge of the world can be constructed.

Another way of conceptualizing this issue is to recognize the importance of ‘top-down processing’ in human comprehension.  This is the kind of ‘fast thinking’ that has obvious survival benefits in enabling us to use our existing knowledge to generate likely expectations and inferences. Without the rapid automatic routines generated by top-down processing we would not be able to function in the world, for we would have to analyse everything laboriously from the bottom-up as if we were encountering it for the first time. 

Another of the fifty-two sceptics on CBInsights, Joanna Bryson, an AI researcher at the University of Bath, is concerned about the danger of AI being contaminated by unconscious bias, including stereotypes, in the underlying data. Rana el Kaliouby, the co-founder and CEO of Affectiva, which develops emotion recognition technology, believes that social and emotional intelligence have not been prioritized enough in the AI field, which has traditionally been focused on computational intelligence. Martyn Thomas, British consultant and software engineer, boils it all down to one pre-eminent risk, claiming that ‘human error, not artificial intelligence, poses the greatest threat’. The risk facing humanity, he says, ‘comes not from malevolent machines but from incompetent programmers.’

The inventory of possible risks can be continually expanded, but let us move on to what for many of us is the heart of the matter, eloquently expressed by Kabir Helminski in ‘The Spiritual Challenge of Artificial Intelligence, Trans-Humanism, and the Post-Human World.’  The word ‘Trans-Humanism’ (abbreviated as H+ or h+) refers to the movement that aims to transform the human condition and create ‘post-human’ beings through technologies that are designed to overcome what trans-humanists see as ingrained human limitations, especially by ‘improving’ human intellect and physiology. As Helminski comments, ‘Trans-humanism is not merely some geeky tech subculture, not a futuristic daydream, but a pervasive phenomenon that is already impacting our humanness itself. We're talking about the merging of human beings with technology, and not just at the physical level, but possibly a merging that encroaches upon the most intimate dimensions of the soul.’  This is the likely destination if the ‘qualitative dimension of human experience’ is overshadowed by ‘the ideology of Dataism, the belief that all entities and processes are fundamentally algorithms.’ 

Helminski is with those who are concerned about the escalating concentration of power and influence in media conglomerates and wary of the surveillance state with its history of indoctrination and mind control. He warns that humanity is in danger of being reduced to an impoverished level of existence where it may forfeit its awareness of the full range of reality and confine itself in a mental box. Algorithms, no matter how sophisticated, are still applicable only within the box, and tell us nothing about what lies beyond it. In the face of this blinkered reductionism, our task is to develop our humanness through the awakening of the full range of our innate human faculties. The sense of ‘development’ here reflects the original meaning of the Old French des-voloper, ‘to unwrap, unveil’. Our destiny as human beings is surely not to reach for a bogus level of transcendence ‘by downloading the data of memory into super-computing cyborg flesh, or merging our brains with the simulated reality of an oncoming singularity’ but ‘to align and harmonize ourselves with the cosmological order, and in the end to upload our souls into eternity.’ 

The emergence of AI should, above all, alert us to our primary duty to awaken and nurture the totality of our human faculties. The starting point for this needs to be an understanding of the multi-layered and multi-faceted semantic universe encompassed by the word intelligence. Just as the heart is much more than a mechanical pump, so the brain should not be reduced to a mere calculating machine.

This contention can be supported in various traditions of psychology, philosophy, ethics and spirituality. In Islamic tradition, the ‘intellect’ (‘aql) encompasses not only the language-based rational and deliberative faculty (Latin ratio, Greek dianoia) but also the higher organ of moral and spiritual intelligence and insight (intellectus, nous). One very appropriate translation of the term ‘aql in its higher sense is ‘Mind-Heart’. In a detailed study of the concept of ‘aql, the professor of Islamic studies Karim Douglas Crow has also noted the re-appearance of the term ‘wisdom’ in recent descriptions of human intelligence to connote ‘a combination of social and moral intelligence, that blend of knowledge and understanding within one’s being manifested in personal integrity, conscience, and effective behaviour’. He concludes that one of the key components of the concept of ‘intelligence’ expressed by the term ‘aql is ‘ethical-spiritual’.

The full scope of intelligence also goes far beyond what Guy Claxton, director of the Research Programme of Culture and Learning in Organisations (CLIO), has labelled as 'd-mode' (deliberation mode), that mode of thinking based on reason and logic. Seeking clarity and precision through literal and explicit language, it neither likes nor values confusion or ambiguity and works best when tackling problems which can be treated as an assemblage of nameable parts and are therefore accessible to the function of language in atomising, segmenting and analysing. Claxton himself points out that the growing dissatisfaction with the assumption that d-mode is the be-all and end-all of human cognition is reflected in various alternative approaches to the notion of intelligence. Modern advances in the field of cognitive psychology question the conventional reduction of human intelligence to a single unitary or g factor for 'general intelligence' as measured by IQ tests, and point instead to 'multiple intelligences'. The developmental psychologist Howard Gardner identifies seven of these: linguistic, visual-spatial, logico-mathematical, body-kinesthetic, musical-rhythmic, interpersonal and intrapersonal. Daniel Goleman, a science journalist, has also introduced the influential concept of 'emotional intelligence', and more recently that of 'ecological intelligence'. Of importance too is Cornell University professor of human development Robert Sternberg's triarchic theory of intelligence, which proposes three essential components: practical intelligence, creative intelligence, and analytical intelligence; and I have already drawn attention to the faculty of ‘common sense’ that Clive Thompson finds absent in AI. One might add to these alternative approaches the work of scientists such as F. David Peat, who has synthesised anthropology, history, linguistics, metaphysics, cosmology and even quantum theory to describe the way in which the worldviews and indigenous teachings of traditional peoples differ profoundly from the way of seeing the world embedded in us by linear Western science.

Rumi refers to the discursive intellect as the 'husk' and the higher intellect (or, in his terms, 'the Intellect of the intellect') as the 'kernel', the 'knowing heart', the organ of moral and spiritual intelligence. In the tradition of Orthodox Christianity (Hesychasm) this is the transcendent Intellect, the supreme human faculty, through which man is capable of the recognition of Reality or knowledge of God. Dwelling in the depth of the soul, it constitutes the innermost aspect of the Heart, the organ of contemplation, which alone can reach to the inner essence or principles (logoi) of created things by means of direct apprehension or spiritual perception. 

We also need to include ‘imagination’ in our enlarged inventory of human faculties. Imagination is the ability to form images or pictures in the mind, or to think of new ideas. It is the faculty that enables us to tell stories, write novels, visualise and envisage, and also to envision the future. The new buzzword is to ‘re-imagine’: to revise or reform an outdated view of the world, continually updating our guiding myths and stories about ourselves, our societies and the wider world. The higher octave of the imagination is the spiritual imagination, or the ‘creative imagination’ as described in such depth by Ibn ‘Arabi. This is the higher faculty of symbolic perception through which one glimpses the transcendent through the mediating forms of the imaginal world. It allows us to inhabit an interworld or isthmus (barzakh), an interface between the Unseen (al-ghayb) and the seen, the inwardly hidden (batin) and the outwardly manifest (zahir), and thus to perceive the hierarchical order of creation in which everything in existence is a sign (ayah), an analogy or similitude (mithal) pointing to its transcendent origin. We may well ask how such an imaginal world, known through direct perception or ‘tasting’ (dhawq), could ever be accessed by algorithms.

And this takes us to the magisterial work of the psychiatrist and neuropsychologist Iain McGilchrist, whose unprecedented mastery of a vast body of recent brain research is distilled in the striking title and subtitle of his tour de force, The Master and His Emissary: The Divided Brain and the Making of the Western World. I believe that if we apply McGilchrist’s essential conclusions to the phenomenon of AI, we find a clear diagnosis of the way in which AI is reinforcing and compounding the ‘divided brain’.

What does McGilchrist mean by the ‘divided brain’? He first distinguishes the essential differences between the two cerebral hemispheres. The right hemisphere, he explains, sees the whole, whereas the left hemisphere is adept at homing in on detail. New experience is better apprehended by the right hemisphere, which also sees things in context, as inseparably interconnected, so it recognizes the vast extent of what remains implicit. The left hemisphere, however, deals better with what is predictable, the narrow focus of its detailed, distinct mechanisms able to isolate what it sees, but relatively blind to things that can be conveyed only indirectly. The knowledge mediated by the left hemisphere therefore tends to be knowledge within a closed system – ‘perfect’ knowledge to be sure within its box, but bought ultimately at the price of emptiness. Where the left hemisphere is literalistic, the right, as ‘the ground of empathy’, recognizes all that is nonverbal, metaphorical, ironic or humorous. At ease with ambiguity, paradox and the co-existence of complementary opposites, it cherishes ‘the reciprocal relationship between ourselves and one another, ourselves and the world.’

McGilchrist emphasises that there is a good reason we have two hemispheres: ‘We need both versions of the world.’ He contends, however, that in the West there has been ‘a kind of battle going on in our brains’. Despite swings of the pendulum, the partnership between the two hemispheres has been lost, and the relatively rigid left hemisphere has gained the upper hand. ‘With Parmenides, and still more with Plato, philosophy shifted from a respect for the hidden and implicit to an emphasis on what can be made explicit alone.’ The previously acknowledged insight that opposites can be reconciled became anathema. With the ‘Enlightenment’, the world was further atomised, and the mechanical model became the dominant framework for understanding ourselves and the world. Our world has become increasingly rule-bound and ‘loss of the implicit damages our ability to convey, or even to see at all, aspects of ourselves and our world that transcend the mechanistic.’ There is an oppressive rise in bureaucracy, with paper replacing people, ‘factual’ information replacing meaning, and experience increasingly virtualized. ‘This is the world of the left hemisphere, ever keen on control.’ McGilchrist concludes that the increasing precedence of the left hemisphere, with its inferior grasp of reality, is likely to have potentially disastrous consequences. While it should be the ‘emissary’ of the right hemisphere, it is instead becoming the ‘master’. 

The depth of McGilchrist’s analysis and the breadth of his perspective add great weight and urgency to the warnings of those who believe that the ‘artificial’ and the ‘mechanistic’ must be the servant of humanity, not its master. AI must not be given the ultimate power to control and mould us, for, as the Qur’an tells us, we have been created fi ahsani taqwim, ‘in the best of moulds’. The balance of the hemispheres can only be restored by a renewed awareness of the totality of all the faculties that make up our essential humanness, and this has huge implications not only for the future direction of AI but also for the realisation of the full magnitude of human potential in every sphere of human endeavour.

Citations

In explaining the origin of English words, including their Indo-European roots, I have consulted various sources, including John Ayto, Dictionary of Word Origins (Bloomsbury Publishing, London, 1990); Chambers Dictionary of Etymology, ed. Robert K. Barnhart (Chambers, Edinburgh, 1988); Joseph T. Shipley, The Origins of English Words: A Discursive Dictionary of Indo-European Roots (Johns Hopkins University Press, 1984); The American Heritage Dictionary of Indo-European Roots, ed. Calvert Watkins (Houghton Mifflin Company, Boston, 2000); The Online Etymology Dictionary, Merriam-Webster Dictionary and Wikipedia; Andrew Lawless, Plato's Sun: An Introduction to Philosophy (University of Toronto Press, Toronto, 2005); Michael Pakaluk, Aristotle's Nicomachean Ethics: An Introduction (Cambridge University Press, 2005), 5; Jeffrey Barnouw, Odysseus, Hero of Practical Intelligence: Deliberation and Signs in Homer's Odyssey (University Press of America Inc., Lanham, Maryland, 2004), 250; Allen Verhey, The Great Reversal: Ethics and the New Testament (William B. Eerdmans Publishing Co., Grand Rapids, Mich., 1984), 141; and Jeremy Henzell-Thomas, ‘Armonia: Fitting Together in a Plural World’, inaugural issue of Armonia Journal (March 2017) at https://armoniajournal.com/2017/03/10/armonia-fitting-together-in-a-plural-world/

On Artificial Intelligence and related questions concerning human faculties, I have referred to the following sources: Zbigniew Brzezinski, Between Two Ages: America's Role in the Technetronic Era (Penguin, 1978); Julia Bossmann, ‘Top 9 ethical issues in artificial intelligence’, World Economic Forum, 21/10/2016 at https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/; Ana Santos Rutschman, ‘Stephen Hawking warned about the perils of artificial intelligence – yet AI gave him a voice’, The Conversation, 15 March 2018 at http://theconversation.com/stephen-hawking-warned-about-the-perils-of-artificial-intelligence-yet-ai-gave-him-a-voice-93416; the Stanford University ‘100 Year Study of Artificial Intelligence’ at https://ai100.stanford.edu/history-1; Kelsey Piper, ‘Why Elon Musk fears artificial intelligence’, Vox, 2/11/2018 at https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai; Jean Schradie, The Revolution That Wasn’t: How Digital Activism Favors Conservatives (Harvard University Press, Cambridge, Mass., 2019); Bernard Marr, ‘Is Artificial Intelligence Dangerous? 6 AI Risks Everyone Should Know About’, 19/11/2018 at https://www.forbes.com/sites/bernardmarr/2018/11/19/is-artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/#752e5bac2404; Kabir Helminski, ‘The Spiritual Challenge of Artificial Intelligence, Trans-Humanism, and the Post-Human World’ at www.sufism.org; Clive Thompson, ‘How to Teach Artificial Intelligence Some Common Sense’, Wired, 13/11/2018 at https://www.wired.com/story/how-to-teach-artificial-intelligence-common-sense/; Richard Fletcher, ‘Google Boss: Artificial Intelligence will do more for humanity than fire’, The Times, 23/1/20; Ian Sample, ‘Powerful antibiotic discovered using machine learning for first time’, The Guardian, 20/2/2020 at https://www.theguardian.com/society/2020/feb/20/antibiotic-that-kills-drug-resistant-bacteria-discovered-through-ai; Lou Del Bello, ‘Scientists Are Closer to Making Artificial Brains That Operate Like Ours Do’, Futurism, 28/1/2018 at https://futurism.com/artificial-brains-operate-like-humans-close; Martyn Thomas, ‘Human error, not artificial intelligence, poses the greatest threat’, The Guardian, 3/4/2019 at https://www.theguardian.com/technology/2019/apr/03/human-error-not-artificial-intelligence-poses-the-greatest-threat; ‘How AI will go out of control according to 52 experts’, CBInsights, 19/2/2019 at https://www.cbinsights.com/research/ai-threatens-humanity-expert-quotes/; Laurie Nadel, ‘The Artifice of Social Media and the Dehumanizing Factor’, Higher Journeys, 24/8/2016 at http://www.higherjourneys.com/artifice-social-media-dehumanizing-factor/; Jeremy Henzell-Thomas, ‘The Power of Education’, Critical Muslim 14, Power, April 2015 (Hurst, London), 65-86, and Introduction to CM 15, Educational Reform, July 2015; Iain McGilchrist, The Master and His Emissary: The Divided Brain and the Making of the Western World (Yale University Press, New Haven and London, 2009) and ‘The Battle of the Brain’, Wall Street Journal, 2/1/2010 at https://www.wsj.com/articles/SB10001424052748704304504574609992107994238; Karim Douglas Crow, 'Between wisdom and reason: Aspects of ‘aql (Mind-Cognition) in Early Islam', Islamica 3:1 (Summer 1999), 49-64; Guy Claxton, Hare Brain, Tortoise Mind: Why Intelligence Increases When You Think Less (Fourth Estate, London, 1997); F. David Peat, Blackfoot Physics: A Journey into the Native American Worldview (Fourth Estate, London, 1994).

On other manifestations and applications of the ‘artificial’, I have referred to Jeremy Henzell-Thomas, 'Out in the Open', Introduction to Critical Muslim 19, Nature, July-September 2016 (Hurst, London), 3-24; Elie Dolgin, 'The Myopia Boom', Nature, 18/3/2015 at http://www.nature.com/news/the-myopia-boom-1.17120; Aisling Irwin, ‘The dark side of light: how artificial lighting is harming the natural world’, Nature, 16/1/2018 at https://www.nature.com/articles/d41586-018-00665-7; ‘Why Artificial Light is Bad for You’, Solarspot Blog at https://solarspot.co.uk/general/why-artificial-light-is-bad-for-you; Holly Strawbridge, ‘Artificial sweeteners: sugar-free, but at what cost?’, Harvard Health, 8/1/2018 at https://www.health.harvard.edu/blog/artificial-sweeteners-sugar-free-but-at-what-cost-201207165030; ‘4 Things to Know About Artificial and Natural Flavors’, Ameritas, 25/5/2017 at https://www.ameritasinsight.com/wellness/artificial-natural-flavors; Liji Thomas, ‘Are Artificial Food Flavors and Colorings Harmful?’, News Medical, 14/11/2018 at https://www.news-medical.net/health/Are-Artificial-Food-Flavors-and-Colorings-Harmful.aspx; on the artificial heart, https://en.wikipedia.org/wiki/Artificial_heart; and on the ‘artificial smile’, http://bodylanguageproject.com/nonverbal-dictionary/body-language-of-the-artificial-smile-or-fake-smile/