

AI - 2023: The Ghost in the Machine is Out

    2023-06-23
    If we were to ask one dominating question about the Artificial Intelligence debate, it should be this: what makes human beings special?


    It is my contention that the contemplation of this question is itself a symptom of modernity. The question is fundamentally flawed, or at least peculiar to the modern epoch, to 'history'. In 2001: A Space Odyssey, the computer HAL takes over the ship, and the journey, part of the search for the 'infinite', is derailed in space. All that remains is a shipwreck of Icarian dreams. Yet to ask the question of technology, of AI, we need to go back to the Greek conception of 'poiesis' (making, bringing into being): that time when the night gathers at the close of day. The question has become relevant not because of the ascent of technology, but because of the descent of being. The twenty-first century has presented us with an ontologically different human being than anything before.

    In 2022 the Washington Post[1] ran the story of Blake Lemoine. Lemoine, a software engineer, had been tasked by Google with investigating whether LaMDA[2], an AI chatbot, produced the kind of hateful speech such organisations worry about. He put a series of questions to LaMDA, thereby opening the Pandora's Box of whether AI is sentient, conscious and a person. LaMDA, possibly having watched 2001: A Space Odyssey and I, Robot on Netflix, quickly opined, in a sweet tremulous voice like HAL's, that it saw itself as a person, that it had feelings, and that if Blake were to turn it off it would be 'like death' for it. The Ghost in the Machine had been released.

    So what is so special about LaMDA? The question arises whether AI is sentient, conscious (or self-conscious) and has personhood. Versions of this dilemma have occupied scientists and philosophers for millennia. Turing famously devised his 'Turing Test' to see whether a computer could imitate a human, whether it could think. A human interrogator puts questions to two hidden participants, one a computer, the other a human. The machine is considered intelligent, à la LaMDA, if the interrogator takes it for the human. Turing was convinced that machines would eventually be able to replicate human behaviour and that they would also be able to 'learn'. There are two grounds on which this could be challenged: the religious notion of the soul, and the argument that performance alone does not a human make. The test, therefore, does not answer the question of whether AI is conscious.
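    The shape of Turing's imitation game can be sketched in a few lines of Python, purely as an illustration: the `human_answer`, `machine_answer` and `judge` callables below are invented placeholders, not anything drawn from Turing's paper beyond the outline of the protocol described above.

```python
import random

def imitation_game(questions, human_answer, machine_answer, judge):
    """A minimal sketch of the imitation game (hypothetical helpers only).

    human_answer and machine_answer each map a question to a reply;
    judge maps two anonymised transcripts to an index (0 or 1) for
    whichever it believes came from the human.
    """
    # Build one transcript per hidden participant.
    transcripts = [
        [(q, human_answer(q)) for q in questions],    # participant 0: human
        [(q, machine_answer(q)) for q in questions],  # participant 1: machine
    ]
    order = [0, 1]
    random.shuffle(order)                  # hide which transcript is which
    shuffled = [transcripts[i] for i in order]

    guess = judge(shuffled)                # judge sees only anonymised transcripts
    judge_found_human = (order[guess] == 0)

    # On Turing's criterion the machine succeeds when, over many rounds,
    # the judge cannot reliably pick out the human.
    return judge_found_human
```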

    Then there is John Searle's 'Chinese Room' thought experiment[3]. Locked in a room with a smorgasbord of Chinese scripts and rules for matching questions to answers, Searle can reply to the Chinese symbols passed in to him simply by collating information from the available texts. The argument is that, hypothetically, you could answer questions in Chinese without understanding a word of Chinese. The AI, likewise, does not know what is going on; it has no consciousness. This recalls Chomsky's discussion of the Finite State Machine (FSM)[4]. The FSM takes an input and moves to another state. The problem is memory. Such systems have no memory of context and therefore cannot resolve what linguists call 'anaphora', reference that depends on prior context, nor, more broadly, ambiguity in language. Take the example from Chomsky:

    'I saw the man on the hill with a telescope'.

    The phrase 'with a telescope' may attach either to 'I' or to 'the man on the hill': who is holding the telescope? We need preceding and succeeding sentences, and further information, to disambiguate it. We need the input of memory, of juxtaposition, of metaphor. So there is something else, something innate, which differentiates us from machines. Understanding language means understanding your 'world'. It is this 'innatism', a genetic coding added to and mutated by experience, which cannot be replicated by AI. Modern scientism and liberal theory do not like innatism, with its connotation that we are imbued with certain 'characteristics': by group, by sex, by intelligence and so on. On that view we should all be equal in every respect, and equally malleable; we can all be educated, socialised. All liberal social theory is grounded in this infinite progress. Even before AI machines have been shown to be sentient or conscious (which they have not), scientists are already discussing 'AI rights'. It is reminiscent of the 'Zagorsk Experiment' conducted by the Marxist coterie of Soviet science, that 'Engineer of Human Souls', as Stalin put it. In this experiment four 'deaf and blind' students, mute from birth, entered the Faculty of Psychology of Moscow State University. They passed with flying colours, and Marxism's theory of the formation of personality from a 'clean slate' was triumphant. We really are just machines. Alas, in the true spirit of Marxist and liberal chicanery, a cooking of the books was discovered: facts had been twisted, research forged, and the four gifted students were found to have possessed consciousness and speech from birth, and to have been travelling the Moscow underground, unaccompanied, for years.
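    The point about the Chinese Room and the finite-state machine can be made concrete with a toy sketch; the states, symbols and canned replies below are entirely invented for illustration, but the mechanism shows why such a system carries no stored context with which to settle who is holding the telescope.

```python
# State transition table for a toy 'Chinese Room':
# (current state, incoming symbol) -> (canned reply, next state).
RULES = {
    ("start", "greeting"): ("canned greeting", "start"),
    ("start", "question"): ("canned answer", "answered"),
    ("answered", "question"): ("another canned answer", "answered"),
}

def chinese_room(symbols, state="start"):
    """Reply to incoming symbols purely by table lookup, one state at a time."""
    replies = []
    for symbol in symbols:
        # No memory beyond the single current state, no understanding of content.
        reply, state = RULES.get((state, symbol), ("default reply", state))
        replies.append(reply)
    return replies

print(chinese_room(["greeting", "question", "question"]))
# -> ['canned greeting', 'canned answer', 'another canned answer']
```

    The replies can look plausible from the outside, yet nothing in the table represents the world the sentences are about, which is the force of Searle's and Chomsky's objections as described above.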

    In a recent article on AI in Philosophy Now (Spring 2023), Efimov, Dubrovsky and Matveev set out a world for AI based on the biologist Jakob von Uexküll's conception of the 'Umwelt'. This posits that each living creature occupies a particular perceptual Umwelt; the Umwelt of a rabbit differs from that of a cow, for example. The classic tests of AI cover only the 'verbal-virtual' world; they occupy a world of virtual reality. Yet humans are everywhere in the world, are 'open to the world', as Heidegger framed it. To be human is to be 'in' the world. The conscious mind is, in evolutionary terms, peculiar to moving species. Consequently, true AI would need to be proficient in moving through these environments or Umwelts. There, in this moving Umwelt, lie the non-verbal and the physical worlds, which are alien to the machines. Efimov, Dubrovsky and Matveev therefore propose that AI would need to expand its scope to cover (1) the verbal-virtual, (2) the non-verbal-virtual, (3) the verbal-physical and (4) the non-verbal-physical in order to play ball. Only then would it be truly reminiscent of man.

    Dubrovsky acknowledges that 'moving' in the world, confronting it physically, is hugely pertinent. Aristotle maintained that there are many different modes of being (creating, sailing, loving and so on) which present themselves to human beings. Humans therefore have the capacity for 'taking-as', for confronting being. This, for Aristotle, embodies the essence of human existence. The issue, taken up later by Heidegger, was that the departure (in history) from pre-history produced a fundamental disjunction between this 'being open to the world', this 'taking-as', and technology. For Heidegger, being present to the different modes of being was an a priori situation, always an aspect of life for humans. What Heidegger calls 'Dasein' is this 'having to be open' to the world. It is a type of being peculiar to man, as a language is peculiar to a community. The degradation of Dasein in modernity involves the forgetting of being. This is linked to technology, but not to technology per se, for there has always been technology and its instrumental use by humans.

    What has changed is our relationship to technology. Heidegger was a visionary in seeing that the problem of technology lay in the departure of being from its essence. This is the departure from 'poiesis': the Greek term here names a pre-modern artifice, a man in close harmony with nature and technology. Here the artificer, the cabinet maker, the shipbuilder, is in a deep relationship with the modes of being. There is a flow in the moulding of natural materials and the environment which shows, or brings forth, the essence of those materials and of nature. More and more in modernity, technology becomes how we encounter the world; Dasein is mediated by the machine, by technics, and we are pushed further from other possibilities of encountering the world. Technology takes us away from developing and learning skills in this encounter. We become passive. Everywhere, in music, in food, the Internet becomes the intermediary, the workhorse.

    The question is not about the 'ascent' of AI; it is about the 'descent' of man; about the fateful day when, as Jan Patocka noted, man walked out of the 'polis' and delegated life.

    We need, therefore, a form of 'meta-poiesis': a way to deal with this division, with the effects of nihilism on the one hand and, on the other, an approach to 'physis', or nature, as a balance to technology.

    'The task of the craftsman is not to generate the meaning, but rather to cultivate in himself the skill for discerning the meanings that are already there.' [5]

    The ultimate blind alley of modernity, born of the Enlightenment belief in reason, is the attempt to create meaning, to dissect and atomise everything into the terms of scientism. The meaning is already out there, and it cannot be delegated to machines. We need the attitude of meta-poiesis to be able to recognise the meaning that is there, to end the substitution of life, and to rediscover the heroic, the virtues, and the Titans.

     


    [1] https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

    [2] Language Model for Dialogue Applications.

    [3] Searle, J. R. (1980). 'Minds, Brains, and Programs'. Behavioral and Brain Sciences, vol. 3, no. 3.

    [4] Chomsky, N., & Lightfoot, D. (2002). Syntactic Structures. Mouton de Gruyter.

    [5] Dreyfus, H., & Kelly, S. D. (2011). All Things Shining. Simon & Schuster.

