Jasmin A. answered 08/02/22
Three dissimilar methodologies in the field of artificial intelligence (AI) appear to be following a common path toward biological authenticity. This trend could be expedited by using a common tool, artificial nervous systems (ANS), for recreating the biology underpinning all three. ANS would then represent a new paradigm for AI with application to many related fields.
Main text
In 1955, when John McCarthy organized the historic Dartmouth Summer Research Project, he coined the term “artificial intelligence” (AI) as a methodology-neutral phrase because the hoped-for attendees supported diverse methodologies, and each had ardent adherents.1 Three of today’s AI methods still stand out for their diversity and adherents, but despite this, all three increasingly incorporate biological inspiration for performance improvement. The improvements drive further inclusion of biology in a positive reinforcement loop that is gradually bringing the diversity into a common biological framework. To be clear, the terms “biology” and “biological” refer to the animal kingdom’s nervous systems as studied through neuroanatomy and neurophysiology; here, the terms do not refer to the study of plants, fungi, or sea sponges.
One of the three methodologies, machine learning (ML), is supported by varieties of networks, from neural networks (NN) to artificial neural networks (ANN) to recurrent neural networks (RNN) to convolutional neural networks (CNN) to generative adversarial networks (GAN) to deep neural networks (DNN), and more. All of these have added to the success of ML and its progeny, deep learning (DL). These networks, especially CNN,2 have also incorporated biological features, beginning with the concept of neurons (the nodes of the networks) and their synaptic plasticity (the nodes’ weights), and extending to the connections between these “neurons” (both forward and backward) and the layers into which they are organized. According to IEEE Access, “However, despite the recent progress in DL methodologies and their success in various fields, such as computer vision, speech technologies, natural language processing, medicine, and the like, it is obvious that current models are still unable to compete with biological intelligence. It is, therefore, natural to believe that the state of the art in this area can be further improved if bio-inspired concepts are integrated into deep learning models3.”
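The biological analogy can be made concrete in a few lines. The sketch below (plain Python, with made-up weights and a made-up learning rate) shows a single artificial “neuron” whose weights play the role of synaptic strengths, plus a crude update rule standing in for synaptic plasticity; it is an illustration of the concept, not any particular library’s implementation:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs (the 'synapses')
    passed through a nonlinear activation, here the logistic sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def learn_step(inputs, weights, bias, target, lr=0.1):
    """Crude 'synaptic plasticity': nudge each weight to reduce the
    error between the neuron's output and a target (the gradient of
    squared error for a sigmoid unit)."""
    out = neuron(inputs, weights, bias)
    grad = (out - target) * out * (1.0 - out)   # d(error)/dz for a sigmoid
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * grad
    return new_weights, new_bias

# Repeated updates strengthen the 'synapses' until the neuron's
# output approaches the target -- learning as weight change.
w, b = [0.2, -0.4], 0.0
for _ in range(100):
    w, b = learn_step([1.0, 0.5], w, b, target=1.0)
```

Stacking many such units into layers, and propagating these weight updates backward through the layers, is all that separates this toy from the networks listed above.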
A second methodology comes from Jeff Hawkins and his company, Numenta, who have been researching neuroscience and building computer models and algorithms to represent brain functions since 2004. Hawkins says, “The key to AI has always been the representation,” and over the last seventeen years he has continually expanded his representation, beginning with modeling individual neurons and progressing to modeling collections of cortical columns. His approach is different and considerably more biological than any NN, which places his technology in a unique AI category. Additionally, Numenta’s Hierarchical Temporal Memory (HTM) technology is one of the few methodologies to represent nervous system temporal connectivity, a key neurophysiological feature often overlooked in other AI technologies. His trajectory is one of steadily increasing biological realism.
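HTM itself is far richer than can be shown here, but the core temporal idea, learning which patterns tend to follow which and predicting the next one, can be caricatured in a short sketch. This is an illustrative toy only, not Numenta’s actual algorithm (real HTM uses sparse distributed representations and high-order sequence memory):

```python
from collections import defaultdict

class TinySequenceMemory:
    """Toy first-order temporal memory: counts observed transitions
    between patterns and predicts the most frequent successor of the
    most recent input."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, pattern):
        # Strengthen the 'connection' from the previous pattern to this one.
        if self.previous is not None:
            self.transitions[self.previous][pattern] += 1
        self.previous = pattern

    def predict(self):
        # Predict the successor seen most often after the current pattern.
        followers = self.transitions.get(self.previous)
        if not followers:
            return None
        return max(followers, key=followers.get)

# After a few passes over a repeating sequence, the memory
# anticipates what comes next.
mem = TinySequenceMemory()
for _ in range(3):
    for token in ["A", "B", "C"]:
        mem.observe(token)
mem.observe("A")
```

The point of the toy is the shape of the problem: intelligence here is modeled as continuous prediction over time, which is exactly the temporal dimension most feedforward networks leave out.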
The third methodology, neuromorphic computing, is again a significantly different AI approach with promises of dramatically reducing the cost of intelligence processing; this is important considering the cost of training OpenAI’s GPT-3 deep learning network was over US$12 million. Despite the use of parallel graphics processing units (GPU) for NNs, NN “neurons” run on traditional computing systems which process in a serial fashion, one line of code at a time; neuromorphic chips provide a hardware substrate that supports massive computing parallelism of artificial neurons, where every neuron operates under its own code independently, all the time. The savings and efficiency of parallelism are significant, and with thousands of companies investing in AI for big data analytics, customer service bots (natural language processing), and a myriad of other applications, the competition to provide the best service for the least cost pushes major companies in the hardware research direction. This includes companies like Microsoft and IBM, as well as specialized AI accelerators such as Google’s Tensor Processing Unit (TPU) and true neuromorphic chips such as Intel’s Loihi. Though neuromorphic computing hasn’t gained the celebrity status of state-of-the-art ML, the allure of power savings through neuromorphic chips and their inclusion as a component in server farms keeps research in neuromorphic processing moving forward.
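Neuromorphic hardware is programmed in terms of spiking neurons rather than serial lines of code. The sketch below simulates one leaky integrate-and-fire neuron, the basic unit such chips implement in parallel, one circuit per neuron; the parameter values are illustrative and not tied to any particular chip:

```python
def simulate_lif(input_current, steps, dt=1.0,
                 leak=0.1, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage integrates
    its input, leaks toward rest, and emits a spike (then resets) when
    it crosses threshold. Returns the time steps at which spikes occur."""
    v = v_reset
    spikes = []
    for t in range(steps):
        v += dt * (input_current - leak * v)   # integrate input, with leak
        if v >= threshold:
            spikes.append(t)                   # event: a spike is emitted
            v = v_reset
    return spikes

# A constant drive produces a regular spike train; stronger input
# yields a higher firing rate.
weak = simulate_lif(0.15, steps=100)
strong = simulate_lif(0.30, steps=100)
```

Because information travels as sparse spike events rather than dense synchronous computation, a chip full of such neurons can sit mostly idle and draw power only when spikes occur, which is the source of the power savings described above.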