zerohash

Posted on Apr 05, 2022

Historical Journey Through Artificial Intelligence

The origins of modern research on artificial intelligence (AI) stem from Alan M. Turing and his seminal 1950 paper, “Computing Machinery and Intelligence”. At the time, the idea of AI was still far removed from what we understand today in terms of its advancement and sophistication. It wasn’t until 1955 that McCarthy, Minsky, Rochester and Shannon (1956) coined the term “artificial intelligence” in the Dartmouth Summer Research Project proposal and officially established AI as a standalone discipline. This paper explores the historical evolution of AI research since the 1950s. By analyzing key research documents and their philosophical approaches, the author wishes to compare and contrast AI development and identify its areas for improvement.

The analysis begins with Turing’s paper, where he proposed the question: “Can machines think?” (Turing, 1950, p. 433). Turing applies an interpretivist paradigm to this brand-new understanding of what AI is, which he called machine intelligence, conditional on the complexities (Saunders, Lewis and Thornhill, 2016, p. 718) he defines of what makes a machine a machine. In the Turing test, a party game of sorts, a human evaluator judges the responses of a human participant and a machine participant to a series of questions. Working only from the typewritten responses, the evaluator must determine which participant is human and which is machine. The evaluator’s decision thereby indicates the machine’s ability to exhibit human-like intelligence and behavior. Thus, Turing adopts a relativist ontological approach to identifying machine intelligence. Turing emphasized the new idea via interpretivism by complicating the definition of machine intelligence. Conventional machines, called discrete state machines, offer a finite set of outputs when used; a simple example would be a switch that turns lights on and off. His idea was a digital computer with a universality property, where the machine acts as a continuous system, varying across time and adapting to changing conditions to offer different outputs (Turing, 1950).
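To make the contrast concrete, here is a minimal sketch of the discrete state machine described above, the light switch: a finite set of states and fixed transition rules, with no capacity to adapt. The class and names are purely illustrative and are not drawn from Turing’s paper.

```python
# Minimal illustrative sketch of a discrete state machine (the light switch
# example above): finite states, fixed rules, and no ability to adapt.
class LightSwitch:
    STATES = ("off", "on")

    def __init__(self):
        self.state = "off"  # start in a known, fixed state

    def toggle(self):
        # The output depends only on the current state; the rule never changes.
        self.state = "on" if self.state == "off" else "off"
        return self.state


switch = LightSwitch()
print(switch.toggle())  # "on"
print(switch.toggle())  # "off"
```

A learning machine in Turing’s sense would instead revise its own behavior as new inputs arrive, which is the thread picked up by the machine learning discussion later in this piece.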

Constraints have also been identified on what limits the possibility of such a continuous-system machine intelligence. Simply put, the Turing test compares the mimicking behavior of machines to that of humans, and does not directly measure their intelligence. The missing element lies in the ‘soul’ and ‘emotions’ of a machine relative to a human (Jefferson, 1949), and hence the machine lacks the originality of intelligent behavior that a human is capable of (Abramson, 2008). Turing’s use of interpretivism displayed his personal values and his opinion of what he assumed the subjective reality of AI research should be. Hence, the qualitative approach was useful in arguing his stance on what he believed to be the correct representation of AI.

Moving on, the author analyzes the critique Nilsson (1983) provides on the progress of AI and its potential for the 21st century. Using critical theory philosophy, Nilsson highlights his personal opinion on the niche of AI as a discipline and the challenges AI has to overcome to propagate. He categorized the niche into seven distinctive components that define AI: (1) a propositional doctrine, where the foundation of an AI program has to be logic-based and express facts in a formal notation (a minimal sketch of this idea follows below); (2) the importance of incorporating partial models via performative knowledge to understand the greater system in question; (3) incorporating expert knowledge across different disciplines to identify effective ideas via the propositional doctrine (e.g., the use of mathematics in scientific or physics research); (4) producing efficient results by controlling the scope of how data is incorporated; (5) controlling the learning and searching procedures of the program to produce controlled results; (6) the ability to understand and process natural language (e.g., English); and (7) incorporating cognitive ability into programs for flexible reasoning.
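As a loose, hypothetical illustration of component (1), the snippet below encodes a handful of facts and rules as propositions and derives new facts by forward chaining. The facts, rules, and domain are invented for this post and are not taken from Nilsson (1983).

```python
# Hypothetical sketch of a logic-based, propositional representation of facts
# and rules, in the spirit of component (1) above; not from Nilsson (1983).
facts = {"has_fever", "has_cough"}                   # known propositions
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),   # IF fever AND cough THEN flu_suspected
    ({"flu_suspected"}, "recommend_rest"),           # IF flu_suspected THEN recommend_rest
]

# Simple forward chaining: apply rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['flu_suspected', 'has_cough', 'has_fever', 'recommend_rest']
```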

In terms of challenges, three mentioned by Nilsson are particularly important. The dead duck challenge dictates that there is no single true approach to AI research, as truths are infinite and variable. It is much like the normative statement that all birds can fly: far from the truth, since dead ducks don’t, hence the challenge’s name. Nilsson conveyed that AI research has a myriad of approaches and one should not be confined to singularities. The second, the holistic challenge, is where research problems are broken down into three parts: what is knowledge, how to represent said knowledge, and how said knowledge can be used. The knowledge in question should not be subdivided and examined in isolation, as how knowledge is used depends strongly on how it is represented, which in turn depends on what defines that knowledge. The third, the kludge challenge, concerns simplification in research. Nilsson mentioned that a kludge of a theory, an improvised and simplified assumption of what a truth may be, should be encouraged, as it better conveys the teachings of a discipline. This runs contrary to the belief that research, AI research included, should not be simplified into such empirical forms, since valuable information may be lost as a theory is haphazardly reduced to a kludge (Nilsson, 1983).

Nilsson’s challenges are not baseless worries, as they highlight the gaps and practical knowledge researchers should focus on. Through his historical revisitation, one now better understands the niche of AI as a discipline. Using critical theory, Nilsson explained the social construct of what constitutes AI as a discipline and the suppressive challenges it faces across academia regarding its development.

Fast forward to today, and the leading AI subfield is machine learning (ML). ML refers to computational algorithms that learn and improve automatically from input data and output predictive outcomes based on their design (Koza et al., 1996). Through this learning process, a model adapts and enhances its usability to cater to different situations. This is vastly similar to the learning machines proposed by Turing (1950) in the final section of his seminal paper: a machine that acts as a continuous system, capable of producing results based on input data and its operating environment. Today’s ML models are a close realization of what Turing envisioned 70 years earlier.
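As a minimal, self-contained sketch of that learning-from-data loop (the numbers and learning rate are made up for illustration; this is not any particular published model), a one-parameter linear model can improve its predictions as it repeatedly sees examples:

```python
# Illustrative only: a one-parameter model "learns" a slope from example data.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed output)
w = 0.0      # model parameter, initially uninformed
lr = 0.01    # learning rate

for epoch in range(200):
    for x, y in data:
        error = w * x - y      # how wrong the current prediction is
        w -= lr * error * x    # nudge w to reduce the squared error

print(round(w, 2))        # ~2.0, the slope implied by the data
print(round(w * 5.0, 1))  # predictive outcome for an unseen input (~10)
```

The same loop run on more data, or with a richer model, is the essence of the supervised methods discussed next.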

Nichols et al. (2018) discuss some of the simplest scientific applications of various ML models to medical imaging and diagnostics; a typical positivist philosophy is adopted. The advancement of data collection and computational capabilities enabled traditional statistical modelling to evolve into ML models. The amalgamation of big data, computational power, and exponential growth in algorithm design allowed models such as linear regression, supervised learning methods, and deep learning to be applied to scientific research. In particular, researchers are able to draw out a vast array of information by modelling the interactions among these big-data observations and produce outcomes imperceptible to human observation. Essentially, the application of ML models saves both time and money, as they are capable of classification, segmentation, and detection in the medical field. Gone are the days of intensive manual labour in medical research, along with frequent human errors. The automation that comes with ML modelling brings about significant benefits and efficiency.
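To give a feel for the kind of supervised classification workflow described above, here is a hedged sketch using scikit-learn on synthetic stand-in features; the “scan” features and labels are fabricated for this example and are not from Nichols et al. (2018).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for two imaging-derived features per scan.
healthy = rng.normal(loc=[0.3, 0.2], scale=0.05, size=(100, 2))
abnormal = rng.normal(loc=[0.6, 0.5], scale=0.05, size=(100, 2))
X = np.vstack([healthy, abnormal])
y = np.array([0] * 100 + [1] * 100)   # 0 = healthy, 1 = abnormal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A supervised classifier of the kind the paper groups under ML methods.
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice the features would come from real images and a far larger model, but the structure of the workflow (labelled data in, fitted classifier out, evaluation on held-out cases) is the same.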

However, Nichols et al. (2018) state that ML is not without pitfalls. As the lifeline of ML, the programmed algorithm is heavily dependent on the observational data collected. Depending on the quality and authenticity of the input data, ML models may produce bogus results if researchers are not careful with the theoretical assumptions of said models. Furthermore, there is a worry that this automated convenience may strip researchers of their own computational skills. Thus, the use of a simple positivist approach was effective in explaining the quantitative applications of ML models in modern scientific research.
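The assumption pitfall is easy to demonstrate with the same toy learning loop used earlier: if the data quietly violate the model’s linear assumption, the fitted model is confidently wrong when extrapolating. This is a hypothetical illustration, not an example from the paper.

```python
# Illustrative only: the linear learning rule from before, fed data whose true
# relationship is quadratic (y = x^2), silently violating the model's assumption.
data = [(x, x * x) for x in (1.0, 2.0, 3.0, 4.0)]
w, lr = 0.0, 0.01

for epoch in range(200):
    for x, y in data:
        w -= lr * (w * x - y) * x

print(round(w * 10.0, 1))  # predicts ~33 for x = 10, while the true value is 100
```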

The last philosophical review is based on the comprehensive 2021 AI Index Report by Stanford University’s Human-Centered Artificial Intelligence Institute (HAI), conducted by Zhang et al. (2021). The team took a modernized approach of pragmatism, in which they examined the AI industry in general. Data were sourced from across the globe, from academia, private entities, governments, and non-profits, for an unbiased worldwide view of the current AI market across a multitude of perspectives.

The HAI team (2021) aimed to contribute to knowledge of the AI market by evaluating its presence in seven major aspects: research & development, technical performance, economy, AI education, ethical challenges, diversification, and policy & strategies of AI. With a combination of a data-driven approach and theory-driven reasoning and ethics, key changes in the 2021 economy due to the pandemic led to brand-new discoveries. Notably, the biggest AI investment went to drug design and discovery, a 450% increase since 2019. In terms of education and academic output, a sharp increase in AI-field PhD graduates is observed, coupled with many postgraduate researchers migrating to the US AI industry. An alarming issue, though, is the state of ethical benchmarks concerning AI. The blurring connection between AI technology and its societal application for average citizens makes it difficult to establish a clear ethical benchmark on data access and privacy.

Overall, the pragmatist approach adopted by Zhang et al. allowed AI to be examined more closely, posing clear-cut questions that require answers and considering issues from both objective and subjective orientations, culminating in the publication of this comprehensive yearly report.