Artificial Intelligence: What It Is and Its Varieties


Although numerous definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following one in this 2004 study (PDF, 127 KB) (link is external to IBM): making intelligent machines, particularly intelligent computer programmes, is a scientific and engineering endeavour. It is related to the similar task of utilising computers to comprehend human intellect, but AI need not be limited to techniques that are biologically observable.

But years before this term came into being, Alan Turing’s landmark 1950 paper “Computing Machinery and Intelligence” (PDF, 92 KB) (link is external to IBM) marked the beginning of the artificial intelligence debate. In it, Turing, often called the “father of computer science”, poses the question “Can machines think?” He then proposes a test, now widely known as the “Turing Test”, in which a human interrogator attempts to distinguish a computer-generated text response from a human-written one. Although this test has come under intense scrutiny since it was published, it remains an important part of the history of AI and an ongoing topic of discussion in philosophy, as it draws on ideas about linguistics.

Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which went on to become one of the most influential works on the subject. In it, they explore four potential goals or definitions of AI, differentiating computer systems on the basis of rationality and of thinking versus acting:

Human approach:

  • Systems with human-like thinking
  • Systems that behave like people

Ideal approach:

  • Systems capable of rational thought
  • Systems that function logically

Alan Turing’s definition would fall under the category of systems that behave like humans.

In its simplest form, artificial intelligence is a field that combines computer science and substantial datasets to enable problem-solving. It also encompasses the subfields of machine learning and deep learning, which are commonly mentioned alongside artificial intelligence. These disciplines use AI algorithms to build expert systems that make predictions or categorise information based on incoming data.

Even among sceptics, the launch of OpenAI’s ChatGPT appears to mark a turning point in the hype cycles that artificial intelligence has gone through over the years. The last time generative AI loomed this large, the breakthroughs were in computer vision; today the leap forward is in natural language processing. And generative models can learn the grammar not only of language but also of software code, chemicals, natural images, and many other sorts of data.

The potential uses for this technology are still being explored, but they are expanding daily. And as the excitement around applying AI in business grows, discussions about ethics become critically important. To learn more about IBM’s position in the AI ethics debate, read more here.

Artificial intelligence types: weak vs. strong

Weak AI, also known as Narrow AI or Artificial Narrow Intelligence (ANI), is AI that has been programmed and directed to carry out particular tasks. The majority of the AI that exists today is weak AI. “Narrow” might be a more accurate term, because this form of AI is anything but weak: it supports some incredibly sophisticated applications, including Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). A computer with intelligence comparable to humans, a self-aware consciousness, and the capacity to learn, reason, and plan for the future would be said to have artificial general intelligence (AGI), also known as general AI. Superintelligence, commonly referred to as artificial super intelligence (ASI), would surpass the intelligence and capability of the human brain. Even though strong AI remains entirely theoretical, with no real-world applications today, researchers in the field of artificial intelligence continue to study its potential. Until then, science fiction works such as 2001: A Space Odyssey, with HAL, the superhuman, rogue computer assistant, may provide the best examples of ASI.

Deep learning vs. machine learning

Given that deep learning and machine learning are frequently used interchangeably, it is important to understand their differences. Both are subfields of artificial intelligence, and, as already mentioned, deep learning is itself a subfield of machine learning.

Neural networks are the building blocks of deep learning. The “deep” in deep learning refers to a neural network made up of more than three layers, counting the inputs and the outputs; such a network is considered a deep learning algorithm. The diagram below gives a generic representation of this.
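
To make the layer count concrete, here is a minimal sketch of such a network, assuming PyTorch is available; the layer sizes and the ten output classes are arbitrary placeholders chosen purely for illustration.

```python
# A minimal sketch of a "deep" neural network: an input layer, three hidden
# layers, and an output layer, i.e. more than three layers in total.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 256),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(256, 128),  # second hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # third hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer (e.g. 10 classes)
)

x = torch.randn(32, 784)   # a batch of 32 flattened 28x28 inputs
logits = deep_net(x)       # forward pass
print(logits.shape)        # torch.Size([32, 10])
```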

The way each algorithm learns is where deep learning and machine learning diverge. Deep learning significantly reduces the manual human intervention needed during the feature-extraction phase of the process, allowing larger data sets to be used. As Lex Fridman has put it in an MIT lecture, deep learning is “scalable machine learning”. Traditional, or “non-deep”, machine learning is more reliant on human input: human experts create a hierarchy of features so the algorithm can grasp the distinctions between data inputs, and it typically learns from more structured data.

Although “deep” machine learning can use labelled datasets, commonly referred to as supervised learning, to guide its algorithm, it does not require them. It can ingest unstructured material in its raw form, such as text and photos, and automatically discover the hierarchy of features that distinguish different categories of data from one another. Unlike traditional machine learning, it does not need human intervention to process data, which lets us scale machine learning in more interesting ways.
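
As a toy contrast, the sketch below shows the “non-deep” side of this distinction, assuming scikit-learn is installed; the messages, labels, and the two hand-picked features are invented purely for illustration. A deep learning model would instead consume the raw text itself and learn its own feature hierarchy.

```python
# Traditional ("non-deep") machine learning: a human decides which features
# matter, and the model only ever sees that hand-built table of numbers.
from sklearn.linear_model import LogisticRegression

messages = ["free prize, claim now", "meeting moved to 3pm",
            "free gift card inside", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy labels)

def hand_crafted_features(text):
    # Human-designed features: message length and presence of the word "free".
    return [len(text), int("free" in text.lower())]

X = [hand_crafted_features(m) for m in messages]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([hand_crafted_features("free vacation offer")]))  # likely [1]
```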

Diagram of a deep neural network

The development of generative models

The term “generative AI” refers to deep learning models that can take raw data, say the entirety of Wikipedia or the collected works of Rembrandt, and “learn” to produce statistically likely outputs when prompted. At a high level, generative models encode a condensed representation of their training data and draw on it to create new work that is similar to, but not identical to, the original data.

Generative models have long been used in statistics to analyse numerical data. Thanks to the development of deep learning, however, they can now be applied to speech, images, and other complex data types. Variational autoencoders, or VAEs, introduced in 2013, were among the first class of models to achieve this crossover. VAEs were the first widely used deep learning models for producing realistic speech and images.
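
For intuition, here is a minimal VAE sketch, assuming PyTorch; the class name, layer sizes, and stand-in data are illustrative placeholders rather than any particular published model. The encoder compresses each input into a latent distribution, a sampled latent code is decoded back into data space, and the loss balances reconstruction quality against keeping the latent distribution close to a standard normal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients can flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec_out(F.relu(self.dec_hidden(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term plus KL divergence between q(z|x) and a standard normal.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = TinyVAE()
x = torch.rand(8, 784)                 # stand-in batch of flattened "images"
x_recon, mu, logvar = model(x)
print(vae_loss(x, x_recon, mu, logvar))
```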

According to Akash Srivastava, a generative AI specialist at the MIT-IBM Watson AI Lab, “VAEs opened the floodgates to deep generative modelling by making models easier to scale.”

“A lot of what we now refer to as generative AI began here,” he adds.

Early models like GPT-3, BERT, and DALL-E 2 have demonstrated what is possible. In the future, models trained on large amounts of unlabeled data and requiring little to no fine-tuning will be employed for a wide variety of tasks. Systems that carry out particular tasks in a single domain are giving way to broad AI, which learns more generally and works across domains and problems. Foundation models, trained on large, unlabeled datasets and fine-tuned for an array of applications, are driving this shift.

Foundation models are expected to significantly speed up the adoption of generative AI in businesses. By lowering labelling requirements, they will make it much simpler for businesses to get started, and the highly precise, efficient AI-driven automation they enable will allow many more organisations to use AI in a wider range of mission-critical situations. For IBM, the ultimate goal is to create a frictionless hybrid-cloud environment where the power of foundation models can be brought to every company.

Uses of artificial intelligence

AI systems have a wide range of practical applications nowadays. Some of the most typical use cases are listed below:

  • Speech recognition, also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, is a capability that converts spoken language into written language using natural language processing (NLP). Many mobile devices have speech recognition built into their operating systems to enable voice search (like Siri) and to increase messaging accessibility. 
  • Online virtual agents are replacing human agents along the customer journey in customer care. They answer frequently asked questions (FAQs) about topics such as shipping, provide personalised advice, cross-sell products, or suggest sizes for users, changing the way we think about user engagement on websites and social media. Examples include messaging bots with virtual agents on e-commerce sites, chat apps such as Slack and Facebook Messenger, and tasks usually carried out by virtual assistants and voice assistants.
  • Through the use of digital images, videos, and other visual inputs, computer vision technology enables computers and systems to extract meaningful information from those inputs and take appropriate action. Its ability to make recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision is used for self-driving cars in the automotive sector, radiological imaging in healthcare, and photo tagging in social media.
  • Recommendation engines: Using data on past consumer behaviour, AI algorithms can help uncover trends that can be used to develop more effective cross-selling strategies. Online shops use this to suggest relevant add-ons to customers during the checkout process; a minimal sketch of the idea appears after this list.
  • Automated stock trading: Designed to help optimise stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
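
As a rough illustration of the idea behind a recommendation engine, the sketch below computes item-to-item similarity from a toy co-purchase matrix; the item names and purchase data are invented, and real systems use far richer signals and models.

```python
# Item-to-item recommendation via cosine similarity over purchase history.
import numpy as np

items = ["laptop", "mouse", "keyboard", "monitor"]
# Rows = users, columns = items; 1 means the user bought the item.
purchases = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
])

# Cosine similarity between item columns: items bought by similar sets of
# users end up with a high similarity score.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)

def recommend_addons(item_in_cart, top_k=2):
    idx = items.index(item_in_cart)
    scores = similarity[idx].copy()
    scores[idx] = -1.0  # never recommend the item already in the cart
    best = np.argsort(scores)[::-1][:top_k]
    return [items[i] for i in best]

print(recommend_addons("laptop"))  # the two items most often co-purchased with a laptop
```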

 

Key dates and figures in artificial intelligence history

The idea of “a machine that thinks” dates back to ancient Greece. But important events and milestones in the evolution of artificial intelligence since the advent of electronic computing (and relative to some of the subjects covered in this article) include the following:

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing, famous for breaking the Nazi ENIGMA code during World War II, sets out to answer the question “Can machines think?” and introduces the Turing Test to determine whether a computer can exhibit the same intelligence (or the results of the same intelligence) as a person. The value of the Turing Test has been debated ever since.
  • 1956: John McCarthy coins the term “artificial intelligence” at the first-ever AI conference, held at Dartmouth College. (McCarthy went on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever functioning AI software program.
  • 1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network, which “learned” through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish Perceptrons, which quickly becomes both a landmark work on neural networks and, at least for a while, an argument against further neural network research.
  • 1980s: Neural networks that train themselves via a backpropagation algorithm become widely used in AI applications.
  • 1997: In a chess match (and rematch), IBM’s Deep Blue defeats then-world chess champion Garry Kasparov.
  • 2011: IBM Watson defeats Ken Jennings and Brad Rutter on Jeopardy!
  • 2015: Baidu’s Minwa supercomputer uses a special type of deep neural network, called a convolutional neural network, to classify and identify images more accurately than the average person.
  • 2016: DeepMind’s AlphaGo computer programme defeats Lee Sedol, the reigning world champion Go player, in a five-game match. Given the enormous number of possible moves as the game develops (more than 14.5 trillion after just four moves!), the victory is noteworthy. Google had earlier acquired DeepMind, reportedly for $400 million.
  • 2023: The performance of AI and its capacity to generate enterprise value undergo a significant transformation due to the advent of large language models, or LLMs, like ChatGPT. Deep-learning models can be pre-trained on enormous volumes of raw, unlabeled data using these new generative AI techniques.

