What is Artificial Intelligence?
Artificial intelligence is the application of rapid data processing, machine learning, predictive analysis, and automation to simulate intelligent behavior and problem-solving capabilities in machines and software.
It is the intelligence exhibited by machines and computer programs, as opposed to natural intelligence, the intelligence of humans and animals. Machines and programs that use artificial intelligence are typically designed to read and interpret a data input and then respond to it using predictive analytics or machine learning.
Artificial Intelligence and Machine Learning
Machine learning and artificial intelligence are closely related and often confused with one another. In short, machine learning is the science and approach that enables the creation of artificially intelligent machines and programs.
Artificial intelligence frequently uses machine learning techniques to improve the system's ability to "learn" and better interpret and react to inputs. Many in the field only consider a system to be "intelligent" when it uses machine learning to learn and improve.
Applications of Artificial Intelligence
There are many applications for artificial intelligence in use today, and many more being researched. Some of the most popular uses of artificial intelligence systems today are in industries like healthcare, automotive technology, and video games. Below are a few of the common uses of artificial intelligence in the industries where it is already gaining popularity.
In healthcare, the number of applications for artificial intelligence is expanding rapidly. Artificial intelligence is currently being used to interpret lab results in blood tests and genetic testing. There are also efforts being made to use AI to interpret medical imaging, such as X-rays and MRI results.
Artificial intelligence is commonly used in flight simulations and simulated aircraft warfare. Many aircraft are also equipped with sensors that feed data to a system that uses AI to evaluate the mechanical 'health' of the aircraft.
Artificial intelligence is a growing component of the automotive industry. Many companies, like Tesla, are incorporating self-driving technology into their vehicles. Self-driving vehicles use AI to read data about the vehicle's surroundings and respond to other drivers, lines on the road, and similar feedback gathered by the vehicle's sensors.
Financial institutions have been using artificial intelligence to analyze market trends and even automate trades based on various market indicators and triggers. Many banks and financial institutions have also been using AI algorithms and neural network systems to identify fraudulent bank and credit card charges, and then trigger human-managed investigations.
Many debate how much AI is truly used in video games. This is because machine learning techniques are rarely used and games typically only choose between a handful of automated responses, rather than actually "learning" to defeat their opponents. However, as gameplay has grown in complexity, so has the AI programming that governs it. AI is most commonly seen when the game player interacts with non-player characters in a game. Actions of these characters are often governed by complex AI algorithms that depend on the game player's actions.
Applications to Other Industries
As stated above, artificial intelligence is really the application of machine learning, predictive analysis, and automation, so its applications are vast. AI has been spreading rapidly through technology-driven industries, and it is quickly becoming an important element of several other major industries, including:
- Manufacturing - AI is being used to automate the building of vehicles and other large machinery and equipment
- Marketing - AI solutions are being used to analyze user behavior and more effectively target potential customers
- Employee Recruiting - AI technology is now frequently used to match employers to job seekers
- Transportation - GPS systems and city planners use AI programs to identify and suggest the most efficient routes
As time goes on and artificial intelligence techniques become more widely understood and accessible, more industries will surely benefit from the efficiency and scaling effects that AI can provide.
How Does Artificial Intelligence Work?
Artificial intelligence "works" by combining several approaches to problem solving from mathematics, computational statistics, machine learning, and predictive analytics.
A typical artificial intelligence system will take in a large data set as input and quickly process the data using intelligent algorithms that learn and improve each time a new dataset is processed. After this training procedure is complete, a model is produced that, if successfully trained, will be able to predict or reveal specific information from new data.
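The train-then-predict pattern described above can be sketched in a few lines of Python. This is an illustrative toy only: it "trains" by fitting a simple straight line y = a*x + b to a small made-up dataset, then uses the resulting model to predict a value for new, unseen input. Real AI systems use far larger datasets and far more complex models, but the overall flow is the same.

```python
# Minimal sketch of the train-then-predict pattern: fit a simple
# linear model y = a*x + b to training data, then predict new values.

def train(xs, ys):
    """Least-squares fit of y = a*x + b; returns the 'model' (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# Training data: inputs paired with known outputs (here, y = 2x).
model = train([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(model, 5))  # prediction for a new, unseen input: 10.0
```

The "model" here is just two numbers, but it captures the core idea: training extracts a compact summary of the data that can then be applied to inputs the system has never seen.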
In order to fully understand how an artificial intelligence system quickly and "intelligently" processes new data, it is helpful to understand some of the main tools and approaches that AI systems use to solve problems. Below are the most common techniques used in artificial intelligence systems today:
Artificial Neural Networks
Neural networks – or more specifically, artificial neural networks – are computing systems that progressively improve their ability to complete a task without task-specific programming. The approach these networks use is modeled on the way biological neural networks in the human brain solve problems. Read more about artificial neural networks.
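A single artificial neuron (a perceptron) shows the "progressive improvement" idea in miniature. The sketch below, in plain Python, starts with all weights at zero and nudges them toward the correct answer each training pass until the neuron computes the logical AND function. It is illustrative only; practical neural networks chain many such units together and use more sophisticated training rules.

```python
# A single artificial neuron (perceptron) learning logical AND.
# Weights start at zero and are adjusted toward correct answers on
# every training pass -- no AND-specific logic is ever programmed in.

def step(x):
    return 1 if x >= 0 else 0

# (input1, input2) -> expected output for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0
rate = 0.1

for _ in range(20):                      # repeated training passes
    for (x1, x2), target in data:
        out = step(w1 * x1 + w2 * x2 + bias)
        error = target - out             # 0 when the prediction is right
        w1 += rate * error * x1          # nudge weights toward target
        w2 += rate * error * x2
        bias += rate * error

# The trained neuron now reproduces AND on all four inputs.
print([step(w1 * a + w2 * b + bias) for (a, b), _ in data])  # [0, 0, 0, 1]
```

The only "knowledge" the neuron ends up with is three learned numbers (two weights and a bias), yet that is enough to reproduce the target behavior.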
Statistical Learning and Classification
A classifier is a function that uses pattern recognition and pattern matching to identify the closest match for an input. In supervised learning, the classifier attempts to match each input to one of a limited set of predefined categories. In unsupervised learning, there are no predefined categories, and the classifier must instead discover structure in the data on its own.
Classifiers are ideal for artificial intelligence applications because their predictive models are adjusted and improved as they process more new data. Read more about classifiers and statistical learning.
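A very small supervised classifier can make this concrete. The sketch below computes the average point (the centroid) of each labeled group in some made-up training data, then classifies new points by whichever centroid is nearest. The labels and coordinates are invented for illustration; real classifiers use richer features and more robust matching rules.

```python
# A tiny nearest-centroid classifier in plain Python: training
# summarizes each labeled group by its centroid, and classification
# picks the label of the nearest centroid.

import math

def train(points, labels):
    centroids = {}
    for label in set(labels):
        group = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(group) for c in zip(*group))
    return centroids

def classify(centroids, point):
    return min(centroids, key=lambda label: math.dist(centroids[label], point))

# Training data: two clusters with known (supervised) labels.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["low", "low", "low", "high", "high", "high"]

model = train(points, labels)
print(classify(model, (2, 2)))   # near the "low" cluster
print(classify(model, (7, 9)))   # near the "high" cluster
```

Retraining with additional labeled points simply moves the centroids, which is a simple instance of the adjust-and-improve behavior described above.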
Optimized Search Tactics
Exhaustively scanning through every possible solution is typically not an efficient way to solve a problem, especially in artificial intelligence applications where speed may be very important. However, it is possible to apply rules of thumb, or heuristics, to prioritize possible solutions and complete the problem-solving process more quickly.
Some search algorithms will also use mathematical optimization to solve problems. Mathematical optimization is an approach that involves taking a best guess at the solution based on limited information, and then evaluating "nearby" solutions until the best answer is reached. This can be thought of as using "blind hill climbing" as an approach to reach the solution, or "top of the hill."
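The hill-climbing idea can be sketched in a few lines. The score function below is arbitrary (a single peak at x = 3, chosen just for illustration): the search starts from a guess, repeatedly moves to whichever nearby value scores better, and stops when no neighbor improves.

```python
# Minimal hill-climbing sketch: step toward better-scoring neighbors
# until no neighbor improves on the current solution.

def score(x):
    return -(x - 3) ** 2      # an arbitrary score, highest at x = 3

def hill_climb(start, step=0.5):
    current = start
    while True:
        # Evaluate the two "nearby" candidate solutions.
        neighbors = [current - step, current + step]
        best = max(neighbors, key=score)
        if score(best) <= score(current):
            return current    # no neighbor is better: top of the hill
        current = best

print(hill_climb(0.0))  # climbs from 0 toward the peak at 3
```

Because the climb is "blind" (it only ever looks at immediate neighbors), it can get stuck on a local peak when the score function has several hills, which is exactly the weakness that techniques like simulated annealing are designed to address.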
There are many other approaches to search optimization, including beam search, simulated annealing, random optimization, and evolutionary computation, which more specifically includes various swarm intelligence algorithms and evolutionary algorithms.
Other Artificial Intelligence Techniques
Various approaches in artificial intelligence design and programming have been taken from concepts in logic programming and automated reasoning. These techniques allow programs to "reason" through problems.
There have also been many models and approaches designed for situations where information is uncertain or incomplete. Some of these tools include Bayesian networks, hidden Markov models, Kalman filters, decision theory and analysis, and Markov decision processes. Even certain programming languages, like Prolog, have been adapted to be used in artificial intelligence applications.
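Bayes' rule, the idea underlying tools like Bayesian networks, gives a flavor of how such systems reason with uncertain information. The numbers below are invented purely for illustration: suppose a condition affects 1% of a population, and a test for it detects 90% of true cases but also flags 5% of healthy cases. The question is how likely a positive result is to be a true one.

```python
# A worked example of reasoning under uncertainty with Bayes' rule.
# All probabilities are made up for illustration.

p_condition = 0.01       # prior: P(condition)
p_pos_given_cond = 0.90  # sensitivity: P(positive | condition)
p_pos_given_no = 0.05    # false positive rate: P(positive | no condition)

# Total probability of seeing a positive test at all.
p_positive = (p_pos_given_cond * p_condition
              + p_pos_given_no * (1 - p_condition))

# Bayes' rule: P(condition | positive).
p_cond_given_pos = p_pos_given_cond * p_condition / p_positive
print(round(p_cond_given_pos, 3))  # about 0.154
```

Despite the test being fairly accurate, a positive result here implies only about a 15% chance of the condition, because the condition itself is rare. Updating beliefs in this way as evidence arrives is the core mechanic that Bayesian networks scale up to many interdependent variables.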
History of Artificial Intelligence
Advances in artificial intelligence and the science behind it can be broken into a few main chapters, described below:
AI in Fiction
Works of fiction detailing inanimate beings that display consciousness date back centuries. However, the first meaningful milestones in the history of artificial intelligence are tied to the invention of the computer and the early study of formal and mechanical reasoning.
Turing and Early Theory of Computation
Study of the theory of computation suggested that machines would be able to simulate a wide range of deductive acts through binary operations. The Church-Turing thesis eventually proposed that any "effectively calculable function is a computable function", meaning that anything a human can calculate through an algorithmic process, a machine can also calculate.
These ideas eventually led researchers in neurology and cybernetics to begin exploring the idea of building an electronic brain. Walter Pitts and Warren McCulloch formally proposed designs for artificial neurons in 1943.
Dartmouth and the Formalization of AI Research
In 1956, at a workshop at Dartmouth College, several leaders from universities and companies began to formalize the study of artificial intelligence. This group included Arthur Samuel from IBM, Allen Newell and Herbert Simon from CMU, and John McCarthy and Marvin Minsky from MIT. These researchers and their students went on to develop some of the earliest AI programs, which learned checkers strategies, spoke English, and solved word problems.
Increased Computing Power Enables AI Revolution
After this initial burst of progress in the 1950s, little more progress was made until the late 1990s, when increases in computing power made it possible to apply machine learning techniques that had been very slow when computational resources were more limited. This led to artificial intelligence and machine learning techniques being applied to several fields, including medical diagnosis, data mining, and logistics planning.
Continued and steady progress has been made since, with such milestones as IBM's Watson winning Jeopardy! and the release of the Xbox Kinect, which reads and responds to body motion and voice commands. Additionally, artificial intelligence-based code libraries that enable image and speech recognition are becoming more widely available and easier to use. Thus, AI techniques that were once unusable because of limits on computing power have become accessible to any developer willing to learn how to use them.
If you are interested in learning more about some of these machine learning resources and APIs, or using them to build artificial intelligence into an application, explore this list of resources.