Artificial intelligence (AI) mimics human intelligence processes through the creation and application of algorithms built into a dynamic computing environment. Stated simply, AI is the attempt to make computers think and act like humans.
Achieving this end requires three key components:
The more humanlike the desired outcome, the more data and processing power required.
At least since the first century BCE, humans have been intrigued by the possibility of creating machines that mimic the human brain. In modern times, the term artificial intelligence was coined in 1955 by John McCarthy. In 1956, McCarthy and others organized a conference titled the “Dartmouth Summer Research Project on Artificial Intelligence.” This beginning led to the creation of machine learning, deep learning, predictive analytics, and now to prescriptive analytics. It also gave rise to a whole new field of study, data science.
Today, the amount of data generated by both humans and machines far outpaces humans’ ability to absorb, interpret, and make complex decisions based on that data. Artificial intelligence forms the basis for all computer learning and is the future of all complex decision making. As an example, most humans can figure out how not to lose at tic-tac-toe (noughts and crosses), even though there are 255,168 unique games, of which 46,080 end in a draw. Far fewer people could be considered grand champions of checkers, a game with more than 500 × 10^18, or 500 quintillion, possible positions. Computers are extremely efficient at calculating these combinations and permutations to arrive at the best decision. AI, its logical evolution in machine learning, and deep learning are foundational to the future of business decision making.
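The tic-tac-toe figures above are small enough to check by exhaustive search, which also illustrates why computers handle this kind of combinatorial reasoning so easily. The following is a minimal sketch (the function and variable names are illustrative, not from any particular library) that enumerates every legal move sequence and counts how many games end in a draw:

```python
# Brute-force enumeration of all tic-tac-toe games.
# A game is a sequence of moves ending when a player wins
# or the board fills up (a draw).

WINS = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    """Return (total_games, drawn_games) reachable from this position."""
    total = draws = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            if winner(board):
                total += 1                      # game over: a win
            elif all(cell is not None for cell in board):
                total += 1                      # board full: a draw
                draws += 1
            else:
                t, d = count_games(board, 'O' if player == 'X' else 'X')
                total += t
                draws += d
            board[i] = None                     # undo the move
    return total, draws

total, draws = count_games([None] * 9, 'X')
print(total, draws)  # 255168 46080
```

Running the sketch confirms the numbers cited above: 255,168 distinct games, 46,080 of which are draws. The checkers figure (5 × 10^20 positions) is far beyond this kind of naive enumeration, which is why solving checkers required years of distributed computation.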