Several weeks ago, I visited the website of Verizon Communications to check whether a smartphone I wanted to buy on eBay was stolen and whether it would work on Verizon's network. A pop-up message asked if I needed help, and even though I encounter chatbots often, after this one helped me and answered a few questions coherently, I wondered whether it might actually be human.
Then the bot timed out and a new one took over, and then that one timed out too. Their instructions grew repetitive, and I started typing in all uppercase in frustration. Then I remembered they were not really humans at all, only chatbots acting like them, though at least without the offensive comments we saw from Microsoft Corp.'s MSFT Tay.
Chatbots are one of many examples of artificial intelligence that may have left the labs but are not completely ready for prime time, even as AI invades our daily lives. While AI remains a work in progress, it has made huge leaps in what it can do, thanks to the availability of lower-cost computing power that can process far more data than ever before, and the ability to retrieve and process information from the cloud.
It is hard to think of areas of our daily lives that do not involve some form of artificial intelligence: from the search results Alphabet Inc.'s Google GOOGL GOOG provides to the news feed Facebook Inc. FB serves up, from the shopping recommendations at Amazon.com Inc. AMZN to the content suggestions at Netflix Inc. NFLX, and especially the voice-activated home assistants from Google and Amazon, with one soon to come from Apple Inc. AAPL.
“You would have to stop and think about what icons on your phone have no AI component,” said Greg Estes, vice president of developer programs at Nvidia Corp. NVDA, a Silicon Valley chip maker that has been transformed by the discovery that its graphics processors can help speed up AI applications. “Only five years ago, it’s this research thing, off here in the corner, and now it is in everyone’s wallet or purse and it is changing everyone’s life.”
Read more on artificial intelligence in 2018:
Apple is acquiring knowledge, but results may take a while
Google seeks to turn early focus on AI into cash
Waiting for driverless cars to become a reality
In 2018, we are at a crossroads with one of the most hyped technologies of the past year: while AI is in nearly everything, it is still not always that useful or amazing, and can be extremely frustrating. While much of the talk is about the possibilities, both good and evil, the reality is much less dramatic: Artificial intelligence has come a long way in just the last few years, but it still has a very long way to go to achieve the aims Silicon Valley increasingly espouses. And we—the users of AI-driven services, the technologists working to push it along and investors pouring money into its development—will determine its final destination.
The rise of the machines
At its simplest level, AI refers to giving machines intelligence, a pie-in-the-sky goal that has been the focus of research since the 1940s, though the concept itself is more than a hundred years old. Some of the earliest serious work in artificial intelligence goes back to Alan Turing, the British mathematician who published a cornerstone paper in 1950, “Computing Machinery and Intelligence,” that pondered the question of whether a machine can think. Turing devised a game, his famous “Imitation Game” (also the name of the 2014 film about Turing and his machine that cracked the German Enigma code during World War II), which is now known as the Turing Test.
The term “artificial intelligence” was coined by computer scientist John McCarthy and first used at an academic research conference he organized in 1956 at Dartmouth College. The conference generated a huge amount of academic interest in the topic and propelled the next two decades of research into the field, including a major breakthrough in the 1960s with Eliza, an early natural-language program seen as one of the earliest chatbots.
In the 1970s and 1980s, AI research proceeded in fits and starts as government funding slowed or halted entirely once researchers' forecasts for the potential of AI proved too bullish. One such prediction came from Marvin Minsky, the well-regarded co-founder of MIT's AI lab, who told Life magazine in 1970 that “in three to eight years, we will have a machine that has the general intelligence of a human being.”
Sound familiar? The same kind of predictions echo through the new AI age.
The Big Bang of AI
AI has come a long way since Minsky’s overambitious prediction, thanks to the advent of much cheaper computer storage and faster processors. Even as recently as 20 years ago, AI was still seen as a research project, said Kumar Srivastava, a vice president of products and strategy at the BNY Mellon Silicon Valley Innovation Center.
“There was this stage of AI where we didn’t have enough compute resources,” he said, adding that his first job out of college was working at Microsoft on a machine-learning engine used to keep spam from Hotmail in-boxes. “We had a limit on how much data we could process, how many examples we had to produce. Now those limits have disappeared.”
Machine learning uses complex algorithms (sets of concrete instructions written by humans to solve a problem) to teach a computer tasks like finding certain types of emails, analyzing sports statistics, or scanning chess moves and street-map data, making predictions based on the known properties and patterns of the data without explicit programming. Machine learning can recognize text, voice and images.
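The spam-filtering task Srivastava describes is a classic illustration of that idea: the program is never given hand-written rules about which words mean spam; it counts words in labeled examples and predicts from those counts. Here is a minimal naive Bayes sketch in plain Python (the training messages and words are made up for illustration):

```python
import math
from collections import Counter

# Toy labeled examples: (message, label). A real filter trains on millions.
TRAIN = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday with the team", "ham"),
]

def train(examples):
    """The 'learning' step: just count word frequencies per label."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label by log-probability; no hand-written spam rules."""
    vocab = len({w for c in word_counts.values() for w in c})
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAIN)
print(classify("win a cash offer", word_counts, label_counts))
print(classify("monday team meeting", word_counts, label_counts))
```

Adding more labeled examples improves the counts, and therefore the predictions, without anyone editing the code, which is exactly the "without explicit programming" point.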
As computer-processing costs declined rapidly over the last decade, a 2012 discovery jump-started the current AI age. University of Toronto researcher Alex Krizhevsky discovered that graphics-processing units, or GPUs, could supply even more number-crunching power for deep learning, a more complex form of machine learning. Deep learning can teach a computer to exclude all but the most important elements of an image, for example.
Krizhevsky and his team designed a neural network called AlexNet and trained it with a million example images, which required trillions of math operations on graphics chips. They wrote no vision code, but the computer learned to recognize images by itself.
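The training loop behind that feat, at toy scale, works like this: start with random weights, compare the network's output with the right answer, and nudge the weights to shrink the error, millions of times over. A minimal single-neuron sketch (learning the logical OR function rather than images, which AlexNet did with millions of weights instead of three):

```python
import math
import random

random.seed(0)

# Training examples for logical OR: inputs -> target output.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# A single artificial neuron: two weights and a bias, all learned.
w1, w2, b = random.random(), random.random(), random.random()

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent: repeatedly nudge the weights to shrink the error.
for _ in range(5000):
    for (x1, x2), target in DATA:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)  # derivative of squared error
        w1 -= 0.5 * grad * x1
        w2 -= 0.5 * grad * x2
        b -= 0.5 * grad

# After training, the neuron reproduces OR without being programmed with it.
for (x1, x2), target in DATA:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

No line of this code says what OR is; the behavior emerges from the examples, just as AlexNet was given no vision code. GPUs matter because the real version of this loop runs trillions of such multiply-and-nudge operations.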
Read also: GPUs may not hold AI chip crown forever
Nvidia refers to that discovery as “the Big Bang of AI,” and it propelled a wave of further research, development and discovery, including advances in self-driving cars. While AI on the surface seems to be a feat of software and programming, it would not be possible without revved-up computing power.
The new age of AI
Nvidia jumped into research and development after learning about the success of AlexNet, and advances in server chips and self-driving cars have been a boon for the company once known just for boosting videogame graphics. The stock was the biggest gainer on the S&P 500 in 2016, and among the top 10 on that list again in 2017, as investors sought to jump into AI.
The success has spurred innovation throughout the semiconductor sector, with at least one non-chip company, Alphabet, now developing its own chips as a secret sauce for performing more calculations in its data centers. Intel Corp. INTC, which missed the boat on some major new technologies such as mobile computing in the last decade, is now investing heavily in AI as the future, but its core x86 CPU architecture is not ideally suited for the parallel processing that AI applications require.
“It’s about finding inference data; that is what is driving this architectural shift to highly parallel architectures,” said Intel’s Naveen Rao, whose startup chip company, Nervana Systems, was purchased by Intel in 2016 for over $400 million, referring to inference engines that apply logical rules to current data to deduce new information. “CPUs are not the right architecture for AI. This is embarking on a whole new exploration of what a computer is.”
Rao added, though, that every new revolution is a revolution of old techniques: “A lot of the techniques came from highly parallel computing. Intel was a leader there.”
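Inference of the classic, rule-based kind the article describes, applying logical rules to known facts until new facts can be deduced, can be sketched as a forward-chaining loop. The rules and facts below are invented for illustration:

```python
# Each rule: if all condition facts are known, conclude a new fact.
RULES = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "carries_passengers"}, "is_car"),
]

def infer(facts, rules):
    """Forward chaining: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # a deduced fact can trigger later rules
                changed = True
    return facts

print(infer({"has_wheels", "has_engine", "carries_passengers"}, RULES))
```

Note that "is_car" follows only indirectly: the first rule deduces "is_vehicle," which then satisfies the second rule, which is why the loop keeps running until nothing changes. (Modern neural-network inference, running a trained model on new data, is a different workload, but it shares this highly parallel, data-driven character that Rao says CPUs handle poorly.)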
With Nervana’s chip designs, Intel now has a neural-network processor optimized for deep learning. It uses a new numeric-format technology called Flexpoint, which vastly increases parallelism on the semiconductor die while decreasing the amount of power used per computation. The chip has just begun shipping, and Intel has been collaborating with Facebook, which provided feedback on its needs for chips suited to its algorithmic processing.
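Intel has not published every detail of Flexpoint, but the core idea behind formats like it is block floating point: many values in a tensor share a single exponent, so the chip can do cheap, highly parallel integer math on the mantissas instead of full floating-point math on each value. A rough illustrative sketch of that shared-exponent idea (not Intel's actual format):

```python
import math

def quantize(values, mantissa_bits=16):
    """Store a block of numbers as integer mantissas plus ONE shared
    power-of-two exponent, instead of a per-value exponent as in IEEE floats."""
    max_mag = max(abs(v) for v in values)
    limit = 2 ** (mantissa_bits - 1) - 1  # largest signed mantissa
    # Smallest power-of-two scale under which the biggest value still fits.
    exp = math.ceil(math.log2(max_mag / limit)) if max_mag else 0
    scale = 2.0 ** exp
    return [round(v / scale) for v in values], exp

def dequantize(mantissas, exp):
    return [m * (2.0 ** exp) for m in mantissas]

values = [100000.0, -2500.5, 42.0]
mantissas, exp = quantize(values)
print(mantissas, exp)            # small integers plus one exponent
print(dequantize(mantissas, exp))  # close to the originals
```

The power saving comes from that trade: integer multiply-adds are much cheaper in silicon than floating-point ones, at the cost of a small rounding error (bounded by half the shared scale) on each value.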
The semiconductor examples show how AI is transforming industries. In software, where AI has been in development for decades, every enterprise software company now has a strategy or products infused with AI, such as Salesforce.com Inc.’s CRM Einstein component of its cloud-based customer-relationship-management software. IBM IBM is selling its Watson system, which famously won a match of “Jeopardy!” in 2011, as a cloud service to research institutions and hospitals to help in the diagnosis of disease, drug discovery or the interpretation of genetic tests. It also offers Watson as a service to companies that want to create a chatbot, for example, or add other elements of AI to their business, such as an AI-powered ETF.
According to a Bernstein Research report, 56% of spending on AI in 2017 will be on software.
AI is also creating new industries, such as self-driving cars and drones, while acting as the foundation for another Holy Grail of computer science, robotics. But iRobot Corp.’s IRBT Roomba vacuum is still very far from the 1960s cartoon “The Jetsons” and its vision of an autonomous Rosie the Robot as your personal maid.
A long way to go
From robots to chatbots to autonomous cars, AI is still in its early days. Yann LeCun, a top AI executive at Facebook, one of the biggest users of AI with its algorithms that predict consumers’ interests, told The Wall Street Journal last year that “we are a long way from machines that are as intelligent as humans—or even rats.”
“So far, we’ve seen only 5% of what AI can do,” he said.
Those realistic voices do not quell fears, in tech and beyond, about the scary potential of AI in the arena called artificial general intelligence, or AGI: the point at which computers become so intelligent that they can perform the same intellectual tasks as a human being without any further human input. This is the area Tesla Inc. TSLA Chief Executive Elon Musk refers to when he talks about his fears of artificial intelligence, such as a tweet earlier this year saying AI could be more dangerous than the threat from North Korea.
“There’s a lot of risk in concentration of power,” Musk told Rolling Stone in a cover story interview last month. “So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?”
AGI is also envisioned as leading to the so-called technological singularity, the theory that artificial superintelligence will trigger runaway technological growth, a scenario depicted in many sci-fi movies, including this year’s “Singularity.”
“It will be beyond our comprehension,” said BNY Mellon’s Srivastava.
Tolga Kurtoglu, Chief Executive of PARC XRX, the famous research lab formerly known as Xerox PARC, said that what needs to be built into systems is a sense of self-awareness, “so that they can detect when they’re commanded to do something that is outside of their reliable and safe and secure operational regime. And that’s a really, really difficult problem to solve.”
While this discussion can get very scary, think back to the predictions of many decades ago. Silicon Valley has been down this road of hype, fear and disappointment before. The truth is that we are still yelling at Siri when she doesn’t understand, or typing in all uppercase at chatbots that are still very, very stupid.
“We need to pass the sort of hype curve of AI, and then see the sort of adoption,” said PARC’s Kurtoglu. “It’s just mind-blowing how quickly it has penetrated into so many different industries. And once that adoption passes a certain threshold, then I think it will be easier for us to perhaps estimate how long it will take.”