“OK Google, show me my photos of jelly doughnuts.”
This straightforward order to display pictures of delicious fried confections, spoken into a Google Pixel 2 smartphone with the Google Assistant, is the type of command that users have been executing in Alphabet Inc.’s GOOGL, GOOG search engine for years. Behind the scenes, however, the response to this type of query now leverages an enormous amount of machine-learning technology that Google has spent years and billions of dollars developing, in hopes of being a leader in artificial intelligence.
For that command to function, software produced by Alphabet-owned Google needed to deploy image content analysis systems, voice recognition and a host of other technologies that revolve around machine learning and AI, mostly pumped through high-tech data centers the company has built. It also decided to make the hardware that runs it, with an eye on pushing the abilities of its services to new places in 2018 and beyond.
Since 2013, Alphabet has ramped up its infrastructure spending, pouring $57.36 billion into capital expenditures—roughly $10 billion a year. Since Chief Executive Sundar Pichai took over the top job at Google in 2015, Alphabet has spent $30 billion in that category, which likely includes the data centers necessary for the computing power that makes Google Assistant function as well as its cloud computing division and AI-backed consumer hardware lineup.
To hear Alphabet tell it, an artificial-intelligence-first future will one day surround us. At an autumn launch event in San Francisco, Pichai unveiled the physical manifestation of that idea: a suite of consumer hardware products backed by its AI technology that is always listening, sometimes watching and silently awaiting our command.
“Computing is moving from mobile-first to AI-first, with more universal ambient and intelligent computing that you can interact with naturally, all made smarter by the progress we are making with machine learning,” Pichai said in early 2017, a refined version of a statement he had been making for a year up to that point.
To put it mildly, Google has already made a massive multibillion-dollar bet on AI and machine learning, a bet that the company will continue to pursue.
“One of the things Sundar means by that is that it’s really hard to find a product or offering at Google now that doesn’t apply some level of machine learning to improve the customer experience or to improve the back end operations that we’ve got going on,” Tariq Shaukat, president of Google Cloud, said in a telephone interview with MarketWatch. “You can see that both in the products we’re offering and the capabilities we’re offering customers.”
An early focus, but an unfocused return
Though the company began its efforts before Pichai’s earliest remarks about living in an AI-first world—which he made in response to a question from Baird analyst Colin Sebastian on a quarterly earnings call in April of 2016—getting into AI at an early point has set Google apart from competitors such as Amazon.com Inc. AMZN, Facebook Inc. FB and Microsoft Corp. MSFT, all of which are working on the same goal in some fashion.
Google has been rewarded by investors in 2017, when the company’s market capitalization topped $700 billion for the first time, making it only the second company to accomplish that feat after Apple Inc. AAPL. Shares are up more than 30% in 2017, while the S&P 500 index SPX has gained a little less than 20%.
“I think Google is likely the most sophisticated AI-driven-company,” Sebastian told MarketWatch in a telephone interview. “They certainly have a tremendous amount of talent in machine learning and data science…From the ground up, Google has built something of an AI-driven organization.”
Though Google may enjoy a tech advantage over rivals when it comes to AI, the financial gains are more difficult to pinpoint. AI has clearly bolstered the company’s top line and driven down costs, but because Google applies artificial intelligence across all aspects of its business, it’s a challenge to track those gains and figure out the extent to which AI will help in 2018.
Baird analyst Sebastian says that looking at products like Search—the biggest moneymaker for Google, which serves ads based on search queries—makes it apparent how much the company benefits from AI.
“I do think it’s very much a part of Search, and paid Search relevancy, like determining a particular ad for a particular query,” he said.
What the company’s thousands of engineers have ultimately been able to produce is undeniably impressive. At the 2017 San Francisco launch event, one of the feats was the company’s near real-time language translation via its wireless ear buds and a Pixel 2 phone. The reviews of the new gadget were mixed, suggesting the future Google promises hasn’t yet fully come to fruition, but the tech is nonetheless doing something that was once relegated to the realm of science fiction.
Regardless of bumps along the road to making products, Google’s consumer hardware line is one of the few complete suites that can fully integrate computing into our lives as we live them today. That is a key idea for Pichai, who wants computers to adapt to us, not the other way around.
“In an AI-first world, I believe computers should adapt to how people live their lives, rather than people having to adapt to computers,” Pichai said at the October event. “Computing will be ambient, conversational, thoughtfully contextual, learning and adapting.”
Hardware and services focused on adapting to users
That philosophy is one of the reasons Google elected to make the translating ear buds, Pichai said, since it’s unnatural to have an in-person conversation when both people are essentially gazing into their phones while Google’s computers translate speech. That contrasts with rival Apple’s approach to new gadgets and software, which largely expects consumers to adapt to new developments.
While it’s hard to see on the outside, Sebastian says, machines are responsible for determining what content and ads users see in other Google businesses such as the Google Play app store—paid app recommendations for example—and Gmail’s text ads or automatically generated responses.
And once an AI-backed feature is deployed, it improves as more consumers use it.
“When we first introduced [automated responses in Gmail], it had a relatively low take rate,” Shaukat said. “But as the algorithm has learned how people respond and we’ve gotten better at integrating it into the user experience, you now see usage rates to the point where it’s north of the 12% range, in terms of the number of email responses that start with a smart reply. It very much embeds AI into what would otherwise be the normal Gmail experience.”
Another example that executives have discussed is YouTube’s machine-learning-powered recommendation engine, which is credited with boosting viewership to an average 60 minutes a day for each user by serving video after video tied to previous viewing habits. While this approach keeps users on the site looking at ads, it again illustrates the difficulty parsing the financial effect of AI for Alphabet, as Google does not break out YouTube’s total revenue separately.
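The core idea behind such a recommendation engine can be sketched in a few lines. The toy content-based recommender below is purely illustrative—the video IDs, topic tags, and scoring rule are made up for this sketch and bear no relation to YouTube’s actual system—but it shows the basic mechanic: build a profile from a user’s viewing history, then rank unwatched videos by how well their topics match it.

```python
from collections import Counter

def recommend(watch_history, catalog, k=2):
    """Rank unwatched videos by topic overlap with viewing history.

    watch_history: list of (video_id, topic_set) pairs the user watched.
    catalog: dict mapping video_id -> set of topic tags.
    Returns the top-k unwatched video ids, best match first.
    """
    # Profile: how often each topic appears in the user's history.
    profile = Counter(t for _, topics in watch_history for t in topics)
    watched = {vid for vid, _ in watch_history}
    # Score each unwatched video by summing its topics' history weights.
    scores = {
        vid: sum(profile[t] for t in topics)
        for vid, topics in catalog.items()
        if vid not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

history = [("v1", {"cooking", "baking"}), ("v2", {"baking", "desserts"})]
catalog = {
    "v1": {"cooking", "baking"},
    "v3": {"baking", "desserts", "doughnuts"},
    "v4": {"politics"},
    "v5": {"cooking"},
}
print(recommend(history, catalog))  # -> ['v3', 'v5']
```

A production system would learn these weights from billions of watch events rather than counting tags, but the serving logic—score candidates against a user profile, return the top matches—follows the same shape.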
To run advancements such as YouTube’s recommendation engine and Gmail’s auto replies, the company has invested in some of the brightest minds that study artificial intelligence. Sebastian says that its early interest in developing the technology has allowed it to amass a phalanx of engineers.
“There’s an intense arms battle in Silicon Valley for engineering talent focused on AI and machine learning,” said Sebastian, who noted that the thousands of machine learning engineers and thinkers are “very expensive.” “Google is considered a top-tier place to work and they’ve been hiring for years.”
Scientists such as Geoffrey Hinton, who has taught at the University of Toronto and been working half-time for Google since 2013, have also attracted some of their students, which has bolstered the company’s research and development capabilities. Fei-Fei Li, chief scientist of AI and machine learning at Google Cloud Platform, is another notable hire. Currently on sabbatical from her post as director of Stanford University’s AI Lab, Li has said in the past that one of the reasons she elected to work for Google—and in its cloud unit—was to ensure that the tech would be widely available, across every industry.
Li is an expert in computer vision, a part of the AI field that involves teaching computers to understand and recognize images. “We continue to invest in the team, and build out the team under her,” Shaukat said.
Putting AI in the cloud
Much like the other non-advertising units of Google, Cloud does not report its revenue separately; its figures are included in the “Google Other” category, which also covers consumer hardware and Google Play revenue. While the company doesn’t talk in any greater detail about cloud revenue, a spokesman told MarketWatch that the cloud AI division has more than 10,000 customers, and Alphabet Chief Financial Officer Ruth Porat said in October that Google Cloud was the largest contributor to the “Other” revenue bucket, which is on an annual run rate of more than $10 billion.
Sebastian says that Google Cloud has been gaining some traction with businesses and enterprises, especially because it’s essentially offering machine learning as a service. “That’s how it sets itself apart from Amazon and Microsoft,” he said.
Internally at Google, the company sees the mission of the machine learning or AI component of its cloud business as a way of bringing such tools to companies that may not have the talent, resources or expertise to develop the tech themselves.
“Our focal points are really centered around democratizing AI, and facilitating data-driven transformations of companies,” said Shaukat, the Google executive. “When we talk about democratizing machine learning and democratizing AI more broadly, the focus we have from a product standpoint is really how do you take companies that don’t have the 100 Ph.D.s from whatever university focused on deep learning and machine learning; how do you take those companies and make this journey accessible to them?”
An example Shaukat pointed to was Google’s acquisition of Kaggle Inc. for an undisclosed sum, which it announced in March of this year. As part of the deal, Google acquired more than 2,000 public data sets that are now available to the community of data scientists, who use them to train their machine learning models.
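The workflow those public data sets enable—load labeled examples, fit a model, predict on new data—can be illustrated with a deliberately tiny sketch. Everything here is invented for illustration (the features, labels, and the choice of a 1-nearest-neighbor classifier are assumptions, not anything from Kaggle or Google); a real data set would arrive as a CSV with thousands of rows.

```python
def nearest_neighbor_predict(train_rows, train_labels, query):
    """Return the label of the training row closest to `query`,
    using squared Euclidean distance (1-nearest-neighbor)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_rows)), key=lambda i: dist(train_rows[i], query))
    return train_labels[best]

# Two made-up features per row, e.g. [sugar_g, fat_g], with a label
# saying whether the item is a doughnut.
rows = [[30.0, 12.0], [28.0, 10.0], [2.0, 1.0], [3.0, 0.5]]
labels = ["doughnut", "doughnut", "not", "not"]

print(nearest_neighbor_predict(rows, labels, [29.0, 11.0]))  # -> doughnut
print(nearest_neighbor_predict(rows, labels, [1.0, 1.0]))    # -> not
```

The point of “democratizing AI,” as Shaukat frames it, is that a company shouldn’t need to build even this much from scratch: the data, the trained models and the serving infrastructure come as a service.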
“That’s just one small illustration of this notion of democratizing AI,” Shaukat said.
Within Google, the company uses machine learning to optimize its own operations in data centers, which has reduced power consumption by 40%, Shaukat said—because the algorithms were able to find opportunities engineers were unable to find on their own.
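In spirit, that kind of optimization means learning from operational logs which settings waste energy and which don’t. The sketch below is a heavily simplified stand-in (the setpoints, readings and the average-then-pick-minimum rule are invented for illustration; Google’s actual system is far more sophisticated): it scans historical (cooling setpoint, energy) logs and picks the setpoint with the lowest observed consumption.

```python
def best_setpoint(history, candidates):
    """Pick the cooling setpoint with the lowest average observed
    energy use in historical (setpoint, kwh) logs. A real system
    would fit a learned model over many sensor inputs instead of
    averaging a single variable."""
    avg = {}
    for sp in candidates:
        readings = [kwh for s, kwh in history if s == sp]
        if readings:
            avg[sp] = sum(readings) / len(readings)
    return min(avg, key=avg.get)

# Made-up log: (setpoint in deg C, energy in kWh for the period).
log = [(18, 120.0), (18, 118.0), (20, 105.0), (20, 107.0), (22, 111.0)]
print(best_setpoint(log, [18, 20, 22]))  # -> 20
```

The appeal of the learned version is exactly what Shaukat describes: with dozens of interacting variables, the algorithm can surface non-obvious settings that human engineers would not try.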
Data center energy savings are just one example of the hundreds of ways AI works behind the scenes to capture potentially massive returns or drastically lower operational costs. Even though reducing an energy bill is not exactly a banner headline for a tech company that is using balloons to bring internet to unconnected places around the world, or organizing the world’s information, it does represent millions in operational expenses and potentially less of an environmental toll.
But after the billions of investment dollars in talent, massive data center construction and other hardware like the company’s machine-learning-focused chips, most people around the world will experience the expression of this tech titan’s vision in the palm of their hand or while watching TV. Virtual assistants are imperfect and fairly unintelligent at the moment, incapable of performing tasks that toddlers and small children undertake by instinct or with little instruction. And image recognition is still largely a novelty, able to transcribe a business card into a contact, in the case of Google Lens, or provide information about a work of art. There is also Alphabet’s big bet on driverless cars in the form of Waymo.
Google sees the billions it has invested eventually offering much more than an improved search for pictures of jelly doughnuts. In the company’s vision, it’s only a matter of time before we can say: “OK Google, take me on a wine tasting tour of Sonoma County, and make sure you stop at my favorite places and file my taxes. Also, I want some jelly doughnuts delivered by the time I get back.”