AI should be for everyone

Alphabet CEO Sundar Pichai has likened the potential impact of AI to that of electricity, so it should come as no surprise that Google Cloud anticipates growing AI and machine learning (ML) momentum across the full spectrum of customers and use cases.

Some of this momentum is foundational, like the hundreds of scholarly citations Google AI researchers earn each year, or ML development and experimentation that is 5x faster and requires 80% fewer lines of code thanks to products like Google Cloud Vertex AI. Some of it is more tangible: mortgage servicer Mr. Cooper using Google Cloud Document AI to process documents 75% faster while cutting costs by 40%, Ford using Google Cloud AI services for manufacturing modernization and predictive maintenance, and customers across a variety of industries deploying ML platforms on top of Google Cloud.

Together, these proof points demonstrate our belief that AI should be accessible to everyone and simple to integrate into processes of all types, for users of all technical backgrounds. We see our customers' successes as confirmation of this mindset and as evidence that we are drawing the right lessons from our conversations with business leaders.

It’s crucial to offer cutting-edge services for expert AI practitioners as well as tools for users of all stripes. Part of this means automating or abstracting the ML workflow so it matches the user’s technical proficiency and job requirements. Part of it means connecting our AI and ML services to our broader portfolio of enterprise products, whether that’s smarter language models working seamlessly behind the scenes in Google Docs or BigQuery giving data analysts easy access to ML. However you look at it, AI is becoming a multifaceted, omnipresent technology for companies and people worldwide, and we believe technology providers should reflect this by building platforms that help users harness the power of AI by meeting them where they are.

How we’re powering the next generation of AI

Bringing AI to everyone requires large research investments, even in fields where the path to productization may not be evident for years. We believe that pairing a research-based foundation with a focus on user needs and business requirements will produce sustainable AI products that uphold our AI principles and promote the responsible use of AI.

Many recent improvements to our AI and ML platforms began as Google research projects. Consider how DeepMind’s groundbreaking AlphaFold project made it possible to run protein prediction models on Vertex AI, or how research into neural architecture search led to Vertex AI NAS, which lets data science teams train more accurate models with lower latency and power requirements.

Research is important, but it’s only one part of proving out an AI strategy. When products reach customers, they must speak for themselves, and as they are updated and iterated on, customers must see their feedback taken into account. This reinforces the importance of customer adoption and success across a range of industries, use cases, and user types. In that sense, we consider it a real privilege to work with so many wonderful customers, and we are immensely proud of the work we enable them to do.