Over half of CEOs globally are experimenting with AI to generate text, images and other forms of data, a recent joint survey by Fortune and Deloitte found. Meanwhile, a third of organizations are using generative AI "regularly" in at least one business function, a McKinsey report shows.
Given the massive (and apparently growing) addressable market, it comes as no surprise that Google Cloud is pushing hard -- very hard -- to stay abreast.
During its annual Cloud Next conference, Google announced updates to Vertex AI, its cloud-based platform that provides workflows for building, training and deploying machine learning models. Vertex AI now features updated AI models for text, image and code generation, as well as new third-party models from startups including Anthropic and Meta and extensions that let developers incorporate company data and take action on a user's behalf.
"[With Vertex,] we're taking a very open ecosystem approach, working with broad ecosystem partners to provide choice and flexibility to our customers," June Yang, VP of cloud AI and industry solutions at Google, said in a press briefing. "We've built an approach to generative AI with enterprise readiness at its core, with a strong focus around data governance, responsible AI, security and more."
On the model side, Google claims that it's "significantly" upgraded its Codey code-generating model, delivering a 25% quality improvement in "major supported languages" for code generation. (Google didn't expand on that vague metric in the materials this reporter was given, unfortunately.) It's also updated Imagen, its image-generating model, to improve the quality of generated images and support Style Tuning, which allows customers to create images "aligned to their brand" using as few as 10 reference images.
Elsewhere, Google's PaLM 2 language model understands new languages (38 in general availability and more than 100 in preview) and has an expanded 32,000-token context window. Context window, measured in tokens (i.e., chunks of raw text, such as words or word fragments), refers to the text the model considers before generating any additional text (32,000 tokens equates to about 25,000 words, or around 80 pages of double-spaced text).
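The word and page figures above come from back-of-the-envelope arithmetic, which can be sketched as below. Note the words-per-token ratio is a common rough heuristic for English text, not an exact property of PaLM 2's tokenizer, and the words-per-page figure assumes a typical double-spaced page.

```python
# Rough conversion from a token budget to words and pages.
# Both constants are heuristics, not PaLM 2 specifics.
CONTEXT_WINDOW_TOKENS = 32_000
WORDS_PER_TOKEN = 0.78   # common rule of thumb: ~3/4 of an English word per token
WORDS_PER_PAGE = 300     # typical double-spaced manuscript page

approx_words = int(CONTEXT_WINDOW_TOKENS * WORDS_PER_TOKEN)
approx_pages = approx_words / WORDS_PER_PAGE

print(f"~{approx_words:,} words, ~{approx_pages:.0f} double-spaced pages")
```

Swapping in Claude 2's 100,000-token window under the same assumptions yields roughly 78,000 words, which is why context window size matters so much for long-document use cases.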
PaLM 2's context window isn't the largest out there. That distinction goes to Anthropic's Claude 2, which has a 100,000-token context window -- more than three times the size of both the original PaLM 2's and GPT-4's. But Nenshad Bardoliwalla, product leader at Vertex AI, said that the decision to opt for 32,000 tokens was made with "flexibility" and "cost" in mind.
"Our customers are striving to balance the flexibility of the modeling that they're able to do with large models and the scenarios they can generate with the cost of inference -- and with the ability to fine tune," Bardoliwalla said during the briefing. "Each one of those has a certain computational cost as well as human costs depending on how much you invest in it. And so we felt at this time that, given the evolution of the market, the results with 32,000 tokens are quite impressive based on the evaluations that we've done. We felt it struck the right balance between new capability as well as providing competitive price-to-performance ratios in the market."
Not every customer will agree. But in an attempt to have it both ways, Google has added third-party models, including Claude 2, to Vertex AI's Model Garden, a collection of prebuilt models and tools that can be customized to an enterprise's needs. Other models joining the Model Garden include Meta's recently released Llama 2 and the Technology Innovation Institute's open source Falcon LLM.
The new model additions are a shot across the bow at Amazon Bedrock, Amazon's recently launched AWS product that provides a way to build generative AI-powered apps via pretrained models from startups, including AI21 Labs, Anthropic and Stability AI. Given Bedrock's rocky rollout, Google, perhaps, sees an opportunity to establish a toehold in the nascent market for managed model services.
Extensions are a set of tools that let developers connect models in the Model Garden to real-time data, proprietary data or third-party apps (e.g. from DataStax, MongoDB or Redis), like a customer relationship management system or email account -- or even take action on a user's behalf. Data connectors, meanwhile, can ingest enterprise and third-party data with read-only access from a range of platforms, such as Salesforce, Confluence and Jira.
In somewhat related news, Vertex AI now supports Ray, the open source compute framework for scaling AI and Python workloads. It joins the frameworks already supported in Vertex AI, including Google's own TensorFlow.
It struck me that Google once again avoided addressing many of the ethical and legal challenges associated with all forms of generative AI, perhaps chiefly copyright. AI models like PaLM 2 and Imagen "learn" to generate text and images by "training" on existing data, which often comes from datasets that were scraped together by trawling public, copyrighted sources on the web.
Bardoliwalla previously told TechCrunch that Google conducts broad "data governance reviews" to "look at the source data" inside its models to ensure that they're "free of copyright claims." But even generously assuming all of Google's AI training data is free of copyrighted material, Google, like many of its competitors, doesn't offer an opt-out mechanism to allow the owners of any data, excepting Vertex AI customers, to exclude it from being used for model training.
Asked why no such opt-out exists, Google didn't have an answer -- at least not a prepared one.
Vertex AI Search and Conversation
Leaning into the popularity of AI-powered chatbots and search, Google offers two products in Vertex designed to abstract away the complexity of creating generative search and chat apps: Vertex AI Search (previously Enterprise Search on Generative AI App Builder) and Vertex AI Conversation (formerly Conversational AI on Generative AI App Builder).
As of today, both are generally available.
With Vertex AI Search and Vertex AI Conversation, developers can ingest data and add customization to build a search engine, chatbot or "voicebot" that can interact with customers and answer questions grounded in a company's data. Google envisions the tools being used to build apps for use cases like food ordering, banking assistance and semi-automated customer service.
New in Vertex AI Search and Vertex AI Conversation with the jump to GA is multiturn search, which provides the ability to ask follow-up questions without starting the interaction from scratch. Also new is conversation and search summarization, which summarizes -- predictably -- search results and chat conversations.
Playbook, launching in preview for Vertex AI Conversation, lets users define, in natural language, the responses and transactions they want a voicebot or chatbot to perform -- similar to how a human might be instructed to handle tasks. They can add a persona ("You're a knowledgeable and friendly bike expert for an e-commerce site"), goal ("Help customers complete a payment"), steps ("Ask for a credit card number, then a shipping address") and examples that show the goal being completed in an ideal way.
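Google hasn't published Playbook's actual authoring format, so the structure below is purely illustrative -- a sketch of the kinds of natural-language fields (persona, goal, steps, examples) a playbook is described as containing, with all field names and content invented for this example.

```python
# Hypothetical sketch only: Playbook's real configuration format wasn't
# detailed in Google's announcement. Every field here is plain natural
# language rather than an intent schema or state machine.
playbook = {
    "persona": "You're a knowledgeable and friendly bike expert "
               "for an e-commerce site",
    "goal": "Help customers complete a payment",
    "steps": [
        "Ask for a credit card number",
        "Then ask for a shipping address",
    ],
    "examples": [
        {
            "user": "I'd like to pay for the road bike in my cart.",
            "bot": "Happy to help! What card number would you like to use?",
        },
    ],
}

for field in ("persona", "goal", "steps", "examples"):
    assert field in playbook  # the four fields the announcement describes
```

The appeal of this style of configuration is that a non-developer can author it, much as they would write onboarding instructions for a human agent.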
Vertex AI model extensions and data connectors can be used in tandem with Vertex AI Search and Vertex AI Conversation. So can grounding, another new feature in Vertex that can root a model's outputs in a company's data, for example by having the model clearly cite its answers to questions.
Google says that Vertex AI Search will soon support enterprise access controls to "ensure information is surfaced only to appropriate users" and provide relevance scores to "encourage confidence" in results and "make them more useful."
Given generative AI models' tendency to make up facts, color me skeptical. There's always the risk of malicious actors trying to get the models to go off the rails via prompt injection attacks. And even absent attacks, models, whether text- or image-generating, can spout toxicity -- a symptom of biases in the data used to train them.
Bardoliwalla asserts that, even if the grounding tools don't solve the so-called hallucination and toxicity problems with generative models once and for all, they're a step in the right direction.
"We believe that a comprehensive set of grounding capabilities on authoritative sources is one way that we can provide a means of controlling the hallucination problem and making it more trustworthy to use these systems," he said.
In a previous interview, Bardoliwalla claimed that every API call to Vertex-hosted generative models is evaluated for "safety attributes" including toxicity, violence and obscenity. Vertex scores models on these attributes and, for certain categories, blocks the response or gives customers the choice as to how to proceed.
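The actual scoring pipeline is internal to Vertex, but the threshold-style policy Bardoliwalla describes -- score each response on safety attributes, hard-block some categories, and let customers decide on others -- can be sketched roughly as follows. Attribute names, thresholds and category assignments here are invented for illustration.

```python
# Illustrative sketch of the described policy, NOT Vertex's implementation.
# Which categories are hard-blocked vs. customer-configurable, and at what
# scores, are assumptions made for this example.
HARD_BLOCK = {"violence"}                     # always blocked above a cutoff
CUSTOMER_CHOICE = {"toxicity", "obscenity"}   # customer picks the cutoff

def apply_safety_policy(scores: dict, customer_threshold: float = 0.8) -> str:
    """Map per-attribute scores (0.0-1.0) to a disposition for the response."""
    for attr in HARD_BLOCK:
        if scores.get(attr, 0.0) >= 0.5:
            return "blocked"
    for attr in CUSTOMER_CHOICE:
        if scores.get(attr, 0.0) >= customer_threshold:
            return "flagged_for_customer"
    return "allowed"

print(apply_safety_policy({"toxicity": 0.9}))   # flagged_for_customer
print(apply_safety_policy({"violence": 0.7}))   # blocked
print(apply_safety_policy({"toxicity": 0.1}))   # allowed
```

The design question such a policy raises -- and the one the next paragraph gets at -- is whether fixed per-attribute thresholds keep working as models grow harder to interpret.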
As generative AI models become more sophisticated and harder to interpret, I wonder if that's sustainable. We -- and Google Cloud's customers -- shall see.