Google’s cloud-based approach offers an open ecosystem for data sharing

We are excited to bring Transform 2022 back in person on July 19 and virtually from July 20 to August 3. Join AI and data leaders for insightful conversations and exciting networking opportunities. Learn more about Transform 2022.

Today, Google opened its Data Cloud Summit with new product announcements and enhancements designed to help data scientists take advantage of the Google Cloud Platform. The company has invested heavily in artificial intelligence over the years, and its new products aim to help companies and users make sense of the flood of data with both traditional analysis and machine learning.

“Data is probably at the top of every C-suite’s agenda on the planet,” explained Gerrit Kazmaier, general manager and VP for databases, data analytics and Looker at Google Cloud. “Every company is a big data company. It is multiformat. It’s streaming and it’s everywhere.”

Google wants to compete for that demand with its cloud platform by offering sophisticated tools for implementing artificial intelligence and machine learning. At the same time, it is fostering an open ecosystem so that companies can use and share data from wherever it is captured. The new releases focus on breaking down barriers between different vendors and giving customers more options for where their data is hosted.

This open strategy can help Google fight big competitors like Amazon or Microsoft. Amazon Web Services offers nearly a dozen different options for data storage, all tightly integrated with many platforms for data analysis, from traditional reporting to machine learning. Microsoft’s Azure also offers a wide range of options that build on its deep history in enterprise computing.

Google’s BigLake platform is designed to work with data across a variety of clouds, including its competitors’, whether stored on premises or in a commercial cloud. The service could allow enterprises to integrate their data warehouses and data lakes into a single multicloud platform.

In the past, many companies have built data warehouses, a well-governed model that combines good report generation with solid access control. More recently, some have been using the term “data lake” to describe systems optimized more for sheer volume than for sophisticated governance. Google wants to embrace both approaches with its BigLake model.

“By bringing these worlds together, we take the good from one side and apply it to the other, and that way you can make your storage infinite,” explained Sudhir Hasbe, a product director at Google Cloud. “You can put in as much data as you want. In a vastly changing regulatory environment, you get the richness of governance and management you want in your environment. You can store all the data and manage it really well.”
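As an illustration of how a lake and a warehouse meet in practice, a BigLake table is typically exposed to BigQuery as an external table defined over files in object storage. The sketch below builds such a DDL statement in Python; all project, dataset, connection, and bucket names are hypothetical, and the exact DDL options should be checked against Google’s current documentation:

```python
# Hypothetical sketch: building the DDL that defines a BigLake-style external
# table over Parquet files in object storage. Names are made up for illustration.
def biglake_external_table_ddl(dataset: str, table: str,
                               connection: str, uris: list[str]) -> str:
    """Build a CREATE EXTERNAL TABLE statement for a BigLake table."""
    uri_list = ", ".join(f"'{u}'" for u in uris)
    return (
        f"CREATE EXTERNAL TABLE `{dataset}.{table}`\n"
        f"WITH CONNECTION `{connection}`\n"
        f"OPTIONS (format = 'PARQUET', uris = [{uri_list}])"
    )

ddl = biglake_external_table_ddl(
    "analytics", "events",
    "my-project.us.lake-connection",
    ["gs://my-data-lake/events/*.parquet"],
)
print(ddl)
```

In practice the statement would be submitted through a client library such as google-cloud-bigquery, and access policies on the table — rather than on the underlying bucket — are what give the lake its warehouse-style governance.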

Cloud connection

Part of Google’s strategy is to create a Data Cloud Alliance, a collaboration between Google and Confluent, Databricks, Dataiku, Deloitte, Elastic, Fivetran, MongoDB, Neo4j, Redis and Starburst. The group wants to help standardize open formats for data so that information can flow as smoothly as possible between different clouds and across political and corporate barriers.

“We look forward to partnering with Google Cloud and members of this Data Cloud Alliance to integrate data access across cloud and application environments to overcome barriers to digital transformation efforts,” said Mark Porter, CTO of MongoDB. “Legacy frameworks have made working with data hard for many organizations. There could be no more timely and important data initiative than this to create faster and smarter data-driven applications for customers.”

At the same time, Google must also contend with the growing number of smaller cloud vendors, like Vultr or DigitalOcean, that often offer dramatically lower prices. Google’s deep commitment to artificial intelligence research allows it to offer more sophisticated options than these commodity cloud vendors.

“One of the things that really sets Google apart is how we approach building technology products,” Kazmaier said. “Our mindset for innovation is rooted in understanding that data is a huge and unlimited resource if you use it properly. Most importantly, you need to have an open ecosystem around it to succeed.”

The Vertex AI Workbench is a tool that integrates Jupyter notebooks with key components of Google’s cloud, from data-processing instances to serverless tools like Spark. The tool can retrieve information from any of these sources and feed it into analytical routines so that data scientists can detect signals in the data. It became available in some regions on April 6, with more to follow in June.

“On Google Cloud, we’re removing the limits of the data cloud to help close the data-to-AI value gap,” said June Yang, VP of cloud AI and innovation at Google. “This capability enables teams to build, train and deploy models five times faster than with traditional notebooks.”

The company also wants to encourage teams and businesses to share some of the AI models they have created. The Vertex AI Model Registry, now in preview, will provide a way for data scientists and application developers to store and reuse AI models.
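To make concrete what “store and reuse” means here, the toy sketch below models the core operations a registry provides: registering a new version of a model and fetching the latest or a specific one. It is a hypothetical, in-memory illustration of the concept, not the Vertex AI Model Registry API:

```python
# Conceptual sketch of a model registry: versioned storage and retrieval of
# model artifacts. A toy in-memory stand-in, not the Vertex AI API.
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    _models: dict = field(default_factory=dict)  # name -> list of versions

    def register(self, name: str, artifact: object) -> int:
        """Store a new version of a model; return its 1-based version number."""
        versions = self._models.setdefault(name, [])
        versions.append(artifact)
        return len(versions)

    def get(self, name: str, version: int = -1) -> object:
        """Fetch a specific version of a model (default: the latest)."""
        versions = self._models[name]
        return versions[version if version < 0 else version - 1]

registry = ModelRegistry()
registry.register("churn-classifier", {"weights": [0.1, 0.2]})
v2 = registry.register("churn-classifier", {"weights": [0.3, 0.4]})
latest = registry.get("churn-classifier")
```

A production registry adds metadata, lineage, and deployment hooks on top of this versioned store, which is what lets one team pick up and deploy a model another team trained.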

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
