Among the announcements Google made at its Google I/O 2022 event, alongside several aimed at the consumer market, such as the Google Pixel 6a, there were also several aimed at improving hybrid work and the company’s cloud, Google Cloud. Google also introduced preview versions of AlloyDB for PostgreSQL and a machine learning cluster built on Google Cloud’s TPU accelerators.
What’s new for hybrid work at Google I/O
Among these innovations are several new features for Google Workspace that take advantage of what Artificial Intelligence can offer to improve hybrid work, helping workers focus on their tasks, collaborate securely, and stay better connected with their colleagues and team leaders.
Google recently launched automated summaries in Google Docs, so that those who have to read documents and have little time to do so can receive a precis of the content, generated automatically by the application. In the coming months, Google is going to extend this built-in summarization feature to Spaces, where it will also provide digests of conversations. This is undoubtedly a very useful function: a short list of the most important parts of a discussion, so you don’t miss its key points.
Google Meet, meanwhile, will get automated meeting transcription. Those who cannot attend a meeting, or who were not invited to participate but need to know what was discussed, will have a record of its content. In addition, those who did attend will have a transcript to use for other purposes, or to refer back to the meeting in other contexts. As Google has confirmed, automated transcription will be available later this year, with meeting summaries coming next year.
It is not the only improvement coming to Google Meet: the company is going to use machine learning to make meetings held on the platform more immersive, and to make connecting and sharing content on Meet easier. To do this, Google will integrate improvements to Meet’s image, sound, and content-sharing functions. They will all arrive throughout 2022.
One of them is Portrait restore, which uses Artificial Intelligence to improve video quality by correcting problems caused by low lighting, low-quality webcams, or poor network connectivity. All of this processing is carried out in the cloud, which means the platform can improve video quality without affecting the performance of the device.
Portrait light, another enhancement, uses machine learning to simulate studio-quality lighting in a video feed, and lets you adjust the light’s position and brightness to customize how you appear to the other participants in a meeting.
De-reverberation, for its part, filters out the echoes that occur in rooms with hard surfaces, so that meetings can have the sound quality of a conference room even when they are held from less favorable spaces, such as a basement or an empty room.
Live sharing, finally, will synchronize multimedia and content between the participants in a Google Meet call. With this feature, users will be able to share controls and interact directly during the meeting. It also enables partners and developers to use Google’s live sharing APIs to begin integrating Meet into their own apps.
At Google I/O 2022 there was also time to talk about cybersecurity improvements in online workspaces. The company highlighted that Google Workspace is built on a “zero trust” approach, in addition to integrating reinforced access management, data protection, encryption, and endpoint protections.
In addition, over the course of this year, the phishing and malware protections that currently guard Gmail will be brought to Google Slides, Docs, and Sheets. If a file you open with any of these three tools contains phishing or malware links, you will receive a warning and suggestions on the measures to take to stay safe while you work.
AlloyDB for PostgreSQL
Taking advantage of its Google I/O 2022 event, Google has announced the preview version of AlloyDB for PostgreSQL. It is a PostgreSQL-compatible, fully managed database service designed to modernize enterprise database workloads.
AlloyDB has been more than four times faster than standard PostgreSQL in tests conducted by Google, which also report that it is up to 100 times faster for analytical queries, and twice as fast in transactional workloads as the comparable service from Amazon. At least, those are the figures Mountain View has offered. AlloyDB combines Google’s capabilities of compute and storage at scale, high availability and security, and management powered by AI and machine learning, with full compatibility with PostgreSQL 14, its most recent version.
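Because AlloyDB advertises full PostgreSQL 14 compatibility, any standard PostgreSQL driver should be able to talk to it unchanged. The sketch below shows what that looks like in practice; the host, database, and user names are placeholders, not real AlloyDB endpoints.

```python
# Minimal sketch, assuming a PostgreSQL-compatible endpoint: since AlloyDB
# speaks the PostgreSQL wire protocol, a standard libpq-style connection
# string is all a client needs. All connection values below are hypothetical.

def build_dsn(host: str, port: int, dbname: str, user: str) -> str:
    """Assemble a libpq-style connection string for a PostgreSQL-compatible server."""
    return f"host={host} port={port} dbname={dbname} user={user}"

dsn = build_dsn("10.0.0.5", 5432, "orders", "app_user")

# With a driver such as psycopg2 installed and network access configured,
# connecting works exactly as it does for vanilla PostgreSQL
# (commented out so the sketch stays self-contained):
#
# import psycopg2
# with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
#     cur.execute("SELECT version()")
#     print(cur.fetchone())
```

The point of the sketch is the compatibility claim itself: no AlloyDB-specific client library is required, only an ordinary PostgreSQL driver.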
At the core of AlloyDB is an intelligent, database-optimized storage service built specifically for PostgreSQL. AlloyDB disaggregates compute and storage at every layer of the stack, using the same infrastructure building blocks as Google’s large-scale services, such as YouTube, Maps, Gmail, and Search. This makes it easy to scale with predictable performance, and AlloyDB is ready to handle any workload with minimal management oversight.
As with many managed database services, AlloyDB automatically handles database management, performing patching, backup, scaling, and replication. In addition, it uses adaptive algorithms and machine learning for PostgreSQL vacuum management, memory and storage management, and analytics acceleration, among other things.
AlloyDB learns about your workloads and intelligently organizes your data across memory, an ultra-fast secondary cache, and durable storage. These automated features simplify management for database administrators and developers, and also let customers take better advantage of machine learning in their applications. On top of this, it integrates with Vertex AI, Google Cloud’s Artificial Intelligence platform, allowing users to call models directly from a query or a transaction. This translates into low latency, more data, and higher throughput, without having to write extra application code.
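The article does not give the in-query syntax for AlloyDB’s Vertex AI integration, so as a purely illustrative sketch, here is the more general path of invoking a deployed Vertex AI model from application code. The payload shape is the documented `instances` format for Vertex AI online prediction; the project, region, endpoint ID, and feature names are all hypothetical.

```python
# Hedged sketch: shows the general shape of a Vertex AI online prediction
# request, NOT AlloyDB's in-query syntax (which the article does not detail).
# All identifiers and feature names below are placeholders.

def build_prediction_request(instances: list) -> dict:
    """Wrap feature rows in the payload shape Vertex AI endpoints expect."""
    return {"instances": instances}

payload = build_prediction_request([{"feature_a": 1.0, "feature_b": 2.5}])

# With the google-cloud-aiplatform package installed and credentials
# configured, the call would look roughly like this (commented out so the
# sketch stays self-contained):
#
# from google.cloud import aiplatform
# aiplatform.init(project="my-project", location="us-central1")
# endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/1234567890")
# response = endpoint.predict(instances=payload["instances"])
```

AlloyDB’s advantage, per the article, is that this round trip can be triggered from inside a query or transaction rather than from separate application code.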
AlloyDB’s pricing, meanwhile, is designed to keep costs down: it is transparent and predictable, with no proprietary licenses or opaque fees. Storage is provisioned automatically, and customers pay only for what they use, with no extra charges for read replicas. Anyone who wants to learn more about AlloyDB and start testing it for free can find further details on its website.
Google Cloud Announces Largest Machine Learning Center Available Yet
Google offers Tensor Processing Units, or TPUs, the company’s custom machine learning accelerators, to Google Cloud customers as Cloud TPUs. Google continually evolves them, and the latest example is the announcement just made at Google I/O: the preview version of the Google Cloud machine learning cluster with Cloud TPU v4 pods.
This Cloud TPU v4 pod cluster will make it easier for researchers and developers to advance Artificial Intelligence, allowing them to train increasingly sophisticated models and to run large-scale workloads such as natural language processing, recommendation systems, and computer vision algorithms.
The cluster has an aggregate peak capacity of 9 exaflops, making it the largest publicly accessible machine learning hub in the world in terms of computing power. In addition, 90% of its energy consumption is covered by carbon-free sources. Each Cloud TPU v4 pod consists of 4,096 chips connected via a fast interconnect network, with the equivalent of 6 Tbps of bandwidth per host, and each Cloud TPU v4 chip achieves roughly 2.2 times the peak FLOPS of the previous generation, Cloud TPU v3.
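The quoted figures can be sanity-checked with some back-of-the-envelope arithmetic. The per-chip peak of a TPU v3 (~123 TFLOPS in bfloat16) is an assumption not stated in the article; the 2.2x speedup and 4,096 chips per pod come from the text.

```python
# Back-of-the-envelope check of the TPU v4 cluster figures.
# Assumption: a TPU v3 chip peaks at ~123 TFLOPS (bfloat16); the article
# only gives the ~2.2x per-chip speedup and the 4,096 chips per pod.

TPU_V3_CHIP_TFLOPS = 123      # assumed v3 per-chip peak (not in the article)
V4_SPEEDUP = 2.2              # article: v4 chip is ~2.2x a v3 chip
CHIPS_PER_POD = 4096          # article: chips per Cloud TPU v4 pod

v4_chip_tflops = TPU_V3_CHIP_TFLOPS * V4_SPEEDUP          # ~270 TFLOPS/chip
pod_exaflops = v4_chip_tflops * CHIPS_PER_POD / 1e6       # ~1.1 exaflops/pod

# Roughly eight such pods would account for the 9 aggregate exaflops
# the article quotes for the whole cluster.
cluster_exaflops = 8 * pod_exaflops
```

Under these assumptions, one pod lands at about 1.1 exaflops of peak compute, and eight pods come out to roughly the 9 exaflops quoted for the cluster, so the numbers are internally consistent.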
Cloud TPU v4 pods are available in configurations ranging from four chips, with one TPU virtual machine, to thousands of chips. Within these pods, slices of at least 64 chips have three-dimensional toroidal links, providing higher bandwidth for collective communications.
Cloud TPU v4 also allows 32 GiB of memory to be accessed from a single device, double that of TPU v3, which improves performance when training recommendation models at scale. Access to Cloud TPU v4 pods is available on an on-demand, preview basis, with various options available, as detailed on this page.