Confluent, the streaming data platform built on top of the Apache Kafka project, has announced its Q2 product update, featuring multiple capabilities including improved role-based access controls for enterprise data.
Today, organizations around the world are eager to take advantage of cloud computing and avoid the operational overhead of infrastructure management. When migrating to the cloud, however, data security becomes a major concern. Companies want to ensure that only the right people have access to the right data, but managing this takes significant time and resources, even down to individual Apache Kafka topics. After all, permissions often have to be set through a tangle of complex scripts.
Last year, Confluent introduced role-based access controls for Confluent Cloud customers to help streamline the process for critical resources such as production environments, sensitive clusters and billing details. Now, the company is taking this feature a step further by covering access to individual Kafka resources, including topics, consumer groups and transactional IDs.
This allows organizations to set clear roles and responsibilities for administrators, operators and developers, giving each of them access only to the data and resources specifically required for their jobs, across both the control plane and the data plane.
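As a sketch of how such a role binding might look with the Confluent CLI (the cluster ID, environment ID, principal and exact flag names here are illustrative assumptions; they may differ by CLI version):

```shell
# Grant a developer read-only access to a single topic (illustrative IDs).
# Flag names follow the Confluent CLI's `iam rbac role-binding` subcommand;
# check `confluent iam rbac role-binding create --help` for your version.
confluent iam rbac role-binding create \
  --principal User:u-abc123 \
  --role DeveloperRead \
  --resource Topic:orders \
  --environment env-xyz789 \
  --kafka-cluster lkc-example
```

Because the binding is scoped to one topic rather than a whole cluster, a developer granted this role cannot read from or administer any other resource.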
In addition to security, the company is also enhancing the observability side of its solution with new capabilities for the Confluent Cloud Metrics API. These allow organizations to easily understand how data streams are used across the business and its subdivisions, so that resources are allocated where they are needed and mission-critical services get the real-time insights required to keep meeting customer expectations.
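A minimal sketch of querying that API from Python, assuming the publicly documented `POST /v2/metrics/cloud/query` endpoint shape; the metric name, cluster ID and time interval below are illustrative and should be checked against the current API reference:

```python
import json

def build_metrics_query(cluster_id, metric, interval, granularity="PT1H"):
    """Build a query payload for the Confluent Cloud Metrics API.

    Field names follow the public Metrics API docs (aggregations, filter,
    granularity, intervals); verify against the current spec before use.
    """
    return {
        "aggregations": [{"metric": metric}],
        "filter": {"field": "resource.kafka.id", "op": "EQ", "value": cluster_id},
        "granularity": granularity,
        "intervals": [interval],
    }

# Hypothetical cluster and a day's worth of ingress bytes, hourly buckets.
query = build_metrics_query(
    "lkc-example",
    "io.confluent.kafka.server/received_bytes",
    "2022-06-01T00:00:00Z/2022-06-02T00:00:00Z",
)
print(json.dumps(query, indent=2))
```

The resulting JSON would be sent with an HTTP client of your choice, authenticated with a Confluent Cloud API key.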
The company has also built a first-class integration with Grafana Cloud, which allows enterprises to gain visibility into their Confluent Cloud instances from the monitoring tools they already use. It previously shipped similar integrations with Datadog and Prometheus.
Finally, Confluent is offering a 99.99% uptime SLA (Service Level Agreement) for both standard and dedicated fully managed multi-zone clusters. The company explained that this covers not only the infrastructure but also Apache Kafka operations, critical bug fixes and security updates, allowing organizations to confidently run sensitive, mission-critical data streaming workloads in the cloud. It is also introducing recipes to help teams get started with stream processing use cases.
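To put the 99.99% figure in perspective, a quick back-of-the-envelope calculation (assuming a 365-day year) shows how little downtime the SLA permits:

```python
# Downtime budget implied by a 99.99% uptime SLA.
SLA = 0.9999

minutes_per_year = 365 * 24 * 60            # 525,600 minutes in a year
minutes_per_month = minutes_per_year / 12   # ~43,800 minutes in a month

allowed_year = (1 - SLA) * minutes_per_year    # ~52.6 minutes/year
allowed_month = (1 - SLA) * minutes_per_month  # ~4.4 minutes/month
print(f"{allowed_year:.1f} min/year, {allowed_month:.2f} min/month")
```

In other words, the commitment leaves room for less than an hour of total unavailability per year.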