London: At the Kafka conference in London, Confluent announced several updates to its Confluent Cloud platform, all aimed at making the product more cloud-native and adding new fully managed services. The latest updates will make it easier for developers to build event-driven applications and reduce their costs. The latter is due to a new consumption-based pricing model that is more in line with the promise of elastic resource usage than most cloud offerings.

Neha Narkhede, co-founder, chief technology and product officer, Confluent

According to Neha Narkhede, co-founder, chief technology and product officer, Confluent: “There is a growing groundswell of adoption behind Apache Kafka with 60 percent of the Fortune 100 now using an event streaming platform to power their business.

“But many organizations don’t have the teams or resources needed to size, provision and manage Kafka clusters. With the latest enhancements to Confluent Cloud, we’re making it easier than ever for any data-starved business to transform into a business driven by data in a matter of minutes.”

Five new features to drive Apache Kafka adoption

There are five new features and enhancements to Confluent Cloud in this announcement. They are:

  1. Unparalleled elasticity: Confluent Cloud now allows customers to self-scale up and down between 0 and 100 MBps in seconds. For those who require greater capacity, Confluent will manage that through its provisioned capacity. The ease of scaling is also tightly linked to the new pricing, which will reduce costs.
  2. Consumption-based pricing: This delivers on the cloud pricing promise vendors keep making. Customers only pay for what they are using at any given moment in time. There is no need to reduce capacity or change usage patterns and then wait for pricing to catch up. Confluent will adjust pricing dynamically, with customers paying only for what is streamed.
  3. Confluent® Schema Registry: A fully managed, central registry to define standard schemas for events, share them across the organization and safely evolve them in a way that is backward compatible and future proof.
  4. Confluent® KSQL: A fully managed streaming SQL engine to enable real-time data processing with an easy-to-use, powerful interactive SQL interface – no need to write code.
  5. Kafka® S3 sink connector: Confluent® Connectors automate the integration of the most widely used data sources and sinks with Confluent Cloud. The first of the fully managed connectors launched by Confluent, the Kafka S3 sink connector, enables companies to easily sink data from Kafka to S3.
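To give a flavour of the interactive SQL interface KSQL offers, the sketch below defines a stream over a Kafka topic and a continuously updated aggregate over it. The topic, field names and window size are illustrative, not taken from the announcement:

```sql
-- Declare a stream over a (hypothetical) 'pageviews' topic
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='AVRO');

-- Continuous query: count views per page in one-minute tumbling windows
CREATE TABLE views_per_page AS
  SELECT page, COUNT(*) AS total
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY page;
```

Once created, the table keeps updating as new events arrive – no consumer code is written or deployed by the user.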

The latter three are already available in public beta, although there is no date yet for when they will go live as production services.
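For context on what a managed S3 sink removes from the user's plate, this is roughly the configuration shape of the existing self-managed Kafka Connect S3 sink connector; the bucket, region and topic names are placeholders:

```json
{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "topics": "pageviews",
    "s3.bucket.name": "my-example-bucket",
    "s3.region": "eu-west-1",
    "flush.size": "1000",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat"
  }
}
```

With the fully managed version, Confluent runs the Connect infrastructure behind this configuration, so customers supply only the equivalent settings rather than operating the workers themselves.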

Enterprise Times: What does this mean?

Confluent is continuing to grow its user base for both Confluent Cloud and its curated version of Apache Kafka. It already numbers several very large customers, especially in manufacturing, among its customer base. It appeals to those manufacturers with large IoT installations. The event streaming provides them with a lightweight asynchronous way of capturing and analysing data.

The Schema Registry will certainly make it possible for large customers to develop and share Kafka components across different Kafka implementations. For customers such as Sky, this fits into their DevOps approach to delivering software across the business.
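The backward-compatible evolution the Schema Registry enforces typically means new fields must carry defaults, so consumers on the old schema can still read newer events. A hypothetical Avro schema after such an evolution (all names illustrative), where `currency` is the newly added field:

```json
{
  "type": "record",
  "name": "OrderCreated",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "GBP"}
  ]
}
```

The registry would reject an incompatible change – such as adding `currency` without a default – before it could break downstream teams, which is what makes cross-team sharing safe.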

For many new customers, however, the change to pricing and the ability to scale on demand will look very attractive. Organisations are fed up with being told that cloud means paying only for what you use, then discovering that isn’t true. Most cloud providers apply price reductions at the end of a billing period, not immediately. It will be interesting to see if this change, plus the more dynamic self-scaling option, drives new business.
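The difference between the two billing models is easiest to see with a toy calculation. The sketch below uses an entirely hypothetical per-MBps-hour rate and a made-up daily usage pattern with a short peak; only the shape of the comparison matters:

```python
# Hypothetical rate: price per MBps sustained for one hour (illustrative only)
HOURLY_RATE_PER_MBPS = 0.01

# 24 hourly throughput samples (MBps): mostly idle, with a brief peak at 100
usage = [2] * 20 + [100, 80, 40, 2]

# Provisioned model: pay for peak capacity for the whole day
provisioned_cost = max(usage) * HOURLY_RATE_PER_MBPS * len(usage)

# Consumption model: pay only for what was actually streamed each hour
consumption_cost = sum(mbps * HOURLY_RATE_PER_MBPS for mbps in usage)

print(round(provisioned_cost, 2))   # 24.0
print(round(consumption_cost, 2))   # 2.62
```

For bursty workloads like this one, the consumption model is an order of magnitude cheaper; for workloads that run flat-out all day, the two converge, which is why the pricing change matters most to new and smaller customers.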

