Running data-streaming applications with Kafka on OpenShift

In this lab, you’ll learn the practical aspects of developing and running an end-to-end Kafka-based application on OpenShift.

- In the first part, you'll learn how to run Kafka clusters on OpenShift, then get practical instruction on how to monitor and tune them for performance and resilience (a brief cluster-inspection sketch in Java follows after this list).

- In the second part, you'll learn how to deploy Kafka-based applications on OpenShift by building an end-to-end solution of multiple microservices communicating through Kafka. Specifically, you'll learn how to use Debezium to stream database changes out of an existing application, how to run Kafka Connect sinks that write streaming data to external systems, how to process data streams using Kafka Streams (a small Kafka Streams sketch also follows below), and how to create and deploy Kafka microservices using Red Hat OpenShift Application Runtimes.

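To give a flavor of the cluster-inspection side of the first part, here is a minimal, illustrative Java sketch (not lab material) that uses Kafka's AdminClient to report broker and topic state. The bootstrap address is an assumption: a Strimzi/AMQ Streams cluster named "my-cluster" typically exposes a "my-cluster-kafka-bootstrap" service inside OpenShift, and your environment may differ.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

import java.util.Properties;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed in-cluster bootstrap service name; adjust for your deployment.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "my-cluster-kafka-bootstrap:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Describe the cluster and list topics as a quick health/inventory check.
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Cluster id: " + cluster.clusterId().get());
            System.out.println("Brokers:    " + cluster.nodes().get());
            System.out.println("Topics:     " + admin.listTopics().names().get());
        }
    }
}
```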
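And as a taste of the stream-processing piece in the second part, below is a small, hedged Kafka Streams sketch in Java. It is not the lab's own code: the input topic name follows Debezium's usual server.schema.table convention, the bootstrap address again assumes a Strimzi/AMQ Streams service name, and the transformation is a placeholder.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class OrdersProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-processor");
        // Assumed bootstrap service name exposed by the Kafka cluster on OpenShift.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Assumed Debezium-style change topic: <server>.<schema>.<table>.
        KStream<String, String> changes = builder.stream("dbserver1.inventory.orders");
        changes.filter((key, value) -> value != null)   // drop tombstone records
               .mapValues(value -> value.toUpperCase()) // placeholder transformation
               .to("orders-processed");                 // write to a downstream topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```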
  • Date: Tuesday, May 8
  • Time: 1:00 PM - 3:00 PM
  • Room: 156
  • Location: Moscone South - 156
  • Session Type: Instructor-led lab
  • Session Code: L1099
  • Best for people who: Build applications, Design application/system architectures
  • Primary solution: Middleware
  • Industry:
  • Topic(s): App, data, and process integration; Application platforms; Data and analytics
  • Products and Services: Community project(s), Red Hat OpenShift Container Platform
  • Technical difficulty: Introduction
  • Session Includes: Demo
  • Time slot: Afternoon